With cloud storage services, it is commonplace for data to be not only stored in the cloud, but also shared across multiple users. However, public auditing for such shared data, while preserving identity privacy, remains an open challenge. In this paper, we propose the first privacy-preserving mechanism that allows public auditing on shared data stored in the cloud. In particular, we exploit ring signatures to compute the verification information needed to audit the integrity of shared data. With our mechanism, the identity of the signer on each block of shared data is kept private from a third-party auditor (TPA), who is still able to publicly verify the integrity of shared data without retrieving the entire file. Our experimental results demonstrate the effectiveness and efficiency of our proposed mechanism when auditing shared data.
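The abstract above relies on the anonymity property of ring signatures: any group member can sign a block, and a verifier learns only that some member signed it. A minimal sketch of that property, using a classic AOS-style ring signature over a deliberately tiny Schnorr group (the parameters and helper names are illustrative, not the paper's actual homomorphic construction, and the group is far too small for real security):

```python
import hashlib
import secrets

# Toy Schnorr group (illustrative only -- far too small for real security):
# p is a safe prime, q = (p - 1) // 2, and g = 4 generates the order-q subgroup.
p, q, g = 2039, 1019, 4

def H(msg: bytes, point: int) -> int:
    """Hash a message together with a group element into Z_q."""
    return int.from_bytes(hashlib.sha256(msg + point.to_bytes(8, "big")).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1          # private key
    return x, pow(g, x, p)                    # (x, y = g^x mod p)

def ring_sign(msg, pubkeys, s, x_s):
    """AOS-style ring signature by member s; hides s among all of pubkeys."""
    n = len(pubkeys)
    c, z = [0] * n, [0] * n
    u = secrets.randbelow(q)
    c[(s + 1) % n] = H(msg, pow(g, u, p))     # start the ring after the signer
    i = (s + 1) % n
    while i != s:                             # fill every other position randomly
        z[i] = secrets.randbelow(q)
        c[(i + 1) % n] = H(msg, pow(g, z[i], p) * pow(pubkeys[i], c[i], p) % p)
        i = (i + 1) % n
    z[s] = (u - x_s * c[s]) % q               # close the ring with the real key
    return c[0], z

def ring_verify(msg, pubkeys, sig):
    c0, z = sig
    c = c0
    for i, y in enumerate(pubkeys):
        c = H(msg, pow(g, z[i], p) * pow(y, c, p) % p)
    return c == c0                            # valid iff the ring closes
```

Signatures produced by any of the keyholders verify identically against the same ring of public keys, so a verifier cannot tell which member signed, which is the property the TPA relies on for identity privacy.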
Maintaining Data Integrity for Shared Data in Cloud (IJERA Editor)
Cloud computing is a model of computing that relies on shared computing resources, rather than local servers or personal devices, to handle applications. Users can easily modify data that is shared and stored in the cloud. To control such modification, a signature is provided to each individual user who accesses the data in the cloud: once a user modifies a block, that user must ensure that their signature is placed on that specific block. When a user misbehaves or misuses the system, the admin has the authority to revoke that user from the group. After revocation, an existing user must re-sign the data signed by the revoked user. In addition, the security of the data is enhanced with the help of a public auditor, who is always able to audit the integrity of shared data without retrieving the entire data from the cloud.
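The write/revoke/re-sign flow described above can be sketched as follows. Real schemes use publicly verifiable signatures so that an external auditor can check the tags; HMAC tags stand in here to keep the sketch self-contained, and all class and user names are hypothetical:

```python
import hmac
import hashlib

# Illustrative sketch of per-block signing and re-signing after revocation.
# HMAC tags stand in for the publicly verifiable signatures a real scheme uses.

def tag(key: bytes, index: int, block: bytes) -> bytes:
    """Tag one block under a user's key, binding the block to its position."""
    return hmac.new(key, index.to_bytes(8, "big") + block, hashlib.sha256).digest()

class SharedFile:
    def __init__(self):
        self.blocks = {}          # index -> (block bytes, signer id, tag)

    def write(self, user: str, key: bytes, index: int, block: bytes):
        # Whoever modifies a block must place their own signature on it.
        self.blocks[index] = (block, user, tag(key, index, block))

    def revoke(self, revoked: str, successor: str, successor_key: bytes):
        # An existing user re-signs every block signed by the revoked user.
        for index, (block, signer, _) in self.blocks.items():
            if signer == revoked:
                self.write(successor, successor_key, index, block)

    def audit(self, keys: dict) -> bool:
        # Check every block's tag against its recorded signer's key.
        return all(hmac.compare_digest(t, tag(keys[signer], i, b))
                   for i, (b, signer, t) in self.blocks.items())
```

After `revoke("bob", "alice", ...)`, every block previously signed by the revoked user carries a fresh tag under the successor's key, so a later audit still passes without the revoked user's key.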
Maintaining Data Integrity for Shared Data in Cloud (acijjournal)
Cloud users can remotely access software, services, and applications whenever they require them over the internet, and can place their data remotely in cloud storage. It is therefore necessary for the cloud to ensure the integrity and privacy of user data.
Security is a major issue in cloud computing, and users may feel insecure about storing their data in cloud storage. To address this issue, we present a public auditing mechanism for cloud storage, building on our study of the Oruta system, which provides such a mechanism. We also support revoking existing members and adding new members to a group, and in this way we overcome the limitation of static groups. In this system, a Third Party Auditor (TPA) maintains all the users' log credentials and verifies proofs of data integrity and the identity privacy of users, so the TPA plays a very important role. We state our model as "Privacy Preserving using PAM in Cloud Computing".
Keywords: Cloud Service Provider, Provable Data Possession, Third Party Auditor, Public Auditing, Identity Privacy, Shared Data, Cloud Computing.
Integrity Privacy to Public Auditing for Shared Data in Cloud Computing (IJERA Editor)
In cloud computing, many mechanisms have been proposed to allow not only the data owner but also a public verifier to efficiently perform integrity checking without downloading the entire data from the cloud, which is referred to as public auditing. In these mechanisms, data is divided into many small blocks, each independently signed by the owner, and a random combination of blocks, rather than the whole data, is retrieved during integrity checking. However, public auditing for such shared data, while preserving identity privacy, remains an open challenge. Here, we only consider how to audit the integrity of shared data in the cloud with static groups: the group is pre-defined before shared data is created in the cloud, and group membership does not change during data sharing. The original user is responsible for deciding who is able to share her data before outsourcing it to the cloud. Another interesting problem is how to audit the integrity of shared data in the cloud with dynamic groups, where a new user can be added to the group and an existing group member can be revoked during data sharing.
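The "random combination of blocks" step can be sketched with a Shacham-Waters-style linearly homomorphic tag (privately verifiable; the modulus, PRF, and function names are illustrative assumptions, not the specific mechanism of any paper listed here):

```python
import hmac
import hashlib
import secrets

# Sketch of sampled integrity checking with linearly homomorphic tags.
# The verifier challenges a few (index, coefficient) pairs; the server
# returns one aggregated block and one aggregated tag, never the whole file.
q = 2**127 - 1                      # prime modulus for tag arithmetic

def prf(k: bytes, i: int) -> int:
    """Keyed pseudorandom value for block index i, reduced into Z_q."""
    return int.from_bytes(hmac.new(k, str(i).encode(), hashlib.sha256).digest(), "big") % q

def sign_blocks(alpha: int, k: bytes, blocks):
    # tag_i = alpha * b_i + PRF_k(i)  (mod q): one small tag per block
    return [(alpha * b + prf(k, i)) % q for i, b in enumerate(blocks)]

def prove(blocks, tags, challenge):
    # Server side: aggregate only the challenged blocks and tags.
    mu = sum(nu * blocks[i] for i, nu in challenge) % q
    sigma = sum(nu * tags[i] for i, nu in challenge) % q
    return mu, sigma

def verify(alpha, k, challenge, mu, sigma):
    # Verifier side: recompute the PRF part and check the linear relation.
    return sigma == (alpha * mu + sum(nu * prf(k, i) for i, nu in challenge)) % q
```

Because the tags are linear in the block values, the check succeeds on the aggregate without retrieving any individual block, which is exactly the "blockless verification" the abstracts refer to.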
Oruta: Privacy-Preserving Public Auditing for Shared Data in the Cloud (Nexgen Technology)
Panda: Public Auditing for Shared Data with Efficient User Revocation in the ... (1crore projects)
Public Integrity Auditing for Shared Dynamic Cloud Data with Group User Revo... (Nexgen Technology)
Homomorphic Authenticable Ring Signature Mechanism for Public Auditing on Sha... (Journal For Research)
Centralized Data Verification Scheme for Encrypted Cloud Data Services (Editor IJMTER)
A cloud environment supports data sharing between multiple users, but data integrity can be violated by hardware/software failures and human errors. Data owners and public verifiers are involved in efficiently auditing cloud data integrity without retrieving the entire data from the cloud server; file and block signatures are used in the integrity verification process.
The "One Ring to Rule Them All" (Oruta) scheme is used for the privacy-preserving public auditing process. In Oruta, homomorphic authenticators are constructed using ring signatures, which compute the verification metadata needed to audit the correctness of shared data; the identity of the signer on each block of shared data is kept private from public verifiers. The homomorphic authenticable ring signature (HARS) scheme is applied to provide identity privacy with blockless verification. A batch auditing mechanism supports performing multiple auditing tasks simultaneously, and Oruta is compatible with random masking to preserve data privacy from public verifiers. Dynamic data management is handled with index hash tables. However, traceability is not supported in the Oruta scheme, the sequence of data dynamics is not managed by the system, and the system incurs high computational overhead.
The proposed system is designed to perform public data verification with privacy. Traceability features are provided alongside identity privacy: the group manager or data owner can be allowed to reveal the identity of the signer based on verification metadata. A data version management mechanism is also integrated into the system.
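The index hash table mentioned above for dynamic data can be sketched as follows. The point is that each block carries a stable (virtual index, version, random value) identifier, so inserting or deleting a block does not shift the identifiers, and hence the signatures, of other blocks. The field names and gap size are assumptions made for illustration:

```python
import secrets
from dataclasses import dataclass, field

# Illustrative index hash table for dynamic shared data: stable identifiers
# mean inserts and deletes never force re-signing of unrelated blocks.

@dataclass
class Entry:
    virtual_index: int                 # stable position identifier
    version: int = 1                   # bumped on in-place modification
    r: int = field(default_factory=lambda: secrets.randbits(32))

class IndexHashTable:
    GAP = 1 << 20                      # spacing leaves room for inserts

    def __init__(self, n_blocks: int):
        self.entries = [Entry((i + 1) * self.GAP) for i in range(n_blocks)]

    def modify(self, pos: int):
        e = self.entries[pos]
        e.version += 1                 # same virtual index, new version
        e.r = secrets.randbits(32)

    def insert_after(self, pos: int):
        # New entry takes the midpoint between its neighbours' virtual indexes.
        left = self.entries[pos].virtual_index
        right = (self.entries[pos + 1].virtual_index
                 if pos + 1 < len(self.entries) else left + 2 * self.GAP)
        self.entries.insert(pos + 1, Entry((left + right) // 2))

    def delete(self, pos: int):
        del self.entries[pos]          # other entries keep their identifiers
```

Inserting after position 0 gives the new block a virtual index strictly between its neighbours, and deleting it restores exactly the original identifiers, so no other block's tag needs to change.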
User-Defined Privacy Grid System for Continuous Location-Based Services - IEE... (Nexgen Technology)
Survey on Dynamic Data Sharing in Public Cloud Using Multi-Authority System (ijiert bestjournal)
With the continuous development of cloud computing, several trends are opening up new forms of outsourcing. Public data integrity auditing is not yet secure and efficient for shared dynamic data. An existing scheme addresses the collusion attack and provides an efficient public integrity auditing scheme with secure group user revocation, based on vector commitment and verifier-local revocation group signatures; it supports public checking and efficient user revocation. The problem with this existing work is its use of a TPA (third party auditor) for key generation and key agreement: with the TPA as a central component, if it fails the whole system fails. Moreover, when working with the cloud, user identity is a major concern, because users do not want to reveal their personal information to the public, and this concept is not included in that work. In this paper, based on these drawbacks, we propose dynamic data sharing in a public cloud using a multi-authority system. The proposed scheme is able to protect a user's privacy against each single authority.
Privacy Preserving Public Auditing and Data Integrity for Secure Cloud Storag... (INFOGAIN PUBLICATION)
Using cloud services, anyone can remotely store their data and obtain on-demand, high-quality applications and services from a shared pool of computing resources, without the burden of local data storage and maintenance. The cloud is a common place for storing data as well as sharing it. However, preserving privacy and maintaining data integrity during public auditing remains an open challenge. In this paper, we introduce a third party auditor (TPA), which keeps track of all the files along with their integrity. The task of the TPA is to verify the data so that the user can be worry-free. Verification is performed on aggregate authenticators sent by the user and the Cloud Service Provider (CSP). To this end, we propose a secure cloud storage system that supports privacy-preserving public auditing and blockless data verification over the cloud.
JPJ1409 Oruta: Privacy-Preserving Public Auditing for Shared Data in the Cloud (chennaijp)
Improved Data Integrity Protection Regenerating-Coding Based Cloud Storage (IJSRD)
In today's world, a huge amount of data is loaded into cloud storage, and the protection of such data is a main concern. It is difficult to protect such data against corruption, to check its integrity, and to repair failed data, and it is also critical to implement fault tolerance against corruption. The existing system, private auditing for regenerating codes, was developed to address these problems: it can regenerate codes for corrupted and incomplete data, but data owners must always stay online to check the completeness and integrity of their data. In this paper, we introduce a public auditing technique for regenerating-code-based cloud storage. A proxy is the main component in public auditing; it regenerates failed authenticators in the absence of the data owner. A publicly verifiable authenticator is also designed, which is generated by several keys and can be regenerated using partial keys. We also use a pseudorandom function to preserve data privacy by randomizing the encoding coefficients. Thus our technique can successfully regenerate failed authenticators without the data owner. Our experimental implementation also indicates that the scheme is highly efficient and can be used to regenerate code in cloud-based storage.
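Regenerating codes themselves are beyond a short sketch, but the core repair idea above, rebuilding a lost share from surviving servers without the data owner being online, can be illustrated with single-parity erasure coding (a deliberately simplified stand-in, not the paper's actual code):

```python
import hashlib

# Minimal erasure-coding stand-in for the regeneration idea: store n data
# shares plus one XOR parity share across servers; any single lost share
# can be rebuilt from the survivors, with no help from the data owner.

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(shares):
    """Append an XOR parity share; all shares must be the same length."""
    parity = shares[0]
    for s in shares[1:]:
        parity = xor(parity, s)
    return list(shares) + [parity]

def regenerate(stored, lost: int):
    """Rebuild share `lost` by XOR-ing every surviving share (incl. parity)."""
    survivors = [s for i, s in enumerate(stored) if i != lost and s is not None]
    out = survivors[0]
    for s in survivors[1:]:
        out = xor(out, s)
    return out
```

Because the XOR of all shares including parity is zero, XOR-ing the survivors reproduces the missing share exactly; real regenerating codes generalise this to tolerate more failures with less repair bandwidth.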
A Secure Anti-Collusion Data Sharing Scheme for Dynamic Groups in the Cloud (1crore projects)
Security Check in Cloud Computing through Third Party Auditor (ijsrd.com)
In cloud computing, data owners host their data on cloud servers, and users (data consumers) access the data from those servers. Because the data is outsourced, an independent auditing service is required to check data integrity in the cloud. Some existing remote integrity checking methods can only serve static archive data and thus cannot be used by the auditing service, since data in the cloud can be dynamically updated. An efficient and secure dynamic auditing protocol is therefore required to convince data owners that their data is correctly stored in the cloud. In this paper, we first design an auditing framework for cloud storage systems with a privacy-preserving auditing protocol. Then, we extend our auditing protocol to support dynamic data operations, and show it is efficient and secure in the random oracle model.
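Dynamic auditing protocols commonly anchor block hashes in a Merkle hash tree so that a single block update touches only one root-to-leaf path. A generic sketch of that building block (not necessarily this paper's exact construction; the helper names are illustrative):

```python
import hashlib

# Minimal Merkle-hash-tree sketch for dynamic auditing: the verifier keeps
# only the root; a block is checked, or updated, by hashing up one path.

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_levels(blocks):
    """Build all tree levels; the number of blocks must be a power of two."""
    level = [h(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels                        # levels[-1][0] is the root

def proof_path(levels, index):
    """Sibling hashes from leaf `index` up to the root."""
    path = []
    for level in levels[:-1]:
        sibling = index ^ 1
        path.append((level[sibling], sibling < index))  # (hash, is-left?)
        index //= 2
    return path

def root_from(block, path):
    """Recompute the root from one block and its sibling path."""
    node = h(block)
    for sibling, sibling_is_left in path:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node
```

To update a block dynamically, the owner replaces the leaf and rehashes the same path; the auditor accepts the new root only if it matches the recomputation, so every other block's position and value stay bound to the (single) stored root.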
Secure Data Sharing for Dynamic Groups in Multi-Attorney Manner Using Cloud (paperpublications3)
Abstract: Cloud computing provides an economical and efficient solution for sharing data among cloud users in a group. Sharing data in a multi-attorney manner while preserving data and identity privacy from an untrusted cloud is still a challenging issue, due to frequent changes of membership in the group. In this paper, we propose a multi-attorney data sharing scheme for dynamic groups in the cloud. By combining group signatures and Triple DES encryption techniques, any cloud user can anonymously share data with others. In addition, we analyze the security of our scheme with rigorous proofs and demonstrate its efficiency in experiments.
Keywords: cloud computing, data sharing, privacy-preserving, access control, dynamic groups.
Title: Secure Data Sharing For Dynamic Groups in Multi-Attorney Manner Using Cloud
Author: Vijaya Kumar Patil C, Manjunath H
International Journal of Recent Research in Mathematics Computer Science and Information Technology
ISSN 2350-1022
Paper Publications
Public Integrity Auditing for Shared Dynamic Cloud Data with Group User Revoc...1crore projects
Object Oriented Programming -- Dr Robert Harlesuthi
This course absorbs what was "Programming Methods" and provides a more formal look at object-oriented programming, with an emphasis on Java.
Four Parts:
1. Computer Fundamentals
2. Object-Oriented Concepts
3. The Java Platform
4. Design Patterns and OOP design examples
THE ROLE OF EDGE COMPUTING IN INTERNET OF THINGSsuthi
Edge computing refers to the enabling technologies allowing computation to be performed at the edge of the network, on downstream data on behalf of cloud services and upstream data on behalf of IoT services. Here we define "edge" as any computing and network resources along the path between data sources and cloud data centers. For example, a smart phone is the edge between body things and the cloud, a gateway in a smart home is the edge between home things and the cloud, and a micro data center or a cloudlet is the edge between a mobile device and the cloud. The rationale of edge computing is that computing should happen in the proximity of data sources. From our point of view, edge computing is interchangeable with fog computing, but edge computing focuses more on the things side, while fog computing focuses more on the infrastructure side. Edge computing could have as big an impact on our society as cloud computing has.
The proliferation of Internet of Things (IoT) and the success of rich cloud services have pushed the horizon of a new computing paradigm, edge computing, which calls for processing the data at the edge of the network. Edge computing has the potential to address the concerns of response time requirement, battery life constraint, bandwidth cost saving, as well as data safety and privacy. In this paper, we introduce the definition of edge computing, followed by several case studies, ranging from cloud offloading to smart home and city, as well as collaborative edge to materialize the concept of edge computing. Finally, we present several challenges and opportunities in the field of edge computing, and hope this paper will gain attention from the community and inspire more research in this direction.
Document Classification Using KNN with Fuzzy Bags of Word Representationsuthi
Abstract — Text classification assigns documents to categories based on the words, phrases and word combinations they contain, according to declared syntaxes. Many applications rely on text classification, such as artificial intelligence systems that organize data by category. Selected keywords, called topics, are used to classify a given document; from these topics, the main idea of the document can be identified, so selecting them well is an important task. In the proposed system, keywords are extracted from documents using TF-IDF and WordNet. The TF-IDF algorithm is mainly used to select the important words by which a document can be classified, while WordNet is used to find the similarity between these candidate words; the words with the maximum similarity are taken as topics (keywords). In our experiments we used the TF-IDF model to find similar words in order to classify documents. The decision tree algorithm gives better accuracy for text classification than other algorithms. We also use a fuzzy system to classify text written in natural language according to topic: a fuzzy classifier is necessary for this task because a given text can cover several topics to different degrees, whereas traditional classifiers inappropriately sort each text into a single class in a winner-takes-all fashion. The classifier we propose automatically learns its fuzzy rules from training examples. We have applied it to classify news articles, and the results we obtained are promising. The dimensionality of the document vectors is very important in text classification; it can be decreased by clustering based on fuzzy logic, so that, depending on similarity, documents are classified and formed into clusters according to their topics.
After the formation of clusters, one can easily access and save the documents. In this way we can measure similarity and summarize the words, called topics, which are used to classify the documents.
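The TF-IDF scoring step described above can be sketched without any library. A minimal sketch, assuming nothing beyond the standard tf * log(N/df) weighting; the three toy documents are invented for illustration.

```python
import math
from collections import Counter

def tf_idf(docs):
    """Score each term in each document by tf * idf,
    where idf = log(N / df) over the whole collection."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()
    for tokens in tokenized:
        df.update(set(tokens))  # document frequency of each term
    scores = []
    for tokens in tokenized:
        tf = Counter(tokens)
        scores.append({term: (count / len(tokens)) * math.log(n / df[term])
                       for term, count in tf.items()})
    return scores

docs = ["cloud storage auditing", "cloud data sharing", "fuzzy text classification"]
scores = tf_idf(docs)
# "auditing" appears in only one document, so it scores higher in the
# first document than "cloud", which appears in two.
```

Terms with the highest scores in a document are the candidate topics the abstract refers to.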
Short Notes on Automata Theory
Automata theory is the study of abstract machines and automata, as well as the computational problems that can be solved using them. It is a theory in theoretical computer science and discrete mathematics (a subject of study in both mathematics and computer science). The word automata (the plural of automaton) comes from the Greek word αὐτόματα, which means "self-making".
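One of the simplest abstract machines studied in automata theory, a deterministic finite automaton (DFA), can be sketched in a few lines; the states and transition table below are invented for the example.

```python
# A minimal DFA that accepts binary strings containing
# an even number of 0s. States and transitions are illustrative.

TRANSITIONS = {
    ("even", "0"): "odd",
    ("even", "1"): "even",
    ("odd", "0"): "even",
    ("odd", "1"): "odd",
}

def accepts(word: str) -> bool:
    """Run the DFA from the start state; accept iff it ends in 'even'."""
    state = "even"  # start state, also the accepting state
    for symbol in word:
        state = TRANSITIONS[(state, symbol)]
    return state == "even"

print(accepts("1001"))  # True: two 0s
print(accepts("100"))   # True: two 0s
print(accepts("0"))     # False: one 0
```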
OBJECT ORIENTED PROGRAMMING LANGUAGE - SHORT NOTESsuthi
Short Notes on OOP
Object-oriented programming (OOP) is a programming paradigm based on the concept of "objects", which can contain data, in the form of fields (often known as attributes or properties), and code, in the form of procedures (often known as methods). A feature of objects is an object's procedures that can access and often modify the data fields of the object with which they are associated (objects have a notion of "this" or "self"). In OOP, computer programs are designed by making them out of objects that interact with one another. OOP languages are diverse, but the most popular ones are class-based, meaning that objects are instances of classes, which also determine their types.
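The ideas above (fields, methods, and the object's notion of "self") can be made concrete with a short class-based sketch; the `Account` class is an invented example, not taken from the notes.

```python
class Account:
    """An object bundling data (fields) with code (methods)."""

    def __init__(self, owner: str, balance: float = 0.0):
        self.owner = owner      # field (attribute)
        self.balance = balance  # field (attribute)

    def deposit(self, amount: float) -> None:
        # A method accesses and modifies the fields of the object
        # it is associated with, via the 'self' reference.
        self.balance += amount

a = Account("Alice")  # an object is an instance of a class
a.deposit(25.0)
print(a.balance)  # 25.0
```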
PARALLEL ARCHITECTURE AND COMPUTING - SHORT NOTESsuthi
Short Notes on Parallel Computing
Parallel computing is a type of computation in which many calculations or the execution of processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time.
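The divide-and-solve-simultaneously idea can be sketched with Python's standard executor pool; the chunking scheme is an invented example.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each worker solves one smaller subproblem independently.
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Divide the large problem into smaller chunks...
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # ...and solve the chunks at the same time.
    # (For CPU-bound work in Python one would typically use a
    # ProcessPoolExecutor instead, to sidestep the GIL.)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum(list(range(101))))  # 5050
```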
C is a general-purpose, procedural, imperative computer programming language developed in 1972 by Dennis M. Ritchie at the Bell Telephone Laboratories to develop the UNIX operating system. C is one of the most widely used computer languages; it has consistently ranked near the top in popularity alongside Java, which is equally popular and widely used among modern software programmers.
A Study of Variable-Role-based Feature Enrichment in Neural Models of CodeAftab Hussain
Understanding variable roles in code has been found to be helpful by students
in learning programming -- could variable roles help deep neural models in
performing coding tasks? We do an exploratory study.
- These are slides of the talk given at InteNSE'23: The 1st International Workshop on Interpretability and Robustness in Neural Software Engineering, co-located with the 45th International Conference on Software Engineering, ICSE 2023, Melbourne Australia
Custom Healthcare Software for Managing Chronic Conditions and Remote Patient...Mind IT Systems
Healthcare providers often struggle with the complexities of chronic conditions and remote patient monitoring, as each patient requires personalized care and ongoing monitoring. Off-the-shelf solutions may not meet these diverse needs, leading to inefficiencies and gaps in care. It’s here, custom healthcare software offers a tailored solution, ensuring improved care and effectiveness.
Utilocate offers a comprehensive solution for locate ticket management by automating and streamlining the entire process. By integrating with Geospatial Information Systems (GIS), it provides accurate mapping and visualization of utility locations, enhancing decision-making and reducing the risk of errors. The system's advanced data analytics tools help identify trends, predict potential issues, and optimize resource allocation, making the locate ticket management process smarter and more efficient. Additionally, automated ticket management ensures consistency and reduces human error, while real-time notifications keep all relevant personnel informed and ready to respond promptly.
The system's ability to streamline workflows and automate ticket routing significantly reduces the time taken to process each ticket, making the process faster and more efficient. Mobile access allows field technicians to update ticket information on the go, ensuring that the latest information is always available and accelerating the locate process. Overall, Utilocate not only enhances the efficiency and accuracy of locate ticket management but also improves safety by minimizing the risk of utility damage through precise and timely locates.
GraphSummit Paris - The art of the possible with Graph TechnologyNeo4j
Sudhir Hasbe, Chief Product Officer, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Providing Globus Services to Users of JASMIN for Environmental Data AnalysisGlobus
JASMIN is the UK’s high-performance data analysis platform for environmental science, operated by STFC on behalf of the UK Natural Environment Research Council (NERC). In addition to its role in hosting the CEDA Archive (NERC’s long-term repository for climate, atmospheric science & Earth observation data in the UK), JASMIN provides a collaborative platform to a community of around 2,000 scientists in the UK and beyond, providing nearly 400 environmental science projects with working space, compute resources and tools to facilitate their work. High-performance data transfer into and out of JASMIN has always been a key feature, with many scientists bringing model outputs from supercomputers elsewhere in the UK, to analyse against observational or other model data in the CEDA Archive. A growing number of JASMIN users are now realising the benefits of using the Globus service to provide reliable and efficient data movement and other tasks in this and other contexts. Further use cases involve long-distance (intercontinental) transfers to and from JASMIN, and collecting results from a mobile atmospheric radar system, pushing data to JASMIN via a lightweight Globus deployment. We provide details of how Globus fits into our current infrastructure, our experience of the recent migration to GCSv5.4, and of our interest in developing use of the wider ecosystem of Globus services for the benefit of our user community.
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus...Globus
Large Language Models (LLMs) are currently the center of attention in the tech world, particularly for their potential to advance research. In this presentation, we'll explore a straightforward and effective method for quickly initiating inference runs on supercomputers using the vLLM tool with Globus Compute, specifically on the Polaris system at ALCF. We'll begin by briefly discussing the popularity and applications of LLMs in various fields. Following this, we will introduce the vLLM tool, and explain how it integrates with Globus Compute to efficiently manage LLM operations on Polaris. Attendees will learn the practical aspects of setting up and remotely triggering LLMs from local machines, focusing on ease of use and efficiency. This talk is ideal for researchers and practitioners looking to leverage the power of LLMs in their work, offering a clear guide to harnessing supercomputing resources for quick and effective LLM inference.
Enhancing Research Orchestration Capabilities at ORNL.pdfGlobus
Cross-facility research orchestration comes with ever-changing constraints regarding the availability and suitability of various compute and data resources. In short, a flexible data and processing fabric is needed to enable the dynamic redirection of data and compute tasks throughout the lifecycle of an experiment. In this talk, we illustrate how we easily leveraged Globus services to instrument the ACE research testbed at the Oak Ridge Leadership Computing Facility with flexible data and task orchestration capabilities.
An Enterprise Resource Planning system includes various modules that reduce a business's workload. Additionally, it organizes workflows, which drives productivity. Here is a detailed explanation of the ERP modules; going through the points will help you understand how the software is changing work dynamics.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Need for Speed: Removing speed bumps from your Symfony projects ⚡️Łukasz Chruściel
No one wants their application to drag like a car stuck in the slow lane! Yet it’s all too common to encounter bumpy, pothole-filled solutions that slow the speed of any application. Symfony apps are not an exception.
In this talk, I will take you for a spin around the performance racetrack. We’ll explore common pitfalls - those hidden potholes on your application that can cause unexpected slowdowns. Learn how to spot these performance bumps early, and more importantly, how to navigate around them to keep your application running at top speed.
We will focus in particular on tuning your engine at the application level, making the right adjustments to ensure that your system responds like a well-oiled, high-performance race car.
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc and I ...Juraj Vysvader
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc. I didn't get rich from it, but my extensions reached 63K downloads (powering possibly tens of thousands of websites).
ORUTA BASE PAPER
Oruta: Privacy-Preserving Public Auditing for Shared Data in the Cloud
Boyang Wang, Baochun Li, Member, IEEE, and Hui Li, Member, IEEE
Abstract—With cloud storage services, it is commonplace for data to be not only stored in the cloud, but also shared across multiple users. However, public auditing for such shared data, while preserving identity privacy, remains an open challenge. In this paper, we propose the first privacy-preserving mechanism that allows public auditing on shared data stored in the cloud. In particular, we exploit ring signatures to compute the verification information needed to audit the integrity of shared data. With our mechanism, the identity of the signer on each block in shared data is kept private from a third party auditor (TPA), who is still able to publicly verify the integrity of shared data without retrieving the entire file. Our experimental results demonstrate the effectiveness and efficiency of our proposed mechanism when auditing shared data.
Index Terms—Public auditing, privacy-preserving, shared data, cloud computing.
1 INTRODUCTION
CLOUD service providers manage an enterprise-class infrastructure that offers a scalable, secure and reliable environment for users, at a much lower marginal cost due to the sharing nature of resources. It is routine for users to use cloud storage services to share data with others in a team, as data sharing becomes a standard feature in most cloud storage offerings, including Dropbox and Google Docs.
The integrity of data in cloud storage, however, is subject to skepticism and scrutiny, as data stored in an untrusted cloud can easily be lost or corrupted, due to hardware failures and human errors [1]. To protect the integrity of cloud data, it is best to perform public auditing by introducing a third party auditor (TPA), who offers its auditing service with more powerful computation and communication abilities than regular users.
The first provable data possession (PDP) mechanism [2] to perform public auditing is designed to check the correctness of data stored in an untrusted server, without retrieving the entire data. Moving a step forward, Wang et al. [3] (referred to as WWRL in this paper) design a public auditing mechanism for cloud data, so that during public auditing, the content of private data belonging to a personal user is not disclosed to the third party auditor.
We believe that sharing data among multiple users is perhaps one of the most engaging features that motivates cloud storage. A unique problem introduced during the process of public auditing for shared data in the cloud is how to preserve identity privacy from the TPA, because the identities of signers on shared data may indicate that a particular user in the group or a special block in shared data is a more valuable target than others.
• Boyang Wang and Hui Li are with the State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an, 710071, China. Boyang Wang is also a visiting Ph.D. student at the Department of Electrical and Computer Engineering, University of Toronto. E-mail: {bywang,lihui}@mail.xidian.edu.cn
• Baochun Li is with the Department of Electrical and Computer Engineering, University of Toronto, Toronto, ON, M5S 3G4, Canada. E-mail: bli@eecg.toronto.edu
For example, Alice and Bob work together as a group
and share a file in the cloud. The shared file is divided
into a number of small blocks, which are independently
signed by users. Once a block in this shared file is
modified by a user, this user needs to sign the new block
using her public/private key pair. The TPA needs to
know the identity of the signer on each block in this
shared file, so that it is able to audit the integrity of the
whole file based on requests from Alice or Bob.
Fig. 1. Alice and Bob share a file in the cloud. The TPA audits the integrity of shared data with existing mechanisms. (The original figure shows three auditing tasks over ten blocks, each marked "A" for a block signed by Alice or "B" for a block signed by Bob, with the 8th block highlighted in each task.)
As shown in Fig. 1, after performing several auditing tasks, some private and sensitive information may be revealed to the TPA. On one hand, most of the blocks in the shared file are signed by Alice, which may indicate that Alice plays an important role in this group, such as a group leader. On the other hand, the 8-th block is frequently modified by different users, which means this block may contain high-value data, such as a final bid in an auction, that Alice and Bob need to discuss and change several times.
IEEE 5TH INTERNATIONAL CONFERENCE ON CLOUD COMPUTING YEAR 2014
As described in the example above, the identities of signers on shared data may indicate which user in the group or which block in shared data is a more valuable target than others. Such information is confidential to the group and should not be revealed to any third party. However, no existing mechanism in the literature is able to perform public auditing on shared data in the cloud while still preserving identity privacy.
In this paper, we propose Oruta¹, a new privacy-preserving public auditing mechanism for shared data in an untrusted cloud. In Oruta, we utilize ring signatures [4], [5] to construct homomorphic authenticators [2], [6], so that the third party auditor is able to verify the integrity of shared data for a group of users without retrieving the entire data, while the identity of the signer on each block in shared data is kept private from the TPA. In addition, we further extend our mechanism to support batch auditing, which can audit multiple shared data simultaneously in a single auditing task. Meanwhile, Oruta continues to use random masking [3] to support data privacy during public auditing, and leverages index hash tables [7] to support fully dynamic operations on shared data. A dynamic operation indicates an insert, delete or update operation on a single block in shared data. A high-level comparison between Oruta and existing mechanisms in the literature is shown in Table 1. To the best of our knowledge, this paper represents the first attempt towards designing an effective privacy-preserving public auditing mechanism for shared data in the cloud.
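The bookkeeping role played by an index table in dynamic operations (insert, delete or update of a single block without disturbing the rest) can be caricatured as follows. This is a hypothetical sketch of the idea only, not the actual index hash table construction of [7]: the class name, random virtual identifiers and SHA-256 block hashes are all illustrative choices.

```python
import hashlib
import secrets

class IndexHashTable:
    """Each block keeps a stable virtual identifier, so inserting or
    deleting one block does not shift the identifiers of the others
    (a sketch of how an index table supports dynamic operations)."""

    def __init__(self):
        self.entries = []  # ordered list of (virtual_id, block_hash)

    def _record(self, block: bytes):
        # A random virtual id decouples a block from its physical position.
        return (secrets.token_hex(8), hashlib.sha256(block).hexdigest())

    def insert(self, position: int, block: bytes):
        self.entries.insert(position, self._record(block))

    def update(self, position: int, block: bytes):
        self.entries[position] = self._record(block)

    def delete(self, position: int):
        del self.entries[position]

table = IndexHashTable()
table.insert(0, b"block A")
table.insert(1, b"block C")
table.insert(1, b"block B")   # insert in the middle...
table.delete(1)               # ...and remove it again:
# "block A" and "block C" keep their original virtual ids throughout,
# so their existing signatures need not be recomputed.
```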
TABLE 1
Comparison with Existing Mechanisms

                   PDP [2]   WWRL [3]   Oruta
Public auditing      Yes       Yes       Yes
Data privacy         No        Yes       Yes
Identity privacy     No        No        Yes
The remainder of this paper is organized as follows. In Section 2, we present the system model and threat model. In Section 3, we introduce cryptographic primitives used in Oruta. The detailed design and security analysis of Oruta are presented in Section 4 and Section 5. In Section 6, we evaluate the performance of Oruta. Finally, we briefly discuss related work in Section 7, and conclude this paper in Section 8.
2 PROBLEM STATEMENT
2.1 System Model
As illustrated in Fig. 2, our work in this paper involves three parties: the cloud server, the third party auditor (TPA) and users. There are two types of users in a group: the original user and a number of group users. The original user and group users are both members of the group. Group members are allowed to access and modify shared data created by the original user based on access control policies [8]. Shared data and its verification information (i.e., signatures) are both stored in the cloud server. The third party auditor is able to verify the integrity of shared data in the cloud server on behalf of group members.
¹ Oruta: One Ring to Rule Them All.
Fig. 2. Our system model includes the cloud server, the third party auditor and users. (The original figure shows shared data flowing from users to the cloud server, and the auditing protocol: 1. Auditing Request from a user to the TPA; 2. Auditing Message from the TPA to the cloud server; 3. Auditing Proof from the cloud server to the TPA; 4. Auditing Report from the TPA to the user.)
In this paper, we only consider how to audit the
integrity of shared data in the cloud with static groups.
It means the group is pre-defined before shared data
is created in the cloud and the membership of users
in the group is not changed during data sharing. The
original user is responsible for deciding who is able
to share her data before outsourcing data to the cloud.
Another interesting problem is how to audit the integrity
of shared data in the cloud with dynamic groups — a
new user can be added into the group and an existing
group member can be revoked during data sharing —
while still preserving identity privacy. We will leave this
problem to our future work.
When a user (either the original user or a group user) wishes to check the integrity of shared data, she first sends an auditing request to the TPA. After receiving the auditing request, the TPA generates an auditing message, sends it to the cloud server, and retrieves an auditing proof of shared data from the cloud server. Then the TPA verifies the correctness of the auditing proof. Finally, the TPA sends an auditing report to the user based on the result of the verification.
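The four-step exchange above can be sketched as plain function calls. This is a toy trace of the message flow only: the real auditing proof in Oruta is cryptographic (homomorphic ring signatures over challenged blocks), whereas here a hash over the challenged blocks is a stand-in, and the TPA keeps a reference copy purely for simplicity; all function names are invented.

```python
import hashlib
import random

def tpa_auditing_message(num_blocks, sample=3):
    # 2. Auditing Message: the TPA challenges a random subset of blocks,
    # so the entire file never has to be retrieved.
    return random.sample(range(num_blocks), min(sample, num_blocks))

def cloud_auditing_proof(storage, challenge):
    # 3. Auditing Proof: computed by the cloud over challenged blocks only.
    digest = hashlib.sha256()
    for i in challenge:
        digest.update(storage[i])
    return digest.hexdigest()

def tpa_verify(reference, challenge, proof):
    # 4. Auditing Report: the TPA compares the proof with the expected value.
    expected = cloud_auditing_proof(reference, challenge)
    return "data intact" if proof == expected else "data corrupted"

blocks = [b"block-%d" % i for i in range(10)]   # shared data in the cloud
challenge = tpa_auditing_message(len(blocks))   # 1. Auditing Request implied
proof = cloud_auditing_proof(blocks, challenge)
print(tpa_verify(blocks, challenge, proof))     # data intact
```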
2.2 Threat Model
2.2.1 Integrity Threats
Two kinds of threats related to the integrity of shared data are possible. First, an adversary may try to corrupt the integrity of shared data and prevent users from using data correctly. Second, the cloud service provider may inadvertently corrupt (or even remove) data in its storage due to hardware failures and human errors. Making matters worse, in order to avoid jeopardizing its reputation, the cloud service provider may be reluctant to inform users about such corruption of data.
2.2.2 Privacy Threats
The identity of the signer on each block in shared data is private and confidential to the group. During the process of auditing, a semi-trusted TPA, who is only responsible for auditing the integrity of shared data, may try to reveal the identity of the signer on each block in shared data based on verification information. Once the TPA reveals the identity of the signer on each block, it can easily distinguish a high-value target (a particular user in the group or a special block in shared data).
2.3 Design Objectives
To enable the TPA to efficiently and securely verify shared data for a group of users, Oruta should be designed to achieve the following properties: (1) Public Auditing: the third party auditor is able to publicly verify the integrity of shared data for a group of users without retrieving the entire data. (2) Correctness: the third party auditor is able to correctly detect whether there is any corrupted block in shared data. (3) Unforgeability: only a user in the group can generate valid verification information on shared data. (4) Identity Privacy: during auditing, the TPA cannot distinguish the identity of the signer on each block in shared data.
3 PRELIMINARIES
In this section, we briefly introduce cryptographic
primitives and their corresponding properties that we
implement in Oruta.
3.1 Bilinear Maps
We first introduce a few concepts and properties re-
lated to bilinear maps. We follow notations from [5], [9]:
1) G1, G2 and GT are three multiplicative cyclic
groups of prime order p;
2) g1 is a generator of G1, and g2 is a generator of G2;
3) ψ is a computable isomorphism from G2 to G1,
with ψ(g2) = g1;
4) e is a bilinear map e: G1 × G2 → GT with the
following properties: Computability: there exists
an efficiently computable algorithm for computing
the map e. Bilinearity: for all u ∈ G1, v ∈ G2 and
a, b ∈ Zp, e(u^a, v^b) = e(u, v)^{ab}. Non-degeneracy:
e(g1, g2) ≠ 1.
These properties further imply two additional properties:
(1) for any u1, u2 ∈ G1 and v ∈ G2, e(u1 · u2, v) =
e(u1, v) · e(u2, v); (2) for any u, v ∈ G2, e(ψ(u), v) =
e(ψ(v), u).
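To make bilinearity and the two implied properties concrete, here is a minimal toy model. It is purely illustrative: each group element g^a is represented by its exponent modulo a small prime chosen for this sketch, and the pairing is modeled as a product of exponents; no real pairing library or security is involved.

```python
# Toy model of a bilinear map, for intuition only (NOT cryptographically
# secure): each element g^a of a cyclic group of prime order p is represented
# by its exponent a, so e(g^a, g^b) = e(g, g)^(a*b) becomes a*b mod p.
p = 101  # small prime group order, illustration only

def pairing(a, b):
    """Models e(g1^a, g2^b) = e(g1, g2)^(a*b), tracked as an exponent mod p."""
    return (a * b) % p

# Bilinearity: e(u^x, v^y) = e(u, v)^(x*y)
u, v, x, y = 5, 7, 3, 4
assert pairing(u * x % p, v * y % p) == pairing(u, v) * x * y % p

# Implied property (1): e(u1*u2, v) = e(u1, v) * e(u2, v); multiplying group
# elements adds exponents, and multiplying in GT also adds exponents.
u1, u2 = 9, 11
assert pairing((u1 + u2) % p, v) == (pairing(u1, v) + pairing(u2, v)) % p
```

The same exponent-tracking trick is reused in the later sketches to check the scheme's verification equations.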
3.2 Complexity Assumptions
Definition 1: Discrete Logarithm Problem. For a ∈ Zp,
given g, h = g^a ∈ G1, output a.
The Discrete Logarithm assumption holds in G1 if no
t-time algorithm has advantage at least ε in solving the
Discrete Logarithm problem in G1, which means it is
computationally infeasible to solve the Discrete Logarithm
problem in G1.
Definition 2: Computational Co-Diffie-Hellman
Problem. For a ∈ Zp, given g2, g2^a ∈ G2 and h ∈ G1,
compute h^a ∈ G1.
The co-CDH assumption holds in G1 and G2 if no t-
time algorithm has advantage at least ε in solving the
co-CDH problem in G1 and G2. When G1 = G2 and
g1 = g2, the co-CDH problem reduces to the standard
CDH problem in G1. The co-CDH assumption is a
stronger assumption than the Discrete Logarithm
assumption.
Definition 3: Computational Diffie-Hellman Problem.
For a, b ∈ Zp, given g1, g1^a, g1^b ∈ G1, compute
g1^{ab} ∈ G1.
The CDH assumption holds in G1 if no t-time
algorithm has advantage at least ε in solving the CDH
problem in G1.
3.3 Ring Signatures
The concept of ring signatures was first proposed by
Rivest et al. [4] in 2001. With ring signatures, a verifier
is convinced that a signature is computed using one of
the group members' private keys, but the verifier is not
able to determine which one. This property can be used
to preserve the identity of the signer from a verifier.
The ring signature scheme introduced by Boneh et
al. [5] (referred to as BGLS in this paper) is constructed
on bilinear maps. We will extend this ring signature
scheme to construct our public auditing mechanism.
3.4 Homomorphic Authenticators
Homomorphic authenticators (also called homomor-
phic verifiable tags) are basic tools to construct data
auditing mechanisms [2], [3], [6]. Besides unforgeability
(only a user with a private key can generate valid signa-
tures), a homomorphic authenticable signature scheme,
which denotes a homomorphic authenticator based on
signatures, should also satisfy the following properties:
Let (pk, sk) denote the signer's public/private key
pair, σ1 denote a signature on block m1 ∈ Zp, and σ2
denote a signature on block m2 ∈ Zp.
• Blockless verification: Given σ1 and σ2, two random
values α1, α2 ∈ Zp and a block m′ = α1m1 + α2m2 ∈ Zp,
a verifier is able to check the correctness of block m′
without knowing blocks m1 and m2.
• Non-malleability: Given σ1 and σ2, two random
values α1, α2 ∈ Zp and a block m′ = α1m1 + α2m2 ∈ Zp,
a user who does not have the private key sk is not
able to generate a valid signature σ′ on block m′ by
linearly combining signatures σ1 and σ2.
Blockless verification allows a verifier to audit the cor-
rectness of data stored in the cloud server with a single
block, which is a linear combination of all the blocks
in data. If the combined block is correct, the verifier
believes that the blocks in data are all correct. In this way,
the verifier does not need to download all the blocks to
check the integrity of data. Non-malleability indicates
that an attacker cannot generate valid signatures on
invalid blocks by linearly combining existing signatures.
Other cryptographic techniques related to homomorphic
authenticable signatures include aggregate signatures [5],
homomorphic signatures [10] and batch-verification
signatures [11]. If a signature scheme is
blockless verifiable and malleable, it is a homomorphic
signature scheme. In the construction of data auditing
mechanisms, we should use homomorphic authenticable
signatures, not homomorphic signatures.
4 HOMOMORPHIC AUTHENTICABLE RING
SIGNATURES
4.1 Overview
In this section, we introduce a new ring signature
scheme, which is suitable for public auditing. Then, we
will show how to build the privacy-preserving public
auditing mechanism for shared data in the cloud based
on this new ring signature scheme in the next section.
As we introduced in previous sections, we intend to
utilize ring signatures to hide the identity of the signer
on each block, so that private and sensitive information
of the group is not disclosed to the TPA. However,
traditional ring signatures [4], [5] cannot be directly used
in public auditing mechanisms, because these ring
signature schemes do not support blockless verification.
Without blockless verification, the TPA has to download
the whole data file to verify the correctness of shared
data, which consumes excessive bandwidth and leads to
long verification times.
Therefore, we first construct a new homomorphic
authenticable ring signature (HARS) scheme, which is
extended from a classic ring signature scheme [5], de-
noted as BGLS. The ring signatures generated by HARS
are able not only to preserve identity privacy but also to
support blockless verification.
4.2 Construction of HARS
HARS contains three algorithms: KeyGen, RingSign
and RingVerify. In KeyGen, each user in the group
generates her public key and private key. In RingSign, a
user in the group is able to sign a block with her private
key and all the group members’ public keys. A verifier
is allowed to check whether a given block is signed by
a group member in RingVerify.
Scheme Details. Let G1, G2 and GT be multiplicative
cyclic groups of order p, and let g1 and g2 be generators
of G1 and G2, respectively. Let e : G1 × G2 → GT be a
bilinear map, and ψ : G2 → G1 be a computable
isomorphism with ψ(g2) = g1. There is a public
map-to-point hash function H1: {0, 1}* → G1. The global
parameters are (e, ψ, p, G1, G2, GT, g1, g2, H1). The total
number of users in the group is d. Let U denote the
group that includes all the d users.
KeyGen. For a user ui in the group U, she randomly
picks xi ∈ Zp and computes wi = g2^{xi} ∈ G2. Then, user
ui's public key is pki = wi and her private key is ski = xi.
RingSign. Given all the d users' public keys
(pk1, ..., pkd) = (w1, ..., wd), a block m ∈ Zp, the
identifier of this block id and the private key sks for some
s, user us randomly chooses ai ∈ Zp for all i ≠ s, where
i ∈ [1, d], and lets σi = g1^{ai}. Then, she computes

β = H1(id) · g1^m ∈ G1,   (1)

and sets

σs = (β / ψ(∏_{i≠s} wi^{ai}))^{1/xs} ∈ G1.   (2)

The ring signature of block m is σ = (σ1, ..., σd) ∈ G1^d.
RingVerify. Given all the d users' public keys
(pk1, ..., pkd) = (w1, ..., wd), a block m, an identifier id
and a ring signature σ = (σ1, ..., σd), a verifier first
computes β = H1(id) · g1^m ∈ G1, and then checks

e(β, g2) ?= ∏_{i=1}^{d} e(σi, wi).   (3)

If the above equation holds, then the given block m is
signed by one of the d users in the group. Otherwise,
it is not.
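The three HARS algorithms can be sketched in an "exponent model": every group element g^a is represented by its exponent modulo a toy prime, and the pairing e(g1^a, g2^b) is modeled as a·b mod p. This is illustration only; in this model a public key numerically equals the private key, which a real bilinear group rules out, so nothing below is cryptographically secure.

```python
# Toy sketch of KeyGen / RingSign / RingVerify in the exponent model.
# Group element g^a  ->  exponent a mod p;  pairing e(g1^a, g2^b)  ->  a*b mod p.
import hashlib
import random

p = 1009  # toy prime group order (assumption for this sketch)

def H1(identifier):
    """Stand-in for the map-to-point hash H1: {0,1}* -> G1."""
    return int(hashlib.sha256(identifier.encode()).hexdigest(), 16) % p

def keygen(d):
    sks = [random.randrange(1, p) for _ in range(d)]   # x_i
    pks = sks[:]          # w_i = g2^{x_i}: in the exponent model, just x_i
    return sks, pks

def ring_sign(m, identifier, pks, s, sk_s):
    d = len(pks)
    beta = (H1(identifier) + m) % p                    # beta = H1(id) * g1^m
    sigma = [random.randrange(p) for _ in range(d)]    # sigma_i = g1^{a_i}, i != s
    rest = sum(sigma[i] * pks[i] for i in range(d) if i != s) % p
    sigma[s] = (beta - rest) * pow(sk_s, -1, p) % p    # (beta / psi(...))^{1/x_s}
    return sigma

def ring_verify(m, identifier, pks, sigma):
    beta = (H1(identifier) + m) % p
    # e(beta, g2) ?= prod_i e(sigma_i, w_i): pairings multiply exponents,
    # and the product over i becomes a sum of exponents.
    return beta == sum(si * wi for si, wi in zip(sigma, pks)) % p

sks, pks = keygen(4)
sig = ring_sign(42, "block-0", pks, s=2, sk_s=sks[2])
assert ring_verify(42, "block-0", pks, sig)        # a valid block verifies
assert not ring_verify(43, "block-0", pks, sig)    # a tampered block is rejected
```

In the real scheme the verifier only sees w_i = g2^{x_i}; the pairing lets it evaluate e(σi, wi) without learning x_i, which this exponent model cannot capture.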
4.3 Security Analysis of HARS
Now, we discuss some important properties of HARS,
including correctness, unforgeability, blockless verifica-
tion, non-malleability and identity privacy.
Theorem 1: Given any block and its ring signature, a
verifier is able to correctly check the integrity of this block
under HARS.
Proof: Proving the correctness of HARS is equivalent
to proving that Equation (3) holds. Based on the properties
of bilinear maps, the correctness of this equation can be
shown as follows:

∏_{i=1}^{d} e(σi, wi) = e(σs, ws) · ∏_{i≠s} e(σi, wi)
= e((β / ψ(∏_{i≠s} wi^{ai}))^{1/xs}, g2^{xs}) · ∏_{i≠s} e(g1^{ai}, g2^{xi})
= e(β / ψ(∏_{i≠s} g2^{xi·ai}), g2) · ∏_{i≠s} e(g1^{ai·xi}, g2)
= e(β / ∏_{i≠s} g1^{ai·xi}, g2) · e(∏_{i≠s} g1^{ai·xi}, g2)
= e((β / ∏_{i≠s} g1^{ai·xi}) · ∏_{i≠s} g1^{ai·xi}, g2)
= e(β, g2).
Now we prove that HARS is resistant to forgery. We
follow the security model and the game defined in
BGLS [5]. In the game, an adversary is given all the
d users' public keys (pk1, ..., pkd) = (w1, ..., wd), and
is given access to the hash oracle and the ring-signing
oracle. The goal of the adversary is to output a valid
ring signature on a pair of block/identifier (id, m),
where this pair of block/identifier (id, m) has never been
presented to the ring-signing oracle. If the adversary
achieves this goal, then it wins the game.
Theorem 2: Suppose A is a (t′, ε′)-algorithm that can
generate a forgery of a ring signature on a group of users of
size d. Then there exists a (t, ε)-algorithm that can solve the
co-CDH problem with t ≤ 2t′ + 2c_{G1}(qH + d·qs + qs + d) + 2c_{G2}·d
and ε ≥ (ε′/(e + e·qs))^2, where A issues at most qH hash
queries and at most qs ring-signing queries, e = lim_{qs→∞}(1 +
1/qs)^{qs}, exponentiation and inversion on G1 take time c_{G1},
and exponentiation and inversion on G2 take time c_{G2}.
Proof: The co-CDH problem can be solved by solving
two random instances of the following problem: Given
g1^{ab}, g2^a (and g1, g2), compute g1^b. We shall construct an
algorithm B that solves this problem. This problem is
easy if a = 0; in what follows, we assume a ≠ 0.
Initially, B randomly picks x2, ..., xd from Zp and sets
x1 = 1. Then, it sets pki = wi = (g2^a)^{xi}. Algorithm
A is given the public keys (pk1, ..., pkd) = (w1, ..., wd).
Without loss of generality, we assume A can submit
distinct queries, which means for every ring-signing
query on a block m and its identifier id, A has previously
issued a hash query on block m and identifier id.
On a hash query, B flips a coin that shows 0 with
probability pc and 1 otherwise, where pc will be determined
later. Then B randomly picks r ∈ Zp; if the coin shows
0, B returns (g1^{ab})^r, otherwise it returns ψ(g2^a)^r.
Suppose A issues a ring-signing query on a block m and
its identifier id. By the assumption, a hash query has
been issued on this pair of block/identifier (m, id). If the
coin B flipped for this hash query showed 0, then B fails
and exits. Otherwise B has returned H1(id) · g1^m = ψ(g2^a)^r
for some r. In this case, B chooses random a2, ..., ad ∈ Zp,
computes a1 = r − (a2x2 + ... + adxd), and returns the
signature σ = (g1^{a1}, ..., g1^{ad}).
Eventually A outputs a forgery σ = (σ1, ..., σd)
on block m and identifier id. Again by the assumption,
a hash query has been issued on this pair of
block/identifier (m, id). If the coin flipped by B for this
hash query did not show 0, then B fails. Otherwise,
H1(id) · g1^m = g1^{abr} for some r chosen by B, and B can
output g1^b by computing (∏_{i=1}^{d} σi^{xi})^{1/r}.
Algorithm A cannot distinguish between B's simulation
and real life. If A successfully forges a ring
signature, then B can output g1^b. The probability that
B will not fail is pc^{qs}·(1 − pc), which is maximized when
pc = qs/(qs + 1); the bound of this probability is then
1/(e·(1 + qs)), where e = lim_{qs→∞}(1 + 1/qs)^{qs}. Algorithm
B requires d exponentiations on G2 in setup, one
exponentiation on G1 for each of A's hash queries, d + 1
exponentiations on G1 for each of A's signature queries,
and d exponentiations on G1 in the output phase, so
the running time of B is the running time of A plus
c_{G1}(qH + d·qs + qs + d) + c_{G2}·d.
Because the co-CDH problem can be solved by solving
two random instances of the above problem with
algorithm B, if A is a (t′, ε′)-algorithm that can generate
a forgery of a ring signature on a group of users of size d,
then there exists a (t, ε)-algorithm that can solve the
co-CDH problem with t ≤ 2t′ + 2c_{G1}(qH + d·qs + qs + d) +
2c_{G2}·d and ε ≥ (ε′/(e + e·qs))^2.
Theorem 3: For an adversary, it is computationally
infeasible to forge a ring signature under HARS.
Proof: As we already proved in Theorem 2, if an
adversary can forge a ring signature, then we can find a
(t, ε)-algorithm to solve the co-CDH problem in G1 and
G2, which contradicts the fact that the co-CDH problem
in G1 and G2 is hard. Therefore, for an adversary,
it is computationally infeasible to forge a ring signature
under HARS.
Then, based on Theorems 1 and 3, we show that HARS
is a homomorphic authenticable ring signature scheme.
Theorem 4: HARS is a homomorphic authenticable ring
signature scheme.
Proof: To prove HARS is a homomorphic authenti-
cable ring signature scheme, we first prove that HARS is
able to support blockless verification, which we defined
in Section 3. Then we show HARS is also non-malleable.
Given all the d users' public keys (pk1, ..., pkd) =
(w1, ..., wd), two identifiers id1 and id2, two ring signatures
σ1 = (σ1,1, ..., σ1,d) and σ2 = (σ2,1, ..., σ2,d), and two
random values y1, y2 ∈ Zp, a verifier is able to check the
correctness of a combined block m′ = y1m1 + y2m2 ∈ Zp
without knowing blocks m1 and m2 by verifying:

e(H1(id1)^{y1} · H1(id2)^{y2} · g1^{m′}, g2) ?= ∏_{i=1}^{d} e(σ1,i^{y1} · σ2,i^{y2}, wi).

Based on Theorem 1, the correctness of the above
equation can be proved as:

e(H1(id1)^{y1} · H1(id2)^{y2} · g1^{m′}, g2)
= e(H1(id1)^{y1} · g1^{y1m1}, g2) · e(H1(id2)^{y2} · g1^{y2m2}, g2)
= e(β1, g2)^{y1} · e(β2, g2)^{y2}
= ∏_{i=1}^{d} e(σ1,i, wi)^{y1} · ∏_{i=1}^{d} e(σ2,i, wi)^{y2}
= ∏_{i=1}^{d} e(σ1,i^{y1} · σ2,i^{y2}, wi).

If the combined block m′ is correct, the verifier also
believes that blocks m1 and m2 are both correct. Therefore,
HARS is able to support blockless verification.
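The blockless-verification algebra can be checked numerically in the same exponent-model style used for toy pairing arguments: group elements g^a become exponents mod a toy prime, and the pairing e(g^a, g^b) becomes a·b mod p. The hash values and keys below are arbitrary stand-ins chosen for this sketch; it only illustrates the algebra, not a secure scheme.

```python
# Toy check of blockless verification in the exponent model (illustration only).
import random

p = 1009                         # toy prime group order
d = 3                            # group size
H = {"id1": 17, "id2": 29}       # stand-in values for H1(id1), H1(id2)
x = [random.randrange(1, p) for _ in range(d)]   # private keys; w_i = x_i here

def sign(m, ident, s):
    """Ring-sign block m under identifier ident with user s's key."""
    beta = (H[ident] + m) % p
    sig = [random.randrange(p) for _ in range(d)]
    rest = sum(sig[i] * x[i] for i in range(d) if i != s) % p
    sig[s] = (beta - rest) * pow(x[s], -1, p) % p
    return sig

m1, m2 = 12, 34
s1, s2 = sign(m1, "id1", 0), sign(m2, "id2", 1)
y1, y2 = random.randrange(1, p), random.randrange(1, p)
m_comb = (y1 * m1 + y2 * m2) % p                 # combined block m'

# e(H1(id1)^y1 * H1(id2)^y2 * g1^{m'}, g2) ?= prod_i e(s1_i^y1 * s2_i^y2, w_i)
lhs = (y1 * H["id1"] + y2 * H["id2"] + m_comb) % p
rhs = sum((y1 * s1[i] + y2 * s2[i]) * x[i] for i in range(d)) % p
assert lhs == rhs    # the verifier checks m' without ever seeing m1 or m2
```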
Meanwhile, an adversary who does not have any
user's private key cannot generate a valid ring signature
σ′ on an invalid block m′ by linearly combining σ1 and
σ2 with y1 and y2: if each element σ′i in σ′ is
computed as σ′i = σ1,i^{y1} · σ2,i^{y2}, the whole ring signature
σ′ = (σ′1, ..., σ′d) cannot pass Equation (3) in RingVerify.
More specifically, if blocks m1 and m2 are signed by the
same user, for example user us, then σ′s can be computed
as

σ′s = σ1,s^{y1} · σ2,s^{y2} = (β1^{y1} · β2^{y2} / ψ(∏_{i≠s} wi^{y1·a1,i} · wi^{y2·a2,i}))^{1/xs}.

For all i ≠ s, σ′i = σ1,i^{y1} · σ2,i^{y2} = g1^{y1·a1,i + y2·a2,i}, where a1,i
and a2,i are random values. When the ring signature σ′ =
(σ′1, ..., σ′d) is verified with the invalid block m′ using
Equation (3):

∏_{i=1}^{d} e(σ′i, wi) = e(β1^{y1} · β2^{y2}, g2) ≠ e(β′, g2),

which means it always fails to pass the verification,
because β1^{y1} · β2^{y2} = H1(id1)^{y1} · H1(id2)^{y2} · g1^{m′} is not equal to
β′ = H1(id′) · g1^{m′}.
If blocks m1 and m2 are signed by different users, for
example users us and ut, then σ′s and σ′t can be
presented as

σ′s = (β1^{y1} / ψ(∏_{i≠s} wi^{y1·a1,i}))^{1/xs} · g1^{y2·a2,s},
σ′t = g1^{y1·a1,t} · (β2^{y2} / ψ(∏_{i≠t} wi^{y2·a2,i}))^{1/xt}.

For all i ≠ s and i ≠ t, σ′i = σ1,i^{y1} · σ2,i^{y2} = g1^{y1·a1,i + y2·a2,i},
where a1,i and a2,i are random values. When the ring
signature σ′ = (σ′1, ..., σ′d) is verified with the invalid block
m′ using Equation (3):

∏_{i=1}^{d} e(σ′i, wi) = e(β1^{y1} · β2^{y2}, g2) ≠ e(β′, g2),

which means it always fails to pass the verification.
Therefore, an adversary cannot output valid ring signatures
on invalid blocks by linearly combining existing
signatures, which indicates that HARS is non-malleable.
Because HARS is not only blockless verifiable but
also non-malleable, it is a homomorphic authenticable
signature scheme.
Now, following the theorem in [5], we show that
a verifier cannot distinguish the identity of the signer
among a group of users under HARS.
Theorem 5: For any algorithm A, any group U with d
users, and a random user us ∈ U, the probability Pr[A(σ) =
us] is at most 1/d under HARS, where σ is a ring signature
generated with user us's private key sks.
Proof: For any h ∈ G1 and any s, 1 ≤ s ≤ d, the
distribution {g1^{a1}, ..., g1^{ad} : ai chosen at random from Zp
for i ≠ s, as chosen such that ∏_{i=1}^{d} g1^{ai} = h} is identical
to the distribution {g1^{a1}, ..., g1^{ad} : ∏_{i=1}^{d} g1^{ai} = h}.
Therefore, given σ = (σ1, ..., σd), the probability that
algorithm A distinguishes σs, which indicates the identity
of the signer, is at most 1/d. Details of the proof about
identity privacy can be found in [5].
5 PRIVACY-PRESERVING PUBLIC AUDITING
FOR SHARED DATA IN THE CLOUD
5.1 Overview
Using HARS and its properties we established in the
previous section, we now construct Oruta, our privacy-
preserving public auditing mechanism for shared data in
the cloud. With Oruta, the TPA can verify the integrity
of shared data for a group of users without retrieving
the entire data. Meanwhile, the identity of the signer on
each block in shared data is kept private from the TPA
during the auditing.
5.2 Reduce Signature Storage
Another important issue we should consider in the
construction of Oruta is the size of storage used for
ring signatures. According to the generation of ring
signatures in HARS, a block m is an element of Zp
and its ring signature contains d elements of G1, where
G1 is a cyclic group with order p. It means a |p|-bit
block requires a d × |p|-bit ring signature, which forces
users to spend a huge amount of space on storing ring
signatures. It is very frustrating for users, because cloud
service providers, such as Amazon, will charge users
based on the storage space they used. To reduce the
storage for ring signatures and still allow the TPA to
audit shared data efficiently, we exploit an aggregated
approach from [6]. Specifically, we aggregate a block
mmmj = (mj,1, ..., mj,k) ∈ Zk
p in shared data as
k
l=1 ηmj,l
instead of computing gm
1 in Equation (1), where η1, ..., ηk
are random values of G1. With the aggregation, the
length of a ring signature is only d/k of the length of
a block. Similar methods to reduce the storage space of
signatures can also be found in [7]. Generally, to obtain
a smaller size of a ring signature than the size of a block,
we choose k > d. As a trade-off, the communication cost
will be increasing with an increase of k.
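The d/k ratio above is easy to sanity-check with a back-of-the-envelope computation. The element size and parameters below are illustrative assumptions for this sketch, not parameters fixed by the scheme.

```python
# Back-of-the-envelope check of the d/k storage ratio from the aggregation:
# a block of k elements of Z_p takes about k*|p| bits, while its ring
# signature has d elements of G1, i.e. roughly d*|p| bits.
p_bits = 160        # assumed size of an element (|p| bits), illustration only
d, k = 10, 100      # 10 group members, 100 elements per block (k > d)

block_bits = k * p_bits
sig_bits = d * p_bits
assert sig_bits / block_bits == d / k    # signature is d/k of the block size
print(f"signature overhead: {sig_bits / block_bits:.0%} of block size")
```

With k = 100 and d = 10, the ring signature is a tenth of the block size, instead of ten times the block size without aggregation.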
5.3 Support Dynamic Operations
To enable each user in the group to easily modify
data in the cloud and share the latest version of data
with the rest of the group, Oruta should also support
dynamic operations on shared data. A dynamic operation
is an insert, delete or update operation on a
single block. However, since the computation of a ring
signature includes an identifier of a block (as presented
in HARS), traditional methods, which only use the index
of a block as its identifier, are not suitable for supporting
dynamic operations on shared data. The reason is that,
when a user modifies a single block in shared data by
performing an insert or delete operation, the indices of
all the blocks after the modified block are changed (as
shown in Figures 3 and 4), and these index changes
require users to re-compute the signatures of those
blocks, even though their content is not modified.
Fig. 3. Insert: before the operation, the index table is
(1, m1), (2, m2), (3, m3), ..., (n, mn); after inserting block
m′2, it becomes (1, m1), (2, m′2), (3, m2), (4, m3), ...,
(n+1, mn). All the indices after block m′2 are changed.
Fig. 4. Delete: before the operation, the index table is
(1, m1), (2, m2), (3, m3), (4, m4), ..., (n, mn); after deleting
block m2, it becomes (1, m1), (2, m3), (3, m4), ...,
(n−1, mn). All the indices after block m1 are changed.
By utilizing index hash tables [7], our mechanism
allows a user to efficiently perform a dynamic operation
on a single block, and avoids this type of re-computation
on other blocks. Different from [7], in our mechanism, an
identifier from the index hash table is described as idj =
{vj, rj}, where vj is the virtual index of block mj, and
rj is a random value generated by a collision-resistant hash
function H2 : {0, 1}* → Zq with rj = H2(mj||vj). Here, q
is a much smaller prime than p. The collision resistance
of H2 ensures that each block has a unique identifier. The
virtual indices ensure that all the blocks in
shared data are in the right order: if vi < vj,
then block mi is ahead of block mj in shared data. When
shared data is created by the original user, the initial
virtual index of block mj is computed as vj = j · δ, where
δ is a system parameter decided by the original user.
If a new block m′j is inserted, the virtual index of this
new block m′j is v′j = (vj−1 + vj)/2. Clearly, if block
mj and block mj+1 are both originally created by the
original user, the maximal number of inserted blocks that
is allowed between block mj and block mj+1 is δ. The
original user can estimate and choose a proper value of δ
based on the original size of shared data, the number of
users in the group, the subject of the content in shared data,
and so on. Generally, we believe δ = 10,000 or 100,000 is
large enough for the maximal number of inserted blocks
between two blocks originally created by the original user.
Examples of different dynamic operations on shared data
with index hash tables are described in Figures 5 and 6.
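The midpoint rule for virtual indices can be sketched directly. Modeling H2 with SHA-256 reduced mod q is an assumption of this sketch; the scheme only requires a collision-resistant hash into Zq, and the toy modulus and δ below are illustrative.

```python
# Sketch of the index-hash-table identifiers id_j = {v_j, r_j}: inserts take
# the midpoint of neighboring virtual indices, so the identifiers of
# untouched blocks survive the operation.
import hashlib

q = 2**31 - 1        # toy prime for H2's range (assumption)
delta = 10_000       # system parameter chosen by the original user

def H2(block, v):
    """Stand-in for the collision-resistant hash H2: {0,1}* -> Z_q."""
    digest = hashlib.sha256(f"{block}|{v}".encode()).hexdigest()
    return int(digest, 16) % q

# table rows: (virtual index v_j, r_j, block content), kept sorted by v
table = [(j * delta, H2(f"m{j}", j * delta), f"m{j}") for j in range(1, 4)]

def insert_after(table, pos, block):
    """Insert a new block between positions pos and pos+1 (0-based)."""
    v_new = (table[pos][0] + table[pos + 1][0]) // 2   # v' = (v_j + v_{j+1})/2
    table.insert(pos + 1, (v_new, H2(block, v_new), block))

insert_after(table, 0, "m2'")   # insert between m1 (v=delta) and m2 (v=2*delta)
assert [row[0] for row in table] == [delta, 3 * delta // 2, 2 * delta, 3 * delta]
assert table[0][2] == "m1" and table[2][2] == "m2"   # old identifiers untouched
```

The inserted block receives v = 3δ/2, exactly the row shown in Figure 5, while every other (v, r) pair is unchanged, so no other signature needs re-computation.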
5.4 Construction of Oruta
Now, we present the details of our public auditing
mechanism, Oruta. It includes five algorithms: KeyGen,
SigGen, Modify, ProofGen and ProofVerify. In Key-
Gen, users generate their own public/private key pairs.
Fig. 5. Insert block m′2 into shared data using an index
hash table as identifiers: before the insert, the table of
(index, block, v, r) rows is (1, m1, δ, r1), (2, m2, 2δ, r2),
(3, m3, 3δ, r3), ..., (n, mn, nδ, rn); after the insert, it is
(1, m1, δ, r1), (2, m′2, 3δ/2, r′2), (3, m2, 2δ, r2),
(4, m3, 3δ, r3), ..., (n+1, mn, nδ, rn).
Fig. 6. Update block m1 and delete block m3 in shared
data using an index hash table as identifiers: before the
operations, the table of (index, block, v, r) rows is
(1, m1, δ, r1), (2, m2, 2δ, r2), (3, m3, 3δ, r3), (4, m4, 4δ, r4),
(5, m5, 5δ, r5), ..., (n, mn, nδ, rn); after the operations, it is
(1, m′1, δ, r′1), (2, m2, 2δ, r2), (3, m4, 4δ, r4),
(4, m5, 5δ, r5), ..., (n−1, mn, nδ, rn).
In SigGen, a user (either the original user or a group
user) is able to compute ring signatures on blocks in
shared data. Each user in the group is able to perform
an insert, delete or update operation on a block, and
compute the new ring signature on this new block in
Modify. ProofGen is operated by the TPA and the cloud
server together to generate a proof of possession of
shared data. In ProofVerify, the TPA verifies the proof
and sends an auditing report to the user.
Note that the group is pre-defined before shared data
is created in the cloud and the membership of the group
is not changed during data sharing. Before the original
user outsources shared data to the cloud, she decides all
the group members, and computes all the initial ring
signatures of all the blocks in shared data with her
private key and all the group members’ public keys.
After shared data is stored in the cloud, when a group
member modifies a block in shared data, this group
member also needs to compute a new ring signature on
the modified block.
Scheme Details. Let G1, G2 and GT be multiplicative
cyclic groups of order p, and let g1 and g2 be generators of
G1 and G2, respectively. Let e : G1 × G2 → GT
be a bilinear map, and ψ : G2 → G1 be a computable
isomorphism with ψ(g2) = g1. There are three
hash functions H1: {0, 1}* → G1, H2 : {0, 1}* → Zq
and h : G1 → Zp. The global parameters are
(e, ψ, p, q, G1, G2, GT, g1, g2, H1, H2, h). The total number
of users in the group is d. Let U denote the group that
includes all the d users.
Shared data M is divided into n blocks, and each block
mj is further divided into k elements of Zp. Therefore,
shared data M can be described as an n × k matrix:

M = [m1; ...; mn] = [m1,1 ... m1,k; ... ; mn,1 ... mn,k] ∈ Zp^{n×k}.
KeyGen. For user ui in the group U, she randomly
picks xi ∈ Zp and computes wi = g2^{xi}. User ui's
public key is pki = wi and her private key is ski =
xi. The original user also randomly generates a public
aggregate key pak = (η1, ..., ηk), where the ηl are random
elements of G1.
SigGen. Given all the d group members' public keys
(pk1, ..., pkd) = (w1, ..., wd), a block mj = (mj,1, ..., mj,k),
its identifier idj, and a private key sks for some s, user us
computes the ring signature of this block as follows:
1) She first aggregates block mj with the public
aggregate key pak, and computes

βj = H1(idj) · ∏_{l=1}^{k} η_l^{mj,l} ∈ G1.   (4)

2) After computing βj, user us randomly chooses
aj,i ∈ Zp and sets σj,i = g1^{aj,i}, for all i ≠ s. Then
she calculates

σj,s = (βj / ψ(∏_{i≠s} wi^{aj,i}))^{1/xs} ∈ G1.   (5)

The ring signature of block mj is σj =
(σj,1, ..., σj,d) ∈ G1^d.
Modify. A user in the group modifies the j-th block
in shared data by performing one of the following three
operations:
• Insert. This user inserts a new block m′j into shared
data. She computes the new identifier of the inserted
block m′j as id′j = {v′j, r′j}, where the virtual index is
v′j = (vj−1 + vj)/2 and r′j = H2(m′j||v′j). For the
rest of the blocks, the identifiers are not changed (as
explained in Figure 5). This user outputs the new ring
signature σ′j of the inserted block m′j with SigGen, and
uploads {m′j, id′j, σ′j} to the cloud server. The total
number of blocks in shared data increases to n + 1.
• Delete. This user deletes block mj, its identifier idj
and its ring signature σj from the cloud server. The
identifiers of the other blocks in shared data remain
the same. The total number of blocks in shared data
decreases to n − 1.
• Update. This user updates the j-th block in shared
data with a new block m′j. The virtual index of this
block remains the same, and r′j is computed as
r′j = H2(m′j||vj). The new identifier of this updated
block is id′j = {vj, r′j}. The identifiers of the other
blocks in shared data are not changed. This user
outputs the new ring signature σ′j of this new block
with SigGen, and uploads {m′j, id′j, σ′j} to the cloud
server. The total number of blocks in shared data is
still n.
ProofGen. To audit the integrity of shared data, a
user first sends an auditing request to the TPA. After
receiving an auditing request, the TPA generates an
auditing message [2] as follows:
1) The TPA randomly picks a c-element subset J of
the set [1, n] to locate the c selected blocks that will be
checked in this auditing process, where n is the total
number of blocks in shared data.
2) For j ∈ J , the TPA generates a random value yj ∈
Zq.
Then, the TPA sends an auditing message {(j, yj)}j∈J to
the cloud server (as illustrated in Fig. 7).
Fig. 7. The TPA sends an auditing message {(j, yj)}j∈J to
the cloud server.
After receiving an auditing message {(j, yj)}j∈J, the
cloud server generates a proof of possession of the selected
blocks with the public aggregate key pak. More specifically:
1) The cloud server chooses a random element rl ∈
Zq, and calculates λl = η_l^{rl} ∈ G1, for l ∈ [1, k].
2) To hide the linear combination of selected blocks
using random masking, the cloud server computes
µl = Σ_{j∈J} yj·mj,l + rl·h(λl) ∈ Zp, for l ∈ [1, k].
3) The cloud server aggregates signatures as φi =
∏_{j∈J} σj,i^{yj}, for i ∈ [1, d].
After the computation, the cloud server sends an
auditing proof {λ, µ, φ, {idj}j∈J} to the TPA, where λ =
(λ1, ..., λk), µ = (µ1, ..., µk) and φ = (φ1, ..., φd) (as shown
in Fig. 8).
Fig. 8. The cloud server sends an auditing proof
{λ, µ, φ, {idj}j∈J} to the TPA.
ProofVerify. With an auditing proof {λ, µ, φ, {idj}j∈J},
an auditing message {(j, yj)}j∈J, the public aggregate
key pak = (η1, ..., ηk), and all the group members'
public keys (pk1, ..., pkd) = (w1, ..., wd), the TPA verifies
the correctness of this proof by checking the following
equation:

e(∏_{j∈J} H1(idj)^{yj} · ∏_{l=1}^{k} η_l^{µl}, g2) ?= (∏_{i=1}^{d} e(φi, wi)) · e(∏_{l=1}^{k} λ_l^{h(λl)}, g2).   (6)
If the above equation holds, then the TPA believes that
the blocks in shared data are all correct, and sends a
positive auditing report to the user. Otherwise, it sends
a negative one.
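The whole ProofGen/ProofVerify exchange can be run end to end in a toy "exponent model": group elements g^a are represented by their exponents modulo a small prime and pairings by exponent products, with arbitrary stand-ins for H1 and h. This is a sketch of the algebra under those assumptions, not the scheme's implementation, which requires a real bilinear group; the toy even exposes private keys by construction.

```python
# End-to-end toy run of SigGen -> auditing challenge -> proof -> Equation (6).
import random

p = 10007                       # toy prime group order (assumption)
d, n, k = 3, 5, 4               # users, blocks, elements per block
rand = lambda: random.randrange(1, p)

x = [rand() for _ in range(d)]                  # private keys (w_i = x_i here)
eta = [rand() for _ in range(k)]                # public aggregate key, as exponents
M = [[rand() for _ in range(k)] for _ in range(n)]   # shared data, n x k
Hid = [rand() for _ in range(n)]                # stand-in for H1(id_j)
h = lambda lam: lam % p                         # stand-in for h: G1 -> Z_p

def sig_gen(j, s):
    """Ring signature on block j by user s (exponent-model SigGen)."""
    beta = (Hid[j] + sum(e * m for e, m in zip(eta, M[j]))) % p
    sig = [rand() for _ in range(d)]
    rest = sum(sig[i] * x[i] for i in range(d) if i != s) % p
    sig[s] = (beta - rest) * pow(x[s], -1, p) % p
    return sig

sigs = [sig_gen(j, j % d) for j in range(n)]    # each block signed by some user

# TPA: pick c selected blocks and random challenge values y_j
J = [0, 2, 4]
y = {j: rand() for j in J}

# Cloud: random masking lambda_l, masked combinations mu_l, aggregated phi_i
r = [rand() for _ in range(k)]
lam = [eta[l] * r[l] % p for l in range(k)]
mu = [(sum(y[j] * M[j][l] for j in J) + r[l] * h(lam[l])) % p for l in range(k)]
phi = [sum(y[j] * sigs[j][i] for j in J) % p for i in range(d)]

# TPA: check Equation (6); products of pairings become sums of exponents
lhs = (sum(y[j] * Hid[j] for j in J) + sum(eta[l] * mu[l] for l in range(k))) % p
rhs = (sum(phi[i] * x[i] for i in range(d)) + sum(lam[l] * h(lam[l]) for l in range(k))) % p
assert lhs == rhs               # a correct proof passes the verification
```

Note that the TPA side only touches the challenge {(j, yj)}, the proof {λ, µ, φ} and the identifiers, never the blocks themselves, which is exactly the blockless property Equation (6) relies on.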
Discussion. Based on the properties of bilinear maps,
we can further improve the efficiency of verification by
computing d+2 pairing operations in verification instead
of computing d+3 pairing operations with Equation (6).
Specifically, Equation (6) can also be described as

e(∏_{j∈J} H1(idj)^{yj} · ∏_{l=1}^{k} η_l^{µl} · (∏_{l=1}^{k} λ_l^{h(λl)})^{−1}, g2) ?= ∏_{i=1}^{d} e(φi, wi).   (7)
In the construction of Oruta, we leverage random
masking [3] to support data privacy. If a user wants
to protect the content of private data in the cloud, this
user can also encrypt data before outsourcing it to
the cloud server with encryption techniques such as
attribute-based encryption (ABE) [8], [12]. With sampling
strategies [2], the TPA can detect any corrupted block in
shared data with a high probability by only choosing a
subset of all blocks in each auditing task. Previous work
[2] has already proved that, if the total number of blocks
in shared data is n = 1, 000, 000 and 1% of all the blocks
are lost or removed, the TPA can detect these corrupted
blocks with a probability greater than 99% by choosing
460 random blocks.
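The sampling claim cited from [2] is easy to sanity-check. If a fraction t of the blocks is corrupted and the TPA samples c blocks independently at random, the detection probability is about 1 − (1 − t)^c, which approximates sampling without replacement when n is large.

```python
# Quick check of the detection probability behind the 460-block figure.
def detection_probability(t, c):
    """Probability that at least one of c sampled blocks is corrupted,
    when a fraction t of all blocks is corrupted (independent draws)."""
    return 1 - (1 - t) ** c

# 1% of the blocks corrupted, 460 blocks sampled -> above 99% detection
assert detection_probability(0.01, 460) > 0.99
```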
5.5 Security Analysis of Oruta
Now, we discuss security properties of Oruta, includ-
ing its correctness, unforgeability, identity privacy and
data privacy.
Theorem 6: During an auditing task, the TPA is able to
correctly audit the integrity of shared data under Oruta.
Proof: Proving the correctness of Oruta is equivalent
to proving that Equation (6) holds. Based on the properties of
bilinear maps and Theorem 1, the right-hand side (RHS)
of Equation (6) can be expanded as follows:
RHS = ∏_{i=1}^{d} e(∏_{j∈J} σj,i^{yj}, wi) · e(∏_{l=1}^{k} λ_l^{h(λl)}, g2)
= ∏_{i=1}^{d} ∏_{j∈J} e(σj,i^{yj}, wi) · e(∏_{l=1}^{k} η_l^{rl·h(λl)}, g2)
= ∏_{j∈J} (∏_{i=1}^{d} e(σj,i, wi))^{yj} · e(∏_{l=1}^{k} η_l^{rl·h(λl)}, g2)
= ∏_{j∈J} e(βj, g2)^{yj} · e(∏_{l=1}^{k} η_l^{rl·h(λl)}, g2)
= e(∏_{j∈J} (H1(idj) · ∏_{l=1}^{k} η_l^{mj,l})^{yj}, g2) · e(∏_{l=1}^{k} η_l^{rl·h(λl)}, g2)
= e(∏_{j∈J} H1(idj)^{yj} · ∏_{l=1}^{k} η_l^{Σ_{j∈J} mj,l·yj} · ∏_{l=1}^{k} η_l^{rl·h(λl)}, g2)
= e(∏_{j∈J} H1(idj)^{yj} · ∏_{l=1}^{k} η_l^{Σ_{j∈J} mj,l·yj + rl·h(λl)}, g2)
= e(∏_{j∈J} H1(idj)^{yj} · ∏_{l=1}^{k} η_l^{µl}, g2).
Theorem 7: For an untrusted cloud, it is computationally
infeasible to generate an invalid auditing proof that can pass
the verification under Oruta.
Proof: As proved in Theorem 3, for an untrusted
cloud, if the co-CDH problem in G1 and G2 is hard, it is
computationally infeasible to compute a valid ring
signature on an invalid block under HARS.
In Oruta, besides generating valid ring signatures on
arbitrary blocks, the untrusted cloud could also try to
pass the verification with an invalid auditing proof for
corrupted shared data, i.e., by winning Game 1, which we
define as follows:
Game 1: The TPA sends an auditing message
{(j, yj)}j∈J to the cloud; the correct auditing proof should
be {λ, µ, φ, {idj}j∈J}, which can pass the verification
with Equation (6). The untrusted cloud generates an
invalid proof as {λ, µ′, φ, {idj}j∈J}, where µ′ = (µ′1, ..., µ′k).
Define ∆µl = µ′l − µl for 1 ≤ l ≤ k, where at least one
element of {∆µl}1≤l≤k is nonzero. If this invalid proof
still passes the verification, then the untrusted cloud wins.
Otherwise, it fails.
Now, we prove that, if the untrusted cloud can win
Game 1, we can find a solution to the Discrete Logarithm
problem, which contradicts the assumption that the
Discrete Logarithm problem is hard in G1. We first assume
the untrusted cloud can win Game 1. Then, according to
Equation (6), we have

e(∏_{j∈J} H1(idj)^{yj} · ∏_{l=1}^{k} η_l^{µ′l}, g2) = (∏_{i=1}^{d} e(φi, wi)) · e(∏_{l=1}^{k} λ_l^{h(λl)}, g2).
Because {λ, µ, φ, {idj}j∈J} is a correct auditing proof, we
also have

e(∏_{j∈J} H1(idj)^{yj} · ∏_{l=1}^{k} η_l^{µl}, g2) = (∏_{i=1}^{d} e(φi, wi)) · e(∏_{l=1}^{k} λ_l^{h(λl)}, g2).
Then, we can learn that

∏_{l=1}^{k} η_l^{µ′l} = ∏_{l=1}^{k} η_l^{µl},  and therefore  ∏_{l=1}^{k} η_l^{∆µl} = 1.
Because G1 is a cyclic group, for any two elements
g, h ∈ G1 there exists x ∈ Zp such that g = h^x. Without
loss of generality, given g, h ∈ G1, each ηl can be randomly
and correctly generated by computing ηl = g^{ξl} · h^{γl} ∈ G1,
where ξl and γl are random values of Zp. Then, we have

1 = ∏_{l=1}^{k} η_l^{∆µl} = ∏_{l=1}^{k} (g^{ξl} · h^{γl})^{∆µl} = g^{Σ_{l=1}^{k} ξl·∆µl} · h^{Σ_{l=1}^{k} γl·∆µl}.
Clearly, we can find a solution to the Discrete Logarithm
problem. More specifically, given g, h = g^x ∈ G1, we can
compute

h = g^{−(Σ_{l=1}^{k} ξl·∆µl) / (Σ_{l=1}^{k} γl·∆µl)} = g^x,

unless the denominator Σ_{l=1}^{k} γl·∆µl is zero. However,
as we defined in Game 1, at least one element of
{∆µl}1≤l≤k is nonzero, and γl is a random element of Zp;
therefore, the denominator is zero only with probability 1/p,
which is negligible. It means that, if the untrusted cloud
wins Game 1, we can find a solution to the Discrete
Logarithm problem with a probability of 1 − 1/p, which
contradicts the fact that the Discrete Logarithm problem
is hard in G1.
Therefore, for an untrusted cloud, it is computational
infeasible to win Game 1 and generate an invalid proof,
which can pass the verification.
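The extraction step of this reduction can be illustrated with a toy computation (a sketch only: G_1 is modeled as the order-q subgroup of Z_P^* for a small safe prime P, not an elliptic-curve group, and the "winning adversary" is simulated directly from x just to exercise the extraction):

```python
# Toy model of the reduction: an adversary outputting nonzero {Δμ_l}
# with prod η_l^{Δμ_l} = 1 lets us recover x = log_g(h).
import random

q = 1019             # prime group order (tiny, illustration only)
P = 2 * q + 1        # safe prime: squares in Z_P^* form a subgroup of order q
g = 4                # 4 = 2^2 generates that order-q subgroup
x = random.randrange(1, q)
h = pow(g, x, P)     # Discrete Logarithm instance: recover x from (g, h)

k = 5
while True:
    xi = [random.randrange(q) for _ in range(k)]          # ξ_l
    gamma = [random.randrange(q) for _ in range(k)]       # γ_l
    dmu = [random.randrange(1, q) for _ in range(k - 1)]  # nonzero Δμ_l
    last = (xi[-1] + x * gamma[-1]) % q
    if last == 0:
        continue                                          # retry (prob. 1/q)
    s = sum((xi[l] + x * gamma[l]) * dmu[l] for l in range(k - 1)) % q
    dmu.append((-s) * pow(last, -1, q) % q)               # force prod η^Δμ = 1
    if sum(gamma[l] * dmu[l] for l in range(k)) % q != 0:
        break                                             # denominator nonzero

eta = [pow(g, xi[l], P) * pow(h, gamma[l], P) % P for l in range(k)]
prod = 1
for l in range(k):
    prod = prod * pow(eta[l], dmu[l], P) % P
assert prod == 1     # the invalid proof passes: Game 1 is "won"

# Extraction from the proof above: x = -Σ ξ_l Δμ_l / Σ γ_l Δμ_l (mod q)
num = (-sum(xi[l] * dmu[l] for l in range(k))) % q
den = sum(gamma[l] * dmu[l] for l in range(k)) % q
x_recovered = num * pow(den, -1, q) % q
print(x_recovered == x)  # True
```

Since η_l = g^{ξ_l} h^{γ_l} = g^{ξ_l + x γ_l}, the adversary's relation Σ(ξ_l + x γ_l)Δμ_l ≡ 0 (mod q) directly yields x, mirroring the extraction in the proof.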
Now, we show that the TPA is able to audit the
integrity of shared data, but the identity of the signer
on each block in shared data is not disclosed to the TPA.
Theorem 8: During an auditing task, the probability that the TPA distinguishes the identities of all the signers on the c selected blocks in shared data is at most 1/d^c.
Proof: By Theorem 5, for any algorithm A, the probability of revealing the signer on one block in shared data is 1/d. Because the c selected blocks in an auditing task are signed independently, the total probability that the TPA can distinguish all the signers' identities on the c selected blocks in shared data is at most 1/d^c.
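The bound of Theorem 8 can be checked numerically; the following sketch (illustrative only, using exact rational arithmetic) evaluates 1/d^c for parameters of the kind used in our experiments:

```python
# Identity-privacy bound from Theorem 8, in exact rational arithmetic.
from fractions import Fraction

def identity_privacy_bound(d, c):
    """The TPA identifies all c signers with probability at most 1/d^c
    (d users in the group, c challenged blocks)."""
    return Fraction(1, d ** c)

print(identity_privacy_bound(10, 1))      # 1/10: one block leaks at most 1/d
bound = identity_privacy_bound(10, 460)   # d = 10 users, c = 460 blocks
print(len(str(bound.denominator)) - 1)    # 460: the bound is 10^-460
```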
Let us reconsider the example in Sec. 1. With Oruta,
the TPA knows each block in shared data is either signed
by Alice or Bob, because it needs both users’ public
keys to verify the correctness of shared data. However,
it cannot distinguish who is the signer on a single block
(as shown in Fig. 9). Therefore, this third party cannot
obtain private and sensitive information, such as who
signs the most blocks in shared data or which block is
frequently modified by different group members.
[Figure omitted: three auditing tasks over the blocks of a shared file; each challenged block is signed by a user in the group, but the signer is hidden from the TPA.]
Fig. 9. Alice and Bob share a file in the cloud, and the TPA audits the integrity of shared data with Oruta.
Following a similar theorem in [3], we show that our scheme is also able to support data privacy.
Theorem 9: Given an auditing proof {λ, μ, φ, {id_j}_{j∈J}}, it is computationally infeasible for the TPA to reveal any private data in shared data under Oruta.
Proof: If the combined element $\sum_{j\in J} y_j m_{j,l}$, which is a linear combination of elements in blocks, were directly sent to the TPA, the TPA could learn the content of the data by solving linear equations after collecting a sufficient number of such combinations. To preserve private data from the TPA, the combined element is computed with random masking as $\mu_l = \sum_{j\in J} y_j m_{j,l} + r_l h(\lambda_l)$. In order to still solve linear equations, the TPA would have to know the value of r_l ∈ Z_p. However, given η_l ∈ G_1 and λ_l = η_l^{r_l} ∈ G_1, computing r_l is as hard as solving the Discrete Logarithm problem in G_1, which is computationally infeasible. Therefore, given λ and μ, the TPA cannot directly obtain any linear combination of elements in blocks, and cannot further reveal any private data in shared data M by solving linear equations.
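The effect of the random masking can be illustrated with a toy computation in Z_p (a sketch with a small prime; h(λ_l) is modeled as a public nonzero value, not a real hash):

```python
# Toy illustration of random masking: mu_l = Σ_j y_j m_{j,l} + r_l h(λ_l) mod p.
import random

p = 7919  # toy prime; the real scheme uses a 160-bit prime p

def masked_mu(blocks, y, r, h_lam):
    """Compute mu_l for each of the k positions (all arithmetic mod p)."""
    k = len(h_lam)
    return [(sum(y[j] * blocks[j][l] for j in range(len(blocks)))
             + r[l] * h_lam[l]) % p for l in range(k)]

blocks = [[random.randrange(p) for _ in range(3)] for _ in range(2)]  # m_{j,l}
y = [random.randrange(1, p) for _ in range(2)]       # TPA's challenge values
h_lam = [random.randrange(1, p) for _ in range(3)]   # stand-ins for h(λ_l)

plain = masked_mu(blocks, y, [0, 0, 0], h_lam)       # r_l = 0: no masking
masked = masked_mu(blocks, y,
                   [random.randrange(1, p) for _ in range(3)], h_lam)
# With r_l = 0 the TPA sees the exact linear combination and, over enough
# audits, could solve for the m_{j,l}; with a fresh nonzero r_l each mu_l is
# shifted by r_l·h(λ_l), and recovering r_l from λ_l = η_l^{r_l} is a
# discrete-logarithm instance.
print(all(plain[l] != masked[l] for l in range(3)))  # True
```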
5.6 Batch Auditing
With the usage of public auditing in the cloud, the TPA may receive a large number of auditing requests from different users in a very short time. Unfortunately, allowing the TPA to verify the integrity of shared data for these users in several separate auditing tasks would be very inefficient. Therefore, using the properties of bilinear maps, we further extend Oruta to support batch auditing, which improves the efficiency of verification over multiple auditing tasks.
More concretely, we assume there are B auditing tasks to be performed; the shared data in the B auditing tasks are denoted as M_1, ..., M_B, and the number of users sharing data M_b is denoted as d_b, where 1 ≤ b ≤ B. To efficiently audit these shared data for different users in a single auditing task, the TPA sends an auditing message {(j, y_j)}_{j∈J} to the cloud server. After receiving the auditing message, the cloud server generates an auditing proof {λ_b, μ_b, φ_b, {id_{b,j}}_{j∈J}} for each shared data M_b as presented in ProofGen, where 1 ≤ b ≤ B, 1 ≤ l ≤ k, 1 ≤ i ≤ d_b, and
$$\lambda_{b,l} = \eta_{b,l}^{r_{b,l}}, \qquad \mu_{b,l} = \sum_{j\in J} y_j m_{b,j,l} + r_{b,l}\, h(\lambda_{b,l}), \qquad \phi_{b,i} = \prod_{j\in J} \sigma_{b,j,i}^{y_j}.$$
Here id_{b,j} is defined as id_{b,j} = {f_b, v_j, r_j}, where f_b is the identifier of shared data M_b, e.g., the name of shared data M_b. Clearly, if two blocks belong to the same shared data, they have the same identifier of shared data. As before, when a user modifies a single block in shared data M_b, the identifiers of the other blocks in shared data M_b are not changed.
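The per-task computation above can be sketched structurally as follows (a toy model: group elements are integers modulo a small P, and eta, sigma, and h are illustrative stand-ins, not the real generators, ring-signature parts, or hash function):

```python
# Structural sketch of the cloud's per-task batch proof {λ_b, μ_b, φ_b}.
import random

P, p = 2039, 1019          # toy group modulus and exponent/field modulus
h = lambda v: v % p        # stand-in for the hash h: G1 -> Z_p

def proof_gen(J, y, m_b, r_b, eta_b, sigma_b, d_b, k):
    lam = [pow(eta_b[l], r_b[l], P) for l in range(k)]                 # λ_{b,l}
    mu = [(sum(y[j] * m_b[j][l] for j in J) + r_b[l] * h(lam[l])) % p
          for l in range(k)]                                           # μ_{b,l}
    phi = [1] * d_b
    for i in range(d_b):
        for j in J:
            phi[i] = phi[i] * pow(sigma_b[j][i], y[j], P) % P          # φ_{b,i}
    return lam, mu, phi

# Toy invocation: 2 challenged blocks (J), k = 3 elements per block, d_b = 2.
J, k, d_b = [0, 1], 3, 2
y = {j: random.randrange(1, p) for j in J}
m_b = {j: [random.randrange(p) for _ in range(k)] for j in J}
r_b = [random.randrange(1, p) for _ in range(k)]
eta_b = [random.randrange(2, P) for _ in range(k)]
sigma_b = {j: [random.randrange(2, P) for _ in range(d_b)] for j in J}
lam, mu, phi = proof_gen(J, y, m_b, r_b, eta_b, sigma_b, d_b, k)
print(len(lam), len(mu), len(phi))  # 3 3 2
```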
After the computation, the cloud server sends all the B auditing proofs together to the TPA. Finally, the TPA verifies the correctness of these B proofs simultaneously by checking the following equation with all the $\sum_{b=1}^{B} d_b$ users' public keys:
$$e\Bigl(\prod_{b=1}^{B}\Bigl(\prod_{j\in J} H(id_{b,j})^{y_j} \cdot \prod_{l=1}^{k} \eta_{b,l}^{\mu_{b,l}}\Bigr),\ g_2\Bigr) \stackrel{?}{=} \prod_{b=1}^{B}\prod_{i=1}^{d_b} e(\phi_{b,i}, w_{b,i}) \cdot e\Bigl(\prod_{b=1}^{B}\prod_{l=1}^{k} \lambda_{b,l}^{h(\lambda_{b,l})},\ g_2\Bigr), \quad (8)$$
where pk_{b,i} = w_{b,i}. If the above verification equation holds, the TPA believes that the integrity of all the B shared data is correct. Otherwise, at least one of the B shared data is corrupted.
Based on the correctness of Equation (6), the correctness of batch auditing can be presented as follows:
$$\begin{aligned}
& \prod_{b=1}^{B}\prod_{i=1}^{d_b} e(\phi_{b,i}, w_{b,i}) \cdot e\Bigl(\prod_{b=1}^{B}\prod_{l=1}^{k} \lambda_{b,l}^{h(\lambda_{b,l})},\ g_2\Bigr) \\
&= \prod_{b=1}^{B}\prod_{i=1}^{d_b} e(\phi_{b,i}, w_{b,i}) \cdot \prod_{b=1}^{B} e\Bigl(\prod_{l=1}^{k} \lambda_{b,l}^{h(\lambda_{b,l})},\ g_2\Bigr) \\
&= \prod_{b=1}^{B}\Bigl(\prod_{i=1}^{d_b} e(\phi_{b,i}, w_{b,i})\Bigr) \cdot e\Bigl(\prod_{l=1}^{k} \lambda_{b,l}^{h(\lambda_{b,l})},\ g_2\Bigr) \\
&= \prod_{b=1}^{B} e\Bigl(\prod_{j\in J} H(id_{b,j})^{y_j} \cdot \prod_{l=1}^{k} \eta_{b,l}^{\mu_{b,l}},\ g_2\Bigr) \\
&= e\Bigl(\prod_{b=1}^{B}\Bigl(\prod_{j\in J} H(id_{b,j})^{y_j} \cdot \prod_{l=1}^{k} \eta_{b,l}^{\mu_{b,l}}\Bigr),\ g_2\Bigr).
\end{aligned}$$
If all the B auditing requests on the B shared data come from the same group, the TPA can further improve the efficiency of batch auditing by verifying
$$e\Bigl(\prod_{b=1}^{B}\Bigl(\prod_{j\in J} H(id_{b,j})^{y_j} \cdot \prod_{l=1}^{k} \eta_{b,l}^{\mu_{b,l}}\Bigr),\ g_2\Bigr) \stackrel{?}{=} \prod_{i=1}^{d} e\Bigl(\prod_{b=1}^{B} \phi_{b,i},\ w_i\Bigr) \cdot e\Bigl(\prod_{b=1}^{B}\prod_{l=1}^{k} \lambda_{b,l}^{h(\lambda_{b,l})},\ g_2\Bigr). \quad (9)$$
Note that batch auditing will fail if even one incorrect auditing proof exists among the B auditing proofs. To allow the remaining auditing proofs to still pass the verification when only a small number of them are incorrect, we can utilize binary search [3] during batch auditing. More specifically, once the batch auditing of the B auditing proofs fails, the TPA divides the set of all B auditing proofs into two subsets of B/2 auditing proofs each, and re-checks the correctness of the auditing proofs in each subset using batch auditing. If the verification result of one subset is correct, then all the auditing proofs in that subset are correct. Otherwise, the subset is further divided into two sub-subsets, and the TPA re-checks the correctness of the auditing proofs in each sub-subset with batch auditing until all the incorrect auditing proofs are found. Clearly, as the number of incorrect auditing proofs increases, the efficiency of batch auditing is reduced. Experimental results in Section 6 show that, when less than 12% of all the B auditing proofs are incorrect, batch auditing is still more efficient than verifying these auditing proofs one by one.
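The binary-search procedure described above can be sketched as follows; verify_batch stands in for the pairing-based check of Equation (8) or (9), simulated here by a known set of bad indices so the recursion can be exercised:

```python
# Binary search over a failed batch to isolate all incorrect auditing proofs.
def find_incorrect(proofs, verify_batch):
    """Return the indices of all incorrect proofs using batch verification."""
    if verify_batch(proofs):
        return []                       # whole (sub)batch verifies: all correct
    if len(proofs) == 1:
        return [proofs[0]]              # a single failing proof is isolated
    mid = len(proofs) // 2
    return (find_incorrect(proofs[:mid], verify_batch)
            + find_incorrect(proofs[mid:], verify_batch))

bad = {5, 17, 90}                       # simulated incorrect proofs
verify = lambda batch: not (set(batch) & bad)
print(sorted(find_incorrect(list(range(128)), verify)))  # [5, 17, 90]
```

When A proofs out of B are bad, the search performs O(A log(B/A)) extra batch verifications in the worst case, which is why efficiency degrades as A grows.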
6 PERFORMANCE
In this section, we first analyze the computation and communication costs of Oruta, and then evaluate the performance of Oruta in experiments.
6.1 Computation Cost
The main cryptographic operations used in Oruta include multiplications, exponentiations, pairings, and hashing operations. For simplicity, we omit additions in the following discussion, because they are much cheaper to compute than the four types of operations mentioned above.
During auditing, the TPA first generates some random values to construct the auditing message, which introduces only a small computation cost. Then, after receiving the auditing message, the cloud server needs to compute a proof {λ, μ, φ, {id_j}_{j∈J}}. The computation cost of calculating a proof is about $(k + dc)Exp_{G_1} + dc\,Mul_{G_1} + ck\,Mul_{Z_p} + k\,Hash_{Z_p}$, where $Exp_{G_1}$ denotes the cost of computing one exponentiation in G_1, $Mul_{G_1}$ denotes the cost of computing one multiplication in G_1, and $Mul_{Z_p}$ and $Hash_{Z_p}$ respectively denote the cost of computing one multiplication and one hashing operation in Z_p. To check the correctness of the proof {λ, μ, φ, {id_j}_{j∈J}}, the TPA verifies it based on Equation (6). The total cost of verifying the proof is $(2k + c)Exp_{G_1} + (2k + c)Mul_{G_1} + d\,Mul_{G_T} + c\,Hash_{G_1} + (d + 2)Pair_{G_1,G_2}$. We use $Pair_{G_1,G_2}$ to denote the cost of computing one pairing operation on G_1 and G_2.
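For concreteness, these cost expressions can be tabulated with a small helper (a sketch; the dictionary keys simply mirror the notation above):

```python
# Operation counts from the cost expressions in this subsection.
def proof_gen_cost(k, d, c):
    """Cloud-side operation counts for generating one auditing proof."""
    return {"Exp_G1": k + d * c, "Mul_G1": d * c,
            "Mul_Zp": c * k, "Hash_Zp": k}

def verify_cost(k, d, c):
    """TPA-side operation counts for checking the proof via Equation (6)."""
    return {"Exp_G1": 2 * k + c, "Mul_G1": 2 * k + c,
            "Mul_GT": d, "Hash_G1": c, "Pair_G1_G2": d + 2}

# Parameters used later in the experiments: k = 100, d = 10, c = 460.
print(proof_gen_cost(100, 10, 460))
print(verify_cost(100, 10, 460))
```

Note that the cloud's dominant term is the dc = 4,600 exponentiations for aggregating signatures, while the TPA's pairing count d + 2 stays small and independent of c.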
6.2 Communication Cost
The communication cost of Oruta is mainly intro-
duced by two factors: the auditing message and the
auditing proof. For each auditing message {j, y_j}_{j∈J}, the communication cost is c(|q| + |n|) bits, where |q| is the length of an element of Z_q and |n| is the length of an index. Each auditing proof {λ, μ, φ, {id_j}_{j∈J}} contains (k + d) elements of G_1, k elements of Z_p, and c elements of Z_q; therefore, the communication cost of one auditing proof is (2k + d)|p| + c|q| bits.

[Figures 10–18 omitted; their captions are:]
Fig. 10. Impact of d on signature generation time (ms), for k = 100 and k = 200.
Fig. 11. Impact of k on signature generation time (ms), for d = 10 and d = 20.
Fig. 12. Impact of d on auditing time (s), where k = 100 (c = 460 and c = 300).
Fig. 13. Impact of k on auditing time (s), where d = 10 (c = 460 and c = 300).
Fig. 14. Impact of d on communication cost (KB), where k = 100 (c = 460 and c = 300).
Fig. 15. Impact of k on communication cost (KB), where d = 10 (c = 460 and c = 300).
Fig. 16. Impact of B on the efficiency of batch auditing, where k = 100 and d = 10 (separate auditing vs. Batch Auditing-D vs. Batch Auditing-S).
Fig. 17. Impact of A on the efficiency of batch auditing, where B = 128 (separate auditing vs. Batch Auditing-S).
Fig. 18. Impact of A on the efficiency of batch auditing, where B = 128 (separate auditing vs. Batch Auditing-D).
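Substituting the experimental parameters used in Section 6.3 (|p| = 160, |q| = 80, |n| = 20 bits, k = 100, d = 10) into these two expressions reproduces the communication figures later reported in Table 2 (a sketch; 1 KB is taken as 1,000 bytes to match the reported values):

```python
# Total per-audit communication: auditing message plus auditing proof, in bits.
def comm_cost_bits(k, d, c, p_bits=160, q_bits=80, n_bits=20):
    message = c * (q_bits + n_bits)              # auditing message {j, y_j}
    proof = (2 * k + d) * p_bits + c * q_bits    # auditing proof
    return message + proof

for c in (460, 300):
    kb = comm_cost_bits(100, 10, c) / 8 / 1000   # bits -> KB (1 KB = 1000 B)
    print(f"c = {c}: {kb:.2f} KB")
# c = 460: 14.55 KB
# c = 300: 10.95 KB
```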
6.3 Experimental Results
We now evaluate the efficiency of Oruta in experiments. To implement the complex cryptographic operations mentioned above, we utilize the GNU Multiple Precision Arithmetic (GMP, http://gmplib.org/) library and the Pairing Based Cryptography (PBC, http://crypto.stanford.edu/pbc/) library. All the following experiments are implemented in C, run on a 2.26 GHz Linux system, and repeated over 1,000 times.
Because Oruta needs more exponentiations than pairing operations during the process of auditing, the elliptic curve we choose in our experiments is an MNT curve with a base field size of 159 bits, which has better performance than other curves for computing exponentiations. We choose |p| = 160 bits and |q| = 80 bits.
We assume the total number of blocks in shared data
is n = 1, 000, 000 and |n| = 20 bits. The size of shared
data is 2 GB. To keep the detection probability greater
than 99%, we set the number of selected blocks in an
auditing task as c = 460 [2]. If only 300 blocks are
selected, the detection probability is greater than 95%.
We also assume the size of the group d ∈ [2, 20] in the
following experiments. Certainly, if a larger group size
is used, the total computation cost will increase due to
the increasing number of exponentiations and pairing
operations.
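The values c = 460 and c = 300 follow the sampling argument of [2]: if a fraction t of the n blocks is corrupted and c blocks are sampled uniformly, the corruption is detected with probability about 1 − (1 − t)^c for c ≪ n. A quick check (illustrative; assumes 1% of blocks corrupted):

```python
# Detection probability of sampling c blocks when a fraction t is corrupted.
def detection_probability(c, t):
    return 1 - (1 - t) ** c

print(round(detection_probability(460, 0.01), 3))  # > 0.99 for 1% corruption
print(round(detection_probability(300, 0.01), 3))  # > 0.95 for 1% corruption
```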
6.3.1 Performance of Signature Generation
According to Section 5, the generation time of a ring signature on a block is determined by the number of users in the group and the number of elements in each block. As illustrated in Fig. 10 and Fig. 11, when k is fixed, the generation time of a ring signature increases linearly with the size of the group; when d is fixed, the generation time of a ring signature increases linearly with the number of elements in each block. Specifically, when d = 10 and k = 100, a user in the group requires about 37 milliseconds to compute a ring signature on a block in shared data.
6.3.2 Performance of Auditing
Based on our preceding analysis, the auditing performance of Oruta under different detection probabilities is illustrated in Figs. 12–15 and Table 2. As shown in Fig. 12, the auditing time increases linearly with the size of the group. When c = 300, if there are two users sharing data in the cloud, the auditing time is only about 0.5 seconds; when the number of group members increases to 20, it takes about 2.5 seconds to finish the same auditing task. The communication cost of an auditing task under different parameters is presented in Fig. 14 and Fig. 15. Compared to the size of the entire shared data, the communication cost that the TPA consumes in an auditing task is very small. It is clear from Table 2 that maintaining a higher detection probability requires the TPA to consume more computation and communication overhead to finish the auditing task. Specifically, when c = 300, it takes the TPA 1.32 seconds to audit the correctness of shared data, where the size of shared data is 2 GB; when c = 460, the TPA needs 1.94 seconds to verify the integrity of the same shared data.
TABLE 2
Performance of Auditing

System Parameters     k = 100, d = 10
Storage Usage         2 GB + 200 MB (data + signatures)
Selected Blocks c     460         300
Communication Cost    14.55 KB    10.95 KB
Auditing Time         1.94 s      1.32 s
6.3.3 Performance of Batch Auditing
As discussed in Section 5, when there are multiple auditing tasks, the TPA can improve the efficiency of verification by performing batch auditing. In the following experiments, we choose c = 300, k = 100, and d = 10. We can see from Fig. 16 that, compared to verifying these auditing tasks one by one, if the B auditing tasks are from different groups, batch auditing saves 2.1% of the auditing time per task on average. If the B auditing tasks are from the same group, batch auditing saves 12.6% of the average auditing time per task.
Now we evaluate the performance of batch auditing when incorrect auditing proofs exist among the B auditing proofs. As mentioned in Section 5, we can use binary search in batch auditing to distinguish the incorrect proofs from the B auditing proofs. However, an increasing number of incorrect auditing proofs reduces the efficiency of batch auditing. It is therefore important to find the maximal number of incorrect auditing proofs that can exist among the B auditing proofs while batch auditing remains more efficient than separate auditing. In this experiment, we assume the total number of auditing proofs in the batch auditing is B = 128 (because we leverage binary search, it is better to set B as a power of 2), the number of elements in each block is k = 100, and the number of users in the group is d = 10. Let A denote the number of incorrect auditing proofs. In addition, we assume that detecting the incorrect auditing proofs always requires the worst case of the search. According to Equations (8) and (9), the extra computation cost in binary search is mainly introduced by extra pairing operations. As shown in Fig. 17, if all the 128 auditing proofs are from the same group, when the number of incorrect auditing proofs is less than 16 (12% of all the auditing proofs), batch auditing is still more efficient than separate auditing. Similarly, in Fig. 18, if all the auditing proofs are from different groups, when the number of incorrect auditing proofs is more than 16, batch auditing is less efficient than verifying these auditing proofs separately.
7 RELATED WORK
Provable data possession (PDP), first proposed by
Ateniese et al. [2], allows a verifier to check the correct-
ness of a client’s data stored at an untrusted server. By
utilizing RSA-based homomorphic authenticators and
sampling strategies, the verifier is able to publicly audit
the integrity of data without retrieving the entire data,
which is referred to as public verifiability or public
auditing. Unfortunately, their mechanism is only suitable
for auditing the integrity of static data. Juels and Kaliski
[13] defined another similar model called proofs of retrievability (POR), which is also able to check the correctness of data on an untrusted server. In POR, a set of randomly-valued check blocks called sentinels is embedded into the original file. The verifier challenges the untrusted server
by specifying the positions of a collection of sentinels
and asking the untrusted server to return the associated
sentinel values. Shacham and Waters [6] designed two
improved POR schemes. The first scheme is built from
BLS signatures, and the second one is based on pseudo-
random functions.
To support dynamic operations on data, Ateniese et al. [14] presented an efficient PDP mechanism based on symmetric keys. This mechanism supports update and delete operations on data; however, insert operations are not available. Because it exploits symmetric keys to verify the integrity of data, it is not publicly verifiable and provides a user with only a limited number of verification requests. Wang et al. utilized Merkle Hash Trees and BLS signatures [9] to support fully dynamic operations in a public auditing mechanism. Erway et al. [15] introduced dynamic provable data possession (DPDP) by using authenticated dictionaries, which are based on rank information. Zhu et al. [7] exploited the fragment structure to reduce the storage of signatures in their public auditing mechanism. In addition, they also used index hash tables to provide dynamic operations for users. The public auditing mechanism proposed by Wang et al. [3] is able to preserve users' confidential data from the TPA by using random masking. In addition, to handle multiple auditing tasks from different users efficiently, they extended their mechanism to enable batch auditing by leveraging aggregate signatures [5].
Wang et al. [16] leveraged homomorphic tokens to ensure the correctness of erasure-coded data distributed on multiple servers. This mechanism is able not only to support dynamic operations on data, but also to identify misbehaving servers. To minimize communication overhead in the phase of data repair, Chen et al. [17] introduced a mechanism for auditing the correctness of data in the multi-server scenario, where the data are encoded by network coding instead of erasure codes. More recently, Cao et al. [18] constructed an LT codes-based secure and reliable cloud storage mechanism. Compared to previous work [16], [17], this mechanism can avoid a high decoding computation cost for data users and save computation resources for online data owners during data repair.
To prevent special attacks that exist in remote data storage systems with deduplication, Halevi et al. [19] introduced the notion of proofs of ownership (POWs), which allows a client to prove to a server that she actually holds a data file, rather than just some hash values of the data file. Zheng et al. [20] further showed that POW and PDP can co-exist under the same framework.
Recently, Franz et al. [21] proposed an oblivious outsourced storage scheme based on Oblivious RAM techniques, which is able to hide users' access patterns on outsourced data from an untrusted cloud. Vimercati et al. [22] utilized a shuffle index structure to protect users' access patterns on outsourced data.
8 CONCLUSION
In this paper, we propose Oruta, the first privacy-
preserving public auditing mechanism for shared data
in the cloud. We utilize ring signatures to construct
homomorphic authenticators, so the TPA is able to audit
the integrity of shared data, yet cannot distinguish who
is the signer on each block, which can achieve identity
privacy. To improve the efficiency of verification for mul-
tiple auditing tasks, we further extend our mechanism
to support batch auditing. An interesting problem in our
future work is how to efficiently audit the integrity of
shared data with dynamic groups while still preserving
the identity of the signer on each block from the third
party auditor.
REFERENCES
[1] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. H. Katz, A. Konwinski, G. Lee, D. A. Patterson, A. Rabkin, I. Stoica, and M. Zaharia, “A View of Cloud Computing,” Communications of the ACM, vol. 53, no. 4, pp. 50–58, April 2010.
[2] G. Ateniese, R. Burns, R. Curtmola, J. Herring, L. Kissner, Z. Peter-
son, and D. Song, “Provable Data Possession at Untrusted Stores,”
in Proc. ACM Conference on Computer and Communications Security
(CCS), 2007, pp. 598–610.
[3] C. Wang, Q. Wang, K. Ren, and W. Lou, “Privacy-Preserving
Public Auditing for Data Storage Security in Cloud Computing,”
in Proc. IEEE International Conference on Computer Communications
(INFOCOM), 2010, pp. 525–533.
[4] R. L. Rivest, A. Shamir, and Y. Tauman, “How to Leak a Secret,”
in Proc. International Conference on the Theory and Application of
Cryptology and Information Security (ASIACRYPT). Springer-
Verlag, 2001, pp. 552–565.
[5] D. Boneh, C. Gentry, B. Lynn, and H. Shacham, “Aggregate and
Verifiably Encrypted Signatures from Bilinear Maps,” in Proc. In-
ternational Conference on the Theory and Applications of Cryptographic
Techniques (EUROCRYPT). Springer-Verlag, 2003, pp. 416–432.
[6] H. Shacham and B. Waters, “Compact Proofs of Retrievability,”
in Proc. International Conference on the Theory and Application of
Cryptology and Information Security (ASIACRYPT). Springer-
Verlag, 2008, pp. 90–107.
[7] Y. Zhu, H. Wang, Z. Hu, G.-J. Ahn, H. Hu, and S. S. Yau, “Dynamic
Audit Services for Integrity Verification of Outsourced Storage in
Clouds,” in Proc. ACM Symposium on Applied Computing (SAC),
2011, pp. 1550–1557.
[8] S. Yu, C. Wang, K. Ren, and W. Lou, “Achieving Secure, Scalable,
and Fine-grained Data Access Control in Cloud Computing,” in
Proc. IEEE International Conference on Computer Communications
(INFOCOM), 2010, pp. 534–542.
[9] D. Boneh, B. Lynn, and H. Shacham, “Short Signatures from the
Weil Pairing,” in Proc. International Conference on the Theory and
Application of Cryptology and Information Security (ASIACRYPT).
Springer-Verlag, 2001, pp. 514–532.
[10] D. Boneh and D. M. Freeman, “Homomorphic Signatures for
Polynomial Functions,” in Proc. International Conference on the
Theory and Applications of Cryptographic Techniques (EUROCRYPT).
Springer-Verlag, 2011, pp. 149–168.
[11] A. L. Ferrara, M. Green, S. Hohenberger, and M. Ø. Pedersen,
“Practical Short Signature Batch Verification,” in Proc. RSA Con-
ference, the Cryptographers’ Track (CT-RSA). Springer-Verlag, 2009,
pp. 309–324.
[12] V. Goyal, O. Pandey, A. Sahai, and B. Waters, “Attribute-Based
Encryption for Fine-Grained Access Control of Encrypted Data,”
in Proc. ACM Conference on Computer and Communications Security
(CCS), 2006, pp. 89–98.
[13] A. Juels and B. S. Kaliski, “PORs: Proofs of Retrievability for Large
Files,” in Proc. ACM Conference on Computer and Communications
Security (CCS), 2007, pp. 584–597.
[14] G. Ateniese, R. D. Pietro, L. V. Mancini, and G. Tsudik, “Scalable
and Efficient Provable Data Possession,” in Proc. International
Conference on Security and Privacy in Communication Networks
(SecureComm), 2008.
[15] C. Erway, A. Kupcu, C. Papamanthou, and R. Tamassia, “Dynamic
Provable Data Possession,” in Proc. ACM Conference on Computer
and Communications Security (CCS), 2009, pp. 213–222.
[16] C. Wang, Q. Wang, K. Ren, and W. Lou, “Ensuring Data Storage
Security in Cloud Computing,” in Proc. IEEE/ACM International
Workshop on Quality of Service (IWQoS), 2009, pp. 1–9.
[17] B. Chen, R. Curtmola, G. Ateniese, and R. Burns, “Remote Data
Checking for Network Coding-based Distributed Storage Systems,” in Proc. ACM Cloud Computing Security Workshop (CCSW),
2010, pp. 31–42.
[18] N. Cao, S. Yu, Z. Yang, W. Lou, and Y. T. Hou, “LT Codes-
based Secure and Reliable Cloud Storage Service,” in Proc. IEEE
International Conference on Computer Communications (INFOCOM),
2012.
[19] S. Halevi, D. Harnik, B. Pinkas, and A. Shulman-Peleg, “Proofs of
Ownership in Remote Storage Systems,” in Proc. ACM Conference
on Computer and Communications Security (CCS), 2011, pp. 491–500.
[20] Q. Zheng and S. Xu, “Secure and Efficient Proof of Storage with
Deduplication,” in Proc. ACM Conference on Data and Application
Security and Privacy (CODASPY), 2012.
[21] M. Franz, P. Williams, B. Carbunar, S. Katzenbeisser, and R. Sion,
“Oblivious Outsourced Storage with Delegation,” in Proc. Finan-
cial Cryptography and Data Security Conference (FC), 2011, pp. 127–
140.
[22] S. D. C. di Vimercati, S. Foresti, S. Paraboschi, G. Pelosi, and
P. Samarati, “Efficient and Private Access to Outsourced Data,”
in Proc. IEEE International Conference on Distributed Computing
Systems (ICDCS), 2011, pp. 710–719.