Data Storage Collateral in Cloud Computing

Annamaneni Samatha*, Nimmala Jeevan Reddy**, P. Pradeep Kumar***
* (Student, Department of Computer Science, JNTU University, Hyderabad-85)
** (Student, Department of Computer Science, JNTU University, Hyderabad-85)
*** (Head of Department, Department of Computer Science, JNTU University, Hyderabad-85)

International Journal of Engineering Research and Applications (IJERA), ISSN: 2248-9622, www.ijera.com, Vol. 2, Issue 5, September-October 2012, pp. 1050-1055

ABSTRACT
Cloud Computing has been predicted as the next-generation architecture of IT Enterprise. In addition to traditional solutions, where the IT services are under proper physical, logical and personnel controls, Cloud Computing moves the application software and databases to large data centers, where the management of data raises many security challenges. To address this, in this paper we focus on data security in cloud computing, which has always been an important aspect of quality of service. To ensure the correctness of users' data in the cloud, we propose an effective and flexible distributed scheme with two salient features, in contrast to its predecessors. By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves the integration of storage correctness insurance and data error localization, i.e., the identification of misbehaving server(s). Unlike most prior works, the new scheme further supports secure and efficient dynamic operations on data blocks, including data update, delete and insert. Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attack, and even server colluding attacks.

Keywords – Cloud Computing, Security, Distributed scheme, homomorphic token, distributed verification.
I. INTRODUCTION
Several trends are opening up the era of Cloud Computing, an Internet-based development and use of computer technology. Ever cheaper and more powerful processors, together with the software as a service (SaaS) computing architecture, are transforming data centers into pools of computing service on a huge scale. Increasing network bandwidth and reliable yet flexible network connections make it even possible for users to subscribe to high-quality services from data and software that reside solely on remote data centers. Moving data into the cloud offers great convenience to users, since they don't have to care about the complexities of direct hardware management. The pioneering Cloud Computing offerings, Amazon Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2) [1], are both well-known examples. While these Internet-based online services do provide huge amounts of storage space and customizable computing resources, this computing platform shift is at the same time eliminating the responsibility of local machines for data maintenance. As a result, users are at the mercy of their cloud service providers for the availability and integrity of their data. The recent downtime of Amazon's S3 is such an example [2].

From the perspective of data security, which has always been an important aspect of quality of service, Cloud Computing inevitably poses new challenging security threats, for a number of reasons. Firstly, traditional cryptographic primitives for the purpose of data security protection cannot be directly adopted, due to the users' loss of control over their data under Cloud Computing. Therefore, verification of correct data storage in the cloud must be conducted without explicit knowledge of the whole data. Considering the various kinds of data each user stores in the cloud and the demand for long-term continuous assurance of data safety, the problem of verifying the correctness of data storage in the cloud becomes even more challenging. Secondly, Cloud Computing is not just a third-party data warehouse. The data stored in the cloud may be frequently updated by the users, including insertion, deletion, modification, appending, reordering, etc. Ensuring storage correctness under dynamic data updates is hence of paramount importance. However, this dynamic feature also makes traditional integrity insurance techniques futile and entails new solutions. Last but not least, the deployment of Cloud Computing is powered by data centers running in a simultaneous, cooperated and distributed manner. Individual users' data is redundantly stored in multiple physical locations to further reduce the data integrity threats. Therefore, distributed protocols for storage correctness assurance will be of the most importance in achieving a robust and secure cloud data storage system in the real world. However, this important area remains to be fully explored in the literature.

In this paper, we propose an effective and flexible distributed scheme with explicit dynamic data support to ensure the correctness of users' data in the cloud. We rely on erasure-correcting code in the file distribution preparation to provide redundancies and guarantee the data dependability. This construction drastically reduces the communication and storage overhead as compared to the traditional
replication-based file distribution techniques. By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves storage correctness insurance as well as data error localization: whenever data corruption has been detected during the storage correctness verification, our scheme can almost guarantee the simultaneous localization of data errors, i.e., the identification of the misbehaving server(s).

Our work is among the first few in this field to consider distributed data storage in Cloud Computing. Our contribution can be summarized in the following three aspects:

1) Compared to many of its predecessors, which only provide binary results about the storage state across the distributed servers, the challenge-response protocol in our work further provides the localization of data error.
2) Unlike most prior works for ensuring remote data integrity, the new scheme supports secure and efficient dynamic operations on data blocks, including: update, delete and append.
3) Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attack, and even server colluding attacks.

The rest of the paper is organized as follows. Section II introduces the system model, adversary model, our design goals and notation. We then provide the detailed description of our scheme in Sections III and IV. Section V gives the security analysis and performance evaluation, followed by Section VI, which overviews the related work. Finally, Section VII gives the concluding remarks of the whole paper.

II. PROBLEM STATEMENT
A. System Model
A representative network architecture for cloud data storage is illustrated in Figure 1. Three different network entities can be identified as follows:
• User: users, who have data to be stored in the cloud and rely on the cloud for data computation, consist of both individual consumers and organizations.
• Cloud Service Provider (CSP): a CSP, who has significant resources and expertise in building and managing distributed cloud storage servers, owns and operates live Cloud Computing systems.
• Third Party Auditor (TPA): an optional TPA, who has expertise and capabilities that users may not have, is trusted to assess and expose risks of cloud storage services on behalf of the users upon request.

In cloud data storage, a user stores his data through a CSP into a set of cloud servers, which are running in a simultaneous, cooperated and distributed manner. Data redundancy can be employed with the technique of erasure-correcting code to further tolerate faults or server crashes as the user's data grows in size and importance. Thereafter, for application purposes, the user interacts with the cloud servers via the CSP to access or retrieve his data. In some cases, the user may need to perform block-level operations on his data. The most general forms of these operations we are considering are block update, delete, insert and append. As users no longer possess their data locally, it is of critical importance to assure users that their data are being correctly stored and maintained. That is, users should be equipped with security means so that they can make continuous correctness assurance of their stored data even without the existence of local copies. In case users do not necessarily have the time, feasibility or resources to monitor their data, they can delegate the tasks to an optional trusted TPA of their respective choices.
B. Adversary Model
Security threats faced by cloud data storage can come from two different sources. On the one hand, a CSP can be self-interested, untrusted and possibly malicious. Not only may it desire to move data that has not been or is rarely accessed to a lower tier of storage than agreed for monetary reasons, but it may also attempt to hide a data loss incident due to management errors, Byzantine failures and so on. On the other hand, there may also exist an economically motivated adversary, who has the capability to compromise a number of cloud data storage servers in different time intervals and subsequently is able to modify or delete users' data while remaining undetected by the CSP for a certain period. Specifically, we consider two types of adversary with different levels of capability in this paper:

Weak Adversary: The adversary is interested in corrupting the user's data files stored on individual servers. Once a server is compromised, an adversary can pollute the original data files by modifying or introducing its own fraudulent data to prevent the original data from being retrieved by the user.

Strong Adversary: This is the worst-case scenario, in which we assume that the adversary can compromise all the storage servers, so that he can intentionally modify the data files as long as they are internally
consistent. In fact, this is equivalent to the case where all servers are colluding together to hide a data loss or corruption incident.

C. Design Goals
To ensure the security and dependability of cloud data storage under the aforementioned adversary model, we aim to design efficient mechanisms for dynamic data verification and operation and achieve the following goals: (1) Storage correctness: to ensure users that their data are indeed stored appropriately and kept intact all the time in the cloud. (2) Fast localization of data error: to effectively locate the malfunctioning server when data corruption has been detected. (3) Dynamic data support: to maintain the same level of storage correctness assurance even if users modify, delete or append their data files in the cloud. (4) Dependability: to enhance data availability against Byzantine failures, malicious data modification and server colluding attacks, i.e., minimizing the effect brought by data errors or server failures. (5) Lightweight: to enable users to perform storage correctness checks with minimum overhead.

D. Notation and Preliminaries
• F – the data file to be stored. We assume that F can be denoted as a matrix of m equal-sized data vectors, each consisting of l blocks. Data blocks are all well represented as elements in the Galois field GF(2^p) for p = 8 or 16.
• A – the dispersal matrix used for Reed-Solomon coding.
• G – the encoded file matrix, which includes a set of n = m + k vectors, each consisting of l blocks.
• f_key(·) – a pseudorandom function (PRF), defined as f : {0, 1}* × key → GF(2^p).
• φ_key(·) – a pseudorandom permutation (PRP), defined as φ : {0, 1}^log2(l) × key → {0, 1}^log2(l).
• ver – a version number bound with the index of individual blocks, which records the number of times the block has been modified. Initially we assume ver is 0 for all data blocks.
• s_ver_ij – the seed for the PRF, which depends on the file name, the block index i, the server position j, as well as the optional block version number ver.

III. ENSURING CLOUD DATA STORAGE
In a cloud data storage system, users store their data in the cloud and no longer possess the data locally. Thus, the correctness and availability of the data files being stored on the distributed cloud servers must be guaranteed. One of the key issues is to effectively detect any unauthorized data modification and corruption, possibly due to server compromise and/or random Byzantine failures. Besides, in the distributed case, when such inconsistencies are successfully detected, finding which server the data error lies in is also of great significance, since it can be the first step toward fast recovery from storage errors. Subsequently, it is shown how to derive a challenge-response protocol for verifying the storage correctness as well as identifying misbehaving servers. Finally, the procedure for file retrieval and error recovery based on erasure-correcting code is outlined.

A. File Distribution Preparation
It is well known that erasure-correcting code may be used to tolerate multiple failures in distributed storage systems. In cloud data storage, we rely on this technique to disperse the data file F redundantly across a set of n = m + k distributed servers. An (m + k, m) Reed-Solomon erasure-correcting code is used to create k redundancy parity vectors from the m data vectors in such a way that the original m data vectors can be reconstructed from any m out of the m + k data and parity vectors. By placing each of the m + k vectors on a different server, the original data file can survive the failure of any k of the m + k servers without any data loss, with a space overhead of k/m. For support of efficient sequential I/O to the original file, our file layout is systematic, i.e., the unmodified m data file vectors together with the k parity vectors are distributed across m + k different servers.
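To make the dispersal step concrete, the following is a minimal C sketch of the parity computation over GF(2^8), in the spirit of the C prototype mentioned in Section V.C. The tiny parameters, the reduction polynomial, and above all the parity generation matrix P are illustrative assumptions only: a real deployment derives P from an MDS construction (e.g., a Vandermonde-based dispersal matrix A = [I | P]) so that any m of the m + k vectors suffice to reconstruct F.

```c
#include <stdio.h>

#define M 4   /* m data vectors            */
#define K 2   /* k parity vectors          */
#define L 8   /* l blocks per vector (toy) */

/* Multiply in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1 (0x11B).
   The paper fixes GF(2^p) with p = 8 or 16 but not the polynomial;
   this choice is an assumption. */
static unsigned char gf_mul(unsigned char a, unsigned char b) {
    unsigned char p = 0;
    while (b) {
        if (b & 1) p ^= a;
        b >>= 1;
        a = (unsigned char)((a << 1) ^ ((a & 0x80) ? 0x1B : 0));
    }
    return p;
}

/* Systematic layout: server j < M stores data vector F[j] verbatim;
   server M + j stores parity[j] = sum_i F[i] * P[i][j] (XOR = addition). */
static void encode(const unsigned char F[M][L], const unsigned char P[M][K],
                   unsigned char parity[K][L]) {
    int i, j, b;
    for (j = 0; j < K; j++)
        for (b = 0; b < L; b++) {
            unsigned char acc = 0;
            for (i = 0; i < M; i++)
                acc ^= gf_mul(F[i][b], P[i][j]);
            parity[j][b] = acc;
        }
}

int main(void) {
    unsigned char F[M][L] = {{1,2,3,4,5,6,7,8}, {8,7,6,5,4,3,2,1},
                             {9,9,9,9,9,9,9,9}, {1,1,2,2,3,3,4,4}};
    /* placeholder P; must come from an MDS construction in practice */
    unsigned char P[M][K] = {{1,1},{1,2},{1,4},{1,8}};
    unsigned char parity[K][L];
    encode(F, P, parity);
    printf("first parity block on server %d: 0x%02x\n",
           M + 1, (unsigned)parity[0][0]);
    return 0;
}
```

Because the layout is systematic, only the k parity vectors require this computation; the m data vectors are stored unmodified, which is what keeps sequential reads of the original file cheap.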
B. Challenge Token Precomputation
In order to achieve assurance of data storage correctness and data error localization simultaneously, our scheme relies entirely on pre-computed verification tokens. The main idea is as follows: before file distribution, the user pre-computes a certain number of short verification tokens on each individual vector G^(j) (j ∈ {1, ..., n}), each token covering a random subset of data blocks, as in Algorithm 1.

Algorithm 1 Token Pre-computation
1: procedure
2:   Choose parameters l, n and functions f, φ;
3:   Choose the number t of tokens;
4:   Choose the number r of indices per verification;
5:   Generate master key K_PRP and challenge key k_chal;
6:   for vector G^(j), j ← 1, n do
7:     for round i ← 1, t do
8:       Derive α_i = f_k_chal(i) and k_prp^(i) from K_PRP.
9:       Compute v_i^(j) = Σ_{q=1}^{r} α_i^q * G^(j)[φ_k_prp^(i)(q)]
10:     end for
11:   end for
12:   Store all the v_i^(j) locally.
13: end procedure
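A hedged C sketch of Algorithm 1 follows. The PRF f and PRP φ are toy arithmetic stand-ins (deliberately not cryptographic), and deriving the round key k_prp^(i) from the master key K_PRP is reduced to an offset; the paper does not fix these instantiations, so every concrete choice below is an assumption.

```c
#include <stdio.h>
#include <string.h>

#define N 6    /* n = m + k encoded vectors / servers   */
#define L 64   /* l blocks per vector (a power of two)  */
#define T 4    /* t pre-computed tokens per vector      */
#define R 8    /* r indices sampled per verification    */

static unsigned char gf_mul(unsigned char a, unsigned char b) {
    unsigned char p = 0;
    while (b) {
        if (b & 1) p ^= a;
        b >>= 1;
        a = (unsigned char)((a << 1) ^ ((a & 0x80) ? 0x1B : 0));
    }
    return p;
}

/* Toy PRF stand-in for f_key(.) -- NOT cryptographic. */
static unsigned char prf(unsigned int key, unsigned int x) {
    unsigned int v = key * 2654435761u + x * 40503u;
    unsigned char r = (unsigned char)((v >> 13) ^ v);
    return r ? r : 1;                   /* keep alpha_i nonzero */
}

/* Toy PRP stand-in for phi_key(.): an odd multiplier plus a key XOR
   is a bijection on {0, ..., L-1} when L is a power of two. */
static unsigned int prp(unsigned int key, unsigned int q) {
    return ((q * 37u) ^ (key & (L - 1))) & (L - 1);
}

/* Algorithm 1, line 9: v_i^(j) = sum_{q=1..r} alpha_i^q * G^(j)[phi(q)] */
static unsigned char token(const unsigned char G_j[L],
                           unsigned int k_chal, unsigned int K_prp, int i) {
    unsigned char alpha = prf(k_chal, (unsigned int)i);
    unsigned int  k_i   = K_prp + (unsigned int)i;  /* simplified key derivation */
    unsigned char coef  = 1, v = 0;
    int q;
    for (q = 1; q <= R; q++) {
        coef = gf_mul(coef, alpha);                 /* coef = alpha_i^q */
        v ^= gf_mul(coef, G_j[prp(k_i, (unsigned int)q)]);
    }
    return v;
}

int main(void) {
    unsigned char G[N][L], v[N][T];
    int i, j;
    memset(G, 7, sizeof G);                         /* dummy encoded matrix */
    for (j = 0; j < N; j++)
        for (i = 0; i < T; i++)
            v[j][i] = token(G[j], 12345u, 9876u, i); /* store locally */
    printf("token v_1 for server 1: 0x%02x\n", (unsigned)v[0][0]);
    return 0;
}
```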
Later, when the user wants to make sure of the storage correctness of the data in the cloud, he challenges the cloud servers with a set of randomly generated block indices. Upon receiving the challenge, each cloud server computes a short "signature" over the specified blocks and returns it to the user. The values of these signatures should match the corresponding tokens pre-computed by the user. Meanwhile, as all servers operate over the same subset of indices, the requested response values for the integrity check must also form a valid codeword determined by the secret matrix P.

C. Correctness Verification and Error Localization
The verification procedure is given in Algorithm 2.

Algorithm 2 Correctness Verification and Error Localization
1: procedure CHALLENGE(i)
2:   Recompute α_i = f_k_chal(i) and k_prp^(i) from K_PRP;
3:   Send {α_i, k_prp^(i)} to all the cloud servers;
4:   Receive from servers: {R_i^(j) = Σ_{q=1}^{r} α_i^q * G^(j)[φ_k_prp^(i)(q)] | 1 ≤ j ≤ n}
5:   for (j ← m + 1, n) do
6:     R_i^(j) ← R_i^(j) − Σ_{q=1}^{r} f_k_j(s_Iq,j) · α_i^q,  I_q = φ_k_prp^(i)(q)
7:   end for
8:   if ((R_i^(1), ..., R_i^(m)) · P == (R_i^(m+1), ..., R_i^(n))) then
9:     Accept and ready for the next challenge.
10:  else
11:    for (j ← 1, n) do
12:      if (R_i^(j) != v_i^(j)) then
13:        return server j is misbehaving.
14:      end if
15:    end for
16:  end if
17: end procedure
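The user-side decision logic of Algorithm 2 fits in a few lines of C. This sketch assumes the parity responses have already been unblinded (line 6 of the algorithm) and that addition and subtraction in GF(2^8) are both XOR; the matrix P and all values are toy data, as before.

```c
#include <stdio.h>

#define M 4
#define K 2
#define N (M + K)

static unsigned char gf_mul(unsigned char a, unsigned char b) {
    unsigned char p = 0;
    while (b) {
        if (b & 1) p ^= a;
        b >>= 1;
        a = (unsigned char)((a << 1) ^ ((a & 0x80) ? 0x1B : 0));
    }
    return p;
}

/* One verification round, user side. R: the n unblinded responses for
   round i; v: the n pre-computed tokens for that round; P: the secret
   parity matrix. Returns -1 on accept, otherwise the index of a
   misbehaving server (lines 8-16 of Algorithm 2). */
static int verify_round(const unsigned char R[N], const unsigned char v[N],
                        const unsigned char P[M][K]) {
    int i, j;
    for (j = 0; j < K; j++) {      /* codeword check: R_data . P == R_parity */
        unsigned char acc = 0;
        for (i = 0; i < M; i++)
            acc ^= gf_mul(R[i], P[i][j]);
        if (acc != R[M + j])
            goto localize;
    }
    return -1;                     /* accept, ready for next challenge */
localize:
    for (j = 0; j < N; j++)        /* compare responses with stored tokens */
        if (R[j] != v[j])
            return j;
    return -1;
}

int main(void) {
    unsigned char P[M][K] = {{1,1},{1,2},{1,4},{1,8}};
    unsigned char v[N] = {5, 9, 2, 7, 0, 0}, R[N];
    int i, j;
    for (j = 0; j < K; j++) {      /* make the tokens a valid codeword */
        unsigned char acc = 0;
        for (i = 0; i < M; i++) acc ^= gf_mul(v[i], P[i][j]);
        v[M + j] = acc;
    }
    for (j = 0; j < N; j++) R[j] = v[j];
    R[2] ^= 0xFF;                  /* corrupt the response of server 2 */
    printf("misbehaving server: %d\n", verify_round(R, v, P));  /* prints 2 */
    return 0;
}
```

Corrupting one response breaks the codeword relation, and the token comparison then names the offending server, mirroring the accept/localize branches of the algorithm.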
D. File Retrieval and Error Recovery
Since our layout of the file matrix is systematic, the user can reconstruct the original file by downloading the data vectors from the first m servers, assuming they return the correct response values. Note that our verification is based on random spot-checking, so the storage correctness assurance is a probabilistic one. However, by choosing the system parameters (e.g., r, l, t) appropriately and conducting enough rounds of verification, we can guarantee successful file retrieval with high probability. On the other hand, whenever data corruption is detected, the comparison of pre-computed tokens and received response values can guarantee the identification of misbehaving server(s), again with high probability, which will be discussed shortly. Therefore, the user can always ask the servers to send back blocks of the r rows specified in the challenge and regenerate the correct blocks by erasure correction, as shown in Algorithm 3, as long as at most k misbehaving servers are identified. The newly recovered blocks can then be redistributed to the misbehaving servers to maintain the correctness of storage.

Algorithm 3 Error Recovery
1: procedure
   % Assume the block corruptions have been detected among the specified r rows;
   % Assume s ≤ k servers have been identified as misbehaving
2:   Download the r rows of blocks from the servers;
3:   Treat the s servers as erasures and recover the blocks.
4:   Resend the recovered blocks to the corresponding servers.
5: end procedure

IV. PROVIDING DYNAMIC DATA OPERATION SUPPORT
So far, we have assumed that F represents static or archived data. This model may fit some application scenarios, such as libraries and scientific datasets. However, in cloud data storage, there are many potential scenarios where the data stored in the cloud is dynamic, like electronic documents, photos, or log files, etc. Therefore, it is crucial to consider the dynamic case, where a user may wish to perform various block-level operations of update, delete and append to modify the data file while maintaining the storage correctness assurance. In this section, we show how our scheme can explicitly and efficiently handle dynamic data operations for cloud data storage.

A. Update Operation
In cloud data storage, sometimes the user may need to modify some data block(s) stored in the cloud, from the current value f_ij to a new one, f_ij + Δf_ij. We refer to this operation as data update. Due to the linear property of Reed-Solomon code, a user can perform the update operation and generate the updated parity blocks by using Δf_ij only, without involving any other unchanged blocks.
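The linear property invoked here is easy to exhibit in code: since the parity blocks are linear combinations of the data blocks row by row, changing one data block by Δf changes parity block j by Δf · P[i][j] and touches nothing else. A minimal sketch (same toy P and GF(2^8) assumptions as above; note Δf = f_old ⊕ f_new because addition is XOR):

```c
#include <stdio.h>

#define M 4
#define K 2

static unsigned char gf_mul(unsigned char a, unsigned char b) {
    unsigned char p = 0;
    while (b) {
        if (b & 1) p ^= a;
        b >>= 1;
        a = (unsigned char)((a << 1) ^ ((a & 0x80) ? 0x1B : 0));
    }
    return p;
}

/* The data block at row position of server i changes by delta; by
   linearity, parity block j changes by delta * P[i][j]. No other block
   of the file is needed. */
static void update_parity(unsigned char parity[K], unsigned char delta,
                          int i, const unsigned char P[M][K]) {
    int j;
    for (j = 0; j < K; j++)
        parity[j] ^= gf_mul(delta, P[i][j]);
}

int main(void) {
    unsigned char P[M][K] = {{1,1},{1,2},{1,4},{1,8}};
    unsigned char f_old = 0x21, f_new = 0x5A;
    unsigned char parity[K] = {0x13, 0x7E};  /* current parity for this row */
    update_parity(parity, (unsigned char)(f_old ^ f_new), 1, P);
    printf("updated parity: 0x%02x 0x%02x\n",
           (unsigned)parity[0], (unsigned)parity[1]);
    return 0;
}
```

Since −f = f in GF(2^p), the delete operation described next is just the special case Δf = f_old, which zeroes the block.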
B. Delete Operation
Sometimes, after being stored in the cloud, certain data blocks may need to be deleted. The delete operation we are considering is a general one, in which the user replaces the data block with zero or some special reserved data symbol. From this point of view, the delete operation is actually a special case of the data update operation, where the original data blocks are replaced with zeros or some predetermined special blocks. Therefore, we can rely on the update procedure to support the delete operation, i.e., by setting Δf_ij in F to be −f_ij. Also, all the affected tokens have to be modified, and the updated parity information has to be blinded using the same method specified in the update operation.

C. Append Operation
In some cases, the user may want to increase the size of his stored data by adding blocks at the end of the data file, which we refer to as data append. We anticipate that the most frequent append operation in cloud data storage is bulk append, in which the user needs to upload a large number of blocks (not a single block) at one time.

D. Insert Operation
An insert operation on the data file refers to an append operation at a desired index position while maintaining the same data block structure for the whole data file, i.e., inserting a block F[j] corresponds to shifting all blocks starting with index j + 1 by one slot. An insert operation may affect many rows in the logical data file matrix F, and a substantial number of computations are required to renumber all the subsequent blocks as well as re-compute the challenge-response tokens. Therefore, an efficient insert operation is difficult to support, and thus we leave it for our future work.

V. SECURITY ANALYSIS AND PERFORMANCE EVALUATION
In this section, we analyze our proposed scheme in terms of security and efficiency. Our security analysis focuses on the adversary model defined in Section II. We also evaluate the efficiency of our scheme via implementation of both the file distribution preparation and the verification token precomputation.

A. Security Strength Against Weak Adversary
Detection probability against data modification: In our scheme, servers are required to operate only on the specified rows in each correctness verification for the calculation of the requested token. We will show that this "sampling" strategy on selected rows, instead of all rows, can greatly reduce the computational overhead on the server, while maintaining detection of data corruption with high probability.
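The printed text breaks off before making the probability claim precise; the following standard calculation (supplied here for reference, not taken from the surviving text) quantifies it. If z of the l rows of a vector have been modified and each verification samples r rows uniformly via the PRP, the probability that at least one corrupted row is covered is

\[
P_d \;=\; 1 \;-\; \frac{\binom{l-z}{r}}{\binom{l}{r}} \;=\; 1 \;-\; \prod_{q=0}^{r-1} \frac{l-z-q}{l-q},
\]

which approaches 1 quickly as r grows. For example, with l = 10,000 rows of which z = 100 (1%) are corrupted, sampling r = 460 rows already gives P_d ≈ 0.99.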
B. Security Strength Against Strong Adversary
In this section, we analyze the security strength of our scheme against the server colluding attack and explain why blinding the parity blocks can help improve the security strength of our proposed scheme. Recall that in the file distribution preparation, the redundancy parity vectors are calculated via multiplying the file matrix F by P, where P is the secret parity generation matrix we later rely on for storage correctness assurance. If we dispersed all the generated vectors directly after token precomputation, i.e., without blinding, malicious servers that collaborate could reconstruct the secret matrix P easily: they can pick blocks from the same rows among the data and parity vectors to establish a set of m · k linear equations and solve for the m · k entries of the parity generation matrix P. Once they have the knowledge of P, those malicious servers can consequently modify any part of the data blocks and calculate the corresponding parity blocks, and vice versa, making their codeword relationship always consistent. Therefore, our storage correctness challenge scheme would be undermined: even if those modified blocks are covered by the specified rows, the storage correctness check equation would always hold.
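The blinding step this argument hinges on can be sketched in a few lines of C (the details are assumptions: the toy PRF from the earlier sketches is reused, and the seed s_ij is reduced to block and server indices, whereas the paper also folds in the file name and the version number ver). Each parity block is stored XOR-masked with a PRF output known only to the user, so colluding servers never see true parity values and the m · k linear system for P no longer has usable right-hand sides; line 6 of Algorithm 2 strips the same masks, in aggregated form, from the responses before the codeword check.

```c
#include <stdio.h>

#define L 8   /* blocks per parity vector (toy) */

/* Toy PRF stand-in -- NOT cryptographic. */
static unsigned char prf(unsigned int key, unsigned int x) {
    unsigned int v = key * 2654435761u + x * 40503u;
    return (unsigned char)((v >> 13) ^ v);
}

/* Blind parity vector j before dispersal:
   parity'[i] = parity[i] + f_{k_j}(s_ij), with + being XOR in GF(2^8).
   XOR masking is self-inverse, so applying it twice unblinds. */
static void blind_parity(unsigned char parity[L], unsigned int k_j,
                         unsigned int j) {
    unsigned int i;
    for (i = 0; i < L; i++)
        parity[i] ^= prf(k_j, (j << 16) | i);   /* simplified seed s_ij */
}

int main(void) {
    unsigned char parity[L] = {1,2,3,4,5,6,7,8};
    blind_parity(parity, 424242u, 5);   /* what server 5 actually stores */
    blind_parity(parity, 424242u, 5);   /* user-side unblinding round-trip */
    printf("round-trip intact: %s\n", parity[0] == 1 ? "yes" : "no");
    return 0;
}
```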
C. Performance Evaluation
1) File Distribution Preparation: We implemented the generation of parity vectors for our scheme over the field GF(2^8). Our experiment is conducted using C on a system with an Intel Core 2 processor running at 1.86 GHz, 2048 MB of RAM, and a 7200 RPM Western Digital 250 GB Serial ATA drive with an 8 MB buffer. We consider two sets of different parameters for the (m + k, m) Reed-Solomon encoding. Table I shows the average encoding cost over 10 trials for an 8 GB file.
2) Challenge Token Pre-computation: Although in our scheme the number of verification tokens t is fixed a priori before file distribution, which bounds the number of verifications the user can later perform, we can overcome this limitation by choosing a sufficiently large t in practice.

VI. RELATED WORK
Juels et al. [3] described a formal "proof of retrievability" (POR) model for ensuring remote data integrity. Their scheme combines spot-checking and error-correcting code to ensure both possession and retrievability of files on archive service systems. Shacham et al. [4] built on this model and constructed a random-linear-function-based homomorphic authenticator, which enables an unlimited number of queries and requires less communication overhead. Bowers et al. [5] proposed an improved framework for POR protocols that generalizes both Juels' and Shacham's work. Later, in subsequent work, Bowers et al. [10] extended the POR model to distributed systems. However, all these schemes focus on static data. The effectiveness of their schemes rests primarily on the preprocessing steps that the user conducts before outsourcing the data file F. Any change to the contents of F, even of a few bits, must propagate through the error-correcting code, thus introducing significant computation and communication complexity.

VII. CONCLUSION
In this paper, we investigated the problem of data security in cloud data storage, which is essentially a distributed storage system. To ensure the correctness of users' data in cloud data storage, we proposed an effective and flexible distributed scheme with explicit dynamic data support, including block update, delete, and append. We rely on erasure-correcting code in the file distribution preparation to provide redundancy parity vectors and guarantee the data dependability. By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves the integration of storage correctness insurance and data error localization, i.e., whenever data corruption has been detected during the storage correctness verification across the distributed servers, we can almost guarantee the simultaneous identification of the misbehaving server(s). Through detailed security and performance analysis, we show that our scheme is highly efficient and resilient to Byzantine failure, malicious data modification attack, and even server colluding attacks.

ACKNOWLEDGEMENT
This work was supported in part by the US National Science Foundation under grants CNS-0831963, CNS-0626601, CNS-0716306, and CNS-0831628.

REFERENCES
[1] Amazon.com, "Amazon Web Services (AWS)," online at http://aws.amazon.com, 2008.
[2] N. Gohring, "Amazon's S3 down for several hours," online at http://www.pcworld.com/businesscenter/article/142549/amazons_s3_down_for_several_hours.html, 2008.
[3] A. Juels and J. Burton S. Kaliski, "PORs: Proofs of Retrievability for Large Files," Proc. of CCS '07, pp. 584–597, 2007.
[4] H. Shacham and B. Waters, "Compact Proofs of Retrievability," Proc. of Asiacrypt '08, Dec. 2008.
[5] K. D. Bowers, A. Juels, and A. Oprea, "Proofs of Retrievability: Theory and Implementation," Cryptology ePrint Archive, Report 2008/175, 2008, http://eprint.iacr.org/.
[6] G. Ateniese, R. Burns, R. Curtmola, J. Herring, L. Kissner, Z. Peterson, and D. Song, "Provable Data Possession at Untrusted Stores," Proc. of CCS '07, pp. 598–609, 2007.
[7] G. Ateniese, R. D. Pietro, L. V. Mancini, and G. Tsudik, "Scalable and Efficient Provable Data Possession," Proc. of SecureComm '08, pp. 1–10, 2008.
[8] T. S. J. Schwarz and E. L. Miller, "Store, Forget, and Check: Using Algebraic Signatures to Check Remotely Administered Storage," Proc. of ICDCS '06, pp. 12–12, 2006.
[9] M. Lillibridge, S. Elnikety, A. Birrell, M. Burrows, and M. Isard, "A Cooperative Internet Backup Scheme," Proc. of the 2003 USENIX Annual Technical Conference (General Track), pp. 29–41, 2003.
[10] K. D. Bowers, A. Juels, and A. Oprea, "HAIL: A High-Availability and Integrity Layer for Cloud Storage," Cryptology ePrint Archive, Report 2008/489, 2008, http://eprint.iacr.org/.
[11] L. Carter and M. Wegman, "Universal Hash Functions," Journal of Computer and System Sciences, vol. 18, no. 2, pp. 143–154, 1979.
[12] J. Hendricks, G. Ganger, and M. Reiter, "Verifying Distributed Erasure-coded Data," Proc. 26th ACM Symposium on Principles of Distributed Computing, pp. 139–146, 2007.
[13] J. S. Plank and Y. Ding, "Note: Correction to the 1997 Tutorial on Reed-Solomon Coding," University of Tennessee, Tech. Rep. CS-03-504, 2003.
[14] Q. Wang, K. Ren, W. Lou, and Y. Zhang, "Dependable and Secure Sensor Data Storage with Dynamic Integrity Assurance," Proc. of IEEE INFOCOM, 2009.
[15] R. Curtmola, O. Khan, R. Burns, and G. Ateniese, "MR-PDP: Multiple-Replica Provable Data Possession," Proc. of ICDCS '08, pp. 411–420, 2008.
[16] D. L. G. Filho and P. S. L. M. Barreto, "Demonstrating Data Possession and Uncheatable Data Transfer," Cryptology ePrint Archive, Report 2006/150, 2006, http://eprint.iacr.org/.
[17] M. A. Shah, M. Baker, J. C. Mogul, and R. Swaminathan, "Auditing to Keep Online Storage Services Honest," Proc. 11th USENIX Workshop on Hot Topics in Operating Systems (HotOS XI), 2007.