Annamaneni Samatha, Nimmala Jeevan Reddy, P. Pradeep Kumar / International Journal of
Engineering Research and Applications (IJERA) ISSN: 2248-9622 www.ijera.com
Vol. 2, Issue 5, September-October 2012, pp. 1050-1055

Data Storage Collateral In Cloud Computing

Annamaneni Samatha*, Nimmala Jeevan Reddy**, P. Pradeep Kumar***
* (Student, Department of Computer Science, JNTU University, Hyderabad-85)
** (Student, Department of Computer Science, JNTU University, Hyderabad-85)
*** (Head of Department, Department of Computer Science, JNTU University, Hyderabad-85)


ABSTRACT
Cloud Computing has been predicted as the next-generation architecture of the IT enterprise. In contrast to traditional solutions, where IT services are under proper physical, logical and personnel controls, Cloud Computing moves the application software and databases to large data centers, where the management of the data raises many security challenges. To address this, in this paper we focus on data security in cloud computing, which has always been an important aspect of quality of service. To ensure the correctness of users' data in the cloud, we propose an effective and flexible distributed scheme with two salient features, in contrast to its predecessors. By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves the integration of storage correctness insurance and data error localization, i.e., the identification of misbehaving server(s). Unlike most prior works, the new scheme further supports secure and efficient dynamic operations on data blocks, including block update, delete and append. Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attack, and even server colluding attacks.

Keywords - Cloud Computing, Security, Distributed scheme, homomorphic token, distributed verification.

I. INTRODUCTION
Several trends are opening up the era of Cloud Computing, which is an Internet-based development and use of computer technology. The ever cheaper and more powerful processors, together with the software as a service (SaaS) computing architecture, are transforming data centers into pools of computing service on a huge scale. The increasing network bandwidth and reliable yet flexible network connections make it even possible that users can now subscribe to high-quality services from data and software that reside solely on remote data centers. Moving data into the cloud offers great convenience to users, since they do not have to care about the complexities of direct hardware management. The offerings of a pioneering Cloud Computing vendor, Amazon Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2) [1], are both well-known examples. While these Internet-based online services do provide huge amounts of storage space and customizable computing resources, this computing platform shift is, at the same time, eliminating the responsibility of local machines for data maintenance. As a result, users are at the mercy of their cloud service providers for the availability and integrity of their data. The recent downtime of Amazon's S3 is such an example [2].

From the perspective of data security, which has always been an important aspect of quality of service, Cloud Computing inevitably poses new challenging security threats for a number of reasons. Firstly, traditional cryptographic primitives for the purpose of data security protection cannot be directly adopted, because users lose control of their data under Cloud Computing. Therefore, verification of correct data storage in the cloud must be conducted without explicit knowledge of the whole data. Considering the various kinds of data each user stores in the cloud and the demand for long-term continuous assurance of their data safety, the problem of verifying the correctness of data storage in the cloud becomes even more challenging. Secondly, Cloud Computing is not just a third-party data warehouse. The data stored in the cloud may be frequently updated by the users, including insertion, deletion, modification, appending, reordering, etc. Ensuring storage correctness under dynamic data updates is hence of paramount importance. However, this dynamic feature also makes traditional integrity insurance techniques futile and entails new solutions. Last but not least, the deployment of Cloud Computing is powered by data centers running in a simultaneous, cooperated and distributed manner. Individual users' data is redundantly stored in multiple physical locations to further reduce the data integrity threats. Therefore, distributed protocols for storage correctness assurance will be of the utmost importance in achieving a robust and secure cloud data storage system in the real world. However, this important area remains to be fully explored in the literature.
In this paper, we propose an effective and flexible distributed scheme with explicit dynamic data support to ensure the correctness of users' data in the cloud. We rely on erasure-correcting code in the file distribution preparation to provide redundancy and guarantee data dependability. This construction drastically reduces the communication and storage overhead as compared to traditional replication-based file distribution techniques. By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves storage correctness insurance as well as data error localization: whenever data corruption has been detected during the storage correctness verification, our scheme can almost guarantee the simultaneous localization of data errors, i.e., the identification of the misbehaving server(s).
Our work is among the first few in this field to consider distributed data storage in Cloud Computing. Our contribution can be summarized in the following three aspects:

1) Compared to many of its predecessors, which only provide binary results about the storage state across the distributed servers, the challenge-response protocol in our work further provides the localization of data errors.
2) Unlike most prior works for ensuring remote data integrity, the new scheme supports secure and efficient dynamic operations on data blocks, including: update, delete and append.
3) Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attack, and even server colluding attacks.

The rest of the paper is organized as follows. Section II introduces the system model, adversary model, our design goals and notation. We then provide the detailed description of our scheme in Sections III and IV. Section V gives the security analysis and performance evaluation, followed by Section VI, which overviews the related work. Finally, Section VII gives the concluding remarks for the whole paper.

II. PROBLEM STATEMENT
A. System Model
A representative network architecture for cloud data storage is illustrated in Figure 1. Three different network entities can be identified as follows:
• User: users, who have data to be stored in the cloud and rely on the cloud for data computation, consist of both individual consumers and organizations.
• Cloud Service Provider (CSP): a CSP, who has significant resources and expertise in building and managing distributed cloud storage servers, owns and operates live Cloud Computing systems.
• Third Party Auditor (TPA): an optional TPA, who has expertise and capabilities that users may not have, is trusted to assess and expose risks of cloud storage services on behalf of the users upon request.

In cloud data storage, a user stores his data through a CSP into a set of cloud servers, which are running in a simultaneous, cooperated and distributed manner. Data redundancy can be employed with the technique of erasure-correcting code to further tolerate faults or server crashes as the user's data grows in size and importance. Thereafter, for application purposes, the user interacts with the cloud servers via the CSP to access or retrieve his data. In some cases, the user may need to perform block-level operations on his data. The most general forms of these operations we are considering are block update, delete, insert and append. As users no longer possess their data locally, it is of critical importance to assure users that their data are being correctly stored and maintained. That is, users should be equipped with security means so that they can obtain continuous correctness assurance of their stored data even without the existence of local copies. In case users do not necessarily have the time, feasibility or resources to monitor their data, they can delegate the tasks to an optional trusted TPA of their respective choice.

B. Adversary Model
Security threats faced by cloud data storage can come from two different sources. On the one hand, a CSP can be self-interested, untrusted and possibly malicious. Not only may it desire to move data that has not been or is rarely accessed to a lower tier of storage than agreed for monetary reasons, but it may also attempt to hide a data loss incident due to management errors, Byzantine failures and so on. On the other hand, there may also exist an economically motivated adversary, who has the capability to compromise a number of cloud data storage servers in different time intervals and subsequently is able to modify or delete users' data while remaining undetected by the CSP for a certain period. Specifically, we consider two types of adversary with different levels of capability in this paper:
Weak Adversary: The adversary is interested in corrupting the user's data files stored on individual servers. Once a server is compromised, the adversary can pollute the original data files by modifying or introducing its own fraudulent data to prevent the original data from being retrieved by the user.
Strong Adversary: This is the worst-case scenario, in which we assume that the adversary can compromise all the storage servers, so that he can intentionally modify the data files as long as they are internally consistent. In fact, this is equivalent to the case where all servers are colluding together to hide a data loss or corruption incident.

C. Design Goals
To ensure the security and dependability of cloud data storage under the aforementioned adversary model, we aim to design efficient mechanisms for dynamic data verification and operation and achieve the following goals: (1) Storage correctness: to ensure users that their data are indeed stored appropriately and kept intact all the time in the cloud. (2) Fast localization of data error: to effectively locate the malfunctioning server when data corruption has been detected. (3) Dynamic data support: to maintain the same level of storage correctness assurance even if users modify, delete or append their data files in the cloud. (4) Dependability: to enhance data availability against Byzantine failures, malicious data modification and server colluding attacks, i.e., minimizing the effect brought by data errors or server failures. (5) Lightweight: to enable users to perform storage correctness checks with minimum overhead.

D. Notation and Preliminaries
• F – the data file to be stored. We assume that F can be denoted as a matrix of m equal-sized data vectors, each consisting of l blocks. Data blocks are all represented as elements in the Galois field GF(2^p) for p = 8 or 16.
• A – the dispersal matrix used for Reed-Solomon coding.
• G – the encoded file matrix, which includes a set of n = m + k vectors, each consisting of l blocks.
• f_key(·) – a pseudorandom function (PRF), defined as f : {0, 1}* × key → GF(2^p).
• φ_key(·) – a pseudorandom permutation (PRP), defined as φ : {0, 1}^(log2(l)) × key → {0, 1}^(log2(l)) (a toy sketch of f and φ follows this list).
• ver – a version number bound to the index of individual blocks, which records the number of times the block has been modified. Initially we assume ver is 0 for all data blocks.
• s_ver_ij – the seed for the PRF, which depends on the file name, the block index i, the server position j, as well as the optional block version number ver.
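To make the two keyed primitives above concrete, the following is a minimal Python sketch, an illustration rather than the construction used by the authors: f_key is approximated by an HMAC truncated to p bits, and φ_key by a keyed ranking of the indices {0, ..., l-1}. The key values and parameters are made up for the example.

import hmac, hashlib

P_BITS = 8  # p = 8, so PRF outputs live in GF(2^8) = {0, ..., 255}

def f_key(key: bytes, msg: bytes, p_bits: int = P_BITS) -> int:
    # PRF sketch: HMAC-SHA256 of the input, reduced to p bits
    digest = hmac.new(key, msg, hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % (1 << p_bits)

def phi_key(key: bytes, index: int, l: int) -> int:
    # PRP sketch over {0, ..., l-1}: rank all indices by their keyed HMAC value.
    # O(l log l) per call; a real small-domain PRP would be used in practice.
    ranked = sorted(range(l), key=lambda i: hmac.new(key, i.to_bytes(4, "big"), hashlib.sha256).digest())
    return ranked[index]

# Example use: a pseudorandom field element and the 4th permuted block index
print(f_key(b"seed-key", b"block-7"), phi_key(b"per-challenge-key", 3, l=16))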
III. ENSURING CLOUD DATA STORAGE
In a cloud data storage system, users store their data in the cloud and no longer possess the data locally. Thus, the correctness and availability of the data files being stored on the distributed cloud servers must be guaranteed. One of the key issues is to effectively detect any unauthorized data modification and corruption, possibly due to server compromise and/or random Byzantine failures. Besides, in the distributed case, when such inconsistencies are successfully detected, finding which server the data error lies in is also of great significance, since it can be the first step toward fast recovery from the storage errors. Subsequently, it is also shown how to derive a challenge-response protocol for verifying the storage correctness as well as identifying misbehaving servers. Finally, the procedure for file retrieval and error recovery based on erasure-correcting code is outlined.

A. File Distribution Preparation
It is well known that erasure-correcting code may be used to tolerate multiple failures in distributed storage systems. In cloud data storage, we rely on this technique to disperse the data file F redundantly across a set of n = m + k distributed servers. An (m + k, m) Reed-Solomon erasure-correcting code is used to create k redundancy parity vectors from m data vectors in such a way that the original m data vectors can be reconstructed from any m out of the m + k data and parity vectors. By placing each of the m + k vectors on a different server, the original data file can survive the failure of any k of the m + k servers without any data loss, with a space overhead of k/m. To support efficient sequential I/O to the original file, our file layout is systematic, i.e., the unmodified m data file vectors together with the k parity vectors are distributed across the m + k different servers.
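The following toy Python sketch illustrates the dispersal idea described above: m data vectors are encoded into n = m + k vectors with a systematic generator, and the original data can be rebuilt from any m surviving vectors. For readability it works over the prime field GF(2^31 - 1) with a Cauchy-style parity matrix rather than the byte-oriented GF(2^8)/GF(2^16) Reed-Solomon code used in the paper, so it shows the principle, not the authors' implementation.

p = 2**31 - 1                                   # toy prime field standing in for GF(2^p)
m, k, l = 3, 2, 4                               # 3 data vectors, 2 parity vectors, 4 blocks each

def matmul(A, B):                               # matrix product mod p
    return [[sum(a * b for a, b in zip(row, col)) % p for col in zip(*B)] for row in A]

def inverse(M):                                 # Gauss-Jordan inversion mod p
    n = len(M)
    A = [row[:] + [int(r == c) for c in range(n)] for r, row in enumerate(M)]
    for c in range(n):
        piv = next(r for r in range(c, n) if A[r][c])
        A[c], A[piv] = A[piv], A[c]
        inv = pow(A[c][c], p - 2, p)
        A[c] = [x * inv % p for x in A[c]]
        for r in range(n):
            if r != c and A[r][c]:
                A[r] = [(x - A[r][c] * y) % p for x, y in zip(A[r], A[c])]
    return [row[n:] for row in A]

# Systematic generator A = [I | P]; a Cauchy-style P keeps every m x m column subset invertible.
P = [[pow((i - (m + j)) % p, p - 2, p) for j in range(k)] for i in range(m)]
A = [[int(i == j) for j in range(m)] + P[i] for i in range(m)]

F = [[(7 * i + q + 1) % p for q in range(l)] for i in range(m)]   # toy m x l file matrix
G = matmul(list(map(list, zip(*A))), F)                           # n x l encoded matrix; row j goes to server j

alive = [1, 3, 4]                                                 # any m of the n servers survive
A_sub = [[A[i][j] for j in alive] for i in range(m)]
F_rec = matmul(inverse(list(map(list, zip(*A_sub)))), [G[j] for j in alive])
assert F_rec == F                                                 # losing k vectors costs no data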
B. Challenge Token Precomputation
In order to achieve assurance of data storage correctness and data error localization simultaneously, our scheme relies entirely on pre-computed verification tokens. The main idea is as follows: before file distribution, the user pre-computes a certain number of short verification tokens on the individual vectors G(j) (j ∈ {1, . . . , n}), each token covering a random subset of data blocks. Later, when the user wants to make sure of the storage correctness of the data in the cloud, he challenges the cloud servers with a set of randomly generated block indices. Upon receiving the challenge, each cloud server computes a short "signature" over the specified blocks and returns it to the user. The values of these signatures should match the corresponding tokens pre-computed by the user. Meanwhile, as all servers operate over the same subset of indices, the requested response values for the integrity check must also form a valid codeword determined by the secret matrix P.

Algorithm 1 Token Pre-computation
1: procedure
2:     Choose parameters l, n and functions f, φ;
3:     Choose the number t of tokens;
4:     Choose the number r of indices per verification;
5:     Generate master key K_PRP and challenge key k_chal;
6:     for vector G(j), j ← 1, n do
7:         for round i ← 1, t do
8:             Derive α_i = f_{k_chal}(i) and k_prp^(i) from K_PRP.
9:             Compute v_i^(j) = Σ_{q=1..r} α_i^q · G(j)[φ_{k_prp^(i)}(q)]
10:        end for
11:    end for
12:    Store all the v_i^(j) locally.
13: end procedure

Algorithm 2 Correctness Verification and Error Localization
1: procedure CHALLENGE(i)
2:     Recompute α_i = f_{k_chal}(i) and k_prp^(i) from K_PRP;
3:     Send {α_i, k_prp^(i)} to all the cloud servers;
4:     Receive from servers: {R_i^(j) = Σ_{q=1..r} α_i^q · G(j)[φ_{k_prp^(i)}(q)] | 1 ≤ j ≤ n}
5:     for (j ← m + 1, n) do
6:         R_i^(j) ← R_i^(j) − Σ_{q=1..r} f_{k_j}(s_{I_q, j}) · α_i^q,  I_q = φ_{k_prp^(i)}(q)
7:     end for
8:     if ((R_i^(1), . . . , R_i^(m)) · P == (R_i^(m+1), . . . , R_i^(n))) then
9:         Accept and ready for the next challenge.
10:    else
11:        for (j ← 1, n) do
12:            if (R_i^(j) != v_i^(j)) then
13:                return server j is misbehaving.
14:            end if
15:        end for
16:    end if
17: end procedure
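To make Algorithms 1 and 2 concrete, here is a small self-contained Python sketch of the token idea under simplifying assumptions: the arithmetic is over a prime field instead of GF(2^p), the PRF and PRP are built from HMAC, the encoded vectors are toy values, and the parity-blinding adjustment of step 6 in Algorithm 2 is omitted, so responses are compared directly against the stored tokens. None of the key names or parameters come from the paper.

import hmac, hashlib

p = 2**31 - 1
n, l, r, t = 5, 16, 4, 3                       # servers, blocks per vector, indices per check, rounds

def prf(key: bytes, msg: bytes) -> int:        # stand-in for f_key: arbitrary input -> field element
    return int.from_bytes(hmac.new(key, msg, hashlib.sha256).digest(), "big") % p

def prp(key: bytes, q: int, length: int) -> int:   # stand-in for phi_key: keyed permutation of {0,...,length-1}
    order = sorted(range(length), key=lambda x: hmac.new(key, x.to_bytes(4, "big"), hashlib.sha256).digest())
    return order[q]

# Toy stand-ins for the encoded vectors G(1..n); in the real scheme these come from file dispersal.
G = [[prf(b"toy-file", f"{j}:{q}".encode()) for q in range(l)] for j in range(n)]
K_chal, K_prp = b"master-challenge-key", b"master-prp-key"

def aggregate(vector, alpha, round_key):
    # sum_{q=1..r} alpha^q * vector[phi_roundkey(q)] mod p, computed identically by user and servers
    return sum(pow(alpha, q, p) * vector[prp(round_key, q, l)] for q in range(1, r + 1)) % p

def round_params(i):
    return prf(K_chal, bytes([i])), hmac.new(K_prp, bytes([i]), hashlib.sha256).digest()

# Algorithm 1 (sketch): pre-compute t tokens per server before the file is distributed.
tokens = [[aggregate(G[j], *round_params(i)) for i in range(t)] for j in range(n)]

# Algorithm 2 (simplified): challenge round i and flag servers whose response misses its token.
def verify_round(i, stored):
    alpha, round_key = round_params(i)
    return [j for j in range(n) if aggregate(stored[j], alpha, round_key) != tokens[j][i]]

corrupted = [row[:] for row in G]
corrupted[2] = [(x + 1) % p for x in corrupted[2]]     # server 2 silently alters its whole vector
print(verify_round(0, G), verify_round(0, corrupted))  # [] then [2]

A partial corruption is only caught when one of the r sampled indices hits a modified block, which is where the detection-probability argument of Section V comes in.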
D. File Retrieval and Error Recovery
Since our layout of the file matrix is systematic, the user can reconstruct the original file by downloading the data vectors from the first m servers. Because our verification is based on random spot-checking, the storage correctness assurance is a probabilistic one. However, by choosing the system parameters (e.g., r, l, t) appropriately and conducting enough verifications, we can guarantee successful file retrieval with high probability. On the other hand, whenever data corruption is detected, the comparison of pre-computed tokens and received response values can guarantee the identification of the misbehaving server(s), again with high probability, which will be discussed shortly. Therefore, the user can always ask the servers to send back blocks of the r rows specified in the challenge and regenerate the correct blocks by erasure correction, as shown in Algorithm 3, as long as at most k misbehaving servers have been identified. The newly recovered blocks can then be redistributed to the misbehaving servers to maintain the correctness of storage.

Algorithm 3 Error Recovery
1: procedure
   % Assume the block corruptions have been detected among the specified r rows;
   % Assume s ≤ k servers have been identified as misbehaving.
2:     Download the r rows of blocks from the servers;
3:     Treat the s servers as erasures and recover the blocks;
4:     Resend the recovered blocks to the corresponding servers.
5: end procedure

IV. PROVIDING DYNAMIC DATA OPERATION SUPPORT
So far, we have assumed that F represents static or archived data. This model may fit some application scenarios, such as libraries and scientific datasets. However, in cloud data storage there are many potential scenarios where data stored in the cloud is dynamic, like electronic documents, photos, or log files. Therefore, it is crucial to consider the dynamic case, where a user may wish to perform various block-level operations of update, delete and append to modify the data file while maintaining the storage correctness assurance. In this section, we show how our scheme can explicitly and efficiently handle dynamic data operations for cloud data storage.

A. Update Operation
In cloud data storage, sometimes the user may need to modify some data block(s) stored in the cloud, from the current value f_ij to a new one, f_ij + Δf_ij. We refer to this operation as data update. Due to the linear property of the Reed-Solomon code, the user can perform the update operation and generate the updated parity blocks using Δf_ij only, without involving any other unchanged blocks.
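As an illustration of this linearity argument (again over a toy prime field, with a made-up 3 x 2 parity generation matrix rather than the paper's secret P over GF(2^p)), updating one data block by a delta only requires adding delta times the corresponding row of P to the parity blocks:

p = 2**31 - 1
m, k = 3, 2
P = [[pow((i - (m + j)) % p, p - 2, p) for j in range(k)] for i in range(m)]   # toy parity matrix

data   = [11, 22, 33]                                              # one row of the file matrix F
parity = [sum(data[i] * P[i][j] for i in range(m)) % p for j in range(k)]

i, delta = 1, 940                                                  # user rewrites block i: f -> f + delta
data[i] = (data[i] + delta) % p
parity  = [(parity[j] + delta * P[i][j]) % p for j in range(k)]    # update parity from delta alone

# The locally-updated parity equals the parity recomputed from scratch; no other block was touched.
assert parity == [sum(data[i2] * P[i2][j] for i2 in range(m)) % p for j in range(k)]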
B. Delete Operation
Sometimes, after being stored in the cloud, certain data blocks may need to be deleted. The delete operation we are considering is a general one, in which the user replaces the data block with zero or some special reserved data symbol. From this point of view, the delete operation is actually a special case of the data update operation, where the original data blocks can be replaced with zeros or some predetermined special blocks. Therefore, we can rely on the update procedure to support the delete operation, i.e., by setting Δf_ij in F to be −f_ij. Also, all the affected tokens have to be modified and the updated parity information has to be blinded using the same method specified for the update operation.

C. Append Operation
In some cases, the user may want to increase the size of his stored data by adding blocks at the end of the data file, which we refer to as data append. We anticipate that the most frequent append operation in cloud data storage is bulk append, in which the user needs to upload a large number of blocks (not a single block) at one time.

D. Insert Operation
An insert operation on the data file refers to an append operation at a desired index position while maintaining the same data block structure for the whole data file, i.e., inserting a block F[j] corresponds to shifting all blocks starting with index j + 1 by one slot. An insert operation may affect many rows in the logical data file matrix F, and a substantial number of computations are required to renumber all the subsequent blocks as well as recompute the challenge-response tokens. Therefore, an efficient insert operation is difficult to support and thus we leave it for our future work.

V. SECURITY ANALYSIS AND PERFORMANCE EVALUATION
In this section, we analyze our proposed scheme in terms of security and efficiency. Our security analysis focuses on the adversary model defined in Section II. We also evaluate the efficiency of our scheme via implementation of both the file distribution preparation and the verification token precomputation.

A. Security Strength Against Weak Adversary
Detection probability against data modification: In our scheme, servers are required to operate on the specified rows in each correctness verification for the calculation of the requested token. We will show that this "sampling" strategy on selected rows, instead of all rows, can greatly reduce the computational overhead on the server, while maintaining detection of data corruption with high probability.
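As one illustrative way to make "high probability" concrete (a generic sampling calculation, not a result taken from this paper's omitted analysis), the chance that a verification touching r distinct rows hits at least one of z modified rows out of l is 1 - C(l - z, r) / C(l, r):

from math import comb

def detection_probability(l: int, z: int, r: int) -> float:
    # probability that sampling r distinct rows out of l hits at least one of the z corrupted rows
    return 1 - comb(l - z, r) / comb(l, r)

l = 10_000
for z, r in [(100, 100), (100, 300), (1_000, 100)]:
    print(f"z={z:5d} corrupted, r={r:3d} sampled -> P_detect = {detection_probability(l, z, r):.4f}")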
B. Security Strength Against Strong Adversary
In this section, we analyze the security strength of our scheme against the server colluding attack and explain why blinding the parity blocks can help improve the security strength of our proposed scheme.
Recall that in the file distribution preparation, the redundancy parity vectors are calculated by multiplying the file matrix F by P, where P is the secret parity generation matrix we later rely on for storage correctness assurance. If we disperse all the generated vectors directly after token precomputation, i.e., without blinding, malicious servers that collaborate can reconstruct the secret matrix P easily: they can pick blocks from the same rows among the data and parity vectors to establish a set of m · k linear equations and solve for the m · k entries of the parity generation matrix P. Once they have knowledge of P, those malicious servers can consequently modify any part of the data blocks and calculate the corresponding parity blocks, and vice versa, making their codeword relationship always consistent. Therefore, our storage correctness challenge scheme would be undermined: even if those modified blocks are covered by the specified rows, the storage correctness check equation would always hold.
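The linear-algebra step of this attack is easy to see in a toy example (prime-field arithmetic and made-up numbers; the real scheme works over GF(2^p)): given m rows of data blocks and the matching unblinded parity blocks, colluding servers recover the secret parity generation matrix by solving one m x m linear system per parity column.

p = 2**31 - 1
m, k = 3, 2

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) % p for col in zip(*B)] for row in A]

def inverse(M):                                       # Gauss-Jordan inversion mod p
    n = len(M)
    A = [row[:] + [int(r == c) for c in range(n)] for r, row in enumerate(M)]
    for c in range(n):
        piv = next(r for r in range(c, n) if A[r][c])
        A[c], A[piv] = A[piv], A[c]
        inv = pow(A[c][c], p - 2, p)
        A[c] = [x * inv % p for x in A[c]]
        for r in range(n):
            if r != c and A[r][c]:
                A[r] = [(x - A[r][c] * y) % p for x, y in zip(A[r], A[c])]
    return [row[n:] for row in A]

secret_P = [[7, 11], [13, 17], [19, 23]]              # the owner's secret parity generation matrix
rows     = [[3, 1, 4], [1, 5, 9], [2, 6, 5]]          # m rows of data blocks seen by the data servers
parity   = matmul(rows, secret_P)                     # the matching (unblinded) parity blocks

recovered_P = matmul(inverse(rows), parity)           # colluding servers solve the m*k equations
assert recovered_P == secret_P                        # P leaks without blinding

Blinding each parity block with a PRF output removes the exact linear relation this solver relies on, which is what the blinding step of the scheme provides.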
C. Performance Evaluation
1) File Distribution Preparation: We implemented the generation of parity vectors for our scheme over the field GF(2^8). Our experiment is conducted using C on a system with an Intel Core 2 processor running at 1.86 GHz, 2048 MB of RAM, and a 7200 RPM Western Digital 250 GB Serial ATA drive with an 8 MB buffer. We consider two sets of different parameters for the (m + k, m) Reed-Solomon encoding. Table I shows the average encoding cost over 10 trials for an 8 GB file.
2) Challenge Token Pre-computation: Although in our scheme the number of verification tokens t is fixed a priori and determined before file distribution, we can overcome this issue by choosing a sufficiently large t in practice.

VI. RELATED WORK
Juels et al. [3] described a formal "proof of retrievability" (POR) model for ensuring remote data integrity. Their scheme combines spot-checking and error-correcting code to ensure both possession and retrievability of files on archive service systems. Shacham et al. [4] built on this model and constructed a random linear function based homomorphic authenticator which enables an unlimited number of queries and requires less communication overhead. Bowers et al. [5] proposed an improved framework for POR protocols that generalizes both Juels's and Shacham's work. Later, in their subsequent work, Bowers et al. [10] extended the POR model to distributed systems. However, all these schemes focus on static data. The effectiveness of their schemes rests primarily on the preprocessing steps that the user conducts before outsourcing the data file F. Any change to the contents of F, even of a few bits, must propagate through the error-correcting code, thus introducing significant computation and communication complexity.

VII. CONCLUSION
In this paper, we investigated the problem of data security in cloud data storage, which is essentially a distributed storage system. To ensure the correctness of users' data in cloud data storage, we proposed an effective and flexible distributed scheme with explicit dynamic data support, including block update, delete, and append. We rely on erasure-correcting code in the file distribution preparation to provide redundancy parity vectors and guarantee data dependability. By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves the integration of storage correctness insurance and data error localization, i.e., whenever data corruption has been detected during the storage correctness verification across the distributed servers, we can almost guarantee the simultaneous identification of the misbehaving server(s). Through detailed security and performance analysis, we show that our scheme is highly efficient and resilient to Byzantine failure, malicious data modification attack, and even server colluding attacks.

ACKNOWLEDGEMENT
This work was supported in part by the US National Science Foundation under grants CNS-0831963, CNS-0626601, CNS-0716306, and CNS-0831628.

REFERENCES
[1]  Amazon.com, "Amazon Web Services (AWS)," Online at http://aws.amazon.com, 2008.
[2]  N. Gohring, "Amazon's S3 down for several hours," Online at http://www.pcworld.com/businesscenter/article/142549/amazons_s3_down_for_several_hours.html, 2008.
[3]  A. Juels and J. Burton S. Kaliski, "PORs: Proofs of Retrievability for Large Files," Proc. of CCS '07, pp. 584–597, 2007.
[4]  H. Shacham and B. Waters, "Compact Proofs of Retrievability," Proc. of Asiacrypt '08, Dec. 2008.
[5]  K. D. Bowers, A. Juels, and A. Oprea, "Proofs of Retrievability: Theory and Implementation," Cryptology ePrint Archive, Report 2008/175, 2008, http://eprint.iacr.org/.
[6]  G. Ateniese, R. Burns, R. Curtmola, J. Herring, L. Kissner, Z. Peterson, and D. Song, "Provable Data Possession at Untrusted Stores," Proc. of CCS '07, pp. 598–609, 2007.
[7]  G. Ateniese, R. D. Pietro, L. V. Mancini, and G. Tsudik, "Scalable and Efficient Provable Data Possession," Proc. of SecureComm '08, pp. 1–10, 2008.
[8]  T. S. J. Schwarz and E. L. Miller, "Store, Forget, and Check: Using Algebraic Signatures to Check Remotely Administered Storage," Proc. of ICDCS '06, pp. 12–12, 2006.
[9]  M. Lillibridge, S. Elnikety, A. Birrell, M. Burrows, and M. Isard, "A Cooperative Internet Backup Scheme," Proc. of the 2003 USENIX Annual Technical Conference (General Track), pp. 29–41, 2003.
[10] K. D. Bowers, A. Juels, and A. Oprea, "HAIL: A High-Availability and Integrity Layer for Cloud Storage," Cryptology ePrint Archive, Report 2008/489, 2008, http://eprint.iacr.org/.
[11] L. Carter and M. Wegman, "Universal Hash Functions," Journal of Computer and System Sciences, vol. 18, no. 2, pp. 143–154, 1979.
[12] J. Hendricks, G. Ganger, and M. Reiter, "Verifying Distributed Erasure-Coded Data," Proc. 26th ACM Symposium on Principles of Distributed Computing, pp. 139–146, 2007.
[13] J. S. Plank and Y. Ding, "Note: Correction to the 1997 Tutorial on Reed-Solomon Coding," University of Tennessee, Tech. Rep. CS-03-504, 2003.
[14] Q. Wang, K. Ren, W. Lou, and Y. Zhang, "Dependable and Secure Sensor Data Storage with Dynamic Integrity Assurance," Proc. of IEEE INFOCOM, 2009.
[15] R. Curtmola, O. Khan, R. Burns, and G. Ateniese, "MR-PDP: Multiple-Replica Provable Data Possession," Proc. of ICDCS '08, pp. 411–420, 2008.
[16] D. L. G. Filho and P. S. L. M. Barreto, "Demonstrating Data Possession and Uncheatable Data Transfer," Cryptology ePrint Archive, Report 2006/150, 2006, http://eprint.iacr.org/.
[17] M. A. Shah, M. Baker, J. C. Mogul, and R. Swaminathan, "Auditing to Keep Online Storage Services Honest," Proc. 11th USENIX Workshop on Hot Topics in Operating Systems (HotOS XI), 2007.

WordPress Websites for Engineers: Elevate Your Brand
 
Advanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionAdvanced Computer Architecture – An Introduction
Advanced Computer Architecture – An Introduction
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL Certs
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdf
 
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo DayH2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
 
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQL
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio Web
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR Systems
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache Maven
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity Plan
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test Suite
 
Artificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxArtificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptx
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project Setup
 
DMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special EditionDMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special Edition
 
Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!
 
Powerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time ClashPowerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time Clash
 

Fs2510501055

Keywords – Cloud Computing, Security, Distributed scheme, homomorphic token, distributed verification.

I. INTRODUCTION
Several trends are opening up the era of Cloud Computing, which is an Internet-based development and use of computer technology. The ever cheaper and more powerful processors, together with the software as a service (SaaS) computing architecture, are transforming data centers into pools of computing service on a huge scale. The increasing network bandwidth and reliable yet flexible network connections make it even possible for users to subscribe to high-quality services from data and software that reside solely on remote data centers. Moving data into the cloud offers great convenience to users, since they do not have to care about the complexities of direct hardware management. The pioneers among Cloud Computing vendors, Amazon Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2) [1], are both well-known examples. While these Internet-based online services do provide huge amounts of storage space and customizable computing resources, this computing platform shift is also eliminating the responsibility of local machines for data maintenance. As a result, users are at the mercy of their cloud service providers for the availability and integrity of their data. The recent downtime of Amazon's S3 is such an example [2].

From the perspective of data security, which has always been an important aspect of quality of service, Cloud Computing inevitably poses new challenging security threats for a number of reasons. Firstly, traditional cryptographic primitives for the purpose of data security protection cannot be directly adopted, due to the users' loss of control over their data under Cloud Computing. Therefore, verification of correct data storage in the cloud must be conducted without explicit knowledge of the whole data. Considering the various kinds of data each user stores in the cloud and the demand for long-term continuous assurance of their data safety, the problem of verifying the correctness of data storage in the cloud becomes even more challenging. Secondly, Cloud Computing is not just a third-party data warehouse. The data stored in the cloud may be frequently updated by the users, including insertion, deletion, modification, appending, reordering, etc. Ensuring storage correctness under dynamic data updates is hence of paramount importance. However, this dynamic feature also makes traditional integrity insurance techniques futile and entails new solutions. Last but not least, the deployment of Cloud Computing is powered by data centers running in a simultaneous, cooperated and distributed manner. An individual user's data is redundantly stored in multiple physical locations to further reduce the data integrity threats. Therefore, distributed protocols for storage correctness assurance will be of the most importance in achieving a robust and secure cloud data storage system in the real world. However, this important area remains to be fully explored in the literature.

In this paper, we propose an effective and flexible distributed scheme with explicit dynamic data support to ensure the correctness of users' data in the cloud. We rely on erasure-correcting code in the file distribution preparation to provide redundancies and guarantee the data dependability. This construction drastically reduces the communication and storage overhead as compared to the traditional replication-based file distribution techniques. By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves the storage correctness insurance as well as data error localization: whenever data corruption has been detected during the storage correctness verification, our scheme can almost guarantee the simultaneous localization of data errors, i.e., the identification of the misbehaving server(s). Our work is among the first few in this field to consider distributed data storage in Cloud Computing. Our contribution can be summarized as the following three aspects:
1) Compared to many of its predecessors, which only provide binary results about the storage state across the distributed servers, the challenge-response protocol in our work further provides the localization of data errors.
2) Unlike most prior works for ensuring remote data integrity, the new scheme supports secure and efficient dynamic operations on data blocks, including update, delete and append.
3) Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attack, and even server colluding attacks.
The rest of the paper is organized as follows. Section II introduces the system model, adversary model, our design goals and notations. We then provide the detailed description of our scheme in Sections III and IV. Section V gives the security analysis and performance evaluation, followed by Section VI, which overviews the related work. Finally, Section VII gives the concluding remarks of the whole paper.
II. PROBLEM STATEMENT

A. System Model
A representative network architecture for cloud data storage is illustrated in Figure 1. Three different network entities can be identified as follows:
• User: users, who have data to be stored in the cloud and rely on the cloud for data computation, consist of both individual consumers and organizations.
• Cloud Service Provider (CSP): a CSP, who has significant resources and expertise in building and managing distributed cloud storage servers, owns and operates live Cloud Computing systems.
• Third-Party Auditor (TPA): an optional TPA, who has expertise and capabilities that users may not have, is trusted to assess and expose risk of cloud storage services on behalf of the users upon request.
In cloud data storage, a user stores his data through a CSP into a set of cloud servers, which are running in a simultaneous, cooperated and distributed manner. Data redundancy can be employed with the technique of erasure-correcting code to further tolerate faults or server crashes as the user's data grow in size and importance. Thereafter, for application purposes, the user interacts with the cloud servers via the CSP to access or retrieve his data. In some cases, the user may need to perform block-level operations on his data. The most general forms of these operations we are considering are block update, delete, insert and append. As users no longer possess their data locally, it is of critical importance to assure users that their data are being correctly stored and maintained. That is, users should be equipped with security means so that they can make continuous correctness assurance of their stored data even without the existence of local copies. In case users do not necessarily have the time, feasibility or resources to monitor their data, they can delegate the tasks to an optional trusted TPA of their respective choice.

B. Adversary Model
Security threats faced by cloud data storage can come from two different sources. On the one hand, a CSP can be self-interested, untrusted and possibly malicious. Not only may it desire to move data that has not been or is rarely accessed to a lower tier of storage than agreed for monetary reasons, but it may also attempt to hide a data loss incident due to management errors, Byzantine failures and so on. On the other hand, there may also exist an economically motivated adversary, who has the capability to compromise a number of cloud data storage servers in different time intervals and subsequently is able to modify or delete users' data while remaining undetected by CSPs for a certain period. Specifically, we consider two types of adversary with different levels of capability in this paper:
Weak Adversary: The adversary is interested in corrupting the user's data files stored on individual servers. Once a server is compromised, the adversary can pollute the original data files by modifying or introducing its own fraudulent data to prevent the original data from being retrieved by the user.
Strong Adversary: This is the worst-case scenario, in which we assume that the adversary can compromise all the storage servers so that it can intentionally modify the data files as long as they are internally consistent. In fact, this is equivalent to the case where all servers are colluding together to hide a data loss or corruption incident.

C. Design Goals
To ensure the security and dependability of cloud data storage under the aforementioned adversary model, we aim to design efficient mechanisms for dynamic data verification and operation and achieve the following goals: (1) Storage correctness: to ensure users that their data are indeed stored appropriately and kept intact all the time in the cloud. (2) Fast localization of data error: to effectively locate the malfunctioning server when data corruption has been detected. (3) Dynamic data support: to maintain the same level of storage correctness assurance even if users modify, delete or append their data files in the cloud. (4) Dependability: to enhance data availability against Byzantine failures, malicious data modification and server colluding attacks, i.e., minimizing the effect brought by data errors or server failures. (5) Lightweight: to enable users to perform storage correctness checks with minimum overhead.
D. Notation and Preliminaries
• F – the data file to be stored. We assume that F can be denoted as a matrix of m equal-sized data vectors, each consisting of l blocks. Data blocks are all well represented as elements in the Galois field GF(2^p) for p = 8 or 16.
• A – the dispersal matrix used for Reed-Solomon coding.
• G – the encoded file matrix, which includes a set of n = m + k vectors, each consisting of l blocks.
• f_key(·) – a pseudorandom function (PRF), defined as f : {0, 1}* × key → GF(2^p).
• φ_key(·) – a pseudorandom permutation (PRP), defined as φ : {0, 1}^{log2(l)} × key → {0, 1}^{log2(l)}.
• ver – a version number bound with the index of individual blocks, which records the times the block has been modified. Initially we assume ver is 0 for all data blocks.
• s_{ver,ij} – the seed for the PRF, which depends on the file name, the block index i, the server position j, as well as the optional block version number ver.
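To make this notation concrete, the following Python sketch shows one plausible way to instantiate f_key and φ_key. The paper does not prescribe particular constructions, so the HMAC-SHA256-based PRF, the shuffle-based index permutation, and the helper names prf_gf and prp_index are illustrative assumptions only.

import hmac, hashlib, random

P_GF = 8  # we work over GF(2^8), matching p = 8 in the notation

def prf_gf(key: bytes, message: bytes) -> int:
    # f_key(.): map an arbitrary input string to an element of GF(2^8)
    # using HMAC-SHA256 and taking the first byte of the digest.
    return hmac.new(key, message, hashlib.sha256).digest()[0]

def prp_index(key: bytes, l: int):
    # phi_key(.): a keyed permutation of the block indices {0, ..., l-1}.
    # Derived here by shuffling with a PRF-seeded generator, an illustrative
    # stand-in for a real pseudorandom permutation.
    seed = hmac.new(key, b"prp-seed", hashlib.sha256).digest()
    rng = random.Random(seed)
    perm = list(range(l))
    rng.shuffle(perm)
    return perm

if __name__ == "__main__":
    key = b"example-key"
    print(prf_gf(key, b"file.txt|block=3|server=1|ver=0"))  # PRF output in [0, 255]
    print(prp_index(key, 8))                                # key-dependent permutation of 0..7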
Three management errors, Byzantine failures and so on. On different network entities can be identified as the other hand, there may also exist an economically- follows: motivated adversary, who has the capability to • User: users, who have data to be stored in the compromise a number of cloud data storage servers cloud and rely on the cloud for data computation, in different time intervals and subsequently is able to consist of both individual consumers and modify or delete users' data while remaining organizations. undetected by CSPs for a certain period. Specifically, • Cloud Service Provider (CSP): a CSP, who has we consider two types of adversary with differ ent signif-icant resources and expertise in building levels of capability in this paper: and managing distributed cloud storage servers, Weak Adversary: The adversary is interested owns and operates live Cloud Computing in corrupting the user's data files stored on individual systems. servers. Once a server is comprised, an adversary can • Third Party Auditor (TPA): an optional TPA, pollute the original data files b y modifying or who has expertise and capabilities that users may introducing its own fraudulent data to prevent the not have, is trusted to assess and expose risk of original data from being retrieved by the user. cloud storage services on behalf of the users Strong Adversary: This is the worst case scenario, in upon request. which we assume that the adversary can compromise all the storage servers so that he can intentionally modify the data files as long as they are internally 1051 | P a g e
B. Challenge Token Precomputation
In order to achieve assurance of data storage correctness and data error localization simultaneously, our scheme entirely relies on pre-computed verification tokens. The main idea is as follows: before file distribution, the user pre-computes a certain number of short verification tokens on each individual vector G^(j) (j ∈ {1, ..., n}), each token covering a random subset of data blocks. Later, when the user wants to make sure of the storage correctness of the data in the cloud, he challenges the cloud servers with a set of randomly generated block indices. Upon receiving the challenge, each cloud server computes a short "signature" over the specified blocks and returns it to the user. The values of these signatures should match the corresponding tokens pre-computed by the user. Meanwhile, as all servers operate over the same subset of indices, the requested response values for the integrity check must also form a valid codeword determined by the secret matrix P.

Algorithm 1 Token Pre-computation
1: procedure
2: Choose parameters l, n and functions f, φ;
3: Choose the number t of tokens;
4: Choose the number r of indices per verification;
5: Generate master key K_prp and challenge key k_chal;
6: for vector G^(j), j ← 1, n do
7:   for round i ← 1, t do
8:     Derive α_i = f_{k_chal}(i) and k_prp^(i) from K_prp;
9:     Compute v_i^(j) = Σ_{q=1}^{r} α_i^q · G^(j)[φ_{k_prp^(i)}(q)]
10:  end for
11: end for
12: Store all the v_i^(j) locally.
13: end procedure
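A minimal Python sketch of this pre-computation step is given below. It assumes the GF(2^8) arithmetic and the HMAC-based PRF and permutation shown earlier; the per-round key derivation, the exponent form α_i^q, and all function names are illustrative choices, not the paper's exact construction.

import hmac, hashlib, random

def gf_mul(a, b):
    # GF(2^8) multiplication, modulus 0x11B.
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def prf(key, msg):
    return hmac.new(key, msg, hashlib.sha256).digest()

def precompute_tokens(G, t, r, K_prp, k_chal):
    # G: list of n encoded vectors (each a list of l blocks in 0..255).
    # Returns tokens[i][j] = v_i^(j) for round i and server j.
    n, l = len(G), len(G[0])
    tokens = []
    for i in range(1, t + 1):
        alpha = prf(k_chal, i.to_bytes(4, "big"))[0] | 1      # alpha_i, forced nonzero
        k_i = prf(K_prp, i.to_bytes(4, "big"))                # per-round permutation key
        perm = list(range(l))
        random.Random(k_i).shuffle(perm)                      # stand-in for the PRP phi
        row = []
        for j in range(n):
            v = 0
            for q in range(1, r + 1):
                v ^= gf_mul(gf_pow(alpha, q), G[j][perm[q - 1]])
            row.append(v)
        tokens.append(row)
    return tokens

if __name__ == "__main__":
    G = [[(s * 31 + b) % 256 for b in range(16)] for s in range(6)]  # toy encoded file, n = 6, l = 16
    toks = precompute_tokens(G, t=4, r=5, K_prp=b"master", k_chal=b"chal")
    print(len(toks), "rounds x", len(toks[0]), "servers of tokens")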
C. Correctness Verification and Error Localization
To verify storage correctness and localize errors, for each challenge round i the user re-derives the round parameters, collects the servers' aggregated responses, and checks them against both the pre-computed tokens and the codeword relation determined by P, as summarized in Algorithm 2.

Algorithm 2 Correctness Verification and Error Localization
1: procedure CHALLENGE(i)
2: Recompute α_i = f_{k_chal}(i) and k_prp^(i) from K_prp;
3: Send {α_i, k_prp^(i)} to all the cloud servers;
4: Receive from servers: {R_i^(j) = Σ_{q=1}^{r} α_i^q · G^(j)[φ_{k_prp^(i)}(q)] | 1 ≤ j ≤ n}
5: for (j ← m + 1, n) do
6:   R_i^(j) ← R_i^(j) − Σ_{q=1}^{r} f_{k_j}(s_{I_q,j}) · α_i^q, where I_q = φ_{k_prp^(i)}(q)
7: end for
8: if ((R_i^(1), ..., R_i^(m)) · P == (R_i^(m+1), ..., R_i^(n))) then
9:   Accept and be ready for the next challenge.
10: else
11:  for (j ← 1, n) do
12:    if (R_i^(j) != v_i^(j)) then
13:      return server j is misbehaving.
14:    end if
15:  end for
16: end if
17: end procedure
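The sketch below plays both roles of Algorithm 2 in one process: a server-side routine that recomputes the aggregated response over its (possibly corrupted) vector, and a user-side check against the stored tokens. For brevity it checks responses only against the pre-computed tokens and omits the parity-codeword cross-check against the secret matrix P; the function names and the corruption scenario are illustrative.

import hmac, hashlib, random

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def prf(key, msg):
    return hmac.new(key, msg, hashlib.sha256).digest()

def round_params(i, r, l, K_prp, k_chal):
    # Re-derive alpha_i and the sampled block positions for round i.
    alpha = prf(k_chal, i.to_bytes(4, "big"))[0] | 1
    perm = list(range(l))
    random.Random(prf(K_prp, i.to_bytes(4, "big"))).shuffle(perm)
    return alpha, perm[:r]

def server_response(vector, alpha, positions):
    # What an honest server returns for a challenge.
    v = 0
    for q, pos in enumerate(positions, start=1):
        v ^= gf_mul(gf_pow(alpha, q), vector[pos])
    return v

def verify_round(i, responses, tokens):
    # Compare responses with the user's pre-computed tokens; report bad servers.
    return [j for j, (resp, tok) in enumerate(zip(responses, tokens[i])) if resp != tok]

if __name__ == "__main__":
    K_prp, k_chal, r = b"master", b"chal", 5
    G = [[(s * 31 + b) % 256 for b in range(16)] for s in range(6)]
    # The user pre-computes tokens for round 0 (normally done before distribution).
    alpha, pos = round_params(0, r, 16, K_prp, k_chal)
    tokens = {0: [server_response(v, alpha, pos) for v in G]}
    G[2][pos[1]] ^= 0xFF                                   # simulate corruption on server 2
    responses = [server_response(v, alpha, pos) for v in G]
    print("misbehaving servers:", verify_round(0, responses, tokens))  # -> [2]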
D. File Retrieval and Error Recovery
Since our layout of the file matrix is systematic, the user can reconstruct the original file by downloading the data vectors from the first m servers, assuming they return correct responses. Note that our verification is based on random spot-checking, so the storage correctness assurance is a probabilistic one. However, by choosing the system parameters (e.g., r, l, t) appropriately and conducting enough rounds of verification, we can guarantee successful file retrieval with high probability. On the other hand, whenever data corruption is detected, the comparison of pre-computed tokens and received response values can guarantee the identification of the misbehaving server(s), again with high probability, which will be discussed shortly. Therefore, the user can always ask the servers to send back blocks of the r rows specified in the challenge and regenerate the correct blocks by erasure correction, as shown in Algorithm 3, as long as no more than k misbehaving servers are identified. The newly recovered blocks can then be redistributed to the misbehaving servers to maintain the correctness of storage.

Algorithm 3 Error Recovery
1: procedure
   % Assume the block corruptions have been detected among the specified r rows;
   % Assume s ≤ k servers have been identified as misbehaving.
2: Download the r rows of blocks from the servers;
3: Treat the s servers as erasures and recover the blocks;
4: Resend the recovered blocks to the corresponding servers;
5: end procedure
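To make the erasure-recovery step concrete, the sketch below rebuilds the m original data vectors from any m surviving columns of the systematic generator [I | P] used in the earlier encoding sketch, by inverting the corresponding m-by-m submatrix over GF(2^8). This mirrors "treat the s servers as erasures and recover the blocks", but the Cauchy-based generator and every function name are assumptions carried over from the earlier illustration, not the paper's code.

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return r

def gf_inv(a):
    r = 1
    for _ in range(254):
        r = gf_mul(r, a)
    return r

def generator_matrix(m, k):
    # Systematic generator [I | P] with a Cauchy parity part (any m columns are independent).
    P = [[gf_inv(x ^ y) for y in range(m, m + k)] for x in range(m)]
    return [[1 if i == j else 0 for j in range(m)] + P[i] for i in range(m)]

def mat_inv(M):
    # Gauss-Jordan inversion over GF(2^8); subtraction is XOR.
    n = len(M)
    A = [row[:] + [1 if i == j else 0 for j in range(n)] for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        inv_p = gf_inv(A[col][col])
        A[col] = [gf_mul(x, inv_p) for x in A[col]]
        for r in range(n):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [x ^ gf_mul(f, y) for x, y in zip(A[r], A[col])]
    return [row[n:] for row in A]

def recover(surviving, m, k):
    # surviving: dict {server_index: vector} with at least m entries.
    idx = sorted(surviving)[:m]
    Gmat = generator_matrix(m, k)
    sub = [[Gmat[i][j] for j in idx] for i in range(m)]   # m x m submatrix of the generator
    inv = mat_inv(sub)
    l = len(surviving[idx[0]])
    data = [[0] * l for _ in range(m)]
    for i in range(m):
        for t, j in enumerate(idx):
            for b in range(l):
                data[i][b] ^= gf_mul(inv[t][i], surviving[j][b])
    return data

if __name__ == "__main__":
    # Encode a toy file as in the earlier sketch, lose two servers, and recover.
    m, k, l = 4, 2, 8
    F = [[(7 * i + b) % 256 for b in range(l)] for i in range(m)]
    Gmat = generator_matrix(m, k)
    stored = {j: [0] * l for j in range(m + k)}
    for j in range(m + k):
        for i in range(m):
            for b in range(l):
                stored[j][b] ^= gf_mul(Gmat[i][j], F[i][b])
    del stored[1], stored[4]                  # two servers fail or are misbehaving
    print(recover(stored, m, k) == F)         # True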
IV. PROVIDING DYNAMIC DATA OPERATION SUPPORT
So far, we have assumed that F represents static or archived data. This model may fit some application scenarios, such as libraries and scientific datasets. However, in cloud data storage there are many potential scenarios where the data stored in the cloud is dynamic, such as electronic documents, photos, or log files. Therefore, it is crucial to consider the dynamic case, where a user may wish to perform various block-level operations of update, delete and append to modify the data file while maintaining the storage correctness assurance. In this section, we show how our scheme can explicitly and efficiently handle dynamic data operations for cloud data storage.

A. Update Operation
In cloud data storage, the user may sometimes need to modify some data block(s) stored in the cloud, from the current value f_ij to a new one, f_ij + Δf_ij. We refer to this operation as data update. Due to the linear property of the Reed-Solomon code, a user can perform the update operation and generate the updated parity blocks using Δf_ij only, without involving any other unchanged blocks.
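The sketch below illustrates this linearity with the toy GF(2^8) parity construction used earlier: updating one data block only requires adding Δf, multiplied by the corresponding parity coefficients, to the affected parity blocks, and the result matches a full re-encoding. The parity matrix and names are the same illustrative assumptions as before, not the paper's implementation; note also that over GF(2^8) addition and subtraction are both XOR.

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return r

def gf_inv(a):
    r = 1
    for _ in range(254):
        r = gf_mul(r, a)
    return r

def parity_matrix(m, k):
    return [[gf_inv(x ^ y) for y in range(m, m + k)] for x in range(m)]

def encode_parity(data, P, k):
    m, l = len(data), len(data[0])
    parity = [[0] * l for _ in range(k)]
    for j in range(k):
        for i in range(m):
            for b in range(l):
                parity[j][b] ^= gf_mul(data[i][b], P[i][j])
    return parity

if __name__ == "__main__":
    m, k, l = 4, 2, 8
    P = parity_matrix(m, k)
    data = [[(11 * i + b) % 256 for b in range(l)] for i in range(m)]
    parity = encode_parity(data, P, k)

    i, b, new_value = 2, 5, 0x5A                 # update block b of data vector i
    delta = data[i][b] ^ new_value               # Δf = old + new (XOR in GF(2^8))
    data[i][b] = new_value
    for j in range(k):                           # patch only the affected parity blocks
        parity[j][b] ^= gf_mul(delta, P[i][j])

    print(parity == encode_parity(data, P, k))   # True: incremental update == full re-encode
    # A delete is the special case new_value = 0, i.e. Δf = −f = f in GF(2^8).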
B. Delete Operation
Sometimes, after being stored in the cloud, certain data blocks may need to be deleted. The delete operation we are considering is a general one, in which the user replaces the data block with zero or some special reserved data symbol. From this point of view, the delete operation is actually a special case of the data update operation, where the original data blocks are replaced with zeros or some predetermined special blocks. Therefore, we can rely on the update procedure to support the delete operation, i.e., by setting Δf_ij in F to be −f_ij. Also, all the affected tokens have to be modified and the updated parity information has to be blinded using the same method specified in the update operation.

C. Append Operation
In some cases, the user may want to increase the size of his stored data by adding blocks at the end of the data file, which we refer to as data append. We anticipate that the most frequent append operation in cloud data storage is bulk append, in which the user needs to upload a large number of blocks (not a single block) at one time.

D. Insert Operation
An insert operation on the data file refers to an append operation at a desired index position while maintaining the same data block structure for the whole data file, i.e., inserting a block F[j] corresponds to shifting all blocks starting with index j + 1 by one slot. An insert operation may affect many rows in the logical data file matrix F, and a substantial number of computations are required to renumber all the subsequent blocks as well as recompute the challenge-response tokens. Therefore, an efficient insert operation is difficult to support, and we leave it for our future work.

V. SECURITY ANALYSIS AND PERFORMANCE EVALUATION
In this section, we analyze our proposed scheme in terms of security and efficiency. Our security analysis focuses on the adversary model defined in Section II. We also evaluate the efficiency of our scheme via implementation of both the file distribution preparation and the verification token precomputation.

A. Security Strength Against Weak Adversary
Detection probability against data modification: In our scheme, servers are required to operate only on the specified rows in each correctness verification for the calculation of the requested token. We will show that this "sampling" strategy on selected rows, instead of all rows, can greatly reduce the computational overhead on the server, while maintaining the detection of data corruption with high probability.
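As a quick, hedged illustration of why sampling works (the detailed analysis is not reproduced in this paper), suppose z of the l rows on a server have been corrupted and each verification samples r rows uniformly at random without replacement; the probability that at least one corrupted row is hit is 1 − Π_{i=0}^{r−1} (l − z − i)/(l − i). The small Python helper below evaluates this expression for a few example parameters.

def detection_probability(l: int, z: int, r: int) -> float:
    # P(at least one of the r sampled rows is corrupted), sampling without replacement.
    miss = 1.0
    for i in range(r):
        miss *= (l - z - i) / (l - i)
    return 1.0 - miss

if __name__ == "__main__":
    # With 1% of 10,000 rows corrupted, a few hundred sampled rows already
    # detect the corruption with high probability.
    for r in (100, 300, 500):
        print(r, round(detection_probability(l=10_000, z=100, r=r), 4))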
B. Security Strength Against Strong Adversary
In this section, we analyze the security strength of our scheme against the server colluding attack and explain why blinding the parity blocks can help improve the security strength of our proposed scheme. Recall that in the file distribution preparation, the redundancy parity vectors are calculated by multiplying the file matrix F by P, where P is the secret parity generation matrix we later rely on for storage correctness assurance. If we dispersed all the generated vectors directly after token precomputation, i.e., without blinding, malicious servers that collaborate could reconstruct the secret matrix P easily: they can pick blocks from the same rows among the data and parity vectors to establish a set of m · k linear equations and solve for the m · k entries of the parity generation matrix P. Once they have the knowledge of P, those malicious servers can consequently modify any part of the data blocks and calculate the corresponding parity blocks, and vice versa, making their codeword relationship always consistent. Therefore, our storage correctness challenge scheme would be undermined: even if those modified blocks are covered by the specified rows, the storage correctness check equation would always hold.
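The following sketch illustrates the blinding idea: before dispersal, each parity block is masked with a per-server PRF keystream that only the user can regenerate, so colluding servers observe only blinded parity values and can no longer set up the m · k linear equations that would reveal P. The HMAC-based keystream, the seed format, and the function names are illustrative assumptions.

import hmac, hashlib

def prf_byte(key: bytes, msg: bytes) -> int:
    return hmac.new(key, msg, hashlib.sha256).digest()[0]

def blind_parity(parity, server_keys, file_name="file", ver=0):
    # parity: k parity vectors; server_keys: one secret key k_j per parity server.
    # Each block is XOR-masked with f_{k_j}(seed), where the seed binds the file
    # name, block index and version number, echoing the s_{ver,ij} notation.
    blinded = []
    for j, (vec, k_j) in enumerate(zip(parity, server_keys)):
        out = []
        for i, block in enumerate(vec):
            seed = f"{file_name}|i={i}|j={j}|ver={ver}".encode()
            out.append(block ^ prf_byte(k_j, seed))
        blinded.append(out)
    return blinded

def unblind_parity(blinded, server_keys, file_name="file", ver=0):
    # The mask is an XOR, so applying it again removes it.
    return blind_parity(blinded, server_keys, file_name, ver)

if __name__ == "__main__":
    parity = [[1, 2, 3, 4], [5, 6, 7, 8]]
    keys = [b"key-server-5", b"key-server-6"]
    b = blind_parity(parity, keys)
    print(b != parity, unblind_parity(b, keys) == parity)  # True True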
C. Performance Evaluation
1) File Distribution Preparation: We implemented the generation of parity vectors for our scheme under the field GF(2^8). Our experiment is conducted using C on a system with an Intel Core 2 processor running at 1.86 GHz, 2,048 MB of RAM, and a 7,200 RPM Western Digital 250 GB Serial ATA drive with an 8 MB buffer. We consider two sets of parameters for the (m + k, m) Reed-Solomon encoding. Table I shows the average encoding cost over 10 trials for an 8 GB file.
2) Challenge Token Pre-computation: Although in our scheme the number of verification tokens t is fixed a priori, i.e., determined before file distribution, we can overcome this issue by choosing a sufficiently large t in practice.
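For readers who want to reproduce the flavour of this measurement without the original C code, the sketch below times the toy GF(2^8) parity generation from the earlier illustration for two (m + k, m) parameter choices on a small buffer; the parameter values and helper names are arbitrary examples, and the absolute numbers will not match the paper's Table I.

import time

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return r

def gf_inv(a):
    r = 1
    for _ in range(254):
        r = gf_mul(r, a)
    return r

def encode_cost(m, k, size_bytes, trials=3):
    # Time parity generation for a buffer split into m data vectors.
    P = [[gf_inv(x ^ y) for y in range(m, m + k)] for x in range(m)]
    l = size_bytes // m
    data = [[(i * 7 + b) % 256 for b in range(l)] for i in range(m)]
    best = float("inf")
    for _ in range(trials):
        start = time.perf_counter()
        for j in range(k):
            row = [0] * l
            for i in range(m):
                coeff = P[i][j]
                for b in range(l):
                    row[b] ^= gf_mul(data[i][b], coeff)
        best = min(best, time.perf_counter() - start)
    return best

if __name__ == "__main__":
    for m, k in ((8, 2), (10, 4)):
        print(f"(m+k, m) = ({m + k}, {m}):", round(encode_cost(m, k, 32_000), 3), "s")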
VI. RELATED WORK
Juels et al. [3] described a formal "proof of retrievability" (POR) model for ensuring remote data integrity. Their scheme combines spot-checking and error-correcting code to ensure both possession and retrievability of files on archive service systems. Shacham et al. [4] built on this model and constructed a random-linear-function-based homomorphic authenticator, which enables an unlimited number of queries and requires less communication overhead. Bowers et al. [5] proposed an improved framework for POR protocols that generalizes both Juels's and Shacham's work. Later, in subsequent work, Bowers et al. [10] extended the POR model to distributed systems. However, all these schemes focus on static data. The effectiveness of their schemes rests primarily on the preprocessing steps that the user conducts before outsourcing the data file F. Any change to the contents of F, even of a few bits, must propagate through the error-correcting code, thus introducing significant computation and communication complexity.

VII. CONCLUSION
In this paper, we investigated the problem of data security in cloud data storage, which is essentially a distributed storage system. To ensure the correctness of users' data in cloud data storage, we proposed an effective and flexible distributed scheme with explicit dynamic data support, including block update, delete, and append. We rely on erasure-correcting code in the file distribution preparation to provide redundancy parity vectors and guarantee the data dependability. By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves the integration of storage correctness insurance and data error localization, i.e., whenever data corruption has been detected during the storage correctness verification across the distributed servers, we can almost guarantee the simultaneous identification of the misbehaving server(s). Through detailed security and performance analysis, we show that our scheme is highly efficient and resilient to Byzantine failure, malicious data modification attack, and even server colluding attacks.

ACKNOWLEDGEMENT
This work was supported in part by the US National Science Foundation under grants CNS-0831963, CNS-0626601, CNS-0716306, and CNS-0831628.

REFERENCES
[1] Amazon.com, "Amazon Web Services (AWS)," online at http://aws.amazon.com, 2008.
[2] N. Gohring, "Amazon's S3 down for several hours," online at http://www.pcworld.com/businesscenter/article/142549/amazons_s3_down_for_several_hours.html, 2008.
[3] A. Juels and J. Burton S. Kaliski, "PORs: Proofs of Retrievability for Large Files," Proc. of CCS '07, pp. 584–597, 2007.
[4] H. Shacham and B. Waters, "Compact Proofs of Retrievability," Proc. of Asiacrypt '08, Dec. 2008.
[5] K. D. Bowers, A. Juels, and A. Oprea, "Proofs of Retrievability: Theory and Implementation," Cryptology ePrint Archive, Report 2008/175, 2008, http://eprint.iacr.org/.
[6] G. Ateniese, R. Burns, R. Curtmola, J. Herring, L. Kissner, Z. Peterson, and D. Song, "Provable Data Possession at Untrusted Stores," Proc. of CCS '07, pp. 598–609, 2007.
[7] G. Ateniese, R. D. Pietro, L. V. Mancini, and G. Tsudik, "Scalable and Efficient Provable Data Possession," Proc. of SecureComm '08, pp. 1–10, 2008.
[8] T. S. J. Schwarz and E. L. Miller, "Store, Forget, and Check: Using Algebraic Signatures to Check Remotely Administered Storage," Proc. of ICDCS '06, pp. 12–12, 2006.
[9] M. Lillibridge, S. Elnikety, A. Birrell, M. Burrows, and M. Isard, "A Cooperative Internet Backup Scheme," Proc. of the 2003 USENIX Annual Technical Conference (General Track), pp. 29–41, 2003.
[10] K. D. Bowers, A. Juels, and A. Oprea, "HAIL: A High-Availability and Integrity Layer for Cloud Storage," Cryptology ePrint Archive, Report 2008/489, 2008, http://eprint.iacr.org/.
[11] L. Carter and M. Wegman, "Universal Hash Functions," Journal of Computer and System Sciences, vol. 18, no. 2, pp. 143–154, 1979.
[12] J. Hendricks, G. Ganger, and M. Reiter, "Verifying Distributed Erasure-coded Data," Proc. 26th ACM Symposium on Principles of Distributed Computing, pp. 139–146, 2007.
[13] J. S. Plank and Y. Ding, "Note: Correction to the 1997 Tutorial on Reed-Solomon Coding," University of Tennessee, Tech. Rep. CS-03-504, 2003.
[14] Q. Wang, K. Ren, W. Lou, and Y. Zhang, "Dependable and Secure Sensor Data Storage with Dynamic Integrity Assurance," Proc. of IEEE INFOCOM, 2009.
[15] R. Curtmola, O. Khan, R. Burns, and G. Ateniese, "MR-PDP: Multiple-Replica Provable Data Possession," Proc. of ICDCS '08, pp. 411–420, 2008.
[16] D. L. G. Filho and P. S. L. M. Barreto, "Demonstrating Data Possession and Uncheatable Data Transfer," Cryptology ePrint Archive, Report 2006/150, 2006, http://eprint.iacr.org/.
[17] M. A. Shah, M. Baker, J. C. Mogul, and R. Swaminathan, "Auditing to Keep Online Storage Services Honest," Proc. 11th USENIX Workshop on Hot Topics in Operating Systems (HotOS '07), 2007.