INTERNATIONAL JOURNAL OF COMPUTER ENGINEERING AND
TECHNOLOGY (IJCET)
ISSN 0976-6367 (Print)
ISSN 0976-6375 (Online)
Volume 5, Issue 7, July (2014), pp. 36-42
© IAEME
Journal Impact Factor (2014): 8.5328 (Calculated by GISI)
PRIVACY-PRESERVING PUBLIC AUDITING FOR SECURE CLOUD 
STORAGE 
Mr. Navanath Jadhav
M.Tech (CSE), Dept. of CSE, MLR Institute of Technology, Dundigal, Hyderabad, Telangana-500043
Mrs. L. Laxmi
Asst. Professor, Dept. of CSE, MLR Institute of Technology, Dundigal, Hyderabad, Telangana-500043
ABSTRACT 
IT has moved into the next generation with the realization of cloud computing, which has changed
the way application software and databases are stored: they now reside in cloud data centers, where
security is a concern from the client's point of view. This new paradigm, which stores and manages
data without capital investment, has brought many security challenges that are not yet thoroughly
understood. This paper focuses on the security and integrity of data stored in cloud data servers. Data
integrity verification is performed by a third party auditor (TPA) who is authorized to check the
integrity of the data periodically on behalf of the client.
The client receives notifications from the third party auditor when data integrity is lost. Besides
verification of data integrity, the proposed system also supports data dynamics. Prior work in this line
lacks data dynamics and true public auditability. The auditing task monitors data modifications,
insertions, and deletions, and the proposed system supports both public auditability and data
dynamics. The review of literature revealed the problems with existing systems, which is the
motivation for taking up this work. A Merkle Hash Tree is used to improve block-level authentication,
and a bilinear aggregate signature is used to handle auditing tasks simultaneously, enabling the TPA to
perform auditing concurrently for multiple clients. Hence, we present the evaluation of a multi-user
TPA system. The experiments reveal that the proposed system is efficient and secure.
Index Terms: Data Storage, Privacy-Preserving, Public Auditability, Cryptographic Protocols, 
Cloud Computing.
I. INTRODUCTION 
Cloud Computing, one of the next-generation IT enterprise paradigms, moves application
software and databases to centralized large data centers, where the management of the data and
services may not be fully trustworthy. Several trends are opening up the era of Cloud Computing,
which is an Internet-based development and use of computer technology. The ever cheaper and more 
powerful processors, together with the “software as a service” (SaaS) computing architecture, are 
transforming data centers into pools of computing service on a huge scale. Meanwhile, increasing
network bandwidth and reliable yet flexible network connections make it possible for clients to
subscribe to high-quality services from data and software that reside solely on remote data centers.
Although envisioned as a promising service platform for the Internet, the new data storage paradigm in 
“Cloud” brings about many challenging design issues which have profound influence on the security 
and performance of the overall system. One of the biggest concerns with cloud data storage is that of 
data integrity verification at untrusted servers. What is more serious is that for saving money and 
storage space the service provider might neglect to keep or deliberately delete rarely accessed data files 
which belong to an ordinary client. Considering the large size of the outsourced electronic data and
the client's constrained resource capability, the core of the problem can be generalized as follows: how
can the client efficiently perform periodic integrity verifications without a local copy of the data files?
Considering the role of the verifier in the model, all the schemes presented before fall into two
categories: private auditability and public auditability. Although schemes with private auditability can 
achieve higher scheme efficiency, public auditability allows anyone, not just the client (data owner), to 
challenge the cloud server for correctness of data storage while keeping no private information. Then, 
clients are able to delegate the evaluation of the service performance to an independent third party 
auditor (TPA), without devoting their own computation resources. In the cloud, the clients themselves
may be unreliable or unable to afford the overhead of performing frequent integrity checks.
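To make the auditing idea concrete, the following is a minimal Java sketch of Merkle-root-based block verification, one building block named in the abstract. It is an illustration under stated assumptions, not the paper's scheme: a real protocol would have the server return a single challenged block plus its sibling path, and would combine this with homomorphic authenticators so that the TPA never sees the data. All identifiers here are hypothetical.

import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;

/** Illustrative sketch: the client keeps only a Merkle root over its file
 *  blocks; integrity of returned blocks can be checked against that root. */
public class MerkleSketch {

    static byte[] sha256(byte[]... parts) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        for (byte[] p : parts) md.update(p);
        return md.digest();
    }

    /** Builds the Merkle root from a list of leaf hashes, level by level. */
    static byte[] root(List<byte[]> nodes) throws Exception {
        if (nodes.size() == 1) return nodes.get(0);
        List<byte[]> next = new ArrayList<>();
        for (int i = 0; i < nodes.size(); i += 2) {
            // Odd node out is paired with itself; parent = H(left || right).
            byte[] right = (i + 1 < nodes.size()) ? nodes.get(i + 1) : nodes.get(i);
            next.add(sha256(nodes.get(i), right));
        }
        return root(next);
    }

    public static void main(String[] args) throws Exception {
        // Client side: hash each outsourced block, keep only the short root.
        List<byte[]> leaves = new ArrayList<>();
        for (String block : new String[]{"block0", "block1", "block2", "block3"})
            leaves.add(sha256(block.getBytes()));
        byte[] clientRoot = root(leaves);

        // Audit: recompute the root from blocks the server returns and compare.
        // (A real protocol verifies one challenged block plus its sibling path.)
        System.out.println(java.util.Arrays.equals(clientRoot, root(leaves)));
    }
}

The point of the construction is that the client's local state is a single hash, independent of file size, which is what makes delegated periodic verification feasible for a resource-constrained client.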
II. LITERATURE SURVEY 
The purpose of this review is to report, evaluate, and discuss findings from prior research. A
particular focus of this review is on facilitating privacy-preserving public auditing for secure cloud
storage.
• Mehul A. Shah, Ram Swaminathan, and Mary Baker: A growing number of online services, such as
Google, Yahoo!, and Amazon, are starting to charge users for their storage. Customers often use these 
services to store valuable data such as email, family photos and videos, and disk backups. Today, a 
customer must entirely trust such external services to maintain the integrity of hosted data and return it 
intact. Unfortunately, no service is infallible. To make storage services accountable for data loss, we 
present protocols that allow a third party auditor to periodically verify the data stored by a service and 
assist in returning the data intact to the customer. Most importantly, our protocols are 
privacy-preserving, in that they never reveal the data contents to the auditor. Our solution removes the 
burden of verification from the customer, alleviates both the customer’s and storage service’s fear of 
data leakage, and provides a method for independent arbitration of data retention contracts. 
• Cong Wang, Qian Wang, Kui Ren, and Wenjing Lou: Cloud Computing has been envisioned as the
next generation architecture of IT Enterprise. In contrast to traditional solutions, where the IT services 
are under proper physical, logical and personnel controls, Cloud Computing moves the application 
software and databases to the large data centers, where the management of the data and services may 
not be fully trustworthy. This unique attribute, however, poses many new security challenges which 
have not been well understood. In this article, we focus on cloud data storage security, which has
International Journal of Computer Engineering and Technology (IJCET), ISSN 0976-6367(Print), 
ISSN 0976 - 6375(Online), Volume 5, Issue 7, July (2014), pp. 36-42 © IAEME 
always been an important aspect of quality of service. To ensure the correctness of users’ data in the 
cloud, we propose an effective and flexible distributed scheme with two salient features, in contrast to
its predecessors. By utilizing the homomorphic token with distributed verification of erasure-coded 
data, our scheme achieves the integration of storage correctness insurance and data 
error localization, i.e., the identification of misbehaving server(s). Unlike most prior works, the new
scheme further supports secure and efficient dynamic operations on data blocks, including: data 
update, delete and append. Extensive security and performance analysis shows that the proposed 
scheme is highly efficient and resilient against Byzantine failure, malicious data modification attack, 
and even server colluding attacks. 
• Giuseppe Ateniese, Roberto Di Pietro, Luigi V. Mancini, and Gene Tsudik: Storage
outsourcing is a rising trend which prompts a number of interesting security issues, many of which 
have been extensively investigated in the past. However, Provable Data Possession (PDP) is a topic 
that has only recently appeared in the research literature. The main issue is how to frequently, 
efficiently and securely verify that a storage server is faithfully storing its client's (potentially very
large) outsourced data. The storage server is assumed to be untrusted in terms of both security and 
reliability. (In other words, it might maliciously or accidentally erase hosted data; it might also relegate 
it to slow or off-line storage.) The problem is exacerbated by the client being a small computing device 
with limited resources. Prior work has addressed this problem using either public key cryptography or 
requiring the client to outsource its data in encrypted form. In this paper, we construct a highly 
efficient and provably secure PDP technique based entirely on symmetric key cryptography, while not 
requiring any bulk encryption. Also, in contrast with its predecessors, our PDP technique allows 
outsourcing of dynamic data, i.e., it efficiently supports operations, such as block modification, 
deletion and append. 
• C. Erway, A. Kupcu, C. Papamanthou, and R. Tamassia: As storage-outsourcing services and
resource-sharing networks have become popular, the problem of efficiently proving the integrity of 
data stored at untrusted servers has received increased attention. In the provable data possession (PDP) 
model, the client preprocesses the data and then sends it to an untrusted server for storage, while 
keeping a small amount of meta-data. The client later asks the server to prove that the stored data has 
not been tampered with or deleted (without downloading the actual data). However, the original PDP 
scheme applies only to static (or append-only) files. We present a definitional framework and efficient 
constructions for dynamic provable data possession (DPDP), which extends the PDP model to support 
provable updates to stored data. We use a new version of authenticated dictionaries based on rank 
information. The price of dynamic updates is a performance change from O(1) to O(log n) (or
O(n log n)), for a file consisting of n blocks, while maintaining the same (or better, respectively)
probability of misbehavior detection. Our experiments show that this slowdown is very low in practice
(e.g., 415 KB proof size and 30 ms computational overhead for a 1 GB file). We also show how to apply our
DPDP scheme to outsourced file systems and version control systems (e.g., CVS). 
• K.D. Bowers, A. Juels, and A. Oprea: We introduce HAIL (High-Availability and Integrity
Layer), a distributed cryptographic system that permits a set of servers to prove to a client that a stored
file is intact and retrievable. HAIL strengthens, formally unifies, and streamlines distinct approaches
from the cryptographic and distributed-systems communities. Proofs in HAIL are efficiently
computable by servers and highly compact, typically tens or hundreds of bytes, irrespective of file size.
HAIL cryptographically verifies and reactively reallocates file shares. It is robust against an active, 
mobile adversary, i.e., one that may progressively corrupt the full set of servers. We propose a strong, 
formal adversarial model for HAIL, and rigorous analysis and parameter choices. We show how HAIL 
improves on the security and efficiency of existing tools, like Proofs of Retrievability (PORs),
deployed on individual servers. We also report on a prototype implementation.
III. IMPLEMENTATION DETAILS 
3.1 Existing Work 
A most important cause of poor website design is that the web developers' perception of how a
website should be structured can be considerably different from that of the users. Such differences
result in cases where users cannot easily find the desired information in a website. This issue is
difficult to handle because, when creating a website, web developers may not have a clear
understanding of users' preferences and can only organize pages based on their own ideas.
Existing System Algorithm
In the existing system, the k-means algorithm is used for effective user navigation through website
structure improvement.
Input: an initial set of k means m1^(1), …, mk^(1).
Assignment step: Assign each observation to the cluster whose mean yields the least within-cluster
sum of squares (WCSS). Since the sum of squares is the squared Euclidean distance, this is intuitively
the nearest mean. (Mathematically, this means partitioning the observations according to the Voronoi
diagram generated by the means.) Each observation is assigned to exactly one cluster, even if it could
be assigned to two or more of them.
Update step: Calculate the new means to be the centroids of the observations in the new clusters.
Since the arithmetic mean is a least-squares estimator, this also minimizes the within-cluster sum of
squares (WCSS) objective.
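These two steps can be made concrete with a short sketch. The following is a minimal Java illustration (Java being the implementation language listed in Section 4.1); the array-based data layout, the fixed iteration count, and all identifiers are illustrative assumptions rather than the system's actual code.

import java.util.Arrays;

/** Minimal k-means sketch: the assignment and update steps described above. */
public class KMeansSketch {

    static double[][] kMeans(double[][] points, double[][] means, int iterations) {
        int k = means.length, dim = points[0].length;
        for (int iter = 0; iter < iterations; iter++) {
            // Assignment step: each observation goes to its nearest mean,
            // which minimizes the within-cluster sum of squares (WCSS).
            int[] assignment = new int[points.length];
            for (int i = 0; i < points.length; i++) {
                double bestDist = Double.MAX_VALUE;
                for (int j = 0; j < k; j++) {
                    double dist = squaredDistance(points[i], means[j]);
                    if (dist < bestDist) { bestDist = dist; assignment[i] = j; }
                }
            }
            // Update step: each new mean is the centroid of its cluster's points.
            double[][] sums = new double[k][dim];
            int[] counts = new int[k];
            for (int i = 0; i < points.length; i++) {
                counts[assignment[i]]++;
                for (int d = 0; d < dim; d++) sums[assignment[i]][d] += points[i][d];
            }
            for (int j = 0; j < k; j++)
                if (counts[j] > 0)
                    for (int d = 0; d < dim; d++) means[j][d] = sums[j][d] / counts[j];
        }
        return means;
    }

    static double squaredDistance(double[] a, double[] b) {
        double s = 0;
        for (int d = 0; d < a.length; d++) s += (a[d] - b[d]) * (a[d] - b[d]);
        return s;
    }

    public static void main(String[] args) {
        double[][] points = {{1, 1}, {1.5, 2}, {8, 8}, {9, 9}};
        double[][] means  = {{1, 1}, {9, 9}};   // k = 2 initial means m1^(1), m2^(1)
        System.out.println(Arrays.deepToString(kMeans(points, means, 10)));
    }
}

Alternating the two steps never increases the WCSS objective, which is why the iteration settles at a local optimum in practice.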
3.2 Proposed Work 
In this project we present and extend methods for the improvement of user navigation through
website structure. The current method deals with user navigation through the website structure, and it
delivers both the efficiency and the effectiveness of the proposed website-improvement methods.
However, this method suffers from a limitation: while creating a website, web developers do not have
a clear understanding of clients' requirements. Thus, the main aim of this project is to present
approaches that overcome this limitation. To the existing method we add a new algorithm that
efficiently improves user navigation through the website structure. For this purpose we use the CURE
clustering algorithm.
Algorithm 
Input: Datasets of real websites 
• Random sampling: To handle large data sets, we do random sampling and draw a sample data 
set. Generally the random sample fits in main memory. Also because of the random sampling there is 
a tradeoff between accuracy and efficiency. 
• Partitioning for speed up: The basic idea is to partition the sample space into p partitions. Each 
partition contains n/p elements. Then in the first pass partially cluster each partition until the final 
number of clusters reduces to n/pq for some constant q > 1. Then run a second clustering pass
on n/q partial clusters for all the partitions. For the second pass we only store the representative points 
since the merge procedure only requires representative points of previous clusters before computing 
the new representative points for the merged cluster. The advantage of partitioning the input is that we 
can reduce the execution times. 
• Labeling data on disk: Since we only have representative points for the k clusters, the remaining
data points must also be assigned to clusters. For this, a fraction of randomly selected representative
points for each of the k clusters is chosen, and each data point is assigned to the cluster containing the
representative point closest to it (see the sketch after this list).
Output: the set of web links that need to be redesigned and relinked.
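A minimal Java sketch of two of these stages, the in-memory random sampling and the final labeling pass, is given below. The hierarchical merge pass that actually produces each cluster's shrunken representative points (the core of CURE) is assumed to have run already; the data values and identifiers are hypothetical.

import java.util.*;

/** Sketch of two CURE stages: random sampling and the labeling pass that
 *  assigns every point to the cluster owning its closest representative. */
public class CureSketch {

    /** Stage 1: draw a random sample small enough to fit in main memory. */
    static double[][] randomSample(double[][] data, int sampleSize, Random rng) {
        double[][] copy = data.clone();                 // shallow copy of rows
        Collections.shuffle(Arrays.asList(copy), rng);  // shuffle row references
        return Arrays.copyOf(copy, sampleSize);
    }

    /** Final stage: label each point with the cluster whose representative
     *  point is nearest. reps[c] holds cluster c's representative points. */
    static int[] label(double[][] data, double[][][] reps) {
        int[] labels = new int[data.length];
        for (int i = 0; i < data.length; i++) {
            double best = Double.MAX_VALUE;
            for (int c = 0; c < reps.length; c++)
                for (double[] r : reps[c]) {
                    double d = squaredDistance(data[i], r);
                    if (d < best) { best = d; labels[i] = c; }
                }
        }
        return labels;
    }

    static double squaredDistance(double[] a, double[] b) {
        double s = 0;
        for (int j = 0; j < a.length; j++) s += (a[j] - b[j]) * (a[j] - b[j]);
        return s;
    }

    public static void main(String[] args) {
        double[][] data = {{0, 0}, {1, 0}, {10, 10}, {11, 9}};
        // Stage 1: draw a memory-resident sample (here, half the points).
        double[][] sample = randomSample(data, 2, new Random(42));
        System.out.println("sample size: " + sample.length);
        // Final stage: two clusters, each with two representative points,
        // assumed produced by the (omitted) hierarchical merge pass.
        double[][][] reps = {{{0, 0}, {1, 1}}, {{10, 10}, {11, 10}}};
        System.out.println(Arrays.toString(label(data, reps)));  // [0, 0, 1, 1]
    }
}

Because only representative points (not whole clusters) are kept for the second pass and the labeling pass, both the partitioned pre-clustering and the disk labeling stay within the memory budget set by the sample.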
IV. RESULTS 
4.1 Hardware and Software Used 
Hardware Configuration 
- Processor - Pentium IV, 2.6 GHz
- RAM - 512 MB DDR RAM
- Monitor - 15-inch color
- Hard Disk - 20 GB
- Keyboard - Standard Windows keyboard
Software Configuration 
- Operating System - Windows XP/7 
- Programming Language - Java 
- Database - MySQL 
- Tool - NetBeans
4.2 Results of Practical Work 
The practical evaluation compares the performance of individual auditing with that of batch auditing.
V. CONCLUSION 
This paper presents a comprehensive study of the improvement of user navigation through
website structure using the CURE algorithm. We use this algorithm to improve the navigation
effectiveness of a website while minimizing changes to its current structure. Tests on a real
website dataset showed that the CURE algorithm can provide significant improvements to user
navigation by adding only a few new links. Optimal solutions were obtained quickly, suggesting that
the CURE algorithm is very effective on real-world website datasets.
REFERENCES 
[1] M.A. Shah, R. Swaminathan, and M. Baker, "Privacy-Preserving Audit and Extraction of
Digital Contents," Cryptology ePrint Archive, 2008.
[2] Cong Wang, Qian Wang, Kui Ren, and Wenjing Lou, "Ensuring Data Storage Security in
Cloud Computing," 2009.
[3] G. Ateniese, R.D. Pietro, L.V. Mancini, and G. Tsudik, "Scalable and Efficient Provable Data
Possession," Proc. Fourth Int'l Conf. Security and Privacy in Comm. Networks (SecureComm
'08), pp. 1-10, 2008.
[4] C. Erway, A. Kupcu, C. Papamanthou, and R. Tamassia, "Dynamic Provable Data
Possession," Proc. 16th ACM Conf. Computer and Comm. Security (CCS '09), 2009.
[5] K.D. Bowers, A. Juels, and A. Oprea, "HAIL: A High-Availability and Integrity Layer for
Cloud Storage," Proc. 16th ACM Conf. Computer and Comm. Security (CCS '09), pp. 187-198,
2009.
[6] A. Konwinski, G. Lee, D.A. Patterson, A. Rabkin, I. Stoica, and M. Zaharia, "Above the
Clouds: A Berkeley View of Cloud Computing," University of California, Berkeley, Tech. Rep.,
2009.
[7] Cong Wang, Qian Wang, Kui Ren, and Wenjing Lou, "Privacy-Preserving Public Auditing
for Data Storage Security in Cloud Computing," 2010.
[8] A.L. Ferrara, M. Green, S. Hohenberger, and M. Pedersen, "Practical Short Signature Batch
Verification," Proc. CT-RSA, LNCS vol. 5473, Springer-Verlag, pp. 309-324, 2009.
