1. SECURE DATA TRANSMISSION AND
DELETION FROM BLOOM FILTER IN
CLOUD COMPUTING
BY
SANGUSAI KRISHNA - 19WJ1A05R3
SAPAVATH PAVAN - 19WJ1A05R4
SABAVATH SRIKANTH - 20WJ5A0525
INTERNAL GUIDE: DR.S.MADHU
3. Introduction:
Cloud computing, an emerging and very promising computing paradigm, connects
large-scale distributed storage resources, computing resources and network
bandwidth together. Using these resources, it can provide tenants with plenty of
high-quality cloud services.
Owing to these attractive advantages, cloud services (especially cloud storage) have
been widely adopted. Resource-constrained data owners can outsource their data to
the cloud server, which greatly reduces their local storage overhead.
To enjoy a more suitable cloud storage service, data owners might change cloud
storage service providers. Hence, they might migrate their outsourced data from one
cloud to another, and then delete the transferred data from the original cloud.
A Bloom filter is a probabilistic data structure used mainly for checking whether an
element exists in a set. It is space-efficient because it uses only bits to store
the data.
A Bloom filter is also time-efficient: adding and looking up an element both take
constant time. Bloom filters rely on hash functions to assign elements to slots.
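The slides do not include code, so the following is an illustrative Python sketch of a plain Bloom filter. The class name, the filter size `m`, the hash count `k`, and the SHA-256-based slot derivation are our own assumptions, not part of the project:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter sketch: m bits and k hash functions derived
    from SHA-256 by salting with the function index."""

    def __init__(self, m=1024, k=3):
        self.m = m              # number of bit slots
        self.k = k              # number of hash functions
        self.bits = [0] * m

    def _positions(self, item):
        # Derive k slot indices for the item.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def lookup(self, item):
        # True means "possibly present"; False means "definitely absent".
        return all(self.bits[pos] for pos in self._positions(item))
```

Note that a plain Bloom filter cannot delete elements: clearing a bit might also erase another element that hashes to the same slot.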
4. Existing System:
In existing approaches, securely migrating the data from one cloud to another and
permanently deleting the transferred data from the original cloud are primary
concerns of data owners.
In short, the cloud storage service is economically attractive, but it inevitably suffers
from serious security challenges, specifically regarding secure data transfer and deletion.
These challenges, if not solved suitably, might prevent the public from accepting and
employing cloud storage services.
Disadvantages of the Existing System:
■ Does not provide strong security guarantees.
■ Less efficient and practical.
■ Does not maintain public verifiability.
5. Proposed System:
In this paper, we construct a new counting Bloom filter (CBF)-based scheme.
The proposed scheme not only achieves secure data transfer but also realizes permanent data
deletion.
Additionally, the proposed scheme satisfies public verifiability without requiring any trusted third
party.
We aim to achieve verifiable data transfer between two different clouds and reliable data deletion in
cloud storage.
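A counting Bloom filter replaces each bit with a small counter, which is what makes deletion possible. The following Python sketch shows add, lookup and remove; the parameters and the SHA-256-based hashing are illustrative assumptions, not taken from the paper:

```python
import hashlib

class CountingBloomFilter:
    """Counting Bloom filter sketch: each slot holds a counter instead of a
    bit, so elements can be removed (a plain Bloom filter cannot do this)."""

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.counters = [0] * m

    def _positions(self, item):
        # Derive k slot indices for the item.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.counters[pos] += 1

    def remove(self, item):
        # Only decrement when the item may be present, to avoid underflow.
        if self.lookup(item):
            for pos in self._positions(item):
                self.counters[pos] -= 1

    def lookup(self, item):
        return all(self.counters[pos] > 0 for pos in self._positions(item))
```

Because shared slots keep a count above zero, removing one element does not erase another element that happens to collide with it.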
Advantages of the Proposed System:
■ We prove that our new proposal can satisfy the desired design goals through security analysis.
■ Our new proposal is more efficient and practical.
■ We briefly introduce the system framework, security challenges and security goals.
8. User Interface Design:
To connect with the server, the user must provide a username and password; only then
can they connect to the server. If the user already exists, they can log in to the
server directly; otherwise, the user must register details such as username, password,
email ID, city and country with the server. The database creates an account for each
user to track upload and download rates. The name is set as the user ID. Logging in is
usually required to enter a specific page. The server will process the query and
display the result.
10. Data Owner:
This is the second module of our project, covering the Data Owner's workflow. The Data
Owner has to register and log in with a valid username and password. After a successful
login, the Data Owner can perform operations such as viewing user details. To upload
data, the Data Owner clicks Upload Data; the storage status shows the memory used in
Cloud A. To view files in Cloud A, the Data Owner clicks View Files. To see the outcome
of a transfer or deletion, the Data Owner clicks Transfer Result / Delete Result, which
shows the files transferred to (or deleted from) Cloud B.
12. Cloud A:
This is the third module of our project, where Cloud A plays the main server role.
Cloud A enters its name and password to log in to the application; the credentials are
first verified in the database, and then the home page is displayed.
When Cloud A clicks View Users, it shows the registered Data Owners; when Cloud A
clicks Accept User Files, it shows the user files awaiting acceptance.
To view user files, Cloud A clicks View User Files. Clicking Transfer Request from DO
shows the transfer requests from Data Owners, which are sent to Cloud B to transfer
the files. Clicking Feedback shows the feedback messages from clients.
14. Cloud B:
This is the fourth module of our project, where Cloud B plays the other server role.
Cloud B enters its name and password to log in to the application; the credentials are
first verified in the database, and then the home page is displayed.
To view user files, Cloud B clicks View User Files in Cloud B.
To accept a transfer request, Cloud B clicks Transfer Request from Cloud A and
accepts it.
To view a delete request, Cloud B clicks Delete Request from Cloud A and accepts
it.
16. GIVEN INPUT, EXPECTED OUTPUT:
User Interface Design
Input: login name and password (Data Owner, Cloud A, Cloud B).
Output: If the username and password are valid, the home page opens directly; otherwise, an error message is
shown and the user is redirected to the registration page.
Data Owner
Input: Data Owner login name and password.
Output: If the name and password are valid, the Data Owner home page opens directly; otherwise, an error message
is shown. To upload data, the Data Owner clicks Upload Data; the storage status shows the memory used in Cloud A.
Cloud A
Input: email and password; all details are verified.
Output: Cloud A verifies all Data Owner requests, accepts the data, and sends results back to the Data Owner.
Cloud A also verifies all data statuses and Data Owner feedback.
Cloud B
Input: name and password, plus the stored data.
Output: If the Cloud B name and password are valid, the Cloud B home page opens directly, with options for the
stored resources. To accept a transfer request, Cloud B clicks Transfer Request from Cloud A and accepts it.
17. Data Flow Diagram / Use Case Diagram:
(Diagram: the Data Owner registers and logs in through the Database; the Data Owner
sends transfer and delete requests to Cloud A; Cloud A accepts Data Owner requests
and forwards transfer requests to Cloud B.)
19. Object Diagram:
(Diagram: objects Cloud A, Cloud B, DataOwner, Register, Login and Database with
their associations.)
20. State Diagram:
(Diagram: after Register/Login through the Database, the Data Owner can Upload Data,
view Storage Status, View Files and receive the Transfer/Delete Result; Cloud A can
View Users, Accept User Files, View User Files and handle Transfer/Delete Requests
from the Data Owner; Cloud B can View User Files in Cloud B and handle Transfer and
Delete Requests from Cloud A.)
21. Sequence Diagram:
Lifelines: Data Owner, Login, Home, Files, Transfer, Logout.
1 : login()
2 : verification()
3 : if fail()
4 : if success()
5 : upload file()
6 : success()
7 : file storage()
8 : view files()
9 : Transfer file()
10 : success()
11 : Delete request()
12 : success()
13 : Transfer or delete result()
14 : logout request()
15 : loggedout()
22. Collaboration Diagram:
Objects: Data Owner, Login, Home, Files, Transfer, Logout.
1 : login()
2 : verification()
3 : if fail()
4 : if success()
5 : upload file()
6 : file storage()
7 : view files()
8 : Transfer file()
9 : Delete request()
10 : Transfer or delete result()
11 : logout request()
12 : loggedout()
29. Conclusion:
In cloud storage, the data owner cannot fully trust the cloud server to execute
the data transfer and deletion operations honestly.
To solve this problem, we propose a CBF-based secure data transfer scheme,
which also realizes verifiable data deletion.
In our scheme, cloud B can check the integrity of the transferred data, which
guarantees that the data is migrated in its entirety.
Moreover, cloud A uses the CBF to generate a deletion evidence after deletion,
which the data owner uses to verify the deletion result.
Hence, cloud A cannot behave maliciously and cheat the data owner
successfully.
Finally, the security analysis and simulation results validate the security and
practicability of our proposal, respectively.
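One plausible way to realize the deletion evidence described above can be sketched as follows. This is not the scheme's actual protocol: the shared key, the HMAC construction, the filter parameters `M` and `K`, and all function names are illustrative assumptions. Cloud A returns a MAC over its counting-Bloom-filter state after the delete, and the owner checks both the MAC and that the deleted file's tag no longer hits the filter:

```python
import hashlib
import hmac

SECRET_KEY = b"shared-demo-key"   # hypothetical key shared by owner and cloud A
M, K = 1024, 3                    # assumed filter size and hash count

def positions(tag):
    # The K counting-Bloom-filter slot indices for a file tag.
    for i in range(K):
        d = hashlib.sha256(f"{i}:{tag}".encode()).digest()
        yield int.from_bytes(d[:8], "big") % M

def deletion_evidence(counters):
    # Cloud A's evidence: a MAC over the filter state after deletion.
    state = ",".join(map(str, counters)).encode()
    return hmac.new(SECRET_KEY, state, hashlib.sha256).hexdigest()

def owner_verifies(counters, tag, evidence):
    # The owner recomputes the MAC over the reported state and checks that
    # the deleted file's tag no longer appears in the filter.
    ok_mac = hmac.compare_digest(deletion_evidence(counters), evidence)
    gone = not all(counters[p] > 0 for p in positions(tag))
    return ok_mac and gone
```

Under this sketch, a cloud that skips the deletion cannot produce a valid evidence, because the tag would still be present in any honestly reported filter state.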