Automation at its next level; applications of fog computing; the need for fog computing; fog vs. cloud; the Internet of Things; fog vs. cloud vs. IoT; the existing cloud system; the proposed system; presentation conclusion.
Edge computing allows data produced by internet of things (IoT) devices to be processed closer to where it is created instead of sending it across long routes to data centers or clouds.
Doing this computing closer to the edge of the network lets organizations analyze important data in near real time, a need shared across many industries, including manufacturing, health care, telecommunications, and finance. Edge computing deployments are ideal in a variety of circumstances. One is when IoT devices have poor connectivity and it is not efficient for them to be constantly connected to a central cloud.
Other use cases have to do with latency-sensitive processing of information. Edge computing reduces latency because data does not have to traverse over a network to a data center or cloud for processing. This is ideal for situations where latencies of milliseconds can be untenable, such as in financial services or manufacturing.
Fog computing, also known as fogging or edge computing, is a model in which data, processing, and applications are concentrated in devices at the network edge rather than residing almost entirely in the cloud.
The term "fog computing" was introduced by Cisco Systems; it is an extension of cloud computing.
Edge computing places small processing units near the source of the data, between the sensors and the central data servers. It complements cloud computing systems by performing data processing at the edge of the network, near the data source; performing analytics and processing there reduces the communication bandwidth needed between the sensors and the data centre.
Through this presentation, you will get to know edge computing and explore the fields where it is needed. You can start by seeing what industries are working on nowadays.
Cloud storage provides convenient, massive, and scalable storage at low cost, but data privacy is a major concern that keeps users from trusting the cloud with their files.
DISTRIBUTED SCHEME TO AUTHENTICATE DATA STORAGE SECURITY IN CLOUD COMPUTING (IJCSIT)
Cloud computing is the revolution in the current generation of IT enterprise. It displaces databases and application software to large data centres, where the management of services and data may not be fully dependable, whereas conventional IT services are under proper logical, physical, and personnel controls. This attribute, however, introduces security challenges which have not been well understood. The work concentrates on cloud data storage security, which has always been an important aspect of quality of service (QoS). The paper designs and simulates an adaptable and efficient scheme to guarantee the correctness of user data stored in the cloud, with some prominent features: a homomorphic token is used for distributed verification of erasure-coded data, and the scheme can identify misbehaving servers. Unlike previous work, the scheme supports effective and secure dynamic operations on data blocks, such as insertion, deletion, and modification. The security and performance analysis shows that the proposed scheme is highly resilient against malicious data modification, complex failures, and server-colluding attacks.
The growth of the Internet of Things and wireless technology has led to enormous data generation for applications such as healthcare, scientific computing, and other data-intensive uses. Cloud-based Storage Area Networks (SANs) have been widely used in recent times for storing and processing these data. Providing fault-tolerant, continuous access to data with minimal latency and cost is challenging, so an efficient fault-tolerance mechanism is required. Data replication is an efficient fault-tolerance mechanism considered by existing methodologies; however, replica placement is challenging, and existing methods do not efficiently handle the dynamic application requirements of cloud-based SANs, incurring latency and therefore higher data-transmission cost. This work presents an efficient replica placement and transmission technique, Bipartite Graph based Data Replica Placement (BGDRP), that helps minimize latency and computing cost. The performance of BGDRP is evaluated using a real-time scientific application workflow; the outcome shows that BGDRP minimizes data access latency, computation time, and cost compared with state-of-the-art techniques.
Nowadays, data on the internet is growing into terabytes and exabytes, so there is a need to store these data, which has been fulfilled by cloud computing. Though cloud services appear efficient and cost-effective, cloud computing still faces challenges such as data security and authentication. In cloud storage, the owner's data is stored on remotely located cloud servers, so the owner has no direct control over the data. If the data in the cloud is modified by the cloud provider, a Third Party Auditor (TPA), or anyone else, there is no guarantee that the owner learns of the modification. A TPA is a Third Party Auditor experienced in checking data integrity; the TPA verifies whether files stored in the cloud have been modified. Our scheme addresses this problem so that, if there is any modification to the data, the owner is informed of the change. The scheme only provides information about the change; it does not keep the data intact or secure it from modification in the cloud.
Data Back-Up and Recovery Techniques for Cloud Server Using Seed Block Algorithm (IJERA)
In cloud computing, large amounts of data are generated in electronic form, and maintaining this data efficiently requires data recovery services. To cater to this, we propose a smart remote data backup algorithm, the Seed Block Algorithm. The objective of the proposed algorithm is twofold: first, it helps users collect information from any remote location in the absence of network connectivity; second, it recovers files in case of file deletion or if the cloud is destroyed for any reason. Time-related issues are also addressed, so that the recovery process takes minimum time. The proposed algorithm also focuses on security for the backup files stored at the remote server, without using any existing encryption techniques.
1. A
Seminar Report
On
“SEED BLOCK ALGORITHM: A REMOTE SMART DATA
BACKUP TECHNIQUE FOR CLOUD COMPUTING”
Submitted By
MR. BADHE DIPAK S.
T.E. [COMPUTER ENGINEERING]
Guided By
PROF. BOMBALE G.R.
DEPARTMENT OF COMPUTER ENGINEERING
Jagdamba Education Society’s
S.N.D.College of Engineering and Research Centre,
Babhulgaon, Yeola-423401
2. For Academic Year
2013-2014
Jagdamba Education Society’s
S.N.D.College of Engineering and Research Centre,
Babhulgaon, Yeola-423401
2013-2014
CERTIFICATE
This is to certify that the seminar report titled “SEED BLOCK ALGORITHM:
A REMOTE SMART DATA BACKUP TECHNIQUE FOR CLOUD COMPUTING”
has been delivered by MR. BADHE DIPAK S. in fulfillment of the requirements leading to the
Bachelor's Degree in COMPUTER ENGINEERING awarded by,
UNIVERSITY OF PUNE
2013 - 2014
Prof. CHUNCHURE B.S.
Head of Department
COMPUTER ENGINEERING
Prof. BHANDARE M.G.
Seminar Co-ordinator,
Department of
COMPUTER ENGINEERING
Prof. BOMBALE G.R.
Seminar Guide,
Department of
COMPUTER ENGINEERING
3. ACKNOWLEDGEMENT
An act of gratitude is one which acknowledges the blessings of well-wishers and the
supporting guidance of their rich experience, which enlightens, inspires, and motivates
one to do something valuable.
I have great pleasure in presenting the seminar "Seed Block Algorithm:
A Remote Smart Data Backup Technique for Cloud Computing", which is related to
security in system reporting. I sincerely acknowledge the kind-hearted cooperation of
my seminar guide, Prof. Bombale G.R., and seminar coordinator, Prof. Bhandare M.G.
There were times during my seminar when I was running short of time and energy,
which made me think about finishing the seminar once and for all. My seminar guide
helped me endure such times with his expert guidance, kind advice, and unfailing
humor, which kept me determined about my seminar. He has been my beacon light
throughout, guiding me precisely.
I also express my deepest thanks to our HOD, Prof. Chunchure B.S., whose
benevolent help in making the laboratory facilities available to me made this seminar
a true success; without his kind and keen cooperation, my seminar would have come
to a standstill. Last but not least, I would like to thank and acknowledge the kind
support and facilities specially made available to me by my institute, S.N.D. College
of Engineering & Research Center, Yeola. I could not have completed this work
without their wholehearted support.
BY:
MR. BADHE DIPAK S.
ROLL NO: 04
EXAM SEAT: T80504231
T.E. COMPUTER
4. ABSTRACT
In cloud computing, large amounts of data are generated in electronic form, and
maintaining this data efficiently requires data recovery services. To cater to this,
in this paper we propose a smart remote data backup algorithm, the Seed Block Algorithm
(SBA). The objective of the proposed algorithm is twofold: first, it helps users collect
information from any remote location in the absence of network connectivity; second,
it recovers files in case of file deletion or if the cloud is destroyed for any reason.
Time-related issues are also addressed by the proposed SBA, so that the recovery process
takes minimum time. The proposed SBA also focuses on security for the backup files
stored at the remote server, without using any existing encryption techniques.
5. INDEX
CHAPTER NO.   NAME OF TOPIC                          PAGE NO.
1             ABSTRACT                               IV
2             INTRODUCTION                           1
3             LITERATURE REVIEW                      3
4             REMOTE DATA BACKUP SERVER              6
5             DESIGN OF SEED BLOCK ALGORITHM         8
6             EXPERIMENTATION AND RESULT ANALYSIS    11
7             SYSTEM CONFIGURATION                   14
8             ADVANTAGES                             15
9             FUTURE SCOPE                           16
              CONCLUSION                             VII
              REFERENCES                             VIII
6. LIST OF FIGURES
FIG. NO.   TITLE                                              PAGE NO.
1          Remote Data Backup Server and its Architecture     6
2          Seed Block Algorithm Architecture                  8
3          Graph Showing Processor Utilization                13
4          Sample Output Image of Seed Block Algorithm        13
LIST OF TABLES
TABLE NO.  TITLE                                              PAGE NO.
I          Performance analysis for different types of files  11
II         Effect of data size on processing time             12
7. CHAPTER-01
INTRODUCTION
The National Institute of Standards and Technology defines cloud computing as a model
for enabling convenient, on-demand network access to a shared pool of configurable
computing resources (for example, networks, servers, storage, applications, and
services) that can be rapidly provisioned and released with minimal management effort
or service-provider interaction. Today, cloud computing is itself a gigantic technology
which is surpassing all previous computing technologies (such as cluster, grid, and
distributed computing) in this competitive and challenging IT world. The need for cloud
computing is increasing day by day, as its advantages overcome the disadvantages of
various earlier computing techniques. Cloud storage provides online storage in which
data is stored in the form of a virtualized pool that is usually hosted by third
parties. The operating company runs large data centers, and according to customer
requirements these data centers virtualize their resources and expose them as storage
pools that help users store files or data objects.
As many users share the storage and other resources, it is possible for other
customers to access your data. Human error, faulty equipment, network connectivity
problems, a bug, or criminal intent may put cloud storage at risk. Changes in the cloud
are also made very frequently; we can term this data dynamics. Data dynamics is
supported by various operations such as insertion, deletion, and block modification.
Since services are not limited to archiving and backing up data, remote data integrity
is also needed, because data integrity focuses on the validity and fidelity of the
complete state of the server that takes care of the heavily generated data, which must
remain unchanged during storage at the main cloud remote server and during
transmission. Integrity plays an important role in backup and recovery services.
In the literature, many techniques have been proposed, such as HSDRT, PCS, ERGOT,
Linux Box, and the Cold/Hot backup strategy, that discuss the data recovery process.
However, various otherwise successful techniques still lag behind on critical issues
such as implementation complexity, cost, security, and time. To cater to these issues,
in this paper we propose a smart remote data backup algorithm, the Seed Block
Algorithm (SBA). The contribution of the proposed SBA is twofold: first, SBA helps
users collect information from any remote location in the absence of network
connectivity; second, it recovers files in case of file deletion or if the cloud gets
destroyed for any reason.
9. CHAPTER-02
LITERATURE REVIEW
In the literature, we studied the most recent backup and recovery techniques
developed in the cloud computing domain, such as HSDRT, PCS, ERGOT, Linux Box, and
the Cold/Hot backup strategy. A detailed review shows that none of these techniques
is able to provide the best performance under all circumstances, considering cost,
security, implementation complexity, redundancy, and recovery in a short span of
time.
PCS (Parity Cloud Service) is comparatively reliable among all these techniques:
simple, easy to use, convenient for data recovery, and based entirely on a parity
recovery service. It can recover data with very high probability. For data recovery,
it generates a virtual disk in the user's system for data backup, makes parity groups
across virtual disks, and stores the parity data of each parity group in the cloud.
It uses the Exclusive-OR (XOR) operation for creating parity information. However, it
is unable to control the implementation complexity.
HSDRT (High Speed Data Rate Transfer) has emerged as an efficient technique for
movable clients such as laptops and smartphones; nevertheless, it fails to keep the
cost of implementing recovery low and is also unable to control data duplication.
HS-DRT is an innovative file backup concept which makes use of an effective,
ultra-widely distributed data transfer mechanism and a high-speed encryption
technology. The system follows two sequences: a backup sequence and a recovery
sequence. In the backup sequence, it receives the data to be backed up; in the
recovery sequence, when a disaster occurs or periodically, the Supervisory Server
(one of the components of HSDRT) starts recovery. However, there are some limitations
in this model, and therefore it cannot quite be declared a perfect solution for
backup and recovery.
ERGOT (Efficient Routing Grounded on Taxonomy) is based entirely on semantic
analysis and does not address time or implementation complexity. It is a
semantic-based system which supports service discovery in cloud computing. Although
it is not a backup technique, we examined it because it provides efficient retrieval
of data, completely based on the semantic similarity between service descriptions and
service requests. ERGOT is built upon three components: 1) a DHT (Distributed Hash
Table) protocol; 2) a SON (Semantic Overlay Network); and 3) a measure of semantic
similarity among service descriptions [4]. ERGOT combines both network concepts: by
building a SON over a DHT, it enables semantic-driven query answering in DHT-based
systems. However, it does not go well with semantic similarity search models.
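ERGOT's third component, a semantic-similarity measure between service descriptions and service requests, can be illustrated with a simple stand-in. ERGOT's real measure is taxonomy-based; the cosine similarity over term counts below is only an assumption-laden illustration of matching a request to a description by similarity score.

```python
# Illustrative stand-in for a similarity measure between a service
# description and a service request (ERGOT's actual measure is
# taxonomy-based; this uses cosine similarity over word counts).
from collections import Counter
from math import sqrt

def cosine_similarity(text_a: str, text_b: str) -> float:
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[t] * b[t] for t in a)       # Counter returns 0 for missing terms
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

score = cosine_similarity("remote data backup service",
                          "backup service for remote data")
print(score)  # ≈ 0.894: high overlap between request and description
```

A discovery system would rank candidate service descriptions by such a score and answer the request with the best matches.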
LINUX BOX embodies a very simple concept of data backup and recovery at very low
cost; however, its protection level is low. It also makes migration from one cloud
service provider to another very easy, and it is affordable for consumers and Small
and Medium Businesses (SMBs). This solution eliminates the consumer's dependency on
the ISP and its associated backup cost. A simple Linux box syncs data at the
block/file level from the cloud service provider to the consumer, running an
application that backs up the cloud onto local drives; the data transmission is
secure and encrypted. The limitation we found is that a consumer backs up not only
the data but syncs the entire virtual machine [5], which wastes bandwidth, because
every backup copies the entire virtual machine.
Similarly, we found a technique that focuses on significant cost reduction and the
router-failure scenario, SBBR. It concerns IP logical connectivity, which remains
unchanged even after a router failure, and, most importantly, it provides network
management via multi-layer signaling. Additionally, it shows how service-imposed
maximum-outage requirements have a direct effect on the SBBR architecture (e.g.,
imposing a minimum number of network-wide shared router-resource locations).
However, it does not include an optimization concept along with the cost reduction.
With an entirely new concept of virtualization, the REN (Research Education
Network) cloud focuses on low-cost infrastructure, at the price of complex
implementation and a low security level. From the lowest-cost point of view, we found
a model, "Rent Out the Rented Resources", whose goal is to reduce the monetary cost
of cloud services. It proposes a three-phase model for cross-cloud federation:
discovery, matchmaking, and authentication. This model is based on the concept of a
cloud vendor that rents resources from venture(s) and, after virtualization, rents
them to clients in the form of cloud services.
The COLD/HOT BACKUP STRATEGY performs backup and recovery triggered by failure
detection. In the Cold Backup Service Replacement Strategy (CBSRS), recovery is
triggered upon detection of a service failure and is not triggered while the service
is available. In the Hot Backup Service Replacement Strategy (HBSRS), a
transcendental recovery strategy for service composition in dynamic networks is
applied: during execution, the backup services always remain in an activated state,
and the first returned results of the services are adopted to ensure the successful
completion of the service composition. In summary, no single backup solution in
cloud computing addresses all the issues of a remote data backup server, and given
the high applicability of backup processes in companies, the role of a remote data
backup server is crucial and a hot research topic.
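The difference between the two trigger models above can be sketched minimally (the function names are illustrative, not from CBSRS/HBSRS): the cold strategy engages a replacement only after a failure is detected, while the hot strategy keeps replicas activated and adopts the first successful result.

```python
# Minimal sketch of cold vs. hot backup triggering (names illustrative).

def call_cold(primary, backup):
    """Cold (CBSRS-style): backup is triggered only on failure detection."""
    try:
        return primary()
    except RuntimeError:       # failure detected -> trigger recovery
        return backup()

def call_hot(replicas):
    """Hot (HBSRS-style): replicas stay active; adopt the first result."""
    for svc in replicas:
        try:
            return svc()
        except RuntimeError:
            continue           # this replica failed; try the next
    raise RuntimeError("all replicas failed")

def primary_down():
    raise RuntimeError("service down")

def replica_ok():
    return "result"

assert call_cold(primary_down, replica_ok) == "result"
assert call_hot([primary_down, replica_ok]) == "result"
```

In a real hot strategy the replicas run concurrently and the first response wins; the sequential loop here only illustrates the adoption of the first successful result.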
12. CHAPTER-03
REMOTE DATA BACKUP SERVER OF SBA
When we talk about a backup server for the main cloud, we think only of a copy of
the main cloud. When this backup server is at a remote location (i.e., far away from
the main server) and holds the complete state of the main cloud, this remote server
is termed a Remote Data Backup Server. The main cloud is termed the central
repository and the remote backup cloud the remote repository.
Fig.1: Remote Data Backup Server and its Architecture.
If the central repository loses its data under any circumstances, whether by
natural calamity (for example, earthquake, flood, or fire), human attack, or
accidental deletion, it then uses the information from the remote repository.
13. The main objective of the remote backup facility is to help users collect
information from any remote location even if network connectivity is unavailable or
the data is not found on the main cloud. As shown in Fig. 1, clients are allowed to
access files from the remote repository (i.e., indirectly) if the data is not found
in the central repository.
The remote backup service should cover the following issues:
3.1 DATA INTEGRITY:-
Data integrity is concerned with the complete state and the whole structure of the
server. It verifies that data remains unaltered during transmission and reception,
and it is the measure of the validity and fidelity of the data present on the server.
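One common way to verify that data remains unaltered in transit is to compare a cryptographic digest computed before and after transfer. This is an illustrative sketch only; the report does not prescribe a specific integrity mechanism.

```python
# Illustrative integrity check: compare SHA-256 digests before and
# after transmission (not a mechanism prescribed by the report).
import hashlib

def digest(data: bytes) -> str:
    """Hex digest of the data; any alteration changes the digest."""
    return hashlib.sha256(data).hexdigest()

original = b"client file stored on the remote backup server"
sent_digest = digest(original)           # recorded before transmission

received = original                      # what arrives after transmission
assert digest(received) == sent_digest   # integrity holds
assert digest(b"tampered copy") != sent_digest
```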
3.2 DATA SECURITY:-
Giving full protection to the client's data is also the utmost priority for the
remote server. Whether intentionally or unintentionally, the data should not be
accessible to third parties or to any other users/clients.
3.3 DATA CONFIDENTIALITY:-
Sometimes a client's data files should be kept confidential: if a number of users
access the cloud simultaneously, the data files personal to a particular client must
be hidden from the other clients during file access.
3.4 TRUSTWORTHINESS:-
The remote cloud must possess the trustworthiness characteristic. Because
users/clients store their private data on it, both the cloud and the remote backup cloud
must play a trustworthy role.
3.5 COST EFFICIENCY:-
The cost of the data recovery process should be efficient so that the maximum number of
companies/clients can take advantage of the backup and recovery service.
CHAPTER-04
DESIGN OF SEED BLOCK ALGORITHM
As discussed in the literature, many techniques have been proposed for recovery and
backup, such as HSDRT, PCS, ERGOT, Linux Box, and Cold/Hot backup strategies. As
discussed above, low implementation complexity, low cost, security, and time-related
issues are still challenging in the field of cloud computing. To tackle these issues we
propose the SBA algorithm, and in the forthcoming section we discuss the design of the
proposed SBA in detail.
4.1 SEED BLOCK ALGORITHM (SBA) ARCHITECTURE:-
This algorithm focuses on the simplicity of the backup and recovery process. It
basically uses the Exclusive-OR (XOR) operation of the computing world.
For example, suppose there are two data files, A and B. When we XOR A and B, it produces
X, i.e. X = A ⊕ B. If the data file A gets destroyed and we want it back, then it is
very easy to recover it with the help of the B and X data files, i.e. A = X ⊕ B.
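The XOR recovery property described above can be checked in a few lines of Python (a minimal illustration; the byte values of A and B are arbitrary examples, not data from the experiments):

```python
# Two example data files, represented as byte strings of equal length.
A = bytes([0x12, 0x34, 0x56, 0x78])
B = bytes([0xAB, 0xCD, 0xEF, 0x01])

# X = A XOR B, computed byte-wise.
X = bytes(a ^ b for a, b in zip(A, B))

# If A is destroyed, it can be rebuilt from X and B: A = X XOR B.
recovered_A = bytes(x ^ b for x, b in zip(X, B))
assert recovered_A == A  # XOR is its own inverse
```

Because XOR is its own inverse, keeping X and either one of the two files is enough to reconstruct the other.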
Fig.2: Seed Block Algorithm Architecture
Similarly, the Seed Block Algorithm works to provide a simple backup and
recovery process. Its architecture, shown in Fig-2, consists of the main cloud with its
clients and the remote server. First, we set a random number in the cloud and a
unique client id for every client. Second, whenever a client id is registered in the
main cloud, the client id and the random number are EXOR-ed (⊕) with each other to
generate the seed block for that particular client. The seed block generated for
each client is stored at the remote server.
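The registration step can be sketched as follows (a minimal sketch: the 16-byte block size, the use of `os.urandom` for the cloud's random number, and the expansion of the integer client id to the block size are assumptions for illustration, not part of the original design):

```python
import os

BLOCK_SIZE = 16  # assumed fixed seed-block length in bytes

# The cloud's random number, set once (hypothetical generation via os.urandom).
r = os.urandom(BLOCK_SIZE)

def make_seed_block(client_id: int, r: bytes) -> bytes:
    """EXOR the client id with the cloud's random number to form the client's seed block."""
    id_bytes = client_id.to_bytes(BLOCK_SIZE, "big")  # expand the id to the block size
    return bytes(i ^ x for i, x in zip(id_bytes, r))

# On registration, the seed block for each client is stored at the remote server.
remote_seed_store = {cid: make_seed_block(cid, r) for cid in (1, 2, 3)}
```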
Whenever a client creates a file in the cloud for the first time, it is stored at the main cloud.
When it is stored in the main server, the client's main file is EXOR-ed with the seed
block of that particular client, and the EXOR-ed file is stored at the remote server as
file' (pronounced "file dash"). If the file in the main cloud is crashed/damaged or is
deleted by mistake, then the user can get the original file back by EXOR-ing file' with
the seed block of the corresponding client, and the resulting original file is returned
to the requesting client. The architectural representation of the Seed Block Algorithm
is shown in Fig.2.
4.2 SEED BLOCK ALGORITHM (SBA):-
The proposed SBA algorithm is as follows:
Algorithm:
Initialization:
    Main Cloud: Mc;
    Remote Server: Rs;
    Clients of Main Cloud: Ci;
    Files: a1 and a1';
    Seed block: Si;
    Random Number: r;
    Client's ID: Client_idi
Input:
    a1 created by Ci; r is generated at Mc
Output:
    Recovered file a1 after deletion at Mc
Given:
    Authenticated clients are allowed to upload, download and modify only their own files.
Step 1: Generate a random number.
        int r = rand( )
Step 2: Create a seed block Si for each Ci and store Si at Rs.
        (Repeat Step 2 for all clients.)
Step 3: If Ci/Admin creates/modifies a1 and stores it at Mc, then a1' is created as
        a1' = a1 ⊕ Si
Step 4: Store a1' at Rs.
Step 5: If the server crashes or a1 is deleted from Mc, then we EXOR to retrieve the original a1 as
        a1 = a1' ⊕ Si
Step 6: Return a1 to Ci.
Step 7: END.
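The steps above can be sketched as a runnable Python program. This is a minimal sketch under stated assumptions: files are byte strings, the seed block Si is formed as r ⊕ Client_idi as described in Section 4.1, and the seed block is repeated keystream-style to cover files longer than one block (the original does not specify how longer files are handled):

```python
import itertools
import os

BLOCK_SIZE = 16  # assumed seed-block length in bytes

def xor_with_seed(data: bytes, seed: bytes) -> bytes:
    """EXOR data with the seed block, repeating the seed to match the data length."""
    return bytes(d ^ s for d, s in zip(data, itertools.cycle(seed)))

# Step 1: the main cloud Mc generates a random number r.
r = os.urandom(BLOCK_SIZE)

main_cloud = {}            # Mc: client_id -> original file a1
remote_server_seeds = {}   # Rs: client_id -> seed block Si
remote_server_files = {}   # Rs: client_id -> a1' (EXOR-ed backup)

def register(client_id: int) -> None:
    # Step 2: Si = r XOR Client_id, stored at Rs.
    id_bytes = client_id.to_bytes(BLOCK_SIZE, "big")
    remote_server_seeds[client_id] = bytes(a ^ b for a, b in zip(r, id_bytes))

def store_file(client_id: int, a1: bytes) -> None:
    # Steps 3-4: store a1 at Mc and a1' = a1 XOR Si at Rs.
    main_cloud[client_id] = a1
    remote_server_files[client_id] = xor_with_seed(a1, remote_server_seeds[client_id])

def recover_file(client_id: int) -> bytes:
    # Steps 5-6: rebuild a1 = a1' XOR Si and return it to the client.
    return xor_with_seed(remote_server_files[client_id], remote_server_seeds[client_id])

register(1)
store_file(1, b"client 1's private data")
del main_cloud[1]  # simulate deletion/crash at the main cloud
assert recover_file(1) == b"client 1's private data"
```

Note that the backup file a1' stored at the remote server is not the plain file: without the seed block it reads as scrambled bytes, which is the security property the report claims for SBA.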
CHAPTER-05
EXPERIMENTATION AND RESULT ANALYSIS
5.1 Performance analysis for different types of files:-
During experimentation, we found that the size of the original data file stored at the main
cloud is exactly the same as the size of the backup file stored at the remote server, as
depicted in Table-I. To make this fact plausible, we performed this experiment for different
types of files. The results tabulated in Table-I show that the proposed SBA
is robust in keeping the size of the recovered file the same as that of the original data
file. From this we conclude that the proposed SBA recovers the data file without any data
loss.
Type                               Size of Main     Size of Backup     Size of Recovered
                                   File in Server   File in Remote     File After Every
                                                    Server             Process
Text (.txt/.doc/.docx/.xl/.pdf)    434 KB           434 KB             434 KB
                                   2.5 MB           2.5 MB             2.5 MB
Image (.jpeg/.gif/.png/.bitmap)    80 KB            80 KB              80 KB
                                   4 MB             4 MB               4 MB
Table-I: Performance analysis for different types of files
5.2 Effect of data size on processing time:-
Processing time means the time taken by the process when a client uploads a file to the
main cloud. This includes assembling the data needed for the EXOR operation, i.e. the
random number from the main cloud and the seed block of the corresponding client from
the remote server; performing the EXOR operation on the contents of the uploaded file
with the seed block; and finally storing the EXOR-ed file on the remote server. The
performance of this experiment is tabulated in Table-II. We observed that as the data
size increases, the processing time increases. On the other hand, we also found that the
performance in megabytes per second (MB/sec) becomes constant at a certain level even
as the data size increases, as shown in Table-II.
Practical      Processing Time on     Processing Time on     Performance
Data Size      Main Cloud             Remote Cloud           (MB/sec)
               (sec, approx.)         (sec, approx.)
1 KB           6.76                   2                      150
64 KB          12.8                   3                      160
2 MB           3600                   5                      164
32 MB          8400                   8                      250
1 GB           16200                  15                     280
4 GB           32100                  35                     280
6 GB           52000                  45                     280
Table-II: Effect of data size on processing time
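The trend in Table-II, throughput leveling off as data size grows, can be probed with a small timing sketch. This is illustrative only: the 16-byte seed, the repeated-seed XOR, and the chosen sizes are assumptions, and the absolute MB/sec figures depend entirely on the machine, so no expected output is claimed:

```python
import itertools
import os
import time

seed = os.urandom(16)  # hypothetical 16-byte seed block

def xor_backup(data: bytes) -> bytes:
    # EXOR the data with the seed block, repeating the seed over the data
    # as in the backup step.
    return bytes(d ^ s for d, s in zip(data, itertools.cycle(seed)))

for size in (64 * 1024, 1024 * 1024, 4 * 1024 * 1024):
    data = os.urandom(size)
    start = time.perf_counter()
    xor_backup(data)
    elapsed = time.perf_counter() - start
    print(f"{size // 1024} KB: {size / (1024 * 1024) / elapsed:.1f} MB/sec")
```

Because the per-byte XOR cost is constant, total processing time grows roughly linearly with file size while MB/sec settles toward a plateau, which matches the shape of Table-II.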
5.3 Graph Showing Processor Utilization:-
Fig-3 shows the CPU utilization at the main cloud and the remote server. As shown in
Fig-3, the main cloud's CPU utilization starts at 0% and increases as the client uploads
a file onto it, since the cloud has to check whether the client is authenticated; at the
same time it sends a request to the remote server for the corresponding seed block. When
the request reaches the remote server, the remote server starts collecting the details as
well as the seed block and responds with the seed block; during this period the load at
the main cloud decreases, which in turn causes a gradual decrease in CPU utilization at
the main cloud. After receiving the requested data, the CPU utilization at the main
cloud increases again, as it has to perform the EXOR operation. The final EXOR-ed file
is then sent to the remote server. The processing times given in Table-II can be
compared with the times shown in Fig-3.
Fig.3: Graph Showing Processor Utilization
5.4 Sample Output Image of Seed Block Algorithm:-
Fig-4 shows the experimental results of the proposed SBA. Fig-4(a) shows the original
file uploaded by the client to the main cloud. Fig-4(b) shows the EXOR-ed file stored on
the remote server; this file contains the secured EXOR-ed content of the original file
and the seed block content of the corresponding client. Fig-4(c) shows the recovered
file, which is sent (indirectly) to the client in the absence of network connectivity,
in case of file deletion, or if the cloud gets destroyed for any reason.
a) Original file    b) XOR-ed file    c) Recovered file
Fig.4: Sample Output Image of Seed Block Algorithm
CHAPTER-06
SYSTEM CONFIGURATION
H/W System Configuration:-
Processor : Pentium-III
Speed : 1.1 GHz
RAM : 256 MB (min)
Hard Disk : 20 GB
Floppy Drive : 1.44 MB
Key Board : Standard Windows Keyboard
Mouse : Two or Three Button Mouse
Monitor : SVGA
S/W System Configuration:-
Operating System : Windows95/98/2000/XP
Application Server : Tomcat5.0/6.X
Front End : HTML, Java, Jsp
Scripts : JavaScript.
Server side Script : Java Server Pages.
Database : MySQL
Database Connectivity : JDBC.
CHAPTER-07
ADVANTAGES
Recover Same Size of Data:-
The size of the original data file stored at the main cloud is exactly the same as the
size of the backup file stored at the remote server. To make this fact plausible, we
performed this experiment for different types of files. The experiment shows that the
proposed SBA is robust in keeping the size of the recovered file the same as that of the
original data file. From this we conclude that the proposed SBA recovers the data file
without any data loss.
Low Cost:-
The cost of the data recovery process should be efficient so that the maximum number of
companies/clients can take advantage of the backup and recovery service.
Privacy:-
Giving full protection to the client's data is the utmost priority for the remote
server. The data should not be accessible, either intentionally or unintentionally, by
any third party or other users/clients.
Requires less time to access the data:-
As the data size increases, the processing time increases. On the other hand, we found
that the performance in megabytes per second (MB/sec) becomes constant at a certain
level even as the data size increases.
Maintenance:-
Maintenance of cloud computing applications is easier, because they do not need to be
installed on each user's computer and can be accessed from different places.
CHAPTER-08
FUTURE SCOPE
DATA CHALLENGES: STREAMING, MULTIMEDIA, BIG DATA:-
CLOUDs, following web service development, still have a very classical client-server-like
organization: the services and applications on the CLOUD are invoked for a specific
request, which is processed remotely, and the result is returned to the requesting agent.
A growing number of use cases require interactivity with the service, even with multiple
users at the same time, and potentially involve many different multimedia streams,
including e.g. voice, video chat, and live virtualization.
ECONOMIC OPPORTUNITIES & EXPERTISE GAPS:-
It has been noted multiple times in the preceding section that relevant expertise for
supporting CLOUD uptake is needed in various contexts. This lack of knowledge hinders
uptake to the extent that would otherwise be possible if new users could build on an
existing pool of expertise, and in particular on experience from a longer period of
CLOUD employment. Such a period of knowledge gathering would also have given rise to
new cost and business concepts for dealing effectively with the CLOUD, and would thus
help new adopters assess the value/benefit of the CLOUD for their purposes.
MULTI-TENANCY ISSUES:-
The impact of multi-tenancy is easily underestimated, but it can raise major
technological challenges. While maintaining consistency across multiple tenants is an
obvious concern, isolation actually creates more difficulties. Depending on what is
shared and to what degree, it is currently difficult or even impossible to distinguish
which part of the resource consumption is caused by which user, for example for shared
communication over a given network line. This, however, is necessary for accurate
usage/cost assessment per individual user.
CONCLUSION
In this seminar, we presented a detailed design of the proposed SBA algorithm. The
proposed SBA is robust in helping users collect information from any remote location in
the absence of network connectivity, and in recovering files in case of file deletion or
if the cloud gets destroyed for any reason. The experimentation and result analysis show
that the proposed SBA also addresses the security of the backup files stored at the
remote server, without using any of the existing encryption techniques. The time-related
issues are also addressed by the proposed SBA, such that it takes minimum time for the
recovery process.
REFERENCES
Fifth International Conference on System and Network Communication, pp. 256-259.
"ERGOT: A Semantic-based System for Service Discovery in Distributed Infrastructures,"
10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing.
http://www.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-28.pdf
"Recovery Strategies for Service Composition in Dynamic Network," International
Conference on Cloud and Service Computing.
F. B. Kashani, C. Chen, C. Shahabi, 2004, "WSPDS: Web Services Peer-to-Peer Discovery
Service," ICOMP.
"Recovery Time Analysis for the Shared Backup Router Resources (SBRR) Architecture,"
IEEE ICC.
"Resilience in Multilayer Networks," IEEE Communications Magazine, vol. 37, no. 8,
pp. 70-76.