Cloud storage (CS) is gaining popularity because it offers low-cost and convenient network storage services. In this big data era, the explosive growth of digital data moves users towards CS to store their massive data. This growth puts considerable storage pressure on CS systems because a large volume of the data is redundant. Data deduplication is a highly effective data reduction technique that identifies and eliminates redundant data. The dynamic nature of data makes its security and ownership very important issues, and proof-of-ownership schemes are a robust way to verify the ownership claimed by any owner. However, to protect the privacy of their data, many users encrypt it before storing it in CS. This interferes with the deduplication process because encryption methods have varying characteristics. The convergent encryption (CE) scheme is widely used for secure data deduplication, but it destroys message equality. Although DupLESS provides stronger privacy by enhancing CE, it has also been found insufficient. The problem with CE-based schemes is that a user can still decrypt the cloud data even after losing ownership of it. This paper addresses the problem of ownership revocation by proposing a secure deduplication scheme for encrypted data. The proposed scheme enhances security against unauthorized encryption and poison attacks on a predicted set of data.
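To make the deduplication-versus-encryption tension above concrete, here is a minimal convergent-encryption sketch in Python (illustrative only, not the paper's scheme): deriving the key from the message itself makes encryption deterministic, which is what lets the server spot duplicates of encrypted data.

```python
# Minimal convergent-encryption sketch: the key is derived from the message
# itself, so identical plaintexts always produce identical ciphertexts,
# enabling deduplication of encrypted data. Toy keystream only; a real
# system would use AES with the convergent key.
import hashlib

def _keystream(key: bytes, length: int) -> bytes:
    # SHA-256 in counter mode as a stand-in cipher (illustrative, not secure AES).
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def ce_encrypt(message: bytes) -> tuple[bytes, bytes, bytes]:
    key = hashlib.sha256(message).digest()          # convergent key K = H(M)
    cipher = bytes(m ^ k for m, k in zip(message, _keystream(key, len(message))))
    tag = hashlib.sha256(cipher).digest()           # dedup tag T = H(C)
    return key, cipher, tag

def ce_decrypt(key: bytes, cipher: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(cipher, _keystream(key, len(cipher))))

# Two users encrypting the same file get the same ciphertext and tag,
# so the server stores one copy; each user keeps only the key.
k1, c1, t1 = ce_encrypt(b"quarterly-report.pdf contents")
k2, c2, t2 = ce_encrypt(b"quarterly-report.pdf contents")
assert c1 == c2 and t1 == t2
assert ce_decrypt(k1, c1) == b"quarterly-report.pdf contents"
```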
Role Based Access Control Model (RBACM) With Efficient Genetic Algorithm (GA)... — dbpublications
Cloud computing is one of the most promising and emerging fields in Information Technology because of its performance, low cost, and high availability. Cloud computing essentially delivers services to individuals and organizations over the network, with the ability to scale the different kinds of services up or down. The basic service of a cloud computing system is the cloud storage system, which contains a collection of storage servers. These storage servers provide long-term storage services over the internet free of cost. However, storing data in a third-party cloud system raises a very serious problem of data confidentiality. Typically, various encryption schemes are used to protect the confidentiality of cloud data, but they take considerable time to process even a single operation. Thus, this paper proposes to protect cloud data confidentiality by integrating encoding, encrypting, and forwarding. A Token Based Data Security Algorithm (TBDSA), along with RSA and AES, is used for the encryption and decryption process, and a Role Based Access Control Model (RBACM) is applied at the time of data forwarding. Here, the cloud user's access password is created by an encoding process carried out with a Genetic Algorithm (GA), whose workings are presented in this paper. The TBDSA and GA algorithms take minimal time to execute and raise system performance.
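As a rough illustration of the password-generation step, the following is a toy genetic algorithm that evolves an access token. The encoding and fitness function here are assumptions, since the paper's exact GA design is not given in this abstract, and `random` would need replacing with a CSPRNG in any real deployment.

```python
# Toy GA sketch for generating an access token (assumed encoding/fitness;
# not the paper's GA). Fitness rewards character-class coverage and
# uniqueness so evolved passwords mix character types.
import random
import string

CHARSET = string.ascii_letters + string.digits + "!@#$%^&*"
LENGTH, POP, GENERATIONS = 12, 40, 60

def fitness(pw: str) -> float:
    classes = sum(any(c in pool for c in pw) for pool in
                  (string.ascii_lowercase, string.ascii_uppercase,
                   string.digits, "!@#$%^&*"))
    return classes + len(set(pw)) / LENGTH       # class coverage + uniqueness

def crossover(a: str, b: str) -> str:
    cut = random.randrange(1, LENGTH)            # single-point crossover
    return a[:cut] + b[cut:]

def mutate(pw: str, rate: float = 0.1) -> str:
    return "".join(random.choice(CHARSET) if random.random() < rate else c
                   for c in pw)

population = ["".join(random.choices(CHARSET, k=LENGTH)) for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]             # truncation selection
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(POP - len(parents))]
    population = parents + children

print("evolved access token:", max(population, key=fitness))
```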
Secure Data Sharing In an Untrusted Cloud — IJERA Editor
Cloud computing is a huge area that provides many services on a pay-as-you-go basis. One of the fundamental services provided by the cloud is data storage. The cloud offers cost efficiency and an effective solution for sharing resources among cloud users. Designing a secure and efficient data sharing scheme for groups in the cloud is not an easy task: on one hand, customers are not ready to reveal their identities; on the other, they want to enjoy the cost efficiency provided by the cloud. Such a scheme needs to provide identity privacy, multiple ownership, and dynamic data sharing without being affected by the number of revoked cloud users. In this paper, any member of a group can fully enjoy the data storing and sharing services of the cloud. A secure data sharing scheme for dynamic cloud users is proposed, which uses group signature and dynamic broadcast encryption techniques so that any user in a group can share information securely. Additionally, a permission option is proposed for security reasons: file access permissions (read, write, and delete) are generated by the admin and given to users using a Role Based Access Control (RBA) algorithm, so an owner can provide files with options and accept the users who use those options. The revocation of a cloud user is a function performed by the admin for security purposes. The encryption computation cost and storage overhead do not depend on the number of revoked users. We analyze the security with proofs and produce a cloud efficiency report using CloudSim.
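A minimal sketch of the read/write/delete permission model described above, with hypothetical role and user names; real RBAC systems add sessions, role hierarchies, and constraints.

```python
# Minimal RBAC permission check with admin-side revocation (illustrative;
# role and user names are hypothetical).
ROLE_PERMISSIONS = {
    "owner":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}
USER_ROLES = {"alice": "owner", "bob": "viewer"}   # assigned by the admin
REVOKED = {"mallory"}                              # admin-side revocation list

def can(user: str, action: str) -> bool:
    if user in REVOKED:
        return False                               # revoked users lose all access
    role = USER_ROLES.get(user)
    return action in ROLE_PERMISSIONS.get(role, set())

assert can("alice", "delete") and can("bob", "read")
assert not can("bob", "write") and not can("mallory", "read")
```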
Abstract: Cloud computing models are gaining ubiquitous adoption due to the varied conveniences they provide. However, security and privacy problems are the main encumbrance holding back the universal adoption of this new emerging technology. Much research concentrates on enhancing security at the software and hardware levels of the cloud, but these approaches do not provide a complete security solution, and the data security measures are still kept under the access control of the service provider. Trusted Computing is another research direction: in practice, it furnishes a set of tools, controlled by third-party technologies, to secure Virtual Machines from the cloud computing providers. While these approaches give consumers tools to assess and monitor the security of their data, they do not grant cloud consumers much control capability. The newly emerging DCS approach, in contrast, aims to give data owners security over their own data, but the DCS concept is elucidated in many ways and there is no standardized cloud computing environment model for applying this approach.
A Proficient and Confidentiality-Preserving Multi-Keyword Ranked Search ove... — Editor IJCATR
Cloud computing has become progressively prevalent for data owners who outsource their data to public cloud servers while allowing data users to retrieve it. Owing to privacy concerns, secure search over encrypted cloud data has motivated numerous research efforts under the single-owner model. However, most cloud servers in practice do not serve just one owner; instead, they support multiple owners sharing the benefits brought by cloud computing. In this proficient and confidentiality-preserving multi-keyword ranked search over encrypted cloud data, new schemes dealing with Privacy-preserving Ranked Multi-keyword Search in a Multi-owner model (PRMSM) are introduced. To enable cloud servers to perform secure search without knowing the actual data of either keywords or trapdoors, we systematically build a novel secure search protocol, which also ranks the search results while preserving the privacy of relevance scores between keywords and files. To prevent attackers from eavesdropping on secret keys and pretending to be legal data users submitting searches, a novel dynamic secret key generation protocol and a new data user authentication protocol are discussed.
Accessing secured data in cloud computing environment — IJNSA Journal
The number of businesses using cloud computing has increased dramatically over the last few years due to attractive features such as scalability, flexibility, fast start-up, and low costs. Services provided over the web range from using the provider's software and hardware to managing security and other issues. Some of the biggest challenges at this point are providing privacy and data security to subscribers of public cloud servers. The efficient encryption technique presented in this paper can be used for secure access to and storage of data on a public cloud server, moving and searching encrypted data through communication channels while protecting data confidentiality. This method ensures data protection against both external and internal intruders. Data can be decrypted only with the key provided by the data owner, while the public cloud server is unable to read encrypted data or queries. Answering a query does not depend on its size and is done in constant time. Data access is managed by the data owner. The proposed scheme also allows detection of unauthorized modifications.
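One way to picture a constant-time encrypted lookup (an illustrative sketch, not the paper's construction) is an owner-keyed HMAC index: the server stores opaque blobs under keyword tokens and answers a query with a single dictionary lookup, independent of how much data it holds.

```python
# Sketch of an owner-keyed index giving average O(1) lookup over data the
# server cannot read: the server only ever sees HMAC tokens and opaque
# ciphertext blobs (illustrative; not the paper's exact scheme).
import hmac, hashlib, os

INDEX_KEY = os.urandom(32)          # held by the data owner, never the server

def token(keyword: str) -> bytes:
    return hmac.new(INDEX_KEY, keyword.encode(), hashlib.sha256).digest()

# Server-side store: token -> encrypted record (encryption elided here).
server_index: dict[bytes, bytes] = {}
server_index[token("invoice-2024")] = b"<ciphertext blob>"

# Query: the client sends only the token; the dict lookup is O(1) on
# average, regardless of how many records the server holds.
result = server_index.get(token("invoice-2024"))
assert result == b"<ciphertext blob>"
```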
The Royal Split Paradigm: Real-Time Data Fragmentation and Distributed Networ... — CSCJournals
Even with data encryption, access control, and monitoring technology, high-profile data breaches still occur. To address this issue, this work focused on securing data at rest and data in motion by utilizing current distributed network technology in conjunction with a data fragmenting and defragmenting algorithm. Software prototyping was used to exhaustively test this new paradigm within the confines of the Defense Technology Experimental Research (DETER) virtual testbed. The virtual testbed was used to control all aspects of the testing network, including node population, topology, file size, and number of fragments. In each topology, and for each population size, files of different sizes were fragmented, distributed to nodes on the network, recovered, and defragmented. All of these tests were recorded and documented. The results produced by this prototype showed, with the maximum wait time removed, an average wait time of 0.0287 s/fragment, and that by increasing the number of fragments, N, the complexity, X, would increase as demonstrated in the formula X = 0.00287 · N!.
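Taking the reported formula at face value, a short worked example shows how quickly the factorial term dominates as fragments are added (constants exactly as reported above):

```python
# Worked reading of the reported complexity formula X = 0.00287 * N!:
# splitting a file into more fragments makes the reconstruction effort
# for an attacker grow factorially.
import math

for n in (4, 8, 12, 16):
    x = 0.00287 * math.factorial(n)
    print(f"N={n:2d} fragments -> X = {x:.4g}")
# N= 4 fragments -> X = 0.06888
# N= 8 fragments -> X = 115.7
# N=12 fragments -> X = 1.375e+06
# N=16 fragments -> X = 6.005e+10
```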
A PRACTICAL CLIENT APPLICATION BASED ON ATTRIBUTE-BASED ACCESS CONTROL FOR UN... — cscpconf
One of the most widely used cryptographic primitives for cloud applications is Attribute Based Encryption (ABE), where users have their own attributes and a ciphertext is encrypted under an access policy. Though ABE provides many benefits, the novelty often exists only in the academic world, and it is often difficult to find a practical use of ABE in a real application. In this paper, we discuss the design and implementation of a cloud storage client application that supports the concept of ABE. Our proposed client provides an effective access control mechanism that allows different types of access policy to be defined, thus allowing large datasets to be shared by multiple users. Using different access policies, each user needs to access only a small part of the big data. The goal of our experiment is to explore the right set of strategies for developing a practical ABE-based system. Through the implementation and evaluation, we have determined the various characteristics and issues associated with developing a practical ABE-based application.
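Real ABE enforces its policies cryptographically with pairings, but the access-control idea can be sketched as plain policy evaluation over a user's attribute set; the policy and attributes below are hypothetical.

```python
# Sketch of the access-control idea only: evaluating a ciphertext's access
# policy against a user's attribute set. Real ABE enforces this with
# pairing-based cryptography; here the check is plain logic.
from typing import Callable

Policy = Callable[[set[str]], bool]

# Hypothetical policy: ("doctor" AND "cardiology") OR "admin"
policy: Policy = lambda attrs: ({"doctor", "cardiology"} <= attrs) or ("admin" in attrs)

def can_decrypt(user_attrs: set[str], access_policy: Policy) -> bool:
    return access_policy(user_attrs)

assert can_decrypt({"doctor", "cardiology", "onsite"}, policy)
assert can_decrypt({"admin"}, policy)
assert not can_decrypt({"doctor", "radiology"}, policy)
```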
It is common nowadays to use external third-party resources for data storage and sharing among multiple personnel of the same or different organizations. Such external resources are collectively known as Cloud Computing resources. They save the time, cost, and effort required to manage the huge data of organizations. Due to the rapid growth in the use of cloud services by organizations and individuals, many concerns have arisen; the major ones are data sharing, security, and efficiency. Over the last 15 years, a number of solutions have been proposed and studied. Data sharing in Cloud Computing may be single-user or multi-user, and hence it must be strongly secured; a number of recent cryptography-based methods, such as Identity Based Encryption and Attribute Based Encryption, are designed for secure data sharing among multiple users. All of these recent methods have their limitations and advantages. This paper addresses the current research problems of data security and privacy preservation in cloud servers. The study first presents different methods of cloud data security and their comparative analysis, and then discusses the research limitations of those methods.
The ability to selectively share encrypted data with different users via public cloud storage may greatly ease security concerns over unintended data leaks in the cloud. A key challenge in designing such encryption schemes lies in the efficient management of encryption keys. The desired flexibility of sharing any group of selected documents with any group of users demands that different encryption keys be used for different documents. However, this also implies the need to securely distribute a large number of keys to users for both encryption and search; users must then securely store the received keys and submit an equally large number of keyword trapdoors to the cloud in order to perform search over the shared data. The implied need for secure communication, storage, and computation clearly renders such an approach impractical. In this work, a data owner only needs to distribute a single key to a user for sharing a very large number of documents, and the user only needs to submit a single trapdoor to the cloud for querying the shared documents. User revocation is handled through key updating, and both forward secrecy and backward secrecy are provided.
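The "single key, many documents" idea can be sketched with HMAC-based key derivation. This is an illustration under assumed details, not the paper's actual aggregate-key or trapdoor construction, which is pairing-based.

```python
# Sketch of "one distributed key for many documents" via HMAC-based key
# derivation (illustrative; the actual aggregate-key construction is
# pairing-based and not shown here). The owner hands a user one secret,
# and per-document keys are derived from it on demand.
import hmac, hashlib, os

master_share = os.urandom(32)     # the single key the owner distributes

def doc_key(share: bytes, doc_id: str) -> bytes:
    # Per-document key derived from the single distributed share.
    return hmac.new(share, b"doc-key:" + doc_id.encode(), hashlib.sha256).digest()

def trapdoor(share: bytes, keyword: str) -> bytes:
    # A single keyword trapdoor usable across all shared documents.
    return hmac.new(share, b"trapdoor:" + keyword.encode(), hashlib.sha256).digest()

# The user stores one 32-byte share yet can address any shared document.
k_report = doc_key(master_share, "report-001")
k_ledger = doc_key(master_share, "ledger-007")
t = trapdoor(master_share, "audit")
assert k_report != k_ledger and len(t) == 32
```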
Data Sharing with Sensitive Information Hiding in Data Storage using Cloud Co... — ijtsrd
With cloud storage services, users can remotely store their data in the cloud and realize data sharing with others. A remote data integrity auditing scheme is proposed to guarantee the integrity of the data stored in the cloud. In some common cloud storage systems, such as Electronic Health Records (EHRs) systems, the cloud file might contain sensitive information that should not be exposed to others when the file is shared. Encrypting the whole shared file can hide the sensitive information, but it makes the shared file unusable by others. How to realize data sharing with sensitive information hiding in remote data integrity auditing has not been explored until now. To address this problem, we propose a remote data integrity auditing scheme that realizes data sharing with sensitive information hiding. Paruvathavarthini M | Prasuna K S | Sermakani A. M, "Data Sharing with Sensitive Information Hiding in Data Storage using Cloud Computing", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4, Issue-2, February 2020,
URL: https://www.ijtsrd.com/papers/ijtsrd30007.pdf
Paper URL: https://www.ijtsrd.com/engineering/information-technology/30007/data-sharing-with-sensitive-information-hiding-in-data-storage-using-cloud-computing/paruvathavarthini-m
Secure Medical Data Computation using Virtual_ID Authentication and File Swap... — IJASRD Journal
PHRs expose users to a great deal of sensitive information leakage. Securing sensitive medical data raises very serious security problems, especially for data stored in the medical cloud. Once the data is leaked to a third party, data privacy becomes a major problem, mainly involving authentication, availability of data, confidentiality, etc., which must be considered very carefully. An authentication scheme based on a virtual smartcard using a hashing function is proposed for medical data, to solve the problem of illegal users accessing server resources. We also protect sensitive PHR data in the cloud using a file swapping concept: once a user accesses the data, the data is swapped into different places, so the file is more secure and no one can hack or steal it.
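A minimal sketch of a hash-based challenge-response login in the spirit of the virtual-smartcard idea follows; the protocol details are assumptions (the paper's exact scheme is not reproduced), but it shows how the password never needs to cross the wire.

```python
# Minimal hash-based challenge-response login sketch (assumed protocol
# details). The server stores a derived secret at enrollment and verifies
# a fresh HMAC response per login; the password itself is never sent.
import hmac, hashlib, os

def derive_card_secret(password: str, salt: bytes) -> bytes:
    # "Virtual smartcard" secret derived from the user's password.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

salt = os.urandom(16)
server_secret = derive_card_secret("s3cret-phrase", salt)   # stored at enrollment

def respond(password: str, challenge: bytes) -> bytes:
    return hmac.new(derive_card_secret(password, salt), challenge,
                    hashlib.sha256).digest()

challenge = os.urandom(16)                                   # fresh per login
expected = hmac.new(server_secret, challenge, hashlib.sha256).digest()
assert hmac.compare_digest(respond("s3cret-phrase", challenge), expected)
assert not hmac.compare_digest(respond("wrong-guess", challenge), expected)
```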
Mitigating the Integrity Issues in Cloud Computing Utilizing Cryptography Alg... — AJASTJournal
The cloud can be created, monitored, and disseminated with little disruption or service provider involvement. Among the most rapidly evolving phenomena, cloud computing provides users with a variety of low-cost solutions. By putting the ideas of confidentiality, authentication, encryption techniques, non-repudiation, intrusion prevention, and effectiveness into practice, the challenge of cloud information security and cloud storage security is being addressed. As cloud security has become a growing problem, cloud technology is prominent throughout many emerging disciplines of study, and a significant amount of research is conducted in this field; each of these efforts uses a cryptographic approach. Current solutions to these issues still have certain important drawbacks. To protect sensitive information stored in the cloud, one needs to design programs that implement hybrid cryptographic mechanisms using strong encryption algorithms. This research elaborates an examination of using cryptographic techniques to mitigate the integrity problems in cloud computing.
Cloud computing is the emerging trend in today's world. Cloud computing is not a separate technology; it is a platform that provides Platform as a Service, Infrastructure as a Service, and Software as a Service. The most important point about the cloud is that we hire everything from a third party or store our important data on a third party's premises. Here arises the major issue of how our data is secured. In this paper, we discuss how to protect our data in the cloud with various cryptographic techniques. Padmapriya I | Ragini H, "Cloud Cryptography", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-2, February 2019, URL: https://www.ijtsrd.com/papers/ijtsrd21547.pdf
Paper URL: https://www.ijtsrd.com/computer-science/computer-network/21547/cloud-cryptography/padmapriya-i
Cloud computing has become an integral part of most private and public organizations and is used for data storage and retrieval. Cloud computing has many uses and is widely employed even in highly confidential national services, such as military and treasury, for storing confidential information. Cloud platforms such as Google Drive, Amazon Web Services, and Microsoft Azure are beneficial for organizations and end-users, who can use them to store their data. However, there are multiple challenges in saving an organisation's highly confidential documents on such servers. Hence, the objective of this paper is to provide a high-level design for a storage system maximizing security and personal privacy. Though servers are highly protected against unauthorized access, there have been incidents where confidential files stored on servers were accessed by maintenance staff. This paper therefore provides an introductory structure for full protection of files stored on the server using a Hybrid Cryptosystem.
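A hybrid cryptosystem of the kind proposed typically encrypts bulk data with a fast symmetric cipher and wraps the symmetric key with a public-key cipher. Below is a minimal sketch using AES-GCM and RSA-OAEP via the third-party `cryptography` package; it illustrates the pattern, not the paper's exact design.

```python
# Hybrid cryptosystem sketch: bulk data under AES-GCM, with the AES key
# wrapped by RSA-OAEP. Requires the third-party `cryptography` package.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Encrypt: random AES key per file; the RSA-wrapped key travels alongside.
aes_key, nonce = AESGCM.generate_key(bit_length=256), os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, b"confidential document", None)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(aes_key, oaep)

# Decrypt: unwrap the AES key with the RSA private key, then decrypt the data.
recovered_key = private_key.decrypt(wrapped_key, oaep)
plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
assert plaintext == b"confidential document"
```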
Providing Secure Cloud for College Campus — vivatechijri
In colleges, data stored on the server can be accessed by any staff member, student, or professor. Data is very important and should not be altered or accessed without the permission of its owner, but in these medium-scale organizations the server can be accessed by anyone. A better approach to maintaining data security and sustainable storage is the cloud. The cloud provides user management for authentication and authorized access to stored data. Since data is uploaded to the cloud through the network, its security during this phase is very important; encryption algorithms can be used to protect it from hackers. The cloud provides an efficient way to carry out operations such as uploading and downloading data. Efficient use of storage should also be a primary concern, for which the data deduplication technique can be applied: using this technique, uploading of duplicate files can be avoided.
DISTRIBUTED SCHEME TO AUTHENTICATE DATA STORAGE SECURITY IN CLOUD COMPUTING — ijcsit
Cloud computing is the revolution in the current generation of IT enterprise. Cloud computing moves databases and application software to large data centres, where the management of services and data may not be fully trustworthy, whereas conventional solutions for IT services are under proper logical, physical, and personnel controls. This attribute, however, introduces different security challenges which have not been well understood. This work concentrates on cloud data storage security, which has always been an important aspect of quality of service (QoS). In this paper, we designed and simulated an adaptable and efficient scheme to guarantee the correctness of user data stored in the cloud, with several prominent features. A homomorphic token is used for distributed verification of erasure-coded data, and with this scheme we can identify misbehaving servers. Unlike prior work, our scheme supports effective and secure dynamic operations on data blocks, such as data insertion, deletion, and modification. The security and performance analysis shows that the proposed scheme is highly resilient against malicious data modification, Byzantine failures, and server colluding attacks.
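The token-based verification idea can be sketched with random linear combinations over a prime field: the owner precomputes a short token and later challenges the server to recompute it. This is a simplified illustration; the actual scheme applies homomorphic tokens over erasure-coded vectors.

```python
# Simplified sketch of token-precomputed integrity verification: the owner
# keeps only a seed and a short token per challenge, and can later catch a
# server that altered any blocks (illustrative, not the full scheme).
import random

P = 2**61 - 1                                  # prime modulus for the field

def make_token(blocks: list[int], seed: int) -> int:
    rng = random.Random(seed)                  # coefficients regenerable from seed
    return sum(rng.randrange(P) * b for b in blocks) % P

data_blocks = [11, 42, 7, 99, 23]              # toy stand-ins for file blocks
seed = 1234                                    # owner stores only (seed, token)
token = make_token(data_blocks, seed)

# Challenge: the server recomputes the combination over what it actually stores.
assert make_token(data_blocks, seed) == token             # honest server passes
tampered = data_blocks[:]
tampered[2] += 1
assert make_token(tampered, seed) != token                # modification detected
```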
With the growth of cloud technologies, computing resources and cloud storage have become the most in-demand online services, and several companies wish to outsource their data storage and resources as well. When storing private and sensitive data in a third-party data center, security and privacy must be considered, as they become major issues. In this paper, a novel Double Encryption with Single Decryption (DESD) crypto technique is proposed to secure the data in cloud storage. The proposed technique comprises encryption and decryption phases. In the encryption phase, the data is randomly partitioned into multiple fragments, and double encryption is performed on each fragment using prime numbers as well as an Invertible Non-linear Function (INF). The multiple encrypted fragments are stored at multiple cloud storages with the help of the cloud service provider (CSP). After the verification process, the data user collects the key from the data owner and decrypts the gathered data from the cloud using knowledge of the inverse INF. The proposed crypto technique provides greater security and privacy for cloud data, and illegitimate users cannot retrieve the original data. The performance of the proposed DESD technique is compared with AES and Triple DES, and the plotted experimental results show that the proposed technique is efficient and faster.
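A toy illustration of the two-stage invertible per-fragment encryption follows: stage 1 multiplies each byte by a prime modulo 257, and stage 2 applies a secret permutation as a simple invertible non-linear map. The specific prime step and INF here are assumptions, since the abstract does not define them.

```python
# Toy two-stage invertible per-fragment encryption (assumed details, not
# the paper's exact DESD construction). Stage 1: multiply by a prime mod
# 257 (invertible). Stage 2: secret permutation as an invertible
# non-linear map. Decryption inverts stage 2, then stage 1.
import random

P, PRIME = 257, 251                                  # modulus and prime multiplier
PRIME_INV = pow(PRIME, -1, P)                        # modular inverse for decryption

rng = random.Random(2024)                            # secret permutation (the "INF")
SBOX = list(range(P))
rng.shuffle(SBOX)
SBOX_INV = [0] * P
for i, v in enumerate(SBOX):
    SBOX_INV[v] = i

def encrypt_fragment(frag: bytes) -> list[int]:
    return [SBOX[(b * PRIME) % P] for b in frag]     # stage 1, then stage 2

def decrypt_fragment(sym: list[int]) -> bytes:
    return bytes((SBOX_INV[s] * PRIME_INV) % P for s in sym)

fragments = [b"fragme", b"nted d", b"ata..."]        # randomly partitioned data
stored = [encrypt_fragment(f) for f in fragments]    # sent to multiple clouds
assert b"".join(decrypt_fragment(s) for s in stored) == b"fragmented data..."
```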
International Journal of Engineering and Science Invention (IJESI) — inventionjournals
The International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of Engineering, Science and Technology, new teaching methods, assessment, validation, and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
A Survey Paper On Data Confidentiality And Security in Cloud Computing Using ... — IJSRD
Nowadays, the use of cloud computing has rapidly increased in many organizations and IT industries, providing the latest software solutions at minimum cost. Cloud computing gives us a number of benefits with minimum cost and data accessibility through the Internet. Managing the security risks of cloud computing is the main factor in the cloud computing environment. Cloud computing is beneficial in cost-effective terms, such as flexible computing capacity, reduced time to market, and relief from insufficient local computing power. By using the full capability of cloud computing, data is transmitted, processed, and stored by external cloud service providers. The fact is that data owners feel extremely insecure about locating their data outside their own control. Security and confidentiality of data stored in the cloud are key setbacks, and the key issues, for cloud storage. This paper proposes a KIST encryption algorithm to address the security and confidentiality issues in cloud storage, and it also compresses ciphertext data in order to protect the data stored in the cloud.
A New Approach for Securely Sharing Data between Cloud Users with Dual Keys — ijtsrd
In this project, we propose the Secure Data Sharing in Clouds (SeDaSC) methodology, which provides data confidentiality and integrity, access control, data sharing (forwarding) without using compute-intensive re-encryption, insider threat security, and forward and backward access control. The SeDaSC methodology encrypts a file with a single encryption key. Two different key shares are generated for each user, with the user getting only one share. The possession of a single share of a key allows the SeDaSC methodology to counter insider threats; the other key share is stored by a trusted third party, called the cryptographic server. We implement a working prototype of the SeDaSC methodology and evaluate its performance based on the time consumed during various operations. Varsha K N | Prof Ganeshan M, "A New Approach for Securely Sharing Data between Cloud Users with Dual Keys", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-6, Issue-3, April 2022, URL: https://www.ijtsrd.com/papers/ijtsrd49459.pdf Paper URL: https://www.ijtsrd.com/computer-science/other/49459/a-new-approach-for-securely-sharing-data-between-cloud-users-with-dual-keys/varsha-k-n
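The two-share key idea can be sketched with XOR secret sharing: the file key K is split so the user holds one share and the cryptographic server the other, and neither share alone reveals K. This is illustrative only; the full SeDaSC protocol adds access control and re-encryption-free forwarding around it.

```python
# XOR secret-sharing sketch of the dual key shares: both shares together
# recover the file key, while either share alone is just random bytes,
# which is what counters a single insider.
import os

def split_key(k: bytes) -> tuple[bytes, bytes]:
    user_share = os.urandom(len(k))
    server_share = bytes(a ^ b for a, b in zip(k, user_share))
    return user_share, server_share

def recover_key(user_share: bytes, server_share: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(user_share, server_share))

file_key = os.urandom(32)                     # single encryption key per file
u, s = split_key(file_key)
assert recover_key(u, s) == file_key          # both shares together recover K
assert u != file_key and s != file_key        # one share alone reveals nothing
```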
Improving Data Storage Security in Cloud using Hadoop — IJERA Editor
The rising abuse of information stored in large cloud data centres emphasizes the need to safeguard the data. Despite strict authentication policies, cloud users' data, even when transferred over a secure channel, is vulnerable to numerous attacks once it reaches the data centre. The most widely adopted methodology for safeguarding cloud data is encryption. Encryption of large data deployed in the cloud is, however, a time-consuming process. For secure transmission of information, AES encryption is used, which provides a highly secure way to transfer sensitive information from the sender to the intended receiver; the main purpose of this technique is to make sensitive information unreadable to everyone except the receiver. The data is also compressed, enabling better utilization of storage space in the cloud environment. This has been augmented with Hadoop's map-reduce paradigm, which works in parallel. The experimental results clearly reflect the effectiveness of the methodology in improving the security of data in the cloud environment.
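The map-style parallelism can be sketched with a process pool standing in for Hadoop's map phase: the file is split into chunks and the chunks are AES-GCM-encrypted concurrently. This is an illustration using the third-party `cryptography` package, not an actual Hadoop job.

```python
# Sketch of map-style parallel encryption: split the input into chunks and
# encrypt them concurrently (a process pool stands in for Hadoop's map
# phase). Requires the third-party `cryptography` package.
import os
from concurrent.futures import ProcessPoolExecutor
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

CHUNK = 1 << 20                                           # 1 MiB chunks

def encrypt_chunk(args: tuple[bytes, int, bytes]) -> tuple[int, bytes, bytes]:
    key, index, chunk = args
    nonce = os.urandom(12)                                # fresh nonce per chunk
    return index, nonce, AESGCM(key).encrypt(nonce, chunk, None)

if __name__ == "__main__":
    key = AESGCM.generate_key(bit_length=256)
    data = os.urandom(8 * CHUNK)                          # stand-in for a large file
    jobs = [(key, i, data[i * CHUNK:(i + 1) * CHUNK]) for i in range(8)]
    with ProcessPoolExecutor() as pool:                   # the "map" phase
        encrypted = list(pool.map(encrypt_chunk, jobs))
    print(f"encrypted {len(encrypted)} chunks in parallel")
```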
Today, the growth of digitalization has eased livelihoods for all organizations. Cloud computing, the storage provider for all computing resources, has made it easy to access data from anywhere at any time. At the same time, security of cloud data storage is the major drawback, and it is provided by various cryptographic algorithms. These algorithms convert the data into an unreadable format known as ciphertext; Rivest, Shamir, and Adleman (RSA) is one of the most popular asymmetric algorithms. This paper gives a detailed review of the different cryptographic algorithms used for cloud data security. A comparative study is also made across data sizes to analyze encryption and decryption times, which concludes that, to enhance cloud data security, some add-on techniques should be used along with these cryptographic algorithms. To increase the security level and the transmission speed of plaintext, an integrated method is proposed: the plaintext is encoded to an intermediate plaintext, the intermediate plaintext is compressed using a compression technique to increase the compression ratio, and lastly the compressed file is encrypted to further enhance the security level.
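The proposed encode-compress-encrypt ordering can be sketched as a three-stage pipeline; the Base64 encoding and zlib compressor below are placeholders for the paper's unspecified stages. Note that compression must precede encryption, since ciphertext is effectively incompressible.

```python
# Encode -> compress -> encrypt pipeline sketch (placeholder stages, not
# the paper's specific encoding or compressor). Requires the third-party
# `cryptography` package for Fernet.
import base64, zlib
from cryptography.fernet import Fernet

def protect(plaintext: bytes, key: bytes) -> bytes:
    encoded = base64.b64encode(plaintext)        # placeholder "intermediate" encoding
    compressed = zlib.compress(encoded, level=9) # shrink before encrypting
    return Fernet(key).encrypt(compressed)

def unprotect(token: bytes, key: bytes) -> bytes:
    return base64.b64decode(zlib.decompress(Fernet(key).decrypt(token)))

key = Fernet.generate_key()
doc = b"highly repetitive cloud record " * 100
sealed = protect(doc, key)
assert unprotect(sealed, key) == doc
print(f"{len(doc)} plaintext bytes -> {len(sealed)} sealed bytes")
```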
Proposed system for data security in distributed computing in using triple d... — IJECEIAES
Cloud computing is considered a distributed computing paradigm in which resources are provided as services. In cloud computing, applications do not run on a user's personal computer but run on, and are stored at, distributed servers on the Internet. The resources of cloud infrastructures are shared on the Internet in an open environment, which increases security problems such as data confidentiality, data integrity, and data availability; solving such problems by adopting data encryption is very important for securing users' data. In this paper, a comparative study is made between two security algorithms on a cloud platform called eyeOS. The comparative study found that the Rivest-Shamir-Adleman (3kRSA) algorithm outperforms the triple data encryption standard (3DES) algorithm with respect to complexity and output bytes. The main drawback of the 3kRSA algorithm is its computation time, while 3DES is faster than 3kRSA. This is useful for storing large amounts of data in the cloud; the key distribution and authentication of asymmetric encryption, and the speed, data integrity, and data confidentiality of symmetric encryption, are also important, and the scheme enables execution of required computations on this encrypted data.
In our homes and offices, security has been a vital issue. Remote control of a home security system offers huge advantages, such as arming or disarming alarms, video monitoring, and energy management control, apart from keeping the home free of intruders. Security methods have evolved from the oldest and simplest, the mechanical lock with a key as the authentication element, to universal locks and now unique codes. Recent advancement in communication systems has brought tremendous application of communication gadgets into various areas of life. This work is a real-time smart doorbell notification system for home security, as opposed to traditional security methods. It is composed of a doorbell interfaced with a GSM module: pressing the doorbell triggers the GSM module to send an SMS to the house owner, who can respond to the guest by pressing a button to open the door; otherwise, a message is displayed to the guest for appropriate action. A keypad is provided so that an authorized person can enter a password for door unlocking; if multiple wrong password attempts are made, a burglary-attempt message is sent to the house owner for prompt action. The main benefit of this system is the unique incorporation of the password and messaging systems, which denies access to any unauthorized person and keeps the owner aware.
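The keypad logic described above reduces to a small state machine. The sketch below simulates it in Python with the GSM module stubbed out; the PIN value and attempt limit are assumptions.

```python
# Simulation of the keypad/notification logic (GSM and GPIO calls are
# stubbed as prints; PIN and attempt limit are assumed values). Repeated
# wrong PIN entries trigger a burglary alert to the owner.
MAX_ATTEMPTS, CORRECT_PIN = 3, "4071"

def send_sms(message: str) -> None:
    print(f"[GSM stub] SMS to owner: {message}")   # stand-in for the GSM module

def keypad_session(entries: list[str]) -> bool:
    for attempt, pin in enumerate(entries, start=1):
        if pin == CORRECT_PIN:
            print("door unlocked")
            return True
        print(f"wrong PIN ({attempt}/{MAX_ATTEMPTS})")
        if attempt >= MAX_ATTEMPTS:
            send_sms("Burglary attempt: repeated wrong PIN at the door")
            return False
    return False

keypad_session(["1111", "2222", "4071"])   # unlocks on the third try
keypad_session(["0000", "9999", "1234"])   # alerts the owner
```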
Augmented reality, the new-age technology, has widespread applications in every field imaginable. This technology has proven to be an inflection point in numerous verticals, improving lives and performance. In this paper, we explore the various possible applications of Augmented Reality (AR) in the field of medicine. The objective of using AR in medicine, or in any field generally, is that AR helps motivate the user, makes sessions interactive, and assists faster learning. We discuss the applicability of AR in medical diagnosis: augmented reality technology reinforces remote collaboration, allowing doctors to diagnose patients from a different locality. Additionally, we believe that a much more pronounced effect can be achieved by bringing together the cutting-edge technology of AR and the lifesaving field of medical sciences. AR is a mechanism that could be applied in the learning process too; similarly, virtual reality could be used in fields where more practical experience is needed, such as driving, sports, and neonatal care training.
Image fusion is a subfield of image processing in which two or more images are fused to create an image where all the objects are in focus. Image fusion is performed for multi-sensor and multi-focus images of the same scene: multi-sensor images are captured by different sensors, whereas multi-focus images are captured by the same sensor. In multi-focus images, the objects closer to the camera are in focus while farther objects are blurred; conversely, when the farther objects are focused, the closer objects are blurred. To obtain an image where all objects are in focus, image fusion is performed either in the spatial domain or in a transformed domain. In recent times, the applications of image processing have grown immensely. Due to the limited depth of field of optical lenses, especially at greater focal lengths, it is usually impossible to capture an image where all objects are in focus; fusion therefore plays an important role in enabling other image processing tasks such as segmentation, edge detection, stereo matching, and enhancement. Hence, a novel feature-level multi-focus image fusion technique has been proposed, and the results of extensive experimentation are presented to highlight its efficiency and utility. The work further compares fuzzy-based image fusion with a neuro-fuzzy fusion technique, along with quality evaluation indices.
Graphs have become the dominant representation for many tasks, as they provide a structure for expressing entities and their relations. A powerful role of networks/graphs is to bridge local features of vertices as they blossom into patterns that help explain how nodal relations and their edges produce complex effects that ripple through a graph. User clusters form as a result of interactions between entities, yet many users can hardly categorize their contacts into groups such as "family", "friends", or "colleagues". Thus, analyzing a user's social graph via implicit clusters enables dynamism in contact management. This study seeks to implement this dynamism via a comparative study of a deep neural network and a friend-suggest algorithm. We analyze a user's implicit social graph and seek to automatically create custom contact groups, using metrics that classify contacts based on the user's affinity to them. Experimental results demonstrate the importance of both the implicit group relationships and the interaction-based affinity in suggesting friends.
This paper applies the Gryllidae Optimization Algorithm (GOA) to solve the optimal reactive power problem. The proposed GOA approach is based on the chirping characteristics of Gryllidae (crickets). In general, male Gryllidae chirp, though some female Gryllidae do as well; males draw females with the sound they produce, and they also warn other Gryllidae of danger with it. The hearing organs of Gryllidae are housed in an expansion of their forelegs, through which they orient to the produced sounds. The proposed GOA has been tested on the standard IEEE 14- and 30-bus test systems, and simulation results show that the projected algorithm reduces the real power loss considerably.
In the wake of the sudden replacement of wood and kerosene by gas cookers for several purposes in Nigeria, gas leakage has caused several damages in homes, laboratories, and elsewhere. The installation of gas leakage detection devices has been globally encouraged to eliminate accidents related to gas leakage. We present an alternative approach to developing a device that can automatically detect and control gas leakages and also monitor temperature. The system detects leakage of LPG (Liquefied Petroleum Gas) using a gas sensor and then triggers the control system response, which employs a ventilator, a mobile phone alert, and an alarm when the LPG concentration in the air exceeds a certain level. The performance of two gas sensors (MQ5 and MQ6) was tested to guide the choice of sensor. When the temperature of the environment poses a danger, an LED indicator, a buzzer, and a 16x2 LCD display indicate the temperature and gas leakage status in degrees Celsius and PPM respectively. Attention was given to the response time of the control system, and it was ascertained that this system significantly increases the chances and efficiency of eliminating gas-leakage-related accidents.
Feature selection is one of the most important problems in the text and data mining domain. This paper presents a comparative study of feature selection methods for Arabic text classification. Five feature selection methods were selected: improved CHI square (ICHI), CHI square, Information Gain, Mutual Information, and Wrapper. They were tested with five classification algorithms: Bayes Net, Naive Bayes, Random Forest, Decision Tree, and Artificial Neural Networks. An Arabic data collection consisting of 9,055 documents was used, and the methods were compared on four criteria: precision, recall, F-measure, and time to build the model. The results show that the improved ICHI feature selection achieved almost all the best results in comparison with the other methods.
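For the plain CHI-square baseline (the improved ICHI variant is not specified here), a minimal scikit-learn sketch looks like the following; scikit-learn availability and the stand-in English corpus in place of the Arabic collection are assumptions:

from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2

docs = fetch_20newsgroups(subset="train")  # stand-in corpus for illustration
X = CountVectorizer(max_features=20000).fit_transform(docs.data)  # term-frequency matrix
# Keep the 1000 terms with the highest chi-square statistic w.r.t. the class labels.
X_reduced = SelectKBest(chi2, k=1000).fit_transform(X, docs.target)

The reduced matrix then feeds any of the five classifiers mentioned above.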
In this paper the Gentoo Penguin Algorithm (GPA) is proposed to solve the optimal reactive power problem. In GPA, the initial population of Gentoo penguins radiates heat, and penguins attract each other according to an absorption coefficient. Each penguin moves towards penguins with lower cost (higher heat concentration), where cost is defined by heat concentration and distance. The attraction between two penguins is calculated from the amount of heat transmitted between them; heat radiation is modelled as linear, so less heat is received over longer distances and more over shorter ones. The Gentoo Penguin Algorithm has been tested on the standard IEEE 57-bus test system, and simulation results show that the projected algorithm reduces real power loss considerably.
08 20272 academic insight on applicationIAESIJEECS
This research has thrown up many questions in need of further investigation. An expressive quantitative-qualitative design was used, with a common survey instrument. An interview item was also applied to discover whether the participants felt that the media-based approach supplements their learning in academic English writing classes. The data described academics' insights into using Skype as a supporting tool for delivering lessons, based on chosen variables: occupation, year of education, and experience with Skype. The analysis revealed no statistically significant differences in the use of Skype units attributable to the medical academics' major, but statistically significant differences attributable to the experience-with-Skype variable, in favour of academics with no prior Skype practice. Skype as an instructional medium is a positive means of delivering academic medical writing content and assisting education. Academics who do not have enough time to participate in classes feel comfortable using the Skype-based approach in scientific writing, and those who took part in the course attributed their approval of this medium to learning innovative academic medical writing.
Cloud computing has a sweeping impact on human productivity. Today it is used for computing, storage, predictions, and intelligent decision making, among others. Intelligent decision making using machine learning has pushed cloud services to become even faster, more robust, and more accurate. Security remains one of the major concerns affecting cloud computing growth; other research challenges in cloud adoption include poorly managed service level agreements (SLAs), frequent disconnections, resource scarcity, interoperability, privacy, and reliability. Tremendous work is still needed to explore the security challenges arising from the widespread use of container-based cloud deployments. We also discuss the impact of cloud computing and cloud standards. Hence, this research paper presents a detailed survey of cloud computing: its concepts, architectural principles, key services, and implementation, design, and deployment challenges, and it identifies important future research directions in the era of machine learning and data science.
A notary is an official authorized to make authentic deeds regarding all deeds, agreements, and stipulations required by general regulation. Activities carried out at a notary office, such as recording client data and file data, still use traditional systems that tend to be manual. The resulting problem is inefficiency in data processing and in providing information to clients. Clients have difficulty getting information on the progress of documents being handled at the notary's office and must repeatedly take time to visit the office to check on that progress. The purpose of this study is to make it easier for clients to obtain information about work in progress and for employees to process incoming documents by implementing an administrative system. The system was developed with the waterfall development method and uses Multi-Channel Access technology integrated into a website, with Telegram and an SMS gateway, to simplify delivering information to and receiving requests from clients. Clients come to the office only when the system notifies them via Telegram or SMS that they must appear in person, saving time and avoiding excessive transportation costs. Alpha testing showed that all system functions work properly, and beta testing, conducted by distributing a feasibility questionnaire to end users, found that 96% of users agree the system is feasible to implement.
In this work the Tundra Wolf Algorithm (TWA) is proposed to solve the optimal reactive power problem. In the projected TWA, to prevent the search agents from being trapped in local optima, convergence towards the global optimum is divided according to two different conditions. The omega tundra wolf is taken as a search agent instead of being obliged to follow the first three best candidates. Increasing the number of search agents improves the exploration capability of the tundra wolves over a wide range. The proposed TWA has been tested on the standard IEEE 14- and 30-bus test systems, and simulation results show that the proposed algorithm reduces real power loss effectively.
In this work the Predestination of Particles Wavering Search (PPS) algorithm is applied to solve the optimal reactive power problem. The PPS algorithm is modelled on the motion of particles in the search space, which normally combines gradient-based and swarming motion. Particles are permitted to progress at steady velocity in the gradient-based movement, but when the outcome is poorer than the previous result, the particle velocity is immediately reversed with half of its magnitude; this helps reach a local optimal solution and is expressed as the wavering movement. The proposed PPS algorithm is evaluated on the standard IEEE 14-, 30-, 57-, 118-, and 300-bus systems, and simulation results show that PPS reduces power loss efficiently.
In this paper, the Mine Blast Algorithm (MBA) is hybridized with the Harmony Search (HS) algorithm to solve the optimal reactive power dispatch problem. MBA is based on the explosion of landmines and HS on the creative process of musicians; the two are hybridized to solve the problem. In MBA, the initial distance of shrapnel pieces is reduced gradually to let the mine bombs search the probable global minimum location, amplifying the global exploration capability. HS imitates the music improvisation process, in which musicians adjust their instruments' pitch while searching for a best state of harmony. Hybridizing MBA with HS (MH) improves the search over the solution space: the mine blast component improves exploration and the harmony search component augments exploitation, so the proposed algorithm starts with exploration and gradually moves to the exploitation phase. The proposed hybrid MH has been tested on the standard IEEE 14- and 300-bus test systems, where it reduces real power loss considerably. MH was then tested on the IEEE 30-bus system (considering the voltage stability index), attaining real power loss minimization, voltage deviation minimization, and voltage stability index enhancement.
Artificial neural networks have proved their efficiency in a large number of research domains. In this paper, we apply artificial neural networks to Arabic text for language modeling, text generation, and missing-text prediction. On the one hand, we adapt recurrent neural network architectures to model the Arabic language and generate correct Arabic sequences. On the other hand, convolutional neural networks are parameterized, based on specific features of Arabic, to predict missing text in Arabic documents. We demonstrate the power of our adapted models in generating and predicting correct Arabic text compared to the standard models. The models were trained and tested on well-known free Arabic datasets, and the results are promising, with sufficient accuracy.
In present-day communications, speech signals are contaminated by various sorts of noise that degrade speech quality and adversely impact speech recognition performance. To overcome these issues, a novel approach for speech enhancement using modified Wiener filtering is developed: power spectrum computation is applied to the degraded signal to obtain the noise characteristics from the noisy spectrum. In the next phase, an MMSE technique is applied in which the Gaussian distribution of each signal, i.e., the original and the noisy signal, is analyzed. The Gaussian distribution provides the spectrum estimate and spectral coefficient parameters used for probabilistic model formulation. Moreover, a-priori SNR computation is incorporated for coefficient updating and noise-presence estimation, which operates similarly to conventional VAD. However, the conventional VAD scheme relies on a hard threshold that cannot deliver satisfactory performance, so a soft-decision-based threshold is developed to improve speech enhancement. An extensive simulation study is carried out in MATLAB on the NOIZEUS speech database, and a comparative study shows the proposed approach performing better than the existing technique.
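A textbook sketch of the decision-directed a-priori SNR estimate and Wiener gain described above, written in Python/NumPy rather than MATLAB and not the paper's exact modified filter, might look as follows; the smoothing factor alpha and argument names are illustrative:

import numpy as np

def wiener_enhance_frame(noisy_mag, noise_psd, prev_clean_mag, alpha=0.98):
    gamma = (noisy_mag ** 2) / noise_psd  # a-posteriori SNR
    # Decision-directed a-priori SNR: blend the previous clean estimate
    # with the instantaneous SNR excess.
    xi = alpha * (prev_clean_mag ** 2) / noise_psd + (1.0 - alpha) * np.maximum(gamma - 1.0, 0.0)
    gain = xi / (1.0 + xi)  # Wiener gain per frequency bin
    return gain * noisy_mag  # enhanced magnitude spectrum

A soft decision, as argued above, would modulate this gain by a speech-presence probability instead of gating frames with a hard VAD threshold.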
Previous research has highlighted that the neuro-signals of Alzheimer's disease patients are less complex and show lower synchronization than those of healthy, normal subjects. The changes in the EEG signals of Alzheimer's subjects start at an early stage but are not clinically observed and detected. To detect these abnormalities, three synchrony measures and wavelet-based features were computed and studied on an experimental database. After computing these measures, it is observed that phase synchrony and coherence-based features are able to distinguish between Alzheimer's disease patients and healthy subjects. A Support Vector Machine classifier is used for classification, giving 94% accuracy on the experimental database used. Combining these synchrony features with other relevant features can yield a reliable system for diagnosing Alzheimer's disease.
Attenuation correction for PET/MR hybrid imaging systems and dose planning for MR-based radiation treatment remain challenging because of insufficient high-energy photon attenuation information. We present a new method that uses learned nonlinear local descriptors and feature matching to predict pseudo-CT images from T1w and T2w MRI data. The nonlinear local descriptors are obtained by projecting linear descriptors into a nonlinear high-dimensional space using an explicit feature map and low-rank approximation with supervised manifold regularization. The nearest neighbors of each local descriptor in the input MR images are searched within a constrained spatial range of the MR images in the training dataset. The pseudo-CT patches are then estimated through k-nearest-neighbor regression. The proposed method for pseudo-CT prediction is quantitatively analyzed on a dataset consisting of paired brain MRI and CT images from 13 subjects.
The cognitive radio paradigm aims to alleviate the scarcity of spectral resources for wireless communication through intelligent sensing and quick resource allocation. In cognitive radio networks (CRNs), secondary users (SUs) actively obtain spectrum access opportunities by supporting primary users (PUs). At present, spectrum access is provided through the cooperative-communication-based link-level frame-based cooperation (LLC) principle, in which SUs independently act as relays for PUs to earn spectrum access opportunities. Unfortunately, the LLC approach cannot fully exploit spectrum access opportunities to enhance CRN throughput and fails to motivate PUs to join the spectrum sharing process. To overcome this limitation, the network-level cooperation (NLC) principle was introduced, where SUs are integrated to collaborate with PUs session by session instead of frame by frame; NLC has addressed the challenges faced by the LLC approach. In this paper we survey models proposed to tackle the problem of LLC, showing the relevant aspects of each model in order to characterize the parameters that should be taken into account to achieve a spectrum access opportunity.
In this paper, the author provides insights and lessons that can be learned from colleagues at American universities about their online education experiences. The literature and previous studies on the gains of online education are explored and summarized. Emerging trends in online education are discussed in detail, along with strategies to implement them, and several tools and strategies that enable universities to ensure the quality of online education are provided. The paper closes with examples of Arab universities that have successfully implemented online education and expanded their impact on society, and it offers a strategy and model that universities in the Middle East can use as a roadmap for implementing online education in their regions.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
The customer usually encrypts his private data with his own encryption key and uploads the encrypted data to CS to preserve his privacy. Randomization in encryption, i.e., different keys generating different ciphertexts for the same data, makes deduplication infeasible.
Convergent encryption (CE) is a good solution to this problem. In CE, the encryption key is derived from the data itself [9], so every user having the same data obtains the same ciphertext, which makes deduplication of encrypted data possible. The major security flaw in CE is that the key derivation process is deterministic, i.e., the same file always derives the same encryption key; if attackers get the key, CE becomes worthless [9]. DupLESS [10] addresses this weakness of CE with a robust security scheme. However, in the early CE schemes, including DupLESS, a user may still decrypt the data after losing ownership of it, i.e., after ownership revocation.
In this paper, a secure deduplication scheme is proposed for storing encrypted data. It ensures authorized access to shared data, which is regarded as a major challenge for secure and efficient CS services [11]. Since ownership changes dynamically, each ownership group is handled by a group key management mechanism, and the proposed scheme guarantees that no confidential information leaks. The paper is organized as follows: related work is given in Section 2. Section 3 describes the system model and security requirements. Section 4 explains the preliminaries and notations. The proposed secure deduplication scheme is presented in Section 5, its security results are described in Section 6, and Section 7 concludes the paper.
2. RELATED WORK
Data deduplication is an advanced data compression technique which identifies the redundant data by
comparing the hash values of data chunks. It stores only the unique copy and creates logical links for redundant
copies instead of storing them [9, 12]. By this, it reduces the volume of data stored and as a result, the running
cost of the system decreases. Generally, it is applied where the data is stored or transmitted in CS [13]. It is
successfully adopted by backup storage in clouds to manage physical capacity and network traffic [14, 15]. It
can be divided into two categories: deduplication over unencrypted data and deduplication over encrypted data.
Data privacy is also an important issue in CS: the data must be protected from malicious attackers outside and inside the CS. Harnik et al. [16] showed that data deduplication can serve as a side channel revealing information to attackers, and they suggest a randomized threshold to mitigate such attacks on CS services. When a malicious user temporarily compromises a server and obtains the hash values of data in CS, he can download all of that data. A similar attack scenario was brought forward for CS that uses deduplication across multiple users: by obtaining only the hash value of a specific datum, a malicious user becomes powerful enough to access any matching data stored in CS. Mulazzani et al. [11] accessed Dropbox by manipulating hash values, showing that if a malicious user retrieves the SHA-256 hash values of a file's chunks, other Dropbox users' data may be accessed by spoofing those hashes.
The notion of proof-of-ownership was proposed by Halevi et al. [17] to overcome these attacks; in it, possession of the file is proved using Merkle trees instead of a hash value. Deduplication over unencrypted data stores plain data in CS and sends plain data to the user after verifying his ownership, so CS must be trusted, and users remain concerned about their privacy. To preserve privacy, users may encrypt their data before storing it in CS. Different users use different mechanisms to encrypt the data, so different ciphertexts are generated for the same data, which makes deduplication infeasible. To overcome this problem, Douceur et al. [5] proposed a new approach known as convergent encryption. In CE, the data owner generates the key from the data M itself as K = H(M), where H is a hash function. The key K and data M are mapped into the ciphertext C = E(K, M), where E is a block cipher. Then only K is retained, M is deleted, and C is uploaded to CS. If another user encrypts the same data, the same ciphertext is generated; hence if CS receives the same C from another user, it does not store it again and only the metadata is updated. Only authorized users can request and download the ciphertext C and decrypt it with K.
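A minimal Python sketch of this CE construction, assuming the third-party 'cryptography' package for the block cipher: the nonce is derived deterministically from the key, so identical plaintexts give identical ciphertexts, which is exactly the dedup-enabling (and deterministic-key) property discussed above:

import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # assumed available

def ce_encrypt(data):
    key = hashlib.sha256(data).digest()  # K = H(M): key derived from the data itself
    nonce = hashlib.sha256(key).digest()[:12]  # deterministic on purpose (for dedup)
    ciphertext = AESGCM(key).encrypt(nonce, data, None)  # C = E(K, M)
    return key, ciphertext  # keep K, delete M, upload C

def ce_decrypt(key, ciphertext):
    nonce = hashlib.sha256(key).digest()[:12]
    return AESGCM(key).decrypt(nonce, ciphertext, None)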
However, CE suffers from the poison attack [18], which mainly raises the tag consistency problem: a malicious user uploads a maliciously generated ciphertext together with the tag of the genuine data. When another user tries to store his ciphertext, the server finds the same tag and discards the upload; when a user later downloads and decrypts this ciphertext, the message turns out not to be the original, and the integrity of the data and of CS is lost. CE does not offer the required security and is found vulnerable to the poison attack [5]. This security flaw is resolved by the oblivious pseudo-random function (O-PRF) based DupLESS [10], which replaces the data-derived encryption key with a key generated through interaction between a key server and the user. It provides security against the poison attack, but the key is still deterministic.
3. SYSTEM MODEL AND SECURITY REQUIREMENTS
In this section, the data deduplication architecture and security requirements are described. Based on granularity, deduplication models are divided into two categories: coarse-grained and fine-grained. For the sake of simplicity, coarse-grained deduplication is preferred here.
3.1. System Description
Figure 1 shows the system model of the data deduplication system for cloud storage, having the
following entities:
Figure 1. The model of data deduplication system
i. Data owner: The data owner uploads the data to CS. He may encrypt his data before uploading and then uploads the ciphertext to CS with its index information. If the data uploaded by the data owner does not already reside in CS, the data owner is known as the initial uploader; otherwise he is called a subsequent uploader. An ownership group is the group of data owners who share the same data in CS.
ii. Cloud Server (CSS): The cloud server and the cloud storage are two different entities. The CSS performs the deduplication process on uploaded data, and if the data is not yet present in CS, the CSS stores it there. It manages the list of owners for stored data and controls access to the stored data through that ownership list. It is the group key authority that maintains a group key for each ownership group. It executes assigned tasks honestly but tries to learn as much information about the encrypted contents as possible. When a data owner requests to download a file, it provides access to the file after authenticating the owner's identity.
iii. Cloud Storage (CS): CS is a logical server managed by the cloud service provider over the internet. It offers functionality similar to a traditional server's, providing distributed storage with immediate access, and it is highly configurable and interchangeable.
3.2. Security Requirements
The following security requirements are targeted in this paper.
Data Confidentiality: Unauthorized and malicious users should be prevented from altering or learning the encrypted data. If data stored in CS is maliciously modified, its integrity may be lost; the deduplication algorithm should therefore protect the data from any poison attack, allow only authentic users to access it, and guarantee that the data uploaded to CS is not altered.
Collusion Resistance: The data owners want to keep their privacy, but malicious users without valid ownership of data may try to access it from CS; hence the CSS must prevent them from decrypting or accessing it, even if they collude.
Backward and Forward Secrecy: Restricting any user from accessing the plaintext of outsourced data from before his own upload is known as backward secrecy. Restricting any user who has deleted or modified data in CS from accessing the outsourced data after that deletion or modification is known as forward secrecy.
4. PRELIMINARIES
In this section, the basic definitions of a secure deduplication scheme for encrypted data are described.
4.1. Notations
The random and uniform selection of an element x from a finite set S is denoted by x ←$ S; it means one value from S is randomly assigned to x. y ← Al(x_1, ..., x_n) denotes that an algorithm Al runs on the input set (x_1, ..., x_n) and its result is assigned to y. 1^λ means a string of length λ having all 1s. The concatenation of two strings a and b is expressed as a ∥ b.
U = (u_1, ..., u_n) denotes the world of users. The identity of the i-th user u_i is written ID_i. Let G_a ⊆ U be the group of users having data M_a, and let L_a = {T_a, G_a} be the ownership list of M_a, where T_a denotes the tag; the CSS maintains the ownership list. Gk_a is the ownership group key, generally shared by the valid owners of G_a.
4.2. Definitions
A secure deduplication scheme for encrypted data is defined in this section. The algorithms used in this scheme are described below:
a. KEYs ← KEYGen(U): the set of users U is given as input to the KEYGen algorithm, which generates the KEYs corresponding to each user in U for secure ownership group key distribution.
b. C_i ← Enc(M, 1^λ): the data M and the security parameter λ are given as input to the Enc function, which encrypts M into the corresponding ciphertext C_i; C_i is composed of the encrypted data and tag indexing information.
c. C_i' ← ReEnc(C_i, G): the ciphertext C_i and the ownership group G are given as input to ReEnc, which maps C_i into the re-encrypted ciphertext C_i'; the characteristic of C_i' is that only valid owners in G can decrypt it.
d. M ← Dec(C_i', K, Pk): the re-encrypted ciphertext C_i', the message encryption key K, and a set of path keys Pk are supplied as input to the Dec function, which decrypts C_i' into the corresponding message M. Note that K is derived from the data M.
5. PROPOSED DEDUPLICATION SCHEME
In this paper, a secure deduplication scheme for encrypted data is proposed that offers dynamic ownership management. The randomized CE algorithm [19] is applied to randomize the encrypted data, which enables the proposed system to defend against poison attacks. A re-encryption module is also combined into the scheme for owner revocation: the outsourced ciphertext is re-encrypted on revocation, and the re-encryption key is distributed to the authentic owners through the CSS. Figure 2 illustrates the architecture of the proposed scheme. The ownership list for each data item is maintained by the CSS to handle dynamic ownership management. Giving the CSS the ownership list does not violate the security requirements: the CSS only handles the re-encrypted ciphertext, and the remaining information regarding data encryption is meaningless to it.
Figure 2. Scheme overview and corresponding security
5.1. System Construction
Let E_K(M) denote symmetric encryption [20] of data M with the key K, where E_K(·) is a block cipher. A cryptographic hash function H : {0,1}* → {0,1}^k is used to derive an encryption key and a tag from the message, where k denotes the length of the key K.
5. Int J Inf & Commun Technol ISSN: 2252-8776
A secure deduplication scheme for encrypted data (Vishal Passricha)
81
Key Generation: The KEYs for the users in U are generated by the CSS by running KEYGen(U). Initially, a binary KEY tree is constructed over the world of users U by the CSS. Figure 3 shows the binary KEY tree through which users obtain ownership group keys. Each node v_j has a KEY, represented as KEY_j. The path keys of a user are the set of KEYs from his leaf up to the root, maintained separately by each user; Figure 3 shows how u_3 keeps KEY_6, KEY_3, and KEY_1 as its path keys Pk_3. For u_j ∈ U, the path keys of u_j are denoted Pk_j. The CSS constructs the KEY tree as follows:
1. Each user of U gets a leaf node, and randomly generated keys are assigned to each node of the tree.
2. Each user u_j ∈ U securely receives the path keys Pk_j. In the re-encryption phase, the CSS uses the path keys as KEYs to encrypt the ownership group keys.
Figure 3. KEY tree for ownership group key distribution
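A minimal Python sketch of such a binary KEY tree in array layout (node j has children 2j and 2j+1; leaves n..2n-1 host users u_1..u_n), with illustrative function names; for four users it reproduces Figure 3, where u_3's path keys are KEY_6, KEY_3, KEY_1:

import os

def build_key_tree(num_users):
    # One random 128-bit KEY per tree node.
    return {j: os.urandom(16) for j in range(1, 2 * num_users)}

def path_keys(keys, user_index, num_users):
    # Pk_j: the KEYs on the path from the user's leaf up to the root.
    node = num_users + user_index
    path = []
    while node >= 1:
        path.append(node)
        node //= 2
    return {n: keys[n] for n in path}

# Example: path_keys(build_key_tree(4), 2, 4) covers nodes 6, 3, 1 for u_3.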
Data Encryption: If a data owner u_a wants to upload data M_a to CS, he runs the Enc(M, 1^λ) module to encrypt it. L ←$ {0,1}^k denotes the random message encryption key, whose length is determined by the security parameter. K_a = H(M_a) is computed from M_a and is used to encrypt the message encryption key L, and T_a = H(K_a) is computed from K_a as the indexing information of the data. C_a^1 = E_L(M_a) and C_a^2 = L ⊕ K_a encrypt the data and the encryption key respectively, and the ciphertext C_a is constructed by concatenation, i.e., C_a = C_a^1 ∥ C_a^2.
After generation of C_a, the message upload ∥ T_a ∥ C_a ∥ ID_a is sent to CS. After that, M_a is deleted to save storage space and K_a is kept. If u_a is the first uploader, the cloud server inserts ID_a into G_a, generates L_a = {T_a, G_a}, and stores C_a in CS; otherwise it inserts ID_a into G_a and discards C_a.
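In code, the encryption step can be sketched as below, again assuming the 'cryptography' package; AES-GCM with a random nonce stands in for the block cipher E, which the scheme leaves abstract:

import hashlib, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # assumed available

def enc_upload(message):
    L = AESGCM.generate_key(bit_length=256)  # random message encryption key L
    K = hashlib.sha256(message).digest()     # K_a = H(M_a)
    T = hashlib.sha256(K).digest()           # T_a = H(K_a), the dedup tag
    nonce = os.urandom(12)
    C1 = nonce + AESGCM(L).encrypt(nonce, message, None)  # C1 = E_L(M_a)
    C2 = bytes(x ^ y for x, y in zip(L, K))               # C2 = L XOR K_a
    return T, (C1, C2)  # owner keeps K_a, deletes M_a, uploads T, C, ID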
Data Re-Encryption: The cloud server runs ReEnc(C_a, G_a) to re-encrypt the ciphertext C_a using the ownership group information before sharing. The objective of the re-encryption algorithm is to control the dynamically changing owners of the outsourced data. Re-encryption is done in the following steps:
1. C_a^1' = E_{Gk_a}(C_a^1) is generated by re-encrypting the ciphertext C_a^1 with a random ownership group key Gk_a.
2. Root nodes are selected so that they cover exactly the leaf nodes associated with the users in G_a. KEY(G_a) denotes the set of KEYs of the root nodes of the subtrees covering G_a. In Figure 3, if G_a = {u_1, u_3, u_4} then KEY(G_a) = {KEY_3, KEY_4}, because the root nodes (v_3, v_4) cover all the members of G_a. No user outside G_a can know any KEY in KEY(G_a).
3. C_a^3 = {E_k(Gk_a)}_{k ∈ KEY(G_a)}: the ownership group key is encrypted under every KEY in KEY(G_a) before being distributed to the valid owners. When a user u_j requests the data, the query T_j ∥ ID_j is received by the CSS; the CSS validates it against L_j and then sends T_j ∥ C_j' to the user, otherwise the request is declined.
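Step 2's subset cover can be sketched as follows, reusing the array-layout tree from the earlier sketch; on the Figure 3 example, cover_nodes({0, 2, 3}, 4) for G_a = {u_1, u_3, u_4} returns nodes {3, 4}, i.e., KEY_3 and KEY_4:

def cover_nodes(member_indices, num_users):
    # Start from the members' leaves and greedily merge sibling pairs
    # into their parent, so the result is a minimal set of subtree roots
    # whose leaves are exactly the group members.
    nodes = {num_users + i for i in member_indices}
    merged = True
    while merged:
        merged = False
        for n in sorted(nodes):
            if n > 1 and (n ^ 1) in nodes:  # n ^ 1 is n's sibling
                nodes -= {n, n ^ 1}
                nodes.add(n // 2)
                merged = True
                break
    return nodes

C_a^3 is then the group key Gk_a wrapped once under the KEY of every node in this cover, so each member can unwrap it with one of its path keys.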
Data Decryption: If u_a ∈ G_a, the module Dec(C_a', K_a, Pk_a) is run to decrypt the ciphertext which the user u_a receives from the CSS. The ciphertext is first parsed into T_a, C_a^1', C_a^2, C_a^3. The ownership group key is extracted as Gk_a = D_KEY(C_a^3) with KEY ∈ KEY(G_a) ∩ Pk_a; only a valid owner can decrypt the ownership group key Gk_a, because KEY(G_a) ∩ Pk_a is non-empty only for members of G_a. If the user u_a is not a member of G_a, he cannot decrypt Gk_a. Once Gk_a is retrieved, the ciphertext is decrypted into the message as follows:
C_a^1 = D_{Gk_a}(C_a^1'), L = C_a^2 ⊕ K_a, M_a = D_L(C_a^1), T_a^1 = H(H(M_a))
Any modification made by a poison attack is detected by validating the tag. If T_a^1 ≠ T_a, a tag inconsistency is present and the message is dropped; otherwise M_a is the original data outsourced by the owner and is accepted. Note that if a valid owner requests the cloud server to delete or modify his data in CS, his ownership must be removed from the ownership list, restricting him from accessing the data; by this step, forward secrecy is maintained. When a subsequent user uploads the same data, it is re-encrypted to offer backward secrecy, which prevents the subsequent user from accessing the previously encrypted data. The working procedure of the proposed scheme is shown in Figure 4.
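Under the same assumptions as the earlier encryption sketch (AES-GCM standing in for the abstract block cipher, and unwrap_group_key as a hypothetical helper that tries the caller's path keys against C^3), the decryption steps combine as:

import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # assumed available

def dec_download(T, C1_re, C2, C3, K, my_path_keys, unwrap_group_key):
    Gk = unwrap_group_key(C3, my_path_keys)  # succeeds only if KEY(G) and Pk intersect
    nonce, body = C1_re[:12], C1_re[12:]
    C1 = AESGCM(Gk).decrypt(nonce, body, None)   # strip the re-encryption layer
    L = bytes(x ^ y for x, y in zip(C2, K))      # L = C2 XOR K
    nonce2, body2 = C1[:12], C1[12:]
    M = AESGCM(L).decrypt(nonce2, body2, None)   # M = D_L(C1)
    if hashlib.sha256(hashlib.sha256(M).digest()).digest() != T:
        raise ValueError("tag inconsistency: possible poison attack")  # drop M
    return M  # tag valid: accept M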
Figure 4. The working procedure of the proposed secure scheme (message flow among the user, the cloud server, and the cloud storage: the user runs Enc and uploads T_a ∥ C_a ∥ ID_a; the cloud server checks whether L_a already exists, inserts ID_a into G_a, re-encrypts with a randomly chosen ownership group key, and answers download requests; the cloud storage stores or retrieves C_a; the downloading user runs Dec and validates the tag)
6. SCHEME ANALYSIS
This section compares the proposed scheme with earlier deduplication schemes. Table 1 compares the proposed scheme with CE [5], weak-leakage-resilient (WLR) deduplication [18], and randomized CE (RCE) [19] over encrypted data. All of these schemes guarantee data privacy in the CSS by allowing data owners to encrypt their data, but none of them provides dynamic ownership management.
Table 1. The Comparison of the Proposed Scheme in Terms of Encrypted Deduplication, Tag Consistency, and Dynamic Ownership Management
Scheme | Encrypted Deduplication | Tag Consistency | Dynamic Ownership Management
CE [5] | YES | NO | NO
WLR [18] | YES | YES | NO
RCE [19] | YES | YES | NO
Proposed | YES | YES | YES
The notations used are as follows:
Sc: size of the accepted data
Sk: size of a key
ST: size of a tag
SID: size of a user identity
Sr: size of a node value in the Merkle hash tree
SPoW: size of the exchanged message for proof-of-ownership
The key size and tag size determine the storage overhead: they are the sizes of the keys and the tag information stored by each owner, respectively. Table 2 compares all these techniques in terms of storage overhead.
Table 2. The Comparison of the Proposed Scheme in Terms of Storage Overhead
Scheme | Key Size | Tag Size
CE [5] | Sk | ST
WLR [18] | 2Sk + SM*ST | ST
RCE [19] | Sk | ST
Proposed Scheme | (log n + 1)Sk | ST
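Reading Table 2 on a concrete case: with n = 1024 users, each owner in the proposed scheme keeps log2(1024) + 1 = 11 path keys (11 Sk) plus one tag, against a single key in CE and RCE; this modest key-storage increase is the price of dynamic ownership management.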
Table 3 compares all these techniques in terms of communication overhead; the communication cost of the proposed scheme was evaluated by network simulation.
Table 3. The Comparison of the Proposed Scheme in Terms of Communication Overhead
Scheme | Upload Message Size | Download Message Size
CE [5] | Sc + Sr + SID | Sc
WLR [18] | Sc + 3Sk + Sr + SID | Sc + SPoW + Sk + ST
RCE [19] | Sc + Sk + Sr + SID | Sc + Sk + ST
Proposed Scheme | Sc + Sk + ST + SID | Sc + Sk + ST
Figure 5 shows the encryption and decryption time of the proposed scheme with respect to data size. Figure 6 shows the encryption time of the proposed scheme for large data chunks with different numbers of attributes, and Figure 7 shows the corresponding decryption time.
Figure 5. Encryption and decryption time w.r.t. size of data for the proposed scheme
Figure 6. Encryption process (data size vs. time)
Figure 7. Decryption time (data size vs. time)
6.1. Security
The proposed scheme is more secure in terms of data confidentiality, collusion resistance, and backward and forward secrecy.
Data Confidentiality: In the proposed scheme, the CSS is honest but not fully trusted, so plain data must be kept secret from the CSS and from unauthorized users. Without loss of generality, a malicious user may request data M_i with a tag T_i and a valid user's identity ID_t ∈ G_i, and obtain the ciphertext T_i ∥ C_i^1' ∥ C_i^2 ∥ C_i^3. If the malicious user is not an owner of M_i, it is impossible for him to retrieve K_i and the data encryption key L because of the cryptographic hash function. Therefore, the proposed scheme guarantees data confidentiality against unauthorized users.
Collusion Resistance: Unauthorized users without valid ownership should be unable to decrypt the data even if they collude. In this model, both the data encryption key L and the ownership group key Gk are necessary to decrypt the ciphertext into plain data. A colluding malicious user may obtain the encrypted key material for L, but the ownership group key Gk is not obtainable. Hence, the proposed scheme offers proper security against collusion.
Backward and Forward Security: If a user tries to upload data already stored in CS, the ownership
group key is randomly updated from Gk to Gk'. Gk' is sent confidentially and promptly to the data's owners. C^1' = E_{Gk}(C^1) was encrypted under Gk, and in parallel the CSS re-encrypts it under the updated ownership group key Gk'. Hence, the new uploader cannot decrypt the previously stored ciphertext. By this, the proposed algorithm offers backward and forward secrecy.
7. CONCLUSION
Deduplication is an efficient technique successfully employed in various large-scale and cloud storage systems. Recently, the conflicts that arise when deduplication is applied to encrypted data have largely been resolved; however, the ownership of outsourced data is still an issue, and dynamic ownership management remains very challenging. In this paper, a secure deduplication scheme for encrypted data has been proposed. Its re-encryption technique provides the power to manage any ownership change in the CSS: as the ownership group of outsourced data changes, the data are promptly re-encrypted using the ownership group key, and this key is also updated among the valid owners. By this, data confidentiality in CS is strengthened. Tag consistency is also used to retain the advantages of efficient data deduplication over encrypted data. The proposed scheme is more effective than earlier schemes in terms of communication cost. Therefore, the proposed scheme offers secure and efficient data deduplication in cloud storage.
REFERENCES
[1] Reinsel D, Gantz J, Rydning J. “Data Age 2025: The Evolution of Data to Life-Critical”. An IDC White Paper.
Seagate, November 2018.
[2] Michael Armbrust, Armando Fox, Rean Griffith, Anthony D. Joseph, Randy Katz, Andy Konwinski, Gunho Lee,
David Patterson, Ariel Rabkin, Ion Stoica, and Matei Zaharia. “A view of cloud computing.” Commun. ACM, vol.
53, no 4 (April 2010), pp. 50-58.
[3] Bolosky WJ, Douceur JR, Ely D, Theimer M. “Feasibility of a serverless distributed file system deployed on an
existing set of desktop.” ACM SIGMETRICS Performance Evaluation Review. Vol. 28, no. 1 (2000), pp. 34-43.
[4] Clements AT, Ahmad I, Vilayannur M, Li J, editors. “Decentralized Deduplication in SAN Cluster File Systems.”
USENIX annual technical conference; 2009; San Diego.
[5] Douceur JR, Adya A, Bolosky WJ, Simon P, Theimer M. “Reclaiming space from duplicate files in a serverless
distributed file system.” 22nd International Conference on Distributed Computing Systems; 2002; Vienna, Austria.
[6] Dropbox. www.dropbox.com. 2014.
[7] Google Drive. https://drive.google.com. 2016.
[8] IDRIVE. https://www.idrive.com. 2017.
[9] Storer MW, Greenan K, Long DD, Miller EL, editors. “Secure data deduplication.” 4th ACM international workshop
on Storage security and survivability; 2008; Alexandria, USA.
[10] Bellare, Mihir & Keelveedhi, Sriram & Ristenpart, Thomas. (2013). “DupLESS: Server-aided encryption for
deduplicated storage.” 22nd USENIX Conference on Security. 179-194.
[11] Mulazzani M, Schrittwieser S, Leithner M, Huber M, Weippl ER, editors. “Dark Clouds on the Horizon: Using
Cloud Storage as Attack Vector and Online Slack Space.” USENIX security symposium; 2011; San Francisco, CA.
[12] J. Yuan and S. Yu, "Secure and constant cost public cloud storage auditing with deduplication," 2013 IEEE
Conference on Communications and Network Security (CNS), National Harbor, MD, 2013, pp. 145-153.
[13] Shin, Youngjoo & Koo, Dongyoung & Hur, Junbeom. “A Survey of Secure Data Deduplication Schemes for Cloud
Storage Systems.” ACM Computing Surveys. vol. 49, no. 74. pp. 1-38. January 2017.
[14] V. Javaraiah, "Backup for cloud and disaster recovery for consumers and SMBs," 2011 Fifth IEEE International
Conference on Advanced Telecommunication Systems and Networks (ANTS), Bangalore, 2011, pp. 1-3.
[15] Y. Tan, H. Jiang, D. Feng, L. Tian, Z. Yan and G. Zhou, "SAM: A Semantic-Aware Multi-tiered Source De-
duplication Framework for Cloud Backup," 2010 39th International Conference on Parallel Processing, San Diego,
CA, 2010, pp. 614-623.
[16] D. Harnik, B. Pinkas and A. Shulman-Peleg, "Side Channels in Cloud Services: Deduplication in Cloud Storage,"
in IEEE Security & Privacy, vol. 8, no. 6, pp. 40-47, Nov.-Dec. 2010.
[17] Halevi, Shai & Harnik, Danny & Pinkas, Benny & Shulman-Peleg, Alexandra. “Proofs of ownership in remote
storage systems”. Proceedings of the ACM Conference on Computer and Communications Security. 491-500. 2011.
[18] Xu, Susan Jia, Ee-Chien Chang and Jianying Zhou. “Weak leakage-resilient client-side deduplication of encrypted
data in cloud storage.” AsiaCCS (2013).
[19] Bellare M, Keelveedhi S, Ristenpart T, editors. “Message-locked encryption and secure deduplication.” International
Conference on the Theory and Applications of Cryptographic Techniques; 2013; Athens, Greece.
[20] A. Russell and Hong Wang, "How to fool an unbounded adversary with a short key," in IEEE Transactions on
Information Theory, vol. 52, no. 3, pp. 1130-1140, March 2006.
BIOGRAPHIES OF AUTHORS
Vishal Passricha received his B.Tech. with honors in Computer Engineering from Kurukshetra University, India, in 2010, and his M.Tech. with honors in Computer Engineering from YMCAUST, Faridabad, India, in 2012. He is currently working as an assistant professor in the Computer Engineering Department, National Institute of Technology, Kurukshetra, where he is also pursuing a Ph.D. His current research focuses on data deduplication and machine learning.
Ashish Chopra is currently an Assistant Professor at the National Institute of Technology, Kurukshetra, India. He received his Ph.D. and MCA degrees from the Department of Computer Applications, NIT Kurukshetra, in 2013 and 2006 respectively. His Ph.D. research work mainly focused on mobile ad-hoc networks; his current research focuses on wireless sensor networks and data deduplication.
Pooja Sharma received her Bachelor of Science in Computer Application from Kurukshetra University, Kurukshetra, India, in 2009, and her Master of Computer Applications from Kurukshetra University in 2012. She has been working as a lecturer at Government College for Women, Karnal, since August 2013, and has guided many undergraduate and postgraduate projects.
Shubhanshi Singhal received her B.Tech. in Computer Science and Engineering from Uttar Pradesh Technical University, Lucknow, India, and her M.Tech. in Computer Engineering from Kurukshetra University, Kurukshetra. Presently she is working as an Assistant Professor in the Computer Science and Engineering Department, TERii, Kurukshetra. Her areas of interest are data deduplication and machine learning.