This is my capstone project for my Master's in Computer Science (2023) at the Rochester Institute of Technology. I sincerely thank Dr. M. Mustafa Rafique and Dr. Hans-Peter Bischof for their guidance and support throughout this process.
This paper describes how to improve and build upon existing data distribution algorithms for a fog computing environment. It uses AES encryption and Reed-Solomon coding libraries to improve the existing architecture.
This paper also builds on existing research: Tian Wang et al., "A Three-Layer Privacy-Preserving Cloud Storage Scheme Based on Computational Intelligence in Fog Computing," IEEE Transactions on Emerging Topics in Computational Intelligence (TETCI), vol. 2, no. 1, pp. 3-12, Feb. 2018.
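Although the full design appears later in the paper, a minimal sketch of the two building blocks named above, AES for confidentiality and Reed-Solomon coding for redundancy, might look like the following. This is a sketch only: it assumes the PyCryptodome and reedsolo Python packages, and the four-node round-robin striping is an illustrative choice, not the project's actual topology.

```python
# Sketch: encrypt a payload with AES-GCM, add Reed-Solomon parity bytes,
# then stripe the coded result across several fog nodes.
# Assumes: pip install pycryptodome reedsolo
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes
from reedsolo import RSCodec

def protect_and_stripe(data: bytes, n_nodes: int = 4):
    key = get_random_bytes(16)                 # AES-128 data key
    cipher = AES.new(key, AES.MODE_GCM)
    ciphertext, tag = cipher.encrypt_and_digest(data)

    rsc = RSCodec(32)                          # 32 parity symbols per 255-byte chunk
    coded = bytes(rsc.encode(ciphertext))

    # Round-robin striping: node i receives every n_nodes-th byte.
    stripes = [coded[i::n_nodes] for i in range(n_nodes)]
    return key, cipher.nonce, tag, stripes

key, nonce, tag, stripes = protect_and_stripe(b"sensor reading 42")
print([len(s) for s in stripes])
```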
IRJET - Multi Authority based Integrity Auditing and Proof of Storage wit... (IRJET Journal)
This document proposes a method for secure data storage and integrity auditing in the cloud using multi-level encryption and data deduplication. The method first checks for duplicate files using hash comparisons and avoids uploading duplicate data to save storage space. It then encrypts data using AES, DES and RSA algorithms sequentially, generating keys each time. The keys are then re-encrypted using AES and stored on separate servers. This multi-level encryption and distribution of encrypted data and keys across servers makes the data more secure. It also enables proof of storage and authentication of authorized users using time-based keys.
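The duplicate check described above is essentially a content-hash lookup before upload; a minimal sketch (using SHA-256 and an in-memory index as stand-ins for whatever hash and server-side store the paper actually uses) could be:

```python
# Sketch: skip uploading a file whose content hash is already known.
import hashlib

stored_hashes: set[str] = set()   # in practice, a server-side index

def upload_if_new(content: bytes) -> bool:
    digest = hashlib.sha256(content).hexdigest()
    if digest in stored_hashes:
        return False              # duplicate: store only a reference
    stored_hashes.add(digest)
    # ... the actual upload of `content` would happen here ...
    return True

assert upload_if_new(b"report.pdf bytes") is True
assert upload_if_new(b"report.pdf bytes") is False   # deduplicated
```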
Improving Data Storage Security in Cloud using Hadoop (IJERA Editor)
The rising abuse of information stored in large cloud data centres emphasizes the need to safeguard the data. Despite strict authentication policies, cloud users' data, even when transferred over a secure channel, is vulnerable to numerous attacks once it reaches the data centres. The most widely adopted methodology for safeguarding cloud data is encryption, but encrypting large volumes of data in the cloud is a time-consuming process. The paper uses AES encryption for secure transmission, rendering sensitive information unreadable to everyone except the intended receiver. Compressing the data also improves utilization of cloud storage space. The approach is augmented with Hadoop's MapReduce paradigm, which runs in parallel, and the experimental results reflect the effectiveness of the methodology in improving data security in the cloud environment.
An Approach towards Shuffling of Data to Avoid Tampering in Cloud (IRJET Journal)
This document proposes an approach to secure data stored in the cloud by using data shuffling, access control policies, and deduplication. It discusses encrypting user data using AES before uploading it to the cloud. An administrator can control the shuffling of encrypted data between servers at regular intervals to avoid tampering. Access control policies require authorized users to authenticate with a secret key before performing file operations. Deduplication prevents duplicate files from being stored to reduce storage usage by hashing and comparing file contents. The proposed approach aims to enhance security, prevent data tampering and duplication, and efficiently use cloud storage.
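The interval-based shuffling could be realized as a periodic re-mapping of encrypted blocks to servers; a toy sketch (server names, block count, and the round-robin reassignment are all illustrative assumptions, not the paper's design):

```python
# Sketch: periodically re-shuffle which server holds which encrypted block.
import random

servers = ["srv-a", "srv-b", "srv-c"]
placement = {f"block-{i}": servers[i % len(servers)] for i in range(6)}

def shuffle_placement(placement: dict) -> dict:
    blocks = list(placement)
    random.shuffle(blocks)
    # Reassign blocks round-robin so an attacker cannot rely on a fixed location.
    return {b: servers[i % len(servers)] for i, b in enumerate(blocks)}

placement = shuffle_placement(placement)   # run this on a timer, e.g. hourly
print(placement)
```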
A Novel Approach for Data Security in Cloud Environment (SHREYASSRINATH94)
Businesses and enterprises are shifting their workloads from traditional, obsolete systems to cloud servers. The reasons for the shift are rapid application deployment, which lets developers launch applications in a short time, lower-cost operating models, scalability, and the ability to use an operational budget. The increased demand for compute-intensive applications has led to huge growth in cloud services. Security of the files and applications stored in the cloud is a major concern due to the lack of standardized control of the cloud. Traditional cryptographic algorithms like AES and RSA, though robust, are not lightweight in a mobile cloud environment. This paper proposes a novel cryptographic method wherein the file is encrypted before being stored in the cloud and decrypted with the same key when needed. Experimental results on the identified test cases show that the proposed algorithm is lightweight in terms of execution time and processing cycles, which benefits users encrypting large files in the cloud.
Multi-part Dynamic Key Generation For Secure Data Encryption (CSCJournals)
Storage of user or application-generated user-specific private, confidential data on a third party storage provider comes with its own set of challenges. Although such data is usually encrypted while in transit, securely storing such data at rest presents unique security challenges. The first challenge is the generation of encryption keys to implement the desired threat containment. The second challenge is secure storage and management of these keys. This can be accomplished in several ways. A naive approach can be to trust the boundaries of a secure network and store the keys within these bounds in plain text. A more sophisticated method can be devised to calculate or infer the encryption key without explicitly storing it. This paper focuses on the latter approach. Additionally, the paper also describes the implementation of a system that in addition to exposing a set of REST APIs for secure CRUD operations also provides a means for sharing the data among specific users.
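One common way to "calculate" a key rather than store it is to derive it from stable inputs with a key derivation function; the sketch below uses PBKDF2 over a master secret and the user id, which is my own choice for illustration rather than the paper's exact construction:

```python
# Sketch: derive a per-user encryption key from a master secret and user id,
# so no per-user key ever needs to be stored.
# Assumes: pip install pycryptodome
from Crypto.Protocol.KDF import PBKDF2
from Crypto.Hash import SHA256

MASTER_SECRET = b"kept-in-an-hsm-or-vault"   # illustrative placeholder

def user_key(user_id: str) -> bytes:
    # The user id acts as the salt; the same inputs always yield the same key.
    return PBKDF2(MASTER_SECRET, user_id.encode(), dkLen=32,
                  count=200_000, hmac_hash_module=SHA256)

assert user_key("alice") == user_key("alice")   # reproducible, never stored
assert user_key("alice") != user_key("bob")
```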
Role Based Access Control Model (RBACM) With Efficient Genetic Algorithm (GA)... (dbpublications)
This document summarizes a research paper that proposes a new cloud data security model using role-based access control, encryption, and genetic algorithms. The model uses Token Based Data Security Algorithm (TBDSA) combined with RSA and AES encryption to securely encode, encrypt, and forward cloud data. A genetic algorithm is used to generate encrypted passwords for cloud users. Role managers are assigned to control user roles and data access. The aim is to integrate encoding, encrypting, and forwarding for secure cloud storage while minimizing processing time.
This document discusses effective modular order preserving encryption on cloud using multivariate hypergeometric distribution (MHGD). It begins with an abstract that describes how order preserving encryption allows efficient range queries on encrypted data. It then provides background on cloud computing security concerns and discusses existing approaches to searchable encryption, including probabilistic encryption, deterministic encryption, homomorphic encryption, and order preserving encryption. The key proposed approach is to improve the security of existing modular order preserving encryption approaches by utilizing MHGD.
This document summarizes a research paper that proposes a security architecture for cloud computing that dynamically configures cryptographic algorithms and keys based on security policies and inputs like network access risk and data sensitivity. The architecture aims to improve security while reducing costs by only using the necessary level of encryption for each situation. It describes using the Blowfish algorithm instead of AES and adjusting the key size from 128 to 448 bits depending on factors like network type and data size. Results show Blowfish has better performance than AES, especially with larger keys on larger amounts of data. The goal is to provide flexible, efficient security tailored to each user's needs.
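A sketch of the policy-driven key sizing with Blowfish (via PyCryptodome; the sensitivity-to-key-size thresholds are invented here for illustration, not taken from the paper):

```python
# Sketch: pick a Blowfish key size from a simple security policy.
# Assumes: pip install pycryptodome
from Crypto.Cipher import Blowfish
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad

def key_bytes_for(sensitivity: str) -> int:
    # Blowfish accepts keys from 32 to 448 bits (4 to 56 bytes).
    return {"low": 16, "medium": 32, "high": 56}[sensitivity]

def encrypt(data: bytes, sensitivity: str):
    key = get_random_bytes(key_bytes_for(sensitivity))
    cipher = Blowfish.new(key, Blowfish.MODE_CBC)
    return key, cipher.iv, cipher.encrypt(pad(data, Blowfish.block_size))

key, iv, ct = encrypt(b"patient record", "high")   # 448-bit key
```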
IRJET - A Novel Approach Implementing Deduplication using Message Locked Encr... (IRJET Journal)
This document proposes a novel approach to implementing data deduplication on the cloud using message locked encryption. It aims to overcome limitations of existing deduplication techniques like convergent encryption by using erasure code technology, encryption algorithms like DES and MD5 hashing, and tokenization to securely store and protect client data on the cloud. The proposed system gives clients proof of ownership of their data by allowing them to choose who can access their files and see any changes made over time. The system architecture involves a client uploading encrypted data to the cloud, and recipients selected by the client being able to access and retrieve encrypted pieces of the data.
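Message-locked (convergent) encryption derives the key from the message itself, so identical plaintexts produce identical ciphertexts and can be deduplicated; a minimal sketch, not the paper's exact construction:

```python
# Sketch: convergent (message-locked) encryption - the key is derived from
# the content itself, so identical files encrypt identically and deduplicate.
# Assumes: pip install pycryptodome
import hashlib
from Crypto.Cipher import AES

def convergent_encrypt(data: bytes) -> tuple[bytes, bytes]:
    key = hashlib.sha256(data).digest()      # key = H(message)
    # A fixed nonce is what makes equal plaintexts yield equal ciphertexts;
    # that determinism is the point of MLE, but it weakens normal AES-GCM
    # guarantees, so it suits deduplication scenarios only.
    cipher = AES.new(key, AES.MODE_GCM, nonce=b"\x00" * 12)
    ct, tag = cipher.encrypt_and_digest(data)
    return key, ct + tag

k1, c1 = convergent_encrypt(b"shared file")
k2, c2 = convergent_encrypt(b"shared file")
assert c1 == c2        # duplicates are detectable server-side
```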
International Journal of Computational Engineering Research (IJCER) (ijceronline)
International Journal of Computational Engineering Research (IJCER) is an international, English-language, monthly online journal. It publishes original research that contributes significantly to scientific knowledge in engineering and technology.
Enforcing multi-user access policies in cloud computing (IAEME Publication)
This document discusses enforcing multi-user access policies in cloud computing. It describes how encryption techniques can be used to securely store data in the cloud and allow authorized users to access encrypted data through key management. The document also discusses security risks in cloud computing like authentication, access control and data leaks. It argues that a policy-based approach is needed to define and enforce access policies for users to access encrypted data securely in the cloud.
Dynamic Resource Allocation and Data Security for Cloud (AM Publications)
Cloud computing is the next generation of IT organization. It moves software and databases to large data centres, where the management of services and data may not be fully trusted. This work focuses on cloud data storage security, an important aspect of quality of service. To ensure the correctness of users' data in the cloud, it proposes an effective scheme based on the Advanced Encryption Standard (AES) and the MD5 algorithm; extensive security and performance analysis shows the proposed scheme is highly efficient. The proposed work also develops efficient parallel data processing in clouds and presents a research project on parallel security, a data processing framework that explicitly exploits dynamic storage alongside data security, together with a strong, formal model for data security and corruption detection in the cloud.
Bio-Cryptography Based Secured Data Replication Management in Cloud Storage (IJERA Editor)
Cloud computing is a new way to obtain economical and efficient storage. A single-data-mart storage system is less secure because the data remains within a single data mart, which can lead to data loss from causes such as hacking or server failure. If an attacker chooses to target a specific client, he can aim at that client's fixed cloud provider and try to gain access to the client's information. This makes the attacker's job easy, and both inside and outside attackers can exploit data mining to a great extent (inside attackers being malicious employees at a cloud provider). The single-data-mart storage architecture is therefore the biggest security threat concerning data mining on the cloud, so this paper presents a secure replication approach that encrypts data using bio-cryptography and replicates it across a distributed data-mart storage system. The approach involves the encryption, replication, and storage of data.
Effective & Flexible Cryptography Based Scheme for Ensuring User's Data Secur... (ijsrd.com)
Cloud computing has been envisioned as the next-generation architecture of IT enterprise. In contrast to traditional solutions, where the IT services are under proper physical, logical and personnel controls, cloud computing moves the application software and databases to the large data centers, where the management of the data and services may not be fully trustworthy. This unique attribute, however, poses many new security challenges which have not been well understood. In this article, we focus on cloud data storage security, which has always been an important aspect of quality of service. To ensure the correctness of users' data in the cloud, we propose an effective and flexible cryptography based scheme. Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against malicious data modification attack.
A PRACTICAL CLIENT APPLICATION BASED ON ATTRIBUTE-BASED ACCESS CONTROL FOR UN... (cscpconf)
One of the widely used cryptographic primitives for cloud applications is Attribute-Based Encryption (ABE), where users have their own attributes and a ciphertext is encrypted under an access policy. Though ABE provides many benefits, the novelty often exists only in the academic world, and it is often difficult to find a practical use of ABE in a real application. In this paper, we discuss the design and implementation of a cloud storage client application which supports the concept of ABE. Our proposed client provides an effective access control mechanism that allows different types of access policy to be defined, thus allowing large datasets to be shared by multiple users; using different access policies, each user needs to access only a small part of the big data. The goal of our experiment is to explore the right set of strategies for developing a practical ABE-based system. Through the implementation and evaluation, we have determined the various characteristics and issues associated with developing a practical ABE-based application.
Improved deduplication with keys and chunks in HDFS storage providers (IRJET Journal)
The document proposes a new deduplication scheme for HDFS storage providers that improves reliability. It uses MD5 hashing to generate unique tags for files and blocks, and stores the tags in a metadata file. Ownership verification is done during upload/download by checking these tags. Encrypted data is distributed across block servers for reliability. Convergent keys derived from hashes encrypt the data blocks using 3DES. This ensures security while allowing deduplication. The scheme achieves both file-level and block-level deduplication and uses distributed key servers for reliable key management at large scale.
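A rough sketch of this tag-and-encrypt flow (the 4 KiB block size and the SHA-256-derived 3DES key are my assumptions; the scheme's exact key derivation may differ):

```python
# Sketch: MD5 tags identify duplicate blocks, and each block is encrypted
# under a key derived from its own contents (3DES, as in the scheme above).
# Assumes: pip install pycryptodome
import hashlib
from Crypto.Cipher import DES3
from Crypto.Util.Padding import pad

def process_block(block: bytes):
    tag = hashlib.md5(block).hexdigest()          # dedup tag for the metadata file
    # Convergent key: derived from the block itself, parity-adjusted for 3DES.
    key = DES3.adjust_key_parity(hashlib.sha256(block).digest()[:24])
    cipher = DES3.new(key, DES3.MODE_CBC)
    return tag, cipher.iv, cipher.encrypt(pad(block, DES3.block_size))

data = b"x" * 10000
blocks = [data[i:i + 4096] for i in range(0, len(data), 4096)]
tags = [process_block(b)[0] for b in blocks]      # block-level dedup tags
```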
Today, the growth of digitalization has made livelihood easier for all organizations. Cloud computing, the storage provider for all computing resources, has made it easy to access data from anywhere at any time. At the same time, the security of cloud data storage is the major drawback, addressed by various cryptographic algorithms that convert data into an unreadable format known as ciphertext; Rivest-Shamir-Adleman (RSA) is one of the most popular asymmetric algorithms. This paper gives a detailed review of the different cryptographic algorithms used for cloud data security. A comparison study of data sizes, encryption times, and decryption times concludes that add-on techniques should be used alongside these cryptographic algorithms to enhance cloud data security. To increase the security level and the transmission speed, an integrated method is proposed: the plaintext is encoded to an intermediate plaintext, the intermediate plaintext is compressed with a compression technique to increase the compression ratio, and finally the compressed file is encrypted to further enhance security.
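The proposed encode-compress-encrypt pipeline might look like the following sketch (Base64 and zlib stand in for whichever encoding and compression techniques the integrated method finally selects):

```python
# Sketch of the pipeline: encode to an intermediate plaintext, compress it,
# then encrypt the compressed result.
# Assumes: pip install pycryptodome
import base64, zlib
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

def encode_compress_encrypt(plaintext: bytes, key: bytes):
    intermediate = base64.b64encode(plaintext)      # "intermediate plaintext"
    compressed = zlib.compress(intermediate, level=9)
    cipher = AES.new(key, AES.MODE_GCM)
    ct, tag = cipher.encrypt_and_digest(compressed)
    return cipher.nonce, ct, tag

key = get_random_bytes(32)
nonce, ct, tag = encode_compress_encrypt(b"hello " * 100, key)
```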
A novel cloud storage system with support of sensitive data application (ijmnct)
Most users are willing to store their data in a cloud storage system and use the many facilities of the cloud, but their sensitive data applications face potentially serious security threats. This paper analyzes the security requirements of sensitive data applications in the cloud and proposes an improved structure for the typical cloud storage system architecture. A hardware USB key is used in the proposed architecture to strengthen the security of user identity and of the interaction between users and the cloud storage system. Drawing on the idea of active data protection, a data security container is introduced to secure the data transmission process by encapsulating the encrypted data and adding appropriate access control and data management functions; static data blocks are replaced with a dynamic, executable data security container. An enhanced security architecture for the cloud storage terminal software is then proposed to better adapt to users' specific requirements, with customizable functions and components. The proposed architecture can also detect whether the execution environment conforms to pre-defined environment requirements.
Iaetsd secured and efficient data scheduling of intermediate data sets (Iaetsd)
This document discusses securing and efficiently scheduling intermediate data sets in cloud computing. It proposes using an upper bound constraint approach to identify sensitive intermediate data sets for encryption. Suppression techniques like semi-suppression and full-suppression are applied to sensitive data sets to reduce time and costs while the Value Generalization Hierarchy protocol is used to provide security during data access. Optimized balanced scheduling is also used to balance system loads and minimize costs. The goal is to efficiently manage intermediate data sets while preserving privacy.
This document discusses securely mining data stored in the cloud using encryption techniques. It proposes using k-means clustering on the data, then encrypting it with AES. Homomorphic encryption is then performed using Paillier cryptosystem to allow computations on the encrypted data while preserving privacy. The key advantages discussed are that this approach allows for secure data mining and analysis in the cloud without revealing private information to unauthorized parties. It also analyzes related work on encryption and homomorphic techniques for secure cloud computing and big data analysis.
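The additively homomorphic step can be illustrated with the `phe` Paillier package: the cloud adds ciphertexts without ever learning the underlying values (a sketch, not the paper's implementation):

```python
# Sketch: additively homomorphic computation with the Paillier cryptosystem.
# Assumes: pip install phe
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

enc_a = public_key.encrypt(17)
enc_b = public_key.encrypt(25)
enc_sum = enc_a + enc_b            # computed on ciphertexts only

assert private_key.decrypt(enc_sum) == 42   # only the key holder sees the result
```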
A Secure Multi-Owner Data Sharing Scheme for Dynamic Group in Public Cloud (IJCERT Journal)
Outsourcing group resources among cloud users is a major challenge, for which cloud computing provides a low-cost and well-organized solution; however, due to frequent membership changes, sharing data in a multi-owner manner on an untrusted cloud remains a challenging issue. This paper proposes a secure multi-owner data sharing scheme for dynamic groups in a public cloud. By applying AES encryption with a convergent key during upload, any cloud user can securely share data with others, while the storage overhead and encryption computation cost of the scheme are independent of the number of revoked users. The security of the scheme is analyzed with rigorous proofs. One-time passwords, one of the easiest and most popular forms of authentication for securing account access, serve as a stronger form of authentication in the multi-owner setting. Extensive security and performance analysis shows the proposed scheme is highly efficient and satisfies the security requirements of public-cloud-based secure group sharing.
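A minimal time-based one-time password in the style of RFC 6238 can illustrate the OTP layer (the scheme's actual OTP mechanism is not specified, so this stdlib sketch is only representative):

```python
# Sketch: a minimal time-based one-time password (RFC 6238 style).
import hmac, hashlib, struct, time

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    counter = struct.pack(">Q", int(time.time()) // step)   # 30-second window
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp(b"shared-user-secret"))   # both parties can compute the same code
```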
Revocation based De-duplication Systems for Improving Reliability in Cloud St... (IRJET Journal)
1) The document discusses improving the reliability of deduplication systems in cloud storage by implementing user revocation along with Shamir's secret sharing scheme and ramp secret sharing scheme.
2) Deduplication systems aim to eliminate redundant data and achieve single instance storage, but reliability and security are ongoing issues when users are revoked.
3) The paper proposes using Shamir's secret sharing algorithm and the ramp secret sharing scheme for encryption, so that reliability is maintained when users are removed and the data can be rechecked for duplication (a minimal secret-sharing sketch follows this list).
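A minimal sketch of the Shamir building block, using PyCryptodome's implementation (the (2, 3) threshold is an illustrative choice, not the paper's parameters):

```python
# Sketch: Shamir's (k, n) secret sharing - any 2 of 3 shares reconstruct a
# 16-byte key, so losing or revoking one share holder does not lose the key.
# Assumes: pip install pycryptodome
from Crypto.Protocol.SecretSharing import Shamir
from Crypto.Random import get_random_bytes

secret_key = get_random_bytes(16)          # e.g., an AES-128 data key
shares = Shamir.split(2, 3, secret_key)    # threshold k=2, n=3 shares

recovered = Shamir.combine(shares[:2])     # any two shares suffice
assert recovered == secret_key
```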
This document summarizes research on identity-based distributed provable data possession in multi-cloud storage. It discusses how current provable data possession protocols have limitations such as authentication overhead and lack of flexibility. The proposed approach eliminates certificate-based authentication management by using identity-based cryptography. It aims to provide a secure, efficient, and adaptable protocol for integrity checking of outsourced data across multiple cloud servers.
IRJET - Providing Privacy in Healthcare Cloud for Medical Data using Fog Compu... (IRJET Journal)
This document proposes a method to improve healthcare data security when stored in the cloud. It discusses how currently, healthcare data stored entirely in the cloud loses user control and faces privacy risks. The proposed method uses "Split and Combine" (SaC) technique to split data between the cloud and a local server. 80% of the split data is encrypted and uploaded to the cloud, while the remaining 20% is stored locally. When a user requests data, it is downloaded from both sources, combined and decrypted. The document evaluates this method and finds it increases security by partially storing data locally compared to traditional cloud-only storage. It concludes the SaC technique enhances privacy but the local server must always be online to combine data.
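A toy version of the SaC split (the straight 80/20 prefix/suffix cut is an assumption for illustration; the paper may partition the data differently):

```python
# Sketch: Split-and-Combine - 80% of the bytes go to the cloud (after
# encryption), 20% stay on the local server; both parts are needed to rebuild.
def split_80_20(data: bytes) -> tuple[bytes, bytes]:
    cut = int(len(data) * 0.8)
    return data[:cut], data[cut:]          # (cloud part, local part)

def combine(cloud_part: bytes, local_part: bytes) -> bytes:
    return cloud_part + local_part

cloud, local = split_80_20(b"0123456789")
assert combine(cloud, local) == b"0123456789"
assert len(local) == 2                      # 20% kept on-premises
```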
Encryption Technique for a Trusted Cloud Computing Environment (IOSR Journals)
This document discusses encryption techniques for securing data in cloud computing environments. It begins with an introduction to cloud deployment models (public, private, hybrid, community) and service models (IaaS, PaaS, SaaS). It then addresses security concerns with cloud computing including data theft, incomplete data uploads, and lack of notification about infrastructure changes. The document proposes encrypting data before uploading it to cloud servers using algorithms like AES to protect data even if stolen. It reviews older encryption techniques like the Caesar cipher and argues stronger algorithms are needed for cloud security.
Encryption Technique for a Trusted Cloud Computing Environment (IOSR Journals)
This document summarizes an encryption technique for securing data in cloud computing environments. It begins by introducing cloud computing and some of the security concerns with storing data in the cloud. It then discusses previous encryption algorithms like the Caesar cipher, Vigenere cipher, and Playfair cipher and their limitations. The document proposes using the Advanced Encryption Standard (AES) algorithm with the Rijndael cipher to encrypt data before uploading it to cloud servers. It describes implementing AES encryption in two steps: 1) using an authentication channel to verify user identities, and 2) encrypting the data using the AES Rijndael algorithm in 9 to 13 rounds depending on the key size, each round applying byte substitution, shift-rows, mix-columns, and add-round-key operations. The document argues this encryption technique can help keep customer data in the cloud secure.
Encryption Technique for a Trusted Cloud Computing Environment (IOSR Journals)
This document summarizes an encryption technique for ensuring security in cloud computing environments. It begins by introducing cloud computing and some of the security concerns with storing data in the cloud. These include lack of transparency about security measures, incomplete or corrupted data uploads, and potential data theft without the user's knowledge. The document then reviews some traditional encryption algorithms like the Caesar cipher, Vigenere cipher, and Playfair cipher and their limitations. It proposes using the Advanced Encryption Standard (AES) algorithm with Rijndael, which is more secure than older standards. The technique implements AES encryption with a challenge-response authentication channel and encrypts the data before uploading it to the cloud, ensuring the encrypted data is useless to an attacker even if it is stolen.
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Secure Data Distribution Algorithm for Fog Computing
Hima Bindu Krovvidi
Department of Computer Science
Golisano College of Computing and Information Sciences
Rochester Institute of Technology
Rochester, NY 14623
hk4233@rit.edu
Abstract – Cloud computing has become an increasingly prominent way to access and store data, primarily because of its scalability and simplicity. Several techniques have been adopted to strengthen cloud infrastructure security, such as a three-layer privacy-preserving cloud storage scheme built so that even an insider with access to one layer obtains only a fraction of the data and cannot retrieve the rest. The current distribution algorithm, however, focuses only on providing fault tolerance and data integrity, and leaves room for security improvements. To address this problem, we propose a stronger data fragmentation and data distribution algorithm that increases data confidentiality and enhances encryption. When this enhancement is combined with the existing design, the result is a highly secure and fault-tolerant system in which users can protect their data. We describe the proposed design's architecture and analyze its security challenges.
Index Terms—Fog Computing, AES, Reed-Solomon, Data Distribution
I. INTRODUCTION
Since the beginning of cloud computing, there have been continual advancements in making the cloud more robust and secure, and it has undergone numerous enhancements that leave it stronger than before. With increasing reliance on the cloud, consumers store the majority of their data on cloud frameworks [1]. As a result, the cloud is critical for data storage, which makes cloud security an essential concern. Greater exposure to breaches has brought stronger security, yet attackers have continued to find ways to access data in spite of these measures [2]. This gives us reason to keep upgrading and improving the way we protect our data.
Data breaches caused by internal attacks can occur for several reasons [3]. Employees or contractors may misuse their privileges and steal or sell data to unauthorized third parties. Employee negligence, such as leaving a system unlocked or failing to follow appropriate security measures, also gives attackers an opening. Backdoor exploitation, where attackers analyze weaknesses in the system and exploit them, is yet another way to gain access to data in the cloud.
To protect against these kinds of attacks, there have been several studies on improving cloud security. Methods such as a three-layered architecture [4] and fog computing [5] have contributed to designs that improve the security of the cloud. These studies build on the idea that, instead of putting all the data on one server, spreading it across several servers prevents an attacker from accessing all of the data, even if they compromise one of them.
The distribution and fragmentation algorithms used in [4] divide the data across three servers: the local machine, the fog server, and the cloud server. The authors' distribution places 1% of the data on the local machine, 4% on the fog server, and 95% on the cloud server. They use the Hash-Solomon code algorithm [4], [6] to separate the data, which primarily provides a fault-tolerance and data-integrity module. However, with 95% of the data on a single cloud server, it is important to recognize that although an attacker may not reach the other servers, they can still obtain 95% of the data, which is significant. Adding encryption as another layer of protection within each server is therefore necessary and crucial, so that the data distributed among these servers remains strongly encrypted and unreadable to an attacker.
Therefore, we propose a stronger, more secure data distribution module in which the data is divided and encrypted in combination with the existing data fragmentation algorithm, maintaining fault tolerance while adding a layer of data confidentiality. The proposed algorithm focuses on restricting any kind of access to the data, whether a portion on a single server or the data as a whole. Compared to previous methods [4], our method adds an enhanced layer of security on top of the Hash-Solomon code algorithm [4], [6], providing a comprehensive solution that addresses both data confidentiality and integrity. The remainder of the paper is structured as follows: Section II describes our approach; Section III reviews related work; Section IV covers the implementation; Section V describes the evaluation methodology; Section VI presents the results; Section VII discusses the findings; Section VIII outlines future work and the scope for improvement; and Section IX concludes the paper.
II. APPROACH
The project aims at improving the data confidentiality provided by the three-layer cloud storage system. By introducing an enhanced security algorithm into the existing infrastructure, we create a more robust model for sorting and distributing data across different servers while ensuring security. Adding the Advanced Encryption Standard (AES) algorithm [7] in combination with the Reed-Solomon code algorithm [4], [6] achieves this requirement. The data is processed through the existing distribution algorithm and then passed through the new algorithm to create another layer of security. Concretely, we take the data destined for the cloud, add redundant data to ensure integrity, shuffle it, and divide it into blocks. Each block is then sent to its particular server, i.e., the cloud server, the fog server, or the local machine, based on the existing distribution algorithm. Before the data is sent to its respective server, it undergoes AES encryption [7].
The core of our proposed algorithm lies in combining the Reed-Solomon code algorithm [4], [6] with AES [7]. For this, we use PKCS7 padding to ensure that the input data is appropriately aligned and meets the block size required by AES for encryption and decryption. This padding also lets us introduce false or misleading bytes that mix the genuine data with decoy data, making it difficult for malicious attackers to find the genuine data hidden among the padding.
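As a concrete illustration, the following minimal Python sketch shows PKCS7 padding with the pyca/cryptography library (the same library our implementation relies on); the helper names pad_block and unpad_block are ours, introduced only for this sketch.

from cryptography.hazmat.primitives import padding

BLOCK_SIZE_BITS = 128  # AES block size: 16 bytes

def pad_block(data: bytes) -> bytes:
    # Append PKCS7 bytes so len(result) is a multiple of 16.
    padder = padding.PKCS7(BLOCK_SIZE_BITS).padder()
    return padder.update(data) + padder.finalize()

def unpad_block(data: bytes) -> bytes:
    # Validate and strip the PKCS7 padding added above.
    unpadder = padding.PKCS7(BLOCK_SIZE_BITS).unpadder()
    return unpadder.update(data) + unpadder.finalize()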
Since AES [7] uses a symmetric key for encryption and decryption, it is important to formulate a method such that only the user can access the data on the cloud. To do this, we generate a private key known only to the user (using a secure key generation algorithm), and the symmetric key is encrypted under this user secret key. The resulting scheme remains fault tolerant and mitigates data loss and corruption, while the use of AES [7] makes the data more difficult to breach and increases data confidentiality. This method is a definite improvement over previous techniques, since it ensures that an attacker cannot access any portion of the data.
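A minimal sketch of this key-handling step, under the assumption that standard AES key wrapping is used (our illustration; the paper does not fix a particular wrapping scheme): the symmetric data key is encrypted under the user's private key, so only the holder of that key can recover it.

import os
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

user_key = os.urandom(32)  # user's private key, generated once and kept by the user
data_key = os.urandom(32)  # symmetric AES key that encrypts the data blocks

# Wrap the data key under the user's key before storage; only the
# user's key can unwrap it again at retrieval time.
wrapped = aes_key_wrap(user_key, data_key)
assert aes_key_unwrap(user_key, wrapped) == data_key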
III. LITERATURE REVIEW
The importance of security in cloud storage has attracted a lot of attention in both industry and academia, and there has been extensive research on secure cloud storage architectures. We examine different encryption studies that provide a secure data storage system in combination with a fog computing server. Alrawais [8] describes how a key-exchange protocol based on attribute-based encryption provides secure intercommunication among different nodes. Xu [9] presents another method involving homomorphic encryption, based on [10], together with access policies to provide secure data sharing; the same work [9] also describes an access control system for fog computing environments that keeps data confidential and permits only selective access. According to Seol [11], data leakage can occur when software is exploited or when employees or administrators misuse sensitive knowledge. The authors offer a system in which an encrypted storage service protects data from potentially compromised software and administrators, and they discuss leveraging crypto-processors to provide a safe storage service for consumers, drawing on details from Gaspar [12]. The system is secure because the stored information is encrypted and will not be disclosed even to privileged access.
Several security algorithms have been employed in various cloud storage systems, yet they have not been fully exploited to build a more robust system. AES (Advanced Encryption Standard) [7] provides a strong symmetric encryption and decryption algorithm aimed at data confidentiality. Sharma and Kalra [13] combine AES with quantum cryptography to create a highly secure application; the authors claim that the combination yields an unprecedented level of security and can serve high-security systems such as nuclear defense. The same paper [13] notes that AES is the fastest block cipher and is widely used for practical security. In a comparative study of AES, DES, RSA, and OTP, [14] shows that all four encryption algorithms are extremely robust and compares AES and DES on the accuracy of their performance.
Reddy [15] created a solution to the emerging requirement for cloud data security. The technique employs hybrid cryptography as well as file splitting. Hybrid cryptography combines symmetric encryption with public-key encryption; the author emphasizes that symmetric encryption is efficient but risky when the same key is shared, so the solution encrypts the symmetric key with a public key, making it sturdy, secure, and efficient. File splitting separates the file into different pieces and encrypts each part individually, so that even if one piece is obtained by an attacker, it cannot be decoded.
IV. IMPLEMENTATION
The implementation of the proposed algorithm involves modifying and enhancing the security of the data distribution algorithm. The proposed algorithm divides the data in a more secure way, such that data exposed on one server does not affect the data or security on another server. The project aims to provide a robust and secure data distribution algorithm for fog computing architectures; with encryption, data fragmentation, and access control methods, we implement this algorithm successfully. The implementation is described in detail below.
Fig. 1. Architecture of Data Distribution on Cloud Servers
Fig. 2. Data Retrieval from the Cloud Servers
A. System Design
As [4] describes an architecture that divides the data among three servers, namely the cloud, the fog, and the local machine, we implement the same architecture and deploy our data distribution algorithm on it. As seen in Figure 1, after going through the distribution algorithm the data is divided among the cloud, fog, and local machines in a 95:4:1 ratio, so the majority of the data resides in the cloud.
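A simple sketch of how such a ratio-based assignment might look in Python (our own illustration; the actual scheme in [4] distributes Hash-Solomon-encoded blocks rather than raw byte offsets):

def split_by_ratio(blocks, ratios=(95, 4, 1)):
    # Assign equal-sized blocks to the cloud, fog, and local tiers
    # in (approximately) the given 95:4:1 proportion.
    total = sum(ratios)
    n = len(blocks)
    cloud_end = n * ratios[0] // total
    fog_end = cloud_end + max(1, n * ratios[1] // total)
    return {
        "cloud": blocks[:cloud_end],
        "fog": blocks[cloud_end:fog_end],
        "local": blocks[fog_end:],
    }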
Figure 2 illustrates the retrieval process when the user requests information from the cloud server. The requested data is reconstructed by combining the information stored and distributed across the three servers. The process begins with the cloud server receiving the user's request; the cloud server, which holds 95% of the data, retrieves and prepares its share. The request then goes to the fog server, which retrieves and prepares its share and combines it with the cloud server's data, giving 99% of the data. Finally, the request is forwarded to the local machine, which holds the remaining 1%; this share is retrieved, prepared, and combined with the 99% from the other two servers to reconstruct the complete data. The requested data is then fetched and sent to the user.
Fig. 3. System Design of Data Encryption and Distribution
B. System Architecture
The data distribution algorithm is responsible for processing the entire data set so that it is secure and efficient after distribution. In addition to the fault tolerance and data integrity that the Reed-Solomon algorithm offers, we add an important property: data confidentiality. Data confidentiality matters because, if the data stored on one server is attacked, the attacker should learn nothing about the data on the other servers.
1) Data Fragmentation with Padding: Initially, our algorithm divides the data into blocks and fragments so that it is manageable and can be further processed for security and encryption during distribution. After the data has been divided into blocks, we add a layer of padding; padded data is beneficial because it carries no meaning and contains no sensitive information. The padding we use is PKCS7, a very popular padding scheme for AES encryption. We use this particular padding so that the total length of each chunk is a multiple of the specified block size, i.e., the 16 bytes required by AES. This is depicted in Figure 3.
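The fragmentation step can be pictured with the following sketch (our own simplification; the chunk size is an assumed value, and pad_block is the PKCS7 helper sketched in Section II):

CHUNK_SIZE = 4096  # assumed fragment size, a multiple of the 16-byte AES block

def fragment(data: bytes):
    # Cut the input into fixed-size chunks; the final (possibly short)
    # chunk is PKCS7-padded so every fragment is 16-byte aligned.
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    if not chunks:
        chunks = [b""]
    chunks[-1] = pad_block(chunks[-1])
    return chunks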
2) Redundancy Addition: To improve fault tolerance, we use the Reed-Solomon algorithm. The algorithm keeps all files safely stored so that, when a server goes down for maintenance, the files held on that server remain available. This is achieved using redundant information, also known as parity blocks, which the Reed-Solomon algorithm generates to enable efficient recovery of lost or corrupted data.
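With the ReedSolo library named in our experimental setup, parity generation and recovery reduce to a few calls; the parity count below (10 parity bytes per block) is an illustrative choice, not the tuned value from our experiments.

from reedsolo import RSCodec

rsc = RSCodec(10)  # append 10 parity bytes per encoded block (illustrative)

encoded = rsc.encode(b"fragmented data block")
damaged = bytearray(encoded)
damaged[0] ^= 0xFF  # simulate corruption of one byte on a server
# Recent reedsolo versions return (message, message+ecc, errata positions).
recovered = rsc.decode(bytes(damaged))[0]
assert recovered == b"fragmented data block"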
3) Private-Key Generation: We create a private key known only to the user, to be used for encrypting and decrypting the data placed on the cloud. The key is generated through a third party independent of the cloud, and only the user has access to it. This key is 32 bytes (256 bits), as required by the AES-256 encryption algorithm. The design requires the user to input the private key each time an encryption or decryption operation is performed. The secret key is generated only once, and the user must store it.
4) Encryption Algorithm: Once the data has been padded and parity blocks have been added to support fault tolerance, the algorithm applies the Advanced Encryption Standard (AES) to each block. Encrypting the data blocks keeps the data secure even if an attacker gains unauthorized access to a server, and the 256-bit key ensures strong data confidentiality. The encrypted data is then divided among the cloud server, the fog server, and the local machine.
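A minimal sketch of the per-block encryption, assuming AES-256 in CBC mode via the pyca/cryptography library (the paper does not fix a mode of operation, so CBC is an assumption); each block gets a fresh random IV stored alongside its ciphertext, and pad_block is the PKCS7 helper from Section II.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_block(key: bytes, rs_block: bytes) -> bytes:
    # key: 32-byte (256-bit) AES key; rs_block: a Reed-Solomon-encoded fragment.
    iv = os.urandom(16)           # fresh random IV per block
    padded = pad_block(rs_block)  # align to the 16-byte AES block size
    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return iv + encryptor.update(padded) + encryptor.finalize()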
Fig. 4. System Design of Data Retrieval
5) Data Retrieval: As shown in Figure 2, all the data is combined after retrieval from the individual servers. The restored data, however, is still encrypted: to read it, the user must enter the 32-byte private key issued earlier, and decryption with this key reveals the plaintext. It is possible that some data has been corrupted or simply lost; to recover the files completely, we run Reed-Solomon decoding to restore any missing data the user has requested. After this process, the user holds the original data, securely protected throughout.
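Retrieval mirrors the pipeline in reverse; a sketch under the same assumptions as above (CBC mode, reedsolo parity, PKCS7 padding, with unpad_block from Section II):

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def decrypt_block(key: bytes, blob: bytes) -> bytes:
    # blob = IV || ciphertext, as produced by encrypt_block above.
    iv, ciphertext = blob[:16], blob[16:]
    decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    return unpad_block(decryptor.update(ciphertext) + decryptor.finalize())

def retrieve(key, blobs, rsc):
    # Decrypt each recombined block, let Reed-Solomon repair any
    # corruption, then reassemble the original byte stream.
    blocks = [rsc.decode(decrypt_block(key, b))[0] for b in blobs]
    return b"".join(bytes(b) for b in blocks)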
V. PERFORMANCE EVALUATION
Evaluating the model is a crucial part of assessing the effectiveness and performance of our algorithm. To understand where our data distribution algorithm does well and where it falls short, we assess it on several factors, including security measures, vulnerabilities, attack vectors, effectiveness of data distribution and confidentiality, resource utilization, processing speed, memory usage, and overall performance.
The security measures of this algorithm provide confidentiality, integrity, and authentication. We also analyze the algorithm's vulnerabilities and evaluate how resistant it is to different attacks, such as brute force or data tampering. For resource utilization, we evaluate CPU usage, memory consumption, and network bandwidth, and we assess processing speed in comparison with previous algorithms.
A. Evaluation Methodology
To evaluate the designed system, we test the model on data at various scales, such as text files, images, and video files ranging from 480p to 4K. Testing across this large range of data sizes lets us analyze the algorithm's performance on small files and observe how the overhead changes when processing large files. The evaluation covers diverse data sets, including multimedia files and text documents.
B. Performance Metrics
Performance metrics help us understand the effectiveness of our system and algorithm. Several aspects play a critical role in the functionality and impact of the overall system, and these metrics indicate whether our algorithm is strong, weak, or ineffective. Evaluating them shows where the system performs well and where it falls short, which will also allow future researchers to improve it. The key performance metrics we take into account when evaluating the model are:
1) Encryption and Decryption Speed: This metric quantifies the efficiency of the AES encryption within the distribution algorithm. Evaluating it shows how well the encryption performs with respect to our data.
2) Data Retrieval Time: Data retrieval time measures how long it takes the user to retrieve data once it has been uploaded to the cloud servers. Retrieval involves several steps: the data is restored from the cloud servers, the user provides the private key to decrypt it, and checks are performed for missing, lost, or corrupted data. If data is missing, the Reed-Solomon algorithm restores it. All of these steps are included in the data retrieval time.
3) Memory Utilization: The memory utilization metric measures how efficiently the algorithm uses memory on the computing machine. Evaluating memory utilization shows where potential bottlenecks can occur and lets us carefully analyze what is contributing to the memory consumption.
4) Resource Utilization: Resource utilization is one of the most prominent performance metrics in this paper. We take into account several aspects, such as CPU usage, network bandwidth, processing speed, and overall performance. Evaluating these shows where our algorithm falls short and how to improve it.
5) Throughput: Throughput is a basic measure of how much work is done per unit of time, so measuring it indicates how well the system is performing: higher throughput means more data is processed in the same amount of time.
6) Security Vulnerabilities: This evaluation compares potential attack vectors and assesses the effectiveness of the data encryption and confidentiality. Measuring this shows how resistant our encryption scheme is. It is a qualitative analysis that compares the proposed model with existing ones and examines various techniques that could break the encryption scheme.
7) Error Handling: This evaluation examines how error correction is performed, how fast it completes, and how much memory it consumes. To test this, we measure the time to recover from server failures, recording the overhead when one server is down, then two, and so on. This shows how efficient the system is at failure recovery, which also matters in real deployments, where data corruption or loss is not uncommon; having this feature in our system is a clear advantage.
C. Experimental Setup
The experimental setup for evaluating the proposed algorithm involves creating a controlled environment that simulates realistic computing scenarios. The setup has the necessary hardware, software, and network components to replicate the data distribution environment. The algorithm was tested locally on a personal computer.
1) Hardware Components: The setup includes a personal computing device with an x64-based 12th Gen Intel(R) Core(TM) i7-12700H processor at 2.30 GHz, 16.0 GB of RAM, and a 64-bit operating system.
2) Software Components: The software components include the implementation of the proposed algorithm in the VS Code IDE, Docker for containerization, and the relevant libraries for encryption, decryption, Reed-Solomon encoding, and tracking resource utilization; the libraries include ReedSolo and the Cryptography module.
3) Data-sets: We selected a diverse set of data types, such as multimedia files and text documents, with varying data volumes so that the algorithm's performance is tested across various data sizes.
4) Performance Measurement Tools: To measure and monitor the different performance metrics, such as data distribution time, encryption and decryption speed, data retrieval time, CPU usage, memory usage, and network bandwidth, we employ tools such as psutil in Python. We also containerized the application with Docker, which let us observe the application's performance in an isolated environment, free from interference by other applications running in the background.
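A sketch of the measurement wrapper, assuming psutil and the standard time module (the function being measured and the reported fields are illustrative, not the exact harness we ran):

import time
import psutil

def measure(run_trial):
    # Sample CPU and memory around one trial of the pipeline and report
    # elapsed wall-clock time alongside resource usage.
    proc = psutil.Process()
    psutil.cpu_percent(interval=None)  # prime the CPU counter
    start = time.perf_counter()
    run_trial()
    elapsed = time.perf_counter() - start
    return {
        "seconds": elapsed,
        "cpu_percent": psutil.cpu_percent(interval=None),
        "rss_bytes": proc.memory_info().rss,
        "mem_percent": psutil.virtual_memory().percent,
    }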
VI. PERFORMANCE RESULTS
After deploying our application in Docker, we measured various metrics that are crucial to understanding the algorithm's effectiveness, efficiency, and overall impact on data distribution and security. These results provided valuable insights into how the algorithm performs under various conditions and helped us identify areas for improvement and optimization. The following are some of the key results we recorded:
A. Data Retrieval Time
The first performance metric we measured is the time for data retrieval, including encryption and decryption of the data stored on the cloud server. We tested the application in two different environments, one a 14-core system and the other an 8-core system. This provided insight into how quickly data is processed and retrieved as the number of cores increases, resembling a real server with many cores processing large amounts of data.
For readability and an even distribution on the graph, we use a log-scale X-axis to represent increasing file sizes. The files span various types and range from 10 KB up to 1 GB; the Y-axis shows processing times in seconds. Plotted for both the 14-core and 8-core systems, the graphs show a common pattern: processing time grows steeply with file size. Processing time stays between 0 and 10 seconds for files up to about 1 MB, after which the slope rises sharply.
Fig. 5. Processing Time With Varying File Sizes
B. Impact of Memory Utilization
The next performance metric we measured is memory utilization. We evaluated memory utilization on a 16-core computing device, considering two aspects of memory consumption: the memory percentage and RAM usage. The memory percentage was evaluated relative to the device's 16 GB of RAM. This evaluation shows how much memory is consumed and whether the overhead is too expensive, and it also lets us infer how much memory a large-scale deployment would require.
As before, we use a log-scale X-axis to represent increasing file sizes; the files range from 10 KB up to 1 GB, represented in powers of 10. The Y-axis records the memory percentage, and the minor axis records RAM usage in GB. Our evaluation plots these two aspects simultaneously, as they are strongly dependent on one another, with one graph for each metric. The memory percentage graph shows little increase until the file size crosses 1000 MB. The RAM graph is similar: utilization stays roughly flat for file sizes under 1 GB, but for files above 1 GB there is a steep increase in memory usage.
Fig. 6. Memory Utilization
C. Impact of Resource Utilization
As part of resource utilization, we analyzed the CPU utilization of the running application. CPU utilization is an important aspect, as it shows how much of the computing device's resources the application consumes. It was recorded on a personal computing device with an x64-based 12th Gen Intel(R) Core(TM) i7-12700H processor at 2.30 GHz, 16.0 GB of RAM, and a 64-bit operating system.
Fig. 7. CPU Utilization
The CPU utilization can be read from the graph, where file sizes are expressed logarithmically on the X-axis for an even distribution and the percentage of CPU usage is on the Y-axis. CPU utilization is almost steady and below 2% for file sizes up to 1 MB, after which the slope increases for larger files. It can also be observed that the slope just above 1 MB is about half of that reached beyond 500 MB. From this we infer that processing larger data files would require a device with higher specifications.
D. Qualitative Analysis
In the qualitative analysis, we discuss the effectiveness of our data model, its robustness against potential attack vectors, and how it provides effective data encryption and confidentiality against common threats in fog computing environments.
Most importantly, by incorporating AES [7] into our algorithm, we significantly enhance data protection compared to previous methods, which lacked encryption mechanisms altogether. In particular, we use AES in combination with a private key, which enables strict data access control [16]. Only authorized users (in our case, only the data owner) can access the data, preventing all unauthorized parties, including cloud administrators, from accessing any information; this, in turn, provides a defense against insider attacks.
Another advantage of using AES is protection against eavesdroppers during data transmission [17], keeping the data indecipherable during upload and retrieval.
Overall, combining AES with Reed-Solomon gives the system data confidentiality together with a robust method of preventing permanent data loss [18].
Fig. 8. Impact on Recovery of Node Failure
E. Impact of Error Handling
Our algorithm employs the Reed-Solomon code to provide a fault-tolerant model that preserves data integrity. In combination with our encryption techniques, it is important to observe how the Reed-Solomon algorithm behaves with respect to time and overall performance. To evaluate this, we ran tests on a 16-core computing device with 10 nodes holding the data. We observed how the system performs when one, two, and four nodes are down, and recorded how long the application took to recover the data intact.
As in the previous graphs, we plotted file sizes on the X-axis and the number of node failures on the Y-axis, using a heat map to display recovery times against the number of corrupted nodes. For a given file, for example one of 10 KB, the recovery time increases with the number of failed nodes, and this pattern holds across all file sizes. Recovery time also increases with file size. Most importantly, even with 4 of the 10 servers failed, the application successfully recovered all the lost data. Recovery times are reported in seconds.
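Under the assumption that failed nodes are modeled as erasures at known byte positions (reedsolo supports this via the erase_pos argument to decode), a recovery trial can be timed with a sketch like the following; the node-to-byte-range mapping is our own simplification of the experiment:

import time

def time_recovery(rsc, encoded, failed_ranges):
    # rsc: a reedsolo.RSCodec with enough parity to cover the erasures.
    # Mark the byte positions held by failed nodes as erasures and time
    # how long Reed-Solomon decoding takes to restore the data.
    erase_pos = [i for (lo, hi) in failed_ranges for i in range(lo, hi)]
    start = time.perf_counter()
    recovered = rsc.decode(bytearray(encoded), erase_pos=erase_pos)[0]
    return recovered, time.perf_counter() - start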
VII. DISCUSSIONS
Fog computing has emerged as a promising platform for handling the enormous influx of data in contemporary computing settings. Its decentralized nature provides faster computing and stores data on different servers, so the entire data set does not reside on the cloud server. Even this decentralized architecture, however, has security concerns. When dealing with sensitive data, it is important to safeguard it against exposure and unauthorized access, and this is an area where more robust and safer security algorithms can be applied.
This paper proposes, on top of the existing three-layer storage framework of [4], a hybrid approach that uses both AES and the Reed-Solomon code algorithm. This ensures that even if a portion of the data on a certain server is compromised, the attacker gains no information. The proposed algorithm goes beyond traditional methods and offers another layer of security: data is protected during transmission, and the system has a fault-tolerant design.
Compared to conventional data distribution techniques in fog computing environments, the suggested algorithm has significant advantages. Because it includes both a data confidentiality algorithm and a data recovery model, the foremost advantage is that the data is protected against both loss and unauthorized access.
The additional user-held private key prevents anyone other than the user from performing encryption or decryption on the data stored on the data servers. This blocks unauthorized access through the cloud servers and thus keeps the data from being stolen.
By leveraging the processing power of edge devices, the proposed data distribution algorithm further improves the efficiency of data retrieval and dissemination while lowering system latency. It enables efficient simultaneous retrieval from multiple servers and improves data access times, making the entire system suitable for real-time use.
VIII. FUTURE WORK
One potential avenue for future work is further strengthening the security measures. This research has focused on integrating additional techniques to improve security, but further work could strengthen data confidentiality by incorporating quantum cryptographic techniques. Another direction is improving dynamic data distribution by developing distribution that adjusts to workloads, network conditions, and device capabilities.
Scalability and performance are other aspects that can be improved, as is distributing the data in an energy-efficient manner. An interesting extension would be integrating machine-learning-based security to detect anomalies or threats and remove them, thereby improving the algorithm even further.
IX. CONCLUSION
The proposed three-layer storage system for fog computing, which combines the Hash-Solomon code algorithm and AES encryption, provides a comprehensive solution to the security problems that inevitably arise in these settings. The method significantly improves the efficiency, fault tolerance, and data secrecy of fog computing systems, strengthening their security posture. The successful development and evaluation of the algorithm attest to its efficacy and suitability for data distribution in cloud and edge computing systems.
Ensuring cloud security remains a challenge in the digital world, and the proposed strategy significantly advances data distribution and encryption approaches, paving the way for more secure and durable fog computing infrastructures. Further research into improving the algorithm's performance and scalability will only make clearer how useful it can be as a fundamental framework for safeguarding private information in distributed computing systems.
REFERENCES
[1] J. F. Gantz, D. Reinsel, and J. Rydning, “The u.s. datasphere: Consumers
flocking to cloud,” IDC, 2019.
[2] D. Kolevski, K. Michael, R. Abbas, and M. Freeman, “Cloud computing
data breaches: A review of u.s. regulation and data breach notification
literature,” in 2021 IEEE International Symposium on Technology and
Society (ISTAS), 2021, pp. 1–7.
[3] Y. Sun, J. Zhang, Y. Xiong, and G. Zhu, “Data security and privacy
in cloud computing,” International Journal of Distributed Sensor
Networks, vol. 10, no. 7, p. 190903, 2014. [Online]. Available:
https://doi.org/10.1155/2014/190903
[4] T. Wang, J. Zhou, X. Chen, G. Wang, A. Liu, and Y. Liu, “A three-
layer privacy preserving cloud storage scheme based on computational
intelligence in fog computing,” IEEE Transactions on Emerging Topics
in Computational Intelligence, vol. 2, no. 1, pp. 3–12, 2018.
[5] S. Cao, H. Han, J. Wei, Y. Zhao, S. Yang, and L. Yan, “Space
cloud-fog computing: Architecture, application and challenge,” in
Proceedings of the 3rd International Conference on Computer Science
and Application Engineering, ser. CSAE ’19. New York, NY, USA:
Association for Computing Machinery, 2019. [Online]. Available:
https://doi-org.ezproxy.rit.edu/10.1145/3331453.3361637
[6] H. Xu and D. Bhalerao, “Reliable and secure distributed cloud data
storage using reed-solomon codes,” International Journal of Software
Engineering and Knowledge Engineering, vol. 25, pp. 1611–1632, 11
2015.
[7] M. Dworkin, E. Barker, J. Nechvatal, J. Foti, L. Bassham, E. Roback, and J. Dray, “Advanced encryption standard (AES),” NIST FIPS 197, Nov. 2001.
[8] A. Alrawais, A. Alhothaily, C. Hu, X. Xing, and X. Cheng, “An attribute-
based encryption scheme to secure fog communications,” IEEE Access,
vol. PP, pp. 1–1, 05 2017.
[9] Q. Xu, C. Tan, Z. Fan, W. Zhu, Y. Xiao, and F. Cheng, “Secure
data access control for fog computing based on multi-authority
attribute-based signcryption with computation outsourcing and attribute
revocation,” Sensors, vol. 18, no. 5, 2018. [Online]. Available:
https://www.mdpi.com/1424-8220/18/5/1609
[10] X. Yi, R. Paulet, and E. Bertino, Homomorphic Encryption. Cham:
Springer International Publishing, 2014, pp. 27–46.
[11] J. Seol, S. Jin, and S. Maeng, “Secure storage service for iaas
cloud users,” in Proceedings of the 13th IEEE/ACM International
Symposium on Cluster, Cloud, and Grid Computing, ser. CCGRID
’13. IEEE Press, 2013, p. 190–191. [Online]. Available: https://doi-
org.ezproxy.rit.edu/10.1109/CCGrid.2013.31
[12] L. Gaspar, “Crypto-processor - architecture, programming and evaluation
of the security,” 11 2012.
[13] G. Sharma and S. Kalra, “A novel scheme for data security in
cloud computing using quantum cryptography,” in Proceedings of the
International Conference on Advances in Information Communication
Technology & Computing, ser. AICTC ’16. New York, NY, USA:
Association for Computing Machinery, 2016. [Online]. Available:
https://doi-org.ezproxy.rit.edu/10.1145/2979779.2979816
[14] T. Talaei Khoei, E. Ghribi, R. Prakash, and N. Kaabouch, “A perfor-
mance comparison of encryption/decryption algorithms for uav swarm
communications,” 02 2021.
[15] H. S. C. Reddy, V. V. Karthik, D. V, A. Pavan, and S. V, “Data storage on
cloud using split-merge and hybrid cryptographic techniques,” in 2022
International Conference for Advancement in Technology (ICONAT),
2022, pp. 1–5.
[16] S. Duggal, V. Mohindru, P. Vadiya, and S. Sharma, “A comparative
analysis of private key cryptography algorithms: Des, aes and triple
des,” International Journal of Advanced Research in Computer Science
and Software Engineering, vol. 6, p. 1373, 06 2014.
[17] M. Yahaya and A. Ajibola, “Cryptosystem for secure data transmission using advance encryption standard (AES) and steganography,” International Journal of Scientific Research in Computer Science, Engineering and Information Technology, vol. 5, pp. 317–322, Dec. 2019.
[18] D. U. Singh, “Error detection and correction using reed solomon codes,”
Error Detection and Correction Using Reed Solomon Codes, vol. 3, 03
2013.