This document proposes a key-aggregate encryption scheme called Input Cumulative Cryptosystem for secure and scalable data sharing in cloud computing. The scheme allows a data owner to generate a constant-size aggregate decryption key that can decrypt multiple ciphertexts. The key has the combined decryption power of all individual secret keys. An intrusion detection system also monitors communication between hosts to only allow data sharing between trusted hosts, improving security. The proposed system aims to address issues with existing approaches that require distributing multiple decryption keys or have fixed hierarchies for access control. It allows flexible delegation of decryption rights for dynamic sets of ciphertexts in cloud storage.
This document proposes an efficient keyword searching technique (EKST) for encrypted data stored in cloud computing environments. EKST allows for fuzzy keyword searches that tolerate minor typos or inconsistencies. It does this by constructing fuzzy keyword sets for predefined keywords that include variations within a certain edit distance. EKST then designs an efficient search approach based on these fuzzy keyword sets to securely retrieve matching encrypted files from the cloud server while revealing minimal information. The document outlines the problem formulation, related work, and proposes techniques like wildcard-based construction of fuzzy keyword sets to improve storage efficiency and search performance for fuzzy keyword search over encrypted cloud data.
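The wildcard-based construction mentioned above can be made concrete. The sketch below (a minimal illustration, not EKST's actual implementation) builds the fuzzy keyword set for edit distance 1: each `*` stands for one edit at that position, so the set covers every variant within distance 1 while storing only 2n+2 entries instead of enumerating all concrete misspellings.

```python
def wildcard_fuzzy_set(word: str) -> set[str]:
    """Wildcard-based fuzzy keyword set for edit distance 1.

    Each '*' represents a single edit (substitution, insertion, or
    deletion) at that position, so two words within edit distance 1
    share at least one wildcard variant.
    """
    variants = {word}
    for i in range(len(word)):                  # '*' replaces one character
        variants.add(word[:i] + "*" + word[i + 1:])
    for i in range(len(word) + 1):              # '*' inserted in one gap
        variants.add(word[:i] + "*" + word[i:])
    return variants
```

For example, the sets for "cat" and its typo "cst" both contain "c*t", so the server can match them without ever seeing the plaintext keywords (in the real scheme the variants are stored as trapdoors, not in the clear).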
Key aggregate searchable encryption (kase) for group data sharing via cloud s...Pvrtechnologies Nellore
This document describes a proposed cryptosystem for secure and efficient data sharing in cloud storage. It allows a user to encrypt files with different public keys but send a receiver a single constant-size decryption key that gives decryption rights to any set of ciphertexts. This allows flexible sharing of encrypted data while keeping decryption keys compact. The proposed system aims to address disadvantages of existing approaches like unexpected privilege escalation exposing all data or inefficient key sizes. It provides security based on number-theoretic assumptions without relying on servers for access control.
15.secure keyword search and data sharing mechanism for cloud computingVenkat Projects
The document proposes a ciphertext-policy attribute-based mechanism called CPAB-KSDS that allows encrypted data stored in the cloud to be securely searched and shared. Existing solutions support either search or sharing of encrypted data, not both. CPAB-KSDS supports both attribute-based keyword search and data sharing, while also enabling keyword updates during sharing without interacting with the private key generator. The stated system requirements are a standard processor, a 40 GB hard disk, 2 GB of RAM, a Windows OS, Java/J2EE for development, a MySQL database, and the NetBeans IDE.
IRJET- Secure Sharing of Personal Data on Cloud using Key Aggregation and...IRJET Journal
This document proposes a method for secure sharing of personal data on cloud storage using key aggregation and cryptography. It discusses how traditional cloud storage raises privacy and security issues due to outsourcing of data. The proposed method uses key-aggregate encryption to encrypt data files and generate a single aggregate key, reducing the need to exchange keys for individual files. This allows data owners to selectively and securely share a large number of encrypted files with data users by distributing the aggregate encryption key. When data users search for files, a trapdoor is generated and sent to the cloud for searching over authorized encrypted files. The method aims to enable secure, efficient and flexible sharing of encrypted personal data on cloud storage.
1) The document proposes a system model for secure data sharing in cloud environments using cryptography.
2) It aims to provide data confidentiality, access control of shared data, remove the burden of key management and file encryption/decryption for users, and support dynamic changes to user membership without requiring the data owner to always be online.
3) The proposed system addresses common challenges with secure data sharing in cloud computing like data security, access control, key management, and user revocation and rejoining.
A review on key aggregate cryptosystem for scalable data sharing in cloud sto...eSAT Journals
This document summarizes a research paper on key-aggregate cryptosystem (KAC) for secure data sharing in cloud storage. KAC allows data owners to efficiently share decryption keys for selected ciphertext classes by generating an aggregate key of constant size. The data owner first encrypts data and generates keys, then can create an aggregate key over a set of ciphertexts to share with others. When received, the aggregate key allows downloading and accessing the selected encrypted data. KAC provides an efficient way to delegate decryption rights for cloud-stored data while maintaining security and flexibility in data sharing.
Secure Medical Data Computation using Virtual_ID Authentication and File Swap...IJASRD Journal
PHR systems expose users to significant leakage of sensitive information. Securing sensitive medical data stored in the medical cloud raises serious security problems, chiefly authentication, data availability, and confidentiality; once data is leaked to a third party, privacy becomes a major concern and must be addressed very carefully. An authentication scheme based on a virtual smartcard using a hash function is proposed to prevent illegitimate users from accessing server resources. Sensitive PHR data in the cloud is additionally protected by a file-swapping concept: after a user accesses the data, it is swapped to a different location, making the file harder to steal or tamper with.
Data Leakage Detection and Security Using Cloud ComputingIJERA Editor
The data owner stores the data in the cloud, and every user must register with the cloud. The cloud provider must verify that a user is authorized; otherwise an attacker may access the account and leak the data to an unauthorized place (e.g., the internet or someone's laptop). In this paper, we propose Division and Replication of Data in the Cloud for Optimal Performance and Security (DROPS), which addresses security and performance issues together. In the DROPS methodology, a file selected for storage in a cloud account is divided into fragments based on a threshold value, each fragment is assigned to a node using T-Coloring, and after placement every fragment is replicated once in the cloud.
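The fragmentation-and-placement steps above can be sketched briefly. This is a simplified stand-in for the DROPS scheme, not its actual algorithm: the adjacency map and the "no fragment on a node adjacent to another fragment's node" rule approximate the T-Coloring constraint.

```python
def fragment(data: bytes, threshold: int) -> list[bytes]:
    """Split a file into fixed-size fragments no larger than `threshold`."""
    return [data[i:i + threshold] for i in range(0, len(data), threshold)]

def place_fragments(fragments: list[bytes], adjacency: dict) -> dict:
    """Assign each fragment to a distinct node so that no two fragments
    land on the same node or on directly adjacent nodes (a simplified
    stand-in for DROPS's T-Coloring placement)."""
    placement, blocked = {}, set()
    for idx, _ in enumerate(fragments):
        node = next(n for n in adjacency if n not in blocked)
        placement[idx] = node
        blocked.add(node)
        blocked.update(adjacency[node])  # neighbors may not hold fragments
    return placement
```

With this constraint, compromising one node (or its neighborhood) yields at most a single fragment, so no meaningful portion of the file is exposed; replication of each fragment would then restore availability.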
Two Level Auditing Architecture to Maintain Consistent In Cloudtheijes
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
The papers for publication in The International Journal of Engineering & Science are selected through rigorous peer reviews to ensure originality, timeliness, relevance, and readability.
Theoretical work submitted to the Journal should be original in its motivation or modeling structure. Empirical analysis should be based on a theoretical framework and should be capable of replication. It is expected that all materials required for replication (including computer programs and data sets) should be available upon request to the authors.
The International Journal of Engineering & Science takes great care to publish your article without undue delay, with your kind cooperation.
Secure Data Sharing Algorithm for Data Retrieval In Military Based NetworksIJTET Journal
Abstract— Mobile nodes in military environments such as a battlefield or a hostile region are likely to suffer from intermittent network connectivity and frequent partitions. Disruption Tolerant Network (DTN) technologies are promising solutions that allow nodes to communicate with each other in these challenging networking environments. Applying security mechanisms to DTNs introduces several challenges: since some users may change their associated attributes at some point, and since compromised keys could undermine the reliability of data, key revocation for each attribute is essential to keep the system secure. This research proposes a novel approach, a secure data sharing algorithm, to overcome these problems. The algorithm computes a hash value for encrypted documents, which is used to verify the integrity of encrypted confidential data.
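The hash-based integrity check at the core of that algorithm is straightforward to sketch. The snippet below is an illustrative outline only (SHA-256 is our choice here, not necessarily the paper's): a tag is stored alongside the ciphertext, and any tampering changes the recomputed digest.

```python
import hashlib

def integrity_tag(ciphertext: bytes) -> str:
    """Hash of the encrypted document, stored alongside it as an
    integrity tag."""
    return hashlib.sha256(ciphertext).hexdigest()

def verify_integrity(ciphertext: bytes, stored_tag: str) -> bool:
    """Recompute the hash on retrieval and compare it to the stored
    tag; any modification of the encrypted data changes the digest."""
    return hashlib.sha256(ciphertext).hexdigest() == stored_tag
```

Note that a bare hash only detects accidental or naive tampering; an attacker who can replace both ciphertext and tag defeats it, which is why deployed systems bind the tag with a MAC or signature.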
A Review on Key-Aggregate Cryptosystem for Climbable Knowledge Sharing in Clo...Editor IJCATR
Data sharing is an important functionality in cloud storage. In this article, we show how to securely, efficiently, and flexibly share data with others in cloud storage. We describe new public-key cryptosystems that produce constant-size ciphertexts such that efficient delegation of decryption rights for any set of ciphertexts is possible. The novelty is that one can aggregate any set of secret keys and make them as compact as a single key, yet encompassing the power of all the keys being aggregated. In other words, the secret key holder can release a constant-size aggregate key for flexible choices of ciphertext set in cloud storage, while the other encrypted files outside the set remain confidential. This compact aggregate key can be conveniently sent to others or stored in a smart card with very limited secure storage. We provide formal security analysis of our schemes in the standard model. We also describe other applications of our schemes. In particular, our schemes give the first public-key patient-controlled encryption for flexible hierarchy, which was previously unknown.
The document summarizes key-aggregate cryptosystem (KAC), which allows efficient and flexible sharing of encrypted
data in cloud storage. KAC encrypts data under a public key and ciphertext class. The key owner can generate an
aggregate decryption key that decrypts any ciphertext whose class is contained in the key, while keeping a constant size.
This compact aggregate key can be shared to delegate decryption rights for a set of ciphertexts, without sharing individual
keys. KAC schemes aim to achieve constant-size ciphertexts, public keys, master secrets and aggregate keys to enable
flexible and efficient data sharing in cloud storage.
A Privacy Preserving Three-Layer Cloud Storage Scheme Based On Computational ...IJSRED
This document proposes a three-layer cloud storage scheme based on fog computing to improve privacy protection. The scheme splits user data into three parts that are stored in the cloud server, fog server, and user's local machine. It uses a Hash-Solomon encoding technique to distribute the data in a way that original data cannot be reconstructed from partial information. The scheme leverages fog computing to both utilize cloud storage and securely protect data privacy against insider attacks. Theoretical analysis and experiments demonstrate that the proposed scheme effectively addresses privacy issues in existing cloud storage models.
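The idea of splitting data across cloud, fog, and local storage so that partial information reveals nothing can be illustrated with a simple XOR-based (3,3) split. This is a stand-in for the paper's Hash-Solomon encoding (which additionally provides threshold reconstruction); here all three shares are required and any one or two shares are statistically independent of the data.

```python
import secrets

def split_three(data: bytes) -> tuple[bytes, bytes, bytes]:
    """Split data into three shares (e.g., for cloud, fog, and local
    storage). No single share, nor any pair, reveals the original."""
    r1 = secrets.token_bytes(len(data))           # share for the cloud server
    r2 = secrets.token_bytes(len(data))           # share for the fog server
    s3 = bytes(a ^ b ^ c for a, b, c in zip(data, r1, r2))  # local share
    return r1, r2, s3

def combine(r1: bytes, r2: bytes, s3: bytes) -> bytes:
    """Reconstruct the original data from all three shares."""
    return bytes(a ^ b ^ c for a, b, c in zip(r1, r2, s3))
```

Because r1 and r2 are uniformly random, an insider at the cloud or fog layer sees only random bytes, which is the privacy property the three-layer scheme targets.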
IRJET- Privacy Preserving Cloud Storage based on a Three Layer Security M...IRJET Journal
This document proposes a three-layer security model for privacy-preserving cloud storage. The model uses encryption techniques like AES and Triple DES to encrypt user data before storing it in the cloud. The encrypted data is then divided into blocks that are distributed across different cloud, fog, and local storage locations. This prevents data leakage even if some blocks are lost or accessed. Computational intelligence paradigms help optimize the distribution of data blocks for efficiency and security. The model aims to provide stronger privacy protection compared to traditional cloud storage security methods.
Key aggregate searchable encryption (kase) for group data sharing via cloud s...LeMeniz Infotech
This document summarizes a research paper that proposes a security architecture for cloud computing that dynamically configures cryptographic algorithms and keys based on security policies and inputs like network access risk and data sensitivity. The architecture aims to improve security while reducing costs by only using the necessary level of encryption for each situation. It describes using the Blowfish algorithm instead of AES and adjusting the key size from 128 to 448 bits depending on factors like network type and data size. Results show Blowfish has better performance than AES, especially with larger keys on larger amounts of data. The goal is to provide flexible, efficient security tailored to each user's needs.
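The policy-driven key-size selection described above can be sketched as a small mapping from security inputs to a Blowfish key length. The risk levels and the 80-bit escalation step below are illustrative assumptions, not values from the paper; only the 128-to-448-bit range comes from the summary.

```python
def select_key_bits(network_risk: str, data_sensitivity: str) -> int:
    """Map security-policy inputs to a Blowfish key length.

    Blowfish accepts variable-length keys; the architecture described
    here scales from a 128-bit floor to a 448-bit ceiling depending on
    network access risk and data sensitivity.
    """
    risk = {"trusted": 0, "internal": 1, "public": 2}[network_risk]
    sens = {"low": 0, "medium": 1, "high": 2}[data_sensitivity]
    # Escalate from the 128-bit floor toward the 448-bit ceiling.
    return min(448, 128 + 80 * (risk + sens))
```

The design choice is that encryption strength (and therefore cost) is only as high as the situation demands, which is the efficiency argument the architecture makes against a fixed AES configuration.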
Audit free cloud storage via deniable attribute based encryptionMano Sriram
1) Cloud storage services have become popular, but user privacy is a concern as data owners do not want unauthorized access. Existing encryption schemes assume cloud providers are trusted, but they could be forced to reveal secrets.
2) The document proposes a new encryption scheme called deniable Ciphertext Policy Attribute Based Encryption (CP-ABE) that allows cloud providers to create fake user secrets, protecting real secrets even if the provider is coerced.
3) By using deniable CP-ABE, cloud providers can convince coercers that obtained secrets are genuine while actually protecting user privacy, addressing a key limitation of prior encryption schemes.
This document discusses secure data access and sharing in cloud computing environments. It first outlines some security requirements for data sharing in clouds, including data security, privacy, confidentiality, access control, user revocation, and scalability. It then surveys several cryptographic techniques for secure data sharing, including attribute-based encryption (ABE), key-policy attribute-based encryption (KP-ABE), and identity-based encryption (IBE). It also discusses proxy re-encryption as another technique that supports secure data sharing using a semi-trusted proxy to convert ciphertexts between users' public keys.
SECURE CLOUD STORAGE USING DENIABLE ATTRIBUTE BASED ENCRYPTIONadeij1
Cloud storage services are very popular today. To protect information from those without access rights, many encryption schemes have been proposed. Most of them assume that cloud storage providers, or the trusted third parties handling key management, are trustworthy and cannot be hacked; in practice, however, some entities may intercept communications between users and cloud storage providers and then compel the providers to release user secrets through government power or other means. In that case, encrypted data is assumed to be known and storage providers are requested to release user secrets. Since it is difficult to fight outside coercion, we aim to build an encryption scheme that helps cloud storage providers avoid this predicament. We give providers the means to create fake user secrets; given such fake secrets, outside coercers obtain only forged data from a user's stored ciphertext. When coercers believe the received secrets are real, they will be satisfied, and more importantly, the cloud storage provider will not have revealed any real secrets. User privacy therefore remains protected.
KEY AGGREGATE CRYPTOSYSTEM FOR SCALABLE DATA SHARING IN CLOUDNaseem nisar
1. EASiER proposes an encryption-based access control architecture for social networks that uses attribute-based encryption. It introduces a minimally trusted proxy to enable efficient revocation without reissuing keys.
2. Multi-authority attribute based encryption schemes allow multiple authorities to issue secret keys for attributes. This is useful in applications with attributes managed by different authorities.
3. Existing social network privacy architectures focus on encryption-based access control but do not address efficient revocation of users or attributes. EASiER addresses this issue.
IRJET- An EFficiency and Privacy-Preserving Biometric Identification Scheme i...IRJET Journal
This document proposes an efficient and privacy-preserving approach for outsourced data from resource-constrained mobile devices in cloud computing. It employs probabilistic public key encryption to encrypt the data and performs ranked keyword search over the encrypted data to retrieve files from the cloud. The approach aims to achieve efficient encryption without sacrificing data privacy. The ranked keyword search improves usability by returning the most relevant files and ensuring retrieval accuracy, while reducing computation and communication overhead. A thorough security and performance analysis proves the approach is semantically secure and efficient.
SECURE SENSITIVE DATA SHARING ON BIG DATA PLATFORMAM Publications
Big data concerns extremely large volumes of complex data, both structured and unstructured, analyzed to reveal patterns and trends. Organizations procure large data storage and data delivery on semi-trusted big data sharing platforms, and an enterprise can accumulate huge amounts of sensitive data by storing, analyzing, and processing it. In the digital world, keeping sensitive data safe from theft and vulnerability is very difficult. This abstract proposes a framework for secure sensitive data sharing on a big data platform using an effective encryption algorithm: an identity-based conditional proxy re-encryption scheme based on heterogeneous ciphertext transformation, which protects the security of users' sensitive data on the big data platform.
A PRACTICAL CLIENT APPLICATION BASED ON ATTRIBUTE-BASED ACCESS CONTROL FOR UN...cscpconf
One of the widely used cryptographic primitives for cloud applications is Attribute-Based Encryption (ABE), where users hold their own attributes and a ciphertext is encrypted under an access policy. Though ABE provides many benefits, the novelty often exists only in the academic world, and it is often difficult to find a practical use of ABE in a real application. In this paper, we discuss the design and implementation of a cloud storage client application which supports the concept of ABE. Our proposed client provides an effective access control mechanism that allows different types of access policy to be defined, thus allowing large datasets to be shared by multiple users. Using different access policies, each user needs to access only a small part of the big data. The goal of our experiment is to explore the right set of strategies for developing a practical ABE-based system. Through the implementation and evaluation, we have determined the various characteristics and issues associated with developing a practical ABE-based application.
Cloud storage (CS) is gaining much popularity nowadays because it offers low-cost and convenient network storage services. In this big data era, the explosive growth of digital data moves users towards CS to store their massive data. This growth puts heavy storage pressure on CS systems because a large volume of this data is redundant. Data deduplication is one of the most effective data reduction techniques, identifying and eliminating redundant data. The dynamic nature of data makes security and ownership of data a very important issue, and proof-of-ownership schemes are a robust way to check ownership claims. However, to protect data privacy, many users encrypt their data before storing it in CS, which interferes with deduplication because encryption methods have varying characteristics. The convergent encryption (CE) scheme is widely used for secure data deduplication, but it destroys message equality; DupLESS strengthens CE's privacy, yet is also found insufficient. The problem with CE-based schemes is that a user can still decrypt the cloud data even after losing ownership. This paper addresses the problem of ownership revocation by proposing a secure deduplication scheme for encrypted data. The proposed scheme enhances security against unauthorized encryption and poison attacks on the predicted set of data.
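The convergent encryption idea discussed above is easy to sketch: the key is derived from the plaintext itself, so identical files always produce identical ciphertexts and the server can deduplicate them without reading the contents. The XOR keystream below is an illustrative stand-in for a real block cipher such as AES, not what CE deployments actually use.

```python
import hashlib

def _keystream(key: bytes, length: int) -> bytes:
    """Counter-mode keystream derived from the key (illustrative only;
    a real deployment would use a proper block cipher)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def convergent_encrypt(data: bytes) -> tuple[bytes, bytes]:
    """Convergent encryption: key = hash(plaintext), so equal files
    yield equal ciphertexts, which is what enables deduplication."""
    key = hashlib.sha256(data).digest()
    ct = bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))
    return key, ct

def convergent_decrypt(key: bytes, ct: bytes) -> bytes:
    """Regenerate the keystream from the key and XOR it back out."""
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, len(ct))))
```

This determinism is also CE's weakness: anyone holding the key, including a revoked owner, can decrypt, which is exactly the ownership-revocation gap the paper targets.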
This document discusses secure data deduplication techniques in cloud storage. It proposes using convergent encryption to encrypt duplicate data only once while allowing deduplication. Managing the large number of encryption keys is a challenge. The document proposes Dekey, which distributes encryption key shares across multiple servers rather than having users manage keys directly. It also proposes using user behavior profiling and decoy files/information. Profiling a user's normal access patterns can help detect abnormal access, while decoys confuse attackers by providing bogus information if unauthorized access is detected. The combination of these techniques aims to provide strong security against insider and outsider attackers in deduplicated cloud storage systems.
Towards Secure Data Distribution Systems in Mobile Cloud Computing: A SurveyIRJET Journal
This document summarizes 6 research papers related to security in mobile cloud computing. It discusses issues like data integrity, authentication, and access control when mobile devices' data and computations are integrated with cloud computing. Several cryptographic techniques are described that can help ensure privacy and security, such as proxy provable data possession, attribute-based encryption, and proxy re-encryption. The document concludes that while mobile cloud computing provides benefits, security of user data shared in the cloud is the main challenge, and various frameworks have been proposed but no single system addresses all security aspects.
This document discusses big data security issues and encryption techniques. It begins with introducing the authors and providing an abstract about addressing security issues related to data integrity, confidentiality and availability. The main body then covers authentication, data, network and generic security challenges with big data as well as proposed solutions like encryption, logging and honeypot nodes. The document focuses on describing symmetric and asymmetric encryption algorithms including DES, AES, RSA and ECC. It compares the algorithms and concludes that encryption techniques can help secure big data stored in clouds, though current security levels may be improved.
IRJET- Review on Privacy Preserving on Multi Keyword Search over Encrypte...IRJET Journal
The document summarizes a proposed system for multi-keyword search over encrypted data in cloud computing. It aims to retrieve the top k most relevant documents matching a user's query while preserving data privacy. The system uses Lucene indexing to build an index of keywords extracted from outsourced documents. When documents are added or removed, the index is updated. A top-k query technique ranks document relevance and returns the top matching results. Encryption is done using the Blowfish algorithm before documents are outsourced to the untrusted cloud server. This allows efficient search over the encrypted data based on keyword queries.
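The top-k ranking step can be sketched in plaintext form. This is a simplified illustration using raw term frequency; the actual system scores over an encrypted Lucene index and would use a proper relevance measure such as TF-IDF.

```python
import heapq
from collections import Counter

def top_k(query_terms: list[str], documents: dict[str, str], k: int = 3) -> list[str]:
    """Rank documents by summed term frequency of the query keywords
    and return the ids of the k best matches."""
    scores = []
    for doc_id, text in documents.items():
        tf = Counter(text.lower().split())
        score = sum(tf[t.lower()] for t in query_terms)
        if score:                      # skip documents with no match
            scores.append((score, doc_id))
    return [doc_id for _, doc_id in heapq.nlargest(k, scores)]
```

Returning only the top k results is what keeps communication overhead low: the cloud server sends back the most relevant encrypted files rather than every file containing a keyword.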
This document discusses hybrid renewable energy systems and their suitability for rural regions in India. It notes that about 75% of India's population lives in rural areas that often face electricity shortages, hindering development. Hybrid systems that combine two or more renewable sources like biomass, wind, solar, and hydro could help address this issue by providing a more reliable supply. The document outlines several hybrid system examples and notes their advantages like increased reliability, flexibility, and lower operating costs compared to individual renewable systems. However, hybrid systems also present challenges like complex power conditioning, stochastic resource availability, and coordination with electric grids.
The document provides a review of persistence of vision (POV) displays. It discusses how POV displays use the phenomenon of the human eye retaining images to create the illusion of motion from individual LEDs spinning at a high frequency. The summary discusses key components of the POV display like the Raspberry Pi, RGB LEDs, motors, and touch interface. It also outlines several applications for POV displays in education, gaming, and as an interactive display. The review concludes the POV display provides an improved viewing experience and new ways of interacting with displays compared to traditional screens.
Two Level Auditing Architecture to Maintain Consistent In Cloudtheijes
The International Journal of Engineering & Science aims to provide a platform for researchers, engineers, scientists, and educators to publish their original research results, exchange new ideas, and disseminate information on innovative designs, engineering experiences, and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal are blind peer-reviewed. Only original articles will be published.
The papers for publication in The International Journal of Engineering & Science are selected through rigorous peer review to ensure originality, timeliness, relevance, and readability.
Theoretical work submitted to the Journal should be original in its motivation or modelling structure. Empirical analysis should be based on a theoretical framework and should be capable of replication. It is expected that all materials required for replication (including computer programs and data sets) are available from the authors upon request.
The International Journal of Engineering & Science takes great care to publish your article without undue delay, with your kind cooperation.
Secure Data Sharing Algorithm for Data Retrieval In Military Based NetworksIJTET Journal
Abstract— Mobile nodes in hostile environments such as battlefields or disaster areas are likely to suffer from intermittent network connectivity and frequent partitions. Disruption Tolerant Network (DTN) technologies are promising solutions that allow nodes to communicate with each other in these challenging networking environments. However, applying security mechanisms to DTNs introduces several challenges. Since some users may change their associated attributes at some point, and since some private keys might be compromised, key revocation for each attribute is essential in order to keep the system secure. In this research, a novel approach called the secure data sharing algorithm is used to overcome the above-mentioned problems. The algorithm calculates a hash value for each encrypted document, which is used to check the integrity of the encrypted confidential data.
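The integrity step can be illustrated in a few lines; the XOR one-time key below is only a stand-in for the scheme's real cipher, and the SHA-256 digest plays the role of the hash value attached to each encrypted document:

```python
import hashlib
import secrets

# Encrypt (a toy XOR pad stands in for the real cipher) and attach a
# SHA-256 digest of the ciphertext as the integrity hash.
def encrypt_and_tag(plaintext: bytes, key: bytes):
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    tag = hashlib.sha256(ciphertext).hexdigest()
    return ciphertext, tag

# The receiver recomputes the hash to check the ciphertext is intact.
def verify(ciphertext: bytes, tag: str) -> bool:
    return hashlib.sha256(ciphertext).hexdigest() == tag

msg = b"field report"
key = secrets.token_bytes(len(msg))
ct, tag = encrypt_and_tag(msg, key)
tampered = bytes([ct[0] ^ 1]) + ct[1:]
assert verify(ct, tag)            # untouched ciphertext passes
assert not verify(tampered, tag)  # a single flipped bit is detected
```

Hashing the ciphertext rather than the plaintext lets any relay verify integrity without being able to read the confidential data.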
A Review on Key-Aggregate Cryptosystem for Climbable Knowledge Sharing in Clo...Editor IJCATR
Data sharing is an important functionality in cloud storage. In this article, we show how to securely, efficiently, and flexibly share data with others in cloud storage. We describe new public-key cryptosystems that produce constant-size ciphertexts such that efficient delegation of decryption rights for any set of ciphertexts is possible. The novelty is that one can aggregate any set of secret keys and make them as compact as a single key, yet encompassing the power of all the keys being aggregated. In other words, the secret-key holder can release a constant-size aggregate key for flexible choices of a ciphertext set in cloud storage, while the other encrypted files outside the set remain confidential. This compact aggregate key can be conveniently sent to others or stored in a smart card with very limited secure storage. We provide formal security analysis of our schemes in the standard model. We also describe other applications of our schemes. In particular, our schemes give the first public-key patient-controlled encryption for flexible hierarchy, which was previously unknown.
The document summarizes the key-aggregate cryptosystem (KAC), which allows efficient and flexible sharing of encrypted data in cloud storage. KAC encrypts data under a public key and a ciphertext class. The key owner can generate an aggregate decryption key that decrypts any ciphertext whose class is contained in the key, while keeping a constant size. This compact aggregate key can be shared to delegate decryption rights for a set of ciphertexts, without sharing individual keys. KAC schemes aim to achieve constant-size ciphertexts, public keys, master secrets and aggregate keys to enable flexible and efficient data sharing in cloud storage.
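The KAC workflow above can be walked through with a toy sketch. Note the heavy caveat: the real scheme relies on bilinear pairings to keep the aggregate key constant-size; here the "aggregate key" is simply the set of per-class keys derived from the master secret, so only the interface matches, not the size guarantee.

```python
import hashlib
import hmac

# Toy KAC workflow (setup, encrypt, extract, decrypt) -- interface only.
def class_key(master: bytes, cls: int) -> bytes:
    return hmac.new(master, str(cls).encode(), hashlib.sha256).digest()

def xor_stream(key: bytes, data: bytes) -> bytes:
    stream = b""
    while len(stream) < len(data):
        stream += hashlib.sha256(key + len(stream).to_bytes(4, "big")).digest()
    return bytes(d ^ s for d, s in zip(data, stream))

def encrypt(master: bytes, cls: int, msg: bytes) -> bytes:
    return xor_stream(class_key(master, cls), msg)

def extract(master: bytes, classes):
    # "Aggregate key" delegating exactly the chosen ciphertext classes.
    return {c: class_key(master, c) for c in classes}

master = b"owner master secret"
ct = encrypt(master, 3, b"report")
agg = extract(master, {1, 3})               # delegate classes 1 and 3 only
assert xor_stream(agg[3], ct) == b"report"  # delegatee decrypts class 3
assert 5 not in agg                         # class 5 remains confidential
```

The delegatee never sees the master secret, and ciphertexts in classes outside the extracted set stay undecryptable.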
A Privacy Preserving Three-Layer Cloud Storage Scheme Based On Computational ...IJSRED
This document proposes a three-layer cloud storage scheme based on fog computing to improve privacy protection. The scheme splits user data into three parts that are stored in the cloud server, fog server, and user's local machine. It uses a Hash-Solomon encoding technique to distribute the data in a way that original data cannot be reconstructed from partial information. The scheme leverages fog computing to both utilize cloud storage and securely protect data privacy against insider attacks. Theoretical analysis and experiments demonstrate that the proposed scheme effectively addresses privacy issues in existing cloud storage models.
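The splitting idea can be made concrete with a much simpler stand-in: Hash-Solomon is Reed-Solomon based and lets the cloud share carry most of the data, but a plain 3-way XOR split already demonstrates the privacy property that no single layer's share reveals anything on its own.

```python
import secrets

# Stand-in for Hash-Solomon encoding: each share alone is uniform noise,
# so neither the cloud server nor the fog server can reconstruct the data.
def split_three(data: bytes):
    cloud = secrets.token_bytes(len(data))   # stored on the cloud server
    fog = secrets.token_bytes(len(data))     # stored on the fog server
    local = bytes(d ^ c ^ f for d, c, f in zip(data, cloud, fog))
    return cloud, fog, local                 # local share stays on the user's machine

def reconstruct(cloud: bytes, fog: bytes, local: bytes) -> bytes:
    return bytes(c ^ f ^ l for c, f, l in zip(cloud, fog, local))

data = b"sensitive user record"
shares = split_three(data)
assert reconstruct(*shares) == data
```

Unlike this all-or-nothing XOR split, the proposed scheme sizes the shares unevenly so the bulk of the storage burden stays in the cloud.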
IRJET- Privacy Preserving Cloud Storage based on a Three Layer Security M...IRJET Journal
This document proposes a three-layer security model for privacy-preserving cloud storage. The model uses encryption techniques like AES and Triple DES to encrypt user data before storing it in the cloud. The encrypted data is then divided into blocks that are distributed across different cloud, fog, and local storage locations. This prevents data leakage even if some blocks are lost or accessed. Computational intelligence paradigms help optimize the distribution of data blocks for efficiency and security. The model aims to provide stronger privacy protection compared to traditional cloud storage security methods.
Key aggregate searchable encryption (kase) for group data sharing via cloud s...LeMeniz Infotech
This document summarizes a research paper that proposes a security architecture for cloud computing that dynamically configures cryptographic algorithms and keys based on security policies and inputs like network access risk and data sensitivity. The architecture aims to improve security while reducing costs by only using the necessary level of encryption for each situation. It describes using the Blowfish algorithm instead of AES and adjusting the key size from 128 to 448 bits depending on factors like network type and data size. Results show Blowfish has better performance than AES, especially with larger keys on larger amounts of data. The goal is to provide flexible, efficient security tailored to each user's needs.
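The policy step can be sketched as a small selection function. The risk/sensitivity levels and the 80-bit increments below are illustrative assumptions, not the paper's actual policy table; only the 128-448 bit Blowfish key range comes from the text.

```python
# Hypothetical policy: scale the Blowfish key length with network risk
# and data sensitivity, within the 128-448 bit range the paper uses.
def select_key_bits(network_risk: str, sensitivity: str) -> int:
    risk = {"trusted": 0, "internal": 1, "public": 2}[network_risk]
    sens = {"low": 0, "medium": 1, "high": 2}[sensitivity]
    return 128 + (risk + sens) * 80   # 128 bits up to the 448-bit maximum

assert select_key_bits("trusted", "low") == 128    # cheapest setting
assert select_key_bits("public", "high") == 448    # maximum protection
```

The point of the architecture is exactly this trade: pay for a large key only when the situation demands it.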
Audit free cloud storage via deniable attribute based encryptionMano Sriram
1) Cloud storage services have become popular, but user privacy is a concern as data owners do not want unauthorized access. Existing encryption schemes assume cloud providers are trusted, but they could be forced to reveal secrets.
2) The document proposes a new encryption scheme called deniable Ciphertext Policy Attribute Based Encryption (CP-ABE) that allows cloud providers to create fake user secrets, protecting real secrets even if the provider is coerced.
3) By using deniable CP-ABE, cloud providers can convince coercers that obtained secrets are genuine while actually protecting user privacy, addressing a key limitation of prior encryption schemes.
This document discusses secure data access and sharing in cloud computing environments. It first outlines some security requirements for data sharing in clouds, including data security, privacy, confidentiality, access control, user revocation, and scalability. It then surveys several cryptographic techniques for secure data sharing, including attribute-based encryption (ABE), key-policy attribute-based encryption (KP-ABE), and identity-based encryption (IBE). It also discusses proxy re-encryption as another technique that supports secure data sharing using a semi-trusted proxy to convert ciphertexts between users' public keys.
SECURE CLOUD STORAGE USING DENIABLE ATTRIBUTE BASED ENCRYPTIONadeij1
Cloud storage services are very popular today. To secure data from those who do not have access, several encryption schemes have been proposed. Most of the proposed schemes assume that cloud storage service providers or the trusted third parties handling key management are trustworthy and cannot be hacked; however, in practice, some entities may intercept communications between users and cloud storage providers and then compel storage providers to release user secrets by using government power or other means. In this case, encrypted data are assumed to be known and storage providers are requested to release user secrets. Since it is difficult to fight against outside coercion, we aim to build an encryption scheme that can help cloud storage providers avoid this predicament. We give cloud storage providers the means to create fake user secrets. Given such fake user secrets, outside coercers can only obtain forged data from a user's stored ciphertext. Once coercers believe the received secrets are real, they will be satisfied and, more importantly, cloud storage providers will not have revealed any real secrets. Therefore, user privacy is still protected.
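The deniability principle can be made concrete with a one-time pad, where a fake key can always be fabricated for any desired decoy message; the paper achieves the same effect inside CP-ABE, which this toy does not attempt.

```python
import secrets

# One-time-pad illustration of deniability: for a fixed ciphertext, the
# provider can fabricate a fake key that opens it to a harmless message of
# the same length, so a coercer cannot tell which key is genuine.
def encrypt(msg: bytes, key: bytes) -> bytes:
    return bytes(m ^ k for m, k in zip(msg, key))   # XOR is its own inverse

def fake_key_for(ciphertext: bytes, fake_msg: bytes) -> bytes:
    return bytes(c ^ f for c, f in zip(ciphertext, fake_msg))

real = b"secret plan"
key = secrets.token_bytes(len(real))
ct = encrypt(real, key)
fake = fake_key_for(ct, b"lunch menu!")     # same length as the real message
assert encrypt(ct, key) == real             # real key -> real message
assert encrypt(ct, fake) == b"lunch menu!"  # fake key -> plausible decoy
```

Because both decryptions are equally valid ciphertext/key pairs, the coercer gains nothing by demanding "the" key.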
KEY AGGREGATE CRYPTOSYSTEM FOR SCALABLE DATA SHARING IN CLOUDNaseem nisar
1. EASiER proposes an encryption-based access control architecture for social networks that uses attribute-based encryption. It introduces a minimally trusted proxy to enable efficient revocation without reissuing keys.
2. Multi-authority attribute based encryption schemes allow multiple authorities to issue secret keys for attributes. This is useful in applications with attributes managed by different authorities.
3. Existing social network privacy architectures focus on encryption-based access control but do not address efficient revocation of users or attributes. EASiER addresses this issue.
IRJET- An EFficiency and Privacy-Preserving Biometric Identification Scheme i...IRJET Journal
This document proposes an efficient and privacy-preserving approach for outsourced data from resource-constrained mobile devices in cloud computing. It employs probabilistic public key encryption to encrypt the data and performs ranked keyword search over the encrypted data to retrieve files from the cloud. The approach aims to achieve efficient encryption without sacrificing data privacy. The ranked keyword search improves usability by returning the most relevant files and ensuring retrieval accuracy, while reducing computation and communication overhead. A thorough security and performance analysis proves the approach is semantically secure and efficient.
SECURE SENSITIVE DATA SHARING ON BIG DATA PLATFORMAM Publications
Big data concerns extremely large volumes of complex data, both structured and unstructured, analysed to reveal patterns and trends. Organisations procure large data storage and data delivery on semi-trusted big data sharing platforms. An enterprise can accumulate a huge amount of sensitive data by storing, analysing, and processing these data. In the digital world, keeping sensitive data secure from theft and vulnerability is very difficult. This abstract proposes a framework for secure sensitive data sharing on a big data platform using an effective encryption algorithm. We present an identity-based conditional proxy re-encryption scheme based on heterogeneous ciphertext transformation. It protects the security of users' sensitive data on the big data platform.
A PRACTICAL CLIENT APPLICATION BASED ON ATTRIBUTE-BASED ACCESS CONTROL FOR UN...cscpconf
One of the most widely used cryptographic primitives for cloud applications is Attribute-Based Encryption (ABE), where users hold their own attributes and a ciphertext is encrypted under an access policy. Though ABE provides many benefits, the novelty often exists only in the academic world, and it is often difficult to find a practical use of ABE in a real application. In this paper, we discuss the design and implementation of a cloud storage client application which supports the concept of ABE. Our proposed client provides an effective access control mechanism that allows different types of access policy to be defined, thus allowing large datasets to be shared by multiple users. Using different access policies, each user needs to access only a small part of the big data. The goal of our experiment is to explore the right set of strategies for developing a practical ABE-based system. Through the implementation and evaluation, we have determined the various characteristics and issues associated with developing a practical ABE-based application.
Cloud storage (CS) is gaining much popularity nowadays because it offers low-cost and convenient network storage services. In this big data era, the explosive growth in digital data moves users towards CS to store their massive data. This growth puts a lot of storage pressure on CS systems because a large volume of this data is redundant. Data deduplication is one of the most effective data reduction techniques, identifying and eliminating redundant data. The dynamic nature of data makes security and ownership of data very important issues. Proof-of-ownership schemes are a robust way to check the ownership claimed by any owner. However, to protect the privacy of data, many users encrypt it before storing it in CS. This affects the deduplication process because encryption methods have varying characteristics. The convergent encryption (CE) scheme is widely used for secure data deduplication, but it destroys message equality. DupLESS provides stronger privacy by enhancing CE, but it has also been found insufficient. The problem with CE-based schemes is that a user can still decrypt the cloud data after losing ownership. This paper addresses the problem of ownership revocation by proposing a secure deduplication scheme for encrypted data. The proposed scheme enhances security against unauthorized encryption and poison attacks on the predicted set of data.
This document discusses secure data deduplication techniques in cloud storage. It proposes using convergent encryption to encrypt duplicate data only once while allowing deduplication. Managing the large number of encryption keys is a challenge. The document proposes Dekey, which distributes encryption key shares across multiple servers rather than having users manage keys directly. It also proposes using user behavior profiling and decoy files/information. Profiling a user's normal access patterns can help detect abnormal access, while decoys confuse attackers by providing bogus information if unauthorized access is detected. The combination of these techniques aims to provide strong security against insider and outsider attackers in deduplicated cloud storage systems.
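Convergent encryption itself is compact enough to sketch; the hash-derived XOR keystream below stands in for a real block cipher.

```python
import hashlib

# Convergent encryption: the key is the hash of the message, so identical
# plaintexts give identical ciphertexts and the server can deduplicate them.
def keystream(key: bytes, n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def converge_encrypt(msg: bytes):
    key = hashlib.sha256(msg).digest()        # convergent key: H(message)
    ct = bytes(m ^ s for m, s in zip(msg, keystream(key, len(msg))))
    tag = hashlib.sha256(ct).hexdigest()      # dedup tag sent to the server
    return key, ct, tag

k1, c1, t1 = converge_encrypt(b"shared attachment")
k2, c2, t2 = converge_encrypt(b"shared attachment")
assert c1 == c2 and t1 == t2   # same plaintext -> same ciphertext: dedupable
```

The deterministic ciphertext is exactly what makes deduplication possible, and also exactly what Dekey's distributed key shares aim to manage safely at scale.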
Towards Secure Data Distribution Systems in Mobile Cloud Computing: A SurveyIRJET Journal
This document summarizes 6 research papers related to security in mobile cloud computing. It discusses issues like data integrity, authentication, and access control when mobile devices' data and computations are integrated with cloud computing. Several cryptographic techniques are described that can help ensure privacy and security, such as proxy provable data possession, attribute-based encryption, and proxy re-encryption. The document concludes that while mobile cloud computing provides benefits, security of user data shared in the cloud is the main challenge, and various frameworks have been proposed but no single system addresses all security aspects.
This document discusses big data security issues and encryption techniques. It begins with introducing the authors and providing an abstract about addressing security issues related to data integrity, confidentiality and availability. The main body then covers authentication, data, network and generic security challenges with big data as well as proposed solutions like encryption, logging and honeypot nodes. The document focuses on describing symmetric and asymmetric encryption algorithms including DES, AES, RSA and ECC. It compares the algorithms and concludes that encryption techniques can help secure big data stored in clouds, though current security levels may be improved.
IRJET- Review on Privacy Preserving on Multi Keyword Search over Encrypte...IRJET Journal
This document proposes SIEVE, a decentralized technique to identify malicious nodes in mobile ad hoc networks (MANETs). SIEVE uses rateless coding and the LT decoding process to detect corrupted data packets. It constructs a factor graph based on "checks" that nodes generate when decoding data. Checks contain the identifiers of nodes that provided data and a flag for corruption. SIEVE runs belief propagation on the factor graph to compute the probability of each node being malicious. Simulation results show SIEVE can accurately identify malicious nodes and is robust against various attacks, with low overhead. It provides an effective solution for securing data dissemination in decentralized MANETs.
The document analyzes the likelihood of intruder detection in wireless sensor networks (WSNs) distributed uniformly, Gaussianly, and cohesively. It finds that cohesive networks have the highest detection likelihood as sensing range increases, followed by Gaussian and uniform distributions. The detection probability is calculated for single and multiple sensor detection models under varying parameters like sensing range, number of sensors, and intrusion distance. Clustering sensors improves energy efficiency without impacting intruder detection performance.
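For the uniform case, a standard coverage model (an assumption here, not the paper's exact derivation) gives the single-sensing detection probability as 1 − (1 − πr²/A)ⁿ for n sensors of sensing range r deployed in area A:

```python
import math

# One sensor of range r covers a fraction pi*r^2/A of area A, so n
# independently placed sensors all miss an intruder point with
# probability (1 - pi*r^2/A)^n.
def detection_probability(n: int, r: float, area: float) -> float:
    coverage = math.pi * r * r / area
    return 1 - (1 - coverage) ** n

# Detection likelihood grows with sensing range, matching the review:
p_small = detection_probability(100, 5.0, 1_000_000.0)
p_large = detection_probability(100, 10.0, 1_000_000.0)
assert 0 < p_small < p_large < 1
```

Gaussian and cohesive deployments concentrate sensors near the expected intrusion path, which is why they outperform the uniform baseline in the study.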
This document provides a critical review of factors influencing labor productivity in the construction industry. It identifies supervision, skill of laborers, tools/equipment, absenteeism, and financial constraints as the most significant factors based on prior research. The review also examines how factors can be grouped, with human, management, material/tool, environmental, and technological groups found to be important. Analysis methods like factor analysis are discussed that can help identify independent and group factors to improve labor productivity.
This document discusses the scope and potential of using information and communication technologies (ICTs) to improve service delivery in rural India. It outlines some of the key challenges, including lack of adequate infrastructure and broadband connectivity in rural areas. It then describes several sectors where ICTs could make a significant impact, such as education through e-learning tools, healthcare via telemedicine and online medical records, agriculture by providing farmers access to market data and best practices, and e-governance to improve access to government services. While India has made efforts such as the National IT Policy and Digital India program, fully realizing ICT's benefits will require addressing issues of affordability, relevance of content, capacity building, and public-private partnerships.
This document discusses effective modular order preserving encryption on cloud using multivariate hypergeometric distribution (MHGD). It begins with an abstract that describes how order preserving encryption allows efficient range queries on encrypted data. It then provides background on cloud computing security concerns and discusses existing approaches to searchable encryption, including probabilistic encryption, deterministic encryption, homomorphic encryption, and order preserving encryption. The key proposed approach is to improve the security of existing modular order preserving encryption approaches by utilizing MHGD.
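A toy illustration of the order-preserving property: encryption is a secret, strictly increasing mapping, so range comparisons work directly on ciphertexts. Real OPE (and the MHGD-based scheme discussed) samples the mapping carefully, e.g. from a hypergeometric distribution, rather than with plain random gaps:

```python
import random

# Build a secret monotonic table from plaintext value to ciphertext value.
def ope_table(domain_size: int, seed: int):
    rng = random.Random(seed)
    table, value = [], 0
    for _ in range(domain_size):
        value += rng.randint(1, 100)   # positive gap keeps the map monotonic
        table.append(value)
    return table

table = ope_table(256, seed=42)
enc = lambda m: table[m]
# Range queries work directly on ciphertexts because order is preserved:
assert enc(10) < enc(20) < enc(200)
```

This preserved ordering is the efficiency win and also the leakage: the server learns the relative order of all stored values, which is what the modular and MHGD refinements try to mitigate.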
Effective Cancer Detection Using Soft Computing TechniqueIOSR Journals
This document discusses using soft computing techniques for effective cancer detection. It begins by providing background on cancer research, classification, and early detection. Gene expression profiles from multiple sources are collected and an ontological store is created. An ant colony optimization technique is then used to analyze gene expression clusters and detect cancer using the acquired knowledge. The proposed system architecture involves storing expert gene expression data in a database. Sample gene expression from a patient is input and compared to the database using data mining techniques to identify characteristic genes. The results are clustered and cancer is predicted by analyzing the clusters.
This document proposes a cryptographic key generation technique for 2D graphics images using RGB pixel shuffling and transposition. The technique extracts RGB pixel values from an input image, shuffles them to generate a cipher image, and can decrypt the image back to its original form. It aims to increase image security during transmission by manipulating pixel values rather than expanding pixel data. The algorithm is implemented in Java. Experimental results show the technique can encrypt and decrypt images while maintaining the original size and shape. Advantages include effectively increasing security against attacks and easily reconstructing image features from RGB values.
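The shuffle/unshuffle round trip can be sketched with a key-seeded permutation; the pixel tuples and key below are illustrative, not the paper's algorithm details.

```python
import random

# Permute pixel positions with a key-seeded PRNG; regenerating the same
# permutation from the key lets decryption restore the image exactly,
# which is why size and shape are preserved.
def shuffle_pixels(pixels, key: int):
    order = list(range(len(pixels)))
    random.Random(key).shuffle(order)
    return [pixels[i] for i in order]

def unshuffle_pixels(cipher, key: int):
    order = list(range(len(cipher)))
    random.Random(key).shuffle(order)   # same key -> same permutation
    plain = [None] * len(cipher)
    for dst, src in enumerate(order):
        plain[src] = cipher[dst]
    return plain

image = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (10, 20, 30)]
cipher = shuffle_pixels(image, key=1234)
assert unshuffle_pixels(cipher, key=1234) == image
```

Only the integer key needs to be shared; the permutation itself is never transmitted.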
This document describes a gesture-controlled robot that uses image processing and an Arduino board. The robot has two wheels driven by motors connected to an L293D motor driver circuit. An Arduino board controls the motor driver circuit. A Python program uses OpenCV to detect hand gestures based on colored objects. When an object of a certain color moves in different directions, the Python program sends signals to the Arduino board to move the robot forward, backward, left or right. The Python program and Arduino program communicate over serial to control the robot's movement based on hand gestures.
This document describes the design of an integrated LC filter using multilayer flexible ferrite sheets. The structure consists of a spiral inductor sandwiched between two ferrite layers, which act as a magnetic core, and a multilayer capacitor above the ferrite layers. Analytical equations are provided to calculate the inductance, capacitance, and resistance of the design based on its geometric parameters and the material properties. A design procedure is outlined to size the components based on desired inductance and capacitance values. The integrated LC filter is simulated and a prototype is fabricated and tested to validate the analytical model.
This document proposes a new approach called Embedded Condition Based Maintenance (ECBM) that aims to improve existing Condition Based Maintenance (CBM) systems. ECBM would use diagnostic and prognostic signals that are already embedded in industrial control systems, rather than additional sensors. This would make ECBM lower cost and faster at fault detection compared to traditional CBM. The document discusses challenges with CBM, outlines the proposed ECBM approach, and describes how ECBM would use clustering, intelligent schemes and Petri nets with existing control modules to perform online fault detection and maintenance recommendations.
The document summarizes a study that used screening curves to identify potential renewable energy candidate plants for green-based generation expansion planning in Kenya. The screening curves analyzed the total annual generation costs of various options based on their capital costs, fixed operating costs, variable fuel costs, capacity factors, and other technical parameters. The study found the most suitable base load candidate plants were 140MW geothermal, 140MW low grand falls hydro, 300MW wind, 1000MW imports, 60MW Mutonga hydro and 1000MW nuclear plants. Suitable peaking plants included 180MW gas turbine using natural gas, 100MW solar PV, and imports. These plants provide a mix of renewable generation options for Kenya's generation expansion planning to lower costs and reduce emissions.
1. The document discusses strengthening of reinforced concrete (RC) beams with externally bonded glass fiber reinforced polymers (GFRPs).
2. Shear failure of RC beams is identified as a disastrous failure mode, and GFRP composites have become a popular technique for shear strengthening due to advantages like high strength and corrosion resistance.
3. The document reviews several studies that have examined using GFRP wraps, strips, and grids for shear strengthening RC beams, finding they can increase shear capacity significantly.
This document analyzes the NACA 6412 airfoil for use as a propeller on a flying bike. It summarizes the analysis conducted using JavaProp software. The analysis found that a two-blade propeller with the NACA 6412 airfoil would produce sufficient thrust for the flying bike while keeping weight low. Under shrouded and squared tip conditions, thrust was maximized. The analysis validated thrust values against prior mathematical modeling, meeting targets with less than 0.02% difference. It was concluded that a two-blade propeller would be better than three blades for this application due to better weight efficiency while producing equivalent thrust.
This document evaluates the hydraulic conductivity of a marble dust-soil composite. Laboratory tests were conducted to determine the index properties and compaction characteristics of a clayey soil treated with 0-20% marble dust. Hydraulic conductivity tests using a consolidometer found that permeability decreased with up to 12.5% marble dust addition and remained constant thereafter. Permeability also decreased with higher compaction moisture content. Based on the results, 25% marble dust addition was proposed to satisfy the regulatory permeability requirement of ≤ 1 x 10-9 m/s for landfill liner materials.
Synthesis Characterization And Antimicrobial Activity Of 6- Oxido-1- ((5 -5- ...IOSR Journals
This document describes the synthesis and characterization of several novel carbamate derivatives and their evaluation for antimicrobial activity. Specifically, it details the multi-step synthesis of cyclopropyl/cyclohexyl/terahydro-2H-pyran-4-yl/tetrahydro-2H-thiopyran-4-yl/perfluorophenyl (6-oxido-1-((5-(5-(pyridin-3-yl)-1H-tetrazol-1-yl)-1,3,4-thiadiazol-2-yl)methyl)-4,8-dihydro-1H-[1,3,2]dioxaphosphepino
The document summarizes research on using activated flux in TIG welding of mild steel. Activated TIG welding involves brushing a thin layer of activated flux onto the welding joint before welding. Several studies found that activated flux can increase weld penetration, reduce weld width, and increase the depth-to-width ratio compared to conventional TIG welding. Cr2O3 flux produced the most significant effects in one study, increasing penetration on mild steel while decreasing hardness and increasing the depth-to-width ratio. Overall, activated flux aided TIG welding has been shown to improve weld quality and mechanical properties for mild steel compared to conventional TIG welding.
1. The document discusses data hiding techniques for images, specifically uniform embedding. It reviews existing methods like LSB substitution and proposes developing a new technique to select pixels for embedding, reduce embedded text size, and increase confidentiality.
2. It surveys related work on minimizing distortion in steganography, a modified matrix encoding technique for low distortion, and designing adaptive steganographic schemes.
3. The objectives are to develop a new pixel selection technique for embedding, reduce embedded text size, and increase resistance to extraction through high confidentiality. The significance is providing a solution to digital image steganography problems and focusing on choosing pixels to embed text under conditions.
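The baseline LSB substitution that the review starts from fits in a few lines (grayscale cover bytes and LSB-first bit order are illustrative choices):

```python
# Hide each message bit in the least significant bit of one cover byte.
def embed(pixels, message: bytes):
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    out = list(pixels)
    for pos, bit in enumerate(bits):
        out[pos] = (out[pos] & ~1) | bit   # overwrite only the LSB
    return out

def extract(pixels, n_bytes: int):
    bits = [p & 1 for p in pixels[: n_bytes * 8]]
    return bytes(
        sum(bits[b * 8 + i] << i for i in range(8)) for b in range(n_bytes)
    )

cover = list(range(64))        # 64 cover bytes hold up to 8 message bytes
stego = embed(cover, b"hi")
assert extract(stego, 2) == b"hi"
```

The proposed adaptive schemes improve on this baseline by choosing which pixels to modify, rather than always using the first ones in scan order.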
The document proposes a privacy-preserving reputation system for location-based queries. It aims to allow users to query a database of location data (points of interest) while protecting their location information and preventing unauthorized access. The system uses an adaptive oblivious transfer protocol for secure data transmission between the user and location server. It also establishes a secure communication mechanism using encryption and decryption during the data retrieval process. Additionally, the system incorporates a privacy-preserving reputation technique using authorization rules and data integrity checks to control misleading data and ensure data accuracy. The experimental results show that the proposed system using elliptic curve cryptography encryption has lower overhead and delay than existing systems using RSA encryption for private information retrieval.
The document discusses secure data sharing in cloud storage using a key-aggregate cryptosystem (KAC) which allows efficient delegation of decryption rights for any set of ciphertexts. KAC produces constant size ciphertexts and allows any set of secret keys to be aggregated into a single key encompassing the power of the keys being aggregated. This aggregate key can then be sent to others for decryption of the ciphertext set while keeping files outside the set confidential.
iaetsd Secured multiple keyword ranked search over encrypted databasesIaetsd Iaetsd
This document proposes a Robust Key-Aggregate Cryptosystem (RKAC) that allows flexible and efficient assignment of decryption rights for encrypted data stored in cloud storage. The RKAC produces constant-sized ciphertexts such that a constant-sized aggregate decryption key can decrypt any subset of ciphertexts. This allows the data owner to share access to selected encrypted files by sending a single small aggregate key to authorized users, without decrypting the files themselves or distributing individual keys. The RKAC is described as providing a secure and flexible method for sharing encrypted data stored in the cloud.
IRJET- Mutual Key Oversight Procedure for Cloud Security and Distribution of ...IRJET Journal
The document proposes a mutual key oversight procedure for cloud security and distribution of data based on a hierarchy method. It discusses using attribute-based encryption to encrypt data before outsourcing it to the cloud. The proposed scheme uses a hierarchical structure with a cloud authority, domain authorities, and users to provide security and scalability. It allows both private and public uploading and sharing of files within this hierarchy.
Secure Data Sharing and Search in Cloud Based Data Using Authoritywise Dynami...IOSRjournaljce
Data sharing is an important functionality in cloud storage. We describe new public-key cryptosystems which produce constant-size ciphertexts such that efficient delegation of decryption rights for any set of ciphertexts is possible. The novelty is that one can aggregate any set of secret keys and make them as compact as a single key, yet encompassing the power of all the keys being aggregated. Ensuring the security of cloud computing is the second major factor addressed: because of service availability failures and the possibility of malicious insiders, the single-cloud model has become less favoured. A movement towards multi-clouds, in other words "inter-clouds" or "cloud-of-clouds", has emerged recently. This work aims to reduce security risk and offer better flexibility and efficiency to the user. A multi-cloud environment has the ability to reduce security risks as well as ensure security and reliability.
Key-Aggregate Searchable Encryption (KASE) for Group Data Sharing via Cloud S...1crore projects
EXPLORING WOMEN SECURITY BY DEDUPLICATION OF DATAIRJET Journal
The document proposes a new deduplication system that supports authorized duplicate checks and separates file structure from file content. It uses the AES algorithm to encrypt information for security, and the MD5 and SHA-1 algorithms to recognize duplicate records. The private cloud is equipped with secret-key authentication to provide greater security. By separating file structure from content and supporting authorized duplicate checks, the proposed framework achieves efficient deduplication while protecting data security in the cloud.
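As a toy illustration of a hash-based duplicate check like the MD5/SHA-1 one described, here is a minimal Python sketch (function names and the in-memory store are assumptions, not the paper's design):

```python
import hashlib

def digests(data: bytes):
    """Return the (MD5, SHA-1) digest pair used to recognize duplicates."""
    return hashlib.md5(data).hexdigest(), hashlib.sha1(data).hexdigest()

store = {}  # digest pair -> stored record

def upload(data: bytes) -> str:
    key = digests(data)
    if key in store:
        return "duplicate"      # keep a pointer instead of a second copy
    store[key] = data
    return "stored"

assert upload(b"report-2016.pdf contents") == "stored"
assert upload(b"report-2016.pdf contents") == "duplicate"
```

Pairing two digests lowers the (already negligible) chance that distinct files collide; a modern design would simply use SHA-256.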
This document proposes a method for preventing cheating of messages based on block cipher using a digital envelope. It begins with an introduction to the need for data security during storage and transmission. It then discusses the AES encryption algorithm and related work involving encryption for wireless devices. The proposed method uses AES key expansion techniques to generate multiple keys for encryption and decryption of messages using a digital envelope, packing the encrypted message and key into a single packet. It claims this prevents attackers from accessing sensitive data. Future work could expand this to image encryption and decryption applications.
Prevention of Cheating Message based on Block Cipher using Digital Envelopeiosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
This document discusses securely mining data stored in the cloud using encryption techniques. It proposes using k-means clustering on the data, then encrypting it with AES. Homomorphic encryption is then performed using Paillier cryptosystem to allow computations on the encrypted data while preserving privacy. The key advantages discussed are that this approach allows for secure data mining and analysis in the cloud without revealing private information to unauthorized parties. It also analyzes related work on encryption and homomorphic techniques for secure cloud computing and big data analysis.
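The Paillier step can be sketched in a few lines of Python. The primes below are toy values chosen for illustration only and are far too small for real use; the point is the additively homomorphic property, where multiplying ciphertexts adds plaintexts:

```python
import math
import secrets

# Toy Paillier cryptosystem (insecurely small parameters, illustration only)
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1                       # standard generator choice
lam = math.lcm(p - 1, q - 1)

def L(u: int) -> int:
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)

def encrypt(m: int) -> int:
    r = secrets.randbelow(n - 1) + 1
    while math.gcd(r, n) != 1:
        r = secrets.randbelow(n - 1) + 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 1234, 5678
c = (encrypt(a) * encrypt(b)) % n2   # homomorphic addition on ciphertexts
assert decrypt(c) == a + b
```

This is why the cloud can aggregate encrypted values (sums, counts) without ever seeing the plaintexts.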
The document describes a secure cloud storage system that supports data forwarding without retrieving data. It uses a threshold proxy re-encryption scheme combined with a decentralized erasure code. This allows storage servers to directly re-encrypt and forward encrypted data to another user, without having the plaintext. The system has four phases: setup, storage, forwarding, and retrieval. It discusses parameters for the number of storage servers and key shares to provide security and robustness. The scheme supports encoding and forwarding of encrypted data in a distributed manner across independent servers.
This document discusses enhancing cloud computing security for data sharing within group members. It first introduces cloud computing and some common security issues like identity privacy and lack of a multiple owner model. It then discusses different types of clouds like public, private, community, and hybrid clouds. The document analyzes using the AES encryption algorithm to secure data sharing in a group compared to other algorithms like DES and Blowfish. It explains the AES encryption process and compares it favorably to other algorithms based on factors like memory usage, weakness of keys, and performance. The conclusion is that AES provides better security for data sharing within groups in the cloud.
Enhancing Cloud Computing Security for Data Sharing Within Group Membersiosrjce
A robust and verifiable threshold multi authority access control system in pu...IJARIIT
Attribute-based encryption (ABE) is regarded as a promising cryptographic tool for assuring data owners' direct control over their data in public cloud storage. Earlier ABE schemes involve only one authority to maintain the whole attribute set, which introduces a single-point bottleneck for both security and performance. Several multi-authority schemes have since been proposed, in which multiple authorities separately maintain disjoint attribute subsets, yet the single-point bottleneck problem remains unsolved. In this survey paper, from another perspective, we present a threshold multi-authority CP-ABE access control scheme for public cloud storage, named TMACS, in which multiple authorities jointly manage a uniform attribute set. In TMACS, taking advantage of (t, n) threshold secret sharing, the master key can be shared among multiple authorities, and a legal user can generate his or her secret key by interacting with any t authorities. Security and performance analysis shows that TMACS is not only verifiably secure when fewer than t authorities are compromised, but also robust when no fewer than t authorities are alive in the system. Furthermore, by efficiently combining the traditional multi-authority scheme with TMACS, we construct a hybrid scheme that satisfies the scenario of attributes coming from different authorities while achieving both security and system-level robustness.
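The (t, n) threshold sharing that TMACS builds on can be sketched with Shamir secret sharing, where any t of n authorities can jointly reconstruct the master key. A minimal sketch (field modulus and parameter names are illustrative assumptions):

```python
import secrets

# Toy (t, n) Shamir secret sharing over a prime field
P = 2**127 - 1           # a Mersenne prime, used as the field modulus
t, n_auth = 3, 5         # any 3 of 5 authorities can recover the master key

def make_shares(secret: int):
    # Random degree-(t-1) polynomial with the secret as constant term
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n_auth + 1)]

def reconstruct(shares):
    # Lagrange interpolation evaluated at x = 0
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

master = 123456789
shares = make_shares(master)
assert reconstruct(shares[:t]) == master        # any t shares suffice
assert reconstruct(shares[2:2 + t]) == master   # a different subset also works
```

Fewer than t shares reveal nothing about the master key, which is exactly the property that removes the single-authority bottleneck.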
The Time-Consuming Task Of Preparing A Data Set For...Kimberly Thomas
The document discusses preparing data sets for analysis in data mining and privacy preserving techniques. It states that preparing data sets is a time-consuming task that requires complex SQL queries, joining tables, and aggregating columns. Significant manual effort is needed to build data sets in a horizontal layout. It also discusses the need for privacy-preserving algorithms to protect sensitive data during the data mining process. The document proposes using case, pivot and SPJ methods to horizontally aggregate data, then employing a homomorphic encryption scheme to preserve privacy during the aggregations. Homomorphic encryption allows computations on encrypted data to produce an encrypted result that matches the result of operations on plaintext.
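The CASE-based horizontal aggregation mentioned above can be illustrated with a small SQLite example; the table and column names are hypothetical, not taken from the document:

```python
import sqlite3

# Pivot a vertical (customer, quarter, amount) table into one row per
# customer using the CASE method of horizontal aggregation.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sales(customer TEXT, quarter TEXT, amount INT);
INSERT INTO sales VALUES
 ('alice','Q1',10),('alice','Q2',20),('bob','Q1',5),('bob','Q2',7);
""")
rows = con.execute("""
SELECT customer,
       SUM(CASE WHEN quarter='Q1' THEN amount ELSE 0 END) AS q1,
       SUM(CASE WHEN quarter='Q2' THEN amount ELSE 0 END) AS q2
FROM sales GROUP BY customer ORDER BY customer
""").fetchall()
assert rows == [('alice', 10, 20), ('bob', 5, 7)]
```

Each CASE expression becomes one output column, which is why building wide data-set layouts by hand requires so many repetitive clauses; the PIVOT and SPJ methods automate the same transformation.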
Implementation of De-Duplication AlgorithmIRJET Journal
The document describes an implementation of a data de-duplication algorithm using convergent encryption. It discusses how de-duplication reduces storage usage by identifying and removing duplicate copies of data. Convergent encryption derives the encryption key from a hash of the file's content, so identical files produce identical ciphertexts and duplicate encrypted files can be de-duplicated while preserving privacy. The algorithm divides files into blocks, generates a hash for each block, and encrypts the blocks using those hashes as keys. When a file is uploaded, its hash is checked against existing hashes to identify duplicates, and duplicates are replaced by pointers to the stored copy. This allows efficient de-duplication while keeping stored data encrypted for privacy and security.
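A toy sketch of the convergent-encryption idea: the key is a hash of the plaintext, so equal plaintexts always yield equal ciphertexts and the server can de-duplicate them. The SHA-256-based XOR keystream below is a stand-in for a real block cipher such as AES, used here only to keep the example self-contained:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from the key (stand-in for AES)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def convergent_encrypt(plaintext: bytes) -> bytes:
    key = hashlib.sha256(plaintext).digest()   # key = hash of the content
    ks = keystream(key, len(plaintext))
    return bytes(a ^ b for a, b in zip(plaintext, ks))

c1 = convergent_encrypt(b"shared block contents")
c2 = convergent_encrypt(b"shared block contents")
assert c1 == c2   # identical ciphertexts -> the server can de-duplicate
```

The deliberate determinism is the trade-off: it enables de-duplication but also lets anyone who can guess a file's exact contents confirm that guess, which is why the summary's "authorized duplicate check" matters.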
Multi-part Dynamic Key Generation For Secure Data EncryptionCSCJournals
Storage of user or application-generated user-specific private, confidential data on a third party storage provider comes with its own set of challenges. Although such data is usually encrypted while in transit, securely storing such data at rest presents unique security challenges. The first challenge is the generation of encryption keys to implement the desired threat containment. The second challenge is secure storage and management of these keys. This can be accomplished in several ways. A naive approach can be to trust the boundaries of a secure network and store the keys within these bounds in plain text. A more sophisticated method can be devised to calculate or infer the encryption key without explicitly storing it. This paper focuses on the latter approach. Additionally, the paper also describes the implementation of a system that in addition to exposing a set of REST APIs for secure CRUD operations also provides a means for sharing the data among specific users.
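One standard way to calculate a key instead of explicitly storing it is a password-based key derivation function. The per-user salt construction and iteration count below are assumptions for illustration, not the paper's exact scheme:

```python
import hashlib

def derive_key(password: str, user_id: str) -> bytes:
    """Recompute the same 32-byte key on demand; nothing secret is stored."""
    salt = hashlib.sha256(user_id.encode()).digest()   # deterministic per-user salt
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

k1 = derive_key("correct horse", "user-17")
k2 = derive_key("correct horse", "user-17")
assert k1 == k2 and len(k1) == 32            # reproducible across sessions
assert derive_key("correct horse", "user-18") != k1   # isolated per user
```

Because the key is re-derived from inputs the user supplies, the storage provider never holds it at rest, sidestepping the key-management challenge the paper describes.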
AWS Cloud Based Encryption Decryption SystemIRJET Journal
This document describes an AWS cloud-based encryption and decryption system. The system uses a web app that allows users to easily encrypt and decrypt files for added security and privacy. Files are encrypted using the XOR cipher and SHA-512 hashing algorithm, making them very difficult to decrypt without the proper password. Encrypted files can only be decrypted using the web app. The system aims to increase security awareness and provide a simple encryption tool for common users to protect their confidential data.
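A minimal sketch of an XOR cipher keyed by a SHA-512 digest of the password; the web app's exact construction is not given, so this pairing of the two primitives is an assumption:

```python
import hashlib

def xor_with_password(data: bytes, password: str) -> bytes:
    """XOR the data against a 64-byte keystream from SHA-512(password)."""
    digest = hashlib.sha512(password.encode()).digest()
    return bytes(b ^ digest[i % len(digest)] for i, b in enumerate(data))

ct = xor_with_password(b"confidential report", "s3cret")
assert ct != b"confidential report"
assert xor_with_password(ct, "s3cret") == b"confidential report"  # XOR is symmetric
```

Note that a repeating 64-byte XOR keystream is far weaker than a real cipher such as AES; the sketch only shows why encryption and decryption are the same operation in this design.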
This document provides a technical review of secure banking using RSA and AES encryption methodologies. It discusses how RSA and AES are commonly used encryption standards for secure data transmission between ATMs and bank servers. The document first provides background on ATM security measures and risks of attacks. It then reviews related work analyzing encryption techniques. The document proposes using a one-time password in addition to a PIN for ATM authentication. It concludes that implementing encryption standards like RSA and AES can make transactions more secure and build trust in online banking.
IOSR Journal of Computer Engineering (IOSR-JCE)
e-ISSN: 2278-0661, p-ISSN: 2278-8727, Volume 18, Issue 1, Ver. III (Jan – Feb. 2016), PP 36-40
www.iosrjournals.org
DOI: 10.9790/0661-18133640
Input Cumulative Cryptosystem for Scalable Information Contribution in Cloud

Kamma Madhavi1, Venkatasivasankara Reddy2
1 Student of M.Tech (CSE), QIS Institute of Technology, Ongole, A.P., India
2 Asst. Prof., Department of Computer Science and Engineering, QIS Institute of Technology, Ongole, A.P., India
Abstract: In cloud computing, data storage and sharing is a promising service model. This paper describes a secure, efficient, and flexible scheme for sharing data with multiple people in a cloud storage system. We describe a novel public-key cryptosystem that produces constant-size ciphertexts such that efficient delegation of decryption rights for any set of ciphertexts is possible. The construction can aggregate any set of secret keys and compress them into a single compact key that carries the power of all the keys being aggregated. In other words, the holder of the master key can release a constant-size aggregate key for flexible choices of a ciphertext set in cloud storage, while all other encrypted files outside the ciphertext set remain confidential. The aggregate key can be conveniently sent to others or stored in a smart card with very limited secure storage. Cloud computing technology is widely used so that data outsourced to the cloud can be accessed easily; different users can share that data through separate virtual machines that reside on a single physical machine.
Keywords: Cloud computing, key-aggregate encryption, attribute-based encryption, aggregate keys
I. Introduction
Cloud computing is emerging as a major platform for storing, maintaining, and sharing data. Demand for data maintenance and storage is increasing in all fields, whether the users come from corporate, military, or IT organisations. Data protection has become a basic concern for cloud users, who do not trust clouds with the confidentiality of their data. Clouds are widely used for sharing data: cloud users can grant subsets of their information to their friends and colleagues. While sharing data, security is an essential concern. Most commonly a third-party server is trusted to provide security: requests are sent to the server for verification, and hosts accessing the cloud are forced to trust this third party for their security. Nevertheless, there remain possibilities of cheating, hacking, and intrusion attacks [1].
Suppose that in a hospital management system doctors and patients access the cloud to share information about diseases and medications. A doctor uploads information about his patients to the cloud, but he is not satisfied with the cloud's security guarantees, so he encrypts all his data before uploading the records. Two days later one of the patients requests the information relevant to him. Since the doctor has already encrypted the data, a decryption key must be given to the patient. If a standard approach is taken for selecting the decryption key, three situations arise: 1. All files are encrypted with the same encryption key; the doctor then sends one decryption key, which exposes the secrets of all the data. 2. All files are encrypted with distinct keys; in this case separate decryption keys must be sent, which is plainly inefficient, as the data owner has to send a large number of decryption keys. 3. While distributing the secret keys there is a chance of an intruder's attack, and some third party may try to obtain critical information.
To overcome these limitations in data sharing, a solution is proposed in this paper: "Encrypt all data with distinct encryption keys and send only a single decryption key. This single decryption key should be able to decrypt multiple ciphertexts. The promising feature of the decryption key is that it is an aggregate of all the individual decryption keys, yet it stays as small as a single key [1]. The hosts involved in communication should be able to monitor security breaches, hence an intrusion detection system should also be provided." The decryption key is distributed over a secure channel. A small decryption key is desirable, as it can then be used on modern mobile phones, wireless sensors, smart cards, and similar constrained devices. This work finds applications in hospital management, military organisations, and so on.
II. Related Work
We first recall an encryption scheme that was originally proposed for concisely transmitting a large number of keys in a broadcast scenario. The construction is simple, and we briefly review its key derivation process here to illustrate the appealing properties we want to achieve. The key for a set of classes (a subset of all possible ciphertext classes) is derived as follows. A composite modulus N = pq is chosen, where p and q are two large random primes, and a master secret key is picked at random. Each class is associated with a distinct prime, and all these primes are placed in the public system parameter. A constant-size key can then be derived for any set Sˈ and given to the users who have been granted the access rights for Sˈ. However, this construction is designed for the symmetric-key setting: the content provider needs the corresponding secret keys to encrypt data, which is not suitable for some applications. Since the method is used to derive a secret value rather than a public/secret key pair, it is unclear how to apply this idea to a public-key encryption scheme. Finally, we note that there are schemes which try to reduce the key size for achieving authentication in symmetric-key encryption; however, sharing of decryption power is not a concern in those constructions.
Identity-based encryption (IBE) is a public-key encryption scheme in which the public key of a user can be set to an identity string of that user (e.g., an e-mail address or mobile number). A private key generator (PKG) holds a master secret key and issues a secret key to each user with respect to the user's identity. A content provider can take the public parameter and a user identity to encrypt a message, and the recipient can decrypt the ciphertext with his secret key. There have been attempts to build IBE with key aggregation, but in those schemes aggregation is constrained in that all keys to be aggregated must come from different "identity divisions": while there is an exponential number of identities, and hence secret keys, only a polynomial number of them can be aggregated. This significantly increases the cost of storing and transmitting ciphertexts, which is impractical in many situations such as shared cloud storage. Another approach is to apply a hash function to the string describing the class, and keep hashing repeatedly until a prime is obtained as the output of the hash function. In contrast, our scheme features constant ciphertext size, and its security holds in the standard model. In fuzzy IBE, one single compact secret key can decrypt ciphertexts encrypted under many identities which are close in a certain metric space, but not under an arbitrary set of identities, and therefore it does not match our notion of key aggregation.
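The hash-to-prime idea mentioned above can be made concrete. The following Python sketch is illustrative only (the names `class_prime` and `is_prime` are ours, not from any cited scheme): it hashes a class label, truncates the digest to 64 bits, and re-hashes until the result passes a deterministic Miller-Rabin primality test.

```python
import hashlib

def is_prime(n: int) -> bool:
    """Deterministic Miller-Rabin, valid for all n below 2**64."""
    if n < 2:
        return False
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in small:  # these witness bases suffice for n < 2**64
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def class_prime(label: str) -> int:
    """Hash the class label, then re-hash the digest until the
    top 64 bits form a prime; deterministic per label."""
    digest = hashlib.sha256(label.encode()).digest()
    while True:
        candidate = int.from_bytes(digest, "big") >> 192  # keep 64 bits
        if is_prime(candidate):
            return candidate
        digest = hashlib.sha256(digest).digest()
```

Since the mapping is deterministic, every party derives the same prime for a given class label without any shared state beyond the hash function.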
III. Data Sharing
KAC is meant for data sharing. The data owner can share any desired portion of the data with confidentiality, and KAC is an easy and secure way to transfer delegation authority. To share selected data on the server, Alice first performs the Setup; the public/master-secret key pair (pk, msk) is then generated by executing KeyGen. The master key msk is kept secret, while the public key pk and the parameter param are made public. Anyone can encrypt data m and upload it to the server, and other users can access that data once they hold the decrypting authority. If Alice wants to share a set S of her data with a friend Bob, she can compute the aggregate key KS for Bob by executing Extract(msk, S). Since KS is a constant-size key, it can be shared through secure e-mail. Once Bob has received the aggregate key, he can download the data and access it.
IV. Security Of Cloud Data Storage
Many cloud service providers offer storage as a form of service: they take data from users and store it in large data centres. Although these providers claim that the data stored in the cloud is completely safe, there have been cases in which data stored in these clouds was modified or lost, whether through a security breach or through human error. Various cloud service providers adopt different technologies to safeguard the data stored in their clouds, but the question remains: is the data stored in these clouds secure enough against any sort of security breach? The virtualized nature of cloud storage makes traditional mechanisms unsuitable for handling these security issues. Service providers use different encryption techniques, such as public-key and private-key encryption, to secure the data resting in the cloud. Another major issue that is mostly neglected is data remanence: the data left behind after data removal. It causes minimal security threats in private cloud offerings, but severe security issues may emerge in public cloud offerings as a result of data remanence. Various cases of cloud security breaches have come to light in the last few years. The cloud-based e-mail marketing services company Epsilon suffered a data breach through which a large section of its customers, including JP Morgan Chase, Citibank, Barclays Bank, hotel chains such as Marriott and Hilton, and big retailers such as Best Buy and Walgreens, was heavily affected, and a huge chunk of customer data, including customer e-mail IDs and bank account details, was exposed to the hackers. A similar incident happened with Amazon, causing disruption of its EC2 service; the damage proved quite costly for both users and system administrators. The above-mentioned events show the vulnerability of cloud services. Another important aspect is that known and popular domains have been used to launch malicious software or to hack into companies' secured databases. It has been shown that Amazon is prone to side-channel attacks: a malicious virtual machine occupying the same server as the target can easily gain access to confidential data [10]. The question is whether any such security policy should be in place for these trusted users as well. An incident of data loss occurred last year at the online storage service provider "MediaMax," also known as "The Linkup," when, due to a system administration error, active customer data was deleted. SLAs with cloud service providers should cover all the events that may cause data loss,
whether due to human or system-generated error. Virtualization in general increases the security of a cloud environment: with virtualization, a single machine can be divided into many virtual machines, providing better data isolation and safety against denial-of-service attacks [10]. The VMs also provide a security test-bed for the execution of untested code from untrusted users.
V. Data Privacy In Cloud Computing Environment
Considering data privacy in a cloud computing environment, a traditional way to ensure it is to rely on the server to enforce access control after authentication, which means any unexpected privilege escalation will expose all data. In a shared-tenancy cloud computing environment, things become even worse: data from different users can be hosted on separate virtual machines (VMs) but reside on a single physical machine, and data in a target VM could be stolen by instantiating another VM co-resident with the target one.
VI. Problem Statement
A constant-size decryption key requires a pre-defined hierarchical relationship. With a fixed hierarchy there is only one way in which the records can be partitioned; if we want to give out access rights based on something else (e.g., on document type or on the sensitivity of the data), we have to look at all the low-level categories involved and give a separate decryption key for each [2]. As a consequence, a larger number of decryption keys has to be used [1].
VII. System Architecture
A key-aggregate encryption scheme consists of five polynomial-time algorithms [1], as shown in the figure. The data owner establishes the public system parameter via Setup and generates a public/master-secret key pair via KeyGen. Messages can be encrypted via Encrypt by anyone, who also decides which ciphertext class is associated with the plaintext message to be encrypted. The data owner can use the master secret to generate an aggregate decryption key for a set of ciphertext classes via Extract. The generated keys can be passed to receivers securely (via secure e-mail or secure devices). Finally, any user holding an aggregate key can decrypt, via Decrypt, any ciphertext whose class is contained in the aggregate key.
A. Setup(1^λ, n):
Executed by the data owner to set up an account on an untrusted server. On input a security level parameter 1^λ and the number of ciphertext classes n (i.e., the class index is an integer between 1 and n), it outputs the public system parameter param, which is omitted from the input of the other algorithms for brevity.
B. KeyGen():
Executed by the data owner to randomly generate a public/master-secret key pair (pk, msk).
C. Encrypt(pk, i, m):
Executed by anyone who wants to encrypt data. On input the public key pk, an index i denoting the ciphertext class, and a message m, it outputs a ciphertext C.
D. Extract(msk, S):
Executed by the data owner to delegate the decrypting power for a certain set of ciphertext classes to a receiver. On input the master-secret key msk and a set S of indices corresponding to different classes, it outputs the aggregate key for the set S, denoted KS.
E. Decrypt(KS, S, i, C):
Executed by a receiver who obtained an aggregate key KS generated by Extract. On input KS, the set S, an index i denoting the ciphertext class that the ciphertext C belongs to, and C, it outputs the decrypted result m if i ∈ S.
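To make the five-algorithm interface concrete, here is a toy Python sketch. It is not the pairing-based construction of [1]: it mimics the symmetric-key broadcast scheme recalled in Related Work (distinct primes per class, modular exponentiation of a master secret), uses an insecure demo modulus and an XOR keystream in place of real encryption, and its Encrypt uses the master secret rather than a public key. All names (`ToyKAC`, `_keystream_xor`) are ours. It does, however, exhibit the key property: `extract` returns a single integer whose size does not depend on |S|.

```python
import hashlib
import secrets
from math import prod

def _keystream_xor(key_int: int, data: bytes) -> bytes:
    """Toy stream cipher: XOR with a SHA-256 keystream (demo only)."""
    key = key_int.to_bytes(64, "big")
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

class ToyKAC:
    # Demo modulus (a Mersenne prime). A real deployment of this style of
    # scheme needs an RSA modulus of unknown factorisation; this toy only
    # demonstrates correctness, not security.
    N = 2**127 - 1

    def __init__(self, n_classes: int):            # Setup(1^λ, n)
        self.primes = self._first_odd_primes(n_classes)  # public param
        self.msk = secrets.randbelow(self.N - 2) + 2     # KeyGen()

    @staticmethod
    def _first_odd_primes(n):
        primes, c = [], 3
        while len(primes) < n:
            if all(c % p for p in primes):  # odd composites fail this
                primes.append(c)
            c += 2
        return primes

    def class_key(self, i: int) -> int:
        """k_i = msk^(Π_{j≠i} p_j) mod N — the per-class secret."""
        exp = prod(p for j, p in enumerate(self.primes) if j != i)
        return pow(self.msk, exp, self.N)

    def encrypt(self, i: int, message: bytes) -> bytes:  # Encrypt(·, i, m)
        return _keystream_xor(self.class_key(i), message)

    def extract(self, S: set[int]) -> int:               # Extract(msk, S)
        """One constant-size aggregate key covering every class in S."""
        exp = prod(p for j, p in enumerate(self.primes) if j not in S)
        return pow(self.msk, exp, self.N)

    def decrypt(self, K_S: int, S: set[int], i: int, C: bytes) -> bytes:
        """Raise K_S to Π_{j∈S, j≠i} p_j to recover k_i, then decrypt."""
        if i not in S:
            raise ValueError("class not covered by aggregate key")
        exp = prod(self.primes[j] for j in S if j != i)
        return _keystream_xor(pow(K_S, exp, self.N), C)

# Usage: Alice shares classes {1, 2} with Bob via one compact key.
kac = ToyKAC(n_classes=4)
ct = kac.encrypt(2, b"patient record")
K_S = kac.extract({1, 2})
assert kac.decrypt(K_S, {1, 2}, 2, ct) == b"patient record"
```

The correctness hinges on exponent arithmetic: K_S carries msk raised to the primes of the classes *outside* S, so raising it to the remaining in-set primes (all but p_i) reconstructs exactly k_i, while k for any class outside S stays out of reach without extra exponent factors.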
VIII. Proposed Work
In this paper we propose a technique to make data sharing secure and leak-resilient. The purpose of this article is to provide a way for secure data sharing on the cloud using key-aggregate encryption and intrusion detection (KAEID). In KAEID the decryption key is made powerful enough to decrypt multiple ciphertexts; at the same time an intrusion detection system (IDS) monitors the data exchange between two hosts and verifies that they are trusted hosts [2]. Specifically, the problem statement is: "To generate a constant-size aggregate decryption key, produced by the data owner, which can decrypt multiple ciphertexts. The decryption key is an aggregate key which encompasses the power of all the secret keys. This data sharing system also supports intrusion detection to find out the suspicious activities of hosts. If the hosts involved in communication are trusted hosts, data sharing will take place; otherwise it is rejected." In KAEID a user encrypts a message under a public-key cryptosystem. Messages are encrypted by whoever decides the public key as well as the ciphertext category. Ciphertexts are categorised into different "classes"; the plain messages belonging to one ciphertext class share a few common features. All hosts set up an account on the cloud server. Hosts can log in to the cloud server, perform their tasks,
and log out of the server. The data owner generates a public-key/master-key pair: the public key is used for encryption, while the master key is kept secret and used for aggregating all the decryption keys. The aggregate key is extracted from the master key and the corresponding ciphertext class identifiers, and it is delegated to the data recipient. The data recipient matches the set of ciphertext classes and decrypts the message; hence the downloading of unwanted data is also prevented. Each host in the data sharing system works as an IDS: it collects the IP addresses of all hosts in its subnetwork and watches for suspicious activity in the network. If any suspicious host is found, it is blacklisted, and data sharing with it is rejected. As shown in Fig. 1, two hosts, a data owner and a data recipient, access the cloud network. The data owner encrypts the data and uploads it to the cloud server, and the aggregate key is delegated to the data recipient for decryption of the requested messages. The hosts involved in the communication also work as IDSs, which collect and list the IP addresses of the corresponding subnetwork, monitor suspicious activity, and reject data sharing with hosts found to be blacklisted.
Fig. Proposed Architecture diagram
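The host-level IDS behaviour described above (collect peer IPs, blacklist suspicious hosts, refuse data sharing with blacklisted peers) can be sketched as follows. This is an assumed minimal design for illustration, not the paper's implementation; all class and method names are ours.

```python
from ipaddress import ip_address

class PeerIDS:
    """Minimal sketch of a per-host IDS: tracks peer IPs in the
    subnetwork, blacklists suspicious ones, and gates data sharing."""

    def __init__(self) -> None:
        self.known_peers: set[str] = set()
        self.blacklist: set[str] = set()

    def observe(self, peer_ip: str) -> None:
        """Record a peer seen in the subnetwork (validates the address)."""
        ip_address(peer_ip)  # raises ValueError on a malformed address
        self.known_peers.add(peer_ip)

    def flag_suspicious(self, peer_ip: str) -> None:
        """Blacklist a peer exhibiting suspicious activity."""
        self.blacklist.add(peer_ip)

    def allow_sharing(self, peer_ip: str) -> bool:
        """Sharing proceeds only with known, non-blacklisted peers."""
        return peer_ip in self.known_peers and peer_ip not in self.blacklist

# Usage: data sharing is rejected for blacklisted or unknown hosts.
ids = PeerIDS()
ids.observe("10.0.0.5")
ids.observe("10.0.0.9")
ids.flag_suspicious("10.0.0.9")
assert ids.allow_sharing("10.0.0.5")
assert not ids.allow_sharing("10.0.0.9")
```

Treating unknown peers the same as blacklisted ones is a deliberate default-deny choice, matching the paper's premise that sharing should occur only between trusted hosts.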
IX. Results And Discussion
In this experiment it is assumed that there are n ciphertext classes with class identifiers CI, and that S is a set of ciphertext class identifiers, represented as S = {CI | CI = 1, 2, 3, ..., n}. The encryption phase is independent of the account setup: encryption time does not depend on the number of messages to be encrypted, and encryption is done in constant time. Let r be the portion of ciphertext classes that concerns the data recipient, i.e., the ratio of delegated ciphertext classes to the total number of ciphertext classes. Decryption is done in a group: the decryption key is matched against the keys for the ciphertext classes with pairing operations, where S is the set of ciphertext classes. Each host in the communication collects the IP addresses of its neighbours within a time interval ∆t, and the IDS blacklists suspicious IP addresses. The blacklist's size increases even for a fixed number of peers, and as the number of peers increases, the detection delay increases. Here the routing time is not very significant; it is less than the time taken to handle the increased load.
X. Conclusion
As we all know, data security is a major concern for cloud users. This paper presents a technique which helps to achieve a secure and leak-proof system. Modern cryptographic algorithms and intrusion detection algorithms are used here to achieve a secure way of sharing data. In this system the data owner uses distinct encryption keys to encrypt messages before uploading them to the cloud, and sends a single decryption key to the other host. This single decryption key decrypts multiple ciphertexts at a time, thereby saving time as well as storage space, and unwanted data is not downloaded on the data recipient's side. Intrusion detection systems monitor security breakdowns in the network, and data sharing is stopped if any untrusted party appears in the network. Obtaining an ideal system without any data leakage is practically impossible, but this research work helps to solve certain problems very efficiently: it saves storage space, it saves the time spent in key exchange, and the key size remains constant and compact.
References
[1] C.-K. Chu, S. S. M. Chow, W.-G. Tzeng, J. Zhou, and R. H. Deng, "Key-Aggregate Cryptosystem for Scalable Data Sharing in Cloud Storage," IEEE, 2014.
[2] C. V. Zhou, S. Karunasekera, and C. Leckie, "A Peer-to-Peer Collaborative Intrusion Detection System," National ICT Australia, Department of Computer Science and Software Engineering, University of Melbourne, Australia, 2005.
[3] Q. Zhang and Y. Wang, "A Centralized Key Management Scheme for Hierarchical Access Control," in Proceedings of IEEE Global Telecommunications Conference (GLOBECOM 04). IEEE, 2004, pp. 2067-2071.
[4] M. J. Atallah, M. Blanton, N. Fazio, and K. B. Frikken, "Dynamic and Efficient Key Management for Access Hierarchies," ACM Transactions on Information and System Security (TISSEC), vol. 12, no. 3, 2009.
[5] A. Sahai and B. Waters, "Fuzzy Identity-Based Encryption," in Proceedings of Advances in Cryptology - EUROCRYPT 05, ser. LNCS, vol. 3494. Springer, 2005, pp. 457-473.
[6] S. S. M. Chow, Y. Dodis, Y. Rouselakis, and B. Waters, "Practical Leakage-Resilient Identity-Based Encryption from Simple Assumptions," in ACM Conference on Computer and Communications Security, 2010, pp. 152-161.
[7] V. Goyal, O. Pandey, A. Sahai, and B. Waters, "Attribute-Based Encryption for Fine-Grained Access Control of Encrypted Data," in Proceedings of the 13th ACM Conference on Computer and Communications Security (CCS 06). ACM, 2006, pp. 89-98.
[8] "Multi-Authority Attribute-Based Encryption," in ACM Conference on Computer and Communications Security, 2009, pp. 121-130.
[9] CERT Coordination Center, "Module 2 - Internet Security Overview," 2003; M. E. Locasto, J. J. Parekh, A. D. Keromytis, and S. J. Stolfo, "Towards Collaborative Security and P2P Intrusion Detection," 2005 IEEE Workshop on IAS, June 2005.
[10] B. Y. Zhao, J. D. Kubiatowicz, and A. D. Joseph, "Tapestry: An Infrastructure for Fault-Tolerant Wide-Area Location and Routing," Technical Report CSD-01-1141, University of California, Berkeley, 2000.
AUTHORS PROFILE
Author 1
KAMMA MADHAVI is pursuing M.Tech (Computer Science and Engineering) at QIS Institute of Technology, Prakasam Dist., Andhra Pradesh, India.
Author 2
VENKATASIVASANKARA REDDY is currently working as an Asst. Professor in the Department of Computer Science and Engineering, QIS Institute of Technology, Ongole, Prakasam Dist., Andhra Pradesh, India.