We provide training on IEEE 2016-17 projects for Ph.D. scholars and M.Tech, B.E., MCA, BCA, and Diploma students of all branches for their academic projects.
For more details, call or WhatsApp us at 7676768124 or 9545252155.
Email your base papers to "adritsolutions@gmail.co.in"
We provide IEEE projects in:
1) Cloud Computing, Data Mining, and Big Data projects using Java
2) Image Processing, Video Processing (MATLAB), and Signal Processing
3) NS2 (Wireless Sensor Networks, MANET, VANET)
4) Android apps
5) Java, JEE, J2EE, J2ME
6) Mechanical design projects
7) Embedded Systems and IoT projects
8) VLSI/Verilog projects (ModelSim and Xilinx using FPGA)
For more details, please visit us at:
Adrit Solutions
Near Maruthi Mandir
#42/5, 18th Cross, 21st Main
Vijaynagar
Bangalore.
IRJET - Efficient and Verifiable Queries over Encrypted Data in Cloud - IRJET Journal
This document proposes a scheme for efficient and verifiable queries over encrypted data stored in the cloud. It aims to allow an authorized user to query encrypted documents of interest while maintaining privacy. The scheme provides a verification mechanism to allow users to check the correctness of query results and identify any valid results omitted by a potentially untrustworthy cloud server. The document reviews related work on searchable encryption and verifiable queries. It then outlines the proposed approach to build secure verifiable queries for encrypted cloud data.
Efficient Privacy Preserving Clustering Based Multi Keyword Search - IRJET Journal
This document proposes an efficient privacy-preserving clustering-based multi-keyword search system. It uses hierarchical clustering to generate clusters of encrypted documents in the cloud. The system aims to improve search efficiency while maintaining security. It utilizes EM clustering, SHA-1 hashing for deduplication, and a user revocation method. Experimental results show the framework has advantages such as efficient memory and time utilization, secure search over encrypted data, secure data storage, and deduplication.
A Proficient and Confidentiality-Preserving Multi-Keyword Ranked Search ove... - Editor IJCATR
Cloud computing has become increasingly popular for data owners who outsource their data to public cloud servers while allowing data users to retrieve it. Due to privacy concerns, secure search over encrypted cloud data has motivated extensive research, mostly under the single-owner model. In practice, however, most cloud servers do not serve just one owner; instead, they support multiple owners who share the benefits of cloud computing. This paper introduces new schemes for Privacy-preserving Ranked Multi-keyword Search in a Multi-owner model (PRMSM). To enable cloud servers to perform secure search without learning the actual contents of either keywords or trapdoors, a novel secure search protocol is constructed. Further mechanisms rank the search results while preserving the privacy of relevance scores between keywords and files. To prevent attackers from eavesdropping on secret keys and impersonating legitimate data users when submitting queries, a novel dynamic secret-key generation protocol and a new data-user authentication protocol are presented.
Improving security for data migration in cloud computing using randomized enc... - IOSR Journals
1) The document proposes an encryption technique using randomization to improve security for data migration in cloud computing. It aims to address major security issues in cloud data migration like confidentiality, integrity, reliability and data security.
2) The proposed method uses a random key to encrypt data, and then encrypts the random key with a shared key before transmission. This adds an extra layer of security by obscuring the actual encryption key.
3) It is concluded that the randomized encryption technique makes it difficult for attackers to analyze encrypted texts and determine if they correspond to the same plaintext, improving security over existing methods for cloud data migration.
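The two-layer structure described above can be sketched in a few lines. This is a toy illustration of the wrap-the-random-key pattern only: the XOR keystream below stands in for a real cipher and is NOT secure, and the function names are my own, not the paper's.

```python
import os, hashlib

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher keyed by SHA-256 counter blocks.
    NOT real cryptography; it only shows the two-layer key structure."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def migrate(plaintext: bytes, shared_key: bytes):
    data_key = os.urandom(32)                      # fresh random key per transfer
    ciphertext = xor_stream(data_key, plaintext)   # layer 1: encrypt the data
    wrapped_key = xor_stream(shared_key, data_key) # layer 2: wrap the random key
    return ciphertext, wrapped_key

def receive(ciphertext: bytes, wrapped_key: bytes, shared_key: bytes) -> bytes:
    data_key = xor_stream(shared_key, wrapped_key)  # unwrap the random key
    return xor_stream(data_key, ciphertext)         # then decrypt the data

shared = os.urandom(32)
ct, wk = migrate(b"patient record 42", shared)
assert receive(ct, wk, shared) == b"patient record 42"
```

Because the data key is freshly random each time, two migrations of the same plaintext produce different ciphertexts, which is the property the paper credits for resisting same-plaintext analysis.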
Privacy preserving and delegated access control for cloud applications - redpel dot com
This document proposes an efficient multi-keyword ranked search (EMRS) scheme over encrypted mobile cloud data through blind storage. The EMRS enables search users to perform multi-keyword searches over encrypted documents stored on a cloud server and receive ranked search results based on relevance. It utilizes techniques like relevance scoring, secure k-nearest neighbor computation, and blind storage to provide search functionality while preserving security and privacy. The scheme is analyzed to demonstrate that it achieves confidentiality of documents and index, trapdoor privacy, trapdoor unlinkability, and conceals access patterns, addressing key security requirements. Experimental results show the EMRS provides improved efficiency and functionality compared to existing proposals.
Survey on Privacy-Preserving Multi-keyword Ranked Search over Encrypted Clou... - Editor IJMTER
With the advent of cloud computing, data owners are motivated to outsource their complex data management systems from local sites to the commercial public cloud for greater flexibility and economic savings. To protect data privacy, however, sensitive data has to be encrypted before outsourcing. Considering the large number of data users and documents in the cloud, it is crucial for the search service to allow multi-keyword queries and to provide result-similarity ranking to meet effective data-retrieval needs. Related work on searchable encryption focuses on single-keyword or Boolean keyword search and rarely differentiates the search results. We first propose a basic MRSE scheme using secure inner-product computation, and then significantly improve it to meet different privacy requirements in two levels of threat models. The Incremental High Utility Pattern Transaction Frequency Tree (IHUPTF-Tree) is designed according to the transaction frequency (descending order) of items to obtain a compact tree.
By using high-utility patterns, the items can be arranged efficiently. A tree structure is used to sort the items, and the frequent patterns are obtained from the sorted items. The frequent-pattern items are then retrieved from the database using a hybrid tree (H-Tree) structure, so execution time is reduced. Finally, the frequent-pattern items that satisfy the threshold value are displayed.
Data Search in Cloud using the Encrypted Keywords - IRJET Journal
This document presents a proposed system for searching encrypted data stored in the cloud without decrypting it. The system would allow users to perform expressive boolean keyword searches using encrypted trapdoors. It aims to improve efficiency and security over existing methods by supporting boolean expressions, hiding keyword values from servers, and proving security under a formal model. The system design involves data owners encrypting documents and keywords before outsourcing to the cloud. Users generate trapdoors from a trusted center and send them to the cloud server to retrieve matching encrypted files. The goals are to enable expressive searching, efficiency, privacy of keyword values, and provable security.
IRJET - Auditing and Resisting Key Exposure on Cloud Storage - IRJET Journal
1. The document discusses auditing and resisting key exposure in cloud storage. It proposes a new framework called an auditing protocol with key-exposure resilience that allows integrity of stored data to still be verified even if the client's current secret key is exposed.
2. It formalizes the definition and security model for such a protocol and proposes an efficient practical construction. The security proof and asymptotic performance analysis show the proposed protocol is secure and efficient.
3. Key techniques used include periodic key updates, homomorphic linear authenticators, and a novel authenticator construction to boost forward security and provide proof of retrievability with the current design.
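The "periodic key updates" technique above relies on a one-way key evolution: the next period's key is derived from the current one, and the old key is erased. A minimal sketch, assuming a hash-chain update (the paper's actual construction uses homomorphic linear authenticators and public verification, which this does not reproduce; the names here are illustrative):

```python
import hashlib, hmac

def evolve(key: bytes) -> bytes:
    """Derive the next period's key from the current one.
    The derivation is one-way: stealing the new key does not
    reveal the old one, so earlier periods' tags stay trustworthy."""
    return hashlib.sha256(b"key-update" + key).digest()

def tag(key: bytes, block: bytes) -> bytes:
    """Authenticator for one data block under the current period key."""
    return hmac.new(key, block, hashlib.sha256).digest()

# period 0: tag a block, then roll the key forward and delete the old one
k0 = hashlib.sha256(b"client secret").digest()
t0 = tag(k0, b"block-0")
k1 = evolve(k0)   # an attacker who exposes k1 cannot recover k0
assert hmac.compare_digest(t0, tag(k0, b"block-0"))
```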
International Journal of Computational Engineering Research (IJCER) - ijceronline
The International Journal of Computational Engineering Research (IJCER) is an international, monthly, online journal published in English. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
Classifying confidential data using SVM for efficient cloud query processing - TELKOMNIKA JOURNAL
Nowadays, organizations widely use cloud database engines from cloud service providers. Privacy remains the main concern for these organizations, as each is strictly looking for a more secure environment for its own data. Several studies have proposed different types of encryption methods to protect data in the cloud. However, the daily transactions, represented by queries against such databases, make encryption an inefficient solution. Therefore, recent studies have presented a mechanism for classifying the data prior to migrating it into the cloud; this reduces the need for encryption and so enhances efficiency. Yet most of the classification methods used in the literature are based on string matching, which suffers from requiring exact matches of terms: partial matches are not considered. This paper takes advantage of an N-gram representation along with Support Vector Machine classification. Real-time data is used in the experiment. After classification, the Advanced Encryption Standard algorithm is used to encrypt the confidential data. Results showed that the proposed method outperformed the baseline encryption method, emphasizing the usefulness of machine-learning techniques for classifying data based on confidentiality.
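The advantage of the N-gram representation over exact string matching can be seen with a few lines of standard-library Python. This sketch shows only the representation and a Dice similarity over it (the paper feeds such features to an SVM, which is omitted here; the strings are invented examples):

```python
def ngrams(text: str, n: int = 3) -> set:
    """Character n-grams of a string, padded so word edges produce grams."""
    text = f"  {text.lower()}  "
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def dice(a: str, b: str) -> float:
    """Dice coefficient over n-gram sets: 1.0 for identical strings,
    still high for partially matching ones."""
    ga, gb = ngrams(a), ngrams(b)
    return 2 * len(ga & gb) / (len(ga) + len(gb))

# exact string matching fails on a small variation...
assert "salary report" != "salary reports"
# ...while n-gram similarity still scores the pair highly
assert dice("salary report", "salary reports") > 0.8
```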
IRJET - Privacy Preserving Keyword Search over Cloud Data - IRJET Journal
The document proposes a scheme for secure ranked keyword search over encrypted cloud data. It discusses encrypting data before outsourcing it to the cloud for confidentiality. An index of keyword sets is stored on a local trusted server, while encrypted data files are stored on an untrusted cloud server. When a user searches for keywords, relevant files are ranked based on criteria like term frequency and file length. The top results are then retrieved from the cloud without revealing sensitive information to unauthorized parties. The system aims to enable efficient yet private keyword searches on large amounts of outsourced encrypted data.
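Ranking by term frequency and file length, as described above, can be sketched as follows. The exact scoring formula is not given in the abstract; this uses one common variant (log-scaled term frequency divided by file length) purely as an illustration, with invented file contents:

```python
import math
from collections import Counter

def relevance(term_freq: int, file_len: int) -> float:
    """Score one file for one keyword: log-scaled term frequency,
    normalized by file length so long files don't dominate."""
    if term_freq == 0:
        return 0.0
    return (1 + math.log(term_freq)) / file_len

files = {
    "a.txt": "cloud data cloud search cloud".split(),
    "b.txt": "cloud data".split(),
    "c.txt": "data search index".split(),
}
scores = {name: relevance(Counter(words)["cloud"], len(words))
          for name, words in files.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
```

Note how the length normalization lets the short, focused "b.txt" outrank the longer "a.txt" even though the latter contains the keyword more often.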
Privacy Preserving in Cloud Using Distinctive Elliptic Curve Cryptosystem (DECC) - ElavarasaN GanesaN
The document proposes a Distinctive Elliptic Curve Cryptosystem (DECC) for privacy preserving in cloud computing. DECC uses smaller keys compared to existing cryptographic algorithms. It performs key generation, encryption, and decryption faster than Triple DES, RDP, and ECC based on simulations with varying file sizes from 128MB to 1GB. DECC has lower average relative error, consumes less time for overall processing, and takes less time for anonymization compared to the other algorithms based on the simulation results. The proposed DECC aims to improve privacy and efficiency for data security in cloud computing.
1) The document proposes an optimized and secured semantic-based ranking approach for keyword search over encrypted cloud data. It aims to improve search accuracy by considering keyword semantics and different keyword forms.
2) An index is created from unencrypted files containing keyword-file mappings and encrypted relevance scores. Files are encrypted before outsourcing to the cloud.
3) The approach analyzes semantics between keywords, performs stemming, and calculates relevance scores. It encrypts the index and files before outsourcing to the cloud to protect data privacy during searches.
Role Based Access Control Model (RBACM) With Efficient Genetic Algorithm (GA)... - dbpublications
This document summarizes a research paper that proposes a new cloud data security model using role-based access control, encryption, and genetic algorithms. The model uses Token Based Data Security Algorithm (TBDSA) combined with RSA and AES encryption to securely encode, encrypt, and forward cloud data. A genetic algorithm is used to generate encrypted passwords for cloud users. Role managers are assigned to control user roles and data access. The aim is to integrate encoding, encrypting, and forwarding for secure cloud storage while minimizing processing time.
This document proposes a system for enabling secure and efficient ranked keyword search over outsourced cloud data. It summarizes that existing searchable encryption techniques only support basic Boolean search and do not consider relevance ranking. The proposed system explores using relevance scores from information retrieval to build a searchable index and develops a one-to-many order-preserving mapping technique to protect sensitive score information, allowing efficient server-side ranking without compromising keyword privacy. The system provides ranked search results while maintaining strong security guarantees.
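The core idea of a one-to-many order-preserving mapping can be sketched briefly. The paper's actual construction is more involved (it also flattens the score distribution); this illustration only shows how disjoint, ordered output intervals preserve comparisons while randomization hides score equality. All names and the output range are assumptions:

```python
import random

def build_mapping(scores, out_range=10**6):
    """Assign each distinct plaintext score a disjoint output interval,
    ordered like the scores themselves."""
    distinct = sorted(set(scores))
    width = out_range // len(distinct)
    return {s: (i * width, (i + 1) * width - 1)
            for i, s in enumerate(distinct)}

def encrypt_score(score, mapping):
    """Draw a fresh random value from the score's interval: equal scores
    map to many ciphertexts (one-to-many), yet ciphertext comparisons
    still reflect plaintext score order."""
    lo, hi = mapping[score]
    return random.randint(lo, hi)

m = build_mapping([3, 7, 7, 10])
c_low, c_mid, c_high = (encrypt_score(s, m) for s in (3, 7, 10))
assert c_low < c_mid < c_high   # order preserved across intervals
```

This is what lets the server rank results by comparing encrypted scores without learning the scores themselves.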
Enabling efficient multi keyword ranked search over encrypted mobile cloud da... - redpel dot com
This document summarizes a research paper that proposes a new efficient multi-keyword ranked search (EMRS) scheme over encrypted mobile cloud data through blind storage. The key contributions are:
1) It introduces a relevance score to the searchable encryption scheme to enable multi-keyword ranked search over encrypted documents. It also constructs an efficient index to improve search efficiency.
2) It modifies the blind storage system to solve the trapdoor unlinkability problem and conceal the search user's access pattern from the cloud server.
3) It provides security analysis showing the EMRS can achieve confidentiality of documents/index, trapdoor privacy, unlinkability, and conceal access patterns while experiments show it improves efficiency over existing proposals.
Privacy preserving multi-keyword ranked search over encrypted cloud data 2 - Swathi Rampur
This document proposes and defines the problem of privacy-preserving multi-keyword ranked search over encrypted cloud data (MRSE). It establishes strict privacy requirements for such a system, including data privacy, index privacy, keyword privacy and trapdoor privacy. It presents the MRSE framework with four algorithms: Setup, BuildIndex, Trapdoor and Query. The Query algorithm allows cloud servers to perform a ranked search on encrypted indexes and return similarity-ranked results, while preserving privacy.
Privacy preserving multi-keyword ranked search over encrypted cloud data - IGEEKS TECHNOLOGIES
This document proposes a system called privacy-preserving multi-keyword ranked search over encrypted cloud data (MRSE). Existing searchable encryption systems only support single-keyword or boolean keyword search without result ranking. The proposed MRSE system allows a user to search for multiple keywords and returns documents ranked by relevance. It establishes privacy requirements and uses an efficient "coordinate matching" semantic to quantify document similarity based on keyword matches. The system architecture includes modules for data owners to encrypt and upload files, for users to search and download encrypted files, and for ranking search results.
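The "coordinate matching" semantic mentioned above is, in plaintext form, just an inner product of binary keyword vectors: the product counts how many query keywords a document contains. A minimal sketch (in MRSE the vectors are split and blinded with secret matrices so the server computes this product without seeing either vector; the dictionary below is an invented example):

```python
DICTIONARY = ["cloud", "encrypt", "index", "rank", "search", "trapdoor"]

def to_vector(keywords):
    """Binary vector over the keyword dictionary."""
    return [1 if w in keywords else 0 for w in DICTIONARY]

def coordinate_match(doc_vec, query_vec):
    """Inner product = number of query keywords the document contains."""
    return sum(d * q for d, q in zip(doc_vec, query_vec))

doc = to_vector({"cloud", "search", "rank"})
query = to_vector({"search", "rank", "trapdoor"})
assert coordinate_match(doc, query) == 2   # "search" and "rank" match
```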
The document proposes a system for multi-keyword ranked search over encrypted cloud data while preserving privacy. It addresses limitations in previous systems that allowed single keyword search or did not consider privacy. The proposed system uses asymmetric key encryption, a block-max index, and dynamic key generation to allow efficient retrieval of relevant encrypted data from the cloud without security breaches. It involves three parts: (1) a server that encrypts and stores data in the cloud and sends decryption keys; (2) a cloud server that handles search requests, ranks results, and responds; and (3) users that request data from the cloud server.
Fuzzy Keyword Search Over Encrypted Data in Cloud Computing - IJERA Editor
As cloud computing becomes prevalent, more and more sensitive information is being centralized into the cloud. To protect data privacy, sensitive data usually has to be encrypted before outsourcing, which makes effective data utilization a very challenging task. Although traditional searchable encryption schemes allow a user to securely search over encrypted data through keywords and selectively retrieve files of interest, these techniques support only exact keyword search. This significant drawback makes existing techniques unsuitable in cloud computing, as it greatly affects system usability, rendering user searching experiences very frustrating and system efficiency very low. In this paper, for the first time, we formalize and solve the problem of effective fuzzy keyword search over encrypted cloud data while maintaining keyword privacy. In our solution, we exploit edit distance to quantify keyword similarity and develop a new advanced technique for constructing fuzzy keyword sets that greatly reduces the storage and representation overheads. In this way, we show that our proposed solution is secure and privacy-preserving while realizing the goal of fuzzy keyword search.
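The storage reduction comes from representing each keyword by a wildcard set rather than enumerating every concrete misspelling. A minimal sketch for edit distance 1 (the construction below follows the wildcard idea but simplifies the papers' exact set definitions; example words are invented):

```python
def fuzzy_set(word: str):
    """Wildcard-based fuzzy keyword set for edit distance 1: each edit
    position is collapsed to '*', giving O(len) entries instead of the
    ~26*len concrete one-edit variants of the word."""
    variants = {word}
    for i in range(len(word)):
        variants.add(word[:i] + "*" + word[i + 1:])  # substitution at i
        variants.add(word[:i] + "*" + word[i:])      # insertion before i
    variants.add(word + "*")                         # insertion at the end
    # deletions are covered symmetrically by the other word's insertion forms
    return variants

# two words within edit distance 1 share at least one wildcard form,
# so a misspelled query still finds the stored keyword
assert fuzzy_set("coffee") & fuzzy_set("cofee")
```

Matching a query then reduces to checking whether the query's wildcard set intersects a stored keyword's set, which works for any pair of words within the chosen edit distance.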
IRJET - Compound Keyword Search of Encrypted Cloud Data by using Semantic Scheme - IRJET Journal
This document proposes a semantic-based compound keyword search (SCKS) scheme for encrypted cloud data. SCKS aims to address limitations of existing approaches by enabling semantic-based, multi-keyword and ranked searches without relying on a predefined global dictionary. It introduces a compound concept semantic similarity calculation method and combines it with Locality-Sensitive Hashing and a secure k-Nearest Neighbor scheme to map keyword vectors to indexes while considering frequency. Experimental results on real-world datasets show SCKS has low computation overhead and higher search accuracy than existing schemes.
Implementing Proof of Retrievability for Multiple Replicas of Data File using... - IRJET Journal
1. The document proposes a protocol for implementing proof of retrievability for multiple replicas of data files stored on cloud servers using a NoSQL database. It aims to verify the integrity of data when both the cloud storage server and third party auditor cannot fully be trusted.
2. The proposed system replaces a relational database with a NoSQL database to improve data operation performance and scaling for large datasets. It designs a protocol where the third party auditor generates signatures for dataset blocks and integrity proofs, which are then verified by the user to check the trustworthiness of the third party auditor.
3. Experimental results show that the times required for operations like tag generation and challenge-proof-verify are lower than those of previous solutions.
This document describes a proposed system for enabling effective yet privacy-preserving fuzzy keyword search in cloud computing. It formalizes the problem of fuzzy keyword search over encrypted cloud data for the first time. The system uses edit distance to quantify keyword similarity and develops two techniques - wildcard-based and gram-based - to construct efficient fuzzy keyword sets. It then proposes a symbol-based trie-traverse searching scheme to match keywords and retrieve files. Security analysis shows the solution preserves privacy while allowing fuzzy searches.
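Edit distance, the similarity measure both fuzzy-search proposals above rely on, is the classic dynamic-programming Levenshtein computation (the trie-traverse search scheme itself is beyond this sketch):

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming:
    after processing a[:i], prev[j] holds the distance
    between a[:i] and b[:j]."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

assert edit_distance("castle", "casle") == 1   # one deletion
assert edit_distance("search", "search") == 0
```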
IRJET - K-Gram based Composite Secret Sign Search Over Encrypted Cloud In... - IRJET Journal
This document presents a proposed system called K-Gram based Composite Secret Sign Search over Encrypted Cloud Information. The system aims to provide a qualified search scheme for cloud storage data using multi-keyword search. It uses fuzzy keyword sets to account for spelling errors when searching encrypted file names in the cloud server. If a search keyword matches, related fuzzy keywords are used to search file lists. Experimental results demonstrate that the proposed solution can greatly boost privacy protection, scalability and query processing time efficiency over existing methods. The system considers keyword frequency when ranking search results to better protect user data privacy. It consists of modules for user login, file upload, frequent keyword search, similarity search, linear search, email alerts and file downloading.
This document contains information about several M.Phil Computer Science Cloud Computing projects written in C# and NS2. It provides the titles, languages, links, and short abstracts for each project. The projects focus on topics related to cloud computing including secure cloud storage, data integrity verification, privacy-preserving auditing, and keyword search over encrypted cloud data.
This document proposes and defines the problem of privacy-preserving multi-keyword ranked search over encrypted cloud data (MRSE). It establishes strict privacy requirements for such a system, including data privacy, index privacy, keyword privacy and trapdoor privacy. It presents the MRSE framework with four algorithms: Setup, BuildIndex, Trapdoor and Query. The Query algorithm allows cloud servers to perform a ranked search on encrypted indexes and return similarity-ranked results, while preserving privacy.
IRJET-Auditing and Resisting Key Exposure on Cloud StorageIRJET Journal
1. The document discusses auditing and resisting key exposure in cloud storage. It proposes a new framework called an auditing protocol with key-exposure resilience that allows integrity of stored data to still be verified even if the client's current secret key is exposed.
2. It formalizes the definition and security model for such a protocol and proposes an efficient practical construction. The security proof and asymptotic performance analysis show the proposed protocol is secure and efficient.
3. Key techniques used include periodic key updates, homomorphic linear authenticators, and a novel authenticator construction to boost forward security and provide proof of retrievability with the current design.
International Journal of Computational Engineering Research(IJCER)ijceronline
International Journal of Computational Engineering Research(IJCER) is an intentional online Journal in English monthly publishing journal. This Journal publish original research work that contributes significantly to further the scientific knowledge in engineering and Technology.
Classifying confidential data using SVM for efficient cloud query processingTELKOMNIKA JOURNAL
Nowadays, organizations are widely using a cloud database engine from the cloud service
providers. Privacy still is the main concern for these organizations where every organization is strictly
looking forward more secure environment for their own data. Several studies have proposed different types
of encryption methods to protect the data over the cloud. However, the daily transactions represented by
queries for such databases makes encryption is inefficient solution. Therefore, recent studies presented
a mechanism for classifying the data prior to migrate into the cloud. This would reduce the need of
encryption which enhances the efficiency. Yet, most of the classification methods used in the literature
were based on string-based matching approach. Such approach suffers of the exact match of terms where
the partial matching would not be considered. This paper aims to take the advantage of N-gram
representation along with Support Vector Machine classification. A real-time data will used in
the experiment. After conducting the classification, the Advanced Encryption Standard algorithm will be
used to encrypt the confidential data. Results showed that the proposed method outperformed the baseline
encryption method. This emphasizes the usefulness of using the machine learning techniques for
the process of classifying the data based on confidentiality.
IRJET- Privacy Preserving Keyword Search over Cloud DataIRJET Journal
The document proposes a scheme for secure ranked keyword search over encrypted cloud data. It discusses encrypting data before outsourcing it to the cloud for confidentiality. An index of keyword sets is stored on a local trusted server, while encrypted data files are stored on an untrusted cloud server. When a user searches for keywords, relevant files are ranked based on criteria like term frequency and file length. The top results are then retrieved from the cloud without revealing sensitive information to unauthorized parties. The system aims to enable efficient yet private keyword searches on large amounts of outsourced encrypted data.
Privacy Preserving in Cloud Using Distinctive Elliptic Curve Cryptosystem (DECC)ElavarasaN GanesaN
The document proposes a Distinctive Elliptic Curve Cryptosystem (DECC) for privacy preserving in cloud computing. DECC uses smaller keys compared to existing cryptographic algorithms. It performs key generation, encryption, and decryption faster than Triple DES, RDP, and ECC based on simulations with varying file sizes from 128MB to 1GB. DECC has lower average relative error, consumes less time for overall processing, and takes less time for anonymization compared to the other algorithms based on the simulation results. The proposed DECC aims to improve privacy and efficiency for data security in cloud computing.
1) The document proposes an optimized and secured semantic-based ranking approach for keyword search over encrypted cloud data. It aims to improve search accuracy by considering keyword semantics and different keyword forms.
2) An index is created from unencrypted files containing keyword-file mappings and encrypted relevance scores. Files are encrypted before outsourcing to the cloud.
3) The approach analyzes semantics between keywords, performs stemming, and calculates relevance scores. It encrypts the index and files before outsourcing to the cloud to protect data privacy during searches.
Role Based Access Control Model (RBACM) With Efficient Genetic Algorithm (GA)...dbpublications
This document summarizes a research paper that proposes a new cloud data security model using role-based access control, encryption, and genetic algorithms. The model uses Token Based Data Security Algorithm (TBDSA) combined with RSA and AES encryption to securely encode, encrypt, and forward cloud data. A genetic algorithm is used to generate encrypted passwords for cloud users. Role managers are assigned to control user roles and data access. The aim is to integrate encoding, encrypting, and forwarding for secure cloud storage while minimizing processing time.
This document proposes a system for enabling secure and efficient ranked keyword search over outsourced cloud data. It summarizes that existing searchable encryption techniques only support basic Boolean search and do not consider relevance ranking. The proposed system explores using relevance scores from information retrieval to build a searchable index and develops a one-to-many order-preserving mapping technique to protect sensitive score information, allowing efficient server-side ranking without compromising keyword privacy. The system provides ranked search results while maintaining strong security guarantees.
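The one-to-many order-preserving mapping mentioned above can be sketched by giving every plaintext score its own disjoint ciphertext range and picking a fresh random value from that range on each encryption. This is a toy illustration, not the paper's construction; the fixed bucket size is an invented simplification.

```python
# Sketch: order-preserving, one-to-many score encryption. Higher scores land
# in strictly higher buckets (order survives), while repeated encryptions of
# the same score usually yield different ciphertexts (frequency hiding).

import random

BUCKET = 1000  # ciphertext values reserved per plaintext score (illustrative)

def opm_encrypt(score, rng=random):
    """Map an integer score to a random value in [score*BUCKET, (score+1)*BUCKET)."""
    return score * BUCKET + rng.randrange(BUCKET)

a, b = opm_encrypt(3), opm_encrypt(7)
assert a < b  # order of scores 3 < 7 survives encryption
print(opm_encrypt(5), opm_encrypt(5))  # same score, usually different ciphertexts
```

The server can rank by comparing ciphertexts directly, which is what enables efficient server-side ranking without seeing the underlying relevance scores.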
Enabling efficient multi keyword ranked search over encrypted mobile cloud da...redpel dot com
This document summarizes a research paper that proposes a new efficient multi-keyword ranked search (EMRS) scheme over encrypted mobile cloud data through blind storage. The key contributions are:
1) It introduces a relevance score to the searchable encryption scheme to enable multi-keyword ranked search over encrypted documents. It also constructs an efficient index to improve search efficiency.
2) It modifies the blind storage system to solve the trapdoor unlinkability problem and conceal the search user's access pattern from the cloud server.
3) It provides security analysis showing the EMRS can achieve confidentiality of documents/index, trapdoor privacy, unlinkability, and conceal access patterns while experiments show it improves efficiency over existing proposals.
Privacy preserving multi-keyword ranked search over encrypted cloud data 2Swathi Rampur
This document proposes and defines the problem of privacy-preserving multi-keyword ranked search over encrypted cloud data (MRSE). It establishes strict privacy requirements for such a system, including data privacy, index privacy, keyword privacy and trapdoor privacy. It presents the MRSE framework with four algorithms: Setup, BuildIndex, Trapdoor and Query. The Query algorithm allows cloud servers to perform a ranked search on encrypted indexes and return similarity-ranked results, while preserving privacy.
Privacy preserving multi-keyword ranked search over encrypted cloud dataIGEEKS TECHNOLOGIES
This document proposes a system called privacy-preserving multi-keyword ranked search over encrypted cloud data (MRSE). Existing searchable encryption systems only support single-keyword or boolean keyword search without result ranking. The proposed MRSE system allows a user to search for multiple keywords and returns documents ranked by relevance. It establishes privacy requirements and uses an efficient "coordinate matching" semantic to quantify document similarity based on keyword matches. The system architecture includes modules for data owners to encrypt and upload files, for users to search and download encrypted files, and for ranking search results.
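The "coordinate matching" similarity used by MRSE can be sketched as counting how many query keywords a document contains. The encrypted inner-product machinery is omitted here; this Python fragment shows only the similarity measure itself, with invented document names.

```python
# Sketch: coordinate matching = number of query keywords present in a
# document's keyword set; documents are ranked by that count.

def coordinate_match(query_keywords, doc_keywords):
    """Count of query keywords the document contains."""
    return len(set(query_keywords) & set(doc_keywords))

docs = {"d1": {"cloud", "search", "privacy"},
        "d2": {"cloud", "storage"},
        "d3": {"network", "routing"}}
query = {"cloud", "privacy"}
ranked = sorted(docs, key=lambda d: coordinate_match(query, docs[d]), reverse=True)
print(ranked)  # d1 matches both keywords, d2 one, d3 none
```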
The document proposes a system for multi-keyword ranked search over encrypted cloud data while preserving privacy. It addresses limitations in previous systems that allowed single keyword search or did not consider privacy. The proposed system uses asymmetric key encryption, a block-max index, and dynamic key generation to allow efficient retrieval of relevant encrypted data from the cloud without security breaches. It involves three parts: (1) a server that encrypts and stores data in the cloud and sends decryption keys; (2) a cloud server that handles search requests, ranks results, and responds; and (3) users that request data from the cloud server.
Fuzzy Keyword Search Over Encrypted Data in Cloud ComputingIJERA Editor
As cloud computing becomes prevalent, more and more sensitive information is being centralized in the cloud. To protect data privacy, sensitive data usually have to be encrypted before outsourcing, which makes effective data utilization very challenging. Although traditional searchable encryption schemes allow a user to securely search over encrypted data through keywords and selectively retrieve files of interest, these techniques support only exact keyword search. This significant drawback makes existing techniques unsuitable for cloud computing, as it greatly affects system usability, rendering user search experiences frustrating and system efficiency low. In this paper, for the first time, we formalize and solve the problem of effective fuzzy keyword search over encrypted cloud data while maintaining keyword privacy. In our solution, we exploit edit distance to quantify keyword similarity and develop an advanced technique for constructing fuzzy keyword sets that greatly reduces the storage and representation overheads. We show that the proposed solution is secure and privacy-preserving while realizing the goal of fuzzy keyword search.
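Edit distance, the similarity measure this solution relies on, has a standard dynamic-programming form. The sketch below is a generic Levenshtein implementation in Python, not code from the paper; a fuzzy keyword set for distance d would then contain every keyword within d edits of a given keyword.

```python
# Sketch: Levenshtein edit distance via dynamic programming, keeping only
# the previous row of the DP table.

def edit_distance(a, b):
    """Minimum number of insertions, deletions, substitutions turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

print(edit_distance("cloud", "clowd"))  # one substitution -> 1
```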
IRJET- Compound Keyword Search of Encrypted Cloud Data by using Semantic SchemeIRJET Journal
This document proposes a semantic-based compound keyword search (SCKS) scheme for encrypted cloud data. SCKS aims to address limitations of existing approaches by enabling semantic-based, multi-keyword and ranked searches without relying on a predefined global dictionary. It introduces a compound concept semantic similarity calculation method and combines it with Locality-Sensitive Hashing and a secure k-Nearest Neighbor scheme to map keyword vectors to indexes while considering frequency. Experimental results on real-world datasets show SCKS has low computation overhead and higher search accuracy than existing schemes.
Implementing Proof of Retrievability for Multiple Replica of Data File using...IRJET Journal
1. The document proposes a protocol for implementing proof of retrievability for multiple replicas of data files stored on cloud servers using a NoSQL database. It aims to verify the integrity of data when both the cloud storage server and third party auditor cannot fully be trusted.
2. The proposed system replaces a relational database with a NoSQL database to improve data operation performance and scaling for large datasets. It designs a protocol where the third party auditor generates signatures for dataset blocks and integrity proofs, which are then verified by the user to check the trustworthiness of the third party auditor.
3. Experimental results show that the time required for operations like tag generation and challenge-proof-verify are lower than previous solutions,
This document describes a proposed system for enabling effective yet privacy-preserving fuzzy keyword search in cloud computing. It formalizes the problem of fuzzy keyword search over encrypted cloud data for the first time. The system uses edit distance to quantify keyword similarity and develops two techniques - wildcard-based and gram-based - to construct efficient fuzzy keyword sets. It then proposes a symbol-based trie-traverse searching scheme to match keywords and retrieve files. Security analysis shows the solution preserves privacy while allowing fuzzy searches.
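The wildcard-based technique mentioned above can be sketched for edit distance 1: each edit position is replaced by a `*`, so many concrete misspellings collapse into one small set. This is an illustrative Python fragment under that distance-1 assumption, not the paper's full construction (which also covers larger distances and the trie-traverse search).

```python
# Sketch: distance-1 wildcard fuzzy keyword set. A substitution slot keeps
# the word length; an insertion slot lengthens it by one. Two words within
# one edit of each other share at least one wildcard form.

def wildcard_set(word):
    """Fuzzy keyword set of `word` for edit distance 1 (wildcard form)."""
    variants = {word}
    for i in range(len(word)):
        variants.add(word[:i] + "*" + word[i + 1:])  # substitution slot
    for i in range(len(word) + 1):
        variants.add(word[:i] + "*" + word[i:])      # insertion slot
    return variants

s = wildcard_set("cat")
print(sorted(s))
# A misspelling like "car" matches because both sets share the form "ca*".
assert "ca*" in s and "ca*" in wildcard_set("car")
```

Deletions are covered too: the insertion forms of the shorter word ("ct" yields "c*t") coincide with substitution forms of the longer one, which is why the set stays small compared with enumerating every concrete variant.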
IRJET - K-Gram based Composite Secret Sign Search Over Encrypted Cloud In...IRJET Journal
This document presents a proposed system called K-Gram based Composite Secret Sign Search over Encrypted Cloud Information. The system aims to provide a qualified search scheme for cloud storage data using multi-keyword search. It uses fuzzy keyword sets to account for spelling errors when searching encrypted file names in the cloud server. If a search keyword matches, related fuzzy keywords are used to search file lists. Experimental results demonstrate that the proposed solution can greatly boost privacy protection, scalability and query processing time efficiency over existing methods. The system considers keyword frequency when ranking search results to better protect user data privacy. It consists of modules for user login, file upload, frequent keyword search, similarity search, linear search, email alerts and file downloading.
This document contains information about several M.Phil Computer Science Cloud Computing projects written in C# and NS2. It provides the titles, languages, links, and short abstracts for each project. The projects focus on topics related to cloud computing including secure cloud storage, data integrity verification, privacy-preserving auditing, and keyword search over encrypted cloud data.
Literature Survey on Building Confidential and Efficient Query Processing Usi...paperpublications3
Abstract: Hosting data query services on deployed cloud computing infrastructure increases scalability and performance at low cost. However, some data owners may be reluctant to store their data in the cloud environment: data confidentiality and query-processing privacy must be guaranteed by the cloud service providers. A secure query service should provide highly efficient query processing while reducing the in-house workload. In this paper we propose the RASP data perturbation technique, which combines several objectives: random noise injection, dimensionality expansion, efficient encryption, and random projection; the RASP methodology also preserves multidimensional ranges. The kNN-R algorithm works with RASP range queries to process kNN queries. Experiments are carried out under a realistic security and threat model to evaluate the efficiency and security of the approach.
A Survey: Hybrid Job-Driven Meta Data Scheduling for Data storage with Intern...dbpublications
Cloud computing is a promising computing model that enables convenient, on-demand network access to a shared pool of configurable computing resources. The first offered cloud service was moving data into the cloud: data owners let cloud service providers host their data on cloud servers, and data consumers access the data from those servers. This new paradigm of data storage service also introduces new security challenges, because data owners and data servers have different identities and different business interests; an independent auditing service is therefore required to make sure the data is correctly hosted in the cloud. The scheduling goal is to improve data locality for both map tasks and reduce tasks, avoid job starvation, and improve job execution performance. Two variations are further introduced to separately achieve better map-data locality and faster task assignment. We conduct extensive experiments to evaluate and compare the two variations with current scheduling algorithms. The results show that the two variations outperform the other tested algorithms in terms of map-data locality, reduce-data locality, and network overhead without incurring significant overhead. In addition, the two variations are separately suitable for different MapReduce workload scenarios and provide the best job performance among all tested algorithms for cloud data storage.
This document provides 6 IEEE project summaries in the domain of Java and cloud computing/data mining. The summaries are:
1. A decentralized access control scheme for secure cloud data storage that supports anonymous authentication.
2. A performance analysis framework for distributed file systems that qualitatively and quantitatively evaluates performance.
3. Approaches to guarantee trustworthy transactions on cloud servers by enforcing policy consistency constraints.
4. A scalable MapReduce approach for anonymizing large datasets to satisfy privacy requirements like k-anonymity.
5. A resource allocation scheme for a self-organizing cloud that achieves maximized utilization and optimal execution efficiency.
6. An attribute-based encryption framework for flexible
Privacy preserving public auditing for secured cloud storagedbpublications
As cloud computing technology has developed over the last decade, outsourcing data to cloud storage services has become an attractive trend, sparing the effort of heavy data maintenance and management. Nevertheless, since outsourced cloud storage is not fully trustworthy, it raises security concerns about how to realize data deduplication in the cloud while also achieving integrity auditing. In this work, we study the problem of integrity auditing and secure deduplication of cloud data. Aiming at achieving both data integrity and deduplication, we propose two secure systems, SecCloud and SecCloud+. SecCloud introduces an auditing entity that maintains a MapReduce cloud, which helps clients generate data tags before uploading and audits the integrity of data stored in the cloud. Compared with previous work, the computation performed by the user in SecCloud is greatly reduced during the file uploading and auditing phases. SecCloud+ is motivated by the fact that customers want to encrypt their data before uploading, and enables integrity auditing and secure deduplication on encrypted data.
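The tag-based deduplication step can be sketched as hashing the file content and asking the store whether that tag already exists, so identical uploads are stored once. This Python fragment is an illustration only: SecCloud's MapReduce auditing entity and proof protocol are omitted, and the class and method names are invented.

```python
# Sketch: content-derived file tags for deduplication. Identical bytes hash
# to the same tag, so the second upload of the same content is a no-op.

import hashlib

class DedupStore:
    def __init__(self):
        self.blobs = {}                            # tag -> content

    def upload(self, data: bytes):
        tag = hashlib.sha256(data).hexdigest()     # content-derived file tag
        if tag in self.blobs:
            return tag, False                      # duplicate: nothing stored
        self.blobs[tag] = data
        return tag, True                           # new content stored

store = DedupStore()
t1, stored1 = store.upload(b"report v1")
t2, stored2 = store.upload(b"report v1")           # same bytes, deduplicated
print(t1 == t2, stored1, stored2)                  # True True False
```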
IRJET- A Survey on Searching of Keyword on Encrypted Data in Cloud using ...IRJET Journal
The document describes a survey on searching for keywords on encrypted data stored in the cloud using an access structure. It discusses how cloud computing allows for large-scale data outsourcing but encrypted data is difficult to search. The paper proposes a method to search encrypted data using access structures expressed as Boolean predicates. Key algorithms discussed are ranked serial binary search to reduce search time and AES encryption to encrypt data and avoid duplication. The goal is to enable efficient keyword searches on encrypted cloud data.
Centralized Data Verification Scheme for Encrypted Cloud Data ServicesEditor IJMTER
A cloud environment supports data sharing between multiple users. Data integrity can be violated by hardware/software failures and human errors. Data owners and public verifiers are involved to efficiently audit cloud data integrity without retrieving the entire data set from the cloud server. File and block signatures are used in the integrity verification process.
The "One Ring to Rule Them All" (Oruta) scheme is used for the privacy-preserving public auditing process. In Oruta, homomorphic authenticators are constructed using ring signatures, which compute the verification metadata needed to audit the correctness of shared data. The identity of the signer of each block in shared data is kept private from public verifiers. The homomorphic authenticable ring signature (HARS) scheme is applied to provide identity privacy with blockless verification. A batch auditing mechanism supports performing multiple auditing tasks simultaneously. Oruta is compatible with random masking to preserve data privacy from public verifiers. Dynamic data management is handled with index hash tables. However, traceability is not supported in the Oruta scheme, the data dynamism sequence is not managed by the system, and the scheme incurs high computational overhead.
The proposed system is designed to perform public data verification with privacy. Traceability features are provided alongside identity privacy: the group manager or data owner can be allowed to reveal the identity of the signer based on verification metadata. A data version management mechanism is integrated into the system.
DISTRIBUTED SCHEME TO AUTHENTICATE DATA STORAGE SECURITY IN CLOUD COMPUTINGijcsit
Cloud computing is a revolution in the current generation of IT enterprise. It displaces databases and application software to large data centres, where the management of services and data may not be predictable, whereas conventional IT solutions keep services under proper logical, physical, and personnel controls. This shift, however, brings security challenges that have not been well understood. This paper concentrates on cloud data storage security, which has always been an important aspect of quality of service (QoS). We design and simulate an adaptable and efficient scheme to guarantee the correctness of user data stored in the cloud, with some prominent features. A homomorphic token is used for distributed verification of erasure-coded data; with this scheme we can identify misbehaving servers. Unlike past work, our scheme supports effective and secure dynamic operations on data blocks, such as data insertion, deletion, and modification. The security and performance analysis shows that the proposed scheme is highly resilient against malicious data modification, complex failures, and server colluding attacks.
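At its core, the precomputed-token idea above is a challenge/response: before outsourcing, the owner derives keyed digests over sampled blocks; later, a correct server must be able to reproduce data consistent with those digests. The Python sketch below uses an HMAC over sampled block indices as a stand-in token; it illustrates the challenge/response shape only, not the paper's erasure-coded homomorphic construction, and all keys and block contents are invented.

```python
# Sketch: sampling-based storage verification. The verifier precomputes a
# keyed token over chosen blocks, then later recomputes it from the data
# the server claims to hold; any tampered sampled block changes the token.

import hashlib
import hmac

def make_token(key, blocks, indices):
    """Keyed digest over the blocks at the sampled indices."""
    mac = hmac.new(key, digestmod=hashlib.sha256)
    for i in indices:
        mac.update(i.to_bytes(4, "big") + blocks[i])
    return mac.digest()

key = b"owner-secret"
blocks = [b"blk0", b"blk1", b"blk2", b"blk3"]
token = make_token(key, blocks, [1, 3])            # computed before outsourcing

honest = make_token(key, blocks, [1, 3])           # server stored data intact
tampered = blocks[:]
tampered[3] = b"XXXX"                              # server corrupted block 3
cheat = make_token(key, tampered, [1, 3])
print(honest == token, cheat == token)             # True False
```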
Flaw less coding and authentication of user data using multiple cloudsIRJET Journal
This document discusses secure data storage in multiple cloud storage providers. It proposes a method for users to store encrypted data across multiple cloud storage providers using splitting and merging concepts. Private keys are generated during file access using a pseudo key generator and encrypted using 3DES for transmission. The method aims to increase data availability, confidentiality and reduce costs by distributing data across multiple cloud providers. It also discusses using image compression with reversible data hiding techniques to provide data confidentiality when storing images in the cloud.
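The splitting and merging concept can be sketched as striping a file across several providers and reassembling it on read. This Python fragment is illustrative only: the pseudo key generator, 3DES transport encryption, and image-hiding techniques from the abstract are omitted, and the provider names are invented.

```python
# Sketch: stripe a file into near-equal chunks across providers, so no
# single provider holds the whole payload, then merge the stripes on read.

def split(data: bytes, n: int):
    """Stripe data into n chunks of near-equal size."""
    size, rem = divmod(len(data), n)
    chunks, pos = [], 0
    for i in range(n):
        end = pos + size + (1 if i < rem else 0)
        chunks.append(data[pos:end])
        pos = end
    return chunks

def merge(chunks):
    return b"".join(chunks)

providers = {}
for name, chunk in zip(["cloudA", "cloudB", "cloudC"], split(b"secret-payload", 3)):
    providers[name] = chunk               # each provider sees only its stripe

print(merge([providers[p] for p in ["cloudA", "cloudB", "cloudC"]]))
```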
A Secure and Dynamic Multi-keyword Ranked Search Scheme over Encrypted Cloud ...1crore projects
IEEE PROJECTS 2015
1 Crore Projects is a leading guide for IEEE projects and a provider of real-time project work.
It has provided guidance to thousands of students and helped them benefit across all technology training.
Dot Net
DOTNET Project Domain list 2015
1. IEEE based on datamining and knowledge engineering
2. IEEE based on mobile computing
3. IEEE based on networking
4. IEEE based on Image processing
5. IEEE based on Multimedia
6. IEEE based on Network security
7. IEEE based on parallel and distributed systems
Java Project Domain list 2015
1. IEEE based on datamining and knowledge engineering
2. IEEE based on mobile computing
3. IEEE based on networking
4. IEEE based on Image processing
5. IEEE based on Multimedia
6. IEEE based on Network security
7. IEEE based on parallel and distributed systems
ECE IEEE Projects 2015
1. Matlab project
2. Ns2 project
3. Embedded project
4. Robotics project
Eligibility
Final Year students of
1. BSc (C.S)
2. BCA/B.E(C.S)
3. B.Tech IT
4. BE (C.S)
5. MSc (C.S)
6. MSc (IT)
7. MCA
8. MS (IT)
9. ME(ALL)
10. BE(ECE)(EEE)(E&I)
TECHNOLOGY USED AND FOR TRAINING IN
1. DOT NET
2. C sharp
3. ASP
4. VB
5. SQL SERVER
6. JAVA
7. J2EE
8. STRINGS
9. ORACLE
10. VB dotNET
11. EMBEDDED
12. MAT LAB
13. LAB VIEW
14. Multi Sim
CONTACT US
1 CRORE PROJECTS
Door No: 214/215,2nd Floor,
No. 172, Raahat Plaza, (Shopping Mall) ,Arcot Road, Vadapalani, Chennai,
Tamil Nadu, INDIA - 600 026
Email id: 1croreprojects@gmail.com
website:1croreprojects.com
Phone : +91 97518 00789 / +91 72999 51536
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to precisely delineate tumor boundaries from magnetic resonance imaging (MRI) scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating the state-of-the-art DeepLabv3+ architecture with a ResNet18 backbone. The model is rigorously trained and evaluated, exhibiting remarkable performance metrics, including an impressive global accuracy of 99.286%, a class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of the proposed model. These findings underscore the model's competence in precise brain tumor localization, underscoring its potential to revolutionize medical image analysis and enhance healthcare outcomes. This research paves the way for future exploration and optimization of advanced CNN models in medical imaging, emphasizing addressing false positives and resource efficiency.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Cloud java titles adrit solutions
ADRIT SOLUTIONS
Ph: 9845252155 ; 7676768124 Email: adritsolutions@gmail.com
JAVA IEEE 2016-15 Cloud Computing Projects
1. IEEE 2016: Secure Optimization Computation Outsourcing in Cloud Computing: A Case
Study of Linear Programming.
Abstract: Cloud computing enables an economically promising paradigm of computation
outsourcing. However, how to protect customer’s confidential data processed and generated
during the computation is becoming the major security concern. Focusing on engineering
computing and optimization tasks, this paper investigates secure outsourcing of widely
applicable linear programming (LP) computations. Our mechanism design explicitly decomposes
LP computation outsourcing into public LP solvers running on the cloud and private LP
parameters owned by the customer. The resulting flexibility allows us to explore appropriate
security/efficiency tradeoff via higher-level abstraction of LP computation than the general
circuit representation. Specifically, by formulating private LP problem as a set of
matrices/vectors, we develop efficient privacy-preserving problem transformation techniques,
which allow customers to transform the original LP into some random one while protecting
sensitive input/output information. To validate the computation result, we further explore the
fundamental duality theorem of LP and derive the necessary and sufficient conditions that
correct results must satisfy. Such result verification mechanism is very efficient and incurs close-
to-zero additional cost on both cloud server and customers. Extensive security analysis and
experiment results show the immediate practicability of our mechanism design.
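The transformation idea above — hide a private problem behind random invertible matrices, let the cloud solve the disguised version, then recover and verify the answer locally — can be sketched on a 2x2 linear system. This is a toy Python stand-in for the paper's full LP machinery: the mask matrix is fixed rather than random, and the duality-based verification is replaced by a simple residual check.

```python
# Sketch: secure outsourcing by problem transformation. The cloud sees only
# the masked system (A @ Q) y = b; the customer recovers x = Q y and checks
# A x = b locally at near-zero cost.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_vec(A, x):
    return [sum(A[i][k] * x[k] for k in range(2)) for i in range(2)]

def solve2(A, b):
    """Direct 2x2 solve via Cramer's rule (plays the role of the cloud solver)."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

A, b = [[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]  # private problem: A x = b
Q = [[1.0, 2.0], [1.0, 3.0]]                   # customer's invertible mask

A_masked = mat_mul(A, Q)      # cloud receives A*Q, never A itself
y = solve2(A_masked, b)       # cloud solves the disguised system (A Q) y = b
x = mat_vec(Q, y)             # customer recovers x = Q y
residual = [abs(v - w) for v, w in zip(mat_vec(A, x), b)]
print(x, max(residual) < 1e-9)  # cheap local verification of the result
```

The verification step mirrors the abstract's point: checking that a returned answer satisfies the original constraints is far cheaper than solving the problem, so the customer pays close to zero extra cost.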
2. IEEE 2016: Ensures Dynamic access and Secure E-Governance system in Clouds
Services – EDSE
Abstract: The e-governance process helps the public to find information and available data themselves, rather than depending on physical guidance. E-governance experience has accumulated over the past decade; hence there is a need to explore new e-governance concepts with advanced technologies. These systems are now exposed to a wide range of threats while handling information. This paper therefore designs an efficient system for ensuring security and dynamic operation: remote integrity checking with secure dynamic operations is designed and implemented in an e-governance environment. Data is stored on the server using dynamic data operations with the proposed method, which enables the user to access the data for further use. The system performs an authentication process to prevent data loss and ensures security with a reliability method. An efficient distributed storage auditing mechanism is planned which overcomes the limitations of handling data loss. Content access is made easy by means of cloud computing, using an innovative method during information retrieval. Ensuring data security in this service enforces error localization and easy identification of misbehaving servers. Availability, confidentiality, and integrity are the key factors of security. Data in a cloud service is dynamic by nature; hence this process aims to perform operations with reduced computational rate, space, and time consumption, and also to ensure trust-based secure access control.
3. IEEE 2016: On Traffic-Aware Partition and Aggregation in MapReduce for Big Data
Applications.
Abstract: The MapReduce programming model simplifies large-scale data processing on
commodity clusters by exploiting parallel map tasks and reduce tasks. Although many efforts
have been made to improve the performance of MapReduce jobs, they ignore the
network traffic generated in the shuffle phase, which plays a critical role in performance
enhancement. Traditionally, a hash function is used to partition intermediate data among reduce
tasks, which, however, is not traffic-efficient because network topology and data size associated
with each key are not taken into consideration. In this paper, we study to reduce
network traffic cost for a MapReduce job by designing a novel intermediate data partition
scheme. Furthermore, we jointly consider the aggregator placement problem, where each
aggregator can reduce merged traffic from multiple map tasks. A decomposition-based
distributed algorithm is proposed to deal with the large-scale optimization problem
for big data applications, and an online algorithm is also designed to
adjust data partition and aggregation in a dynamic manner. Finally, extensive simulation results
demonstrate that our proposals can significantly reduce network traffic cost under both offline
and online cases.
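A minimal sketch of why a traffic-aware partitioner beats a plain hash partitioner. The greedy placement rule and the toy two-node topology are assumptions for illustration; the paper's actual decomposition-based and online algorithms are more involved.

```python
# map_output[key][node] = bytes of intermediate data for `key` produced
# on `node`; moving a byte off its node costs 1 in this toy topology.
map_output = {
    "url_a": {"n1": 90, "n2": 10},
    "url_b": {"n1": 5,  "n2": 80},
}
reducer_nodes = ["n1", "n2"]

def traffic(assign):
    """Bytes that must cross the network under a key->node assignment."""
    return sum(size
               for key, locs in map_output.items()
               for node, size in locs.items()
               if node != assign[key])

def traffic_aware_partition():
    """Greedy stand-in for the paper's partition scheme: place each key
    (largest data volume first) on the node already holding most of it."""
    assign = {}
    for key in sorted(map_output,
                      key=lambda k: -sum(map_output[k].values())):
        assign[key] = max(reducer_nodes,
                          key=lambda n: map_output[key].get(n, 0))
    return assign

hash_assign = {"url_a": "n2", "url_b": "n1"}  # what a hash partitioner might pick
aware = traffic_aware_partition()
print(traffic(hash_assign), traffic(aware))   # 170 15
```

The hash partitioner ships 170 bytes across the network; placing each key where its data already lives cuts that to 15.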
4. IEEE 2016: A Secure and Dynamic Multi-Keyword Ranked Search Scheme over
Encrypted Cloud Data.
Abstract: Due to the increasing popularity of cloud computing, more and more data owners are
motivated to outsource their data to cloud servers for great convenience and reduced cost
in data management. However, sensitive data should be encrypted before outsourcing for privacy
requirements, which obsoletes data utilization like keyword-based document retrieval. In this
paper, we present a secure multi-keyword ranked search scheme over encrypted cloud data,
which simultaneously supports dynamic update operations such as deletion and insertion of
documents. Specifically, the vector space model and the widely-used TF x IDF model are
combined in the index construction and query generation. We construct a special tree-based
index structure and propose a “Greedy Depth-first Search” algorithm to provide efficient multi-
keyword ranked search. The secure kNN algorithm is utilized to encrypt the index and query
vectors, and meanwhile ensure accurate relevance score calculation between encrypted index and
query vectors. In order to resist statistical attacks, phantom terms are added to the index vector
for blinding search results. Due to the use of our special tree-based index structure, the
proposed scheme can achieve sub-linear search time and deal with the deletion and insertion of
documents flexibly. Extensive experiments are conducted to demonstrate the efficiency of the
proposed scheme.
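The index construction and ranking step can be illustrated in plaintext. In the actual scheme both vectors would be encrypted with the secure kNN technique and the index organized as a tree searched depth-first; this sketch shows only the TF x IDF scoring they protect.

```python
import math
from collections import Counter

docs = {
    "d1": "cloud data encryption search",
    "d2": "cloud storage deduplication",
    "d3": "keyword search over encrypted cloud data",
}

def build_index(docs):
    """Per-document TF x IDF vectors, as in the index construction."""
    n = len(docs)
    tfs = {d: Counter(text.split()) for d, text in docs.items()}
    df = Counter(w for tf in tfs.values() for w in tf)
    idf = {w: math.log(1 + n / df[w]) for w in df}
    index = {d: {w: tf[w] * idf[w] for w in tf} for d, tf in tfs.items()}
    return index, idf

def ranked_search(index, idf, query, k=2):
    """Score documents by inner product with the query's IDF vector and
    return the top-k ids (a plaintext stand-in for the encrypted search)."""
    qv = {w: idf.get(w, 0.0) for w in query.split()}
    scores = {d: sum(vec.get(w, 0.0) * qw for w, qw in qv.items())
              for d, vec in index.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

index, idf = build_index(docs)
print(ranked_search(index, idf, "encrypted keyword search"))  # ['d3', 'd1']
```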
5. IEEE 2016: An Efficient Privacy-Preserving Ranked Keyword Search Method
Abstract: Cloud data owners prefer to outsource documents in an encrypted form for the
purpose of privacy preservation. Therefore, it is essential to develop efficient and reliable
ciphertext search techniques. One challenge is that the relationship between documents is
normally concealed in the process of encryption, which leads to significant degradation of
search accuracy. Also, the volume of data in data centers has experienced a dramatic
growth. This will make it even more challenging to design ciphertext search schemes that can
provide efficient and reliable online information retrieval on large volume of encrypted data. In
this paper, a hierarchical clustering method is proposed to support more search semantics and
also to meet the demand for fast ciphertext search within a big data environment. The proposed
hierarchical approach clusters the documents based on the minimum relevance threshold, and
then partitions the resulting clusters into sub-clusters until the constraint on the maximum size of
cluster is reached. In the search phase, this approach can reach a linear computational complexity
against an exponential size increase of document collection. In order to verify the authenticity of
search results, a structure called minimum hash sub-tree is designed in this paper. Experiments
have been conducted using the collection set built from the IEEE Xplore. The results show that
with a sharp increase of documents in the dataset the search time of the proposed method
increases linearly whereas the search time of the traditional method increases exponentially.
Furthermore, the proposed method has an advantage over the traditional method in the rank
privacy and relevance of retrieved documents.
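A one-pass toy version of clustering under a minimum relevance threshold and a maximum cluster size. Jaccard similarity over keyword sets and the greedy first-fit placement are assumptions for illustration; the paper's hierarchical method recursively partitions oversized clusters.

```python
def jaccard(a, b):
    """Keyword-set similarity in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

def threshold_cluster(doc_keywords, min_rel=0.5, max_size=2):
    """A document joins the first cluster whose representative it
    resembles at least min_rel; clusters never grow past max_size,
    mimicking the sub-cluster partitioning constraint."""
    clusters = []                      # each cluster: list of (doc, keywords)
    for doc, kws in doc_keywords.items():
        for c in clusters:
            if len(c) < max_size and jaccard(kws, c[0][1]) >= min_rel:
                c.append((doc, kws))
                break
        else:
            clusters.append([(doc, kws)])
    return clusters

doc_keywords = {
    "d1": {"cloud", "search"},
    "d2": {"cloud", "search", "rank"},
    "d3": {"iot", "sensor"},
}
clusters = threshold_cluster(doc_keywords)
print([[doc for doc, _ in c] for c in clusters])  # [['d1', 'd2'], ['d3']]
```

At query time only clusters whose representative matches the query need to be scanned, which is where the claimed near-linear search behavior comes from.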
6. IEEE 2016: Differentially Private Online Learning for Cloud-Based Video
Recommendation with Multimedia Big Data in Social Networks
Abstract: With the rapid growth of multimedia services and the enormous amount of video
content in online social networks, users have difficulty finding content of interest. Therefore,
various personalized recommendation systems have been proposed. However, they ignore that
the accelerated proliferation of social media data has led to the big data era, which has greatly
impeded the process of video recommendation. In addition, none of them has considered both the
privacy of users’ contexts (e.g., social status, ages and hobbies) and video service vendors’
repositories, which are extremely sensitive and of significant commercial value. To handle the
problems, we propose a cloud-assisted differentially private video recommendation system based
on distributed online learning. In our framework, service vendors are modeled as distributed
cooperative learners, recommending videos according to user’s context, while simultaneously
adapting the video-selection strategy based on user-click feedback to maximize total user clicks
(reward). Considering the sparsity and heterogeneity of big social media data, we also propose a
novel geometric differentially private model, which can greatly reduce the performance
(recommendation accuracy) loss. Our simulations show that the proposed algorithms outperform
existing methods while keeping a delicate balance between computing accuracy and the
privacy-preserving level.
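The basic geometric (discrete Laplace) mechanism that such a model builds on can be sketched as follows; the paper's geometric model additionally adapts the noise to data sparsity, which this sketch omits. The click counts are toy data.

```python
import math
import numpy as np

def geometric_noise(epsilon, size, sensitivity=1, seed=0):
    """Two-sided geometric (discrete Laplace) noise, sampled as the
    difference of two i.i.d. geometric variables. Integer-valued, so it
    suits click counts; epsilon is the privacy budget per release."""
    alpha = math.exp(-epsilon / sensitivity)
    rng = np.random.default_rng(seed)
    return rng.geometric(1 - alpha, size) - rng.geometric(1 - alpha, size)

clicks = np.array([120, 5, 64])      # per-video click counts (toy data)
noisy = clicks + geometric_noise(epsilon=1.0, size=clicks.shape)
```

Releasing `noisy` instead of `clicks` bounds what any single user's feedback can reveal, at the cost of small integer perturbations to the learner's reward signal.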
7. IEEE 2016: Fine-Grained Two-Factor Access Control for Web-Based Cloud Computing
Services
Abstract: In this paper, we introduce a new fine-grained two-factor authentication (2FA) access
control system for web-based cloud computing services. Specifically, in our proposed 2FA
access control system, an attribute-based access control mechanism is implemented with the
necessity of both a user secret key and a lightweight security device. As a user cannot access the
system if they do not hold both, the mechanism can enhance the security of the system,
especially in those scenarios where many users share the same computer for web-based cloud
services. In addition, attribute-based control in the system also enables the cloud server to restrict
the access to those users with the same set of attributes while preserving user privacy, i.e., the
cloud server only knows that the user fulfills the required predicate, but has no idea of the exact
identity of the user. Finally, we also carry out a simulation to demonstrate the practicability of
our proposed 2FA system.
8. IEEE 2016: Dual-Server Public-Key Encryption with Keyword Search for Secure Cloud
Storage.
Abstract: Searchable encryption is of increasing interest for protecting the data privacy in secure
searchable cloud storage. In this work, we investigate the security of a well-known cryptographic
primitive, namely Public Key Encryption with Keyword Search (PEKS) which is very useful in
many applications of cloud storage. Unfortunately, it has been shown that the traditional PEKS
framework suffers from an inherent insecurity called inside Keyword Guessing Attack (KGA)
launched by the malicious server. To address this security vulnerability, we propose a new PEKS
framework named Dual-Server Public Key Encryption with Keyword Search (DS-PEKS). As
another main contribution, we define a new variant of Smooth Projective Hash Functions
(SPHFs) referred to as linear and homomorphic SPHF (LH-SPHF). We then show a generic
construction of secure DS-PEKS from LH-SPHF. To illustrate the feasibility of our new
framework, we provide an efficient instantiation of the general framework from a DDH-based
LH-SPHF and show that it can achieve the strong security against inside KGA.
9. IEEE 2016: DeyPoS: Deduplicatable Dynamic Proof of Storage for Multi-User
Environments
Abstract: Dynamic Proof of Storage (PoS) is a useful cryptographic primitive that enables a user
to check the integrity of outsourced files and to efficiently update the files in a cloud server.
Although researchers have proposed many dynamic PoS schemes in single-user environments,
the problem in multi-user environments has not been investigated sufficiently. A practical multi-
user cloud storage system needs the secure client-side cross-user deduplication technique, which
allows a user to skip the uploading process and obtain the ownership of the files immediately,
when other owners of the same files have uploaded them to the cloud server. To the best of our
knowledge, none of the existing dynamic PoSs can support this technique. In this paper, we
introduce the concept of deduplicatable dynamic proof of storage and propose an efficient
construction called DeyPoS, to achieve dynamic PoS and secure cross-user deduplication,
simultaneously. Considering the challenges of structure diversity and private tag generation, we
exploit a novel tool called Homomorphic Authenticated Tree (HAT). We prove the security of
our construction, and the theoretical analysis and experimental results show that our construction
is efficient in practice.
10. IEEE 2016: KSF-OABE: Outsourced Attribute-Based Encryption with Keyword
Search Function for Cloud Storage
Abstract: Cloud computing becomes increasingly popular for data owners to outsource their
data to public cloud servers while allowing intended data users to retrieve these data stored in
cloud. This kind of computing model brings challenges to the security and privacy of data stored
in the cloud. Attribute-based encryption (ABE) technology has been used to design fine-grained
access control system, which provides one good method to solve the security issues in cloud
setting. However, the computation cost and cipher text size in most ABE schemes grow with the
complexity of the access policy. Outsourced ABE (OABE) with fine-grained access control
system can largely reduce the computation cost for users who want to access encrypted data
stored in the cloud by outsourcing the heavy computation to the cloud service provider (CSP). However,
as the amount of encrypted files stored in the cloud becomes very large, efficient query
processing is hindered. To deal with the above problem, we present a new cryptographic
primitive called attribute-based encryption scheme with outsourcing key-issuing and outsourcing
decryption, which can implement keyword search function (KSF-OABE). The proposed KSF-
OABE scheme is proved secure against chosen-plaintext attack (CPA). CSP performs partial
decryption task delegated by data user without knowing anything about the plaintext. Moreover,
the CSP can perform encrypted keyword search without knowing anything about the keywords
embedded in trapdoor.
11. IEEE 2016: SecRBAC: Secure Data in the Clouds
Abstract: Most current security solutions are based on perimeter security. However, Cloud
computing breaks the organization's perimeter. When data resides in the Cloud, it resides
outside the organizational bounds. This leads users to a loss of control over their data and raises
reasonable security concerns that slow down the adoption of Cloud computing. Is the Cloud
service provider accessing the data? Is it legitimately applying the access control policy defined
by the user? This paper presents a data-centric access control solution with enriched role-based
expressiveness in which security is focused on protecting user data regardless of the Cloud service
provider that holds it. Novel identity-based and proxy re-encryption techniques are used to
protect the authorization model. Data is encrypted and authorization rules are cryptographically
protected to preserve user data against the service provider access or misbehavior. The
authorization model provides high expressiveness with role hierarchy and resource hierarchy
support. The solution takes advantage of the logic formalism provided by Semantic Web
technologies, which enables advanced rule management like semantic conflict detection. A proof
of concept implementation has been developed and a working prototypical deployment of the
proposal has been integrated within Google service.
12. IEEE 2016: Secure Auditing and Deduplicating Data in Cloud
Abstract: As cloud computing technology has developed over the last decade,
outsourcing data to cloud services for storage has become an attractive trend, which spares
the effort of heavy data maintenance and management. Nevertheless, since the
outsourced cloud storage is not fully trustworthy, it raises security concerns about how to
realize data deduplication in the cloud while achieving integrity auditing. In this work, we study the
problem of integrity auditing and secure deduplication on cloud data. Specifically, aiming at
achieving both data integrity and deduplication in the cloud, we propose two secure systems, namely
SecCloud and SecCloud+. SecCloud introduces an auditing entity that maintains a MapReduce
cloud, which helps clients generate data tags before uploading as well as audit the
integrity of data stored in the cloud. Compared with previous work, the computation by the
user in SecCloud is greatly reduced during the file uploading and auditing phases. SecCloud+ is
motivated by the fact that customers always want to encrypt their data before
uploading, and enables integrity auditing and secure deduplication on encrypted data.
13. IEEE 2016: Public Integrity Auditing for Shared Dynamic Cloud Data with Group
User Revocation
Abstract: The advent of cloud computing has made storage outsourcing a rising trend,
which has made secure remote data auditing a hot topic in the research
literature. Recently, some research has considered the problem of secure and
efficient public data integrity auditing for shared dynamic data. However, these schemes are still
not secure against collusion between the cloud storage server and
revoked group users during user revocation in practical cloud storage systems. In this paper, we
identify the collusion attack in the existing scheme and provide an
efficient public integrity auditing scheme with secure group user revocation based on vector
commitment and verifier-local revocation group signatures. We design a concrete scheme based
on our scheme definition. Our scheme supports public checking and
efficient user revocation, as well as some nice properties such as confidentiality, efficiency,
countability, and traceability of secure group user revocation. Finally, the security and
experimental analysis show that, compared with relevant schemes, our scheme is also secure
and efficient.
14. IEEE 2016: Key-Aggregate Searchable Encryption (KASE) for Group Data Sharing via
Cloud Storage
Abstract: The capability of selectively sharing encrypted data with different
users via public cloud storage may greatly ease security concerns over inadvertent data leaks in
the cloud. A key challenge to designing such encryption schemes lies in the efficient
management of encryption keys. The desired flexibility of sharing any group of selected
documents with any group of users demands different encryption keys to be used for different
documents. However, this also implies the necessity of securely distributing to users a large
number of keys for both encryption and search, and those users will have to securely store the
received keys, and submit an equally large number of keyword trapdoors to the cloud in order to
perform search over the shared data. The implied need for secure communication, storage, and
complexity clearly renders the approach impractical. In this paper, we address this practical
problem, which is largely neglected in the literature, by proposing the novel concept of key-
aggregate searchable encryption and instantiating the concept through a concrete KASE scheme,
in which a data owner only needs to distribute a single key to a user for sharing a large number
of documents, and the user only needs to submit a single trapdoor to the cloud for querying
the shared documents. The security analysis and performance evaluation both confirm that our
proposed schemes are provably secure and practically efficient.
15. IEEE 2016: A Secure and Dynamic Multi-Keyword Ranked Search Scheme over
Encrypted Cloud Data
Abstract: Due to the increasing popularity of cloud computing, more and more data owners are
motivated to outsource their data to cloud servers for great convenience and reduced cost
in data management. However, sensitive data should be encrypted before outsourcing for privacy
requirements, which obsoletes data utilization like keyword-based document retrieval. In this
paper, we present a secure multi-keyword ranked search scheme over encrypted cloud data,
which simultaneously supports dynamic update operations like deletion and insertion of
documents. Specifically, the vector space model and the widely-used TF x IDF model are
combined in the index construction and query generation. We construct a special tree-based
index structure and propose a “Greedy Depth-first Search” algorithm to provide efficient multi-
keyword ranked search. The secure kNN algorithm is utilized to encrypt the index and query
vectors, and meanwhile ensure accurate relevance score calculation between encrypted index and
query vectors. In order to resist statistical attacks, phantom terms are added to the index vector
for blinding search results. Due to the use of our special tree-based index structure, the
proposed scheme can achieve sub-linear search time and deal with the deletion and insertion of
documents flexibly. Extensive experiments are conducted to demonstrate the efficiency of the
proposed scheme.
16. IEEE 2016: Identity-Based Encryption with Outsourced Revocation in Cloud
Computing
Abstract: Identity-Based Encryption (IBE), which simplifies the public key and certificate
management of Public Key Infrastructure (PKI), is an important alternative to public
key encryption. However, one of the main efficiency drawbacks of IBE is the overhead
computation at Private Key Generator (PKG) during user revocation. Efficient revocation has
been well studied in traditional PKI setting, but the cumbersome management of certificates is
precisely the burden that IBE strives to alleviate. In this paper, aiming at tackling the critical
issue of identity revocation, we introduce outsourcing computation into IBE for the first time and
propose a revocable IBE scheme in the server-aided setting. Our scheme offloads most of the key
generation related operations during key-issuing and key-update processes to a Key Update
Cloud Service Provider, leaving only a constant number of simple operations for PKG and users
to perform locally. This goal is achieved by utilizing a novel collusion-resistant technique: we
employ a hybrid private key for each user, in which an AND gate is involved to connect and
bound the identity component and the time component. Furthermore, we propose another
construction which is provably secure under the recently formalized Refereed Delegation of
Computation model. Finally, we provide extensive experimental results to demonstrate the
efficiency of our proposed construction.
17. IEEE 2016: Reducing Fragmentation for In-line Deduplication Backup Storage via
Exploiting Backup History and Cache Knowledge
Abstract: In backup systems, the chunks of each backup are physically scattered
after deduplication, which causes a challenging fragmentation problem. We observe that
the fragmentation stems from sparse and out-of-order containers. The sparse container decreases
restore performance and garbage collection efficiency, while the out-of-order container decreases
restore performance if the restore cache is small. In order to reduce the fragmentation, we
propose History-Aware Rewriting algorithm (HAR) and Cache-Aware Filter (CAF).
HAR exploits historical information in backup systems to accurately identify and reduce sparse
containers, and CAF exploits restore cache knowledge to identify the out-of-order containers that
hurt restore performance. CAF efficiently complements HAR in datasets where out-of-order
containers are dominant. To reduce the metadata overhead of the garbage collection, we further
propose a Container-Marker Algorithm (CMA) to identify valid containers instead of valid
chunks. Our extensive experimental results from real-world datasets show HAR significantly
improves restore performance by 2.84-175.36x at a cost of rewriting only 0.5-2.03 percent of the
data.
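HAR's sparse-container detection can be sketched as a simple utilization check; the 4 MiB container size and the 50% threshold are illustrative assumptions, not values from the paper.

```python
def sparse_containers(live_bytes, container_size=4 << 20, threshold=0.5):
    """Flag containers whose utilization (bytes still referenced by the
    backup being considered) fell below the threshold; HAR would rewrite
    the live chunks of these into fresh, dense containers."""
    return sorted(cid for cid, live in live_bytes.items()
                  if live / container_size < threshold)

# container id -> bytes of its 4 MiB still referenced by the latest backup
live = {"c1": 4 << 20, "c2": 1 << 20, "c3": 3 << 20}
print(sparse_containers(live))  # ['c2']
```

Because backups evolve slowly, the utilization observed in one backup is a good predictor for the next, which is the "historical information" the algorithm exploits.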
18. IEEE 2015: SecDep: A user-aware efficient fine-grained secure deduplication scheme
with multi-level key management
Abstract: Nowadays, many customers and enterprises backup their data to cloud storage that
performs deduplication to save storage space and network bandwidth. Hence, how to perform
secure deduplication becomes a critical challenge for cloud storage. According to our analysis,
the state-of-the-art secure deduplication methods are not suitable for cross-user fine grained data
deduplication. They either suffer brute-force attacks that can recover files falling into a known
set, or incur large computation (time) overheads. Moreover, existing approaches of convergent
key management incur large space overheads because of the huge number of chunks shared
among users. Our observation that cross-user redundant data are mainly from the duplicate files,
motivates us to propose an efficient secure deduplication scheme SecDep. SecDep employs
User-Aware Convergent Encryption (UACE) and Multi-Level Key management (MLK)
approaches. (1) UACE combines cross-user file-level and inside-user chunk-level deduplication,
and exploits different secure policies among and inside users to minimize the computation
overheads. Specifically, both of file-level and chunk-level deduplication use variants of
Convergent Encryption (CE) to resist brute-force attacks. The major difference is that the
file-level CE keys are generated by using a server-aided method to ensure security of cross-user
deduplication, while the chunk-level keys are generated by using a user-aided method with lower
computation overheads. (2) To reduce key space overheads, MLK uses file-level key to encrypt
chunk-level keys so that the key space will not increase with the number of sharing users.
Furthermore, MLK splits the file-level keys into share-level keys and distributes them to multiple
key servers to ensure security and reliability of file-level keys. Our security analysis
demonstrates that SecDep ensures data confidentiality and key security. Our experiment results
based on several large real-world datasets show that SecDep is more time efficient and key-
space-efficient than the state-of-the-art secure deduplication approaches.
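The convergent-encryption idea underlying SecDep's UACE can be sketched as follows. The XOR keystream cipher is a toy stand-in for the block cipher a real CE scheme would use; it serves only to show that identical plaintexts yield identical, dedupable ciphertexts without the server seeing the data.

```python
import hashlib

def convergent_key(data: bytes) -> bytes:
    """CE key = hash of the content itself."""
    return hashlib.sha256(data).digest()

def ce_encrypt(data: bytes, key: bytes) -> bytes:
    """Deterministic toy cipher: XOR with a hash-derived keystream.
    XOR is its own inverse, so this function also decrypts."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ s for b, s in zip(data, stream))

# Two users holding the same file derive the same key and ciphertext,
# so the server can deduplicate without ever seeing the plaintext.
file_a = b"quarterly report v3"
file_b = b"quarterly report v3"
ct_a = ce_encrypt(file_a, convergent_key(file_a))
ct_b = ce_encrypt(file_b, convergent_key(file_b))
assert ct_a == ct_b                                        # dedupable
assert ce_encrypt(ct_a, convergent_key(file_a)) == file_a  # round-trips
```

Plain CE of this form is exactly what is vulnerable to the brute-force attacks mentioned above, which is why SecDep makes the file-level key server-aided.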
19. IEEE 2015: Energy-aware Load Balancing and Application Scaling for the Cloud
Ecosystem
Abstract: In this paper we introduce an energy-aware operation model used for load balancing
and application scaling on a cloud. The basic philosophy of our approach is defining an energy-
optimal operation regime and attempting to maximize the number of servers operating in this
regime. Idle and lightly-loaded servers are switched to one of the sleep states to save energy. The
load balancing and scaling algorithms also exploit some of the most desirable features of server
consolidation mechanisms discussed in the literature.
20. IEEE 2015: Provable Multicopy Dynamic Data Possession in Cloud Computing
Systems
Abstract: More and more organizations are opting for outsourcing data to
remote cloud service providers (CSPs). Customers can rent the CSPs storage infrastructure to
store and retrieve almost unlimited amount of data by paying fees metered in gigabyte/month.
For an increased level of scalability, availability, and durability, some customers may want
their data to be replicated on multiple servers across multiple data centers. The more copies the
CSP is asked to store, the more fees the customers are charged. Therefore, customers need to
have a strong guarantee that the CSP is storing all data copies that are agreed upon in the service
contract, and all these copies are consistent with the most recent modifications issued by the
customers. In this paper, we propose a map-based provable multicopy
dynamic data possession (MB-PMDDP) scheme that has the following features: 1) it
provides evidence to the customers that the CSP is not cheating by storing fewer copies; 2) it
supports outsourcing of dynamic data, i.e., it supports block-level operations, such as block
modification, insertion, deletion, and append; and 3) it allows authorized users to seamlessly
access the file copies stored by the CSP. We give a comparative analysis of the proposed MB-
PMDDP scheme with a reference model obtained by extending
existing provable possession of dynamic single-copy schemes. The theoretical analysis is
validated through experimental results on a commercial cloud platform. In addition, we show the
security against colluding servers, and discuss how to identify corrupted copies by slightly
modifying the proposed scheme.
21. IEEE 2015: A Profit Maximization Scheme with Guaranteed Quality of Service in
Cloud Computing
Abstract: As an effective and efficient way to provide computing resources and services to
customers on demand, cloud computing has become more and more popular.
From cloud service providers' perspective, profit is one of the most important considerations, and
it is mainly determined by the configuration of a cloud service platform under given market
demand. However, a single long-term renting scheme is usually adopted to configure
a cloud platform, which cannot guarantee the service quality but leads to serious resource waste.
In this paper, a double resource renting scheme is first designed, in which short-term renting
and long-term renting are combined to address the existing issues. This double
renting scheme can effectively guarantee the quality of service of all requests and reduce the
resource waste greatly. Secondly, a service system is considered as an M/M/m + D queuing
model and the performance indicators that affect the profit of our double renting scheme are
analyzed, e.g., the average charge, the ratio of requests that need temporary servers, and so forth.
Thirdly, a profit maximization problem is formulated for the double renting scheme and the
optimized configuration of a cloud platform is obtained by solving
the profit maximization problem. Finally, a series of calculations are conducted to compare
the profit of our proposed scheme with that of the single renting scheme. The results show that
our scheme can not only guarantee the service quality of all requests, but also obtain
more profit than the latter.
22. IEEE 2015: CloudSky: A Controllable Data Self-Destruction System for Untrusted
Cloud Storage Networks
Abstract: In cloud services, users may frequently be required to reveal their personal private
information, which could be stored in the cloud and used by different parties for different purposes.
However, in a cloud-wide storage network, the servers are easily subjected to strong attacks and also
commonly experience software/hardware faults. As such, private information could be at
great risk in such an untrusted environment. Given that the presented personal sensitive
information is usually out of the user's control in most cloud-based services, ensuring data security
and privacy protection with respect to untrusted storage networks has become a formidable
research challenge. To address these challenges, in this paper we propose a self-
destruction system, named CloudSky, which is able to enforce the security of user privacy over
the untrusted cloud in a controllable way. CloudSky exploits a key control mechanism based on
attribute-based encryption (ABE) and takes advantage of active storage networks to allow the
user to control the subjective life-cycle and the access control policies of the private data, whose
integrity is ensured by using HMAC to cope with untrusted environments, thereby adapting
it to the cloud in terms of both performance and security requirements. The feasibility of
the system in terms of its performance and scalability is demonstrated by experiments on a real
large-scale storage network.
23. IEEE 2015: Cost-Minimizing Dynamic Migration of Content Distribution Services into
Hybrid Clouds
Abstract: With the recent advent of cloud computing technologies, a growing number
of content distribution applications are contemplating a switch to cloud-based services, for better
scalability and lower cost. Two key tasks are involved for such a move: to migrate
the contents to cloud storage, and to distribute the Web service load to cloud-based
Web services. The main issue is to best utilize the cloud as well as the application provider's
existing private cloud, to serve volatile requests with service response time guarantee at all times,
while incurring the minimum operational cost. While it may not be too difficult to design a
simple heuristic, proposing one with guaranteed cost optimality over a long run of the system
constitutes an intimidating challenge. Employing Lyapunov optimization techniques, we design a
dynamic control algorithm to optimally place contents and dispatch requests in a hybrid cloud
infrastructure spanning geo-distributed data centers, which minimizes overall
operational cost over time, subject to service response time constraints. Rigorous analysis shows
that the algorithm nicely bounds the response times within the preset QoS target, and guarantees
that the overall cost is within a small constant gap from the optimum achieved by a T-slot look
ahead mechanism with known future information. We verify the performance of
our dynamic algorithm with prototype-based evaluation.
24. IEEE 2015: A Hybrid Cloud Approach for Secure Authorized Deduplication
Abstract: Data deduplication is one of the important data compression techniques for eliminating
duplicate copies of repeating data, and has been widely used in cloud storage to reduce the
amount of storage space and save bandwidth. To protect the confidentiality of sensitive data
while supporting deduplication, the convergent encryption technique has been proposed to
encrypt the data before outsourcing. To better protect data security, this paper makes the first
attempt to formally address the problem of authorized data deduplication. Different from
traditional deduplication systems, the differential privileges of users are further considered in
duplicate check besides the data itself. We also present several new deduplication constructions
supporting authorized duplicate check in a hybrid cloud architecture. Security analysis
demonstrates that our scheme is secure in terms of the definitions specified in the proposed
security model. As a proof of concept, we implement a prototype of our proposed authorized
duplicate check scheme and conduct test bed experiments using our prototype. We show that our
proposed authorized duplicate check scheme incurs minimal overhead compared to normal
operations.
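As a rough illustration of the convergent-encryption idea behind authorized deduplication, the sketch below derives the encryption key from the file's own hash, so identical files encrypt to identical ciphertexts and collapse to one stored copy. The XOR keystream and the in-memory store are toy stand-ins for illustration only, not the paper's actual construction.

```python
import hashlib

def convergent_key(data: bytes) -> bytes:
    # Key is derived from the content itself, so identical files
    # always produce the identical key (and hence ciphertext).
    return hashlib.sha256(data).digest()

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    # Illustrative XOR keystream only -- NOT a secure cipher.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def dedup_store(store: dict, data: bytes) -> str:
    key = convergent_key(data)
    ct = toy_encrypt(data, key)
    tag = hashlib.sha256(ct).hexdigest()  # tag used for the duplicate check
    store.setdefault(tag, ct)             # identical files collapse to one copy
    return tag

store = {}
t1 = dedup_store(store, b"same file contents")
t2 = dedup_store(store, b"same file contents")
t3 = dedup_store(store, b"different contents")
```

Because XOR with the same keystream is its own inverse, any holder of the convergent key can decrypt the stored ciphertext back to the plaintext.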
25. IEEE 2015: I-sieve: An inline high performance deduplication system used in cloud
storage
Abstract: Data deduplication is an emerging and widely employed method for
current storage systems. As this technology is gradually applied in inline scenarios such as with
virtual machines and cloud storage systems, this study proposes a
novel deduplication architecture called I-sieve. The goal of I-sieve is to realize
a high performance data sieve system based on iSCSI in the cloud storage system. We also
design the corresponding index and mapping tables and present a multi-level cache using a solid
state drive to reduce RAM consumption and to optimize lookup performance. A prototype of I-
sieve is implemented based on the open source iSCSI target, and many experiments have been
conducted driven by virtual machine images and testing tools. The evaluation results show
excellent deduplication and foreground performance. More importantly, I-sieve can co-exist with
the existing deduplication systems as long as they support the iSCSI protocol.
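The block-fingerprint indexing that underlies inline dedup systems like I-sieve can be sketched as follows. The 4-byte blocks, SHA-1 fingerprints, and in-memory tables are illustrative simplifications, not I-sieve's actual iSCSI-level design.

```python
import hashlib

BLOCK = 4  # toy block size in bytes (real systems use 4 KB or larger)

class ToySieve:
    """Inline block-level dedup: a fingerprint index plus a mapping table."""
    def __init__(self):
        self.blocks = {}    # fingerprint -> unique block data
        self.mapping = {}   # (volume, offset) -> fingerprint

    def write(self, volume, offset, data):
        for i in range(0, len(data), BLOCK):
            chunk = data[i:i + BLOCK]
            fp = hashlib.sha1(chunk).hexdigest()  # fingerprint of the block
            self.blocks.setdefault(fp, chunk)     # store only unseen blocks
            self.mapping[(volume, offset + i)] = fp

    def read(self, volume, offset, length):
        out = b""
        for i in range(0, length, BLOCK):
            out += self.blocks[self.mapping[(volume, offset + i)]]
        return out

s = ToySieve()
s.write("vm1", 0, b"AAAABBBB")
s.write("vm2", 0, b"AAAACCCC")   # the "AAAA" block is deduplicated
```

Two virtual-machine volumes sharing a block end up storing it once; only the mapping table grows.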
26. IEEE 2015: Panda: Public Auditing for Shared Data with Efficient User Revocation in
the Cloud
Abstract: With data storage and sharing services in the cloud, users can easily modify
and share data as a group. To ensure shared data integrity can be verified publicly, users in the
group need to compute signatures on all the blocks in shared data. Different blocks
in shared data are generally signed by different users due to data modifications performed by
different users. For security reasons, once a user is revoked from the group, the blocks which
were previously signed by this revoked user must be re-signed by an existing user. The
straightforward method, which allows an existing user to download the corresponding part
of shared data and re-sign it during user revocation, is inefficient due to the large size
of shared data in the cloud. In this paper, we propose a novel public auditing mechanism for the
integrity of shared data with efficient user revocation in mind. By utilizing the idea of proxy re-
signatures, we allow the cloud to re-sign blocks on behalf of
existing users during user revocation, so that existing users do not need to download and re-sign
blocks by themselves. In addition, a public verifier is always able to audit the integrity
of shared data without retrieving the entire data from the cloud, even if some part
of shared data has been re-signed by the cloud. Moreover, our mechanism is able to support
batch auditing by verifying multiple auditing tasks simultaneously. Experimental results show
that our mechanism can significantly improve the efficiency of user revocation.
27. IEEE 2015: Just-in-Time Code Offloading for Wearable Computing
Abstract: Wearable computing is becoming an emerging computing paradigm for various recently
developed wearable devices, such as Google Glass and the Samsung Galaxy smartwatch, which
have significantly changed our daily life with new functions. To extend the applications
on wearable devices with limited computational capability, storage, and battery capacity, in this
paper, we propose a novel three-layer architecture consisting of wearable devices, mobile
devices, and a remote cloud for code offloading. In particular, we offload a portion of
computation tasks from wearable devices to local mobile devices or remote cloud such that even
applications with a heavy computation load can still be supported on wearable devices.
Furthermore, considering the special characteristics and the requirements of wearable devices,
we investigate a code offloading strategy with a novel just-in-time objective, i.e., maximizing the
number of tasks that can be executed on wearable devices within guaranteed delay
requirements. Because of the NP-hardness of this problem, which we prove, we propose a fast
heuristic algorithm based on the genetic algorithm to solve it. Finally, extensive simulations are
conducted to show that our proposed algorithm significantly outperforms three other
offloading strategies.
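The just-in-time objective above can be illustrated with a much simpler greedy, earliest-deadline-first stand-in for the paper's genetic algorithm: keep a task on the wearable only while deadlines still hold, and offload the rest. Task names, costs, and deadlines are invented for illustration.

```python
def schedule(tasks):
    """tasks: list of (name, local_exec_time, deadline).
    Greedy earliest-deadline-first stand-in for the paper's genetic
    algorithm: keep a task on the wearable only if it still finishes
    by its deadline given what is already scheduled; otherwise offload."""
    local, offloaded, clock = [], [], 0
    for name, cost, deadline in sorted(tasks, key=lambda t: t[2]):
        if clock + cost <= deadline:
            clock += cost          # run locally on the wearable
            local.append(name)
        else:
            offloaded.append(name)  # push to mobile device / cloud
    return local, offloaded

local, offloaded = schedule([("a", 2, 3), ("b", 2, 4), ("c", 1, 4)])
```

Here tasks "a" and "b" fit on the wearable within their deadlines, while "c" must be offloaded.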
28. IEEE 2015: Secure sensitive data sharing on a big data platform
Abstract: Users store vast amounts of sensitive data on
a big data platform. Sharing sensitive data will help enterprises reduce the cost of providing users
with personalized services and provide value-added data services.
However, secure data sharing is problematic. This paper proposes a framework for secure
sensitive data sharing on a big data platform, including secure data delivery, storage, usage, and
destruction on a semi-trusted big data sharing platform. We present a proxy re-encryption
algorithm based on heterogeneous ciphertext transformation and a user process protection
method based on a virtual machine monitor, which provides support for the realization of system
functions. The framework protects the security of users' sensitive data effectively
and shares these data safely. At the same time, data owners retain complete control of their
own data in a sound environment for modern Internet information security.
29. IEEE 2015: HEROS: Energy-Efficient Load Balancing for Heterogeneous Data Centers
Abstract: Heterogeneous architectures have become more popular and widespread in the recent
years with the growing popularity of general-purpose processing on graphics processing units,
low-power systems on a chip, multi- and many-core architectures, asymmetric cores,
coprocessors, and solid-state drives. The design and management of cloud computing data-
centers must adapt to these changes while targeting objectives of improving system
performance, energy efficiency and reliability. This paper presents HEROS, a
novel load balancing algorithm for energy-efficient resource allocation in
heterogeneous systems. HEROS takes into account the heterogeneity of a system during the
decision-making process and uses a holistic representation of the system. As a result, servers that
contain resources of multiple types (computing, memory, storage and networking) and have
varying internal structures of their components can be utilized more efficiently.
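One way to make a placement decision "holistic" in the sense described is to score each candidate server by its worst-case resource utilization across all resource types after placement. This is an illustrative heterogeneity-aware heuristic, not the published HEROS algorithm; server names and capacities are made up.

```python
def place(servers, demand):
    """servers: {name: {"cap": {...}, "used": {...}}}; demand: {resource: amount}.
    Choose the feasible server whose worst resource utilization after
    placement is lowest, so no single resource type becomes a bottleneck."""
    best, best_score = None, None
    for name, s in servers.items():
        if any(s["used"][r] + demand.get(r, 0) > s["cap"][r] for r in s["cap"]):
            continue  # request does not fit on this server
        score = max((s["used"][r] + demand.get(r, 0)) / s["cap"][r] for r in s["cap"])
        if best_score is None or score < best_score:
            best, best_score = name, score
    return best

servers = {
    "cpu-heavy": {"cap": {"cpu": 32, "mem": 64}, "used": {"cpu": 8, "mem": 48}},
    "balanced":  {"cap": {"cpu": 16, "mem": 64}, "used": {"cpu": 4, "mem": 16}},
}
choice = place(servers, {"cpu": 4, "mem": 8})
```

The "cpu-heavy" server has spare CPU but its memory would reach 87% utilization, so the balanced server wins.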
30. IEEE 2015: Enabling Efficient Multi-Keyword Ranked Search Over Encrypted Mobile
Cloud Data Through Blind Storage
Abstract: In mobile cloud computing, a fundamental application is to outsource
the mobile data to external cloud servers for scalable data storage. The outsourced data, however,
need to be encrypted due to the privacy and confidentiality concerns of their owner. This makes
accurate search over the encrypted mobile cloud data particularly difficult. To
tackle this issue, in this paper, we develop searchable encryption for multi-
keyword ranked search over the stored data. Specifically, by considering the large number of
outsourced documents (data) in the cloud, we utilize the relevance score and k-nearest neighbor
techniques to develop an efficient multi-keyword search scheme that can return
the ranked search results based on accuracy. Within this framework, we leverage an efficient
index to further improve the search efficiency, and adopt a blind storage system to conceal the
access pattern of the search user. Security analysis demonstrates that our scheme achieves
confidentiality of documents and index, trapdoor privacy, trapdoor unlinkability, and concealment of
the search user's access pattern. Finally, using extensive simulations, we show that our proposal
can achieve much improved efficiency in terms of search functionality and search time compared
with the existing proposals.
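The relevance scoring at the heart of such ranked search can be illustrated in plaintext with a TF-IDF-style score; the actual scheme evaluates scores of this kind over an encrypted index using secure k-nearest-neighbor techniques, and the documents below are invented.

```python
import math

def ranked_search(docs, query):
    """Plaintext sketch of multi-keyword ranked search: score each document
    by a TF-IDF-style relevance measure over the query keywords, then
    return document names sorted by descending score."""
    n = len(docs)
    scores = {}
    for name, words in docs.items():
        s = 0.0
        for kw in query:
            tf = words.count(kw) / len(words)           # term frequency
            df = sum(1 for w in docs.values() if kw in w)  # document frequency
            if df:
                s += tf * math.log(1 + n / df)           # rarer keywords weigh more
        scores[name] = s
    return sorted(scores, key=scores.get, reverse=True)

docs = {
    "d1": ["cloud", "storage", "cloud", "security"],
    "d2": ["mobile", "cloud"],
    "d3": ["sensor", "network"],
}
ranked = ranked_search(docs, ["cloud", "storage"])
```

Document d1 matches both keywords (one of them twice), so it ranks above d2, which matches only "cloud".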
31. IEEE 2015: Privacy-Preserving Public Auditing for Regenerating-Code-Based Cloud
Storage
Abstract: To protect outsourced data in cloud storage against corruptions, adding fault tolerance
to cloud storage together with data integrity checking and failure reparation becomes critical.
Recently, regenerating codes have gained popularity due to their lower repair bandwidth while
providing fault tolerance. Existing remote checking methods for regenerating-coded data only
provide private auditing, requiring data owners to always stay online and handle auditing, as well
as repairing, which is sometimes impractical. In this paper, we propose a public auditing scheme
for the regenerating-code-based cloud storage. To solve the regeneration problem of failed
authenticators in the absence of data owners, we introduce a proxy, which is privileged to
regenerate the authenticators, into the traditional public auditing system model. Moreover, we
design a novel publicly verifiable authenticator, which is generated by a couple of keys and can be
regenerated using partial keys. Thus, our scheme can completely release data owners from online
burden. In addition, we randomize the encode coefficients with a pseudorandom function to
preserve data privacy. Extensive security analysis shows that our scheme is provably secure
under the random oracle model, and experimental evaluation indicates that our scheme is highly
efficient and can be feasibly integrated into the regenerating-code-based cloud storage.
32. IEEE 2015: CHARM: A Cost-Efficient Multi-Cloud Data Hosting Scheme with High
Availability
Abstract: Nowadays, more and more enterprises and organizations are hosting their data into
the cloud, in order to reduce the IT maintenance cost and enhance the data reliability. However,
facing the numerous cloud vendors as well as their heterogeneous pricing policies, customers
may well be perplexed about which cloud(s) are suitable for storing their data and
what hosting strategy is cheaper. The general status quo is that customers usually put
their data into a single cloud (which is subject to the vendor lock-in risk) and then simply trust to
luck. Based on comprehensive analysis of various state-of-the-art cloud vendors, this paper
proposes a novel data hosting scheme (named CHARM) which integrates two desired key
functions. The first is selecting several suitable clouds and an appropriate redundancy strategy to
store data with minimized monetary cost and guaranteed availability. The second is triggering a
transition process to re-distribute data according to the variations of data access pattern and
pricing of clouds. We evaluate the performance of CHARM using both trace-driven simulations
and prototype experiments. The results show that compared with the major
existing schemes, CHARM not only saves around 20 percent of monetary cost but also exhibits
sound adaptability to data and price adjustments.
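CHARM's first function, picking clouds and a redundancy strategy of minimum cost under an availability target, can be sketched for the simple replication case by enumerating replica sets. The availability and price figures are invented; the actual scheme also weighs erasure coding and access patterns.

```python
from itertools import combinations

def cheapest_replication(clouds, target):
    """clouds: {name: (availability, unit_cost)}. Enumerate replica sets and
    return the cheapest one whose combined availability meets the target.
    With independent failures, replicated availability is 1 - prod(1 - a_i)."""
    best, best_cost = None, None
    for r in range(1, len(clouds) + 1):
        for combo in combinations(clouds, r):
            fail = 1.0
            for c in combo:
                fail *= 1 - clouds[c][0]       # all replicas fail together
            avail = 1 - fail
            cost = sum(clouds[c][1] for c in combo)
            if avail >= target and (best_cost is None or cost < best_cost):
                best, best_cost = set(combo), cost
    return best, best_cost

clouds = {"A": (0.99, 5), "B": (0.95, 2), "C": (0.95, 2)}
choice, cost = cheapest_replication(clouds, 0.999)
```

No single cloud reaches 99.9% availability here, and the two cheap clouds together fall just short, so the scheme pairs the reliable cloud with a cheap one.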
33. IEEE 2015: Innovative Schemes for Resource Allocation in the Cloud for Media
Streaming Applications
Abstract: Media streaming applications have recently attracted a large number of users on the
Internet. With the advent of these bandwidth-intensive applications, it is economically inefficient
to provide streaming distribution with guaranteed QoS relying only on central resources at
a media content provider. Cloud computing offers an elastic infrastructure that media content
providers (e.g., Video on Demand (VoD) providers) can use to obtain streaming resources that
match the demand. Media content providers are charged for the amount of resources allocated
(reserved) in the cloud. Most of the existing cloud providers employ a pricing model for the
reserved resources that is based on non-linear time-discount tariffs (e.g., Amazon CloudFront
and Amazon EC2). Such a pricing scheme offers discount rates depending non-linearly on the
period of time during which the resources are reserved in the cloud. In this case, an open
problem is to decide on both the right amount of resources reserved in the cloud, and their
reservation time such that the financial cost on the media content provider is minimized. We
propose a simple, easy-to-implement algorithm for resource reservation that maximally exploits
discounted rates offered in the tariffs, while ensuring that sufficient resources are reserved in
the cloud. Based on the prediction of demand for streaming capacity, our algorithm is carefully
designed to reduce the risk of making wrong resource allocation decisions. The results of our
numerical evaluations and simulations show that the proposed algorithm significantly reduces
the monetary cost of resource allocations in the cloud as compared to other
conventional schemes.
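The effect of non-linear time-discount tariffs on the reservation decision can be seen in a toy cost model: the same capacity reserved in one long block costs less per slot than a series of short reservations. The discount tiers below are hypothetical, not any provider's real prices.

```python
def reservation_cost(capacity, duration, base_rate, tariff):
    """Cost of reserving `capacity` units for `duration` slots under a
    time-discount tariff: `tariff` maps a minimum duration to the discount
    fraction earned, so longer reservations get cheaper per slot."""
    discount = max((d for t, d in tariff.items() if duration >= t), default=0.0)
    return capacity * duration * base_rate * (1 - discount)

tariff = {1: 0.0, 10: 0.2, 100: 0.5}   # hypothetical discount tiers

# Reserve 8 units of streaming capacity for 100 slots, two ways:
short = 10 * reservation_cost(8, 10, 1.0, tariff)   # ten 10-slot reservations
long = reservation_cost(8, 100, 1.0, tariff)        # one 100-slot reservation
```

The open problem the abstract describes is exactly this trade-off: a long reservation is cheaper per slot but risks paying for capacity that predicted demand never uses.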
34. IEEE 2015: OPoR: Enabling Proof of Retrievability in Cloud Computing with
Resource-Constrained Devices
Abstract: Cloud computing moves the application software and databases to the centralized
large data centers, where the management of the data and services may not be fully trustworthy.
In this work, we study the problem of ensuring the integrity of data storage in cloud computing.
To reduce the computational cost at user side during the integrity verification of their data, the
notion of public verifiability has been proposed. However, the challenge is that the
computational burden is too heavy for users with resource-constrained devices to compute the
public authentication tags of file blocks. To tackle the challenge, we propose OPoR, a
new cloud storage scheme involving a cloud storage server and a cloud audit server, where the
latter is assumed to be semi-honest. In particular, we consider the task of allowing
the cloud audit server, on behalf of the cloud users, to pre-process the data before uploading to
the cloud storage server and later verifying the data integrity. OPoR outsources and offloads the
heavy computation of the tag generation to the cloud audit server and eliminates the involvement
of the user in the auditing and pre-processing phases. Furthermore, we strengthen
the proof of retrievability (PoR) model to support dynamic data operations, as well as ensure
security against reset attacks launched by the cloud storage server in the upload phase.
36. IEEE 2015: Toward Offering More Useful Data Reliably to Mobile Cloud From
Wireless Sensor Network
Abstract: The integration of ubiquitous wireless sensor network (WSN) and
powerful mobile cloud computing (MCC) is a research topic that is attracting growing interest in
both academia and industry. In this new paradigm, WSN
provides data to the cloud and mobile users request data from the cloud. To support applications
involving WSN-MCC integration, which need to reliably deliver data that are more useful to
mobile users from the WSN to the cloud, this paper first identifies the critical issues that affect the
usefulness of sensory data and the reliability of WSN, then proposes a novel WSN-MCC
integration scheme named TPSS, which consists of two main parts: 1) time and priority-based
selective data transmission (TPSDT) for WSN gateway to selectively transmit sensory data that
are more useful to the cloud, considering the time and priority features of the data requested by
the mobile user and 2) a priority-based sleep scheduling (PSS) algorithm for WSN to reduce energy
consumption so that it can gather and transmit data in a more reliable way. Analytical and
experimental results demonstrate the effectiveness of TPSS in improving usefulness of
sensory data and reliability of WSN for WSN-MCC integration.
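The TPSDT idea of forwarding only time- and priority-relevant readings can be sketched as a simple gateway filter; the record fields and values below are illustrative, not the scheme's actual data model.

```python
def select_readings(readings, window, min_priority):
    """Gateway-side selective transmission sketch: forward to the cloud only
    the sensory readings that fall inside the time window requested by
    mobile users and meet a minimum priority level."""
    start, end = window
    return [r for r in readings
            if start <= r["time"] <= end and r["priority"] >= min_priority]

readings = [
    {"time": 5,  "priority": 2, "value": 21.0},   # outside the window
    {"time": 12, "priority": 3, "value": 22.5},   # in window, high priority
    {"time": 15, "priority": 1, "value": 20.1},   # in window, low priority
]
sent = select_readings(readings, window=(10, 20), min_priority=2)
```

Only the reading that is both timely and sufficiently important is transmitted, saving gateway bandwidth and sensor energy.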
37. IEEE 2015: Secure Distributed Deduplication Systems with Improved Reliability
Abstract: Data deduplication is a technique for eliminating duplicate copies of data, and has
been widely used in cloud storage to reduce storage space and upload bandwidth. However, there
is only one copy of each file stored in the cloud even if such a file is owned by a huge number of
users. As a result, deduplication improves storage utilization while reducing reliability.
Furthermore, the challenge of privacy for sensitive data also arises when they are outsourced by
users to the cloud. Aiming to address the above security challenges, this paper makes the first
attempt to formalize the notion of distributed reliable deduplication system. We propose
new distributed deduplication systems with higher reliability in which the data chunks
are distributed across multiple cloud servers. The security requirements of data confidentiality
and tag consistency are also achieved by introducing a deterministic secret sharing scheme
in distributed storage systems, instead of using convergent encryption as in previous
deduplication systems. Security analysis demonstrates that
our deduplication systems are secure in terms of the definitions specified in the proposed security
model. As a proof of concept, we implement the proposed systems and demonstrate that the
incurred overhead is very limited in realistic environments.
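The deterministic secret sharing that replaces convergent encryption here can be illustrated with a toy n-of-n XOR scheme whose shares are derived from the file's own hash, so the same file always yields the same shares and each server can deduplicate them across users. The paper uses a threshold scheme that tolerates share loss, which this sketch deliberately does not.

```python
import hashlib

def deterministic_shares(data: bytes, n: int):
    """Toy deterministic (n-of-n) XOR sharing: the first n-1 shares are
    pseudorandom streams seeded by the file's hash; the last share closes
    the XOR so that XOR-ing all n shares recovers the file."""
    seed = hashlib.sha256(data).digest()
    shares, acc = [], bytes(len(data))
    for i in range(n - 1):
        share, ctr = b"", 0
        while len(share) < len(data):
            share += hashlib.sha256(seed + bytes([i]) + ctr.to_bytes(4, "big")).digest()
            ctr += 1
        share = share[:len(data)]
        shares.append(share)
        acc = bytes(a ^ b for a, b in zip(acc, share))
    shares.append(bytes(a ^ b for a, b in zip(acc, data)))
    return shares

def reconstruct(shares):
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

shares1 = deterministic_shares(b"secret file", 3)
shares2 = deterministic_shares(b"secret file", 3)
```

Determinism is the property that matters for dedup: two users storing the same file send identical shares to each server, so every server stores each share only once.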
Contact Us: #42/5, 1st Floor, 18th Cross, 21st Main, Vijayanagar, Bangalore-40
Land Mark: Near Maruthi Mandir; www.adritsolutions.com