A hybrid cloud approach for secure authorized deduplication (Ninad Samel)
This document summarizes a research paper that proposes a hybrid cloud approach for secure authorized data deduplication. The paper presents a scheme that uses convergent encryption to encrypt files before uploading them to cloud storage. It also considers the differential privileges of users when performing duplicate checks, in addition to file content. A prototype is implemented to test the proposed authorized duplicate check scheme. Experimental results show the scheme incurs minimal overhead compared to normal cloud storage operations. The goal is to better protect data security while supporting deduplication in a hybrid cloud architecture.
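The convergent-encryption step this summary describes is simple enough to sketch: the encryption key is derived from the file's own content, so identical plaintexts always yield identical ciphertexts and tags, which is what makes deduplication of encrypted data possible. A minimal sketch in Python, assuming the third-party `cryptography` package (the paper's exact construction may differ):

```python
# Minimal sketch of convergent encryption: the key is derived from the
# plaintext itself, so identical files always produce identical ciphertext
# and tags, letting the server deduplicate without reading the content.
import hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def convergent_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    key = hashlib.sha256(plaintext).digest()   # convergent key K = H(F)
    tag = hashlib.sha256(key).digest()         # duplicate-check tag T = H(K)
    # A fixed nonce is tolerable here only because the key is unique per plaintext.
    cipher = Cipher(algorithms.AES(key), modes.CTR(b"\x00" * 16))
    enc = cipher.encryptor()
    return tag, enc.update(plaintext) + enc.finalize()

a = convergent_encrypt(b"same file contents")
b = convergent_encrypt(b"same file contents")
assert a == b   # identical files -> identical tag and ciphertext, so dedup works
```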
iaetsd Controlling data deduplication in cloud storage (Iaetsd Iaetsd)
This document discusses controlling data deduplication in cloud storage. It proposes an architecture that provides duplicate check procedures with minimal overhead compared to normal cloud storage operations. The key aspects of the proposed system are:
1) It uses convergent encryption to encrypt data for privacy while still allowing for deduplication of duplicate files.
2) It introduces a private cloud that manages user privileges and generates tokens for authorized duplicate checking in a hybrid cloud architecture (a hypothetical token sketch follows this list).
3) It evaluates the overhead of the proposed authorized duplicate checking scheme and finds it incurs negligible overhead compared to normal cloud storage operations.
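Item 2's token generation can be pictured as a keyed digest binding the file tag to a privilege, so duplicate checks only match within privileges a user actually holds. The sketch below is hypothetical; the key store, `issue_token`, and the privilege names are illustrative, not the paper's protocol:

```python
# Hypothetical sketch of authorized duplicate-check tokens: the private cloud
# holds a secret key per privilege and issues HMAC tokens binding the file
# tag to that privilege, so duplicate checks only match within a privilege.
import hashlib, hmac

PRIVILEGE_KEYS = {"staff": b"k-staff", "admin": b"k-admin"}  # held by private cloud

def issue_token(file_tag: bytes, privilege: str) -> bytes:
    return hmac.new(PRIVILEGE_KEYS[privilege], file_tag, hashlib.sha256).digest()

# The public cloud stores tokens of existing files; a match means "duplicate
# already uploaded by someone holding the same privilege".
stored = {issue_token(b"tag-of-file-F", "staff")}
assert issue_token(b"tag-of-file-F", "staff") in stored
assert issue_token(b"tag-of-file-F", "admin") not in stored
```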
Guaranteed Availability of Cloud Data with Efficient Cost (IRJET Journal)
This document discusses efficient and cost-effective methods for hosting data across multiple cloud storage providers (multi-cloud) to ensure high data availability and reduce costs. It proposes distributing data among different cloud providers using replication and erasure coding techniques. This approach guarantees data availability even if one cloud provider fails and minimizes monetary costs by taking advantage of varying cloud pricing models and data access patterns. The technique is shown to save around 20% of costs while providing high flexibility to handle data and pricing changes over time.
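To see why the availability claim holds, note that with (k, n) erasure coding the data stays readable as long as at least k of the n providers are up. A quick back-of-the-envelope calculation, with an assumed per-provider availability of 0.99:

```python
# Availability of (k, n) erasure-coded data: readable iff >= k of n
# independent providers are up. Per-provider availability p is an assumption.
from math import comb

def availability(n: int, k: int, p: float) -> float:
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.99                          # assumed single-provider availability
print(availability(1, 1, p))      # single cloud: 0.99
print(availability(3, 1, p))      # 3 full replicas (any one suffices): ~0.999999
print(availability(5, 3, p))      # 3-of-5 erasure coding: ~0.99999, at 5/3x storage
```

Erasure coding buys almost the same availability as full replication at a fraction of the storage, which is where the cost savings in such schemes come from.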
This document discusses using Hidden Markov Model (HMM) forward chaining techniques for prefetching in distributed file systems (DFS) for cloud computing. It begins by introducing DFS for cloud storage and issues like load balancing. It then discusses using HMM to analyze client I/O and predict future requests to prefetch relevant data. The HMM forward algorithm would be used to prefetch data from storage servers to clients proactively. This could improve performance by reducing client wait times for requested data in DFS for cloud applications.
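A toy rendering of that forward step: maintain a belief over hidden access-pattern states, update it with each observed request via the forward recursion, and prefetch whatever the predictive distribution ranks highest. The two-state matrices below are illustrative placeholders, not learned values:

```python
# Toy HMM forward-step predictor for prefetching. States and matrices are
# illustrative; in a real DFS they would be learned from client I/O traces.
A = [[0.8, 0.2], [0.3, 0.7]]      # state transition probabilities
B = [[0.9, 0.1], [0.2, 0.8]]      # P(observed block class | hidden state)
belief = [0.5, 0.5]               # prior over hidden access patterns

def forward_step(belief, obs):
    # alpha'_j = B[j][obs] * sum_i alpha_i * A[i][j], then normalize
    new = [B[j][obs] * sum(belief[i] * A[i][j] for i in range(2)) for j in range(2)]
    z = sum(new)
    return [x / z for x in new]

def predict_next(belief):
    # P(next obs = o) = sum_j (sum_i belief_i * A[i][j]) * B[j][o]
    nxt = [sum(belief[i] * A[i][j] for i in range(2)) for j in range(2)]
    return [sum(nxt[j] * B[j][o] for j in range(2)) for o in range(2)]

for obs in [0, 0, 1]:             # observed request sequence
    belief = forward_step(belief, obs)
probs = predict_next(belief)
prefetch = max(range(2), key=lambda o: probs[o])   # block class to prefetch
print(belief, probs, prefetch)
```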
Improved deduplication with keys and chunks in HDFS storage providers (IRJET Journal)
The document proposes a new deduplication scheme for HDFS storage providers that improves reliability. It uses MD5 hashing to generate unique tags for files and blocks, and stores the tags in a metadata file. Ownership verification is done during upload/download by checking these tags. Encrypted data is distributed across block servers for reliability. Convergent keys derived from hashes encrypt the data blocks using 3DES. This ensures security while allowing deduplication. The scheme achieves both file-level and block-level deduplication and uses distributed key servers for reliable key management at large scale.
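The tag-and-metadata flow described here reduces, at its core, to a content-addressed index consulted at both file and block granularity. A condensed sketch with in-memory stand-ins for the metadata file and block servers (encryption and ownership checks omitted for brevity; MD5 follows the paper's choice, though a modern system would prefer SHA-256):

```python
# Sketch of file- and block-level deduplication with MD5 tags, as in the
# summarized scheme. The metadata store here is an in-memory dict stand-in.
import hashlib

BLOCK = 4096
block_store: dict[str, bytes] = {}     # block tag -> stored block
file_meta: dict[str, list[str]] = {}   # file tag  -> ordered block tags

def upload(data: bytes) -> str:
    ftag = hashlib.md5(data).hexdigest()
    if ftag in file_meta:              # file-level duplicate: store nothing
        return ftag
    tags = []
    for off in range(0, len(data), BLOCK):
        blk = data[off:off + BLOCK]
        btag = hashlib.md5(blk).hexdigest()
        block_store.setdefault(btag, blk)   # block-level dedup
        tags.append(btag)
    file_meta[ftag] = tags
    return ftag

def download(ftag: str) -> bytes:
    return b"".join(block_store[t] for t in file_meta[ftag])

t = upload(b"x" * 10000)               # highly redundant file: few unique blocks
assert download(t) == b"x" * 10000
```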
Secure Distributed Deduplication Systems with Improved Reliability (1crore projects)
ANALYSIS OF ATTACK TECHNIQUES ON CLOUD BASED DATA DEDUPLICATION TECHNIQUES (neirew J)
ABSTRACT
Data in the cloud is increasing rapidly, and this huge amount of data is stored in data centers around the world. Data deduplication allows lossless compression by removing duplicate data, so these data centers can utilize storage efficiently by eliminating redundancy. Attacks on cloud computing infrastructure are not new, but attacks that exploit the deduplication feature are relatively recent and increasingly prominent. Such attacks can occur in several ways and can give away sensitive information. Although deduplication enables efficient storage and bandwidth utilization, the feature also has drawbacks. In this paper, data deduplication features are closely examined, and the behavior of data deduplication under its various parameters is explained and analyzed.
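One attack class such analyses typically examine is the client-side existence check: when a client skips an upload because the provider already knows the hash, an attacker who uploads a guessed file and observes the short-circuit learns that the file is already stored by someone. A schematic illustration with a stand-in provider object, not any real cloud API:

```python
# Schematic dedup side channel. `Provider` is a stand-in object, not a real
# cloud API: with client-side deduplication, a skipped upload leaks that the
# same file already exists in someone else's account.
import hashlib

class Provider:                        # hypothetical dedup-enabled store
    def __init__(self):
        self.known = set()
    def has(self, digest):             # duplicate check by hash
        return digest in self.known
    def store(self, data):
        self.known.add(hashlib.sha256(data).hexdigest())

provider = Provider()
provider.store(b"salary_report_2024.pdf contents")   # victim's earlier upload

def attacker_probe(guess: bytes) -> bool:
    digest = hashlib.sha256(guess).hexdigest()
    return provider.has(digest)        # True => file exists: information leak

print(attacker_probe(b"salary_report_2024.pdf contents"))  # True
```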
This document proposes new distributed deduplication systems with improved reliability for cloud storage. It introduces distributing data chunks across multiple cloud servers to provide better fault tolerance compared to single-server systems. A secret sharing scheme is used to split files into fragments and distribute them across servers, instead of encryption, to achieve data confidentiality while still allowing for deduplication. Security analysis shows the proposed systems achieve confidentiality, integrity, and reliability even if some servers collude. The systems are implemented and shown to incur low overhead.
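The summary does not name the exact secret-sharing construction, but the standard (k, n) threshold scheme behind such designs is Shamir's: any k of the n fragments reconstruct the secret, fewer reveal nothing, and no encryption key has to be stored anywhere. A minimal sketch over a prime field:

```python
# Minimal Shamir (k, n) secret sharing over a prime field, the standard
# construction behind "split into fragments across servers" schemes.
import random

P = 2**127 - 1                           # a Mersenne prime, fine for a demo

def split(secret: int, n: int, k: int):
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):                            # random polynomial of degree k-1
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):                 # Lagrange interpolation at x = 0
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

shares = split(123456789, n=5, k=3)      # 5 cloud servers, any 3 recover
assert reconstruct(shares[:3]) == 123456789
assert reconstruct(shares[1:4]) == 123456789
```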
A NEW FRAMEWORK FOR SECURING PERSONAL DATA USING THE MULTI-CLOUD (ijsptm)
Relying on a single cloud as a storage service is not a proper solution for a number of reasons; for instance, the data could be captured while being uploaded to the cloud, or stolen from the cloud using a stolen ID. In this paper, we propose a solution that aims at offering secure data storage for mobile cloud computing based on a multi-cloud scheme. The proposed solution takes advantage of multiple clouds, data cryptography, and data compression to secure the distributed data: it splits the data into segments, encrypts the segments, compresses them, and distributes them via the multi-cloud while keeping one segment in the mobile device's memory, which prevents extracting the data if the distributed segments are intercepted.
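Read as a pipeline, the proposal is: segment, protect, scatter, and keep one segment on the device. The sketch below follows that outline with stand-in dictionary "clouds" and the `cryptography` package; note it compresses before encrypting, since ciphertext does not compress (the abstract lists the steps in the other order):

```python
# Sketch of the multi-cloud scheme: split into segments, protect each one,
# scatter across clouds, keep one segment only on the device. Clouds here
# are plain dicts standing in for real storage providers.
import os, zlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)      # device-held key (assumption)

def protect(segment: bytes) -> bytes:
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, zlib.compress(segment), None)

def unprotect(blob: bytes) -> bytes:
    return zlib.decompress(AESGCM(key).decrypt(blob[:12], blob[12:], None))

data = b"sensitive mobile data " * 100
n = 4
segs = [data[i::n] for i in range(n)]          # simple interleaved split
clouds = [{} for _ in range(n - 1)]            # n-1 stand-in cloud stores
for i, c in enumerate(clouds):
    c["seg"] = protect(segs[i])
device_segment = segs[-1]                      # kept only on the mobile device

recovered = [unprotect(c["seg"]) for c in clouds] + [device_segment]
out = bytearray(len(data))
for i, s in enumerate(recovered):
    out[i::n] = s
assert bytes(out) == data                      # all n segments needed to rebuild
```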
NEW SECURE CONCURRENCY MANAGEMENT APPROACH FOR DISTRIBUTED AND CONCURRENT ACCES... (ijiert bestjournal)
Handing critical data over to a cloud provider should come with guarantees of security and availability for data at rest, in motion, and in use. Many alternative systems exist for storage services, but data confidentiality in the database-as-a-service paradigm is still immature. We propose a novel architecture that integrates the cloud database service paradigm with data confidentiality and the execution of concurrent operations on encrypted data. The method supports geographically distributed clients connecting directly to an encrypted cloud database and executing concurrent, independent operations, including those that modify the database structure. The proposed architecture has the further advantage of removing intermediate proxies that limit the flexibility, availability, and scalability properties inherent in cloud-based systems. Its efficacy is evaluated through theoretical analyses and extensive experimental results from a prototype implementation based on the TPC-C standard benchmark, for various categories of clients and network latencies. We also propose a multi-keyword ranked search method for encrypted cloud databases that simultaneously fulfills privacy requirements. The proposed scheme can return not only exactly matching files but also files containing terms latently semantically associated with the query keyword.
This document summarizes a study on a new dynamic load balancing approach in cloud environments. It begins by outlining some of the major challenges of load balancing in cloud systems, including uneven distribution of workloads across CPUs. It then proposes a new approach with three main components: 1) a queueing and job assignment process that prioritizes assigning jobs to faster CPUs, 2) a timeout chart to determine when jobs should be migrated or terminated to avoid delays, and 3) a "super node" that acts as a proxy and backup in case other nodes fail. The approach is intended to distribute jobs more efficiently and help cloud systems maintain optimal performance. Finally, the document discusses how this approach could be integrated into existing cloud architectures.
This document presents a Cooperative Provable Data Possession (CPDP) scheme to ensure data integrity in a multicloud storage system. The CPDP scheme uses a trusted third party to generate secret keys, verification tags for data blocks, and store public parameters. It allows a client to issue challenges to verify the integrity of its data stored across multiple cloud service providers. The verification process involves the cloud providers proving possession of the original data file without retrieving the whole file. This scheme aims to efficiently verify data integrity in a multicloud system with support for data migration and scalability.
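The challenge-response shape of such a scheme can be sketched with ordinary HMAC block tags standing in for CPDP's homomorphic tags: the client keeps only per-block tags, challenges a random sample of block indices with a fresh nonce, and verifies the answer without holding the file. Real PDP schemes aggregate the response so blocks need not be returned at all; this simplified version checks a small sampled subset:

```python
# Simplified provable-data-possession flow. HMAC tags stand in for the
# scheme's homomorphic tags, so this sketch returns the challenged blocks
# rather than an aggregated proof; only a small random sample travels.
import hashlib, hmac, os, random

BLOCK = 1024
key = os.urandom(32)                               # client's secret key

def make_tags(data: bytes) -> list[bytes]:
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    return [hmac.new(key, b, hashlib.sha256).digest() for b in blocks]

def server_prove(data: bytes, indices, nonce: bytes):
    blocks = [data[i * BLOCK:(i + 1) * BLOCK] for i in indices]
    h = hashlib.sha256(nonce)
    for b in blocks:
        h.update(b)
    return h.digest(), blocks

def client_verify(tags, indices, nonce, proof, blocks) -> bool:
    h = hashlib.sha256(nonce)
    for i, b in zip(indices, blocks):
        if not hmac.compare_digest(tags[i], hmac.new(key, b, hashlib.sha256).digest()):
            return False                           # block was altered or lost
        h.update(b)
    return hmac.compare_digest(h.digest(), proof)

data = os.urandom(8 * BLOCK)
tags = make_tags(data)                             # client keeps tags, not data
idx, nonce = random.sample(range(8), 3), os.urandom(16)
proof, blocks = server_prove(data, idx, nonce)
assert client_verify(tags, idx, nonce, proof, blocks)
```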
Multi-Level Data Security Model for Big Data on Public Cloud: A New Model (Eswar Publications)
With the advent of cloud computing, big data has emerged as a very crucial technology. Certain types of cloud provide consumers with free services such as storage and computational power. This paper makes use of infrastructure as a service, where the storage service from public cloud providers is leveraged by an individual or organization. The paper emphasizes a model which can be used by anyone without any cost. Users can store confidential data without security concerns, as the data is altered in such a way that it cannot be understood by an intruder, yet the user can retrieve the original data within no time. The proposed security model effectively and efficiently provides robust security while data is on the cloud infrastructure as well as while it is being migrated to or from the cloud.
A scalable and cost-effective framework for privacy preservation over big d... (amna alhabib)
This document proposes a scalable and cost-effective framework called SaC-FRAPP for preserving privacy over big data on the cloud. The key idea is to leverage cloud-based MapReduce to anonymize large datasets before releasing them to other parties. Anonymized datasets are then managed using HDFS to avoid re-computation costs. A prototype system is implemented to demonstrate that the framework can anonymize and manage anonymized big data sets in a highly scalable, efficient and cost-effective manner.
Iaetsd time constrained self-destructing (Iaetsd Iaetsd)
This document summarizes research on time constrained self-destructing data systems (SeDaS) for data privacy. Various techniques have been used to provide security for data stored in the cloud while ensuring performance for uploading and downloading files. Researchers have focused on key encryption/decryption and sharing algorithms. The proposed SeDaS system aims to destruct all data and copies after a specified time period set by the user to protect private data and prevent unauthorized access, even by cloud administrators.
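The core mechanism of such systems fits in a few lines: data exists only as ciphertext, the key lives in a separate store with a time-to-live, and once the key expires the data is unreadable by anyone, including administrators. A local sketch, with an in-process key store standing in for SeDaS's distributed key shares:

```python
# Sketch of time-constrained self-destructing data: the ciphertext outlives
# the key on purpose. The in-process TTL key store stands in for SeDaS's
# distributed key shares; once the key is gone, the data is unrecoverable.
import os, time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key_store: dict[str, tuple[bytes, float]] = {}    # key_id -> (key, expires_at)

def store(data: bytes, ttl_seconds: float) -> tuple[str, bytes]:
    key, key_id = AESGCM.generate_key(bit_length=256), os.urandom(8).hex()
    key_store[key_id] = (key, time.time() + ttl_seconds)
    nonce = os.urandom(12)
    return key_id, nonce + AESGCM(key).encrypt(nonce, data, None)

def read(key_id: str, blob: bytes) -> bytes:
    key, expires = key_store[key_id]
    if time.time() >= expires:
        del key_store[key_id]                      # self-destruct: erase the key
        raise PermissionError("key expired: data has self-destructed")
    return AESGCM(key).decrypt(blob[:12], blob[12:], None)

kid, blob = store(b"private record", ttl_seconds=1.0)
print(read(kid, blob))                             # readable before expiry
time.sleep(1.1)                                    # after this, read() raises
```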
The document discusses privacy and security issues related to cloud storage. It proposes a new privacy-preserving auditing scheme for cloud storage that uses an interactive challenge-response protocol and verification protocol. This allows a third party auditor to verify the integrity and identify corrupted data for a cloud storage user, while preserving data privacy. The scheme aims to be efficient, lightweight and privacy-preserving. Experimental results show the protocol is efficient and achieves its goals.
An asynchronous replication model to improve data available into a heterogene... (Alexander Decker)
This document summarizes a research paper that proposes an asynchronous replication model to improve data availability in heterogeneous systems. The proposed model uses a loosely coupled architecture between main and replication servers to reduce dependencies. It also supports heterogeneous systems, allowing different parts of an application to run on different systems for better performance. This makes it a cost-effective solution for data replication across different system types.
Cloud Data De Duplication in Multiuser Environment DeposM2 (ijtsrd)
Nowadays, cloud computing produces a huge amount of sensitive data, such as personal information, financial data, electronic health records, and social media data. This leads to duplication of data, which hurts the storage efficiency and performance of cloud systems. Data deduplication has been widely used to eliminate redundant storage overhead in cloud storage systems and improve the efficiency of IT resources. However, traditional techniques face a great challenge in big-data deduplication: striking a sensible trade-off between the conflicting goals of scalable deduplication throughput and a high duplicate-elimination ratio. Deduplication reduces the space and bandwidth requirements of data storage services and is most effective when applied across multiple users, a common practice in cloud storage offerings. This work studies the privacy implications of cross-user deduplication. An interesting and challenging problem is how to deduplicate multimedia data in a multi-user environment, and an efficient system is proposed to overcome these problems. The paper introduces a new primitive called DeposM2, which gives a partial positive answer to this challenging problem. It proposes two phases, deduplication and proof of storage, where the first allows deduplication of data and the latter provides proof of storage, i.e., grants permission to the respective user who owns the file. Mr. Kaustubh Borate | Prof. Bharti Dhote, "Cloud Data De-Duplication in Multiuser Environment: DeposM2", International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-5, August 2019. URL: https://www.ijtsrd.com/papers/ijtsrd25270.pdf Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/25270/cloud-data-de-duplication-in-multiuser-environment-deposm2/mr-kaustubh-borate
BFC: High-Performance Distributed Big-File Cloud Storage Based On Key-Value S... (dbpublications)
This document summarizes a research paper about Big File Cloud (BFC), a high-performance distributed big-file cloud storage system based on a key-value store. BFC addresses challenges in designing an efficient storage engine for cloud systems requiring support for big files, lightweight metadata, low latency, parallel I/O, deduplication, distribution, and scalability. It proposes a lightweight metadata design with fixed-size metadata regardless of file size. It also details BFC's architecture, logical data layout using file chunks, metadata and data storage, distribution and replication, and uploading/deduplication algorithms. The results can be used to build scalable distributed data cloud storage supporting files up to terabytes in size.
Effective & Flexible Cryptography Based Scheme for Ensuring User's Data Secur... (ijsrd.com)
Cloud computing has been envisioned as the next-generation architecture of IT enterprise. In contrast to traditional solutions, where the IT services are under proper physical, logical and personnel controls, cloud computing moves the application software and databases to the large data centers, where the management of the data and services may not be fully trustworthy. This unique attribute, however, poses many new security challenges which have not been well understood. In this article, we focus on cloud data storage security, which has always been an important aspect of quality of service. To ensure the correctness of users' data in the cloud, we propose an effective and flexible cryptography based scheme. Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against malicious data modification attack.
This document proposes a seed block algorithm and remote data backup server to help users recover files if the cloud is destroyed or files are deleted. The proposed system stores a backup of user's cloud data on a remote server. It uses a seed block algorithm that breaks files into blocks, takes their XOR, and stores the output to allow data to be recovered. The system was tested on different file types and sizes, showing it could recover same-sized files and required less time than existing solutions. Its applications include secure storage and access to information even without network connectivity.
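The identity the seed block algorithm rests on is (f XOR s) XOR s = f: the remote server stores the file XORed with the client's seed block, and recovery is one more XOR with the same seed. A compact sketch:

```python
# Seed block backup sketch: the remote server stores file XOR seed; recovery
# is another XOR with the same seed, since (f ^ s) ^ s == f.
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

seed = os.urandom(4096)                        # per-client seed block

def _pad(length: int) -> bytes:                # repeat the seed to cover length
    return (seed * (length // len(seed) + 1))[:length]

def backup(file_bytes: bytes) -> bytes:
    return xor(file_bytes, _pad(len(file_bytes)))   # sent to remote backup server

def recover(remote_bytes: bytes) -> bytes:
    return xor(remote_bytes, _pad(len(remote_bytes)))

f = b"cloud file contents" * 300
assert recover(backup(f)) == f                 # file restored after cloud loss
```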
SiDe is a cost-efficient cloud storage mechanism that uses data deduplication and compression to minimize storage usage while maintaining reliability. It uses chunk-level deduplication to identify and store only unique chunks of files. For files stored short-term, only one replica is kept, while files stored long-term have one replica and one compressed copy. Simulation results show SiDe reduces storage by 81-84% compared to traditional 3-replica strategies, significantly lowering cloud storage costs.
The document discusses various existing techniques for remote data backup and recovery in cloud computing such as HSDRT, PCS, ERGOT, Linux Box, and Cold/Hot backup strategies. It summarizes that while these techniques address certain aspects, they are lacking in terms of implementation complexity, cost, security, redundancy, and recovery time. The proposed Seed Block Algorithm aims to address these issues by allowing remote data collection and file recovery in the event of deletion or cloud destruction, while managing time and implementation complexity.
A Survey Paper on Removal of Data Duplication in a Hybrid Cloud (IRJET Journal)
This document summarizes a research paper on removing data duplication in a hybrid cloud. It discusses how data deduplication techniques like single-instance storage and block-level deduplication can reduce storage needs by eliminating duplicate data. It also describes the types of cloud storage (public, private, hybrid) and cloud services (SaaS, PaaS, IaaS). The document proposes encrypting files with differential privilege keys to improve security when checking for duplicate content in a hybrid cloud and prevent unauthorized access during deduplication.
An Efficient PDP Scheme for Distributed Cloud Storage (IJMER)
International Journal of Modern Engineering Research (IJMER) is a peer-reviewed online journal. It serves as an international archival forum of scholarly research related to engineering and science education.
IJMER covers all fields of engineering and science: Electrical Engineering, Mechanical Engineering, Civil Engineering, Chemical Engineering, Computer Engineering, Agricultural Engineering, Aerospace Engineering, Thermodynamics, Structural Engineering, Control Engineering, Robotics, Mechatronics, Fluid Mechanics, Nanotechnology, Simulators, Web-based Learning, Remote Laboratories, Engineering Design Methods, Education Research, Students' Satisfaction and Motivation, Global Projects, Assessment, and many more.
Improving Data Storage Security in Cloud using Hadoop (IJERA Editor)
The rising abuse of information stored in large cloud data centres emphasizes the need to safeguard the data. Despite strict authentication policies, cloud users' data, even when transferred over a secure channel, is vulnerable to numerous attacks once it reaches the data centres. The most widely adopted methodology for safeguarding cloud data is encryption, but encrypting large data sets deployed in the cloud is a time-consuming process. For secure transmission, AES encryption is used, providing a secure way to transfer sensitive information from the sender to the intended receiver; the purpose is to make sensitive information unreadable to everyone except the receiver. Compressing the data further improves utilization of storage space in the cloud environment. The scheme is augmented with Hadoop's map-reduce paradigm, which works in parallel. The experimental results clearly reflect the effectiveness of the methodology in improving the security of data in the cloud environment.
International Journal of Computational Engineering Research (IJCER) (ijceronline)
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
Privacy preserving public auditing for secured cloud storage (dbpublications)
As cloud computing technology has developed over the last decade, outsourcing data to cloud storage services has become an attractive trend, sparing the effort of heavy data maintenance and management. Nevertheless, since outsourced cloud storage is not fully trustworthy, it raises security concerns over how to realize data deduplication in the cloud while achieving integrity auditing. In this work, we study the problem of integrity auditing and secure deduplication of cloud data. Specifically, aiming at achieving both data integrity and deduplication in the cloud, we propose two secure systems, namely SecCloud and SecCloud+. SecCloud introduces an auditing entity that maintains a MapReduce cloud, which helps clients generate data tags before uploading as well as audit the integrity of data already stored in the cloud. Compared with previous work, the computation performed by the user in SecCloud is greatly reduced during the file uploading and auditing phases. SecCloud+ is motivated by the fact that customers always want to encrypt their data before uploading, and it enables integrity auditing and secure deduplication on encrypted data.
Iaetsd secured and efficient data scheduling of intermediate data sets (Iaetsd Iaetsd)
This document discusses securing and efficiently scheduling intermediate data sets in cloud computing. It proposes using an upper bound constraint approach to identify sensitive intermediate data sets for encryption. Suppression techniques like semi-suppression and full-suppression are applied to sensitive data sets to reduce time and costs while the Value Generalization Hierarchy protocol is used to provide security during data access. Optimized balanced scheduling is also used to balance system loads and minimize costs. The goal is to efficiently manage intermediate data sets while preserving privacy.
Cooperative Schedule Data Possession for Integrity Verification in Multi-Clou... (IJMER)
IRJET- Improving Data Availability by using VPC Strategy in Cloud Environ... (IRJET Journal)
This document discusses improving data availability in cloud environments using virtual private cloud (VPC) strategies and data replication strategies (DRS). It proposes using VPC to define private networks in public clouds and deploying cloud resources into those private networks for improved security and control. It also proposes using DRS to store multiple copies of data across different nodes to increase data availability, reduce bandwidth usage, and provide fault tolerance. The proposed approach identifies popular data files for replication, selects the best storage sites based on factors like request frequency, failure probability, and storage usage, and decides when to replace replicas to optimize resource usage. A simulation showed this hybrid VPC and DRS approach improved performance metrics like response time, network usage, and load balancing compared to
DISTRIBUTED SCHEME TO AUTHENTICATE DATA STORAGE SECURITY IN CLOUD COMPUTING (ijcsit)
Cloud computing is the revolution in current-generation IT enterprise. It displaces databases and application software to large data centres, where the management of services and data may not be fully predictable, whereas conventional IT services are under proper logical, physical, and personnel controls. This attribute, however, introduces security challenges which have not been well understood. This paper concentrates on cloud data storage security, which has always been an important aspect of quality of service (QoS). We designed and simulated an adaptable and efficient scheme to guarantee the correctness of user data stored in the cloud, with some prominent features. A homomorphic token is used for distributed verification of erasure-coded data; using this scheme, we can identify misbehaving servers. Unlike past work, our scheme supports effective and secure dynamic operations on data blocks, such as insertion, deletion, and modification. The security and performance analysis shows that the proposed scheme is highly resilient against malicious data modification, complicated failures, and server colluding attacks.
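The phrase "homomorphic token for distributed verification of erasure-coded data" points at one algebraic property: for a linear token t(B) = Σ_j k_j·B[j] mod p over a secret key vector k, the token of any linear combination of blocks equals the same combination of the blocks' tokens. That is what lets a verifier check erasure-coded parity blocks against precomputed tokens. A toy demonstration of the property (not the paper's exact scheme):

```python
# Toy linear homomorphic token: t(B) = sum_j k[j]*B[j] mod p. Linearity gives
# t(a1*B1 + a2*B2) = a1*t(B1) + a2*t(B2), which is what lets a verifier check
# erasure-coded combinations of blocks against precomputed tokens.
import random

P = 2**61 - 1                                  # prime modulus
M = 8                                          # symbols per block
k = [random.randrange(P) for _ in range(M)]    # verifier's secret key vector

def token(block):
    return sum(ki * bi for ki, bi in zip(k, block)) % P

B1 = [random.randrange(P) for _ in range(M)]
B2 = [random.randrange(P) for _ in range(M)]
a1, a2 = random.randrange(P), random.randrange(P)

combined = [(a1 * x + a2 * y) % P for x, y in zip(B1, B2)]   # coded block
assert token(combined) == (a1 * token(B1) + a2 * token(B2)) % P
```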
Cloud Computing is the revolution in current generation IT enterprise. Cloud computing displaces database and application software to the large data centres, where the management of services and data may not be predictable, where as the conventional solutions, for IT services are under proper logical, physical and personal controls. This aspect attribute, however comprises different security challenges which have not been well understood. It concentrates on cloud data storage security which has always been an important aspect of quality of service (QOS). In this paper, we designed and simulated an adaptable and efficient scheme to guarantee the correctness of user data stored in the cloud and also with some prominent features. Homomorphic token is used for distributed verification of erasure – coded data. By using this scheme, we can identify misbehaving servers. In spite of past works, our scheme supports effective and secure dynamic operations on data blocks such as data insertion, deletion and modification. In contrast to traditional solutions, where the IT services are under proper physical, logical and personnel controls, cloud computing moves the application software and databases to the large data centres, where the data management and services may not be absolutely truthful. This effective security and performance analysis describes that the proposed scheme is extremely flexible against malicious data modification, convoluted failures and server clouding attacks.
The document surveys privacy preserving techniques in cloud computing, focusing on the L-EnCDB scheme which uses format preserving encryption to encrypt database fields while maintaining data types, allowing for SQL queries on encrypted data. It also discusses fuzzy queries and the two layer architecture of L-EnCDB with an application layer for encryption and interface and a database layer for SQL functions and data services. The paper concludes that privacy preserving techniques are important for data security in cloud computing.
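Format-preserving encryption itself is intricate, but the property that makes equality SQL queries run over ciphertext can be shown with a simpler stand-in: a deterministic keyed digest per field value lets WHERE clauses compare index columns instead of plaintext. The sqlite3 sketch below illustrates the queryability idea only, not L-EnCDB's construction, and deterministic indexes do leak equality patterns:

```python
# Stand-in for querying encrypted fields: a deterministic HMAC of each value
# serves as an equality index, so SQL WHERE clauses match ciphertext columns.
# This shows the queryability idea, not L-EnCDB's format-preserving scheme.
import hashlib, hmac, sqlite3

key = b"column-index-key"                     # illustrative secret

def eq_index(value: str) -> str:
    return hmac.new(key, value.encode(), hashlib.sha256).hexdigest()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name_idx TEXT, name_ct BLOB)")
for name in ["alice", "bob", "alice"]:
    db.execute("INSERT INTO users VALUES (?, ?)",
               (eq_index(name), b"<ciphertext of " + name.encode() + b">"))

row = db.execute("SELECT count(*) FROM users WHERE name_idx = ?",
                 (eq_index("alice"),)).fetchone()
print(row[0])   # 2 -- equality query answered without decrypting anything
```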
Distributed Large Dataset Deployment with Improved Load Balancing and Perform... (IJERA Editor)
Cloud computing is a paradigm for enabling ubiquitous, convenient, on-demand network access. The cloud is a method of computing in which enormously scalable IT-enabled capabilities are delivered "as a service" over the Internet to multiple external clients. Virtualization is the creation of a virtual form of something such as a computing device or server, an operating system, or network and storage devices. The different names for cloud data management are DaaS (Data as a Service), cloud storage, and DBaaS (Database as a Service). Cloud storage permits users to store data and information in document formats; iCloud, Google Drive, Dropbox, etc. are the most common and widespread cloud storage services. The main challenges connected with cloud databases are fault tolerance, scalability, data consistency, high availability and integrity, confidentiality, and many more. Load balancing improves the performance of the data center. We propose an architecture which provides load balancing for the cloud database: a load-balancing server calculates the load of each data center using our proposed algorithm and distributes the data accordingly among the data centers. Experimental results show that this also improves the performance of the cloud system.
This document summarizes a research paper that proposes a framework called Cooperative Provable Data Possession (CPDP) to verify the integrity of data stored across multiple cloud storage providers. The framework uses two techniques: 1) a Hash Index Hierarchy that allows responses from different cloud providers to a client's challenge to be combined into a single response, and 2) Homomorphic Verifiable Responses that enable efficient verification of data stored on multiple cloud providers. The document outlines the security properties and performance benefits of the CPDP framework for verifying data integrity in a multi-cloud storage environment.
Dynamic Resource Provisioning with Authentication in Distributed Database (Editor IJCATR)
Data centers have among the largest energy consumption in shared infrastructure, serving public cloud workloads of different priorities and performance requirements for various applications [4]. Cloud data centers are capable of sensing opportunities to present different programs. The proposed construction addresses the security level of privacy leakage in a distributed cloud system, dealing with its persistent characteristics, where there are substantial increases in information that can be used to augment profit, reduce overhead, or both. Data mining is a process of analyzing data from different perspectives and summarizing it into useful information. Three empirical algorithms are proposed; their assignment-estimation ratios are dissected theoretically and compared using real Internet latency data across testing methods.
An Optimal Cooperative Provable Data Possession Scheme for Distributed Cloud ...IJMER
The International Journal of Modern Engineering Research (IJMER) is a peer-reviewed online journal. It serves as an international archival forum of scholarly research related to engineering and science education.
The document discusses using network coding with multi-generation mixing to improve data recovery in cloud storage systems. It provides a literature review of several papers that use techniques like Maximum Distance Separation codes, random linear network coding, and instantly decodable network coding. The proposed work develops an architecture that uses multi-generation mixing and the DODEX+ encoding scheme to encode and retrieve data across multiple mobile clients and cloud storage. This aims to provide more efficient and reliable data delivery over wireless mesh networks. Tools like Amazon S3 and the NS2 network simulator are used to implement and test the proposed system.
Preserving Privacy Policy- Preserving public auditing for data in the cloudinventionjournals
The International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of engineering, science and technology, including new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
Role Based Access Control Model (RBACM) With Efficient Genetic Algorithm (GA)...dbpublications
This document summarizes a research paper that proposes a new cloud data security model using role-based access control, encryption, and genetic algorithms. The model uses Token Based Data Security Algorithm (TBDSA) combined with RSA and AES encryption to securely encode, encrypt, and forward cloud data. A genetic algorithm is used to generate encrypted passwords for cloud users. Role managers are assigned to control user roles and data access. The aim is to integrate encoding, encrypting, and forwarding for secure cloud storage while minimizing processing time.
A novel cloud storage system with support of sensitive data applicationijmnct
Most users are willing to store their data in the cloud storage system and use the many facilities of the cloud, but their sensitive data applications face potentially serious security threats. In this paper, the security requirements of sensitive data applications in the cloud are analyzed, and an improved structure for the typical cloud storage system architecture is proposed. A hardware USB key is used in the proposed architecture to enhance the security of user identity and of the interaction between users and the cloud storage system. Moreover, drawing on the idea of active data protection, a data security container is introduced in the system to enhance the security of the data transmission process, by encapsulating the encrypted data and adding appropriate access control and data management functions. Static data blocks are replaced with a dynamic, executable data security container. An enhanced security architecture for the cloud storage terminal software is then proposed for better adaptation to users' specific requirements; its functions and components can be customized. The proposed architecture is also capable of detecting whether the execution environment conforms to pre-defined environment requirements.
Improving availability and reducing redundancy using deduplication of cloud storage system
1. Dissertation Phase-II Presentation On:
“Improving the availability and reducing redundancy using deduplication of cloud storage system”
Presented by:
Mr. Dhanaraj S. Patil.
Under The Guidance Of:
Mrs. R.J. Deshmukh.
2. OUTLINE
• Cloud storage system
• Cloud of clouds
• Replication & Erasure code
• Problem Statement
• Achieved objectives
• System Architecture
• Experimental Setup
• Implementation & Result
• Conclusion
3. CLOUD STORAGE SYSTEM
• The digital data is stored in logical pools
• Public, private and hybrid
• Advantages:
Pay-per-use
Availability
• Disadvantages:
Service outage
Vendor lock-in problem
• Examples:
Amazon S3, Windows Azure
4. CLOUD-OF-CLOUD
• The digital data is stored in logical pools
• Multiple cloud vendors at one point
• Low cost
• No vendor lock-in
• Example: DepSky
5. REPLICATION
• Creating multiple copies of data
• Widely used in cloud storage systems
• 3-replica strategy
• Improves reliability, fault tolerance and accessibility
6. ERASURE CODE
• Data is broken into fragments, expanded and encoded with
redundant data pieces
• Consumes less storage
• Data can be rebuilt from a sufficient subset of the fragments
• Drawback: CPU-intensive
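As a rough worked comparison (the numbers are illustrative, assuming an MDS code, and are not taken from the presented system): storing a 1 GB file under the 3-replica strategy consumes 3 GB, a 3x overhead, whereas a (k = 4, m = 2) erasure code splits the file into four 0.25 GB data fragments plus two 0.25 GB parity fragments, consuming 1.5 GB, a 1.5x overhead, while still tolerating the loss of any two fragments.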
7. LITERATURE SURVEY
• Ensuring Cloud data reliability with minimum replication by proactive replica checking
• Replication-based Load Balancing scheme
8. PROBLEM STATEMENT
To develop a system that implements efficient cloud storage using a data deduplication technique to avoid the data redundancy problem.
9. ACHIEVED OBJECTIVES
• To study the different data distribution techniques in cloud systems.
• To analyze the hybrid redundancy distribution (HyRD) scheme.
• To design a system for the data redundancy problem by applying data deduplication with versioning.
• To compare the performance of the implemented system with the existing system.
12. Message Digest 5 algorithm [MD5]:
Step 1: Append padding bits
The message is padded so that its length is congruent to 448 modulo 512.
Step 2: Append length
A 64-bit representation of the original message length is appended.
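For example (illustrative numbers): a 1000-bit message is padded with 472 bits to reach 1472 bits, since 1472 mod 512 = 448; appending the 64-bit length field then brings the total to 1536 bits, exactly three 512-bit blocks.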
Step 3: Initialize MD buffer
A four-word buffer (A, B, C, D) is used to store intermediate and final results, initialized as:
word A: 01 23 45 67
word B: 89 ab cd ef
word C: fe dc ba 98
word D: 76 54 32 10
13. Message Digest 5 algorithm [MD5], contd.
Step 4: Process the message in 16-word blocks
Four auxiliary functions are defined, which are used to process the message one 512-bit block at a time.
Step 5: Output
The final buffer words A, B, C and D are concatenated and written in hexadecimal to produce the 128-bit digest.
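To make the duplicate check concrete, here is a minimal PHP sketch (PHP and MySQL are the stack listed in the experimental setup on the next slide). The file_index table, its columns, and the connection details are hypothetical illustrations for this presentation, not the exact schema of the implemented system.

<?php
// Minimal sketch of an MD5-based duplicate check (hypothetical schema).
// Assumed table: file_index(hash CHAR(32), filename VARCHAR(255), version INT).
$pdo = new PDO('mysql:host=localhost;dbname=cloudstore', 'user', 'pass');

function isDuplicate(PDO $pdo, string $path): bool {
    $hash = md5_file($path); // hex-encoded 128-bit digest of the file content
    $stmt = $pdo->prepare('SELECT COUNT(*) FROM file_index WHERE hash = ?');
    $stmt->execute([$hash]);
    return $stmt->fetchColumn() > 0; // true => identical content already stored
}

if (!isDuplicate($pdo, 'uploads/report.pdf')) {
    // Upload the file to cloud storage and record its hash in file_index.
}

Because the digest depends only on file content, two uploads of the same bytes map to the same index entry, which is exactly what the duplicate check needs.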
14. EXPERIMENTAL SETUP
1. Hardware Requirements
Processor: Pentium Dual-Core 2.50 GHz (Or Above)
Memory: 1GB (Or Above)
2. Software Requirements
Operating System: Windows 7/8 and above
Front end & back end: HTML, PHP
Database: MySQL
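For concreteness, a minimal MySQL schema for the hash-and-version metadata might look as follows; the table and column names are the same hypothetical file_index used in the sketch above, not the exact schema of the implemented system.

<?php
// Illustrative metadata table for the deduplication index (assumed schema).
$pdo = new PDO('mysql:host=localhost;dbname=cloudstore', 'user', 'pass');
$pdo->exec('
    CREATE TABLE IF NOT EXISTS file_index (
        id       INT AUTO_INCREMENT PRIMARY KEY,
        hash     CHAR(32)     NOT NULL,   -- hex-encoded MD5 digest
        filename VARCHAR(255) NOT NULL,
        version  INT          NOT NULL DEFAULT 1,
        size_kb  INT,                     -- file size, for the storage-consumption results
        INDEX (hash),
        INDEX (filename)
    )
');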
19. RESULTS
The following table describes the storage consumption in the cloud, considering the size of the file. We compare the storage space used by the existing system and by our system for a fixed-size file.
22. RESULTS
In file versioning we create versions of a file that share the same file name but contain different content. We attach the version number to the file name to create the new file. We compare the existing system and the implemented system by uploading files with the same name but different content. The following figure shows the version count of a file against the number of upload attempts with different content, for the implemented system and the existing system.
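The versioning behaviour described above can be sketched in PHP as follows, using the same hypothetical file_index table as before; the logic mirrors the description: an upload with an identical name and identical content is deduplicated, while an identical name with new content receives the next version number.

<?php
// Sketch of filename-based versioning over the hypothetical file_index table.
function storeWithVersioning(PDO $pdo, string $path, string $name): string {
    $hash = md5_file($path);

    // Same name and same content: a duplicate, so reuse the stored copy.
    $stmt = $pdo->prepare('SELECT version FROM file_index WHERE filename = ? AND hash = ?');
    $stmt->execute([$name, $hash]);
    if ($stmt->fetchColumn() !== false) {
        return $name; // no new data is written
    }

    // Same name, different content: attach the next version number.
    $stmt = $pdo->prepare('SELECT COALESCE(MAX(version), 0) + 1 FROM file_index WHERE filename = ?');
    $stmt->execute([$name]);
    $version = (int) $stmt->fetchColumn();

    $pdo->prepare('INSERT INTO file_index (hash, filename, version) VALUES (?, ?, ?)')
        ->execute([$hash, $name, $version]);
    return $name . '_v' . $version; // e.g. "report.pdf_v2"
}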
24. RESULTS
The following figure shows a graphical analysis of file uploading to the cloud, where the x-axis gives the file size in KB and the y-axis gives the time in seconds.
26. CONCLUSION
Availability is one of the key constraints of a cloud storage service that users must consider while uploading data to the cloud. With a single-cloud storage system, problems such as vendor lock-in and service outages may arise. In the existing system, the inter-cloud system was based on the hybrid redundancy distribution technique, but it still showed data redundancy issues. The implemented system addresses this problem with the help of MD5 and versioning.
The system applies several techniques to reduce the data redundancy problem: the MD5 algorithm is used to verify the hash values of files, and file versions are maintained for the availability and durability of the data. An experimental study shows that the redundancy problem can be reduced and data availability maintained with our approach. As future work, we plan to add security to the system for data sharing and to provide access control policies.
27. REFERENCES
[1] Bo Mao, Suzhen Wu and Hong Jiang “Exploiting Workload Characteristics and Service Diversity to
Improve the Availability of Cloud Storage Systems”, IEEE Transactions on Parallel and Distributed
Systems, Pages: 2010 – 2021, Year: 2016.
[2] Wenhao Li, Yun Yang, Dong Yuan, “Ensuring Cloud data reliability with minimum replication by
proactive replica checking”, IEEE TRANSACTIONS ON COMPUTERS, Pages: 1494 - 1506, Year: 2016.
[3] Maomeng Su, Lei Zhang, Yongwei Wu, Kang Chen, and Keqin Li, “Systematic Data Placement
Optimization in Multi-Cloud Storage for Complex Requirements”, IEEE TRANSACTIONS ON
COMPUTERS, Pages: 1964 –1977, Year: 2016.
[4] Amir Nahir, Ariel Orda, and Danny Raz, “Replication-based Load Balancing”, IEEE TRANSACTIONS
ON PARALLEL AND DISTRIBUTED SYSTEMS, Pages: 494 – 507, Year: 2016.
[5] Shiuan-Tzuo Shen, Hsiao-Ying Lin, and Wen-Guey Tzeng, “An Effective Integrity Check Scheme for
Secure Erasure Code-Based Storage Systems”, IEEE TRANSACTIONS ON RELIABILITY, Pages: 840 –
851, Year: 2015.
[6] Ayad F. Barsoum and M. Anwar Hasan, “Provable Multicopy Dynamic Data Possession in Cloud
Computing Systems”, IEEE TRANSACTIONS ON INFORMATION FORENSICS AND
SECURITY, Pages: 485 – 497, Year: 2015.
[7] Frederik Armknecht, Jens-Matthias Bohli, Ghassan O. Karame, Franck Youssef, “Transparent Data
Deduplication in the Cloud”, In Proceedings of the 22nd ACM SIGSAC Conference on Computer and
Communications Security, October 2015.
[8] N. Jayapandian, Dr. A.M.J. Md. Zubair Rahman, I. Nandhini, “A Novel Approach for Handling Sensitive Data with Deduplication Method in Hybrid Cloud”, 2015 Online International Conference on Green Engineering and Technologies (IC-GET 2015), Pages: 1 – 6, Year: 2015.
[9] Ghazal Riahi “E-learning systems based on cloud computing: A Review”, Procedia Computer Science
62, 352 – 359, 2015.
[10] Hui Zhang, Guofei Jiang, Kenji Yoshihira, and Haifeng Chen, “Proactive Workload Management in
Hybrid Cloud Computing”, IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT,
Pages: 90 – 100, Year: 2014.
28. REFERENCES
[11] X. Zhang, M. Tsugawa, Y. Zhang, H. Song, C. Cao, G. Huang, and J. Fortes. Towards Model-Defined
Cloud of Clouds, In Proceedings of the 17th International Conference on Model Driven Engineering
Languages and Systems (MODELS'14), pages 41–45, Sep. 2014.
[12] Osama Khan, Randal Burns, James Plank, William Pierce Cheng Huang, “Rethinking Erasure Codes for
Cloud File Systems: Minimizing I/O for Recovery and Degraded Reads”, In Proceedings of the 10th
USENIX conference on File and Storage Technologies, Pages 20-20, February 2012.
[13] Jain, A. and S. Chawla, “E-learning in the cloud”, International Journal of Latest Research in Science and Technology, 2(1): 478–481, 2013.
[14] Y. Ma, T. Nandagopal, K. Puttaswamy, and S. Banerjee, “An Ensemble of Replication and Erasure Codes
for Cloud File Systems”, In Proceedings of the 32nd IEEE International Conference on Computer
Communications (INFOCOM'13), pages 1276–1284, Apr. 2013.
[15] Cloud computing: https://en.wikipedia.org/wiki/Cloud_computing
[16] Y. Wang, L. Alvisi, and Mike Dahlin. Gnothi: Separating Data and Metadata for Efficient and Available
Storage Replication, In Proceedings of the 2012 USENIX Annual Technical Conference (ATC'12), pages 413–424, Jun. 2012.
[17] Md. Alam Hossain, Md. Kamrul Islam, Subrata Kumar Das and Md. Asif Nashiry “CRYPTANALYZING
OF MESSAGE DIGEST ALGORITHMS MD4 AND MD5”, International Journal on Cryptography and
Information Security (IJCIS), Vol. 2, No. 1, March 2012.
[18] DepSky: http://cloud-of-clouds.github.io/depsky/
[19] Hussam Abu-Libdeh, Lonnie Princehouse, Hakim Weatherspoon, “RACS: A Case for Cloud Storage
Diversity”, In Proceedings of the 1st ACM symposium on Cloud computing, Pages 229-240, June 2010.
[20] Alysson Bessani, Miguel Correia, Bruno Quaresma, Fernando André, Paulo Sousa, “DEPSKY:
Dependable and Secure Storage in a Cloud-of-Clouds”, In Proceedings of the sixth conference on
Computer systems, Pages 31-46, April 2011.
[21] Rivest R., 1992, “The MD5 Message-Digest Algorithm”, RFC 1321, MIT LCS and RSA Data Security, Inc.