Outsourcing data to a third-party administrative control, as is done in cloud computing, gives rise to security concerns. Data compromise may occur due to attacks by other users and nodes within the cloud. Therefore, strong security measures are required to protect data within the cloud; at the same time, the security strategy employed must also take into account the optimization of the data retrieval time. In this paper, we propose Division and Replication of Data in the Cloud for Optimal Performance and Security (DROPS), which collectively addresses the security and performance issues. In the DROPS methodology, we divide a file into fragments and replicate the fragmented data over the cloud nodes. Each node stores only a single fragment of a particular data file, which ensures that even in the case of a successful attack, no meaningful information is revealed to the attacker. Moreover, the nodes storing the fragments are separated by a certain distance using graph T-coloring, to prevent an attacker from guessing the locations of the fragments. Furthermore, the DROPS methodology does not rely on traditional cryptographic techniques for data security, thereby relieving the system of computationally expensive operations. We show that the probability of locating and compromising all of the nodes storing the fragments of a single file is extremely low. We also compare the performance of the DROPS methodology with ten other schemes. A higher level of security with only a slight performance overhead was observed.
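The placement step can be illustrated with a short sketch. The code below is a minimal, hypothetical illustration rather than the authors' implementation: the toy ring topology, the greedy node selection, and the forbidden-distance set T are all assumptions made for the example. It splits a file into fragments and assigns each fragment to a distinct cloud node while enforcing a T-coloring-style separation, i.e. the hop distance between any two chosen nodes must not fall in T.

```python
from collections import deque

def hop_distance(adj, src):
    """BFS hop distances from src to every reachable node in the graph."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def place_fragments(data, n_fragments, adj, forbidden_t={0, 1}):
    """Split data into fragments and map each to a node such that the hop
    distance between any two selected nodes lies outside forbidden_t.
    Returns {node: fragment} or None if no feasible placement exists."""
    size = -(-len(data) // n_fragments)            # ceiling division
    fragments = [data[i:i + size] for i in range(0, len(data), size)]
    placement, chosen = {}, []
    for frag in fragments:
        for node in adj:
            if node in placement:
                continue
            # T-coloring constraint: distance to every already-chosen node not in T
            if all(hop_distance(adj, node).get(c, float("inf")) not in forbidden_t
                   for c in chosen):
                placement[node] = frag
                chosen.append(node)
                break
        else:
            return None                            # no node satisfies the separation
    return placement

# Toy topology: 6 nodes in a ring; fragments must be at least 2 hops apart.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(place_fragments(b"confidential-file-bytes", 3, ring))
```

With the ring example above, the greedy pass selects nodes 0, 2 and 4, so no two fragments of the same file ever sit on adjacent nodes.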
Drops division and replication of data in cloud for optimal performance and security - Pvrtechnologies Nellore
This document proposes a method called DROPS (Division and Replication of Data in Cloud for Optimal Performance and Security) that divides files into fragments and replicates the fragments across cloud nodes for improved security and performance. The method stores each file fragment on a separate node to prevent attackers from accessing full files even if some nodes are compromised. It also separates nodes storing fragments using graph coloring to obscure fragment locations. The method aims to improve retrieval time by selecting central nodes and replicating fragments on high-traffic nodes. The document compares DROPS to 10 replication strategies and evaluates it using 3 data center network architectures.
IJERA (International Journal of Engineering Research and Applications) is an international, online, ... peer-reviewed journal. For more details or to submit your article, please visit www.ijera.com
DISTRIBUTED SCHEME TO AUTHENTICATE DATA STORAGE SECURITY IN CLOUD COMPUTING - ijcsit
Cloud computing is the revolution in the current generation of IT enterprise. It moves databases and application software to large data centres, where the management of services and data may not be fully trustworthy, whereas conventional IT services remain under proper logical, physical and personnel controls. This attribute, however, introduces security challenges that have not been well understood. This paper concentrates on cloud data storage security, which has always been an important aspect of quality of service (QoS). We designed and simulated an adaptable and efficient scheme to guarantee the correctness of user data stored in the cloud, along with some prominent features. A homomorphic token is used for distributed verification of erasure-coded data, and with this scheme we can identify misbehaving servers. In contrast to past works, our scheme supports effective and secure dynamic operations on data blocks such as insertion, deletion and modification. The security and performance analysis shows that the proposed scheme is highly resilient against malicious data modification, complex failures and server colluding attacks.
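The verification idea can be sketched in simplified form. The snippet below is an illustrative approximation, not the paper's homomorphic token construction: before upload, the owner precomputes a keyed digest over a few randomly sampled block positions for each server; later, a challenge asks the server to recompute the digest over the same positions, and a mismatch flags that server as misbehaving. All names and parameters are assumptions for the example.

```python
import hmac, hashlib, random

def make_token(key, blocks, positions):
    """Keyed digest over the blocks at the sampled positions."""
    mac = hmac.new(key, digestmod=hashlib.sha256)
    for i in positions:
        mac.update(blocks[i])
    return mac.hexdigest()

def precompute_challenges(key, server_blocks, rounds=3, sample=2, seed=42):
    """Owner side: sample block positions per server and store the expected tokens."""
    rng = random.Random(seed)
    challenges = {}
    for server, blocks in server_blocks.items():
        challenges[server] = []
        for _ in range(rounds):
            positions = sorted(rng.sample(range(len(blocks)), sample))
            challenges[server].append((positions, make_token(key, blocks, positions)))
    return challenges

def audit(key, server_blocks, challenges):
    """Ask each server to answer its stored challenges; report misbehaving servers."""
    bad = []
    for server, stored in challenges.items():
        for positions, expected in stored:
            answer = make_token(key, server_blocks[server], positions)
            if not hmac.compare_digest(answer, expected):
                bad.append(server)
                break
    return bad

# Toy example with three servers, each holding four coded blocks.
key = b"owner-secret"
servers = {s: [bytes([s * 16 + i]) * 8 for i in range(4)] for s in range(3)}
tokens = precompute_challenges(key, servers)
servers[1] = [b"corrupted"] * 4        # simulate a misbehaving server
print(audit(key, servers, tokens))     # -> [1]
```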
iaetsd Controlling data deduplication in cloud storage - Iaetsd Iaetsd
This document discusses controlling data deduplication in cloud storage. It proposes an architecture that provides duplicate check procedures with minimal overhead compared to normal cloud storage operations. The key aspects of the proposed system are:
1) It uses convergent encryption to encrypt data for privacy while still allowing for deduplication of duplicate files.
2) It introduces a private cloud that manages user privileges and generates tokens for authorized duplicate checking in a hybrid cloud architecture.
3) It evaluates the overhead of the proposed authorized duplicate checking scheme and finds it incurs negligible overhead compared to normal cloud storage operations.
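Convergent encryption, mentioned in item 1 above, derives the key deterministically from the file content so that identical files encrypt to identical ciphertexts and can be deduplicated without the provider seeing the plaintext. A minimal sketch follows; the XOR keystream here is for illustration only, not a production cipher, and the storage layout is an assumption.

```python
import hashlib

def convergent_key(data: bytes) -> bytes:
    """The key is the hash of the content itself, so equal files yield equal keys."""
    return hashlib.sha256(data).digest()

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy counter-mode keystream from SHA-256; illustration only, not a real cipher."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def store(cloud: dict, data: bytes) -> str:
    """Encrypt with the convergent key and index by ciphertext hash (the dedup tag)."""
    key = convergent_key(data)
    ciphertext = keystream_xor(key, data)
    tag = hashlib.sha256(ciphertext).hexdigest()
    if tag not in cloud:            # duplicate check: identical files share one copy
        cloud[tag] = ciphertext
    return tag

cloud = {}
t1 = store(cloud, b"quarterly-report.pdf contents")
t2 = store(cloud, b"quarterly-report.pdf contents")   # same file from another user
print(t1 == t2, len(cloud))        # True 1 -- deduplicated
```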
A hybrid cloud approach for secure authorized deduplication - Ninad Samel
This document summarizes a research paper that proposes a hybrid cloud approach for secure authorized data deduplication. The paper presents a scheme that uses convergent encryption to encrypt files before uploading them to cloud storage. It also considers the differential privileges of users when performing duplicate checks, in addition to file content. A prototype is implemented to test the proposed authorized duplicate check scheme. Experimental results show the scheme incurs minimal overhead compared to normal cloud storage operations. The goal is to better protect data security while supporting deduplication in a hybrid cloud architecture.
1) The document proposes a scheme for ensuring data storage security in cloud computing using erasure coding and distributed verification of encoded data blocks across multiple servers.
2) The scheme achieves storage correctness insurance and can identify misbehaving servers if data corruption is detected.
3) It supports secure and efficient dynamic operations on data blocks like update, delete, and append, which many previous works did not consider.
Proposed Model for Enhancing Data Storage Security in Cloud Computing Systems - Hossam Al-Ansary
This document proposes a model for enhancing data storage security in cloud computing systems. It discusses threats and attacks to cloud data storage from external and internal sources. It then describes three common cloud deployment models: public clouds, private clouds, and hybrid clouds. The document proposes that cloud systems should include cloud service providers, users, and third party auditors. It also outlines two types of potential adversaries (weak and strong). Finally, it proposes design goals for secure cloud data storage systems, including ensuring storage correctness, fast error localization, dynamic data support, dependability, and lightweight verification.
The document proposes a multistage optimization framework called OCDO that provides scalable security provisioning in SDN-based cloud environments. OCDO uses a decomposition technique that leverages network topology characteristics to allow concurrent optimization. It also uses a segmentation technique to distribute security function availability among network nodes based on semantics. The framework aims to optimize both computing and networking resources simultaneously while ensuring security function precedence requirements and proper handling of stateful functions. Evaluation results show OCDO achieves higher efficiency and can scale to large data center topologies.
Migration of Virtual Machine to improve the Security in Cloud Computing - IJECE IAES
Cloud services help individuals and organizations use data that are managed by third parties at remote locations. With the growth of the cloud computing environment, security has become the major concern in moving data and applications to the cloud, because individuals do not trust third-party cloud providers with their private and most sensitive data and information. This paper presents the migration of virtual machines to improve security in cloud computing. A virtual machine (VM) is an emulation of a particular computer system. In cloud computing, virtual machine migration is a useful tool for migrating operating system instances across multiple physical machines; it is used for load balancing, fault management, low-level system maintenance and reducing energy consumption. VM migration is a powerful management technique that gives data centre operators the ability to adapt the placement of VMs in order to better satisfy performance objectives, improve resource utilization and communication locality, achieve fault tolerance, reduce energy consumption, and facilitate system maintenance activities. The proposed migration-based security approach shows that the placement of VMs can make an enormous difference in terms of security levels. On the basis of survivability analysis of VMs and Discrete Time Markov Chain (DTMC) analysis, we design an algorithm that generates a secure placement arrangement so that guest VMs can be moved before an attack succeeds.
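The idea of survivability-driven placement can be sketched very simply. The snippet below is an illustrative toy model, not the paper's DTMC formulation: each host is assigned a per-step compromise probability (an assumed input), survivability over a horizon is the probability that no compromise occurs, and a guest VM is migrated to the most survivable host once its current host's survivability drops below a threshold.

```python
def survivability(p_compromise_per_step: float, steps: int) -> float:
    """Probability that a host survives `steps` attack steps without compromise."""
    return (1.0 - p_compromise_per_step) ** steps

def choose_placement(hosts: dict, horizon: int) -> str:
    """Pick the host with the highest survivability over the planning horizon."""
    return max(hosts, key=lambda h: survivability(hosts[h], horizon))

def maybe_migrate(vm_host: str, hosts: dict, horizon: int, threshold: float = 0.9):
    """Migrate the guest VM before an attack is likely to succeed."""
    if survivability(hosts[vm_host], horizon) < threshold:
        target = choose_placement(hosts, horizon)
        if target != vm_host:
            return target        # trigger live migration to the safer host
    return vm_host

# Hypothetical per-step compromise probabilities for three hosts.
hosts = {"hostA": 0.05, "hostB": 0.01, "hostC": 0.02}
print(maybe_migrate("hostA", hosts, horizon=10))   # -> hostB
```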
This document presents a proposed approach called ICCC (Information Correctness to the Customers in Cloud Data Storage) to provide customers with proof of the correctness of their data stored in the cloud. The ICCC approach aims to minimize storage and computation costs for both customers and cloud storage providers. It involves the customer pre-processing their file by generating and encrypting metadata about random bits of each data block before uploading the file. This metadata is appended to the file. To verify correctness, the customer challenges the cloud storage provider by specifying a data block and bit, and the provider must return the correct metadata bit. This allows verification with minimal access to the entire file and low overhead for both parties.
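The challenge-response idea described for ICCC can be shown in a short sketch. This is a hypothetical simplification of the approach: before upload, the customer records the values of a few randomly chosen bit positions per block (in the real scheme this metadata would be encrypted and appended to the file); to audit, the customer asks the provider for one block's bit at a recorded position and compares.

```python
import random

def bit_at(block: bytes, bit_index: int) -> int:
    """Return the bit at position bit_index within a block."""
    return (block[bit_index // 8] >> (7 - bit_index % 8)) & 1

def preprocess(blocks, bits_per_block=4, seed=7):
    """Customer side: remember a few random bit positions and values per block."""
    rng = random.Random(seed)
    meta = []
    for block in blocks:
        positions = rng.sample(range(len(block) * 8), bits_per_block)
        meta.append([(p, bit_at(block, p)) for p in positions])
    return meta

def challenge(provider_blocks, meta, block_id, probe) -> bool:
    """Ask the provider for one stored bit and verify it against the metadata."""
    position, expected = meta[block_id][probe]
    return bit_at(provider_blocks[block_id], position) == expected

blocks = [b"customer data block one!", b"customer data block two!"]
meta = preprocess(blocks)
print(challenge(blocks, meta, block_id=1, probe=0))        # True: data intact
tampered = [blocks[0], bytes(b ^ 0xFF for b in blocks[1])] # flip every bit of block 1
print(challenge(tampered, meta, block_id=1, probe=0))      # False: corruption detected
```

Because the provider does not know which positions were recorded, it must keep the whole file intact to answer challenges correctly, while the customer only stores a few bits of metadata per block.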
Survey on cloud backup services of personal storage - eSAT Journals
Abstract: In the widespread cloud environment, cloud services are growing tremendously due to the large amount of personal computing data. Deduplication is used to avoid redundant data. A cloud storage environment for data backup of personal computing devices faces various challenges in source deduplication for cloud backup services: 1) low deduplication efficiency, due to exclusive access to a large amount of data and the limited system resources of the PC-based client site; and 2) low data transfer efficiency, because the deduplicated data transferred from the source to the backup server are typically small but must often travel across the WAN. Keywords: cloud computing, deduplication, cloud backup, application awareness
A SECURITY FRAMEWORK IN CLOUD COMPUTING INFRASTRUCTURE - IJNSA Journal
In a typical cloud computing environment, diverse components such as hardware, software, firmware, networking and services integrate to offer different computational facilities, while the Internet or a private network (VPN) provides the backbone required to deliver the services. The security risks to the cloud system limit the benefits of cloud computing, such as on-demand, customized resource availability and performance management. It is understood that current IT and enterprise security solutions are not adequate to address cloud security issues. This paper explores the challenges and issues of security concerns in cloud computing through different standard and novel solutions. We propose an analysis and architecture for incorporating different security schemes, techniques and protocols for cloud computing, particularly in Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) systems. The proposed architecture is generic in nature, not dependent on the type of cloud deployment, application agnostic, and not coupled with the underlying backbone. This facilitates managing the cloud system more effectively and allows the administrator to include specific solutions to counter threats. We have also shown, using experimental data, how a cloud service provider can estimate its charging based on the security services it provides, and how a security-related cost-benefit analysis can be performed.
This document discusses secure cloud data management and proposes solutions using the Declarative Secure Distributed Systems (DS2) platform. It first categorizes threats from the cloud infrastructure, communication network, and users. It then describes how DS2 uses a declarative language called SeNDlog to specify secure distributed systems and enforce policies across layers. Finally, it provides an example of how DS2 can implement authenticated MapReduce query processing in the cloud to address security challenges.
Effective & Flexible Cryptography Based Scheme for Ensuring User's Data Secur... - ijsrd.com
Cloud computing has been envisioned as the next-generation architecture of IT enterprise. In contrast to traditional solutions, where the IT services are under proper physical, logical and personnel controls, cloud computing moves the application software and databases to the large data centers, where the management of the data and services may not be fully trustworthy. This unique attribute, however, poses many new security challenges which have not been well understood. In this article, we focus on cloud data storage security, which has always been an important aspect of quality of service. To ensure the correctness of users' data in the cloud, we propose an effective and flexible cryptography based scheme. Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against malicious data modification attack.
This document summarizes a research paper that proposes a scheme for ensuring security and reliability of data stored in the cloud. The scheme utilizes erasure coding to redundantly store encrypted data fragments across multiple cloud servers. It generates homomorphic tokens that allow auditing of the data storage and identification of any misbehaving servers. The scheme supports secure dynamic operations like modification, deletion and append of cloud data files. Analysis shows the scheme is efficient and resilient against various security threats like server compromises or failures. It ensures storage correctness and fast localization of data errors in the cloud.
This white paper discusses an integrated security solution from Juniper Networks for virtualized data centers and clouds. It addresses the security challenges introduced by virtualized workloads, which physical firewalls have limited visibility into. The solution includes Juniper's SRX firewalls to protect physical workloads and segment traffic, and Firefly Host virtual firewalls to protect virtualized workloads within hypervisors and enforce the same security policies. This provides consistent security across physical and virtual environments as organizations adopt cloud computing.
A Survey Paper on Removal of Data Duplication in a Hybrid Cloud - IRJET Journal
This document summarizes a research paper on removing data duplication in a hybrid cloud. It discusses how data deduplication techniques like single-instance storage and block-level deduplication can reduce storage needs by eliminating duplicate data. It also describes the types of cloud storage (public, private, hybrid) and cloud services (SaaS, PaaS, IaaS). The document proposes encrypting files with differential privilege keys to improve security when checking for duplicate content in a hybrid cloud and prevent unauthorized access during deduplication.
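The duplicate check with differential privileges can be sketched as follows. This is an illustrative simplification rather than the surveyed scheme; the privilege names and keys are hypothetical, and in a hybrid cloud the keys would be issued by the private cloud. The duplicate-check token binds the file's content hash to a privilege key, so only users holding that privilege can match an existing copy.

```python
import hashlib, hmac

# Hypothetical privilege keys; a private cloud would issue and manage these.
PRIVILEGE_KEYS = {"staff": b"key-staff", "manager": b"key-manager"}

def dedup_token(privilege: str, data: bytes) -> str:
    """Token binds the content hash to a privilege key."""
    digest = hashlib.sha256(data).digest()
    return hmac.new(PRIVILEGE_KEYS[privilege], digest, hashlib.sha256).hexdigest()

def upload(store: dict, privilege: str, data: bytes) -> bool:
    """Return True if the file was stored, False if it deduplicated against an existing copy."""
    token = dedup_token(privilege, data)
    if token in store:
        return False                  # duplicate under the same privilege: no new copy
    store[token] = data               # a real system would store ciphertext, not plaintext
    return True

store = {}
print(upload(store, "staff", b"design.docx"))    # True  -- first copy stored
print(upload(store, "staff", b"design.docx"))    # False -- duplicate detected
print(upload(store, "manager", b"design.docx"))  # True  -- different privilege, token differs
```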
An approach for secured data transmission at client end in cloud computing - IAEME Publication
This document summarizes a research paper that proposes an algorithm for securing data transmission between a client and cloud server in cloud computing environments. The algorithm uses an authentication function and key that are updated during transmission to verify authorization and detect any modifications by potential attackers. When a client connects to a server, they both initialize the key to the same value. Then, the key is incremented by one for each packet sent or received. If a client wants to verify security, it can send a packet with the current key value to the server for matching. This helps prevent man-in-the-middle attacks by making it difficult for attackers to modify packets without knowing the updated key values. The approach aims to securely transmit sensitive data from cloud servers
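The per-packet key update described above can be shown with a small sketch; the class and message names are hypothetical, and a simple keyed hash stands in for the paper's authentication function. Client and server start from the same key value and each advances it by one per accepted packet, so a packet tag only verifies if no packets were tampered with or injected.

```python
import hashlib

class Endpoint:
    """One side of the connection; both sides initialize the key to the same value."""
    def __init__(self, initial_key: int):
        self.key = initial_key

    def tag(self, payload: bytes) -> str:
        """Authentication tag for the next packet, then advance the key by one."""
        t = hashlib.sha256(self.key.to_bytes(8, "big") + payload).hexdigest()
        self.key += 1
        return t

    def verify(self, payload: bytes, tag: str) -> bool:
        """Check an incoming packet against the current key; advance only on success."""
        expected = hashlib.sha256(self.key.to_bytes(8, "big") + payload).hexdigest()
        if expected == tag:
            self.key += 1
            return True
        return False

client, server = Endpoint(1000), Endpoint(1000)
for payload in (b"packet-1", b"packet-2"):
    assert server.verify(payload, client.tag(payload))

# A man-in-the-middle who modifies a packet cannot produce a valid tag
# without knowing the current key value.
forged_ok = server.verify(b"tampered", "0" * 64)
print(forged_ok, client.key == server.key)   # False True -- tampering detected, keys stay in step
```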
This document provides an introduction to Eclipse Zenoh, an open source project that unifies data in motion, data at rest, and computations in a distributed system. Zenoh elegantly blends traditional publish-subscribe with geo-distributed storage, queries, and computations. The presentation will demonstrate Zenoh's advantages for enabling typical edge computing scenarios and simplifying large-scale distributed applications through real-world use cases. It will also provide an overview of Zenoh's architecture, performance, and APIs.
Excellent Manner of Using Secure way of data storage in cloud computing - Editor IJMTER
The major challenging issue in cloud computing is security. Providing security is a big issue for protecting data from third parties as well as on the Internet, and this paper mainly deals with how that security is provided. Various services are available in cloud computing to be utilized effectively, such as Software as a Service (SaaS), Platform as a Service (PaaS) and Hardware as a Service (HaaS). Cloud computing is the use of computing resources (hardware and software) that are delivered as a service over the Internet. Cloud computing moves application software and databases to large data centres, where the administration of the data and services may not be fully trustworthy, since it lies with a third party that has to be certified and authorized. Because cloud computing shares distributed resources via the network in an open environment, it creates new security risks towards the correctness of the data in the cloud. In this paper I propose a flexible data storage mechanism for the distributed environment using homomorphic token generation. In the proposed system, users are able to audit the cloud storage with lightweight communication. Running encryption and decryption on a single processor is a heavy burden, so the processing capabilities of cloud computing can be utilized instead.
This document summarizes recent research on security issues related to single cloud and multi-cloud storage models. It finds that relying on a single cloud service provider poses risks to data availability and integrity if the provider experiences an outage. Storing data across multiple cloud providers (a multi-cloud model) can help address these issues but may increase costs. The document surveys various techniques proposed in recent research to improve security, availability, and integrity in single and multi-cloud environments, such as homomorphic tokens, file division, and the Depsky model. It concludes that while single cloud has been more widely researched, multi-cloud is an important area of ongoing work to help overcome the security and cost challenges of cloud storage.
On The Homogeneous Biquadratic Equation with 5 Unknowns - IJSRD
Five different methods of the non-zero integral solutions of the homogeneous biquadratic Diophantine equation with five unknowns are determined. Some interesting relations among the special numbers and the solutions are observed.
Test Rig for Measurement of Spark Advance Angle and Ignition System Using AT8... - IJSRD
An electronic ignition control system for an internal combustion engine, notably for motor vehicles, comprises a rotary member revolving at engine speed and provided with two reference marks whose positions correspond to the maximum and minimum ignition advance angles, respectively, said reference marks defining at least one area on the rotary member; a sensor disposed in close vicinity of the rotary member so as to detect the moments of passage of the reference marks; and at least one up-down counter for counting pulses. The rotary member further comprises a third reference mark, separate from the first two, so as to define two successive areas scanned in succession by the sensor. A first up-down counter positively counts the pulses from the first area and, during passage of the second area, negatively counts the pulses from a clock system adapted to emit pulses at a frequency programmable according to the desired ignition advance law; the reset of the up-down counter is used to produce the ignition spark. The up-down counter thus calculates the final value of the advance angle. After the final value of the advance angle is calculated, a signal is sent to the ignition box and then to the ignition coil, which fires the spark according to the advance angle.
A Literature Survey on Comparative Analysis of RZ and NRZ Line Encoding over ... - IJSRD
The modern world is the world of the Internet, which requires more capacity and more bandwidth; hence we move towards optical communication. In optical communication, chromatic dispersion is the biggest obstacle for a high-speed optical channel. Chromatic dispersion can be reduced either by a compensation technique [1, 2] or by electronic dispersion compensation. In this paper, we make a comparative analysis of RZ and NRZ line encoding over a 40 Gbps system. Two modulation formats, non-return-to-zero (NRZ) and return-to-zero (RZ), are compared on the basis of bit error rate and quality factor. The 40 Gbps signal is transmitted over 200 km of single-mode fiber.
Role of Simulation in Deep Drawn Cylindrical Part - IJSRD
Simulation is widely used in the forming industry due to its speed and lower cost, and it has been proven effective in predicting formability and springback behaviour. The purpose of finite element simulation in the sheet metal forming process is to minimize the time and cost of the design phase by predicting key outcomes such as the final shape of the part, the possibility of various defects and the flow of material. Such simulation is most useful and efficient when it is performed in the early stage of design by designers, rather than by analysis specialists after the detailed design is complete. The accuracy of such simulation depends on knowledge of material properties, boundary conditions and processing parameters. In industry today, numerical sheet metal forming simulation is a very important tool for reducing lead time and improving part quality. In this paper a finite element model for the deep drawing of cylindrical cups is constructed, and simulation results are obtained for different simulation parameters, i.e. punch velocity, coefficient of friction and blank holder force of the FE mesh elements; these results are compared with experimental work.
Surveying, Planning and Scheduling For A Hill Road Work at Kalrayan Hills by ... - IJSRD
To date, few construction methods have helped project managers decide on the near-optimum distribution of men, material, space and tools according to their job objectives and limitations. This thesis presents an intelligent scheduling system (ISS) that can assist project managers in finding a near-optimum schedule according to their job objectives and limitations. ISS uses model techniques to share out resources and allocate different levels of priority to different activities in each model cycle to find the near-optimum solution. ISS considers and combines most of the important construction factors (schedule of tasks, costs, manpower, space, equipment and material) at the same time in an integrated environment, which brings the resulting schedule nearer to optimum. Moreover, ISS allows what-if analysis of probable scenarios, and schedule adjustments based on unexpected conditions (change orders, delayed material delivery, etc.). Finally, two model applications and one real-world construction job are used to illustrate and evaluate the effectiveness of ISS against two commonly used software packages, Primavera Project Planner and Microsoft Project.
Design and Linear Static Analysis of Transport Aircraft Vertical Tail for Dif... - IJSRD
This document summarizes the design and linear static analysis of the vertical tail structure of a transport aircraft for different rudder deflection angles. The vertical tail and rudder structure was modeled in CATIA and analyzed in MSC Patran and NASTRAN. Stresses were calculated for rudder deflections of 10, 20, and 30 degrees. The maximum principal stress was found to be well below the yield strength of the 7075-T6 aluminum alloy used, indicating the design is safe. Validation using a reserve factor approach also confirmed stresses are within acceptable limits for the expected loads on the vertical tail.
A Survey on Service Request Scheduling in Cloud Based Architecture - IJSRD
Cloud computing has become quite popular nowadays. It enables users to store and process their data in third-party data centres, and today in the IT sector almost everything is run and managed in the cloud environment. As the number of users increases day by day, faster and more efficient processing of large volumes of data and resources is desired at all levels, so the management of resources attains prime importance. While using cloud computing, various issues are encountered, such as load balancing and traffic during computation. Job scheduling is one of the solutions to these problems; it reduces waiting time and maximizes the quality of service. In job scheduling, "priority" is an important factor. In this paper, we discuss various scheduling algorithms and review a dynamic priority scheduling algorithm.
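As a small illustration of priority-based request scheduling of the kind reviewed above (a generic sketch, not any specific algorithm from the paper): requests sit in a queue, and the effective priority of a waiting request is aged upward over time so that low-priority jobs are not starved. The class name, aging rate and request names are assumptions for the example.

```python
class DynamicPriorityScheduler:
    """Serve the request with the highest effective priority; waiting raises priority."""
    def __init__(self, aging_rate=0.5):
        self.aging_rate = aging_rate
        self.clock = 0
        self.queue = []                          # (name, base_priority, submit_time)

    def submit(self, name, base_priority):
        self.queue.append((name, base_priority, self.clock))

    def next_request(self):
        """Pick the request whose base priority plus aging bonus is highest right now."""
        self.clock += 1
        def effective(req):
            name, base, submitted = req
            return base + self.aging_rate * (self.clock - submitted)
        best = max(self.queue, key=effective)
        self.queue.remove(best)
        return best[0]

sched = DynamicPriorityScheduler(aging_rate=0.5)
sched.submit("backup-job", base_priority=1)      # low priority, ages while it waits
sched.submit("user-query", base_priority=3)
sched.submit("report-gen", base_priority=3)
print([sched.next_request() for _ in range(3)])  # -> ['user-query', 'report-gen', 'backup-job']
```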
Experimental study on Torsion behavior of Flange beam with GFRP - IJSRD
This study deals with an experimental investigation of using glass fiber reinforced polymers in civil engineering. Repair represents an important aspect of the construction industry, and its importance is increasing due to environmental or geoenvironmental degradation, increased service loads, reduced capacity due to ageing, deterioration caused by poor construction materials and workmanship, and seismic-related demands, all of which call for the repair and rehabilitation of existing structures. Fiber reinforced polymers have been utilized effectively in numerous applications owing to their low weight, high strength and durability. Many previous studies on torsional strengthening focused on solid rectangular RC beams with different strip configurations and different types of fiber. Various models were developed for torsion tests on strengthened RC beams and successfully used to validate the experimental work. In the present work, an experimental study was carried out in order to better understand the behaviour of torsional strengthening of a solid RC flanged T-beam. An RC T-beam is deliberately analysed and designed for torsion like an RC rectangular beam; the contribution of the concrete flange is disregarded by codes. In the present study, the effect of flange width in resisting torsion is examined by varying the flange width of the control beams. The other parameters considered are the strengthening scheme and fiber orientation.
Robotic technology has its origin in the year 1954 and has very wide application in fields like energy, health, agriculture, education, research, motion and space. The disadvantage of any autonomous robot is that it is very difficult to maintain the stability of its traverse condition. In this paper, we introduce and achieve maximum stability of the traverse condition through a white-line-following technique. Here we use the Fire Bird V robot as our robotic platform, which has the ATmega2560 as its microcontroller and uses Studio 4 as the controller's software platform. The control dynamics are sent to the robot via the software platform in embedded 'C', and the traverse condition is maintained. This robot runs on a stepped-down 9 V AC supply and also uses a 9 V lithium battery as a standby backup.
Comparison of Shunt Facts Devices for the Improvement of Transient Stability ... - IJSRD
This paper presents the performance of a STATCOM placed at the midpoint of a two-machine power system and compares it with the performance of an SVC. The results for different types of faults (single line, double line and three-phase) occurring on a long transmission line, and their clearance using shunt FACTS devices, are analysed. Computer simulation results under a severe disturbance (three-phase fault) for different fault clearing times and different line lengths are analysed. Both controllers are implemented using MATLAB/SIMULINK. Simulation results show that the STATCOM with a conventional PI controller installed in the two-machine, three-bus system provides better damping of rotor angle oscillations than the same system installed with an SVC. The transient stability of the two-machine system installed with the STATCOM is improved considerably, and the post-disturbance settling time of the system is also improved.
An Efficient DTN Routing Algorithm for Automatic Crime Information Sharing fo... - IJSRD
Delay Tolerant Networking addresses many issues that exist in traditional networks. Opportunistic networks emerge as an interesting evolution of MANETs: mobile nodes in an opportunistic network communicate with each other even when no end-to-end route exists. In this paper, a kiosk (or hub station) connected to villages is used to establish an Internet connection. Such kiosks are placed where traffic frequency is high, and high-frequency sensors are installed in vehicles. When a vehicle passes a kiosk, its sensor establishes a connection to the kiosk, and the kiosk connects villages to the Internet. This system is very useful for crime information sharing services. For example, suppose a person is the victim of a crime or in trouble and has a mobile device connected to the Internet. They send a distress message, which is passed to the nearest kiosk, carried by vehicles, and forwarded to the police station. This system is helpful in villages where network communication is not reliable.
Structural and Thermal Analysis of Metal - Ceramic Disk BrakeIJSRD
Disk brakes have been used in automobiles for many years, and research in this field is still ongoing to reduce temperature effects so that braking can operate reliably. Many new materials have been introduced for the disk brake rotor to withstand the high temperatures produced during braking. Apart from high-temperature capability, the disc rotor material must also have high thermal conductivity, since this property decides the amount of heat dissipated to the air stream from the disk rotor. A brake material with good temperature capability and high thermal conductivity gives maximum efficiency by overcoming thermo-mechanical instability (TEI) in the rotor, which is more common in rotor materials of low thermal conductivity. In the present work, grey cast iron and a metal-ceramic composite were chosen for the disk brake rotor. Several methods have previously been used to study the behavior of different disk brake materials; analyses are performed in 2D and 3D using analytical and numerical methods, which, under different assumptions, range from finite differences to finite elements. To obtain the temperature history of the grey cast iron and the metal-ceramic, a numerical simulation technique, the finite element method, is used. A transient analysis is carried out in ANSYS to predict the temperature distribution in the disk brake rotor as a function of time, and the results for the two materials are compared. Since the brake rotor is a coupled-field problem, a structural analysis is performed in ANSYS after the thermal analysis to study the stability and rigidity of the rotor material. The results of the transient thermal analysis are given as input to the structural analysis in order to determine the stress distribution and displacement in the disk brake rotor under thermal loading. The stability behavior of the different rotor materials is compared to facilitate the conceptual design of the disk brake system.
A Review on Un filling Defect Found In Forging ProcessIJSRD
The objective of this paper is to study defects in the forging process. Causes of unfilling defects are discussed, and remedies that would help improve the quality of forged products are suggested. It is concluded that proper forging technique, improved die design, and proper heating of the billet reduce unfilling defects. Simulation in the FEM-based software DEFORM 3D helps detect unfilling defects, and applying the discussed remedies helps reduce the rejection rate.
A Study on Uniform and Apodized Fiber Bragg GratingsIJSRD
This document summarizes a study on modeling uniform and apodized fiber Bragg gratings (FBGs) using MATLAB. The authors simulate different FBG designs and analyze their reflection spectra and side lobe strengths. Uniform FBGs are affected by changes in grating length, refractive index modulation, and pitch. Increasing length increases reflectivity but decreases bandwidth, while increasing index modulation increases reflectivity and bandwidth. Apodized FBGs using Gaussian, sinc, and raised cosine profiles reduce side lobes compared to uniform FBGs, at the cost of lower reflectivity. Apodized FBGs are preferred over uniform FBGs when wavelength selectivity is important.
Lid driven cavity flow simulation using CFD & MATLABIJSRD
The steady incompressible Navier-Stokes equations on a uniform grid have been studied at various Reynolds numbers using CFD (Computational Fluid Dynamics). The aim of the present paper is to obtain the stream function and velocity field in the steady state using a finite difference formulation of the momentum and continuity equations. The Reynolds number dominates the flow problem. Taylor series expansion has been used to convert the governing equations to algebraic form using finite difference schemes. MATLAB has been used to draw the flow simulations inside the driven cavity.
Experimental Investigation of Wear Properties of Aluminium LM25 Red Mud Metal...IJSRD
The use of different kinds of metal-matrix composite materials has grown constantly over the years, because they have better physical, mechanical, and tribological properties than the matrix materials. In the automotive industry they are used for pistons, cylinders, engine blocks, brakes, and power-transfer system elements. Research attempts have been made in the past to reduce the cost of processing composites, decrease their weight, and increase the desired performance characteristics. This paper presents the findings of an experimental investigation of the effects of applied load, RPM, sliding distance, and pin temperature in dry sliding wear studies performed on red-mud-based aluminium metal matrix composites (MMC). Red mud is an industrial waste from the production of aluminium from bauxite ore. A pin-on-disc apparatus was used to conduct the dry sliding wear test. A Taguchi-based L9 orthogonal array was used to accomplish the objective of the experimental study. Analysis of variance (ANOVA) is employed to find the optimal settings and the effect of each parameter on wear rate.
Fragmentation of Data in Large-Scale System For Ideal Performance and SecurityEditor IJCATR
Cloud computing is becoming a prominent trend that offers a number of significant advantages. One of its foundational advantages is pay-per-use: the customer pays according to the services actually used. At present, users' growing storage needs drive ever larger volumes of generated data, and farming out such large amounts of data becomes necessary. There is an indefinitely large number of Cloud Service Providers (CSPs), and relying on them is an increasing trend for many organizations as well as individual customers, since it reduces the burden of maintenance and local data storage. In cloud computing, transferring data to third-party administrative control gives rise to security concerns: within the cloud, data may be compromised by attacks from unauthorized users and nodes. In order to protect data in the cloud, stronger security measures are therefore required, which must also allow optimization of the data retrieval time. The proposed system addresses both security and performance. In the DROPS methodology, files are first divided into fragments, and the fragmented data is replicated over the cloud nodes. A single fragment of a particular file is stored on each node, which ensures that no meaningful information is revealed to an attacker by a successful attack on one node. The nodes are separated by means of T-coloring in order to prevent an attacker from guessing the fragments' locations. Complete data security is ensured by the DROPS methodology.
A SECURITY FRAMEWORK IN CLOUD COMPUTING INFRASTRUCTUREIJNSA Journal
In a typical cloud, diverse facilitating components like hardware, software, firmware, networking, and services integrate to offer different computational facilities, while the Internet or a private network (VPN) provides the backbone required to deliver the services. The security risks to the cloud system limit the benefits of cloud computing, such as on-demand, customized resource availability and performance management. It is understood that current IT and enterprise security solutions are not adequate to address cloud security issues. This paper explores the challenges and issues of cloud computing security through different standard and novel solutions. We propose an analysis and architecture for incorporating different security schemes, techniques, and protocols for cloud computing, particularly in Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) systems. The proposed architecture is generic, not dependent on the type of cloud deployment, application agnostic, and not coupled with the underlying backbone, which facilitates managing the cloud system more effectively and enables the administrator to include specific solutions to counter threats. We also show, using experimental data, how a cloud service provider can estimate charges based on the security services it provides, and how a security-related cost-benefit analysis can be performed.
A Privacy Preserving Three-Layer Cloud Storage Scheme Based On Computational ...IJSRED
This document proposes a three-layer cloud storage scheme based on fog computing to improve privacy protection. The scheme splits user data into three parts that are stored in the cloud server, fog server, and user's local machine. It uses a Hash-Solomon encoding technique to distribute the data in a way that original data cannot be reconstructed from partial information. The scheme leverages fog computing to both utilize cloud storage and securely protect data privacy against insider attacks. Theoretical analysis and experiments demonstrate that the proposed scheme effectively addresses privacy issues in existing cloud storage models.
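The Hash-Solomon encoding itself is specific to that paper, but the property it relies on — no single layer can reconstruct the original data — can be illustrated with a simple XOR-based three-way split. This is our stand-in sketch, not the paper's encoding; all names are illustrative:

```python
import os

def split3(data: bytes):
    """Split data into three shares (cloud, fog, local machine).

    XOR secret sharing as a stand-in for Hash-Solomon: any single
    share (or any two) is indistinguishable from random noise.
    """
    r1, r2 = os.urandom(len(data)), os.urandom(len(data))
    share3 = bytes(a ^ b ^ c for a, b, c in zip(data, r1, r2))
    return r1, r2, share3

def join3(s1: bytes, s2: bytes, s3: bytes) -> bytes:
    # All three shares are required to recover the original data.
    return bytes(a ^ b ^ c for a, b, c in zip(s1, s2, s3))

secret = b"user record"
assert join3(*split3(secret)) == secret
```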
Evaluation Of The Data Security Methods In Cloud Computing Environmentsijfcstjournal
This document discusses methods for ensuring data security in cloud computing environments. It begins by introducing cloud computing models including infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). The main goals of data security - confidentiality, integrity, and availability - are then described. Several methods for data security are proposed, including data fragmentation where sensitive data is divided and distributed across different domains. Encryption techniques are also discussed as ways to protect confidential data during storage and transmission. Overall, the document aims to evaluate approaches for addressing key issues around securing user data in cloud systems.
Cloud Computing is the revolution in current generation IT enterprise. Cloud computing displaces database and application software to the large data centres, where the management of services and data may not be predictable, where as the conventional solutions, for IT services are under proper logical, physical and personal controls. This aspect attribute, however comprises different security challenges which have not been well understood. It concentrates on cloud data storage security which has always been an important aspect of quality of service (QOS). In this paper, we designed and simulated an adaptable and efficient scheme to guarantee the correctness of user data stored in the cloud and also with some prominent features. Homomorphic token is used for distributed verification of erasure – coded data. By using this scheme, we can identify misbehaving servers. In spite of past works, our scheme supports effective and secure dynamic operations on data blocks such as data insertion, deletion and modification. In contrast to traditional solutions, where the IT services are under proper physical, logical and personnel controls, cloud computing moves the application software and databases to the large data centres, where the data management and services may not be absolutely truthful. This effective security and performance analysis describes that the proposed scheme is extremely flexible against malicious data modification, convoluted failures and server clouding attacks.
This document discusses security issues related to cloud computing. It begins with an introduction to cloud computing models including Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). It then discusses potential security attacks to clouds like denial of service attacks and man-in-the-middle attacks. Security concerns with moving data and applications to the cloud are outlined. Techniques for securely publishing data in the cloud are also presented. The document concludes that security in cloud computing is challenging due to the complexity of clouds but that assurance of secure and mission-critical operations is important.
This document provides an overview of cloud computing, including definitions, advantages, disadvantages, and recommendations. It defines cloud computing as networked computer resources that can be accessed remotely through the internet. Key advantages include cost savings, scalability, device/location independence, and shared infrastructure. Disadvantages include loss of governance, lock-in effects, and security/isolation risks from shared multi-tenant systems. The document recommends approaches like standard checklists to help assess risks and obtain assurances when adopting cloud services.
This document summarizes research on domain-specific considerations for cloud computing performance. It discusses key contributions in computational power, performance provisioning, load balancing, and service level agreements. It also analyzes the characteristics of public, private and hybrid cloud models in terms of scalability, security, reliability, cost, data integration, management and other factors. The study reveals that considering domain-specific factors is important for designing effective cloud-based applications and environments.
Welcome to International Journal of Engineering Research and Development (IJERD)IJERD Editor
This document summarizes a research thesis that proposes a trusted cloud computing platform (TCCP) to address critical security issues in cloud computing. The TCCP is designed to provide a closed box execution environment for virtual machines to guarantee confidentiality and integrity of computations outsourced to infrastructure as a service cloud providers. It allows customers to remotely verify whether a cloud provider's backend is running a trusted TCCP implementation before launching a virtual machine. The TCCP leverages advances in trusted computing technologies to securely manage virtual machines and cloud infrastructure through protocols for node registration and virtual machine launch and migration. The goal of the TCCP is to extend the capabilities of traditional trusted platforms to the complex, distributed environments of cloud computing infra
This document summarizes a research paper that proposes a system for privacy-preserving public auditing of cloud data storage. The system allows a third-party auditor (TPA) to verify the integrity of data stored with a cloud service provider on behalf of users, without learning anything about the actual data contents. The system uses a public key-based homomorphic linear authenticator technique that enables the TPA to perform audits without having access to the full data. This technique allows the TPA to efficiently audit multiple users' data simultaneously. The document describes the system components, methodology used involving key generation and auditing protocols, and concludes the proposed system provides security and performance guarantees for privacy-preserving public auditing of cloud data
Enhancement of the Cloud Data Storage Architectural Framework in Private CloudINFOGAIN PUBLICATION
Data storage in the cloud typically resides in a service-provider environment, collocated with data from other clients. Institutions and organizations moving sensitive and regulated data into the cloud must account for the means by which access to the data is controlled and the data is kept secure. Data can take many forms. For cloud-based application development, it includes application programs, scripts, and configuration settings, along with the development tools. For deployed applications, it includes records and other content created or used by the applications, as well as account information about their users. Access controls are one means of keeping data away from unauthorized users; encryption is another. Access controls are typically identity-based, which makes authentication of the user's identity an important issue in cloud computing. This research paper focuses on a cloud data storage architectural framework for encrypted data.
Public Key Encryption algorithms Enabling Efficiency Using SaaS in Cloud Comp...Editor IJMTER
The greatest challenge in cloud computing is security, and security plays a key role at the point of end-user access. The concept proposed in this paper mainly deals with security for end users, who connect through public networks and want their applications and services protected from unauthorized persons. In this setting one can apply encryption and decryption methods such as RSA, 3DES, MD5, Blowfish, etc., and utilize these services at the end-user access point in cloud computing. The problem is that encrypting and decrypting messages, services, and applications takes a lot of time and requires substantial processing capability. To address this problem, we propose using the SaaS model of cloud computing: since it is scalable, the SaaS model can be utilized whenever it is required. Cloud computing is the use of computing resources (hardware and software) delivered as a service over the Internet. Previously, there was also the problem of key size: with various algorithms, even a 64-bit key can take a long time to encrypt the data.
Enhanced Integrity Preserving Homomorphic Scheme for Cloud StorageIRJET Journal
This document discusses enhancing integrity preservation for cloud storage using a homomorphic encryption scheme. It begins with an abstract that outlines using MD5 algorithm for integrity checks on fully homomorphic encrypted data. It then provides background on issues with privacy and integrity in cloud computing. The document reviews related work on cloud security and integrity verification. It discusses challenges with ensuring data integrity when stored remotely in the cloud and proposes using a homomorphic encryption scheme along with MD5 for integrity preservation of outsourced data in the cloud.
Enhanced Data Partitioning Technique for Improving Cloud Data Storage SecurityEditor IJMTER
Cloud computing is a model for enabling on-demand network access to shared configurable computing resources (e.g., networks, servers, storage, applications, and services). It is based on virtualization and distributed computing technologies. Cloud data storage systems enable users to store data efficiently on servers without any trouble over data resources; users can easily store and retrieve their data remotely. The two biggest concerns about cloud data storage are reliability and security: clients are unlikely to entrust their data to a third party without a guarantee that they will be able to access the information whenever they want. In the existing system, data is stored in the cloud using dynamic data operations with computation, which forces the user to keep a copy for further updating and for verification against data loss. Different distributed storage auditing techniques are used to overcome the problem of data loss. Recent work in this paper shows a data partitioning technique for data storage that provides a digital signature for every data partition and user; this technique allows users to upload or retrieve data by matching the digital signatures provided to them. The method ensures high cloud storage integrity, enhanced error localization, easy identification of misbehaving servers, and protection against unauthorized access to the cloud server. Hence this work aims to store data securely in reduced space with less time and computational cost.
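A rough sketch of the signature-matching idea follows. The names, and the use of an HMAC as a stand-in for a true per-user digital signature, are our illustrative assumptions, not the paper's implementation:

```python
import hashlib
import hmac

SECRET = b"per-user key provisioned out of band"  # assumption: one shared key per user

def sign_partition(data: bytes) -> str:
    # Attach a digital-signature-like tag to each data partition.
    return hmac.new(SECRET, data, hashlib.sha256).hexdigest()

def verify_partition(data: bytes, tag: str) -> bool:
    # Retrieval succeeds only if the recomputed tag matches the stored one,
    # which localizes errors and exposes a misbehaving server.
    return hmac.compare_digest(sign_partition(data), tag)

partitions = [b"part-0", b"part-1"]
tags = [sign_partition(p) for p in partitions]
assert all(verify_partition(p, t) for p, t in zip(partitions, tags))
```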
Privacy-Preserving Public Auditing for Regenerating-Code-Based Cloud Storage1crore projects
Cloud computing is internet-based computing, also known as the "pay-per-use" model: we pay only for the resources we use. The key barrier to widespread uptake of cloud computing is the lack of trust in clouds by potential customers. While preventive controls for security and privacy are actively being researched, there is still little focus on detective controls related to cloud accountability and auditability. The complexity resulting from the sheer amount of virtualization and data distribution in current clouds has also revealed an urgent need for research on cloud accountability, as has the shift in customer concerns from server health and utilization to the integrity and safety of end-users' data. In this paper we propose a method to store data provenance using Amazon S3 and SimpleDB.
Cloud Computing Using Encryption and Intrusion Detectionijsrd.com
Cloud computing provides many benefits to users, such as accessibility and availability. As data is available over the cloud, it can be accessed by different users, and it may include an organization's sensitive data. One issue is providing access to authenticated users only; but even then the data can be accessed by the owner of the cloud, so to prevent the data being accessed by the cloud owner we use an intrusion detection system to protect it. The other issue is saving the data backup in another cloud in encrypted form so that load balancing can be done; this helps the user with data availability in case one cloud fails.
This document summarizes research on personality-based distributed provable data ownership in multi-cloud storage. It discusses how current provable data possession protocols have limitations such as authentication overhead and lack of flexibility. The proposed approach eliminates authentication management by using identity-based cryptography. It aims to provide a secure, efficient and adaptable protocol for integrity checking of outsourced data across multiple cloud servers.
This document discusses effective modular order preserving encryption on cloud using multivariate hypergeometric distribution (MHGD). It begins with an abstract that describes how order preserving encryption allows efficient range queries on encrypted data. It then provides background on cloud computing security concerns and discusses existing approaches to searchable encryption, including probabilistic encryption, deterministic encryption, homomorphic encryption, and order preserving encryption. The key proposed approach is to improve the security of existing modular order preserving encryption approaches by utilizing MHGD.
Similar to Survey on Division and Replication of Data in Cloud for Optimal Performance and Security
Maintaining Data Confidentiality in Association Rule Mining in Distributed En...IJSRD
The data in real-world applications is distributed across multiple locations, and the owners of the databases may be different people. Thus, to perform the mining task, the data needs to be kept at a central location, which threatens the privacy of corporate data. Hence the key challenge is applying mining to distributed source data while preserving the privacy of that data. The system addresses the problem of incrementally mining frequent itemsets in a dynamic environment. The assumption made here is that, after the initial mining, the source undergoes small changes each time. The privacy of the data should not be threatened by an adversary, i.e., the miner and the target database owner should not be able to recover the original data from the transformed data.
Performance and Emission characteristics of a Single Cylinder Four Stroke Die...IJSRD
The current trend in CI engines is to use water-diesel emulsion as an alternative fuel. It can be employed directly in an existing CI engine system with no additional modifications. This helps reduce NOx as well as PM, which in turn improves the combustion efficiency of the engine; however, further investigations are still required. The current work concentrated on a diesel engine run on water-diesel emulsions, and its effects on engine performance and emissions were studied. Various loads were applied to a constant-speed diesel engine run on water-diesel emulsions of ratios 0.2:1, 0.3:1, 0.4:1, and 0.5:1. Emission and performance characteristics were measured and compared with base diesel operation. Emissions such as NOx and smoke density were found to decrease greatly, and brake thermal efficiency was found to increase at high loads. The smoke level was 4.2 BSU for base diesel and 3 BSU for the 0.4 water-diesel emulsion. The ignition delay was found to increase with water-diesel emulsions, which also increased the maximum rate of pressure rise and the peak pressure. The engine was found to run rough on water-diesel emulsions. The optimal water-diesel ratio was found to be 0.4:1 by weight. HC and CO emissions were found to increase with water-diesel emulsions.
Preclusion of High and Low Pressure In Boiler by Using LABVIEWIJSRD
Pressure is an important physical parameter to be controlled in process boilers, heat exchangers, nuclear reactors, and steam pipelines. This article addresses issues faced in boiler operation due to pressure, specifically its maximum and minimum limits, which can lead to hazardous conditions. To avoid such problems, the high and low pressure in the boiler has to be controlled; in this paper the problem is handled by implementing ON-OFF control. The proposed pressure control action is implemented with the help of LabVIEW (Laboratory Virtual Instrument Engineering Workbench) software and NI ELVIS hardware. The boiler's low and high pressure limits are monitored and the valve is controlled accordingly; in addition, high and low pressure excursions are signalled to the plant operator by an alarm.
Prevention and Detection of Man in the Middle Attack on AODV ProtocolIJSRD
This paper discusses the AODV protocol, security attacks on it, and the man-in-the-middle attack in detail. The AODV protocol is used to find routes and is a very important protocol for communication in wireless networks, so securing AODV is a big challenge. Various attacks can occur on it; this paper discusses the detection and prevention of the man-in-the-middle attack in detail.
Comparative Analysis of PAPR Reduction Techniques in OFDM Using Precoding Tec...IJSRD
In the modern era, Orthogonal Frequency Division Multiplexing (OFDM) has proved to be a promising technique for wired and wireless systems because of its several advantages, such as high spectral efficiency, robustness against frequency-selective fading, and relatively simple receiver implementation. Besides its advantages, OFDM suffers from a few disadvantages, such as high Peak to Average Power Ratio (PAPR), Intercarrier Interference (ICI), and Intersymbol Interference (ISI). These detrimental effects, if not compensated properly and in time, can degrade system performance. This paper mainly concentrates on the reduction of PAPR. Comparisons have been made between various precoding techniques and conventional OFDM.
Evaluation the Effect of Machining Parameters on MRR of Mild SteelIJSRD
Filter unwanted messages from walls and blocking nonlegitimate user in osnIJSRD
Today's life is totally based on the Internet; nowadays people cannot imagine life without it. Information and communication technology plays a vital role in today's online networked society, and we are very close to online social networks, which are used for posting and sharing information across various social networking sites. But users' privacy is not maintained: online social networks provide little or no support for preserving the privacy of users' sensitive information. For filtering unwanted messages we propose a system using machine learning (ML): content-based filtering is performed using machine learning in a soft classifier. In the proposed system, filtering rules (FRs) are provided for content-independent filtering, and blacklists are used for more flexibility, increasing the filtering choices. The proposed system provides security for online social networks.
Keystroke Dynamics Authentication with Project Management SystemIJSRD
This document summarizes a research paper on a proposed keystroke dynamics authentication system integrated with a project management system. The system aims to add an extra layer of security beyond typical username and password authentication. It extracts typing features from users during a training phase and verifies users during login based on their keystroke patterns. If verification fails after three attempts, it moves to additional authentication steps like one-time passwords and image selection. The system is meant to securely manage project information and files shared between employees of a project management system. Previous research on keystroke dynamics authentication is also summarized.
Diagnosing lungs cancer Using Neural NetworksIJSRD
Artificial neural networks are a new technology, a branch of Artificial Intelligence that is now widely accepted. Nowadays neural networks play a vital role in medicine, particularly in fields such as cardiology and oncology, and they have many applications in areas like science and technology, education, business, and manufacturing. Neural networks are most useful for making decisions more effective. This paper discusses how the severe disease lung cancer can be identified in its earlier stages and diagnosed more effectively using neural networks and supporting devices. Neural networks have been successfully applied to carcinogenesis, and the main aim of this research is to make its diagnosis more cost-effective with easy-to-use techniques and methods. Sputum cytology is used to detect lung cancer in its early stages.
A Survey on Sentiment Analysis and Opinion MiningIJSRD
In today's world, social media has given web users a place to express and share their thoughts and opinions on different topics and events; for this purpose, opinion mining has gained importance. Sentiment classification and opinion mining study people's opinions, emotions, and attitudes towards products, services, etc.; sentiment analysis and opinion mining are two interchangeable terms. Various approaches and techniques exist for sentiment analysis, such as Naïve Bayes, decision trees, support vector machines, random forests, and maximum entropy. Opinion mining is useful and beneficial for scientific surveys, political polls, market research, business intelligence, etc. This paper presents a literature review of various techniques used for opinion mining and sentiment analysis.
A Defect Prediction Model for Software Product based on ANFISIJSRD
Artificial intelligence techniques are increasingly involved in classification- and prediction-based processes such as environmental monitoring, stock exchange analysis, biomedical diagnosis, and software engineering. However, the challenge of selecting training criteria for the design of artificial intelligence models used for prediction remains. This work focuses on developing a defect prediction mechanism using software metric data from KC1. We have taken a subtractive clustering approach for generating a fuzzy inference system (FIS). The FIS rules are generated at different radii of influence of the input attribute vectors, and the developed rules are further tuned by the ANFIS technique to predict the number of defects in a software project using a fuzzy logic system.
Experimental Investigation of Granulated Blast Furnace Slag ond Quarry Dust a...IJSRD
In this experimental work, ninety-nine cubes of dimension 70.7 x 70.7 x 70.7 mm were cast as per IS:4031 (2000). A cement mortar mix of 1:3 by volume was selected, with 0%, 20%, 40%, 60%, 80%, and 100% partial replacement of natural sand (NS) by granulated blast furnace slag (GBFS) and quarry dust (QD) [3 cubes for each parameter], at a W/C ratio of 0.55. All the cubes were tested in a compression testing machine to compare the average compressive strength of natural sand (NS) mortar with that of granulated blast furnace slag (GBFS) and quarry dust (QD) mortar.
Product Quality Analysis based on online ReviewsIJSRD
Customer satisfaction is the most important criterion before buying any product. Technology today has grown to such an extent that nearly every query can be answered on the internet. An individual can express his or her reviews of a product through the internet, which allows others to form a brief idea about the product before buying it. In this paper, we take into account the challenges and limitations encountered while reading online reviews and the time consumed in understanding product quality from them. We include several methods and algorithms that help the consumer understand the quality of the product in a better way.
Solving Fuzzy Matrix Games Defuzzificated by Trapezoidal Parabolic Fuzzy NumbersIJSRD
Matrix game theory gives a mathematical background for dealing with the competitive or antagonistic situations that arise in many parts of real life. Matrix games have been extensively studied and successfully applied to many fields, such as economics, business, management, e-commerce, and advertising. This paper deals with two-person matrix games whose pay-off matrix elements are fuzzy numbers. The corresponding matrix game is converted into a crisp game using defuzzification techniques, and the value of the game for each player is obtained by solving the corresponding crisp game problems using an existing method. Finally, to illustrate the proposed methodology, a practical and realistic numerical example is solved for different defuzzification methods and the obtained results are compared.
Study of Clustering of Data Base in Education Sector Using Data MiningIJSRD
Data mining is a technology used in different disciplines to search for significant relationships among variables in large data sets, and it is frequently used across many areas and applications. In this paper the application of data mining is attached to the field of education: the relationship between students' university entrance examination results and their subsequent success was studied using cluster analysis and the k-means algorithm.
Fault Tolerance in Big Data Processing Using Heartbeat Messages and Data Repl...IJSRD
Big data is a popular term used to describe the exponential growth and availability of data, both structured and unstructured. The volatile growth of demands on big data processing imposes a heavy burden on computation, communication, and storage in geographically distributed data centers; hence it is necessary to minimize the cost of big data processing, which also includes the fault tolerance cost. Big data processing involves two types of faults: node failure and data loss. Both faults can be recovered from using heartbeat messages, which act as acknowledgement messages between two servers. This paper presents a study of node failure and recovery, data replication, and heartbeat messages.
Investigation of Effect of Process Parameters on Maximum Temperature during F...IJSRD
This document summarizes a study that used finite element analysis to investigate the effect of process parameters on maximum temperature during friction stir welding of aluminium alloy AA-7075. Nine simulations were conducted using different combinations of tool rotational speed (300-450 RPM) and welding speed (2.5-5 mm/s). The results found that maximum temperature increased with increasing tool rotational speed but decreased with increasing welding speed. Statistical analysis showed that tool rotational speed had the strongest influence on maximum temperature. A regression model was developed that can predict maximum temperature based on rotational speed and welding speed. The model was validated and found to predict temperature accurately within about 2% error.
Review Paper on Computer Aided Design & Analysis of Rotor Shaft of a RotavatorIJSRD
The intent of this paper is to study the various forces and stresses acting on the rotor shaft of a standard rotavator subjected to transient loading. Standard rotavator models with a progressive cutting sequence were considered for the study and analysis, which was extended to various available models having different cutting blade arrangements. The literature was surveyed to identify the various forces acting on the rotor shaft of a rotavator; the positions of the applied torque and forces vary according to the model considered. The response was obtained by considering the angle of twist and the equivalent stress on the rotor shaft. This paper presents a methodology for conducting transient analysis of the rotor shaft of a rotavator.
A Survey on Data Mining Techniques for Crime Hotspots PredictionIJSRD
A crime is an act which is against the laws of a country or region. The technique which is used to find areas on a map which have high crime intensity is known as crime hotspot prediction. The technique uses the crime data which includes the area with crime rate and predict the future location with high crime intensity. The motivation of crime hotspot prediction is to raise people’s awareness regarding the dangerous location in certain time period. It can help for police resource allocation for creating a safe environment. The paper presents survey of different types of data mining techniques for crime hotspots prediction.
LAND USE LAND COVER AND NDVI OF MIRZAPUR DISTRICT, UPRAHUL
This dissertation explores the particular circumstances of Mirzapur, a region located in the core of India. Mirzapur, with its varied terrain and abundant biodiversity, offers an optimal environment for investigating changes in vegetation cover dynamics. Our study utilizes advanced technologies such as GIS (Geographic Information Systems) and remote sensing to analyze the transformations that have taken place over the course of a decade.

The complex relationship between human activities and the environment has been the focus of extensive research and concern. As the global community grapples with swift urbanization, population expansion, and economic progress, the effects on natural ecosystems become more evident. A crucial element of this impact is the alteration of vegetation cover, which plays a significant role in maintaining the ecological equilibrium of our planet. Land serves as the foundation for all human activities and provides the necessary materials for them. As the most crucial natural resource, its utilization by humans results in different 'land uses', determined by both human activities and the physical characteristics of the land.

The utilization of land is affected by human needs and environmental factors. In countries like India, rapid population growth and the emphasis on extensive resource exploitation can lead to significant land degradation, adversely affecting the region's land cover. Human intervention has therefore significantly influenced land use patterns over many centuries, evolving their structure over time and space. In the present era, these changes have accelerated due to factors such as agriculture and urbanization. Information regarding land use and cover is essential for various planning and management tasks related to the Earth's surface, providing crucial environmental data for scientific, resource management, and policy purposes, as well as diverse human activities.

An accurate understanding of land use and cover is imperative for the development planning of any area. Consequently, a wide range of professionals, including earth system scientists, land and water managers, and urban planners, are interested in obtaining data on land use and cover changes, conversion trends, and other related patterns. The spatial dimensions of land use and cover support policymakers and scientists in making well-informed decisions, as alterations in these patterns indicate shifts in economic and social conditions. Monitoring such changes with the help of advanced technologies like remote sensing and Geographic Information Systems is crucial for coordinated efforts across different administrative levels.

Changes in vegetation cover refer to variations in the distribution, composition, and overall structure of plant communities across different temporal and spatial scales. These changes can occur naturally.
The chapter Lifelines of National Economy in Class 10 Geography focuses on the various modes of transportation and communication that play a vital role in the economic development of a country. These lifelines are crucial for the movement of goods, services, and people, thereby connecting different regions and promoting economic activities.
This presentation was provided by Racquel Jemison, Ph.D., Christina MacLaughlin, Ph.D., and Paulomi Majumder. Ph.D., all of the American Chemical Society, for the second session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session Two: 'Expanding Pathways to Publishing Careers,' was held June 13, 2024.
Jemison, MacLaughlin, and Majumder "Broadening Pathways for Editors and Authors"
Survey on Division and Replication of Data in Cloud for Optimal Performance and Security
IJSRD - International Journal for Scientific Research & Development | Vol. 3, Issue 10, 2015 | ISSN (online): 2321-0613
Survey on Division and Replication of Data in Cloud for Optimal Performance and Security
Snehal Khot (P.G. Student) and Ujwala Wanaskar (Associate Professor)
Department of Computer Engineering, P.V.P.I.T., Bavdhan, Pune, Maharashtra, India
Abstract— Outsourcing data to third-party administrative control, as is done in cloud computing, gives rise to security concerns. The data may be compromised due to attacks by other users and nodes within the cloud; therefore, strong security measures are required to protect data within the cloud. However, the security technique employed must also take into account the optimization of the data retrieval time. In this paper, we propose Division and Replication of Data in the Cloud for Optimal Performance and Security (DROPS), which collectively addresses the security and performance issues. In the DROPS methodology, we divide a file into fragments and replicate the fragmented data over the cloud nodes. Each of the nodes stores only a single fragment of a particular data file, which ensures that even in the case of a successful attack, no meaningful information is revealed to the attacker. Moreover, the nodes storing the fragments are separated by a certain distance by means of graph T-coloring, to keep an attacker from guessing the locations of the fragments. Furthermore, the DROPS methodology does not rely on traditional cryptographic techniques for data security, thereby relieving the system of computationally expensive operations. We show that the probability of locating and compromising all of the nodes storing the fragments of a single file is extremely low. We also compare the performance of the DROPS methodology with ten other schemes. A higher level of security with only a slight performance overhead was observed.
Keywords: Centrality, cloud security, fragmentation, replication, performance
I. INTRODUCTION
The cloud computing paradigm has changed the use and management of the information technology infrastructure. Cloud computing is characterized by on-demand self-service, ubiquitous network access, resource pooling, elasticity, and measured services. These characteristics make it an attractive option for businesses, organizations, and individual users to adopt. However, the benefits of low cost, minimal management (from a user's point of view), and greater flexibility come with increased security concerns. Security is one of the most crucial aspects holding back the widespread adoption of cloud computing. Cloud security issues may stem from the core technology implementation (virtual machine (VM) escape, session riding, and so on), cloud service offerings (structured query language injection, weak authentication schemes, and so on), or cloud characteristics themselves (data recovery vulnerability, Internet protocol vulnerability, and so on). For a cloud to be secure, all of the participating entities must be secure. In any system with multiple units, the overall level of security equals the security level of the weakest entity; in a cloud, therefore, the security of the assets does not depend solely on an individual's security measures. Neighbouring entities may give an attacker an opportunity to bypass the user's defenses.

The off-site data storage that a cloud provides requires users to move their data into the cloud's virtualized and shared environment, which may give rise to various security concerns. The pooling and elasticity of a cloud allow physical resources to be shared among many users, and shared resources may be reassigned to other users at some instant of time, which can lead to data compromise through data recovery techniques. Moreover, a multi-tenant virtualized environment may allow a VM to escape the bounds of the virtual machine monitor (VMM); the escaped VM can then interfere with other VMs and access unauthorized data. Likewise, cross-tenant virtualized network access may compromise data privacy and integrity, and improper media sanitization can leak a customer's private data. Data outsourced to a public cloud must therefore be secured, and unauthorized data access by other users and processes (whether accidental or deliberate) must be prevented. As discussed above, any weak entity can put the whole cloud at risk; in such a scenario, the security mechanism must substantially increase an attacker's effort to retrieve a reasonable amount of data even after a successful intrusion into the cloud, and the probable amount of loss (as a result of data leakage) must also be minimized.

A cloud must guarantee throughput, reliability, and security. A key factor determining the throughput of a cloud that stores data is the data retrieval time. In large-scale systems, the issues of data reliability, data availability, and response time are dealt with through data replication strategies. However, placing replicas of data over multiple nodes increases the attack surface for that particular data. For instance, storing $m$ replicas of a file in a cloud instead of one increases the probability that a node holding the file is chosen as an attack victim from $1/n$ to $m/n$, where $n$ is the total number of nodes.
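As a hedged worked example of this trade-off (assuming the attacker compromises a single node chosen uniformly at random from the $n$ nodes):

```latex
% Replication/attack-surface trade-off, under a uniformly random victim node.
\[
  P(\text{file-holding node hit}) = \frac{m}{n},
  \qquad n = 100:\quad
  m = 1 \Rightarrow P = 0.01,\quad
  m = 5 \Rightarrow P = 0.05 .
\]
```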
From the above discussion, we find that both security and performance are critical for modern large-scale systems such as clouds. In this paper, therefore, we collectively approach the issue of security and performance as a secure data replication problem. We present Division and Replication of Data in the Cloud for Optimal Performance and Security (DROPS), which judiciously partitions user files into fragments and replicates them at strategic locations within the cloud.
The division of a file into fragments is performed according to a given user criterion, such that the individual fragments do not contain any meaningful information. Each of the cloud nodes (we use the term node to represent computing, storage, physical, and virtual machines) holds a distinct fragment, increasing data security: a successful attack on a single node must not reveal the locations of the other fragments within the cloud. To keep an attacker uncertain about the locations of the file fragments, and to further improve security, we select nodes that are not adjacent and are at a certain distance from one another; the node separation is ensured by means of T-coloring. To improve data retrieval time, the nodes are selected according to centrality measures that guarantee improved access time, and to improve the retrieval time further, we judiciously replicate fragments on the nodes that generate the highest read/write demand. Node selection is performed in two phases: in the first phase, the nodes are selected for the initial placement of the fragments based on the centrality measures; in the second phase, the nodes are selected for replication. The working of the DROPS approach is outlined below.
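The paper gives no pseudocode for the two-phase selection, but a minimal sketch under stated assumptions (degree centrality as the centrality measure, hop distance for the separation constraint, and networkx as an assumed dependency; all names are ours) could look like this:

```python
import networkx as nx  # assumed dependency; any graph library would do

def place_fragments(g, fragments, T=frozenset({0, 1})):
    """Phase-one placement in the spirit of DROPS (illustrative sketch).

    Each fragment goes to the most central unused node whose hop distance
    to every node already holding a fragment lies outside the forbidden
    set T -- the T-coloring separation constraint. Assumes g is connected.
    """
    centrality = nx.degree_centrality(g)           # one possible centrality measure
    ranked = sorted(g.nodes, key=centrality.get, reverse=True)
    dist = dict(nx.all_pairs_shortest_path_length(g))
    placement, used = {}, []
    for frag in fragments:
        for node in ranked:
            if node not in used and all(dist[node][u] not in T for u in used):
                placement[frag] = node
                used.append(node)
                break
    return placement

if __name__ == "__main__":
    g = nx.cycle_graph(8)                          # toy 8-node cloud
    print(place_fragments(g, ["O1", "O2", "O3"]))
```

Phase two would repeat the same scan, ranking nodes by their read/write request counts for each fragment instead of by centrality.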
II. LITERATURE REVIEW
Kashif Bilal and Marc Manzano describe data centers, the architectural and functional building blocks of cloud computing, as indispensable to the Information and Communication Technology (ICT) sector. Cloud computing is used extensively by various sectors, such as agriculture, nuclear science, smart grids, healthcare, and search engines, for research, data storage, and analysis. A Data Center Network (DCN) constitutes the communication backbone of a data center and determines the performance limits of the cloud infrastructure. The DCN must be robust to failures and uncertainties in order to deliver the required Quality of Service (QoS) level and satisfy service level agreements (SLAs). In this paper they analyze the robustness of state-of-the-art DCNs. Their major contributions are: 1) presenting multi-layered graph modeling of various DCNs; 2) studying classical robustness metrics under various failure scenarios to perform a comparative analysis; 3) demonstrating the inadequacy of classical network robustness metrics for appropriately evaluating DCN robustness; and 4) proposing new procedures to quantify DCN robustness. At present there is no detailed study available on DCN robustness, so this work lays a firm foundation for future DCN robustness research.
Dejene Boru and Dzmitry Kliazovich describe cloud computing as an emerging paradigm that provides computing resources as a service over a network. Communication resources often become a bottleneck in service provisioning for many cloud applications. Data replication, which brings data (e.g., databases) closer to data consumers (e.g., cloud applications), is therefore seen as a promising solution, as it minimizes network delays and bandwidth usage. In this paper they focus on data replication in cloud computing data centers. Unlike other approaches in the literature, they consider both the energy efficiency and the bandwidth consumption of the system, in addition to the improved Quality of Service (QoS) resulting from reduced communication delays. Evaluation results obtained through extensive simulations reveal performance and energy-efficiency trade-offs and guide the design of future data replication solutions.
Wolfgang Kampichler and Frequentis AG
describe a concept for a Multiple Independent Levels of Security
(MILS) Console Subsystem (MCS) for a dependable information and
communication infrastructure for ATM voice and data services. The
safety and security requirements inherent to ATM systems present an
ideal application for distributed MILS architectures. The paper
concentrates on the console subsystem that manages the interactions
between a human user and one or more separation kernel (SK)
partitions. The MCS itself runs on a separation kernel. Its clients
are partitions on the same SK nodes in an enclave that are capable
of communicating with the MCS in a reliable fashion. The MCS
communicates with its clients (client application back-ends) via SK
information channels (e.g., IP communication configured on a single
node). The human interface provided by the MCS consists of
input/output devices exemplified by a display screen, keyboard,
mouse, microphone, and speaker that can be shared among partitions
for voice and data applications simultaneously.
III. SYSTEM IMPLEMENTATION
The main contributions of this paper are as follows:
- A scheme for outsourced data that considers both the
security and the performance. The proposed scheme fragments
and replicates the data file over cloud nodes.
- The proposed DROPS scheme ensures that even in the
case of a successful attack, no meaningful information is
revealed to the attacker.
- No reliance on conventional cryptographic techniques
for data security. The non-cryptographic nature of the
proposed scheme makes it faster to perform the required
operations (placement and retrieval) on the data.
- A controlled replication of the file fragments, where
each of the fragments is replicated only once, for the
purpose of improved security.
IV. DETAILED DESIGN
A. Mathematical Model:
Consider a cloud that consists of M nodes, each with its
own storage capacity. Let S_i represent the name of the i-th node
and s_i denote the total storage capacity of S_i. The communication
time between S_i and S_j is the total time over all of the links
within a selected path from S_i to S_j, represented by c(i, j). We
consider N file fragments such that O_k denotes the k-th fragment
of a file, while o_k represents the size of the k-th fragment. Let
the total read and write requests from S_i for O_k be represented
by r_k^i and w_k^i, respectively. Let P_k denote the primary node
that stores the primary copy of O_k.
The replication schema for O_k, denoted by R_k, is also stored
at P_k. Moreover, every S_i contains a two-field record, storing
P_k for O_k and NN_k^i, which represents the nearest node storing
O_k. Whenever there is an update in O_k, the updated version is
sent to P_k, which broadcasts the updated version to all of the
nodes in R_k. Let b(i, j) and t(i, j) be the total bandwidth of the
link and the traffic between sites S_i and S_j, respectively. The
centrality measure for S_i is represented by cen_i. Let col_Si
store the value of the color assigned to S_i. col_Si can take one
of two values, namely open_color and close_color. The value
open_color indicates that the node is available for storing a file
fragment. The value close_color shows that the node cannot store a
file fragment. Let T be a set of integers starting from zero and
ending at a pre-specified number; if the selected number is three,
then T = {0, 1, 2, 3}. The set T is used to restrict the node
selection to those nodes that are at hop distances not belonging to
T. For ease of reading, the most commonly used notations are those
defined above. Our aim is to minimize the overall total network
transfer time, or replication time (RT), also termed the
replication cost (RC). The RT is composed of two factors: (a) the
time due to read requests and (b) the time due to write requests.
The total read time of O_k by S_i from NN_k^i is denoted by R_k^i.
The total time due to the writing of O_k by S_i, addressed to P_k,
is represented as W_k^i. The overall RT is the sum of the read and
write times over all nodes and fragments.
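Under the notation above, one plausible formulation of these quantities (stated here as an assumption consistent with the definitions, not necessarily the authors' exact equations) follows the standard replica-cost model, in which an update travels to P_k and is then propagated to every other node in R_k:

R_k^i = r_k^i \, o_k \, c(i, NN_k^i)

W_k^i = w_k^i \, o_k \Big( c(i, P_k) + \sum_{j \in R_k,\ j \neq i} c(P_k, j) \Big)

RT = \sum_{i=1}^{M} \sum_{k=1}^{N} \big( R_k^i + W_k^i \big)

The read cost thus depends only on the nearest replica NN_k^i, while the write cost accounts for forwarding the update to the primary node and then to the remaining replicas.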
B. Algorithm:
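The algorithm listing is not reproduced here. The sketch below is a minimal Python rendering of the placement step as described in this survey (centrality-ordered node selection constrained by T-coloring); the function and parameter names and the dictionary-based node model are assumptions for illustration rather than the authors' implementation.

def place_fragments(nodes, fragments, T, centrality, capacity, hop_distance):
    # nodes        : list of node identifiers S_i
    # fragments    : dict fragment_id -> fragment size o_k
    # T            : set of prohibited hop distances, e.g. {0, 1, 2, 3}
    # centrality   : dict node -> centrality value cen_i
    # capacity     : dict node -> remaining storage capacity s_i (mutated)
    # hop_distance : function (node_a, node_b) -> hop count between the nodes
    color = {n: "open_color" for n in nodes}
    placement = {}  # fragment_id -> selected node

    # Prefer high-centrality nodes to reduce the retrieval (access) time.
    ranked = sorted(nodes, key=lambda n: centrality[n], reverse=True)

    for frag, size in fragments.items():
        for node in ranked:
            if color[node] == "open_color" and capacity[node] >= size:
                placement[frag] = node
                capacity[node] -= size
                # Close the selected node (hop distance 0 is in T) and every
                # node whose hop distance from it falls in T, so fragments of
                # the same file are never co-located or placed too close.
                for other in nodes:
                    if hop_distance(node, other) in T:
                        color[other] = "close_color"
                break
    return placement

In a second pass, the same ordering could be reused to choose replication targets among the nodes generating the highest read/write requests, as outlined earlier.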
V. RESULTS AND DISCUSSION
The performance of the DROPS methodology was compared with the
algorithms discussed. The behavior of the algorithms was studied
by:
a) Increasing the number of nodes in the system - it is
observable that the eccentricity centrality resulted in the
highest performance, while the betweenness centrality showed
the lowest performance. The reason for this is that nodes
with higher eccentricity centrality are closer to all the
other nodes in the network, which results in a lower RC value
for accessing the fragments (a sketch of computing these
centrality measures follows this discussion).
b) Increasing the number of objects while keeping the number of
nodes constant - an increase in the number of file fragments
can strain the storage capacity of the cloud, which, in turn,
may affect the selection of the nodes. This scenario was used
to study the impact on performance due to an increase in the
number of file fragments.
c) Varying the nodes' storage capacity - we considered the
effect of a change in the nodes' storage capacity. A change in
the storage capacity of the nodes may affect the number of
replicas on a node due to storage capacity constraints.
Intuitively, a lower node storage capacity may result in the
elimination of some optimal nodes from selection for
replication because of the violation of storage capacity
constraints. The elimination of some nodes may degrade the
performance to some extent, because a node providing a lower
access time might be pruned due to the non-availability of
enough storage space to store the file fragment. A higher node
storage capacity permits full-scale replication of fragments,
increasing the performance gain. On the other hand, node
capacity above a certain level does not change the performance
significantly, as replicating the already replicated fragments
does not produce a considerable performance increase. If the
storage nodes have enough capacity to store the allocated file
fragments, then a further increase in the storage capacity of
a node cannot cause the fragments to be stored again.
Moreover, the T-coloring allows only a single replica
to be stored on any node. Therefore, beyond a certain point,
an increase in storage capacity may not affect the
performance.
d) Varying the read/write ratio - The change in
R/W ratio affects the performance of the discussed
comparative techniques. An increase in the number of
reads would lead to a need of more replicas of the
fragments in the cloud. The increased number of
replicas decreases the communication cost associated
with the reading of fragments. However, the increased
number of writes demands that the replicas be placed
closer to the primary node. The presence of replicas
closer to the primary node results in decreased RC
associated with updating replicas. The higher write
ratios may increase the traffic on the network for
updating the replicas.
The aforementioned parameters are critical, as they
influence the problem size and the performance of the algorithms.
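For reference, the three centrality measures compared above can be computed on a graph model of the cloud with the networkx library, as sketched below; the Petersen stand-in graph and the reciprocal-of-eccentricity definition of eccentricity centrality are assumptions made for illustration.

import networkx as nx

# Stand-in graph of cloud nodes; a real evaluation would use the
# simulated data center topologies.
G = nx.petersen_graph()

betweenness = nx.betweenness_centrality(G)
closeness = nx.closeness_centrality(G)
# Eccentricity centrality taken here as the reciprocal of eccentricity,
# so that "more central" corresponds to a larger value.
eccentricity_centrality = {n: 1.0 / e for n, e in nx.eccentricity(G).items()}

# Rank candidate nodes for fragment placement by a chosen measure.
ranked = sorted(G.nodes, key=lambda n: closeness[n], reverse=True)
print(ranked[:3])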
VI. CONCLUSION
In this survey paper, the DROPS methodology is presented as a
cloud storage security scheme that collectively deals with the
security and the performance in terms of retrieval time. The data
file is fragmented and the fragments are dispersed over multiple
nodes. The nodes are separated by means of T-coloring. The
fragmentation and dispersal ensure that no significant information
is obtainable by an adversary in case of a successful attack. No
node in the cloud stores more than a single fragment of the same
file. The performance of the DROPS methodology was compared with
full-scale replication techniques. The results of the simulations
revealed that the simultaneous focus on the security and the
performance resulted in an increased security level of the data,
accompanied by a slight performance drop. Currently, with the DROPS
methodology, a user has to download the file, update its contents,
and upload it again. It is important to develop an automatic update
mechanism that can identify and update only the required fragments.
The aforementioned future work would save the time and resources
utilized in downloading, updating, and uploading the file again.
Moreover, the implications of TCP incast on the DROPS methodology
need to be studied, as they are relevant to distributed data
storage and access.
This survey provides a brief insight into Division and Replication
of Data in the Cloud for Optimal Performance and Security (DROPS).
REFERENCES
[1] K. Bilal, S. U. Khan, L. Zhang, H. Li, K. Hayat, S. A.
Madani, N. Min-Allah, L. Wang, D. Chen, M. Iqbal, C.
Z. Xu, and A. Y. Zomaya, “Division and replication
of data in cloud for optimal performance and security,”
IEEE Transactions on Cloud Computing, DOI:
10.1109/TCC.2015.2400460.
[2] K. Bilal, S. U. Khan, L. Zhang, H. Li, K. Hayat, S. A.
Madani, N. Min-Allah, L. Wang, D. Chen, M. Iqbal, C.
Z. Xu, and A. Y. Zomaya, “Quantitative comparisons
of the state of the art data center architectures,”
Concurrency and Computation: Practice and
Experience, Vol. 25, No. 12, 2013, pp. 1771-1783.
[3] K. Bilal, M. Manzano, S. U. Khan, E. Calle, K. Li, and
A. Zomaya, “On the characterization of the structural
robustness of data center networks,” IEEE
Transactions on Cloud Computing, Vol. 1, No. 1,
2013, pp. 64-77.
[4] D. Boru, D. Kliazovich, F. Granelli, P. Bouvry, and A.
Y. Zomaya, “Energy-efficient data replication in cloud
computing datacenters,” In IEEE Globecom
Workshops, 2013, pp. 446-451.
[5] Y. Deswarte, L. Blain, and J-C. Fabre, “Intrusion
tolerance in distributed computing systems,” In
Proceedings of IEEE Computer Society Symposium on
Research in Security and Privacy, Oakland CA, pp.
110-121, 1991.
[6] B. Grobauer, T. Walloschek, and E. Stocker,
“Understanding cloud computing vulnerabilities,”
IEEE Security and Privacy, Vol. 9, No. 2, 2011, pp.
50-57.
[7] W. K. Hale, “Frequency assignment: Theory and
applications,” Proceedings of the IEEE, Vol. 68, No.
12, 1980, pp. 1497-1514.
[8] K. Hashizume, D. G. Rosado, E. Fernández-Medina,
and E. B. Fernandez, “An analysis of security issues
for cloud computing,” Journal of Internet Services and
Applications, Vol. 4, No. 1, 2013, pp. 1-13.
[9] M. Hogan, F. Liu, A. Sokol, and J. Tong, “NIST cloud
computing standards roadmap,” NIST Special
Publication, July 2011.
[10] W. A. Jansen, “Cloud hooks: Security and privacy
issues in cloud computing,” In 44th Hawaii IEEE
International Conference on System Sciences (HICSS),
2011, pp. 1-10.
[11] A. Juels and A. Oprea, “New approaches to security
and availability for cloud data,” Communications of
the ACM, Vol. 56, No. 2, 2013, pp. 64-73.