This document discusses data sharing in the cloud using distributed accountability. It proposes a Cloud Information Accountability (CIA) framework that provides end-to-end accountability in a highly distributed manner. The CIA framework uses an object-centered approach in which automatic logging mechanisms are enclosed with user data and policies, improving the security and privacy of data in the cloud. It also provides distributed auditing mechanisms and relies on a secured Java virtual machine (JVM) for strong security. The framework is evaluated against potential attacks, such as disassembly and man-in-the-middle attacks, to demonstrate its effectiveness.
International Journal of Engineering Research and Applications (IJERA) is an open-access, online, peer-reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nanotechnology and Science, Power Electronics, Electronics and Communication Engineering, Computational Mathematics, Image Processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing and Low Power VLSI Design, among others.
Accountability in Distributed Environment for Data Sharing in the Cloud (Editor, IJCATR)
Cloud computing enables highly scalable services to be easily consumed over the Internet on an as-needed basis. A major feature of cloud services is that users' data are usually processed remotely on machines that users neither own nor operate. While users enjoy the convenience of this emerging technology, their fear of losing control over their own data (particularly financial and health data) can become a significant barrier to the wide adoption of cloud services. To address this problem, in this paper we propose a novel, highly decentralized information accountability framework that keeps track of the actual usage of users' data in the cloud. In particular, we propose an object-centered approach that encloses our logging mechanism together with users' data and policies. We leverage the programmable capabilities of JAR files both to create a dynamic, travelling object and to ensure that any access to users' data triggers authentication and automated logging local to the JARs. To strengthen users' control, we also provide distributed auditing mechanisms.
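The object-centered idea above, data that carries its own policy and logs every access attempt, can be sketched in a few lines. This is a minimal Python illustration of the concept only (the paper realizes it with self-contained Java JAR files and a secured JVM); the class and method names here are hypothetical.

```python
import json
import time


class LoggedData:
    """Toy sketch: the payload, its access policy, and an automated
    logger travel together as one object, so every access attempt is
    authenticated and recorded locally (granted or not)."""

    def __init__(self, owner, payload, allowed_users):
        self.owner = owner
        self._payload = payload
        self._policy = set(allowed_users)   # policy enclosed with the data
        self.log = []                       # log kept local to the object

    def access(self, user):
        entry = {"user": user, "time": time.time(),
                 "granted": user in self._policy}
        self.log.append(entry)              # logging precedes any release of data
        if not entry["granted"]:
            raise PermissionError(f"{user} is not permitted by the policy")
        return self._payload

    def audit(self):
        """Distributed-auditing hook: the owner can pull the access log."""
        return json.dumps(self.log)
```

A denied request still leaves a log entry, which is what lets the owner audit actual usage rather than only successful reads.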
International Journal of Computational Engineering Research (IJCER) (ijceronline)
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
IJRET: International Journal of Research in Engineering and Technology is an international, peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of engineering and technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in the fields of engineering and technology. It brings together scientists, academicians, field engineers, scholars, and students of related fields of engineering and technology.
Centralized Data Verification Scheme for Encrypted Cloud Data Services (Editor, IJMTER)
A cloud environment supports data sharing among multiple users, but data integrity can be violated by hardware/software failures and human errors. Data owners and public verifiers therefore need to audit cloud data integrity efficiently, without retrieving the entire data set from the cloud server; file and block signatures are used in the integrity verification process. The "One Ring to Rule Them All" (Oruta) scheme supports privacy-preserving public auditing. In Oruta, homomorphic authenticators are constructed using ring signatures, which compute the verification metadata needed to audit the correctness of shared data while keeping the identity of the signer on each block private from public verifiers. A homomorphic authenticable ring signature (HARS) scheme provides this identity privacy together with blockless verification, and a batch auditing mechanism performs multiple auditing tasks simultaneously. Oruta is also compatible with random masking to preserve data privacy from public verifiers, and dynamic data is managed with index hash tables. However, Oruta does not support traceability, does not manage the sequence of dynamic data operations, and incurs high computational overhead. The proposed system performs public data verification with privacy and adds traceability alongside identity privacy: the group manager or data owner can reveal the identity of a signer based on the verification metadata, and a data version management mechanism is integrated into the system.
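The "blockless verification" property mentioned above, checking an aggregate instead of downloading the blocks, can be shown with toy arithmetic. This deliberately insecure Python sketch illustrates only the structure (real Oruta/HARS uses ring signatures over bilinear groups); the function names and modulus are illustrative.

```python
P = 2_147_483_647  # a prime modulus for the toy arithmetic


def tag_blocks(blocks, s):
    """Per-block authenticators sigma_i = s * b_i mod P (toy, not HARS)."""
    return [(s * b) % P for b in blocks]


def server_response(blocks, tags, challenge):
    """Server aggregates blocks and tags under the verifier's challenge
    coefficients, so the verifier never sees the blocks ("blockless")."""
    mu = sum(c * b for c, b in zip(challenge, blocks)) % P
    sigma = sum(c * t for c, t in zip(challenge, tags)) % P
    return mu, sigma


def verify(mu, sigma, s):
    """The homomorphic property: the aggregate tag of the aggregate
    block must equal s times the aggregated blocks."""
    return sigma == (s * mu) % P
```

Tampering with any challenged block shifts `mu` but not `sigma`, so the check fails, which is exactly why the verifier does not need the data itself.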
Enhanced security framework to ensure data security in cloud using security b... (eSAT Journals)
Abstract: Data security and access control are challenging research areas in cloud computing. Cloud service users upload their private and confidential data to the cloud. As data is transferred between server and client, it must be protected from unauthorized entry into the server by authenticating users and giving the data high security priority. Experts therefore recommend using different passwords for different logins, yet hardly anyone can follow that advice and memorize all their usernames and passwords; that is where password managers come in. The purpose of this paper is to secure data from unauthorized persons using the Security Blanket algorithm.
Ensuring Distributed Accountability in the Cloud (Suraj Mehta)
Ensuring distributed accountability for data sharing in the cloud is, in short, a novel, highly decentralized information accountability framework that keeps track of the actual usage of users' data in the cloud. Cloud computing enables highly efficient services that are easily consumed over the Internet.
Improve HLA based Encryption Process using fixed Size Aggregate Key generation (Editor, IJMTER)
Cloud computing is an innovative model for the IT industry that provides several services to users. In cloud computing, secure authentication and data integrity are major challenges due to internal and external threats, and various techniques are used to improve data security in the cloud. MAC-based authentication is one of them, but it suffers from systematic drawbacks: limited usage, insecure verification, and additional online load on users in a public auditing setting. Reliable and secure auditing is also challenging in the cloud. Existing cloud audit systems are based on aggregate-key HLA algorithms with variable-size aggregate key generation, which raises security issues at the decryption level; the current scheme generates long decryption keys and therefore suffers from space complexity problems. To overcome these issues, we improve the HLA algorithm with aggregate key generation based on a fixed key size. The improved algorithm generates a constant-size aggregate key, which overcomes the problems of key sharing, security, and space complexity.
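The space saving behind a fixed-size key can be illustrated with key derivation: instead of handing a recipient one key per file (a bundle that grows with the file count), the owner hands over a single fixed-size seed from which per-file keys are derived. This Python sketch is a simplified stand-in, not the paper's scheme (a true aggregate key over arbitrary file subsets needs public-key techniques); the function name and seed layout are illustrative.

```python
import hashlib
import hmac


def derive_file_key(audience_seed: bytes, file_id: str) -> bytes:
    """Derive a per-file key from one fixed-size seed shared with an
    audience. The key material handed out stays constant-size (one
    32-byte seed) no matter how many files are keyed under it."""
    return hmac.new(audience_seed, file_id.encode(), hashlib.sha256).digest()
```

The owner encrypts each file in a group under its derived key; sharing the seed grants exactly that group of files, and the derivation is deterministic so both sides agree on every key.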
Privacy Preserving Public Auditing and Data Integrity for Secure Cloud Storag... (INFOGAIN PUBLICATION)
Using cloud services, anyone can remotely store their data and obtain on-demand, high-quality applications and services from a shared pool of computing resources, without the burden of local data storage and maintenance. The cloud is a common place for storing data as well as sharing it. However, preserving the privacy and maintaining the integrity of data during public auditing remains an open challenge. In this paper, we introduce a third-party auditor (TPA) that keeps track of all files along with their integrity. The task of the TPA is to verify the data so that the user can be worry-free. Verification is performed on the aggregate authenticators sent by the user and the Cloud Service Provider (CSP). To this end, we propose a secure cloud storage system that supports privacy-preserving public auditing and blockless data verification over the cloud.
A cloud storage system for sharing data securely with privacy preservation an... (eSAT Journals)
Abstract: Cloud computing provides well-known services for storing user data on cloud servers and directs attention to a broad set of technologies, rules, and controls deployed to secure applications and data. As more and more firms use the cloud, security in the cloud environment is becoming a very important issue. Companies should work with partners that follow best practices of cloud security and that facilitate transparency in their solutions. Many security solutions today rely on authentication, but authentication alone does not solve the privacy problems that arise while sharing data in the cloud: a data access request may itself expose a user's private data, whether or not the request is approved. This matters greatly when sharing data in the cloud environment. In this paper we propose a system that addresses this problem: we use data anonymity when sending data access requests to the data owner, and we provide a data auditing facility to detect violations of the integrity of users' shared data. Keywords: cloud computing, privacy preservation, data integrity, data sharing, authentication
Security Check in Cloud Computing through Third Party Auditor (ijsrd.com)
In cloud computing, data owners host their data on cloud servers, and users (data consumers) access the data from those servers. Because the data is outsourced, an independent auditing service is required to check data integrity in the cloud. Some existing remote integrity checking methods can only serve static archived data and thus cannot be used for this auditing service, since data in the cloud can be dynamically updated. An efficient and secure dynamic auditing protocol is therefore required to convince data owners that their data is correctly stored in the cloud. In this paper, we first design an auditing framework for cloud storage systems with a privacy-preserving auditing protocol. We then extend the protocol to support dynamic data operations efficiently and securely in the random model.
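The core of such an auditing service, spot-checking stored blocks without re-downloading everything, and staying valid across dynamic updates, can be sketched simply. This Python toy keeps only per-block digests at the auditor; it is a local model for illustration, not the paper's privacy-preserving protocol (which uses homomorphic tags so the server never reveals block contents). All names are illustrative.

```python
import hashlib
import random


def digest(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()


class ToyAuditor:
    """The auditor holds one digest per block and spot-checks a random
    sample, so the owner never re-downloads the whole file. Updating a
    digest on a block write keeps audits valid after dynamic operations."""

    def __init__(self, blocks):
        self.digests = [digest(b) for b in blocks]

    def update(self, i, new_block):
        """Dynamic operation: record the new digest for a rewritten block."""
        self.digests[i] = digest(new_block)

    def audit(self, server_blocks, sample=2, rng=random):
        idx = rng.sample(range(len(self.digests)), sample)
        return all(digest(server_blocks[i]) == self.digests[i] for i in idx)
```

Sampling trades certainty for cost: checking a few random blocks per round detects large-scale corruption with high probability at a fraction of the bandwidth.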
CLOUD BASED ACCESS CONTROL MODEL FOR SELECTIVE ENCRYPTION OF DOCUMENTS WITH T... (IJNSA Journal)
Cloud computing refers to a type of networked computing in which an application runs on connected servers instead of local servers. The cloud can be used to store data, share resources, and provide services. Technically, there is very little difference between public and private cloud architecture; however, the security and privacy of data become a major issue when sensitive data is entrusted to third-party cloud service providers. Encryption with fine-grained access control is therefore indispensable for enforcing security in clouds. Several techniques implementing attribute-based encryption for fine-grained access control have been proposed, but under such approaches the key management overhead is relatively high in terms of computational complexity, secret sharing mechanisms add further complexity, and the schemes lack mechanisms to handle traitors. Our proposed approach addresses these requirements and reduces the overhead of key management and secret sharing by using efficient algorithms and protocols. A traitor tracing technique is also introduced into the cloud computing two-layer encryption environment.
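The "fine-grained access control" that attribute-based encryption provides can be pictured as a policy over user attributes. This Python sketch only evaluates such a policy as plain logic; it is illustrative, since real ABE enforces the policy cryptographically in the ciphertext rather than with a server-side check, and the policy shape shown (threshold over an attribute set) is one common form, not the paper's exact construction.

```python
def satisfies(user_attrs: set, policy: dict) -> bool:
    """Toy fine-grained check: the policy names a set of attributes and
    a threshold t; access requires holding at least t of them. In ABE,
    a user's key decrypts only ciphertexts whose policy their
    attributes satisfy."""
    held = len(user_attrs & set(policy["attrs"]))
    return held >= policy["threshold"]
```

For example, a document encrypted for "at least two of {doctor, cardiology, staff}" opens for a staff doctor but not for staff alone.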
A Secure Cloud Storage System with Data Forwarding using Proxy Re-encryption ... (IJTET Journal)
Cloud computing provides access to shared resources and common support, delivering on-demand services over the network to perform operations that meet changing business needs. A cloud storage system, consisting of a collection of storage servers, provides long-term storage services over the Internet, but storing data in a third-party cloud system raises serious concerns over data confidentiality, even as cloud services let users enjoy cloud applications without local infrastructure limitations. Because different users may work in a collaborative relationship, data sharing becomes significant for achieving productive benefits during data access. Existing security systems focus only on authentication, ensuring that a user's private data cannot be accessed by fake users. To address the remaining cloud storage privacy issues, a shared authority based privacy-preserving authentication protocol (SAPA) is used. In SAPA, shared access authority is achieved through anonymous access requests with privacy consideration, and attribute-based access control allows each user to access only their own data fields. To enable data sharing among multiple users, a proxy re-encryption scheme is applied by the cloud server. Privacy-preserving sharing of data access authority is attractive for multi-user collaborative cloud applications.
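The proxy re-encryption step, the cloud server converting a ciphertext for one user into a ciphertext for another without ever decrypting it, has a simple algebraic skeleton. This Python toy uses multiplicative masking modulo a prime purely to show that structure; it is not secure and is not the paper's scheme (practical PRE builds on ElGamal-style or pairing-based encryption). Names and the modulus are illustrative.

```python
P = 2_305_843_009_213_693_951  # the Mersenne prime 2^61 - 1, toy modulus


def encrypt(m: int, k: int) -> int:
    """Toy Enc_k(m) = m * k mod P."""
    return (m * k) % P


def rekey(k_src: int, k_dst: int) -> int:
    """Re-encryption key k_dst / k_src mod P. The proxy receives only
    this ratio, from which neither key nor plaintext is recoverable."""
    return (k_dst * pow(k_src, -1, P)) % P


def reencrypt(c: int, rk: int) -> int:
    """Proxy step: convert the ciphertext. No decryption happens here."""
    return (c * rk) % P


def decrypt(c: int, k: int) -> int:
    return (c * pow(k, -1, P)) % P
```

The delegatee decrypts the converted ciphertext with their own key, which is how the cloud server can forward shared data between collaborators without being trusted with the plaintext.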
Cloud computing is rapidly emerging due to the provisioning of elastic, flexible, and on-demand storage and computing services for customers. Data is usually encrypted before being stored in the cloud, and access control, key management, encryption, and decryption are handled by the customers to ensure data security. However, a single key shared among all group members gives a newly joining member access to past data, which violates confidentiality and the principle of least privilege.
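The least-privilege problem above is usually answered by rotating the group key on membership change, so a new member only receives keys from their join point onward. This Python sketch shows that bookkeeping only; the class and version scheme are illustrative, not a specific published protocol.

```python
import os


class GroupKeys:
    """Toy sketch: rotate the group key whenever a member joins. A new
    member holds keys only from their join version onward, so data
    encrypted under earlier versions stays unreadable to them
    (least privilege)."""

    def __init__(self, founder):
        self.versions = [os.urandom(16)]      # version 0 key
        self.member_since = {founder: 0}      # member -> first version held

    def join(self, member):
        self.versions.append(os.urandom(16))  # fresh key on every join
        self.member_since[member] = len(self.versions) - 1

    def can_read(self, member, data_version):
        """Readable only if the data was encrypted at or after the key
        version this member first received."""
        since = self.member_since.get(member)
        return since is not None and since <= data_version
```

Each stored object is tagged with the key version it was encrypted under, which is what makes the "no access to past data" guarantee checkable.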
Data Partitioning Technique In Cloud: A Survey On Limitation And Benefits (IJERA Editor)
In recent years, growth in the popularity of cloud services has increased enterprises' capability to handle, store, and retrieve critical data. The technology provides access to a shared pool of configurable computing resources: servers, storage, and applications. Cloud computing is a next-generation IT enterprise architecture that moves application software and databases to large data hubs. Data security and data storage are essential functions of cloud services, allowing data to be stored on the cloud server efficiently and without worry. Cloud services offer on-request service, broad web access, measured service, single-click ease of use, pay-per-use pricing, and location independence; all of these features pose many security challenges. Data partitioning techniques are used in the literature for privacy preservation and data security, together with a third-party auditor (TPA). The objective of the current work is to review the partitioning techniques available in the literature and analyze them; through this work the authors compare the available and widely used partitioning techniques and identify their limitations and benefits.
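A basic form of the partitioning-plus-TPA pattern the survey covers is splitting data across servers and handing the auditor one fingerprint per partition. This Python sketch shows the simplest fixed-size split with SHA-256 fingerprints; real schemes vary (erasure coding, attribute-wise splits), so treat the helper names and the equal-size strategy as illustrative.

```python
import hashlib


def partition(data: bytes, n: int):
    """Split data into n near-equal partitions, one per cloud server."""
    size = -(-len(data) // n)                      # ceiling division
    return [data[i * size:(i + 1) * size] for i in range(n)]


def fingerprints(parts):
    """Digests a TPA can hold to check each server's partition later,
    without involving the data owner or fetching other partitions."""
    return [hashlib.sha256(p).hexdigest() for p in parts]
```

Because each partition has its own digest, a mismatch localizes the fault to one server instead of forcing a full re-download, which is one of the benefits the surveyed techniques trade against their distribution overhead.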
Data Partitioning Technique In Cloud: A Survey On Limitation And BenefitsIJERA Editor
In recent years,increment in the growth and popularity of cloud services has lead the enterprises to an increase in the capability to handle, store and retrieve critical data. This technology access a shared group of configurable computing resources, which are- servers,storage and applications. Cloud computing is a succeeding generation architecture of IT enterprise, which convert the application software and databaseto large data hubs.Data security and storage of data is an essential functionality of cloud services.It allows data storage in the cloud server efficiently without any worry. Cloud services includes request service, wide web access, measured services, just single click away ,easy usage, just pay for the services you use and location independent.All these features poses many security challenges.The data partitioning techniques are used in literature, for privacy conserving and security of data, using third party auditor (TPA). Objective of the current workis to review all available partitioning technique in literature and analyze them. Through this work authors will compare and identify the limitations and benefits of the available and widely used partitioning techniques.
International Journal of Engineering and Science Invention (IJESI)inventionjournals
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews within the whole field Engineering Science and Technology, new teaching methods, assessment, validation and the impact of new technologies and it will continue to provide information on the latest trends and developments in this ever-expanding subject. The publications of papers are selected through double peer reviewed to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
A Novel Information Accountability Framework for Cloud ComputingIJMER
International Journal of Modern Engineering Research (IJMER) is Peer reviewed, online Journal. It serves as an international archival forum of scholarly research related to engineering and science education.
Electrically small antennas: The art of miniaturizationEditor IJARCET
We are living in the technological era, were we preferred to have the portable devices rather than unmovable devices. We are isolating our self rom the wires and we are becoming the habitual of wireless world what makes the device portable? I guess physical dimensions (mechanical) of that particular device, but along with this the electrical dimension is of the device is also of great importance. Reducing the physical dimension of the antenna would result in the small antenna but not electrically small antenna. We have different definition for the electrically small antenna but the one which is most appropriate is, where k is the wave number and is equal to and a is the radius of the imaginary sphere circumscribing the maximum dimension of the antenna. As the present day electronic devices progress to diminish in size, technocrats have become increasingly concentrated on electrically small antenna (ESA) designs to reduce the size of the antenna in the overall electronics system. Researchers in many fields, including RF and Microwave, biomedical technology and national intelligence, can benefit from electrically small antennas as long as the performance of the designed ESA meets the system requirement.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
"Impact of front-end architecture on development cost", Viktor TurskyiFwdays
I have heard many times that architecture is not important for the front-end. Also, many times I have seen how developers implement features on the front-end just following the standard rules for a framework and think that this is enough to successfully launch the project, and then the project fails. How to prevent this and what approach to choose? I have launched dozens of complex projects and during the talk we will analyze which approaches have worked for me and which have not.
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
ISSN: 2278 – 1323
International Journal of Advanced Research in Computer Engineering & Technology (IJARCET)
Volume 2, Issue 6, June 2013
1939
www.ijarcet.org
DATA SHARING IN THE CLOUD USING DISTRIBUTED ACCOUNTABILITY
Epuru Madhavarao1, M Parimala2, Chikkala JayaRaju3
ABSTRACT: Cloud computing is a rapidly growing technology, now used by a large number of people. It enables highly scalable services to be easily consumed over the Internet on an as-needed basis. A major feature of cloud services is that users' data are usually processed remotely on machines that users do not own or operate. While enjoying the convenience brought by this emerging technology, users' fears of losing control of their own data (particularly financial and health data) can become a significant barrier to the wide adoption of cloud services. To address this problem, this paper provides an effective accountability framework to keep track of the actual usage of users' data in the cloud. In particular, we propose an object-centered approach that encloses our logging mechanism together with users' data and policies. Accountability is the checking of authorization policies and is important for transparent data access. We provide automatic logging mechanisms implemented in JAR files, which improve the security and privacy of data in the cloud. To strengthen users' control, we also provide distributed auditing mechanisms, as well as a secure JVM that offers strong security guarantees to users. We present extensive experimental studies that demonstrate the efficiency and effectiveness of the proposed approaches.
Keywords: cloud computing, logging, auditability, accountability, data sharing, secure JVM.
1. INTRODUCTION:
Cloud computing is the delivery of computing resources (hardware and software) as a service over a network. The name comes from the cloud-shaped symbol used in system diagrams as an abstraction for the complex infrastructure it contains. Cloud computing entrusts remote services with a user's data, software, and computation. Fig. 1 shows an overview of cloud computing. Cloud computing presents a new way to supplement the current consumption and delivery model for IT services based on the Internet, by providing dynamically scalable and often virtualized resources as a service over the Internet. Today most people access large volumes of data from clouds, yet the wide adoption of cloud services has not been matched by adequate security. In this paper we provide accountability and a secure JVM; together, these two mechanisms provide the required security.
Fig. 1. Overview of cloud computing.
The architectural service layers of cloud computing are:
Software as a Service (SaaS): a software deployment model whereby a provider licenses an application to customers for use as a service on demand.
Examples: Google Apps, Salesforce.com, social networks.
Platform as a Service (PaaS): optimized IT and developer tools offered for database and testing environments.
Examples: MS Azure, operating systems, infrastructure scaling.
Infrastructure as a Service (IaaS): on-demand, highly scalable computing, storage, and hosting services.
Examples: mainframes, storage.
Cloud computing, as a fast-growing technology, provides many scalable services. It moves users' data to centralized large data centers, where the management of the data and services may not be fully trustworthy. Users may not know which machines actually process and host their data in a cloud environment, and they start worrying about losing control of their own data. The data processed on clouds are often outsourced, leading to a number of accountability issues, including the handling of personally identifiable information. Such fears are becoming a significant barrier to the wide adoption of cloud services [1]. It is essential to provide an effective mechanism for users to monitor the usage of their data in the cloud. For example, users need to be able to ensure that their data are handled according to the service level
agreements made at the time they sign on for services in the
cloud.
Conventional access control approaches developed for closed domains such as databases and operating systems, or approaches using a centralized server in distributed environments, are not suitable, due to the following features characterizing cloud environments. First, data handling can be outsourced by the direct cloud service provider (CSP) to other entities in the cloud, and these entities can in turn delegate the tasks to others, and so on. Second, entities are allowed to join and leave the cloud in a flexible manner. As a result, data handling in the cloud goes through a complex and dynamic hierarchical service chain which does not exist in conventional environments.
To overcome the above problems, we propose a novel approach, namely the Cloud Information Accountability (CIA) framework, based on the notion of information accountability [3]. Unlike privacy protection technologies, which are built on the hide-it-or-lose-it perspective, information accountability focuses on keeping data usage transparent and trackable. Our proposed CIA framework provides end-to-end accountability in a highly distributed fashion. One of the main innovative features of the CIA framework lies in its ability to maintain lightweight yet powerful accountability that combines aspects of access control, usage control, and authentication. By means of the CIA, data owners can track not only whether or not the service-level agreements are being honored, but also enforce access and usage control rules as needed. Associated with the accountability feature, we also develop two distinct modes for auditing: push mode and pull mode. The push mode refers to logs being periodically sent to the data owner or stakeholder, while the pull mode refers to an alternative approach whereby the user (or another authorized party) can retrieve the logs as needed.
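As an illustration only (not the authors' implementation), the two auditing modes can be sketched in Java as follows; the LogHarmonizer class and its method names are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Timer;
import java.util.TimerTask;

// Hypothetical sketch of the two auditing modes: push (periodic) and pull (on demand).
class LogHarmonizer {
    private final List<String> records = new ArrayList<>();

    synchronized void append(String record) {
        records.add(record);
    }

    // Pull mode: an authorized party retrieves the accumulated logs on demand.
    synchronized List<String> pull() {
        List<String> copy = new ArrayList<>(records);
        records.clear();
        return copy;
    }

    // Push mode: batches of logs are periodically sent to the data owner.
    void startPush(long periodMillis, java.util.function.Consumer<List<String>> owner) {
        Timer timer = new Timer(true); // daemon timer
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override public void run() {
                List<String> batch = pull();
                if (!batch.isEmpty()) owner.accept(batch);
            }
        }, periodMillis, periodMillis);
    }
}

public class AuditDemo {
    public static void main(String[] args) {
        LogHarmonizer h = new LogHarmonizer();
        h.append("alice READ file.txt 2013-06-01T10:00:00Z");
        h.append("bob WRITE file.txt 2013-06-01T10:05:00Z");
        // Pull mode: the data owner fetches logs explicitly.
        List<String> logs = h.pull();
        System.out.println(logs.size()); // prints 2
    }
}
```

In this sketch, push mode is simply pull mode driven by a timer; the real framework would authenticate the requesting party before releasing any records.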
Fig. 2. Cloud computing services.
The above Fig. 2 shows cloud computing services. There are a number of notable commercial and individual cloud computing services, such as Amazon, Google, Microsoft, Yahoo, and Salesforce [2].
2. EXISTING PROBLEM STATEMENT:
Conventional access control approaches developed for closed domains such as databases and operating systems, or approaches using a centralized server in distributed environments, are not suitable, for the following two reasons.
First, data handling can be outsourced by the direct cloud service provider (CSP) to other entities in the cloud, and these entities can in turn delegate the tasks to others, and so on.
Second, entities are allowed to join and leave the cloud in a flexible manner.
As a result, data handling in the cloud goes through a complex and dynamic hierarchical service chain which does not exist in conventional environments [4].
In one existing system, the user's private data are sent to the cloud in an encrypted form, and the processing is done on the encrypted data. The output of the processing is de-obfuscated by the privacy manager to reveal the correct result [5][6]. However, the privacy manager provides only limited features, in that it does not guarantee protection once the data have been disclosed. Another existing system is an agent-based system specific to grid computing: distributed jobs, along with the resource consumption at local machines, are tracked by static software agents. However, it is mainly focused on resource consumption and on tracking sub-jobs processed at multiple computing nodes, rather than on access control.
The drawbacks of the existing systems are given below:
The conventional access control approach requires a dedicated authentication and storage system.
The user cannot obtain information regarding the usage or access of their data by other users.
It requires third-party services to complete the monitoring, and it focuses on lower-level monitoring of system resources.
3. PROPOSED SYSTEM:
The Cloud Information Accountability framework proposed in this work conducts automated logging and distributed auditing of relevant access performed by any entity, carried out at any point of time at any cloud service provider [7]. It has two major components: the logger and the log harmonizer. The JAR file includes a set of simple access control rules specifying whether and how the cloud servers, and possibly other data stakeholders, are authorized to access the content itself. In addition, we check the integrity of the JRE on the systems on which the logger component is initiated. These integrity checks are carried out using oblivious hashing. The proposed methodology also protects the JAR file by converting the JAR into obfuscated code, which adds an additional layer of security to the infrastructure. Furthermore, we extend the security of users' data with provable data possession for integrity verification. Depending on the configuration settings defined at the time of creation, the JAR will provide usage control associated with logging, or will provide only logging functionality. As
for the logging, each time there is an access to the data, the JAR will automatically generate a log record. In the proposed system, we also provide a secure JVM, which offers strong security guarantees to data owners. A Java virtual machine (JVM) is a virtual machine that can execute Java byte code. It is the code execution component of the Java platform. Sun Microsystems has stated that there are over 5.5 billion JVM-enabled devices.
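As a hedged illustration of how such a logger could make its records tamper-evident (the paper does not specify this exact scheme; the class and method names below are hypothetical), each record can carry a SHA-256 hash chained to its predecessor:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: each log record stores a SHA-256 hash chained to the
// previous record, so deleting or altering any record breaks the chain.
class ChainedLogger {
    private final List<String> records = new ArrayList<>();
    private String lastHash = "GENESIS";

    static String sha256(String s) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] d = md.digest(s.getBytes(StandardCharsets.UTF_8));
        StringBuilder sb = new StringBuilder();
        for (byte b : d) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    // Append an access record, chaining its hash to the previous one.
    void log(String entry) throws Exception {
        lastHash = sha256(lastHash + "|" + entry);
        records.add(entry + "|" + lastHash);
    }

    // Recompute the chain from the start; returns false on any tampering.
    boolean verify() throws Exception {
        String h = "GENESIS";
        for (String rec : records) {
            int cut = rec.lastIndexOf('|');
            String entry = rec.substring(0, cut);
            h = sha256(h + "|" + entry);
            if (!h.equals(rec.substring(cut + 1))) return false;
        }
        return true;
    }
}
```

Because every hash depends on all earlier entries, an attacker cannot remove or rewrite a record without the verification in `verify()` failing.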
A Java virtual machine is a program which executes certain other programs, namely those containing Java byte code instructions. JVMs are most often implemented to run on an existing operating system, but can also be implemented to run directly on hardware. A JVM provides a run-time environment in which Java byte code can be executed, enabling features such as automated exception handling, which provides root-cause debugging information for every software error (exception). A JVM is distributed along with the Java Class Library, a set of standard class libraries (in Java byte code) that implement the Java application programming interface (API). These libraries, bundled together with the JVM, form the Java Runtime Environment (JRE).
Fig. 3. Overview of the proposed system.
JVMs are available for many hardware and software platforms. The use of the same byte code for all JVMs on all platforms allows Java to be described as a write once, run anywhere programming language, versus write once, compile anywhere, which describes cross-platform compiled languages. Thus, the JVM is a crucial component of the Java platform. Java byte code is an intermediate language which is typically compiled from Java, but it can also be compiled from other programming languages. For example, Ada source code can be compiled to Java byte code and executed on a JVM.
Oracle Corporation, the owner of the Java trademark, produces the most widely used JVM, named HotSpot, which is written in the C++ programming language. JVMs using the Java trademark may also be developed by other companies, as long as they adhere to the JVM specification published by Oracle Corporation and to related contractual obligations.
4. MODULES:
The proposed system consists of five major modules:
4.1. DATA OWNER MODULE
4.2. JAR CREATION MODULE
4.3. CLOUD SERVICE PROVIDER MODULE
4.4. DISASSEMBLING ATTACK
4.5. MAN-IN-THE-MIDDLE ATTACK
4.1. DATA OWNER MODULE:
In this module, the data owner uploads their data to the cloud server. New users can register with the service provider and create an account, so that they can securely upload and store their files. For security, the data owner encrypts each data file before storing it in the cloud. The data owner is capable of manipulating the encrypted data file and can set the access privileges on it. To allay users' concerns, it is essential to provide an effective mechanism for users to monitor the usage of their data in the cloud. For example, users need to be able to ensure that their data are handled according to the service level agreements made at the time they sign on for services in the cloud.
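The paper does not name a cipher; purely as an illustrative sketch, a data owner could encrypt a file with AES-GCM before upload (the algorithm choice and helper names below are assumptions, not the authors' design):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;
import java.util.Arrays;

// Illustrative only: encrypt the owner's data with AES-GCM before uploading,
// so the cloud service provider never sees the plaintext.
public class OwnerEncryption {
    static SecretKey newKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        return kg.generateKey();
    }

    static byte[] encrypt(SecretKey key, byte[] iv, byte[] plain) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return c.doFinal(plain);
    }

    static byte[] decrypt(SecretKey key, byte[] iv, byte[] ct) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return c.doFinal(ct);
    }

    public static void main(String[] args) throws Exception {
        SecretKey key = newKey();           // kept by the data owner, never uploaded
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);   // fresh IV per file
        byte[] plaintext = "patient record #42".getBytes("UTF-8");
        byte[] ciphertext = encrypt(key, iv, plaintext); // this is what the cloud stores
        System.out.println(Arrays.equals(plaintext, decrypt(key, iv, ciphertext))); // prints true
    }
}
```

GCM is chosen here because it also authenticates the ciphertext, so undetected modification of the stored file fails at decryption time.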
4.2. JAR CREATION MODULE
In this module we create a JAR file for every uploaded file. A user must have the corresponding JAR file in order to download the file; in this way the data is secured. The logging should be decentralized in order to adapt to the dynamic nature of the cloud. More specifically, log files should be tightly bound to the corresponding data being controlled, and should require minimal infrastructural support from any server. Every access to the user's data should be correctly and automatically logged. This requires integrated techniques to authenticate the entity who accesses the data, and to verify and record the actual operations on the data as well as the time at which the data have been accessed. Log files should be reliable and tamper-proof to avoid illegal insertion, deletion, and modification by malicious parties. Recovery mechanisms are also desirable to restore log files damaged by technical problems. The proposed technique should not intrusively monitor data recipients' systems, nor should it introduce heavy communication and computation overhead, which would otherwise hinder its feasibility and adoption in practice.
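As a minimal sketch of bundling data and policy into a JAR (the paper does not specify entry names or the logger's packaging; the names below are hypothetical), the standard java.util.jar API suffices:

```java
import java.io.ByteArrayOutputStream;
import java.util.jar.JarEntry;
import java.util.jar.JarOutputStream;

// Illustrative only: bundle the encrypted data file and its access-control
// policy into a single JAR, as one self-contained shareable object.
public class JarBundler {
    static byte[] bundle(byte[] encryptedData, String policy) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (JarOutputStream jar = new JarOutputStream(buf)) {
            jar.putNextEntry(new JarEntry("data.enc"));   // the owner's encrypted file
            jar.write(encryptedData);
            jar.closeEntry();
            jar.putNextEntry(new JarEntry("policy.txt")); // simple access-control rules
            jar.write(policy.getBytes("UTF-8"));
            jar.closeEntry();
        }
        return buf.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        byte[] jarBytes = bundle(new byte[]{1, 2, 3}, "allow: owner; deny: *");
        System.out.println(jarBytes.length > 0); // prints true
    }
}
```

In the full framework the JAR would additionally contain the logger's class files, so that policy checking and logging travel with the data itself.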
4.3. CLOUD SERVICE PROVIDER MODULE
The cloud service provider manages a cloud to provide data storage services. Data owners encrypt their data files and store them in the cloud, with a JAR file created for each file, for sharing with data consumers. To access the shared data files, data consumers download the encrypted data files of their interest from the cloud and then decrypt them.
4.4. DISASSEMBLING ATTACK
In this module we show how our system withstands attempts to disassemble the JAR file of the logger and then extract useful
information out of it or spoil the log records in it. Given the
ease of disassembling JAR files, this attack poses one of
the most serious threats to our architecture. Since we
cannot prevent an attacker to gain possession of the JARs,
we rely on the strength of the cryptographic schemes
applied[9]to preserve the integrity and confidentiality of the
logs. Once the JAR files are disassembled, the attacker is in
possession of the public IBE key used for encrypting the
log files, the encrypted log file itself, and the *.class files.
Therefore, the attacker has to rely on learning the private
key or subverting the encryption to read the log records. To
compromise the confidentiality of the log files, the attacker
may try to identify which encrypted log records correspond
to his actions by mounting a chosen plaintext attack to
obtain some pairs of encrypted log records and plain texts.
However, the adoption of the Weil Pairing algorithm
ensures that the CIA framework has both chosen cipher text
security and chosen plaintext security in the random oracle
model. Therefore, the attacker will not be able to decrypt
any data or log files in the disassembled JAR file. Even if
the attacker is an authorized user, he can only access the
actual content file but he is not able to decrypt any other
data, including the log files, which are viewable only to the
data owner. From the disassembled JAR files, the
attackers are not able to directly view the access control
policies either, since the original source code is not
included in the JAR files. If the attacker wants to infer
access control policies, the only possible way is through
analyzing the log file. This is, however, very hard to
accomplish since, as mentioned earlier, log records are
encrypted and breaking the encryption is computationally
hard. Also, the attacker cannot modify the log files
extracted from a disassembled JAR. Should the attacker
erase or tamper with a record, the integrity checks added to each
record of the log will fail at the time of verification,
revealing the error. Similarly, attackers cannot
write fake records to the log files without being detected,
since they would need to sign with a valid key and the chain
of hashes would no longer match.
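The tamper-evidence argument above can be illustrated with a small sketch. The class and record format below are our own illustration, not the paper's actual implementation: each log entry stores a SHA-256 digest chained to its predecessor's digest, so erasing or altering any record breaks the chain at verification time.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;

public class HashChainLog {
    // Each entry keeps the record text plus a digest chained to its predecessor.
    static final class Entry {
        final String record;
        final String digest;   // hex SHA-256 of (previousDigest + record)
        Entry(String record, String digest) { this.record = record; this.digest = digest; }
    }

    private final List<Entry> entries = new ArrayList<>();

    // Append a record, chaining its hash to the previous entry's digest.
    public void append(String record) {
        String prev = entries.isEmpty() ? "" : entries.get(entries.size() - 1).digest;
        entries.add(new Entry(record, sha256(prev + record)));
    }

    // Recompute the whole chain; any erased or modified record breaks it.
    public boolean verify() {
        String prev = "";
        for (Entry e : entries) {
            if (!sha256(prev + e.record).equals(e.digest)) return false;
            prev = e.digest;
        }
        return true;
    }

    // Simulate an attacker rewriting record i without being able to re-sign it.
    public void tamper(int i, String fake) {
        Entry old = entries.get(i);
        entries.set(i, new Entry(fake, old.digest));
    }

    private static String sha256(String s) {
        try {
            byte[] d = MessageDigest.getInstance("SHA-256")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : d) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (Exception ex) { throw new IllegalStateException(ex); }
    }
}
```

In the real framework the chain is additionally signed; the sketch shows only why a modified or erased record fails verification.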
4.5. MAN-IN-THE-MIDDLE ATTACK
In this module, an attacker may intercept messages
during the authentication of a service provider with the
certificate authority, and replay the messages in order to
masquerade as a legitimate service provider. There are two
points in time that the attacker can replay the messages.
One is after the actual service provider has completely
disconnected and ended a session with the certificate
authority. The other is when the actual service provider is
disconnected but the session is not over, so the attacker
may try to renegotiate the connection. The first type of
attack will not succeed since the certificate typically has a
time stamp which will become obsolete at the time point of
reuse. The second type of attack will also fail since
renegotiation is banned in the latest version of OpenSSL
and cryptographic checks have been added.
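The first defense above rests on a freshness check against the certificate's timestamp. The following sketch is our own illustration (the class name and the five-minute window are assumptions, not the paper's implementation): a verifier rejects any authentication message whose timestamp is stale or from the future, so a replay after the session has ended is refused.

```java
public class FreshnessCheck {
    // Maximum age (ms) after which an authentication message is considered stale.
    static final long MAX_AGE_MS = 5 * 60 * 1000;

    // Accept a message only if its timestamp is recent and not from the future.
    public static boolean isFresh(long messageTimestampMs, long nowMs) {
        long age = nowMs - messageTimestampMs;
        return age >= 0 && age <= MAX_AGE_MS;
    }
}
```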
5. CLOUD INFORMATION ACCOUNTABILITY (CIA):
In this section, we present an overview of the
Cloud Information Accountability framework and discuss
how the CIA framework [8] meets the design requirements
discussed in the previous section. The Cloud Information
Accountability framework proposed in this work conducts
automated logging and distributed auditing of relevant
access performed by any entity, carried out at any point of
time at any cloud service provider.
It has two major components: logger and log harmonizer.
5.1. Major Components:
There are two major components of the CIA, the
first being the logger, and the second being the log
harmonizer. The logger is the component which is strongly
coupled with the user’s data, so that it is downloaded when
the data are accessed, and is copied whenever the data are
copied. It handles a particular instance or copy of the user’s
data and is responsible for logging access to that instance or
copy. The log harmonizer forms the central component
which allows the user access to the log files.
The logger is strongly coupled with user’s data
(either single or multiple data items). Its main tasks include
automatically logging access to data items that it contains,
encrypting log records using the public key of the
content owner, and periodically sending them to the log
harmonizer. It may also be configured to ensure that access
and usage control policies associated with the data are
honored. For example, a data owner can specify that user X
is only allowed to view but not to modify the data. The
logger will control the data access even after it is
downloaded by user X.
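The view-only policy described above can be sketched in a few lines. This is our own illustrative code, not the actual CIA logger: the logger consults the owner-specified policy before serving each operation, and appends a log record whether access is granted or denied.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PolicyLogger {
    public enum Action { VIEW, MODIFY }

    // Owner-specified policy: which actions each user may perform.
    private final Map<String, List<Action>> policy = new HashMap<>();
    // Access log kept alongside the data item the logger travels with.
    private final List<String> log = new ArrayList<>();

    public void allow(String user, Action action) {
        policy.computeIfAbsent(user, u -> new ArrayList<>()).add(action);
    }

    // Check the policy, record the attempt, and grant or deny access.
    public boolean access(String user, Action action) {
        boolean granted = policy.getOrDefault(user, List.of()).contains(action);
        log.add(user + " " + action + " " + (granted ? "GRANTED" : "DENIED"));
        return granted;
    }

    public List<String> records() { return log; }
}
```

In the framework itself the log records would additionally be encrypted with the content owner's public key before leaving the logger.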
The logger requires only minimal support from the
server (e.g., a valid Java virtual machine installed) in order
to be deployed. The tight coupling between data and
logger results in a highly distributed logging system,
thereby meeting our first design requirement.
Furthermore, since the logger does not need to be installed
on any system or require any special support from the
server, it is not very intrusive in its actions, thus satisfying
our fifth requirement. Finally, the logger is also responsible
for generating the error-correction information for each log
record and sending it to the log harmonizer. The error-
correction information combined with the encryption and
authentication mechanism provides a robust and reliable
recovery mechanism, therefore meeting the third
requirement. The log harmonizer is responsible for
auditing. Being the trusted component, the log harmonizer
generates the master key. It holds on to the decryption key
for the IBE key pair, as it is responsible for decrypting the
logs. Alternatively, the decryption can be carried out on the
client end if the path between the log harmonizer and the
client is not trusted. In this case, the harmonizer sends the
key to the client in a secure key exchange.
It supports two auditing strategies: push and pull.
Under the push strategy, the log file is pushed back to the
data owner periodically in an automated fashion. The pull
mode is an on-demand approach, whereby the log file is
obtained by the data owner as often as requested. These
two modes allow us to satisfy the aforementioned fourth
design requirement. In case there exist multiple loggers for
the same set of data items, the log harmonizer will merge
log records from them before sending back to the data
owner. The log harmonizer is also responsible for handling
log file corruption. The log harmonizer can also
itself carry out logging in addition to auditing. Separating
the logging and auditing functions improves the
performance. The logger and the log harmonizer are both
implemented as lightweight and portable JAR files. The
JAR file implementation provides automatic logging
functions, which meets the second design requirement.
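When multiple loggers exist for the same data items, the harmonizer's merge step mentioned above can be sketched as a timestamp-ordered merge. The record shape below is our own assumption for illustration; the real framework stores richer, encrypted fields.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class LogMerger {
    // A record as (timestamp, text); a stand-in for the framework's log entries.
    static final class Record {
        final long timestamp;
        final String text;
        Record(long timestamp, String text) { this.timestamp = timestamp; this.text = text; }
    }

    // Merge records from several loggers into one timestamp-ordered stream
    // before the harmonizer sends them back to the data owner.
    public static List<Record> merge(List<List<Record>> perLogger) {
        List<Record> all = new ArrayList<>();
        for (List<Record> l : perLogger) all.addAll(l);
        all.sort(Comparator.comparingLong(r -> r.timestamp));
        return all;
    }
}
```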
5.1.1. Advantages:
One of the main innovative features of the CIA
framework lies in its ability to maintain lightweight and
powerful accountability that combines aspects of access
control, usage control, and authentication. It provides
defenses against man-in-the-middle attacks, dictionary
attacks, disassembling attacks, compromised-JVM attacks,
and data leakage attacks. Provable data possession (PDP)
allows users to remotely verify the integrity of their data,
and the approach is suitable for both small-scale and
large-scale storage.
5.2. Algorithm of Log Retrieval for Push and Pull
mode:
The push and pull strategies have interesting
tradeoffs. The push strategy is beneficial when there are
a large number of accesses to the data within a short period
of time. The pull strategy is most needed when the data
owner suspects some misuse of his data; the pull mode
allows him to monitor the usage of his content
immediately. Supporting both push and pull modes
helps protect against some nontrivial attacks. The
algorithm presents logging and synchronization steps with
the harmonizer.
The log retrieval algorithm for the push and pull modes is shown in Fig. 3.
Fig. 3. Push and pull algorithm.
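Since Fig. 3 is not reproduced here, the sketch below (our own, with hypothetical names) conveys the idea: in push mode the logger automatically flushes its accumulated records to the harmonizer once a threshold is reached, while in pull mode the data owner's request triggers an immediate flush and retrieval.

```java
import java.util.ArrayList;
import java.util.List;

public class PushPullLogger {
    private final List<String> buffer = new ArrayList<>();
    private final int pushThreshold;          // push once this many records accumulate
    private final List<String> harmonizer;    // stands in for the remote log harmonizer

    public PushPullLogger(int pushThreshold, List<String> harmonizer) {
        this.pushThreshold = pushThreshold;
        this.harmonizer = harmonizer;
    }

    // Record an access; push automatically once the threshold is reached.
    public void log(String record) {
        buffer.add(record);
        if (buffer.size() >= pushThreshold) push();
    }

    // Push mode: flush buffered records to the harmonizer (here by size;
    // the framework can equally push on a timer).
    private void push() {
        harmonizer.addAll(buffer);
        buffer.clear();
    }

    // Pull mode: the data owner demands the current records immediately.
    public List<String> pull() {
        push();
        return new ArrayList<>(harmonizer);
    }
}
```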
5.3. Tools used for implementing the cloud
In the proposed model we are using the following tools:
5.3.1. Eucalyptus Cloud
The Eucalyptus Cloud platform is open source software for
building AWS-compatible private and hybrid clouds.
Eucalyptus supports the Amazon Web Services EC2 and S3
interfaces. It pools together existing virtualized
infrastructure to create cloud resources for compute,
network, and storage.
5.3.2. Amazon EC2
Amazon Elastic Compute Cloud (EC2) is a central part of
Amazon.com’s cloud computing platform, Amazon Web
Services (AWS). EC2 allows users to rent virtual
computers on which to run their own computer
applications. These tools are designed for the control and
management of VM instances, EBS volumes, elastic IPs,
and security groups, and should work well with both EC2
and Eucalyptus [10].
6. CONCLUSION:
This paper presents an effective mechanism that
performs automatic authentication of users and creates a log
record of each data access by a user. The data owner can
audit his content on the cloud and obtain confirmation
that his data are safe on the cloud. The data owner is also
able to detect duplication of his data made without his
knowledge. With this mechanism, the data owner need not
worry about his data on the cloud, and data usage becomes
transparent.
In future work, we would like to develop a cloud on
which we will install the JRE and JVM to perform the
authentication of the JARs, and to try to improve the security
of stored data and reduce the log record generation time.
REFERENCES:
[1] S. Pearson and A. Charlesworth, “Accountability as a
Way Forward for Privacy Protection in the Cloud,”
Proc. First Int’l Conf. Cloud Computing, 2009.
[2] P.T. Jaeger, J. Lin, and J.M. Grimes, “Cloud Computing
and Information Policy: Computing in a Policy Cloud?,”
J. Information Technology and Politics, vol. 5, no. 3,
pp. 269-289, 2009.
[3] D.J. Weitzner, H. Abelson, T. Berners-Lee, J. Feigenbaum,
J. Hendler, and G.J. Sussman, “Information Accountability,”
Comm. ACM, vol. 51, no. 6, pp. 82-87, 2008.
[4] S. Sundareswaran, A.C. Squicciarini, and D. Lin,
“Ensuring Distributed Accountability for Data Sharing in
the Cloud,” IEEE Trans. Dependable and Secure Computing,
vol. 9, no. 4, July/Aug. 2012.
[5] S. Pearson, Y. Shen, and M. Mowbray, “A Privacy
Manager for Cloud Computing,” Proc. Int’l Conf. Cloud
Computing (CloudCom), pp. 90-106, 2009.
[6] T. Mather, S. Kumaraswamy, and S. Latif, Cloud Security
and Privacy: An Enterprise Perspective on Risks and
Compliance (Theory in Practice), first ed., O’Reilly, 2009.
[7] N. Bose and G. Manimala, “Secure Framework for Data
Sharing in Cloud Computing Environment,” www.ijetae.com,
vol. 3, special issue 1, Jan. 2013.
[8] S. Sundareswaran, A.C. Squicciarini, and D. Lin,
“Ensuring Distributed Accountability for Data Sharing in
the Cloud,” IEEE Trans. Dependable and Secure Computing,
vol. 9, no. 4, July/Aug. 2012.
[9] D. Boneh and M.K. Franklin, “Identity-Based Encryption
from the Weil Pairing,” Proc. Int’l Cryptology Conf.
Advances in Cryptology (CRYPTO), pp. 213-229, 2001.
[10] Eucalyptus Systems, http://www.eucalyptus.com/, 2012.
Author’s Profile:
EPURU MADHAVARAO received the Bachelor of
Engineering degree in Information Technology from NIET,
Andhra Pradesh, India. He is now pursuing the M.Tech in the
Department of Computer Science and Engineering at
Vignan's Lara Institute of Technology & Science,
Vadlamudi, affiliated to JNTUK, Guntur, A.P., India.
M PARIMALA is currently serving as an Assistant
Professor (CSE) at Vignan's Lara Institute of Technology
& Science, Vadlamudi, Guntur, A.P., India.
CHIKKALA JAYARAJU received the Bachelor of
Engineering degree in Information Technology from QIS
Engineering College, Andhra Pradesh, India. He is now
pursuing the M.Tech in the Department of Computer Science
and Engineering at Vignan's Lara Institute of Technology &
Science, Vadlamudi, affiliated to JNTUK, Guntur, A.P., India.