The document is a seminar report submitted by Alin Babu on the topic of secure cloud storage. It discusses cloud computing models and services and different access control mechanisms, such as mandatory access control (MAC) and discretionary access control (DAC). It also covers secure data storage using AES encryption and the Disintegration Protocol (DIP) architecture for enhancing cloud security. Proxy re-encryption schemes and their advantages for secure file sharing in cloud applications such as Dropbox and SugarSync are also summarized.
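The discretionary access control (DAC) idea mentioned above can be sketched as a small access-control-list check. The `acl` table, subject names and `check_access` helper below are illustrative, not taken from the report:

```python
# Discretionary access control: each object's owner decides who may do what.
# The ACL maps an object to the set of actions each subject is granted.
acl = {
    "report.pdf": {"alice": {"read", "write"}, "bob": {"read"}},
}

def check_access(subject: str, obj: str, action: str) -> bool:
    """Grant access only if the object's ACL lists the action for the subject."""
    return action in acl.get(obj, {}).get(subject, set())

check_access("alice", "report.pdf", "write")  # True
check_access("bob", "report.pdf", "write")    # False
```

Under DAC, the owner edits the `acl` entry directly; under MAC, by contrast, a system-wide policy rather than the owner would decide what goes in the table.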
SECRY - Secure file storage on cloud using hybrid cryptography (Alin Babu)
Final-year B.Tech CSE project presentation, APJ Abdul Kalam Technological University.
About the project
Cloud computing has become a major trend: it is a data-hosting technology that has grown very popular in recent years. In this project, we are developing a web application that securely stores files on a cloud server. We propose a system that uses a hybrid cryptography technique to store data securely in the cloud. Deployed in a cloud environment, the hybrid approach makes the remote server more secure and helps users place more trust in the cloud with their data. For data security and privacy protection, the fundamental challenges of separating sensitive data and enforcing access control are met. Cryptography translates the original data into an unreadable format using keys, so only authorized persons can access the data on the cloud server.
We provide cloud storage that uses multiple cryptographic techniques, known as hybrid cryptography. The product provides confidentiality by securing both upload and download, and the data is protected through multi-level security techniques and multiple storage servers.
"Cloud" here refers to where the data lives, and encryption to securing it. In this presentation you can learn about the various encryption algorithms used to secure the data.
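The slides do not spell out the exact hybrid scheme, so the following is a minimal sketch of the usual pattern, assuming the third-party `cryptography` package: a fresh AES-GCM key encrypts the file, and the recipient's RSA public key wraps that AES key, so only the private-key holder can recover the file.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Recipient's long-lived RSA key pair (the private key never leaves the client).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def hybrid_encrypt(plaintext: bytes):
    aes_key = AESGCM.generate_key(bit_length=256)    # fresh symmetric key per file
    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, plaintext, None)
    wrapped_key = public_key.encrypt(aes_key, OAEP)  # RSA wraps only the small key
    return wrapped_key, nonce, ciphertext

def hybrid_decrypt(wrapped_key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    aes_key = private_key.decrypt(wrapped_key, OAEP)
    return AESGCM(aes_key).decrypt(nonce, ciphertext, None)
```

Storing only `wrapped_key`, `nonce` and `ciphertext` on the server means the provider never sees the plaintext or the unwrapped AES key.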
Secure your applications with Azure AD and Key Vault (Davide Benvegnù)
Developers like the productivity of the Azure platform, and with Azure Key Vault and Azure AD we can easily protect secrets such as DocumentDB, Media Services or Azure Batch keys in Key Vault and apply granular policies defining who can access them.
In this session we will see how to adopt a secure approach to managing application secrets using Azure Key Vault, Azure Active Directory and certificate-based principals.
Blockchain Technology for Patients' Medical Records (eHealth Forum)
Med-iFile uses blockchain technology & cryptographic processes to provide a unique infrastructure to patients’ medical records. We aim at creating a nationwide database and communications framework for the medical sector. Under the proposed technological framework, we can ensure data integrity, protect the privacy of sensitive data & enhance the capabilities of clinical research.
Med-iFile team:
George Efthymiou, Sotiria Kalivi, Fotis Papastergiou, Christos Martinis, Nikos Drakopoulos
The Impact and Potential of Blockchain on the Banking Sector (PECB)
This session will explore how blockchain technology can solve the four major pain points in financial technology: high maintenance and support costs, outdated IT systems, the need for manual reconciliation, and systems that don't "talk" to each other.
Main points covered:
• What Is Blockchain technology?
• What Blockchain technology can do?
• What Blockchain technology cannot do?
• Anatomy of a Blockchain solution
• Ideal Blockchain use cases for banks
Presenter:
Our presenter for this webinar, Rohas Nagpal, is a Blockchain evangelist and Chief Blockchain Architect of Primechain Technologies Pvt. Ltd. Rohas comes from a cybercrime investigation and security background and has been working in that field since the mid-1990s. He co-founded the Asian School of Cyber Laws in 1999 and has investigated cybercrimes and data breaches for hundreds of organizations across most industry and government sectors. He has assisted the Government of India in framing draft rules and regulations under the Information Technology Act.
Organizer: Ardian Berisha
Date: April 18th, 2018
Defines a framework for an authentication service using the X.500 directory, which serves as a repository of public-key certificates. The framework is based on the use of public-key cryptography and digital signatures.
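The digital-signature primitive behind such certificates can be illustrated in a few lines. This sketch assumes the third-party `cryptography` package and uses Ed25519 rather than whatever algorithm a given certificate authority would actually choose; the certificate contents are a made-up placeholder:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# A CA signs certificate contents with its private key; relying parties
# verify with the CA's public key obtained from the directory.
ca_key = ed25519.Ed25519PrivateKey.generate()
cert_contents = b"subject=CN=alice, public-key=..."

signature = ca_key.sign(cert_contents)
ca_key.public_key().verify(signature, cert_contents)  # passes silently when valid

try:
    ca_key.public_key().verify(signature, b"subject=CN=mallory")
except InvalidSignature:
    pass  # any tampering with the signed contents is detected
```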
Ignou MCA 4th-semester mini-project report on a college admission system. The project is based on the real working system of university seat allocation to affiliated colleges: the college admission system provides seat allocation for various UG and PG programmes in every academic session.
Banking is the first industry about to face the greatest change in its functioning because of the use of blockchain. The present banking system in India is not totally free from errors.
Blog: https://financebuddha.com/blog/guide-blockchain-technology
A blockchain, originally block chain, is a growing list of records, called blocks, that are linked using cryptography. Each block contains a cryptographic hash of the previous block, a timestamp, and transaction data. Blockchain is not merely a technology that may fade away; it is a concept that serves a wide variety of purposes and is one of the most trusted emerging technologies of the era. This is a small attempt to show how blockchain technology may revolutionize cloud platforms.
Project Link : https://github.com/vedantmane/images
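The hash-linking described above can be sketched in a few lines of standard-library Python (the block fields are illustrative):

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    """Build a block whose hash commits to its data, timestamp and predecessor."""
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def chain_is_valid(chain):
    """Each block must point at the previous block's actual hash."""
    return all(chain[i]["prev_hash"] == chain[i - 1]["hash"]
               for i in range(1, len(chain)))

genesis = make_block("genesis", "0" * 64)
chain = [genesis, make_block("tx: alice pays bob 5", genesis["hash"])]
chain_is_valid(chain)  # True; editing any earlier block breaks every later link
```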
This covers the hybrid cloud and the steps to implement it, starting from what cloud and hybrid cloud are through to implementation. Hybrid cloud is now adopted by many organisations, and transitioning a traditional IT setup to a hybrid cloud model is no small undertaking, so one should understand what it is and how it is implemented.
Combining blockchain with IoT in the supply chain can make the SCM process easier. Using micro IoT chips to monitor the movement of products, and a blockchain to store the tracking records, can provide proper supply chain management. This could also increase product usage and demand.
Developing a Healthcare Blockchain Solution (LeewayHertz)
The healthcare industry uses digital methods for maintaining electronic health records. From patients' personal information to diagnostic reports and doctors' prescriptions, healthcare organizations currently use centralized servers for saving various types of data. The servers are owned by private companies or health information exchanges.
Ensuring data storage security in cloud computing (Uday Wankar)
Cloud computing has been envisioned as the next-generation architecture of the IT enterprise.
In contrast to traditional solutions, where IT services are under proper physical, logical and personnel controls, cloud computing moves the application software and databases to large data centers, where the management of the data and services may not be fully trustworthy.
Moving data into the cloud offers great convenience to users, since they don't have to worry about the complexities of direct hardware management.
This document is a comprehensive analysis of all the ways that Identity and Access Management (IAM) solutions can be run in and integrate with cloud computing systems.
Both cloud computing and IAM are relatively new, so the first part of this document defines key concepts and terminology. Next, assumptions that clarify the scope of this document in terms of network topology and functionality are laid out, and finally a comprehensive list of architectural scenarios is presented, along with an analysis of the costs, risks and benefits of each.
Cloud computing is a flexible, cost-effective and proven delivery platform for providing business or
consumer IT services over the Internet. Cloud resources can be rapidly deployed and easily scaled, with all
processes, applications and services provisioned “on demand,” regardless of user location or device.
Cloud computing security through symmetric cipher model (ijcsit)
Cloud computing can be defined as applications and services that run on a distributed network using virtualized resources and are accessed through internet protocols and networking. Cloud computing resources are virtual and limitless, and the details of the physical systems on which the software runs are abstracted from the user. Cloud computing is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. Users need not have knowledge of, expertise in, or control over the technology infrastructure in the "cloud" that supports them. To satisfy users' needs, the concept incorporates technologies that share a common reliance on the internet: software and data are stored on servers, while cloud computing services are provided through online applications accessible from web browsers. Lack of security and access control is the major drawback of cloud computing, as users entrust sensitive data to public clouds, and multiple virtual machines in a cloud can expose insecure information flows at the service provider; security must therefore be built in when implementing the cloud. The main aim of this paper is thus to provide cloud computing security through a symmetric cipher model, which it proposes so that data can be accessed and stored securely.
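The paper's own cipher construction is not reproduced here. As a stand-in, the sketch below shows a symmetric cipher model using the third-party `cryptography` package's Fernet recipe (AES-CBC plus an HMAC), where a single shared key both encrypts and decrypts:

```python
from cryptography.fernet import Fernet

# The shared secret: whoever holds it can read the data,
# so it must never be stored alongside the ciphertext in the cloud.
key = Fernet.generate_key()
cipher = Fernet(key)

token = cipher.encrypt(b"sensitive record")  # authenticated ciphertext for the server
plaintext = cipher.decrypt(token)            # only key holders recover the data
```

Because the same key is used on both sides, the hard part of any symmetric scheme in a cloud setting is distributing and protecting that key, which is exactly what hybrid and key-aggregate designs elsewhere in this document address.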
Thanks to the advent of public and private clouds, both IT and business have become more agile – more able to quickly respond to fluctuating needs and demands in information processing. However, to achieve a fully agile infrastructure, businesses need to integrate their traditional IT with clouds in all their variants. Hybrid clouds provide that path forward.
For companies considering a hybrid cloud infrastructure, there are significant concerns, with security being number one. Companies must protect corporate data and applications, even as that data moves in a geographically distributed IT infrastructure. Simultaneously, they must ensure the security of data from point of capture at the edge to consumption and storage in the back end. A second concern is ease of infrastructure management and maintenance. This concern becomes more relevant as the number of vendors and management interfaces increase. A related concern has to do with simplifying management and maintenance with automation. For automation to succeed, it requires a policy-driven infrastructure. Finally, because businesses are ultimately looking for greater agility from hybrid clouds, another key concern is the ease of application development and application deployment to production.
For this paper, we used publicly available information to compare two major hybrid cloud technology and service companies: Cisco, through its hybrid cloud portfolio, and HP, through its Helion portfolio. Although it is difficult to pinpoint exactly where each vendor falls in the hybrid cloud spectrum, we can draw a few broad conclusions. The Cisco approach is network-centric and application-centric. The HP approach, on the other hand, is more infrastructure-centric, with an emphasis in developer support, and includes some elements to support the software development lifecycle. The differences between the two companies’ approaches are clearest in the question of security. From our research, it is clear that HP and Cisco are both strong contenders. Their offerings span compute, storage, and network for hybrid clouds and offer different approaches to and levels of security, automation, SDLC support, network virtualization, cloud management, workload mobility technologies, and more. Each company has its own specific target niche in enterprise cloud deployments.
As the interconnectivity between private and public clouds grows, the world of the hybrid cloud is quickly changing. We expect significant changes in the near future, not only in offerings from Cisco and HP but in the hybrid cloud ecosystem generally. We look forward to watching how Cisco, HP, and other cloud vendors adapt to the expansions and shifts in the future of the hybrid cloud.
IJERA (International Journal of Engineering Research and Applications) is an international online, ... peer-reviewed journal. For more details or to submit your article, please visit www.ijera.com
Cloud computing is an internet-based computing technology in which shared resources such as software, platforms, storage and information are provided to customers on demand. It is a computing platform for sharing resources that include infrastructure, software, applications, and business processes. The exact definition of cloud computing is: a large-scale distributed computing paradigm driven by economies of scale, in which a pool of abstracted, virtualized, dynamically scalable, managed computing power, storage, platforms, and services is delivered on demand to external customers over the Internet.
SURVEY ON KEY AGGREGATE CRYPTOSYSTEM FOR SCALABLE DATA SHARING (Editor IJMTER)
Public-key cryptosystems can produce constant-size ciphertexts with efficient delegation of decryption rights for any set of ciphertexts: one can aggregate any set of secret keys and make them as compact as a single key. The secret-key holder can release a constant-size aggregate key for flexible choices of ciphertext set in cloud storage. In KAC, users encrypt a message not only under a public key but also under an identifier of the ciphertext called its class, meaning the ciphertexts are further categorized into different classes. The key owner holds a master secret, called the master-secret key, which can be used to extract secret keys for different classes. More importantly, the extracted key can be an aggregate key that is as compact as a secret key for a single class yet aggregates the power of many such keys, i.e., the decryption power for any subset of ciphertext classes. The key-aggregate cryptosystem is enhanced with boundless ciphertext classes, and the system is improved with a device-independent key distribution mechanism. The key distribution process is enhanced with security features to protect against key leakage, and the key parameter transmission process is integrated with the ciphertext download process.
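A real key-aggregate cryptosystem relies on bilinear pairings to make the aggregate key constant-size, which is beyond a short sketch. The standard-library fragment below illustrates only the weaker per-class-key idea: deriving an independent key for each ciphertext class from one master secret (all names are illustrative):

```python
import hashlib
import hmac

def class_key(master_secret: bytes, class_id: int) -> bytes:
    """Derive an independent 32-byte key for one ciphertext class."""
    return hmac.new(master_secret, b"class-%d" % class_id, hashlib.sha256).digest()

master = b"data-owner-master-secret"
# Sharing classes 1 and 3 here means handing over two separate keys; KAC's
# contribution is collapsing any such set into a single constant-size aggregate key.
shared = [class_key(master, c) for c in (1, 3)]
```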
Cloud computing is a progressive innovation that has reached new heights in the field of Information Technology. It provides data and application software storage in huge server farms called "clouds", which can be accessed with the help of a network connection. These clouds boost the capabilities of enterprises with no additional set-up, personnel or licensing costs. Clouds are for the most part deployed using Public, Private or Hybrid models, depending on the needs of the client. In this paper, we have explored the cloud computing architecture, concentrating on the elements of the Public, Private and Hybrid cloud models. There is a dire need to examine the performance of a cloud environment on several metrics and enhance its usability and capability. This paper aims at highlighting important contributions of various researchers in domains like computational power, performance provisioning, load balancing and SLAs.
A SECURITY FRAMEWORK IN CLOUD COMPUTING INFRASTRUCTURE (IJNSA Journal)
In a typical cloud computing environment, diverse facilitating components like hardware, software, firmware, networking, and services integrate to offer different computational facilities, while the Internet or a private network (or VPN) provides the backbone required to deliver the services. The security risks to the cloud system delimit the benefits of cloud computing, such as "on-demand, customized resource availability and performance management". It is understood that current IT and enterprise security solutions are not adequate to address cloud security issues. This paper explores the challenges and issues of cloud computing security through different standard and novel solutions. We propose an analysis and architecture for incorporating different security schemes, techniques and protocols for cloud computing, particularly in Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) systems. The proposed architecture is generic in nature, not dependent on the type of cloud deployment, application-agnostic and not coupled with the underlying backbone. This would facilitate managing the cloud system more effectively and allow the administrator to include specific solutions to counter threats. We have also shown, using experimental data, how a cloud service provider can estimate charging based on the security services it provides, and how a security-related cost-benefit analysis can be performed.
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally testing in DevOps. We closed with a lively workshop in which participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... (Ramesh Iyer)
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. Fostering a culture of innovation, however, takes much work: it requires vision, leadership and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
UiPath Test Automation using UiPath Test Suite series, part 3 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for technology and making things work, along with a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms, and is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
PHP Frameworks: I want to break free (IPC Berlin 2024) (Ralf Eggert)
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards more flexible and future-proof PHP development.
A tale of scale & speed: How the US Navy is enabling software delivery from l... (sonjaschweigert1)
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATOs (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Securing your Kubernetes cluster_ a step-by-step guide to success !
1. Secure Cloud Storage
SEMINAR REPORT (SEMESTER - VII)
SUBMITTED BY
ALIN BABU (SCT15CS007)
in partial fulfillment for the award of the degree of
BACHELOR OF TECHNOLOGY
IN
COMPUTER SCIENCE AND ENGINEERING
SREE CHITRA THIRUNAL COLLEGE OF ENGINEERING,
THIRUVANANTHAPURAM - 18
NOVEMBER, 2019
2. SREE CHITRA THIRUNAL COLLEGE OF ENGINEERING,
THIRUVANATHAPURAM - 695018
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
CERTIFICATE
Certified that the seminar work entitled “Secure Cloud Storage” is a bonafide work
carried out in the seventh semester by ALIN BABU (SCT15CS007) in partial fulfilment
for the award of Bachelor of Technology in COMPUTER SCIENCE AND ENGINEERING
from APJ Abdul Kalam Technological University during the year 2020.
HEAD OF THE DEPARTMENT
Dr. Subu Surendran
Head of the Department
Department of Computer
Science & Engineering
SEMINAR GUIDE
Smt. Soja Salim
Assistant Professor
Department of Computer
Science & Engineering
3. ACKNOWLEDGMENT
I express my sincere gratitude to all the faculty members of the Department of
Computer Science and Engineering, SCT College of Engineering, Thiruvananthapuram
for their relentless support and inspiration. I am ever-grateful to my family, friends and
well-wishers for their immense goodwill and words of motivation.
I would like to express a note of deep obligation to my Guide, Smt. Soja Salim,
Assistant Professor, Department of Computer Science and Engineering, SCT College of
Engineering, for her excellent guidance and valuable suggestions. It was indeed a privilege
to work under her during the entire duration of this study. She has immensely helped
me with her knowledge and stimulating suggestions to shape this study, refine arguments
and present it to the best of my abilities.
I am also indebted to Dr. Subu Surendran, Professor and Head of the Department
of Computer Science and Engineering, SCT College of Engineering, for inspiring me to
strive for perfection. I am also thankful for the support and encouragement offered by
him during the entire course of this study to make this seminar a great success.
4. ABSTRACT
Cloud computing is a set of IT services that are provided to users over a network.
It is a shared pool of configurable computing resources. Cloud computing is an emerging
technology nowadays. Security is an important aspect of the current scenario and
plays an important role in cloud computing. Security is a set of control-based technologies
and policies designed to adhere to regulatory compliance rules and protect information.
Data protection methods in cloud computing play a major role, as they guarantee that data
is protected from the cloud service provider and thereby increase security and confidentiality in the cloud.
The main focus of the seminar will be cloud computing data encryption methods
and the mechanisms to achieve them in the cloud. The seminar will mainly cover the
Mandatory Access Control (MAC) mechanism and the Disintegration Protocol (DIP). The
different encryption schemes for storing data in the cloud will be explained briefly, along
with the advantages of using Proxy Re-encryption (PRE) in secure file sharing. The
seminar concludes with DIP and the advantages of PRE in Dropbox and SugarSync.
6. 7 Proxy Re-Encryption Scheme (PRE) 25
8 Advantages of Using PRE for Secure Sharing 28
9 Conclusion 29
REFERENCES 30
7. LIST OF ABBREVIATIONS
CSP Cloud Service Provider
SaaS Software as a Service
PaaS Platform as a Service
IaaS Infrastructure as a Service
MAC Mandatory Access Control
DIP Disintegration Protocol
OS Operating System
ACL Access Control List
DAC Discretionary Access Control
RBAC Rule/Role Based Access Control
AES Advanced Encryption Standard
DES Data Encryption Standard
TLS Transport Layer Security
SSL Secure Sockets Layer
HTTP HyperText Transfer Protocol
8. TCP Transmission Control Protocol
DS Data Server
CS Connection Server
RA Resource Allocator
NIC Network Interface Card
NAT Network Address Translation
NOBE Nth Order Binary Encoding
DNS Domain Name System
ACK Acknowledgement
VM Virtual Machine
SSH Secure Shell
10. CHAPTER 1
INTRODUCTION
Cloud platform services, concepts, and applications such as storage, processing power,
virtualization, and connectivity allow data to be shared. Ensuring users' privacy and the
security of their data are the most pressing challenges. However, the integrity of data,
data transfer, data location, and features such as optional backup and recovery are
additional problems associated with cloud computing. Users presume that cloud service
providers (CSPs) give an assurance that while data is in transit from the user's
establishment to the cloud servers, its confidentiality and integrity will not be
compromised, and that their data is transferred securely to ensure such a high level
of data security.
CSPs serve users over the internet on demand. Elasticity and service upon
request are essential characteristics of cloud computing, as they permit the user to
determine the appropriate resources while excluding the unnecessary ones. Today there
are many CSPs, such as Microsoft, Oracle Corporation, Google, IBM, Apple, Amazon and
CenturyLink. There are many studies on cloud performance based on various metrics,
such as price and VM performance. CSPs provide users with different options of cloud
deployment models (private, public, hybrid or community) and services (IaaS, PaaS, and
SaaS).
11. An increasingly important priority for the broad selection and acceptance of cloud
computing is the capability of data owners and users to have imposed and assessed security
guarantees. Guaranteeing security means ensuring the confidentiality and integrity of data,
accesses, and computations on them, as well as providing availability of data and services
to authorized users in agreement with the providers' compliance commitments. The lucrative
benefits of scalability and elasticity in cloud computing come at the expense of impaired
control over the owner's data, with a high risk of security threats. It is assumed that
CSPs should provide sufficient protection mechanisms for data in warehouses, in
processing, and in communication over the Internet. Researchers and CSPs have introduced
various protocols, but they offer little or no security assurance to end users.
Consequently, organizations are still reluctant to adopt cloud services because of
reliability, security and privacy issues concerning the delivery of cloud services, as
well as fears about the credibility of their cloud service supplier.
12. CHAPTER 2
CLOUD COMPUTING
Cloud Computing is a set of IT services that are provided to a customer over a network,
and these services are delivered by a third-party provider who owns the infrastructure. It
is often provided "as a service" over the Internet, typically in the form of infrastructure
as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS). Cloud
computing is the broader concept of infrastructure convergence [1]. This type of data
centre environment allows enterprises to get their applications up and running faster,
with easier manageability and less maintenance, to meet business demands. For example,
we can manage and store all smartphone or tablet apps at one location, i.e. the cloud, so
we do not require any memory space at our end. This also secures the data and
applications in case the device is damaged or lost.
Based on a deployment model, we can classify cloud as:
• Private Cloud
• Public Cloud
• Hybrid Cloud
Based on the service the cloud model offers, it is classified into three:
• Platform as a Service (PaaS)
• Software as a Service (SaaS)
13. • Infrastructure as a Service (IaaS)
2.1 Cloud Deployment Models
2.1.1 Private Cloud
A private cloud is typically used within a single organization to offer services to internal
users. A private cloud could be used to maintain the security of a city or to provide
privacy for organizational data.
Figure 2.1: Private Cloud
2.1.2 Public Cloud
Public cloud infrastructure allows all services to be publicly accessible. Sometimes these
services are free to the public, and external enterprises can use resources offered by the
cloud free of cost.
14. Figure 2.2: Public Cloud
2.1.3 Hybrid Cloud
A hybrid cloud offers the best of both private and public cloud structures. It delivers an
infrastructure for a public cloud while retaining control over vital data using the private
cloud. These models differ on features like control, flexibility, and management. This
model is also termed the cloud-computing stack.
Figure 2.3: Hybrid Cloud
15. 2.2 Services offered by Cloud
2.2.1 Platform as a Service (PaaS)
It allows organizations to supply on-demand resources for developing, testing, delivering
and managing their software applications. PaaS eliminates the burden of managing
infrastructure and operating systems, so that organizations can concentrate on deploying
their applications.
2.2.2 Software as a Service (SaaS)
It is the method of offering software applications and services over the internet. Software
applications are delivered with on-demand ability. These software services are managed
and maintained by the CSP, which eliminates the need for the user to worry about
software updates and upgrades.
2.2.3 Infrastructure as a Service (IaaS)
On the contrary, IaaS is at the other end of the cloud spectrum. In this type of cloud
service, users want to keep control of their software environment, but they do not have to
buy and maintain any infrastructure equipment. Instead, they can request a virtual
machine from an IaaS provider.
Figure 2.4: Examples of different forms of service in the cloud
16. CHAPTER 3
ACCESS CONTROL
Access control is a way of limiting access to a system or to physical or virtual resources. In
computing, access control is a process by which users are granted access and certain
privileges to systems, resources or information. In access control systems, users must
present credentials before they can be granted access. In physical systems, these
credentials may come in many forms, but credentials that cannot be transferred provide
the most security. For example, a key card may act as an access control and grant the
bearer access to a classified area. Because this credential can be transferred or even
stolen, it is not a secure way of handling access control. There are various access control
methods used to limit access to a system or resources.
3.1 Mandatory Access Control
Mandatory access control (MAC) is a system-controlled approach to limiting access to
resource entities, based on the level of approval or permission of the accessing entity, be
it a person, a process, or a device. MAC norms are defined by the system administrator,
strictly enforced by the operating system (OS) or security kernel, and cannot be altered
by end users. Mandatory Access Control is the strictest of all levels of control. The
design of MAC was defined by, and is primarily used by, the government. MAC takes a
hierarchical approach to controlling access to resources. Under a MAC-enforced
environment, access to all resource objects (such as data files) is controlled by settings
defined by the system administrator. As such, all access to resource objects is strictly
controlled by the operating system based on administrator-configured settings. It is not
possible under MAC enforcement for users to change the access control of a resource.
Mandatory Access Control begins with security labels assigned to all resource objects
on the system. These security labels contain two pieces of information: a classification
(top secret, confidential, etc.) and a category (which is essentially an indication of the
management level, department or project to which the object is available). Similarly, each
user account on the system also has classification and category properties from the same
set of properties applied to the resource objects. When a user attempts to access a resource
under Mandatory Access Control, the operating system checks the user's classification and
categories and compares them to the properties of the object's security label. If the user's
credentials match the MAC security label properties of the object, access is allowed. It is
important to note that both the classification and the categories must match. A user with
top secret classification, for example, cannot access a resource if they are not also a
member of one of the required categories for that object.
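One common reading of this check can be sketched in Python; the level ordering, category names and labels below are illustrative assumptions, not values from the seminar:

```python
# Ordered classification levels, lowest to highest (illustrative set)
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def mac_allows(user_clearance, user_categories, obj_classification, obj_category):
    """Allow access only when the user's clearance dominates the object's
    classification AND the user belongs to the object's required category."""
    return (LEVELS[user_clearance] >= LEVELS[obj_classification]
            and obj_category in user_categories)
```

As the text notes, a user with "top secret" clearance is still denied a "secret" object whose category is, say, "finance" unless "finance" is also among the user's categories.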
Mandatory Access Control is by far the most secure access control environment, but it
does not come without a price. Firstly, MAC requires a considerable amount of planning
before it can be effectively implemented. Once implemented, it also imposes a high system
management overhead due to the need to constantly update object and account labels to
accommodate new data, new users and changes in the categorization and classification of
existing users.
18. 3.2 Discretionary Access Control
Unlike Mandatory Access Control (MAC) where access to system resources is controlled
by the operating system (under the control of a system administrator), Discretionary
Access Control (DAC) allows each user to control access to their own data. DAC is
typically the default access control mechanism for most desktop operating systems.
Instead of the security label used by MAC, each resource object on a DAC-based
system has an Access Control List (ACL) associated with it. An ACL contains a list of
users and groups to which the owner has permitted access, together with the level of access
for each user or group. For example, User A may provide read-only access on one of her
files to User B, read and write access on the same file to User C, and full control to any
user belonging to Group 1.
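The ACL from this example could be sketched as follows; the file name, principal names and permission strings are illustrative assumptions:

```python
# Each resource maps principals (users or groups) to their permitted access levels
ACL = {
    "report.txt": {
        "userB": {"read"},
        "userC": {"read", "write"},
        "group1": {"read", "write", "full"},
    },
}

def dac_allows(user, groups, resource, permission):
    # A user is allowed if the permission appears in their own ACL entry
    # or in the entry of any group they belong to.
    entries = ACL.get(resource, {})
    principals = [user] + list(groups)
    return any(permission in entries.get(p, set()) for p in principals)
```

Here userB can read but not write the file, while any member of group1 holds full control, mirroring the example above.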
It is important to note that under DAC a user can only set access permissions
for resources which they already own. A hypothetical User A cannot, therefore, change
the access control for a file that is owned by User B. User A can, however, set access
permissions on a file that she owns. Under some operating systems it is also possible for
the system or network administrator to dictate which permissions users are allowed to set
in the ACLs of their resources.
Discretionary Access Control provides a much more flexible environment than Mandatory
Access Control, but it also increases the risk that data will be made accessible to users
who should not necessarily be given access.
19. 3.3 Role Based Access Control
Role Based Access Control (RBAC), also known as Non-discretionary Access Control,
takes more of a real-world approach to structuring access control. Access under RBAC
is based on a user's job function within the organization to which the computer system
belongs. Essentially, RBAC assigns permissions to particular roles in an organization,
and users are then assigned to those roles. For example, an accountant in a company
will be assigned to the Accountant role, gaining access to all the resources permitted for
all accountants on the system. Similarly, a software engineer might be assigned to the
Developer role.
Roles differ from groups in that while users may belong to multiple groups, a user
under RBAC may only be assigned a single role in an organization. Additionally, there is
no way to provide individual users additional permissions over and above those available
for their role. The accountant described above gets the same permissions as all other
accountants, nothing more and nothing less.
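The single-role model described above can be sketched as follows; the role names, permission strings and user assignments are illustrative assumptions:

```python
# Permissions are attached to roles, never to individual users
ROLE_PERMISSIONS = {
    "accountant": {"ledger:read", "ledger:write"},
    "developer": {"repo:read", "repo:write"},
}

# Under RBAC as described here, each user holds exactly one role
USER_ROLE = {"alice": "accountant", "bob": "developer"}

def rbac_allows(user, permission):
    role = USER_ROLE.get(user)
    return role is not None and permission in ROLE_PERMISSIONS.get(role, set())
```

Every accountant gets exactly the accountant permissions: granting one user an extra permission would require changing the role itself, which matches the "nothing more and nothing less" point above.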
3.4 Rule Based Access Control
Rule Based Access Control (RBAC) introduces acronym ambiguity by using the same
four-letter abbreviation (RBAC) as Role Based Access Control. Under Rule Based Access
Control, access is allowed or denied to resource objects based on a set of rules defined by a
system administrator. As with Discretionary Access Control, access properties are stored
in Access Control Lists (ACLs) associated with each resource object. When a particular
account or group attempts to access a resource, the operating system checks the rules
contained in the ACL for that object.
20. Examples of Rule Based Access Control include situations such as permitting access
for an account or group to a network connection only at certain hours of the day or days of
the week. As with MAC, access control cannot be changed by users; all access permissions
are controlled solely by the system administrator.
21. CHAPTER 4
Secure Data Storage using AES
The security of the data in the cloud database server is a key area of concern for the
acceptance of the cloud. It requires a very high degree of privacy and authentication.
Cryptography is one of the most important methods to protect data in the cloud database
server. Cryptography provides various symmetric and asymmetric algorithms to secure the
data. Symmetric algorithms are preferred as they have the speed and computational
efficiency to handle encryption of large volumes of data. In symmetric cryptosystems, the
longer the key length, the stronger the encryption. AES [2] is the most frequently used
encryption algorithm today. The algorithm is based on several substitutions, permutations
and linear transformations, each executed on data blocks of 16 bytes. As of today, no
practicable attack against AES exists. Therefore, AES remains the preferred encryption
standard for governments, banks and high-security systems around the world.
AES data encryption is a scientifically capable and elegant cryptographic algorithm,
but its main strength rests in the key length. The time necessary to break an encryption
algorithm is directly related to the length of the key used to secure the communication.
AES allows choosing among various key sizes, such as a 128-bit, 192-bit or 256-bit key,
making it exponentially stronger than the 56-bit key of DES.
22. 4.1 AES Algorithm
AES, an acronym for Advanced Encryption Standard, is a symmetric encryption algorithm.
The algorithm was developed by two Belgian cryptographers, Joan Daemen and Vincent
Rijmen. It is useful when we want to encrypt a confidential text into a decryptable
format, for example when we need to send sensitive data by e-mail. The decryption of the
encrypted text is possible only if we know the right key. AES is an iterative rather
than a Feistel cipher. It is based on a ‘substitution–permutation network’ and comprises
a series of linked operations, some of which involve replacing inputs with specific outputs
(substitutions) and others of which involve shuffling bits around (permutations).
Figure 4.1: Encryption and decryption in AES
23. Steps in AES:
• The First Step
– AddRoundKey
• The Following Four Functions Are Periodically Repeated
– SubByte
– ShiftRow
– MixColumn
– AddRoundKey
• Final Step
– SubByte
– ShiftRow
– AddRoundKey
Byte Substitution (SubBytes)
The 16 input bytes are substituted by looking up a fixed table (S-box) given in
the design. The result is a matrix of four rows and four columns.
Figure 4.2: Byte Substitution (SubBytes)
24. Shift Rows
Each of the four rows of the matrix is shifted to the left. Any entries that ‘fall off’
are re-inserted on the right side of the row. The shift is carried out as follows:
• First row is not shifted.
• Second row is shifted one (byte) position to the left.
• Third row is shifted two positions to the left.
• Fourth row is shifted three positions to the left.
• The result is a new matrix consisting of the same 16 bytes but shifted with respect
to each other.
Figure 4.3: ShiftRows
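The row rotations above can be written directly; this minimal sketch represents the state as a list of four rows of four bytes:

```python
def shift_rows(state):
    # Row i of the 4x4 state is rotated left by i byte positions;
    # bytes that fall off the left re-enter on the right.
    return [row[i:] + row[:i] for i, row in enumerate(state)]
```

For example, row 1 of `[[0,1,2,3],[4,5,6,7],[8,9,10,11],[12,13,14,15]]` becomes `[5,6,7,4]`, row 2 becomes `[10,11,8,9]`, and row 3 becomes `[15,12,13,14]`, while row 0 is unchanged.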
Mix Columns
Each column of four bytes is now transformed using a special mathematical function.
This function takes as input the four bytes of one column and outputs four completely
new bytes, which replace the original column. The result is another new matrix consisting
of 16 new bytes. It should be noted that this step is not performed in the last round.
25. Figure 4.4: Mix Columns
AddRoundKey
The 16 bytes of the matrix are now considered as 128 bits and are XORed with the
128 bits of the round key. If this is the last round, then the output is the ciphertext.
Otherwise, the resulting 128 bits are interpreted as 16 bytes and we begin another similar
round.
Figure 4.5: AddRoundKey
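The XOR step above is simple to express in code. Because XOR is its own inverse, applying the same round key twice recovers the original state, which is also how decryption undoes this step:

```python
def add_round_key(state, round_key):
    # XOR each of the 16 state bytes with the matching round-key byte
    return bytes(s ^ k for s, k in zip(state, round_key))
```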
26. CHAPTER 5
Disintegration Protocol (DIP)
The Disintegration Protocol is a unidirectional, closed and disintegrated protocol used to
store data securely in the cloud. In general, a server is designed to perform N different
tasks (functions). In the Disintegration Protocol (DIP) [3], we disintegrate N various
services from one server and distribute them among M homogeneous servers. The DIP
architecture used for the experiments described in [3] is shown in the figure. Although
these experiments were conducted in a WAN environment in the private cloud, as noted
earlier, the proposed DIP technique does not require the set of clients CL to be
connected to a private cloud (they can be located anywhere on the Internet). Below we
discuss the only condition in which the server needs to be connected to the private cloud.
However, this provision does not limit the scope or scalability of DIP, since many
real-world Web server clusters are located within the same LAN.
5.1 Architecture of DIP
Figure 5.1 shows the basic architecture of the DIP [3]. R0 is a regular firewall that
prevents specific types of information from moving between an untrusted network and
a trusted network. TLS/SSL or other software services run on an existing router or
server. One can implement the firewall in any of the processing modes, such as packet
filtering, application gateways, circuit gateways, MAC-layer firewalls or hybrid. Integrity
for data in transit is provided by using hashing functions and message authentication
codes. Every transport protocol (HTTP, TCP, IPC, and MSMQ) has its own technique for
establishing credentials and handling communication protection. The most popular approach
is to use Secure Sockets Layer (SSL) for encrypting and confirming the contents of the
packets transmitted over Secure HTTP (HTTPS). For the scope of this seminar, we will
restrict our discussion to the DIP components, i.e. the connection server (CS), resource
allocator (RA), data servers (DSs, DSs*) and the internal packet-filter routers R1 and R2.
The clients send requests to a connection server CS; CS sends SYN-ACK and keeps track of
the current request until it receives an (HTTP) GET and sends a GET ACK to the client.
At the same time, CS sends an inter-server packet (ISP) of 168 bytes to RA. CS uses two
NIC cards, the on-board transceiver NIC1 and the external transceiver NIC2, with two
different IP addresses. In this case, we can decide which transceiver to use and then make
the appropriate choice on our card.
Figure 5.1: The architecture of the DIP.
28. When data is transmitted on the network cable, it travels as a single stream
of bits. When data progresses on a network cable, the cable is treated as a one-lane
roadway, and the data perpetually proceeds in one direction only. At a given point in
time, the server is either sending or receiving data, but never performs both. NIC1 is
used for communicating with the client, and NIC2 is used to send packets to RA. NIC2 is
activated (in the UP state) only for the short interval of time when CS is forwarding
packets to RA; at all other times it stays in the DOWN state. This mechanism allows
turning on/off the connectivity between CS and RA and controls the one-way flow from CS
to RA. We control the transmitter's state in the Ethernet object using the methods
enableTransmit() and disableTransmit(). Similar control can be achieved by disabling the
receiver and only allowing the transmitter of NIC2 to function.
Furthermore, router R1 (eth0) only accepts packets coming from CS and drops all
other packets coming from other IP addresses, even from RA through interface eth1. A
similar flow control mechanism is enforced for transmission from DSs to DSs*. After
receiving the packet from CS, RA determines whether to forward the given request (packet)
to the appropriate data servers DSs. Data are fragmented into various blocks on different
DSs on the same cloud or on geographically distributed clouds. RA keeps TCB records of
all connections until it receives a FIN-ACK from CS. All ACKs after the GET-ACK are
forwarded to RA, and RA responds to all ACKs and data requests through different DSs.
When a DS sends the last data packet (with FIN) to the client, it also sends one
inter-server packet to CS (via R0) indicating that it is done transmitting all data. CS
forwards this information to RA. Whenever CS receives a FIN-ACK from a client, it
forwards it to RA, and RA then deletes the TCB record of that particular connection. RA
keeps 100,000 TCB records, and this number can increase based on the incoming load. DSs
receive client information through the TCB record from RA and directly send data to the
client; in the data packet, they put the SrcIP and SrcMAC address of CS.
CSs, DSs and RA are identical servers: they have the same hardware and the same
software, but perform different tasks. CS* is the backup server for CS; if CS fails,
within a few milliseconds CS* resumes the role of CS. The detailed migration and
changeover process is described in an earlier paper on a split-protocol technique for web
server migration. DS*s are data transmission servers, which receive data from DSs; they
do not store any information or data and are practically empty servers. In the event of a
server compromise, a hacker will only be able to get the last piece of data transmitted
from the DS, as if the connection were still in the active state.
5.2 Multi-homed DIP architecture
The DIP architecture is capable of handling massive traffic, and the only bottleneck we
have observed is the router. The DIP server can be configured in a multi-homed structure.
In addition to maintaining a reliable connection, multi-homing allows load balancing
by lowering the number of clients/servers connecting to the Internet through any single
connection. It also spreads the load through multiple connections, enhances performance
and can considerably decrease wait times. In multi-homing, if a router fails, all
data will be rerouted through the other routers with the help of Network Address
Translation (NAT), which remaps one IP address into a different address by revising the
network address. Router R0 and RA maintain all standard security protocols. RA implements
NOBE encoding for data compression and additional privacy of data. DIP can be applied
on top of any existing security protocols: TLS, IPsec, SSH, cryptographic protocols and
inter-cloud protocols. The format of client requests differs broadly by cloud platform and
the virtualization layer of the operating system used for cloud computing. Most clouds use
general pointers such as MAC and IP addresses, and now and then a DNS name is assigned
to the VM. Pairs of RSA keys are used as credentials for the public and private keys. On
that VM, one can boot a system image, yielding a running system, and use it in a similar
way as one would use an operating system in one's own data center. Figure 5.2 shows the
multi-homed DIP architecture.
Figure 5.2: Multi-homed DIP architecture.
Figure 5.3: DIP in the cloud
31. CHAPTER 6
Access Control Implementation on DIP
In the modern world, state-sponsored hacking groups have almost unlimited
resources; consequently, the security of information becomes essential. Secure data
storage mechanisms with DIP can be implemented on cloud servers, along with secure file
sharing among them. To ensure the security of information for a cloud-based server with
encrypted stored data, the main issue is limiting access by unauthorized users [1].
Security of the data should strike a good balance between complete protection and
usability. Thus, the idea is to use time-proven mechanisms while adding additional
protection. The cloud service consists of three servers: user input (CSs), data storage
(DSs) and user output (DSs*), as shown in Figure 5.1. Each of the servers has its own
security mechanisms. The separation is done to ensure that if any of the servers is
compromised, the data will stay intact. The user input server can have multiple
implementations of access control, depending on the needs of the organization. For
example, library articles would not require extra safety measures; however, financial or
military institutions would demand the highest control possible. Therefore, the
organization will have a choice between a regular access control function and an
advanced one.
32. Regular option:
Communication with the user input server starts with login information. The user
is requested to enter a login and password. To prevent brute force, we limit the user
to five attempts per fifteen minutes. To ensure additional safety, the system forwards a
message to the registered phone number and email address stating that someone is trying
to access the account. A link is provided to give the user an opportunity to lock the
account in case there is a suspicion of unauthorized action. In that case, the account
will be frozen until the user communicates with the support centre and proves identity
and account ownership. If the correct login and password are entered, the system will
generate a random entrance code that is provided to the user through email or text. While
setting up an account, the user is prompted to enter answers to seven security questions.
Every time the user enters from an unrecognised device (MAC and cookie check), the
system will require answering two randomly selected questions out of the seven. Failing
to answer them will lock the account.
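The five-attempts-per-fifteen-minutes limit described above can be sketched with a sliding window; the in-memory store and function name are illustrative assumptions:

```python
import time
from collections import deque

WINDOW = 15 * 60      # fifteen minutes, in seconds
MAX_ATTEMPTS = 5      # per the text: five attempts per window

_attempts = {}        # user -> deque of recent attempt timestamps

def may_attempt_login(user, now=None):
    """Return True if this login attempt is within the rate limit."""
    now = time.time() if now is None else now
    q = _attempts.setdefault(user, deque())
    # Drop attempts that have aged out of the fifteen-minute window
    while q and now - q[0] > WINDOW:
        q.popleft()
    if len(q) >= MAX_ATTEMPTS:
        return False
    q.append(now)
    return True
```

A sixth attempt inside the window is refused; once fifteen minutes have passed, attempts are accepted again. A production system would persist this state and trigger the notification step described above.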
Advanced option:
Everything the regular option provides (with the notes that the user will not be able to
skip security questions and that the system does not save cookies on the user's computer),
plus additional options:
• Biometrics (optional at the current technology state): if the user's device provides
a fingerprint function, it can be used.
• An additional possible application makes the user read a randomly generated
paragraph.
• Security token authentication: each user is provided by the organization with a
unique security token. This token is connected to the device that is trying to access
the database. It will not cancel out any of the previous steps but will serve as
additional protection, so only a person with the physical device and knowledge of the
login, password, access to phone/email and knowledge of the security questions can access
the database. This type of option is highly recommended for financial institutions and
government contractors.
Inner server communications have a different structure. Security starts with limiting
server communications to a one-way channel: the receiving server may respond only with
a packet-acceptance message. All packets are encrypted with the Advanced Encryption
Standard (AES), which uses the Rijndael cipher. Each pair of communicating servers
shares a private, large set of randomly generated keys. For every fifteen packets, the
active server randomly chooses a key from the set, encrypts the packets with the chosen
key, and includes the index number of the current key. That index number is the only
unencrypted data transmitted on the network. To ensure safety, the key sets are updated
according to the computational power available at the current time. For example, if the
latest supercomputer could break a 256-bit cipher within a month, the sets are updated
every two weeks.
In addition, every communication is based on the following assumptions: MAC
addresses are the same for both servers. The server sends information to the input server
to provide the list of the user’s storage information to the output server, and the user
sees it as a list on the same web page. Once the user selects something to download, the
request is forwarded to the input server. All data stored on the informational server is
archived using NOBE technology, and all files are compressed during the upload
procedure. A separate index file stores information about the stored files: their original
names, their storage indexes, and the list of groups that have access to each file. Once the
server receives a signal that a certain user is trying to access it, reports are generated
showing all the files accessible to that user. The output server posts that report on the
web page for further selection. When the user selects a file to download, the NOBE file
is forwarded to the output server using the inner-server communication technology. The
output server sends a request to the user’s device to activate the NOBE client, encrypts
the NOBE file using the RSA encryption system, and forwards it to the user’s NOBE
client, which decrypts it using the private key and decompresses it for the user.
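The key-selection scheme described above (a shared key set, a fresh key every fifteen packets, only the key index sent in the clear) can be sketched as follows. Since the point illustrated is key rotation rather than the cipher itself, a simple hash-based XOR keystream stands in for AES, and all class and variable names are hypothetical:

```python
import hashlib
import secrets

class ChannelEnd:
    """One side of a server pair sharing a private set of random keys."""

    def __init__(self, key_set):
        self.key_set = key_set       # exchanged out-of-band between the pair
        self.key_index = None
        self.packet_count = 0

    def _keystream(self, key, n):
        # XOR keystream stand-in for AES; NOT a real cipher
        out, counter = b"", 0
        while len(out) < n:
            out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:n]

    def encrypt(self, packet: bytes):
        if self.packet_count % 15 == 0:          # rotate every 15 packets
            self.key_index = secrets.randbelow(len(self.key_set))
        self.packet_count += 1
        ks = self._keystream(self.key_set[self.key_index], len(packet))
        body = bytes(a ^ b for a, b in zip(packet, ks))
        return self.key_index, body              # the index is the only cleartext

    def decrypt(self, key_index, body):
        ks = self._keystream(self.key_set[key_index], len(body))
        return bytes(a ^ b for a, b in zip(body, ks))

key_set = [secrets.token_bytes(32) for _ in range(100)]
sender, receiver = ChannelEnd(key_set), ChannelEnd(key_set)
idx, ct = sender.encrypt(b"storage report")
assert receiver.decrypt(idx, ct) == b"storage report"
```

An eavesdropper sees only the index and ciphertext; without the shared key set the index reveals nothing about the key itself.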
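The separate index file described above amounts to a small catalogue mapping each stored file to its original name, storage index, and access groups. A hypothetical JSON-backed sketch, since the report does not specify the actual NOBE index format or field names:

```python
import json

# Hypothetical index-file entries: original name -> storage metadata
index = {
    "report.docx": {"storage_index": 1041, "groups": ["finance", "admins"]},
    "photo.png":   {"storage_index": 1042, "groups": ["admins"]},
}

def files_for_user(index, user_groups):
    """Report generated when a user connects: files their groups may access."""
    return sorted(name for name, meta in index.items()
                  if set(meta["groups"]) & set(user_groups))

serialized = json.dumps(index)   # what would actually sit on the server
```

For example, `files_for_user(index, ["finance"])` returns `['report.docx']`, which is the report the output server would post on the web page for selection.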
CHAPTER 7
Proxy Re-Encryption Scheme (PRE)
Proxy Re-Encryption (PRE) is a cryptographic primitive with a very interesting
application in delegating decryption rights. It allows a ciphertext meant for a delegator
to be converted so that it can be decrypted by a delegatee, with the help of a semi-trusted
party called the proxy.
Consider a scenario in which Alice (the delegator), leaving the country for a vaca-
tion, delegates her decryption rights to Bob (the delegatee). Any ciphertext meant for
Alice can be converted into a ciphertext for Bob by the semi-trusted proxy, with the help
of a re-encryption key generated by Alice and handed over to the proxy.
A proxy is, in essence, any provider, including a cloud service provider. As shown
in Figure 7.1, the proxy or cloud provider never sees the actual secret message.
It only ever sees encrypted messages and public keys. Private keys remain private to
the individual parties, and the secret message only ever gets decrypted by the intended
recipient, not the proxy. So the cloud provider never sees the information. Let’s consider
this scenario:
Figure 7.1: Proxy Re-Encryption.
For security reasons, Alice doesn’t trust the cloud provider, so she encrypts all her
data (with her public key) and stores the encrypted data in the cloud. Now Alice’s data
is safe and (theoretically) only she can access it, since only Alice knows her private key.
A few weeks later along comes Bob, and it turns out he needs to see Alice’s data. Alice
now has two choices:
1. Get Bob’s public key, decrypt the data, re-encrypt it with Bob’s public key, and send
it to him. This is a bit clunky: Alice needs to decrypt her data and then re-encrypt it.
It may work for one person, but what if Alice has many Bobs? It would be better to
leave this work to her cloud provider if at all possible.
2. Use proxy re-encryption! In this case, Alice retrieves Bob’s public key and issues a
“re-encryption” key. This key represents the trusted relationship Alice would like
to build with Bob. Alice sends this key to her cloud provider, which proceeds to
re-encrypt the already encrypted data it stores with that key. Bob can now
download the re-encrypted data and decrypt it at will.
In the second scenario, note how the decryption/re-encryption process is sidestepped:
Alice doesn’t need to perform this operation on her own devices. Instead, all she needs
to do is generate a key, which is quick, and pass the work to her provider, which at
no point can decrypt the original message. This makes the system very scalable and
enables data-sharing applications in a cloud environment.
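The delegation flow above can be demonstrated end to end with the classic ElGamal-based scheme of Blaze, Bleumer, and Strauss (BBS98), one concrete PRE construction (not necessarily the one any particular provider uses). The group parameters here are demo-sized only; a real deployment would use a cryptographically large group:

```python
import random

# Toy BBS98-style proxy re-encryption over a small prime-order group.
p = 2039            # safe prime: p = 2q + 1
q = (p - 1) // 2    # prime order of the subgroup generated by g
g = 4               # generator of the order-q subgroup

def keygen():
    sk = random.randrange(1, q)
    return sk, pow(g, sk, p)

def encrypt(pk, m):
    r = random.randrange(1, q)
    return (m * pow(g, r, p) % p, pow(pk, r, p))   # (m*g^r, g^{sk*r})

def decrypt(sk, ct):
    c1, c2 = ct
    gr = pow(c2, pow(sk, -1, q), p)    # recover g^r from g^{sk*r}
    return c1 * pow(gr, -1, p) % p

def rekey(sk_a, sk_b):
    # rk = sk_b / sk_a mod q; Alice hands this to the proxy
    return sk_b * pow(sk_a, -1, q) % q

def reencrypt(rk, ct):
    c1, c2 = ct
    return (c1, pow(c2, rk, p))        # turns g^{a*r} into g^{b*r}

a, pk_a = keygen()                      # Alice
b, pk_b = keygen()                      # Bob
m = 1234
ct_a = encrypt(pk_a, m)                 # stored encrypted in the cloud
ct_b = reencrypt(rekey(a, b), ct_a)     # done by the proxy, no decryption
assert decrypt(b, ct_b) == m            # Bob reads Alice's message
```

Note that the proxy sees only ciphertexts and the re-encryption key, never a private key or the plaintext, which is exactly the property the scenario above relies on.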
CHAPTER 8
Advantages of Using PRE for Secure Sharing
Dropbox [4], SugarSync [5], Box, and Soonr are security black holes: while all user files
are encrypted in transit and at rest, the user is not in control of the encryption keys.
These keys are managed by internal key servers and are not revealed to the user at
any point in time, although the providers claim that the keys themselves are stored in
encrypted form. The advantages of PRE are:
1. The owner of the file has the sole responsibility for providing and revoking access
to the files.
2. Sharing is straightforward with no overhead for the file owner.
3. Offers end-to-end security for files while sharing them.
4. Military-grade security can be achieved using state-of-the-art encryption mecha-
nisms such as AES-256.
5. Uses advanced proxy re-encryption with multi-hop support, so that consecutive
sharing is possible.
CHAPTER 9
Conclusion
Internet-based online cloud services provide enormous volumes of storage space and
tailor-made computing resources, and eliminate the need to rely on local machines for data
maintenance. Cloud storage service providers claim to offer secure and elastic data-storage
services that can adapt to various storage necessities. However, most security tools have
a finite rate of failure, and as intrusions come with ever more complex and sophisticated
techniques, security failure rates are increasing. Once we upload our data into the cloud,
we lose control of it, which certainly brings new security risks to its integrity and
confidentiality. Storing data in encrypted format can solve these security issues, and
access control mechanisms can be used to limit unwanted access to the cloud by
unauthorised persons. A secure file sharing mechanism for the cloud based on the
disintegration protocol (DIP) can be used for storage and file sharing.
The innovative approach of securing data stored on cloud servers through the com-
bined use of cryptography and the unique network architecture of DIP is highly effective.
DIP techniques can be used to improve the security of any server or computer
application. The linear behaviour of the number of DIP elements and the qualitative
improvement in system reliability, integrity, and security offer higher throughput. DIP
with the PRE scheme enables secure file sharing in the cloud. As a real-world appli-
cation, cloud storage providers such as Dropbox, SugarSync, and Box can directly adopt
this solution for secure file sharing.
REFERENCES
[1] Bharat S Rawal and S Sree Vivek. “Secure cloud storage and file sharing”. In 2017
IEEE International Conference on Smart Cloud (SmartCloud), pages 78–83. IEEE,
2017.
[2] B. Avinash, T. Harish, G. Karthikeyan, and Prof. D. Vinodha. “A Survey on proxy
re-encryption method using cloud computing”. International Journal of Research and
Engineering, 4(2):57–59, 2017.
[3] Bharat S Rawal, Harsha K Kalutarage, S Sree Vivek, and Kamlendu Pandey. “The
disintegration protocol: An ultimate technique for cloud data security.”. In 2016 IEEE
International Conference on Smart Cloud (SmartCloud), pages 27–34. IEEE, 2016.
[4] https://www.cloudwards.net/review/dropbox/.
[5] https://www.cloudwards.net/review/sugarsync/.