This document discusses protecting the security of assets and information. It covers identifying and classifying sensitive data and assets, determining appropriate security controls, and establishing ownership roles and responsibilities. The goal is to properly handle information throughout its lifecycle to prevent unauthorized disclosure and data breaches. Key steps include marking, handling, storing, and destroying assets based on their classification.
- Classify information and supporting assets (e.g., sensitivity, criticality)
- Determine and maintain ownership (e.g., data owners, system owners, business/mission owners)
- Protect privacy
- Ensure appropriate retention (e.g., media, hardware, personnel)
- Determine data security controls (e.g., data at rest, data in transit)
- Establish handling requirements (markings, labels, storage, destruction of sensitive information)
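The classification and handling objectives above can be sketched as a small lookup from sensitivity level to handling requirements. The levels and rules below are hypothetical examples for illustration, not taken from any standard:

```python
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4

# Hypothetical handling requirements keyed by classification level.
HANDLING = {
    Sensitivity.PUBLIC:       {"label": "PUBLIC",       "encrypt_at_rest": False, "destruction": "standard deletion"},
    Sensitivity.INTERNAL:     {"label": "INTERNAL",     "encrypt_at_rest": False, "destruction": "standard deletion"},
    Sensitivity.CONFIDENTIAL: {"label": "CONFIDENTIAL", "encrypt_at_rest": True,  "destruction": "secure wipe"},
    Sensitivity.RESTRICTED:   {"label": "RESTRICTED",   "encrypt_at_rest": True,  "destruction": "physical destruction"},
}

def handling_for(level: Sensitivity) -> dict:
    """Return marking, storage and destruction requirements for an asset."""
    return HANDLING[level]

print(handling_for(Sensitivity.RESTRICTED)["destruction"])  # physical destruction
```

In practice the classification scheme, markings and destruction methods come from the organization's own policy; the point is that handling requirements follow mechanically from the assigned classification.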
When data collects in one place, it is called data at rest. Data at rest can be archival or reference files that are rarely or never changed; it can also be data that is subject to regular but not constant change.
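One common control for data at rest is an integrity check, so that unauthorised modification of an archived file can be detected. A minimal sketch using Python's standard library (the hard-coded key is illustrative only; a real system would load keys from a key management service):

```python
import hmac
import hashlib

def tag(data: bytes, key: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag for a blob of data at rest."""
    return hmac.new(key, data, hashlib.sha256).digest()

def verify(data: bytes, key: bytes, expected: bytes) -> bool:
    """Constant-time check that the stored data still matches its tag."""
    return hmac.compare_digest(tag(data, key), expected)

key = b"example-key"  # illustrative only; use a KMS in practice
archive = b"2023 payroll archive"
t = tag(archive, key)

assert verify(archive, key, t)             # the untouched archive verifies
assert not verify(archive + b"x", key, t)  # any modification is detected
```

Note that an HMAC detects modification but does not prevent disclosure; data at rest that must stay confidential would also be encrypted.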
Compliance with the Privacy Act and mandatory data breach reporting for corporates (e-Safe Systems)
Entities covered by the Australian Privacy Act 1988 must take reasonable steps to protect the personal information they hold from misuse, interference and loss, and from unauthorised access, modification or disclosure. The Privacy Amendment (Notifiable Data Breaches) Bill 2016 establishes a mandatory data breach notification scheme in Australia.
The Privacy Act and mandatory data breach reporting (the NDB scheme) effectively require a data governance tool that can identify and protect sensitive personal data and provide clear visibility in the event of a breach.
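As a toy illustration of the "identify" half of that requirement, a scanner might flag records containing personal identifiers such as email addresses or phone-number-like strings. The patterns below are simplified assumptions, not a complete PII taxonomy:

```python
import re

# Deliberately simple patterns; production tools use far richer detectors.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b(?:\+?61|0)[2-478](?:[ -]?\d){8}\b"),  # AU-style, illustrative
}

def find_pii(text: str) -> dict:
    """Return the PII categories detected in a piece of text."""
    return {name: pat.findall(text)
            for name, pat in PII_PATTERNS.items() if pat.search(text)}

record = "Contact Jane at jane.doe@example.com or 0412 345 678."
print(find_pii(record))
```

A governance tool would run checks like this across file shares, databases and mail stores, then attach classifications that drive protection and breach-notification workflows.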
e-Safe Compliance is a technology response that forms an integral part of the overall policy and procedural response required to address the privacy legislation.
To assist organisations with this legislation, the OAIC has published a guide to securing personal information. This is an important document because the OAIC states that it will refer to the guide when investigating whether an organisation has complied with its personal information security obligations, or when undertaking an assessment.
https://www.oaic.gov.au/agencies-and-organisations/guides/guide-to-securing-personal-information
The slides showcase how e-Safe Compliance fulfils the requirements of a governance tool and can assist organisations in complying with all nine areas highlighted in the guide.
A Review Study on Privacy-Preserving Data Mining Techniques and Approaches
In this paper we review various privacy-preserving data mining techniques, such as data modification and secure multiparty computation, from different aspects.
Index Terms – Privacy and Security, Data Mining, Privacy Preserving, Secure Multiparty Computation (SMC), Data Modification
Compliance with the Privacy Act and mandatory data breach reporting for schools (e-Safe Systems)
Entities covered by the Australian Privacy Act 1988 must take reasonable steps to protect the personal information they hold from misuse, interference and loss, and from unauthorised access, modification or disclosure. The Privacy Amendment (Notifiable Data Breaches) Bill 2016 establishes a mandatory data breach notification scheme in Australia.
Private schools are covered under this scheme in Australia.
https://www.oaic.gov.au/individuals/faqs-for-individuals/education-and-child-care/are-private-schools-and-tertiary-educational-institutions-covered-by-the-privacy-act
The Privacy Act and mandatory data breach reporting (the NDB scheme) effectively require a data governance tool that can identify and protect sensitive student data and provide clear visibility in the event of a breach.
e-Safe Compliance is a technology response that forms an integral part of the overall policy and procedural response required to address the privacy legislation.
To assist schools with this legislation, the OAIC has published a guide to securing personal information. This is an important document because the OAIC states that it will refer to the guide when investigating whether a school has complied with its personal information security obligations, or when undertaking an assessment.
https://www.oaic.gov.au/agencies-and-organisations/guides/guide-to-securing-personal-information
The slides showcase how e-Safe Compliance fulfils the requirements of a governance tool and can assist schools in complying with all nine areas highlighted in the guide.
This presentation covers the topic of data security within the subject of information security. It discusses data, data security, security policy, tools to secure data, a security overview (availability, integrity, authenticity, confidentiality), some common myths, the dimensions of system security, and security issues.
CompTIA exam study guide presentations by instructor Brian Ferrill, PACE-IT (Progressive, Accelerated Certifications for Employment in Information Technology)
"Funded by the Department of Labor, Employment and Training Administration, Grant #TC-23745-12-60-A-53"
Learn more about the PACE-IT Online program: www.edcc.edu/pace-it
Addressing the EU GDPR & New York Cybersecurity Requirements: 3 Keys to Success (Sirius)
The EU General Data Protection Regulation (GDPR) and the New York State Cybersecurity Requirements for Financial Services Companies (23 NYCRR 500) represent a landmark change in the global data protection space. While they originate in different countries and apply to different organizations, their primary message is the same:
Protect your data, or pay a steep price. More specifically, protect the sensitive data you collect from customers.
With deadlines looming, is your organization ready?
The time to act is now. Read more to learn:
--Key mandates and minimum requirements for compliance
--Why a comprehensive data-centric security strategy is invaluable to all data protection and data privacy efforts
--How you can gauge your organization’s incident response capabilities
--How to extend your focus beyond the organization’s figurative four walls to ensure requirements are met throughout your supply chain
The first New York requirements deadline has arrived. With the next mandate deadline only six months away, you don't want to fall behind and leave your organization at risk of penalties and fines.
Microsoft Azure Information Protection (AIP) helps protect your data from being accessed by unauthorized users. This is an overview of AIP.
Data loss prevention (DLP) solutions are security tools and techniques that monitor, control, and block unauthorized data transfers. They aim to detect and mitigate the risk of data breaches by protecting sensitive data from external and internal threats.
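A DLP control of this kind can be sketched as a filter that inspects outbound content and blocks anything carrying a sensitivity marking. The markings and decision rule below are illustrative assumptions; real DLP products combine content inspection, fingerprinting and contextual rules:

```python
# Hypothetical classification markings that must never leave the organization.
BLOCKED_MARKINGS = ("CONFIDENTIAL", "RESTRICTED")

def allow_transfer(message: str) -> bool:
    """Permit an outbound transfer only if no sensitive marking appears in it."""
    upper = message.upper()
    return not any(marking in upper for marking in BLOCKED_MARKINGS)

assert allow_transfer("Meeting notes for the all-hands")
assert not allow_transfer("[RESTRICTED] Q3 customer list attached")
```

The same check could sit at an email gateway, a web proxy, or an endpoint agent; the enforcement point changes, the classification-driven decision does not.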
Information protection is the ability to positively control and report on the use and modification of your most important information assets. In this whitepaper you will find useful information on protecting your organization with Microsoft technologies.
Sample Data Security Policies
This document provides three example data security policies that cover key areas of concern. They should not be considered an exhaustive list; rather, each organization should identify any additional areas that require policy in accordance with its users, data, regulatory environment and other relevant factors.
The three policies cover:
1. Data security policy: Employee requirements
2. Data security policy: Data Leakage Prevention – Data in Motion
3. Data security policy: Workstation Full Disk Encryption
Comments to assist in the use of these policies have been added in red.
Data security policy: Employee requirements
Using this policy
This example policy outlines behaviors expected of employees when dealing with data and provides a classification of the types of
data with which they should be concerned. This should link to your AUP (acceptable use policy), security training and information
security policy to provide users with guidance on the required behaviors.
1.0 Purpose
<Company X> must protect restricted, confidential or sensitive data from loss to avoid reputation damage and to avoid adversely
impacting our customers. The protection of data in scope is a critical business requirement, yet flexibility to access data and work
effectively is also critical.
It is not anticipated that this technology control can effectively deal with the malicious theft scenario, or that it will reliably detect all data. Its primary objective is user awareness and the avoidance of accidental loss scenarios. This policy outlines the requirements for data leakage prevention, a focus for the policy and a rationale.
2.0 Scope
1. Any employee, contractor or individual with access to <Company X> systems or data.
2. Definition of data to be protected (you should identify the types of data and give examples so that your users can identify it
when they encounter it)
- PII
- Financial
- Restricted/Sensitive
- Confidential
- IP
3.0 Policy – Employee requirements
1. You need to complete <Company X>’s security awareness training and agree to uphold the acceptable use policy.
2. If you identify an unknown, unescorted or otherwise unauthorized individual in <Company X> you need to immediately notify
<complete as appropriate>.
3. Visitors to <Company X> must be escorted by an authorized employee at all times. If you are responsible for escorting
visitors, you must restrict them to appropriate areas.
4. You must not reference the subject or content of sensitive or confidential data publicly, or via systems or
communication channels not controlled by <Company X>. For example, the use of external e-mail systems not hosted by
<Company X> to distribute data is not allowed.
5. Please keep a clean desk. To maintain information security, you need to ensure that printed in-scope data is not left
unattended at your workstation.
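Rule 4 above lends itself to a simple technical check: an outbound mail gateway could verify every recipient domain against a list of company-controlled domains. The domain list and helper below are hypothetical placeholders for <Company X>'s real infrastructure:

```python
# Hypothetical domains controlled by <Company X>.
CONTROLLED_DOMAINS = {"companyx.com", "mail.companyx.com"}

def recipients_allowed(recipients) -> bool:
    """Permit sending only if every recipient is on a company-controlled domain."""
    return all(addr.rsplit("@", 1)[-1].lower() in CONTROLLED_DOMAINS
               for addr in recipients)

assert recipients_allowed(["alice@companyx.com", "bob@mail.companyx.com"])
assert not recipients_allowed(["alice@companyx.com", "eve@gmail.com"])
```

A check like this enforces the "channels not controlled by <Company X>" requirement automatically rather than relying on user judgment alone.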
2. LEARNING OUTCOMES
At the end of this lecture students should be able to:
Identify and classify information and assets
Protect privacy
Explain data security controls
Establish information and asset handling requirements
3. The Asset Security domain focuses on collecting, handling, and
protecting information throughout its lifecycle.
A primary step in this domain is classifying information based
on its value to the organization.
All follow-on actions vary depending on the classification. For
example, highly classified data requires stringent security
controls. In contrast, unclassified data uses fewer security
controls.
INTRODUCTION
5. One of the first steps in asset security is identifying and
classifying information and assets.
In this context, assets include sensitive data, the hardware used
to process it, and the media used to hold it.
Organizations often include classification definitions within a
security policy.
Personnel then label assets appropriately based on the security
policy requirements.
IDENTIFY AND CLASSIFY ASSETS
6. Steps to identify and classify assets:
1) Defining Sensitive Data
2) Defining Data Classifications
3) Defining Asset Classifications
4) Determining Data Security Controls
5) Understanding Data States
6) Handling Information Assets
7) Data Protection Methods
IDENTIFY AND CLASSIFY ASSETS
7. Sensitive data is any information that isn’t public or unclassified.
It can include confidential, proprietary, protected, or any other
type of data that an organization needs to protect due to its
value to the organization, or to comply with existing laws and
regulations.
1. DEFINING SENSITIVE DATA
8. Types of Sensitive Data
Personally identifiable information (PII) is any information that can
identify an individual.
- National Institute of Standards and Technology (NIST) Special
Publication (SP) 800-122 provides a more formal definition:
Any information about an individual maintained by an agency,
including
(1) any information that can be used to distinguish or trace an
individual’s identity, such as name, social security number, date
and place of birth, mother’s maiden name, or biometric records;
and
(2) any other information that is linked or linkable to an individual,
such as medical, educational, financial, and employment information.
1. DEFINING SENSITIVE DATA
9. Types of Sensitive Data
Protected health information (PHI) is any health-related information that
can be related to a specific person
- Health information means any information, whether oral or recorded in any
form or medium, that—
(A) is created or received by a health care provider, health plan, public
health authority, employer, life insurer, school or university, or health care
clearinghouse; and
(B) relates to the past, present, or future physical or mental health or
condition of any individual, the provision of health care to an individual, or
the past, present, or future payment for the provision of health care to an
individual (HIPAA)
1. DEFINING SENSITIVE DATA
10. Types of Sensitive Data
Proprietary data refers to any data that helps an organization
maintain a competitive edge.
It could be software code it developed, technical plans for
products, internal processes, intellectual property, or trade
secrets.
If competitors are able to access the proprietary data, it can
seriously affect the primary mission of an organization.
1. DEFINING SENSITIVE DATA
11. Organizations typically include data classifications in their
security policy, or in a separate data policy.
A data classification identifies the value of the data to the
organization and is critical to protecting data confidentiality and
integrity.
The policy identifies classification labels used within the
organization.
It also identifies how data owners can determine the proper
classification and how personnel should protect data based on
its classification.
2. DEFINING DATA CLASSIFICATIONS
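As a minimal sketch of such a policy, a classification can be modeled as a lookup from label to required controls. The labels and controls below are illustrative assumptions, not values mandated by any standard:

```python
# Illustrative mapping of classification labels to required controls.
# Both the label set and the controls are example values, not a standard.
CLASSIFICATION_POLICY = {
    "confidential/proprietary": {"encrypt_at_rest": True, "encrypt_in_transit": True, "access": "need-to-know"},
    "private":   {"encrypt_at_rest": True,  "encrypt_in_transit": True,  "access": "department"},
    "sensitive": {"encrypt_at_rest": False, "encrypt_in_transit": True,  "access": "internal"},
    "public":    {"encrypt_at_rest": False, "encrypt_in_transit": False, "access": "anyone"},
}

def required_controls(label: str) -> dict:
    """Return the controls the policy mandates for a classification label."""
    return CLASSIFICATION_POLICY[label.lower()]
```

For example, `required_controls("Private")` tells personnel that private data must be encrypted both at rest and in transit.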
14. Asset classifications should match the data classifications.
In other words, if a computer is processing top secret
data, the computer should also be classified as a top
secret asset.
Similarly, if media such as internal or external drives holds
top secret data, the media should also be classified as top
secret.
It is common to use clear markings on hardware assets so that
personnel are reminded of the classification of data that can be
processed on them.
3. DEFINING ASSET CLASSIFICATIONS
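The rule that an asset inherits the highest classification of the data it processes can be sketched as follows (the label set and its ordering are assumptions for illustration):

```python
# Classification labels ranked from lowest to highest sensitivity.
# The label set is illustrative; substitute your organization's labels.
LEVELS = ["public", "sensitive", "private", "confidential/proprietary"]

def asset_classification(data_labels):
    """An asset is classified at the highest level of data it processes."""
    if not data_labels:
        return LEVELS[0]  # no sensitive data: lowest classification
    return max(data_labels, key=LEVELS.index)
```

So a drive holding both public and private data is classified private, matching the rule that media holding top secret data is itself top secret.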
15. After defining data and asset classifications, it’s important to define
the security requirements and identify security controls to
implement those security requirements.
Imagine that an organization has decided on data labels of
Confidential/Proprietary, Private, Sensitive, and Public as described
previously.
Management then decides on a data security policy dictating the
use of specific security controls to protect data in these categories.
The policy will likely address data stored in files, in databases, on
servers including email servers, on user systems, sent via email,
and stored in the cloud.
4. DETERMINING DATA SECURITY CONTROLS
17. It’s important to protect data in all data states , including while it is at
rest, in motion, and in use.
Data at Rest - Data at rest is any data stored on media such as
hard drives, external USB drives, storage area networks (SANs), and
backup tapes.
Data in Transit - Data in transit (sometimes called data in motion) is
any data transmitted over a network. This includes data transmitted
over an internal network using wired or wireless methods and data
transmitted over public networks such as the internet.
Data in Use - Data in use refers to data in memory or temporary
storage buffers while an application is using it. Because an
application can’t process encrypted data, it must decrypt it in
memory before use.
5. UNDERSTANDING DATA STATES
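As one hedged example of protecting data in transit, a message authentication code lets the receiver detect tampering in motion. This sketch uses Python's standard hmac module; the shared key is a placeholder, and note that an HMAC protects integrity and authenticity, not confidentiality:

```python
import hashlib
import hmac

KEY = b"shared-secret-key"  # placeholder; real keys must be distributed securely

def tag(message: bytes) -> str:
    """Compute an HMAC-SHA256 tag to send alongside the message."""
    return hmac.new(KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, received_tag: str) -> bool:
    """Constant-time check that the message was not altered in transit."""
    return hmac.compare_digest(tag(message), received_tag)
```

A message altered in transit fails verification; confidentiality in transit would additionally require encryption (e.g. TLS).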
18. A key goal of managing sensitive data is to prevent data
breaches.
A data breach is any event in which an unauthorized entity can
view or access sensitive data.
If you pay attention to the news, you probably hear about data
breaches quite often.
6. HANDLING INFORMATION AND ASSETS
19. Basic steps to limit data breaches
Marking Sensitive Data and Assets - Marking (often called labeling)
information ensures that users can easily identify the classification level of any
data
Handling Sensitive Information and Assets - Handling refers to the secure
transportation of media through its lifetime. Personnel handle data differently
based on its value and classification, and as you’d expect, highly classified
information needs much greater protection. Many times, people get
accustomed to handling sensitive information and become lackadaisical with
protecting it.
Storing Sensitive Data - Sensitive data should be stored in such a way that it is
protected against any type of loss. The obvious protection is encryption. If
sensitive data is stored on physical media such as portable disk drives or
backup tapes, personnel should follow basic physical security practices to
prevent losses due to theft.
6. HANDLING INFORMATION AND ASSETS
20. Basic steps to limit data breaches
Destroying Sensitive Data - When an organization no longer needs sensitive
data, personnel should destroy it. Proper destruction ensures that it cannot
fall into the wrong hands and result in unauthorized disclosure. Highly
classified data requires different steps to destroy it than data classified at a
lower level.
Eliminating Data Remanence - Data remanence is the data that remains on
media after the data was supposedly erased. It typically refers to data on a
hard drive as residual magnetic flux.
One way to remove data remanence is with a degausser. A degausser
generates a heavy magnetic field, which realigns the magnetic fields in
magnetic media such as traditional hard drives, magnetic tape, and floppy
disk drives. A powered degausser reliably rewrites these magnetic fields
and removes the data remanence. However, degaussers are only effective on
magnetic media.
6. HANDLING INFORMATION AND ASSETS
21. Basic steps to limit data breaches
Ensuring Appropriate Asset Retention - Retention requirements apply to
data or records, media holding sensitive data, systems that process
sensitive data, and personnel who have access to sensitive data.
Record retention and media retention are the most important elements of
asset retention.
Record retention involves retaining and maintaining important
information as long as it is needed and destroying it when it is no
longer needed.
6. HANDLING INFORMATION AND ASSETS
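The retention rule above ("keep as long as needed, destroy when no longer needed") can be sketched as a simple age check. The record types and retention periods here are invented for illustration; real periods come from law, regulation, and policy:

```python
from datetime import date, timedelta

# Example retention periods per record type; real values come from policy/law.
RETENTION = {
    "email": timedelta(days=365),
    "payroll": timedelta(days=7 * 365),
}

def should_destroy(record_type: str, created: date, today: date) -> bool:
    """True when a record has outlived its retention period."""
    return today - created > RETENTION[record_type]
```

A two-year-old email is flagged for destruction, while two-year-old payroll records are retained.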
22. One of the primary methods of protecting the confidentiality of data
is encryption
Symmetric encryption uses the same key to encrypt and decrypt data
Advanced Encryption Standard
Triple DES
Blowfish
Transport encryption methods encrypt data before it is transmitted,
providing protection of data in transit. The primary risk of sending
unencrypted data over a network is a sniffing attack.
Organizations often enable remote access solutions such as virtual
private networks (VPNs).
7. DATA PROTECTION METHODS
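The defining property of symmetric encryption, that one shared key both encrypts and decrypts, can be illustrated with a toy XOR cipher. This is for illustration only; it is not secure, and real systems should use a vetted algorithm such as AES via an established cryptographic library:

```python
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with a repeating key.
    Applying it twice with the same key recovers the plaintext.
    NOT secure; use AES from a vetted crypto library in practice."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

ciphertext = xor_cipher(b"top secret", b"key")
plaintext = xor_cipher(ciphertext, b"key")  # same key decrypts
```

The round trip demonstrates the symmetric property shared by AES, Triple DES, and Blowfish: whoever holds the key can both encrypt and decrypt.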
24. Many people within an organization manage, handle, and use
data, and they have different requirements based on their roles
One of the most important concepts here is ensuring that
personnel know who owns information and assets.
The owners have a primary responsibility of protecting the data
and assets.
DETERMINING OWNERSHIP
25. Data Owner –
o The data owner is the person who has ultimate organizational
responsibility for data.
o The owner is typically the chief executive officer (CEO), president,
or a department head.
o Data owners identify the classification of data and ensure that it is
labeled properly.
o They also ensure that it has adequate security controls based on
the classification and the organization’s security policy
DETERMINING OWNERSHIP
26. Asset Owner –
o The asset owner (or system owner) is the person who owns the asset or
system that processes sensitive data.
o Develops and maintains a system security plan in coordination with
information owners, the system administrator, and functional end users
o The system owner is typically the same person as the data owner, but it
can sometimes be someone different, such as a different department
head
o The system owner is responsible for ensuring that data processed on the
system remains secure. This includes identifying the highest level of data
that the system processes
DETERMINING OWNERSHIP
27. Business/Mission Owner –
o The business/mission owner role is viewed differently in different
organizations
o The role is often filled by a program manager or an information
system owner; as such, the responsibilities of the business/mission
owner can overlap with the responsibilities of the system owner or
be the same role.
o Business owners might own processes that use systems managed by other
entities
o Business owners are responsible for ensuring that systems provide value to
the organization. This sounds obvious.
o However, IT departments sometimes become overzealous and implement
security controls without considering the impact on the business or its
mission.
DETERMINING OWNERSHIP
28. Data Processors –
o Generically, a data processor is any system used to process data.
o Under the EU General Data Protection Regulation (GDPR), a data
processor is “a natural or legal person, public authority, agency,
or other body, which processes personal data solely on behalf of
the data controller.”
o In this context, the data controller is the person or entity that
controls processing of the data.
DETERMINING OWNERSHIP
29. Pseudonymization –
o Two technical security controls that organizations can implement
are encryption and pseudonymization.
o A pseudonym is an alias
o Pseudonymization refers to the process of using pseudonyms to
represent other data.
o It can be done to prevent the data from directly identifying an
entity, such as a person.
DETERMINING OWNERSHIP
30. Anonymization –
oAnonymization is the process of removing all relevant
data so that it is impossible to identify the original
subject or person.
DETERMINING OWNERSHIP
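By contrast, anonymization removes identifying data entirely, with no lookup to reverse it. The field list is an assumption for illustration; a real policy would enumerate identifying fields carefully:

```python
# Fields treated as identifying; illustrative, not an authoritative list.
IDENTIFYING_FIELDS = {"name", "address", "phone", "ssn"}

def anonymize(record: dict) -> dict:
    """Drop identifying fields outright; unlike pseudonymization, irreversible."""
    return {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}
```

Once the identifying fields are dropped and no mapping is retained, the original subject cannot be re-identified from the record.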
31. Administrators –
o A data administrator is responsible for granting appropriate access
to personnel.
o They don’t necessarily have full administrator rights and privileges,
but they do have the ability to assign permissions.
o Administrators assign permissions based on the principles of least
privilege and the need to know, granting users access to only what
they need for their job.
o Administrators typically assign permissions using a Role Based
Access Control model
DETERMINING OWNERSHIP
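Role Based Access Control assignment, as described above, can be sketched as roles carrying permission sets that users inherit. The role and permission names are invented for illustration:

```python
# Illustrative roles mapped to the permissions they grant (least privilege).
ROLES = {
    "payroll_clerk": {"read_payroll"},
    "payroll_manager": {"read_payroll", "modify_payroll"},
}
USER_ROLES = {"alice": {"payroll_clerk"}}

def has_permission(user: str, permission: str) -> bool:
    """A user holds a permission if any assigned role grants it."""
    return any(permission in ROLES[r] for r in USER_ROLES.get(user, set()))
```

Alice, assigned only the clerk role, can read payroll data but cannot modify it, reflecting least privilege and need to know.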
32. Custodians –
o Data owners often delegate day-to-day tasks to a custodian. A
custodian helps protect the integrity and security of data by
ensuring that it is properly stored and protected.
o For example, custodians would ensure that the data is backed up in
accordance with a backup policy.
o If administrators have configured auditing on the data, custodians
would also maintain these logs.
DETERMINING OWNERSHIP
33. Users –
o A user is any person who accesses data via a computing system to
accomplish work tasks.
o Users have access to only the data they need to perform their work
tasks. You can also think of users as employees or end users.
DETERMINING OWNERSHIP
35. Organizations have an obligation to protect the data that they
collect and maintain, especially PII and PHI.
Many laws and regulations mandate the protection of privacy data,
and organizations have an obligation to learn which laws and
regulations apply to them. Additionally, organizations need to ensure
that their practices comply with these laws and regulations.
It’s common for organizations to post an online privacy policy on their
websites
PROTECTING PRIVACY
36. When protecting privacy, an organization will typically use
several different security controls.
Selecting the proper security controls can be a daunting task,
especially for new organizations.
However, using security baselines and identifying relevant
standards makes the task a little easier.
Baselines provide a starting point and ensure a minimum security
standard.
PROTECTING PRIVACY
37. Asset security focuses on collecting, handling, and protecting
information throughout its lifecycle.
Sensitive information is any information that an organization keeps
private and can include multiple levels of classifications
Organizations take specific steps to mark, handle, store, and destroy
sensitive information and hardware assets, and these steps help
prevent the loss of confidentiality due to unauthorized disclosure.
Personnel can fulfill many different roles when handling data.
Security baselines provide a set of security controls that an
organization can implement as a secure starting point.
CONCLUSION
Editor's Notes
organizations have a responsibility to protect PII. This includes PII related to employees and customers. Many laws require organizations to notify individuals if a data breach results in a compromise of PII.
Health Insurance Portability and Accountability Act (HIPAA)
Some people think that only medical care providers such as doctors and hospitals need to protect PHI. However, HIPAA defines PHI much more broadly. Any employer that provides, or supplements, healthcare policies collects and handles PHI.
Although copyrights, patents, and trade secret laws provide a level of protection for proprietary data, this isn’t always enough. Many criminals don’t pay attention to copyrights, patents, and laws. Similarly, foreign entities have stolen a significant amount of proprietary data
As an example, government data classifications include top secret, secret, confidential, and unclassified. Anything above unclassified is sensitive Data, but clearly, these have different values.
Confidential or Proprietary The confidential or proprietary label typically refers to the highest level of classified data. In this context, a data breach would cause exceptionally grave damage to the mission of the organization. As an example, attackers have repeatedly attacked Sony, stealing more than 100 terabytes of data including full-length versions of unreleased movies. These quickly showed up on file-sharing sites and security experts estimate that people downloaded these movies up to a million times. With pirated versions of the movies available, many people skipped seeing them when Sony ultimately released them. This directly affected their bottom line. The movies were proprietary and the organization might have considered it as exceptionally grave damage. In retrospect, they may choose to label movies as confidential or proprietary and use the strongest access controls to protect them.
Private The private label refers to data that should stay private within the organization but doesn’t meet the definition of confidential or proprietary data. In this context, a data breach would cause serious damage to the mission of the organization. Many organizations label PII and PHI data as private. It’s also common to label internal employee data and some financial data as private. As an example, the payroll department of a company would have access to payroll data, but this data is not available to regular employees.
Sensitive Sensitive data is similar to confidential data. In this context, a data breach would cause damage to the mission of the organization. As an example, information technology (IT) personnel within an organization might have extensive data about the internal network including the layout, devices, operating systems, software, Internet Protocol (IP) addresses, and more. If attackers have easy access to this data, it makes it much easier for them to launch attacks. Management may decide they don’t want this information available to the public, so they might label it as sensitive
Public Public data is similar to unclassified data. It includes information posted in websites, brochures, or any other public source. Although an organization doesn’t protect the confidentiality of public data, it does take steps to protect its integrity. For example, anyone can view public data posted on a website. However, an organization doesn’t want attackers to modify this data so it takes steps to protect it.
For example, if a computer is used to process top secret data, the computer and the monitor will have clear and prominent labels reminding users of the classification of data that can be processed on the computer
For this example, we’re limiting the type of data to only email. The organization has defined how it wants to protect email in each of the data categories. They decided that any email in the Public category doesn’t need to be encrypted. However, email in all other categories
(Confidential/Proprietary, Private, and Sensitive) must be encrypted when being sent (data in transit) and while stored on an email server (data at rest). Encryption converts cleartext data into scrambled ciphertext and makes it more difficult to read. Using strong encryption methods such as Advanced Encryption Standard with 256-bit cryptography keys (AES 256) makes it almost impossible for unauthorized personnel to read the text.
The table shows other security requirements for email that management defined in their data security policy. Notice that data in the highest level of classification category (Confidential/Proprietary) has the most security requirements defined in the security policy.
Additionally, identity and access management (IAM) security controls help ensure that only authorized personnel can access resources.
SSDs use integrated circuitry instead of magnetic flux on spinning platters. Because of this, degaussing SSDs won’t remove data. However, even when using other methods to remove data from SSDs, data remnants often remain.
As an example, consider a web server used for e-commerce that interacts with a back-end database server. A software development department might perform database development and database administration for the database and the database server, but the IT department maintains the web server.
In this case, the software development DH is the system owner for the database server, and the IT DH is the system owner for the web server. However, it’s more common for one person (such as a single department head) to control both servers, and this one person would be the system owner for both systems.
Business owners might own processes that use systems managed by other entities. As an example, the sales department could be the business owner but the IT department and the software development department could be the system owners for systems used in sales processes.
As an example, a company that collects personal information on employees for payroll is a data controller. If they pass this information to a third-party company to process payroll, the payroll company is the data processor. In this example, the payroll company (the data processor) must not use the data for anything other than processing payroll at the direction of the data controller
Instead of including personal information such as the patient’s name, address, and phone number, it could just refer to the patient as Patient 23456 in the medical record. The doctor’s office still needs this personal information, and it could be held in another database linking it to the patient
pseudonym (Patient 23456).