What I learned at the Infosecurity ISACA North America Conference 2019
Ulf Mattsson
The 2019 Infosecurity ISACA North America Expo and Conference was held in New York City’s Javits Convention Center on November 20-21. With more than 50 sessions spanning 5 tracks, this conference offered the best-in-class educational content ISACA members and certification holders depend on, plus unprecedented access to leaders in the security industry.
Join Ulf Mattsson, Head of Innovation at TokenEx, for a conference recap webinar on the biggest takeaways.
Privacy-preserving computing and secure multi-party computation (ISACA Atlanta)
Ulf Mattsson
A major challenge that many organizations face is how to address data privacy regulations such as CCPA, GDPR, and other emerging regulations around the world, including data residency controls, while also enabling data sharing in a secure and private fashion. We will present solutions that can reduce or remove the legal, risk, and compliance processes normally associated with data sharing projects by allowing organizations to collaborate across divisions, with other organizations, and across jurisdictions where data cannot be relocated or shared.
We will discuss secure multi-party computation where organizations want to securely share sensitive data without revealing their private inputs. We will review solutions that are driving faster time to insight by the use of different techniques for privacy-preserving computing including homomorphic encryption, k-anonymity and differential privacy. We will present best practices and how to control privacy and security throughout the data life cycle. We will also review industry standards, implementations, policy management and case studies for hybrid cloud and on-premises.
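As a toy illustration of one of the techniques named above, here is a minimal sketch of the Laplace mechanism for differential privacy. This is purely illustrative (the function names and parameters are assumptions for the example, not any vendor's implementation):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw Laplace(0, scale) noise via inverse transform sampling."""
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def dp_count(values, predicate, epsilon: float = 1.0) -> float:
    """Differentially private count: a counting query has sensitivity 1,
    so Laplace noise with scale 1/epsilon yields epsilon-DP."""
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

# Example: count customers under 30 without revealing the exact number.
ages = [22, 35, 41, 28, 19, 56, 33, 25]
noisy = dp_count(ages, lambda age: age < 30, epsilon=0.5)
```

A smaller epsilon adds more noise (stronger privacy); a larger epsilon gives more accurate answers.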
Unlock the potential of data security 2020
Ulf Mattsson
Explore the challenges of managing and protecting data. We'll share best practices on establishing the right balance between privacy, security, and compliance.
Protecting Data Privacy in Analytics and Machine Learning
Ulf Mattsson
In this session, we will discuss a range of new emerging technologies for privacy and confidentiality in machine learning and data analytics. We will discuss how to use open source tools to put these technologies to work for databases and other data sources.
When we think about developing AI responsibly, there are many different activities that we need to consider. In this session, we will discuss technologies that help protect people, preserve privacy, and enable you to do machine learning confidentially.
This session discusses industry standards and emerging privacy-enhanced computation techniques, secure multiparty computation, and trusted execution environments. We will discuss how the Zero Trust philosophy fundamentally changes the way we approach security, since trust is a vulnerability that can be exploited, particularly when working remotely and increasingly using cloud models. We will also discuss the “why, what, and how” of techniques for privacy-preserving computing.
We will review how different industries are taking advantage of these privacy-preserving techniques. A retail company used secure multi-party computation to respect user privacy and specific regulations while gaining insights and protecting the organization’s IP. A healthcare organization uses secure data sharing to protect the privacy of individuals, and also stores and searches encrypted medical data in the cloud.
We will also review the benefits of secure data sharing for financial institutions, including a large bank that wanted to broaden access to its data lake without compromising data privacy, while preserving the data’s analytical quality for machine learning purposes.
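The secure multi-party computation pattern described above can be sketched with toy additive secret sharing: each party splits its private input into random shares, and only aggregates are ever reconstructed. This is a simplified illustration under assumed names, not any organization's implementation:

```python
import random

PRIME = 2**61 - 1  # field modulus for the shares

def share(secret: int, n: int = 3):
    """Split a secret into n additive shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares) -> int:
    """Combine shares; any subset smaller than n reveals nothing."""
    return sum(shares) % PRIME

# Two parties split their private values; each compute node adds the
# shares it holds, and only the combined total is ever reconstructed.
a_shares = share(120)
b_shares = share(80)
partial_sums = [(a + b) % PRIME for a, b in zip(a_shares, b_shares)]
joint_total = reconstruct(partial_sums)  # 200, inputs never revealed
```

Real MPC protocols also handle multiplication, malicious parties, and communication; this sketch shows only the core secret-sharing idea.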
ISSA Atlanta - Emerging application and data protection for multi-cloud
Ulf Mattsson
Personal data privacy will be the most prominent issue affecting how businesses gather, store, process, and disclose data in the public cloud. Businesses have been inundated with information on what recent privacy laws like GDPR and CCPA require, but many are still trying to figure out how to comply with them on a practical level. Many companies are focusing on data privacy from the legal and security side, which are foundational, but are missing the focus on data. The good news is that these data privacy regulations compel businesses to get a handle on personal data — how they get it, where they get it from, which systems process it, where it goes internally and externally, etc. In other words, the new norms of data privacy require proactive data management, which enables organizations to extract real business value from their data, improve the customer experience, streamline internal processes, and better understand their customers.
The new Verizon Data Breach Investigations Report (DBIR) provides perspectives on how criminals simply shift their focus and adapt their tactics to locate and steal the data they find most valuable.
This session will discuss emerging application and data protection for multi-cloud and review differential privacy, tokenization, homomorphic encryption, and privacy-preserving computation.
• Learn new application and data protection strategies
• Learn about advancements in machine learning
• Learn how to develop a roadmap for EU GDPR compliance
• Learn data-centric security for digital business
• Learn where data security and the value of data meet in the cloud
• Learn about data protection on-premises and in public and private clouds
• Learn about emerging application and data protection for multi-cloud
• Learn about emerging data privacy and security for the cloud
• Learn about new enterprise application and data security challenges
• Learn about differential privacy, tokenization, homomorphic encryption, and privacy-preserving computation
A practical data privacy and security approach to FFIEC, GDPR, and CCPA
Ulf Mattsson
With sensitive data residing everywhere, organizations becoming more mobile, and the breach epidemic growing, the need for advanced data privacy and security solutions has become even more critical. French regulators cited GDPR in fining Google $57 million, and the U.K.'s Information Commissioner's Office is seeking a $230 million fine against British Airways and $124 million from Marriott. Facebook is setting aside $3 billion to cover the costs of a privacy investigation launched by US regulators.
This session will take a practical approach to addressing guidance and standards from the Federal Financial Institutions Examination Council (FFIEC), EU GDPR, California CCPA, the NIST Risk Management Framework, COBIT, and the ISO 31000 Risk Management Principles and Guidelines.
Learn how new data privacy and security techniques can help with compliance and data breaches, on-premises, and in public and private clouds.
Evolving regulations are changing the way we think about tools and technology
Ulf Mattsson
Discover the latest in RegTech and stay up-to-date on compliance tools and best practices.
The move to digital has meant that many organizations have had to rethink legacy systems.
They need to put the customer first, focus on the Customer Experience and Digital Experience Platforms.
They also need to understand the latest in RegTech and solutions for hybrid cloud.
We will discuss RegTech for the financial industry and related technologies for compliance.
We will discuss new International Standards, tools and best practices for financial institutions including PCI v4, FFIEC, NACHA, NIST, GDPR and CCPA.
We will discuss related technologies for Data Security and Privacy, including data de-identification, encryption, tokenization and the new API Economy.
Future data security ‘will come from several sources’
John Davis
The process of digitisation will become more all-encompassing, but will create new data security needs that can only be met by multiple suppliers, a report has said. See more at: http://www.storetec.net/news-blog/future-data-security-will-come-from-several-sources
Bridging the gap between privacy and big data (Protegrity, Sep 10)
Ulf Mattsson
Big Data systems like Hadoop provide analysis of massive amounts of data to open up “Big Answers”, identifying trends and new business opportunities. The massive scalability and economical storage also provide the opportunity to monetize collected data by selling it to a third party.
However, the biggest issue with Big Data remains security. Like any other system, the data must be protected according to regulatory mandates, such as PCI, HIPAA and Privacy laws; from both external and internal threats – including privileged users.
So how can we bridge the gap between access to vast amounts of data, and security of more and more types of data, in this rapidly evolving new environment?
In this webinar, Ulf Mattsson explores the issues and provides solutions for bringing together data insight and security in Big Data. With deep knowledge of advanced data security technologies, Ulf explains best practices for safely unlocking the power of Big Data.
Book outline:
Sections: I. Introduction and Vision; II. Data Confidentiality and Integrity; III. Users and Authorization; IV. Applications; V. Platforms; VI. Summary.
Chapter topics include: Trends, Innovation, and Evolution; Standards and Regulations; Governance, Guidance, and Frameworks; Privacy, Risks, and Threats; Privacy by Design, Applications and APIs; Access Control; Zero Trust Architecture; Trusted Execution Environments; Reversible Protection; Non-Reversible Protection; Computing on Encrypted Data; Secure Multi-party Computing; Machine Learning and Analytics; Discovery and Search; International Unicode; Hybrid Cloud, CASB and SASE; Internet of Things; Blockchain; Quantum Computing; and Best Practices, Roadmap, and Vision.
Back matter: Appendices A-E and a Glossary.
Advanced PII/PI data discovery and data protection
Ulf Mattsson
We will discuss using Advanced PII/PI Discovery to Find & Inventory All Personal Data at an Enterprise Scale.
Learn about new machine learning & identity intelligence technology.
You will learn how to:
• Identify all PII across structured, unstructured, cloud & Big Data.
• Inventory PII by data subject & residency for GDPR.
• Measure data re-identifiability for pseudonymization.
• Uncover dark or uncatalogued data.
• Fix data quality and visualize PII data relationships.
• Apply data protection to discovered sensitive data.
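A hypothetical, heavily simplified sketch of rule-based PII discovery follows; the products described above combine patterns like these with machine learning and identity intelligence, so the names and patterns here are assumptions for illustration only:

```python
import re

# Minimal pattern catalog; real discovery tools go far beyond regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scan_text(text: str) -> dict:
    """Return an inventory of suspected PII values found in free text."""
    inventory = {}
    for label, pattern in PII_PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            inventory[label] = matches
    return inventory
```

Running the scanner over documents, logs, or exported table rows yields an inventory keyed by PII type, which can then feed the protection step.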
New regulations and the evolving cybersecurity technology landscape
Ulf Mattsson
As the cyber threat landscape continues to evolve, organizations worldwide are increasing their spend on cybersecurity technology. We are seeing a transition from third-party security providers to native cloud security services, and the challenge of securing enterprise data assets is increasing. What’s needed to control cyber risk and stay compliant in this evolving landscape?
We will discuss evolving industry standards, how to keep track of your data assets, how to protect your sensitive data, and how to maintain compliance with new regulations.
Data Virtualization for Accelerated Digital Transformation in Banking and Fin...
Denodo
Watch full webinar here: https://bit.ly/37jIyzf
Presented at FST Media Future of Financial Services, Sydney (Australia)
Watch this session on-demand to understand how data virtualization helps finance companies to:
- Modernise their data infrastructure by providing a virtual approach to accessing, managing, and delivering data
- Implement a centralised governance framework and roll out security measures across the enterprise data infrastructure
- Integrate data from multiple sources to create a 360-degree view into customers’ changing needs and behaviours
Tokenization on blockchain is a steady trend. It seems that everything is being tokenized on blockchain, from paintings, diamonds, and company stocks to real estate. In effect, we take an asset, tokenize it, and create a digital representation of it that lives on the blockchain, which guarantees that the ownership information is immutable.
Unfortunately, some problems need to be solved before we can successfully tokenize real-world assets on a blockchain. The main problem stems from the fact that, so far, no country has solid regulation for cryptocurrency. For example, what happens if a company that handles tokenization sells the property? The token holders have no legal rights to the property and thus are not protected by the law. Another problem is that this system reintroduces a degree of centralization, while the whole idea of blockchain, and especially smart contracts, is to create a trustless environment.
Tokenization is a method that converts a digital value into a digital token; it can also be used to convert rights to an asset into a digital token.
The tokenization system can be implemented local to the data that is tokenized or in a centralized model. We will discuss tokenization implementations that can provide scalability across hybrid cloud models. This session will position different data protection techniques, use cases for blockchain, and protecting blockchain.
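A minimal sketch of the centralized (vault-based) tokenization model mentioned above may help; the class and method names are assumptions for the example, not a product API:

```python
import secrets

class TokenVault:
    """Toy vault-based tokenizer: replaces a value with a random token of
    the same length and keeps the mapping in a protected lookup table."""

    def __init__(self):
        self._token_to_value = {}
        self._value_to_token = {}

    def _random_token(self, length: int) -> str:
        return "".join(secrets.choice("0123456789") for _ in range(length))

    def tokenize(self, value: str) -> str:
        if value in self._value_to_token:      # deterministic per value
            return self._value_to_token[value]
        token = self._random_token(len(value))
        while token in self._token_to_value:   # avoid collisions
            token = self._random_token(len(value))
        self._token_to_value[token] = value
        self._value_to_token[value] = token
        return token

    def detokenize(self, token: str) -> str:
        """Only the vault can map a token back to the original value."""
        return self._token_to_value[token]
```

Because the token has no mathematical relationship to the original value, systems that only handle tokens fall out of scope for many data protection requirements; scaling this model across hybrid clouds is exactly the deployment question the session addresses.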
Securing data today and in the future (Oracle NYC)
Ulf Mattsson
NYOUG - New York Oracle Users Group:
- Risks Associated with Cloud Computing
- Data Tokens in a Cloud Environment
- Data Tokenization at the Gateway Layer
- Data Tokenization at the Database Layer
- Risk Management and PCI
An extensive research survey on data integrity and deduplication towards priv...
IJECEIAES
Owing to the highly distributed nature of cloud storage systems, incorporating a higher degree of security for vulnerable data is a challenging task. Among the various security concerns, data privacy remains one of the unsolved problems in this regard. The prime reason is that existing approaches to data privacy do not offer data integrity and secure data deduplication at the same time, which is essential to ensure a higher degree of resistance against all forms of dynamic threats over cloud and internet systems. Data integrity and data deduplication are thus associated phenomena that influence data privacy. This manuscript discusses explicit research contributions toward data integrity, data privacy, and data deduplication. It also highlights potential open research issues, followed by a discussion of possible future directions for addressing the existing problems.
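A toy content-addressed store can illustrate how hashing provides deduplication and an integrity check at once. This is an illustration of the two concepts the survey connects, not the survey's own proposal:

```python
import hashlib

class DedupStore:
    """Content-addressed store: identical blocks are stored once,
    and the digest doubles as an integrity check on read."""

    def __init__(self):
        self._blocks = {}

    def put(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        self._blocks.setdefault(digest, data)  # dedup: store only once
        return digest

    def get(self, digest: str) -> bytes:
        data = self._blocks[digest]
        if hashlib.sha256(data).hexdigest() != digest:
            raise ValueError("integrity check failed")
        return data
```

The privacy tension the survey describes arises here too: deterministic hashing lets a server detect duplicates, but it also lets the server learn when two users hold the same data, which is why secure deduplication schemes add keyed or convergent encryption on top.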
Protecting data privacy in analytics and machine learning (ISACA)
Ulf Mattsson
In this session, we will discuss a range of new emerging technologies for privacy and confidentiality in machine learning and data analytics. We will discuss how to put these technologies to work for databases and other data sources.
When we think about developing AI responsibly, there are many different activities that we need to consider.
This session also discusses international standards and emerging privacy-enhanced computation techniques, secure multiparty computation, zero trust, cloud and trusted execution environments. We will discuss the “why, what, and how” of techniques for privacy preserving computing.
We will review how different industries are taking advantage of these privacy-preserving techniques. A retail company used secure multi-party computation to respect user privacy and specific regulations while gaining insights and protecting the organization’s IP. A healthcare organization uses secure data sharing to protect the privacy of individuals, and also stores and searches encrypted medical data in the cloud.
We will also review the benefits of secure data sharing for financial institutions, including a large bank that wanted to broaden access to its data lake without compromising data privacy, while preserving the data’s analytical quality for machine learning purposes.
Practical risk management for the multi-cloud
Ulf Mattsson
This session will take a practical approach to IT risk management and discuss multi-cloud, the Verizon Data Breach Investigations Report (DBIR), and how enterprises are losing ground in the fight against persistent cyber-attacks. We simply cannot catch the bad guys until it is too late, and this picture is not improving. Verizon reports concluded that less than 14% of breaches are detected by internal monitoring tools.
We will review the JPMorgan Chase data breach, where hackers were in the bank’s network for months undetected. Network configuration errors are inevitable, even at the largest banks: Capital One recently had a data breach in which a hacker gained access to 100 million credit card applications and accounts.
Viewers will also learn about:
- Macro trends in Cloud security and Micro trends in Cloud security
- Risks from Quantum Computing and when we should move to alternate forms of encryption
- A review of “kill chains” from Lockheed Martin in relation to APT and DDoS attacks
- Risk Management methods from ISACA and other organizations
Speaker: Ulf Mattsson, Head of Innovation, TokenEx
The past, present, and future of big data security
Ulf Mattsson
One of the biggest remaining concerns regarding Hadoop, perhaps second only to ROI, is security.
While Apache Hadoop and the craze around Big Data seem to have exploded into the market, there are still a lot more questions than answers about this new environment.
Hadoop is an environment with limited structure, high ingestion volume, massive scalability and redundancy, designed for access to a vast pool of multi-structured data. What’s been missing is new security tools to match.
Read more in this article by Ulf Mattsson, Protegrity CTO, originally published by Help Net Security’s (IN)SECURE Magazine.
Safeguarding customer and financial data in analytics and machine learning
Ulf Mattsson
Digital transformation and the opportunities to use data in analytics and machine learning are growing exponentially, but so too are the business and financial risks in data privacy. The increasing number of privacy incidents and data breaches is destroying brands and customer trust, and we will discuss how business prioritization can benefit from a finance-based data risk assessment (FinDRA).
More than 60 countries have introduced privacy laws, and by 2023, 65% of the world’s population will have its personal information covered under modern privacy regulations. We will discuss use cases in financial services that are finding a balance between new technology impact, regulatory compliance, and commercial business opportunity. Several privacy-preserving and privacy-enhancing techniques can provide practical security for data in use and data sharing, but none universally covers all use cases. We will discuss which tools we can use to mitigate business risks caused by security threats, data residency, and privacy issues. We will discuss how technologies like pseudonymization, anonymization, tokenization, encryption, masking, and privacy preservation in analytics and business intelligence are used in analytics and machine learning.
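As a simplified illustration of two of the techniques listed above, keyed pseudonymization and masking might be sketched as follows. The helper names are assumptions for the example, not a product API:

```python
import hashlib
import hmac

def pseudonymize(value: str, key: bytes) -> str:
    """Keyed pseudonym: the same input always maps to the same token,
    so analytics joins still work, but the mapping cannot be recomputed
    without the key (which must be stored apart from the data)."""
    return hmac.new(key, value.encode(), hashlib.sha256).hexdigest()[:16]

def mask_email(email: str) -> str:
    """Simple masking: keep the domain for aggregate analytics,
    hide the local part."""
    local, _, domain = email.partition("@")
    return local[0] + "***@" + domain
```

Pseudonymized columns remain joinable across datasets for machine learning, while masked fields retain just enough structure for reporting; neither is full anonymization, which is why GDPR still treats pseudonymized data as personal data.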
Organizations are increasingly concerned about data security when processing personal information in external environments, such as the cloud, and when sharing information. Data is spreading across hybrid IT infrastructure on-premises and multi-cloud services, and we will discuss how to enforce consistent and holistic data security and privacy policies. Increasing numbers of data security, privacy, and identity access management products are in use, but they do not integrate and do not share common policies; we will discuss use cases in financial services of different techniques to protect and manage data security and privacy.
The new EU-US Privacy Shield, covering transatlantic exchanges of personal data for commercial purposes, went into effect in July 2016. Although this is a critical issue, many companies are not aware of the implications it has for them. What steps do companies need to take when transferring data from Europe to the US?
A practical data privacy and security approach to ffiec, gdpr and ccpaUlf Mattsson
With sensitive data residing everywhere, organizations becoming more mobile, and the breach epidemic growing, the need for advanced data privacy and security solutions has become even more critical. French regulators cited GDPR in fining Google $57 million and the U.K.'s Information Commissioner's Office is seeking a $230 million fine against British Airways and seeking $124 million from Marriott. Facebook is setting aside $3 billion to cover the costs of a privacy investigation launched by US regulators.
This session will take a practical approach to address guidance and standards from the Federal Financial Institutions Examination Council (FFIEC), EU GDPR, California CCPA, NIST Risk Management Framework, COBIT and the ISO 31000 Risk management Principles and Guidelines.
Learn how new data privacy and security techniques can help with compliance and data breaches, on-premises, and in public and private clouds.
Evolving regulations are changing the way we think about tools and technologyUlf Mattsson
Discover the latest in RegTech and stay up-to-date on compliance tools and best practices.
The move to digital has meant that many organizations have had to rethink legacy systems.
They need to put the customer first, focus on the Customer Experience and Digital Experience Platforms.
They also need to understand the latest in RegTech and solutions for hybrid cloud.
We will discuss Regtech for the financial industry and related technologies for compliance.
We will discuss new International Standards, tools and best practices for financial institutions including PCI v4, FFIEC, NACHA, NIST, GDPR and CCPA.
We will discuss related technologies for Data Security and Privacy, including data de-identification, encryption, tokenization and the new API Economy.
Future data security ‘will come from several sources’John Davis
The process of digitisation will become more all-encompassing, but will create new data security needs that can only be met by multiple suppliers, a report has said. - See more at: http://www.storetec.net/news-blog/future-data-security-will-come-from-several-sources
Bridging the gap between privacy and big data Ulf Mattsson - Protegrity Sep 10Ulf Mattsson
Big Data systems like Hadoop provide analysis of massive amounts of data to open up “Big Answers”, identifying trends and new business opportunities. The massive scalability and economical storage also provides the opportunity to monetize collected data by selling it to a third party.
However, the biggest issue with Big Data remains security. Like any other system, the data must be protected according to regulatory mandates, such as PCI, HIPAA and Privacy laws; from both external and internal threats – including privileged users.
So how can we bridge the gap between access to vast amounts of data, and security of more and more types of data, in this rapidly evolving new environment?
In this webinar, Ulf Mattsson explores the issues and provide solutions to bring together data insight and security in Big Data. With deep knowledge in advanced data security technologies, Ulf explains the best practices in order to safely unlock the power of Big Data.
Book about
Quantum Computing Blockchain Reversable Protection Privacy by Design, Applications and APIs Privacy, Risks, and Threats Machine Learning and Analytics Non-Reversable Protection International Unicode Secure Multi-party Computing Computing on Encrypted Data Internet of Things II. Data Confidentiality and Integrity Standards and Regulations IV. Applications VI. Summary Best Practices, Roadmap, and Vision Trends, Innovation, and Evolution Hybrid Cloud , CASB and SASE Appendix A B C D E I. Introduction and Vision Section Access Control Zero Trust Architecture Trusted Execution Environments III. Users and Authorization Governance, Guidance, and Frameworks V. Platforms Data User App Innovation 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 Chapter Discovery and Search Glossary
Emerging application and data protection for multi cloudUlf Mattsson
Emerging Application and Data Protection for Multi-Cloud
Personal data privacy will be the most prominent issue affecting how businesses gather, store, process, and disclose data in public cloud. Businesses have been inundated with information on what recent privacy laws like GDPR and CCPA require, but many are still trying to figure out how to comply with them on a practical level. Many companies are focusing on data privacy from the legal and security side, which are foundational, but are missing the focus on data. The good news is that these data privacy regulations compel businesses to get a handle on personal data - how they get it, where they get it from, which systems process it, where it goes internally and externally, etc. In other words, the new norms of data privacy require proactive data management, which enables organizations to extract real business value from their data, improve the customer experience, streamline internal processes, and better understand their customers. The new Verizon Data Breach Investigations Report (DBIR) provides perspectives on how Criminals simply shift their focus and adapt their tactics to locate and steal the data they find to be of most value. This session will discuss Emerging Application and Data Protection for Multi-cloud and review Differential privacy, Tokenization, Homomorphic encryption, and Privacy-preserving computation.
Advanced PII / PI data discovery and data protection (Ulf Mattsson)
We will discuss using advanced PII/PI discovery to find and inventory all personal data at enterprise scale.
Learn about new machine learning & identity intelligence technology.
You will learn how to:
• Identify all PII across structured, unstructured, cloud, and Big Data sources.
• Inventory PII by data subject and residency for GDPR.
• Measure data re-identifiability for pseudonymization.
• Uncover dark or uncatalogued data.
• Fix data quality and visualize PII data relationships.
• Apply data protection to discovered sensitive data.
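The re-identifiability measurement mentioned in the list above can be sketched as a k-anonymity check: group records by their quasi-identifiers and take the smallest group size; a low k means individuals are easy to single out. A minimal illustration (the field names and records are hypothetical, not from any product):

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Smallest group size when records are grouped by quasi-identifiers.
    A low k means individual records are easy to re-identify."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

records = [
    {"zip": "10001", "age_band": "30-39", "diagnosis": "A"},
    {"zip": "10001", "age_band": "30-39", "diagnosis": "B"},
    {"zip": "10002", "age_band": "40-49", "diagnosis": "C"},
]
k_anonymity(records, ["zip", "age_band"])  # 1: the lone 10002 record
```

A dataset is usually considered k-anonymous only when every group has at least k members, so the single record in zip 10002 drags the whole dataset down to k = 1.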
New regulations and the evolving cybersecurity technology landscape (Ulf Mattsson)
As the cyber threat landscape continues to evolve, organizations worldwide are increasing their spend on cybersecurity technology. We are seeing a transition from third-party security providers to native cloud security services. The challenge of securing enterprise data assets is increasing. What is needed to control cyber risk and stay compliant in this evolving landscape?
We will discuss evolving industry standards, how to keep track of your data assets, protect your sensitive data and maintain compliance to new regulations.
Data Virtualization for Accelerated Digital Transformation in Banking and Fin... (Denodo)
Watch full webinar here: https://bit.ly/37jIyzf
Presented at FST Media Future of Financial Services, Sydney (Australia)
Watch this session on-demand to understand how data virtualization helps finance companies to:
- Modernise their data infrastructure by providing a virtual approach to accessing, managing, and delivering data
- Implement a centralised governance framework and roll out security measures across the enterprise data infrastructure
- Integrate data from multiple sources to create a 360-degree view into customers’ changing needs and behaviours
Tokenization on blockchain is a steady trend. It seems that everything is being tokenized on blockchain, from paintings, diamonds, and company stocks to real estate. Thus, we take an asset, tokenize it, and create a digital representation of it that lives on the blockchain. The blockchain guarantees that the ownership information is immutable.
Unfortunately, some problems need to be solved before we can successfully tokenize real-world assets on blockchain. The main problem stems from the fact that, so far, no country has solid regulation for cryptocurrency. For example, what happens if a company that handles tokenization sells the property? The token holders have no legal rights to the property and thus are not protected by the law. Another problem is that this system reintroduces a degree of centralization, while the whole idea of blockchain, and especially smart contracts, is to create a trustless environment.
Tokenization is a method that converts a digital value into a digital token. Tokenization can also be used to convert rights to an asset into a digital token.
The tokenization system can be implemented locally, close to the data that is tokenized, or in a centralized model. We will discuss tokenization implementations that can provide scalability across hybrid cloud models. This session will position different data protection techniques, use cases for blockchain, and approaches for protecting blockchain itself.
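The vaulted model described above can be sketched in a few lines: a vault maps each sensitive value to a random token that preserves length and digit format, so downstream systems keep working while the real value never leaves the vault. This is a toy illustration under stated assumptions (in-memory storage, digit-only values), not any vendor's implementation:

```python
import secrets
import string

class TokenVault:
    """Toy vaulted tokenizer: random, length-preserving digit tokens."""
    def __init__(self):
        self._token_to_value = {}
        self._value_to_token = {}

    def tokenize(self, value: str) -> str:
        if value in self._value_to_token:           # stable token per value
            return self._value_to_token[value]
        token = "".join(secrets.choice(string.digits) for _ in value)
        while token in self._token_to_value:        # avoid collisions
            token = "".join(secrets.choice(string.digits) for _ in value)
        self._token_to_value[token] = value
        self._value_to_token[value] = token
        return token

    def detokenize(self, token: str) -> str:
        return self._token_to_value[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")  # same length, all digits
```

Because the token is random rather than derived from the value, it is non-reversible without the vault, which is the core distinction between tokenization and encryption.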
Securing data today and in the future - Oracle NYC (Ulf Mattsson)
NYOUG - New York Oracle Users Group:
- Risks Associated with Cloud Computing
- Data Tokens in a Cloud Environment
- Data Tokenization at the Gateway Layer
- Data Tokenization at the Database Layer
- Risk Management and PCI
An extensive research survey on data integrity and deduplication towards priv... (IJECEIAES)
Owing to the highly distributed nature of the cloud storage system, incorporating a higher degree of security for vulnerable data is a challenging task. Apart from various security concerns, data privacy is still one of the unsolved problems in this regard. The prime reason is that existing approaches to data privacy don't offer data integrity and a secure data deduplication process at the same time, which is essential to ensure a higher degree of resistance against all forms of dynamic threats over cloud and internet systems. Data integrity and data deduplication are thus associated phenomena that influence data privacy. Therefore, this manuscript discusses the explicit research contributions toward data integrity, data privacy, and data deduplication. The manuscript also highlights the potential open research issues, followed by a discussion of possible future directions of work toward addressing the existing problems.
Protecting data privacy in analytics and machine learning - ISACA (Ulf Mattsson)
In this session, we will discuss a range of new emerging technologies for privacy and confidentiality in machine learning and data analytics. We will discuss how to put these technologies to work for databases and other data sources.
When we think about developing AI responsibly, there are many different activities that we need to consider.
This session also discusses international standards and emerging privacy-enhanced computation techniques, secure multiparty computation, zero trust, cloud and trusted execution environments. We will discuss the “why, what, and how” of techniques for privacy preserving computing.
We will review how different industries are taking advantage of these privacy-preserving techniques. A retail company used secure multi-party computation to respect user privacy and specific regulations while allowing the retailer to gain insights and protecting the organization's IP. A healthcare organization uses secure data sharing to protect the privacy of individuals, and it also stores and searches encrypted medical data in the cloud.
We will also review the benefits of secure data sharing for financial institutions, including a large bank that wanted to broaden access to its data lake without compromising data privacy, while preserving the data's analytical quality for machine learning purposes.
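The bank and retail use cases above rest on secure multi-party computation, whose simplest building block is additive secret sharing: each party's input is split into random shares that sum to the secret, so parties can add shares locally and only the total is ever reconstructed. A minimal sketch (not a production protocol, which would add secure channels and protections against malicious parties):

```python
import random

Q = 2 ** 61 - 1  # all arithmetic is done modulo a large public constant

def share(secret, n_parties=3):
    """Split a secret into n additive shares; any n-1 shares reveal nothing."""
    shares = [random.randrange(Q) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    return sum(shares) % Q

# Two banks share private amounts; each party adds its shares locally,
# so the total is computed without any party seeing either input.
a_shares, b_shares = share(1000), share(2345)
sum_shares = [(x + y) % Q for x, y in zip(a_shares, b_shares)]
total = reconstruct(sum_shares)  # 3345
```

The privacy guarantee comes from the sharing itself: each individual share is uniformly random, so it carries no information about the input on its own.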
Practical risk management for the multi cloud (Ulf Mattsson)
This session will take a practical approach to IT risk management and discuss multi-cloud, the Verizon Data Breach Investigations Report (DBIR), and how enterprises are losing ground in the fight against persistent cyber-attacks. We simply cannot catch the bad guys until it is too late, and this picture is not improving. Verizon reports concluded that less than 14% of breaches are detected by internal monitoring tools.
We will review the JPMorgan Chase data breach, where hackers were in the bank's network for months undetected. Network configuration errors are inevitable, even at the largest banks, such as Capital One, which recently had a data breach in which a hacker gained access to 100 million credit card applications and accounts.
Viewers will also learn about:
- Macro trends in Cloud security and Micro trends in Cloud security
- Risks from Quantum Computing and when we should move to alternate forms of encryption
- Review of “Kill Chains” from Lockheed Martin in relation to APT and DDoS attacks
- Risk Management methods from ISACA and other organizations
Speaker: Ulf Mattsson, Head of Innovation, TokenEx
The past, present, and future of big data security (Ulf Mattsson)
One of the biggest remaining concerns regarding Hadoop, perhaps second only to ROI, is security.
While Apache Hadoop and the craze around Big Data seem to have exploded out into the market, there are still a lot more questions than answers about this new environment.
Hadoop is an environment with limited structure, high ingestion volume, and massive scalability and redundancy, designed for access to a vast pool of multi-structured data. What has been missing are new security tools to match.
Read more in this article by Ulf Mattsson, Protegrity CTO, originally published by Help Net Security’s (IN)SECURE Magazine.
Safeguarding customer and financial data in analytics and machine learning (Ulf Mattsson)
Digital transformation and the opportunities to use data in analytics and machine learning are growing exponentially, but so too are the business and financial risks in data privacy. The increasing number of privacy incidents and data breaches is destroying brands and customer trust, and we will discuss how business prioritization can benefit from a finance-based data risk assessment (FinDRA).
More than 60 countries have introduced privacy laws, and by 2023, 65% of the world's population will have its personal information covered under modern privacy regulations. We will discuss use cases in financial services that are finding a balance between new technology impact, regulatory compliance, and commercial business opportunity. Several privacy-preserving and privacy-enhancing techniques can provide practical security for data in use and data sharing, but none universally covers all use cases. We will discuss which tools we can use to mitigate business risks caused by security threats, data residency, and privacy issues, and how technologies like pseudonymization, anonymization, tokenization, encryption, masking, and privacy preservation are used in analytics, business intelligence, and machine learning.
Organizations are increasingly concerned about data security when processing personal information in external environments, such as the cloud, and when sharing information. Data is spreading across hybrid IT infrastructure, on-premises and multi-cloud services, and we will discuss how to enforce consistent and holistic data security and privacy policies. Increasing numbers of data security, privacy, and identity access management products are in use, but they do not integrate and do not share common policies, and we will discuss use cases in financial services of different techniques to protect and manage data security and privacy.
The new EU-US Privacy Shield, covering transatlantic exchanges of personal data for commercial purposes, went into effect in July 2016. Although this is a critical issue, many companies are not aware of the implications it has for them. What steps do companies need to take when transferring data from Europe to the US?
Data Privacy vs. National Security post Safe Harbor (Gayle Gorvett)
Recent Developments in Transatlantic Data Privacy regulation including adoption of Privacy Shield, GDPR and increasing requests for data access for National Security
The Evolution of Data Privacy - A Symantec Information Security Perspective o... (Symantec)
The European Union’s proposed General Data Protection Regulation (GDPR) has left even the most informed confused. This new regulation is designed to update the current legislation which was drafted in a time that was in technology terms, prehistoric.
The Data Protection Directive, drafted back in 1995, harks back to a time when data processing was more about filing
cabinets than data rack enclosures. It’s time to evolve.
Data Security Breach – knowing the risks and protecting your business (Eversheds Sutherland)
The impact of a breach in data security can be far reaching, with the risk of reputation damage affecting companies of any size. We will consider how to manage a security breach, its wider impact and building an effective cyber security for your infrastructure.
Is it legal or illegal to use American cloud services in Europe?
A presentation by Patricia Ayojedi about the controversy between the USA and Europe regarding cloud business.
The Evolution of Data Privacy: 3 things you didn’t know (Symantec)
The European Union’s proposed General Data Protection Regulation (GDPR) has left even the most informed confused. This new regulation has been designed to update the current directive which was drafted in a time that was in technology terms, prehistoric. It’s time to evolve.
PECB Webinar: The End of Safe Harbour! What happens Next? (PECB)
The webinar covers:
• What Safe Harbour is, and how companies relied on it
• How the end of it will affect US firms
• What will happen next
• How companies will react
• The implications of this act
• What is the solution to this
Presenter:
This session was hosted by Mr. Graeme Parker, Managing Director of Parker Solutions Group, a PECB representative in the UK. Mr. Parker has more than 20 years of experience in information security and data privacy, and was also involved with many companies that relied on Safe Harbour.
Link of the recorded session published on YouTube: https://youtu.be/cbPUTVtxem0
Fully understand how GDPR affects the lives of millions of EU citizens by keeping in mind the 10 simple facts presented by Dr. Karsten Kinast.
The presentation gives a short glimpse into the motivation for GDPR, the key changes it brings, and the ongoing information-lifecycle compliance it presumes.
GDPR and NIS Compliance - How HyTrust Can HelpJason Lackey
GDPR (EU 2016/679) and NIS are intended to strengthen data protection for people in the EU, replacing Directive 95/46/EC. Learn how HyTrust can help with compliance.
Similar to ISACA Houston - How to de-classify data and rethink transfer of data between us and eu
Jun 29 new privacy technologies for unicode and international data standards ... (Ulf Mattsson)
Protecting the increasing use of international Unicode characters is required by a growing number of privacy laws in many countries and by general privacy concerns with private data. Current approaches to protecting international Unicode characters increase the size and change the format of the data. This breaks many applications and slows down business operations. The current approaches also randomly return data in new and unexpected languages. A new approach with significantly higher performance and a small, customizable memory footprint can fit on small IoT devices.
We will discuss new approaches for achieving portability, security, performance, a small memory footprint, and language preservation in the privacy protection of Unicode data. These new approaches provide granular protection for all Unicode languages and customizable alphabets, along with byte-length-preserving protection of privacy-protected characters.
Old Approaches
Major Issues
Protecting the increasing use of international Unicode characters is required by a growing number of privacy laws in many countries and by general privacy concerns with private data.
Old approaches to protecting international Unicode characters typically increase the size and change the format of the data.
This breaks many applications and slows down business operations. This is an example of an old approach that also randomly returns data in new and unexpected languages.
qubit-conference-new-york-2021: https://nyc.qubitconference.com/
Cybersecurity: Get ready for the unpredictable
Create a sound cybersecurity strategy based on the right technology & budgetary insights, proven practices, and processes for SMEs.
This virtual event will equip CxOs and cybersecurity teams with the right intel to create a sound cybersecurity strategy based on the right technology & budgetary insights, proven practices, and processes specially tailored for SMEs.
Find out how to bring the smart design of cybersecurity architecture and processes, what to automate & how to properly set up internal and external ownership.
The proven cybersecurity strategy fit for your environment can go a long way. Know what to do in-house, what to outsource, set up your budgets right, and get help from the right cybersecurity specialists.
Secure analytics and machine learning in cloud use cases (Ulf Mattsson)
Table of Contents:
Secure Analytics and Machine Learning in Cloud
  Use case #1 in Financial Industry
    Data Flow
    The approach can be used for other use cases
Homomorphic Encryption for Secure Machine Learning in Cloud
  Evolving Homomorphic Encryption
    Performance Examples – HE, RSA and AES
    Performance Examples – FHE, NTRU, ECC, RSA and AES
    Some popular HE schemes
    Examples of HE Libraries used by IBM, Duality, and Microsoft
Fast Homomorphic Encryption for Secure Analytics in Cloud
  Use case #2 in Health Care
    Provable security for untrusted environments
    Comparison to multiparty computation and trusted execution environments
    Time and memory requirements of HE
Managing Data Security in Hybrid Cloud
  Data Security Policy and Zero Trust Architecture
  The future of encryption will change in the Post-Quantum Era
Managing Data Security in a Hybrid World
Evolving Privacy Regulations
  New Ruling in GDPR under "Schrems II"
  The new California Privacy Rights Act (CPRA)
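The additively homomorphic property referenced in the table of contents can be demonstrated with a toy Paillier keypair: multiplying two ciphertexts yields a ciphertext of the sum of the plaintexts. This is a sketch with deliberately tiny primes for illustration; real deployments use 2048-bit-plus moduli and a vetted library rather than hand-rolled code:

```python
import math
import random

# Toy Paillier keypair: deliberately tiny primes for illustration only
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)        # inverse of L(g^lam mod n^2)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Multiplying ciphertexts adds the underlying plaintexts
a, b = 20, 22
c_sum = (encrypt(a) * encrypt(b)) % n2
decrypt(c_sum)  # == 42
```

Paillier is only additively homomorphic; the fully homomorphic schemes (FHE) compared in the performance examples above also support multiplication on ciphertexts, at a much higher computational cost.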
Evolving international privacy regulations and cross border data transfer - g... (Ulf Mattsson)
We will discuss the evolving international privacy regulations. Cross-border data transfer under GDPR is now governed by an EU court's "Schrems II" ruling, which defined what is required. This ruling can be far-reaching for many businesses.
Data encryption and tokenization for international unicode (Ulf Mattsson)
Unicode is an information technology standard for the consistent encoding, representation, and handling of text expressed in most of the world's writing systems. The standard is maintained by the Unicode Consortium. As of March 2020, Unicode 13.0 has a total of 143,859 characters (143,696 graphic characters and 163 format characters), covering 154 modern and historic scripts, as well as multiple symbol sets and emoji. The character repertoire of the Unicode Standard is synchronized with ISO/IEC 10646, each being code-for-code identical with the other.
The Unicode Standard consists of a set of code charts for visual reference, an encoding method and set of standard character encodings, a set of reference data files, and a number of related items, such as character properties, rules for normalization, decomposition, collation, rendering, and bidirectional text display order (for the correct display of text containing both right-to-left scripts, such as Arabic and Hebrew, and left-to-right scripts). Unicode's success at unifying character sets has led to its widespread and predominant use in the internationalization and localization of computer software. The standard has been implemented in many recent technologies, including modern operating systems, XML, Java (and other programming languages), and the .NET Framework.
Unicode can be implemented by different character encodings. The Unicode standard defines the Unicode Transformation Formats (UTF) UTF-8, UTF-16, and UTF-32, and several other encodings. The most commonly used encodings are UTF-8, UTF-16, and UCS-2 (a precursor of UTF-16 without full support for Unicode).
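The practical difference between these encodings is how many bytes each character occupies, which is exactly the detail that matters for the length-preserving protection discussed elsewhere in this listing. A quick illustration in Python:

```python
# Byte lengths of one character in each Unicode Transformation Format
for ch in ["A", "é", "你", "😀"]:
    sizes = {enc: len(ch.encode(enc)) for enc in ("utf-8", "utf-16-be", "utf-32-be")}
    print(ch, sizes)
# "A" is 1 byte in UTF-8; "你" takes 3 bytes in UTF-8 but 2 in UTF-16;
# "😀" needs 4 bytes in UTF-8 and a surrogate pair (4 bytes) in UTF-16;
# UTF-32 is always 4 bytes per code point.
```

This variable-width behavior is why naive protection of Unicode text can change data sizes and break fixed-length database columns.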
The future of data security and blockchain (Ulf Mattsson)
Discussion of Post-Quantum Cryptography and other technologies:
Data Security Techniques
Secure Multi-Party Computation (SMPC)
Homomorphic encryption (HE)
Differential Privacy (DP) and K-Anonymity
Pseudonymization and Anonymization
Synthetic Data
Zero trust architecture (ZTA)
Zero-knowledge proofs (ZKP)
Private Set Intersection (PSI)
Trusted execution environments (TEE)
Post-Quantum Cryptography
Blockchain
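Of the techniques listed above, differential privacy is the easiest to sketch: add calibrated Laplace noise to an aggregate query so that any one individual's presence changes the answer only within an epsilon privacy budget. A minimal illustration under stated assumptions (a counting query with sensitivity 1), not a production DP library:

```python
import math
import random

def laplace_noise(scale, rng=random):
    # Inverse-CDF sampling from the Laplace(0, scale) distribution
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count, epsilon, rng=random):
    # A counting query has sensitivity 1, so the noise scale is 1/epsilon
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(42)
noisy = dp_count(1000, epsilon=0.5, rng=rng)  # near 1000, rarely exact
```

Smaller epsilon means more noise and stronger privacy; the released value stays useful in aggregate because the noise is zero-mean.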
Regulations and Standards in Data Privacy
GDPR and evolving international privacy regulations (Ulf Mattsson)
Convergence of data privacy principles, standards and regulations
General Data Protection Regulation (GDPR)
GDPR and California Consumer Privacy Act (CCPA)
What role technologies play in compliance
Use Cases
Protecting data privacy in analytics and machine learning ISACA London UK (Ulf Mattsson)
ISACA London Chapter webinar, Feb 16th 2021
Topic: “Protecting Data Privacy in Analytics and Machine Learning”
Abstract:
In this session, we will discuss a range of new emerging technologies for privacy and confidentiality in machine learning and data analytics. We will discuss how to put these technologies to work for databases and other data sources.
When we think about developing AI responsibly, there are many different activities that we need to consider.
This session also discusses international standards and emerging privacy-enhanced computation techniques, secure multiparty computation, zero trust, cloud and trusted execution environments. We will discuss the “why, what, and how” of techniques for privacy preserving computing.
We will review how different industries are taking advantage of these privacy-preserving techniques. A retail company used secure multi-party computation to respect user privacy and specific regulations while allowing the retailer to gain insights and protecting the organization's IP. A healthcare organization uses secure data sharing to protect the privacy of individuals, and it also stores and searches encrypted medical data in the cloud.
We will also review the benefits of secure data sharing for financial institutions, including a large bank that wanted to broaden access to its data lake without compromising data privacy, while preserving the data's analytical quality for machine learning purposes.
New opportunities and business risks with evolving privacy regulations (Ulf Mattsson)
In the shadow of the global pandemic and the associated economic downturn, organizations are focused on cost optimization, which often leads to impulsive decisions to deprioritize compliance along with other non-revenue programs.
Regulators have evolved to adapt to the notable increase in data subject complaints and are getting more serious about organizations that don’t properly protect consumer data. Marriott was hit with a $124 million fine, while Equifax agreed to pay a minimum of $575 million for its breach after the US Federal Trade Commission, the US Consumer Financial Protection Bureau (CFPB), and all 50 U.S. states and territories sued over the company’s failure to take “reasonable steps” to secure its sensitive personal data.
Privacy and data protection are enforced by a growing number of regulations around the world and people are actively demanding privacy protection — and legislators are reacting. More than 60 countries have introduced privacy laws in response to citizens’ cry for transparency and control. By 2023, 65% of the world’s population will have its personal information covered under modern privacy regulations, up from 10% today, according to Gartner. There is a convergence of data privacy principles, standards and regulations on a common set of fundamental principles.
The opportunities to use data are growing exponentially, but so too are the business and financial risks as the number of data protection and privacy regulations grows internationally.
Join this webinar to learn more about:
- Trends in modern privacy regulations
- The impact on organizations to protect and use sensitive data
- Data privacy principles
- The impact of General Data Protection Regulation (GDPR) and data transfer between US and EU
- The evolving CCPA, the new PCI DSS version 4 and new international data privacy laws or regulations
- Data privacy best practices, use cases and how to control sensitive personal data throughout the data life cycle
What is tokenization in blockchain - BCS London (Ulf Mattsson)
BCS North London Branch in association with Central London Branch webinar (by GoToWebinar). Date: 2nd December 2020. Time: 18.00 to 19.30. Event title: Blockchain tokenization, “What is tokenization in Blockchain?”
Agenda
Blockchain
What is Blockchain?
Use cases, trends and risks
Vendors and platforms
Data protection techniques and scalability
Tokenization
Digital business
Convert a digital value into a digital token
Local and central models
Cloud
Tokenization in Hybrid cloud
Nov 2 security for blockchain and analytics (Ulf Mattsson)
Blockchain
- What is Blockchain?
- Blockchain trends
Emerging data protection techniques
- Secure multiparty computation
- Trusted execution environments
- Use cases for analytics
- Industry Standards
Tokenization
- Convert a digital value into a digital token
- Tokenization local or in a centralized model
- Tokenization and scalability
Cloud
- Analytics in Hybrid cloud
How to protect privacy sensitive data that is collected to control the corona... (Ulf Mattsson)
In Singapore, the Government launched an app using short-distance Bluetooth signals to connect one phone using the app with another user who is close by. It stores detailed records on a user's phone for 21 days, and the data can be decrypted if there is a public health risk related to an individual's movements.
China used a similar method to track a person's health status and to control movement in cities with high numbers of coronavirus cases. Individuals had to use the app and share their status to be able to access public transportation.
The keys to addressing privacy concerns about high-tech surveillance by the state are de-identifying the data and giving individuals control over their own data. Personal details that may reveal your identity, such as a user's name, should not be collected, or should be protected with access granted only for specific health purposes, and data should be deleted after its specific use is no longer needed.
We will discuss how to protect privacy sensitive data that is collected to control the coronavirus outbreak.
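De-identification of the kind described above can be sketched with keyed pseudonymization: a stable pseudonym replaces the identifier, so records can still be linked for health purposes, but only the authority holding the key could tie them back to a person. A minimal illustration (a real system adds key management, rotation, and deletion schedules; the identifier shown is hypothetical):

```python
import hashlib
import hmac
import secrets

SECRET_KEY = secrets.token_bytes(32)  # held only by the health authority

def pseudonymize(user_id: str) -> str:
    """Stable pseudonym: same input gives the same token, but the token
    cannot be linked back to the identity without SECRET_KEY."""
    digest = hmac.new(SECRET_KEY, user_id.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

pseudonymize("alice@example.com")  # a 16-hex-digit token, different per key
```

Using an HMAC rather than a plain hash matters here: without the key, an attacker could re-hash known identifiers and match them against the pseudonyms.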
Generating a custom Ruby SDK for your web service or Rails API using Smithy (g2nightmarescribd)
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK from a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom, feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Smart TV Buyer Insights Survey 2024 by 91mobiles (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We also held a lovely workshop with the participants, trying to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
Elevating Tactical DDD Patterns Through Object Calisthenics (Dorra BARTAGUIZ)
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
ISACA Houston - How to De-classify Data and Rethink Transfer of Data between US and EU
1. 1
datafloq
How to De-classify Data and Rethink Transfer of Data between US and EU
Ulf Mattsson
Chief Security Strategist
www.Protegrity.com
2. 2
Ulf Mattsson
• Chief Security Strategist at Protegrity, previously Head of Innovation at TokenEx and Chief Technology Officer at Atlantic BT, Compliance Engineering, and IT Architect at IBM
• Products and Services: Data Encryption, Tokenization, Data Discovery, Cloud Access Security Brokers (CASB), Web Application Firewalls (WAF), Robotics, and Applications; Security Operation Center (SOC), Managed Security Services (MSSP); Tokenization Management and Security; Cloud Management and Security
• Inventor of more than 70 issued US Patents; developed Industry Standards with ANSI X9 and PCI SSC
• Payment Card Industry (PCI) Security Standards Council (SSC):
  1. Tokenization Task Force
  2. Encryption Task Force, Point to Point Encryption Task Force
  3. Risk Assessment SIG
  4. eCommerce SIG
  5. Cloud SIG, Virtualization SIG
  6. Pre-Authorization SIG, Scoping SIG Working Group
Dec 2019, May 2020, May 2020
3. 3
Agenda
1. Privacy Shield and Schrems II
2. When GDPR applies to data
3. Re-identification attacks
4. Pseudonymization
• When to use pseudonymization or anonymization
• Compliance aspects
• Trans-border communication
• Best practices
• A framework
5. International privacy standards
6. Data de-classification process and workflow
7. Privacy protection of personal health information
4. 4
How have the following data privacy regulations impacted your organization?
Source: FTI Consulting, 2020, an independent global business advisory firm. More than 500 leaders of large-sized private sector companies, based in the U.S.
5. 5
Which of the following aspects of data privacy are you particularly concerned about?
Source: FTI Consulting - Corporate Data Privacy Today, 2020
6. 6
http://dataprotection.link/Zn1Uk#https://www.wsj.com/articles/coronavirus-paves-way-for-new-age-of-digital-surveillance-11586963028
American officials are drawing cellphone location data from mobile advertising firms to track the presence of crowds—but
not individuals. Apple Inc. and Alphabet Inc.’s Google recently announced plans to launch a voluntary app that health officials
can use to reverse-engineer sickened patients’ recent whereabouts—provided they agree to provide such information.
European nations monitor citizen
movement by tapping
telecommunications data that they say
conceals individuals’ identities.
The extent of tracking hinges on a series of tough choices:
• Make it voluntary or mandatory?
• Collect personal or anonymized data?
• Disclose information publicly or privately?
In Western Australia, lawmakers approved a bill last month to install surveillance gadgets in people’s homes to monitor those
placed under quarantine. Authorities in Hong Kong and India are using geofencing that draws virtual fences around
quarantine zones. They monitor digital signals from smartphone or wristbands to deter rule breakers and nab offenders, who
can be sent to jail. Japan’s most popular messaging app beams health-status questions to its users on behalf of the
government.
8. 8
Privacyshield.gov
Privacy Shield Program Overview
The EU-U.S. and Swiss-U.S. Privacy Shield Frameworks were designed by the U.S. Department of Commerce, and the
European Commission and Swiss Administration, respectively, to provide companies on both sides of the Atlantic with a
mechanism to comply with data protection requirements when transferring personal data from the European Union and
Switzerland to the United States in support of transatlantic commerce.
On July 12, 2016, the European Commission deemed the EU-U.S. Privacy Shield Framework adequate to enable data
transfers under EU law (see the adequacy determination).
On January 12, 2017, the Swiss Government announced the approval of the Swiss-U.S. Privacy Shield Framework as a valid
legal mechanism to comply with Swiss requirements when transferring personal data from Switzerland to the United States.
See the statements from the Swiss Federal Council and Swiss Federal Data Protection and Information Commissioner.
On July 16, 2020, the Court of Justice of the European Union issued a judgment declaring as “invalid” the European
Commission’s Decision (EU) 2016/1250 of 12 July 2016 on the adequacy of the protection provided by the EU-U.S. Privacy
Shield. As a result of that decision, the EU-U.S. Privacy Shield Framework is no longer a valid mechanism to comply with EU
data protection requirements when transferring personal data from the European Union to the United States.
This decision does not relieve participants in the EU-U.S. Privacy Shield of their obligations under the EU-U.S. Privacy
Shield Framework.
9. 9
Privacy Shield safeguards: Encryption
• The CJEU reaffirmed the validity of SCCs* but stated that companies must verify, on a case-by-case basis,
whether the law in the recipient country ensures adequate protection, under EU law, for personal data
transferred under SCCs and, where it doesn’t, that companies must provide additional safeguards or suspend
transfers.
• The ruling placed the same requirement on EU data protection authorities to suspend such transfers on a
case-by-case basis where equivalent protection cannot be ensured.
• Privacy professionals may need to consider whether relevant surveillance programs and authorities apply in
particular contexts. If they do, they could then assess whether those authorities include proportional
limitations in the given context, as well as whether effective judicial remedies exist.
• Alternatively, they might consider ways to limit the context itself through additional safeguards. Encryption,
for instance, might be a consideration.
https://iapp.org/news/a/the-schrems-ii-decision-eu-us-data-transfers-in-question/
*: Standard Contractual Clauses (SCC). Standard contractual clauses for data transfers between EU and non-EU countries.
10. 10
After Privacy Shield
Focus on five main areas to protect data privacy:
1. Accessible Data: It is critical that organizations be able to access and blend data from many different file types to have an
integrated view and understanding of what personal data they hold.
2. Identifying Data: No matter where personally identifiable information (PII) resides, many organizations rely on technology
capabilities like data filters, sampling techniques and sophisticated algorithms that can identify and extract personal data
from structured and unstructured data sources.
3. Proactive Governance: Organizations need to be able to enforce governance policies, monitor data quality and manage
business terms across the organization. They must also be able to assign owners to terms and link them to policies or
technical assets like reports or data sources. This can be accomplished with data quality, metadata management and
information cataloging technologies.
4. Ongoing Protection: For ongoing protection, role-based data masking and encryption technologies can secure sensitive
information, as well as dynamically blend data without moving it. This helps to minimize exposure of sensitive data.
5. Audits and Reviews: Technology that provides interactive reports to identify the users, files, data sources and types of PII
detected is essential. Audits should show who has accessed PII data and how it is being protected across the business.
https://www.cmswire.com/information-management/enterprise-data-strategies-in-the-aftermath-of-the-us-privacy-shield-defeat/
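Role-based data masking, mentioned under "Ongoing Protection" above, might look like the following minimal Python sketch. The roles, field names, and masking rules are hypothetical illustrations, not the API of any product named in this deck:

```python
# Minimal sketch of role-based dynamic data masking: each role gets a view
# of the record with sensitive fields transformed by per-field rules.

def mask_ssn(value: str) -> str:
    """Show only the last four digits of a US SSN."""
    return "***-**-" + value[-4:]

def mask_email(value: str) -> str:
    """Keep the domain, hide the local part except its first character."""
    local, _, domain = value.partition("@")
    return local[0] + "***@" + domain

# Masking policy: which fields each role sees masked (empty = clear view).
POLICY = {
    "analyst": {"ssn": mask_ssn, "email": mask_email},
    "auditor": {},
}

def apply_policy(record: dict, role: str) -> dict:
    rules = POLICY.get(role, {})
    return {k: rules[k](v) if k in rules else v for k, v in record.items()}

record = {"name": "Folds", "ssn": "123-45-6789", "email": "j.folds@example.com"}
print(apply_policy(record, "analyst"))
# {'name': 'Folds', 'ssn': '***-**-6789', 'email': 'j***@example.com'}
```

Centralizing the rules in one policy table also gives the audit trail a single place to log which role saw which fields in the clear.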
11. 11
Privacyshield.gov
Will the Privacy Shield continue to serve as a data transfer
mechanism under the EU General Data Protection Regulation
(GDPR)?
• Yes. Article 45 of the GDPR provides for the continuity of adequacy determinations made under
the EU’s 1995 Data Protection Directive, one of which was the adequacy decision on the EU-U.S.
Privacy Shield.
• The Privacy Shield was also designed with an eye to the GDPR, addressing both substantive and
procedural elements.
• For instance, the Privacy Shield includes an annual review, which was designed to address the
GDPR’s requirement for a mechanism for a periodic review, at least once every four years, of
relevant developments.
• It is important to note that Privacy Shield is not a GDPR compliance mechanism, but rather is a
mechanism that enables participating companies to meet the EU requirements for transferring
personal data to third countries, discussed in Chapter V of the GDPR.
13. 13
The advocate general's 'Schrems II' opinion: What it says and means
• On July 16, the Court of Justice of the European Union issued its long-awaited decision in the case Data Protection
Commission v. Facebook Ireland, Schrems.
• That decision invalidates the European Commission’s adequacy decision for the EU-U.S. Privacy Shield Framework,
on which more than 5,000 U.S. companies rely to conduct trans-Atlantic trade in compliance with EU data
protection rules.
• While the decision upholds the validity of standard contractual clauses, it requires companies and regulators to
conduct case-by-case analyses to determine whether foreign protections concerning government access to data
transferred meet EU standards.
• The decision reinforces the importance of data protection to global commerce and the critical role that privacy
professionals play in implementing protections in line with foreign legal requirements.
• For privacy professionals today, though, there may be more questions than answers.
https://iapp.org/news/a/the-advocate-generals-schrems-ii-opinion-what-it-says-and-means/
14. 14
After Schrems II
Contracts No Longer Enough For Data Transfer
It is critical to note that under the GDPR, pseudonymisation is defined as an outcome and not a technique.
Before the GDPR, pseudonymisation was widely understood to mean replacing direct identifiers with tokens and was
applied to individual fields within a data set.
• It was merely a Privacy Enhancing Technique (“PET”).
• In addition, instead of being applied only to individual fields, GDPR pseudonymisation, in combination with the GDPR
definition for personal data, now requires that the outcome should apply to a data set as a whole (the entire
collection of direct identifiers, indirect identifiers and other attributes).
This means that to achieve GDPR-compliant pseudonymisation, you must protect not only direct identifiers but also indirect
identifiers.
• You must also consider the degree of protection applied to all attributes in a data set.
• Further, to retain any value, you must do so while still preserving the data’s utility for its intended use.
• As a result, pre-GDPR approaches (using static tokens on a direct identifier, which is too often still incorrectly referred to
as “pseudonymisation”) will rarely, if ever, meet the heightened GDPR requirements to satisfy “appropriate safeguard”
requirements for lawful international data transfers under EU law.
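The contrast above, between pre-GDPR field tokenization and data-set-wide pseudonymisation, can be sketched in Python. The HMAC key handling, field names, and generalization rules are illustrative assumptions, not a reference implementation:

```python
import hmac
import hashlib

SECRET_KEY = b"example-key-kept-separately"  # must be held apart from the data set

def pseudonym(value: str) -> str:
    """Keyed, deterministic token: same input -> same pseudonym,
    but infeasible to reverse without the key."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:12]

def pseudonymise_record(rec: dict) -> dict:
    out = dict(rec)
    # Pre-GDPR practice often stopped here: tokenize the direct identifier only.
    out["patient_id"] = pseudonym(rec["patient_id"])
    # GDPR-style pseudonymisation must also treat indirect identifiers,
    # e.g. by generalizing age and region so records are no longer unique.
    out["age"] = ">50" if rec["age"] > 50 else "<=50"
    out["region"] = "Germany"  # illustrative city -> country generalization
    return out

rec = {"patient_id": "173965429", "age": 57,
       "region": "Hamburg", "disease": "Gastric ulcer"}
print(pseudonymise_record(rec))
```

Leaving `age` and `region` untouched would leave the record re-identifiable through those indirect identifiers, which is exactly why the token-only approach rarely satisfies the GDPR standard described above.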
16. 16
Case Study
Major healthcare enterprise, providing and coordinating services to
government sponsored programs.
Contracts with numerous physicians, hospitals and Federally Qualified Health Centers (FQHCs) across
many states in the USA.
• The company needed to improve patient outcomes to reduce overall cost per member utilizing
predictive analytics.
• However, governance policies dictated that analysts should not have access to sensitive Protected
Health Information (PHI) and Personally Identifiable Information (PII).
• This meant protecting data in Teradata, Oracle and SQL Server, as well as applications and files.
• In addition, recent security breaches by other companies in the industry drove a mandate to review
and secure sensitive data from external threats and unauthorized access.
17. 17
https://www.iso.org/standard/42807.html
Definitions in ISO 25237 International Health informatics standard
• De-identification process addresses three kinds of data:
• direct identifiers, which by themselves identify the patient;
• indirect identifiers, which provide correlation when used with other indirect or external knowledge; and
• non-identifying data, the rest of the data.
• Pseudonymization: particular type of de-identification that both removes the association with a data subject and adds an
association between a particular set of characteristics relating to the data subject and one or more pseudonyms
• Pseudonym: personal identifier that is different from the normally used personal identifier and is used with pseudonymized data to
provide dataset coherence linking all the information about a subject, without disclosing the real world person identity
• data protection: technical and social regimen for negotiating, managing and ensuring informational privacy, and security
• de-identification: general term for any process of reducing the association between a set of identifying data and the data
subject
• irreversibility: situation when, for any passage from identifiable to pseudonymous, it is computationally unfeasible to trace
back to the original identifier from the pseudonym
• data linking: matching and combining data from multiple databases
• linkage of information objects: process allowing a logical association to be established between different information objects
• primary use of personal data: uses and disclosures that are intended for the data collected
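The ISO 25237 notions of dataset coherence, irreversibility, and data linking can be illustrated with a small hypothetical sketch (keyed hashing via HMAC; the identifiers and values are made up):

```python
import hmac
import hashlib

KEY = b"demo-key"  # kept by the pseudonymization service, not the analysts

def pseudo(pid: str) -> str:
    """Pseudonym: a personal identifier different from the normal one."""
    return hmac.new(KEY, pid.encode(), hashlib.sha256).hexdigest()[:10]

# Two separate collections about the same subjects, keyed by pseudonym only.
labs   = {pseudo("patient-42"): {"hba1c": 6.9}}
visits = {pseudo("patient-42"): {"visits_2019": 3}}

# Dataset coherence: all information about one subject can still be joined
# across databases (data linking) via the shared pseudonym...
joined = {p: {**labs[p], **visits.get(p, {})} for p in labs}

# ...while irreversibility holds: without KEY it is computationally
# infeasible to trace the pseudonym back to "patient-42".
print(joined)
```

The deterministic keyed hash is what gives linkage without disclosure; a random per-record token would protect identity equally well but break coherence across the two collections.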
18. 18
Re-identification attacks
https://www.iso.org/standard/42807.html
• It is important to note that this information
is usually outside the scope of the data
model of an application.
• In order to create a methodology for privacy
risk assessment, a formalized way of
describing privacy threat and the risk of re-
identification is needed.
• A generic model of re-identification attacks, shown at its highest level of abstraction, consists of three major entities.
A key element in privacy risk assessment is to assess the effect of observational data that can be obtained by an attacker.
Observational data can consist of events recorded by the attacker, but can also consist of information that can be legally
obtained by the attacker.
It could be that the attacker is a generic user of the system who has, either by accident or unauthorized effort, obtained
extra data with which he should not have come into contact in the normal line of his duty.
19. 19
Threat model, goals and means of the attacker
https://www.iso.org/standard/42807.html
There is the goal of the attack (what information is an attacker after?) and there are the means at his disposal.
• The latter is linked with the “value” that the information that could be recovered from the anonymous database has for
the attacker.
Privacy protection is about protecting personal information and not simply about protecting the identity linked to a specific
database record.
This subtle difference is reflected in the three different attacker goals that are specified in the model:
a) re-identification (full):
1) identify to whom a specific anonymous record belongs;
2) identify which anonymous record belongs to a certain
person;
b) information recovery (or partial re-identification);
c) database membership:
1) Is someone listed in the database?
2) Is someone not listed in the database?
20. 20
Re-identification example
https://www.iso.org/standard/42807.html
As can be seen, the anonymous database contains three records with four static variables, which can have the values A or B
(where a question mark indicates missing information).
The attacker can observe only two of these variables directly and correctly and knows all people who are listed in the
anonymous database.
The linkage rules for this situation are thus extremely simple, either a value is the same or it is not.
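The slide's figure is not reproduced here, but the linkage rule it describes ("a value is the same or it is not") can be reconstructed as a short sketch with made-up A/B values:

```python
# Illustrative reconstruction of the re-identification example: three
# anonymous records with four A/B variables ("?" = missing), and an
# attacker who correctly observes two variables per known person.

anonymous_db = {
    "rec1": ["A", "A", "B", "?"],
    "rec2": ["A", "B", "B", "A"],
    "rec3": ["B", "B", "A", "A"],
}

# The attacker's observations: person -> {variable index: observed value}.
observations = {
    "alice": {0: "A", 1: "B"},
    "bob":   {0: "B", 1: "B"},
}

def matches(record, obs):
    """Linkage rule from the slide: a value either matches or it does not."""
    return all(record[i] == v for i, v in obs.items())

for person, obs in observations.items():
    candidates = [r for r, rec in anonymous_db.items() if matches(rec, obs)]
    # One candidate -> full re-identification; several -> partial at best.
    print(person, "->", candidates)
# alice -> ['rec2']
# bob -> ['rec3']
```

With only two observable variables, both attacks here already succeed fully, which is the point of the example: even crude equality linkage re-identifies when the observed attributes are distinctive enough.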
21. 21
Linkage mechanisms
https://www.iso.org/standard/42807.html
Partial re-identification is an intermediate stage between recovery of all information (on a particular subject) within the
anonymous database and no recovery at all.
In other words, it is the situation in which full re-identification fails, but in which the processes (algorithms) of re-
identification used still succeed in recovering some information from the anonymous database.
23. 23
Examples of data de-identification
Source: INTERNATIONAL STANDARD ISO/IEC 20889, Privitar, Anonos

Pseudonymization:
  Field: Gender | Privacy Action (PA): Pseudonymise | Variant Twin Output: AD-lks75HF9aLKSa

Aggregation/Binning (Generalization):
  Field: Age | PA: Integer Range Bin | PA Config: Step 10 + Pseud. | Variant Twin Output: Age_KXYC
  Field: Age | PA: Integer Range Bin | PA Config: Custom Steps | Variant Twin Output: 18-25

Rounding (Generalization):
  Field: Balance | PA: Nearest Unit Value | PA Config: Thousand | Variant Twin Output: 94000
  Source data: Last name Folds | Balance 93791 | Age 23 | Gender m

Generalization:
  Source data: Patient 173965429 | Age 57  | Gender Female | Region Hamburg | Disease Gastric ulcer
  Output data: Patient 173965429 | Age >50 | Gender Female | Region Germany | Disease Gastric ulcer
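The de-identification actions above can be sketched in Python; the bin edges, rounding unit, and city-to-country mapping are illustrative choices, not prescribed by the standard:

```python
# Sketch of three de-identification actions in ISO/IEC 20889 style:
# binning, rounding, and generalization.

def bin_age(age: int) -> str:
    """Aggregation/binning with custom steps, e.g. 18-25."""
    edges = [(18, 25), (26, 40), (41, 60), (61, 120)]
    for lo, hi in edges:
        if lo <= age <= hi:
            return f"{lo}-{hi}"
    return "unknown"

def round_to_nearest(value: int, unit: int = 1000) -> int:
    """Rounding to the nearest unit value (here: thousand)."""
    return round(value / unit) * unit

def generalize(record: dict) -> dict:
    """Generalization: exact age -> range, city -> country."""
    city_to_country = {"Hamburg": "Germany"}  # illustrative mapping
    return {
        **record,
        "Age": ">50" if record["Age"] > 50 else "<=50",
        "Region": city_to_country.get(record["Region"], record["Region"]),
    }

print(bin_age(23))              # 18-25
print(round_to_nearest(93791))  # 94000
print(generalize({"Patient": "173965429", "Age": 57, "Gender": "Female",
                  "Region": "Hamburg", "Disease": "Gastric ulcer"}))
```

Each function trades precision for a larger equivalence class: more records share the output value, so any one record is harder to single out.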
24. 24
Pseudonymization vs. Anonymization
Pseudonymization is recognized as an important method for privacy protection of personal health information.
• Such services may be used nationally, as well as for trans-border communication.
• Application areas include:
• indirect use of clinical data (e.g. research); clinical trials and post-marketing surveillance; pseudonymous
care; patient identification systems; public health monitoring and assessment; confidential patient-safety
reporting (e.g. adverse drug effects); comparative quality indicator reporting; peer review; consumer
groups; field service.
Anonymization
• Anonymization is the process and set of tools used where no longitudinal consistency is needed.
• The anonymization process is also used where pseudonymization has been used to address the remaining data
attributes.
• Anonymization utilizes tools like redaction, removal, blanking, substitution, randomization, shifting, skewing,
truncation, grouping, etc. Anonymization can lead to a reduced possibility of linkage.
• Each element allowed to pass should be justified. Each element should present the minimal risk, given the
intended use of the resulting data-set. Thus, where the intended use of the resulting data-set does not require
fine-grain codes, a grouping of codes might be used.
ISO 25237 Health informatics
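Two of the anonymization tools listed above, shifting and redaction, can be sketched as follows; the field names, the ±30-day window, and the fixed seed are illustrative assumptions:

```python
import datetime
import random

def shift_dates(record: dict, date_fields: set, rng: random.Random) -> dict:
    """Shifting: move all dates in a record by one random per-record offset,
    blurring absolute timelines while preserving intervals within the record."""
    offset = datetime.timedelta(days=rng.randint(-30, 30))
    return {k: (v + offset if k in date_fields else v) for k, v in record.items()}

def redact(record: dict, fields: set) -> dict:
    """Redaction/blanking of elements not justified for the intended use."""
    return {k: ("REDACTED" if k in fields else v) for k, v in record.items()}

rng = random.Random(7)  # fixed seed so the sketch is reproducible
rec = {"name": "Folds",
       "admitted": datetime.date(2019, 3, 14),
       "discharged": datetime.date(2019, 3, 20),
       "diagnosis": "K25"}
rec = shift_dates(rec, {"admitted", "discharged"}, rng)
rec = redact(rec, {"name"})
print(rec)  # length of stay (6 days) is preserved; the exact dates are not
```

Using a single offset per record is the design choice that keeps the data set useful: clinically relevant intervals survive, while the absolute dates that aid linkage do not.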
25. 25
Example of Privacy protection of personal health information
Use Cases for vendor neutral archive (VNA) for Medical Imaging devices & Analytics

[Architecture diagram: Imaging Data flowing through Data Protection Enforcement points, including Application Protection, Cloud Gateway, Big Data Protection and File Protection]

Use Cases:
• Diagnostic & reporting with real (Pseudonymized) data
• Clinical Research purposes, analyzing the historical data with Anonymized data
• Real-time analytics and triggering actionable events for patients/physicians with real (Pseudonymized) data
• Training purposes with Anonymized data
• Clinical Trials and treatment with real (Pseudonymized) / Anonymized data
• Predictive analytics with real (Pseudonymized) / Anonymized data
26. 26
Protection throughout the lifecycle of data in Hadoop
Big Data Protection with Granular Field Level Protection for Google Cloud

[Architecture diagram: Data Producers feed sensitive data into Google Cloud, where a Policy Enforcement Point (PEP) tokenizes or encrypts sensitive data fields before Big Data Analytics; Data Users see only the protected data fields]

• Enterprise privacy policies may be managed on-prem or in the Cloud Platform
• Separation of Duties through Encryption Key Management
28. 28
Transit Use Storage Singling out
Pseudonymization Tokenization
Protects the data flow
from attacks
Yes Yes Yes Yes Direct identifiers No
Deterministic
encryption
Protects the data when
not used in processing
operations
Yes No Yes Yes All attributes No
Order-preserving
encryption
Protects the data from
attacks
Partially Partially Partially Yes All attributes No
Homomorphic
encryption
Protects the data also
when used in processing
operations
Yes Yes Yes Yes All attributes No
Masking
Protects the data in
dev/test and analytical
applications
Yes Yes Yes Yes Local identifiers Yes
Local suppression
Protects the data in
analytical applications
Yes Yes Yes Yes
Identifying
attributes
Partially
Record suppression
Removes the data from
the data set
Yes Yes Yes Yes All attributes Yes
Sampling
Exposes only a subset of
the data for analytical
applications
Partially Partially Partially Yes All attributes Partially
Generalization
Protects the data in
dev/test and analytical
applications
Yes Yes Yes Yes
Identifying
attributes
Partially
Rounding
Protects the data in
dev/test and analytical
applications
Yes Yes Yes Yes
Identifying
attributes
No
Top/bottom coding
Protects the data in
dev/test and analytical
applications
Yes Yes Yes Yes
Identifying
attributes
No
Noise addition
Protects the data in
dev/test and analytical
applications
Yes Yes Yes No
Identifying
attributes
Partially
Permutation
Protects the data in
dev/test and analytical
applications
Yes Yes Yes No
Identifying
attributes
Partially
Micro aggregation
Protects the data in
dev/test and analytical
applications
Yes Yes Yes No All attributes No
Differential privacy
Protects the data in
analytical applications
No Yes Yes No
Identifying
attributes
Yes
K-anonymity
Protects the data in
analytical applications
No Yes Yes Yes Quai identifiers Yes
Privacy models
Applicable to
types of
attributes
Red
Cryptographic tools
Suppression
Generalization
Technique name
Data
truthfulness
at record
level
Use Case / User Story
Data protected in
Randomization
Applicable
to Direct
identifiers
Applicable
to All
attributes
Applicable
to Local
identifiers
Applicable
to
Identifying
attributes
Applicable
to Quasi
Identifiers
Applicability
to Different
types of
attributes
Risk reduction and
truthfulness of
standardized
de-identification
techniques and
models
Source:
INTERNATIONAL
STANDARD ISO/IEC
20889
Data truthfulness
at the record level
is useful for cases
involving traceable
data principal
specific patterns,
such as for fraud
detection,
healthcare
outcome
assessments, etc.
Technique name
29. 29
Transit Use Storage Singling out Linking In
Pseudonymization Tokenization
Protects the data flow
from attacks
Yes Yes Yes Yes Direct identifiers No Partially
Deterministic
encryption
Protects the data when
not used in processing
operations
Yes No Yes Yes All attributes No Partially
Order-preserving
encryption
Protects the data from
attacks
Partially Partially Partially Yes All attributes No Partially
Homomorphic
encryption
Protects the data also
when used in processing
operations
Yes Yes Yes Yes All attributes No No
Masking
Protects the data in
dev/test and analytical
applications
Yes Yes Yes Yes Local identifiers Yes Partially
Local suppression
Protects the data in
analytical applications
Yes Yes Yes Yes
Identifying
attributes
Partially Partially
Record suppression
Removes the data from
the data set
Yes Yes Yes Yes All attributes Yes Yes
Sampling
Exposes only a subset of
the data for analytical
applications
Partially Partially Partially Yes All attributes Partially Partially
Generalization
Protects the data in
dev/test and analytical
applications
Yes Yes Yes Yes
Identifying
attributes
Partially Partially
Rounding
Protects the data in
dev/test and analytical
applications
Yes Yes Yes Yes
Identifying
attributes
No Partially
Top/bottom coding
Protects the data in
dev/test and analytical
applications
Yes Yes Yes Yes
Identifying
attributes
No Partially
Noise addition
Protects the data in
dev/test and analytical
applications
Yes Yes Yes No
Identifying
attributes
Partially Partially
Permutation
Protects the data in
dev/test and analytical
applications
Yes Yes Yes No
Identifying
attributes
Partially Partially
Micro aggregation
Protects the data in
dev/test and analytical
applications
Yes Yes Yes No All attributes No Partially
Differential privacy
Protects the data in
analytical applications
No Yes Yes No
Identifying
attributes
Yes Yes
K-anonymity
Protects the data in
analytical applications
No Yes Yes Yes Quai identifiers Yes Partially
Privacy models
Applicable to
types of
attributes
Reduces the risk o
Cryptographic tools
Suppression
Generalization
Technique name
Data
truthfulness
at record
level
Use Case / User Story
Data protected in
Randomization
Reduces
the risk
of
Singling
out
Partially
Reduces
the risk
of
Singling
out
Reduce the risk of Singling out
Singling out:
isolating some or
all records
belonging to a data
principal in the
dataset by
observing a set of
characteristics
known to uniquely
identify this data
principal
Risk reduction and
truthfulness of
standardized
de-identification
techniques and
models
Source:
INTERNATIONAL
STANDARD ISO/IEC
20889
Technique name
30. 30
Transit Use Storage Singling out Linking Inference
Pseudonymization Tokenization
Protects the data flow
from attacks
Yes Yes Yes Yes Direct identifiers No Partially No
Deterministic
encryption
Protects the data when
not used in processing
operations
Yes No Yes Yes All attributes No Partially No
Order-preserving
encryption
Protects the data from
attacks
Partially Partially Partially Yes All attributes No Partially No
Homomorphic
encryption
Protects the data also
when used in processing
operations
Yes Yes Yes Yes All attributes No No No
Masking
Protects the data in
dev/test and analytical
applications
Yes Yes Yes Yes Local identifiers Yes Partially No
Local suppression
Protects the data in
analytical applications
Yes Yes Yes Yes
Identifying
attributes
Partially Partially Partially
Record suppression
Removes the data from
the data set
Yes Yes Yes Yes All attributes Yes Yes Yes
Sampling
Exposes only a subset of
the data for analytical
applications
Partially Partially Partially Yes All attributes Partially Partially Partially
Generalization
Protects the data in
dev/test and analytical
applications
Yes Yes Yes Yes
Identifying
attributes
Partially Partially Partially
Rounding
Protects the data in
dev/test and analytical
applications
Yes Yes Yes Yes
Identifying
attributes
No Partially Partially
Top/bottom coding
Protects the data in
dev/test and analytical
applications
Yes Yes Yes Yes
Identifying
attributes
No Partially Partially
Noise addition
Protects the data in
dev/test and analytical
applications
Yes Yes Yes No
Identifying
attributes
Partially Partially Partially
Permutation
Protects the data in
dev/test and analytical
applications
Yes Yes Yes No
Identifying
attributes
Partially Partially Partially
Micro aggregation
Protects the data in
dev/test and analytical
applications
Yes Yes Yes No All attributes No Partially Partially
Differential privacy
Protects the data in
analytical applications
No Yes Yes No
Identifying
attributes
Yes Yes Partially
K-anonymity
Protects the data in
analytical applications
No Yes Yes Yes Quai identifiers Yes Partially No
Privacy models
Applicable to
types of
attributes
Reduces the risk of
Cryptographic tools
Suppression
Generalization
Technique name
Data
truthfulness
at record
level
Use Case / User Story
Data protected in
Randomization
Reduces
the risk
of
Linking
Partially
Reduces
the risk
of
Linking
Source:
INTERNATIONAL
STANDARD ISO/IEC
20889
Reduce the risk of Linking
Linking
act of associating a
record concerning a
data principal with a
record concerning the
same data principal in a
separate dataset
Risk reduction and
truthfulness of
standardized
de-identification
techniques and
models
Technique name
31. 31
Risk reduction and
truthfulness of
standardized
de-identification
techniques and
models
Transit Use Storage Singling out Linking Inferenc
Pseudonymization Tokenization
Protects the data flow
from attacks
Yes Yes Yes Yes Direct identifiers No Partially No
Deterministic
encryption
Protects the data when
not used in processing
operations
Yes No Yes Yes All attributes No Partially No
Order-preserving
encryption
Protects the data from
attacks
Partially Partially Partially Yes All attributes No Partially No
Homomorphic
encryption
Protects the data also
when used in processing
operations
Yes Yes Yes Yes All attributes No No No
Masking
Protects the data in
dev/test and analytical
applications
Yes Yes Yes Yes Local identifiers Yes Partially No
Local suppression
Protects the data in
analytical applications
Yes Yes Yes Yes
Identifying
attributes
Partially Partially Partiall
Record suppression
Removes the data from
the data set
Yes Yes Yes Yes All attributes Yes Yes Yes
Sampling
Exposes only a subset of
the data for analytical
applications
Partially Partially Partially Yes All attributes Partially Partially Partiall
Generalization
Protects the data in
dev/test and analytical
applications
Yes Yes Yes Yes
Identifying
attributes
Partially Partially Partiall
Rounding
Protects the data in
dev/test and analytical
applications
Yes Yes Yes Yes
Identifying
attributes
No Partially Partiall
Top/bottom coding
Protects the data in
dev/test and analytical
applications
Yes Yes Yes Yes
Identifying
attributes
No Partially Partiall
Noise addition
Protects the data in
dev/test and analytical
applications
Yes Yes Yes No
Identifying
attributes
Partially Partially Partiall
Permutation
Protects the data in
dev/test and analytical
applications
Yes Yes Yes No
Identifying
attributes
Partially Partially Partiall
Micro aggregation
Protects the data in
dev/test and analytical
applications
Yes Yes Yes No All attributes No Partially Partiall
Differential privacy
Protects the data in
analytical applications
No Yes Yes No
Identifying
attributes
Yes Yes Partiall
K-anonymity
Protects the data in
analytical applications
No Yes Yes Yes Quai identifiers Yes Partially No
Privacy models
Applicable to
types of
attributes
Reduces the risk of
Cryptographic tools
Suppression
Generalization
Technique name
Data
truthfulness
at record
level
Use Case / User Story
Data protected in
Randomization
Reduces
the risk
of
Inference
Partially
Reduces
the risk
of
Inference
Source:
INTERNATIONAL
STANDARD ISO/IEC
20889
Inference:
act of deducing otherwise
unknown information with non-
negligible probability, using the
values of one or more
attributes or by correlating
external data sources
The deduced information can be
the value of one or more
attributes of a data principal,
the presence or absence of a
data principal in a dataset, or
the value of one or more
statistics for a population or
segment of a population.
Reduce the risk of InferenceTechnique name
32. 32
Risk
reduction
and
truthfulness
of
standardized
de-
identification
techniques
and
models
Source:
INTERNATIONAL
STANDARD ISO/IEC
20889
Columns: Technique name | Use Case / User Story | Data protected in (Transit / Use / Storage) | Data truthfulness at record level | Applicable to types of attributes | Reduces the risk of (Singling out / Linking / Inference)

Cryptographic tools
Pseudonymization / Tokenization | Protects the data flow from attacks | Yes / Yes / Yes | Yes | Direct identifiers | No / Partially / No
Deterministic encryption | Protects the data when not used in processing operations | Yes / No / Yes | Yes | All attributes | No / Partially / No
Order-preserving encryption | Protects the data from attacks | Partially / Partially / Partially | Yes | All attributes | No / Partially / No
Homomorphic encryption | Protects the data also when used in processing operations | Yes / Yes / Yes | Yes | All attributes | No / No / No

Suppression
Masking | Protects the data in dev/test and analytical applications | Yes / Yes / Yes | Yes | Local identifiers | Yes / Partially / No
Local suppression | Protects the data in analytical applications | Yes / Yes / Yes | Yes | Identifying attributes | Partially / Partially / Partially
Record suppression | Removes the data from the data set | Yes / Yes / Yes | Yes | All attributes | Yes / Yes / Yes
Sampling | Exposes only a subset of the data for analytical applications | Partially / Partially / Partially | Yes | All attributes | Partially / Partially / Partially

Generalization
Generalization | Protects the data in dev/test and analytical applications | Yes / Yes / Yes | Yes | Identifying attributes | Partially / Partially / Partially
Rounding | Protects the data in dev/test and analytical applications | Yes / Yes / Yes | Yes | Identifying attributes | No / Partially / Partially
Top/bottom coding | Protects the data in dev/test and analytical applications | Yes / Yes / Yes | Yes | Identifying attributes | No / Partially / Partially

Randomization
Noise addition | Protects the data in dev/test and analytical applications | Yes / Yes / Yes | No | Identifying attributes | Partially / Partially / Partially
Permutation | Protects the data in dev/test and analytical applications | Yes / Yes / Yes | No | Identifying attributes | Partially / Partially / Partially
Micro aggregation | Protects the data in dev/test and analytical applications | Yes / Yes / Yes | No | All attributes | No / Partially / Partially

Privacy models
Differential privacy | Protects the data in analytical applications | No / Yes / Yes | No | Identifying attributes | Yes / Yes / Partially
K-anonymity | Protects the data in analytical applications | No / Yes / Yes | Yes | Quasi-identifiers | Yes / Partially / No
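Several of the generalization techniques in the table (rounding, top/bottom coding) are simple to sketch. A hypothetical example over two quasi-identifiers; the band width and cap are assumptions for illustration:

```python
def round_to_band(age: int, band: int = 10) -> str:
    """Generalize an exact age into an age band (e.g. 34 -> '30-39')."""
    low = (age // band) * band
    return f"{low}-{low + band - 1}"

def top_code(income: int, cap: int = 150_000) -> str:
    """Top-coding: values at or above the cap are reported only as '>=cap'."""
    return f">={cap}" if income >= cap else str(income)

print(round_to_band(34))    # '30-39'
print(top_code(220_000))    # '>=150000'
```

Both techniques keep record-level truthfulness (the generalized value is still true of the record), which is why the table marks them "Yes" for truthfulness while noise addition and permutation are marked "No".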
35. 35
ISO/IEC 29101:2018 Architecture framework
ISO - Actors and Systems
• An actor may or may not be responsible for building the ICT (information and communication technology) systems it uses. For example, the PII principal can use a system built by, and under the responsibility of, the PII controller, or the PII principal's ICT system can be part of the PII controller's ICT system.
• In ICT systems employing peer-to-peer communications (a communication method, model or technology featuring communication among entities that are peers of each other, without central servers), every application can take the roles of all three listed actors.
• Information is both sent and received by
each peer, so each peer can be a PII
controller or processor for PII transferred by
another party in the role of a PII principal.
• In social networking applications, PII can be
processed by anyone with access to other
people's profiles.
36. 36
Policy framework for operation of pseudonymization services
https://www.iso.org/standard/42807.html
This policy should include the following:
1. description of the processing in which pseudonymization plays a role;
2. identification of the controller of the personal data;
3. identification of the controller of the pseudonymized data;
4. description of the pseudonymization method;
5. identification of the entity carrying out the pseudonymization, including:
• protection, storage and handling of the pseudonymization "secrets" (usually a cryptographic key or a linking table);
• description of what will happen if the organization is discontinued;
• description of the domains and applications for which the secret will be used, and for how long it is valid;
6. detailed description of whether the pseudonymization is reversible and what authorization, by whom, is required;
7. definition of the limitations on the receiver of pseudonymized data (e.g. information actions, onward forwarding, retention policies).
Each data processing or collecting project that uses pseudonymization should have a data protection policy dealing with the pseudonymization aspects.
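The policy items above can be captured as a machine-readable record. A hypothetical sketch; every field name and value below is invented for illustration and not prescribed by the ISO document:

```python
# Hypothetical pseudonymization policy record; field names are
# illustrative, not prescribed by ISO.
pseudonymization_policy = {
    "processing_description": "claims analytics",
    "personal_data_controller": "Example Hospital A",
    "pseudonymized_data_controller": "Example Research Org B",
    "method": "keyed hashing of direct identifiers",
    "pseudonymizing_entity": "trusted third-party service",
    "secret_handling": {
        "type": "cryptographic key",
        "storage": "HSM",
        "validity_days": 365,
        "domains": ["claims", "billing"],
        "on_discontinuation": "destroy key, notify controllers",
    },
    "reversible": True,
    "reversal_authorization": "data protection officer approval",
    "receiver_limitations": ["no onward forwarding", "retain max 5 years"],
}

# A policy checker might verify that required fields are present:
required = {"processing_description", "personal_data_controller",
            "pseudonymized_data_controller", "method",
            "pseudonymizing_entity", "secret_handling", "reversible"}
missing = required - pseudonymization_policy.keys()
print(sorted(missing))  # [] when the policy is complete
```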
37. 37
Trustworthy implementation - a trusted third party* performing the pseudonymization
https://www.iso.org/standard/42807.html
A trusted third party performing the pseudonymizing transformation is necessary for a trustworthy implementation of the pseudonymization technique across multiple entities.
1. As one communicating party does not always trust the other, trust can be established indirectly because the two parties trust a third, independent party.
2. Both parties are bound by a code of conduct, as specified in a privacy and security policy agreement they conclude with the pseudonymization service.
3. Use of a pseudonymization service offers the only reliable protection against several types of attack on the pseudonymization process.
4. Complementary privacy-enhancing technologies (PETs) and data processing features can easily be implemented.
*: A security authority, or its agent, trusted by other entities with respect to security-related activities (ISO 25237:2017 Health informatics - Pseudonymization)
38. 38
Interoperability of trustworthy implementations of pseudonymization
https://www.iso.org/standard/42807.html
• One or more mechanisms for exchanging the data between the entities in the model (source, pseudonymization service, target) and for controlling the operation. This is less of an issue, and existing protocols such as HTML can be used.
• Key exchange issues.
For two independent pseudonymization service providers to be interoperable, they should be able to:
• integrate each other's data: data from the same data subject processed by either service provider should be linkable without direct re-identification of the data subject;
• convert the pseudonymization results from one or more service providers in a controlled way, without direct re-identification of the data subject.
39. 39
Pseudonymization services - Trustworthy practices for operations
https://www.iso.org/standard/42807.html
A pseudonymization service:
1. should be strictly independent of the organizations supplying source data;
2. should be able to guarantee the security and trustworthiness of its methods by publishing its operating practices to its subscribers;
3. should be able to guarantee the security and trustworthiness of its software modules;
4. should be able to guarantee the security and trustworthiness of its operating environment, platforms and infrastructure (should provide technical, physical, procedural and personnel controls in accordance with ISO 27799);
5. should implement monitoring and quality assurance services and programmes.
Trustworthy operation also covers:
6. cryptographic key management;
7. instantiation of the pseudonymization service;
8. internal audit procedures;
9. external audit procedures;
10. participants;
11. risk assessment, which should be conducted regarding access by the data source to the resulting pseudonyms; such restrictions should be specified in operational policies.
40. 40
https://www.iso.org/standard/42807.html
Preparation of data
The conceptual model for the use of pseudonymization services requires that the data be split into one part containing identifying data and another part containing nothing but anonymous data.
1. Data elements that will be used for linking, grouping, anonymous searching, matching, etc. shall be indicated and marked.
2. Depending on the privacy policy, elements that need specific transformations (e.g. changing absolute time references into relative time references, or dates of birth into age groups) need similar marking.
3. Identifying elements that, according to the privacy policy, are not needed in the further processing in the target applications shall be discarded.
4. The anonymous part of the raw personal data is put into the payload part of the personal data element.
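The preparation steps above amount to a record split. A minimal sketch; which fields count as identifying, transformable or discardable is a hypothetical classification that would come from the project's privacy policy:

```python
# Hypothetical field classification; in practice this comes from the
# project's privacy policy.
IDENTIFYING = {"name", "ssn"}
TRANSFORM = {"birth_date"}   # needs conversion, e.g. date of birth -> age group
DISCARD = {"phone"}          # identifying but not needed downstream

def age_group(birth_year: int, current_year: int = 2019) -> str:
    age = current_year - birth_year
    low = (age // 10) * 10
    return f"{low}-{low + 9}"

def prepare(record: dict) -> tuple[dict, dict]:
    """Split a raw record into (identifying part, anonymous payload)."""
    identifying = {k: v for k, v in record.items() if k in IDENTIFYING}
    payload = {}
    for k, v in record.items():
        if k in IDENTIFYING or k in DISCARD:
            continue  # identifiers go to the other part; discarded fields are dropped
        payload[k] = age_group(v) if k in TRANSFORM else v
    return identifying, payload

ident, payload = prepare(
    {"name": "Ada", "ssn": "123-45-6789", "phone": "555-0100",
     "birth_date": 1984, "diagnosis": "A10"})
print(ident)    # {'name': 'Ada', 'ssn': '123-45-6789'}
print(payload)  # {'birth_date': '30-39', 'diagnosis': 'A10'}
```

Only the identifying part is fed to the pseudonymization step; the payload travels unchanged.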
42. 42
Pseudonymize - identifying and payload data shall be separated
Entities in the de-classification process: the separation of identifying and payload data.
• Further processing steps take the identifying part as input and leave the payload unchanged.
• The pseudonymization process translates the given identifiers into a pseudonym. Pseudonymization can:
— map a given identifier to the same pseudonym every time. Because the combination of preserving linkage between records belonging to the same identity and protecting the privacy of the data subjects is the main reason for using pseudonymization, this variant is used most often;
— map a given identifier to a different pseudonym:
— context dependent (context-spanning aspect of a pseudonym);
— time dependent (e.g. always varying, or changing over specified time intervals);
— location dependent (e.g. changing when the data comes from different places).
ISO/TS 25237:2008 Health informatics — Pseudonymization
Two types of pseudonymized data:
• irreversible pseudonymization;
• reversible pseudonymization, by applying procedures restricted to duly authorized users.
[Diagram: identifying data mapped to tokens via a lookup table; payload data kept separate]
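A deterministic mapping (same identifier, same pseudonym, so linkage is preserved) can be sketched with a keyed hash; the context-dependent variant follows by mixing the context into the input. The secret value and context names are illustrative assumptions:

```python
import hashlib
import hmac

SECRET = b"example-pseudonymization-secret"  # illustrative; keep real secrets in an HSM

def pseudonym(identifier: str, context: str = "") -> str:
    """Deterministic pseudonym: same (identifier, context) -> same pseudonym.

    Different contexts yield unlinkable pseudonyms for the same identifier,
    the context-dependent variant described above.
    """
    msg = f"{context}|{identifier}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:16]

p1 = pseudonym("patient-123", context="cardiology")
p2 = pseudonym("patient-123", context="cardiology")
p3 = pseudonym("patient-123", context="oncology")
print(p1 == p2)   # True: linkage preserved within one context
print(p1 == p3)   # False: contexts are not directly linkable
```

Keyed hashing like this gives irreversible pseudonymization; the reversible variant keeps a lookup table accessible only to duly authorized users.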
43. 43
Encryption Example for PCI DSS
[Diagram: encrypted cardholder data (CHD) and encryption keys distributed across Systems 0-4; systems that hold only encrypted CHD, without the keys, MAY NOT be in scope for PCI DSS]
“Where a third party receives and/or stores only data encrypted by another entity, and where they do not have the ability to decrypt the data, the third party may be able to consider the encrypted data out of scope if certain conditions are met.” (pcisecuritystandards.org)
That is specific to a situation where the organization has no access to the key material and holds only encrypted PANs. For example, if a card swipe is encrypted at the PAD, traverses the organization's network, and then goes to the bank for authorization/settlement, and the organization never gets the clear-text PAN and has no access to the keys used between the PAD and the bank, then that organization may have no PCI responsibility.
In any situation where the organization has access to the key material, tokenization is the only method to reduce scope.
44. 44
Tokenization Example for PCI DSS
[Diagram: Systems 0-4 exchanging tokens; the tokenization lookup table resides only with the tokenization process]
Tokenization process - the following are each in scope:
1. Systems performing tokenization of data
2. Tokens that are not isolated from the tokenization processes
3. Tokenized data that is present on a system or media that also contains the tokenization table
4. Tokens that are present in the same environment as the tokenization table
5. Tokens accessible to an entity that also has access to the tokenization table
The following is NOT in scope: tokens held by systems with no access to the tokenization table.
pcisecuritystandards.org
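The scope rules above hinge on where the lookup table lives: a token by itself carries no cardholder data. A toy vault sketch (the random token format is an assumption; real token services add format preservation, access control and audit):

```python
import secrets

class TokenVault:
    """Toy token vault: only the vault holds the PAN<->token lookup table.

    Systems that see only tokens, and never this table, are the ones the
    PCI guidance can treat as out of scope.
    """
    def __init__(self):
        self._pan_to_token = {}
        self._token_to_pan = {}

    def tokenize(self, pan: str) -> str:
        if pan in self._pan_to_token:          # same PAN -> same token
            return self._pan_to_token[pan]
        token = secrets.token_hex(8)           # random: derives nothing from the PAN
        self._pan_to_token[pan] = token
        self._token_to_pan[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        return self._token_to_pan[token]

vault = TokenVault()
t = vault.tokenize("4111111111111111")
print(t != "4111111111111111")   # True: the token reveals nothing by itself
print(vault.detokenize(t) == "4111111111111111")   # True, but only via the vault
```

Unlike encryption, there is no key that could leak: without the table, a token cannot be reversed at all.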
46. 46
Access to DataLow High
High -
Low -
I I
Lower Risk and Higher Productivity
with More Access to More Data
User Productivity
Risk
More
Access to
Data
Low Risk Tokens
High Risk Clear Data
47. 47
Why, What & How: balance security, privacy and compliance - controls and tools, regulations, policies, risk management - weighing breaches against opportunities.
• Enable use of protected data to find new business opportunities.
• Protect that data in ways that are transparent to business processes and compliant with regulations.
Data Security: on-prem or as a service. Compliance with EU GDPR, California CCPA and a growing list of country-specific privacy regulations.
50. 50
Case Study
A major international bank consolidated all European operational data sources to Italy, handling Personally Identifiable Information (PII) in compliance with EU cross-border data protection laws, specifically:
• Datenschutzgesetz 2000 (DSG 2000) in Austria, and
• Bundesdatenschutzgesetz in Germany.
This required access to Austrian and German customer data to be restricted to only requesters in each respective country.
Results:
• Achieved targeted compliance with EU cross-border data security laws
• Implemented country-specific data access restrictions
52. 52
CCPA redefines “Personal information”
• CCPA states that “personal information” means information that identifies, relates to, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household.
PwC,
Micro Focus
55. 55
Data flow mapping under GDPR
• If there is not already a documented workflow in place in your organisation, it can be worthwhile to send out a team to identify how the data is actually being gathered.
• This will enable you to see how your documented data flow differs from reality and what needs to be done to amend this.
If an organisation's theory about how its data is flowing differs from the reality, you have a breach and could be fined.
The organisation needs to look at how the data was captured, who is accountable for it, where it is located and who has access.
56. 56
Legal Compliance and Nation-State Attacks
• Many companies have information that is attractive to governments and intelligence services.
• Others worry that litigation may result in a subpoena for all their data.
Securosis, 2019
Multi-Cloud Data Privacy considerations
Jurisdiction
• Cloud service providers' redundancy is great for resilience, but regulatory concerns arise when moving data across regions that may have different laws and jurisdictions.
SecuPi
57. 57
Securosis, 2019
Consistency
• Most firms are quite familiar with their on-premises encryption and key management systems, so they often prefer
to leverage the same tool and skills across multiple clouds.
• Firms often adopt a “best of breed” cloud approach.
Multi-Cloud Key Management considerations
Trust
• Some customers simply do not trust their vendors.
Vendor Lock-in and Migration
• A common concern is vendor lock-in and an inability to migrate to another cloud service provider.
• Some native cloud encryption systems do not allow customer keys to move outside the system, and some cloud encryption systems are based on proprietary interfaces.
• The goal is to maintain protection regardless of where data resides, including when it moves between cloud vendors.
[Diagram: a cloud gateway spanning Google Cloud, AWS and Azure]
58. 58
References:
1. California Consumer Privacy Act, OCT 4, 2019, https://www.csoonline.com/article/3182578/california-consumer-privacy-act-what-
you-need-to-know-to-be-compliant.html
2. CIS Controls V7.1 Mapping to NIST CSF, https://dataprivacylab.org/projects/identifiability/paper1.pdf
3. GDPR and Tokenizing Data, https://tdwi.org/articles/2018/06/06/biz-all-gdpr-and-tokenizing-data-3.aspx
4. GDPR VS CCPA, https://wirewheel.io/wp-content/uploads/2018/10/GDPR-vs-CCPA-Cheatsheet.pdf
5. General Data Protection Regulation, https://en.wikipedia.org/wiki/General_Data_Protection_Regulation
6. IBM Framework Helps Clients Prepare for the EU's General Data Protection Regulation, https://ibmsystemsmag.com/IBM-
Z/03/2018/ibm-framework-gdpr
7. INTERNATIONAL STANDARD ISO/IEC 20889, https://webstore.ansi.org/Standards/ISO/ISOIEC208892018?gclid=EAIaIQobChMIvI-
k3sXd5gIVw56zCh0Y0QeeEAAYASAAEgLVKfD_BwE
8. INTERNATIONAL STANDARD ISO/IEC 27018, https://webstore.ansi.org/Standards/ISO/
ISOIEC270182019?gclid=EAIaIQobChMIleWM6MLd5gIVFKSzCh3k2AxKEAAYASAAEgKbHvD_BwE
9. New Enterprise Application and Data Security Challenges and Solutions https://www.brighttalk.com/webinar/new-enterprise-
application-and-data-security-challenges-and-solutions/
10. Machine Learning and AI in a Brave New Cloud World https://www.brighttalk.com/webcast/14723/357660/machine-learning-and-ai-
in-a-brave-new-cloud-world
11. Emerging Data Privacy and Security for Cloud https://www.brighttalk.com/webinar/emerging-data-privacy-and-security-for-cloud/
12. New Application and Data Protection Strategies https://www.brighttalk.com/webinar/new-application-and-data-protection-
strategies-2/
13. The Day When 3rd Party Security Providers Disappear into Cloud https://www.brighttalk.com/webinar/the-day-when-3rd-party-
security-providers-disappear-into-cloud/
14. Advanced PII/PI Data Discovery https://www.brighttalk.com/webinar/advanced-pii-pi-data-discovery/
15. Emerging Application and Data Protection for Cloud https://www.brighttalk.com/webinar/emerging-application-and-data-protection-
for-cloud/
16. Data Security: On Premise or in the Cloud, ISSA Journal, December 2019, ulf@ulfmattsson.com
17. Webinars and slides, www.ulfmattsson.com
59. 59
ISO Privacy Standards - 11 published international privacy standards
(IS: International Standard, TR: Technical Report, TS: Technical Specification)
Guidelines to help comply with ethical standards:
• 19608 TS Guidance for developing security and privacy functional requirements based on ISO 15408 (Requirements)
• 20889 IS Privacy-enhancing de-identification terminology and classification of techniques (Techniques)
• 27018 IS Code of practice for protection of PII in public clouds acting as PII processors (Cloud)
• 27550 TR Privacy engineering for system lifecycle processes (Process)
• 27701 IS Security techniques - Extension to ISO/IEC 27001 and ISO/IEC 27002 for privacy information management - Requirements and guidelines (Management)
• 29100 IS Privacy framework (Framework)
• 29101 IS Privacy architecture framework (Framework)
• 29134 IS Guidelines for privacy impact assessment (Impact)
• 29151 IS Code of practice for PII protection
• 29190 IS Privacy capability assessment model
• 29191 IS Requirements for partially anonymous, partially unlinkable authentication