
Privacy-preserving Data Mining in Industry: Practical Challenges and Lessons Learned


Preserving the privacy of users is a key requirement of web-scale data mining applications and systems such as web search, recommender systems, crowdsourced platforms, and analytics applications, and it has received renewed attention in light of recent data breaches and new regulations such as GDPR. In this tutorial, we will first present an overview of privacy breaches over the last two decades and the lessons learned, key regulations and laws, and the evolution of privacy techniques leading to the definition and techniques of differential privacy. Then, we will focus on the application of privacy-preserving data mining techniques in practice, presenting case studies such as Apple’s differential privacy deployment for iOS, Google’s RAPPOR, and LinkedIn Salary. We will also discuss various open source as well as commercial privacy tools, and conclude with open problems and challenges for the data mining / machine learning community.



  1. 1. Privacy-preserving Data Mining in Industry: Practical Challenges and Lessons Learned KDD 2018 Tutorial August 2018 Krishnaram Kenthapadi (AI @ LinkedIn) Ilya Mironov (Google AI) Abhradeep Thakurta (UC Santa Cruz) https://sites.google.com/view/kdd2018privacytutorial
  2. 2. Outline / Learning Outcomes • Privacy breaches and lessons learned • Evolution of privacy techniques • Differential privacy: definition and techniques • Privacy techniques in practice: Challenges and Lessons Learned • Google’s RAPPOR • Apple’s differential privacy deployment for iOS • LinkedIn Salary: Privacy Design • Key Takeaways
  3. 3. Privacy: A Historical Perspective Evolution of Privacy Techniques and Privacy Breaches
  4. 4. Privacy Breaches and Lessons Learned: Attacks on privacy • Governor of Massachusetts • AOL • Netflix • Web browsing data • Facebook • Amazon • Genomic data
  5. 5. William Weld vs Latanya Sweeney: Massachusetts Group Insurance Commission (1997) released anonymized medical histories of state employees (all hospital visits, diagnoses, prescriptions). Latanya Sweeney (MIT grad student) bought the Cambridge voter roll for $20 and re-identified Governor William Weld (born July 31, 1945, resident of ZIP 02138).
  6. 6. 64% uniquely identifiable with ZIP + birth date + gender (in the US population). Golle, “Revisiting the Uniqueness of Simple Demographics in the US Population”, WPES 2006
  7. 7. Attacker's Advantage Auxiliary information
  8. 8. AOL Data Release. August 4, 2006: AOL Research publishes anonymized search logs of 650,000 users. August 9, 2006: The New York Times re-identifies searcher No. 4417749 as Thelma Arnold.
  9. 9. Attacker's Advantage Auxiliary information Enough to succeed on a small fraction of inputs
  10. 10. Netflix Prize
  11. 11. Netflix Prize. Oct 2006: Netflix announces the Netflix Prize • ratings from 10% of their users • average of 200 ratings per user • Narayanan, Shmatikov (2006)
  12. 12. Deanonymizing Netflix Data: Narayanan, Shmatikov, “Robust De-anonymization of Large Datasets (How to Break Anonymity of the Netflix Prize Dataset)”, 2008
  13. 13. ● Noam Chomsky in Our Times ● Fahrenheit 9/11 ● Jesus of Nazareth ● Queer as Folk
  14. 14. De-anonymizing Web Browsing Data with Social Networks. Key idea: ● Similar intuition as the attack on medical records ● Medical records: each person can be identified based on a combination of a few attributes ● Web browsing history: browsing history is unique for each person ● Each person has a distinctive social network → the set of links appearing in one’s feed is unique ● Users are likely to visit links in their feed with higher probability than a random user ● “Browsing histories contain tell-tale marks of identity”. Su et al., De-anonymizing Web Browsing Data with Social Networks, 2017
  15. 15. Attacker's Advantage Auxiliary information Enough to succeed on a small fraction of inputs High dimensionality
  16. 16. Privacy Attacks on Ad Targeting. Korolova, “Privacy Violations Using Microtargeted Ads: A Case Study”, PADM 2010
  17. 17. Facebook vs Korolova: 10 campaigns targeting 1 person (zip code, gender, workplace, alma mater). Age: 21, 22, 23, …, 30; ad impressions in a week: 0, 0, 8, …, 0. Korolova, “Privacy Violations Using Microtargeted Ads: A Case Study”, PADM 2010
  18. 18. Facebook vs Korolova: 10 campaigns targeting 1 person (zip code, gender, workplace, alma mater). Interest: A, B, C, …, Z; ad impressions in a week: 0, 0, 8, …, 0. Korolova, “Privacy Violations Using Microtargeted Ads: A Case Study”, PADM 2010
  19. 19. ● Context: Microtargeted Ads ● Takeaway: Attackers can instrument ad campaigns to identify individual users. ● Two types of attacks: ○ Inference from Impressions ○ Inference from Clicks Facebook vs Korolova: Recap
  20. 20. Attacker's Advantage Auxiliary information Enough to succeed on a small fraction of inputs High dimensionality Active
  21. 21. Attacking Amazon.com: items frequently bought together. Bought: A, B, C, D, E; Z: “Customers Who Bought This Item Also Bought” A, C, D, E. Calandrino, Kilzer, Narayanan, Felten, Shmatikov, “You Might Also Like: Privacy Risks of Collaborative Filtering”, IEEE S&P 2011
  22. 22. Attacker's Advantage Auxiliary information Enough to succeed on a small fraction of inputs High dimensionality Active Observant
  23. 23. Genetic data: Homer et al., “Resolving individuals contributing trace amounts of DNA to highly complex mixtures using high-density SNP genotyping microarrays”, PLoS Genetics, 2008
  24. 24. Bayesian analysis against a reference population
  25. 25. “In all mixtures, the identification of the presence of a person’s genomic DNA was possible.”
  26. 26. … one week later: Zerhouni, NIH Director: “As a result, the NIH has removed from open-access databases the aggregate results (including P values and genotype counts) for all the GWAS that had been available on NIH sites”
  27. 27. Attacker's Advantage Auxiliary information Enough to succeed on a small fraction of inputs High dimensionality Active Observant Clever
  28. 28. Differential Privacy
  29. 29. Defining Privacy: the curator model
  30. 30. Defining Privacy: a curator holds the database, which may or may not contain your data
  31. 31. Defining Privacy. Intuition: ● A member’s privacy is preserved if “the released result would nearly be the same, whether or not the user’s information is taken into account” ● An attacker gains very little additional knowledge about any specific member from the published result
  32. 32. Differential Privacy. Databases D and D′ are neighbors if they differ in one person’s data. Differential Privacy [DMNS06]: the distribution of the curator’s output M(D) on database D is (nearly) the same as M(D′).
  33. 33. ε-Differential Privacy (Dwork, McSherry, Nissim, Smith [TCC 2006]): the distribution of the curator’s output M(D) on database D is (nearly) the same as M(D′). Parameter ε quantifies information leakage: ∀S: Pr[M(D)∊S] ≤ exp(ε) ∙ Pr[M(D′)∊S].
  34. 34. (ε, δ)-Differential Privacy (Dwork, McSherry, Nissim, Smith [TCC 2006]; Dwork, Kenthapadi, McSherry, Mironov, Naor [EUROCRYPT 2006]): the distribution of the curator’s output M(D) on database D is (nearly) the same as M(D′). Parameter ε quantifies information leakage; parameter δ allows for a small probability of failure: ∀S: Pr[M(D)∊S] ≤ exp(ε) ∙ Pr[M(D′)∊S] + δ.
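For concreteness, here is a minimal sketch (not part of the original slides) of the Laplace mechanism, the standard way to achieve ε-differential privacy for a counting query; the function name and parameter values are illustrative.

```python
import numpy as np

def laplace_count(true_count, epsilon, rng=None):
    """Release a counting query with epsilon-differential privacy.

    Adding or removing one person's record changes a count by at most 1
    (sensitivity 1), so Laplace noise with scale 1/epsilon satisfies
    Pr[M(D) in S] <= exp(epsilon) * Pr[M(D') in S] for neighboring D, D'.
    """
    rng = rng or np.random.default_rng()
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Example: a count over the database, released under a privacy budget of 0.5.
print(laplace_count(true_count=1234, epsilon=0.5))
```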
  35. 35. “Bad Outcomes” Interpretation: compare the output distributions f(D) (probability with record x) and f(D′) (probability without record x) over any set of bad outcomes.
  36. 36. Bayesian Interpretation ● Prior on databases p ● Observed output O ● Does the database contain record x?
  37. 37. Differential Privacy ● Robustness to auxiliary data ● Post-processing: if M(D) is differentially private, so is f(M(D)). ● Composability: run two ε-DP mechanisms; the full interaction is 2ε-DP. ● Group privacy: graceful degradation in the presence of correlated inputs.
  38. 38. Differential Privacy: Takeaway points • Privacy as a notion of stability of randomized algorithms with respect to small perturbations of their input • Worst-case definition • Robust (to auxiliary data, correlated inputs) • Composable • Quantifiable • Concept of a privacy budget • Noise injection
  39. 39. Case Studies
  40. 40. Google’s RAPPOR
  41. 41. London, 1854: Broad Street Cholera Outbreak
  42. 42. ...Mountain View, 2014
  43. 43. Central Model Curator
  44. 44. Local Model
  45. 45. Randomized response: Collecting a sensitive Boolean Developed in the 1960s for sensitive surveys “Have you had an abortion?” - flip a coin, in private - if coin lands heads, respond “YES” - if coin lands tails, respond with the truth Unbiased estimate calculated as: 2 × (fraction of “YES” - ½ )
  46. 46. Randomized response: Collecting a sensitive Boolean Developed in the 1960s for sensitive surveys “Have you had an abortion?” - flip a coin, in private - if coin lands heads, flip another coin and respond “YES” or “NO” accordingly - if coin lands tails, respond with the truth Unbiased estimate calculated as: 2 × (fraction of “YES” - ¼ ) Satisfies differential privacy (ε = ln 3)
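As a quick illustration (not from the slides), the following sketch simulates the two-coin protocol above and recovers the population rate with the bias-corrected estimator; the names, seed, and population size are arbitrary.

```python
import numpy as np

def randomized_response(truth: bool, rng) -> bool:
    """Two-coin randomized response: satisfies ln(3)-differential privacy."""
    if rng.random() < 0.5:           # first coin: heads -> answer at random
        return rng.random() < 0.5    # second coin decides YES / NO
    return truth                     # first coin: tails -> answer truthfully

def estimate_true_rate(answers):
    # Pr[YES] = 1/4 + true_rate / 2, so true_rate = 2 * (fraction of YES - 1/4).
    return 2.0 * (np.mean(answers) - 0.25)

rng = np.random.default_rng(0)
true_rate = 0.10
truths = rng.random(100_000) < true_rate
answers = [randomized_response(t, rng) for t in truths]
print(round(estimate_true_rate(answers), 3))   # close to 0.10
```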
  47. 47. RAPPOR Erlingsson, Pihur, Korolova. "RAPPOR: Randomized aggregatable privacy-preserving ordinal response." ACM CCS 2014.
  48. 48. RAPPOR: two-level randomized response Can we do repeated surveys of sensitive attributes? — Average of randomized responses will reveal a user’s true answer :-( Solution: Memoize! Re-use the same random answer — Memoization can hurt privacy too! Long, random bit sequence can be a unique tracking ID :-( Solution: Use 2-levels! Randomize the memoized response
  49. 49. RAPPOR: two-level randomized response ● Store client value v into bloom filter B using hash functions ● Memoize a Permanent Randomized Response (PRR) B′ ● Report an Instantaneous Randomized Response (IRR) S
  50. 50. RAPPOR: two-level randomized response ● Store client value v into bloom filter B using hash functions ● Memoize a Permanent Randomized Response (PRR) B′ ● Report an Instantaneous Randomized Response (IRR) S ● Parameters: f = ½, q = ¾, p = ½
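A minimal sketch (not Google's implementation) of the two-level encoding with the slide's parameters f = ½, q = ¾, p = ½ and a 128-bit Bloom filter with 2 hash functions; the hash construction and cohort handling here are simplifying assumptions.

```python
import hashlib
import numpy as np

K_BITS, NUM_HASHES = 128, 2      # Bloom filter size and number of hash functions (as on the slides)
F, Q, P = 0.5, 0.75, 0.5         # RAPPOR parameters from the slide

def bloom_bits(value: str, cohort: int) -> np.ndarray:
    """Set NUM_HASHES bits of a K_BITS Bloom filter for this value in this cohort."""
    bits = np.zeros(K_BITS, dtype=np.uint8)
    for i in range(NUM_HASHES):
        digest = hashlib.sha256(f"{cohort}|{i}|{value}".encode()).digest()
        bits[int.from_bytes(digest, "big") % K_BITS] = 1
    return bits

def permanent_rr(bloom: np.ndarray, rng) -> np.ndarray:
    """PRR: computed once per (client, value) and memoized."""
    u = rng.random(K_BITS)
    prr = bloom.copy()
    prr[u < F / 2] = 1                      # with probability f/2, force the bit to 1
    prr[(u >= F / 2) & (u < F)] = 0         # with probability f/2, force the bit to 0
    return prr                              # with probability 1 - f, keep the true bit

def instantaneous_rr(prr: np.ndarray, rng) -> np.ndarray:
    """IRR: fresh randomness for every report sent to the server."""
    report_probs = np.where(prr == 1, Q, P)
    return (rng.random(K_BITS) < report_probs).astype(np.uint8)

rng = np.random.default_rng(0)
prr = permanent_rr(bloom_bits("www.google.com", cohort=7), rng)
report = instantaneous_rr(prr, rng)
```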
  51. 51. RAPPOR: Life of a report. Value “www.google.com” → Bloom Filter → PRR → IRR
  52. 52. RAPPOR: Life of a report (PRR stage): a 0-bit of the Bloom filter becomes 1 with P(1) = 0.25, while a 1-bit stays 1 with P(1) = 0.75
  53. 53. RAPPOR: Life of a report (IRR stage): a 0-bit of the PRR is reported as 1 with P(1) = 0.50, while a 1-bit is reported as 1 with P(1) = 0.75
  54. 54. Differential privacy of RAPPOR ● The Permanent Randomized Response satisfies differential privacy at ε = 4 ln(3) ● Each Instantaneous Randomized Response has differential privacy at ε = ln(3)
  55. 55. Differential Privacy of RAPPOR: Measurable privacy bounds Each report offers differential privacy with ε = ln(3) Attacker’s guess goes from 0.1% → 0.3% in the worst case Differential privacy even if attacker gets all reports (infinite data!!!) Also… Base Rate Fallacy prevents attackers from finding needles in haystacks
  56. 56. Cohorts: a Bloom filter with only 2 bits set out of 128 yields too many false positives; instead, each user (e.g., user 0xA0FE91B76 reporting google.com) is assigned to one of 128 cohorts, each with its own hash functions.
  57. 57. Decoding RAPPOR
  58. 58. From Raw Counts to De-noised Counts (chart: true bit counts, with no noise, vs. de-noised RAPPOR reports)
  59. 59. From De-Noised Count to Distribution (chart: true bit counts and de-noised RAPPOR reports decomposed into contributions from candidate strings such as google.com, yahoo.com, bing.com)
  60. 60. From De-Noised Count to Distribution ● Linear regression: min_X ||B − AX||₂ ● LASSO: min_X ||B − AX||₂² + λ||X||₁ ● Hybrid: 1. find the support of X via LASSO; 2. solve linear regression on that support to find the weights
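A rough sketch of the hybrid decoding step described above (candidate matrix A and de-noised bit counts b are assumed to be given; the penalty weight and function name are illustrative, not part of the RAPPOR release).

```python
import numpy as np
from sklearn.linear_model import Lasso

def decode_rappor(A: np.ndarray, b: np.ndarray, lam: float = 0.1):
    """Hybrid decoding: A[:, j] holds candidate string j's expected bit pattern,
    b is the vector of de-noised bit counts; lam is an illustrative penalty weight."""
    # Step 1: LASSO picks which candidate strings are present at all (the support).
    lasso = Lasso(alpha=lam, positive=True, fit_intercept=False)
    lasso.fit(A, b)
    support = np.flatnonzero(lasso.coef_ > 0)
    # Step 2: ordinary least squares restricted to the support gives unbiased weights.
    weights, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
    return dict(zip(support.tolist(), weights.tolist()))
```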
  61. 61. Deploying RAPPOR
  62. 62. Coverage
  63. 63. Explaining RAPPOR “Having the cake and eating it too…” “Seeing the forest without seeing the trees…”
  64. 64. Metaphor for RAPPOR
  65. 65. Microdata: An Individual’s Report
  66. 66. Microdata: An Individual’s Report Each bit is flipped with probability 25%
  67. 67. Big Picture Remains!
  68. 68. Google Chrome Privacy White Paper https://www.google.com/chrome/browser/privacy/whitepaper.html Phishing and malware protection Google Chrome includes an optional feature called "Safe Browsing" to help protect you against phishing and malware attacks. This helps prevent evil-doers from tricking you into sharing personal information with them (“phishing”) or installing malicious software on your computer (“malware”). The approach used to accomplish this was designed specifically to protect your privacy and is also used by other popular browsers. If you'd rather not send any information to Safe Browsing, you can also turn these features off. Please be aware that Chrome will no longer be able to protect you from websites that try to steal your information or install harmful software if you disable this feature. We really don't recommend turning it off. … If a URL was indeed dangerous, Chrome reports this anonymously to Google to improve Safe Browsing. The data sent is randomized, constructed in a manner that ensures differential privacy, permitting only monitoring of aggregate statistics that apply to tens of thousands of users at minimum. The reports are an instance of Randomized Aggregatable Privacy-Preserving Ordinal Responses, whose full technical details have been published in a technical report and presented at the 2014 ACM Computer and Communications Security conference. This means that Google cannot infer which website you have visited from this.
  69. 69. Developers’ Uptake
  70. 70. RAPPOR: Lessons Learned
  71. 71. Growing Pains ● Transitioning from a research prototype to a real product ● Scalability ● Versioning
  72. 72. Communicating Uncertainty
  73. 73. Maintaining Candidates List (chart: estimates with no missing candidates vs. three missing candidates; values shown: 4%, 13%, 17%)
  74. 74. RAPPOR Metrics in Chrome https://chromium.googlesource.com/chromium/src/+log/master/tools/metrics/rappor/rappor.xml
  75. 75. Open Source Efforts https://github.com/google/rappor - demo you can run with a couple of shell commands - client library in several languages - analysis tool and simulation - documentation
  76. 76. Follow-up - Bassily, Smith, “Local, Private, Efficient Protocols for Succinct Histograms,” STOC 2015 - Kairouz, Bonawitz, Ramage, “Discrete Distribution Estimation under Local Privacy”, https://arxiv.org/abs/1602.07387 - Qin et al., “Heavy Hitter Estimation over Set-Valued Data with Local Differential Privacy”, CCS 2016
  77. 77. Key takeaway points RAPPOR - locally differentially-private mechanism for reporting of categorical and string data ● First Internet-scale deployment of differential privacy ● Explainable ● Conservative ● Open-sourced
  78. 78. Apple's On-Device Differential Privacy Abhradeep Thakurta, UC Santa Cruz
  79. 79. Apple WWDC, June 2016
  80. 80. References https://arxiv.org/abs/1709.02753
  81. 81. Learning from private data: learn new (and frequent) words typed (examples: phablet, derp, photobomb, woot, OMG, troll, prepone, awwww)
  82. 82. Learning from private data Learn frequent emojis typed
  83. 83. Apple's On-Device Differential Privacy: Discovering New Words
  84. 84. Roadmap 1. Private frequency estimation with count-min-sketch 2. Private heavy hitters with puzzle piece algorithm 3. Private heavy hitters with tree histogram protocol
  85. 85. Private Frequency Oracle
  86. 86. Private frequency oracle: building block for private heavy hitters. Clients hold values d₁, d₂, …, dₙ; for any word s in the domain S (e.g., “phablet”), the oracle returns frequency(s), with all errors within γ = O(√n · log|S|).
  87. 87. Private frequency oracle: Design constraints. Computational and communication constraints: ● Client side: size of the domain (|S|) and n ● Communication to server: very few bits ● Server-side cost for one query: size of the domain (|S|) and n
  88. 88. Private frequency oracle: Design constraints ● Client side: size of the domain (|S|) and n ● # characters > 3,000; for 8-character words the size of the domain is |S| = 3,000^8 ● Number of clients ≈ 1B ● Efficient approaches [BS15]: ~ n ● Our goal: ~ O(log |S|)
  89. 89. Private frequency oracle: Design constraints. Computational and communication constraints: ● Client side: O(log |S|) ● Communication to server: O(1) bits ● Server-side cost for one query: O(log |S|)
  90. 90. Private frequency oracle, a starter solution: Randomized response. Encode the client value d as a bit vector with a 1 in position i (its index in the domain), then flip each bit with the right bias to obtain the randomized response d′; this protects ε-differential privacy.
  91. 91. Private frequency oracle, a starter solution: Randomized response. The server adds up the randomized bit vectors and, with bias correction, obtains a frequency estimate for all domain elements. Error in each estimate: Θ(√n · log|S|), the optimal error under privacy.
  92. 92. Private frequency oracle, a starter solution: Randomized response. Computational and communication constraints: ● Client side: O(|S|) ● Communication to server: O(|S|) bits ● Server-side cost for one query: O(1)
  93. 93. Private frequency oracle: Non-private count-min sketch [CM05]. The value d is hashed by k ≈ log|S| hash functions h₁, h₂, …, h_k, each into one of √n hash bins; client computation = O(log|S|).
  94. 94. Private frequency oracle: Non-private count-min sketch [CM05]. Reducing server computation: the server adds the clients’ hashed vectors into a k × √n table of counts (e.g., 245, 127, 9123, 2132, …).
  95. 95. Private frequency oracle: Non-private count-min sketch [CM05]. Reducing server computation: to query “phablet”, look up its bin in each of the k ≈ log|S| rows (e.g., counts 9146, 2212, 2132) and take the minimum: frequency estimate = min(9146, 2212, 2132). Error in each estimate: O(√n · log|S|); server-side query cost: O(log|S|).
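For reference, here is a small sketch (illustrative constants, not the production data structure) of the non-private count-min sketch from the slides: k hash rows, each with m bins, with the minimum over rows as the frequency estimate.

```python
import hashlib
import numpy as np

class CountMinSketch:
    """Non-private count-min sketch [CM05]: k rows of m bins each."""
    def __init__(self, k: int, m: int):
        self.k, self.m = k, m
        self.table = np.zeros((k, m), dtype=np.int64)

    def _bin(self, row: int, item: str) -> int:
        digest = hashlib.sha256(f"{row}|{item}".encode()).digest()
        return int.from_bytes(digest, "big") % self.m

    def add(self, item: str):
        for row in range(self.k):
            self.table[row, self._bin(row, item)] += 1

    def estimate(self, item: str) -> int:
        # Each row can only overestimate (hash collisions add counts), so take the minimum.
        return min(self.table[row, self._bin(row, item)] for row in range(self.k))

cms = CountMinSketch(k=8, m=1024)
for word in ["phablet"] * 245 + ["woot"] * 127:
    cms.add(word)
print(cms.estimate("phablet"))   # >= 245, usually exactly 245
```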
  96. 96. Private frequency oracle: Private count-min sketch. Making client computation differentially private: randomizing all k rows of the client’s sketch is only kε-differentially private, since k pieces of information are released.
  97. 97. Private frequency oracle: Private count-min sketch. Theorem: sampling (randomizing and reporting only one of the k rows) ensures ε-differential privacy without hurting accuracy; rather, it improves accuracy by a factor of k.
  98. 98. Private frequency oracle: Private count-min sketch. Reducing client communication: apply the Hadamard transform to the client’s (one-hot) row, turning it into a ±1 vector.
  99. 99. Private frequency oracle: Private count-min sketch. Reducing client communication: report a single sampled, randomized Hadamard coefficient; communication: O(1) bits. Theorem: the Hadamard transform and sampling do not hurt accuracy.
  100. 100. Private frequency oracle: Private count-min sketch ● Error in each estimate: O(√n · log|S|) ● Client side: O(log |S|) ● Communication to server: O(1) bits ● Server-side cost for one query: O(log |S|)
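The following is a much-simplified, illustrative sketch (not Apple's actual protocol) of what a client-side report could look like under these ideas: sample one of the k hash functions, compute a single Hadamard coefficient of the one-hot row, and randomize its sign. Constants, the exact noise calibration, and the server-side debiasing are omitted, and every name here is an assumption.

```python
import hashlib
import numpy as np

M_BINS, K_HASHES, EPS = 1024, 16, 2.0     # illustrative constants; M_BINS is a power of two

def _bin(row: int, item: str) -> int:
    digest = hashlib.sha256(f"{row}|{item}".encode()).digest()
    return int.from_bytes(digest, "big") % M_BINS

def client_report(item: str, rng):
    j = int(rng.integers(K_HASHES))        # sample one of the k hash functions
    idx = _bin(j, item)                    # the item's bin: a one-hot row of length M_BINS
    l = int(rng.integers(M_BINS))          # sample one Hadamard coefficient to transmit
    # Hadamard coefficient of a one-hot vector: H[l, idx] = (-1)^popcount(l & idx).
    bit = 1 if bin(l & idx).count("1") % 2 == 0 else -1
    # Randomize the sign: keep it with probability e^eps / (1 + e^eps).
    keep = rng.random() < np.exp(EPS) / (1.0 + np.exp(EPS))
    return j, l, bit if keep else -bit     # an O(1)-bit payload plus two indices

rng = np.random.default_rng(0)
print(client_report("phablet", rng))
```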
  101. 101. Roadmap 1. Private frequency estimation with count-min-sketch 2. Private heavy hitters with puzzle piece algorithm 3. Private heavy hitters with tree histogram protocol
  102. 102. Private heavy hitters: Using the frequency oracle (private count-min sketch). Given an element s in S, the oracle returns frequency(s); the goal is to find all s in S with frequency > γ, but the domain S has too many elements to search exhaustively.
  103. 103. Puzzle piece algorithm (works well in practice, no theoretical guarantees) [Bassily Nissim Stemmer Thakurta, 2017 and Apple differential privacy team, 2017]
  104. 104. Private heavy hitters. Observation: if a word has frequency > γ, each of its bi-grams has frequency > γ too (e.g., “Ph”, “ab”, “le”, “t$” for “Phablet”).
  105. 105. Private heavy hitters. Each client reports sanitized bi-grams, one per position P1–P4, and the sanitized complete word; the server recovers the frequent bi-grams at each position (e.g., {ab, ad, ph}, {ba, ab, ax}, {le, ab, ab}, {le, ab, t$}). Natural algorithm: Cartesian product of the frequent bi-grams.
  106. 106. Private heavy hitters. Natural algorithm: form candidate words as the Cartesian product P1 × P2 × P3 × P4 of the frequent bi-grams, then use the private frequency oracle (private count-min sketch) to find the frequent words.
  107. 107. Private heavy hitters. Natural algorithm: the Cartesian product P1 × P2 × P3 × P4 leads to a combinatorial explosion; in practice, all bi-grams are frequent.
  108. 108. Puzzle piece algorithm. Compute h = Hash(Phablet), where Hash: S → {1, …, ℓ}; the client reports its privatized bi-grams (Ph, ab, le, t$), each tagged with h, along with the privatized complete word.
  109. 109. Puzzle piece algorithm: Server side. Recover the frequent bi-grams at each position P1–P4 together with their tags in {1, …, ℓ} (e.g., (Ph, 3), (ab, 3), (le, 3), (t$, 3)); form candidate words P1 × P2 × P3 × P4 by combining only bi-grams with matching tags, then use the private frequency oracle (private count-min sketch) to find the frequent words.
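A toy sketch (illustrative only, not the deployed algorithm) of the server-side matching step: only bi-grams carrying the same hash tag are combined into candidate words, which are then checked against the frequency oracle.

```python
from itertools import product

def puzzle_piece_candidates(frequent_bigrams_by_position):
    """frequent_bigrams_by_position[p] maps hash tag -> set of bi-grams found frequent
    at word position p with that tag (tag = Hash(full word) in {1, ..., l})."""
    tags = set.intersection(*(set(d) for d in frequent_bigrams_by_position))
    candidates = set()
    for tag in tags:                                   # combine only matching bi-grams
        parts = [d[tag] for d in frequent_bigrams_by_position]
        for combo in product(*parts):
            candidates.add("".join(combo))
    return candidates   # each candidate is then checked against the private frequency oracle

# Tiny example mirroring the slide's running example, with hypothetical tag 3:
positions = [{3: {"Ph"}}, {3: {"ab"}}, {3: {"le"}}, {3: {"t$"}}]
print(puzzle_piece_candidates(positions))   # {'Phablet$'}
```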
  110. 110. Roadmap 1. Private frequency estimation with count-min-sketch 2. Private heavy hitters with puzzle piece algorithm 3. Private heavy hitters with tree histogram protocol
  111. 111. Tree histogram algorithm (works well in practice + optimal theoretical guarantees) [Bassily Nissim Stemmer Thakurta, 2017]
  112. 112. Private heavy hitters: Tree histograms (based on [CM05]). Any string in S is log|S| bits. Idea: construct the prefixes of each heavy hitter bit by bit.
  113. 113. Private heavy hitters: Tree histograms (binary prefix tree, level 1: prefixes 0 and 1)
  114. 114. Private heavy hitters: Tree histograms. Level 1: frequent prefixes of length 1; use the private frequency oracle. If a string is a heavy hitter, its prefixes are too.
  115. 115. Private heavy hitters: Tree histograms (level 2: prefixes 00, 01, 10, 11)
  116. 116. Private heavy hitters: Tree histograms. Level 2: frequent prefixes of length two. Idea: each level has ≈ √n heavy hitters.
  117. 117. Private heavy hitters: Tree histograms. Theorem: finds all heavy hitters with frequency at least O(√n · log|S|). Computational and communication constraints: ● Client side: O(log |S|) ● Communication to server: O(1) bits ● Server-side computation: O(n log |S|)
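A minimal sketch of the prefix-extension loop behind the tree histogram protocol; `freq_oracle` is an assumed callable standing in for the per-level private frequency oracle, and the parameter names are illustrative.

```python
def tree_histogram_heavy_hitters(freq_oracle, bit_len, threshold):
    """Grow heavy-hitter prefixes one bit at a time.

    freq_oracle(prefix) is assumed to return an estimated count of users whose
    value starts with `prefix` (a '0'/'1' string), e.g. from a private count-min
    sketch built for that level. Because every prefix of a heavy hitter is itself
    a heavy hitter, each level keeps only a small set of surviving prefixes.
    """
    prefixes = [""]
    for _ in range(bit_len):
        extended = [p + b for p in prefixes for b in "01"]
        prefixes = [p for p in extended if freq_oracle(p) >= threshold]
    return prefixes
```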
  118. 118. Key takeaway points • Keeping local differential privacy constant: • One low-noise report is better than many noisy ones • Weak signal with probability 1 is better than strong signal with small probability • We can learn the dictionary – at a cost • Longitudinal privacy remains a challenge
  119. 119. NIPS 2017
  120. 120. Microsoft: Discretization of continuous variables "These guarantees are particularly strong when user’s behavior remains approximately the same, varies slowly, or varies around a small number of values over the course of data collection."
  121. 121. Microsoft's deployment "Our mechanisms have been deployed by Microsoft across millions of devices ... to protect users’ privacy while collecting application usage statistics." B. Ding, J. Kulkarni, S. Yekhanin, NIPS 2017 More details: today, 1pm-5pm T12: Privacy at Scale: Local Differential Privacy in Practice, G. Cormode, T. Kulkarni, N. Li, T. Wang
  122. 122. LinkedIn Salary
  123. 123. Outline • LinkedIn Salary Overview • Challenges: Privacy, Modeling • System Design & Architecture • Privacy vs. Modeling Tradeoffs
  124. 124. LinkedIn Salary (launched in Nov, 2016)
  125. 125. Salary Collection Flow via Email Targeting
  126. 126. Current Reach (August 2018) • A few million responses out of several millions of members targeted • Targeted via emails since early 2016 • Countries: US, CA, UK, DE, IN, … • Insights available for a large fraction of US monthly active users
  127. 127. Data Privacy Challenges • Minimize the risk of inferring any one individual’s compensation data • Protection against data breach • No single point of failure. Achieved by a combination of techniques: encryption, access control, aggregation, thresholding. K. Kenthapadi, A. Chudhary, and S. Ambler, LinkedIn Salary: A System for Secure Collection and Presentation of Structured Compensation Insights to Job Seekers, IEEE PAC 2017 (arxiv.org/abs/1705.06976)
  128. 128. Modeling Challenges • Evaluation • Modeling on de-identified data • Robustness and stability • Outlier detection X. Chen, Y. Liu, L. Zhang, and K. Kenthapadi, How LinkedIn Economic Graph Bonds Information and Product: Applications in LinkedIn Salary, KDD 2018 (arxiv.org/abs/1806.09063) K. Kenthapadi, S. Ambler, L. Zhang, and D. Agarwal, Bringing salary transparency to the world: Computing robust compensation insights via LinkedIn Salary, CIKM 2017 (arxiv.org/abs/1703.09845)
  129. 129. Problem Statement •How do we design LinkedIn Salary system taking into account the unique privacy and security challenges, while addressing the product requirements?
  130. 130. Differential Privacy? [Dwork et al, 2006] • Rich privacy literature (Adam-Worthmann, Samarati-Sweeney, Agrawal-Srikant, …, Kenthapadi et al, Machanavajjhala et al, Li et al, Dwork et al) • Limitation of anonymization techniques (as discussed in the first part) • Worst-case sensitivity of quantiles to any one user’s compensation data is large ⇒ large noise would have to be added, reducing reliability/usefulness • Need compensation insights on a continual basis • Theoretical work on applying differential privacy under continual observations, but no practical implementations / applications • Local differential privacy / randomized response based approaches (Google’s RAPPOR, Apple’s iOS differential privacy, Microsoft’s telemetry collection) not applicable
  131. 131. De-identification Example ● Original submission (stored as an encrypted object): Title: User Exp Designer; Region: SF Bay Area; Company: Google; Industry: Internet; Years of exp: 12; Degree: BS; FoS: Interactive Media; Skills: UX, Graphics, ...; $$: 100K ● De-identified cohorts: (Title, Region): User Exp Designer, SF Bay Area, 100K / 115K / ...; (Title, Region, Industry): User Exp Designer, SF Bay Area, Internet, 100K; (Title, Region, Years of exp): User Exp Designer, SF Bay Area, 10+, 100K; (Title, Region, Company, Years of exp): User Exp Designer, SF Bay Area, Google, 10+, 100K ● #data points > threshold? Yes ⇒ copy to Hadoop (HDFS)
  132. 132. System Architecture
  133. 133. Collection & Storage
  134. 134. Collection & Storage • Allow members to submit their compensation info • Extract member attributes • E.g., canonical job title, company, region, by invoking LinkedIn standardization services • Securely store member attributes & compensation data
  135. 135. De-identification & Grouping
  136. 136. De-identification & Grouping • Approach inspired by k-Anonymity [Samarati-Sweeney] • “Cohort” or “Slice” • Defined by a combination of attributes • E.g., “User experience designers in SF Bay Area” • Contains aggregated compensation entries from corresponding individuals • No user name, id or any attributes other than those that define the cohort • A cohort available for offline processing only if it has at least k entries • Apply LinkedIn standardization software (free-form attribute → canonical version) before grouping • Analogous to the generalization step in k-Anonymity
  137. 137. De-identification & Grouping • Slicing service • Access member attribute info & submission identifiers (no compensation data) • Generate slices & track # submissions for each slice • Preparation service • Fetch compensation data (using submission identifiers), associate with the slice data, copy to HDFS
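A simplified sketch (not LinkedIn's production code) of the cohort grouping and k-entry thresholding described in the two slides above; the attribute names and the value of k are hypothetical.

```python
from collections import defaultdict

K_MIN = 10   # hypothetical minimum cohort size; the real threshold is a product decision

def build_cohorts(submissions, attribute_combos, k=K_MIN):
    """Group de-identified entries into cohorts and keep only cohorts with >= k entries.

    `submissions` is an iterable of dicts with standardized attributes plus 'compensation';
    `attribute_combos` lists the attribute tuples that define cohorts, e.g.
    [("title", "region"), ("title", "region", "industry")].
    """
    cohorts = defaultdict(list)
    for sub in submissions:
        for combo in attribute_combos:
            key = (combo, tuple(sub[attr] for attr in combo))
            cohorts[key].append(sub["compensation"])   # no member id or extra attributes retained
    return {key: values for key, values in cohorts.items() if len(values) >= k}
```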
  138. 138. Insights & Modeling
  139. 139. Insights & Modeling • Salary insight service • Check whether the member is eligible • Give-to-get model • If yes, show the insights • Offline workflow • Consume de-identified HDFS dataset • Compute robust compensation insights • Outlier detection • Bayesian smoothing/inference • Populate the insight key-value stores
  140. 140. Security Mechanisms
  141. 141. Security Mechanisms • Encryption of member attributes & compensation data using different sets of keys • Separation of processing • Limiting access to the keys
  142. 142. Security Mechanisms • Key rotation • No single point of failure • Infra security
  143. 143. Preventing Timestamp Join based Attacks • Inference attack by joining these on timestamp • De-identified compensation data • Page view logs (when a member accessed compensation collection web interface) • ⇒ Not desirable to retain the exact timestamp • Perturb by adding random delay (say, up to 48 hours) • Modification based on k-Anonymity • Generalization using a hierarchy of timestamps • But, needs to be incremental • ⇒ Process entries within a cohort in batches of size k • Generalize to a common timestamp • Make additional data available only in such incremental batches
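A toy sketch (illustrative only) of the batching idea on the slide above: entries in a cohort are released k at a time under a shared, generalized timestamp.

```python
def release_in_batches(cohort_entries, k):
    """Release a cohort's entries only in batches of k, stamped with a common timestamp.

    `cohort_entries` is a time-ordered list of (timestamp, compensation) pairs. All
    entries in a batch share the latest timestamp in that batch, so joining the
    de-identified data with page-view logs on timestamp points to a batch, not a person.
    """
    released = []
    full_batches = len(cohort_entries) - len(cohort_entries) % k
    for start in range(0, full_batches, k):
        batch = cohort_entries[start:start + k]
        common_ts = max(ts for ts, _ in batch)
        released.extend((common_ts, comp) for _, comp in batch)
    return released   # a remainder smaller than k stays unreleased until more entries arrive
```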
  144. 144. Privacy vs Modeling Tradeoffs • LinkedIn Salary system deployed in production for ~2.5 years • Study tradeoffs between privacy guarantees (‘k’) and data available for computing insights • Dataset: Compensation submission history from 1.5M LinkedIn members • Amount of data available vs. minimum threshold, k • Effect of processing entries in batches of size, k
  145. 145. Amount of data available vs. threshold, k
  146. 146. Percent of data available vs. batch size, k
  147. 147. Median delay due to batching vs. batch size, k
  148. 148. Key takeaway points • LinkedIn Salary: a new internet application, with unique privacy/modeling challenges • Privacy vs. Modeling Tradeoffs • Potential directions • Privacy-preserving machine learning models in a practical setting [e.g., Chaudhuri et al, JMLR 2011; Papernot et al, ICLR 2017] • Provably private submission of compensation entries?
  149. 149. Beyond Randomized Response
  150. 150. Beyond Randomized Response • LDP + Machine Learning: • "Is interaction necessary for distributed private learning?" Smith, Thakurta, Upadhyay, S&P 2017 • Federated Learning • Encode-Shuffle-Analyze architecture "Prochlo: Strong Privacy for Analytics in the Crowd" Bittau et al., SOSP 2017 • Amplification by Shuffling
  151. 151. LDP + Machine Learning. Interactivity as a major implementation constraint (parallel queries)
  152. 152. LDP + Machine Learning. Interactivity as a major implementation constraint (sequential queries)
  153. 153. "Is interaction necessary for distributed private learning?" [STU 2017] • Single parameter learning (e.g., median): • Maximal accuracy with full parallelism • Multi-parameter learning: • Polylog number of iterations • Lower bounds
  154. 154. Federated Learning "Practical secure aggregation for privacy-preserving machine learning" Bonawitz, Ivanov, Kreuter, Marcedone, McMahan, Patel, Ramage, Segal, Seth, ACM CCS 2017
  155. 155. PROCHLO: Strong Privacy for Analytics in the Crowd Bittau, Erlingsson, Maniatis, Mironov, Raghunathan, Lie, Rudominer, Kode, Tinnes, Seefeld SOSP 2017
  156. 156. The ESA Architecture and Its Prochlo Realization ● ESA: Encode, Shuffle, Analyze ● Prochlo: a hardened ESA realization using Intel’s SGX + crypto
  157. 157. (ESA pipeline diagram: Local DP at the encoders; Unlinkability and Randomized Thresholding at the shufflers; Central DP at the analyzer)
  158. 158. Key takeaway points • Notion of differential privacy is a principled foundation for privacy- preserving data analyses • Local differential privacy is a powerful technique appropriate for Internet-scale telemetry • Other techniques (thresholding, shuffling) can be combined with differentially private algorithms or be used in isolation.
  159. 159. References: Differential privacy ● Review: Dwork, “A Firm Foundation for Private Data Analysis”, Communications of the ACM, 2011 ● Book: Dwork and Roth, “The Algorithmic Foundations of Differential Privacy”
  160. 160. References ● Google’s RAPPOR: Erlingsson, Pihur, Korolova, “RAPPOR: Randomized Aggregatable Privacy-Preserving Ordinal Response”, ACM CCS 2014; blog post ● Apple’s implementation: “Learning with Privacy at Scale”, Apple ML Journal, Dec 2017; Bassily, Nissim, Stemmer, Thakurta, “Practical Locally Private Heavy Hitters”, NIPS 2017; Tang, Korolova, Bai, Wang, Wang, “Privacy Loss in Apple’s Implementation of Differential Privacy on MacOS 10.12” ● LinkedIn Salary: Kenthapadi, Chudhary, Ambler, “LinkedIn Salary: A System for Secure Collection and Presentation of Structured Compensation Insights to Job Seekers”, IEEE PAC 2017; blog post
  161. 161. Thanks & Questions • Tutorial website: https://sites.google.com/view/kdd2018privacytutorial • Feedback most welcome :-) • kkenthapadi@linkedin.com, mironov@google.com • Related KDD’18 tutorial today afternoon (1pm-5pm): T12: Privacy at Scale: Local Differential Privacy in Practice, G. Cormode, T. Kulkarni, N. Li, T. Wang
