Privacy-preserving Data Mining in Industry (WSDM 2019 Tutorial)

Privacy-preserving
Data Mining in Industry
WSDM 2019 Tutorial
February 2019
Krishnaram Kenthapadi (AI @ LinkedIn)
Ilya Mironov (Google AI)
Abhradeep Thakurta (UC Santa Cruz)
https://sites.google.com/view/wsdm19-privacy-tutorial
Fairness Privacy
Transparency Explainability
Related WSDM’19 sessions:
1. Tutorial: Fairness-Aware Machine Learning: Practical Challenges and Lessons Learned (Monday, 13:30 – 17:00)
2. H.V. Jagadish's invited talk: Responsible Data Science (Tuesday, 14:45 – 15:30)
3. Session 4: FATE & Privacy (Tuesday, 16:15 – 17:30)
4. Aleksandra Korolova's invited talk: Privacy-Preserving WSDM (Wednesday, 14:45 – 15:30)
Outline / Learning Outcomes
• Privacy breaches and lessons learned
• Evolution of privacy techniques
• Differential privacy: definition and techniques
• Privacy techniques in practice: Challenges and Lessons Learned
• Google’s RAPPOR
• Apple’s differential privacy deployment for iOS
• Privacy in AI @ LinkedIn (Analytics framework & LinkedIn Salary)
• Key Takeaways
Privacy: A Historical Perspective
Evolution of Privacy Techniques and Privacy Breaches
Privacy Breaches and Lessons Learned
Attacks on privacy
•Governor of Massachusetts
•AOL
•Netflix
•Web browsing data
•Facebook
•Amazon
•Australian Gov't
born July 31, 1945
resident of 02138
Massachusetts Group Insurance Commission (1997): anonymized medical history of state employees (all hospital visits, diagnoses, prescriptions)
Latanya Sweeney (MIT grad student): $20 – Cambridge voter roll
William Weld vs Latanya Sweeney
64% uniquely identifiable with ZIP + birth date + gender (in the US population)
Golle, "Revisiting the Uniqueness of Simple Demographics in the US Population", WPES 2006
Attacker's Advantage
Auxiliary information
August 4, 2006: AOL Research publishes anonymized search logs of 650,000 users
August 9: The New York Times re-identifies searcher No. 4417749 as Thelma Arnold
AOL Data Release
Attacker's Advantage
Auxiliary information
Enough to succeed on a small fraction of inputs
Netflix Prize
Oct 2006: Netflix announces the Netflix Prize
• 10% of their users
• average 200 ratings per user
Narayanan, Shmatikov (2006): Deanonymizing Netflix Data
Narayanan, Shmatikov, "Robust De-anonymization of Large Datasets (How to Break Anonymity of the Netflix Prize Dataset)", 2008
● Noam Chomsky in Our Times
● Fahrenheit 9/11
● Jesus of Nazareth
● Queer as Folk
Key idea:
● Similar intuition as the attack on medical records
● Medical records: Each person can be identified
based on a combination of a few attributes
● Web browsing history: browsing history is unique for each person
● Each person has a distinctive social network → the set of links appearing in one's feed is unique
● Users are likely to visit links in their feed with higher probability than a random user
● “Browsing histories contain tell-tale marks of identity”
Su et al, De-anonymizing Web Browsing Data with Social Networks, 2017
De-anonymizing Web Browsing Data with Social Networks
Attacker's Advantage
Auxiliary information
Enough to succeed on a small fraction of inputs
High dimensionality
Ad targeting:
Korolova, "Privacy Violations Using Microtargeted Ads: A Case Study", PADM 2010
Privacy Attacks On Ad Targeting
10 campaigns targeting 1 person (zip code, gender,
workplace, alma mater)
Korolova, "Privacy Violations Using Microtargeted Ads: A Case Study", PADM 2010
Facebook vs Korolova
Age | Ad impressions in a week
21  | 0
22  | 0
23  | 8
…   | …
30  | 0
10 campaigns targeting 1 person (zip code, gender,
workplace, alma mater)
Korolova, "Privacy Violations Using Microtargeted Ads: A Case Study", PADM 2010
Facebook vs Korolova
Interest | Ad impressions in a week
A        | 0
B        | 0
C        | 8
…        | …
Z        | 0
● Context: Microtargeted Ads
● Takeaway: Attackers can instrument ad campaigns to
identify individual users.
● Two types of attacks:
○ Inference from Impressions
○ Inference from Clicks
Facebook vs Korolova: Recap
Attacker's Advantage
Auxiliary information
Enough to succeed on a small fraction of inputs
High dimensionality
Active
Attacking Amazon.com
Items frequently bought together
Bought: A B C D E
Z: "Customers Who Bought This Item Also Bought" A C D E
Calandrino, Kilzer, Narayanan, Felten, Shmatikov, "You Might Also Like: Privacy Risks of Collaborative Filtering", IEEE S&P 2011
Attacker's Advantage
Auxiliary information
Enough to succeed on a small fraction of inputs
High dimensionality
Active
Observant
Homer et al., "Resolving individuals contributing trace amounts of DNA to highly complex mixtures using high-density SNP genotyping microarrays", PLoS Genetics, 2008
[Figure: Bayesian analysis comparing an individual's genetic data against the GWAS mixture and a reference population]
“In all mixtures, the identification
of the presence of a person’s
genomic DNA was possible.”
Zerhouni, NIH Director:
“As a result, the NIH has removed from
open-access databases the aggregate
results (including P values and genotype
counts) for all the GWAS that had been
available on NIH sites”
… one week later
Attacker's Advantage
Auxiliary information
Enough to succeed on a small fraction of inputs
High dimensionality
Active
Observant
Clever
Australian Medicare Release
August 2016: Medical and prescription records from 1984–2014 for 10% of Australians (2.9M people) published by the federal government.
● Patient: year of birth, gender
● Medical events, codes, the state, price paid
● Dates are perturbed by ±2 weeks
● Supplier IDs are "encrypted"
September 2016: U of Melbourne researchers re-identified politicians, sports figures, and people from news reports.
● 55K women are unique based on their childbirth event(s)
October 2016: Government introduced a bill criminalizing re-identification of published government data. The bill is pending in committee.
“Health Data in an Open World”, Chris Culnane, Benjamin I. P. Rubinstein, Vanessa Teague, https://arxiv.org/abs/1712.05627
Negative Results
Dinur-Nissim
Data: 0 1 1 0 1 0 0 0 1 1 0 1
Query: Σ over a random subset
Dinur-Nissim 2003:
If the error of each answer is o(√n), then reconstruction of n − o(n) bits is possible
...even if 23.9% of errors are arbitrary [DMT07]
...even with O(n) queries [DY08]
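To make this concrete, here is a toy brute-force demonstration in Python (a sketch with assumed parameters n = 12 and noise bound E = 1, not the actual Dinur-Nissim attack): any candidate database consistent with lightly-noised subset-sum answers must agree with the true one on almost all bits.

```python
import itertools, random

# Toy reconstruction: subset-sum answers with noise bounded by
# E = o(sqrt(n)) pin down almost all bits of a small secret database.
n, E = 12, 1                                   # assumed toy parameters
secret = [random.randint(0, 1) for _ in range(n)]
queries = [[random.randint(0, 1) for _ in range(n)] for _ in range(4 * n)]
answers = [sum(q[i] * secret[i] for i in range(n)) + random.randint(-E, E)
           for q in queries]

def consistent(candidate):
    """Does `candidate` explain every noisy answer to within E?"""
    return all(abs(sum(q[i] * candidate[i] for i in range(n)) - a) <= E
               for q, a in zip(queries, answers))

guess = next(c for c in itertools.product([0, 1], repeat=n) if consistent(c))
print(sum(g == s for g, s in zip(guess, secret)), "of", n, "bits recovered")
```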
Dwork-Naor
Tore Dalenius's desideratum (a.k.a. "semantic security"):
"Access to a statistical database should not enable one to learn anything about an individual that could not be learned without access." (1977)
Dwork-Naor (~2006):
If the database teaches us anything, there is always some auxiliary information that breaks Dalenius's desideratum.
Differential Privacy
Curator
Defining Privacy
Curator
Defining Privacy: Fool's Errand
Defining Privacy
Curator (+ your data) vs. Curator (− your data)
Differential Privacy
Databases D and D′ are neighbors if they differ in one person's data.
ε-Differential Privacy: The distribution of the curator's output M(D) on database D is (nearly) the same as M(D′). Parameter ε quantifies information leakage:
∀S: Pr[M(D)∊S] ≤ exp(ε) ∙ Pr[M(D′)∊S].
Dwork, McSherry, Nissim, Smith [TCC 2006]
(ε,𝛿)-Differential Privacy: Parameter 𝛿 gives some slack:
∀S: Pr[M(D)∊S] ≤ exp(ε) ∙ Pr[M(D′)∊S] + 𝛿.
Dwork, Kenthapadi, McSherry, Mironov, Naor [EUROCRYPT 2006]
"Bad Outcomes" Interpretation
[Figure: output distributions f(D) and f(D′); the shaded tail marks bad outcomes, comparing the probability with and without record x]
Bayesian Interpretation
● Prior on databases p
● Observed output O
● Does the database contain record x?
Differential Privacy
● Robustness to auxiliary data
● Post-processing:
If M(D) is differentially private, so is f(M(D)).
● Composability:
Run two ε-DP mechanisms. Full interaction is 2ε-DP.
● Group privacy:
Graceful degradation in the presence of correlated inputs.
What Differential Privacy Isn’t
● Not an algorithm, an architecture, or a rule book
● Not secure computation: DP constrains what is computed, not how
● All-encompassing guarantee: trends may be
sensitive too
Strava Fitness App
BBC: “Fitness app Strava lights up staff at military bases”
Differential Privacy: Takeaway points
• Privacy as a notion of stability of randomized algorithms with respect to small perturbations in their input
• Worst-case definition
• Robust (to auxiliary data, correlated inputs)
• Composable
• Quantifiable
• Concept of a privacy budget
• Noise injection
Case Studies
Google’s RAPPOR
...Mountain View, 2014
Central Model
Curator
Local Model
Differential Privacy
ε-Differential Privacy: The distribution of the output M(D) on database
D is (nearly) the same as M(D′) for all adjacent databases D and D′:
∀S: Pr[M(D)∊S] ≤ exp(ε) ∙ Pr[M(D′)∊S].
Local Differential Privacy
ε-Local Differential Privacy: Each user randomizes their own data; the distribution of a user's report M(v) is (nearly) the same for any two inputs v and v′:
∀S: Pr[M(v)∊S] ≤ exp(ε) ∙ Pr[M(v′)∊S].
Local-Differentially Private Mechanisms
● Stanley L. Warner, "Randomized response: a survey technique for eliminating evasive answer bias", Journal of the American Statistical Association, March 1965.
● Arijit Chaudhuri, Rahul Mukerjee, Randomized Response: Theory and Techniques, 1988.
Randomized Response (Warner 1965)
Q1: Are you a citizen of the United States?
Q2: Are you not a citizen of the United States?
𝜃: the true fraction of citizens in the sample
Answer Q1 with probability p; answer Q2 with probability 1 − p
This mechanism is ln(p/(1 − p))-DP (for p > ½)
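A minimal Python sketch of Warner's design (function names are illustrative): with probability p the respondent answers Q1 truthfully, otherwise Q2, and the server inverts the known bias to estimate 𝜃.

```python
import random

def randomized_response(is_citizen: bool, p: float) -> bool:
    """Answer Q1 with probability p, its complement Q2 with probability 1 - p.
    P(yes | citizen) / P(yes | non-citizen) = p / (1 - p), i.e. ln(p/(1-p))-DP."""
    if random.random() < p:
        return is_citizen        # honest answer to Q1
    return not is_citizen        # honest answer to the negated question Q2

def estimate_theta(responses, p):
    """Unbiased estimate of the true fraction of citizens:
    E[yes rate] = p*theta + (1-p)*(1-theta), so invert the line."""
    yes_rate = sum(responses) / len(responses)
    return (yes_rate - (1 - p)) / (2 * p - 1)
```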
RAPPOR
Erlingsson, Pihur, Korolova. "RAPPOR: Randomized Aggregatable Privacy-Preserving
Ordinal Response." ACM CCS 2014.
RAPPOR: two-level randomized response
Can we do repeated surveys of sensitive attributes?
— Average of randomized responses will reveal a user’s true answer :-(
Solution: Memoize! Re-use the same random answer
— Memoization can hurt privacy too! Long, random bit sequence can
be a unique tracking ID :-(
Solution: Use two levels! Randomize the memoized response
RAPPOR: two-level randomized response
● Store client value v into Bloom filter B using hash functions
● Memoize a Permanent Randomized Response (PRR) B′
● Report an Instantaneous Randomized Response (IRR) S
f = ½
q = ¾ , p = ½
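A simplified Python sketch of the two levels (illustrative, not Chromium's implementation); f, p, q are the parameters above, and a deterministic per-user seed stands in for the client-side memoization of the PRR.

```python
import random

def prr(bloom_bits, f, user_seed):
    """Permanent Randomized Response: each Bloom-filter bit is replaced by
    1 w.p. f/2, by 0 w.p. f/2, and kept w.p. 1 - f. A fixed per-user seed
    makes the answer memoized (the same value always yields the same PRR)."""
    rng = random.Random(user_seed)
    out = []
    for b in bloom_bits:
        r = rng.random()
        out.append(1 if r < f / 2 else 0 if r < f else b)
    return out

def irr(prr_bits, p, q):
    """Instantaneous Randomized Response: fresh noise on every report;
    a bit is reported as 1 w.p. q if the PRR bit is 1, and w.p. p if it is 0."""
    return [1 if random.random() < (q if b else p) else 0 for b in prr_bits]

# e.g., with the parameters above:
# report = irr(prr(B, f=0.5, user_seed=42), p=0.5, q=0.75)
```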
RAPPOR: Life of a report
Value ("www.google.com") → Bloom filter → PRR → IRR
PRR stage: a 0-bit becomes 1 with P(1) = 0.25; a 1-bit stays 1 with P(1) = 0.75
IRR stage: a bit is reported as 1 with P(1) = 0.50 if the PRR bit is 0, and with P(1) = 0.75 if the PRR bit is 1
Differential privacy of RAPPOR
● Permanent Randomized Response satisfies differential privacy at ε = 4 ln(3)
● Instantaneous Randomized Response satisfies differential privacy at ε = ln(3)
Differential Privacy of RAPPOR:
Measurable privacy bounds
Each report offers differential privacy with ε = ln(3)
Attacker’s guess goes from 0.1% → 0.3% in the worst case
Differential privacy even if attacker gets all reports (infinite data!!!)
Also… Base Rate Fallacy prevents attackers from finding needles in
haystacks
Cohorts
Bloom Filter: 2 bits out of 128 — too many false positives
[Figure: each user is randomly assigned to one of 128 cohorts, each with its own hash functions; e.g., user 0xA0FE91B76 reports google.com within cohort 2]
Decoding RAPPOR
From Raw Counts to De-noised Counts
[Figure: true bit counts (no noise) vs. de-noised RAPPOR reports]
From De-noised Counts to Distribution
[Figure: de-noised bit counts decomposed into contributions from the candidate strings google.com, yahoo.com, bing.com]
Linear regression: min_X ||B − AX||₂
LASSO: min_X ||B − AX||₂² + λ||X||₁
Hybrid:
1. Find the support of X via LASSO
2. Solve linear regression on the support to find the weights
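A sketch of the hybrid decoding step using scikit-learn (A, B, and the λ value are placeholders: A maps candidate strings to their expected, de-noised bit patterns; B holds the observed de-noised counts):

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def decode_rappor(A, B, lam=0.1):
    """Hybrid decoding: LASSO picks which candidates are present,
    then ordinary least squares on that support estimates their weights."""
    lasso = Lasso(alpha=lam, positive=True, fit_intercept=False).fit(A, B)
    support = np.flatnonzero(lasso.coef_)        # candidates with nonzero mass
    if support.size == 0:
        return {}
    ols = LinearRegression(positive=True, fit_intercept=False)
    ols.fit(A[:, support], B)
    return dict(zip(support.tolist(), ols.coef_))
```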
Deploying RAPPOR
Coverage
Explaining RAPPOR
“Having the cake and eating it too…”
“Seeing the forest without seeing the trees…”
Metaphor for RAPPOR
Microdata: An Individual's Report
Each bit is flipped with probability 25%
Big Picture Remains!
Google Chrome Privacy White Paper
https://www.google.com/chrome/browser/privacy/whitepaper.html
Phishing and malware protection
Google Chrome includes an optional feature called "Safe Browsing" to help protect you against phishing and malware attacks. This
helps prevent evil-doers from tricking you into sharing personal information with them (“phishing”) or installing malicious software
on your computer (“malware”). The approach used to accomplish this was designed specifically to protect your privacy and is also
used by other popular browsers.
If you'd rather not send any information to Safe Browsing, you can also turn these features off. Please be aware that Chrome will no
longer be able to protect you from websites that try to steal your information or install harmful software if you disable this feature.
We really don't recommend turning it off.
…
If a URL was indeed dangerous, Chrome reports this anonymously to Google to improve Safe Browsing. The data sent is randomized,
constructed in a manner that ensures differential privacy, permitting only monitoring of aggregate statistics that apply to tens of
thousands of users at minimum. The reports are an instance of Randomized Aggregatable Privacy-Preserving Ordinal Responses,
whose full technical details have been published in a technical report and presented at the 2014 ACM Computer and Communications
Security conference. This means that Google cannot infer which website you have visited from this.
Developers’ Uptake
RAPPOR:
Lessons Learned
Growing Pains
● Transitioning from a research prototype to a real product
● Scalability
● Versioning
Communicating Uncertainty
Maintaining Candidates List
[Figure: decoded distributions with no missing candidates vs. three missing candidates; unexplained proportions of 4% vs. 13% and 17%]
RAPPOR Metrics in Chrome
https://chromium.googlesource.com/chromium/src/+log/master/tools/metrics/rappor/rappor.xml
Open Source Efforts
https://github.com/google/rappor
- demo you can run with a couple of shell commands
- client library in several languages
- analysis tools and simulation
- documentation
Follow-up
- Bassily, Smith, "Local, Private, Efficient Protocols for Succinct Histograms", STOC 2015
- Kairouz, Bonawitz, Ramage, "Discrete Distribution Estimation under Local Privacy", https://arxiv.org/abs/1602.07387
- Qin et al., "Heavy Hitter Estimation over Set-Valued Data with Local Differential Privacy", CCS 2016
Key takeaway points
RAPPOR: a locally differentially private mechanism for reporting categorical and string data
● First Internet-scale deployment of differential privacy
● Explainable
● Conservative
● Open-sourced
Apple's On-Device Differential
Privacy
Abhradeep Thakurta, UC Santa Cruz
Apple WWDC, June 2016
References
https://arxiv.org/abs/1709.02753
Phablet
Derp
Photobomb
Woot
Phablet
OMG
Woot
Troll
Prepone
Phablet
awwww
dp
Learning from private data
Learn new (and frequent) words typed
Learning from private data
Learn frequent emojis typed
Apple's On-Device Differential
Privacy: Discovering New Words
Roadmap
1. Private frequency estimation with count-min-sketch
2. Private heavy hitters with puzzle piece algorithm
3. Private heavy hitters with tree histogram protocol
Private Frequency Oracle
Private frequency oracle
Building block for private heavy hitters
Users hold words 𝑑₁, 𝑑₂, …, 𝑑ₙ
All errors within 𝛾 = O(√(n log|𝒮|))
[Figure: estimated frequency over the words 𝒮, e.g. frequency("phablet"), with error band 𝛾]
Private frequency oracle: Design constraints
Computational and communication constraints:
Client side: must be far below the size of the domain (|S|) and n
Communication to server: very few bits
Server-side cost for one query: must be far below the size of the domain (|S|) and n
Private frequency oracle: Design constraints
Why this is hard:
Number of possible characters > 3,000
For 8-character words: size of the domain |S| = 3,000⁸
Number of clients ~ 1B
Client-side cost: ~ n if done efficiently [BS15]; our goal: O(log |S|)
Private frequency oracle: Design constraints
Computational and communication constraints:
Client side: O(log |S|)
Communication to server: O(1) bits
Server-side cost for one query: O(log |S|)
Private frequency oracle
A starter solution: Randomized response
[Figure: each user encodes their word 𝑑 as a one-hot bit vector over 𝒮 and flips each bit independently]
Protects ε-differential privacy (with the right flipping bias)
The server sums the randomized responses d′ and applies bias correction to estimate the frequency of every domain element
Error in each estimate: Θ(√(n log|𝒮|)), the optimal error under privacy
Private frequency oracle
A starter solution: Randomized response
Computational and communication constraints:
Client side: O(|S|)
Communication to server: O(|S|) bits
Server-side cost for one query: O(1)
[Figure: client hashes word 𝑑 with k ≈ log|𝒮| hash functions ℎ₁ … ℎₖ into 𝑛 bins, setting one bit per row]
Client computation: O(log|𝒮|)
Private frequency oracle
Non-private count-min sketch [CM05]
Reducing server computation:
[Figure: server sums the clients' k-row bit vectors into a k × 𝑛 table of counters]
To query "Phablet": read its counter in each of the k rows (e.g., 9146, 2212, 2132) and take the minimum.
Frequency estimate:
min (9146, 2212, 2132)
Error in each estimate: O(√(n log|𝒮|))
Server-side query cost: O(log|𝒮|), with k ≈ log|𝒮|
Private frequency oracle
Private count-min sketch
Making client computation differentially private:
[Figure: client flips each bit of its k-row encoding of 𝑑 independently]
Naively this is kε-differentially private, since k pieces of information are randomized
Private frequency oracle
Private count-min sketch
Theorem: Sampling a single row ensures ε-differential privacy without hurting accuracy; in fact, it improves accuracy by a factor of k
Private frequency oracle
Private count-min sketch
Reducing client communication
[Figure: the sampled row's bit vector is mapped to ±1 coefficients via the Hadamard transform]
Private frequency oracle
Private count-min sketch
Reducing client communication
[Figure: the client samples a single ±1 Hadamard coefficient and reports it]
Communication: O(1) bits
Theorem: Hadamard transform and sampling
do not hurt accuracy
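A Python sketch of the O(1)-bit client report (an illustration of the idea, not Apple's code): for a one-hot vector, the j-th Hadamard coefficient is ±1 and computable directly from the bucket index, so the client samples one coefficient and randomizes its sign.

```python
import math, random

def hadamard_entry(j: int, p: int) -> int:
    # Sylvester-ordered Hadamard matrix: H[j, p] = (-1)^popcount(j AND p)
    return 1 - 2 * (bin(j & p).count("1") & 1)

def client_report(bucket: int, m: int, epsilon: float):
    """Sample one Hadamard coefficient of the one-hot vector e_bucket
    (m a power of 2) and flip its sign w.p. 1/(e^eps + 1): eps-locally-DP.
    The index j can come from public randomness, so the payload is one bit."""
    j = random.randrange(m)
    bit = hadamard_entry(j, bucket)           # true coefficient, +1 or -1
    if random.random() >= math.exp(epsilon) / (math.exp(epsilon) + 1.0):
        bit = -bit
    return j, bit
```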
Private frequency oracle
Private count-min sketch
Computational and communication constraints:
Client side: O(log |S|)
Communication to server: O(1) bits
Server-side cost for one query: O(log |S|)
Error in each estimate: O(√(n log|𝒮|))
Roadmap
1. Private frequency estimation with count-min-sketch
2. Private heavy hitters with puzzle piece algorithm
3. Private heavy hitters with tree histogram protocol
Private heavy hitters:
Using the frequency oracle
Private frequency oracle
Private count-min sketch
Given the oracle, any element s ∈ 𝒮 can be queried for frequency(s); the goal is to find all s ∈ 𝒮 with frequency > γ.
Problem: too many elements in 𝒮 to search exhaustively.
Roadmap
1. Private frequency estimation with count-min-sketch
2. Private heavy hitters with puzzle piece algorithm
3. Private heavy hitters with tree histogram protocol
Puzzle piece algorithm
(works well in practice, no theoretical guarantees)
[Bassily Nissim Stemmer Thakurta, 2017 and Apple differential privacy team, 2017]
Private heavy hitters
Observation: If a word is frequent, its bi-grams are frequent too.
"Ph ab le t$": if the word's frequency > 𝛾, then each bi-gram's frequency > 𝛾
Private heavy hitters
Natural algorithm: Cartesian product of frequent bi-grams
Clients report sanitized bi-grams and the complete word.
Frequent bi-grams per position:
P1: {ab, ad, ph} | P2: {ba, ab, ax} | P3: {le, ab} | P4: {le, ab, t$}
Private heavy hitters
Candidate words: P1 × P2 × P3 × P4
Filter the candidates through the private frequency oracle (private count-min sketch) to find the frequent words
Private heavy hitters
Natural algorithm: Cartesian product of frequent bi-grams
Problem: combinatorial explosion, since in practice all bi-grams are frequent
Puzzle piece algorithm
Compute h = Hash(Phablet), where Hash: 𝒮 → {1, …, ℓ}
Tag each bi-gram with h: (Ph, h), (ab, h), (le, h), (t$, h)
Clients report privatized bi-grams tagged with the hash, and the complete word
Puzzle piece algorithm: Server side
Frequent bi-grams tagged with {1, …, ℓ}, per position:
P1: (ab,1), (ad,5), (Ph,3) | P2: (ba,4), (ab,3), (ax,9) | P3: (le,3), (le,7), (ab,1) | P4: (le,1), (ab,9), (t$,3)
Combine only matching bi-grams (same tag) into candidate words P1 × P2 × P3 × P4; e.g., tag 3 yields Ph·ab·le·t$
Then find the frequent words with the private frequency oracle (private count-min sketch)
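A sketch of the server-side matching (the data layout is an assumption for illustration): bi-grams are combined across positions only when they carry the same hash tag, and the surviving candidates are then checked against the frequency oracle.

```python
from itertools import product

def candidate_words(tagged_bigrams_by_position):
    """tagged_bigrams_by_position: one dict per position, mapping a hash
    tag in {1..l} to the set of frequent bi-grams seen with that tag.
    Only bi-grams sharing a tag are combined, taming the Cartesian product."""
    common_tags = set.intersection(
        *(set(d.keys()) for d in tagged_bigrams_by_position))
    candidates = set()
    for tag in common_tags:
        slots = [d[tag] for d in tagged_bigrams_by_position]
        for parts in product(*slots):      # Cartesian product within one tag
            candidates.add("".join(parts))
    return candidates                      # then filter via the frequency oracle
```

On the example above, tag 3 combines (Ph, ab, le, t$) into "Phablet$", while bi-grams with mismatched tags never meet.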
Roadmap
1. Private frequency estimation with count-min-sketch
2. Private heavy hitters with puzzle piece algorithm
3. Private heavy hitters with tree histogram protocol
Tree histogram algorithm
(works well in practice + optimal theoretical guarantees)
[Bassily Nissim Stemmer Thakurta, 2017]
Private heavy hitters:
Tree histograms (based on [CM05])
Any string in 𝒮 can be written as log|𝒮| bits
Idea: Construct the prefixes of each heavy hitter bit by bit
Private heavy hitters:
Tree histograms
Level 1: Frequent prefixes of length 1 (0 or 1)
Use the private frequency oracle; if a string is a heavy hitter, its prefixes are too.
Private heavy hitters:
Tree histograms
Level 2: Frequent prefixes of length 2 (00, 01, 10, 11), extending the survivors of level 1
Idea: Each level has at most ≈ √𝑛 heavy hitters
Computational and communication constraints:
Client side: O(log |S|)
Communication to server: O(1) bits
Server-side computation: O(n log |S|)
Theorem: Finds all heavy hitters with frequency at least O(√(n log|𝒮|))
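A Python sketch of the level-by-level search (freq_of_prefix is a hypothetical wrapper around the private frequency oracle):

```python
def tree_histogram(freq_of_prefix, num_bits: int, gamma: float):
    """Grow heavy-hitter prefixes one bit at a time: a string can only be
    frequent if every prefix of it is, so each level keeps at most ~n/gamma
    candidates and the total work stays O(n log|S|)."""
    candidates = [""]
    for _ in range(num_bits):
        extended = [prefix + bit for prefix in candidates for bit in "01"]
        # keep only prefixes whose (noisy) frequency clears the threshold
        candidates = [p for p in extended if freq_of_prefix(p) >= gamma]
    return candidates          # the heavy hitters, as full-length bit strings
```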
Key takeaway points
• Keeping local differential privacy constant:
• One low-noise report is better than many noisy ones
• A weak signal with probability 1 is better than a strong signal with small probability
• We can learn the dictionary – at a cost
• Longitudinal privacy remains a challenge
Microsoft: Discretization of continuous variables (NIPS 2017)
"These guarantees are particularly strong when user’s behavior remains
approximately the same, varies slowly, or varies around a small number of
values over the course of data collection."
Microsoft's deployment
"Our mechanisms have been deployed by
Microsoft across millions of devices ... to protect
users’ privacy while collecting application usage
statistics."
B. Ding, J. Kulkarni, S. Yekhanin, NeurIPS 2017
Microsoft Research Blog, Dec 8, 2017
Privacy in AI @ LinkedIn
• Framework to compute robust, privacy-preserving analytics
• Privacy challenges/design for a large crowdsourced system (LinkedIn Salary)
Analytics & Reporting Products at LinkedIn
Profile View Analytics
Content Analytics
Ad Campaign Analytics
All showing demographics of members engaging with the product
• Admit only a small # of predetermined query types
• Querying for the number of member actions, for a specified time period,
together with the top demographic breakdowns
E.g., member action = clicks on a given ad; demographic breakdown = Title: "Senior Director"
Privacy Requirements
• Attacker cannot infer whether a member performed an action
• E.g., click on an article or an ad
• Attacker may use auxiliary knowledge
• E.g., knowledge of attributes associated with the target member (say,
obtained from this member’s LinkedIn profile)
• E.g., knowledge of all other members that performed similar action
Possible Privacy Attacks
Targeting:
Senior directors in US, who studied at Cornell
Matches ~16k LinkedIn members
→ over minimum targeting threshold
Demographic breakdown:
Company = X
May match exactly one person
→ can determine whether the person
clicks on the ad or not
Require a minimum reporting threshold?
Still amenable to attacks (see our ACM CIKM'18 paper for details)
Rounding mechanism (e.g., report in increments of 10)?
Still amenable to attacks, e.g. using incremental counts over time to infer individuals' actions
→ Need rigorous techniques to preserve member privacy (not reveal exact aggregate counts)
Key Product Desiderata
• Coverage & Utility
• Data Consistency
• for repeated queries
• over time
• between total and breakdowns
• across entity/action hierarchy
• for top k queries
Problem Statement
Compute robust, reliable analytics in a privacy-preserving manner, while addressing the product desiderata such as coverage, utility, and consistency.
Differential Privacy: Random Noise Addition
If the ℓ₁-sensitivity of f : 𝒟 → ℝⁿ is s = max_{D,D′} ||f(D) − f(D′)||₁, then releasing f(D) + Laplaceⁿ(s/ε) offers ε-differential privacy.
Dwork, McSherry, Nissim, Smith, “Calibrating Noise to Sensitivity in Private Data Analysis”, TCC 2006
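A one-function Python sketch of the mechanism above; for example, a counting query has ℓ₁-sensitivity s = 1.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float,
                      epsilon: float) -> float:
    """Release f(D) + Lap(s/eps), which is eps-differentially private
    when s bounds the l1-sensitivity of f."""
    return true_value + np.random.laplace(loc=0.0,
                                          scale=sensitivity / epsilon)

# e.g., an eps = 0.5 release of a count: laplace_mechanism(1234, 1, 0.5)
```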
PriPeARL: A Framework for Privacy-Preserving Analytics
K. Kenthapadi, T. T. L. Tran, ACM CIKM 2018
Pseudo-random noise generation, inspired by differential privacy:
● Inputs: entity id (e.g., ad creative/campaign/account), demographic dimension, stat type (impressions, clicks), time range, and a fixed secret seed
● Cryptographic hash of the inputs, normalized to (0,1) → uniformly random fraction
● Uniform fraction → Laplace noise with fixed ε
● True count + noise → noisy count (satisfying the consistency requirements)
● Pseudo-random noise → the same query gets the same result over time, avoiding averaging attacks
● For non-canonical queries (e.g., time ranges, aggregates over multiple entities):
○ Use the hierarchy to partition into canonical queries
○ Compute noise for each canonical query and sum up the noisy counts
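A sketch of the pseudo-random noise generation in Python (the field names and the SHA-256/inverse-CDF choices are assumptions for illustration, not LinkedIn's code): the same canonical query always hashes to the same noise value, which defeats averaging attacks.

```python
import hashlib, math

def pripearl_noisy_count(true_count, entity_id, dimension, stat_type,
                         time_range, secret_seed, epsilon):
    """Deterministic Laplace noise seeded by a cryptographic hash of the
    canonical query, so repeated queries return the same noisy count."""
    key = f"{secret_seed}|{entity_id}|{dimension}|{stat_type}|{time_range}"
    digest = hashlib.sha256(key.encode()).hexdigest()
    u = int(digest, 16) / float(1 << 256)    # uniform fraction in [0, 1)
    u = min(max(u, 1e-12), 1 - 1e-12)        # keep strictly inside (0, 1)
    # Inverse CDF of Laplace(0, 1/eps) applied to the uniform fraction.
    noise = -(1.0 / epsilon) * math.copysign(1.0, u - 0.5) \
            * math.log(1.0 - 2.0 * abs(u - 0.5))
    # Rounded so released counts stay integral (an assumption for illustration).
    return true_count + round(noise)
```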
System Architecture
Lessons Learned from Deployment (> 1 year)
• Semantic consistency vs. unbiased, unrounded noise
• Suppression of small counts
• Online computation and performance requirements
• Scaling across analytics applications
• Tools for ease of adoption (code/API library, hands-on how-to tutorial) help!
Summary
• Framework to compute robust, privacy-preserving analytics
• Addressing challenges such as preserving member privacy, product coverage,
utility, and data consistency
• Future
• Utility maximization problem given constraints on the 'privacy loss budget' per user
• E.g., add noise with larger variance to impressions and less noise to clicks (or conversions)
• E.g., add more noise to broader time-range sub-queries and less noise to granular time-range sub-queries
• Reference: K. Kenthapadi, T. Tran, PriPeARL: A Framework for Privacy-Preserving
Analytics and Reporting at LinkedIn, ACM CIKM 2018.
Acknowledgements
•Team:
• AI/ML: Krishnaram Kenthapadi, Thanh T. L. Tran
• Ad Analytics Product & Engineering: Mark Dietz, Taylor Greason, Ian
Koeppe
• Legal / Security: Sara Harrington, Sharon Lee, Rohit Pitke
•Acknowledgements (in alphabetical order)
• Deepak Agarwal, Igor Perisic, Arun Swami
LinkedIn Salary
Outline
• LinkedIn Salary Overview
• Challenges: Privacy, Modeling
• System Design & Architecture
• Privacy vs. Modeling Tradeoffs
LinkedIn Salary (launched in November 2016)
Salary Collection Flow via Email Targeting
Current Reach (February 2019)
• A few million responses out of several millions of members targeted
• Targeted via emails since early 2016
• Countries: US, CA, UK, DE, IN, …
• Insights available for a large fraction of US monthly active users
Data Privacy Challenges
• Minimize the risk of inferring any one individual’s compensation data
• Protection against data breach
• No single point of failure
Achieved by a combination of techniques: encryption, access control, de-identification, aggregation, thresholding
K. Kenthapadi, A. Chudhary, and S. Ambler, "LinkedIn Salary: A System for Secure Collection and Presentation of Structured Compensation Insights to Job Seekers", IEEE PAC 2017 (arxiv.org/abs/1705.06976)
Modeling Challenges
• Evaluation
• Modeling on de-identified data
• Robustness and stability
• Outlier detection
X. Chen, Y. Liu, L. Zhang, and K. Kenthapadi, "How LinkedIn Economic Graph Bonds Information and Product: Applications in LinkedIn Salary", KDD 2018 (arxiv.org/abs/1806.09063)
K. Kenthapadi, S. Ambler, L. Zhang, and D. Agarwal, "Bringing Salary Transparency to the World: Computing Robust Compensation Insights via LinkedIn Salary", CIKM 2017 (arxiv.org/abs/1703.09845)
Problem Statement
• How do we design the LinkedIn Salary system, taking into account its unique privacy and security challenges while addressing the product requirements?
Differential Privacy? [Dwork et al, 2006]
• Rich privacy literature (Adam-Worthmann, Samarati-Sweeney, Agrawal-Srikant, …,
Kenthapadi et al, Machanavajjhala et al, Li et al, Dwork et al)
• Limitation of anonymization techniques (as discussed in the first part)
• Worst-case sensitivity of quantiles to any one user's compensation data is large
• → Large noise would need to be added, hurting reliability/usefulness
• Need compensation insights on a continual basis
• Theoretical work on applying differential privacy under continual observations, but no practical implementations/applications
• Local differential privacy / randomized-response approaches (Google's RAPPOR, Apple's iOS differential privacy, Microsoft's telemetry collection) are not applicable
De-identification Example
Original submission:
Title: User Exp Designer | Region: SF Bay Area | Company: Google | Industry: Internet | Years of exp: 12 | Degree: BS | FoS: Interactive Media | Skills: UX, Graphics, … | $$: 100K
De-identified cohorts:
(Title, Region): User Exp Designer | SF Bay Area | 100K; 115K; …
(Title, Region, Industry): User Exp Designer | SF Bay Area | Internet | 100K
(Title, Region, Years of exp): User Exp Designer | SF Bay Area | 10+ | 100K
(Title, Region, Company, Years of exp): User Exp Designer | SF Bay Area | Google | 10+ | 100K
#data points > threshold? Yes ⇒ copy to Hadoop (HDFS)
Note: Original submission stored as encrypted objects.
System
Architecture
Collection & Storage
Collection & Storage
• Allow members to submit their compensation info
• Extract member attributes
• E.g., canonical job title, company, region, by invoking LinkedIn standardization services
• Securely store member attributes & compensation data
De-identification & Grouping
De-identification & Grouping
• Approach inspired by k-Anonymity [Samarati-Sweeney]
• "Cohort" or "Slice"
• Defined by a combination of attributes
• E.g., "User experience designers in SF Bay Area"
• Contains aggregated compensation entries from the corresponding individuals
• No user name, id, or any attributes other than those that define the cohort
• A cohort is available for offline processing only if it has at least k entries (see the sketch below)
• Apply LinkedIn standardization software (free-form attribute → canonical version) before grouping
• Analogous to the generalization step in k-Anonymity
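A toy pandas sketch of the thresholding step (column names are hypothetical): a cohort defined by a combination of attributes is made available for offline processing only if it has at least k entries.

```python
import pandas as pd

def eligible_cohorts(df: pd.DataFrame, attrs: list, k: int) -> pd.DataFrame:
    """Keep only the rows whose cohort (the attribute combination in `attrs`)
    contains at least k de-identified submissions."""
    sizes = df.groupby(attrs).size()
    keep = sizes[sizes >= k].index
    return df.set_index(attrs).loc[keep].reset_index()

# e.g., eligible_cohorts(submissions, ["title", "region"], k=20)
```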
De-identification & Grouping
• Slicing service
• Access member attribute info &
submission identifiers (no
compensation data)
• Generate slices & track #
submissions for each slice
• Preparation service
• Fetch compensation data (using
submission identifiers), associate
with the slice data, copy to HDFS
Insights & Modeling
Insights & Modeling
• Salary insight service
• Check whether the member is
eligible
• Give-to-get model
• If yes, show the insights
• Offline workflow
• Consume de-identified HDFS
dataset
• Compute robust compensation
insights
• Outlier detection
• Bayesian smoothing/inference
• Populate the insight key-value
stores
Security Mechanisms
• Encryption of
member attributes
& compensation
data using different
sets of keys
• Separation of
processing
• Limiting access to
the keys
• Key rotation
• No single point of
failure
• Infra security
Preventing Timestamp-Join Attacks
• Inference attack by joining these on timestamp:
• De-identified compensation data
• Page view logs (when a member accessed the compensation collection web interface)
• → Not desirable to retain the exact timestamp
• Perturb by adding random delay (say, up to 48 hours)?
• Better: a modification based on k-Anonymity
• Generalization using a hierarchy of timestamps, but it needs to be incremental
• → Process entries within a cohort in batches of size k
• Generalize each batch to a common timestamp
• Make additional data available only in such incremental batches (sketch below)
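A Python sketch of the incremental batching (the entry schema with a "ts" field is an assumption): entries within a cohort are released in batches of size k, all generalized to a single shared timestamp, and any incomplete tail batch is held back.

```python
def release_in_batches(entries, k):
    """Generalize timestamps within a cohort: sort by time, cut into
    batches of k, stamp every entry in a batch with the batch's latest
    timestamp, and withhold the incomplete tail batch."""
    entries = sorted(entries, key=lambda e: e["ts"])
    released = []
    for start in range(0, len(entries) - len(entries) % k, k):
        batch = entries[start:start + k]
        common_ts = batch[-1]["ts"]   # coarsened timestamp shared by the batch
        released.extend({**e, "ts": common_ts} for e in batch)
    return released
```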
Privacy vs Modeling Tradeoffs
• LinkedIn Salary system deployed in production for ~2.5 years
• Study tradeoffs between privacy guarantees (‘k’) and data available for
computing insights
• Dataset: Compensation submission history from 1.5M LinkedIn members
• Amount of data available vs. minimum threshold, k
• Effect of processing entries in batches of size, k
[Figures: amount of data available vs. threshold k; percent of data available vs. batch size k; median delay due to batching vs. batch size k]
Key takeaway points
• LinkedIn Salary: a new internet application, with
unique privacy/modeling challenges
• Privacy vs. Modeling Tradeoffs
• Potential directions
• Privacy-preserving machine learning models in a practical setting
[e.g., Chaudhuri et al, JMLR 2011; Papernot et al, ICLR 2017]
• Provably private submission of compensation entries?
Beyond Randomized Response
• LDP + Machine Learning:
• "Is interaction necessary for distributed private learning?"
Smith, Thakurta, Upadhyay, S&P 2017
• Federated Learning
• DP + Machine Learning
• Encode-Shuffle-Analyze architecture
"Prochlo: Strong Privacy for Analytics in the Crowd"
Bittau et al., SOSP 2017
• Amplification by Shuffling
LDP + Machine Learning
Interactivity as a major implementation constraint
[Figure: parallel (non-interactive) reports vs. sequential (interactive) rounds between clients and server]
"Is interaction necessary for distributed private learning?"
[Smith, Thakurta, Upadhyay, S&P2017]
• Single parameter learning (e.g., median):
• Maximal accuracy with full parallelism
• Multi-parameter learning:
• Polylog number of iterations
• Lower bounds
Federated Learning
"Practical secure aggregation for privacy-preserving machine learning"
Bonawitz, Ivanov, Kreuter, Marcedone, McMahan, Patel, Ramage, Segal,
Seth, ACM CCS 2017
Federated Learning in Gboard
ML and Differential Privacy
"Generalization Implies Privacy" Fallacy
We don’t overfit, therefore
our model cannot possibly
violate privacy.
“Generalization Implies Privacy” Fallacy
Generalization
● average case
● model’s accuracy
Privacy
● worst case
● model’s parameters
“Generalization Implies Privacy” Fallacy
● Examples when it just ain’t so:
○ Person-to-person similarities
○ Support Vector Machines
● Models can be very large
○ Millions of parameters
Somali to English Translation [series of screenshots of nonsensical translations]
Maori to English [screenshot]
ML + Differential Privacy
• [DP-SGD] Abadi, Chu, Goodfellow, McMahan, Mironov, Talwar, Zhang,
"Deep Learning with Differential Privacy", ACM CCS 2016
• [PATE] Papernot, Abadi, Erlingsson, Goodfellow, Talwar, "Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data", ICLR 2017
• [PATE] Papernot, Song, Mironov, Raghunathan, Talwar, Erlingsson, "Scalable Private Learning with PATE", ICLR 2018
https://github.com/tensorflow/privacy
Statistics + Differential Privacy
Harvard Privacy Tools Project:
Census 2020 and Differential Privacy
Key takeaway points
• Notion of differential privacy is a principled foundation for privacy-
preserving data analyses
• Local differential privacy is a powerful technique appropriate for
Internet-scale telemetry
• Other techniques (thresholding, shuffling) can be combined with
differentially private algorithms or be used in isolation.
References
Differential privacy:
Review: Dwork, "A Firm Foundation for Private Data Analysis", Communications of the ACM, 2011
Book: Dwork, Roth, "The Algorithmic Foundations of Differential Privacy"
References
Google's RAPPOR:
Paper: Erlingsson, Pihur, Korolova, "RAPPOR: Randomized Aggregatable Privacy-Preserving Ordinal Response", ACM CCS 2014
Blog
Apple's implementation:
Article: "Learning with Privacy at Scale", Apple Machine Learning Journal, Dec 2017
Paper: Bassily, Nissim, Stemmer, Thakurta, "Practical Locally Private Heavy Hitters", NIPS 2017
Paper: Tang, Korolova, Bai, Wang, Wang, "Privacy Loss in Apple's Implementation of Differential Privacy on MacOS 10.12"
LinkedIn's privacy-preserving analytics framework:
Paper: Kenthapadi, Tran, "PriPeARL: A Framework for Privacy-Preserving Analytics and Reporting at LinkedIn", CIKM 2018
LinkedIn Salary:
Paper: Kenthapadi, Chudhary, Ambler, "LinkedIn Salary: A System for Secure Collection and Presentation of Structured Compensation Insights to Job Seekers", IEEE PAC 2017
Blog
Thanks! Questions?
• Tutorial website: https://sites.google.com/view/wsdm19-privacy-tutorial
• Feedback most welcome
• kkenthapadi@linkedin.com, mironov@google.com
Backup Slides
PROCHLO:
Strong Privacy for Analytics in the Crowd
Bittau, Erlingsson, Maniatis, Mironov, Raghunathan,
Lie, Rudominer, Kode, Tinnes, Seefeld
SOSP 2017
The ESA Architecture and Its Prochlo Realization
ESA: Encode, Shuffle, Analyze
[Figure: encoders (E) on client devices feed a shuffler (S), which feeds the analyzer (A, Σ)]
Prochlo: a hardened ESA realization using Intel's SGX + crypto
[Figure: the pipeline's stages provide local DP (encode), unlinkability and randomized thresholding (shuffle), and central DP (analyze)]
1 of 206

Recommended

Privacy-preserving Data Mining in Industry (WWW 2019 Tutorial) by
Privacy-preserving Data Mining in Industry (WWW 2019 Tutorial)Privacy-preserving Data Mining in Industry (WWW 2019 Tutorial)
Privacy-preserving Data Mining in Industry (WWW 2019 Tutorial)Krishnaram Kenthapadi
11.9K views199 slides
Privacy-preserving Data Mining in Industry: Practical Challenges and Lessons ... by
Privacy-preserving Data Mining in Industry: Practical Challenges and Lessons ...Privacy-preserving Data Mining in Industry: Practical Challenges and Lessons ...
Privacy-preserving Data Mining in Industry: Practical Challenges and Lessons ...Krishnaram Kenthapadi
2.4K views163 slides
Fairness, Transparency, and Privacy in AI @ LinkedIn by
Fairness, Transparency, and Privacy in AI @ LinkedInFairness, Transparency, and Privacy in AI @ LinkedIn
Fairness, Transparency, and Privacy in AI @ LinkedInKrishnaram Kenthapadi
792 views117 slides
Fairness and Privacy in AI/ML Systems by
Fairness and Privacy in AI/ML SystemsFairness and Privacy in AI/ML Systems
Fairness and Privacy in AI/ML SystemsKrishnaram Kenthapadi
1.6K views65 slides
Recommendation systems by
Recommendation systems  Recommendation systems
Recommendation systems Badr Hirchoua
898 views49 slides
Large scale social recommender systems and their evaluation by
Large scale social recommender systems and their evaluationLarge scale social recommender systems and their evaluation
Large scale social recommender systems and their evaluationMitul Tiwari
1.2K views55 slides

More Related Content

What's hot

Tutorial Cognition - Irene by
Tutorial Cognition - IreneTutorial Cognition - Irene
Tutorial Cognition - IreneSSSW
626 views34 slides
BROWN BAG TALK WITH MICAH ALTMAN, SOURCES OF BIG DATA FOR SOCIAL SCIENCES by
BROWN BAG TALK WITH MICAH ALTMAN, SOURCES OF BIG DATA FOR SOCIAL SCIENCESBROWN BAG TALK WITH MICAH ALTMAN, SOURCES OF BIG DATA FOR SOCIAL SCIENCES
BROWN BAG TALK WITH MICAH ALTMAN, SOURCES OF BIG DATA FOR SOCIAL SCIENCESMicah Altman
741 views41 slides
Graph Neural Networks for Recommendations by
Graph Neural Networks for RecommendationsGraph Neural Networks for Recommendations
Graph Neural Networks for RecommendationsWQ Fan
624 views66 slides
Big Data Analytics : A Social Network Approach by
Big Data Analytics : A Social Network ApproachBig Data Analytics : A Social Network Approach
Big Data Analytics : A Social Network ApproachAndry Alamsyah
12.1K views47 slides
Best practices machine learning final by
Best practices machine learning finalBest practices machine learning final
Best practices machine learning finalDianna Doan
1.1K views40 slides
Introduction to Recommender System by
Introduction to Recommender SystemIntroduction to Recommender System
Introduction to Recommender SystemWQ Fan
110 views12 slides

What's hot(20)

Tutorial Cognition - Irene by SSSW
Tutorial Cognition - IreneTutorial Cognition - Irene
Tutorial Cognition - Irene
SSSW626 views
BROWN BAG TALK WITH MICAH ALTMAN, SOURCES OF BIG DATA FOR SOCIAL SCIENCES by Micah Altman
BROWN BAG TALK WITH MICAH ALTMAN, SOURCES OF BIG DATA FOR SOCIAL SCIENCESBROWN BAG TALK WITH MICAH ALTMAN, SOURCES OF BIG DATA FOR SOCIAL SCIENCES
BROWN BAG TALK WITH MICAH ALTMAN, SOURCES OF BIG DATA FOR SOCIAL SCIENCES
Micah Altman741 views
Graph Neural Networks for Recommendations by WQ Fan
Graph Neural Networks for RecommendationsGraph Neural Networks for Recommendations
Graph Neural Networks for Recommendations
WQ Fan624 views
Big Data Analytics : A Social Network Approach by Andry Alamsyah
Big Data Analytics : A Social Network ApproachBig Data Analytics : A Social Network Approach
Big Data Analytics : A Social Network Approach
Andry Alamsyah12.1K views
Best practices machine learning final by Dianna Doan
Best practices machine learning finalBest practices machine learning final
Best practices machine learning final
Dianna Doan1.1K views
Introduction to Recommender System by WQ Fan
Introduction to Recommender SystemIntroduction to Recommender System
Introduction to Recommender System
WQ Fan110 views
Frontiers of Computational Journalism week 1 - Introduction and High Dimensio... by Jonathan Stray
Frontiers of Computational Journalism week 1 - Introduction and High Dimensio...Frontiers of Computational Journalism week 1 - Introduction and High Dimensio...
Frontiers of Computational Journalism week 1 - Introduction and High Dimensio...
Jonathan Stray480 views
Frontiers of Computational Journalism week 3 - Information Filter Design by Jonathan Stray
Frontiers of Computational Journalism week 3 - Information Filter DesignFrontiers of Computational Journalism week 3 - Information Filter Design
Frontiers of Computational Journalism week 3 - Information Filter Design
Jonathan Stray530 views
Frontiers of Computational Journalism week 2 - Text Analysis by Jonathan Stray
Frontiers of Computational Journalism week 2 - Text AnalysisFrontiers of Computational Journalism week 2 - Text Analysis
Frontiers of Computational Journalism week 2 - Text Analysis
Jonathan Stray527 views
Fundamentals of Deep Recommender Systems by WQ Fan
 Fundamentals of Deep Recommender Systems Fundamentals of Deep Recommender Systems
Fundamentals of Deep Recommender Systems
WQ Fan139 views
Je t’aime… moi non plus: reporting on the opportunities, expectations and cha... by Christoph Trattner
Je t’aime… moi non plus: reporting on the opportunities, expectations and cha...Je t’aime… moi non plus: reporting on the opportunities, expectations and cha...
Je t’aime… moi non plus: reporting on the opportunities, expectations and cha...
Christoph Trattner14.7K views
Frontiers of Computational Journalism week 8 - Visualization and Network Anal... by Jonathan Stray
Frontiers of Computational Journalism week 8 - Visualization and Network Anal...Frontiers of Computational Journalism week 8 - Visualization and Network Anal...
Frontiers of Computational Journalism week 8 - Visualization and Network Anal...
Jonathan Stray700 views
Social Media Mining - Chapter 5 (Data Mining Essentials) by SocialMediaMining
Social Media Mining - Chapter 5 (Data Mining Essentials)Social Media Mining - Chapter 5 (Data Mining Essentials)
Social Media Mining - Chapter 5 (Data Mining Essentials)
SocialMediaMining1.5K views
Social Media Mining - Chapter 9 (Recommendation in Social Media) by SocialMediaMining
Social Media Mining - Chapter 9 (Recommendation in Social Media)Social Media Mining - Chapter 9 (Recommendation in Social Media)
Social Media Mining - Chapter 9 (Recommendation in Social Media)
SocialMediaMining2.4K views
A Multi-Criteria Recommender System Exploiting Aspect-based Sentiment Analysi... by Cataldo Musto
A Multi-Criteria Recommender System Exploiting Aspect-based Sentiment Analysi...A Multi-Criteria Recommender System Exploiting Aspect-based Sentiment Analysi...
A Multi-Criteria Recommender System Exploiting Aspect-based Sentiment Analysi...
Cataldo Musto1.1K views
Question Answering over Linked Data (Reasoning Web Summer School) by Andre Freitas
Question Answering over Linked Data (Reasoning Web Summer School)Question Answering over Linked Data (Reasoning Web Summer School)
Question Answering over Linked Data (Reasoning Web Summer School)
Andre Freitas1.5K views
Prateek Jain dissertation defense, Kno.e.sis, Wright State University by Prateek Jain
Prateek Jain dissertation defense, Kno.e.sis, Wright State UniversityPrateek Jain dissertation defense, Kno.e.sis, Wright State University
Prateek Jain dissertation defense, Kno.e.sis, Wright State University
Prateek Jain1.1K views

Similar to Privacy-preserving Data Mining in Industry (WSDM 2019 Tutorial)

UN Global Pulse Privacy Framing by
UN Global Pulse Privacy FramingUN Global Pulse Privacy Framing
UN Global Pulse Privacy FramingMicah Altman
2.1K views19 slides
algorithmic-decisions, fairness, machine learning, provenance, transparency by
algorithmic-decisions, fairness, machine learning, provenance, transparencyalgorithmic-decisions, fairness, machine learning, provenance, transparency
algorithmic-decisions, fairness, machine learning, provenance, transparencyPaolo Missier
693 views36 slides
Literature Review: The Role of Signal Processing in Meeting Privacy Challenge... by
Literature Review: The Role of Signal Processing in Meeting Privacy Challenge...Literature Review: The Role of Signal Processing in Meeting Privacy Challenge...
Literature Review: The Role of Signal Processing in Meeting Privacy Challenge...Kato Mivule
1.4K views21 slides
Fairness in Machine Learning by
Fairness in Machine LearningFairness in Machine Learning
Fairness in Machine LearningDelip Rao
1.3K views46 slides
Data Science: Origins, Methods, Challenges and the future? by
Data Science: Origins, Methods, Challenges and the future?Data Science: Origins, Methods, Challenges and the future?
Data Science: Origins, Methods, Challenges and the future?Cagatay Turkay
1.2K views63 slides
Turning Learning into Numbers - A Learning Analytics Framework by
Turning Learning into Numbers - A Learning Analytics FrameworkTurning Learning into Numbers - A Learning Analytics Framework
Turning Learning into Numbers - A Learning Analytics FrameworkHendrik Drachsler
6.2K views48 slides

Similar to Privacy-preserving Data Mining in Industry (WSDM 2019 Tutorial)(20)

UN Global Pulse Privacy Framing by Micah Altman
UN Global Pulse Privacy FramingUN Global Pulse Privacy Framing
UN Global Pulse Privacy Framing
Micah Altman2.1K views
algorithmic-decisions, fairness, machine learning, provenance, transparency by Paolo Missier
algorithmic-decisions, fairness, machine learning, provenance, transparencyalgorithmic-decisions, fairness, machine learning, provenance, transparency
algorithmic-decisions, fairness, machine learning, provenance, transparency
Paolo Missier693 views
Literature Review: The Role of Signal Processing in Meeting Privacy Challenge... by Kato Mivule
Literature Review: The Role of Signal Processing in Meeting Privacy Challenge...Literature Review: The Role of Signal Processing in Meeting Privacy Challenge...
Literature Review: The Role of Signal Processing in Meeting Privacy Challenge...
Kato Mivule1.4K views
Fairness in Machine Learning by Delip Rao
Fairness in Machine LearningFairness in Machine Learning
Fairness in Machine Learning
Delip Rao1.3K views
Data Science: Origins, Methods, Challenges and the future? by Cagatay Turkay
Data Science: Origins, Methods, Challenges and the future?Data Science: Origins, Methods, Challenges and the future?
Data Science: Origins, Methods, Challenges and the future?
Cagatay Turkay1.2K views
Turning Learning into Numbers - A Learning Analytics Framework by Hendrik Drachsler
Turning Learning into Numbers - A Learning Analytics FrameworkTurning Learning into Numbers - A Learning Analytics Framework
Turning Learning into Numbers - A Learning Analytics Framework
Hendrik Drachsler6.2K views
Dutch Cooking with xAPI Recipes, The Good, the Bad, and the Consistent by Hendrik Drachsler
Dutch Cooking with xAPI Recipes, The Good, the Bad, and the ConsistentDutch Cooking with xAPI Recipes, The Good, the Bad, and the Consistent
Dutch Cooking with xAPI Recipes, The Good, the Bad, and the Consistent
Hendrik Drachsler935 views
Kato Mivule: COGNITIVE 2013 - An Overview of Data Privacy in Multi-Agent Lear... by Kato Mivule
Kato Mivule: COGNITIVE 2013 - An Overview of Data Privacy in Multi-Agent Lear...Kato Mivule: COGNITIVE 2013 - An Overview of Data Privacy in Multi-Agent Lear...
Kato Mivule: COGNITIVE 2013 - An Overview of Data Privacy in Multi-Agent Lear...
Kato Mivule723 views
Ben Shneiderman: Thrill of Discovery by russ9595
Ben Shneiderman: Thrill of DiscoveryBen Shneiderman: Thrill of Discovery
Ben Shneiderman: Thrill of Discovery
russ95951.4K views
Managing Confidential Information – Trends and Approaches by Micah Altman
Managing Confidential Information – Trends and ApproachesManaging Confidential Information – Trends and Approaches
Managing Confidential Information – Trends and Approaches
Micah Altman1.9K views
Data Tactics Data Science Brown Bag (April 2014) by Rich Heimann
Data Tactics Data Science Brown Bag (April 2014)Data Tactics Data Science Brown Bag (April 2014)
Data Tactics Data Science Brown Bag (April 2014)
Rich Heimann1.5K views
Data Science and Analytics Brown Bag by DataTactics
Data Science and Analytics Brown BagData Science and Analytics Brown Bag
Data Science and Analytics Brown Bag
DataTactics1.2K views
"Reproducibility from the Informatics Perspective" by Micah Altman
"Reproducibility from the Informatics Perspective""Reproducibility from the Informatics Perspective"
"Reproducibility from the Informatics Perspective"
Micah Altman566 views
Ci2004-10.doc by butest
Ci2004-10.docCi2004-10.doc
Ci2004-10.doc
butest305 views
Session 01 designing and scoping a data science project by bodaceacat
Session 01 designing and scoping a data science projectSession 01 designing and scoping a data science project
Session 01 designing and scoping a data science project
bodaceacat562 views
Session 01 designing and scoping a data science project by Sara-Jayne Terp
Session 01 designing and scoping a data science projectSession 01 designing and scoping a data science project
Session 01 designing and scoping a data science project
Sara-Jayne Terp990 views
Towards A Differential Privacy and Utility Preserving Machine Learning Classi... by Kato Mivule
Towards A Differential Privacy and Utility Preserving Machine Learning Classi...Towards A Differential Privacy and Utility Preserving Machine Learning Classi...
Towards A Differential Privacy and Utility Preserving Machine Learning Classi...
Kato Mivule862 views
“Big data” in human services organisations: Practical problems and ethical di... by husITa
“Big data” in human services organisations: Practical problems and ethical di...“Big data” in human services organisations: Practical problems and ethical di...
“Big data” in human services organisations: Practical problems and ethical di...
husITa349 views
Datascience Introduction WebSci Summer School 2014 by Claudia Wagner
Datascience Introduction WebSci Summer School 2014Datascience Introduction WebSci Summer School 2014
Datascience Introduction WebSci Summer School 2014
Claudia Wagner1.5K views

More from Krishnaram Kenthapadi

Responsible AI in Industry: Practical Challenges and Lessons Learned by
Responsible AI in Industry: Practical Challenges and Lessons LearnedResponsible AI in Industry: Practical Challenges and Lessons Learned
Responsible AI in Industry: Practical Challenges and Lessons LearnedKrishnaram Kenthapadi
428 views162 slides
Responsible AI in Industry: Practical Challenges and Lessons Learned by
Responsible AI in Industry: Practical Challenges and Lessons LearnedResponsible AI in Industry: Practical Challenges and Lessons Learned
Responsible AI in Industry: Practical Challenges and Lessons LearnedKrishnaram Kenthapadi
301 views55 slides
Responsible AI in Industry (ICML 2021 Tutorial) by
Responsible AI in Industry (ICML 2021 Tutorial)Responsible AI in Industry (ICML 2021 Tutorial)
Responsible AI in Industry (ICML 2021 Tutorial)Krishnaram Kenthapadi
751 views198 slides
Responsible AI in Industry (Tutorials at AAAI 2021, FAccT 2021, and WWW 2021) by
Responsible AI in Industry (Tutorials at AAAI 2021, FAccT 2021, and WWW 2021)Responsible AI in Industry (Tutorials at AAAI 2021, FAccT 2021, and WWW 2021)
Responsible AI in Industry (Tutorials at AAAI 2021, FAccT 2021, and WWW 2021)Krishnaram Kenthapadi
2.1K views262 slides
Amazon SageMaker Clarify by
Amazon SageMaker ClarifyAmazon SageMaker Clarify
Amazon SageMaker ClarifyKrishnaram Kenthapadi
3.4K views17 slides
Privacy in AI/ML Systems: Practical Challenges and Lessons Learned by
Privacy in AI/ML Systems: Practical Challenges and Lessons LearnedPrivacy in AI/ML Systems: Practical Challenges and Lessons Learned
Privacy in AI/ML Systems: Practical Challenges and Lessons LearnedKrishnaram Kenthapadi
535 views59 slides

More from Krishnaram Kenthapadi(15)

  • 20. Facebook vs Korolova: 10 campaigns targeting 1 person (zip code, gender, workplace, alma mater), this time varying the interest attribute. Only the campaign matching the target's true interest receives impressions, revealing that attribute:

        Interest:                  A   B   C   …   Z
        Ad impressions in a week:  0   0   8   …   0

    Korolova, “Privacy Violations Using Microtargeted Ads: A Case Study”, PADM
  • 21. ● Context: Microtargeted Ads ● Takeaway: Attackers can instrument ad campaigns to identify individual users. ● Two types of attacks: ○ Inference from Impressions ○ Inference from Clicks Facebook vs Korolova: Recap
  • 22. Attacker's Advantage Auxiliary information Enough to succeed on a small fraction of inputs High dimensionality Active
  • 23. Attacking Amazon.com: items frequently bought together. A target has bought items A, B, C, D, E; when item Z's public “Customers Who Bought This Item Also Bought” list comes to feature A, C, D, E, an observer who knows some of the target's purchases can infer that the target also bought Z. Calandrino, Kilzer, Narayanan, Felten, Shmatikov, “You Might Also Like: Privacy Risks of Collaborative Filtering”, IEEE S&P 2011
  • 24. Attacker's Advantage Auxiliary information Enough to succeed on a small fraction of inputs High dimensionality Active Observant
  • 25. Homer et al., “Resolving individuals contributing trace amounts of DNA to highly complex mixtures using high- density SNP genotyping microarrays”, PLoS Genetics, 2008 Genetic data
  • 28. “In all mixtures, the identification of the presence of a person’s genomic DNA was possible.”
  • 29. …one week later, NIH Director Zerhouni: “As a result, the NIH has removed from open-access databases the aggregate results (including P values and genotype counts) for all the GWAS that had been available on NIH sites”
  • 30. Attacker's Advantage Auxiliary information Enough to succeed on a small fraction of inputs High dimensionality Active Observant Clever
  • 31. Australian Medicare Release. August 2016: medical records and prescription information from 1984–2014 for 10% of Australians (2.9M people) were published by the federal government. ● Patient: year of birth, gender ● Medical events, codes, the state, price paid ● Dates perturbed by ±2 weeks ● Supplier IDs “encrypted”
  • 32. September 2016: U of Melbourne researchers re-identified politicians, sports figures, people from news reports. ● 55K women are unique based on their childbirth event(s). October 2016: Government introduced a bill criminalizing re-identification of published government data. The bill is pending in committee. “Health Data in an Open World”, Chris Culnane, Benjamin I. P. Rubinstein, Vanessa Teague, https://arxiv.org/abs/1712.05627
  • 34. Dinur-Nissim reconstruction attacks. The data is a private bit vector x ∈ {0,1}ⁿ; each query returns a noisy subset sum Σᵢ∈S xᵢ. Dinur-Nissim 2003: if every answer has error o(√n), then reconstruction of all but o(n) bits is possible ...even if 23.9% of the errors are arbitrary [DMT07] ...even with O(n) queries [DY08]
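To make the reconstruction claim concrete, here is a toy sketch (our illustration, not the Dinur-Nissim algorithm itself): answer enough random subset-sum queries with error well below √n, and a plain least-squares solve followed by rounding recovers essentially the whole bit vector.

```python
import numpy as np

# Toy reconstruction from noisy subset sums (in the spirit of
# Dinur-Nissim; parameters and method are illustrative).
rng = np.random.default_rng(0)
n, m = 256, 2048                      # n secret bits, m random queries
x = rng.integers(0, 2, size=n)        # the private bit vector

A = rng.integers(0, 2, size=(m, n)).astype(float)  # random subset per row
noise = rng.uniform(-2, 2, size=m)    # per-query error well below sqrt(n)
answers = A @ x + noise

# Least-squares estimate, then round each coordinate to {0, 1}.
x_hat, *_ = np.linalg.lstsq(A, answers, rcond=None)
x_rec = (x_hat > 0.5).astype(int)

print("fraction recovered:", (x_rec == x).mean())  # typically ~1.0
```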
  • 35. Dwork-Naor. Tore Dalenius' desideratum (a.k.a. “semantic security”, 1977): “Access to a statistical database should not enable one to learn anything about an individual that could not be learned without access.” Dwork-Naor (~2006): if the database teaches us anything at all, there is always some auxiliary information that breaks Dalenius' desideratum.
  • 40. Differential Privacy. Databases D and D′ are neighbors if they differ in one person's data. Differential Privacy: the distribution of the curator's output M(D) on database D is (nearly) the same as M(D′), i.e., the output looks essentially the same with or without your data. Dwork, McSherry, Nissim, Smith [TCC 2006]
  • 41. ε-Differential Privacy: the distribution of the curator's output M(D) on database D is (nearly) the same as M(D′). Parameter ε quantifies information leakage: ∀S: Pr[M(D)∈S] ≤ exp(ε) · Pr[M(D′)∈S]. Dwork, McSherry, Nissim, Smith [TCC 2006]
  • 42. (ε, δ)-Differential Privacy: the distribution of the curator's output M(D) on database D is (nearly) the same as M(D′): ∀S: Pr[M(D)∈S] ≤ exp(ε) · Pr[M(D′)∈S] + δ. Parameter ε quantifies information leakage; parameter δ gives some slack. Dwork, Kenthapadi, McSherry, Mironov, Naor [EUROCRYPT 2006]
  • 43. “Bad Outcomes” Interpretation [Figure: output densities f(D) (probability with record x) and f(D′) (probability without record x) over a shaded set of bad outcomes; ε-DP bounds the ratio between the two probabilities of landing in that set]
  • 44. Bayesian Interpretation ● Prior p on databases ● Observed output O ● Question: does the database contain record x? ε-DP guarantees that the posterior answer to this question stays close to the prior, as formalized below.
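One standard way to make this precise (our rendering, not necessarily the slide's exact formula): for any prior over databases, observing the output moves the odds that the database contains record x by at most a factor of e^ε.

```latex
% Posterior-odds form of the epsilon-DP guarantee: for any prior over
% databases and any observed output O of an epsilon-DP mechanism M,
\[
\frac{\Pr[x \in D \mid M(D) = O]}{\Pr[x \notin D \mid M(D) = O]}
\;\le\; e^{\varepsilon}\,
\frac{\Pr[x \in D]}{\Pr[x \notin D]}
\]
% Observing O shifts the posterior odds that the database contains
% record x by at most a factor of e^{\varepsilon} in either direction.
```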
  • 45. Differential Privacy ● Robustness to auxiliary data ● Post-processing: if M(D) is differentially private, so is f(M(D)) ● Composability: run two ε-DP mechanisms; the full interaction is 2ε-DP ● Group privacy: graceful degradation in the presence of correlated inputs
  • 46. What Differential Privacy Isn't ● An algorithm, an architecture, or a rule book ● Secure computation: secure computation constrains how a result is computed, while differential privacy constrains what the result may reveal ● An all-encompassing guarantee: aggregate trends may be sensitive too
  • 48. BBC: “Fitness app Strava lights up staff at military bases”
  • 49. Differential Privacy: Takeaway points • Privacy as a notion of stability of randomized algorithms with respect to small perturbations in their input • Worst-case definition • Robust (to auxiliary data, correlated inputs) • Composable • Quantifiable • Concept of a privacy budget • Noise injection
  • 56. Differential Privacy ε-Differential Privacy: The distribution of the output M(D) on database D is (nearly) the same as M(D′) for all adjacent databases D and D′: ∀S: Pr[M(D)∊S] ≤ exp(ε) ∙ Pr[M(D′)∊S].
  • 57. Local Differential Privacy: the randomizer runs on each user's device, on that user's data alone, so the guarantee must hold for every pair of individual inputs x, x′: ∀S: Pr[M(x)∈S] ≤ exp(ε) · Pr[M(x′)∈S].
  • 58. Local-Differentially Private Mechanisms ● Stanley L. Warner, "Randomized response: a survey technique for eliminating evasive answer bias", Journal of the American Statistical Association, March 1965. ● Arijit Chaudhuri, Rahul Mukerjee, Randomized Response: Theory and Techniques, 1988.
  • 59. Randomized Response (Warner 1965) Q1: Are you a citizen of the United States? Q2: Are you not a citizen of the United States? With probability p, the respondent answers Q1 truthfully; with probability 1 − p, they answer Q2. If θ is the true fraction of citizens in the sample, Pr[yes] = pθ + (1 − p)(1 − θ), so θ can be estimated from the observed fraction of yeses. The mechanism is ln(p/(1 − p))-DP.
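A runnable sketch of Warner's scheme with the bias-correcting estimator; the parameter choice and the simulated population are ours.

```python
import numpy as np

# Warner's randomized response: with probability p answer Q1
# truthfully; with probability 1-p answer Q2 (the negation).
rng = np.random.default_rng(1)

def respond(is_citizen: bool, p: float) -> bool:
    answer_q1 = rng.random() < p
    return is_citizen if answer_q1 else not is_citizen

def estimate_theta(responses, p: float) -> float:
    # Pr[yes] = p*theta + (1-p)*(1-theta); invert for theta.
    y = np.mean(responses)
    return (y - (1 - p)) / (2 * p - 1)

p = 0.75                              # epsilon = ln(p/(1-p)) = ln(3)
truth = rng.random(100_000) < 0.6     # true fraction theta = 0.6
reports = [respond(bool(t), p) for t in truth]
print(estimate_theta(reports, p))     # close to 0.6
```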
  • 60. RAPPOR Erlingsson, Pihur, Korolova. "RAPPOR: Randomized Aggregatable Privacy-Preserving Ordinal Response." ACM CCS 2014.
  • 61. RAPPOR: two-level randomized response Can we do repeated surveys of sensitive attributes? — Average of randomized responses will reveal a user’s true answer :-( Solution: Memoize! Re-use the same random answer — Memoization can hurt privacy too! Long, random bit sequence can be a unique tracking ID :-( Solution: Use 2-levels! Randomize the memoized response
  • 62. RAPPOR: two-level randomized response ● Store client value v into bloom filter B using hash functions ● Memoize a Permanent Randomized Response (PRR) B′ ● Report an Instantaneous Randomized Response (IRR) S
  • 63. RAPPOR: two-level randomized response ● Store client value v into Bloom filter B using hash functions ● Memoize a Permanent Randomized Response (PRR) B′: each bit of B is kept with probability 1 − f and replaced by a fair coin flip with probability f ● Report an Instantaneous Randomized Response (IRR) S: each bit of S is 1 with probability q if the corresponding bit of B′ is 1, and with probability p otherwise ● Parameters: f = ½, q = ¾, p = ½
  • 64. RAPPOR: Life of a report. Value “www.google.com” → Bloom Filter → PRR → IRR
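A minimal RAPPOR client sketch using the slide's parameters (f = ½, q = ¾, p = ½, a 128-bit Bloom filter, 2 hash functions). The SHA-256-based hashing and all names are our illustrative choices, not Google's implementation.

```python
import hashlib
import numpy as np

K, H, F, Q, P = 128, 2, 0.5, 0.75, 0.5   # parameters from the slides
rng = np.random.default_rng(2)

def bloom_bits(value: str, cohort: int) -> np.ndarray:
    # Set H bits of a K-bit Bloom filter, keyed by the cohort.
    bits = np.zeros(K, dtype=int)
    for i in range(H):
        d = hashlib.sha256(f"{cohort}:{i}:{value}".encode()).digest()
        bits[int.from_bytes(d[:4], "big") % K] = 1
    return bits

def prr(bits: np.ndarray) -> np.ndarray:
    # Permanent randomized response: computed once and memoized.
    flip = rng.random(K) < F                 # flip each bit w.p. f ...
    coin = rng.integers(0, 2, size=K)        # ... to a fair coin
    return np.where(flip, coin, bits)

def irr(prr_bits: np.ndarray) -> np.ndarray:
    # Instantaneous randomized response: fresh for every report.
    probs = np.where(prr_bits == 1, Q, P)
    return (rng.random(K) < probs).astype(int)

b = bloom_bits("www.google.com", cohort=7)
b_prime = prr(b)         # stored and reused across reports
report = irr(b_prime)    # what actually gets sent to the server
```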
  • 67. Differential privacy of RAPPOR ● The Permanent Randomized Response satisfies differential privacy with ε∞ = 4 ln(3) ● Each Instantaneous Randomized Response satisfies differential privacy with ε₁ = ln(3)
  • 68. Differential Privacy of RAPPOR: Measurable privacy bounds Each report offers differential privacy with ε = ln(3) Attacker’s guess goes from 0.1% → 0.3% in the worst case Differential privacy even if attacker gets all reports (infinite data!!!) Also… Base Rate Fallacy prevents attackers from finding needles in haystacks
  • 69. Cohorts. With only 2 bits set out of 128, a single shared Bloom filter produces too many false positives. Instead, each user is randomly assigned to one of 128 cohorts, each with its own hash functions (e.g., user 0xA0FE91B76 reports google.com under cohort 2's hashes h₁, h₂).
  • 71. From Raw Counts to De-noised Counts [Figure: per-bit true counts with no noise vs. counts de-noised from the RAPPOR reports]
  • 72. From De-noised Counts to a Distribution [Figure: the de-noised per-bit counts decomposed into the Bloom-filter signatures of google.com, yahoo.com, and bing.com]
  • 73. From De-noised Counts to a Distribution. Linear regression: min_X ||B − AX||₂. LASSO: min_X ||B − AX||₂² + λ||X||₁. Hybrid: 1. Find the support of X via LASSO 2. Solve linear regression on the support to find the weights
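A sketch of the hybrid decoding step using scikit-learn. The design matrix, candidate weights, and regularization constant are illustrative stand-ins, not the production pipeline: in the real system, A[i, j] would come from candidate j's Bloom-filter bits and the randomization parameters.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

# Stand-ins: A[i, j] = expected contribution of candidate string j to
# reported bit i; B = vector of de-noised bit counts.
rng = np.random.default_rng(3)
m, n_candidates = 128, 50
A = rng.random((m, n_candidates))
true_w = np.zeros(n_candidates)
true_w[[3, 17, 29]] = [5.0, 2.0, 1.0]          # three "real" strings
B = A @ true_w + rng.normal(0, 0.01, size=m)

# Step 1: LASSO to find the support (which candidates are present).
lasso = Lasso(alpha=0.01, positive=True).fit(A, B)
support = np.flatnonzero(lasso.coef_ > 0)

# Step 2: ordinary linear regression on the support for the weights.
ols = LinearRegression(positive=True).fit(A[:, support], B)
print(support, ols.coef_.round(2))
```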
  • 76. Explaining RAPPOR “Having the cake and eating it too…” “Seeing the forest without seeing the trees…”
  • 79. Microdata: An Individual’s Report Each bit is flipped with probability 25%
  • 81. Google Chrome Privacy White Paper https://www.google.com/chrome/browser/privacy/whitepaper.html Phishing and malware protection Google Chrome includes an optional feature called "Safe Browsing" to help protect you against phishing and malware attacks. This helps prevent evil-doers from tricking you into sharing personal information with them (“phishing”) or installing malicious software on your computer (“malware”). The approach used to accomplish this was designed specifically to protect your privacy and is also used by other popular browsers. If you'd rather not send any information to Safe Browsing, you can also turn these features off. Please be aware that Chrome will no longer be able to protect you from websites that try to steal your information or install harmful software if you disable this feature. We really don't recommend turning it off. … If a URL was indeed dangerous, Chrome reports this anonymously to Google to improve Safe Browsing. The data sent is randomized, constructed in a manner that ensures differential privacy, permitting only monitoring of aggregate statistics that apply to tens of thousands of users at minimum. The reports are an instance of Randomized Aggregatable Privacy-Preserving Ordinal Responses, whose full technical details have been published in a technical report and presented at the 2014 ACM Computer and Communications Security conference. This means that Google cannot infer which website you have visited from this.
  • 84. Growing Pains ● Transitioning from a research prototype to a real product ● Scalability ● Versioning
  • 86. Maintaining the Candidates List [Figure: comparison with no missing candidates vs. three missing candidates; labeled values 4%, 13%, 17%]
  • 87. RAPPOR Metrics in Chrome https://chromium.googlesource.com/chromium/src/+log/master/tools/metrics/rappor/rappor.xml
  • 88. Open Source Efforts https://github.com/google/rappor - demo you can run with a couple of shell commands - client library in several languages - analysis tool and simulation - documentation
  • 89. Follow-up - Bassily, Smith, “Local, Private, Efficient Protocols for Succinct Histograms,” STOC 2015 - Kairouz, Bonawitz, Ramage, “Discrete Distribution Estimation under Local Privacy”, https://arxiv.org/abs/1602.07387 - Qin et al., “Heavy Hitter Estimation over Set-Valued Data with Local Differential Privacy”, CCS 2016
  • 90. Key takeaway points RAPPOR - locally differentially-private mechanism for reporting of categorical and string data ● First Internet-scale deployment of differential privacy ● Explainable ● Conservative ● Open-sourced
  • 95. Learning from private data. Example task: learn which emojis are frequently typed.
  • 97. Roadmap 1. Private frequency estimation with count-min-sketch 2. Private heavy hitters with puzzle piece algorithm 3. Private heavy hitters with tree histogram protocol
  • 99. Private frequency oracle: the building block for private heavy hitters. Clients hold words d₁, d₂, …, dₙ from a domain 𝒮; the server answers frequency(s) for any s ∈ 𝒮 (e.g., frequency("phablet")), with all errors within γ = O(√(n log |𝒮|)).
  • 100. Private frequency oracle: design constraints. Computational and communication constraints: ● client-side cost, as a function of the domain size |S| and n ● communication to the server: very few bits ● server-side cost for one query, as a function of |S| and n
  • 101. Private frequency oracle: design constraints. The domain is enormous: with more than 3,000 characters, 8-character words give |S| = 3,000⁸, and there are ~1B clients. On the client side, even the efficient protocol of [BS15] needs ~n computation; our goal is ~O(log |S|).
  • 102. Private frequency oracle: Design constraints Computational and communication constraints: Client side: O(log |S|) Communication to server: O(1) bits Server-side cost for one query: O(log |S|)
  • 103. Private frequency oracle. A starter solution: randomized response. Encode the word d as a one-hot bit vector with a 1 at d's index i, then flip each bit independently with the right bias; the randomized vector d′ is the report, and it protects ε-differential privacy.
  • 104. Private frequency oracle. A starter solution: randomized response. The server sums the reported vectors and applies a bias correction to estimate the frequency of every domain element. Error in each estimate: Θ(√(n log |S|)), the optimal error under local privacy.
  • 105. Private frequency oracle. A starter solution: randomized response. Costs: ● client side: O(|S|) ● communication to server: O(|S|) bits ● server-side cost for one query: O(1)
  • 106. Private frequency oracle. Non-private count-min sketch [CM05]: hash the word d with k ≈ log |S| hash functions h₁, …, h_k, each mapping into √n bins; client computation is O(log |S|).
  • 107. Private frequency oracle. Non-private count-min sketch [CM05]: the server adds up the clients' sketches into a k × √n table of counts, reducing server computation.
  • 108. Private frequency oracle. Non-private count-min sketch [CM05]: to query "phablet", hash it with each of the k ≈ log |S| functions and return the minimum of the k matching counters, e.g., min(9146, 2212, 2132) = 2132. Error in each estimate: O(√(n log |S|)); server-side query cost: O(log |S|).
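A compact non-private count-min sketch matching the description above; the hash choice and sizes are illustrative.

```python
import hashlib
import numpy as np

# Non-private count-min sketch: k ~ log|S| hash rows, ~sqrt(n) bins
# per row; the frequency estimate is the minimum over rows.
class CountMinSketch:
    def __init__(self, k: int, bins: int):
        self.k, self.bins = k, bins
        self.table = np.zeros((k, bins), dtype=np.int64)

    def _bin(self, row: int, item: str) -> int:
        d = hashlib.sha256(f"{row}:{item}".encode()).digest()
        return int.from_bytes(d[:8], "big") % self.bins

    def add(self, item: str):
        for r in range(self.k):
            self.table[r, self._bin(r, item)] += 1

    def estimate(self, item: str) -> int:
        # Collisions only inflate counters, so min over rows is an
        # overestimate that is tight with high probability.
        return min(self.table[r, self._bin(r, item)] for r in range(self.k))

cms = CountMinSketch(k=16, bins=1024)
for w in ["phablet"] * 40 + ["emoji"] * 7:
    cms.add(w)
print(cms.estimate("phablet"))  # >= 40
```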
  • 109. Private frequency oracle. Private count-min sketch: making the client computation differentially private. Naively randomizing all k rows of the sketch costs kε-differential privacy, since k pieces of information are released.
  • 110. Private frequency oracle. Private count-min sketch: instead, each client samples one hash function and randomizes only that row. Theorem: sampling ensures ε-differential privacy without hurting accuracy; in fact, it improves accuracy by a factor of k.
  • 111. Private frequency oracle. Private count-min sketch: reducing client communication. Apply the Hadamard transform to the client's one-hot row, spreading the single 1 into a dense ±1 vector.
  • 112. Private frequency oracle. Private count-min sketch: reducing client communication. The client samples a single coefficient of the Hadamard transform, randomizes its sign, and sends it: O(1) bits of communication. Theorem: the Hadamard transform and sampling do not hurt accuracy.
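A sketch of this client-side trick (our rendering of the idea, not Apple's code). One entry of the m × m Hadamard matrix is H[j, i] = (−1)^popcount(j AND i), so the client never materializes the matrix: it samples one coefficient of the one-hot row's transform and randomizes its sign.

```python
import numpy as np

eps, m = 2.0, 1024            # m = bins per sketch row (power of two)

def client_report(bin_index: int, rng) -> tuple[int, int]:
    j = int(rng.integers(m))                         # sampled coefficient
    # Hadamard entry of the one-hot vector's transform: +1 or -1.
    coeff = -1 if bin(j & bin_index).count("1") % 2 else 1
    # Keep the true sign with probability e^eps / (e^eps + 1).
    keep = rng.random() < np.exp(eps) / (np.exp(eps) + 1)
    return j, coeff if keep else -coeff              # a single +/-1 value

rng = np.random.default_rng(4)
print(client_report(bin_index=137, rng=rng))         # e.g., (j, +/-1)
```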
  • 113. Private frequency oracle. Private count-min sketch. Costs: client side O(log |S|); communication to server O(1) bits; server-side cost for one query O(log |S|). Error in each estimate: O(√(n log |S|)).
  • 114. Roadmap 1. Private frequency estimation with count-min-sketch 2. Private heavy hitters with puzzle piece algorithm 3. Private heavy hitters with tree histogram protocol
  • 115. Private heavy hitters: using the frequency oracle. The private count-min sketch answers frequency(s) for any s ∈ S, but S has far too many elements to enumerate. Goal: find all s ∈ S with frequency > γ.
  • 116. Roadmap 1. Private frequency estimation with count-min-sketch 2. Private heavy hitters with puzzle piece algorithm 3. Private heavy hitters with tree histogram protocol
  • 117. Puzzle piece algorithm (works well in practice, no theoretical guarantees) [Bassily Nissim Stemmer Thakurta, 2017 and Apple differential privacy team, 2017]
  • 118. Private heavy hitters. Observation: if a word is frequent, its bi-grams are frequent too. If frequency("Phablet$") > γ, then each of its bi-grams "Ph", "ab", "le", "t$" also has frequency > γ at its position.
  • 119. Private heavy hitters. Natural algorithm: Cartesian product of frequent bi-grams. Clients report sanitized bi-grams and the (privatized) complete word; the server finds the frequent bi-grams at each position P1, …, P4.
  • 120. Private heavy hitters. Natural algorithm: form candidate words as the Cartesian product P1 × P2 × P3 × P4 of the frequent bi-grams at each position, then query the private frequency oracle (private count-min sketch) to find the frequent words among the candidates.
  • 121. Private heavy hitters. Problem with the natural algorithm: combinatorial explosion. In practice, essentially all bi-grams are frequent, so the candidate set P1 × P2 × P3 × P4 blows up.
  • 122. Puzzle piece algorithm. Pick a hash function Hash: S → {1, …, ℓ} and set h = Hash(word). A client holding "Phablet$" reports the privatized bi-grams tagged with the hash, (Ph, h), (ab, h), (le, h), (t$, h), along with the privatized complete word.
  • 123. Puzzle piece algorithm: server side. The frequent bi-grams now arrive tagged with values in {1, …, ℓ}, e.g., P1: (ab,1), (ad,5), (Ph,3); P2: (ba,4), (ab,3), (ax,9); P3: (le,3), (le,7), (ab,1); P4: (le,1), (ab,9), (t$,3). Combine only bi-grams with matching tags into candidate words, then use the private frequency oracle (private count-min sketch) to find the frequent words.
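A small sketch of the server-side combine step on the slide's example data; only bi-grams sharing a word-hash tag are stitched together, which is what tames the combinatorial explosion.

```python
from itertools import product

# frequent: position -> set of (bigram, tag) pairs found frequent,
# using the example values from the slide.
frequent = {
    0: {("Ph", 3), ("ab", 1), ("ad", 5)},
    1: {("ab", 3), ("ba", 4), ("ax", 9)},
    2: {("le", 3), ("le", 7), ("ab", 1)},
    3: {("t$", 3), ("le", 1), ("ab", 9)},
}

candidates = set()
tags = {tag for pairs in frequent.values() for _, tag in pairs}
for tag in tags:
    per_pos = [[bg for bg, t in frequent[pos] if t == tag]
               for pos in sorted(frequent)]
    if all(per_pos):  # every position has a bi-gram with this tag
        for combo in product(*per_pos):
            candidates.add("".join(combo))

print(candidates)  # {'Phablet$'}; checked against the frequency oracle
```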
  • 124. Roadmap 1. Private frequency estimation with count-min-sketch 2. Private heavy hitters with puzzle piece algorithm 3. Private heavy hitters with tree histogram protocol
  • 125. Tree histogram algorithm (works well in practice + optimal theoretical guarantees) [Bassily Nissim Stemmer Thakurta, 2017]
  • 126. Private heavy hitters: tree histograms (based on [CM05]). Any string in S can be written as log |S| bits. Idea: construct the prefixes of each heavy hitter bit by bit.
  • 127–128. Private heavy hitters: tree histograms. Level 1: branch on the first bit (0 or 1) and use the private frequency oracle to find the frequent prefixes of length 1. If a string is a heavy hitter, its prefixes are too.
  • 129–130. Private heavy hitters: tree histograms. Level 2: extend each surviving prefix by one bit (00, 01, 10, 11) and keep the frequent prefixes of length two. Key point: each level has ≈ √n heavy hitters, so the frontier never grows too large.
  • 131. Private heavy hitters: tree histograms. Costs: client side O(log |S|); communication to server O(1) bits; server-side computation O(n log |S|). Theorem: finds all heavy hitters with frequency at least O(√(n log |S|)).
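A sketch of the prefix-extension search, with a toy exact-count oracle standing in for the private frequency oracle; names and data are illustrative.

```python
# Tree-histogram search: extend frequent prefixes one bit at a time,
# keeping only those the oracle reports above the threshold gamma.
def tree_histogram(oracle, bits: int, gamma: float) -> list[str]:
    frontier = [""]
    for _ in range(bits):
        frontier = [p + b for p in frontier for b in "01"
                    if oracle(p + b) > gamma]
        # A heavy hitter's prefix is itself heavy, so it is never
        # pruned; each level keeps roughly sqrt(n) prefixes.
    return frontier

# Toy oracle: exact prefix counts over a tiny dataset (no privacy).
data = ["1011", "1011", "1011", "0110", "1011", "0110", "0001"]
counts = lambda prefix: sum(s.startswith(prefix) for s in data)
print(tree_histogram(counts, bits=4, gamma=1.5))  # ['0110', '1011']
```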
  • 132. Key takeaway points • Keeping local differential privacy constant: •One low-noise report is better than many noisy ones •Weak signal with probability 1 is better than strong signal with small probability • We can learn the dictionary – at a cost • Longitudinal privacy remains a challenge
  • 134. Microsoft: Discretization of continuous variables "These guarantees are particularly strong when user’s behavior remains approximately the same, varies slowly, or varies around a small number of values over the course of data collection."
  • 135. Microsoft's deployment "Our mechanisms have been deployed by Microsoft across millions of devices ... to protect users’ privacy while collecting application usage statistics." B. Ding, J. Kulkarni, S. Yekhanin, NeurIPS 2017
  • 136. Microsoft Research Blog, Dec 8, 2017
  • 137. Privacy in AI @ LinkedIn • Framework to compute robust, privacy-preserving analytics • Privacy challenges/design for a large crowdsourced system (LinkedIn Salary)
  • 138. Analytics & Reporting Products at LinkedIn: Profile View Analytics, Content Analytics, Ad Campaign Analytics. All show demographics of members engaging with the product.
  • 139. • Admit only a small # of predetermined query types • Querying for the number of member actions, for a specified time period, together with the top demographic breakdowns Analytics & Reporting Products at LinkedIn
  • 140. • Admit only a small # of predetermined query types • Querying for the number of member actions (e.g., clicks on a given ad), for a specified time period, together with the top demographic breakdowns (e.g., Title = “Senior Director”)
  • 141. Privacy Requirements • Attacker cannot infer whether a member performed an action • E.g., click on an article or an ad • Attacker may use auxiliary knowledge • E.g., knowledge of attributes associated with the target member (say, obtained from this member’s LinkedIn profile) • E.g., knowledge of all other members that performed similar action
  • 142. Possible Privacy Attacks. Targeting: senior directors in the US who studied at Cornell matches ~16K LinkedIn members, over the minimum targeting threshold. But the demographic breakdown Company = X may match exactly one person, so an attacker can determine whether that person clicked on the ad. Mitigation 1: require a minimum reporting threshold; still amenable to attacks (refer to our ACM CIKM'18 paper for details). Mitigation 2: a rounding mechanism, e.g., report counts in increments of 10; still amenable to attacks, e.g., using incremental counts over time to infer individuals' actions. We need rigorous techniques to preserve member privacy (not reveal exact aggregate counts).
  • 143. Key Product Desiderata • Coverage & Utility • Data Consistency • for repeated queries • over time • between total and breakdowns • across entity/action hierarchy • for top k queries
  • 144. Problem Statement Compute robust, reliable analytics in a privacy- preserving manner, while addressing the product desiderata such as coverage, utility, and consistency.
  • 145. Differential Privacy: Random Noise Addition. If the ℓ₁-sensitivity of f : D → ℝⁿ is s = max_{D,D′} ||f(D) − f(D′)||₁, then releasing f(D) + (Lap(s/ε))ⁿ, i.e., adding i.i.d. Laplace(s/ε) noise to each coordinate of the true output, offers ε-differential privacy. Dwork, McSherry, Nissim, Smith, “Calibrating Noise to Sensitivity in Private Data Analysis”, TCC 2006
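A direct rendering of this mechanism; the example counts and the sensitivity assumption (any one member changes the count vector by at most 1 in ℓ₁) are illustrative.

```python
import numpy as np

# Laplace mechanism: add i.i.d. Laplace(s/eps) noise to every
# coordinate of the true answer f(D), where s is the l1-sensitivity.
def laplace_mechanism(true_answer: np.ndarray, s: float, eps: float,
                      seed=None) -> np.ndarray:
    rng = np.random.default_rng(seed)
    return true_answer + rng.laplace(scale=s / eps, size=true_answer.shape)

# Example: action counts where one member can change the vector by at
# most 1 in l1 norm (an assumption about the query, not a given).
counts = np.array([1024.0, 87.0, 3.0])
noisy = laplace_mechanism(counts, s=1.0, eps=0.5)
print(noisy)
```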
  • 146. PriPeARL: A Framework for Privacy-Preserving Analytics. K. Kenthapadi, T. T. L. Tran, ACM CIKM 2018. Pseudo-random noise generation, inspired by differential privacy: cryptographically hash the query parameters (entity id, e.g., ad creative/campaign/account; demographic dimension; stat type (impressions, clicks); time range) together with a fixed secret seed, normalize the hash to a uniformly random fraction in (0,1), and transform it into Laplace noise with fixed ε; add it to the true count to obtain the noisy count. To satisfy consistency requirements: ● pseudo-random noise means the same query gets the same result over time, avoiding averaging attacks ● for non-canonical queries (e.g., arbitrary time ranges, aggregates over multiple entities), use the hierarchy to partition into canonical queries, compute the noise for each canonical query, and sum up the noisy counts.
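A sketch of seed-keyed, deterministic Laplace noise in the spirit of PriPeARL; the key format, hash choice, and parameter names are our assumptions, not LinkedIn's code. Because the noise is a deterministic function of the canonical query and the secret seed, repeating a query cannot average it away.

```python
import hashlib
import math

def pseudo_random_laplace(entity_id: str, dimension: str, stat: str,
                          time_range: str, secret_seed: str,
                          eps: float, sensitivity: float = 1.0) -> float:
    # Deterministic "randomness": hash the canonical query + seed.
    key = f"{secret_seed}|{entity_id}|{dimension}|{stat}|{time_range}"
    digest = hashlib.sha256(key.encode()).digest()
    u = int.from_bytes(digest[:8], "big") / 2**64   # uniform in [0, 1)
    u = min(max(u, 1e-12), 1 - 1e-12) - 0.5         # shift to (-1/2, 1/2)
    # Inverse CDF of the Laplace distribution with scale s/eps.
    scale = sensitivity / eps
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

# Hypothetical usage: the same query always yields the same noise.
noisy_count = 8 + round(pseudo_random_laplace(
    entity_id="ad:12345", dimension="title=Senior Director",
    stat="clicks", time_range="2019-02-01", secret_seed="...",
    eps=0.5))
```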
  • 148. Lessons Learned from Deployment (> 1 year) • Semantic consistency vs. unbiased, unrounded noise • Suppression of small counts • Online computation and performance requirements • Scaling across analytics applications • Tools for ease of adoption (code/API library, hands-on how-to tutorial) help!
  • 149. Summary • Framework to compute robust, privacy-preserving analytics • Addressing challenges such as preserving member privacy, product coverage, utility, and data consistency • Future • Utility maximization problem given constraints on the ‘privacy loss budget’ per user • E.g., noise with larger variance to impressions but less noise to clicks (or conversions) • E.g., more noise to broader time range sub-queries and less noise to granular time range sub-queries • Reference: K. Kenthapadi, T. Tran, PriPeARL: A Framework for Privacy-Preserving Analytics and Reporting at LinkedIn, ACM CIKM 2018.
  • 150. Acknowledgements •Team: • AI/ML: Krishnaram Kenthapadi, Thanh T. L. Tran • Ad Analytics Product & Engineering: Mark Dietz, Taylor Greason, Ian Koeppe • Legal / Security: Sara Harrington, Sharon Lee, Rohit Pitke •Acknowledgements (in alphabetical order) • Deepak Agarwal, Igor Perisic, Arun Swami
  • 152. Outline • LinkedIn Salary Overview • Challenges: Privacy, Modeling • System Design & Architecture • Privacy vs. Modeling Tradeoffs
  • 153. LinkedIn Salary (launched in November 2016)
  • 154. Salary Collection Flow via Email Targeting
  • 155. Current Reach (February 2019) • A few million responses out of several millions of members targeted • Targeted via emails since early 2016 • Countries: US, CA, UK, DE, IN, … • Insights available for a large fraction of US monthly active users
  • 156. Data Privacy Challenges • Minimize the risk of inferring any one individual's compensation data • Protection against data breach • No single point of failure. Achieved by a combination of techniques: encryption, access control, aggregation, and thresholding. K. Kenthapadi, A. Chudhary, and S. Ambler, LinkedIn Salary: A System for Secure Collection and Presentation of Structured Compensation Insights to Job Seekers, IEEE PAC 2017 (arxiv.org/abs/1705.06976)
  • 157. Modeling Challenges • Evaluation • Modeling on de-identified data • Robustness and stability • Outlier detection X. Chen, Y. Liu, L. Zhang, and K. Kenthapadi, How LinkedIn Economic Graph Bonds Information and Product: Applications in LinkedIn Salary, KDD 2018 (arxiv.org/abs/1806.09063) K. Kenthapadi, S. Ambler, L. Zhang, and D. Agarwal, Bringing salary transparency to the world: Computing robust compensation insights via LinkedIn Salary, CIKM 2017 (arxiv.org/abs/1703.09845)
  • 158. Problem Statement • How do we design the LinkedIn Salary system, taking into account its unique privacy and security challenges, while addressing the product requirements?
  • 159. Differential Privacy? [Dwork et al, 2006] • Rich privacy literature (Adam-Wortmann, Samarati-Sweeney, Agrawal-Srikant, …, Kenthapadi et al, Machanavajjhala et al, Li et al, Dwork et al) • Anonymization techniques have known limitations (as discussed in the first part) • The worst-case sensitivity of quantiles to any one user's compensation data is large ⇒ large noise would have to be added, making the insights unreliable • Compensation insights are needed on a continual basis; there is theoretical work on differential privacy under continual observation, but no practical implementations/applications • Local differential privacy / randomized-response approaches (Google's RAPPOR, Apple's iOS differential privacy, Microsoft's telemetry collection) are not applicable here
  • 160. De-identification Example. Original submission (stored only as an encrypted object): Title = User Exp Designer, Region = SF Bay Area, Company = Google, Industry = Internet, Years of exp = 12, Degree = BS, FoS = Interactive Media, Skills = UX, Graphics, …, $$ = 100K. The entry is fanned out into cohorts such as: (Title, Region): User Exp Designer, SF Bay Area, 100K; (Title, Region, Industry): User Exp Designer, SF Bay Area, Internet, 100K; (Title, Region, Years of exp): User Exp Designer, SF Bay Area, 10+, 100K; (Title, Region, Company, Years of exp): User Exp Designer, SF Bay Area, Google, 10+, 100K. Each cohort is copied to Hadoop (HDFS) only if its #data points > threshold.
  • 163. Collection & Storage • Allow members to submit their compensation info • Extract member attributes • E.g., canonical job title, company, region, by invoking LinkedIn standardization services • Securely store member attributes & compensation data
  • 165. De-identification & Grouping • Approach inspired by k-anonymity [Samarati-Sweeney] • “Cohort” or “slice”: defined by a combination of attributes, e.g., “user experience designers in SF Bay Area” • Contains aggregated compensation entries from the corresponding individuals; no user name, id, or any attributes other than those that define the cohort • A cohort is available for offline processing only if it has at least k entries • Apply LinkedIn standardization software (free-form attribute → canonical version) before grouping, analogous to the generalization step in k-anonymity
  • 166. De-identification & Grouping • Slicing service • Access member attribute info & submission identifiers (no compensation data) • Generate slices & track # submissions for each slice • Preparation service • Fetch compensation data (using submission identifiers), associate with the slice data, copy to HDFS
  • 168. Insights & Modeling • Salary insight service • Check whether the member is eligible • Give-to-get model • If yes, show the insights • Offline workflow • Consume de-identified HDFS dataset • Compute robust compensation insights • Outlier detection • Bayesian smoothing/inference • Populate the insight key-value stores
  • 170. Security Mechanisms • Encryption of member attributes & compensation data using different sets of keys • Separation of processing • Limiting access to the keys
  • 171. Security Mechanisms • Key rotation • No single point of failure • Infra security
  • 172. Preventing Timestamp-Join Attacks • Inference attack: join the de-identified compensation data with page-view logs (which record when a member accessed the compensation collection web interface) on timestamp • ⇒ not desirable to retain the exact timestamp • Option 1: perturb by adding a random delay (say, up to 48 hours) • Option 2: a modification of k-anonymity, generalization using a hierarchy of timestamps; but it needs to be incremental • ⇒ process entries within a cohort in batches of size k, generalize each batch to a common timestamp, and make additional data available only in such incremental batches
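A sketch of the incremental batching idea (our rendering): entries in a cohort become visible only in groups of k, all generalized to a common timestamp, so a join on timestamps cannot isolate any single submission.

```python
# entries: list of (timestamp, payload) in order of arrival.
def release_batches(entries, k):
    released = []
    for i in range(0, len(entries) - len(entries) % k, k):
        batch = entries[i:i + k]
        common_ts = max(ts for ts, _ in batch)   # coarsened timestamp
        released += [(common_ts, payload) for _, payload in batch]
    return released  # a trailing partial batch stays unreleased

entries = [(1, "a"), (2, "b"), (5, "c"), (9, "d"), (11, "e")]
print(release_batches(entries, k=2))
# [(2, 'a'), (2, 'b'), (9, 'c'), (9, 'd')]  -- 'e' waits for a peer
```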
  • 173. Privacy vs Modeling Tradeoffs • LinkedIn Salary system deployed in production for ~2.5 years • Study tradeoffs between privacy guarantees (‘k’) and data available for computing insights • Dataset: Compensation submission history from 1.5M LinkedIn members • Amount of data available vs. minimum threshold, k • Effect of processing entries in batches of size, k
  • 176. Median delay due to batching vs. batch size, k
  • 177. Key takeaway points • LinkedIn Salary: a new internet application, with unique privacy/modeling challenges • Privacy vs. Modeling Tradeoffs • Potential directions • Privacy-preserving machine learning models in a practical setting [e.g., Chaudhuri et al, JMLR 2011; Papernot et al, ICLR 2017] • Provably private submission of compensation entries?
  • 179. Beyond Randomized Response • LDP + Machine Learning: • "Is interaction necessary for distributed private learning?" Smith, Thakurta, Upadhyay, S&P 2017 • Federated Learning • DP + Machine Learning • Encode-Shuffle-Analyze architecture "Prochlo: Strong Privacy for Analytics in the Crowd" Bittau et al., SOSP 2017 • Amplification by Shuffling
  • 180–181. LDP + Machine Learning. Interactivity as a major implementation constraint: [Figures: a fully parallel protocol, in which all clients report in a single round, vs. a sequential protocol that requires many adaptive rounds]
  • 182. "Is interaction necessary for distributed private learning?" [Smith, Thakurta, Upadhyay, S&P2017] • Single parameter learning (e.g., median): • Maximal accuracy with full parallelism • Multi-parameter learning: • Polylog number of iterations • Lower bounds
  • 183. Federated Learning "Practical secure aggregation for privacy-preserving machine learning" Bonawitz, Ivanov, Kreuter, Marcedone, McMahan, Patel, Ramage, Segal, Seth, ACM CCS 2017
  • 186. "Generalization Implies Privacy" Fallacy We don’t overfit, therefore our model cannot possibly violate privacy.
  • 187. “Generalization Implies Privacy” Fallacy Generalization ● average case ● model’s accuracy Privacy ● worst case ● model’s parameters
  • 188. “Generalization Implies Privacy” Fallacy ● Examples when it just ain’t so: ○ Person-to-person similarities ○ Support Vector Machines ● Models can be very large ○ Millions of parameters
  • 189–193. Somali to English Translation [Figures: a sequence of Somali-to-English machine-translation screenshots]
  • 195. ML + Differential Privacy • [DP-SGD] Abadi, Chu, Goodfellow, McMahan, Mironov, Talwar, Zhang, "Deep Learning with Differential Privacy", ACM CCS 2016 • [PATE] Papernot, Abadi, Erlingsson, Goodfellow, Talwar, "Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data", ICLR 2017 • [PATE] Papernot, Song, Mironov, Raghunathan, Talwar, Erlingsson, "Scalable Private Learning with PATE", ICLR 2018 https://github.com/tensorflow/privacy
  • 196. Statistics + Differential Privacy: Harvard Privacy Tools Project
  • 197. Census 2020 and Differential Privacy
  • 198. Key takeaway points • Notion of differential privacy is a principled foundation for privacy- preserving data analyses • Local differential privacy is a powerful technique appropriate for Internet-scale telemetry • Other techniques (thresholding, shuffling) can be combined with differentially private algorithms or be used in isolation.
  • 199. References. Differential privacy: review: C. Dwork, "A Firm Foundation for Private Data Analysis", Communications of the ACM, 2011; book: C. Dwork and A. Roth, "The Algorithmic Foundations of Differential Privacy"
  • 200. References. Google's RAPPOR: paper: Erlingsson, Pihur, Korolova, "RAPPOR: Randomized Aggregatable Privacy-Preserving Ordinal Response", ACM CCS 2014; blog. Apple's implementation: article: "Learning with Privacy at Scale", Apple ML Journal, Dec 2017; paper: Bassily, Nissim, Stemmer, Thakurta, "Practical Locally Private Heavy Hitters", NIPS 2017; paper: Tang, Korolova, Bai, Wang, Wang, "Privacy Loss in Apple's Implementation of Differential Privacy on MacOS 10.12". LinkedIn's privacy-preserving analytics framework: paper: Kenthapadi, Tran, "PriPeARL: A Framework for Privacy-Preserving Analytics and Reporting at LinkedIn", ACM CIKM 2018. LinkedIn Salary: paper: Kenthapadi, Chudhary, Ambler, "LinkedIn Salary: A System for Secure Collection and Presentation of Structured Compensation Insights to Job Seekers", IEEE PAC 2017; blog.
  • 201. Fairness Privacy Transparency Explainability Related WSDM’19 sessions: 1.Tutorial: Fairness-Aware Machine Learning: Practical Challenges and Lessons Learned (Monday, 13:30 – 17:00) 2.H.V. Jagadish's invited talk: Responsible Data Science (Tuesday, 14:45 - 15:30) 3.Session 4: FATE & Privacy (Tuesday, 16:15 - 17:30) 4.Aleksandra Korolova’s invited talk: Privacy-Preserving WSDM (Wednesday, 14:45 - 15:30)
  • 204. PROCHLO: Strong Privacy for Analytics in the Crowd Bittau, Erlingsson, Maniatis, Mironov, Raghunathan, Lie, Rudominer, Kode, Tinnes, Seefeld SOSP 2017
  • 205. The ESA Architecture and Its Prochlo Realization. ESA: Encode, Shuffle, Analyze: encoders (E) on client devices feed a shuffler (S), which strips identifiers and batches reports before they reach the analyzer. Prochlo: a hardened ESA realization using Intel's SGX + crypto.