Interest measures – ensure that sensitive facts, if they exist, will be deemed uninteresting by the mining algorithms.
Extra data – for example, a "phone book" that contains extra entries: it is still useful if the goal is to find a phone number given a name, but access to the complete phone book does not allow determining facts such as department sizes.
Performance – perhaps not an issue for small amounts of data, but on large (terabyte-scale) data sets, exponential running time is an issue (disk-limited).
Note that we do not face the same problem as, for example, the GPS military/civilian accuracy encoding. There, the goal is to make information (position) known to all, but more precisely for some. Here, the information to be made known and the information to be kept hidden are completely different. A better analogy is obtaining position from communications satellites (e.g., by measuring delay): introducing a small random delay will wreak havoc with attempts to determine position by this method, but will not alter the information being communicated.
Here I try to summarize the cryptographic approaches to PPDM. Basically, a lot of work has been done on applying secure multi-party computation (SMC) ideas to PPDM, generally in the setting of distributed data mining. Most recent work assumes that the adversaries are semi-honest (i.e., they follow the protocol correctly). Only recently (including the Kantarcioglu and Kardes paper that will be presented in the workshop) has the malicious model been discussed. It turns out that these different solutions are built from a few common secure subprotocols, such as dot product and summation.
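To illustrate what such a subprotocol looks like, here is a minimal sketch of secure summation over a ring of semi-honest parties; the modulus and party values are illustrative choices, and this is a generic textbook-style construction rather than the protocol of any particular paper.

```python
import random

MODULUS = 2**32  # must exceed any possible sum; illustrative choice

def secure_sum(local_values):
    """Ring-based secure summation sketch (semi-honest model).

    The initiating party masks its value with a random number and passes the
    running total around the ring; each party adds its own value modulo
    MODULUS. The initiator finally removes the mask, so no party ever sees
    another party's individual value, only a masked running total.
    """
    mask = random.randrange(MODULUS)              # known only to the initiator
    running = (mask + local_values[0]) % MODULUS  # initiator's masked value
    for v in local_values[1:]:                    # each site adds its own value
        running = (running + v) % MODULUS
    return (running - mask) % MODULUS             # initiator removes the mask

# Example: three sites each hold a private count of matching records.
print(secure_sum([17, 42, 5]))  # -> 64
```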
Main drawbacks of the SMC approaches for PPDM are: 1) they are not efficient enough for really large data sets and large numbers of distributed nodes; 2) the semi-honest assumption may not be realistic; 3) the malicious model is even slower. Possible future directions: new models. Crypto methods assume either malicious or semi-honest adversaries; what about rational adversaries (i.e., adversaries that cheat for profit)? Game-theoretic models can provide efficient solutions. We may also try combining anonymization with SMC techniques for efficient and accurate solutions.
Perturbation is a very important technique in PPDM. The idea is to distort the data while still keeping the properties of the data that will be used in the later data mining phase. Some perturbation techniques are listed here. The additive approach was first proposed by Agrawal and Srikant and now has many variants; a single additive step may not be enough to protect privacy, so we proposed a two-step model in ICDM 06. Multiplicative approaches, e.g., orthogonal transformations, geometrically rotate the data (e.g., Chen and Liu, ICDM 05). Such a transformation preserves the Euclidean distance between any pair of data points, so some data mining tools can be applied directly: K-Nearest Neighbor classifiers (KNN), Support Vector Machines (SVM), and so on. Later work evaluated the privacy preservation in more detail and proposed random projection to a lower-dimensional space (Liu and Kargupta, TKDE 2006; Liu and Kargupta, PKDD'06). Condensation and decomposition (Wang and Zhang, ICDM 06) use properties of matrices; in the decomposition area, wavelet transformation is new. All these approaches are still in progress. Data swapping is a different approach, which transforms the data set by switching a subset of attributes between selected pairs of records (Fienberg et al., 2003).
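To make the two main flavors concrete, here is a minimal NumPy sketch (with illustrative data and noise level, not the algorithms from the cited papers) of additive perturbation and of a random orthogonal (rotation) perturbation, including a check that the rotation preserves pairwise Euclidean distances.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))               # original records, 5 attributes each

# Additive perturbation: release X plus independent noise.
sigma = 0.5                                  # noise level is an illustrative choice
X_additive = X + rng.normal(scale=sigma, size=X.shape)

# Multiplicative (rotation) perturbation: multiply by a random orthogonal matrix.
Q, _ = np.linalg.qr(rng.normal(size=(5, 5)))  # Q is orthogonal: Q @ Q.T = I
X_rotated = X @ Q

# Orthogonal transforms preserve pairwise Euclidean distances, so distance-based
# miners (e.g., KNN) behave the same on X_rotated as on X.
d_orig = np.linalg.norm(X[0] - X[1])
d_rot = np.linalg.norm(X_rotated[0] - X_rotated[1])
print(abs(d_orig - d_rot) < 1e-9)            # -> True
```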
This is still a very challenging area. From the figure we can see that achieving 100% privacy is easy: simply do not give out any information (but then no credit card, no marketing). Ignoring privacy in order to conduct data mining is easy too (what is privacy? who cares?). Our goal is the tough one, and acceptable results will probably end up in that middle square.
We have proposed an individually adaptable model, which enables individuals to choose their own privacy level. The user chooses a privacy level, and the system matches it to a different interval length used to perturb the data in our two-phase perturbation model. Our two-phase model is an additive approach, but we introduce a second sampling step to enhance privacy preservation.
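The sketch below is only a rough illustration of the idea: the mapping from privacy levels to interval lengths and the simplified "sampling" second phase are invented for illustration, not the actual parameters or algorithm of the ICDM 06 two-phase model.

```python
import numpy as np

rng = np.random.default_rng(1)

# User-chosen privacy levels mapped to interval lengths for uniform noise.
# (This mapping is purely illustrative, not the one from the cited paper.)
INTERVALS = {"low": 1.0, "medium": 4.0, "high": 10.0}

def two_phase_perturb(values, level):
    half = INTERVALS[level] / 2.0
    # Phase 1: additive noise drawn uniformly from the user-selected interval.
    noisy = values + rng.uniform(-half, half, size=values.shape)
    # Phase 2 (illustrative "sampling" step): resample released values from the
    # noisy pool, further decoupling released records from their owners.
    return rng.choice(noisy, size=noisy.shape, replace=True)

ages = np.array([23.0, 37.0, 45.0, 61.0, 29.0])
print(two_phase_perturb(ages, "high"))
```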
Making PPDM approaches fit real-life situations better is the trend in today's research. We conducted intensive experiments with real-world data sets and give an applicability study in our DKE 07 paper. Reconstruction of the original data distribution does not work very well with real-life data; the distribution is the hard part. When the distribution of the original data set is simple, the method may work, but when the distribution is hard, the method does not work well. It depends on the distribution! So we suggest not using the distribution as a middle step. In another work, we have tailored the data mining tools to fit the PPDM domain, that is, we try to map the data mining functions directly according to the noise-addition method. We believe this is a fruitful direction for PPDM.
Privacy
Prof. Bhavani Thuraisingham
The University of Texas at Dallas
March 5, 2008
Lecture #18
Before I, as a user of organization A, send data about me to organization B, I read the privacy policies enforced by organization B
If I agree to the privacy policies of organization B, then I will send data about me to organization B
If I do not agree with the policies of organization B, then I can negotiate with organization B
Even if the web site states that it will not share private information with others, do I trust the web site?
Note: while confidentiality is enforced by the organization, privacy is determined by the user. Therefore, for confidentiality, the organization will determine whether a user can have the data; if so, the organization can further determine whether the user can be trusted
Platform for Privacy Preferences (P3P): What is it?
P3P is an emerging industry standard that enables web sites to express their privacy practices in a standard format
Policies in this format can be automatically retrieved and understood by user agents
It is a product of the W3C, the World Wide Web Consortium
When a user enters a web site, the privacy policies of the web site are conveyed to the user; if the privacy policies differ from the user's preferences, the user is notified; the user can then decide how to proceed
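A toy sketch of that user-agent check is given below; the policy and preference fields are simplified illustrations (the values echo P3P vocabulary, but the structure is not the actual P3P or APPEL format).

```python
# Simplified illustration of a P3P-style user-agent check.

site_policy = {
    "purpose":   {"current", "individual-analysis"},
    "recipient": {"ours", "delivery"},
    "retention": "stated-purpose",
}

user_preferences = {
    "allowed_purposes":   {"current"},
    "allowed_recipients": {"ours"},
    "allowed_retention":  {"no-retention", "stated-purpose"},
}

def conflicts(policy, prefs):
    """Return the policy elements that go beyond what the user allows."""
    problems = []
    if not policy["purpose"] <= prefs["allowed_purposes"]:
        problems.append("purpose")
    if not policy["recipient"] <= prefs["allowed_recipients"]:
        problems.append("recipient")
    if policy["retention"] not in prefs["allowed_retention"]:
        problems.append("retention")
    return problems

mismatches = conflicts(site_policy, user_preferences)
if mismatches:
    print("Notify user: policy differs from preferences on", mismatches)
else:
    print("Policy matches preferences; proceed")
```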
Platform for Privacy Preferences (P3P): Organizations
Several major corporations are working on P3P standards including:
Web sites have also implemented P3P
Semantic web group has adopted P3P
Platform for Privacy Preferences (P3P): Specifications
Initial version of P3P used RDF to specify policies; Recent version has migrated to XML
P3P Policies use XML with namespaces for encoding policies
P3P has its own statements and data types expressed in XML; P3P schemas utilize XML schemas
The P3P specification released in January 2005 uses a catalog shopping example to explain the concepts; P3P is an international standard and an ongoing project
Example: Catalog shopping
Your name will not be given to a third party but your purchases will be given to a third party
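As a rough illustration of how such a statement might be encoded, the sketch below builds a simplified P3P-like policy with Python's standard XML library; the element names echo P3P vocabulary, but the structure and the data references (e.g. "#user.purchases") are illustrative assumptions, not a verbatim excerpt from the specification.

```python
import xml.etree.ElementTree as ET

# Simplified, illustrative encoding of the catalog-shopping example:
# the buyer's name stays with the site, but purchase data may go to others.
policy = ET.Element("POLICY", name="catalog-shopping")

stmt_name = ET.SubElement(policy, "STATEMENT")
ET.SubElement(ET.SubElement(stmt_name, "RECIPIENT"), "ours")        # name: site only
ET.SubElement(ET.SubElement(stmt_name, "DATA-GROUP"),
              "DATA", ref="#user.name")

stmt_purchase = ET.SubElement(policy, "STATEMENT")
ET.SubElement(ET.SubElement(stmt_purchase, "RECIPIENT"), "unrelated")  # purchases: third parties
ET.SubElement(ET.SubElement(stmt_purchase, "DATA-GROUP"),
              "DATA", ref="#user.purchases")       # illustrative data reference

print(ET.tostring(policy, encoding="unicode"))
```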
What happens if the web sites do not honor their P3P policies?
Then appropriate legal actions will have to be taken
XML is the technology to specify P3P policies
Policy experts will have to specify the policies
Technologies will have to be developed to implement the specifications
Legal experts will have to take actions if the policies are violated
Privacy for Assured Information Sharing
[Figure: federation architecture in which Agencies A, B, and C each maintain their own data/policy and an Export Data/Policy component, which exports data/policy to the shared Data/Policy for the Federation.]
Privacy Preserving Surveillance
[Figure: raw video surveillance data feeds a Face Detection and Face Derecognizing system, which derecognizes the faces of trusted people to preserve privacy; the result goes to a Suspicious Event Detection System and to manual inspection of the video data, yielding suspicious people found, suspicious events found, and a report from security personnel, combined into a comprehensive security report listing the suspicious events and people detected.]
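As an illustration of the face-derecognizing step in such a pipeline, here is a minimal OpenCV sketch; the cascade model, blur strength, and file names are assumptions, and a real system would blur only the faces of trusted people rather than every detected face.

```python
import cv2

# Sketch of the face-derecognizing step: detect faces in a frame and blur them
# so identities are hidden before the frame goes on to event detection.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def derecognize_faces(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1,
                                                 minNeighbors=5):
        roi = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame

frame = cv2.imread("surveillance_frame.jpg")   # hypothetical input frame
if frame is not None:
    cv2.imwrite("derecognized_frame.jpg", derecognize_faces(frame))
```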
Directions: Foundations of Privacy Preserving Data Mining
We proved in 1990 that the inference problem in general was unsolvable; therefore, the suggestion was to explore the solvability aspects of the problem.
Can we do something similar for privacy?
Is the general privacy problem solvable?
What are the complexity classes?
What are the storage and time complexities?
We need to explore the foundation of PPDM and related privacy solutions
Directions: Testbed Development and Application Scenarios
There are numerous PPDM related algorithms. How do they compare with each other? We need a testbed with realistic parameters to test the algorithms
It is time to develop real world scenarios where these algorithms can be utilized
Is it feasible to develop realistic commercial products, or should each organization adapt products to suit its needs?