This document presents a study on privacy-preserving techniques for continually publishing location data. The authors propose a new definition of adversarial privacy that aims to preserve utility of published histograms while guaranteeing privacy against adversaries. They assume people's movements follow a Markov process and evaluate their approach on change point detection and frequent path extraction tasks, finding it achieves better utility than differential privacy while providing privacy guarantees.
6. Related Work: Differential Privacy [1]
• The de facto standard privacy definition
• Bounds the probability that an individual's data can be judged to have contributed to a histogram aggregate
• Adds noise drawn from a Laplace distribution to each histogram value
• Treats all data uniformly when adding noise
• Differential privacy guards privacy against arbitrary attacks
• In sparsely populated areas, however, the effect of the noise is not negligible

The added noise follows the Laplace density (1/(2φ)) exp(−|x − µ|/φ).

[Figure: result of noise addition in a sparsely populated area, true counts vs. noisy output]

We propose a privacy definition that, by placing assumptions on the adversary, is weaker than differential privacy but yields higher-utility output.

2013/7/22
Adversarial Privacy for Continual Location Data Publishing Using Markov Processes

[1] C. Dwork, F. McSherry, K. Nissim, A. Smith, "Calibrating noise to sensitivity in private data analysis," Proc. of the Third Conference on Theory of Cryptography, pp. 265-284, 2006.
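A minimal sketch of the Laplace mechanism described above, showing why sparse bins suffer (the function name, bin counts, and seed are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_noise_histogram(counts, epsilon, sensitivity=1.0):
    """Standard epsilon-DP Laplace mechanism: add noise with density
    (1/(2*phi)) * exp(-|x - mu|/phi), where phi = sensitivity/epsilon."""
    scale = sensitivity / epsilon
    noise = rng.laplace(loc=0.0, scale=scale, size=len(counts))
    return np.asarray(counts, dtype=float) + noise

# A dense (urban) bin vs. a sparse (rural) bin: the same noise scale is
# negligible next to a count of 10000 but can dominate a true count of 2.
noisy = laplace_noise_histogram([10000, 2], epsilon=1.0)
```

Because every bin receives noise of the same scale, the relative error on the sparse bin is orders of magnitude larger, which is exactly the sparse-area problem the slide points out.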
22. Evaluation: Frequent Path Extraction
• Experimental setup
  • Build frequent-path rankings for two areas, around Shibuya and around Machida
  • The location-data aggregator outputs pairs of (path, number of people who traversed that path)
  • The analyst sorts the output to build the ranking
• Baselines
  • Chen et al.'s differentially private algorithm [5]
  • An algorithm specialized for frequent path extraction
  • Reduces the amount of added noise by deleting rare paths from the database in advance
  • As in the change-point detection experiment, DP-1 (ε = 1) and DP-100 (ε = 100) are prepared
[5] R. Chen, G. Acs, C. Castelluccia, "Differentially Private Sequential Data Publication via Variable-Length N-Grams," Proc. of the 19th ACM Conference on Computer and Communications Security, pp. 638-649, 2012.
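The aggregate-then-sort pipeline in this setup can be sketched as follows (the toy trajectories are illustrative; the paper's real data and privacy mechanism are not reproduced here):

```python
from collections import Counter

def frequent_path_ranking(trajectories, top_k):
    """Count how many people traversed each path and rank paths by count.
    Each trajectory is a sequence of visited regions; the aggregator would
    output the (path, count) pairs, and the analyst sorts them."""
    counts = Counter(tuple(t) for t in trajectories)
    return counts.most_common(top_k)

trajectories = [("A", "B"), ("A", "B"), ("B", "C"),
                ("A", "B"), ("B", "C"), ("C", "D")]
ranking = frequent_path_ranking(trajectories, top_k=2)
# → [(('A', 'B'), 3), (('B', 'C'), 2)]
```

In the experiment, the proposed and baseline mechanisms perturb the (path, count) output before the analyst performs this sort, and the resulting ranking is compared against the ranking over the raw counts.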
24. Experimental Results
• NDCG [6] is used as the evaluation measure
  • Scores the quality of a top-k ranking (larger values are closer to the correct ranking)

[Figure: NDCG scores, from good to bad, for the Shibuya and Machida areas]

• Both the proposed method and the baseline extract frequent-path rankings nearly equal to the correct ranking
• The baseline cannot extract rare paths, since it deletes them in advance
• The proposed method, by contrast, can extract even rare paths
• The proposed method is thus effective for publishing paths as well

[6] K. Järvelin, J. Kekäläinen, "IR evaluation methods for retrieving highly relevant documents," Proc. of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 41-48, 2000.
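The NDCG measure used above can be sketched with one common formulation (linear gain with a log2 position discount; the paper may use a different gain function):

```python
import math

def dcg(relevances):
    """Discounted cumulative gain: the rank-1 item counts fully, and
    later ranks are discounted by log2(rank + 1)."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_relevances, k):
    """NDCG@k: DCG of the produced top-k ranking divided by the DCG of
    the ideal (sorted) ranking; 1.0 means the ranking matches the ideal."""
    ideal = sorted(ranked_relevances, reverse=True)
    ideal_dcg = dcg(ideal[:k])
    return dcg(ranked_relevances[:k]) / ideal_dcg if ideal_dcg > 0 else 0.0

# A ranking already in ideal order scores 1.0; any swap lowers the score.
perfect = ndcg([3, 2, 1], k=3)
reversed_score = ndcg([1, 2, 3], k=3)
```

Here the relevance of each path would be derived from its true traversal count, so a noisy ranking that preserves the order of frequent paths still scores close to 1.0.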