MediaEval 2018 Pixel Privacy: Task Overview

  1. Pixel Privacy at MediaEval 2018 Martha Larson, Zhuoran Liu, Simon Brugman, Zhengyu Zhao MediaEval 2018 Workshop, EURECOM, Sophia Antipolis, France 31 October 2018
  2. First some MediaEval history
  3. Geo-location Estimation: Placing Task • Given a multimedia item and its associated metadata, predict a geo-location. • Data are images and videos drawn from Flickr.
  4. Placing Task: Year-to-year insights • 2010: (5k/5k)* move from images to videos, language modeling works well, uploader ID a key indicator. • 2011: (10k/5k) first use of audio, granularity of regions is critical. • 2012: (15k/5k) user models, motion features, two-stage approaches. • 2013: (9M/262k) eliminated uploader ID, confidence prediction, retrieval approaches. • 2014: (5M/500k) YFCC100M data set, graph approaches, again external knowledge resources hurt. • 2015: (5M/1M) 8% correct visual, 27% correct multimodal. *(training data size/test data size)
  5. M. Larson, P. Kelm, A. Rae, C. Hauff, B. Thomee, M. Trevisiol, J. Choi, O. van Laere, S. Schockaert, G. J. F. Jones, P. Serdyukov, V. Murdock, and G. Friedland. The benchmark as a research catalyst: Charting the progress of geo-prediction for social multimedia. In J. Choi and G. Friedland, editors, Multimodal Location Estimation of Videos and Images. Springer, pp. 5-40. 2015.
  6. User Images on Social Media
  7. User Images on Social Media
  8. User Images on Social Media
  9. Gerald Friedland, Symeon Papadopoulos, Julia Bernd, and Yiannis Kompatsiaris. 2016. Multimedia Privacy. In Proceedings of the 2016 ACM on Multimedia Conference (MM '16). ACM, New York, NY, USA, 1479-1480. Cybercasing: Beware of large-scale automatic image mining
  10. Protecting Privacy is a Problem
  11. How can we nudge users to protect themselves?
  12. Example images whose locations are protected by an Instagram filter Jaeyoung Choi, Martha Larson, Xinchao Li, Kevin Li, Gerald Friedland, and Alan Hanjalic. 2017. The Geo-Privacy Bonus of Popular Photo Enhancements. ACM ICMR 2017.
  13. Pixel Privacy at MediaEval 2018 Task Goal: • Increasing image appeal, • while blocking automatic inference of sensitive scene information. Evaluation Criteria: • Protection: % of images whose scene categories can no longer be inferred by the “attack algorithm”. (Attack algorithm taken to be: ResNet50 classifier trained on the training set of the Places365-Standard dataset.) • Appeal: Degree to which the images are enhanced from the point of view of users. Novelty: We combine work on adversarial examples and image enhancement, which have previously been studied separately. B. Zhou, A. Lapedriza, A. Khosla, A. Oliva and A. Torralba, "Places: A 10 Million Image Database for Scene Recognition," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 6, pp. 1452-1464, 1 June 2018.
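A minimal sketch of how the attack classifier might be applied, assuming the publicly released PyTorch ResNet50-Places365 checkpoint and class-list file; the file names, checkpoint layout, and label-file format below follow the Places365 demo code and are assumptions, not part of the task definition:

```python
import torch
import torchvision.transforms as transforms
from torchvision.models import resnet50
from PIL import Image

# Assumed file names, following the Places365 project release.
WEIGHTS = 'resnet50_places365.pth.tar'
CLASSES = 'categories_places365.txt'

# Build a ResNet50 with 365 output classes and load the Places365 checkpoint
# (the released checkpoints are assumed to store weights under 'state_dict'
# with a 'module.' prefix).
model = resnet50(num_classes=365)
checkpoint = torch.load(WEIGHTS, map_location='cpu')
state_dict = {k.replace('module.', ''): v for k, v in checkpoint['state_dict'].items()}
model.load_state_dict(state_dict)
model.eval()

# Standard preprocessing: resize, center-crop to 224x224, ImageNet normalization.
preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Class-list lines look like "/a/airfield 0"; keep only the category name.
classes = [line.strip().split(' ')[0][3:] for line in open(CLASSES)]

def predict_scene(path):
    """Return the top-1 scene label the attack classifier infers for an image."""
    x = preprocess(Image.open(path).convert('RGB')).unsqueeze(0)
    with torch.no_grad():
        probs = torch.nn.functional.softmax(model(x), dim=1)
    return classes[int(probs.argmax())]
```

An enhancement counts as protecting an image when this top-1 prediction no longer matches the image's ground-truth scene class.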
  14. Pixel Privacy at MediaEval 2018 Task Data: Places365-Standard dataset Sensitive scene information: Taken to be the labels of Places365-Standard classes associated with privacy criteria (that we defined): • Places in the home, • Places far away from the home (typical vacation places), • Places typical for children, • Places related to religion, • Places related to people's health, • Places related to alcohol consumption, • Places in which people do not typically wear street clothes, • Places related to people's living conditions/income, • Places related to security, • Places related to the military. B. Zhou, A. Lapedriza, A. Khosla, A. Oliva and A. Torralba, "Places: A 10 Million Image Database for Scene Recognition," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 6, pp. 1452-1464, 1 June 2018.
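A minimal sketch of how the sensitive subset might be represented; the label-to-criterion mapping shown here is hypothetical (a few illustrative Places365 class names), and the actual sensitive-class list is defined by the task organizers:

```python
# Hypothetical examples of Places365 class names grouped by privacy criterion;
# the task's actual sensitive-class list is defined by the organizers.
SENSITIVE = {
    'home':     ['bedroom', 'bathroom', 'living_room'],
    'children': ['playground', 'nursery'],
    'religion': ['church/indoor', 'mosque/outdoor'],
    'health':   ['hospital_room', 'pharmacy'],
    'alcohol':  ['bar', 'beer_garden'],
}

def is_sensitive(label):
    """True if a scene label falls under any of the privacy criteria."""
    return any(label in labels for labels in SENSITIVE.values())
```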
  15. http://places.csail.mit.edu/cellImgs/b_bedroom.jpg
  16. Universal Adversarial Perturbations Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. 2017. Universal adversarial perturbations. CVPR 2017.
  17. Universal Adversarial Perturbations Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. 2017. Universal adversarial perturbations. CVPR 2017.
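A minimal sketch of the idea behind a universal adversarial perturbation: one precomputed, image-agnostic perturbation is added to any input, clipped to a small norm budget so the change stays quasi-imperceptible (the epsilon value here is an assumption, not the paper's setting):

```python
import torch

def apply_universal_perturbation(image, perturbation, epsilon=10 / 255):
    """Apply one image-agnostic perturbation to an image tensor in [0, 1].

    image:        (3, H, W) tensor with values in [0, 1]
    perturbation: (3, H, W) tensor, computed once for the target classifier
    epsilon:      l_inf budget keeping the change quasi-imperceptible (assumed value)
    """
    delta = perturbation.clamp(-epsilon, epsilon)   # enforce the norm budget
    return (image + delta).clamp(0.0, 1.0)          # keep a valid image
```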
  18. Example images whose scene categories are protected by style transfer Yang Chen, Yu-Kun Lai, and Yong-Jin Liu. CartoonGAN: Generative Adversarial Networks for Photo Cartoonization. CVPR 2018.
  19. Example images whose scene categories are protected by style transfer Yang Chen, Yu-Kun Lai, and Yong-Jin Liu. CartoonGAN: Generative Adversarial Networks for Photo Cartoonization. CVPR 2018.
  20. Baseline: Protection with CartoonGAN Protection: About 60% of the images that can be recognized are protected. Appeal: CartoonGAN is designed to be visually appealing (appeal was not specifically tested).
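A minimal sketch of how the protection score could be computed, reusing the hypothetical predict_scene helper sketched above: among images the attack classifier originally recognizes, count those whose predicted class changes after enhancement:

```python
def protection_rate(originals, protected, true_labels):
    """Fraction of correctly recognized images that are no longer recognized
    after enhancement (e.g. after applying CartoonGAN).

    originals, protected: lists of image paths before / after enhancement
    true_labels:          ground-truth Places365 scene labels
    """
    recognized, protected_count = 0, 0
    for orig, prot, label in zip(originals, protected, true_labels):
        if predict_scene(orig) == label:        # attack succeeds on the original
            recognized += 1
            if predict_scene(prot) != label:    # enhancement blocks the inference
                protected_count += 1
    return protected_count / max(recognized, 1)
```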
  21. Outlook • What is appeal? • Protecting other types of sensitive information? • Protecting under less-constrained conditions: - Protecting against other attacks? - Protecting in the case of which negative images? • Can we keep up in the “arms race” against classifiers trained on protected images? • Detection vs. Protection: Can the task help to restore the balance in effort that is invested in research?