
Meyer PRIM&R-Silicon Flatirons expanded Facebook slide deck

Slides from two recent talks (plus a few additional slides) on legal and ethical aspects of the Facebook "emotional contagion" experiment.

  1. Background: Facebook Practice
     • Over 1 billion served/mo worldwide (in U.S., 128M users/day)
     • Average user: ~1,500 items eligible for news feed
     • Algorithm selects & shows ~300 items (to see all, go to friend’s wall)
       – Has always existed
       – FB constantly changes it
       – Proprietary
       – Now based on ~100K criteria
       – Almost certainly affects how much emotionally charged content users see; very likely prioritizes positive posts
     Matt McGee, “EdgeRank Is Dead: Facebook’s News Feed Algorithm Now Has Close To 100K Weight Factors,” Marketing Land, Aug. 16, 2013, http://marketingland.com/edgerank-is-dead-facebooks-news-feed-algorithm-now-has-close-to-100k-weight-55908
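The ranking step is easy to picture even though the weights are secret. Below is a minimal sketch of top-k feed selection; the `score` function is a hypothetical stand-in for the ~100K proprietary weight factors, not Facebook's actual logic.

```python
# Toy sketch of top-k feed selection. The scoring function is
# hypothetical; the real algorithm's weight factors are proprietary.
from typing import Callable

def select_feed(eligible: list[dict], score: Callable[[dict], float],
                k: int = 300) -> list[dict]:
    """Rank ~1,500 eligible items by an opaque score and show ~300."""
    return sorted(eligible, key=score, reverse=True)[:k]

# Toy usage: a score that, per the hypothesis above, favors positive posts.
posts = [{"id": 1, "positivity": 0.9}, {"id": 2, "positivity": 0.2}]
print(select_feed(posts, score=lambda p: p["positivity"], k=1))
```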
  2. Three Hypotheses About Facebook’s Practice
     1. Social Comparison (positive posts are risky)
        • Small, observational studies
        • Correlations b/w FB use & stress, jealousy, loneliness, & depression
        • FB creates “self-promotion–envy spiral”
     2. Emotional Contagion (negative posts are risky)
        • Lab experiments & field data found happiness & depression spread via in-person social networks
     3. Null Hypothesis
     • Krasnova et al. (2013) Envy on Facebook: A Hidden Threat to Users’ Life Satisfaction? Wirtschaftsinformatik Proceedings 2013, Paper 92
     • Kross et al. (2013) Facebook Use Predicts Declines in Subjective Well-Being in Young Adults. PLoS ONE 8(8): e69841
     • Hatfield E, Cacioppo JT, Rapson RL (1993) Emotional contagion. Curr Dir Psychol Sci 2(3):96–100
     • Fowler & Christakis (2008) Dynamic spread of happiness in a large social network: Longitudinal analysis over 20 years in the Framingham Heart Study. BMJ 337:a2338
     • Rosenquist, Fowler & Christakis (2011) Social network determinants of depression. Mol Psychiatry 16(3):273–281
  3. Facebook’s PNAS Research
     • 1 week (Jan. 11–18, 2012)
     • Subjects “who viewed FB in English” randomly selected by user ID
     • 2 experiments manipulated the extent to which subjects (N = 689,003) were exposed to emotional content by removing varying %’s of it from particular news feed viewings
     • All posts remained on friends’ walls; a post not presented in 1 viewing may appear in other viewings
     • Software coded status updates as positive, negative, or neutral (“no text was seen by the researchers”)
       – Coded +/− if at least 1 +/− word, e.g., “nice” or “awful”
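A rough sketch of that coding rule, with toy word lists standing in for the LIWC 2007 dictionaries (`POSITIVE_WORDS`, `NEGATIVE_WORDS`, and `code_post` are illustrative assumptions, not the study's actual code):

```python
# Hypothetical stand-ins for the LIWC 2007 positive/negative word lists.
POSITIVE_WORDS = {"nice", "great", "happy"}
NEGATIVE_WORDS = {"awful", "sad", "angry"}

def code_post(text: str) -> dict:
    """Apply the paper's stated rule: a post counts as positive/negative
    if it contains at least 1 positive/negative word (a post can be both);
    no human reads the text."""
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return {"positive": bool(words & POSITIVE_WORDS),
            "negative": bool(words & NEGATIVE_WORDS)}

# The coding worry flagged on the Results slide, in action:
print(code_post("I'm not having a great day."))
# -> {'positive': True, 'negative': False}
```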
  4. Facebook’s PNAS Research
     Two parallel experiments; 4 conditions (for each, N = ~155,000)
     Experiment 1: Removing positive posts
     • Treatment: Each positive post “had between a 10% & 90% chance (based on user ID) of being omitted…for that specific viewing.”
     • Control: 10%–90% of 46.8% (i.e., 4.68%–42.12%) of eligible posts randomly removed w/o regard to emotional content
     Experiment 2: Removing negative posts
     • Treatment: Each negative post “had between a 10% & 90% chance (based on user ID) of being omitted…for that specific viewing.”
     • Control: 10%–90% of 22.4% (i.e., 2.24%–20.16%) of eligible posts randomly removed w/o regard to emotional content
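The paper says each omission chance was fixed "based on user ID" but doesn't say how; the hash in this sketch is one plausible scheme, chosen only to make the per-viewing mechanics concrete:

```python
import hashlib
import random

def omission_probability(user_id: int) -> float:
    """Deterministically map a user ID to a fixed chance in [0.10, 0.90].
    (An assumed mapping; the paper doesn't specify one.)"""
    digest = hashlib.sha256(str(user_id).encode()).hexdigest()
    return 0.10 + 0.80 * (int(digest[:8], 16) / 0xFFFFFFFF)

def include_in_viewing(post: dict, user_id: int, targeted: str) -> bool:
    """Treatment arm: each targeted (e.g., positive) post is dropped from
    this specific viewing with the user's fixed probability. The post
    stays on the friend's wall and may surface in later viewings."""
    if post.get(targeted):
        return random.random() >= omission_probability(user_id)
    return True
```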
  5. Results
     Compared to control subjects, subjects exposed to fewer positive posts subsequently, in their own posts:
     • Used 0.1% fewer positive words (Cohen’s d = 0.02)
     • Used 0.04% more negative words (Cohen’s d = 0.001)
     • Produced only 96.7% as many words overall
     Compared to control subjects, subjects exposed to fewer negative posts subsequently, in their own posts:
     • Used 0.07% fewer negative words (Cohen’s d = 0.02)
     • Used 0.06% more positive words (Cohen’s d = 0.008)
     • Produced only 99.7% as many words overall
     Issues of methodology & interpretation:
     • Questionable instrument: LIWC 2007 not designed for short texts like status updates
     • Questionable coding of, e.g., negation & sarcasm: “I’m not having a great day.” “Oh great.”
     • Effect on word choice ≠ effect on mood (no giggling at a funeral)
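For reference, Cohen's d is the difference between group means divided by the pooled standard deviation. A quick sketch with made-up numbers (the study's subject-level data weren't released):

```python
import statistics

def cohens_d(a: list[float], b: list[float]) -> float:
    """Standardized mean difference: (mean_a - mean_b) / pooled SD."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a) +
                  (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / pooled_var ** 0.5

# Toy data: % positive words per subject, treatment vs. control.
treated = [5.1, 5.3, 5.0, 5.2]
control = [5.2, 5.4, 5.1, 5.3]
print(cohens_d(treated, control))  # ~ -0.77 here; the study's largest |d| was 0.02
```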
  6. Tiny Effect Sizes
     [Table of Cohen’s conventional effect-size benchmarks]
     Source: Lee Becker, Effect Size Calculators, http://www.uccs.edu/lbecker/effect-size.html#Cohen
  7. Like, Really Tiny
     Cohen’s d = 0.1 (FB effects: d = 0.001–0.02)
     (Visualization tool doesn’t support effect sizes smaller than 0.1!)
     Source: Kristoffer Magnusson, Interpreting Cohen's d effect size: an interactive visualization, http://rpsychologist.com/d3/cohend/
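Under normal assumptions, two groups separated by d overlap by 2·Φ(−d/2), so it's easy to compute what the visualization tool can't show. A quick check using only the Python standard library:

```python
from statistics import NormalDist

def overlap(d: float) -> float:
    """Overlapping coefficient of two unit-variance normal distributions
    whose means differ by d standard deviations: 2 * Phi(-|d|/2)."""
    return 2 * NormalDist().cdf(-abs(d) / 2)

for d in (0.1, 0.02, 0.001):  # the tool's floor, then the FB effects
    print(f"d = {d}: {overlap(d):.2%} overlap")
# d = 0.1 -> 96.01%; d = 0.02 -> 99.20%; d = 0.001 -> 99.96%
```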
  8. Objections
     • No IRB review
     • No informed consent (except ToS)
     • No debriefing
     • Knowingly psychologically harmed users
       – EPIC: “purposefully messed with people’s minds”
       – Guardian: “deliberately made people sad”
       – Slate: “intentionally made thousands upon thousands of people sad”
  9. Were the PNAS Experiments Legally Subject to IRB Review?
     • U.S. federal & most state jurisdictions
       – Facebook: No.
       – Cornell: only if it checked the box AND was “engaged in research”
     • Maryland (never enforced?; likely unconstitutional as applied to “speech” research)
       – “A person may not conduct research using a human subject unless [he] conducts the research in accordance w/the [Common Rule].” Who “conducted” the research?
       – If HSR: at least FB. If QA/QI: neither.
  10. Was Cornell “Engaged in Research”?
      — Cornell affiliates (w/FB) designed research & wrote paper
      — FB data scientist “performed research” & “analyzed data”
      OHRP Guidance on Engagement of Institutions in HSR (Oct. 16, 2008):
      Engagement when, inter alia, researchers obtain:
      • data about subjects through intervention or interaction
      • identifiable private information about subjects
      • subjects’ informed consent
      NO engagement when, inter alia, researchers:
      • obtain coded private info from another institution involved in the research (that retains a link to individually identifying info) and are unable to readily ascertain subjects’ ID
      • author a paper, journal article, or presentation describing HSR
  11. Should Cornell’s IRB Have Reviewed It Anyway, As Good Policy?
      “In applying this guidance, it is important to note that at least one institution must be determined to be engaged in any non-exempt human subjects research project that is conducted or supported by HHS (45 CFR 46.101(a)).”
  12. What If an IRB Had Reviewed It?
      Michelle N. Meyer, John Lantos, Alex John London, Amy L. McGuire, Udo Schuklenk & Lance Stell (July 16, 2014)
      28 additional ethicist signatories: http://www.michellenmeyer.com/
  13. The Common Rule and the foundational tenets of research ethics on which it’s based require informed consent for all human subjects research.
  14. Why Isn’t Informed Consent Always Legally or Ethically Required for Human Subjects Research?
      Because research ethics is informed by more than the principle of respect for persons’ autonomy. It’s also informed by beneficence & justice. Balancing these principles yields exceptions to informed consent.
  15. Beauchamp, staff philosopher to Nat’l Comm. (1978)
      • Respect for persons’ autonomy
      • Beneficence
      • Justice
      Beauchamp & Childress (1977)
      • Respect for persons’ autonomy
      • Beneficence/Nonmaleficence
      • Justice
  16. Belmont on Beneficence & Risk-Benefit Analysis
      “Persons are treated in an ethical manner not only by respecting their decisions and protecting them from harm, but also by making efforts to secure their well-being.”
      “Beneficence . . . requires that we protect against risk of harm to subjects & also that we be concerned about the loss of the substantial benefits that might be gained from research.”
      “[B]eneficence often occupies a well-defined justifying role in many areas of research involving human subjects.”
      “Research also makes it possible to avoid the harm that may result from the application of previously accepted routine practices that on closer investigation turn out to be dangerous.”
      “[E]stimates of the probability of harm [must be] reasonable, as judged by known facts or other available studies.”
  17. Belmont on Justice & Subject Selection
      “[S]election of research subjects needs to be scrutinized in order to determine whether some classes . . . are being systematically selected simply because of their easy availability, their compromised position, or their manipulability, rather than for reasons directly related to the problem being studied.”
      • The distribution of the risks and potential benefits of research matters. Are research risks borne solely by subjects, with the potential benefits of the research enjoyed by others?
      • Or are subjects drawn from the population that stands to benefit from the research?
  18. Belmont on Respect for Persons’ Autonomy & Informed Consent
      Don’t “withhold information necessary to make a considered judgment, when there are no compelling reasons to do so.”
      “In most cases of research involving human subjects, respect for persons demands that subjects enter into the research voluntarily and with adequate information. In some situations, however, application of the principle is not obvious.”
      “A special problem of consent arises where informing subjects of some pertinent aspect of the research is likely to impair the validity of the research. In many cases, it is sufficient to indicate to subjects that they are being invited to participate in research of which some features will not be revealed until the research is concluded. In all cases of research involving incomplete disclosure, such research is justified only if it is clear that (1) incomplete disclosure is truly necessary to accomplish the goals of the research, (2) there are no undisclosed risks to subjects that are more than minimal, and (3) there is an adequate plan for debriefing subjects, when appropriate, and for dissemination of research results to them.”
  19. When Isn’t Informed Consent Legally or Ethically Required for Human Subjects Research?
      (1) When minimal risk research can’t otherwise be conducted [next slide], or
      (2) When an activity is designed to assure or improve quality rather than contribute to generalizable knowledge, in which case it doesn’t meet the Common Rule’s definition of “research” (45 C.F.R. § 46.102(d)) [data-guided practice & shifting perspectives slides]
      The Facebook experiments fit (1) & could easily have been repackaged to fit (2).
  20. §46.116(d): Waiver/Alteration of IC
      1. No more than “minimal risk”
      2. Won’t “adversely affect the rights & welfare of subjects”
      3. Couldn’t “practicably be carried out” w/o waiver/alteration
      4. Debrief “whenever appropriate”
      Minimal risk: incremental risks of research (§46.111(a)(2)) not greater than those “ordinarily encountered in daily life or during the performance of routine physical or psychological examinations or tests” (§46.102(i))
      • Risk: psychological harm from exposure to emotionally charged text [for positive posts, negative posts, neither, or both? Is this minimal risk or no foreseeable risk?]
      • How did control & treatment arms compare to user feeds controlled by FB’s algorithm in practice in Jan. 2012?
      • To the routine risk of emotionally charged text in other contexts (newspapers, Twitter, blog comment sections)?
  21. Knowledge-Producing Activities Designed to Enable Internal Data-Guided Practice
      Practice
      • Can include innovation & adaptation, but it’s ad hoc and intuition-driven
      Data-Guided Practice
      • Systematic, data-guided practice innovation & adaptation (e.g., QA/QI, CER, standard-of-care research, learning healthcare systems, A/B testing)
      • Designed to bring about immediate improvements in practice in particular settings
      • Research & practice are integrated
      • Participation not always optional (consent to minimal risk QI part of consent to underlying practice)
      Human Subjects Research
      • Classically, distinct activity occurring in isolation
      • Designed to contribute to generalizable knowledge
      • Participation usually optional
      • Often IRB review
  22. Rethinking the Research/Practice Distinction to Enable Data-Guided Practices
      Remember Tom Beauchamp, the drafter of the Belmont Report? Congress charged the Nat’l Comm. with defining the boundary b/w “biomedical and behavioral research and the accepted and routine practice of medicine.” Why? Because research was assumed to be dangerous while practice was assumed to be safe. They needed to be distinguished so that research could be singled out for heightened regulation. A lot turns on this distinction.
      Tom now thinks it was dangerously mistaken. So do I. It has led us to overprotect research participants & underprotect practice participants. Google his talk on YouTube & watch it. It’s important.
  23. Shifting Perspectives
      As we evaluate FB’s research in the foreground, let’s do so in light of FB’s practice in the background.
      First, let’s stipulate that people might be harmed by exposure to emotionally charged social media text. (If you don’t take that seriously, why are you upset about an experiment that very modestly altered the amount of such text users saw for 1 week?)
  24. Now Recall the Three Hypotheses About Facebook’s Practice
      1. Social Comparison (positive posts are risky)
      2. Emotional Contagion (negative posts are risky)
      3. Null Hypothesis (don’t worry; it’s just noise)
      How ought Facebook to respond to conflicting data about the risks of its service?
  25. Shifting Perspectives & Inverting Criticisms
      “The world is just the A of the A/B test.” — Duncan Watts, Oct. 2014, MIT CodeCon Conference
      Facebook has been accused of abusing its power by experimenting, but the alternative is to use its power to set A without knowing A’s effects on users.
      When do companies have an ethical duty to conduct an experiment as part of QA/QI?
  26. Shifting Perspectives & Inverting Criticisms
      What’s the real “experiment” (in the sense of exposure to (un)known risks)? Subjecting ~310,000 users to A/B testing or 1 billion to A? Who are the real guinea pigs?
      To avoid badly biased results, FB “withh[e]ld information necessary [for users] to make a considered judgment” about whether to participate in the experiment. The alternative was not producing (& hence withholding) data about FB’s everyday risks.
  27. Shifting Perspectives & Inverting Criticisms
      Some have invoked Kant to criticize FB for treating subjects as mere means to its corporate ends. What do you call a company that doesn’t investigate credible claims that its service harms users?
      When subjects bear little or no incremental risk from research & stand to benefit from its results, we should take a cue from the Belmont Report & the Common Rule and stop fetishizing autonomy & consent & start embracing welfare & solidarity.
      We’re all in these practices — social media, health care — together. Let’s make sure they’re safe & effective.
