In April 2004, a bold experiment by the Infosecurity Tradeshow in London proved what everyone suspected: over 70% of people passing through Liverpool Street Station would reveal their password in exchange for candy (http://news.bbc.co.uk/2/hi/technology/3639679.stm). Some commentators applauded this validation of a previously unproven assumption about Londoners' attitudes towards password secrecy. Other commentators had serious ethical concerns about the experiment.
This candy-for-password experiment got me thinking about health privacy/security experiments. Many suspect that the healthcare system has serious human and technical privacy vulnerabilities, but how can we validate this suspicion? Would a patient hand over their provincial health number for a chocolate bar? Would a medical professional hand over a patient’s information for a chai latte? The more I thought about it, the more extreme – and both frightening and funny – the research projects became.
Thesis: Patients will trust claims of anonymity when the technicalities of disclosing personally identifying information are not immediately obvious.

Experiment: A group of patients with a common diagnosis are separately invited to call into a support-group conference call where they do not divulge their identities. What proportion of patients will make the call? Will that proportion increase with incentives? Do patients understand that the phone number from which they made the call is disclosed to multiple parties?

Photo credit: http://www.flickr.com/photos/seattlemunicipalarchives/

Turn on your phone. Turn the volume way up. Now dial this number…
Thesis: Patients are exposing their medical histories in ways that may negatively impact their long-term futures – employment, insurance, etc.

Scenario: Conduct deep open-source research on a population to determine what proportion have exposed potentially damaging – in terms of employment, insurance, etc. – health information. A follow-up research effort could include understanding how difficult it is to remove such material.

Photo credit: http://www.flickr.com/photos/khalidalbaih/
This woman blogged about her heart surgery in grueling detail and provided this photograph of her wrist bands.

Photo credit: http://mayajoyfully.blogspot.ca/2012/09/my-incision-decision.html
Not all patient groups seem ignorant of online privacy risks.

The issue of the disclosure of PHI in social media is clearly more complicated than it seems on the surface.
What about auditing the medical social networking sites?
Thesis: Healthcare facilities relying on Wi-Fi are vulnerable to very simple Wi-Fi attacks.

Scenario: Establish a rogue free Wi-Fi access point covering the target facility. DoS the legitimate Wi-Fi network. Monitor the rogue network for PHI.
Thesis: As demonstrated by the “candy for password” story, patients will readily part with sensitive personal information with minimal incentive.

Scenario: Nothing more than a health sector phishing campaign. It could be QR code driven, but need not be.
You know that link in that suspicious email? The one that, if you follow it, will infect your computer with a virus?

Every QR code is just like that link, except that you cannot even analyze it before following it.

We ran a QR code campaign two years in a row at the Red Deer career fair under the auspices of the Canadian Information Processing Society.

In 2012, we scattered about 200 QR codes around the venue and received 80 views; 45 of those visitors followed the map to our booth.

In 2013, we were not allowed to scatter the codes around the venue, so the code was printed in the fair’s flyer instead. We received 6 views (one was my colleague), and no one claimed to have followed the map to our booth.
Thesis: Canadian acute care networks are not capable of withstanding a sustained remote attack. If validated, there are policy implications ranging from a) determining the magnitude of the risk, to b) privacy compliance questions, to c) the exposure of critical infrastructure to terrorist attack.

Scenario: Carry out a military-style red team vs. blue team exercise on operational acute care networks – or at least create a realistic simulation of an operational acute care network as the battlespace.

Photo credit: http://www.social-engineer.org/wp-content/uploads/2010/01/se-3d-mesh-by-DigiP.jpg
Do the bad guys obey this prohibition against “breaking the rules” on airplanes or acute care networks?

Until you test it, all you know is that it is theoretically secure. So what do you do when it matters, but you cannot test it?
Thesis: A mystical practitioner – psychic, fortune teller, numerologist, etc. – not only could make a lot of money in a healthcare facility but would be a significant privacy risk while doing so.

Scenario: A psychic entertainer “reads minds” in a hospital lobby or cafeteria and comes away with a pocket full of PHI.
Credit to Dr. Khaled El Emam

Thesis: Physicians are at least as vulnerable to cyber-attack vectors based on user naïveté and social engineering as other groups. If this hypothesis can be validated, it has a significant impact on health privacy policy due to the privileged access to PHI afforded to physicians.

Experiment: Distribute specially crafted malware to physicians via USB sticks at medical conferences, by direct mail, and by other targeted methods. The malware is designed to be innocuous; its only function is to “call home” to the researchers’ simulated command and control server. How many physicians will run the malware?

Image credits:
http://www.flickr.com/photos/williamhook/
http://www.flickr.com/photos/medithit/
Thesis: An individual perceived as non-threatening and therapeutic in a clinical setting could gather significant PHI.

Scenario: A balloon-twisting clown visits acute care wards and, with either overt or hidden cameras, photographs charts, wrist bands, monitors, nursing station boards, etc. while going about his entertainment activities.

Just as everyone trusts someone in a lab coat holding a clipboard, no one suspects a clown.

Photo credit: http://www.flickr.com/photos/wonker/
Photo credit: Fujitsu
Rusty Capps, "The Spy Who Came to Work," Security Management, February 1997.