Brain-Computer Interfaces (BCIs) create a direct communication channel between the human brain and a computer. This channel can allow a person to control an external object, such as an artificial limb, or allow a digital device to act on the brain itself, for instance to detect and stop epileptic seizures. A new form of BCI combined with AI is already being used experimentally for emotional therapy in psychiatric patients. I will outline this case and then discuss the ethical problems, and possible resolutions, that a corporation making AI-based appliances might consider in order to enhance the device's ethicality and marketability, and the company's reputation.
[DSC Europe 23] Kevin LaGrandeur - Brain-Computer Interfaces: Ethical issues and resolutions
1. Brain-Computer Interface Projects:
Literature, Reality, and Cultural Implications
Kevin LaGrandeur, Ph.D.
Director of Research, Global AI Ethics Institute;
President, LaGrandeur AI Ethics Consulting;
Fellow, Institute for Ethics and Emerging Technology
Brain-Computer Interfaces: Ethical issues and resolutions
4. How it works
• (Aimed at drug-resistant epilepsy, as an alternative to surgery)
• Built-in EEG detects brain activity
• AI determines if activity is pre-seizure or normal
• If pre-seizure, electro-stimulation of the brain is started and modulated by the AI
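The detect-then-stimulate cycle described above can be sketched in a few lines of code. This is only an illustrative toy: the classifier, the power threshold, and all function names are assumptions, not the actual algorithms used in responsive neurostimulation devices, which rely on trained models and clinically tuned parameters.

```python
# Toy sketch of a closed-loop BCI cycle: EEG window in, stimulation
# decision out. All names and thresholds here are hypothetical.

def classify_eeg(window):
    """Stand-in for the AI classifier: flag a window as pre-seizure
    when its mean signal power exceeds a fixed threshold."""
    power = sum(x * x for x in window) / len(window)
    return "pre-seizure" if power > 1.0 else "normal"

def closed_loop_step(window):
    """One iteration of the loop: classify, then decide whether
    stimulation should be on (modulated anew each cycle)."""
    return classify_eeg(window) == "pre-seizure"

quiet = [0.1, -0.2, 0.15, 0.05]     # low-power, normal activity
spiking = [2.0, -2.5, 3.1, -2.8]    # high-power, pre-seizure-like activity
print(closed_loop_step(quiet))      # False
print(closed_loop_step(spiking))    # True
```

Because the loop re-evaluates every window, stimulation starts quickly and stops as soon as activity normalizes, which is the source of the advantages listed on the next slide.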
5. Advantages of using AI (instead of scheduled stimulation or surgery) [1]
• Quicker control of seizures
• Fewer side effects than other modes
• Reduction of tissue damage
• Better battery life for the mechanism
6. Another type of BCI: neuromodulation of psychiatric disorders, such as depression and OCD
8. Advantages
• KEY: Works well when other treatments fail: 90% success rate
• Timing of therapeutic stimulation is much better with AI modulation
9. Nine key ethical values for making AI (consensus from 36 AI ethics reports from various countries, compiled by Harvard's Berkman Klein Center)
10. Chief problems reported by patients
• Human autonomy (whose feelings are these? who controls my thoughts?)
• Human agency (who is doing what?)
• Identity (AI-human fusion), reported as both positive and negative:
• "it became me, […] with this device I found myself… I could do anything" [3], but also
• "How much of what I think and do is still me?" This estrangement can lead to self-harm
• Privacy (how safe is patients' brain data?)
12. Possible solutions
• Make sure patients understand possible changes in their sense of self, and make sure they can handle that
• Avoid use with patients in periods of identity formation (e.g., adolescence)
• Set up very secure data storage for brain data
• Do not share patient neurological data
• Find a way to integrate rules about brain data into current regulations
• Problem: 12 privacy laws are already stalled in the US Congress
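One concrete data-protection measure in the spirit of the list above is pseudonymizing patient identifiers before brain data is stored, so stored records cannot be linked back to a patient without a secret key. The sketch below is a hypothetical illustration using only Python's standard library; the record layout and function names are assumptions, not a prescribed design.

```python
# Hypothetical sketch: pseudonymize patient IDs with a keyed hash
# (HMAC-SHA256) before storing brain data.
import hashlib
import hmac
import secrets

SECRET_KEY = secrets.token_bytes(32)  # in practice, held in a secure key store

def pseudonymize(patient_id: str) -> str:
    """Replace a patient ID with a keyed hash; without SECRET_KEY,
    the pseudonym cannot be reversed or re-derived."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

record = {
    "patient": pseudonymize("patient-042"),   # pseudonym, not the real ID
    "eeg_window": [0.1, -0.2, 0.15],          # brain data stored under it
}
```

Because HMAC is deterministic for a given key, the same patient always maps to the same pseudonym, so longitudinal records stay linkable internally while remaining unlinkable to the person if the database alone leaks.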
13. But be careful of complications of new AI regulations
• There might be tensions between these design principles
• In that case, the EU regulations indicate that the company "must establish methods of accountable deliberation"
• This means companies can't just use checklists
• They might need consultants' help to:
• Establish methods for deliberation that are accountable
• Gather skilled people to decide which tradeoffs to make
• Document those tradeoffs so the compliance officer has a record of the deliberations
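To make "documented tradeoffs" concrete, here is a minimal sketch of what a deliberation record might capture for a compliance officer. Every field name and value here is a hypothetical example, not a regulatory requirement.

```python
# Hypothetical structure for an auditable record of a design-tradeoff
# deliberation; field names and example values are illustrative only.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class TradeoffRecord:
    tension: str                  # which design principles are in conflict
    options_considered: list
    decision: str
    rationale: str
    participants: list            # the deliberation panel
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = TradeoffRecord(
    tension="privacy of brain data vs. data sharing for model improvement",
    options_considered=["share raw EEG", "share aggregates only", "no sharing"],
    decision="share aggregates only",
    rationale="preserves research value without exposing individual brain data",
    participants=["ethicist", "neurologist", "patient advocate", "engineer"],
)
print(asdict(record)["decision"])  # share aggregates only
```

Keeping such records as plain data structures means they can be serialized, timestamped, and retained, giving the compliance officer the audit trail the slide calls for.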
14. Dr. Kevin LaGrandeur (LaGrandeur AI Ethics Consultancy)
• My work is on Academia.edu: https://nyit.academia.edu/KevinLaGrandeur
• And on ResearchGate: https://www.researchgate.net/profile/Kevin-Lagrandeur/research
• LinkedIn: https://www.linkedin.com/in/kevin-lagrandeur-299a1b3/
• EMAIL: kevin.lagrandeur@gmail.com
15. References
[1] "Closed-loop brain-computer interfaces," From the Interface, 27 Sept 2020, accessed 2 Nov 2023. https://www.from-the-interface.com/closed-loop-BCIs/
[2] L. Huang and G. van Luijtelaar, "Brain computer interface for epilepsy treatment," in Brain-Computer Interface Systems, ed. Reza Fazel-Rezai (IntechOpen, 2013), ch. 12. DOI: 10.5772/55800
[3] F. Gilbert et al., "Embodiment and Estrangement: Results from a First-in-Human 'Intelligent BCI' Trial," Science and Engineering Ethics, 83–96 (2019). https://doi.org/10.1007/s11948-017-0001-5
Editor's Notes
I will be using a broad definition of Artificial Intelligence (or AI) in my talk today, because true AI is not very common yet. True AI is an automated system that can learn to improve its own function by using large sets of data. Right now it is particularly good at only a few things, such as voice recognition, visual recognition, and learning from large data sets to play a game or to navigate. So, for instance, AI can run Siri on your Apple smartphone, help you type emails via voice-to-text, or separate real emails from spam. But until we develop a more flexible Artificial General Intelligence (AGI), it will not be deeply involved in our daily lives or industry. So today I will be talking more broadly about Intelligent Automation, which is becoming quite common and which includes AI.