
Mind Experiences Models Experimenter Framework


MEME session @ University of Minho, course:


  1. [MEME] Framework [Mind Experiences & Models Experimenter]
  2. [Abstract]
  3. The Mind Experiences and Models Experimenter [MEME] framework uses noninvasive electroencephalography, through commercial and open brain-computer interface devices, to record the user's brain activity in the context of any specific experiment, analyzing it with machine learning models to search for patterns and singular events according to the objectives of the test. [Experiment] is the conceptual module of the framework that allows the design, recording and playback of test cases based on visual, audio and other internal or external stimuli. [Emotions] was the first experience recorded; the target was predicting "liking" and "disliking" valence and arousal reactions to affective pictures. [Music] is the module that sends notes over the Musical Instrument Digital Interface (MIDI), sequencing brain-wave values with the tempo of an editable pad sequencer; with auto-frequency mode on, the software automatically translates microvolt signals into sounds according to a frequency equivalence table.
  4. Thinking about thinking…
  5.
  6. …everybody is singular!
  7. • Reason (make judgments under uncertainty) • Consciousness • Represent knowledge (including common sense) • Learn (critical to human intelligence) • Communicate (natural language) • Self-awareness, sentience, sapience... How do I think? Cognitive, affective, conative (natural tendency, impulse)… ? Emotions
  8. Mind loop = (sensation + perception + action) * emotion
  9. • Sensation: the transformation of external events into neural activity; • Perception: the processing of sensory information; we believe the end result is a useful representation of the external objects that produced the sensations; • Action: organisms use the representation of the world in order to act on it, optimizing rewards and minimizing punishments; • Emotion is often the driving force behind motivation, positive or negative. Neural processing mechanism: Emotion
  10. Somatic marker hypothesis (SMH): Emotions, as defined by Damasio, are changes in both body and brain states in response to different stimuli. … The somatic marker hypothesis proposes that emotions play a critical role in the ability to make fast, rational decisions in complex and uncertain situations. Ventromedial prefrontal cortex
  11. Pattern Recognition Theory of Mind • Kurzweil describes a series of thought experiments which suggest to him that the brain contains a hierarchy of pattern recognizers. Based on this he introduces his Pattern Recognition Theory of Mind. He says the neocortex contains 300 million very general pattern recognition circuits and argues that they are responsible for most aspects of human thought. He also suggests that the brain is a "recursive probabilistic fractal"…
  12. EEG Devices ???
  13. Brain Computer Interface • Any BCI has four components: – Signal Acquisition: getting information from the brain; the user performs a task that produces a distinct EEG signature for that BCI; – Signal Processing: translating information into messages or commands; • Feature Extraction: salient features are extracted from the EEG; – Translation Algorithm: a pattern classification system uses these EEG features to determine which task the user performed; – Operating Environment: the BCI presents feedback to the user and forms a message or command; • Devices: robotic devices; raise events or commands in other systems;
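The four components on this slide can be sketched as a minimal pipeline. All function names and the toy threshold classifier below are hypothetical illustrations of the stages, not the framework's actual code; real BCIs use far richer signal processing.

```python
import statistics

def acquire_signal():
    """Signal acquisition: return one toy EEG window (microvolt samples)."""
    return [4120.0, 4123.5, 4119.0, 4131.0, 4125.5]

def extract_features(samples):
    """Signal processing / feature extraction: mean and spread of the window."""
    return {"mean": statistics.mean(samples), "stdev": statistics.pstdev(samples)}

def translate(features, threshold=4.0):
    """Translation algorithm: a trivial classifier over the extracted features."""
    return "task_A" if features["stdev"] > threshold else "task_B"

def operate(command):
    """Operating environment: turn the classified task into feedback/command."""
    return f"executing {command}"

window = acquire_signal()
features = extract_features(window)
command = translate(features)
print(operate(command))
```

Each stage consumes the previous stage's output, which is the essential structure the slide describes: raw signal → features → classified task → command.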
  14. [Framework]
  15. Problem Statement • Is it possible to make experiments based on sensory action/reaction stimuli, searching EEG datasets for singular events or features related to the specific objective of the experience? • Is it possible to detect human emotions from brain signals? • Is it possible to hear and see quantified representations of our thoughts?
  16. Front End: Framework – Experiment, Emotion, Music; Back End Languages: C#, Python, R, Java
  17. [Emotion]
  18. Challenge – Objective • Design and execute an experiment to predict a basic human emotion, applying ML algorithms and measuring their confidence through scores; identify basic valences through a single source of stimuli and record the datasets required for training and testing the models; – Given • Mind Experience Dataset = Spatial + Energy + Time = inputs from sensors, live or recorded – Return • Emotion (Like/Dislike) – Solution space • (EPOC, max) 14 electrodes x 128 Hz/sec, -70 to 6000 mVolts
  19. All sensor locations of the 10-20 system
  20. • Default experiment of the framework; visual stimuli resource type; using the Geneva Affective Picture Database (GAPED) to predict attraction emotions Liking/Disliking; • The mind experience: – Collect EEG data from 13 subjects; – Using the Emotiv device with 14 electrodes located at AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, and AF4; – Using 223 pictures with associated valence and arousal marks; experience of ~4 min = 30720 records per electrode, 1 frame/sec; – The 3 most important channels: AF3, F4 and FC6 (O. Sourina, Y. Liu); – In (Jones and Fox, 1992; Canli et al., 1998) it was shown that the left hemisphere was more active during positive emotions, and the right hemisphere more active during negative emotions. • To test this binary hypothesis, we collected data from the AF3 electrode, located on the left hemisphere, and from the F4 electrode, located on the right hemisphere. Experimental procedure and resources
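The record count on this slide follows directly from the sampling rate: a ~4-minute session at 128 Hz gives 128 × 240 = 30720 samples per electrode. A quick arithmetic check, using the values stated on the slides:

```python
sampling_rate_hz = 128        # EPOC sampling rate per electrode (slide 18)
session_seconds = 4 * 60      # ~4-minute mind experience (slide 20)
n_electrodes = 14             # AF3 ... AF4

records_per_electrode = sampling_rate_hz * session_seconds
total_records = records_per_electrode * n_electrodes

print(records_per_electrode)  # 30720, matching the slide
print(total_records)          # 430080 samples per subject across all sensors
```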
  21. Recording the mind experience
  22. Boxplot with the raw data summary of all subjects; the values of the sensors F7, FC5, P7, O2, T8, F4 and AF4 are invalid;
  23. Approach • Align mind experiences (correction); • Select dataset filters according to the implicit marks to identify: – Test data; – Train data; • For any ML model tested: – For each single subject: • Create ML model (random forest); • Calculate score;
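The per-subject loop on this slide can be sketched with scikit-learn. This is a minimal sketch with synthetic data; the subject IDs, feature shapes and the train/test split are assumptions for illustration, not the framework's actual code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

scores = {}
for subject in ["s01", "s02", "s03"]:          # one model per single subject
    X = rng.normal(4100, 20, size=(240, 14))   # synthetic EEG features, 14 sensors
    y = rng.integers(0, 2, size=240)           # like (1) / dislike (0) marks
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)  # implicit marks -> train/test filters
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    scores[subject] = model.score(X_test, y_test)  # per-subject accuracy

print(scores)  # with random labels, scores hover around chance (~0.5)
```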
  24. Plot with the mean of the raw data summary of all subjects, and boxplot of the means of all subjects classified by positive and negative emotions, raw data summary;
  25. Plot of positive and negative pictures by subject with the maximums, minimums and means of the AF3 sensor raw data;
  26. Plot of the recorded values of all valid sensors for the one subject with a well-defined emotional transition peak; and plot of the recorded values of only the AF3 sensor for the same subject;
  27. To build the random forest model, and following best practices in time-series analysis of brain waves, the dimensionality of the raw training data was reduced; the strategy was to apply an FFT.
  28. Heatmap of the sensor correlations; the sensor variables AF3, F3, FC6 and F8 are the most correlated. Final result of this classifier:
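The correlation matrix behind such a heatmap can be computed directly. This is a sketch with synthetic data; the real input would be the recorded sensor columns, and the shared component below is only a device to make the toy signals correlated.

```python
import numpy as np

sensors = ["AF3", "F3", "FC6", "F8"]
rng = np.random.default_rng(1)

# Synthetic signals: a shared component makes the four sensors correlated
shared = rng.normal(size=512)
data = np.stack([shared + 0.3 * rng.normal(size=512) for _ in sensors])

corr = np.corrcoef(data)           # 4x4 Pearson correlation matrix
for name, row in zip(sensors, corr):
    print(name, np.round(row, 2))  # render with e.g. matplotlib imshow as a heatmap
```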
  29. [Experiment]
  30. • By allowing the configuration of specific test cases (experiments) based on visual, audio and other external stimuli, through sequences of images, symbols, sounds and language, it is possible to search for singular events (marks), applying machine learning algorithms to build models that search for patterns in the datasets recorded with the EEG devices. Hypothesis
  31. [MEME] loop – Mind Experiences Models Experimenter • Sensation: signal acquisition from EEG sensors (live or recorded in EDF format) with "event marks" (M) set by the parameters of the configured experience (implicit) or manually sent by the user (explicit); • Perception: run machine learning models using the inputs to predict the output (M); • Action: using an event manager, any time the model predicts input values related to a specific mark associated with the experience, a command is triggered to interact with other systems; • Emotion: implementation of a simple valence emotion model (inspired by the OCC model).
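The [MEME] loop can be sketched as an event-driven cycle. All names below are hypothetical; this is a sketch of the architecture described on the slide, not the framework's code.

```python
def meme_loop(samples, model, handlers):
    """One pass of the [MEME] loop: acquire -> predict mark -> trigger command."""
    for window in samples:        # sensation: live or EDF-recorded input windows
        mark = model(window)      # perception: the ML model predicts a mark (M)
        if mark in handlers:      # action: the event manager fires a command
            handlers[mark](window)

fired = []
toy_model = lambda window: "M_like" if sum(window) > 0 else None
handlers = {"M_like": lambda w: fired.append("notify_other_system")}

meme_loop([[0.2, 0.5], [-0.4, -0.1]], toy_model, handlers)
print(fired)  # only the positive window triggered the command
```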
  32. Framework Components • BCI Sensors Layer: Emotiv EPOC SDK/Insight, Neurosky MindWave; • Core Layer: Model Experimenter, Mind Experiences, SignalAcquisitionAdapter, SignalProcessingManager; • Application Interface Layer: FramesUI, UserProfileManager, ExperiencesManager, EventsManager, ModelManager, TemplatesFactory
  33. MEME = State of Mind (Score%)
  34. Components description • User Profile Manager: – CRUD of the login related to the citizen scientist; • Signal Processing Manager: – Allows the dynamic configuration of the input EEG dataset, setting the columns (features) and rows (time) that will be used to train the model; – Applies FFT to the features expressed in raw data, reducing the dimensionality of the input EEG dataset; • Experience Manager: – Frames UI: • Design and edit the parameters of the experiment; – Name and description of the mind experience; – Type of stimuli or task to analyze; – Main sense stimuli; – Total of frame tasks; – Time of each frame task; – BCI device; – Sensors output (.csv, .edf, NoSQL cloud DB); – Edit frame task: window form with customized image, audio, video and text, also setting how to catch and record specific mouse and keyboard events sent by the user; – Template Factory: • Presets with a library of templates from saved Frames UI experiences;
  35. Components description • Model Manager: – Library of ML algorithms linked with IronPython and R: nearest neighbor classifiers, linear classifiers, nonlinear Bayesian classifiers, support vector machine classification, hidden Markov models and neural networks; – Select and set up the algorithm to validate and compare scores; – Use the recorded input EEG dataset to train the model selected from the library, automatically scoring all the possible chunks of data according to a specific sampling window related to the objective of the experiment; • Events Manager: – Record mode: • Run the selected Frames UI sequence, recording the EEG stream from the BCI device; – Play mode: • Run the Frames UI sequence with the recorded EEG stream content and predict the target of the mind experience in real time, according to the ML model selected; – Live mode: • Run the Frames UI sequence with the live EEG stream from the BCI device and predict the target of the mind experience in real time, according to the ML model selected; – Manually add marks while recording the experiences to measure stimuli from other senses (e.g. taste, external events).
  36. [Music]
  37. [MEME] music
  38. Part of the temperament table created for the auto-frequency mode of [Music], which automatically translates the EEG signal into music by synchronizing the natural value (Hz) of the brain waves with the note and the octave, using two different models based on the difference of the distance;
  39. Exponential difference between the musical notes based on Hz distance;
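The exponential spacing is the standard equal-temperament relation, f(n) = 440 · 2^((n−69)/12) for MIDI note number n. Mapping a frequency in Hz to the nearest note and octave can be sketched as follows; this illustrates the idea of the temperament table, not [Music]'s actual implementation.

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def hz_to_note(freq_hz):
    """Map a frequency in Hz to the nearest equal-temperament note and octave."""
    midi = round(69 + 12 * math.log2(freq_hz / 440.0))  # A4 = 440 Hz = MIDI 69
    return NOTE_NAMES[midi % 12], midi // 12 - 1        # MIDI 60 -> C4

print(hz_to_note(440.0))   # ('A', 4)
print(hz_to_note(10.0))    # a 10 Hz alpha wave lands far below the audible range
```

Because raw brain-wave frequencies fall below audible pitch, a practical mapping like the slide's table must also shift them up by some number of octaves (multiply by a power of 2), which preserves the note name.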
  40. [Music] next steps:
  41. Conclusions The framework uses a simple and effective approach to record and analyze information about brain activity in the context of practically any action/reaction experiment with a well-defined and specific target; finding patterns in the datasets recorded for the emotional experiment of likes and dislikes proved to be a hard task, as demonstrated by the low score of the machine learning algorithm applied (random forest); the artistic module implemented for the creation of music opens many possibilities for musicians searching for a more natural expression in their live performances.
  42. Conclusions The [MEME] framework is in a continuous process of development that could be boosted with the help of more developers once the source code is published in an open software repository; future work and improvements for the next versions: finish the development of the [Experiment] module, including an automatic method for selecting the best part of the dataset to train the models; use a cycle that compares scores automatically and avoids overfitting the model; promote a new public dataset-recording session for the [Emotion] experiment with more than 100 subjects; improve the user interface of [Music]; and start developing the [Dream] experience.
  43. Vision Software technologies that mix virtual and augmented reality with brain-computer interfaces represent the user interfaces of the future; detecting human emotions will be the best input for complex affective computing systems that can, for example, regulate the speed of an autonomous car considering the stress level of the passenger, or change the environment of an entire home according to the state of mind of the user.
  44.
  45. [bonus] [trAIck]
  46. Types of Artificial Intelligence
  47. Types of Artificial Intelligence
  48. MEME Dissertation Review – Types of Artificial Intelligence
  49. Review – Types of Artificial Intelligence
  50. Strong AI definition