Master Thesis
Engineering in Telecommunications
An iOS adaptation of the DEmotiv MATLAB
Toolbox for Real-Time EEG Functional Mapping
MIGUEL TEIGELL DE AGUSTÍN
Thesis supervised by George Zouridakis, Javier Díaz & David Iglesias
San Sebastián, JULY 2012
Acknowledgements
There are so many people I would like to mention here that I might have to start a new thesis
just for that. Of course I have to begin with my parents, Ricardo & Nuria, for all the sacrifices you
have made and for believing in me. To my sisters, Idoia and Soledad, along with my wonderful
nephews and Jef, because you are the light of my life.
To Juán, Ade, Gonzalo, Joaquín, Carlos, Matthew, Charlie, Luis, Óscar & Fran: you all are like
the brothers that I never had, so thank you for growing with me, shaping me and making me a
better person. It is an honour to share this trip called “Life” in your company. To Estefania and
Patri, because they are just incredible girls.
To the people that I found in Madrid: to Pat, because you know we are soul mates; to David,
for his incredible personality; to Cris, for sticking with me for such a long time; to Pau, Rocío and
Chejo, for guiding me during my baby steps in engineering… I haven’t forgotten; to Rosa, because
no one can get a smile out of me when I’m sad like you can; to Luis Angel and Marina, for being
the coolest among the coolest; to Nacho, for being mon copain préféré; to Irina and Guillermo, for
sticking with me when others bailed… I will always be grateful to you both. And finally, to Daniel,
for the good memories.
I was blessed with amazing people when life took me to San Sebastián: to Miren, Ibone &
Artur, Maider, Tamara, Amaia and the Arantxas, because without you I wouldn’t have been able
to do it; to Mark, for being such a good comrade when I arrived in the city; to Ainarita and Jon,
for your help, support and the good moments shared together; to Merideño, for your friendship
and help; To Unai, for caring for me and supporting me; to Eneko, for being my brother-in-arms
and a loyal friend; to Sergio, for the great farewells; to Asier, for the warm greetings; to Pedro,
for the laughs we have shared; to Pablo & Sheyla, for the inspiration, the laughs, and the support
that we have shared during these years; to Pamimo, for the good advice and to María, because
you R-O-C-K baby.
To my whole class at the university, because you are the best thing that ever happened to me… especially to Andrea, for enduring me (and I know I can get tiresome at times); to Alejandro, for being such a great guy; to Imanol, for the good moments and to Laura, for your incredible personality and because I wish I were more like you. To Beatriz, for being my friend, my confidant and for listening to my rants… and finally, to her brother, José Luís, for everything.
From the university staff I would like to thank my tutors, Íñigo Gutierrez and Manuel J. Conde, for
their on-going support during all these years; to Ainara Díez , for her charm; to Javier Díaz, for
being my thesis supervisor and to Dr. Zouridakis, for granting me the opportunity to do this work.
When I thought it couldn’t get any better, my time in Belgium proved me wrong: to Maider,
what can I say? Because you deserve it! To my dear Belgian family, Els, Koen and their kids, for
welcoming me at their home and helping me adapt to a new country; still with family, to Bart &
Patrick, for being the coolest people in Antwerp; to Carlitos, for visiting me, and for all the
moments spent together, past and future; to Pablo, for being a great friend and roommate! To
Sandra and Elçin, for all those long chats in so many languages; to Laura, because the Erasmus
doesn’t stop at Leuven! To Javier, for being such a great friend and also for those nice evenings!
As if by a whim of fate, I ended up in Houston, Texas, where even more people jumped on the wagon of my life and joined me during the last stops of this process: To Tarun, for your great
assistance during the whole process and your friendship; to Javier, for being the best neighbour
anyone could ever want! To Tatiana and her husband, Fran, for all the moments shared together;
To Juanjo, for being such a great guy and a good friend; to Valentina and Andrea, for being so nice
and cheerful; to David and Amy, for being such a lovely couple and welcoming me to their place;
to Miriam, for being a force of nature! To the guys at Global, for being such a wild bunch! To
Sirena, because, baby, we are soul mates and you know it!
And last but not least, to my wonderful friends: Ainhoa, David and Aina. For your help, your
company, your guidance and, in a few words, just for you being you. For all of those great
memories together in Houston and for all of those still to be made, thank you. It would never have been the same without you.
And I would like to end by dedicating my work to my late grandfather, Manuel, whose memory I treasure and without whom I wouldn't be half the person I am today. Thank you, grandpa, you are remembered.
Abstract
With the recent technological advancements and the proliferation of smartphones and tablets, it is possible to state that nearly everyone carries a powerful device in their pocket, with computational capabilities that match or even surpass a four-year-old desktop computer. This new status quo has not gone unnoticed by the scientific community, which has seen it as an opportunity to provide solutions to existing problems. In addition, the mobility offered by smartphones allows us to revisit previous solutions and update them to current technologies.
The DEmotiv project, the precursor of this work, assessed in real time the cognitive state of a person engaged in a specific task. To accomplish that, techniques that capture the spatiotemporal dynamics of brain activity were developed around EEG.
Electroencephalography (EEG) is a brain mapping technique based on recordings of the
electrical signals generated by cortical neurons. Typically, a set of long wires (electrodes) placed
on the scalp is connected to amplifiers. However, due to recent technological developments, the
electrodes and amplifiers have been reduced to the size of regular headphones. This setup is completely non-invasive and non-obtrusive, and allows for continuous monitoring of subjects during the performance of normal activities. In order to obtain such recordings, both projects make use of the EPOC neuroheadset: a battery-operated 14-channel wearable wireless headset designed by Emotiv.
The DEmotiv project provided a MATLAB toolbox that featured on-going EEG activity capture, surface activation map display, Granger connectivity networks, etc., with a friendly custom GUI. It is our intention to take advantage of the possibilities granted by portable devices, such as mobility. Therefore, with this project we aim to replicate the great user experience that DEmotiv offered on an iPad, a tablet manufactured by Apple Inc.
Table of Contents
Introduction
  Motivation & Objectives
DEmotiv MATLAB TOOLBOX by the Biomedical Imaging Lab (U of H)
  Background
    Neuroimaging
      Structural Imaging Techniques
      Functional Imaging Techniques
    EEG Mapping Techniques
      Cortical Mapping
      Topographic Mapping
    Connectivity Network
      Granger Causality
    Evoked Potentials
      Auditory evoked potentials: The N100 peak
  Hardware & Software
    Emotiv
      The Emotiv EPOC neuroheadset
      The Emotiv EPOC SDK
Project Design & Methodology
  Hardware
    The iMac: then and now
    The iPad 2
  Software Tools & Concepts
    Xcode 4.2 SDK
      Objective-C
      View Controllers
      Methods
      Actions
      Storyboards & the Interface Builder
  Software Design
    API: Core Plot 0.9
      Anatomy of the Graph
      Class Diagram
      Objects and Layers
      Layers
      Graphs
      Plot Area
      Plot Spaces
      Plots
      Axes
  Dissecting the Project
    The Plots
      TUTSimpleScatterPlot
      ScatterPlot2
      CustomRT
    Designing the GUI
      The Tab Controller
      Welcome Scene
      First Scene
      Second Scene
Project Execution & Results
  Achieving Real Time
  Results
    The Splash Screen
    The Welcome Screen
    The Simple Scatter Plot Test
    The Real Time Plot Test
    The Custom RT Plot
Budget
  Hardware
  Software
  Manpower
  Total costs
Conclusions
Future of the System
Bibliography
List of Figures
List of Code Snippets
List of Tables
ANNEX: Code Files
  The View Controllers
  The Plots
  Support Files
Introduction
Over recent years, mobile phones have seen great technological advancements at a fast pace, growing beyond their original purpose of connecting individuals. Featuring one, two and even four CPU cores and 4"+ high-resolution screens, mobile phones are de facto pocket computers with enough processing power to render their desktop counterparts mostly unnecessary for common daily tasks, such as Internet browsing, email, social apps, media playback, etc.
The DEmotiv project offered a user-friendly approach to neuroimaging techniques like electroencephalography (EEG) with a MATLAB toolbox that allowed the analysis and processing of brain waves captured by the compact Emotiv neuroheadset device.
This project aims to further explore the mobile aspects of its predecessor by using the aforementioned Emotiv neuroheadset along with an Apple iPad running native software (iOS 5). It was carried out at the Department of Engineering Technology at the University of Houston, based on the idea and under the supervision of Dr. George Zouridakis, and building on the work of David Iglesias López.
Motivation & Objectives
The higher purpose of this work, as it was with DEmotiv, is on one hand to provide software assistance in monitoring the cognitive state of a person engaged in any type of activity. On the other hand, the short-term purpose of this project is to adapt and port the DEmotiv MATLAB toolbox to Apple devices such as the iPad/iPhone, with the hardware and software constraints that this involves.
The final goal of the main line of research (not accomplished and outside the focus of this thesis) is the development of a software tool able to assess the cognitive state of a person engaged in any kind of activity. To accomplish that, the entire project can be summarized in three big steps: software creation, hardware verification and subject classification.
Every step contains multiple sub-objectives. The first step includes hardware selection, the creation of a graphical user interface (GUI), data acquisition, data processing, data display, and so on. Once it is accomplished, and prior to classifying the state of the subjects (third step), the device's performance needs to be tested to check the project's viability. This step includes experimentation and results analysis.
 The first and second steps are in the scope of the thesis presented here, whereas the final one is left to future work.
 The tool should contain at least the following features:
 A GUI, so that any person can use it. This way we avoid the use of command lines, often difficult for the novice or intermediate user.
 Display of on-going EEG activity; this activity needs to be separated into left and right channels. The separation of the hemispheres is very important for the analysis of some of the extracted features.
 Real-time mapping, including topographic maps, cortical activation and Granger causality connectivity networks.
 Band-pass filtering of the data 'on the fly'.
 The ability to save any recorded data into a file.
 Loading and plotting of saved data, allowing the application of all available techniques to process it.
 Display of the anatomic distribution of the headset. In case a person is using the program without possessing the actual recording device, this option will allow them to infer the electrode positioning.
 Recording of evoked potentials (EPs), receiving the stimulus from an external source and displaying the epoch average on the main screen.
DEmotiv MATLAB TOOLBOX by the Biomedical Imaging Lab (U of H)
As this project is the adaptation of DEmotiv, developed by David Iglesias at the Biomedical Imaging Lab at the University of Houston (U of H), it is of utmost importance to review the highlights of the source material in order to get a better understanding of the work done, as both projects share common goals, functions and backbones.
Background
Neuroimaging
Neuroimaging is the name given to the collection of techniques used to record, directly or indirectly, images of the brain. These techniques have been developed relatively recently and provide images that represent either the structure or the functionality of the brain.
Since any image is a construction based on a designed model, the degree of precision will always depend on a large number of factors. Also, how well the image matches reality is influenced by whether or not there is knowledge of the reference (the real object). Sometimes the model of an image is perfectly known; in such a case it is easy to determine, even by 'eye', how well the image matches the intended one.
But in some cases there is no reference available to compare the image to, leading to interesting questions such as: how could we compare anything if we do not even know what we are trying to image? Such questions are quite common in this field, as is the case with neuroimaging techniques.
Structural Imaging Techniques
There are many neuroimaging techniques in use nowadays, but two of the most common are Computed Tomography (CT) and Magnetic Resonance Imaging (MRI).
MRI images are captured by using magnetic fields to align the magnetization of some atomic nuclei in the body and RF fields to constantly alter the alignment of this magnetization. The working principle of CT, on the other hand, is based on X-rays and their absorption.
Figure 1: MRI Image
Figure 2: CT image
Functional Imaging Techniques
The purpose of Functional Imaging Techniques is to understand how the brain works, taking into account its physiology, dynamics and functional architecture. These procedures are mainly used for research purposes and are often employed as a first diagnostic tool by doctors due to their non-invasive nature.
Common neuroimaging techniques are functional magnetic resonance imaging (fMRI),
magnetoencephalography (MEG), positron emission tomography (PET) and
electroencephalography (EEG). Focus will be placed on the last one, EEG, as it is the one used for
this research. Detailed information about the other Functional Imaging Techniques is left to the
reader’s own interest.
The underlying idea of EEG is to measure the variation produced in electric and magnetic
fields by a group of neurons. A single neuron does not produce enough activity to achieve a
proper measurement from the outside, but when millions of them gather, their activity can in fact
be detected.
The recording procedure uses a set of conducting electrodes placed on the scalp that allows the detection of the electrical signals of the brain. The electrodes are placed on the head using a conducting gel. The purpose of this gel is impedance matching between the electrodes and the human tissue, so as to optimize the readings. The figure depicts the procedure:
Figure 3: EEG procedure
The location of these electrodes is not random at all and it is fixed according to the 10-20
international system, as shown below:
Figure 4: 10-20 International System
The number of channels in an EEG can vary from 8 to 256 in some cases. The signal recorded from each channel is independently connected to an amplifier with two inputs: one input belongs to the measured electrode and the other to the reference, which is common to the entire system and usually placed on the ear lobes.
The resulting product of an EEG measurement is a set of time-series data stored channel by channel. Usually these data are kept in the form of a computer file that may have different formats depending on the machine that produced it. The file has a header with information about how the recording was done; number of channels, type of data, sampling frequency and total time are some typical header parameters. The rest of the data is stored in a matrix of dimensions N×P, with N being the number of channels and P the total number of points.
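As an illustration of this layout, a minimal Objective-C container might look as follows. This is only a sketch: the class and its properties are hypothetical and not part of the Emotiv SDK or the DEmotiv toolbox.

#import <Foundation/Foundation.h>

// Hypothetical container mirroring the file layout described above:
// a header (channels, sampling frequency...) plus an N×P data matrix.
@interface EEGRecording : NSObject

@property (nonatomic, assign) NSUInteger numberOfChannels;  // N
@property (nonatomic, assign) NSUInteger pointsPerChannel;  // P
@property (nonatomic, assign) float samplingFrequency;      // e.g. 128 Hz
@property (nonatomic, assign) float *samples;               // N*P values, one row per channel

- (float)sampleForChannel:(NSUInteger)n atIndex:(NSUInteger)p;

@end

@implementation EEGRecording

// Element (n, p) of the N×P matrix.
- (float)sampleForChannel:(NSUInteger)n atIndex:(NSUInteger)p {
    return self.samples[n * self.pointsPerChannel + p];
}

@end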
The data can be obtained through two different types of recordings: epoched and continuous. Continuous recording saves the data "as it arrives". Epoched data is the result of experiments that use a repetitive stimulus. Every data frame, commonly known as an epoch or trial, contains the information recorded after a stimulus, finishing prior to the next one. It sometimes includes a short pre-stimulus period, allowing for an easier comparison of the differences between what happens before and after the stimulus onset.
Figure 5: Continuous data recording
Figure 6: Epoch data recording
EEG Mapping Techniques
The following chapter is dedicated to describing the reasons behind the EEG procedures. It will be explained why we obtain these data, how we use them and how real-world information can be inferred from them.
For such purposes, the Topographic Mapping and Cortical Mapping methodologies will be introduced.
Cortical Mapping
With the cortical mapping (CM) technique it is possible to plot the source distribution of the recorded data. Like EEG, it is non-invasive, and therefore the information gathered is the scalp distribution obtained through the electrodes. What makes it different from Topographic Mapping is that the representation of the cerebral cortex is obtained through complex mathematical models instead of through data interpolation.
If the data on the scalp is considered as the starting point, there are several methodologies to solve this problem, usually known as the inverse problem. The problem itself relates to the number of independent parameters underlying a scalp potential distribution. This number can only be less than or equal to the number of channels used during the recordings.
Hence, only some information can be extracted with confidence from the results. The inverse problem has a non-unique solution, since more than one source distribution is able to generate the same scalp map.
Figure 7: Cortical Views in 3D
As we can imagine, obtaining a CM gives the user very useful functional information. In the case of an experiment, the plot will show which brain parts are in charge of different functionalities.
Topographic Mapping
Topographic techniques use the data that has been directly recorded from the electrodes. The subject's head is simulated by plotting the data on a two- or three-dimensional model, thanks to the electrode positions being available. The mesh information is then obtained through interpolation of the data from the different electrodes. This way, any plotted point is obtained by giving every electrode a specific 'weight' depending on its distance, as sketched below.
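As a sketch of the idea, inverse-distance weighting is a common interpolation choice (not necessarily the exact scheme used by the DEmotiv toolbox): each interpolated point is a weighted average of the electrode values, with weights that decay with distance.

#include <math.h>

// Inverse-distance weighted interpolation at point (x, y).
// ex, ey: electrode coordinates; v: electrode values; n: number of electrodes.
static float InterpolateIDW(float x, float y,
                            const float *ex, const float *ey,
                            const float *v, int n) {
    float num = 0.0f, den = 0.0f;
    for (int i = 0; i < n; i++) {
        float dx = x - ex[i], dy = y - ey[i];
        float d2 = dx * dx + dy * dy;
        if (d2 < 1e-12f) return v[i];  // exactly on an electrode: use its value
        float w = 1.0f / d2;           // weight decays with squared distance
        num += w * v[i];
        den += w;
    }
    return num / den;
}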
Figure 8: Topographic View in 2D
As shown in the previous figure, a colour scale is commonly used. It indicates the level of activation, with red usually being the maximum level of activation and blue the minimum.
It can be concluded, then, that Topographic Mapping allows the user to observe, at a single glance, which parts of the brain are active at any specific time point. Some of TM's characteristics are:
 2D/3D plots
 3D Rotation
 Colour Scaling
 Automatic & User-Defined Scale Values
Connectivity Network
The last mapping feature implemented by the original project is a connectivity network based on Granger causality. These networks let us see the connections established among the different parts of the brain. It is important to keep in mind that any signal generated in the brain by a neuronal population may be influenced by others.
Depending on the aim and interest of the study, an ordinary frequency coherence analysis or a cross-correlation between time signals may not be enough to reveal brain connectivity. If, by evaluating a signal's cross-correlation, it is possible to improve a future prediction, it means that past values have influence over future ones.
Figure 9: Brain Connectivity Network
From this, we can infer that any future value may not be completely random, and that by analysing these influences the future values can be better estimated. After lightly reviewing these concepts, it is possible to get a better understanding of Granger Causality, whose methodology is applied to generate the network.
Granger Causality
Wiener first, and Granger later, developed the idea that asserts the following: "If the prediction of a time series A can be improved by knowing a second one, B, then B is said to have a causal influence on A". That means that, by knowing B and its past values, it is possible to improve the prediction of A by reducing the prediction error.
One of the biggest advantages of using Granger causality is that, unlike other methods, it is not invasive. There are methods for analysing connectivity patterns that may require surgery, with the associated risk of brain lesions. In this case, only the collected EEG signal is needed.
Anil K. Seth compared the causal connectivity of complex networks obtained from very rich brain activities with data collected from much simpler experiments. The results of his analysis suggested that complex networks show a strong causal flow compared with simpler ones. These results were very interesting for the community, opening a new avenue for neuroscience analysis.
 Graphical representation:
The most common way to represent Granger causality is through a graph, specifically one that
looks like this:
Figure 10: Granger Causality graph
In the image it is possible to notice various elements.
First, there are the nodes (circles) of the network, each of which represents one channel of the recorded data. This example is an 8-channel network for the sake of simplicity, although an actual graph could reach hundreds of nodes.
Then, there are the arrows connecting the nodes, which indicate a causal relationship between them. This relationship can be either one-way (like 2 to 3) or both ways (like 8 and 7), meaning that both channels have influence over each other.
Typical characteristics of Granger causality include:
 Level of influence: the bigger the arrow, the bigger the influence. A colour scale is sometimes used as well.
 Causal density: it defines the percentage of significant connections over the total.
 Causal flow: a single-node characteristic. It measures the difference between the ingoing and outgoing flow, classifying the node as a sink (ingoing), a source (outgoing) or an inter-node (equal).
 Causal reciprocity or causal disequilibrium: they are related to the degree of reciprocity within a neural network.
Not all real connections are represented in a GC figure. Only those whose value is bigger than a specified threshold are considered important enough or, in other words, statistically significant. The setting of the threshold is usually automatic, but can be left to the user's choice.
 Mathematical model:
The motivation behind this section is simply to understand the basis of Granger causality. As mentioned before, time-series signals are used in the model. Let us suppose two of these signals are represented in autoregressive form:
X(t) = \sum_{j=1}^{p} a_j X(t-j) + \sum_{j=1}^{p} b_j Y(t-j) + \varepsilon_X(t)    (1)

Y(t) = \sum_{j=1}^{p} c_j Y(t-j) + \sum_{j=1}^{p} d_j X(t-j) + \varepsilon_Y(t)    (2)
As can be observed from the equations, the value of the signal X depends on its own past values (first summation term), on the past values of the other signal (second summation term), and on an error term. If the variance of the error in predicting X is reduced by the inclusion of Y in the equation, then Y is said to cause X.
The entire model is developed from these equations, calculating the statistical interdependence between the variables.
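In the standard formulation (a textbook definition, not spelled out in the original material), the magnitude of the causal influence of Y on X is quantified by comparing the residual variances of the restricted model, where X is predicted from its own past only, and the full model of equation (1):

F_{Y \to X} = \ln \frac{\mathrm{var}(\varepsilon_1)}{\mathrm{var}(\varepsilon_2)}

where \varepsilon_1 is the prediction error of X using only its own past and \varepsilon_2 is the error term of equation (1). F_{Y \to X} is greater than zero whenever including the past of Y reduces the prediction error of X.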
At this point, it is known that one variable can cause another, or have a significant influence on its future value. But what happens when there are more than two variables? The model above is known as the bivariate model, and there also exists the so-called multivariate model. Going back to figure 10 and focusing on the path through nodes 7-8-1, it is obvious that node 7 has a bidirectional influence with node 8, but also that node 8 is causing node 1. But is it certain that node 8 is the one causing node 1? What if node 7 is having a causal influence over node 1 through node 8? To avoid such problematic questions, multivariate models are used.
Multivariate GC models must perform a huge number of calculations, and this number increases exponentially with the number of recorded channels. The operations are usually run on a dedicated computing cluster prepared for that purpose. Due to the nature of this project, the GC implemented here is quite simple; otherwise, achieving real-time operation would be completely unrealistic.
Evoked Potentials
This is the term used for the brain potentials measured after the presentation of a stimulus. Their amplitude tends to be very low when compared with on-going EEG activity, and can vary from less than a microvolt to a few microvolts. Any recorded EEG data includes a baseline signal due to biological and random noise. To overcome this low amplitude of the response against the background, many trials, rather than a single experiment, are required.
Since most of the baseline is randomly generated, conducting and averaging a large number of experiments will average out the noise, allowing the relevant EP signal, which is the response to the stimulus, to remain. To observe the response, 100 or more trials are usually conducted.
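A minimal sketch of that averaging step, assuming the epochs have already been cut out of the continuous recording and stored contiguously (the function is hypothetical, not the project's actual code):

// Average `trials` epochs of `len` samples each into `avg`.
// The random background activity cancels out; the stimulus-locked EP remains.
static void AverageEpochs(const float *epochs, int trials, int len, float *avg) {
    for (int t = 0; t < len; t++) {
        float sum = 0.0f;
        for (int k = 0; k < trials; k++) {
            sum += epochs[k * len + t];  // sample t of epoch k
        }
        avg[t] = sum / trials;
    }
}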
These responses measured when a particular stimulus is applied are also known as event-related potentials (ERPs). While evoked potentials are related to the response to a physical stimulation, ERPs are caused by high-level processing, involving memory, attention or other changes in the cognitive state.
There are many different EP subclasses available. Visual evoked potentials (VEPs), auditory evoked potentials (AEPs) and somatosensory evoked potentials (SSEPs) are some of them. The experiments in this project use auditory evoked potentials.
Auditory evoked potentials: The N100 peak
For an auditory evoked potential, the stimulus or event delivered is a sound. The signal generated by the sound ascends through the auditory pathway, and AEPs are used to trace it. As with any sound, the evoked potential is generated in the cochlea, making its way through the midbrain to finally arrive at the cerebral cortex.
AEPs were chosen for this project for their simplicity. Any EP consists of a series of consecutive positive and negative potential peaks. These peaks appear after multiple trials are averaged, and allow for the analysis. AEPs have a singular feature: a negative peak placed around 100 ms after the stimulus delivery. This peak is called N100, and it is quite easy to observe in the averaged signal.
 The N100 peak:
It is a large negative potential measured by EEG during certain EP experiments, and it is also known as N1. It peaks between 80 and 120 milliseconds after the onset of the stimulus. The N100 is distributed mainly over the fronto-central region of the scalp. It is often followed by a positive peak at 200 ms, and together they are known as the N100-P200 complex.
The N100 is generated in the primary auditory cortex, located in Heschl's gyrus on the superior temporal gyrus, as shown in figure 11 below. The N1-generating area is not the same in both hemispheres; the one in the right hemisphere is larger.
Figure 11: N100 generation
The N100 is involved in perception due to the strong dependence of its amplitude on the rise time of a sound's onset and on its loudness. It may almost disappear when the subject of the experiment has control of the stimuli, for example when the person uses a switch to trigger the stimulus, or when the stimulus is the person's own voice. It is also weaker when the stimulation is repetitive and stronger when it is randomly generated. The explanation adopted by experts for this effect is quite interesting: the attenuation seems to be linked to the person's intelligence, as it is stronger in individuals with higher intelligence.
Hardware & Software
There are multiple EEG devices available on the market, but not all of them fulfil the design requirements.
The majority of the EEG machines available possess a characteristic that makes them inappropriate for this project: size. Typical EEG machines are installed in hospitals or research laboratories.
Taking into account that the final goal of this project is to assess in real time the cognitive state of a person engaged in normal daily activities, the size of the device is a major constraint. If we want the subject to be able to move around (as in this case) while the procedure is being applied, the device also needs to be wireless and wearable.
In this case, the price may or may not be a constraint; it depends on the available funds. However, it is important to say that such technologies are not cheap at all, and the price of a device can reach thousands of dollars. This being the first approach to the project's development, buying a very expensive device only to realize that the goal is not achievable would not be a good idea. We needed to make a trade-off between quality, price and size, with size obviously being the most restrictive feature.
An overview of the hardware requirements:
 Size: it needs to be as small as possible.
 Wireless: wiring prevents the subject's movement.
 Wearable: the entire device needs to be 'on' the subject.
 Price: the one that allows for the best signal quality while keeping a small size.
 Quality: obviously, the higher the better. However, current technology does not allow for a high-quality device with the previously mentioned features without increasing the price to something unrealistic.
Emotiv
Emotiv is an Australian company that has introduced to the market a breakthrough interface technology for digital media, taking inputs directly from the brain. The Emotiv EPOC neuroheadset is probably their flagship product: a high-resolution neuro-signal acquisition and processing device. The headset uses a set of sensors to tune into the electric signals produced by the brain and connects wirelessly to most Windows PCs.
There were different versions available in the Emotiv store, including developer, research, enterprise and education editions. The purchased package was the Education Edition SDK. This edition is designed for academic and educational institutes undertaking research with no direct financial benefit, and can be used by any staff member of the department for teaching or research purposes.
The Education Edition SDK contains the following:
 A research neuroheadset package.
 Emotiv software toolkit.
 User’s license.
 A saline solution for the electrodes.
The Emotiv EPOC neuroheadset
The Emotiv neuroheadset bundle contains the headset itself, 14 spare electrodes adjustable to the device, a battery charger, a USB receiver and a bottle of saline solution. The headset has 14 high-resolution channels based on the international 10-20 locations. Those channels are: AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8 and AF4. It also includes CMS/DRL references in the P3/P4 locations and a gyroscope, represented as two more channels, GYROX and GYROY, in the recorded data.
The device has an internal sampling rate of 2048 Hz per channel. This signal is heavily filtered to remove artifacts (noise due to movement, eye blinks, respiration, mains interference, etc.) and then downsampled to 128 Hz. There is a hardware low-pass filter in each channel preamplifier with an 85 Hz cutoff, and a high-pass filter on each input with a cutoff frequency of 0.16 Hz.
The headset has an LED with three different colour options: red, blue and green. This light should be blue when the device is switched on, red when the battery is charging and green when it is fully charged.
Figure 12: The Emotiv EPOC neuroheadset
As mentioned before, the device works wirelessly with a lithium battery that provides 12 hours of autonomy. Communication between the PC and the headset is also done wirelessly, using a USB receiver over an encrypted proprietary protocol that is decrypted by the bundled software.
Figure 13: The USB Receiver
The bundle also contains a complete spare sensor kit, including an EPOC hydrator pack and 16 fully-assembled felt-based sensor assemblies with gold-plated contacts, shown in the next figure.
Figure 14: The Electrode set
The Emotiv EPOC SDK
In addition to the EPOC neuroheadset hardware, the Emotiv Education SDK provides a complete software toolkit exposing the Emotiv APIs and detection libraries. It includes EmotivControlPanel.exe, EmotivComposer.exe, EmoKey.exe, header files and import libraries, and sample code. EmoComposer & EmoKey are support tools that make it possible to develop for the headset even without the physical device. The SDK provides an effective development environment that integrates well with new and existing frameworks.
There are three detection suites incorporated in the kit: the Affectiv suite, the Cognitiv suite and the Expressiv suite. The first one is used to monitor the user's emotional states in real time, the Cognitiv suite reads the user's conscious thoughts and intentions, and the Expressiv suite interprets facial expressions.
Emotiv Control Panel is the main application and the one used in this project’s development.
It includes the suites explained above and other features. The Headset Setup panel is displayed by
default when starting Emotiv Control Panel. The main function of this panel is to display contact
quality feedback for the neuroheadset’s EEG sensors and provide guidance for fitting the headset
correctly. It is extremely important to achieve the best possible contact quality before proceeding
to the other panel tabs. A poor contact quality will result in poor detection results and a low
quality signal.
This is a screenshot of the Emotiv control panel:
Figure 15: Emotiv Control Panel
The image on the left is a representation of the sensors' locations. Each circle represents a sensor and its approximate location when wearing the headset. The colour of each circle represents the contact quality. There are five possible colours: black, red, orange, yellow and green.
The colours represent:
 Black: No signal
 Red: Very poor signal
 Orange: Poor signal
 Yellow: Fair signal
 Green: Good signal
To achieve the best possible contact quality, all sensors should be shown in green. In case some of them are not green (as shown in figure 15), it is recommended to relocate the headset or, ultimately, to use more saline solution on the failing electrodes. It is important to check the state of the contacts regularly when working with the device.
Other features displayed here are the wireless signal level, the battery level and the selected user. Finally, there are three other tabs for the suites. However, those suites are not used in the present project.
To end the Emotiv software’s review, we provide other useful characteristics:
EEG display:
 5 second rolling time window (chart recorder mode)
 ALL or selected channels can be displayed
 Automatic or manual scaling (individual channel display mode)
 Adjustable channel offset (multi-channel display mode)
 Synchronized marker window
Gyro display:
 5 second rolling time window (chart recorder mode)
 X and Y deflection
Data Packet display:
 5 second rolling graph of Packet Counter output
 Packet loss – integrated count of missing data packets
 Verify data integrity for wireless transmission link
Data recording and Playback:
 Fully adjustable slider, play/pause/exit controls
 Subject and record ID, date, start time recorded in file naming convention
Project Design & Methodology
This chapter will go deeper into the different design decisions that had to be made for the project and that depart from the original DEmotiv MATLAB Toolbox. These include hardware and software tool selections, as well as methodologies and implementations.
Hardware
From the hardware point of view, the development of this project was restricted to Apple devices, as this is required in order to develop for the iOS platform.
Development itself was done on a 2011 iMac using the developer SDK known as Xcode.
The target device to run the iOS adaptation of the DEmotiv was an iPad 2. The reasons behind this choice, as well as other features of the iPad, will be explained in detail in the following sections.
The iMac: then and now
Over the years, the iMac series has seen many tweaks in both hardware and design. The first model was introduced in 1998 and was known as the iMac G3. Back then, the design consisted of a CRT screen embedded inside an oval plastic body. The plastic was translucent, and the machine arrived on the market in different colours.
25
Figure 16: the 1998 iMac G3 series
The original iMac G3 hardware was built around the PowerPC architecture, developed by IBM, Apple and Motorola, and it ran Mac OS 8.1. Choosing the PowerPC architecture had many consequences. On one hand, Apple had more control over the software, because they were the main providers of PowerPC computers; but on the other hand, the increasing popularity of the rival operating system, Windows by Microsoft, put them in a difficult position. The Windows OS could be used on any computer built around Intel's x86 architecture, and because of that, there were several hardware providers making Windows-capable PCs. The increasing hardware supply lowered the price of Windows PCs, and they therefore dominated the market.
Acknowledging these problems, Apple decided in 2006 to change course and switch all its computer hardware to the x86 platform. This event was a turning point in Apple's history, and it led to an enormous increase in its market share. A welcome side effect of turning to x86 was the compatibility of Mac computers with Windows software through the Boot Camp feature of current iMacs.
The computer chosen to perform our development is a 2011 iMac from Apple. The iMac is the all-in-one solution of Apple's catalogue, and it features, in a single aluminium case, a full-fledged Mac PC with an embedded 21.5-inch LCD screen. It is quite easy to transport and install, taking up little space.
Figure 17: the 2011 iMac
According to the manufacturer, the specifications of our computer are:
 21.5-inch (viewable) LED-backlit glossy widescreen TFT display with support for
millions of colors
 2.5GHz quad-core Intel Core i5 with 6MB on-chip shared L3 cache
 4GB (two 2GB) of 1333MHz DDR3 memory
 500GB (7200 rpm) hard drive
 AMD Radeon HD 6750M graphics processor with 512MB of GDDR5 memory
The iPad 2
In 2010, Apple revolutionised the market by introducing a new concept of mobile computing: the iPad. The best way to define it would be as a hybrid between a current-generation smartphone and a laptop computer. It found great success, accounting for 70% of global tablet computer sales during barely its first year on the market.
Like its siblings, the iPhone and the iPod Touch, it is controlled by a touchscreen instead of a stylus, and the usual input of information is done through an on-screen virtual keyboard. It serves the same purposes as well (audio-visual media including books, periodicals, movies, music, games, apps and web content), but in a unique form factor that can substitute for a laptop in most daily chores.
Figure 18: The iPad Family
From the software point of view, it shares the same operating system as the other portable "iProducts": iOS. As the last member of the family to be conceived, it came to the market with a wide pre-existing library of applications that were already available for the iPhone and the iPod Touch.
Talking about the hardware, it is quite a powerful device:
Model: iPad | iPad 2 | iPad (3rd generation)
Initial OS: iOS 3.2 | iOS 4.3 | iOS 5.1
Highest supported OS: iOS 5.1.1 | iOS 6 | iOS 6
Display: 9.7 in (250 mm), 4:3 aspect ratio, scratch-resistant glossy glass-covered LED-backlit IPS LCD screen, fingerprint-resistant oleophobic coating, 16,777,216 colours (24-bit); 1024×768 px (XGA) at 132 ppi, 800:1 contrast ratio | same as iPad | 2048×1536 px resolution (264 ppi)
Processor: 1 GHz ARM Cortex-A8 Apple A4 SoC (64 KB L1 + 512 KB L2) | 1 GHz (dynamically clocked) dual-core ARM Cortex-A9 Apple A5 SoC (64 KB L1 + 512 KB L2) | dual-core Apple A5X SoC
Graphics processor: PowerVR SGX535 GPU | PowerVR SGX543MP2 GPU | PowerVR SGX543MP4 GPU
Storage: 16, 32 or 64 GB (all models)
Memory: 256 MB LPDDR DRAM | 512 MB dual-channel LPDDR2 DRAM | 1 GB
Material: contoured aluminium back and bezel; on 3G and 4G models, plastic for the cellular radio
Bezel colour: black | black or white | black or white
Battery: built-in rechargeable lithium-ion polymer; 3.75 V, 24.8 W·h (6613 mA·h) | 3.8 V, 25 W·h (6579 mA·h) | 3.7 V, 42.5 W·h
Rated battery life (all models): browsing 10 hours (Wi-Fi), 9 hours (3G or 4G); audio 140 hours; video 10 hours; standby 1 month
Table 1: iPad Family Specs
Both the powerful hardware and the versatility of iOS were key reasons behind our decision to use the iPad for the porting of the DEmotiv MATLAB Toolbox. But why the iPad and not the iPhone, which runs the same OS and has almost identical processing power?
In a few words: the screen size. It will be explained that, due to the universal nature of iOS development, all source code can easily be compiled for the different iProducts, but it was decided to target the iPad because, as already explained, the DEmotiv Toolbox displays 14 brain signals (7 for each hemisphere) and the bigger screen eases the visibility of the signals.
Software Tools & Concepts
The soul of the project is software-based, and in this section both the tools used for development and the different APIs involved will be explained. The entire application was developed in Objective-C with the Xcode 4.2 SDK on Mac OS Lion.
The project itself saw many modifications during its course: implementing and testing functionalities, adding new ones and removing those that did not work as expected. The review will focus on the different implemented features, how and why they were added, the methodologies used, flow charts to explain the code, and the main problems faced along the way and their solutions (where available).
Xcode 4.2 SDK
Xcode 4.2 is the Integrated Development Environment used in this project, and it is the most important tool when developing for Mac OS or iOS. It is available to all Mac OS Lion users through the Mac App Store, and it includes most of Apple's developer documentation to assist the user.
It supports C, C++, Objective-C, Objective-C++, Java, AppleScript, Python and Ruby source
code with a variety of programming models, including but not limited to Cocoa, Carbon, and Java.
Although version 4.2 was used for this project, Xcode sees constant updates that implement new functionalities and fix bugs; at the time of writing, the latest version is 4.3.2.
Figure 19: Xcode Welcome Screen
More functions of this program will now be explained. This is Xcode's main window, divided into different sections:
Figure 20: Xcode Main Window
1. This window displays the content selected with the set of buttons labelled 6, which from left to right are:
Figure 21: Navigation buttons
a. Project navigator: the Project’s File Tree Structure. All the files related to the
project are placed here in a tree structure. No matter if they are source code
files or supporting files such as texts or images, they will all be present on this
part of this part of the screen and can be sorted in folders and sub-folders to
accommodate the user’s needs.
b. Symbol navigator: it lets the user browse the project's classes.
c. Search navigator: provides a search function for both files and code.
d. Issue navigator: it displays the “issues” that showed up during compilation. By
issues, Xcode means important information that does not interrupt
compilation but that the developer should review in order to prevent bugs.
e. Debug navigator: shows low level information related to debugging the
project.
f. Breakpoint navigator: shows the breakpoints placed by the developer to
handle the debugging process.
g. Log navigator: it serves as the project log, registering compilation times, debugging times, etc.
2. Main Window: this part of the screen shows whatever file or function has been chosen. If it is a source code file, the source code will be presented here; in the case of an image, the image itself will be shown here.
3. The windows on the right comprise:
a. The File Inspector: it shows attributes and details of the selected element, such as Identity and Type, Localization, etc.
Figure 22: The File Inspector
b. The Quick Help Inspector: it shows information about code selected within a source code file. For instance, it will show the class hierarchy and/or definition of an element in the code.
Figure 23: Quick Help Inspector example
c. The Libraries: the bottom part displays the element libraries for Code Snippets, Media, File Templates and Objects. They allow the user to access elements through a GUI with several shortcuts instead of programmatically.
Figure 24: The File template Library
4. At the top-right corner of the window there are 3 groups of buttons that control the
Xcode GUI:
a. The Editor buttons handle the way the main window of the GUI displays information. They provide single-page view or two-page split-view options, for instance.
b. The View buttons allow the user to hide different parts of the Xcode GUI.
c. The Organizer provides access to Help and Documentation.
Figure 25: Editor, View and Organizer
5. The top-left corner shows the Play/Stop buttons, used to compile and run the project, and the project's name.
6. The navigation buttons, already explained in 1.
7. The bottom windows show supplemental information during debugging.
Objective-C
The Objective-C language is a simple computer language designed to enable sophisticated
object-oriented programming. Objective-C is defined as a small but powerful set of extensions to
the standard ANSI C language. Its additions to C are mostly based on Smalltalk, one of the first
object-oriented programming languages. Objective-C is designed to give C full object-oriented
programming capabilities, and to do so in a simple and straightforward way.
Most object-oriented development environments consist of several parts:
 An object-oriented programming language
 A library of objects
 A suite of development tools
 A runtime environment
View Controllers
View controllers are a vital link between an app’s data and its visual appearance. Whenever
an iOS app displays a user interface, the displayed content is managed by a view controller or a
group of view controllers coordinating with each other. Therefore, view controllers provide the
skeletal framework on which to build the apps.
iOS provides many built-in view controller classes to support standard user interface pieces,
such as navigation and tab bars.
Figure 26: View Controllers by Apple
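A minimal custom view controller subclasses UIViewController and overrides the lifecycle methods it needs. The sketch below is illustrative (the class name is made up); the project's real controllers are listed in the annex.

#import <UIKit/UIKit.h>

@interface PlotViewController : UIViewController
@end

@implementation PlotViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    // One-time setup after the view hierarchy is loaded from the storyboard.
}

- (void)viewWillAppear:(BOOL)animated {
    [super viewWillAppear:animated];
    // Refresh the displayed data just before the view becomes visible.
}

@end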
Methods
Methods are functions that are defined in a class. Objective-C supports two types of methods:
instance methods and class methods. Instance methods can be called only using an instance of
the class. Instance methods are prefixed with the minus sign (-) character.
Class methods can be invoked directly using the class name and do not need an instance of
the class in order to work. Class methods are prefixed with the plus sign (+) character.
In some programming languages, such as C# and Java, class methods are known as static methods. In Objective-C, the anatomy of a method is:
-(void)doSomething:(NSString *)str
    withAnotherPara:(float)value {
    //---implementation here---
}
Code Snippet 1: example of a method
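To make the +/- distinction concrete, here is a hedged example with both kinds of method and how each is invoked; the class and its behaviour are made up for illustration.

#import <Foundation/Foundation.h>

@interface SignalFilter : NSObject
+ (SignalFilter *)defaultFilter;  // class method (+): invoked on the class itself
- (float)apply:(float)sample;     // instance method (-): requires an instance
@end

@implementation SignalFilter

+ (SignalFilter *)defaultFilter {
    return [[SignalFilter alloc] init];
}

- (float)apply:(float)sample {
    return 0.5f * sample;  // trivial placeholder behaviour
}

@end

// Usage:
//   SignalFilter *f = [SignalFilter defaultFilter];  // class method call
//   float y = [f apply:0.42f];                       // instance method call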
Actions
An action is a method that can handle events raised by views (for example, when a button is clicked) in the View window. An outlet, on the other hand, allows your code to programmatically reference a view in the View window.
Action methods must have a conventional signature. The UIKit framework permits some
variation of signature, but both platforms accept action methods with a signature similar to the
following:
- (IBAction)doSomething:(id)sender;
Code Snippet 2: example of an Action
The type qualifier IBAction, which is used in place of the void return type, flags the declared method as an action so that Interface Builder is aware of it. For an action method to appear in Interface Builder, we must first declare it in a header file of the class whose instance is to receive the action message.
Storyboards & the Interface Builder
For the design of GUIs, Xcode provides a very useful tool called the Interface Builder. The main purpose of this tool is to let the developer build a graphical user interface in an easy manner, providing visual elements to use instead of code.
Figure 27: Xcode's Interface Builder
With the introduction of iOS 5 and the release of Xcode 4.2, the Interface Builder gained support for a new way of developing GUIs known as Storyboards.
A storyboard is a visual representation of the user interface of an iOS application, showing
screens of content and the connections between those screens. A storyboard is composed of a
sequence of scenes, each of which represents a view controller and its views; scenes are
connected by segue objects, which represent a transition between two view controllers.
Xcode provides a visual editor for storyboards, where we can lay out and design the user
interface of an application by adding views such as buttons, table views, and text views onto
scenes. In addition, a storyboard enables us to connect a view to its controller object, and to
manage the transfer of data between view controllers. Using storyboards is Apple’s
recommended way to design the user interface of an application because they enable us to
visualize the appearance and flow of the user interface on one canvas.
Figure 28: Explaining storyboards by Apple
On iPhone, each scene corresponds to a full screen’s worth of content; on iPad, multiple
scenes can appear on screen at once—for example, using popover view controllers. Each scene
has a dock, which displays icons representing the top-level objects of the scene. The dock is used
primarily to make action and outlet connections between the view controller and its views.
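Regarding the transfer of data between view controllers mentioned above, a minimal sketch of the standard mechanism, prepareForSegue:sender:, follows; the class, property and segue names here are hypothetical:
// Called automatically before a segue fires; hypothetical names throughout.
- (void)prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender
{
    if ([segue.identifier isEqualToString:@"ShowDetail"]) {
        DetailViewController *detail = segue.destinationViewController;
        detail.channelName = @"Left Channel"; // hand the data to the next controller
    }
}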
UI Button
A button is one of the UI elements most used during the development of this project. Its purpose is to execute one or more methods when it is pressed on the screen.
This is done by declaring an Action (containing all the code that shall be executed when the button is pressed) in the header file and linking that Action to the button in the Storyboard.
Figure 29: Button Example
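A minimal sketch of the implementation behind such an Action, using the hypothetical names from the header sketch shown earlier, could be:
// Runs when the user taps the button wired to this Action in the Storyboard.
- (IBAction)doSomething:(id)sender {
    self.statusLabel.text = @"Button tapped"; // any code to execute goes here
}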
Software Design
Developing a new application can sometimes become a hit-and-miss process. In order to avoid that, it is important to have a clear idea of what the goal of the application is and what it should feature. In short, we should ask ourselves: what do we want the application to do? Of course this is limited by the usual scarce resources, time and manpower, so it is important to set realistic goals.
After that, the next question that should be addressed is: how do we do it? And that is exactly what will be detailed in the following section.
API: Core-plot 0.9
Soon enough during the design of the project it became obvious that the plotting of signals would be an essential part of development. As seen in the original Matlab toolbox, the application needs to display 14 channels of brain activity, 7 for each hemisphere.
Although Apple provides several libraries at the developer's disposal, there are no specific APIs to ease the process of coding a function plot, and for that reason it was necessary to look for other solutions.
The Core-plot API is a community developed library that provides a plotting framework for OS
X and iOS. It provides 2D visualization of data, and is tightly integrated with Apple technologies
like Core Animation, Core Data, and Cocoa Bindings.
Figure 30: Core-Plot
It is free of charge and available to everyone through its Google Project homepage at http://code.google.com/p/core-plot/ under the open source BSD license.
This API represents the core of the project and as such, it will be explained in detail in the following sub-sections. Before delving into the classes that make up Core Plot, it is worth considering the design goals of the framework. Core Plot has been developed to run on both Mac
OS X and iOS. This places some restrictions on the technologies that can be used: AppKit drawing
is not possible, and view classes like NSView and UIView can only be used as host views. Drawing
is instead performed using the low-level Quartz 2D API, and Core Animation layers are used to
build up the various different aspects of a graph.
It's not all bad news, because utilizing Core Animation also opens up a whole range of
possibilities for introducing 'eye-candy'. Graphs can be animated, with transitions and effects. The
objective is to have Core Plot be capable of not only producing publication quality still images, but
also stunning graphical effects and interactivity.
Another objective that is influential in the design of Core Plot is that it should behave as much
as possible from a developer's perspective as a built-in framework. Design patterns and
technologies used in Apple's own frameworks, such as the data source pattern, delegation, and
bindings, are all supported in Core Plot.
Anatomy of the Graph
This diagram shows a standard bar graph with two data sets plotted. Below, the chart has
been annotated to show the various components of the chart, and the naming scheme used in
Core Plot to identify them.
Figure 31: Official anatomy of a graph in core-plot
Class Diagram
This standard UML class diagram gives a static view of the main classes in the framework. The
cardinality of relationships is given by a label, with a '1' indicating a to-one relationship, and an
asterisk (*) representing a to-many relationship.
Figure 32: Official Class diagram of Core-plot
Objects and Layers
This diagram shows run time relationships between objects (right) together with layers in the
Core Animation layer tree (left). Colour coding shows the correspondence between objects and
their corresponding layers.
Figure 33: Official Objects and Layers diagram
Layers
Core Animation's layer class, CALayer, is not very suitable for producing vector images, as
required for publication quality graphics, and provides no support for event handling. For these
reasons, Core Plot layers derive from a class called CPTLayer, which itself is a subclass of CALayer.
CPTLayer includes drawing methods that make it possible to produce high quality vector graphics,
as well as event handling methods to facilitate interaction.
The drawing methods include:
-(void)renderAsVectorInContext:(CGContextRef)context;
-(void)recursivelyRenderInContext:(CGContextRef)context;
-(NSData *)dataForPDFRepresentationOfLayer;
Code Snippet 3: the drawing methods
When subclassing CPTLayer, it is important that you don't just override the standard
drawInContext: method, but instead override renderAsVectorInContext:. That way,
the layer will draw properly when vector graphics are generated, as well as when drawn to the
screen.
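As an illustration, a minimal sketch of such a subclass (a hypothetical CrosshairLayer, not part of the project) could be:
@interface CrosshairLayer : CPTLayer
@end

@implementation CrosshairLayer

// Override the vector rendering method, not drawInContext:, so the layer
// draws correctly both on screen and when exported as vector graphics.
-(void)renderAsVectorInContext:(CGContextRef)context
{
    [super renderAsVectorInContext:context]; // let CPTLayer draw its background first
    CGRect b = self.bounds;
    CGContextMoveToPoint(context, CGRectGetMinX(b), CGRectGetMidY(b));
    CGContextAddLineToPoint(context, CGRectGetMaxX(b), CGRectGetMidY(b));
    CGContextStrokePath(context);
}

@end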
Graphs
The central class of Core Plot is CPTGraph. In Core Plot, the term 'graph' refers to the
complete diagram, which includes axes, labels, a title, and one or more plots (eg histogram, line
plot). CPTGraph is an abstract class from which all graph classes derive.
A graph class is fundamentally a factory: It is responsible for creating the various objects that
make up the graphic, and for setting up the appropriate relationships. The CPTGraph class holds
references to objects of other high level classes, such as CPTAxisSet, CPTPlotArea, and
CPTPlotSpace. It also keeps track of the plots (CPTPlot instances) that are displayed on the graph.
@interface CPTGraph : CPTBorderedLayer {
    @private
    CPTPlotAreaFrame *plotAreaFrame;
    NSMutableArray *plots;
    NSMutableArray *plotSpaces;
    NSString *title;
    CPTTextStyle *titleTextStyle;
    CPTRectAnchor titlePlotAreaFrameAnchor;
    CGPoint titleDisplacement;
    CPTLayerAnnotation *titleAnnotation;
    CPTLegend *legend;
    CPTLayerAnnotation *legendAnnotation;
    CPTRectAnchor legendAnchor;
    CGPoint legendDisplacement;
}

@property (nonatomic, readwrite, copy) NSString *title;
@property (nonatomic, readwrite, copy) CPTTextStyle *titleTextStyle;
@property (nonatomic, readwrite, assign) CGPoint titleDisplacement;
@property (nonatomic, readwrite, assign) CPTRectAnchor titlePlotAreaFrameAnchor;
@property (nonatomic, readwrite, retain) CPTAxisSet *axisSet;
@property (nonatomic, readwrite, retain) CPTPlotAreaFrame *plotAreaFrame;
@property (nonatomic, readonly, retain) CPTPlotSpace *defaultPlotSpace;
@property (nonatomic, readwrite, retain) NSArray *topDownLayerOrder;
@property (nonatomic, readwrite, retain) CPTLegend *legend;
@property (nonatomic, readwrite, assign) CPTRectAnchor legendAnchor;
@property (nonatomic, readwrite, assign) CGPoint legendDisplacement;

-(void)reloadData;
-(void)reloadDataIfNeeded;
-(NSArray *)allPlots;
-(CPTPlot *)plotAtIndex:(NSUInteger)index;
-(CPTPlot *)plotWithIdentifier:(id <NSCopying>)identifier;
-(void)addPlot:(CPTPlot *)plot;
-(void)addPlot:(CPTPlot *)plot toPlotSpace:(CPTPlotSpace *)space;
-(void)removePlot:(CPTPlot *)plot;
-(void)removePlotWithIdentifier:(id <NSCopying>)identifier;
-(void)insertPlot:(CPTPlot *)plot atIndex:(NSUInteger)index;
-(void)insertPlot:(CPTPlot *)plot atIndex:(NSUInteger)index intoPlotSpace:(CPTPlotSpace *)space;
-(NSArray *)allPlotSpaces;
-(CPTPlotSpace *)plotSpaceAtIndex:(NSUInteger)index;
-(CPTPlotSpace *)plotSpaceWithIdentifier:(id <NSCopying>)identifier;
-(void)addPlotSpace:(CPTPlotSpace *)space;
-(void)removePlotSpace:(CPTPlotSpace *)plotSpace;
-(void)applyTheme:(CPTTheme *)theme;
@end
Code Snippet 4: The CPTGraph Class
CPTGraph is an abstract superclass; subclasses like CPTXYGraph are actually responsible for doing most of the creation and organization of graph components. Each subclass is usually associated with particular subclasses of the various layers that make up the graph. For example, the CPTXYGraph creates instances of CPTXYAxisSet and CPTXYPlotSpace.
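For instance, a minimal sketch of instantiating the concrete subclass and letting it build its components, optionally applying one of the framework's stock themes, would be:
// The concrete subclass creates its own CPTXYAxisSet and CPTXYPlotSpace.
CPTXYGraph *graph = [[CPTXYGraph alloc] initWithFrame:CGRectZero];

// Themes configure the look of the whole graph in one call.
[graph applyTheme:[CPTTheme themeNamed:kCPTPlainWhiteTheme]];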
Plot Area
The plot area is that part of a graph where data is plotted. It is typically bordered by axes, and
grid lines may also appear in the plot area. There is only one plot area for each graph, and it is
represented by the class CPTPlotArea. The plot area is surrounded by a CPTPlotAreaFrame, which
can be used to add a border to the area.
Plot Spaces
Plot spaces define the mapping between the coordinate space, in which a set of data exists,
and the drawing space inside the plot area.
For example, if you were to plot the speed of a train versus time, the data space would have
time along the horizontal axis, and speed on the vertical axis. The data space may range from 0 to
150 km/hr for the speed, and 0 to 180 minutes for the time. The drawing space, on the other
hand, is dictated by the bounds of the plot area. A plot space, represented by a descendant of the
CPTPlotSpace class, defines the mapping between a coordinate in the data space, and the
corresponding point in the plot area.
It is tempting to use the built-in support for affine transformations to perform the mapping
between the data and drawing spaces, but this would be very limiting, because the mapping does
not have to be linear. For example, it is not uncommon to use a logarithmic scale for the data
space.
To facilitate as wide a range of data sets as possible, values in the data space can be stored
internally as NSDecimalNumber instances. It makes no sense to store values in the drawing space
in this way, because drawing coordinates are represented in Cocoa by floating point numbers
(CGFloat), and any extra precision would be lost.
A CPTPlotSpace subclass must implement methods for transforming from drawing
coordinates to data coordinates, and for converting from data coordinates to drawing
coordinates.
-(CGPoint)plotAreaViewPointForPlotPoint:(NSDecimal *)plotPoint;
-(CGPoint)plotAreaViewPointForDoublePrecisionPlotPoint:(double *)plotPoint;
-(void)plotPoint:(NSDecimal *)plotPoint forPlotAreaViewPoint:(CGPoint)point;
-(void)doublePrecisionPlotPoint:(double *)plotPoint forPlotAreaViewPoint:(CGPoint)point;
Code Snippet 5: Plot spaces
Data coordinates, represented here by the 'plot point', are passed as a C array of NSDecimals or doubles. Drawing coordinates, represented here by the 'view point', are passed as standard CGPoint values.
Whenever an object needs to perform the transform from data to drawing coordinates, or
vice versa, it should query the plot space to which it corresponds. For example, instances of
CPTPlot (discussed below) are each associated with a particular plot space, and use that plot
space to determine where in the plot area they should draw.
It is important to realize that a single graph may contain multiple plots, and that these plots
may be plotted on different scales. For example, one plot may need to be drawn with a
logarithmic scale, and a separate plot may be drawn on a linear scale. There is nothing to prevent
both plots appearing in a single graph.
For this reason, a single CPTGraph instance can have multiple instances of CPTPlotSpace. In
the most common cases, there will only be a single instance of CPTPlotSpace, but the flexibility
exists within the framework to support multiple spaces in a single graph.
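As a usage sketch, converting the train example's data coordinate into a drawing coordinate (assuming an existing CPTXYPlotSpace *plotSpace) would look like this:
NSDecimal plotPoint[2];
plotPoint[CPTCoordinateX] = CPTDecimalFromFloat(30.0f);   // time: 30 minutes
plotPoint[CPTCoordinateY] = CPTDecimalFromFloat(120.0f);  // speed: 120 km/h

// Where inside the plot area should this data point be drawn?
CGPoint viewPoint = [plotSpace plotAreaViewPointForPlotPoint:plotPoint];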
Plots
A particular representation of data in a graph is known as a 'plot'. For example, data could be
shown as a line or scatter plot, with a symbol at each data point. The same data could be
represented by a bar plot/histogram.
A graph can have multiple plots. Each plot can derive from a single data set, or different data
sets: they are completely independent of one another.
Although it may not seem like it at first glance, a plot is analogous to a table view. For
example, to present a simple line plot of the speed of a train versus time, we need a value for the
speed at different points in time. This data could be stored in two columns of a table view, or
represented as a scatter plot. In effect, the plot and the table view are just different views of the
same model data.
What this means is that the same design patterns used to populate table views with data can
be used to provide data to plots. In particular, we can either use the data source design pattern,
or we can use bindings. To provide a plot with data using the data source approach, you set the
dataSource outlet of the CPTPlot object, and then implement the data source methods.
@protocol CPTPlotDataSource <NSObject>

-(NSUInteger)numberOfRecords;

@optional

// Implement one of the following
-(NSArray *)numbersForPlot:(CPTPlot *)plot field:(NSUInteger)fieldEnum recordIndexRange:(NSRange)indexRange;
-(NSNumber *)numberForPlot:(CPTPlot *)plot field:(NSUInteger)fieldEnum recordIndex:(NSUInteger)index;

@end
Code Snippet 6: Providing data to plots
It is possible to think of the field as being analogous to a column identifier in a table view, and the record index as being analogous to the row index. Each type of plot has a fixed number of fields. For example, a scatter plot has two: the value for the horizontal axis (x) and the value for the vertical axis (y). An enumerator in the CPTScatterPlot class defines these fields.
typedef enum _CPTScatterPlotField {
    CPTScatterPlotFieldX,
    CPTScatterPlotFieldY
} CPTScatterPlotField;
Code Snippet 7: The ScatterPlot X and Y values
A record is analogous to the row of a table view. For a scatter plot, it corresponds to a single
point on the graph.
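A minimal sketch of a data source following the protocol as quoted above, serving ten hypothetical records to a scatter plot, could be (note that exact method names vary slightly between Core Plot versions; later versions use numberOfRecordsForPlot:):
// Data source sketch: x is the record index, y is a made-up sample value.
-(NSUInteger)numberOfRecords
{
    return 10;
}

-(NSNumber *)numberForPlot:(CPTPlot *)plot field:(NSUInteger)fieldEnum recordIndex:(NSUInteger)index
{
    if (fieldEnum == CPTScatterPlotFieldX) {
        return [NSNumber numberWithUnsignedInteger:index];     // x: the record index
    }
    return [NSNumber numberWithFloat:(index * index) / 10.0f]; // y: a sample value
}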
Plot classes not only support the data source design pattern, but also Cocoa bindings, as a means of supplying data. This is again very similar to the approach taken with table views: each field of the plot, analogous to a table column, gets bound to a key path via an NSArrayController.
CPTGraph *graph = ...;
CPTScatterPlot *boundLinePlot = [[[CPTScatterPlot alloc] initWithFrame:CGRectZero] autorelease];
boundLinePlot.identifier = @"Bindings Plot";
boundLinePlot.dataLineStyle.lineWidth = 2.f;
[graph addPlot:boundLinePlot];

[boundLinePlot bind:CPTScatterPlotBindingXValues toObject:self withKeyPath:@"arrangedObjects.x" options:nil];
[boundLinePlot bind:CPTScatterPlotBindingYValues toObject:self withKeyPath:@"arrangedObjects.y" options:nil];
Code Snippet 8: Cocoa Bindings example
The superclass of all plot classes is CPTPlot. This is an abstract base class; each subclass of
CPTPlot represents a particular variety of plot. For example, the CPTScatterPlot class is used to
draw line and scatter plots, while the CPTBarPlot class is used for bar and histogram plots.
A plot object has a close relationship to the CPTPlotSpace class discussed earlier. In order to
draw itself, the plot class needs to transform the values it receives from the data source into
drawing coordinates. The plot space serves this purpose.
Axes
Axes describe the scale of the plotting coordinate space to the viewer. A basic graph will have
just two axes, one for the horizontal direction (x) and one for the vertical direction (y), but this is
not a constraint in Core Plot --- you can add as many axes as you like. Axes can appear at the sides
of the plot area, but also on top of it. Axes can have different scales, and can include major and/or
minor ticks, as well as labels and a title.
Each axis on a graph is represented by an object of a class descended from CPTAxis. CPTAxis is
responsible for drawing itself, and accessories like ticks and labels. To do this it needs to know
how to map data coordinates into drawing coordinates. For this reason, each axis is associated
with a single instance of CPTPlotSpace.
A graph can have multiple axes, but all axes get grouped together in a single CPTAxisSet
object. An axis set is a container for all the axes belonging to a graph, as well as a factory for
creating standard sets of axes (eg CPTXYAxisSet creates two axes, one for x and one for y).
Axis labels are usually textual, but there is support in Core Plot for custom labels: any Core Animation layer can be used as an axis label by wrapping it in an instance of the CPTAxisLabel class.
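As an illustration of this mechanism, a minimal sketch of attaching a custom label to the x axis could be (assuming an existing axisSet, and that the axis' automatic labeling policy has been switched off so the custom labels are used):
// Wrap a Core Animation text layer in a CPTAxisLabel and attach it to the x axis.
CPTTextLayer *layerContent = [[CPTTextLayer alloc] initWithText:@"stimulus"];
CPTAxisLabel *label = [[CPTAxisLabel alloc] initWithContentLayer:layerContent];
label.tickLocation = CPTDecimalFromFloat(0.0f); // data coordinate of the label
axisSet.xAxis.axisLabels = [NSSet setWithObject:label];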
Dissecting the Project
This sub-section will go into detail on the different functionalities of the project; for that purpose, the different files that make up the project will be presented.
The Plots
In this sub-section, the different plots that make up the project will be presented in chronological order, pointing out the evolution of the project as development advanced.
The plot files follow the canonical structure defined by the Core-plot 0.9 API but, of course, several changes were implemented in order to fit our specific needs. The full code for each plot file will not be displayed here, as it can be found at the end of this document, but important parts will be highlighted nevertheless.
TUTSimpleScatterPlot
This is the first plot developed. It is a simple scatter plot that draws given points, which must be fed to the plot, over an X-Y axis. Notable parts are:
The graph object that will host the scatter plot is created, and then the Plot Area that will be used to draw the plot is hard-coded:
// Create a graph object which we will use to host just one scatter plot.
CGRect frame = [self.hostingView bounds];
self.graph = [[CPTXYGraph alloc] initWithFrame:frame];

// Add some padding to the graph, with more at the bottom for axis labels.
self.graph.plotAreaFrame.paddingTop = 20.0f;
self.graph.plotAreaFrame.paddingRight = 20.0f;
self.graph.plotAreaFrame.paddingBottom = 50.0f;
self.graph.plotAreaFrame.paddingLeft = 20.0f;

// Tie the graph we've created with the hosting view.
self.hostingView.hostedGraph = self.graph;
Code Snippet 9: setting the plot area
For the axes, it is possible to set a title, a line style (colour, width), and the position in the plot area.
// Modify the graph's axis with a label, line style, etc.
CPTXYAxisSet *axisSet = (CPTXYAxisSet *)self.graph.axisSet;

axisSet.xAxis.title = @"Data X";
axisSet.xAxis.titleTextStyle = textStyle;
axisSet.xAxis.titleOffset = 30.0f;
axisSet.xAxis.axisLineStyle = lineStyle;
axisSet.xAxis.majorTickLineStyle = lineStyle;
axisSet.xAxis.minorTickLineStyle = lineStyle;
axisSet.xAxis.labelTextStyle = textStyle;
axisSet.xAxis.labelOffset = 3.0f;
axisSet.xAxis.majorIntervalLength = CPTDecimalFromFloat(2.0f);
axisSet.xAxis.minorTicksPerInterval = 1;
axisSet.xAxis.minorTickLength = 5.0f;
axisSet.xAxis.majorTickLength = 7.0f;

axisSet.yAxis.title = @"Data Y";
axisSet.yAxis.titleTextStyle = textStyle;
axisSet.yAxis.titleOffset = 40.0f;
axisSet.yAxis.axisLineStyle = lineStyle;
axisSet.yAxis.majorTickLineStyle = lineStyle;
axisSet.yAxis.minorTickLineStyle = lineStyle;
axisSet.yAxis.labelTextStyle = textStyle;
axisSet.yAxis.labelOffset = 3.0f;
axisSet.yAxis.majorIntervalLength = CPTDecimalFromFloat(10.0f);
axisSet.yAxis.minorTicksPerInterval = 1;
axisSet.yAxis.minorTickLength = 5.0f;
axisSet.yAxis.majorTickLength = 7.0f;
Code Snippet 10: coding the Axis
The plot area position and the axis values must be chosen wisely so that the plot uses as much of the available area as possible. For that purpose:
// Set up some floats that represent the min/max values on our axes.
float xAxisMin = -10;
float xAxisMax = 10;
float yAxisMin = 0;
float yAxisMax = 100;

// We modify the graph's plot space to set up the axes' min/max values.
CPTXYPlotSpace *plotSpace = (CPTXYPlotSpace *)self.graph.defaultPlotSpace;
plotSpace.xRange = [CPTPlotRange plotRangeWithLocation:CPTDecimalFromFloat(xAxisMin)
                                                length:CPTDecimalFromFloat(xAxisMax - xAxisMin)];
plotSpace.yRange = [CPTPlotRange plotRangeWithLocation:CPTDecimalFromFloat(yAxisMin)
                                                length:CPTDecimalFromFloat(yAxisMax - yAxisMin)];
Code Snippet 11: Binding Axis and plot spaces
The last part to mention is the code that binds the different elements together: the plot area, the graph and the plot, with a given line style:
// Add a plot to our graph and axis. We give it an identifier so that we
// could add multiple plots (data lines) to the same graph if necessary.
CPTScatterPlot *plot = [[CPTScatterPlot alloc] init];
plot.dataSource = self;
plot.identifier = @"mainplot";
plot.dataLineStyle = lineStyle;
plot.plotSymbol = plotSymbol;
[self.graph addPlot:plot];
Code Snippet 12: binding graph and plots
The mechanics of the elements described above are maintained in the other plots of the project and will therefore be omitted from their descriptions to avoid redundancy.
ScatterPlot2
This is the evolution of the previous plot, TUTSimpleScatterPlot, and represents a middle step in the project. The aim of this plot was to design the final form that would host a channel and, with it, 7 different plots at the same time in one graph.
From this plot file it is worth mentioning the implementation of the numberForPlot: method:
-(NSNumber *)numberForPlot:(CPTPlot *)plot field:(NSUInteger)fieldEnum recordIndex:(NSUInteger)index
{
    NSArray *data1 = [self.graphData objectAtIndex:0];
    NSArray *data2 = [self.graphData objectAtIndex:1];
    NSArray *data3 = [self.graphData objectAtIndex:2];
    NSArray *data4 = [self.graphData objectAtIndex:3];
    NSArray *data5 = [self.graphData objectAtIndex:4];
    NSArray *data6 = [self.graphData objectAtIndex:5];
    NSArray *data7 = [self.graphData objectAtIndex:6];

    NSValue *val1 = [data1 objectAtIndex:index];
    NSValue *val2 = [data2 objectAtIndex:index];
    NSValue *val3 = [data3 objectAtIndex:index];
    NSValue *val4 = [data4 objectAtIndex:index];
    NSValue *val5 = [data5 objectAtIndex:index];
    NSValue *val6 = [data6 objectAtIndex:index];
    NSValue *val7 = [data7 objectAtIndex:index];

    CGPoint point1 = [val1 CGPointValue];
    CGPoint point2 = [val2 CGPointValue];
    CGPoint point3 = [val3 CGPointValue];
    CGPoint point4 = [val4 CGPointValue];
    CGPoint point5 = [val5 CGPointValue];
    CGPoint point6 = [val6 CGPointValue];
    CGPoint point7 = [val7 CGPointValue];
Code Snippet 13: numberForPlot: data gathering
Seven arrays are created. Each of them stores the data points, in CGPoint format, of one plot. They are called point1 up to point7.
    switch (fieldEnum)
    {
        case CPTScatterPlotFieldX:
        {
            return [NSNumber numberWithFloat:point1.x];
        }
        case CPTScatterPlotFieldY:
        {
            if ([plot.identifier isEqual:@"mainplot"]) {
                return [NSNumber numberWithFloat:point1.y];
            } else if ([plot.identifier isEqual:@"mainplot2"]) {
                return [NSNumber numberWithFloat:point2.y];
            } else if ([plot.identifier isEqual:@"mainplot3"]) {
                return [NSNumber numberWithFloat:point3.y];
            } else if ([plot.identifier isEqual:@"mainplot4"]) {
                return [NSNumber numberWithFloat:point4.y];
            } else if ([plot.identifier isEqual:@"mainplot5"]) {
                return [NSNumber numberWithFloat:point5.y];
            } else if ([plot.identifier isEqual:@"mainplot6"]) {
                return [NSNumber numberWithFloat:point6.y];
            } else if ([plot.identifier isEqual:@"mainplot7"]) {
                return [NSNumber numberWithFloat:point7.y];
            }
        }
    }
    return [NSNumber numberWithFloat:0];
}
Code Snippet 14: choosing data point in numberForPlot:
After that, the mechanism designed for Core-plot to know which data points correspond to which line style and settings is the use of an identifier. The identifiers chosen for this file are mainplot through mainplot7, seven in total. Each point data array is therefore assigned to one mainplot line style within the method, and the consequence is that the seven plots that form the graph are drawn together but with different characteristics.
CustomRT
This is the final form of the plot. There are several interesting new features implemented in this plot file that will be addressed.
readData:
This method controls the data gathering/selection of the final plot. It receives a Boolean parameter that determines whether to plot data of the right or left channel.
As mentioned before, the voltage points that define the Y axis of the plots to draw are stored in a text file called MiguelUNIX.txt. There is code within this method to open the aforementioned file, read it line by line, and extract the comma-separated figures, each of which corresponds to a voltage reading of a signal.
After that, through a timer, the method triggers another method chained to it, called newDataFromFile:. Every time that method is called, one single value is drawn for every one of the 7 plots that form the channel, so in order to achieve real-time plotting the method is called repeatedly.
The call to newDataFromFile: from within readData: is made through a timer that works in an endless loop. The timer is called dataTimer and it is triggered using a global constant, defined at the beginning of the plot file, called kFrameRate2. This constant lets us control how often the timer fires, therefore calling the method and drawing the points. In other words, it lets us control the speed at which the 7 signals (plots) of the graph are drawn onscreen.
Below is a flow chart that describes the basics of the implementation, followed by the actual code:
Figure 34: readData: flow chart
- (void)readData:(BOOL)leftChannel
{
    if (leftChannel) {
        leftChannelFlag = TRUE;
    } else {
        leftChannelFlag = FALSE;
    }

    [plotData1 removeAllObjects];
    [plotData2 removeAllObjects];
    [plotData3 removeAllObjects];
    [plotData4 removeAllObjects];
    [plotData5 removeAllObjects];
    [plotData6 removeAllObjects];
    [plotData7 removeAllObjects];

    NSString *filePath = @"MiguelUNIX";
    NSString *fileRoot = [[NSBundle mainBundle] pathForResource:filePath ofType:@"txt"];

    // read everything from the text file
    NSString *fileContents = [NSString stringWithContentsOfFile:fileRoot
                                                       encoding:NSUTF8StringEncoding
                                                          error:nil];

    // first, separate by new line
    allLinedStrings = [fileContents componentsSeparatedByCharactersInSet:
                          [NSCharacterSet newlineCharacterSet]];

    readIndex = 0;
    dataTimer = [NSTimer timerWithTimeInterval:0.1 / kFrameRate2
                                        target:self
                                      selector:@selector(newDataFromFile:)
                                      userInfo:nil
                                       repeats:YES];
    [[NSRunLoop mainRunLoop] addTimer:dataTimer forMode:NSDefaultRunLoopMode];
}
Code Snippet 15: the method readData:
newDataFromFile:
This method creates the final 7 data point arrays, called myArray[i], that store the voltage values from MiguelUNIX.txt to be drawn on the plot.
Each array is then linked to its corresponding plot, called thePlot. At the final part of the method we can find the code dedicated to getting the point from the data array and drawing it on the screen. Here another global constant comes into play, called kMaxDataPoints2. The purpose of this constant is to set the maximum number of points that appear on a single screen per plot; therefore, it could be said that it doubles as the "Span" factor. By increasing or decreasing this value it is possible to show more or fewer data points over the axes.
The following figure shows a simplified flow chart of how this method works:
Figure 35: newDataFromFile: flow chart
The code for this method is extensive and as such, for the sake of clarity, it is not included in this section, but the reader can find it in the code file library at the end of this document.
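Still, a minimal sketch of the core scrolling logic, following the flow chart above and assuming the instance variables already introduced (allLinedStrings, readIndex, plotData1 through plotData7, leftChannelFlag), could look like this:
-(void)newDataFromFile:(NSTimer *)theTimer
{
    // Break the current line into its comma-separated voltage values.
    NSString *line = [allLinedStrings objectAtIndex:readIndex];
    NSArray *singleStrs = [line componentsSeparatedByString:@","];

    // Left channel reads columns 0-6 of the file, right channel columns 7-13.
    NSUInteger offset = leftChannelFlag ? 0 : 7;

    // Keep at most kMaxDataPoints2 points on screen (the "Span" factor).
    if ([plotData1 count] >= kMaxDataPoints2) {
        [plotData1 removeObjectAtIndex:0];
    }
    float voltage = [[singleStrs objectAtIndex:offset] floatValue];
    [plotData1 addObject:[NSNumber numberWithFloat:voltage]];

    // ...the same is repeated for plotData2 through plotData7 (offsets +1 to +6)...

    readIndex++;
    [self.graph reloadData]; // redraw the plots with the updated data
}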
pauseTimer:
This plot also features a way to pause the drawing process of the 7 plots in the graph. This is achieved by manipulating the timers that trigger the drawing process.
-(void)pauseTimer:(NSTimer *)timer {
    pauseStart = [NSDate dateWithTimeIntervalSinceNow:0];
    previousFireDate = [timer fireDate];
    [timer setFireDate:[NSDate distantFuture]];
}
Code Snippet 16: the pauseTimer: method
resumeTimer:
Obviously, the implementation of pauseTimer: required an opposite method to resume the timers:
-(void)resumeTimer:(NSTimer *)timer {
    float pauseTime = -1 * [pauseStart timeIntervalSinceNow];
    [timer setFireDate:[NSDate dateWithTimeInterval:pauseTime sinceDate:previousFireDate]];
}
Code Snippet 17: the method resumeTimer:
Designing the GUI
The Storyboard file for the iOS adaptation of the DEmotiv Toolbox looks as follows:
Figure 36: Project's Storyboard
As can be observed, a tab-based structure was chosen for the interface. The tab functionality is provided by the Tab Controller on the left. Although that Scene is not a view that users can see by itself, it provides the framework needed for the 3 other views to work together through a tab-based menu.
The other scenes are actual screens of the application and they are simply called Welcome Scene, First Scene and Second Scene. As mentioned before when explaining the Interface Builder, each of these scenes is backed in code by its own View Controller class; these have been named accordingly WelcomeController, FirstViewController and SecondViewController.
From this point on, the different files and classes that form the GUI will be explained. The full code can be found as an appendix at the end of this document, but relevant code snippets will still be presented in order to assist the explanations and facilitate a better understanding of the work.
The Tab Controller
This view is hidden from the user, as it provides the working functionality of the tab menu at the bottom of every scene.
Figure 37: Tab Controller
The application has 3 different views, each corresponding to a tab. For that purpose there are 3 buttons on the Tab Menu, one for each tab. By tapping on them it is possible to browse among the tabs.
The Welcome tab is the one the user arrives at when launching the application.
Welcome Scene
This is the first view that the user arrives at, and it looks like this:
Figure 38: Welcome Screen
The final form of the welcome screen wasn't implemented in time, but the current one has a welcome message, a text field intended to hold the final application's instructions, and the Biomedical Imaging Lab logo.
Figure 39: Biomedical Imaging Lab logo
There is also a button called Plot, with no current purpose, that was used for plot tests. No graph is plotted on the welcome screen because the UI elements reduce the available area for the graph to be displayed.
As it was desired to maximize the size of the plot, the other tabs, 1st and 2nd, were created. By doing this it is possible to use the whole screen of the iPad to display the graph.
First Scene
The 1st and 2nd Scenes have very similar GUI features. So why keep both? It was the natural consequence of trials and testing during the development of the project.
Before getting to the last version of the Custom Real Time Plot, several other plots were tried at different steps, and in order to test them (and keep them separate from the other plots) it was necessary to have Scenes available as test benches.
The 1st Scene looks like this:
Figure 40: 1st Scene
It has a very simple structure. There is a big white area that represents the Graph Hosting View for the plots, and buttons to launch two different plot tests. The scene is controlled by the FirstViewController class, which can be found in the source code on the CD.
The buttons trigger two different test plots:
• Scatter Plot Test:
This was the first plot achieved with the Core-plot API. The other plots, as well as the final one, were built upon this one by adding functionalities and making modifications to the code. Let's take a look at the trigger code from FirstViewController.m:
NSMutableArray *data = [NSMutableArray array];
[data addObject:[NSValue valueWithCGPoint:CGPointMake(-10, 75)]];
[data addObject:[NSValue valueWithCGPoint:CGPointMake(-8, 50)]];
[data addObject:[NSValue valueWithCGPoint:CGPointMake(-6, 30)]];
[data addObject:[NSValue valueWithCGPoint:CGPointMake(-4, 10)]];
[data addObject:[NSValue valueWithCGPoint:CGPointMake(-2, 5)]];
[data addObject:[NSValue valueWithCGPoint:CGPointMake(0, 0)]];
[data addObject:[NSValue valueWithCGPoint:CGPointMake(3, 10)]];
[data addObject:[NSValue valueWithCGPoint:CGPointMake(4, 25)]];
[data addObject:[NSValue valueWithCGPoint:CGPointMake(5, 53)]];
[data addObject:[NSValue valueWithCGPoint:CGPointMake(7, 70)]];
[data addObject:[NSValue valueWithCGPoint:CGPointMake(10, 75)]];

self.scatterPlot = [[TUTSimpleScatterPlot alloc] initWithHostingView:_graphHostingView andData:data];
[self.scatterPlot initialisePlot];
Code Snippet 18: triggering Scatter Plot Test
As can be seen in the code snippet above, the plot's base file is TUTSimpleScatterPlot, which was previously explained. The data points for the plot are manually created one by one and stored in an NSMutableArray called data, and then they are fed to the triggering method initWithHostingView:andData:.
• Real Time Test:
This is the evolution of the previous plot. It is based on the ScatterPlot2 file and represents a middle step between the Scatter Plot Test and the final plot.
Second Scene
The 2nd Scene serves as the screen for the final version of the brain plot achieved in this project, and it looks quite similar to the 1st one. It is supported in code by the SecondViewController class and has the following appearance:
Figure 41: The 2nd Scene
At first glance we can see that in this case there is not one, but two Graph Hosting Views (in white and beige). The reason behind that is that 2 plots will be drawn simultaneously. As mentioned in the Plot section, the final plot is meant to draw 7 signals that correspond to either the left or the right channel. By having 2 graphs on the same screen it is possible to draw both channels at the same time, displaying all 14 channels.
• Plot:
Through this button the drawing of plots is triggered. The required commands are:
// action to plot both channels
- (IBAction)Plot:(id)sender {
    self.customRT1 = [[CustomRT alloc] initWithHostingView:_graphHostingViewLeft];
    [self.customRT1 readData:TRUE];
    [self.customRT1 initialisePlot:@"Left Channel"];

    self.customRT2 = [[CustomRT alloc] initWithHostingView:_graphHostingViewRight];
    [self.customRT2 readData:FALSE];
    [self.customRT2 initialisePlot:@"Right Channel"];
}
Code Snippet 19: Triggering the final plots
As can be seen in the code, the title of the graph is fed to the method initialisePlot:, and there is a second method, readData: (previously explained), that takes a Boolean value to read either the left or the right channel from the text file.
• Pause:
This is a new functionality implemented for the final graphs. With the use of timers it is possible to pause the drawing process of the plot, keep the state, and then resume the plotting. It is triggered by this:
// to pause plotting
- (IBAction)pausePlot:(id)sender {
    [self.customRT1 pauseTimer:[self.customRT1 returnTimer]];
    [self.customRT2 pauseTimer:[self.customRT2 returnTimer]];
}
Code Snippet 20: pause the plot
• Resume:
This button is used along with Pause. Its purpose is to resume the drawing process of the graph after it has been paused.
// to resume plotting
- (IBAction)resumePlot:(id)sender {
    [self.customRT1 resumeTimer:[self.customRT1 returnTimer]];
    [self.customRT2 resumeTimer:[self.customRT2 returnTimer]];
}
Code Snippet 21: resume the plot
Project Execution & Results
In the previous chapter all the hardware and software components of the project were explained in detail. From the computer itself to the software tools and the code, one by one, they have been analysed. This section will try to show the final results, despite the non-interactive nature of a text document.
The final results of the process previously presented will be shown. In contrast to the accomplishments, the things that did not work and the limitations of the project will also be mentioned here.
Achieving Real Time
In every project there are always features that do not get implemented. There can be several reasons behind that: finite resources (such as manpower and time), limitations of technology, or simply things that do not work as expected. This project is, of course, no exception.
Deep into the development process an important obstacle was found. The original system architecture used by the DEmotiv Matlab toolbox looked like this:
Figure 42: Original Demotiv Matlab Toolbox architecture
As it was mentioned before, the original Matlab Toolbox acquired the data by establishing a
Wi-Fi link with the Emotiv EPOC neuroheadset. The Wi-Fi signal itself was encrypted using Emotiv's
proprietary format and therefore it required the Emotiv SDK for the decryption process. As the
Emotiv SDK and the Matlab software were both available for the Windows PC, it represented no
problem at all, as the brain signal could be acquired and decrypted by the SDK and then used by
Matlab for further manipulations.
With this project, one of the intended goals was to dispense with the computer and use only the portable device (in our case the iPad) for all data acquisition, manipulation and computation purposes. However, as the Wi-Fi link of the neuroheadset is encrypted, the receiving end must have the Emotiv SDK in order to decrypt the signal.
As Emotiv, at the time of this project, had no SDK developed for iOS devices, only for Windows, it is impossible to get the data straight from the neuroheadset to an iPad, because the iPad cannot decrypt the signal. This obstacle was impossible to overcome directly, so it was decided to work around it.
The main consequences of this were:
• The need for a relay laptop, as a middle step, with the Emotiv SDK in charge of decrypting the headset signal and sending it to the iPad. This way it is possible to preserve a high mobility component, as originally intended.
Figure 43: proposed architecture for our system
• The link between the laptop and the iPad is beyond the scope of the project and was not implemented due to time constraints; it is left for a future revision of the project as a way to expand it.
• A direct consequence of the previous point is that, lacking the link, the signal is plotted on the graphs from a dump file called MiguelUNIX.txt that stores a sample brain reading from the DEmotiv Matlab toolbox.
• Despite having no real-time data acquisition, there is in fact real-time graph plotting, as the acquisition was simulated (as previously explained) with the use of timers.
Results
In this section it is possible to see the final appearance of the different features of the project running on the iPad simulator.
The Splash Screen
This is the first thing the user sees when launching the application on the iPad. It is a splash screen featuring the logo of the University of Houston and a "please wait" message. Although there is no need for a splash screen, as the project loads into memory almost instantly, it was added for aesthetic reasons and because it does not affect the user experience.
Figure 44: The Splash screen
The Welcome Screen
The appearance of the welcome screen is practically the same as the one previously shown in the Scene section. This is a direct consequence of the WYSIWYG (What You See Is What You Get) nature of building GUIs with the Interface Builder and Storyboards.
Figure 45: The Welcome Screen in action
The Simple Scatter Plot Test
This is what the first graph developed looks like. It is a very simple, static plot of a few given points, but it served its purpose of learning how to use the Core-plot library.
Figure 46: the Simple Scatter Plot Test
The Real Time Plot Test
This is what the first real-time plot looks like. It came to be as an evolution of the previous plot. Important new features were:
• 7 simultaneous plots.
• Data scrolling over the axes.
• Opening/reading from a text file.
Figure 47: Real Time plot test 1
The Custom RT Plot
This is what the final plot of the project looks like. It shows both channels of signals in a split-view fashion. Important features implemented here are:
• 14 simultaneous plots
• Split-view of both channels
• Pause / Resume functions
Two figures are provided in order to show the pause/resume and the data scrolling features.
Figure 48: The final RT plot 1
Figure 49: The final RT plot 2
THESIS iOS - Tesis PFC
THESIS iOS - Tesis PFC
THESIS iOS - Tesis PFC
THESIS iOS - Tesis PFC
THESIS iOS - Tesis PFC
THESIS iOS - Tesis PFC
THESIS iOS - Tesis PFC
THESIS iOS - Tesis PFC
THESIS iOS - Tesis PFC
THESIS iOS - Tesis PFC
THESIS iOS - Tesis PFC

More Related Content

Similar to THESIS iOS - Tesis PFC

French mobility welcome speech from erasmus+ student
French mobility  welcome speech from erasmus+ studentFrench mobility  welcome speech from erasmus+ student
French mobility welcome speech from erasmus+ studentRafael Montero
 
Accessibility of HIV and AIDS information among students with special communi...
Accessibility of HIV and AIDS information among students with special communi...Accessibility of HIV and AIDS information among students with special communi...
Accessibility of HIV and AIDS information among students with special communi...Emmanuel International Malawi
 
Planet Homeless drukbestand proefschrifteditie (bijgesneden)
Planet Homeless drukbestand proefschrifteditie (bijgesneden)Planet Homeless drukbestand proefschrifteditie (bijgesneden)
Planet Homeless drukbestand proefschrifteditie (bijgesneden)Nienke Boesveldt
 
Lore_Dirick_Doctoral_thesis
Lore_Dirick_Doctoral_thesisLore_Dirick_Doctoral_thesis
Lore_Dirick_Doctoral_thesisLore Dirick
 
Lore_Dirick_Doctoral_thesis
Lore_Dirick_Doctoral_thesisLore_Dirick_Doctoral_thesis
Lore_Dirick_Doctoral_thesisLore Dirick
 
Are You Ready for the Quickening!
Are You Ready for the Quickening!Are You Ready for the Quickening!
Are You Ready for the Quickening!DataconomyGmbH
 
Jan feb2013 nourish newsletter
Jan feb2013 nourish newsletterJan feb2013 nourish newsletter
Jan feb2013 nourish newsletterSarah Bergs
 
(Critical theory throughout contemporary society) david m. berry critical t...
(Critical theory throughout contemporary society) david m. berry   critical t...(Critical theory throughout contemporary society) david m. berry   critical t...
(Critical theory throughout contemporary society) david m. berry critical t...Reza_Sanaye
 
To be an EVS-volunteer
To be an EVS-volunteerTo be an EVS-volunteer
To be an EVS-volunteerguestc730a3
 
Global Ecovillage Educators for a Sustainable Earth Information Newsletter_Su...
Global Ecovillage Educators for a Sustainable Earth Information Newsletter_Su...Global Ecovillage Educators for a Sustainable Earth Information Newsletter_Su...
Global Ecovillage Educators for a Sustainable Earth Information Newsletter_Su...Gaia Education
 
TED Global 2011 Fellows Booklet
TED Global 2011 Fellows BookletTED Global 2011 Fellows Booklet
TED Global 2011 Fellows BookletZoely Mamizaka
 
Greenlight For Girls 20 November Brussels
Greenlight For Girls 20 November BrusselsGreenlight For Girls 20 November Brussels
Greenlight For Girls 20 November BrusselsMRancourt
 
Pieter_Van_Nieuwenhuyse-PhD_dissertation-20121210
Pieter_Van_Nieuwenhuyse-PhD_dissertation-20121210Pieter_Van_Nieuwenhuyse-PhD_dissertation-20121210
Pieter_Van_Nieuwenhuyse-PhD_dissertation-20121210Pieter Van Nieuwenhuyse
 
Muet Writing Essay Sample 2014. Online assignment writing service.
Muet Writing Essay Sample 2014. Online assignment writing service.Muet Writing Essay Sample 2014. Online assignment writing service.
Muet Writing Essay Sample 2014. Online assignment writing service.Allison Gilbert
 
DoGoodAsYouGo - 2013 Anthology
DoGoodAsYouGo - 2013 AnthologyDoGoodAsYouGo - 2013 Anthology
DoGoodAsYouGo - 2013 AnthologyJay Shapiro
 

Similar to THESIS iOS - Tesis PFC (20)

Technology
TechnologyTechnology
Technology
 
French mobility welcome speech from erasmus+ student
French mobility  welcome speech from erasmus+ studentFrench mobility  welcome speech from erasmus+ student
French mobility welcome speech from erasmus+ student
 
Accessibility of HIV and AIDS information among students with special communi...
Accessibility of HIV and AIDS information among students with special communi...Accessibility of HIV and AIDS information among students with special communi...
Accessibility of HIV and AIDS information among students with special communi...
 
Planet Homeless drukbestand proefschrifteditie (bijgesneden)
Planet Homeless drukbestand proefschrifteditie (bijgesneden)Planet Homeless drukbestand proefschrifteditie (bijgesneden)
Planet Homeless drukbestand proefschrifteditie (bijgesneden)
 
Lore_Dirick_Doctoral_thesis
Lore_Dirick_Doctoral_thesisLore_Dirick_Doctoral_thesis
Lore_Dirick_Doctoral_thesis
 
Lore_Dirick_Doctoral_thesis
Lore_Dirick_Doctoral_thesisLore_Dirick_Doctoral_thesis
Lore_Dirick_Doctoral_thesis
 
Greece
GreeceGreece
Greece
 
Esp2011opening
Esp2011openingEsp2011opening
Esp2011opening
 
Are You Ready for the Quickening!
Are You Ready for the Quickening!Are You Ready for the Quickening!
Are You Ready for the Quickening!
 
Jan feb2013 nourish newsletter
Jan feb2013 nourish newsletterJan feb2013 nourish newsletter
Jan feb2013 nourish newsletter
 
(Critical theory throughout contemporary society) david m. berry critical t...
(Critical theory throughout contemporary society) david m. berry   critical t...(Critical theory throughout contemporary society) david m. berry   critical t...
(Critical theory throughout contemporary society) david m. berry critical t...
 
To be an EVS-volunteer
To be an EVS-volunteerTo be an EVS-volunteer
To be an EVS-volunteer
 
Gherlee
GherleeGherlee
Gherlee
 
Global Ecovillage Educators for a Sustainable Earth Information Newsletter_Su...
Global Ecovillage Educators for a Sustainable Earth Information Newsletter_Su...Global Ecovillage Educators for a Sustainable Earth Information Newsletter_Su...
Global Ecovillage Educators for a Sustainable Earth Information Newsletter_Su...
 
Hermans 2014 Engaging with risks
Hermans 2014 Engaging with risksHermans 2014 Engaging with risks
Hermans 2014 Engaging with risks
 
TED Global 2011 Fellows Booklet
TED Global 2011 Fellows BookletTED Global 2011 Fellows Booklet
TED Global 2011 Fellows Booklet
 
Greenlight For Girls 20 November Brussels
Greenlight For Girls 20 November BrusselsGreenlight For Girls 20 November Brussels
Greenlight For Girls 20 November Brussels
 
Pieter_Van_Nieuwenhuyse-PhD_dissertation-20121210
Pieter_Van_Nieuwenhuyse-PhD_dissertation-20121210Pieter_Van_Nieuwenhuyse-PhD_dissertation-20121210
Pieter_Van_Nieuwenhuyse-PhD_dissertation-20121210
 
Muet Writing Essay Sample 2014. Online assignment writing service.
Muet Writing Essay Sample 2014. Online assignment writing service.Muet Writing Essay Sample 2014. Online assignment writing service.
Muet Writing Essay Sample 2014. Online assignment writing service.
 
DoGoodAsYouGo - 2013 Anthology
DoGoodAsYouGo - 2013 AnthologyDoGoodAsYouGo - 2013 Anthology
DoGoodAsYouGo - 2013 Anthology
 

THESIS iOS - Tesis PFC

  • 1. Master Thesis Engineering in Telecommunications An iOS adaptation of the DEmotiv MATLAB Toolbox for Real-Time EEG Functional Mapping MIGUEL TEIGELL DE AGUSTÍN Thesis supervised by George Zouridakis, Javier Díaz & David Iglesias San Sebastián, JULY 2012
  • 2.
  • 3.
  • 4. Acknowledgements There are so many people I would like to mention here that I might have to start a new thesis just for that. Of course I have to begin with my parents, Ricardo & Nuria; for all the sacrifices you have made and for believing in me. To my sisters, Idoia and Soledad, along with my wonderful nephews and Jef, because you are the light of my life. To Juán, Ade, Gonzalo, Joaquín, Carlos, Matthew, Charlie, Luis, Óscar & Fran: you all are like the brothers that I never had, so thank you for growing with me, shaping me and making me a better person. It is an honour to share this trip called “Life” in your company. To Estefania and Patri, because they are just incredible girls. To the people that I found in Madrid: to Pat, because you know we are soul mates; to David, for his incredible personality; to Cris, for sticking with me for such a long time; to Pau, Rocío and Chejo, for guiding me during my baby steps in engineering… I haven’t forgotten; to Rosa, because no one can get a smile out of me when I’m sad like you can; to Luis Angel and Marina, for being the coolest among the coolest; to Nacho, for being mon copain préféré; to Irina and Guillermo, for sticking with me when others bailed… I will always be grateful to you both. And finally, to Daniel, for the good memories. I was blessed with amazing people when life took me to San Sebastián: to Miren, Ibone & Artur, Maider, Tamara, Amaia and the Arantxas, because without you I wouldn’t have been able to do it; to Mark, for being such a good comrade when I arrived to the city; to Ainarita and Jon, for your help, support and the good moments shared together; to Merideño, for your friendship and help; To Unai, for caring for me and supporting me; to Eneko, for being my brother-in-arms and a loyal friend; to Sergio, for the great farewells ; to Asier, for the warm greetings; to Pedro, for the laughs we have shared; to Pablo & Sheyla, for the inspiration, the laughs, and the support that we have shared during these years; to Pamimo, for the good advice and to María, because you R-O-C-K baby. To my whole class at the university, because you are the best thing that ever happened to me…in especial to Andrea, for enduring with me (and I know I can get tiresome at times) ; to Alejandro, for being such a great guy; to Imanol, for the good moments and to Laura, for your
  • 5. incredible personality and because I wished I were more like you. To Beatriz, for being my friend, my confident and for listening to my rants … and finally, to her brother, José Luís, for everything. From the university staff I would to thank my tutors, Íñigo Gutierrez and Manuel J. Conde, for their on-going support during all these years; to Ainara Díez , for her charm; to Javier Díaz, for being my thesis supervisor and to Dr. Zouridakis, for granting me the opportunity to do this work. When I thought it couldn’t get any better, my time in Belgium proved me wrong: to Maider, what can I say? Because you deserve it! To my dear Belgian family, Els, Koen and their kids, for welcoming me at their home and helping me adapt to a new country; still with family, to Bart & Patrick, for being the coolest people in Antwerp; to Carlitos, for visiting me, and for all the moments spent together, past and future; to Pablo, for being a great friend and roommate! To Sandra and Elçin, for all those long chats in so many languages; to Laura, because the Erasmus doesn’t stop at Leuven! To Javier, for being such a great friend and also for those nice evenings! Like whim of fate, I ended up in Houston, Texas, where even more people jumped in the wagon of my life and joined me during the last stops of this process: To Tarun, for your great assistance during the whole process and your friendship; to Javier, for being the best neighbour anyone could ever want! To Tatiana and her husband, Fran, for all the moments shared together; To Juanjo, for being such a great guy and a good friend; to Valentina and Andrea, for being so nice and cheerful; to David and Amy, for being such a lovely couple and welcoming me to their place; to Miriam, for being a force of nature! To the guys at Global, for being such a wild bunch! To Sirena, because, baby, we are soul mates and you know it! And last but not least, to my wonderful friends: Ainhoa, David and Aina. For your help, your company, your guidance and, in a few words, just for you being you. For all of those great memories together in Houston and for all of those still to be made, thank you, It would have never been the same without you. And I would like end by dedicating my work to my late grandfather, Manuel, whose memories I treasure and without whom I wouldn’t be half the person I am today. Thank you grandpa, you are remembered.
  • 6.
  • 7. Abstract With the recent technological advancements and the proliferation of smartphones and tablets it is possible to state that nearly everyone carries around powerful devices in their pockets with computational capabilities that match or even surpass a 4 year old desktop computer. This new status quo has not gone unnoticed by the Scientific Community which has seen it as opportunity to provide solutions to existing problems. In addition to that, the mobility factor provided by smartphones allows us to revisit previous solutions and update them to current technologies. The DEmotiv project, precursor, of this work, assed in real time the cognitive state of a person engaged in a specific task. To accomplish that, techniques that capture the spatiotemporal dynamics of brain activity were developed based around EEG. Electroencephalography (EEG) is a brain mapping technique based on recordings of the electrical signals generated by cortical neurons. Typically, a set of long wires (electrodes) placed on the scalp is connected to amplifiers. However, due to recent technological developments, the electrodes and amplifiers have been reduced to the size of regular headphones. This set up is completely non-invasive and non-obtrusive, and allows for continuous monitoring of subjects during performance of normal activities. In order to get such recordings both projects make use of the EPOC neuroheadset: a battery operated 14-channel wearable wireless headset designed by Emotiv. As the DEmotiv project provided a Matlab Toolbox that featured on-going EEG activity capture, surface activation maps display, Granger connectivity networks etc. with a friendly custom GUI , it is our intention to take advantage of the possibilities granted by portable devices., such a as manoeuvrability. Therefore with this project, we aim to replicate the same great user experience that the DEmotiv offered over an iPad, a tablet manufactured by Apple Inc.
  • 8.
  • 9. Table of Contents
Introduction .......... 3
    Motivation & Objectives .......... 3
DEmotiv MATLAB TOOLBOX by the Biomedical Imaging Lab (U of H) .......... 5
    Background .......... 5
        Neuroimaging .......... 5
            Structural Imaging Techniques .......... 5
            Functional Imaging Techniques .......... 7
        EEG Mapping Techniques .......... 9
            Cortical Mapping .......... 9
            Topographic Mapping .......... 10
        Connectivity Network .......... 11
            Granger Causality .......... 12
        Evoked Potentials .......... 15
            Auditory evoked potentials: The N100 peak .......... 16
    Hardware & Software .......... 18
        Emotiv .......... 18
            The Emotiv EPOC neuroheadset .......... 19
            The Emotiv EPOC SDK .......... 21
Project Design & Methodology .......... 24
    Hardware .......... 24
        The iMac: then and now .......... 24
        The iPad 2 .......... 26
    Software Tools & Concepts .......... 29
        Xcode 4.2 SDK .......... 29
            Objective C .......... 35
            View Controllers .......... 35
            Methods .......... 36
            Actions .......... 37
            Storyboards & the Interface Builder .......... 37
    Software Design .......... 40
        API: Core-plot 0.9 .......... 40
            Anatomy of the Graph .......... 42
            Class Diagram .......... 43
            Objects and Layers .......... 44
            Layers .......... 44
            Graphs .......... 45
            Plot Area .......... 47
            Plot Spaces .......... 47
            Plots .......... 49
            Axes .......... 51
    Dissecting the Project .......... 52
        The Plots .......... 52
            TUTSimpleScatterPlot .......... 52
            ScatterPlot2 .......... 55
            CustomRT .......... 57
        Designing the GUI .......... 63
            The Tab Controller .......... 64
            Welcome Scene .......... 65
            First Scene .......... 66
            Second Scene .......... 69
Project Execution & Results .......... 72
    Achieving Real Time .......... 72
    Results .......... 74
        The Splash Screen .......... 74
        The Welcome Screen .......... 75
        The Simple Scatter Plot Test .......... 76
        The Real Time Plot Test .......... 76
        The Custom RT Plot .......... 77
Budget .......... 79
    Hardware .......... 79
    Software .......... 79
    Manpower .......... 80
    Total costs .......... 80
Conclusions .......... 81
Future of the System .......... 82
Bibliography .......... 83
List of Figures .......... 85
List of Code Snippets .......... 87
List of Tables .......... 87
ANNEX: Code Files .......... 88
    The View Controllers .......... 88
    The Plots .......... 88
    Support Files .......... 88
  • 13. Introduction Over recent years mobile phones have seen great technological advancements at a fast pace, growing beyond their original purpose of connecting individuals. Featuring one, two and even four-core CPUs and 4"+ high resolution screens, mobile phones are de facto pocket computers with enough processing power to render their desktop counterparts mostly unnecessary for common daily tasks such as Internet browsing, email, social apps, media playback, etc. The DEmotiv project offered a user-friendly approach to neuroimaging techniques like Electroencephalography (EEG) with a MATLAB toolbox that allowed the analysis and processing of brain waves captured by the compact Emotiv neuroheadset device. This project aims to further explore the mobile aspects of its predecessor by using the aforementioned Emotiv neuroheadset along with an Apple iPad running native software (iOS 5). It was developed at the Department of Engineering Technology of the University of Houston, based on the idea and under the supervision of Dr. George Zouridakis, and building on the work of David Iglesias López. Motivation & Objectives The higher purpose of this work, as it was with DEmotiv, is on one hand to provide software assistance in monitoring the cognitive state of a person engaged in any type of activity. On the other hand, the short-term purpose of this project is to adapt and port the DEmotiv MATLAB toolbox to Apple devices such as the iPad/iPhone, with the hardware and software constraints that this involves. The final goal of the main line of research (not accomplished here and out of the focus of this thesis) is the development of a software tool able to assess the cognitive state of a person engaged in any kind of activity. To accomplish that, the entire project can be summarized into three big steps: software creation, hardware verification and subjects' classification.
  • 14. Every step contains multiple sub-objectives. The first step includes hardware selection, the creation of a graphical user interface (GUI), data acquisition, data processing, data display, and so on. Once it is accomplished, and prior to classifying the state of the subjects (third step), the device's performance needs to be tested to check the project's viability. This second step includes experimentation and results analysis. The first and second steps are within the scope of the thesis presented here, whereas the final one is left to future work. The tool should contain at least the following features:
- A GUI, so that any person can use it. This way we avoid the use of command lines, often difficult for the novice or intermediate user.
- Display of on-going EEG activity, separated into left and right channels. The separation of the hemispheres is very important for the analysis of some of the extracted features.
- Real-time mapping, including topographic maps, cortical activation and the Granger causality connectivity network.
- Band-pass filtering of the data 'on the fly'.
- The ability to save any recorded data into a file.
- Loading and plotting of saved data, allowing the application of all available processing techniques to it.
- Display of the anatomic distribution of the headset. In case a person is using the program without possessing the actual recording device, this option allows them to infer the electrode positioning.
- Recording of evoked potentials (EPs), receiving the stimulus from an external source and displaying the epoch average on the main screen.
  • 15. DEmotiv MATLAB TOOLBOX by the Biomedical Imaging Lab (U of H) As this project is an adaptation of DEmotiv, developed by David Iglesias at the Biomedical Imaging Lab of the University of Houston (U of H), it is of utmost importance to review the highlights of the source material in order to get a better understanding of the work done, as both projects share common goals, functions and backbones. Background Neuroimaging Neuroimaging is the collection of techniques used to record, directly or indirectly, images of the brain. These techniques have been developed recently and provide images that represent either the structure or the functionality of the brain. Since any image is a construction based on a designed model, the degree of precision will always depend on a large number of factors. Also, how well the image matches reality is influenced by whether or not there is knowledge of the reference (the real object). Sometimes the model of an image is perfectly known; in such a case it is easy to determine, even by eye, how well the image matches the intended one. But in some cases there is no reference to compare the image to, leading to interesting questions such as: how can we compare anything if we do not even know what we are trying to image? Such questions are quite common in the field, as is the case with neuroimaging techniques. Structural Imaging Techniques There are many neuroimaging techniques in use nowadays, but two of the most common are Computed Tomography (CT) and Magnetic Resonance Imaging (MRI).
  • 16. MRI images are captured by using magnetic fields to align the magnetization of certain atomic nuclei in the body, and RF fields to systematically alter the alignment of this magnetization. CT, on the other hand, works on the principle of X-ray absorption. Figure 1: MRI Image Figure 2: CT image
  • 17. Functional Imaging Techniques The purpose of functional imaging techniques is to understand how the brain works, taking into account its physiology, dynamics and functional architecture. These procedures are mainly used for research purposes and are often employed as a first diagnostic tool by doctors due to their non-invasive nature. Common functional neuroimaging techniques are functional magnetic resonance imaging (fMRI), magnetoencephalography (MEG), positron emission tomography (PET) and electroencephalography (EEG). Focus will be placed on the last one, EEG, as it is the one used for this research. Detailed information about the other functional imaging techniques is left to the reader's own interest. The underlying idea of EEG is to measure the variation produced in electric and magnetic fields by a group of neurons. A single neuron does not produce enough activity to achieve a proper measurement from the outside, but when millions of them act together, their activity can in fact be detected. The recording procedure uses a set of conducting electrodes placed on the scalp that allows the detection of the electrical signals of the brain. The electrodes are placed over the head using a conducting gel. The purpose of this gel is the adaptation of impedances between the electrodes and the human tissue so as to optimize the readings. The figure depicts the procedure: Figure 3: EEG procedure
  • 18. The location of these electrodes is not random at all; it is fixed according to the 10-20 international system, as shown below: Figure 4: 10-20 International System The number of channels in an EEG can vary from 8 up to 256 in some cases. The signal recorded from each of these channels is independently connected to an amplifier with two inputs. One input belongs to the measured electrode and the other to the reference, which is common to the entire system and usually placed on the ear lobes. The resulting product of an EEG measurement is a set of time-series data stored channel by channel. Usually this data is kept in the form of a computer file that may have different formats depending on the machine that produced it. The file has a header with information about how the recording was done. Number of channels, type of data, sampling frequency or total time are some typical header parameters. The rest of the data is stored in a matrix of dimensions NxP, with N being the number of channels and P the total number of points. The data can be obtained by two different types of recordings: epoched and continuous. Continuous recording saves the data 'as it arrives'. Epoched data is the result of experiments that use a repetitive stimulus. Every data frame, commonly known as an epoch or trial, contains the information recorded after a stimulus, finishing prior to the next one. It sometimes includes a short pre-stimulus period, allowing for an easier comparison of what happens before and after the stimulus onset.
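As a minimal sketch of the storage layout just described (the class and method names here are hypothetical illustrations, not part of any real EEG file format), a recording can be modelled in Objective-C as a header plus an NxP sample matrix:

// Hypothetical container mirroring the header + NxP matrix layout described above.
@interface EEGRecording : NSObject
@property (nonatomic, assign) NSUInteger channelCount;   // N, e.g. 14 for the EPOC
@property (nonatomic, assign) NSUInteger pointCount;     // P, total samples per channel
@property (nonatomic, assign) double samplingFrequency;  // e.g. 128 Hz
@property (nonatomic, retain) NSMutableData *samples;    // N*P doubles, stored row by row
-(double)sampleAtChannel:(NSUInteger)channel point:(NSUInteger)point;
@end

@implementation EEGRecording
@synthesize channelCount, pointCount, samplingFrequency, samples;
-(double)sampleAtChannel:(NSUInteger)channel point:(NSUInteger)point {
    // Row-major indexing: select the channel row, then the sample within it.
    const double *buffer = (const double *)[self.samples bytes];
    return buffer[channel * self.pointCount + point];
}
@end

A continuous recording would simply keep growing P as data arrives, whereas an epoched recording would store one such matrix (or one row range) per trial.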
  • 19. Figure 5: Continuous data recording Figure 6: Epoch data recording EEG Mapping Techniques The following chapter is dedicated to describing the reasoning behind the EEG procedures. It will explain why we obtain these data, the way to use them, and how real-world information can be inferred from them. For these purposes the Topographic Mapping and Cortical Mapping methodologies will be introduced. Cortical Mapping With the cortical mapping (CM) technique it is possible to plot the source distribution of the recorded data. As with an EEG, it is non-invasive and therefore the information gathered is the scalp distribution through the electrodes. What makes it different from Topographic Mapping is that the
  • 20. representation of the cerebral cortex is obtained through complex mathematical models instead of through data interpolation. If the data on the scalp is taken as the starting point, there are several methodologies to solve this problem, usually known as the inverse problem. The problem itself relates to the number of independent parameters underlying a scalp potential distribution. This number can only be less than or equal to the number of channels used during the recordings; therefore, only some information can be extracted with confidence from the results. This inverse problem has a non-unique solution, since more than one source distribution is able to generate the same scalp map. Figure 7: Cortical Views in 3D As we can imagine, obtaining a CM gives the user very useful functional information. In the case of an experiment, the plot will show which brain areas are in charge of different functionalities. Topographic Mapping
  • 21. Topographic techniques use the data that has been directly recorded from the electrodes. The subject's head is simulated by plotting the data on a 2- or 3-dimensional model, thanks to the electrode positions being available. The mesh information is then obtained through interpolation of the data from the different electrodes. This way, any plotted point is obtained by giving every electrode a specific 'weight' depending on its distance. Figure 8: Topographic View in 2D As shown in the previous figure, a colour scale is commonly used. It indicates the level of activation, with red usually being the maximum level of activation and blue the minimum. It can be concluded that Topographic Mapping allows the user to observe, at a single glance, which parts of the brain are active at any specific time point. Some of TM's characteristics are:
- 2D/3D plots
- 3D rotation
- Colour scaling
- Automatic & user-defined scale values
Connectivity Network The last mapping feature implemented by the original project is a connectivity network based on Granger causality. These networks let us see the connections established among the different
  • 22. parts of the brain. It is important to keep in mind that any signal generated in the brain by a neuronal population may be influenced by others. Depending on the aim and interest of the study, an ordinary frequency coherence analysis or a cross-correlation between time signals may not be enough to reveal brain connectivity. If, by evaluating a signal's cross-correlation, it is possible to improve a future prediction, then it means that past values have influence over future ones. Figure 9: Brain Connectivity Network According to this, we can infer that any future value may not be completely random, and by analysing these influences the future values can be better estimated. After briefly reviewing these concepts, it is possible to get a better understanding of Granger Causality, the methodology applied to generate the network. Granger Causality Wiener first, and Granger later, developed the idea that asserts the following: "If the prediction of a time series A can be improved by knowing a second one B, then B is said to have a causal influence on A". That means that, knowing B and its past values, it is possible to improve the prediction of A by reducing the error we commit in that prediction.
  • 23. One of the biggest advantages of using Granger causality is that, unlike some other methods, it is not invasive. There are methods for analysing connectivity patterns that may require surgery, with the associated risk of a brain lesion. In this case, only the collected EEG signal is needed. Anil K. Seth compared the causal connectivity of complex networks obtained from very rich brain activities with data collected from much simpler experiments. The results of his analysis suggested that complex networks show a strong causal flow compared with the results of simpler ones. These results were very interesting for the community, opening a new way to advance neuroscience analysis. Graphical representation: The most common way to represent Granger causality is through a graph, specifically one that looks like this: Figure 10: Granger Causality graph In the image it is possible to notice various elements. First, there are the nodes (circles) of the network, each of which represents one channel of the recorded data. This example is an 8-channel network for the sake of simplicity, although an actual graph could reach hundreds of nodes.
  • 24. Then there are the arrows connecting the nodes, which state a causal relationship between them. This relationship can either be one-way (like 2 to 3) or two-way (like 8 and 7), meaning that both channels have influence over each other. Typical characteristics of Granger causality include:
- Level of influence: the bigger the arrow, the bigger the influence. A set of colours is sometimes used.
- Causal density: the percentage of significant connections over the total.
- Causal flow: a single-node characteristic. It measures the difference between the ingoing and outgoing flow, determining the node as a sink (ingoing), a source (outgoing), or an inter-node (equal).
- Causal reciprocity or causal disequilibrium: these are related to the degree of reciprocity within a neural network.
Not all real connections are represented in a GC figure. Only those whose value is bigger than a specified threshold are considered important enough or, in other words, statistically significant. The setup of the threshold is usually automatic, but can be left to the user's choice. Mathematical model: The motivation behind this section is just to understand the basis of Granger causality. As mentioned before, time-series signals are used in the model. Let us suppose two such signals, represented in autoregressive form:
$X(t) = \sum_{j=1}^{p} a_j X(t-j) + \sum_{j=1}^{p} b_j Y(t-j) + \varepsilon_X(t)$ (1)
$Y(t) = \sum_{j=1}^{p} c_j Y(t-j) + \sum_{j=1}^{p} d_j X(t-j) + \varepsilon_Y(t)$ (2)
As can be observed from the equations, the value of the signal X depends on its own past values (first summation term), on the past values of the other signal (second summation
  • 25. term), and on an error. If the variance of the error while predicting X is reduced by the inclusion of Y in the equation, then Y is said to cause X. The entire model is developed from these equations, calculating the statistical interdependence between the variables. At this point, it is known that one variable can cause another, or have a significant influence on its future value. But what happens when there are more than two variables? The model above is known as the bivariate model, and there also exists the so-called multivariate model. Going back to figure 10 and focusing on the path through nodes 7-8-1, it is obvious that node 7 has a bidirectional influence with 8, but also that node 8 is causing 1. Is it certain that node 8 is the one causing 1? What if node 7 is having a causal influence over 1 going through 8? To avoid such problematic questions, multivariate models are used. A multivariate model for GC must perform a huge quantity of calculations, their number increasing exponentially with the number of channels recorded. The operations are usually done on a dedicated computing cluster prepared for that purpose. Due to the nature of this project, the GC implemented is quite simple; otherwise, achieving a real-time situation would be completely unrealistic. Evoked Potentials This is the term used for the brain potentials measured after the presentation of a stimulus. Their amplitude tends to be very low compared with on-going EEG activity and can vary from less than a µV to a few µV. Any recorded EEG data includes a baseline signal due to biological and random noise. To overcome this low amplitude of the response against the background, many trials of the same simple experiment are required. Since most of the baseline is randomly generated, conducting and averaging a large number of trials will average out the noise, allowing the relevant EP signal that responds to the stimulus to remain. To observe the response, 100 or more trials are usually conducted.
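Although the text above does not write the comparison out explicitly, the variance comparison it describes is usually condensed into a single magnitude in the Granger causality literature; a standard form, stated here for reference rather than taken from the original, is the log-ratio of the two prediction-error variances:

$F_{Y \to X} = \ln \left( \operatorname{var}(\varepsilon'_X) / \operatorname{var}(\varepsilon_X) \right)$

where $\varepsilon'_X$ is the error when X is predicted from its own past alone, and $\varepsilon_X$ is the error of the full model in equation (1). A value greater than zero means that including the past of Y reduces the prediction error, i.e., Y Granger-causes X.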
  • 26. These responses, measured when a particular stimulus is applied, are also known as event-related potentials (ERPs). While evoked potentials are related to the response to a physical stimulation, ERPs are caused by high-level processing, involving memory, attention or any other changes in the cognitive state. There are many different EP subclasses available. Visual evoked potentials (VEPs), auditory evoked potentials (AEPs) and somatosensory evoked potentials (SSEPs) are some of them. The experiments present in this project use auditory evoked potentials. Auditory evoked potentials: The N100 peak For an auditory evoked potential the stimulus or event released is a sound. The signal generated by the sound ascends through the auditory pathway, and AEPs are used to trace it. As with any sound, the evoked potential is generated in the cochlea, making its way through the midbrain to finally arrive at the cerebral cortex. AEPs were chosen for this project for their simplicity. Any EP consists of a series of consecutive positive and negative potential peaks. These peaks appear after multiple trials are averaged, and allow for the analysis. AEPs have a singular characteristic: a negative peak placed around 100 ms after the stimulus delivery. This peak is called the N100, and is quite easy to observe in the averaged signal. The N100 peak: It is a large negative potential measured by EEG during certain EP experiments and is also known as N1. It peaks between 80 and 120 milliseconds after the onset of the stimulus. The N100 is distributed mainly over the fronto-central region of the scalp. It is often followed by a positive peak at 200 ms, and together they are known as the N100-P200 complex. The N100 is generated in the primary auditory cortex, placed in Heschl's gyrus within the superior temporal gyrus. It is shown in figure 11 below. The N1 generating area is not the same in both hemispheres; the right one is larger.
  • 27. Figure 11: N100 generation The N100 is involved in perception due to a strong dependence of its amplitude on the rise time of a sound's onset and its loudness. It may almost disappear when the subject of the experiment has control of the stimuli, for example if the person is using a switch to trigger the stimulus, or when using his own voice. It is also weaker when the stimulation is repetitive and stronger if it is randomly generated. The explanation adopted by experts for this effect is quite interesting: the attenuation seems to be linked with the person's intelligence, as it is stronger in individuals with higher intelligence.
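Before moving on to the hardware, the trial-averaging idea behind all of the EP measurements above can be summarized in a short sketch (a hypothetical helper, not code from the project): the EP estimate at each sample offset is simply the mean across epochs, so the random baseline cancels while the stimulus-locked response remains.

// Hypothetical epoch-averaging helper. Each NSArray in `epochs` holds the
// NSNumber samples recorded after one stimulus; all epochs have equal length.
NSArray *AverageEpochs(NSArray *epochs) {
    NSUInteger epochLength = [[epochs objectAtIndex:0] count];
    NSMutableArray *average = [NSMutableArray arrayWithCapacity:epochLength];
    for (NSUInteger i = 0; i < epochLength; i++) {
        double sum = 0.0;
        for (NSArray *epoch in epochs) {
            sum += [[epoch objectAtIndex:i] doubleValue]; // accumulate the i-th sample of each trial
        }
        // Random noise averages towards zero; the stimulus-locked EP remains.
        [average addObject:[NSNumber numberWithDouble:sum / [epochs count]]];
    }
    return average;
}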
  • 28. Hardware & Software There are multiple EEG devices available on the market, but not all of them fulfil the design requirements. The majority of the available EEG machines possess a characteristic that makes them inappropriate for this project: size. Typical EEG machines are placed in hospitals or research laboratories. Taking into account that the final goal of this project is to assess in real time the cognitive state of a person engaged in normal or daily activities, the size of the device is a very big constraint. If we want the subject to be moving around (as in this case) while the procedure is being applied, the device also needs to be wireless and wearable. In this case, the price may or may not be a constraint; it depends on available funds. But it is important to say that such technologies are not cheap at all, and the price of a device can reach thousands of dollars. As this is the first approach to the project's development, buying a very expensive device only to then realize the goal is not achievable is not a good idea. We needed to make a trade-off between quality, price and size, with size obviously being the most restrictive feature. An overview of the hardware requirements:
- Size: it needs to be as small as possible.
- Wireless: wiring prevents the subject's movement.
- Wearable: the entire device needs to be 'on' the subject.
- Price: the one that allows for the best signal quality while keeping a small size.
- Quality: obviously, the higher the better. However, current technology does not allow for a high-quality device with the previously mentioned features (without increasing the price to something unrealistic).
Emotiv
  • 29. Emotiv is an Australian company that has introduced to the market a breakthrough interface technology for digital media, taking inputs directly from the brain. The Emotiv EPOC neuroheadset is probably their flagship product: a high resolution, neuro-signal acquisition and processing device. The headset uses a set of sensors to tune into the electric signals produced by the brain and connects wirelessly to most Windows PCs. There were different versions available in the Emotiv store, including developer, research, enterprise and education editions. The purchased package was the Education Edition SDK. This edition is designed for academic and educational institutes undertaking research with no direct financial benefit, and can be used by any staff member of the department for teaching or research purposes. The Education Edition SDK contains the following:
- A research neuroheadset package.
- The Emotiv software toolkit.
- A user's license.
- A saline solution for the electrodes.
The Emotiv EPOC neuroheadset The Emotiv neuroheadset bundle contains the headset itself, 14 spare electrodes adjustable to the device, a battery charger, a USB receiver and a bottle of saline solution. The headset has 14 high resolution channels based on the international 10-20 locations. Those channels are: AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8 and AF4. It also includes CMS/DRL references in the P3/P4 locations and a gyroscope, represented as two more channels, GYROX and GYROY, in the recorded data. The device has an internal sampling rate of 2048 Hz per channel. This is heavily filtered to remove mains interference and artifacts (signal noise due to movement, eye blinks, respiration, etc.) and then downsampled to 128 Hz. There is a hardware low-pass filter in each channel preamplifier with an 85 Hz cutoff, and a high-pass filter on each input with a cutoff frequency of 0.16 Hz.
  • 30. The headset has an LED with three different colour options: red, blue and green. This light should be blue when the device is switched on, red when the battery is charging and green when it is fully charged. Figure 12: The Emotiv EPOC neuroheadset As mentioned before, the device works wirelessly with a lithium battery that provides an autonomy of 12 hours of use. The communication between the PC and the headset is also done wirelessly, using a USB receiver over an encrypted proprietary protocol that is decrypted by the bundled software. Figure 13: The USB Receiver It also contains a complete spare sensor kit, including an EPOC hydrator pack and 16 fully-assembled felt-based sensor assemblies with gold-plated contacts. It is shown in the next figure.
  • 31. Figure 14: The Electrode set The Emotiv EPOC SDK In addition to the EPOC neuroheadset hardware, the Emotiv Education SDK provides a complete software toolkit exposing the Emotiv APIs and detection libraries. It includes EmotivControlPanel.exe, EmoComposer.exe, EmoKey.exe, header files and import libraries, and sample code. EmoComposer and EmoKey are hardware emulation tools that enable development without the physical headset. The SDK provides an effective development environment that integrates well with new and existing frameworks. There are three detection suites incorporated in the kit: the Affectiv suite, the Cognitiv suite and the Expressiv suite. The first one is used to monitor the user's emotional states in real time, the Cognitiv suite reads the user's conscious thoughts and intentions, and the Expressiv suite interprets facial expressions. Emotiv Control Panel is the main application and the one used in this project's development. It includes the suites just described and other features. The Headset Setup panel is displayed by default when starting Emotiv Control Panel. The main function of this panel is to display contact-quality feedback for the neuroheadset's EEG sensors and to provide guidance for fitting the headset correctly. It is extremely important to achieve the best possible contact quality before proceeding to the other panel tabs. Poor contact quality will result in poor detection results and a low quality signal. This is a screenshot of the Emotiv Control Panel:
  • 32. Figure 15: Emotiv Control Panel The image on the left is a representation of the sensors' locations. Each circle represents a sensor and its approximate location when wearing the headset. The colour of each circle represents the contact quality. There are five possible colours: black, red, orange, yellow and green. Colour representation:
- Black: no signal
- Red: very poor signal
- Orange: poor signal
- Yellow: fair signal
- Green: good signal
To achieve the best possible contact quality, all sensors should be shown as green. In case some of them (as shown in figure 15) are not green, it is recommended to relocate the headset or, ultimately, use more saline solution on the failing electrodes. It is important to check the state of the contacts regularly when working with the device.
  • 33. Other features displayed here are the wireless signal level, the battery level and the selected user. Finally, there are three other tabs for the suites; however, those suites are not used in the present project. To end the review of the Emotiv software, other useful characteristics are listed below. EEG display:
- 5 second rolling time window (chart recorder mode)
- All or selected channels can be displayed
- Automatic or manual scaling (individual channel display mode)
- Adjustable channel offset (multi-channel display mode)
- Synchronized marker window
Gyro display:
- 5 second rolling time window (chart recorder mode)
- X and Y deflection
Data Packet display:
- 5 second rolling graph of the Packet Counter output
- Packet loss: integrated count of missing data packets
- Verifies data integrity of the wireless transmission link
Data recording and playback:
- Fully adjustable slider; play/pause/exit controls
- Subject and record ID, date and start time recorded in the file naming convention
  • 34. Project Design & Methodology This chapter goes deeper into the different design decisions that had to be made for the project and that stray from the original DEmotiv MATLAB toolbox. These include hardware selection and software tools, as well as methodologies and implementations. Hardware From the hardware point of view, the development of this project was restricted to Apple devices, as that is required in order to develop for the iOS platform. Development itself was done on a 2011 iMac using the developer SDK known as Xcode. The target device to run the iOS adaptation of DEmotiv was an iPad 2. The reasons behind this choice, as well as other features of the iPad, will be explained in detail in the following sections. The iMac: then and now Over the years the iMac series has seen many tweaks in both hardware and design. The first model was introduced in 1998 and was known as the iMac G3. Back then the design consisted of a CRT screen embedded inside an oval plastic body. The plastic was translucent and it arrived on the market in different colours.
  • 35. Figure 16: the 1998 iMac G3 series The original iMac G3 hardware was built around the PowerPC architecture, developed by IBM, Apple and Motorola, and it ran Mac OS 8.1. Choosing the PowerPC architecture had many consequences. On one hand, Apple had more control over the software, because they were the main provider of PowerPC computers; but on the other hand, the increasing popularity of the rival operating system, Windows by Microsoft, put them in a difficult position. The Windows OS could be used on any computer built around Intel's x86 architecture and, because of that, there were several hardware providers making Windows-capable PCs. The increasing hardware supply lowered Windows PC prices, and Windows therefore dominated the market. Acknowledging these problems, Apple decided in 2006 to change course and switch all its computer hardware to the x86 platform. This event was a turning point in Apple's history and led to an enormous increase in its market share. A welcome side effect of turning to x86 was the compatibility of Mac computers with Windows software through the Boot Camp feature of current iMacs. The computer chosen for our development is a 2011 iMac from Apple. The iMac is the all-in-one solution in Apple's catalogue and features, in a single aluminium case, a full-fledged Mac with an embedded 21-inch LCD screen. It is quite easy to transport and install, taking up little space.
  • 36. Figure 17: the 2011 iMac According to the manufacturer, the specifications of our computer are:
- 21.5-inch (viewable) LED-backlit glossy widescreen TFT display with support for millions of colors
- 2.5GHz quad-core Intel Core i5 with 6MB on-chip shared L3 cache
- 4GB (two 2GB) of 1333MHz DDR3 memory
- 500GB (7200 rpm) hard drive
- AMD Radeon HD 6750M graphics processor with 512MB of GDDR5 memory
The iPad 2 In 2010 Apple revolutionised the market by introducing a new concept of mobile computing: the iPad. The best way to define it would be as a hybrid between a current-generation smartphone and a laptop computer. It found great success, accounting for 70% of global tablet computer sales worldwide during barely its first year on the market. Like its siblings, the iPhone and the iPod Touch, it is controlled by a touchscreen instead of a stylus, and the usual input of information is done through an on-screen virtual keyboard. It serves the same purposes as well (audio-visual media including books, periodicals, movies, music, games,
  • 37. apps and web content) but in a unique form factor that could substitute a laptop for most daily chores. Figure 18: The iPad Family From the software point of view, it shared the same operating system as the other portable "iProducts": iOS. As it was the last member of the family to be conceived, it came out to the market with a wide pre-existing library of applications that were already available for the iPhone and the iPod Touch. Talking about the hardware, it is quite a powerful device:

Model: iPad | iPad 2 | iPad (3rd generation)
Initial OS: iOS 3.2 | iOS 4.3 | iOS 5.1
Highest supported OS: iOS 5.1.1 | iOS 6 | iOS 6
Display: 9.7 in (250 mm), 4:3 aspect ratio, scratch-resistant glossy glass covered LED-backlit IPS LCD screen, fingerprint-resistant oleophobic coating, 16,777,216-color (24-bit); iPad and iPad 2: 1024×768 px (XGA) at 132 ppi, 800:1 contrast ratio; 3rd generation: 2048×1536 px resolution (264 ppi)
Processor: 1 GHz ARM Cortex-A8 Apple A4 (64 KB L1 + 512 KB L2) SoC | 1 GHz (dynamically clocked) dual-core ARM Cortex-A9 (64 KB L1 + 512 KB L2) Apple A5 SoC | Dual-core Apple A5X SoC
Graphics processor: PowerVR SGX535 GPU | PowerVR SGX543MP2 GPU | PowerVR SGX543MP4 GPU
Storage: 16, 32 or 64 GB
Memory: 256 MB LPDDR DRAM | 512 MB Dual-Channel LPDDR2 DRAM | 1 GB
Material: Contoured aluminium back and bezel; 3G and 4G models: contoured aluminium back and bezel with plastic for the cellular radio
Bezel colour: Black | Black or white | Black or white
Battery: Built-in rechargeable lithium-ion polymer battery; 3.75 V 24.8 W·h (6613 mA·h) | 3.8 V 25 W·h (6579 mA·h) | 3.7 V 42.5 W·h
Rated battery life: browsing: 10 hours (Wi-Fi), 9 hours (3G or 4G); audio: 140 hours; video: 10 hours; standby: 1 month
Table 1: iPad Family Specs

Both the powerful hardware and the versatility of iOS were key reasons behind our decision to use the iPad for the porting of the DEmotiv MATLAB toolbox. But why the iPad and not the iPhone, which runs the same OS and has almost identical processing power? In a few words: the screen size. It will be explained that, due to the universal nature of iOS development, all source code can be easily compiled for the different iProducts, but it was decided to target the iPad because, as already explained, the DEmotiv toolbox displays 14 brain signals (7 for each hemisphere) and the bigger screen eases the visibility of the signals.
  • 39. Software Tools & Concepts The soul of the project is software, and in this section both the tools used for development and the different APIs involved will be explained. The entire application was developed using Objective-C with the Xcode 4.2 SDK on Mac OS Lion. The project itself saw many modifications during its course: implementing test functionalities, adding new ones and removing those that did not work as expected. The review will focus on the different implemented features, how and why they were added, the methodologies used, flow charts to explain the code, and the main problems faced along the way and their solutions (when available). Xcode 4.2 SDK Xcode 4.2 is the Integrated Development Environment used in this project and the most important tool when developing for Mac OS or iOS. It is available to all Mac OS Lion users through the Mac App Store and it includes most of Apple's developer documentation to assist the user. It supports C, C++, Objective-C, Objective-C++, Java, AppleScript, Python and Ruby source code with a variety of programming models, including but not limited to Cocoa, Carbon, and Java. Although version 4.2 was used for this project, Xcode sees constant updates to implement new functionalities and fix bugs; at the time of writing, the latest version is 4.3.2.
  • 40. Figure 19: Xcode Welcome Screen Now more functions of this program will be explained. This is Xcode's main window, divided into different sections: Figure 20: Xcode Main Window
  • 41. 1. This window is used to display the functions of the set of buttons numbered 6, which from left to right are: Figure 21: Navigation buttons a. Project navigator: the project's file tree structure. All the files related to the project are placed here in a tree structure. No matter whether they are source code files or supporting files such as texts or images, they will all be present on this part of the screen and can be sorted into folders and sub-folders to accommodate the user's needs. b. Symbol navigator: it lets the user browse the project's classes. c. Search navigator: provides a search function for both files and code. d. Issue navigator: it displays the "issues" that showed up during compilation. By issues, Xcode means important information that does not interrupt compilation but that the developer should review in order to prevent bugs. e. Debug navigator: shows low-level information related to debugging the project. f. Breakpoint navigator: shows the breakpoints placed by the developer to handle the debugging process. g. Log navigator: it serves as the project log, registering compilation times, debugging times, etc. 2. Main Window: this part of the screen shows whatever file or function has been chosen. If it is a source code file, the source code will be presented here; in the case of an image, the image itself will be shown here.
  • 42. 3. The panes on the right comprise: a. The File Inspector: it shows attributes and details of the selected element, such as Identity and Type, Localization, etc. Figure 22: The File Inspector b. The Quick Help Inspector: it shows information about code selected within a source code file. For instance, it would show the class hierarchy and/or definition of an element in the code.
  • 43. Figure 23: Quick Help Inspector example c. The Libraries: the bottom part displays the element libraries for Code Snippets, Media, File Templates and Objects. They allow the user to access elements through a GUI with several shortcuts, instead of programmatically.
  • 44. Figure 24: The File Template Library 4. At the top-right corner of the window there are 3 groups of buttons that control the Xcode GUI: a. The Editor buttons handle the way the main window of the GUI displays information. They provide single-page view or two-page split-view options, for instance. b. The View buttons allow the user to hide different parts of the Xcode GUI. c. The Organizer provides access to Help and Documentation. Figure 25: Editor, View and Organizer
  • 45. 5. The top-left corner shows the Play/Stop buttons, used to compile and run the project, and the project's name. 6. The navigation buttons, already explained in 1. 7. The bottom windows show supplemental information during debugging. Objective C The Objective-C language is a simple computer language designed to enable sophisticated object-oriented programming. Objective-C is defined as a small but powerful set of extensions to the standard ANSI C language. Its additions to C are mostly based on Smalltalk, one of the first object-oriented programming languages. Objective-C is designed to give C full object-oriented programming capabilities, and to do so in a simple and straightforward way. Most object-oriented development environments consist of several parts:
- An object-oriented programming language
- A library of objects
- A suite of development tools
- A runtime environment
View Controllers View controllers are a vital link between an app's data and its visual appearance. Whenever an iOS app displays a user interface, the displayed content is managed by a view controller or a group of view controllers coordinating with each other. Therefore, view controllers provide the skeletal framework on which to build apps. iOS provides many built-in view controller classes to support standard user interface pieces, such as navigation and tab bars.
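As a minimal illustration of this pattern (the controller below is hypothetical, not one of the project's classes), a view controller subclass typically overrides lifecycle methods such as viewDidLoad to prepare its views:

#import <UIKit/UIKit.h>

// Hypothetical controller showing the lifecycle hooks mentioned above.
@interface SignalViewController : UIViewController
@end

@implementation SignalViewController
-(void)viewDidLoad {
    [super viewDidLoad];
    // Called once the view hierarchy has loaded; configure subviews here.
    self.view.backgroundColor = [UIColor whiteColor];
}
-(void)viewWillAppear:(BOOL)animated {
    [super viewWillAppear:animated];
    // Called every time the scene is about to be shown, e.g. to refresh displayed data.
}
@end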
  • 46. Figure 26: View Controllers by Apple Methods Methods are functions that are defined in a class. Objective-C supports two types of methods: instance methods and class methods. Instance methods can be called only using an instance of the class and are prefixed with the minus sign (-) character. Class methods can be invoked directly using the class name, do not need an instance of the class in order to work, and are prefixed with the plus sign (+) character. In some programming languages, such as C# and Java, class methods are known as static methods. In Objective-C, the anatomy of a method is:
  • 47.
-(void)doSomething:(NSString *)str withAnotherPara:(float)value {
    //---implementation here---
}
Code Snippet 1: example of a method
Actions An action is a method that can handle events raised by views (for example, when a button is clicked) in the View window. An outlet, on the other hand, allows your code to programmatically reference a view on the View window. Action methods must have a conventional signature. The UIKit framework permits some variation of signature, but both platforms accept action methods with a signature similar to the following:
- (IBAction)doSomething:(id)sender;
Code Snippet 2: example of an Action
The type qualifier IBAction, which is used in place of the void return type, flags the declared method as an action so that Interface Builder is aware of it. For an action method to appear in Interface Builder, we first must declare it in a header file of the class whose instance is to receive the action message. Storyboards & the Interface Builder For the design process of GUIs, Xcode provides a very useful tool called the Interface Builder. The main purpose of this tool is to let the developer build a graphical user interface in an easy manner, providing visual elements to use instead of code.
  • 48. Figure 27: Xcode's Interface Builder With the introduction of iOS 5 and the release of Xcode 4.2, the Interface Builder supported a new way to develop GUIs known as Storyboards. A storyboard is a visual representation of the user interface of an iOS application, showing screens of content and the connections between those screens. A storyboard is composed of a sequence of scenes, each of which represents a view controller and its views; scenes are connected by segue objects, which represent a transition between two view controllers. Xcode provides a visual editor for storyboards, where we can lay out and design the user interface of an application by adding views such as buttons, table views, and text views onto scenes. In addition, a storyboard enables us to connect a view to its controller object, and to manage the transfer of data between view controllers. Using storyboards is Apple’s recommended way to design the user interface of an application because they enable us to visualize the appearance and flow of the user interface on one canvas.
  • 49. Figure 28: Explaining storyboards by Apple On iPhone, each scene corresponds to a full screen's worth of content; on iPad, multiple scenes can appear on screen at once, for example using popover view controllers. Each scene has a dock, which displays icons representing the top-level objects of the scene. The dock is used primarily to make action and outlet connections between the view controller and its views. UI Button A button is one of the UI elements most used during the development of this project. Its purpose is to execute a method or methods when it is pressed on the screen. This is done by creating an Action in the header file, containing all the code that shall be executed when the button is pressed, and linking that Action to the button in the Storyboard, as sketched below. Figure 29: Button Example
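As a minimal sketch of this wiring (the class and method names below are hypothetical, not taken from the project), the header declares the action so that Interface Builder can see it, the implementation provides the behaviour, and the connection itself is made in the storyboard by control-dragging from the button to the action:

// MyViewController.h: declares an action visible to Interface Builder.
#import <UIKit/UIKit.h>

@interface MyViewController : UIViewController
-(IBAction)startPressed:(id)sender; // hypothetical action linked to a button
@end

// MyViewController.m: the code executed when the button is tapped.
@implementation MyViewController
-(IBAction)startPressed:(id)sender {
    NSLog(@"Button tapped: start acquisition here.");
}
@end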
  • 50. Software Design Developing a new application can sometimes become a hit-and-miss process. In order to avoid that, it is important to have a clear idea of what the goal of the application is and what it should feature. In short, we should ask ourselves: what do we want the application to do? Of course, this is limited by the usual scarce resources, time and manpower, so it is important to set realistic goals. After that, the next question that should be addressed is: how do we do it? And that is exactly what will be detailed in the following section. API: Core-plot 0.9 Soon enough during the design of the project it became obvious that the plotting of signals would be an essential part of development. As seen in the original MATLAB toolbox, it is necessary to display 14 channels of the brain, 7 for each hemisphere. Although Apple puts several libraries at the developer's disposal, there are no specific APIs to ease the process of coding a function plot, and for that reason it was necessary to look for other solutions. The Core-plot API is a community-developed library that provides a plotting framework for OS X and iOS. It provides 2D visualization of data, and is tightly integrated with Apple technologies like Core Animation, Core Data, and Cocoa Bindings.
  • 51. Figure 30: Core-Plot It is free of charge and available to everyone through its Google Project homepage at http://code.google.com/p/core-plot/ under the open source BSD license. This API represents the core of the project and, as such, it will be explained in detail in the following sub-sections; but before delving into the classes that make up Core Plot, it is worth considering the design goals of the framework. Core Plot has been developed to run on both Mac OS X and iOS. This places some restrictions on the technologies that can be used: AppKit drawing is not possible, and view classes like NSView and UIView can only be used as host views. Drawing is instead performed using the low-level Quartz 2D API, and Core Animation layers are used to build up the various different aspects of a graph. It's not all bad news, because utilizing Core Animation also opens up a whole range of possibilities for introducing 'eye-candy'. Graphs can be animated, with transitions and effects. The objective is to have Core Plot be capable of not only producing publication-quality still images, but also stunning graphical effects and interactivity. Another objective that is influential in the design of Core Plot is that it should behave, as much as possible from a developer's perspective, like a built-in framework. Design patterns and technologies used in Apple's own frameworks, such as the data source pattern, delegation, and bindings, are all supported in Core Plot.
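Since NSView/UIView can only act as hosts, embedding a graph on iOS goes through Core Plot's hosting view. A minimal sketch, assuming the Core Plot 0.9-era names used throughout this chapter (CPTGraphHostingView and its hostedGraph property); exact signatures may differ between releases:

#import "CorePlot-CocoaTouch.h"

// Inside a view controller: create a graph and attach it to a hosting view.
CPTGraphHostingView *hostingView =
    [[[CPTGraphHostingView alloc] initWithFrame:self.view.bounds] autorelease];
[self.view addSubview:hostingView];

CPTXYGraph *graph = [[[CPTXYGraph alloc] initWithFrame:hostingView.bounds] autorelease];
hostingView.hostedGraph = graph; // the UIView merely hosts; Core Plot draws via Quartz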
  • 52. Anatomy of the Graph This diagram shows a standard bar graph with two data sets plotted. Below, the chart has been annotated to show the various components of the chart, and the naming scheme used in Core Plot to identify them. Figure 31: Official anatomy of a graph in core-plot
  • 53. Class Diagram This standard UML class diagram gives a static view of the main classes in the framework. The cardinality of relationships is given by a label, with a '1' indicating a to-one relationship, and an asterisk (*) representing a to-many relationship. Figure 32: Official Class diagram of Core-plot
  • 54. Objects and Layers This diagram shows run time relationships between objects (right) together with layers in the Core Animation layer tree (left). Colour coding shows the correspondence between objects and their corresponding layers. Figure 33: Official Objects and Layers diagram Layers Core Animation's layer class, CALayer, is not very suitable for producing vector images, as required for publication quality graphics, and provides no support for event handling. For these reasons, Core Plot layers derive from a class called CPTLayer, which itself is a subclass of CALayer. CPTLayer includes drawing methods that make it possible to produce high quality vector graphics, as well as event handling methods to facilitate interaction.
  • 55. The drawing methods include:
-(void)renderAsVectorInContext:(CGContextRef)context;
-(void)recursivelyRenderInContext:(CGContextRef)context;
-(NSData *)dataForPDFRepresentationOfLayer;
Code Snippet 3: the drawing methods
When subclassing CPTLayer, it is important that you don't just override the standard drawInContext: method, but instead override renderAsVectorInContext:. That way, the layer will draw properly when vector graphics are generated, as well as when drawn to the screen. Graphs The central class of Core Plot is CPTGraph. In Core Plot, the term 'graph' refers to the complete diagram, which includes axes, labels, a title, and one or more plots (eg histogram, line plot). CPTGraph is an abstract class from which all graph classes derive. A graph class is fundamentally a factory: it is responsible for creating the various objects that make up the graphic, and for setting up the appropriate relationships. The CPTGraph class holds references to objects of other high-level classes, such as CPTAxisSet, CPTPlotArea, and CPTPlotSpace. It also keeps track of the plots (CPTPlot instances) that are displayed on the graph.
@interface CPTGraph : CPTBorderedLayer {
    @private
    CPTPlotAreaFrame *plotAreaFrame;
    NSMutableArray *plots;
    NSMutableArray *plotSpaces;
    NSString *title;
    CPTTextStyle *titleTextStyle;
    CPTRectAnchor titlePlotAreaFrameAnchor;
    CGPoint titleDisplacement;
    CPTLayerAnnotation *titleAnnotation;
    CPTLegend *legend;
    CPTLayerAnnotation *legendAnnotation;
    CPTRectAnchor legendAnchor;
    CGPoint legendDisplacement;
}

@property (nonatomic, readwrite, copy) NSString *title;
@property (nonatomic, readwrite, copy) CPTTextStyle *titleTextStyle;
@property (nonatomic, readwrite, assign) CGPoint titleDisplacement;
@property (nonatomic, readwrite, assign) CPTRectAnchor titlePlotAreaFrameAnchor;
@property (nonatomic, readwrite, retain) CPTAxisSet *axisSet;
@property (nonatomic, readwrite, retain) CPTPlotAreaFrame *plotAreaFrame;
@property (nonatomic, readonly, retain) CPTPlotSpace *defaultPlotSpace;
@property (nonatomic, readwrite, retain) NSArray *topDownLayerOrder;
@property (nonatomic, readwrite, retain) CPTLegend *legend;
@property (nonatomic, readwrite, assign) CPTRectAnchor legendAnchor;
@property (nonatomic, readwrite, assign) CGPoint legendDisplacement;

-(void)reloadData;
-(void)reloadDataIfNeeded;

-(NSArray *)allPlots;
-(CPTPlot *)plotAtIndex:(NSUInteger)index;
-(CPTPlot *)plotWithIdentifier:(id <NSCopying>)identifier;
-(void)addPlot:(CPTPlot *)plot;
-(void)addPlot:(CPTPlot *)plot toPlotSpace:(CPTPlotSpace *)space;
-(void)removePlot:(CPTPlot *)plot;
-(void)removePlotWithIdentifier:(id <NSCopying>)identifier;
-(void)insertPlot:(CPTPlot *)plot atIndex:(NSUInteger)index;
-(void)insertPlot:(CPTPlot *)plot atIndex:(NSUInteger)index intoPlotSpace:(CPTPlotSpace *)space;

-(NSArray *)allPlotSpaces;
-(CPTPlotSpace *)plotSpaceAtIndex:(NSUInteger)index;
-(CPTPlotSpace *)plotSpaceWithIdentifier:(id <NSCopying>)identifier;
-(void)addPlotSpace:(CPTPlotSpace *)space;
-(void)removePlotSpace:(CPTPlotSpace *)plotSpace;

-(void)applyTheme:(CPTTheme *)theme;

@end
Code Snippet 4: The CPTGraph Class
  • 57. CPTGraph is an abstract superclass; subclasses like CPTXYGraph are actually responsible for doing most of the creation and organization of graph components. Each subclass is usually associated with particular subclasses of the various layers that make up the graph. For example, the CPTXYGraph creates an instance of CPTXYAxisSet and CPTXYPlotSpace. Plot Area The plot area is the part of a graph where data is plotted. It is typically bordered by axes, and grid lines may also appear in the plot area. There is only one plot area for each graph, and it is represented by the class CPTPlotArea. The plot area is surrounded by a CPTPlotAreaFrame, which can be used to add a border to the area. Plot Spaces Plot spaces define the mapping between the coordinate space in which a set of data exists and the drawing space inside the plot area. For example, if you were to plot the speed of a train versus time, the data space would have time along the horizontal axis and speed on the vertical axis. The data space may range from 0 to 150 km/h for the speed, and 0 to 180 minutes for the time. The drawing space, on the other hand, is dictated by the bounds of the plot area. A plot space, represented by a descendant of the CPTPlotSpace class, defines the mapping between a coordinate in the data space and the corresponding point in the plot area. It is tempting to use the built-in support for affine transformations to perform the mapping between the data and drawing spaces, but this would be very limiting, because the mapping does not have to be linear. For example, it is not uncommon to use a logarithmic scale for the data space. To facilitate as wide a range of data sets as possible, values in the data space can be stored internally as NSDecimalNumber instances. It makes no sense to store values in the drawing space
in this way, because drawing coordinates are represented in Cocoa by floating point numbers (CGFloat), and any extra precision would be lost.

A CPTPlotSpace subclass must implement methods for transforming from drawing coordinates to data coordinates, and for converting from data coordinates to drawing coordinates:

-(CGPoint)plotAreaViewPointForPlotPoint:(NSDecimal *)plotPoint;
-(CGPoint)plotAreaViewPointForDoublePrecisionPlotPoint:(double *)plotPoint;
-(void)plotPoint:(NSDecimal *)plotPoint forPlotAreaViewPoint:(CGPoint)point;
-(void)doublePrecisionPlotPoint:(double *)plotPoint forPlotAreaViewPoint:(CGPoint)point;

Code Snippet 5: Plot spaces

Data coordinates --- represented here by the 'plot point' --- are passed as a C array of NSDecimals or doubles. Drawing coordinates --- represented here by the 'view point' --- are passed as standard CGPoint instances. Whenever an object needs to perform the transform from data to drawing coordinates, or vice versa, it should query the plot space to which it corresponds. For example, instances of CPTPlot (discussed below) are each associated with a particular plot space, and use that plot space to determine where in the plot area they should draw.

It is important to realize that a single graph may contain multiple plots, and that these plots may be plotted on different scales. For example, one plot may need to be drawn with a logarithmic scale, while a separate plot may be drawn on a linear scale. There is nothing to prevent both plots appearing in a single graph. For this reason, a single CPTGraph instance can have multiple instances of CPTPlotSpace. In the most common cases there will only be a single instance of CPTPlotSpace, but the flexibility exists within the framework to support multiple spaces in a single graph.
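As a concrete illustration of the double-precision variants quoted above, the following sketch maps the running train example between the two spaces; the graph variable and the numeric values are assumptions made for illustration:

// Map the data point (t = 90 min, v = 120 km/h) into drawing coordinates.
CPTPlotSpace *space = graph.defaultPlotSpace;
double dataPoint[2] = { 90.0, 120.0 };      // {x, y} in data coordinates
CGPoint viewPoint =
    [space plotAreaViewPointForDoublePrecisionPlotPoint:dataPoint];

// ...and back again, from drawing coordinates to data coordinates.
double recovered[2];
[space doublePrecisionPlotPoint:recovered forPlotAreaViewPoint:viewPoint];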
Plots

A particular representation of data in a graph is known as a 'plot'. For example, data could be shown as a line or scatter plot, with a symbol at each data point. The same data could be represented by a bar plot/histogram. A graph can have multiple plots, and each plot can derive from a single data set or from different data sets: they are completely independent of one another.

Although it may not seem like it at first glance, a plot is analogous to a table view. For example, to present a simple line plot of the speed of a train versus time, we need a value for the speed at different points in time. This data could be stored in two columns of a table view, or represented as a scatter plot. In effect, the plot and the table view are just different views of the same model data.

What this means is that the same design patterns used to populate table views with data can be used to provide data to plots. In particular, we can either use the data source design pattern, or we can use bindings. To provide a plot with data using the data source approach, you set the dataSource outlet of the CPTPlot object, and then implement the data source methods.

@protocol CPTPlotDataSource <NSObject>

-(NSUInteger)numberOfRecords;

@optional

// Implement one of the following
-(NSArray *)numbersForPlot:(CPTPlot *)plot field:(NSUInteger)fieldEnum recordIndexRange:(NSRange)indexRange;
-(NSNumber *)numberForPlot:(CPTPlot *)plot field:(NSUInteger)fieldEnum recordIndex:(NSUInteger)index;

@end

Code Snippet 6: Providing data to plots
It is possible to think of the field as being analogous to a column identifier in a table view, and the record index as being analogous to the row index. Each type of plot has a fixed number of fields. For example, a scatter plot has two: the value for the horizontal axis (x) and the value for the vertical axis (y). An enumerator in the CPTScatterPlot class defines these fields.

typedef enum _CPTScatterPlotField {
    CPTScatterPlotFieldX,
    CPTScatterPlotFieldY
} CPTScatterPlotField;

Code Snippet 7: The ScatterPlot X and Y values

A record is analogous to the row of a table view. For a scatter plot, it corresponds to a single point on the graph.

Plot classes support not only the data source design pattern, but also Cocoa bindings, as a means of supplying data. This is again very similar to the approach taken with table views: each field of the plot --- analogous to a table column --- gets bound to a key path via an NSArrayController.

CPTGraph *graph = ...;
CPTScatterPlot *boundLinePlot = [[[CPTScatterPlot alloc] initWithFrame:CGRectZero] autorelease];
boundLinePlot.identifier = @"Bindings Plot";
boundLinePlot.dataLineStyle.lineWidth = 2.f;
[graph addPlot:boundLinePlot];
[boundLinePlot bind:CPTScatterPlotBindingXValues toObject:self withKeyPath:@"arrangedObjects.x" options:nil];
[boundLinePlot bind:CPTScatterPlotBindingYValues toObject:self withKeyPath:@"arrangedObjects.y" options:nil];

Code Snippet 8: Cocoa Bindings example

The superclass of all plot classes is CPTPlot. This is an abstract base class; each subclass of CPTPlot represents a particular variety of plot. For example, the CPTScatterPlot class is used to draw line and scatter plots, while the CPTBarPlot class is used for bar and histogram plots.
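For comparison with the bindings example above, the following is a minimal data source sketch for the train example, conforming to the protocol quoted earlier; TrainSpeedSource and its speeds array are hypothetical names used only for illustration:

@interface TrainSpeedSource : NSObject <CPTPlotDataSource>
@property (nonatomic, retain) NSArray *speeds;   // NSNumber values, one per minute
@end

@implementation TrainSpeedSource

@synthesize speeds;

-(NSUInteger)numberOfRecords
{
    return [speeds count];
}

-(NSNumber *)numberForPlot:(CPTPlot *)plot
                     field:(NSUInteger)fieldEnum
               recordIndex:(NSUInteger)index
{
    if (fieldEnum == CPTScatterPlotFieldX) {
        // The record index doubles as the time value on the horizontal axis.
        return [NSNumber numberWithUnsignedInteger:index];
    }
    return [speeds objectAtIndex:index];         // speed on the vertical axis
}

@end

Setting an instance of this class as the dataSource of a CPTScatterPlot is then enough for the plot to pull its values on demand, exactly as a table view pulls its rows.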
A plot object has a close relationship to the CPTPlotSpace class discussed earlier. In order to draw itself, the plot class needs to transform the values it receives from the data source into drawing coordinates. The plot space serves this purpose.

Axes

Axes describe the scale of the plotting coordinate space to the viewer. A basic graph will have just two axes, one for the horizontal direction (x) and one for the vertical direction (y), but this is not a constraint in Core Plot --- you can add as many axes as you like. Axes can appear at the sides of the plot area, but also on top of it. Axes can have different scales, and can include major and/or minor ticks, as well as labels and a title.

Each axis on a graph is represented by an object of a class descendant from CPTAxis. CPTAxis is responsible for drawing itself and accessories like ticks and labels. To do this it needs to know how to map data coordinates into drawing coordinates; for this reason, each axis is associated with a single instance of CPTPlotSpace.

A graph can have multiple axes, but all axes get grouped together in a single CPTAxisSet object. An axis set is a container for all the axes belonging to a graph, as well as a factory for creating standard sets of axes (e.g. CPTXYAxisSet creates two axes, one for x and one for y).

Axis labels are usually textual, but there is support in Core Plot for custom labels: any Core Animation layer can be used as an axis label by wrapping it in an instance of the CPTAxisLabel class.
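As a brief sketch of that custom-label support (the label text, styling values and the graph variable are assumptions; CPTTextLayer, CPTAxisLabel and the labelling-policy constant belong to the Core Plot API):

// Wrap a Core Animation text layer in a CPTAxisLabel and pin it at x = 0.
CPTTextLayer *content = [[[CPTTextLayer alloc] initWithText:@"t = 0"] autorelease];
CPTAxisLabel *label = [[[CPTAxisLabel alloc] initWithContentLayer:content] autorelease];
label.tickLocation = CPTDecimalFromFloat(0.0f);
label.offset = 5.0f;

// Disable automatic labelling and install the custom label on the x axis.
CPTXYAxisSet *axisSet = (CPTXYAxisSet *)graph.axisSet;
axisSet.xAxis.labelingPolicy = CPTAxisLabelingPolicyNone;
axisSet.xAxis.axisLabels = [NSSet setWithObject:label];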
Dissecting the Project

This subsection goes into the details of the different functionalities of the project; for that purpose, the different files that make up the project will be presented.

The Plots

In this subsection the different plots that make up the project are presented on a temporal basis, pointing out the evolution of the project as development advanced. The plot files follow the canonical structure defined by the Core-plot 0.9 API but, of course, several changes were implemented in order to fit them to our specifics. The full code for each plot file will not be displayed here, as it can be found at the end of this document, but the important parts will nevertheless be highlighted.

TUTSimpleScatterPlot

This is the first plot developed. It is a simple scatter plot that draws given points, which must be fed to the plot, over an X-Y axis. Notable parts are:

The graph object that will host the scatter plot is created, and the plot area that will be used to draw the plot is hard-coded:
// Create a graph object which we will use to host just one scatter plot.
CGRect frame = [self.hostingView bounds];
self.graph = [[CPTXYGraph alloc] initWithFrame:frame];

// Add some padding to the graph, with more at the bottom for axis labels.
self.graph.plotAreaFrame.paddingTop = 20.0f;
self.graph.plotAreaFrame.paddingRight = 20.0f;
self.graph.plotAreaFrame.paddingBottom = 50.0f;
self.graph.plotAreaFrame.paddingLeft = 20.0f;

// Tie the graph we've created with the hosting view.
self.hostingView.hostedGraph = self.graph;

Code Snippet 9: setting the plot area

For the axes, it is possible to set a title, a line style (colour, width) and a position in the plot area.

// Modify the graph's axes with a label, line style, etc.
CPTXYAxisSet *axisSet = (CPTXYAxisSet *)self.graph.axisSet;

axisSet.xAxis.title = @"Data X";
axisSet.xAxis.titleTextStyle = textStyle;
axisSet.xAxis.titleOffset = 30.0f;
axisSet.xAxis.axisLineStyle = lineStyle;
axisSet.xAxis.majorTickLineStyle = lineStyle;
axisSet.xAxis.minorTickLineStyle = lineStyle;
axisSet.xAxis.labelTextStyle = textStyle;
axisSet.xAxis.labelOffset = 3.0f;
axisSet.xAxis.majorIntervalLength = CPTDecimalFromFloat(2.0f);
axisSet.xAxis.minorTicksPerInterval = 1;
axisSet.xAxis.minorTickLength = 5.0f;
axisSet.xAxis.majorTickLength = 7.0f;

axisSet.yAxis.title = @"Data Y";
axisSet.yAxis.titleTextStyle = textStyle;
axisSet.yAxis.titleOffset = 40.0f;
axisSet.yAxis.axisLineStyle = lineStyle;
axisSet.yAxis.majorTickLineStyle = lineStyle;
axisSet.yAxis.minorTickLineStyle = lineStyle;
axisSet.yAxis.labelTextStyle = textStyle;
axisSet.yAxis.labelOffset = 3.0f;
axisSet.yAxis.majorIntervalLength = CPTDecimalFromFloat(10.0f);
axisSet.yAxis.minorTicksPerInterval = 1;
axisSet.yAxis.minorTickLength = 5.0f;
axisSet.yAxis.majorTickLength = 7.0f;

Code Snippet 10: coding the axes
The plot area position and the axis values must be chosen wisely, so that the plot uses as much of the available area as possible. For that purpose:

// Setup some floats that represent the min/max values on our axis.
float xAxisMin = -10;
float xAxisMax = 10;
float yAxisMin = 0;
float yAxisMax = 100;

// We modify the graph's plot space to setup the axis' min / max values.
CPTXYPlotSpace *plotSpace = (CPTXYPlotSpace *)self.graph.defaultPlotSpace;
plotSpace.xRange = [CPTPlotRange plotRangeWithLocation:CPTDecimalFromFloat(xAxisMin)
                                                length:CPTDecimalFromFloat(xAxisMax - xAxisMin)];
plotSpace.yRange = [CPTPlotRange plotRangeWithLocation:CPTDecimalFromFloat(yAxisMin)
                                                length:CPTDecimalFromFloat(yAxisMax - yAxisMin)];

Code Snippet 11: binding axes and plot spaces

The last part to mention is the code that binds the different elements together --- the plot area, the graph and the plot --- with a given line style:

// Add a plot to our graph and axis. We give it an identifier so that we
// could add multiple plots (data lines) to the same graph if necessary.
CPTScatterPlot *plot = [[CPTScatterPlot alloc] init];
plot.dataSource = self;
plot.identifier = @"mainplot";
plot.dataLineStyle = lineStyle;
plot.plotSymbol = plotSymbol;
[self.graph addPlot:plot];

Code Snippet 12: binding graph and plots

The mechanics of the elements described above are maintained in the other plots of the project, and will therefore be omitted from their descriptions to avoid redundancy.
ScatterPlot2

This is the evolution of the previous plot, TUTSimpleScatterPlot, and represents an intermediate step in the project. The aim of this plot was to design the final form that would host a channel and, with it, 7 different plots at the same time in one graph. From this plot file it is worth mentioning the implementation of the numberForPlot: method.

-(NSNumber *)numberForPlot:(CPTPlot *)plot field:(NSUInteger)fieldEnum recordIndex:(NSUInteger)index
{
    NSArray *data1 = [self.graphData objectAtIndex:0];
    NSArray *data2 = [self.graphData objectAtIndex:1];
    NSArray *data3 = [self.graphData objectAtIndex:2];
    NSArray *data4 = [self.graphData objectAtIndex:3];
    NSArray *data5 = [self.graphData objectAtIndex:4];
    NSArray *data6 = [self.graphData objectAtIndex:5];
    NSArray *data7 = [self.graphData objectAtIndex:6];

    NSValue *val1 = [data1 objectAtIndex:index];
    NSValue *val2 = [data2 objectAtIndex:index];
    NSValue *val3 = [data3 objectAtIndex:index];
    NSValue *val4 = [data4 objectAtIndex:index];
    NSValue *val5 = [data5 objectAtIndex:index];
    NSValue *val6 = [data6 objectAtIndex:index];
    NSValue *val7 = [data7 objectAtIndex:index];

    CGPoint point1 = [val1 CGPointValue];
    CGPoint point2 = [val2 CGPointValue];
    CGPoint point3 = [val3 CGPointValue];
    CGPoint point4 = [val4 CGPointValue];
    CGPoint point5 = [val5 CGPointValue];
    CGPoint point6 = [val6 CGPointValue];
    CGPoint point7 = [val7 CGPointValue];

Code Snippet 13: numberForPlot: data gathering

Seven arrays are created, each storing the data points, in CGPoint format, of one plot; the value at the requested record index is then extracted from each array as a CGPoint, named point1 up to point7.
    switch (fieldEnum) {
        case CPTScatterPlotFieldX: {
            return [NSNumber numberWithFloat:point1.x];
        }
        case CPTScatterPlotFieldY: {
            if ([plot.identifier isEqual:@"mainplot"]) {
                return [NSNumber numberWithFloat:point1.y];
            } else if ([plot.identifier isEqual:@"mainplot2"]) {
                return [NSNumber numberWithFloat:point2.y];
            } else if ([plot.identifier isEqual:@"mainplot3"]) {
                return [NSNumber numberWithFloat:point3.y];
            } else if ([plot.identifier isEqual:@"mainplot4"]) {
                return [NSNumber numberWithFloat:point4.y];
            } else if ([plot.identifier isEqual:@"mainplot5"]) {
                return [NSNumber numberWithFloat:point5.y];
            } else if ([plot.identifier isEqual:@"mainplot6"]) {
                return [NSNumber numberWithFloat:point6.y];
            } else if ([plot.identifier isEqual:@"mainplot7"]) {
                return [NSNumber numberWithFloat:point7.y];
            }
        }
    }
    return [NSNumber numberWithFloat:0];
}

Code Snippet 14: choosing the data point in numberForPlot:

The way Core-plot knows which data points correspond to which line style and settings is through the use of an identifier. The identifiers chosen for this file are mainplot up to mainplot7. Each point data array is therefore assigned to one mainplot
line style within the method; the consequence is that the seven plots that form the graph are drawn together, but with different characteristics.

CustomRT

This is the final form of the plot. It implements several interesting new features, which are addressed below.

readData:

This method controls the data gathering/selection of the final plot. It receives a Boolean parameter that determines whether to plot data of the right or the left channel. As mentioned before, the voltage points that define the Y axis of the plots to draw are stored in a text file called MiguelUNIX.txt. Within this method there is code to open the aforementioned file, read it line by line, and parse the comma-separated figures, each of which corresponds to one voltage reading of a signal.

After that, through a timer, the method repeatedly triggers another method chained to it, called newDataFromFile:. Every time that method is called, one single value is drawn for each of the 7 plots that form the channel, so in order to achieve real-time plotting the method is called many times. The timer that calls newDataFromFile: from within readData: runs in an endless loop. It is named dataTimer, and its firing interval is derived from a global constant, kFrameRate2, defined at the beginning of the plot file. This constant lets us control how often the timer fires, and therefore how often the method is called and the points are drawn; in other words, it lets us control the speed at which the 7 signals (plots) of the graph are drawn on screen. Below is a flow chart that describes the basics of the implementation, followed by the actual code:
Figure 34: readData: flow chart
- (void)readData:(BOOL)leftChannel
{
    if (leftChannel) {
        leftChannelFlag = TRUE;
    } else {
        leftChannelFlag = FALSE;
    }

    [plotData1 removeAllObjects];
    [plotData2 removeAllObjects];
    [plotData3 removeAllObjects];
    [plotData4 removeAllObjects];
    [plotData5 removeAllObjects];
    [plotData6 removeAllObjects];
    [plotData7 removeAllObjects];

    NSString *filePath = @"MiguelUNIX";
    NSString *fileRoot = [[NSBundle mainBundle] pathForResource:filePath ofType:@"txt"];

    // read everything from text
    NSString *fileContents = [NSString stringWithContentsOfFile:fileRoot
                                                       encoding:NSUTF8StringEncoding
                                                          error:nil];

    // first, separate by new line
    allLinedStrings = [fileContents componentsSeparatedByCharactersInSet:
                          [NSCharacterSet newlineCharacterSet]];

    readIndex = 0;
    dataTimer = [NSTimer timerWithTimeInterval:0.1 / kFrameRate2
                                        target:self
                                      selector:@selector(newDataFromFile:)
                                      userInfo:nil
                                       repeats:YES];
    [[NSRunLoop mainRunLoop] addTimer:dataTimer forMode:NSDefaultRunLoopMode];
}

Code Snippet 15: the method readData:
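The two global constants that govern the drawing speed and the on-screen span are defined at the top of the plot file; their exact values are not reproduced in this section, so the following definitions are illustrative only:

// Illustrative values only (assumption); the actual definitions live at the
// top of the CustomRT plot file.
static const CGFloat    kFrameRate2     = 5.0;   // timer interval = 0.1 / kFrameRate2 seconds
static const NSUInteger kMaxDataPoints2 = 52;    // max points visible per plot (the "span")

With kFrameRate2 = 5.0, for instance, the timer would fire every 0.02 s, i.e. 50 new data points would be drawn per second for each plot.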
newDataFromFile:

This method fills the final 7 data point arrays, called myArray[i], that store the voltage values from MiguelUNIX.txt to be drawn on the plot. Each array is then linked to its corresponding plot, called thePlot. The final part of the method contains the code dedicated to getting the point from the data array and drawing it on the screen.

Here the other global constant, kMaxDataPoints2, comes into action. Its purpose is to set the maximum number of points that appear on a single screen per plot; it could therefore be said that it doubles as the "span" factor. By increasing or decreasing this value it is possible to show more or fewer data points over the axes.

The full code for this method is extensive and, for the sake of clarity, is not included in this section; the reader can find it in the code file library at the end of this document. The following figure shows a simplified flow chart of how the method works, followed by a minimal sketch of its core logic.
Figure 35: newDataFromFile: flow chart
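As indicated above, what follows is only a minimal sketch of the scrolling logic of newDataFromFile:, under assumed names taken from the flow chart; the authoritative version is the full listing at the end of this document.

-(void)newDataFromFile:(NSTimer *)theTimer
{
    // Stop reading once the dump file is exhausted.
    if (readIndex >= [allLinedStrings count]) return;

    // Break the current line into single comma-separated voltage values;
    // the left channel uses columns 0-6, the right channel columns 7-13.
    NSArray *singleStrs = [[allLinedStrings objectAtIndex:readIndex]
                              componentsSeparatedByString:@","];
    NSUInteger column = leftChannelFlag ? 0 : 7;

    // Keep at most kMaxDataPoints2 points on screen: free the first object
    // before appending the new voltage value, producing the scrolling effect.
    if ([plotData1 count] >= kMaxDataPoints2) {
        [plotData1 removeObjectAtIndex:0];
    }
    [plotData1 addObject:[NSNumber numberWithFloat:
                             [[singleStrs objectAtIndex:column] floatValue]]];
    // ... the same update is applied to plotData2 through plotData7,
    //     using columns column+1 through column+6 ...

    // Send the new data to the plot class so it is redrawn.
    [thePlot reloadData];

    readIndex++;
}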
pauseTimer:

This plot also features a way to pause the drawing process of the 7 plots in the graph. This is achieved by manipulating the timers that trigger the drawing process.

-(void)pauseTimer:(NSTimer *)timer
{
    pauseStart = [NSDate dateWithTimeIntervalSinceNow:0];
    previousFireDate = [timer fireDate];
    // Push the next fire date into the distant future, effectively
    // freezing the timer without invalidating it.
    [timer setFireDate:[NSDate distantFuture]];
}

Code Snippet 16: the pauseTimer: method

resumeTimer:

Naturally, the implementation of pauseTimer: required an opposite method to resume the timers:

-(void)resumeTimer:(NSTimer *)timer
{
    // Shift the saved fire date forward by the time spent paused.
    float pauseTime = -1 * [pauseStart timeIntervalSinceNow];
    [timer setFireDate:[NSDate dateWithTimeInterval:pauseTime
                                          sinceDate:previousFireDate]];
}

Code Snippet 17: the method resumeTimer:
Designing the GUI

The Storyboard file for the iOS adaptation of the DEmotiv Toolbox has the following aspect:

Figure 36: Project's Storyboard

As can be observed, a tab-based structure was chosen for the interface. The tab functionality is provided by the Tab Controller on the left. Although that scene is not a view that users will ever see by itself, it provides the framework needed for the 3 other views to work together through a tab-based menu. The other scenes are actual screens of the application, and they are simply called Welcome Scene, First Scene and Second Scene. As mentioned before when explaining the Interface Builder, each of these scenes is backed in code by its own View Controller class; they have been named, accordingly, WelcomeController, FirstViewController and SecondViewController.

From this point on, the different files and classes that form the GUI will be explained. The full code can be found as an appendix at the end of this document, but relevant code snippets will be presented in order to assist the explanations and facilitate a better understanding of the work.
The Tab Controller

This view is hidden from the user, as it provides the working functionality of the tab menu at the bottom of every scene.

Figure 37: Tab Controller

The application has 3 different views, each corresponding to a tab. For that purpose there are 3 buttons on the tab menu, one for each tab; by tapping them it is possible to browse among tabs. The Welcome tab is the one the user lands on when launching the application.
Welcome Scene

This is the first view of the application that the user arrives at, and it looks like this:

Figure 38: Welcome Screen

The final form of the welcome screen wasn't implemented in time, but the current one has a welcome message, a text field intended to hold the final application's instructions, and the Biomedical Imaging Lab logo.
Figure 39: Biomedical Imaging Lab logo

There is also a button called Plot, with no current purpose, that was used for plot tests. No graph is plotted on the welcome screen because the UI elements reduce the area available for displaying a graph. As it was desired to maximize the size of the plot, the other tabs, 1st and 2nd, were created; this makes it possible to use the whole screen of the iPad to display the graph.

First Scene

The 1st and 2nd Scenes have very similar GUI features. So why keep both? It was the natural consequence of trials and testing during the development of the project. Before arriving at the last version of the Custom Real Time Plot, several other plots were tried at different stages, and in order to test them (and keep them separate from the other plots) it was necessary to have scenes available as test benches. The 1st Scene looks like this:
Figure 40: 1st Scene

It has a very simple structure: a big white area that represents the Graph Hosting View for the plots, and buttons to launch two different plot tests. The scene is controlled by the FirstViewController class, which can be found in the source on the CD. The buttons trigger two different test plots:

• Scatter Plot Test: This was the first plot achieved with the Core-plot API. The other plots, as well as the final one, were built upon this one by adding functionalities and making modifications to the code. Let's take a look at the trigger code from FirstViewController.m:
NSMutableArray *data = [NSMutableArray array];
[data addObject:[NSValue valueWithCGPoint:CGPointMake(-10, 75)]];
[data addObject:[NSValue valueWithCGPoint:CGPointMake(-8, 50)]];
[data addObject:[NSValue valueWithCGPoint:CGPointMake(-6, 30)]];
[data addObject:[NSValue valueWithCGPoint:CGPointMake(-4, 10)]];
[data addObject:[NSValue valueWithCGPoint:CGPointMake(-2, 5)]];
[data addObject:[NSValue valueWithCGPoint:CGPointMake(0, 0)]];
[data addObject:[NSValue valueWithCGPoint:CGPointMake(3, 10)]];
[data addObject:[NSValue valueWithCGPoint:CGPointMake(4, 25)]];
[data addObject:[NSValue valueWithCGPoint:CGPointMake(5, 53)]];
[data addObject:[NSValue valueWithCGPoint:CGPointMake(7, 70)]];
[data addObject:[NSValue valueWithCGPoint:CGPointMake(10, 75)]];

self.scatterPlot = [[TUTSimpleScatterPlot alloc] initWithHostingView:_graphHostingView andData:data];
[self.scatterPlot initialisePlot];

Code Snippet 18: triggering Scatter Plot Test

As can be seen in the code snippet above, the plot's base file is TUTSimpleScatterPlot, which was previously explained. The data points for the plot are manually created one by one and stored in an NSMutableArray called data, and then fed to the triggering method initWithHostingView:andData:.

• Real Time Test: This is the evolution of the previous plot. It is based on the ScatterPlot2 file and represents an intermediate step between the Scatter Plot Test and the final plot.
Second Scene

The 2nd Scene serves as the screen for the final version of the brain plot achieved in this project, and it looks quite similar to the 1st one. It is backed in code by the SecondViewController class and has the following appearance:

Figure 41: The 2nd Scene

At first glance we can see that in this case there is not one, but two Graph Hosting Views (in white and beige). The reason is that two plots are drawn simultaneously. As mentioned in the Plot section, the final plot is meant to draw 7 signals corresponding to either the left or the right channel; by having 2 graphs on the same screen it is possible to draw both channels at the same time, displaying all 14 channels in total.
• Plot: This button triggers the drawing of the plots. The required commands are:

// action to plot both channels
- (IBAction)Plot:(id)sender {
    self.customRT1 = [[CustomRT alloc] initWithHostingView:_graphHostingViewLeft];
    [self.customRT1 readData:TRUE];
    [self.customRT1 initialisePlot:@"Left Channel"];

    self.customRT2 = [[CustomRT alloc] initWithHostingView:_graphHostingViewRight];
    [self.customRT2 readData:FALSE];
    [self.customRT2 initialisePlot:@"Right Channel"];
}

Code Snippet 19: Triggering the final plots

As can be seen in the code, the title of the graph is fed to the method initialisePlot:, and there is a second method, the previously explained readData:, which needs a Boolean value to read either the left or the right channel from the text file.

• Pause: This is a new functionality implemented for the final graphs. With the use of timers it is possible to pause the drawing process of the plot, keep its state, and then resume plotting. It is triggered by this:
// to pause plotting
- (IBAction)pausePlot:(id)sender {
    [self.customRT1 pauseTimer:[self.customRT1 returnTimer]];
    [self.customRT2 pauseTimer:[self.customRT2 returnTimer]];
}

Code Snippet 20: pause the plot

• Resume: This button is used along with Pause. Its purpose is to resume the drawing process of the graph after it has been paused.

// to resume plotting
- (IBAction)resumePlot:(id)sender {
    [self.customRT1 resumeTimer:[self.customRT1 returnTimer]];
    [self.customRT2 resumeTimer:[self.customRT2 returnTimer]];
}

Code Snippet 21: resume the plot
Project Execution & Results

In the previous chapter all the hardware and software components of the project were explained in detail: from the computer itself to the software tools and the code, one by one, they have been analysed. This section will show the final results of the process previously presented, despite the non-interactive nature of a text document. Alongside the accomplishments, the things that did not work and the limitations of the project will also be mentioned here.

Achieving Real Time

In every project there are always features that do not get implemented. There can be several reasons for that: finite resources (such as manpower and time), limitations of technology, or simply things that do not work as expected. This project is, of course, no exception. Deep into the development process an important obstacle was found. The original system architecture used by the DEmotiv Matlab toolbox looked like this:

Figure 42: Original DEmotiv Matlab Toolbox architecture
As mentioned before, the original Matlab toolbox acquired the data by establishing a Wi-Fi link with the Emotiv EPOC neuroheadset. The Wi-Fi signal itself is encrypted using Emotiv's proprietary format and therefore requires the Emotiv SDK for the decryption process. As the Emotiv SDK and the Matlab software were both available for the Windows PC, this represented no problem at all: the brain signal could be acquired and decrypted by the SDK and then used by Matlab for further manipulation.

One of the intended goals of this project was to dispense with the computer and use only the portable device (in our case the iPad) for all data acquisition, manipulation and computation purposes. But as the Wi-Fi link of the neuroheadset is encrypted, the receiving end must have the Emotiv SDK in order to decrypt the signal. Since Emotiv, at the time of this project, had no SDK for iOS devices, only for Windows, it is impossible to get the data straight from the neuroheadset on an iPad, because the iPad cannot decrypt the signal. This obstacle was impossible to overcome, so it was decided to work around it. The main consequences of this were:

• The need for a relay laptop, as a middle step, with the Emotiv SDK in charge of decrypting the headset signal and sending it to the iPad. This way it is possible to preserve a high mobility component, as originally intended.

Figure 43: proposed architecture for our system
• The link between the laptop and the iPad is beyond the scope of the project and was not implemented due to time constraints; it is left for a future revision of the project as a way to expand it.

• A direct consequence of the previous point is that, in the absence of that link, the signal plotted on the graphs comes from a dump file called MiguelUNIX.txt that stores a sample brain reading from the DEmotiv Matlab toolbox.

• Despite having no real-time data acquisition, there is in fact real-time graph plotting, as the acquisition was simulated (as previously explained) with the use of timers.

Results

In this section it is possible to see the final appearance of the different features of the project running on the iPad simulator.

The Splash Screen

This is the first thing the user sees when launching the application on the iPad. It is a splash screen featuring the logo of the University of Houston and a "please wait" message. Although there is no real need for a splash screen, as the project loads into memory almost instantly, it was added for aesthetic reasons and because it does not affect the user experience.
Figure 44: The Splash screen

The Welcome Screen

The appearance of the welcome screen is practically the same as the one previously shown in the Scene section. This is largely a consequence of the WYSIWYG (What You See Is What You Get) nature of building GUIs with the Interface Builder and Storyboards.

Figure 45: The Welcome Screen in action
The Simple Scatter Plot Test

This is how the first graph developed looks. It is a very simple plot of a few given points in a static fashion, but it served its purpose: learning how to use the Core-plot library.

Figure 46: the Simple Scatter Plot Test

The Real Time Plot Test

This is how the first real-time plot looks. It came to be as an evolution of the previous plot. Important new features were:
• 7 simultaneous plots.
• Data scrolling over the axes.
• Opening/reading from a text file.

Figure 47: Real Time plot test 1

The Custom RT Plot

This is how the final plot of the project looks. It shows both channels of signals in a split-view fashion. Important features implemented here are:

• 14 simultaneous plots.
• Split-view of both channels.
• Pause / Resume functions.
Two figures are provided in order to show the pause/resume and data scrolling features.

Figure 48: The final RT plot 1

Figure 49: The final RT plot 2