ASSESSMENT OF ADHD IN A SAMPLE OF ADULTS
THROUGH A COMPUTER GAME AND DATA MINING
In this work, we argue that a computer game named Supermarket Game, initially developed to identify dysexecutive syndrome, can aid in the process of diagnosing Attention-Deficit/Hyperactivity Disorder (ADHD) in adults. OBJECTIVE: To verify the predictive capability of the game, in a sample of university students aged 21 to 27 years, through Data Mining techniques. METHOD: 50 university students underwent 2 stages in an experiment: a medical diagnosis and a playing session. The game's data were processed by 4 Data Mining algorithms. Each algorithm yields several prediction models, according to the hypothesis being considered. The medical diagnosis was used as the gold-standard test to verify the prediction capability of the Data Mining techniques on the game's data. RESULTS: With all attributes in numeric format we obtained poor prediction performance. When the numeric attributes were discretized, a slight improvement was observed. Considering only 2 classes, we obtained a considerable gain in performance, mainly in the calibration metric. Our best result was obtained considering only the Time Spent attributes of each stage of the game and the K* algorithm. CONCLUSION: The Supermarket Game seems to be sensitive in the task of identifying ADHD cases in adults, although its capability to classify the disorder subtypes has not yet been verified.
KEYWORDS: ADHD, Data Mining, Game, Executive Dysfunction, Adults
1. INTRODUCTION
Attention-Deficit/Hyperactivity Disorder (ADHD) is a psychiatric disorder that affects 3% to 6% of children and adolescents, causing several impairments in both school and family life (Smith et al., 2007; Schmitz et al., 2002; Brook and Geva, 2001; Barkley, 1997). ADHD is defined in the 4th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV), published by the American Psychiatric Association (APA, 2000). It is characterized by 2 groups of symptoms: inattention symptoms and hyperactivity/impulsivity symptoms. The ADHD diagnosis is mainly based on interviews with the individual and/or reports from informants about the individual (Kessler et al., 2005; Smith et al., 2007). These reports can be produced through standardized questionnaires such as the Swanson, Nolan and Pelham-IV (SNAP-IV), based on the symptoms described in DSM-IV and used to diagnose children and adolescents (Swanson et al., 2001).
Despite the wide discussion about the presence of ADHD in childhood, some studies have investigated the manifestation of the disorder in adulthood. In 1980, with the publication of the 3rd edition of the DSM, the APA officially recognized the adult type of ADHD under the denomination "residual type". Since then, longitudinal studies have shown that ADHD can indeed persist into adulthood in around 60% to 70% of cases (Barkley and Gordon, 2000). Other studies show that, although the symptoms listed in DSM-IV were initially described to evaluate children and adolescents aged 7 to 17, the ADHD diagnosis in adults can be reliably accomplished through the same approach (Spencer et al., 1994; Barkley and Gordon, 2000); however, some adjustments need to be made, since some questions are inappropriate for the assessment of adults and need to be adapted (Kessler et al., 2005a). The Adult Self-Report Scale (ASRS), for example, is a questionnaire based on the symptoms listed in DSM-IV, adjusted for use with adults (Kessler et al., 2005b; Mattos et al., 2006).
Although such reports are widely used in the diagnosis process, this type of tool may reflect an arbitrary judgment about the subject from the particular point of view of the informant. The clinician relies on the report, and if the information is not correct, the diagnosis will not be correct either. To aid in the ADHD diagnosis process, some clinicians have considered neuropsychological tests alongside the questionnaires. Studies show that ADHD is associated with several neuropsychological deficits (Frazier et al., 2004). Some neuropsychological tests that have been considered in the ADHD diagnosis are: (1) The Test of Visual Attention (TAVIS-III), a continuous performance test that assesses several levels of visual attention, such as selective, alternating and sustained attention, in children and adolescents aged 6 to 17 years (Mattos and Duchesne, 1997). Studies show that this test can contribute to the ADHD diagnosis (Coutinho et al., 2007). (2) The Iowa Gambling Task (IGT), a psychological test widely used in research on cognition and emotion. It simulates a real-life decision-making environment in which the individual needs to decide what the best way to earn money is. Studies show that a healthy individual may identify the best ways after about 40 attempts; individuals with executive dysfunction, however, find this difficult (Bechara et al., 1997, 1994). (3) The Developmental Neuropsychological Assessment (NEPSY), a neuropsychological battery composed of 6 test sets, designed to evaluate children and adolescents aged 3 to 16 years (Kemp et al., 2001; Korkman et al., 1998). Couvadelli (2006) suggests that the Attention Test and Executive Function Test of the NEPSY are sensitive in identifying ADHD subtypes.
The great advantage of the tests cited above is that, in general, they do not rely on the judgments of informants, as questionnaires do. The information "extracted" from the patient through these tools may reveal hidden cognitive and behavioral features, independent of the individual's awareness, that would not show up in reports or self-reports. Nevertheless, these tests still need to be interpreted by a clinician, because they were not developed to assess ADHD directly, but rather to assess neuropsychological issues that can occur together with ADHD. So far, there is no standard test to assess ADHD. Another problem arises when we consider that most of the tests used to aid in the ADHD diagnosis process were developed to assess children and adolescents, not adults. Adults with ADHD are therefore left with a limited number of neuropsychological test options to be considered in the diagnosis.
In our work, we propose that a computer game, first designed to identify dysexecutive syndrome, combined with Data Mining algorithms to process the game's data, can aid the ADHD diagnosis in adults. Unlike the tests above, our approach is concerned with the direct classification of the ADHD subtypes through the identification of executive-dysfunction behaviors, rather than with the classification of executive dysfunction itself. The patient plays a game that works as a "behavioral catalyst", revealing patterns in the individual's in-game behavior during a play session. Each play session yields data that can be analyzed by Data Mining algorithms, which classify the identified patterns according to the ADHD subtypes. A challenging task in this context is to find an efficient prediction model for adults that correctly identifies the disorder subtypes using only the game data; this is our goal in this work.
2. THE SUPERMARKET GAME
The Supermarket Game was first designed to perform cognitive assessment, with the aim of showing that games can accomplish cognitive capture (Andrade, 2009). It was inspired by a neuropsychological test named the Zoo Map Test, used in the assessment of executive dysfunction (Wilson et al., 1997). Studies show that executive function is impaired in individuals with ADHD (Willcutt et al., 2005). As ADHD and executive dysfunction seem to be related, no modification to the game mechanics was necessary for our purpose.
The game is basically a labyrinth that must be traversed while the player acquires the items shown in a shopping list (see Figure 1). Its interface has a supermarket map, a shopping list on the right that shows the required items, the score obtained for each task performed, and the time spent. The player controls a shopper character (avatar) with the keyboard's arrow keys.
The game has 18 stages divided into 2 modes. Mode 1 has 10 stages in which the avatar must acquire all items shown in the shopping list in the shortest possible time, without passing the same place more than once. Mode 2 has 8 stages, in which the avatar also must acquire all items shown in the shopping list in the shortest possible time, without passing the same place more than once, but this time the items must be picked up in shopping-list order. Each mode starts with one item in the shopping list; for each new stage in the mode, one more item is added. Mode 1 assesses the player's planning capability and Mode 2 assesses the player's execution capability.
Figure 1: The Supermarket Game.
3. THE EXPERIMENT
3.1 The Subjects
The sample was drawn from a public university and is composed of 50 adults, all medical students, aged 21 to 27 years. To obtain this sample, about 300 individuals underwent screening through the ASRS self-report questionnaire. From this group, 17 individuals classified as positive cases by the questionnaire according to DSM-IV criteria were also clinically diagnosed as positive cases. We took nearly twice as many healthy individuals to use as controls, i.e., 33 negative cases. All 50 subjects also underwent a play session with the Supermarket Game.
The small sample used in the experiment (only 50 individuals) is due to the fact that clinically diagnosed positive ADHD cases are rare, especially among adults. Furthermore, to carry out this work we needed the permission of the competent bodies, including an ethics committee and the state department of education, and depending on the country and the social group being studied, these permissions are not easy to obtain.
3.2 The Data
In the Supermarket Game, each play session has 18 stages that must be completed. A play session produces a data set for a player. Some data are provided by the game mechanism and others by the test supervisor (the person who administers the game). In a game session, a total of 40 attributes are recorded for each player:
• Id: The player’s identiﬁcation;
• Age: The player’s age;
• Gender: The player’s gender;
• Points: 18 Score Points attributes (one for each stage);
• Time: 18 Time Spent attributes (one for each stage);
• Class: ADHD Classiﬁcation (The class attribute, provided by the clinician).
The Time Spent attributes were converted to integer type so that they are expressed in seconds. The attributes Id, Age and Gender are provided by the test supervisor. The Id attribute was not used, so the learning process was performed with only 39 attributes.
The class attribute is the individual's diagnosis, provided by the clinician, which the algorithms must predict. The cutoff used by the clinician to classify the ADHD subtypes was the one established by DSM-IV: each individual who presents 6 or more symptoms of inattention or hyperactivity-impulsivity was considered an ADHD carrier. According to this rule, 4 labels were used to classify the individuals according to ADHD subtype:
• Non-ADHD: Subjects with fewer than 6 symptoms in both symptom groups;
• ADHD-I: Subjects with 6 or more inattention symptoms but fewer than 6 hyperactivity-impulsivity symptoms;
• ADHD-HI: Subjects with 6 or more hyperactivity-impulsivity symptoms but fewer than 6 inattention symptoms;
• ADHD-C: Subjects with 6 or more symptoms in both symptom groups.
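The labeling rule above can be sketched as a small Python function. The function name and the symptom-count interface are illustrative assumptions, not part of the original work:

```python
def adhd_label(inattention: int, hyperactivity: int, cutoff: int = 6) -> str:
    """Assign a DSM-IV-style ADHD subtype label from symptom counts.

    An individual with `cutoff` (6) or more symptoms in a group is
    considered positive for that group.
    """
    inatt = inattention >= cutoff
    hyper = hyperactivity >= cutoff
    if inatt and hyper:
        return "ADHD-C"
    if inatt:
        return "ADHD-I"
    if hyper:
        return "ADHD-HI"
    return "Non-ADHD"

print(adhd_label(7, 2))  # ADHD-I
print(adhd_label(6, 6))  # ADHD-C
print(adhd_label(5, 5))  # Non-ADHD
```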
3.3 Data Analysis: Validation and Metrics
The technique used to evaluate the algorithms' performance is known as 10 times 10-fold stratified cross-validation. In this technique, the whole data set is randomized, divided into 10 folds (parts) and processed 10 times. In each iteration, the algorithm being evaluated uses 9 folds to build a prediction model and one fold to test it. The average of the accuracies obtained over the 10 iterations is taken as the prediction model accuracy. Furthermore, the data randomization process occurs 10 times, yielding a total of 100 iterations. This process is widely discussed in the literature (Witten and Frank, 2005; Han and Kamber, 2006; Alpaydin, 2010).
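As a sketch, the 10 times 10-fold procedure can be reproduced with scikit-learn (the original work used Weka implementations; the synthetic data set and the Gaussian Naive Bayes stand-in are assumptions made for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB

# Toy data standing in for the 50-player, 39-attribute game data set.
X, y = make_classification(n_samples=50, n_features=39, random_state=0)

# 10 randomizations x 10 stratified folds = 100 train/test iterations.
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
scores = cross_val_score(GaussianNB(), X, y, cv=cv)

print(len(scores))    # 100 iterations in total
print(scores.mean())  # average accuracy, reported as the model's accuracy
```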
The choice of performance metrics capable of correctly evaluating the accuracy of the prediction models is another important issue. For simplicity, in this work we evaluate the models through a binary classification approach. When the models are not dichotomous, i.e., the outcome variable has more than 2 class values, we consider the average over all class values, where each one in turn is taken as positive and the remaining classes as negative. Special care was taken with the prevalence issue. The ADHD prevalence in our sample is around 1/3, far higher than we would find in the general population (3% to 6%). To address this issue, we sought metrics that do not rely on prevalence, applying 2 concepts widely used in epidemiology: Discrimination and Calibration. Discrimination refers to the model's ability to distinguish correctly between 2 classes of outcomes (Balakrishnan and Rao, 2004). A model with high discrimination power produces a ranking that gives greater probabilities to positive cases than to negative cases. The discrimination metric used in our work is the Area Under the Receiver Operating Characteristic (ROC) Curve (AUC), a trade-off between the true positive rate and the false positive rate of a model (Fawcett, 2006). The Calibration of a model shows how closely the predicted probabilities agree numerically with the actual outcomes, regardless of any ranking (Balakrishnan and Rao, 2004). The Calibration metric used in our work is the Macro Average Arithmetic (MAVA), obtained as the average of the accuracies of each class considered in the model (Mitchell, 1997; Ferri et al., 2009). Neither metric (AUC nor MAVA) changes with prevalence.
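Both metrics can be computed directly. A minimal sketch on hypothetical predictions follows; note that MAVA, the unweighted mean of per-class accuracies, coincides with macro-averaged recall:

```python
import numpy as np
from sklearn.metrics import recall_score, roc_auc_score

# Hypothetical labels and predictions for a binary ADHD / Non-ADHD model.
y_true  = np.array([1, 1, 1, 0, 0, 0, 0, 0])
y_pred  = np.array([1, 0, 1, 0, 0, 1, 0, 0])
y_score = np.array([0.9, 0.4, 0.8, 0.2, 0.3, 0.6, 0.1, 0.2])

# MAVA: unweighted mean of per-class accuracies, insensitive to prevalence.
mava = recall_score(y_true, y_pred, average="macro")

# AUC: trade-off between true positive rate and false positive rate.
auc = roc_auc_score(y_true, y_score)

print(round(mava, 3))  # 0.733  (mean of 2/3 and 4/5)
print(round(auc, 3))   # 0.933  (14 of the 15 pos/neg pairs ranked correctly)
```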
Four algorithms were chosen for the classiﬁcation process:
• Naive Bayes: A simple but efficient Bayesian technique, widely used for diagnosis problems and discussed in the literature (Russell and Norvig, 2009; Witten and Frank, 2005; Han and Kamber, 2006; Alpaydin, 2010). It can produce excellent results when the assumption of independence between attributes holds. The algorithm used was implemented by John and Langley (John and Langley, 1995);
• Support vector classifier: A support vector classifier implementation (SMO) that replaces all missing values and transforms nominal attributes into binary ones. By default, it also normalizes all attributes. The algorithm used in our work was developed by John Platt (Platt, 1999);
• Lazy classifier: Also known as an instance-based classifier. In this approach the algorithm tries to identify the class of an instance by analyzing, through some similarity function, a set of other instances whose classes are known. The algorithm used in our work is K*, which uses an entropy-based distance function and was implemented by John Cleary and Leonard Trigg (Cleary and Trigg, 1995);
• Decision tree: A hierarchical structure of attributes, shaped like a tree, where each node denotes an attribute to be assessed and each branch denotes an outcome, i.e., a possible attribute value (Han and Kamber, 2006). The algorithm used, named J48, is a Java implementation of the C4.5 algorithm, originally developed in C (Quinlan, 1993).
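Rough scikit-learn counterparts of the four algorithms can be set up as follows. This is only a sketch: the original work used the Weka implementations, and K* has no scikit-learn equivalent, so an instance-based k-nearest-neighbors classifier stands in for it:

```python
from sklearn.naive_bayes import GaussianNB          # Naive Bayes
from sklearn.svm import SVC                         # support vector classifier (SMO-style)
from sklearn.neighbors import KNeighborsClassifier  # instance-based stand-in for K*
from sklearn.tree import DecisionTreeClassifier     # C4.5-style tree (J48 analogue)

classifiers = {
    "Naive Bayes": GaussianNB(),
    "SMO": SVC(probability=True, random_state=0),
    "K* (KNN stand-in)": KNeighborsClassifier(n_neighbors=3),
    "Decision Tree (J48)": DecisionTreeClassifier(criterion="entropy", random_state=0),
}

for name, clf in classifiers.items():
    print(name, "->", type(clf).__name__)
```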
4. RESULTS
We started by considering the numerical attributes in their natural form. In this first step, our goal is to verify whether the selected algorithms are capable of making good predictions when the class is nominal and the attributes are numerical. The results obtained are shown in Table 1.
Table 1: Processing results with numerical attributes.
Algorithm             MAVA (95% C.I.)     AUC (95% C.I.)
Naive Bayes           0.22 (0.15–0.31)    0.48 (0.38–0.58)
SMO                   0.22 (0.15–0.31)    0.51 (0.41–0.61)
K*                    0.23 (0.16–0.32)    0.49 (0.39–0.59)
Decision Tree (J48)   0.23 (0.16–0.32)    0.48 (0.38–0.58)
The algorithms showed very poor prediction capability when numerical attributes were considered. Indeed, some Data Mining algorithms do not show good prediction performance when working with numeric attributes, mainly when the attribute is not normally distributed in the sample. In our sample, the distribution of the Score Points attributes appears to be left-skewed, and the distribution of the Time Spent attributes appears to be right-skewed. To address this issue, we applied a technique known as discretization, in which the values of the numeric attributes are categorized according to a pre-determined rule (Witten and Frank, 2005). After some tests, we decided to discretize the attribute values into 4 intervals. As the selected algorithms can handle both numerical and categorical attributes, the experiment could proceed in the same way. The results obtained with the discretization hypothesis are shown in Table 2.
Table 2: Processing results with nominal attributes.
Algorithm             MAVA (95% C.I.)     AUC (95% C.I.)
Naive Bayes           0.24 (0.17–0.33)    0.57 (0.43–0.70)
SMO                   0.25 (0.17–0.34)    0.51 (0.38–0.64)
K*                    0.24 (0.17–0.33)    0.50 (0.37–0.63)
Decision Tree (J48)   0.34 (0.22–0.48)    0.62 (0.48–0.74)
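The discretization step can be sketched with scikit-learn. The equal-frequency (quantile) binning strategy and the synthetic skewed Time Spent values are assumptions, since the paper does not state which binning rule was used:

```python
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer

rng = np.random.default_rng(0)
# Hypothetical right-skewed Time Spent values (seconds) for one stage.
time_spent = rng.lognormal(mean=4.0, sigma=0.5, size=50).reshape(-1, 1)

# Discretize the numeric attribute into 4 intervals (ordinal categories).
disc = KBinsDiscretizer(n_bins=4, encode="ordinal", strategy="quantile")
categories = disc.fit_transform(time_spent)

print(np.unique(categories))  # the 4 resulting interval labels
```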
When the numerical attributes were categorized into 4 intervals, the algorithms showed a slight improvement. The Decision Tree algorithm had a considerable increase in the AUC metric, and the remaining algorithms had small improvements in both metrics. Although these results are enough to show that nominal attributes are more efficient than numerical attributes in this domain, the models produced are still too poor for a diagnostic aid.
We then began to suspect that some issues could seriously impair the learning of a prediction model. First, the large number of class labels needed to describe the disorder subtypes could be confusing the models. In our sample we have 4 labels and 50 instances, of which 23 are Non-ADHD, 3 are ADHD-HI, 11 are ADHD-C and 13 are ADHD-I. Notice that an algorithm describing the ADHD-HI subtype can use only 3 instances from the data set: perhaps too few to build a model. Furthermore, we have 39 attributes in the data set, a very large number of features given the small number of instances. To work around the problem of the large number of class labels, we applied a simple change to the class definition, without contradicting the DSM-IV criteria: the individuals labeled ADHD-HI, ADHD-I and ADHD-C were re-labeled as just ADHD, leaving us with only 2 classes from then on. The goal is to yield a new joint ADHD class, composed of all ADHD instances regardless of subtype, with a larger amount of information to build more accurate models. Table 3 shows the results obtained with this new class definition.
Table 3: Processing results with nominal attributes and 2 classes.
Algorithm             MAVA (95% C.I.)     AUC (95% C.I.)
Naive Bayes           0.51 (0.38–0.64)    0.51 (0.38–0.64)
SMO                   0.54 (0.40–0.67)    0.53 (0.39–0.66)
K*                    0.53 (0.39–0.66)    0.53 (0.39–0.66)
Decision Tree (J48)   0.70 (0.56–0.81)    0.68 (0.54–0.79)
With only 2 classes, the calibration metric MAVA showed a great improvement for all algorithms, especially for the Decision Tree (0.70). The discrimination metric AUC was worse for the Naive Bayes algorithm (0.51), slightly better for the SMO and K* algorithms, and considerably better for the Decision Tree algorithm (0.68).
In our last experiment, we tried to decrease the number of attributes, applying a simple split that arises naturally in this domain: the attributes were divided into 2 groups, one with only the Score Points attributes and one with only the Time Spent attributes. The Age and Gender attributes were kept in each group. The algorithms were run using one group at a time. Tables 4 and 5 show the results obtained.
Table 4: Processing results with only Score Points attributes in nominal format and 2 classes.
Algorithm             MAVA (95% C.I.)     AUC (95% C.I.)
Naive Bayes           0.58 (0.44–0.71)    0.51 (0.38–0.64)
SMO                   0.52 (0.39–0.65)    0.52 (0.39–0.65)
K*                    0.47 (0.37–0.57)    0.37 (0.25–0.51)
Decision Tree (J48)   0.52 (0.39–0.65)    0.48 (0.37–0.58)
Table 5: Processing results with only Time Spent attributes in nominal format and 2 classes.
Algorithm             MAVA (95% C.I.)     AUC (95% C.I.)
Naive Bayes           0.50 (0.37–0.63)    0.52 (0.39–0.65)
SMO                   0.55 (0.41–0.68)    0.54 (0.40–0.67)
K*                    0.73 (0.59–0.83)    0.78 (0.65–0.87)
Decision Tree (J48)   0.69 (0.55–0.80)    0.78 (0.65–0.87)
With only the Score Points attributes, the results were worse than with all attributes; only the Naive Bayes algorithm showed a small improvement in the MAVA metric (from 0.51 to 0.58). On the other hand, with only the Time Spent attributes almost all algorithms had a significant improvement, mainly in the AUC metric. The Naive Bayes and Decision Tree algorithms had a slight decrease in the MAVA metric (from 0.51 to 0.50 and from 0.70 to 0.69, respectively). With only the Time Spent attributes, the K* algorithm obtained the best result of our work: 0.73 in MAVA and 0.78 in AUC.
5. CONCLUSION
Through the Supermarket Game and Data Mining techniques it is possible to identify ADHD and Non-ADHD cases in adults with 73% accuracy. Unlike the existing methods to aid in the ADHD diagnosis process, our approach does not depend on reports from informants, and its results can be easily interpreted by the clinician. Nevertheless, in our experiment we could not verify the game's capability to identify the disorder subtypes because of the sample size. In future work, different data sets need to be investigated, together with other Data Mining techniques, in order to establish this new approach as a method to aid in the ADHD diagnosis process.
REFERENCES
Alpaydin, E. (2010), Introduction to Machine Learning, The MIT Press.
Andrade, Leila Cristina Vasconcelos (2009), Avaliac¸˜ao cognitiva utilizando t´ecnicas inteligentes e um jogo
computacional, in XX Simp´osio Brasileiro de Inform´atica na Educac¸˜ao, Florian´opolis, SC.
APA, American Psychiatric Association (2000), DSM-IV Diagnostic and Statistical Manual of the American
Psychiatric Association, iv edn, American Psychiatric Association.
Balakrishnan, N. and C.R. Rao (2004), Advances in Survival Analysis, Handbook of Statistics, Vol. 23, Elsevier.
Barkley, R. and M. Gordon (2000), Research on comorbidity, adaptive functioning, and cognitive impairments in adults with ADHD: implications for a clinical practice.
Biederman, J., E. Mick and S.V. Faraone (2000), Age-dependent decline of symptoms of attention deficit hyperactivity disorder: impact of remission definition and symptom type, Am J Psychiatry 157, 816–818.
Barkley, R.A. (1997), Behavioral inhibition, sustained attention, and executive functions: Constructing a uni-
fying theory of adhd., Psychological bulletin 121(1), 65.
Bechara, A., A.R. Damasio, H. Damasio and S.W. Anderson (1994), Insensitivity to future consequences following damage to human prefrontal cortex, Cognition 50(1-3), 7–15.
Bechara, A., H. Damasio, D. Tranel and A.R. Damasio (1997), Deciding advantageously before knowing the
advantageous strategy, Science 275(5304), 1293.
Brook, U. and D. Geva (2001), Knowledge and attitudes of high school pupils towards peers’ attention deﬁcit
and learning disabilities, Patient Education and Counseling 43(1), 31–36.
Cleary, J.G. and L.E. Trigg (1995), K*: An instance-based learner using an entropic distance measure, in Proceedings of the 12th International Conference on Machine Learning, pp. 108–114.
Coutinho, G., P. Mattos and C. Ara´ujo (2007), Desempenho neuropsicol´ogico de tipos de transtorno do d´eﬁcit
de atenc¸˜ao e hiperatividade (tdah) em tarefas de atenc¸˜ao visual, J Bras Psiquiatr 56(1), 13–6.
Couvadelli, B. (2006), NEPSY profiles in children diagnosed with different subtypes of ADHD.
Fawcett, T. (2006), An introduction to ROC analysis, Pattern Recognition Letters 27(8), 861–874.
Ferri, C., J. Hern´andez-Orallo and R. Modroiu (2009), An experimental comparison of performance measures
for classiﬁcation, Pattern Recognition Letters 30(1), 27–38.
Frazier, T.W., H.A. Demaree and E.A. Youngstrom (2004), Meta-analysis of intellectual and neuropsychological test performance in attention-deficit/hyperactivity disorder, Neuropsychology 18(3), 543–555.
Han, J. and M. Kamber (2006), Data mining: concepts and techniques, Morgan Kaufmann.
John, George H. and Pat Langley (1995), Estimating continuous distributions in bayesian classiﬁers, in Eleventh
Conference on Uncertainty in Artiﬁcial Intelligence, San Mateo, pp. 338–345.
Kemp, S.L., M. Korkman and U. Kirk (2001), Essentials of NEPSY Assessment, Wiley.
Kessler, R., et al. (2005a), Prevalence, severity, and comorbidity of 12-month dsm-iv disorders in the national
comorbidity survey replication, in Arch Gen Psychiatry 62, pp. 233–239.
Kessler, R.C. et al. (2005), Patterns and predictors of attention-deficit/hyperactivity disorder persistence into adulthood: results from the national comorbidity survey replication, Biological Psychiatry 57(11), 1442–1451.
Kessler, R.C., et al. (2005b), The World Health Organization adult ADHD self-report scale (ASRS): a short screening scale for use in the general population, Psychol Med 35(2), 245–256.
Korkman, M. et al. (1998), NEPSY: A Developmental Neuropsychological Assessment.
Mattos, Paulo, Daniel Segenreich, Eloisa Saboya, Mario Louza, Gabriela Dias and Marcos Romano (2006),
Adaptac¸˜ao transcultural para o portuguˆes da escala adult self-report scale para avaliac¸˜ao do transtorno de
d´eﬁcit de atenc¸˜ao/hiperatividade (tdah) em adultos, in Revista de Psiquiatria Cl´ınica, Vol. 33, pp. 188–195.
Mattos, Paulo and M. Duchesne (1997), Normalizac¸˜ao de um teste computadorizado de atenc¸˜ao visual, Arq.
Neuropsiquiatria 55, 62–69.
Mitchell, Tom M. (1997), Machine Learning, McGraw-Hill Science/Engineering/Math.
Platt, J.C. (1999), Fast training of support vector machines using sequential minimal optimization, in Advances
in Kernel Methods, MIT press, pp. 185–208.
Quinlan, J. Ross (1993), C4.5: Programs for Machine Learning, Morgan Kaufmann.
Russell, S.J. and P. Norvig (2009), Artificial Intelligence: A Modern Approach, Prentice Hall.
Schmitz, Marcelo, Luciana Cadore, Marcelo Paczko, Leticia Kipper, Márcia Chaves, Luis A. Rohde, Clarissa Moura and Márcia Knijnik (2002), Neuropsychological performance in DSM-IV ADHD subtypes: an exploratory study with untreated adolescents, Can J Psychiatry 47, 863–869.
Smith, Bradley H., Russell A. Barkley and Cheri J. Shapiro (2007), Attention-deficit/hyperactivity disorder, 4th edn, Guilford, New York.
Spencer, T., J. Biederman, T.E. Wilens and S.V. Faraone (1994), Is attention-deficit hyperactivity disorder in adults a valid disorder?, Harv Rev Psychiatry, pp. 326–335.
Swanson, J.M., H.C. Kraemer, S.P. Hinshaw, L.E. Arnold, C.K. Conners, H.B. Abikoff, W. Clevenger, M. Davies, G.R. Elliott, L.L. Greenhill et al. (2001), Clinical relevance of the primary findings of the MTA: success rates based on severity of ADHD and ODD symptoms at the end of treatment, Journal of the American Academy of Child & Adolescent Psychiatry 40(2), 168–179.
Willcutt, E.G., A.E. Doyle, J.T. Nigg, S.V. Faraone and B.F. Pennington (2005), Validity of the executive function theory of attention-deficit/hyperactivity disorder: a meta-analytic review, Biological Psychiatry 57(11), 1336–1346.
Wilson, B.A., J.J. Evans, N. Alderman, P.W. Burgess and H. Emslie (1997), Behavioural assessment of the
dysexecutive syndrome, Methodology of frontal and executive function pp. 239–250.
Witten, I.H. and E. Frank (2005), Data Mining: Practical Machine Learning Tools and Techniques, Morgan Kaufmann.