The ICOM Project
Computational Intelligence: Principles and Applications
Gerson Zaverucha, Valmir Barbosa, Felipe França, Ines de Castro Dutra, Luis Alfredo de Carvalho, Luiz Adauto
Pessoa and Priscila Lima
COPPE-Sistemas Universidade Federal do Rio de Janeiro
Marley Vellasco and Marco Aurélio Pacheco
ICA - Pontifícia Universidade Católica do Rio de Janeiro
Teresa Ludermir and Wilson de Oliveira
DI- Universidade Federal de Pernambuco
Victor Navarro da Silva
CEPEL
Mario Fiallos Aguilar
EE - Universidade Federal do Ceará
Suzana Fuks
CNPS – EMBRAPA
Alberto de Souza and Marcos Negreiros
DI- Universidade Federal do Espirito Santo
Abstract: In this paper, we present the results obtained by the ICOM project. We have developed and implemented several neural,
symbolic and hybrid propositional and first-order theory refinement systems. We also describe our work on ANN compilers and
parallelization, on genetic algorithms and on computational neuroscience. The systems developed in this project were applied to:
computational biology, real-world electric power system fault diagnosis, load forecasting and overflow control of electric generator
reservoirs, inductive design of Distributed Object-Oriented Databases, Data Mining, agriculture, image classification and circuit design.
Finally, we present some conclusions.
Keywords: Computational Intelligence, Hybrid Systems, Artificial Neural Networks, Theory Refinement Systems, Inductive Logic
Programming, Applications.
1. Introduction
This project's main objective was research in Computational Intelligence, its principles and applications.
The development and application of Computational Intelligence systems has been one of the main growing areas of
research in Computer Science [Kandel and Langholz 92], which can be seen by the ever increasing number of
conferences and workshops. These systems use and integrate a number of AI techniques in a new paradigm to reproduce
aspects of human behaviour such as reasoning, learning and perception. Among these techniques we have: Neural
Networks; Classical, Fuzzy and Nonmonotonic Logics; Inductive Logic Programming; Genetic Algorithms and Bayesian
Networks.
Each of these AI techniques has individually shown its importance and its strengths. But it is through their
integration, exploiting the characteristic strengths of each, that one can build powerful and robust hybrid
intelligent systems, making it possible to solve efficiently problems in different areas that conventional methods
could not handle.
Taking into account the evolution of computer science towards massively parallel architectures [Barbosa 93], intelligent
systems, to be effective, could be based on some artificial neural network model. Nevertheless, they should be able
to justify their decision-making process while learning, acquiring a higher level of formalism. It is therefore very
important to establish a tight correlation between symbolic knowledge and artificial neural networks, assigning to the latter an
"explanation capability".
“Right now we are caught in the middle; neither purely connectionist nor purely symbolic systems seem able to
support the sorts of intellectual performances we take for granted even in young children. I’ll argue that the solution lies
somewhere between these two extremes, and our problem will be to find out how to build a suitable bridge... Above all,
we currently need more research on how to combine both types of ideas” [Minsky 91].
“Connectionism and Automated Inferencing are two such subfields which have been developed as if they were
independent although they both are concerned with high-level reasoning. Automated Inference Systems can only be
adequate if they are based on a massively parallel computational model. The super computing community is convinced
that by the year 2000 the fastest computers will be massively parallel. Hence, massive parallelism may serve as a vehicle
to reconcile the research in Connectionism and Automated Inferencing. We do not just want to build systems which show
intelligent behavior, we also want to understand how humans do it.” [Holldobler 93]
In this paper, we present the results obtained by the ICOM project. We have developed and implemented several neural,
symbolic and hybrid propositional and first-order theory refinement systems, which are described in section 2.1; in
section 2.2 we describe our work on ANN compilers and parallelization, on genetic algorithms and on computational neuroscience.
Section 3 presents some important applications of systems developed in this project, among them: real world electric
power systems fault diagnosis, load forecasting, overflow control of electric generator reservoir, inductive design of
Distributed Object Oriented Database, Data Mining, agriculture, image classification and electronic circuit design.
Finally, section 4 draws some conclusions.
2. Principles
2.1 Theory Refinement Systems
Theory refinement systems combine analytical and inductive learning: given a theory describing a domain
problem, which may be incorrect and/or incomplete, the theory is refined with examples by applying a learning algorithm so that
it correctly classifies all of them and is able to generalize to unseen ones. The use of background (prior) knowledge has been
shown to lead to faster training and better generalization, especially when there is a reduced number of training examples
[Pazzani, Kibler 92].
Knowledge-based neural networks (KBNN) are propositional theory refinement systems using neural networks that
promote the integration of the symbolist and connectionist paradigms of AI [Shavlik 94]. Propositional symbolic
knowledge (such as rules) describing a domain problem is mapped into a neural network through an insertion algorithm; this
network is then trained with examples (refinement), and finally an extraction algorithm is applied to it, generating
refined propositional symbolic knowledge. KBNN systems such as KBANN [Towell and Shavlik 94], RAPTURE [Mahoney
and Mooney 93] and CASCADE ARTMAP [Tan, Ah-Hwee 97] have been shown to outperform purely neural and purely
symbolic inductive learning systems.
2.1.1 The Connectionist Inductive Learning and Logic Programming System
In this project, [Garcez and Zaverucha 99, Garcez et al 96,97] developed a KBNN called the Connectionist Inductive
Learning and Logic Programming (CIL2P) system. Starting with the background knowledge represented by a
propositional normal (general) logic program [Lloyd 87], a translation algorithm is applied, generating a partially
recurrent three-layer neural network ([Jordan 86]):
• the number of hidden units is equal to the number of clauses in the logic program;
• for each clause, its head is mapped to an output unit, and for each literal that appears in the body, its atom is mapped to an
input unit with a weight whose sign corresponds to the literal's sign;
• for each atom that appears both in the head of a clause and in the body of a clause, a recurrent link (with fixed weight equal to one)
is created from its corresponding output unit to its corresponding input unit.
Moreover, it imposes a well defined structure through the setting of threshold and weight values such that each unit in
the hidden layer works as an AND gate, being activated if and only if all units in the input layer connected to it are
activated, and each unit in the output layer works as an OR gate, being activated if and only if at least one unit in the
hidden layer connected to it is activated. This is similar to a set of clauses in a normal logic program: a consequent of a
clause is satisfied if and only if all of its antecedents are satisfied, and it is necessary to have at least one clause satisfied
in order to have the literal in its head proved. It is important to notice that, despite the fact that the units work as AND or
OR gates, their activation functions are bipolar semi-linear activation functions (tanh).
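The translation step described above can be sketched in a few lines. This is a hypothetical illustration, not the project's implementation: the function name and data layout are ours, and the weight magnitudes and thresholds that CIL2P computes from the program (so that the tanh units actually behave as AND/OR gates) are left out; only the weight signs and the network topology are shown.

```python
def translate(clauses, W=1.0):
    """Map a propositional normal logic program to CIL2P-style topology.

    clauses: list of (head, body) pairs, where body is a list of
    (atom, positive) literals.  Returns input->hidden weights,
    hidden->output weights, and the set of atoms that get a
    fixed-weight-1 recurrent link (output unit -> input unit).
    """
    heads = {head for head, _ in clauses}
    body_atoms = {atom for _, body in clauses for atom, _ in body}
    w_in_hidden, w_hidden_out = {}, {}
    for h, (head, body) in enumerate(clauses):
        for atom, positive in body:
            # the weight sign follows the literal sign
            w_in_hidden[(atom, h)] = W if positive else -W
        # one hidden (AND) unit per clause, feeding the head's OR unit
        w_hidden_out[(h, head)] = W
    # atoms appearing both as a head and in some body close the loop,
    # letting the network iterate the program's consequences
    recurrent = heads & body_atoms
    return w_in_hidden, w_hidden_out, recurrent
```

For example, the two-clause program {p :- q. q :- not r.} yields two hidden units, a negative weight for the negated literal r, and a recurrent link for q.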
When the propositional normal logic program is acceptable [Protti and Zaverucha 98], a theorem was proved showing
that the neural network computes exactly what the logic program computes - its stable model [Gelfond and Lifschitz 88],
as a massively parallel system for Logic Programming.
This neural network can be trained with examples, and the results obtained with the refined network can be explained
by extracting a revised logic program from it. This way, CIL2P gives [Holldobler and Kalinke 94]'s parallel model for
Logic Programming the capability to perform inductive learning from examples and background knowledge.
The symbolic knowledge extracted from the neural network can be more easily analyzed by a domain
expert, who decides whether or not it should be fed back to the system, closing the learning cycle, or even used in analogous
domain applications.
The CIL2P system has been implemented in Delphi (Windows 3.x/95/NT) [Basilio 97, Basilio and Zaverucha 97] and
in the Stuttgart Neural Network Simulator (SNNS) [Baptista and Kalinowksi 99].
The system has obtained very successful experimental results in two real-world problems in the molecular biology
domain from the Human Genome Project, specifically the promoter and splice-junction problems. Comparisons to the
results obtained by other neural (standard three-layer feedforward network), symbolic (including FOCL), and hybrid
inductive learning systems (including KBANN [Towell and Shavlik 94]) on the same domains showed the
effectiveness of CIL2P.
2.1.2 Rule Extraction From Neural Networks
The explanation capability of neural networks can be obtained by the extraction of symbolic knowledge from them,
through so-called "Rule Extraction" methods. This makes the learned knowledge accessible for an expert's analysis and
allows the justification of the decision-making process. In this way, it allows ANN systems to be applied to safety-critical
problem domains, the verification and debugging of ANN components in software systems, the improvement of the
generalisation of ANN solutions, data exploration and the induction of scientific theories. Moreover, since artificial neural
networks have been shown to outperform symbolic machine learning methods in many application areas [Thrun et al. 91]
[Towell, Shavlik 94a], the development of techniques for extracting rules from trained neural networks gives rise to an
effective learning system that may contribute to the solution of the "Knowledge Acquisition Bottleneck" problem. The
knowledge extracted can be directly added to the knowledge base or used in the solution of analogous domain problems.
It is known that the knowledge acquired by a neural network during its training phase is encoded in: (i) the network
architecture itself; (ii) the activation function associated with it; and (iii) the values of its weights. As stated in [Andrews et
al. 95], the task of extracting explanations from trained artificial neural networks is that of interpreting in a
comprehensible form the collective effect of (i), (ii), and (iii).
In knowledge-based neural networks, like KBANN and C-IL2P, the training process, however, disturbs the structure,
by changing threshold and weight values in order to refine the knowledge based on the examples used for training. The
units no longer work as AND or OR gates, and, hence, the rule structure is lost. With this change, it is not trivial to
identify the knowledge encoded in the network.
An alternative to obviate the often high computational cost of extraction algorithms is to keep the rule structure during
training. This would allow the use of a decompositional extraction method - one that considers the topology of the
network, as opposed to pedagogical methods, which treat the network as a “black box” ([Andrews et al 95]). Pedagogical
extraction algorithms are "unbiased" methods, since no assumptions about the topology of the network are made. This often
leads these methods to be O(2^n), for they consider as many subsets of the input space as necessary to assure some
reliability.
In this project, [Menezes et al 98, 97] presented an approach to knowledge extraction which takes advantage of the
"rule structure" imposed on knowledge-based networks by the theory insertion algorithm. Keeping this structure during
the training process makes rule extraction an easier task. A penalty function was used for this purpose. The C-IL2P
penalty function works similarly to the one in GDS ([Blasig 94]). Some differences, however, should be noted. GDS has
weights and thresholds converging to specific values, whereas the C-IL2P penalty function allows these values to vary
within a range. This increases the flexibility of the training process and probably leads to better generalization results. Another
difference is that the same function is used to penalize both weights and thresholds, which makes the method easier to
implement.
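The range-based penalty can be sketched as follows (a hypothetical illustration, not the published C-IL2P penalty function; the name, band, and strength are assumptions):

```python
def range_penalty(value, low, high, strength=0.1):
    """Penalty added to the training loss for one weight or threshold.

    It is zero while `value` stays within [low, high] and grows
    quadratically outside that band.  Unlike a point-target penalty
    (as in GDS, which pulls parameters toward specific values), the
    parameter may drift freely inside the band, preserving the AND/OR
    rule structure without pinning it to exact values.
    """
    if value < low:
        return strength * (low - value) ** 2
    if value > high:
        return strength * (value - high) ** 2
    return 0.0
```

During training, the sum of these terms over all weights and thresholds would simply be added to the usual error function.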
The investigation of the rule extraction problem in the RAM model of ANN known as Boolean Neural Networks
(BNN) [Ludermir et al 98] was also carried out in this project [Ludermir and Oliveira 98a]. Two methods have been
proposed for extracting rules from trained BNN [Ludermir and Oliveira 98b]. The first one deals with the extraction of
symbolic grammatical rules from trained BNN [Ludermir and Oliveira 97]. The second one produces Boolean expression
rules from trained feedforward BNN [Barros et al 98, Ludermir 98]. The methods have been applied to some benchmarks
and practical problems. A general theory of computation for dynamical systems in general, and ANN in particular, via
symbolic dynamics, was also proposed [Oliveira and Ludermir 98].
2.1.3 The Combinatorial Neural Model
Fuzzy neural networks have been introduced as a possibility to meet many of the requirements for hybrid intelligent
systems [Buckley & Eslami 96] [Buckley & Hayashi 94] [Buckley & Hayashi 95] [Gupta & Rao 94] [Kandel & Langholz
92] [Keller et al 92] [Lee & Lee 75] [Machado & Rocha 93] [Pedrycz 93] [Pedrycz & Rocha 93] [Rocha 92]. Such
networks, based on fuzzy logic theory, directly encode structured knowledge in a numerical framework, and can be seen
as trainable evidence-aggregation networks that utilize families of connectives from fuzzy set theory as neurons.
Because fuzzy sets lend themselves well to both a symbolic computational approach and a numeric one, they constitute
an attractive tool for merging neural networks and symbolic processing.
In this research we have dealt with the Combinatorial Neural Model (CNM), a variation of fuzzy neural networks
introduced to tackle classification tasks and the more general task of establishing a mapping between two fuzzy
multidimensional spaces [Machado & Rocha 91, 94a, 94b]. CNM has been investigated and analyzed from various
perspectives, and appears to offer answers to several of the issues relevant to the field. A discussion
of its properties related to the consultation process, such as uncertainty management, inquiry of new information,
explanation of conclusions, and evidence censorship, can be found in [Machado & Rocha 93, 97]. For properties related to
the construction of networks from expert knowledge and the refinement of their knowledge by means of examples, the
reader is referred to [Machado & Rocha 94]. CNM has also been combined with semantic networks for the construction
of expert systems [Kandel & Langholz 92] [Rocha 92], and employed successfully in several problems of medical
diagnosis [Leão et al 93,95,96] and credit scoring [Pree et al 97].
We have focused on the various types of learning associated with CNM networks, namely the learning of membership
functions, of the network topology, and of connection weights, with strong emphasis on the latter. Our main contribution
has been the introduction of new learning algorithms, and a comparative study of their performance on a large-scale
classification problem from the field of image analysis. In addition to various results on the relative merits of the new
algorithms, we have shown that they outperform networks trained by error back-propagation on the same problem.
Details on theoretical and experimental results are to be found in [Machado et al 98].
A CNM network comprises n+m+1 neurons, arranged into an input layer with n neurons, a hidden layer with m
neurons, and a single-neuron output layer. Each input-layer neuron has state in [0,1], and connects forward to a (possibly
empty) subset of neurons of the hidden layer. Each of these connections is also called a synapse. Each hidden-layer
neuron has state given by the fuzzy AND of the states of at least one input-layer neuron, with provisions for both
excitatory and inhibitory synapses. The connectivity of a hidden-layer neuron is the number of input-layer neurons that
connect forward to it. The output-layer neuron has state given by the fuzzy OR of the weighted states of hidden-layer
neurons. Each hidden-layer neuron is thought of as being associated with an input-to-output pathway; the weight of the
corresponding connection to the output-layer neuron is referred to as the weight of that pathway.
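Taking min for fuzzy AND and max for fuzzy OR, a common choice for these connectives (the exact CNM operators may differ; this sketch and its names are ours), the forward pass just described looks like this:

```python
def cnm_forward(x, pathways, weights):
    """One forward pass of a CNM-style network.

    x: dict mapping input-neuron names to states in [0, 1].
    pathways: one entry per hidden-layer neuron; each entry is a list
      of (input_name, excitatory) synapses feeding that neuron.
    weights: pathway weights in [0, 1], one per hidden neuron.
    """
    out = 0.0
    for pathway, w in zip(pathways, weights):
        # fuzzy AND over the pathway's synapses; an inhibitory synapse
        # contributes the complement of the input state
        hidden = min(x[name] if excitatory else 1.0 - x[name]
                     for name, excitatory in pathway)
        # fuzzy OR of the weighted hidden states at the output neuron
        out = max(out, w * hidden)
    return out
```

With two pathways, one excitatory over inputs a and b and one inhibitory over b, the output is the larger of the two weighted pathway activations.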
In classification problems, a feature of the pattern under analysis can be either categorical (i.e., the feature can only
take on a finite number of values) or, more generally, numeric (i.e., the feature may take on any value from a certain real
domain). Inputting a categorical feature to a CNM network requires at most one input-layer neuron for each of the
feature's possible values (it may require fewer, depending, for example, on whether and how inhibitory synapses are used).
The state of each such neuron is either 0 or 1.
A numeric feature, on the other hand, cannot in general be input to the network directly, but rather has to be
approached by first partitioning its possible values into, say, "low," "medium," and "high" values. The way to input this
feature to the network is then to compute its degree of membership in each of the appropriate fuzzy sets (three, in our
example), and then use such degrees as inputs to the network. What this takes is one input-layer neuron for each fuzzy
set, each neuron having as activation function the desired membership function.
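As an illustration of this fuzzification step, assume simple triangular membership functions (the breakpoints below are arbitrary assumptions; as described later, the project learns the actual membership functions from data):

```python
def triangular(a, b, c):
    """Triangular membership function rising from a, peaking at b,
    and falling back to zero at c."""
    def mu(v):
        if v <= a or v >= c:
            return 0.0
        if v <= b:
            return (v - a) / (b - a)
        return (c - v) / (c - b)
    return mu

# hypothetical partition of one numeric feature into "low",
# "medium" and "high" fuzzy sets
low = triangular(-1.0, 0.0, 0.5)
medium = triangular(0.0, 0.5, 1.0)
high = triangular(0.5, 1.0, 2.0)

def fuzzify(v):
    """The three membership degrees become three input-layer states."""
    return [low(v), medium(v), high(v)]
```

A feature value of 0.25, for instance, belongs partially to both "low" and "medium" and not at all to "high".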
From the standpoint of a specific application, the question of interest is whether CNM networks can be guaranteed to
provide satisfactory approximations of functions in [0,1] whose domain is that of the application. What we have shown is
that CNM networks without inhibitory synapses are universal approximators. The role of inhibitory synapses is to allow
fewer input-layer neurons for some features, thereby potentially simplifying the network. CNM networks are trained by
supervised learning procedures along three different learning dimensions: learning of the membership functions, learning
of the network topology, and learning of the pathway weights. Learning along the former two dimensions is to be
regarded as a sort of preprocessing to the learning along the latter dimension. Also, depending on the problem at hand,
not every one of these learning dimensions has to be explored. For example, the learning of membership functions is only
meaningful when the patterns of the problem domain are described by numeric features. Similarly, if the topology of the
network is known, or if one may assume that the connectivity of pathways is limited by some upper bound, then one
proceeds directly to pathway-weight learning. We have in this research concentrated on the learning of pathway weights.
We have introduced six learning algorithms for CNM networks, called SS, RSS, DWD, H, IRP, and SRP. Of these,
the first four are based on subgradient procedures of nondifferentiable optimization, while the two others employ the
concepts of rewards and punishments. RSS is derived from SS in much the same way as RProp is derived from standard
error back-propagation. While RSS is conceptually equivalent to SS, the simplification it embodies accounts for
significant performance improvements over that algorithm. The other two subgradient-based algorithms, DWD and H,
are also conceptual descendants of SS, which has then provided the foundations for all the subgradient-based algorithms
we have investigated. SRP is the one-step version of IRP, and is used mostly as a start-up iteration that can precede any
of the other algorithms.
The problem we have used to illustrate the performance of our CNM learning algorithms is the interpretation of
Landsat-5 satellite images, aimed at monitoring the deforestation in the Amazon region [Barbosa et al 94]. This is a
large-scale problem, and automating its solution can be very beneficial in a variety of contexts related to environmental
issues. In addition, the choice of this problem for our performance evaluation is convenient because the data we have
employed is the same used previously to evaluate systems based on error back-propagation and RProp.
Our overall approach proceeds in two phases. The first phase, contrasting with the usual pixel-by-pixel approaches,
consists of the segmentation of the image into spectrally homogeneous portions (called segments, generated by a region-
growing technique). In the second phase, each segment is then classified into one or more of the thematic categories
Water (W), Deforested area (D), Forest (F), Savanna (S), Cloud (C), and Shadow (Sh). The categories W, D, F, and S are
referred to as the basic categories. The remaining categories (C and Sh), called interfering categories, are needed to
account for the interference caused by clouds and shaded areas in the classification process.
In order to allow the treatment of complex situations, the natural way to approach the classification of segments is to
follow a fuzzy-logic approach, that is, allow a segment to belong to multiple categories with partial degrees of
membership. Such complex situations include transition phenomena (e.g., the regeneration of a deforested area) and
cases of multiple classification (e.g., a savanna overcast by clouds).
The architecture of our classifying system is centered on the construction of a specific CNM network for each
thematic category. The input to all CNM networks is the set of numeric features of the image segment to be classified.
Eighty-two numeric features, of a spectral, geometric and textural nature, are used for each image segment: six spectral
features, 22 geometric features, and 54 textural features.
The first layer of each CNM network performs the "fuzzification" of these numeric features by means of a learning
method that allows the design of specific fuzzy partitions of each feature domain for each thematic category. Note that
the fuzzy partition of the feature domain is dependent upon the category under consideration, and then different networks
may have different membership functions for the same feature. The method we have employed for the learning of
membership functions is based on the determination of singularities (minima, maxima, and null plateaus) of an entropy-
like function of the feature under consideration. By employing a measure of "relevance," these singularities are ranked
and only the most "relevant" give rise to a membership function. The eventual usefulness of each membership function
thus created depends on how the network's topology is obtained and on the learning of weights. Both processes allow for
the elimination of pathways, so that the corresponding membership functions may end up being done away with
completely. In several cases, this is what happens to membership functions which are too narrow.
Approximately 17,000 segments were used in our experiments, divided into approximately two-thirds to constitute the
training set and the remaining one-third to constitute the test set. These segments were obtained from five images of the
Amazon region, chosen for containing a great variety of the possible situations, including many of great complexity. The
five images come from the states of Acre, Mato Grosso, Pará, and Rondônia, and from around the Tucuruí dam.
All segments were previously classified by a photointerpreter into the six categories, using (for simplicity) only five
values (0, 0.25, 0.5, 0.75, 1) for the degrees of membership of a segment in each of the categories. This classification
was subject to the constraints that the degrees of membership of a segment in the basic categories must add up to one,
and that its degrees of membership in all six categories must add up to some number between one and two.
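A small helper can check the labeling constraints just stated (an illustrative sketch; the function and its tolerance are ours):

```python
def valid_labeling(basic, interfering, tol=1e-9):
    """Check the photointerpreter labeling constraints:
    the degrees of membership in the four basic categories
    (W, D, F, S) must add up to one, and the degrees in all six
    categories must add up to a number between one and two.
    """
    total_basic = sum(basic)
    total_all = total_basic + sum(interfering)
    return (abs(total_basic - 1.0) <= tol
            and 1.0 - tol <= total_all <= 2.0 + tol)
```

For example, a segment labeled half Water and half Forest, with a 0.25 degree of Cloud, satisfies both constraints.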
A very welcome by-product of the structure of CNM as a collection of independent neural networks (one per category)
is that each network can be trained separately. Not only is this attractive from a purely computational standpoint (smaller
networks can be trained faster, and the various neural networks can be spread out for training over a network of
workstations or the processors of a parallel-processing machine), but also it allows the training set to be especially
calibrated to provide the most efficient training for each module. Specifically, each CNM network can be trained on an
extension of the training set obtained for that network by the unbiased replication of some of its members [Leporace et al
94]. This replication is aimed at correcting the uneven distribution of examples concurring toward and against
establishing the membership in the category of the network being trained.
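A naive version of such replication, balancing label counts by cycling through the under-represented examples, might look like this (the unbiased replication of [Leporace et al 94] may choose replicas differently; this is only a sketch of the idea):

```python
import itertools

def replicate_balance(examples):
    """Extend a training set by replicating examples of each
    under-represented label until all label counts match the
    largest one.  examples: list of (features, label) pairs.
    """
    by_label = {}
    for item in examples:
        by_label.setdefault(item[1], []).append(item)
    target = max(len(group) for group in by_label.values())
    extended = []
    for group in by_label.values():
        # cycle through the group, truncated at the target count
        extended.extend(itertools.islice(itertools.cycle(group), target))
    return extended
```

Applied per category-specific training set, this evens out the examples concurring toward and against membership before weight learning starts.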
Because expert knowledge on image interpretation was not available, we had to adopt learning from scratch in this
study. What this means is that an initial CNM network had to be built for the learning algorithms to act upon. For the
determination of this initial network, we have confined ourselves to the following. The hidden layer of the initial network
is such that there exist pathways leading out of all possible combinations of the input-layer neurons that do not violate a
predefined upper bound on connectivity.
Our results in a selected subset of all experimentation possibilities suggest that, with the exception of IRP, all our
algorithms have a significant role to play. However, it has been virtually impossible to select one of them as the absolute
best in all cases. Instead, depending on the category at hand, all of RSS, DWD, and H have performed best. The direction
these conclusions point in is that of building systems of a hybrid nature. In such systems, each module is tuned to its
specific needs, and may be based on models and techniques altogether different from those employed in other modules.
The hybrid system we have built along these directions is referred to as CNM*, and comprises, for each of the six
categories of the image classification problem, the best network obtained for that category. This has resulted in three
networks trained by DWD, two by RSS, and one by H. CNM* has shown a superior performance when compared to
systems trained by standard error back-propagation, and has in general performed comparably to RProp. In addition, we
note incidentally that the training of CNM* took significantly less time than that of RProp, itself significantly faster than
error back-propagation.
CNM networks trained by our algorithms have revealed a surprising degree of robustness when low-weight pathways
are pruned off them. The result is smaller networks and, possibly, indications as to which input features are indeed
important (input neurons that become disconnected in the pruning process are needless).
2.1.4 MMKBANN and MMCIL2P Systems
[Hallack et al 98] combined ideas from KBNN systems, like KBANN and CIL2P, and fuzzy NN, like CNM,
developing KBNN using neurons with min/max activation functions trained with subgradient descent methods. They
were applied in the same molecular biology domain, the promoter problem, and also to the chess game, obtaining good
results. Their extension to the first-order logic case is presented in section 2.8.
2.1.5 Neuro-Fuzzy Hierarchical Quadtree Models
[Souza et al 97, 99] proposed and developed a new hybrid neuro-fuzzy model, named the Hierarchical Neuro-Fuzzy
Quadtree Model (HNFQ), which uses a recursive partitioning of the input space, called a quadtree. The quadtree
partitioning was inspired by the quadtree structures proposed by Finkel and Bentley in 1974, which have been widely used in
the area of image manipulation and compression. The model consists of an architecture, its basic cell, and its
learning algorithm. The HNFQ system has been evaluated on two well-known benchmark applications, the Mackey-Glass
chaotic series forecast and the Sinc(x, y) function approximation, and on real applications.
2.1.6 Representing First-Order Logic in Symmetric Neural Networks
[Pinkas 91,92] defined a bi-directional mapping between propositional logic formulas and energy functions of
symmetric neural networks. He showed that determining whether a propositional logic formula is satisfiable is equivalent
to finding whether the global minimum of its associated energy function is equal to zero. He also defined how to
transform a first-order resolution-based proof of a formula (query Q) from a given set of formulas (knowledge base KB)
into a set of constraints described by a set of propositional logic formulas C. Then he showed that the satisfaction of C is
sound and complete with respect to a first-order resolution-based proof of Q from KB. Therefore, finding that the global
minimum of the energy function associated to C is equal to zero is sound and complete with respect to a first-order
resolution-based proof of Q from KB.
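The minimum-zero criterion can be illustrated for CNF formulas with a brute-force sketch. Note that Pinkas's actual construction yields energy functions of symmetric networks (with auxiliary units); this toy version, with names of our choosing, only mirrors the satisfiability criterion:

```python
from itertools import product

def energy(cnf, assignment):
    """Energy of a truth assignment: one additive term per clause,
    equal to 1 when the clause is violated and 0 otherwise.
    cnf: list of clauses; each clause is a list of (var, positive)
    literals.  assignment: dict mapping variables to booleans.
    """
    return sum(0 if any(assignment[v] == pos for v, pos in clause) else 1
               for clause in cnf)

def global_minimum(cnf, variables):
    """The formula is satisfiable iff this minimum equals zero."""
    return min(energy(cnf, dict(zip(variables, bits)))
               for bits in product([False, True], repeat=len(variables)))
```

For the satisfiable formula (x or y) and (not x) the minimum is zero, while for the contradiction x and (not x) it is strictly positive.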
Pinkas did not implement his system. In this project, [Kilkerry et al 98, 99] pointed out some adjustments to C and
presented an implementation of the revised system. The implementation has a user-friendly interface: it is very easy to
present a query against the user-specified knowledge base, and also to customize the network, for it visually displays the
auxiliary matrices of neurons described by the theory. Besides using the system as a massively parallel theorem prover
(see also [Lima 94, 96]), it can be applied to optimization problems where the user can describe the problem
declaratively (through logical formulas), instead of having to specify the network's energy function, and the system will
explain the reason for the given answer.
The simulators developed are also being converted to ANDORRA-PROLOG in order to run in the ANDORRA
parallel environment [Yang et al 1991]. This way, the simulators will also be used as a testbed for different process-
allocation mechanisms and strategies [Santos Costa et al 1996, 1997].
[Carvalho 98b] introduced a synthesis method for binary Hopfield neural networks according to which learning is
viewed as an optimization problem. A theory for concept representation is developed and synthesis criteria, used to
define the optimization problem's objective function and constraints, are presented. Experimental results are provided
based on the use of simulated annealing to solve the optimization problem.
2.1.7 First Order Symbolic Theory Refinement Systems
Most Inductive Logic Programming (ILP) systems restrict their language to definite or semi-definite programs [Lavrac
and Dzeroski 94, Muggleton and De Raedt 94]. The subsumption mechanism introduces a syntactic notion of generality
between definite clauses that can be used to prune the search space of hypotheses. These properties form the basis of
most ILP systems that induce definite programs and also play an important role in the computation of heuristics.
However, they do not hold for normal programs in general. In this project, [Fogel and Zaverucha 98] identified a subclass
of normal logic programs, called strict and call-consistent, that can be handled and proposed training sets for the top-
down induction of these logic programs (the gain heuristic needed to be changed).
Recently, there has been increased interest in the multiple predicate learning problem resulting in a number of
systems, including MPL [DeRaedt et al 93]. [Fogel and Zaverucha 98] investigated the problem of inducing normal
programs of multiple predicates in the empirical ILP setting. They have also presented a top-down algorithm for multiple
predicate learning, called NMPL, which is based on MPL and improved it in the following ways: the sets of training
examples are iteratively constructed; the language of definite clauses has been extended to strict and call-consistent
normal programs. Finally, they discussed the cost of the MPL's refinement algorithm and presented theoretical and
experimental results showing that NMPL can be as effective as MPL and is computationally cheaper.
2.1.8 Towards Extending KBNN to First-Order
[McCarthy 88] pointed out that in neural network applications all the basic predicates are unary, often applied to a
fixed object, and a concept is merely a propositional function of these predicates. Therefore such networks cannot
perform concept description, but only discrimination. An example can be found in [Hinton 86].
Extending KBNN to first-order involves two well-known and unsolved problems on neural networks: the
representation of structured terms and their unification. Existing models like [Shastri and Ajjanagadde 93], [Holldobler 93]
and [Pinkas 92] have in common the use of architectures and resources that hinder their ability to perform learning,
which is our main goal. [Shavlik 94] pointed out that first-order theory refinement with neural networks is an open
problem.
An alternative way to induce relational concepts is to transform the original first-order theory into a propositional one
using the LINUS system [Lavrac, Dzeroski 94], so that propositional symbolic learning algorithms like ASSISTANT
and NEWGEN can be applied.
In this project, [Basilio et al 98] used neural networks as the propositional learning algorithm. This way, neural
networks can induce relational concepts descriptions in a subset of First-Order Logic, namely constrained DHDB
(Deductive Hierarchical Database) formalism [Lloyd 87]. Given a DHDB background knowledge and a set of training
examples (ground facts of the target predicate), LINUS transforms a first-order problem into an attribute-value form. The
possible applications of the background predicates on the arguments of the target relation are determined, taking into
account argument types. Each such application introduces a new attribute. The attribute-value tuples are generalizations
(relative to the given background knowledge) of the individual facts about the target relation. The system picks one target
ground predicate from the examples at a time and applies its terms to all attributes. Each resulting ground predicate is
set to TRUE if it can be proved from the DHDB, and to FALSE otherwise. This information is sent to a
neural network, which generates an attribute-value hypothesis, to which a rule extraction algorithm is applied. Finally, the
knowledge obtained is converted back to DHDB form. Various examples from Machine Learning literature described in
[Lavrac, Dzeroski 94] were used to analyze the performance of neural networks’ learning via LINUS: family
relationships, the concept of an arch, rules that govern card sequences and the description of robots. In all the
experiments, a standard multilayer feedforward backpropagation neural network of three layers has been used. It is
important to notice that any neural network architecture could be used.
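The transformation can be illustrated with a toy sketch. The daughter/parent family-relationship example is a standard one from [Lavrac, Dzeroski 94]; the code below is an illustrative miniature of the propositionalization step, not the actual LINUS implementation:

```python
# Hypothetical miniature of the LINUS transformation: the target relation
# daughter(X, Y) is propositionalized by applying background predicates
# (female/1, parent/2) to its arguments. Predicate and constant names
# here are illustrative.
female = {"mary", "ann", "eve"}
parent = {("ann", "mary"), ("ann", "tom"), ("tom", "eve")}

# training examples: (X, Y, label) for the target predicate daughter(X, Y)
examples = [("mary", "ann", True), ("eve", "tom", True),
            ("tom", "ann", False), ("eve", "ann", False)]

def propositionalize(x, y):
    """Each application of a background predicate to the target's
    arguments becomes one boolean attribute."""
    return {
        "female(X)": x in female,
        "female(Y)": y in female,
        "parent(X,Y)": (x, y) in parent,
        "parent(Y,X)": (y, x) in parent,
    }

table = [(propositionalize(x, y), label) for x, y, label in examples]
# the attribute-value `table` is what a propositional learner (here, a
# neural network) would receive; the clause
#   daughter(X,Y) <- female(X), parent(Y,X)
# is recoverable as the conjunction female(X) AND parent(Y,X)
```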
Also in this project, [Hallack et al 98] proposed a hybrid connectionist-symbolic model of first-order theory
refinement called MinMax Connectionist Inductive Logic Programming (MMCILP) system. Basically:
• the AND and OR usually achieved in KBNN through the setting of weights and thresholds are replaced by the
fuzzy AND (min) and OR (max) functions;
• the standard neuron (computational element) is replaced by a node with extended capabilities inspired by [Sun]'s
“discrete neuron”;
• each link has a function mapping complex messages (sets of tuples) into complex messages, besides having an
associated weight;
• learning is performed by subgradient descent methods together with ILP techniques.
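The first item can be sketched as follows (a minimal illustration of min/max nodes over graded truth values; the actual MMCILP node also carries link functions and weights):

```python
# Sketch of the MMCILP computational element: the weighted-sum-plus-
# threshold neuron of standard KBNN is replaced by fuzzy AND (min) and
# OR (max) over its inputs. Values and names are illustrative.
def fuzzy_and(activations):
    return min(activations)   # conjunction of body literals

def fuzzy_or(activations):
    return max(activations)   # disjunction over clauses with the same head

# two clauses for the same head predicate, with graded truth values
clause1 = fuzzy_and([0.9, 0.7])   # body of the first clause: two literals
clause2 = fuzzy_and([0.4, 0.8])   # body of the second clause
head = fuzzy_or([clause1, clause2])
# the best-supported clause determines the head's truth value
```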
2.2 ANN Compilers and Parallelization, Evolutionary Design of Circuits, Genetic Algorithms
and Computational Neuroscience
2.2.1 Artificial Neural Networks Compilers and Silicon Compilers
One of the achievements of this work consisted of the development of compilers which, from the description of a
target problem in logical clauses, produce a specification of a digital circuit in VHDL (a Hardware Description
Language) able to process the corresponding inference in an asynchronous and very agile way. Artificial Neural Network
(ANN) models are used as intermediate languages, or virtual machines, in the translation process. Two compilers were
developed, and each one can be described as the concatenation of two translators: (i) a logical clauses to ANN (LA)
translator; and (ii) an ANN to VHDL (AV) translator. The first LA translator implemented [Aiello et al, 1995a, b] is able to
compile sets of propositional clauses into custom ANN architectures based on Caianiello's model [Caianiello, 1989]. A
second LA translator was developed and is able to compile sets of first order logic clauses limited to problems that only
require local search [Lima 1992] into customized Hopfield ANN architectures. For each LA translator, a particular AV
translator was built [Ferreira and França, 1997a, b].
Once a VHDL specification is obtained, via the first or the second compiler, many existing Computer Aided
Electronic (CAE) tools, e.g., Synopsys™, Mentor Graphics™, Cadence™, etc., can be used to synthesize a digital circuit using
Field Programmable Gate Arrays (FPGAs) or Application-Specific Integrated Circuits (ASICs) as substrate. In particular,
an ALTERA™ FPGA development system is the first host of the circuits generated by our compilers. A next step is to try
the Xilinx™ development system, based on a fine-grain (internal gate architecture) FPGA family, in order to compare
implementation costs.
2.2.3 Parallelization of the Backpropagation Learning Algorithm in Clusters of Workstations
[Melcíades et al 98] describes the parallelization of the backpropagation learning algorithm in clusters of
workstations. The modes studied are online, batch and block [Coetzee 96]. The experimental results show that
parallelization in batch mode is the best alternative.
The backpropagation algorithm is one of the most widely used methods for training Artificial Neural Networks (ANN)
[Haykin 91]. Unfortunately, in many situations its training time becomes unacceptable. To avoid this drawback, many
studies have been carried out with the objective of using parallel machines and/or clusters of computers to speed up
the calculations involved in the backpropagation algorithm [Chaves et al 95] [Resk 93].
As a matter of fact, clusters of workstations and PCs have also shown their effectiveness in the parallelization of other
applications in Science and Engineering [Pfister 98]. Nowadays, clusters of computers represent a well-consolidated
alternative [Pfister 98]. The programming of these clusters can be carried out with tools like PVM (Parallel Virtual
Machine) [PVM3] and MPI (Message Passing Interface) [Pacheco 96]. With these tools it is possible to use a network
of heterogeneous machines as a virtual parallel environment. The algorithm can be decomposed into several parts that
are executed on different processors simultaneously.
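The batch-mode decomposition that performed best can be sketched as follows. Workers are simulated sequentially, and a linear model stands in for the network to keep the sketch short; all names and parameters are illustrative, and on a real cluster the shard computations would be distributed under PVM or MPI:

```python
import numpy as np

# Batch-mode data parallelism sketch: each "worker" holds a shard of the
# training set, computes the gradient on its shard, and the shard
# gradients are summed before a single weight update.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true                      # a linear "network" keeps the sketch short

def shard_gradient(Xs, ys, w):
    """Squared-error gradient computed on one worker's shard."""
    return 2 * Xs.T @ (Xs @ w - ys)

w = np.zeros(3)
for epoch in range(200):
    grads = [shard_gradient(Xs, ys, w)          # one call per worker
             for Xs, ys in zip(np.array_split(X, 4), np.array_split(y, 4))]
    w -= 0.05 * sum(grads) / len(X)             # single batch update
# w now approximates w_true
```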
The parallelization of the backpropagation algorithm for ANN achieved a satisfactory acceleration of the calculations,
especially in batch mode. Parallelization in block mode showed worse performance, but still provides some
acceleration.
The parallelization developed in this work is being used in computationally intensive problems, such as the
classification of fingerprints.
2.2.4 Performance and Convergence of Genetic Algorithms
[Costa 97] investigated the performance and convergence of Genetic Algorithms (GAs): solutions and related
measures. The work presented a discussion of the most important questions about the mathematical foundations of GAs:
the objections raised by Thornton (1995) about the coherence of the association between schemata and building blocks; the
contradiction between the Schema Theorem and the Price Theorem, mentioned by Altenberg (1994); and the new ideas
raised by the concept of fitness variance. The work also discussed aspects related to deception and epistasis: the Walsh
coefficients, deceptive functions such as the fully deceptive 6-bit and 3-bit functions by Deb, the deceptive-but-easy and
non-deceptive-but-hard functions by Grefenstette, F2 and F3 by Whitley, NK functions and Royal Road functions.
Alternative GAs such as the Messy GA (Goldberg), Structured GA (Dasgupta), Augmented GA (Grefenstette), and the
GAs of Paredis were reviewed. Finally, the work investigated two classes of convergence measures, probabilistic and
landscape-based, with special attention to the FDC (Fitness Distance Correlation) proposed by Forrest (1995), which was
evaluated on a set of mathematical functions. The environment used was GENEsYs 1.0 (Thomas Bäck), which was
adapted and expanded to suit the purposes of this work.
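The FDC measure can be sketched as follows. FDC is the sample correlation between a point's fitness and its distance to a known optimum; the OneMax test function used here is an illustrative choice, not one of the functions from [Costa 97]:

```python
import math
import random

def fdc(points, fitness, optimum):
    """Fitness Distance Correlation: the sample correlation between a
    point's fitness and its Hamming distance to the known optimum.
    For a maximization problem, FDC near -1 suggests an easy landscape
    (fitness rises as we approach the optimum)."""
    f = [fitness(p) for p in points]
    d = [sum(a != b for a, b in zip(p, optimum)) for p in points]
    n = len(points)
    mf, md = sum(f) / n, sum(d) / n
    cov = sum((fi - mf) * (di - md) for fi, di in zip(f, d)) / n
    sf = math.sqrt(sum((fi - mf) ** 2 for fi in f) / n)
    sd = math.sqrt(sum((di - md) ** 2 for di in d) / n)
    return cov / (sf * sd)

# OneMax: fitness counts ones, optimum is the all-ones string
rng = random.Random(1)
sample = [[rng.randint(0, 1) for _ in range(10)] for _ in range(200)]
r = fdc(sample, fitness=sum, optimum=[1] * 10)
# for OneMax the distance to the optimum is exactly 10 - fitness,
# so the correlation is exactly -1
```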
2.2.4 Computational Neuroscience
2.2.4.1 Adaptive Resonance Theory in the Search of the Neuropsychological Patterns of
Schizophrenia
Neuropsychology is a clinical methodology which, through patient testing, assesses cognitive performance, leading to the
localization of dysfunctional brain areas. This methodology is useful in clinical research because a
neurofunctional substrate is supposed to exist embedded in every psychiatric disorder. Schizophrenia is a chronic and
highly debilitating mental disease about which little is known except its social, family and personal costs. There seem
to exist consistent functional patterns in the presentation of this disorder. These patterns define at least three symptom
clusters, or dimensions, of schizophrenic patients: the psychotic, the negative, and the disorganized. They must
be interpreted carefully, as the concept of a psychiatric syndrome denotes a pattern of correlated symptoms which the expert
qualitatively defines as a cluster. In this respect, artificial neural networks, mainly those that, like adaptive resonance
theory, are capable of unsupervised learning, indicate a highly promising direction for clustering the
neuropsychological dimensions of schizophrenia. The parameters for the neural network cluster analysis were chosen
based on neurobiological studies of the disease, seeking to highlight the damaged brain circuits. The correlation between
the neuropsychological patterns defined by the neural network and the known psychopathology was determined,
bringing interesting original results to the understanding of the existing sub-types of this disorder [Carvalho97,
Carvalho98a]. This research was awarded the Latin America Biological Psychiatry Prize of 1998 by the World
Biological Psychiatry Society.
2.2.4.2 A Model for Autism
Autism is a mental disorder characterized by deficits in socialization, communication, and imagination. Along with the
deficits, autistic children may have savant skills (“islets of ability”) of unknown origin. This work proposes a
neurobiological model for autism. A neural network capable of defining neural maps simulates the process of
neurodevelopment. The computer simulations hint that brain regions responsible for the formation of higher level
representations are impaired in autistic patients. The lack of this integrated representation of the world would result in
the peculiar cognitive deficits of socialization, communication, and imagination and could also explain some “islets of
abilities”. The neuronal model is based on plausible biological findings and on recently developed cognitive theories of
autism. Close relations are established between the computational properties of the neural network model and the
cognitive theory of autism denominated “weak central coherence”, bringing some insight to the understanding of the
disorder [Carvalho99a].
2.2.4.3 A Model for Chronic Stress
This work defines a model for the neurobiological substrates of chronic stress and its relation to depression. It is
based on the observation that there is an anatomical loop between two important pontine nuclei which release
norepinephrine and corticotropin-releasing factor. These two substances, deeply associated with the referred disorders,
have their action integrated by the mutually connecting nuclei. The loop forms a positive-feedback type circuit which
works as a local memory for stressful events, facilitating future actions in face of repetitive environmental challenges.
Under certain circumstances, the system thus generated maintains a state of chronic neural hyperactivity, resulting in an
excessive release of norepinephrine and corticotropin-releasing factor, similar to what is observed in chronic stress and
depression. A mathematical model is developed, and some structural and physiological hypotheses are established and
supported by the experimental records found in the literature. Computer simulations, especially designed to incorporate
the mathematical equations and the model assumptions, are performed to show how the facilitatory effect and the chronic
hyperactivity state develop. The results of the model represent a contribution to the understanding of the neurobiological
etiology of the disorders of chronic stress and some forms of depression [Carvalho99b].
3. Applications
3.1 Electric Power Systems Diagnosis
The increasing complexity of Power Systems demands the development of sophisticated supervision, control and
automation techniques. Among the most commonly found applications are fault detection and diagnosis systems
based on the processing of the target electric system's alarms.
Alarm processing in a large scale electric system requires the treatment of a great amount of information. When a
disturbance happens, a large number of isolated events occurs, making it difficult for the operator to recognize the cause
of the disturbance (make the diagnosis) and to determine the corrective actions that should be taken.
In this project we analysed intelligent systems for the alarm processing problem: two loosely coupled hybrid systems and
an extension to the C-IL2P (“Connectionist Inductive Learning and Logic Programming System”) that allows the
implementation of Extended Logic Programs in Neural Networks.
The hybrid systems developed consist of a Neural Network module that pre-processes the alarms, indicating which
faults are most probable in the power system given that alarm configuration. The output is fed to:
• the Fuzzy Logic [Silva and Zebelum 96, Ribeiro et al 98] module, which interprets the results (that is, from the
fault selection made by the Neural Network and from a simple rule set, the Fuzzy system returns a safer and more
precise diagnosis). In order to verify its validity, the intelligent system was subjected to analysis by a
specialist in protection. During the tests the system correctly diagnosed 99% of the presented cases; it is capable
of identifying simple and multiple faults, besides being able to handle noisy data. It is portable to both the
WINDOWS/DOS and UNIX environments;
• an Expert system with similar objectives [Biondi et al 96, 97], closely related to [Silva et al 95, 96].
[Garcez et al 97] presented an extension of the C-IL2P system that allows the implementation of Extended Logic
Programs [Gelfond and Lifschitz 91] in Neural Networks. As a case example, the system was applied to fault diagnosis
of a simplified power system generation plant, obtaining good preliminary results and an accuracy of 97% on
noisy multiple faults.
3.2 Electric Load Forecasting
• Having a precise load-forecasting model is very important to electric power system operation. The availability
of load estimates for the next hours or days allows the system operator to manage the generation
units, optimizing the available power resources.
• Several works relating computational intelligence techniques to load forecasting have been published in recent years.
This project analysed the results obtained in the development of load forecasting systems using different Artificial
Intelligence techniques.
• [Ribeiro et al 98] developed a system that predicts hourly loads up to a week ahead. A calendar of events is
introduced in the system, in which special days (holidays, sporting events, etc.) are registered, allowing the system to
treat them in a special way and improving its performance. Its fast fuzzy rule generation made it possible to train the
system in real time with the most recent data, capturing the latest trend of the series under consideration. This
characteristic, together with the calendar of events, makes the system very flexible, fitting any utility very easily. The
procedure was applied to some utilities in Brazil using observations collected over several years. The results
obtained in these applications indicate that the proposed system is adequate to the task. It is simple, adapts to
different utilities very easily, and the forecasts produced present a small mean absolute percentage error (MAPE).
• Another system using Fuzzy Logic has also been developed. We implemented a Fuzzy Logic Forecasting
System using Matlab to predict load for the CEMIG company at 10-minute intervals, 144 steps ahead. In fuzzy
systems for time series forecasting, rule extraction is performed from the numeric data; in this case, the series itself
plays the role of the specialist. There are two basic methodologies for rule extraction: either the data is used to define
the fuzzy sets, or the fuzzy sets are pre-assigned, with the data later associated to these sets. We employed the second
methodology. Many tests were made varying the overlap ratio of the fuzzy sets in order to obtain the best results
(a mean absolute error of 1.15%).
• [Teixeira et al 98, 99] present some results of the study of partially recurrent neural networks applied to
hourly electric load forecasting with real data. In this load forecasting problem there is little information
from the output layer (it has only one output unit, for the load value to be predicted). Therefore Elman
networks were chosen (instead of Jordan networks). Elman networks (ENs) were applied to electric load forecasting,
using global and local models, and were trained both by Backpropagation and by Backpropagation Through Time.
For the local models the Neural Gas network was used as the classifier. Many network architectures
were analyzed for the global models (Feedforward Networks and Elman Networks) and for the local models
(Neural Gas + Feedforward Networks, Neural Gas + Elman Networks and Recursive Neural Gas + Elman
Networks). The tests conducted showed better results for the local models than for the global models. The local
model architectures that performed best were those that integrate Neural Gas or Recursive Neural Gas
with Elman networks. The MAPE performance of these architectures was much better than the results
obtained by the other models investigated.
• [Zebelum et al 97] developed the neural network software NeuroPrev, which is used by 32 Brazilian
electric power companies for forecasting monthly electric load. The NeuroPrev system features an Artificial Neural
Network (ANN) modeled to perform load forecasting. The ANN employs the Backpropagation Algorithm for
network training and uses binary codification for the month as auxiliary ANN inputs. To evaluate training
performance, two statistics are provided graphically: the Mean Squared Error of the training process, and the
forecasting error for the last observed load data, which are not included in the training process.
• [Ribeiro et al 97, 98] have also investigated the performance of ANNs in Very Short Term (VST) load forecasting,
with prediction lead times of 10 minutes. The VST load forecasting system has been tested with real load data
from CEMIG Power Electric Co., Brazil. The system involves pre-processing, training and forecasting. Pre-
processing the load series consists of two steps: preparing load data values for training and forecasting; and
interpolating missing values. The Backpropagation Algorithm has been used for training. The performance of the
VST load forecasting system has been assessed using a multi-step procedure, in which forecasted load values are fed
back as neural network inputs. A mean absolute percentage error of 1.17% has been achieved when predicting load
144 steps ahead (24 hours).
• [Tito et al 99] have applied Bayesian Neural Networks to electric load forecasting. The Bayesian methods used were
the Gaussian Approximation and Markov Chain Monte Carlo. Bayesian Neural Networks offer important
advantages over the backpropagation model, such as confidence intervals that can be assigned to the predictions
generated by the network. The results of the Bayesian neural nets compared favorably with statistical
techniques such as Box & Jenkins and Holt-Winters, and also with backpropagation neural networks.
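The forecasting systems above report accuracy as MAPE. For reference, the metric can be sketched as follows (the load values are illustrative):

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error, the figure of merit quoted for
    the load forecasting systems above (e.g. 1.17% at 144 steps ahead)."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

load = [100.0, 110.0, 105.0, 95.0]   # observed hourly loads (illustrative)
pred = [98.0, 112.0, 106.0, 96.0]    # model forecasts
err = mape(load, pred)
```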
3.3 Hybrid Statistics/Neural Models for Time Series
[Velloso 99] developed a new class of nonlinear additive hybrid models for time series forecasting. In these
models the varying coefficients are modeled by a neural network, and both the conditional mean and the conditional
variance are modeled explicitly. This work demonstrated that the proposed models are universal approximators and can
model conditional variance explicitly. Gradient computations and weight estimation algorithms were also developed based on the
main estimation methods: least mean squares and maximum likelihood. These models have been tested with synthetic
and real series. The results obtained were superior, or at least equivalent, to those of other methods.
[Soto 99] reviews the main algorithms and investigates the performance of timed neural networks: the TDNN (Time
Delay Neural Net), the FIR (Finite Impulse Response) Neural Net, and the Jordan and Elman Neural Nets. These
algorithms were implemented and assessed in two classes of applications: time series and voice filtering.
3.4 Overflow Control of Electric Generator Reservoir through Genetic Algorithm
[Ribeiro et al 98] have also assessed the performance of Genetic Algorithms (GA) in Overflow Control of Electric
Generator Reservoirs. In this work we used a GA to calculate the minimum volume of water to assign to a reservoir in
order to meet energy demand while avoiding overflow. This is basically a constrained problem, which required a
chromosome decoder designed specifically to guarantee valid solutions.
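The decoder idea can be sketched as follows. The bounds, names and values are illustrative; the paper does not give its exact formulation:

```python
# Hedged sketch of a constraint-respecting decoder: genes in [0, 1] are
# mapped into the feasible interval for each period, so every chromosome
# decodes to a valid reservoir schedule (no repair or penalty needed).
def decode(chromosome, v_min, v_max):
    """Map each gene g in [0, 1] to a volume inside [v_min, v_max]:
    at least v_min meets the energy demand, at most v_max avoids
    overflow."""
    return [lo + g * (hi - lo)
            for g, lo, hi in zip(chromosome, v_min, v_max)]

demand_volume = [40.0, 55.0, 50.0]   # minimum volume to meet demand
spill_volume = [90.0, 85.0, 95.0]    # maximum volume before overflow
genes = [0.0, 0.5, 1.0]
schedule = decode(genes, demand_volume, spill_volume)
# schedule stays within the feasible band for every period
```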
3.5 A Theory Refinement Approach for the Design of Distributed Object Oriented Databases
Distributed Design involves making decisions on the fragmentation and placement of data across the sites of a
computer network [Oszu & Valduriez 91]. The first phase of the Distributed Design task in a top-down approach is the
fragmentation phase, which clusters into fragments the information accessed simultaneously by applications. Since
Distributed Design is a very complex task in the context of the OO data model [Ezeife & Barker 94, 95] [Karlapalem et
al 94] [Lima and Matoso 96] [Maier et al 94] [Savonnet et al 96], [Baião & Mattoso 97, 98] have developed in their
previous work an analysis algorithm to assist distributed designers in the fragmentation phase of OO databases. The
analysis algorithm decides the most adequate fragmentation technique to be applied to each class of the database schema,
based on some heuristics.
[Baião et al 98] proposed a theory refinement approach to the DDOODB problem. The objective of the work was to
uncover some previously unknown issues to be considered in the distributed design process. They represent the analysis
algorithm as a set of rules that will be used as background knowledge to obtain a new set of rules by applying the
Inductive Logic Programming (ILP) technique [Lavrac and Dzeroski 94], through the use of the FORTE system
implementation [Richards and Mooney 95]. This new set of rules will represent a revised analysis algorithm that will
propose fragmentation schemas with improved performance.
Some experiments have been done and have resulted in fragmentation schemas offering a high degree of parallelism
together with an important reduction of irrelevant data.
Although they have addressed the problem of class fragmentation in the DDOODB context, an important future work
is the use of the same inductive learning approach in other phases of the Distributed Design (such as the allocation
phase), as well as in the Database Design itself, possibly using other data models (relational or deductive). Also, the
resulting fragmentation schemas obtained from the revised algorithm may be applied to fragment the database that will
be used in the work of [Provost & Henessy 96], which proposes the use of distributed databases in order to scale-up data
mining algorithms.
3.6 Data Mining
[Baião et al 98] discussed some issues in Knowledge Discovery in Databases. [Pereira 99] investigated the use of
Genetic Algorithms in KDD. The objective was to model, implement and assess the performance of a Genetic Algorithm
system designed to evolve association rules from databases. Modelling the GA consisted of defining the chromosome
representation and decoding, initialisation strategies, genetic operators, and evaluation functions. The evaluation
functions point to the best rules according to their accuracy and coverage. Eleven different evaluation functions, some
of them new, were tested in various benchmark and real data mining applications. The resulting tool, called
RulerEvolver, which combines various resources to manipulate and mine databases, presented competitive results
when compared with other related work in data mining using statistical and neural network systems.
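One evaluation function of the kind described, combining accuracy and coverage, can be sketched as follows. The product combination and all names here are illustrative; the work tested 11 such functions:

```python
# Sketch of an evaluation function scoring a candidate rule
# "IF antecedent THEN consequent" by its accuracy (confidence) and
# coverage (support over the whole database).
def rule_fitness(records, antecedent, consequent):
    covered = [r for r in records if antecedent(r)]
    if not covered:
        return 0.0
    correct = sum(consequent(r) for r in covered)
    accuracy = correct / len(covered)          # confidence of the rule
    coverage = len(covered) / len(records)     # fraction of records matched
    return accuracy * coverage                 # illustrative combination

data = [{"age": 25, "buys": True}, {"age": 35, "buys": True},
        {"age": 45, "buys": False}, {"age": 30, "buys": True}]
score = rule_fitness(data,
                     antecedent=lambda r: r["age"] < 40,
                     consequent=lambda r: r["buys"])
# covered: ages 25, 35, 30 -> coverage 3/4; all of them buy -> accuracy 1.0
```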
[Pereira 99] has also assessed the performance of neural network models in data mining, employing various
models: backpropagation, Radial Basis Function and Kohonen networks, to classify database records. In these applications,
the neural nets learned the characteristics of distinct clusters and were able to classify new records correctly. These models
have been applied to real commercial marketing databases.
3.7 A GIS Expert System for Land Evaluation
[Nehme et al 98] aims at developing a tool for Agricultural Planning based on an expert system for Land Evaluation.
The social and economic consequences of unsuitable land use are well known; nevertheless, in some countries the natural
potential is not considered, causing the misuse of the land's potential and, as a result, economic and environmental
impairment.
Land use planning is essential during economic development. Without correct interventions in the distribution of
the development sectors, based on land planning, regional economic development problems will probably occur.
In many countries, economic development occurs without land use planning. This causes development problems
such as imbalanced growth, income disparity and decreased water quality, to name a few.
A Decision Support System for Land Evaluation will be used as a first step towards land use planning. In this way, five
limiting factors can be initially considered: soil fertility, water excess/deficiency, erosion susceptibility and the required
degree of soil tillage. Besides those environmental aspects, some economic issues should also be considered,
expressing the communities' degree of development, market tendency and so on.
According to FAO [1995 / Chapter 10- Agenda 21]: an integrated physical and land use planning and management is
an eminently practical way to move towards more effective and efficient use of the land and its natural resources. That
integration should take place at two levels, considering on the one hand all environmental, social and economic factors
and on the other all environmental and resources components together.
This integration can be obtained through the utilization of landscape ecological methods, through the definition of
landscape units able to represent this integrative approach. Landscape units will be used as elementary portions providing
information for the analytical process. Landscape units are geographical entities containing attributes which allow a
differentiation between them while maintaining a simultaneous dynamic connection.
This system encompasses two dimensions: the ecological, which reflects the limitations and potentialities of
sustainable use of natural resources, and the economic, which expresses the development of the communities that live
in the region or zone while exploiting it.
Geographic Information Systems (GIS) have been used for many applications as a tool for spatial analysis. Thus, a
GIS will be used to support the spatial aspects of the above mentioned knowledge-based system for land evaluation. This
information system will help decision making and can be a powerful tool for land use planning and environmental
management.
The Decision Support System and the methodology presented have been validated in the Paty do Alferes region, Rio de
Janeiro, Brazil, with good results in comparison with the traditional land evaluation methods and the land adequacy
analysis performed by a group of soil scientists from CNPS/Embrapa (National Soil Research Center of the Brazilian
Agricultural Research Corporation). The next step is to implement fuzzy algorithms in the Expert System in order to
make it possible to measure costs and establish risk analyses associated with decision making. For this purpose we
intend to use the methodology proposed by BOGRADI [1].
The fuzzy reliability measurement can be used to compare alternative decisions when the consequence of failure is
considered.
3.8 Robot Control and Games
In this project we have used Genetic Algorithms, Neural Networks and Fuzzy Logic in the design of machine
learning systems and have applied them to robot control and games. In [Thomaz et al 99], a Genetic Algorithm (GA) is
used for designing behavioral controllers for a robot. A new GA path-planning approach employs the evolution of a
chromosome structure of attitudes to control a simulated mobile robot, called Khepera. These attitudes define the basic
robot actions to reach a goal location, performing straight motion and avoiding obstacles. The GA fitness function,
employed to teach robot’s movements, was engineered to achieve this type of behavior in spite of any changes in
Khepera’s goals and environment. The results obtained demonstrate the controller’s adaptability, displaying near-
optimal paths in different configurations of the environment. The same robot simulator was used to evaluate a fuzzy
controller. [Brasil 99] implemented the reinforcement learning method TD(λ) to learn a game.
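TD(λ) itself can be sketched in tabular form with eligibility traces. The random-walk task and all parameters below are illustrative stand-ins, not the game application of [Brasil 99]:

```python
import random

# Minimal tabular TD(lambda) sketch with accumulating eligibility traces,
# estimating state values on a 5-state random walk (reward 1 at the right
# terminal, 0 elsewhere).
def td_lambda(episodes=200, n_states=5, alpha=0.1, lam=0.8, gamma=1.0, seed=0):
    rng = random.Random(seed)
    v = [0.0] * (n_states + 2)          # states 0 and n+1 are terminal
    for _ in range(episodes):
        e = [0.0] * (n_states + 2)      # eligibility traces, reset per episode
        s = (n_states + 1) // 2         # start in the middle state
        while 0 < s < n_states + 1:
            s2 = s + rng.choice((-1, 1))
            r = 1.0 if s2 == n_states + 1 else 0.0   # reward at the right end
            delta = r + gamma * v[s2] - v[s]         # TD error
            e[s] += 1.0                              # accumulating trace
            for i in range(1, n_states + 1):
                v[i] += alpha * delta * e[i]         # credit past states
                e[i] *= gamma * lam                  # decay all traces
            s = s2
    return v[1:n_states + 1]

values = td_lambda()
# the true values for this walk are 1/6, 2/6, ..., 5/6, so the estimates
# should increase from left to right
```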
3.9 Neural Networks for Image Classification
Traditional image classification (or object recognition) systems are passive devices. Given an input image,
identification or recognition is obtained by processing the entire image. Typically, this implies that the whole image
must be pre-processed, an extremely costly procedure. This state of affairs should be contrasted with human vision which,
instead of passively dealing with "input images", involves the continuous interaction with the outside world through so-
called saccadic eye movements (Yarbus, 1967). The central role of eye movements in human vision has inspired a range
of computational vision models to explore such properties, producing a number of "active vision" proposals (Aloimonos
et al., 1988; Ballard,1991; Tistarelli & Sandini, 1992). The present work utilizes basic principles of active vision
systems in the task of image interpretation and object recognition.
The main objectives of the present component of the project were the following.
1- Develop image representation methods based on the properties of the primate visual system. In particular, design
filters inspired by the concentric receptive field organization found in the retina, as well as the oriented structure found
in the visual cortex (Neumann & Pessoa, 1994; Pessoa et al., 1995; Grossberg & Pessoa, 1998; Exel & Pessoa, 1998).
2- Determine the efficacy of this representation in the problems of image classification and object recognition. In
particular, investigate the performance of the methods developed in data bases extensively used for testing and
benchmarking.
3- Based on the representation chosen, develop new image classification and object recognition methods with
reduced computational costs. In particular, develop multi-scale pyramid and space-variant representations for the active
processing of images.
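Objective 1 can be illustrated with a difference-of-Gaussians (DoG) filter, a standard model of the concentric centre-surround receptive fields found in the retina. The sketch below is illustrative only; the kernel size and the two Gaussian scales are assumed values, not the actual filters of Neumann & Pessoa (1994).

```python
import math

def dog_kernel(size=9, sigma_c=1.0, sigma_s=2.0):
    """Difference-of-Gaussians kernel: a narrow excitatory centre minus a
    broader inhibitory surround, modelling a concentric receptive field."""
    c = size // 2
    def gauss(x, y, s):
        return math.exp(-(x * x + y * y) / (2 * s * s)) / (2 * math.pi * s * s)
    return [[gauss(x - c, y - c, sigma_c) - gauss(x - c, y - c, sigma_s)
             for x in range(size)] for y in range(size)]

def convolve(image, kernel):
    """'Valid' 2-D convolution of a grey-level image (list of lists)."""
    k = len(kernel); h, w = len(image), len(image[0])
    return [[sum(kernel[i][j] * image[y + i][x + j]
                 for i in range(k) for j in range(k))
             for x in range(w - k + 1)] for y in range(h - k + 1)]

# A uniform image produces only a weak response (truncation makes the kernel
# sum slightly nonzero): the DoG mainly signals local contrast, not intensity.
flat = [[10.0] * 12 for _ in range(12)]
out = convolve(flat, dog_kernel())
```

Oriented (cortex-like) filters can be obtained analogously by replacing the isotropic Gaussians with elongated ones.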
The results obtained were the following:
1- Development of an active vision system for object recognition. In this system, the image is not treated as a whole;
its regions are visited sequentially. Each region visited provides new image pixels that are accrued to the input
feature vector fed to a Fuzzy ART neural network for image coding. The present development introduced
new attentional strategies, i.e., strategies of how to sequentially visit image regions such that they are optimally chosen
for recognition (i.e., they help disambiguate the recognition process the most). See Exel and Pessoa (1998); Exel
(1999).
2- The active vision object recognition system was extended to deal with multiple-object scenes, or images. In this
case the input to the system was not an image with only one object (e.g., a face image), but an image containing a host
of objects in arbitrary positions (e.g., a collection of faces). The active vision philosophy adopted in (1) above was
maintained for the scene exploration system. In this system, objects belonging to the image (e.g., a person) are marked
(or tagged) by mechanisms that determine "interesting regions" or regions that contain information that should be
investigated in detail. These candidate regions are in turn treated as single objects and analyzed by the
single-object system developed in (1). See Pessoa et al. (1998a,b).
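The sequential, attentive processing described in (1) and (2) can be sketched as follows. The fixed region ordering, the toy classifier and the confidence threshold below are hypothetical stand-ins for the attentional strategies and the Fuzzy ART network of the actual system (Exel & Pessoa, 1998).

```python
def region_pixels(image, region, size=4):
    """Flatten one size x size image region into a pixel list."""
    y0, x0 = region
    return [image[y][x] for y in range(y0, y0 + size)
                        for x in range(x0, x0 + size)]

def attentive_scan(image, regions, classify, max_fixations=4):
    """Visit regions in order, growing the feature vector after each
    'fixation' and stopping early once classification is unambiguous,
    so the whole image need not be processed."""
    features = []
    for region in regions[:max_fixations]:
        features.extend(region_pixels(image, region))   # accrue new pixels
        label, confidence = classify(features)
        if confidence > 0.9:        # recognition already disambiguated
            return label, len(features)
    return label, len(features)

# Toy classifier: 'bright' vs 'dark' by mean intensity; confidence grows
# with the number of pixels seen so far.
def classify(features):
    mean = sum(features) / len(features)
    label = "bright" if mean > 0.5 else "dark"
    return label, min(1.0, len(features) / 32.0)

image = [[1.0] * 8 for _ in range(8)]
regions = [(0, 0), (0, 4), (4, 0), (4, 4)]
label, n_pixels_used = attentive_scan(image, regions, classify)
```

Here recognition stops after two of the four regions, so only half of the image's pixels enter the feature vector; choosing the *most* disambiguating region at each step is exactly the attentional-strategy problem studied in (1).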
3.10 Evolutionary Methodologies Applied to the Design of Electronic Circuits
[Zebulum et al 99] presented a major investigation in a novel research area called Evolutionary Electronics (EE). EE
extends the concepts of Genetic Algorithms to the evolution of electronic circuits. The main idea behind EE is that each
possible electronic circuit can be represented as an individual or a chromosome of an evolutionary process, which
performs standard genetic operations over the circuits, such as selection, crossover and mutation. This research work
comprises genetic algorithms modeling, the proposal of novel evolutionary methodologies and a large number of case
studies: Variable length representation (VLR) Evolutionary Algorithms; Memory based genetic algorithms; New
crossover and mutation operators for VLR; Search space partitioning methods; Multi-Objective optimization
methodology based on Neural Networks; Electronic filter design; Synthesis of Low Power operational amplifiers;
Analog electronic circuits design; Design and optimization of logic gates; Micro power CMOS analog cells; Synthesis
and optimization of combinational and sequential digital circuits; Switched Capacitors circuits design.
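As a rough illustration of a variable-length representation (VLR) evolutionary algorithm, the sketch below evolves a list of component labels whose length can change under mutation and crossover. The component alphabet, the toy fitness function and the operators are assumptions made for the example, not the methodology of [Zebulum et al 99].

```python
import random

def mutate(chrom, components=("R", "C", "L", "Q")):
    """Variable-length mutation: change, insert, or delete one gene."""
    chrom = list(chrom)
    op = random.choice(("change", "insert", "delete"))
    if op == "change":
        chrom[random.randrange(len(chrom))] = random.choice(components)
    elif op == "insert":
        chrom.insert(random.randrange(len(chrom) + 1), random.choice(components))
    elif len(chrom) > 1:                      # never delete the last gene
        del chrom[random.randrange(len(chrom))]
    return chrom

def crossover(a, b):
    """Cut each parent at an independent point, so the offspring's
    length can differ from both parents'."""
    i = random.randrange(1, len(a) + 1)       # keep at least one gene of a
    j = random.randrange(len(b) + 1)
    return a[:i] + b[j:]

def evolve(fitness, pop_size=30, generations=50):
    pop = [[random.choice("RCLQ")] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # truncation selection
        pop = elite + [mutate(crossover(random.choice(elite),
                                        random.choice(elite)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

# Toy objective: match a fixed component sequence (a stand-in for a
# simulated circuit evaluation), penalizing wrong chromosome length.
target = list("RCQRC")
def fitness(c):
    return sum(x == y for x, y in zip(c, target)) - abs(len(c) - len(target))

random.seed(1)
best = evolve(fitness)
```

In actual EE work the fitness comes from circuit simulation rather than string matching, but the genetic machinery, with genomes that grow and shrink, follows the same pattern.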
4. Conclusion
The ICOM project has developed and implemented several computational intelligence systems and applied them to
real-world problems. We organized the First Brazilian School on Machine Learning and Knowledge Discovery in
Databases (MLKDD’98) (www.cos.ufrj.br/~mlkdd) and two workshops (see [França 97]). The project trained 19 MSc
and 5 PhD students, along with the several researchers who worked on it. We have published 100 papers: 1 book, 7 book
chapters, 11 papers in international journals, 47 in international conferences and 34 in Brazilian conferences.
Acknowledgments
This project was funded by the Brazilian Research Council CNPq/ProTeM-CC. Some authors are partially financially
supported by CNPq research productivity grants. We would also like to thank Ernesto Burattini, Saso Dzeroski, Alex de
Freitas, Massimo de Gregori, Steffen Holldobler, Raymond Mooney, Foster Provost, Jude Shavlik and Ah-Hwee Tan.
References
Aloimonos, J., Bandopadhay, A., & Weiss, I. (1988). Active vision. International Journal of Computer Vision, 1, 333-356.
Aiello, A., E. Burattini and G. Tamburrini, 1995a. Purely Neural, Rule-Based Diagnostic Systems. I. Production Rules. International
Journal of Intelligent Systems, 10, 735-749.
Aiello, A., E. Burattini and G. Tamburrini, 1995b. Purely Neural, Rule-Based Diagnostic Systems. II. Uncertain Reasoning.
International Journal of Intelligent Systems, 10, 751-769.
R. Andrews, J. Diederich and A. B. Tickle; “A Survey and Critique of Techniques for Extracting Rules from Trained Artificial
Neural Networks”; Knowledge-based Systems; Vol. 8, no 6; 1995.
K. R. Apt and R. Bol; “Logic Programming and Negation: A Survey”, Journal of Logic Programming 19-20, pp. 9-71, 1994.
BAIÃO, F., MATTOSO, M., 1998, “A Mixed Fragmentation Algorithm for Distributed Object Oriented Databases”, Proceedings of
the International Conference on Computing and Information, Winnipeg, Canada, June
Ballard, D. (1991). Animate vision. Artificial Intelligence, 48, 57-86.
V. C. Barbosa; “Massively Parallel Models of Computation”; Ellis Horwood Series in Artificial Intelligence; 1993.
R. Blasig; “GDS: Gradient Descent Generation of Symbolic Classification Rules”; Advances in Neural Information Processing
System, Vol. 6; 1994.
Buckley, J. J., and E. Eslami, "Fuzzy neural nets: capabilities," in W. Pedrycz (Ed.), Fuzzy Modelling: Paradigms and Practice,
Kluwer, Boston, MA, 167-183, 1996.
Buckley, J. J., and Y. Hayashi, "Fuzzy neural networks: a survey," Fuzzy Sets and Systems 66 (1994), 1-13.
Buckley, J. J., and Y. Hayashi, "Neural nets for fuzzy systems, "Fuzzy Sets and Systems 71 (1995), 265-276.
Caianiello E., 1989. A theory of neural networks. In: I. Aleksander (ed), Neural Computing Architectures: The Design of Brain-like
Machines. North Oxford Academic Publishers, 8-25.
*Carvalho L A V., Ferreira, N C, Fiszman, A., A Neurocomputational Model for Autism, Submitted to Psychophysiology
*Carvalho, L A V, Melo, A C D, A Neurocomputational Model for Chronic Stress, Submitted to Mathematical and Computer
Modeling.
N. Chaves, P. Martins and C. Fernando. Um Estudo de Paralelismo em Redes Backpropagation. II Simpósio Brasileiro de Redes Neurais, 1995.
L. Coetzee. Parallel Approaches to Training Feedforward Neural Nets. Faculty of Engineering, University of Pretoria, 1996.
*Rodrigo M. L. A. Costa, Estudo sobre o Desempenho de Algoritmos Genéticos (A Study on the Performance and Convergence of
Genetic Algorithms), Departamento de Engenharia Elétrica, PUC-Rio, March 1997.
De Raedt, L., Lavrac, N. and Dzeroski, S., Multiple Predicate Learning, in: Proceedings of the 13th International Joint Conference on
Artificial Intelligence, Morgan Kaufmann, 1993, pp. 1037-1042.
EZEIFE, C., BARKER, K., 1994, Vertical Class Fragmentation in a Distributed Object Based System, Technical Report 94-03,
Department of Computer Science, University of Manitoba, Canada
EZEIFE, C., BARKER, K., 1995, "A Comprehensive Approach to Horizontal Class Fragmentation in a Distributed Object Based
System”, Distributed and Parallel Databases, vol. 3, n° 3, pp. 247-272
M. Fischer and J. Dongarra. Another Architecture: PVM on Windows 95/NT, 1994.
*França, F.M.G. (Editor), Proceedings of the Workshop in Computational Intelligence: Projects ICOM e IPAC, Protem III-CC,
CNPq, Programa de Engenharia de Sistemas e Computação - COPPE/UFRJ, Technical Report ES-450/97, September 1997.
M. Gelfond and V. Lifschitz; “The Stable Model Semantics for Logic Programming”; 5th Logic Programming Symposium, MIT
Press; 1988.
M. Gelfond and V. Lifschitz; “Classical Negation in Logic Programs and Disjunctive Databases”; New Generation Computing, Vol.
9, Springer-Verlag; 1991.
Gupta, M. M., and D. H. Rao, "On the principles of fuzzy neural networks, "Fuzzy Sets and Systems 61 (1994), 1-18.
S. Haykin. Neural Networks: A Comprehensive Foundation. Macmillan College Publishing Company, 1994.
K. Hwang. Advanced Computer Architecture: Parallelism, Scalability, Programmability. McGraw-Hill, 1993.
J. Hertz, A. Krogh and R. G. Palmer; “Introduction to the Theory of Neural Computation”; Santa Fe Institute, Studies in the Science
of Complexity, Ed. Addison-Wesley Publishing Company; 1991.
M. Hilario; “An Overview of Strategies for Neurosymbolic Integration”; Connectionist-Symbolic Integration: from Unified to
Hybrid Approaches - IJCAI 95; 1995.
S. Holldobler; “Automated Inferencing and Connectionist Models”; Postdoctoral Thesis, Intellektik, Informatik, TH Darmstadt;
1993.
S. Holldobler and Y. Kalinke; “Toward a New Massively Parallel Computational Model for Logic Programming”; Workshop on
Combining Symbolic and Connectionist Processing, ECAI 94; 1994.
M. I. Jordan; “Attractor Dynamics and Parallelisms in a Connectionist Sequential Machine”; Eighth Annual Conference of the
Cognitive Science Society, pp. 531-546; 1986.
Kandel, A., and G. Langholz, Hybrid Architectures for Intelligent Systems, CRC Press, Boca Raton, FL, 1992.
KARLAPALEM, K., NAVATHE, S., MORSI, M., 1994, “Issues in Distribution Design of Object-Oriented Databases”. In: Özsu, M.
et al. (eds), Distributed Object Management, Morgan Kaufmann Publishers
Keller, J. M., R. Krishnapuram, and F. C. Rhee, "Evidence aggregation networks for fuzzy logic inference," IEEE Trans. on Neural
Networks 3 (1992), 761-769.
B. Kruatrachue and T. Lewis. Grain Size Determination for Parallel Processing. IEEE Software, New York, 1988.
N. Lavrac and S. Dzeroski; “Inductive Logic Programming: Techniques and Applications”; Ellis Horwood Series in Artificial
Intelligence; 1994.
P. Leray, P. Gallinari and E. Didelet. A Neural Network Modular Architecture for Network Traffic Management. CESA, Lille, 1994.
LIMA, F. MATTOSO, M., 1996, “Performance Evaluation of Distribution in OODBMS: a Case Study with O2” In: Proc IX Int.
Conf on Parallel & Distributed Computing Systems, France, pp.720-726
LIMA, P. M. V., 1992. Logical Abduction and Prediction of Unit Clauses in Symmetric Hopfield Networks. In: I. Aleksander and J.
Taylor (eds.), Artificial Neural Networks (ICANN-92), Elsevier Science Publishers, 1, 721-725.
LIMA, P. M. V., 1994. Resolution-Based Inference with Artificial Neural Networks. presented at the Workshop on Logic and
Reasoning with Neural Networks at the International Conference on Logic Programming’94 (ICLP-94), Santa Margherita.
Workshop proceedings can be retrieved at ftp site: ftp.di.unipi.it, dir: pub/Papers/perso/ICLP-WS.
Liporace, F. S., R. J. Machado, and V. C. Barbosa, "Equalization of the training set for back-propagation networks applied to
classification systems," Proc. of the First Brazilian Congress on Neural Networks, 99-103, 1994.
J. W. Lloyd; “Foundations of Logic Programming”; Springer - Verlag; 1987.
Machado, R. J., and A. F. Rocha, "A method for incremental learning in fuzzy neural networks," Proc. of the First Brazilian Symp.
on Neural Networks, 105-110, 1994.
Machado, R. J., and A. F. Rocha, "Integrating symbolic and neural methods for building intelligent systems," Proc. of the Int. Symp.
on Integrating Knowledge and Neural Heuristics, 283-292, 1994.
Machado, R. J., and A. F. Rocha, "Inference, inquiry and explanation in expert systems by means of fuzzy neural networks," Proc. of
the Second IEEE Int. Conf. on Fuzzy Systems, 351-356, 1993.
Mahoney, J. and Mooney, R.1993. Combining Connectionist and Symbolic Learning Methods to refine Certainty-factors Rule-based
Systems. Connection Science 5 (special issue on architectures for integrating neural and symbolic processing) pp.339-364.
MAIER, D., GRAEFE, G., SHAPIRO, L. et al., 1994, “Issues in Distributed Object Assembly”. In: Özsu, M., Dayal, U., Valduriez,
P. (eds), Distributed Object Management, Morgan Kaufmann Publishers
J. McCarthy, “Epistemological challenges for connectionism”, Behavior and Brain Sciences, vol. 11, n. 1; pp. 44; 1988.
M. Minsky; “Logical versus Analogical, Symbolic versus Connectionist, Neat versus Scruffy”; AI Magazine, Vol. 12, no 2; 1991.
MITCHELL, T., 1997, Machine Learning, McGraw-Hill Companies, Inc
S. Muggleton and L. De Raedt; “Inductive Logic Programming: Theory and Methods”; The Journal of Logic Programming; 1994.
Neumann, H., and Pessoa, L. (1994). A simple cell model with multiple spatial frequency selectivity and linear/non-linear response
properties. Proceedings of the World Congress on Neural Networks (WCNN-94), San Diego, Vol. IV, 290-298.
ÖZSU, M., VALDURIEZ, P., 1991, Principles of Distributed Database Systems, New Jersey, Prentice-Hall
P. Pacheco. Parallel Programming with MPI. Morgan Kaufmann, 1996.
M. Pazzani and D. Kibler; “The Utility of Knowledge in Inductive Learning”; Machine Learning, Vol. 9, pp. 57-94; 1992.
Pedrycz, W., "Fuzzy neural networks and neurocomputations," Fuzzy Sets and Systems 56 (1993), 1-28.
Pedrycz, W., and A. F. Rocha, "Fuzzy-set based models of neurons and knowledge-based networks," IEEE Trans. on Fuzzy Systems
1 (1993), 254-266.
G. F. Pfister. In Search of Clusters. Prentice Hall, 1998.
G. Pinkas; “Energy Minimization and the Satisfiability of Propositional Calculus”; Neural Computation, Vol. 3, pp.282-291; 1991.
G. Pinkas; “Logical Inference in Symmetric Connectionist Networks”; Doctoral Thesis, Sever Institute of Technology, Washington
University; 1992.
PROVOST, F., HENNESSY, D., 1996, “Scaling-Up: Distributed Machine Learning with Cooperation”, In: Proceedings of the
Thirteenth National Conference on Artificial Intelligence (AAAI '96), Portland, Oregon
PVM 3 User's Guide and Reference Manual. Oak Ridge National Laboratory, Tennessee, USA, 1994.
T. Resk. Distributed Simulation of Synchronous Neural Networks. Invited article, Neural Networks, Vol. 7, No. 1, USA, 1994.
T. Resk. Mapping and Parallel Simulation of Synchronous Neural Networks on Multiprocessors. Proceedings of the Euromicro Conference, Barcelona,
1993.
RICHARDS, B. L., MOONEY, R. J., 1995, "Refinement of First-Order Horn-Clause Domain Theories", Machine Learning, vol. 19,
n° 2, pp. 95-131
SANTOS COSTA, V., R. BIANCHINI and I. C. DUTRA, 1996. Evaluating the impact of coherence protocols on parallel logic
programming systems. Proceedings of the 5th EUROMICRO Workshop on Parallel and Distributed Processing, 376-381.
J. Shavlik; “Combining Symbolic and Neural Learning”; Machine Learning, Vol. 14, no 2, pp. 321-331; 1994.
*Flávio Joaquim de Souza, Marley M. R. Vellasco, Marco A. C. Pacheco, “Hierarchical Neuro-Fuzzy Quadtree Models”, to be
submitted to the Fuzzy Sets and Systems journal.
Tan, Ah-Hwee 1997. Cascade ARTMAP: Integrating Neural Computation and Symbolic Knowledge Processing. IEEE Transactions
on Neural Networks, Vol. 8, No. 2 (March 1997), pp. 237-250.
S. B. Thrun, J. Bala, E. Bloedorn, I. Bratko, B. Cestnik, J. Cheng, K. De Jong, S. Dzeroski, S. E. Fahlman, D. Fisher, R. Haumann,
K. Kaufman, S. Keller, I. Kononenko, J. Kreuziger, R. S. Michalski, T. Mitchell, P. Pachowicz, Y. Reich, H. Vafaie, K. Van de
Welde, W. Wenzel, J. Wnek and J. Zhang; “The MONK’s Problem: A Performance Comparison of Different Learning
Algorithms”; Technical Report, Carnegie Mellon University; 1991.
Tistarelli, M., & Sandini, G. (1992). Dynamic aspects in active vision. Computer Vision, Graphics, and Image Processing, 56,
108-129.
Tito, E., Zaverucha, G., Vellasco, M., Pacheco, M.1999. Applying Bayesian Neural Networks to Electric Load Forecasting.
Submitted.
G. Towell and J. W. Shavlik; “The Extraction of Refined Rules From Knowledge Based Neural Networks”; Machine Learning, Vol.
13, no. 1; 1993.
G. Towell and J. W. Shavlik; “Knowledge-Based Artificial Neural Networks”; Artificial Intelligence, Vol. 70; 1994.
YANG, R., V. SANTOS COSTA, and D. H. D. WARREN, 1991. The Andorra-I Engine: A parallel implementation of the Basic
Andorra model. Proceedings of the Eighth International Conference on Logic Programming (ICLP'91), 58-67.
Yarbus, A. L. (1967). Eye Movements and Vision. New York: Plenum.
SAVONNET, M., TERRASSE, M., YÉTONGNON, K., 1996, “Using Structural Schema Information as Heuristics for Horizontal
Fragmentation of Object Classes in Distributed OODB”, In: Proc IX Int. Conf on Parallel & Distributed Computing Systems,
France, pp. 732-737
*R.S. Zebulum, M.A.C. Pacheco, M.M.B.R. Vellasco, A Novel Multi-Objective Optimisation Methodology Applied to Synthesis of
CMOS Operational Amplifiers, submitted to the Journal of Solid-State Devices and Circuits, Microelectronics Society –
SBMICRO 1999.
*R.S. Zebulum, M.A.C. Pacheco, M.M.B.R. Vellasco, Variable Length Representation in Evolutionary Electronics, submitted to the
Evolutionary Computation Journal, MIT Press, 1999.
Publications From The ICOM Project
Books : 01
1. Braga, A. P., Carvalho, A. P. L. and Ludermir, T. B. (1998) Fundamentos de Redes Neurais. Book published at the XII Escola de
Computação, Rio de Janeiro, 13-17 July 1998. Revised version published by Livros Técnicos e Científicos (LTC), 1999.
Papers in International Journals: 11
2. C.R.H. Barbosa, A.C. Bruno, M.M.B.R. Vellasco, M.A.C. Pacheco, J.P. Wikswo Jr., A. P. Ewing, C.S. Camerini, Automation
of SQUID Nondestructive Evaluation of Steel Plates by Neural Networks, IEEE Transactions on Applied Superconductivity,
IEEE Inc. 1999.
3. Carvalho L. A. V., "Modelling Distributed Concept Representation in Hopfield Neural Networks", Mathematical and
Computer Modelling, accepted for publication.
4. Garcez, A. S., Zaverucha, G. 1999. A Connectionist Inductive Learning and Logic Programming System. Accepted in Applied
Intelligence Journal, Kluwer, special volume on Neural Networks and Structured Knowledge.
5. Grossberg, S., & Pessoa, L. (1998). Texture segregation, surface representation, and figure-ground separation. Vision Research,
38, 2657-2684.
6. Ludermir, T.B., Carvalho, A.P.L., Braga, A.P. and de Souto, M. (1998) Weightless Neural Models: a review of current and past
works. Neural Computing Surveys, vol. 2, pp. 41-61. ISSN 1093-7609.
7. Ricardo J. Machado and Armando F. da Rocha, Inference, Inquiry, Evidence Censorship, and Explanation in Connectionist Expert
Systems, IEEE Transactions on Fuzzy Systems, Vol. 5, No. 3, pp. 443-459, August 1997.
8. Ricardo J. Machado, Valmir C. Barbosa, and Paulo A. Neves, "Learning in the combinatorial neural model," IEEE Trans. on
Neural Networks, 9 (1998), 831-847.
9. Nobre, C.N., Braga, A.P., Carvalho, A.P.L. and Ludermir, T.B. (1999) Rule extraction: A comparison between classic and
connectionist methods. To appear in the International Journal of Neural Systems.
10. Pessoa, L., Mingolla, E., and Neumann, H. (1995). A contrast- and luminance-driven multiscale network model of brightness
perception. Vision Research, 35, 2201-2223.
11. Pinto de Melo, C., Santos, F.L., Souza, B.B., Santos, M.S. and Ludermir, T.B. (1998) Polypyrrole Based Aroma Sensor.
12. F. Protti and G. Zaverucha, "Recognizing Classes of Logic Programs", Journal of the IGPL, Vol 5, Number 6, November 1997,
pp. 913-915.
Book Chapters:07
13. F. Baiao, M. Mattoso and G. Zaverucha, "A Knowledge-Based Perspective of the Distributed Design of Object Oriented
databases", Ed. Nelson F. Ebecken, Data Mining, pp. 383-399, WIT Press, England, 1998.
14. Hall Barbosa, A.C. Bruno, Marley Vellasco, Marco Pacheco and C.S. Camerini, Automation of
Electric Current Injection NDE by Neural Networks, Editors D. Thompson and D.E. Chimenti, in
Review of Progress in Quantitative Nondestructive Evaluation, Vol. 16, pp. 789-795, Plenum
Press, NY, USA, 1997
15. Braga, A.P., Carvalho, A.P.L. and Ludermir, T.B. (1999) Radial Basis Functions: Theory and Applications. In Nonlinear Modelling
and Forecasting of High Frequency Financial and Economic Time Series, Soofi, A.S. and Cao, L., editors, Kluwer.
16. Carvalho, A.P.L., Braga, A.P. and Ludermir, T.B. (1999) Input-output modeling of credit approval data-set using neural data
mining set. In Nonlinear Modelling and Forecasting of High Frequency Financial and Economic Time Series, Soofi, A.S. and
Cao, L., editors, Kluwer.
17. De Souto, M.C.P., Ludermir, T.B. and De Oliveira, W.R. (1998) Synthesis of Probabilistic Automata in pRAM Neural Networks,
vol. II (1198 pages), edited by Niklasson, L., Bodén, M., and Ziemke, T., pp. 603-608. Springer Verlag.
18. Ludermir, T.B., Braga, A.P., Nobre, C.N. and Carvalho, A.P.L. (1998) Extracting Rules from Neural Networks: A Data Mining
Approach. In Data Mining, edited by Nelson Ebecken. WIT Press, Computational Mechanics Publication,
Southampton, England, September 1998, pp. 303-314.
19. Santos, M.S., Ludermir, T.B., Santos, F.L., Pinto de Melo, C. and Gomes, J.E. (1998), Artificial Nose and Data Analysis using
Multi Layer Perceptron. In Data Mining, edited by Nelson Ebecken. WIT Press, Computational Mechanics Publication,
Southampton, England, September 1998, pp. 251-264.
Papers in International Conferences: 47
20. Baião, F., Mattoso, M. and Zaverucha, G. 1998. Towards an Inductive Design of Distributed Object Oriented Databases. Proc.
of the Third IFCIS Conference on Cooperative Information Systems (COOPIS’98), IEEE Computer Society, New York, USA,
August 20-22, pp.188-197.
21. F. Baiao, M. Mattoso and G. Zaverucha, 1998, "Issues in Knowledge Discovery in Databases", Proc. World Multiconference on
Systemics, Cybernetics and Informatics / Fourth International Conference on Information Systems, Analysis and Synthesis
(SCI’98), Vol. 2, pp. 164-169, Orlando, Florida, USA, July.
22. Barros, M.M., Souza, G.M. and Ludermir, T.B. (1998) Features extraction on boolean artificial neural networks. 3rd International
Conference on Computational Intelligence and Neuroscience, North Carolina, USA, 23-28 October 1998, pp. 99-102,
Association For Intelligent Machinery.
23. Barros, M.M., Ludermir, T.B. and Valadares, J.F.L. (1999) Rule Extraction from boolean artificial neural networks. Accepted at the
International Joint Conference on Neural Networks 1999, Washington (July 1999). INNS Press.
24. R. Basilio, G. Zaverucha and A. Garcez, 1998, "Inducing Relational Concepts with Neural Networks Via the LINUS System",
Proc. Fifth International Conference on Neural Information Processing (ICONIP’98), Vol. 3 pp. 1507-1510, 21-23 October,
Kitakyushu, Japan.
25. Luiz Biondi Neto, Marco Aurélio Pacheco, Marley M.B.R. Vellasco, Emmanuel P. Lopes Passos and Luis Chiganer, Hybrid
System for Detection and Diagnosis of Fault in Electrical Networks, XII ISCIS - The Twelfth International Symposium on
Computer and Information Sciences, Antalya, Turkey, 27-29 October 1997.
27. de Oliveira, W. R. (1996). Program Schemes over Continuous Algebras. Computability and Complexity in Analysis ’96, Trier,
Germany, 1996.
28. de Oliveira, W. R. and de Barros, R. S. M. (1997). The Real Numbers in Z. Proceedings of the 2nd BCS-FACS Northern Formal
Methods Workshop, Ilkley, 1997, D.J. Duke and A.S. Evans (Eds). Electronic Workshops in Computing:
http://rwic.springer.co.uk/Springer
29. De Souto, M. C. P., Ludermir, T. B. and De Oliveira, W. R. (1998) Synthesis of Probabilistic Automata in pRAM Neural
Networks. In Proceedings of ICANN98, Sweden, 2-4 September 1998.
30. Exel, S., & Pessoa, L. (1998). Attentive visual recognition. Proceedings of the International Conference on Pattern
Recognition, August 16-20th, Brisbane, Australia.
31. Fogel, L., Zaverucha, G. 1998. Normal Programs and Multiple Predicate Learning. International Conference on Inductive Logic
Programming (ILP’98), Lecture Notes in Artificial Intelligence (LNAI), Springer Verlag. Madison - Wisconsin, USA, 22-24
July 1998.
32. Garcez, A.S., Zaverucha, G., Silva, V.N.A.L. 1997. Applying the Connectionist Inductive Learning and Logic Programming
System to Power Systems Diagnosis. Proceedings of the IEEE International Conference on Neural Networks (ICNN-97), 9-12
June, Houston, Texas, USA, Vol. 1, pp. 121-126.
33. Garcez, A.S., Zaverucha, G., Carvalho, L.A. 1997. Logic Programming and Inductive Learning in Artificial Neural Networks.
Knowledge Representation in Neural Networks, pp. 33-46, Logos-Verlag Berlin, ISBN 3-931216-77-2.
34. Garcez, A.S., Zaverucha, G., Carvalho, L.A. 1996. Logical Inference and Inductive Learning in Artificial Neural Networks. XII
European Conference of Artificial Intelligence, ECAI96, Workshop on Neural Network and Structural Knowledge, August
12-16, Budapest, Hungary, pp. 41-49.
35. Anchizes do E.L. Gonçalves Filho, Marley M.B.R. Vellasco and Marco A.C. Pacheco, Neural
Networks and Invariant Moments for Noisy Images Recognition, International Conference on
Engineering Applications of Neural Networks (EANN'98), Gibraltar, 10-12 June 1998.
36. N. Hallack, G. Zaverucha and V. Barbosa, 1998, "Towards a Hybrid Model of First-Order Theory Refinement", presented at the
Neural Information Processing Systems (NIPS) Workshop on Hybrid Neural Symbolic Integration, 4-5 December,
Breckenridge, Colorado, USA. To appear as LNAI, Springer Verlag, 1999.
37. Kilkerry Neto, A., Zaverucha, G., Carvalho, L.A. 1999. An Implementation of a Theorem Prover in Symmetric Neural
Networks. Accepted at the International Joint Conference on Neural Networks 1999, Washington (July 1999). INNS Press.
38. Lima, P. M. V.,"Towards Neural Computation of REFORMed Programs", Proceedings of the Workshop on High Performance
Logic Programming Systems, European Summer School on Logic, Languages and Implementation (ESSLLI'96), Prague, 1996,
August.
39. Ludermir, T.B. and de Oliveira, W.R. (1998) Extracting Rules from Boolean Neural Networks. In Proceedings of ICONIP98,
Kitakyushu, Japan, 21-23 October 1998, pp. 1666-69, IOS Press.
40. R. Menezes, G. Zaverucha and V. Barbosa, 1998, "A Penalty-Function Approach to Rule Extraction from Knowledge-Based
Neural Networks", in Proc. Fifth International Conference on Neural Information Processing (ICONIP’98), Vol. 3 pp.
1497-1500, 21-23 October, Kitakyushu, Japan.
41. Nehme, C. C., Carvalho, L. A. V., Mendes, S. B. T., “A Temporal Model to Storage Spatiotemporal Pattern Memory in
Shunting Cooperative Competitive Network", in SpatioTemporal Models in Biological and Artificial Systems, Eds. Silva, F L,
Principe J C, Almeida L B, IOS Press, Washington D.C., Jan. 1998
42. Nehme, C. C., Fuks, S. and Meirelles, M. 1998. A GIS Expert System for Land Evaluation. GisPlanet´98, Lisbon, Portugal.
43. Guilherme F. Ribeiro, Victor N. A. L. da Silva, Plutarcho Lourenço, Ricardo Linden, Comparative Studies of Short-Term Load
Forecasting Using Hybrid Neural Networks, International Symposium on Forecasting, Barbados.
44. Luiz Sabino Ribeiro Neto, Marley M. Vellasco, Marco Aurélio Pacheco and Ricardo Zebulum, Very Short Term Load Forecasting
System Using Neural Networks, The Seventeenth Annual International Symposium on Forecasting, p. 51, Barbados, 19-21
June 1997.
45. Pessoa, L., Exel, S., Roque, A., & Leitao, A.P. (1998a). Attentive visual recognition for scene exploration. Proceedings of the
International Conference on Neural Information Processing, Kitakyushu, Japan, October 21-23, pp. 1291-1294.
46. Pessoa, L., Exel, S., Roque, A., & Leitao, A.P. (1998b). Active scene exploration vision system. Proceedings of the
International Conference on Neural Networks and Brain, Beijing, China, October, 27-30, pp. 543-546.
47. Pinto de Melo, C., Santos, F. L., Souza, B.B., Santos, M. S. and Ludermir, T. B. (1998) Polypyrrole Based Aroma Sensor. In
Proceedings of the International Conference on Synthetic Metals, France, June 1998.
48. F. Protti and G. Zaverucha, 1998 "On the Relations Between Acceptable Programs and Stratifiable Classes", Proc. of the 14th
Brazilian Symposium on Artificial Intelligence (SBIA-98), Porto Alegre, Brazil, November, LNAI 1515, Springer Verlag, pp.
141-150.
49. Santos, M. S., Ludermir, T. B., Santos, F.L., Pinto de Melo, C. and Gomes, J.E. (1998) Artificial Nose and Data Analysis using
Multi Layer Perceptron. In Proceedings of the International Conference on Data Mining. Computational Mechanics Publication,
Southampton, England, September 1998.