FTC 2016 - Future Technologies Conference 2016
6-7 December 2016 | San Francisco, United States
Bio-Inspired Animated Characters: A Mechanistic &
Cognitive View
Ben Kenwright
School of Media Arts and Technology
Southampton Solent University
United Kingdom
Abstract—Unlike traditional animation techniques, which attempt to copy human movement, ‘cognitive’ animation solutions mimic the brain’s approach to problem solving, i.e., a logical (intelligent) thinking structure. This procedural animation solution uses bio-inspired insights (modelling nature and the workings of the brain) to unveil a new generation of intelligent agents. As with any promising new approach, it raises hopes and questions; it is an extremely challenging task that offers a revolutionary solution, not just in animation but in a variety of fields, from intelligent robotics and physics to nanotechnology and electrical engineering. We consider questions such as: how does the brain coordinate muscle signals? How does the brain know which body parts to move? With all these activities happening in our brain, we examine how our brain ‘sees’ our body and how this can affect our movements. Through this understanding of the human brain and the cognitive process, models can be created to mimic our abilities, such as synthesizing actions that solve and react to unforeseen problems in a humanistic manner. We present an introduction to the concept of cognitive skills as an aid in finding and designing a viable solution. This helps us address principal challenges, such as: How do characters perceive the outside world (input), and how does this input influence their motions? What is required to emulate adaptive learning skills as seen in higher life-forms (e.g., a child’s cognitive learning process)? How can we control and ‘direct’ these autonomous procedural character motions? Finally, drawing from experimentation and literature, we suggest hypotheses for solving these questions and more. In summary, this article analyses the biological and cognitive workings of the human mind, specifically motor skills, reviewing cognitive psychology research related to movement in an attempt to produce more attentive behavioural characteristics. We conclude with a discussion on the significance of cognitive methods for creating virtual character animations, their limitations, and future applications.
Keywords—animation; life-like; movement; cognitive; bio-
mechanics; human; reactive; responsive; instinctual; learning;
adapting; biological; optimisation; modular; scalable
I. INTRODUCTION
Movement is Life. Animated films and video games are pushing the limits of what is possible.
In today’s virtual environments, animation tends to be data-driven [1], [2]. It is common to see animated characters using pre-recorded motion capture data, but rare to see characters driven by purely procedural solutions. With the dawn of Virtual Reality (VR) and Augmented Reality (AR) there is an ever-growing need for content: indistinguishably realistic virtual worlds created quickly and cost-effectively. While rendered scenes may appear highly realistic, the ‘movement’ of actively driven systems (e.g., biological creatures) is an open area of research [2], specifically the question of how to ‘automatically’ create realistic actions that mimic the real world, including the ability to learn and adapt to unforeseen circumstances in a life-like manner. While we are able to ‘record’ and ‘playback’ highly realistic animations in virtual environments, these have limitations. The motions are constrained to specific skeleton topologies; moreover, it is time-consuming and challenging to create motions for non-humans (creatures and aliens). What is more, recording animations for dangerous situations is impossible using motion capture (so they must be created manually through artistic intervention). Another key point to remember: in dynamically changing environments (e.g., video games), pre-recorded animations are unable to adapt automatically to changing situations.
This article attempts to solve these problems using biologically inspired concepts. We investigate neurological, cognitive, and behavioural methods. These methods provide inspirational solutions for creating adaptable models that synthesize life-like characteristics. We examine how the human brain ‘thinks’ to accomplish tasks and how it solves unforeseen problems. Exploiting this knowledge of how the brain functions, we formulate a system of conditions that attempts to replicate humanistic properties. We discuss novel approaches to solving these problems by questioning, analysing, and formulating a system based on human cognitive processes.
Cognitive vs Machine Learning Essentially, cognitive computing has the ability to reason creatively about data, patterns, situations, and extended models (dynamically). However, most statistics-based machine learning algorithms cannot handle problems much beyond what they have already seen and learned (matching). A machine learning algorithm has to be paired with cognitive capabilities to deal with truly new situations. Cognitive science therefore raises challenges for, and draws inspiration from, machine learning, while insights about the human mind inspire new directions for animation. Hence, cognitive computing, along with many other disciplines within the field of artificial intelligence, is gaining popularity, especially in character systems, and in the not-so-distant future will have a colossal impact on the animation industry.
Automation The ability to ‘automatically’ generate physically correct humanistic animations is revolutionary: removing and adding behavioural components (e.g., happy or sad); creating animations for different physical skeletons from a single set of training data; performing a diverse range of actions, for instance, getting up, jumping, dancing, and walking; and reacting to external interventions while completing an assigned task (i.e., combining motions with priorities). These problem-solving
skills are highly valued. We want character agents to learn and adapt to the situation. This includes (see the sketch after the list):
• physically based models (e.g., rigid bodies) controlled through internal joint torques (muscle forces)
• controllable, adjustable joint signals trained to accomplish specific actions
• the ability to learn and retain knowledge from past experiences
• embedded personal traits (personality)
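To make these requirements concrete, the sketch below shows one possible (hypothetical) data layout for such an agent; the class and field names are illustrative assumptions, not a prescribed implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Joint:
    """A single actuated joint of the physics-based model."""
    name: str
    torque_limit: float  # maximum muscle force (N*m) - illustrative
    signal: list = field(default_factory=list)  # trained control-signal samples

@dataclass
class CharacterAgent:
    """Hypothetical container tying the four requirements together."""
    joints: list                                  # physically based model, torque-driven
    skills: dict = field(default_factory=dict)    # trained signals per action
    memory: list = field(default_factory=list)    # retained past experiences
    traits: dict = field(default_factory=dict)    # personality, e.g. {'caution': 0.8}

    def remember(self, situation, solution):
        """Retain a solved situation for reuse (learning from experience)."""
        self.memory.append((situation, solution))

# Usage: a two-joint agent with a 'cautious' personality.
agent = CharacterAgent(
    joints=[Joint('hip', 120.0), Joint('knee', 80.0)],
    traits={'caution': 0.8},
)
agent.remember('get-up-from-front', {'hip': [0.1, 0.3], 'knee': [0.2, 0.4]})
```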
Problems We want the method to be automatic (i.e., not depend too heavily on pre-canned libraries). It should avoid simply playing back captured animations, instead parameterizing and re-using animations for different contexts (providing stylistic advice to the training algorithm). We want the solution to have the ability to adapt on-the-fly to unforeseen situations in a natural, life-like manner. Having said that, we also want to accommodate a diverse range of complex motions: not just balanced walking, but getting-up, climbing, and dancing actions. With a physics-based model at the heart of the system (i.e., not just a kinematic skeleton but joint torques/muscles), we are able to ensure a physically correct solution. While a real-world human skeleton has a huge number of degrees-of-freedom, we accept that a lower-fidelity model is able to represent the necessary visual characteristics (enabling reasonable computational overheads). Of course, even a simplified model possesses a large amount of ambiguity and many singularities. All things considered, we do not want to focus on the ‘actions’, but rather embrace the autonomous emotion, behaviour, and cognitive properties that sit on top of the motion (the intelligent learning component).
Fig. 1. Homunculus Body Map - The somato-sensory homunculus is a kind of map of the body [3], [4]. The distorted model/view of a person (see Figure 2) represents the amount of sensory information a body part sends to the central nervous system (CNS).
Geometric to Cognitive Synthesizing animated characters for virtual environments addresses the challenges of automating a variety of difficult development tasks. Early research combined geometric and inverse kinematic models to simplify key-framing. Physical models for animating particles, rigid bodies, deformable solids, fluids, and gases have offered the means to generate copious quantities of realistic motion through dynamic simulation. Bio-mechanical models employ simulated physics to automate the lifelike animation of animals with internal muscle actuators. In recent years, research in behavioral modeling has made progress towards ‘self-animating’ characters that react appropriately to perceived environmental stimuli [5], [6], [7], [8]. It has remained difficult, however, to instruct these autonomous characters so that they satisfy the programmer’s goals. As pointed out by Funge et al. [9], computer graphics solutions have evolved from geometric models to more logical mathematical approaches, and ultimately to cognitive models, as shown in Figure 3.
A large amount of work has been done on motion re-targeting (i.e., taking existing pre-recorded animations and modifying them for different situations) [10], [11], [12], and on targeted solutions that generate animations for specific situations, such as locomotion [13] and climbing [14]. Kinematic models do not take into account the physical properties of the model and, in addition, are only able to solve local problems (e.g., reaching and stepping, not complex rhythmic actions) [15], [16], [17]. Procedural models may not converge to natural-looking motions [18], [19], [20]. Cognitive models go beyond behavioral models, in that they govern what a character knows, how that knowledge is acquired, and how it can be used to plan actions. Cognitive models are applicable in instructing a new breed of highly autonomous, quasi-intelligent characters that are beginning to find use in interactive virtual environments. We decompose cognitive modeling into two related sub-tasks: (1) domain knowledge specification and (2) character instruction. This is reminiscent of the classic dictum from the field of artificial intelligence (AI) that tries to promote modularity of design by separating out knowledge from control.
knowledge + instruction = intelligent behavior (1)
Domain (knowledge) specification involves administering
knowledge to the character about its world and how that world
can change. Character instructions tell the character to try to
behave in a certain way within its world in order to achieve
specific goals. Like other advanced modeling tasks, both of
these steps can be fraught with difficulty unless developers
are given the right tools for the job.
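To illustrate this separation, the minimal sketch below keeps the world model (knowledge) apart from the instructions that pursue goals within it. The class names and the toy breadth-first planner are assumptions for exposition; the paper does not prescribe an implementation.

```python
# Minimal sketch of "knowledge + instruction = intelligent behavior".
# All names are illustrative assumptions, not the paper's implementation.

class DomainKnowledge:
    """What the character knows: world states and how actions change them."""
    def __init__(self):
        self.effects = {}  # (state, action) -> next state

    def teach(self, state, action, next_state):
        self.effects[(state, action)] = next_state

    def predict(self, state, action):
        # Unknown actions leave the state unchanged.
        return self.effects.get((state, action), state)

def instruct(knowledge, start, goal, actions, max_depth=5):
    """Character instruction: search the known world model for a plan."""
    frontier = [(start, [])]
    for _ in range(max_depth):
        next_frontier = []
        for state, plan in frontier:
            if state == goal:
                return plan
            for a in actions:
                next_frontier.append((knowledge.predict(state, a), plan + [a]))
        frontier = next_frontier
    return None  # no plan found within the search depth

# Usage: the character 'knows' lying -> sitting -> standing transitions.
k = DomainKnowledge()
k.teach('lying', 'push-up', 'sitting')
k.teach('sitting', 'stand', 'standing')
print(instruct(k, 'lying', 'standing', ['push-up', 'stand']))  # ['push-up', 'stand']
```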
Components We wanted to avoid a ‘single’ amalgamated algorithm (e.g., neural networks or connectionist models [21]). Instead, we investigate modular, dissectable learning models for adapting joint signals to accomplish tasks: for example, genetic algorithms [18] in combination with Fourier methods to subdivide complex actions into components (i.e., to extract and identify behavioural characteristics [22]); a sketch of this pairing follows. Joint motions are essentially signals, while the physics-based model ensures the generated motions are physically correct [23]. Given the advancements in parallel hardware, we envision the exploitation of massively parallel architectures as constitutional to the approach.
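The following sketch illustrates the pairing of a Fourier signal representation with heuristic search, where a simple mutate-and-select loop stands in for the full genetic algorithm of [18]; all parameters and the toy target trajectory are illustrative assumptions.

```python
import math, random

def joint_signal(coeffs, t, period=1.0):
    """Evaluate a truncated Fourier series: a rhythmic joint angle over time."""
    angle = coeffs[0]
    for k, (a, b) in enumerate(zip(coeffs[1::2], coeffs[2::2]), start=1):
        w = 2.0 * math.pi * k / period
        angle += a * math.sin(w * t) + b * math.cos(w * t)
    return angle

def fitness(coeffs, target):
    """Toy fitness: match a target trajectory sampled over one period."""
    ts = [i / 20.0 for i in range(20)]
    return -sum((joint_signal(coeffs, t) - target(t)) ** 2 for t in ts)

def evolve(target, n_coeffs=5, generations=200):
    """Mutate-and-select loop (a minimal stand-in for a genetic algorithm)."""
    best = [random.uniform(-1, 1) for _ in range(n_coeffs)]
    for _ in range(generations):
        child = [c + random.gauss(0, 0.1) for c in best]
        if fitness(child, target) > fitness(best, target):
            best = child
    return best

# Usage: recover a simple rhythmic 'walking' oscillation.
target = lambda t: 0.5 * math.sin(2.0 * math.pi * t)
coeffs = evolve(target)
print([round(c, 2) for c in coeffs])
```

The Fourier coefficients double as behavioural descriptors: low harmonics carry the gross rhythm, while the higher harmonics carry the ‘style’ components that [22] extracts.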
Contribution The novel contribution of this technical article is the amalgamation of methods from numerous fields, for instance, bio-mechanics, psychology, robotics, and computer animation, to address the question of ‘how can we make virtual characters solve unforeseen problems automatically and in a realistic manner?’ (i.e., mimic the human cognitive learning process).
Fig. 3. Timeline - Computer Graphics Cognitive Development Model (Geometric, Kinematic, Physical, Behavioural, and Cognitive) [9]. A simplified illustration of milestones over the years that have contributed novel animation solutions, emphasising the gradual transition from kinematic and physical techniques to intelligent behavioural models. [A] [24]; [B] [20]; [C] [19]; [D] [25]; [E] [26]; [F] [27]; [G] [28]; [H] [18]; [I] [29]; [J] [30]; [K] [31]; [L] [32]; [M] [33]; [N] [34]; [O] [35]; [P] [8]; [Q] [36]; [R] [7]; [S] [5]; [T] [6]; [U] [37]; [V] [38].
Fig. 2. Homunculus Body Map - Reinert et al. [4] presented a graphical paper on mesh deformation to visualize the somato-sensory information of the brain-body. The figure conveys the importance of the neuronal homunculus, i.e., the relation of body-part size to neural density in the brain.
II. BACKGROUND & RELATED WORK
Literature Gap The research in this article brings together numerous diverse concepts; while each is well studied in its individual field, taken as a whole and applied to virtual character animation there is a serious gap in the literature. Hence, we begin by exploring branches of research from cognitive psychology and bio-mechanics before carrying them across and combining them with computer animation and robotics concepts.
Autonomous Animation Solutions Formal approaches to animation, such as genetic algorithms [18], [19], [20], may not converge to natural-looking motions without additional work, such as artist intervention or constrained/complex fitness functions. This limits and constrains the ‘automation’ factor. We see autonomy as the emergence of salient, novel action discovery through the self-organisation of high-level, goal-directed orders. The behavioural aspect emerges from the physical (or virtual) constraints and fundamental low-level mechanisms. We adapt bodily motor controls (joint signals) from randomness to purposeful actions based on cognitive development (Lee [39] referred to this process as evolving from babbling to play). Interestingly, this intrinsic method of behavioural learning has also been demonstrated in biological models (known as action discovery) [40].
Navigation/Controllers/Mechanical Synthesizing human movement that mimics real-world behaviours ‘automatically’ is a challenging and important topic. Typically, reactive approaches for navigation and pursuit [24], [41], [42], [27] may not readily accommodate task objectives, sensing costs, and cognitive principles. A cognitive solution adapts and learns (finds answers to unforeseen problems).
Expression/Emotion Humans exhibit a wide variety of
expressive actions, which reflect their personalities, emotions,
and communicative needs [25], [26], [28]. These variations
often influence the performance of simpler gestural or facial
movements.
Components The essential components are:
• Fourier - subdivide actions into components, extract
and identify behavioural characteristics [22]
• Heuristic Optimisation [18] - adapting non-linear
signals (with purpose)
• Physics-Based [43], [23] - torques and forces to
control the model
• Parallel Architecture - exploit massively parallel pro-
cessor architecture, such as, the graphical processing
unit (GPU)
• Randomness - inject awareness and randomness (blood flow, respiratory signals, background noise) [44], [45]; a sketch of this noise layering follows
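As a concrete illustration of the randomness component, the sketch below layers small pseudo-random oscillations, in the spirit of [44], on top of a base joint signal so that an idle pose never looks frozen; the frequency bands and amplitudes are illustrative guesses, not measured values.

```python
import math, random

random.seed(42)
# Random phase offsets so each noise band drifts independently.
_PHASES = [random.uniform(0, 2 * math.pi) for _ in range(3)]

def physiological_noise(t):
    """Layered low-amplitude oscillations standing in for breathing,
    blood flow, and background neural noise (illustrative bands)."""
    bands = [(0.25, 0.020),   # ~breathing rate (Hz), amplitude (rad)
             (1.10, 0.006),   # ~heartbeat-scale jitter
             (4.00, 0.002)]   # fast background tremor
    return sum(a * math.sin(2 * math.pi * f * t + p)
               for (f, a), p in zip(bands, _PHASES))

def joint_angle(t):
    """Base idle pose plus life-like noise."""
    base = 0.1 * math.sin(2 * math.pi * 0.5 * t)  # slow idle sway
    return base + physiological_noise(t)

# Sample a few frames of the 'living' idle motion (30 fps).
print([round(joint_angle(i / 30.0), 4) for i in range(5)])
```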
Brain Body Map As shown in Figure 1, we are able to map the mind’s awareness of different body parts. This is known as the homunculus body map. So why is it important for movement? It helps in understanding the neural mechanisms of human sensori-motor coordination and their cognitive connection. While we are complex biological organisms, we need feedback and information (input) to be able to move and thus live (i.e., movement is life). The motor part of the brain relies on information from the sensory systems, and the control signals change dynamically depending on our state. Simply put, the better the central representation, the better the motor output will be, and the more life-like and realistic the final animations will be. Our motor systems need to know the state of our body. If the situation is not known or is unclear, the movements will not be good, because the motor systems will be ‘afraid’ to go all out. This is very similar to driving a car on an unknown road in misty conditions with only an old, worn, worm-eaten map: we drive slowly and tensely to avoid hitting something or leaving the road. This is safety behaviour: safe, but taxing on the system.
Cognitive Science The cognitive science of motion is an interdisciplinary scientific study of the mind and its processes. We examine what motion cognition is, what it does, and how it works. This includes research into intelligence and behaviour, especially focusing on how information is represented, processed, and transformed (in faculties such as perception, language, memory, attention, reasoning, and emotion) within nervous systems (of humans or other animals) and machines (e.g., computers). Cognitive motion science spans multiple research disciplines, including robotics, psychology, artificial intelligence, philosophy, neuroscience, linguistics, and anthropology, and multiple levels of analysis, from low-level learning and decision mechanisms to high-level logic and planning; from neural circuitry to modular brain organization. However, the fundamental concept of cognitive motion is the understanding of instinctual thinking in terms of the structural mind and the computational procedures that operate on those structures. Importantly, cognitive solutions are not only adaptive but also anticipatory and prospective; that is, they need to have (by virtue of their phylogeny) or develop (by virtue of their ontogeny) some mechanism to rehearse hypothetical scenarios.
Neural Networks and Cognitive Simulators Computational neuroscience [46], [29], [47] offers biologically inspired neural models for simulating information processing, cognition, and behaviour. The majority of this research has focused on modelling ‘isolated components’. Cognitive architectures [48] use biologically based models for goal-driven learning and behaviours. Publicly available neural network simulators also exist [49].
Motor Skills Our brain sees the world in ‘maps’. The maps are distorted, depending on how we use each sense, but they are still maps. Almost every sense has a map; most senses have multiple maps. We have a ‘tonotopic’ map, a map of sound frequency from high pitched to low pitched, which is how our brain processes sound. We have a ‘retinotopic’ map, a reproduction of what we are seeing, which is how the brain processes sight. Our brain loves maps. Most importantly, we have maps of our muscles. The mapping from sensory information to motor movement is shown in Figure 1. For muscle movements, the finer and more detailed the movements are, the more brain space those muscles have. Hence, we can address which muscles take priority and under what circumstances (i.e., given the sensory input). This also opens the door to lots of interesting and exciting questions, such as: what happens to the maps if we lose a body part, such as a finger?
Psychology Aspect A number of interesting facts are hidden in the psychology of movement that are often taken for granted or overlooked. Incorporating them in a dynamic system allows us to solve a number of problems. For example, we often observe movements which are slightly different from each other yet possess similar characteristics. The work by Armstrong [50] showed that when a movement sequence is sped up as a unit, the overall relative movement or ‘phasing’ remains constant. This led to the discovery of relative forces, i.e., the relationship among the forces in the muscles participating in the action.
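The sketch below demonstrates Armstrong's observation in code: scaling a movement's duration as a unit leaves the relative timing (phasing) of its sub-movements unchanged. The keyframe data are invented for illustration.

```python
def phasing(keyframes):
    """Relative timing of events within a movement (0..1 of total duration)."""
    start, end = keyframes[0][0], keyframes[-1][0]
    return [round((t - start) / (end - start), 3) for t, _ in keyframes]

def speed_up(keyframes, factor):
    """Play the whole sequence 'as a unit' at a different speed."""
    return [(t / factor, pose) for t, pose in keyframes]

# Invented reach-and-grasp keyframes: (time in seconds, pose label).
reach = [(0.0, 'rest'), (0.4, 'extend'), (0.9, 'grasp'), (1.2, 'retract')]

print(phasing(reach))                 # [0.0, 0.333, 0.75, 1.0]
print(phasing(speed_up(reach, 2.0)))  # identical: phasing is preserved
```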
How the Brain Controls Muscles Let us pretend that we want to go to the kitchen, because we are hungry. First, an area in our brain called the parietal lobe comes up with lots of possible plans. We could get to the kitchen by skipping, sprinting, uncoordinated somersaulting, or walking. The parietal lobe sends these plans to another brain area called the basal ganglia. The basal ganglia picks ‘walking’ as the best plan (with uncoordinated somersaulting a close second) and tells the parietal lobe. The parietal lobe confirms it, and sends the ‘walk to kitchen’ plan down the spinal cord to the muscles. The muscles move. As they move, our cerebellum kicks into high gear, making sure we turn right before we crash into the kitchen counter, and that we jump over the dog. Part of the cerebellum’s job is to make quick changes to muscle movements while they are happening (see Figure 4).
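Mapped onto a character system, this propose-select-correct loop might look like the hypothetical sketch below; the candidate plans, cost weights, and correction rule are invented for illustration.

```python
# Hypothetical proposal/selection loop mirroring the parietal lobe (propose),
# basal ganglia (select), and cerebellum (correct on the fly).

PLANS = {                 # candidate plans with invented costs
    'walk':       {'energy': 1.0, 'risk': 0.1},
    'sprint':     {'energy': 3.0, 'risk': 0.6},
    'skip':       {'energy': 1.8, 'risk': 0.3},
    'somersault': {'energy': 4.0, 'risk': 0.9},
}

def select_plan(plans, w_energy=1.0, w_risk=5.0):
    """'Basal ganglia': pick the plan with the lowest weighted cost."""
    cost = lambda p: w_energy * p['energy'] + w_risk * p['risk']
    return min(plans, key=lambda name: cost(plans[name]))

def correct(step, obstacle_ahead):
    """'Cerebellum': small on-line adjustment while the motion runs."""
    return step + (' + side-step' if obstacle_ahead else '')

plan = select_plan(PLANS)  # -> 'walk'
for frame, obstacle in enumerate([False, True, False]):
    print(frame, correct(plan, obstacle))
```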
Visualizing the Solution (Offline) We visualize a goal. In our mind, over and over and over again. We picture the movements. We see ourselves catching that ball, dancing that toe touch, swimming that breaststroke. We watch it in the movie of our mind whenever we can and scrutinize it. Is our wrist turning properly? Is our kick high enough? If not, we change the picture and see ourselves doing the movement perfectly. As far as our parietal lobe and basal ganglia are concerned, this is exactly the same as doing the movement. When we visualize the movement, we activate all those planning pathways. Those neurons fire, over and over again, which is exactly what needs to happen for our synapses to strengthen. In other words, by picturing the movements, we are actually learning them. This makes it easier for the parietal lobe to send the right message to the muscles. So when we actually try to perform a movement, we will get better, faster; we will need less physical practice to be good at sports. This does not work for general fitness (i.e., increased strength) - we still need to train our muscles, heart, and lungs to become strong. However, it is good for skilled movements: basketball lay-ups, gymnastics routines. For improved technique, visualization works. We train our brain, which makes it easier to control our muscles. What does this have to do with character simulations? We are able to mimic the ‘visualization’ approach by having our system constantly run simulations in the background. Exploit all that parallel processing power. Run large numbers of simulations
Fig. 4. Brain and Actions - The phases (left-to-right) the human brain goes through - from thinking about doing a task to accomplishing it (e.g., walking to
the kitchen to get a drink from the cupboard).
Fig. 5. Overview - High-level view of the interconnected components and their justifications. (a) We have a current (starting) state and a final state; the unknown middle transitioning states are what we are searching for. The transition is a dynamic problem specific to the situation; for instance, the terrain may vary (slopes or crawling under obstacles). (b) A heuristic model trains a set of trigonometric functions (e.g., a Fourier series) to create rhythmic motions that accomplish the task, the low-level task (fitness function) being a simple ‘overall centre-of-mass trajectory’. (c) With (b) on its own, the solution is plagued with issues, such as how to steer or control the type of motion and whether the final motion is ‘humanistic’ or ‘life-like’. Hence, we have a ‘pre-defined’ library of motions that are chosen based on the type of animation we are leaning towards (standard walk or hopping). The information from the animation is fed back into the fitness function in (b), providing a multi-objective problem: centre of mass, end-effectors, and frequency components for ‘style’. (d) The solution for each problem is ‘stored’ in a sub-bank of the animation library and used for future problems. This builds upon previous knowledge to help solve new problems faster in a coherent manner (e.g., previous experiences will cause different characters to create slightly different solutions over time).
one or two seconds in advance and see where the result leads. If the character's foot is a few centimetres forward, or if we use more torque on the knee muscle, how does this compare with the ideal animation we are aiming for? As we find solutions, we store them and improve upon them each time a similar situation arises.
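A minimal sketch of this ‘mental rehearsal’ loop is given below; simulate() is a stand-in for stepping the physics model forward, and the candidate adjustments, scoring, and situation signature are illustrative assumptions.

```python
import random

random.seed(1)

def simulate(state, adjustment, horizon=60):
    """Stand-in for rolling the physics model forward ~1-2 s (60 frames).
    Returns a toy 'distance from ideal animation' score (lower is better)."""
    return abs(state + adjustment - 1.0) + random.uniform(0, 0.05)

def rehearse(state, candidates, library):
    """Run many background simulations; keep and reuse the best found."""
    scored = [(simulate(state, adj), adj) for adj in candidates]
    best_score, best_adj = min(scored)
    key = round(state, 1)                      # coarse situation signature
    if key not in library or best_score < library[key][0]:
        library[key] = (best_score, best_adj)  # store / improve the solution
    return library[key][1]

library = {}                                   # grows as situations recur
candidates = [i * 0.1 for i in range(-5, 6)]   # e.g., extra knee-torque scale
for frame in range(3):
    adj = rehearse(state=0.4, candidates=candidates, library=library)
    print(f'frame {frame}: chosen adjustment {adj:+.1f}')
```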
Physically Correct Model Our solution controls a physics-based model using joint torques, as in the real world. This mimics reality more closely: not only do we require the model to move in a realistic manner, but it also has to control joint muscles in sufficient ratios to achieve the final motion (e.g., balance control). Adjusting the physical model, for instance, muscle strength or leg lengths, allows the model to retrain to achieve the action.
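The article does not prescribe a specific actuation scheme; as one common assumption, a proportional-derivative (PD) servo at each joint converts a desired angle into a torque, clamped to a muscle-strength limit:

```python
def pd_torque(theta, omega, theta_target, kp=300.0, kd=30.0, limit=100.0):
    """Proportional-derivative joint servo: the torque pulls the joint toward
    the target angle while damping its velocity, clamped to muscle strength.
    theta, omega: current joint angle (rad) and angular velocity (rad/s).
    Gains and limit are illustrative values, not tuned constants."""
    torque = kp * (theta_target - theta) - kd * omega
    return max(-limit, min(limit, torque))

# Tiny forward-Euler check on a 1-DOF joint (inertia = 1 kg*m^2).
theta, omega, dt = 0.0, 0.0, 1.0 / 120.0
for _ in range(240):  # two simulated seconds
    tau = pd_torque(theta, omega, theta_target=0.5)
    omega += tau * dt
    theta += omega * dt
print(round(theta, 3))  # settles near the 0.5 rad target
```

Under this assumption, retraining after changing the physical model (muscle strength, leg lengths) amounts to re-tuning quantities such as kp, kd, and the torque limit.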
(Get Up) Rise Animations Animation is a diverse and complex area, so rather than trying to create solutions for every possible situation, we focus on a particular set of actions, namely rising movements. Rise animations require a suitably diverse range of motor skills. We formulate a set of tasks to evaluate our algorithm, such as getting up from the front, getting up from the back, getting up on uneven ground, and so on. The model also encapsulates underlying properties, such as visual attention, expressive qualities (tired, unsure, eager), and human expressiveness. We consider a number of factors, such as inner and outer information, emotion, personality, and primary and secondary goals.
III. OVERVIEW
High Level Elements The system is driven by four key sources of information:
1) internal information (e.g., logistics of the brain, experience, mood)
2) the aim or action
3) external input (e.g., environmental, contacts, comfort, lighting)
4) memory and information retrieval (e.g., parallel models and associative memory)
Motion Capture Data (Control) We have a library of actions as reference material for look-up and comparison. We need some form of ‘control’ and ‘input’ to steer the characters to perform actions in a particular way; for example, instead of the artist creating a large look-up array of animations for every single possible situation, we provide fundamental poses and simple pre-recorded animations to ‘guide’ the learning algorithm. Search models are able to explore their diverse search-space to reach the goal (e.g., heuristically adjusting joint muscles); however, a reference ‘library’ allows us to steer the solution towards what is ‘natural-looking’, since there are a wide number of ways of accomplishing a task - but which are ‘normal’ and which are ‘strange’ and uncomfortable? The key points we concentrate on are (a sketch of a steered fitness function follows the list):
1) the animation requires basic empirical information (e.g., reference key-poses) from human movement and cognitive properties;
2) the movement should not simply replay pre-recorded motions, but adapt and modify them to different contexts;
3) the solution must react to disturbances and changes in the world while completing the given task;
4) the senses provide unique pieces of information, which should be combined with internal personality and emotion mechanisms to create the desired actions and/or re-actions.
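One hypothetical way to encode these points in a fitness function is sketched below: the task objective is combined with a similarity term against reference key-poses, steering the search towards ‘natural-looking’ solutions. The weights and pose encodings are invented.

```python
def pose_distance(pose_a, pose_b):
    """Mean squared difference between two joint-angle vectors."""
    return sum((a - b) ** 2 for a, b in zip(pose_a, pose_b)) / len(pose_a)

def steered_fitness(candidate, task_error, reference_library,
                    w_task=1.0, w_style=0.4):
    """Lower is better: task achievement plus closeness to the nearest
    reference key-pose (the 'natural-looking' steering term)."""
    style_error = min(pose_distance(candidate, ref)
                      for ref in reference_library)
    return w_task * task_error + w_style * style_error

# Invented 3-joint poses: [hip, knee, ankle] angles in radians.
reference_library = [[0.2, -0.4, 0.1],   # key-pose: mid get-up
                     [0.0,  0.0, 0.0]]   # key-pose: standing
candidate = [0.25, -0.35, 0.05]
print(steered_fitness(candidate, task_error=0.12,
                      reference_library=reference_library))
```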
Blending/Adapting Animation Libraries During motor skill acquisition, the brain learns to map between ‘intended’ limb motion and the requisite muscular forces. We propose that regions (i.e., particular body segments) of the animation library are blended together to find a solution that is aesthetically pleasing (i.e., based upon pre-recorded motions instead of random searching); a sketch of per-segment blending follows.
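A minimal per-segment blend might look like the following sketch; the segment grouping, joint indices, and weights are invented for illustration.

```python
def blend_segments(pose_a, pose_b, segment_weights, segments):
    """Blend two library poses per body segment: weight 0 keeps pose_a,
    weight 1 takes pose_b, so e.g. the legs can follow one clip while
    the arms follow another."""
    out = list(pose_a)
    for name, joint_ids in segments.items():
        w = segment_weights.get(name, 0.0)
        for j in joint_ids:
            out[j] = (1.0 - w) * pose_a[j] + w * pose_b[j]
    return out

segments = {'legs': [0, 1], 'arms': [2, 3]}  # joint indices per segment
walk  = [0.30, -0.50, 0.10, -0.10]           # invented joint angles
reach = [0.05, -0.05, 0.80, -0.60]

# Legs keep walking while the arms take 80% of the reach clip.
print(blend_segments(walk, reach, {'legs': 0.0, 'arms': 0.8}, segments))
```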
Virtual Infant (or Baby) Imagine a baby with no knowledge or understanding. As explained, we take a bottom-up view: starting with nothing and educating the system to mimic humanistic (organic) qualities, using learning algorithms to tune skeletal motor signals to accomplish high-level tasks. As with a child, this is a ‘trial-and-error’ approach to learning - exploring what is possible and impossible to eventually reach a solution. It requires continuously integrating corrective guidance (as with a child: without knowing what is right and wrong, the child will never learn). This guidance comes through fitness criteria and example motion clips (as children do - see and copy, or try to), performing multiple training exercises over and over again to learn skills, and having the algorithm actively improve (e.g., proprioception - how the brain understands the body). As we learn to perform motions, there are thousands of small adjustments that our body as a whole makes every millisecond to ensure an optimal outcome (quickest, most energy-efficient, closest to the intended style), constantly monitoring the body by sending and receiving sensory information (e.g., to and from every joint, limb, and contact). Over time, this experience strengthens the model's ability to accomplish tasks more quickly and efficiently.
Stability Autonomous systems have ‘stability’ issues (i.e., they operate far from equilibrium) [51]. Due to the dynamic nature of a character's actions, characters are dependent on their environment (external factors) and require interaction; these are open processes (exhibiting closed self-organization). However, we can measure stability in relation to reference poses, energy, and balance to draw conclusions about the effectiveness of the learned solution.
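A simple (assumed) composite measure combining these three cues - pose deviation, energy, and balance - might be sketched as:

```python
def stability_score(pose, ref_pose, energy, com_x, support_min, support_max,
                    w_pose=1.0, w_energy=0.01, w_balance=2.0):
    """Lower is more stable: deviation from a reference pose, energy use,
    and how far the centre of mass (com_x) drifts outside the support region.
    All weights are illustrative assumptions."""
    pose_dev = sum((a - b) ** 2 for a, b in zip(pose, ref_pose))
    overshoot = max(0.0, support_min - com_x, com_x - support_max)
    return w_pose * pose_dev + w_energy * energy + w_balance * overshoot

# Invented numbers: near the reference pose, COM just inside the feet.
print(stability_score(pose=[0.1, -0.2], ref_pose=[0.0, -0.25],
                      energy=35.0, com_x=0.04,
                      support_min=-0.1, support_max=0.1))
```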
Memory The character learns through explorative searching (i.e., with quantitative measures for comfort, security, and satisfaction). While a character may find an ‘optimal’ solution that meets the specified criteria, it will continue to expand its memory repertoire of actions. This is a powerful component, increasing the efficiency of achieving a goal (e.g., the development of walking and the retention of balanced motion in different circumstances become more effective). The view that exploration and retention (memory) are crucial to ontogenetic development is supported by research findings in developmental psychology [52]. Hofsten [53] explains that it is not necessarily success at achieving task-specific goals that drives development, but the discovery of new ways of doing things (through exploration). This forms a solution that builds upon ‘prior knowledge’, with an increased reliance on machine learning and statistical evaluation (i.e., for tuning the system parameters), leading to a model that constantly acquires new knowledge for both current and future tasks; a sketch of such a memory repertoire follows.
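The sketch below gives one hypothetical shape for this repertoire: solved situations are stored with their solutions, and the nearest stored situation is recalled to warm-start the search for a new one. The situation signatures and distance measure are invented.

```python
class MotionMemory:
    """Repertoire of solved situations -> solutions (joint-signal params)."""
    def __init__(self):
        self.entries = []  # list of (situation_vector, solution)

    def store(self, situation, solution):
        self.entries.append((situation, solution))

    def recall(self, situation):
        """Return the solution of the closest previously seen situation,
        to warm-start the search instead of beginning from scratch."""
        if not self.entries:
            return None
        dist = lambda s: sum((a - b) ** 2 for a, b in zip(s, situation))
        return min(self.entries, key=lambda e: dist(e[0]))[1]

# Invented situation signatures: [ground slope, obstacle height].
memory = MotionMemory()
memory.store([0.0, 0.0], 'flat-ground get-up parameters')
memory.store([0.3, 0.0], 'sloped-ground get-up parameters')
print(memory.recall([0.25, 0.05]))  # reuses the sloped-ground solution
```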
IV. COMPLEXITY
There is scope for experimenting with optimisation algorithms (i.e., different fitness criteria for specific situations); highly dynamic animations (jumping or flying through the air); close-proximity simulations (dancing, wrestling, getting in/out of a vehicle); and exploring ‘beyond’ human to creative creatures (multiple legs and arms). Instead of aesthetic qualities, we can investigate ‘interesting’ behaviours, as the system and training evolve towards a ‘control language’ for giving orders. The approach is not limited to generic motions (i.e., walking and jumping), but includes the ability to learn and search for solutions (whatever the method). We can introduce risk, harm, and comfort to ‘limit’ the solutions to be more ‘human’ and organic. We avoid unsupervised learning since it leads to random, unnatural, and uncontrollable motions; simple examples (i.e., training data) steer the learning. We gather knowledge and extend the memory of experiences to help solve future problems (learning from past problems). This method is very promising for building organic real-life systems (handling unpredictable situations in a logical, natural manner). The technique is scalable and generalizes across topologies; learned solutions can be shared and transferred between characters (i.e., accelerated learning through sharing).
Fig. 6. Complexity - As animation and behavioural character models become increasingly complex, it becomes more challenging and time consuming to customize and create solutions for specific environments/situations.
A physically correct, self-adapting, learning animation system that mimics human cognitive mechanics is a complex task embodying a wide range of biologically based concepts. We take a bottom-up approach (i.e., starting with nothing), which forms a foundation on which greater detail can be added. As the model grows in complexity and detail, more expressive and autonomous animations appear, leading on to collaborative agents, i.e., social learning and interaction (behaviour in groups). The enormous complexity of the human brain and its ability to problem-solve cannot be underestimated; however, through simple approximations we are able to develop autonomous animation models that embody and possess humanistic qualities, such as cognitive and behavioural learning abilities.
We tackle a complex problem: our movement allows us to express a vast array of behaviours in addition to solving physical problems, such as balance and locomotion. We have only scratched the surface of what is possible - constructing and explaining a simple solution (for a relatively complex neuro-behavioural model) - to investigate a modular, extendible framework for synthesizing human movement (i.e., mapping functionality, problem solving, mapping of brain to anatomy, and learning/experience).
Body Language The way we ‘move’ says a lot. How we stand and how we walk conveys ‘emotional’ details, and we humans are very good at spotting these underlying characteristics. These fundamental physiological motions are important in animation if we want to synthesize life-like characters. While these subtle underlying motions are aesthetic (i.e., sitting on top of the physical action or goal), they are nonetheless equally important. Emotional synthesis is often classified as a low-level biological process [54]: chemical reactions in the brain for stress and pain correlate with and modulate various behaviours (including motor control), with a vast array of effects influencing sensitivity, mood, and emotional responses. We have taken the view that motion and learning are driven by a high-level cognitive model (avoiding the various underlying physiological and chemical parameters).
Input (Sensory Data) The brain has a vast array of sensory data - sight, sound, temperature, smell, and touch - that feeds into the final decision. Technically, our simple assumption is analogous to a blind person taking lots of short exploratory motions to discover how to accomplish a task. We reduce the skeleton complexity compared to a full human model (for numerical tractability). The input is physical information from the environment, such as contacts, centre of mass, and end-effector locations; the output is motor control signals, shaped by behavioural selection, the example motion library, emotion, and fitness evaluation.
V. CONCLUSION
We have specified a set of simple constraints to steer and control the animation (e.g., get-up poses). We developed a model based on biology, cognitive psychology, and adaptive heuristics to create animations that control a physics-based skeleton, adapting and re-training parameters to meet changing situations (e.g., different physical and environmental information). We inject personality and behavioural components to create animations that capture life-like qualities (e.g., mood, tiredness, and fear).
This article suggests several possibilities for future work. It would be valuable to test specific hypotheses and assumptions by constructing more focused and rigorous experiments; however, these hypotheses are hard to state precisely, since we are trying to model humanistic cognitive abilities. A practical approach might be to directly compare and contrast real-world and synthesized situations: for instance, an experiment with an actor dealing with difficult situations, such as stepping over objects and walking under bridges. Younger children approach such problems in a different way - similar to our computer agent - learning through trial and error, behaving less mechanically and more consciously. Further, communication from a director (e.g., example animations and poses for control) might lead to more formal command languages. This would help us learn precisely what sorts of commands are needed and when they should be issued. Finally, we could go further by developing richer cognitive models and control languages for describing motion and style, to solve questions not yet even imagined.
We have taken a simplified view of cognitive modelling. We will continue to see cognitive architectures develop over the coming years that are capable of adapting and self-modifying, both in terms of parameter adjustment and of phylogenetic skills. This will come through learning and, more importantly, through the modification of the very structure and organization of the system itself (memory and algorithm), so that it is capable of altering its system dynamics based on experience, expanding its repertoire of actions, and thereby adapting to new circumstances [52]. A variety of learning paradigms will need to be developed to accomplish these goals, including, but not necessarily limited to, unsupervised, reinforcement, and supervised learning.
Learning through watching Providing the ability to translate 2D video images into 3D animation sequences would allow cognitive learning algorithms to constantly ‘watch’ and learn from people: watching people in the street walking and avoiding one another, climbing over obstacles, and interacting, in order to reproduce similar characteristics virtually.
REFERENCES
[1] D. Vogt, S. Grehl, E. Berger, H. B. Amor, and B. Jung, “A data-driven method for real-time character animation in human-agent interaction,” in Intelligent Virtual Agents. Springer, 2014, pp. 463–476.
[2] T. Geijtenbeek and N. Pronost, “Interactive character animation using simulated physics: A state-of-the-art review,” in Computer Graphics Forum, vol. 31, no. 8. Wiley Online Library, 2012, pp. 2492–2515.
[3] E. N. Marieb and K. Hoehn, Human Anatomy & Physiology. Pearson Education, 2007.
[4] B. Reinert, T. Ritschel, and H.-P. Seidel, “Homunculus warping: Conveying importance using self-intersection-free non-homogeneous mesh deformation,” Computer Graphics Forum (Proc. Pacific Graphics 2012), vol. 5, no. 31, 2012.
[5] T. Conde and D. Thalmann, “Learnable behavioural model for autonomous virtual agents: low-level learning,” in Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems. ACM, 2006, pp. 89–96.
[6] F. Amadieu, C. Mariné, and C. Laimay, “The attention-guiding effect and cognitive load in the comprehension of animations,” Computers in Human Behavior, vol. 27, no. 1, 2011, pp. 36–40.
[7] E. Lach, “fact-animation framework for generation of virtual characters behaviours,” in Information Technology, 2008. IT 2008. 1st International Conference on. IEEE, 2008, pp. 1–4.
[8] J.-S. Monzani, A. Caicedo, and D. Thalmann, “Integrating behavioural animation techniques,” in Computer Graphics Forum, vol. 20, no. 3. Wiley Online Library, 2001, pp. 309–318.
[9] J. Funge, X. Tu, and D. Terzopoulos, “Cognitive modeling: knowledge, reasoning and planning for intelligent characters,” in Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques. ACM Press/Addison-Wesley, 1999, pp. 29–38.
[10] S. Tak and H.-S. Ko, “A physically-based motion retargeting filter,” ACM Transactions on Graphics (TOG), vol. 24, no. 1, 2005, pp. 98–117.
[11] S. Baek, S. Lee, and G. J. Kim, “Motion retargeting and evaluation for VR-based training of free motions,” The Visual Computer, vol. 19, no. 4, 2003, pp. 222–242.
[12] J.-S. Monzani, P. Baerlocher, R. Boulic, and D. Thalmann, “Using an intermediate skeleton and inverse kinematics for motion retargeting,” in Computer Graphics Forum, vol. 19, no. 3. Wiley Online Library, 2000, pp. 11–19.
[13] B. Kenwright, R. Davison, and G. Morgan, “Dynamic balancing and walking for real-time 3D characters,” in Motion in Games. Springer, 2011, pp. 63–73.
[14] C. Balaguer, A. Giménez, J. M. Pastor, V. Padron, and M. Abderrahim, “A climbing autonomous robot for inspection applications in 3D complex environments,” Robotica, vol. 18, no. 3, 2000, pp. 287–297.
[15] K. Grochow, S. L. Martin, A. Hertzmann, and Z. Popović, “Style-based inverse kinematics,” in ACM Transactions on Graphics (TOG), vol. 23, no. 3. ACM, 2004, pp. 522–531.
[16] D. Tolani, A. Goswami, and N. I. Badler, “Real-time inverse kinematics techniques for anthropomorphic limbs,” Graphical Models, vol. 62, no. 5, 2000, pp. 353–388.
[17] T. B. Moeslund, A. Hilton, and V. Krüger, “A survey of advances in vision-based human motion capture and analysis,” Computer Vision and Image Understanding, vol. 104, no. 2, 2006, pp. 90–126.
[18] B. Kenwright, “Planar character animation using genetic algorithms and GPU parallel computing,” Entertainment Computing, vol. 5, no. 4, 2014, pp. 285–294.
[19] K. Sims, “Evolving virtual creatures,” in Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques. ACM, 1994, pp. 15–22.
[20] J. T. Ngo and J. Marks, “Spacetime constraints revisited,” in Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques. ACM, 1993, pp. 343–350.
[21] J. A. Feldman and D. H. Ballard, “Connectionist models and their properties,” Cognitive Science, vol. 6, no. 3, 1982, pp. 205–254.
[22] M. Unuma, K. Anjyo, and R. Takeuchi, “Fourier principles for emotion-based human figure animation,” in Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques. ACM, 1995, pp. 91–96.
[23] P. Faloutsos, M. Van de Panne, and D. Terzopoulos, “Composable controllers for physics-based character animation,” in Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques. ACM, 2001, pp. 251–260.
[24] H. Noser, O. Renault, D. Thalmann, and N. M. Thalmann, “Navigation for digital actors based on synthetic vision, memory, and learning,” Computers and Graphics, vol. 19, no. 1, 1995, pp. 7–19.
[25] H. H. Vilhjálmsson, “Autonomous communicative behaviors in avatars,” Ph.D. dissertation, Massachusetts Institute of Technology, 1997.
[26] J. Cassell, H. H. Vilhjálmsson, and T. Bickmore, “BEAT: the behavior expression animation toolkit,” in Life-Like Characters. Springer, 2004, pp. 163–185.
[27] X. Tu and D. Terzopoulos, “Artificial fishes: physics, locomotion, perception, behavior,” in Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques. ACM, 1994, pp. 43–50.
[28] J. Cassell, C. Pelachaud, N. Badler, M. Steedman, B. Achorn, T. Becket, B. Douville, S. Prevost, and M. Stone, “Animated conversation: rule-based generation of facial expression, gesture & spoken intonation for multiple conversational agents,” in Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques. ACM, 1994, pp. 413–420.
[29] X. Yao, “Evolving artificial neural networks,” Proceedings of the IEEE, vol. 87, no. 9, 1999, pp. 1423–1447.
[30] H. A. ElMaraghy, “Kinematic and geometric modelling and animation of robots,” in Proc. of Graphics Interface ’86. ACM, 1986, pp. 15–19.
[31] C. W. Reynolds, “Computer animation with scripts and actors,” in ACM SIGGRAPH Computer Graphics, vol. 16, no. 3. ACM, 1982, pp. 289–296.
[32] N. Burtnyk and M. Wein, “Interactive skeleton techniques for enhancing motion dynamics in key frame animation,” Communications of the ACM, vol. 19, no. 10, 1976, pp. 564–569.
[33] C. Csuri, R. Hackathorn, R. Parent, W. Carlson, and M. Howard, “Towards an interactive high visual complexity animation system,” in ACM SIGGRAPH Computer Graphics, vol. 13, no. 2. ACM, 1979, pp. 289–299.
[34] R. A. Goldstein and R. Nagel, “3-D visual simulation,” Simulation, vol. 16, no. 1, 1971, pp. 25–31.
[35] A. Bruderlin and T. W. Calvert, “Goal-directed, dynamic animation of human walking,” ACM SIGGRAPH Computer Graphics, vol. 23, no. 3, 1989, pp. 233–242.
[36] I. Mlakar and M. Rojc, “Towards ECAs animation of expressive complex behaviour,” in Analysis of Verbal and Nonverbal Communication and Enactment. The Processing Issues. Springer, 2011, pp. 185–198.
[37] M. Soliman and C. Guetl, “Implementing intelligent pedagogical agents in virtual worlds: Tutoring natural science experiments in OpenWonderland,” in Global Engineering Education Conference (EDUCON), 2013 IEEE. IEEE, 2013, pp. 782–789.
[38] J. Song, X.-w. Zheng, and G.-j. Zhang, “Method of generating intelligent group animation by fusing motion capture data,” in Ubiquitous Computing Application and Wireless Sensor. Springer, 2015, pp. 553–560.
[39] M. H. Lee, “Intrinsic activity: from motor babbling to play,” in Development and Learning (ICDL), 2011 IEEE International Conference on, vol. 2. IEEE, 2011, pp. 1–6.
[40] K. Gurney, N. Lepora, A. Shah, A. Koene, and P. Redgrave, “Action discovery and intrinsic motivation: a biologically constrained formalisation,” in Intrinsically Motivated Learning in Natural and Artificial Systems. Springer, 2013, pp. 151–181.
[41] W.-Y. Lo, C. Knaus, and M. Zwicker, “Learning motion controllers with adaptive depth perception,” in Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation. Eurographics Association, 2012, pp. 145–154.
[42] C. W. Reynolds, “Flocks, herds and schools: A distributed behavioral model,” in ACM SIGGRAPH Computer Graphics, vol. 21, no. 4. ACM, 1987, pp. 25–34.
[43] K. Erleben, J. Sporring, K. Henriksen, and H. Dohlmann, Physics-Based Animation. Charles River Media, 2005.
[44] K. Perlin, “Real time responsive animation with personality,” IEEE Transactions on Visualization and Computer Graphics, vol. 1, no. 1, 1995, pp. 5–15.
[45] B. Kenwright, “Generating responsive life-like biped characters,” in Proceedings of the Third Workshop on Procedural Content Generation in Games. ACM, 2012, p. 1.
[46] T. Trappenberg, Fundamentals of Computational Neuroscience. OUP Oxford, 2009.
[47] P. Dayan and L. Abbott, “Theoretical neuroscience: computational and mathematical modeling of neural systems,” Journal of Cognitive Neuroscience, vol. 15, no. 1, 2003, pp. 154–155.
[48] A. V. Samsonovich, “Toward a unified catalog of implemented cognitive architectures,” BICA, vol. 221, 2010, pp. 195–244.
[49] R. Brette, M. Rudolph, T. Carnevale, M. Hines, D. Beeman, J. M. Bower, M. Diesmann, A. Morrison, P. H. Goodman, F. C. Harris Jr et al., “Simulation of networks of spiking neurons: a review of tools and strategies,” Journal of Computational Neuroscience, vol. 23, no. 3, 2007, pp. 349–398.
[50] T. R. Armstrong, “Training for the production of memorized movement patterns,” Ph.D. dissertation, The University of Michigan, 1970.
[51] M. H. Bickhard, “Autonomy, function, and representation,” Communication and Cognition - Artificial Intelligence, vol. 17, no. 3-4, 2000, pp. 111–131.
[52] D. Vernon, G. Metta, and G. Sandini, “A survey of artificial cognitive systems: Implications for the autonomous development of mental capabilities in computational agents,” IEEE Transactions on Evolutionary Computation, vol. 11, no. 2, 2007, p. 151.
[53] C. von Hofsten, On the Development of Perception and Action. London: Sage, 2003.
[54] M. Sagar, P. Robertson, D. Bullivant, O. Efimov, K. Jawed, R. Kalarot, and T. Wu, “A visual computing framework for interactive neural system models of embodied cognition and face to face social learning,” in Unconventional Computation and Natural Computation. Springer, 2015, pp. 71–88.

Dark, Beyond Deep: A Paradigm Shift to Cognitive AI with Humanlike Common Sense
 
Computational Social Neuroscience - E Tognoli
Computational Social Neuroscience - E TognoliComputational Social Neuroscience - E Tognoli
Computational Social Neuroscience - E Tognoli
 
AI PAPER
AI PAPERAI PAPER
AI PAPER
 
Chaps29 the entirebookks2017 - The Mind Mahine
Chaps29 the entirebookks2017 - The Mind MahineChaps29 the entirebookks2017 - The Mind Mahine
Chaps29 the entirebookks2017 - The Mind Mahine
 
Computational Explanation in Biologically Inspired Cognitive Architectures/Sy...
Computational Explanation in Biologically Inspired Cognitive Architectures/Sy...Computational Explanation in Biologically Inspired Cognitive Architectures/Sy...
Computational Explanation in Biologically Inspired Cognitive Architectures/Sy...
 

Recently uploaded

Class 11 Legal Studies Ch-1 Concept of State .pdf
Class 11 Legal Studies Ch-1 Concept of State .pdfClass 11 Legal Studies Ch-1 Concept of State .pdf
Class 11 Legal Studies Ch-1 Concept of State .pdfakmcokerachita
 
mini mental status format.docx
mini    mental       status     format.docxmini    mental       status     format.docx
mini mental status format.docxPoojaSen20
 
_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting Data_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting DataJhengPantaleon
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon AUnboundStockton
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfsanyamsingh5019
 
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdfEnzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdfSumit Tiwari
 
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Krashi Coaching
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptxVS Mahajan Coaching Centre
 
Solving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxSolving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxOH TEIK BIN
 
The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13Steve Thomason
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxiammrhaywood
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationnomboosow
 
CARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxCARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxGaneshChakor2
 
URLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website AppURLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website AppCeline George
 
Mastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionMastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionSafetyChain Software
 
Introduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxIntroduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxpboyjonauth
 

Recently uploaded (20)

Class 11 Legal Studies Ch-1 Concept of State .pdf
Class 11 Legal Studies Ch-1 Concept of State .pdfClass 11 Legal Studies Ch-1 Concept of State .pdf
Class 11 Legal Studies Ch-1 Concept of State .pdf
 
Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1
 
mini mental status format.docx
mini    mental       status     format.docxmini    mental       status     format.docx
mini mental status format.docx
 
_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting Data_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting Data
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon A
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdf
 
Staff of Color (SOC) Retention Efforts DDSD
Staff of Color (SOC) Retention Efforts DDSDStaff of Color (SOC) Retention Efforts DDSD
Staff of Color (SOC) Retention Efforts DDSD
 
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdfEnzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
 
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
 
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdfTataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
 
Solving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxSolving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptx
 
The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communication
 
CARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxCARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptx
 
URLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website AppURLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website App
 
Mastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionMastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory Inspection
 
Model Call Girl in Bikash Puri Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Bikash Puri  Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Bikash Puri  Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Bikash Puri Delhi reach out to us at 🔝9953056974🔝
 
Introduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxIntroduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptx
 

While we are able to ‘record’ and ‘playback’ highly realistic animations in virtual environments, they have limitations. The motions are constrained to specific skeleton topologies; moreover, it is time-consuming and challenging to create motions for non-humans (creatures and aliens). What is more, recording animations for dangerous situations is impossible using motion capture (they must be created manually through artistic intervention). Another key thing to remember: in dynamically changing environments (video games), pre-recorded animations are unable to adapt automatically to changing situations.

This article attempts to solve these problems using biologically inspired concepts. We investigate neurological, cognitive and behavioural methods. These methods provide inspirational solutions for creating adaptable models that synthesize life-like character characteristics. We examine how the human brain ‘thinks’ to accomplish tasks, and how the brain solves unforeseen problems. Exploiting knowledge of how the brain functions, we formulate a system of conditions that attempts to replicate humanistic properties. We discuss novel approaches to solving these problems by questioning, analysing and formulating a system based on human cognitive processes.

Cognitive vs Machine Learning: Essentially, cognitive computing has the ability to reason creatively about data, patterns, situations, and extended models (dynamically). However, most statistics-based machine learning algorithms cannot handle problems much beyond what they have seen and learned (matching). A machine learning algorithm has to be paired with cognitive capabilities to deal with truly ‘new’ situations. Cognitive science therefore raises challenges for, and draws inspiration from, machine learning; insights about the human mind help inspire new directions for animation. Hence, cognitive computing, along with many other disciplines within the field of artificial intelligence, is gaining popularity, especially in character systems, and in the not so distant future will have a colossal impact on the animation industry.

Automation: The ability to ‘automatically’ generate physically correct humanistic animations is revolutionary: remove and add behavioural components (happy and sad); create animations for different physical skeletons using a single set of training data; perform a diverse range of actions, for instance getting up, jumping, dancing, and walking; react to external interventions while completing an assigned task (i.e., combining motions with priorities). These problem-solving skills are highly valued.
We want character agents to learn and adapt to the situation. This includes:
• physically based models (e.g., rigid bodies) that are controlled through internal joint torques (muscle forces);
• controllable, adjustable joint signals to accomplish specific (trained) actions;
• learning and retaining knowledge from past experiences;
• embedding personal traits (personality).

Problems: We want the method to be automatic (i.e., not depend too heavily on pre-canned libraries), and to avoid simply playing back captured animations, instead parameterizing and re-using animations for different contexts (providing stylistic advice to the training algorithm). We want the solution to be able to adapt on-the-fly to unforeseen situations in a natural, life-like manner. Having said that, we also want to accommodate a diverse range of complex motions: not just balanced walking, but getting-up, climbing, and dancing actions. With a physics-based model at the heart of the system (i.e., not just a kinematic skeleton but joint torques/muscles), we are able to ensure a physically correct solution. While a real-world human skeleton has a huge number of degrees of freedom, we accept that a lower-fidelity model is able to represent the necessary visual characteristics (enabling reasonable computational overheads). Of course, even a simplified model possesses a large amount of ambiguity, with singularities. All things considered, we do not want to focus on the ‘actions’, but to embrace the autonomous emotion, behaviour and cognitive properties that sit on top of the motion (the intelligent learning component).

Fig. 1. Homunculus Body Map - The somato-sensory homunculus is a kind of map of the body [3], [4]. The distorted model/view of a person (see Figure 2) represents the amount of sensory information a body part sends to the central nervous system (CNS).

Geometric to Cognitive: Synthesizing animated characters for virtual environments addresses the challenges of automating a variety of difficult development tasks. Early research combined geometric and inverse kinematic models to simplify key-framing. Physical models for animating particles, rigid bodies, deformable solids, fluids, and gases have offered the means to generate copious quantities of realistic motion through dynamic simulation. Bio-mechanical models employ simulated physics to automate the lifelike animation of animals with internal muscle actuators. In recent years, research in behavioural modelling has made progress towards ‘self-animating’ characters that react appropriately to perceived environmental stimuli [5], [6], [7], [8]. It has remained difficult, however, to instruct these autonomous characters so that they satisfy the programmer’s goals. As pointed out by Funge et al. [9], the computer graphics solution has evolved from geometric solutions to more logical mathematical approaches, and ultimately cognitive models, as shown in Figure 3.

A large amount of work has been done on motion re-targeting (i.e., taking existing pre-recorded animations and modifying them to fit different situations) [10], [11], [12], and on targeted solutions that generate animations for specific situations, such as locomotion [13] and climbing [14]. Kinematic models do not take into account the physical properties of the model and, in addition, are only able to solve local problems (e.g., reaching and stepping, not complex rhythmic actions) [15], [16], [17].
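To ground the torque-driven alternative, the following is a minimal sketch of a proportional-derivative ‘muscle’ acting on a single hinge joint. It is an illustration only: the gains, torque limit, unit inertia and gravity term are assumptions for the example, not values from the paper.

```python
import math

def pd_torque(theta, omega, theta_target, kp=60.0, kd=8.0, tau_max=40.0):
    """PD 'muscle' torque driving a hinge joint toward a target angle;
    clamping the output mimics bounded muscle strength."""
    tau = kp * (theta_target - theta) - kd * omega
    return max(-tau_max, min(tau_max, tau))

def step(theta, omega, theta_target, dt=1.0 / 120.0):
    """Semi-implicit Euler update of one joint (unit inertia) under the
    PD torque and a crude gravity load on the limb."""
    alpha = pd_torque(theta, omega, theta_target) - 9.81 * math.sin(theta)
    omega += alpha * dt
    theta += omega * dt
    return theta, omega

# Swing the joint from rest towards 0.5 rad and hold it there.
theta, omega = 0.0, 0.0
for _ in range(600):  # five seconds at 120 Hz
    theta, omega = step(theta, omega, 0.5)
```

Unlike a kinematic pose update, the joint here only ever moves because a bounded torque acts against gravity, which is what forces the physically correct behaviour discussed above.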
Procedural models may not converge to natural-looking motions [18], [19], [20]. Cognitive models go beyond behavioural models in that they govern what a character knows, how that knowledge is acquired, and how it can be used to plan actions. Cognitive models are applicable to instructing a new breed of highly autonomous, quasi-intelligent characters that are beginning to find use in interactive virtual environments. We decompose cognitive modelling into two related sub-tasks: (1) domain knowledge specification and (2) character instruction. This is reminiscent of the classic dictum from the field of artificial intelligence (AI) that tries to promote modularity of design by separating knowledge from control:

knowledge + instruction = intelligent behaviour (1)

Domain (knowledge) specification involves administering knowledge to the character about its world and how that world can change. Character instructions tell the character to try to behave in a certain way within its world in order to achieve specific goals. Like other advanced modelling tasks, both of these steps can be fraught with difficulty unless developers are given the right tools for the job.

Components: We wanted to avoid a ‘single’ amalgamated algorithm (e.g., neural networks or connectionist models [21]). Instead, we investigate modular, dissectable learning models for adapting joint signals to accomplish tasks; for example, genetic algorithms [18] in combination with Fourier methods to subdivide complex actions into components (i.e., extract and identify behavioural characteristics [22]). Joint motions are essentially signals, while the physics-based model ensures the generated motions are physically correct [23]. To say nothing of the advancements in parallel hardware: we envision the exploitation of massively parallel architectures as constitutional.

Contribution: The novel contribution of this technical article is the amalgamation of numerous methods, from bio-mechanics, psychology, robotics, and computer animation, to address the question of ‘how can we make virtual characters solve unforeseen problems automatically and in a realistic manner?’ (i.e., mimic the human cognitive learning process).
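As a concrete illustration of the signal view described under Components, a joint trajectory can be encoded as a truncated Fourier series whose coefficients form the parameter vector a search algorithm tunes. A minimal sketch (the harmonic count and example numbers are assumptions, not values from the paper):

```python
import math

class FourierJointSignal:
    """Truncated Fourier series for one joint:
    theta(t) = a0 + sum_k a_k * sin(k * w * t + p_k).
    The coefficient vector is the 'genome' a search algorithm can tune."""
    def __init__(self, a0, amps, phases, base_freq):
        assert len(amps) == len(phases)
        self.a0, self.amps, self.phases, self.w = a0, amps, phases, base_freq

    def __call__(self, t):
        return self.a0 + sum(a * math.sin((k + 1) * self.w * t + p)
                             for k, (a, p) in enumerate(zip(self.amps, self.phases)))

# A two-harmonic 'hip' signal for a 1 Hz gait cycle (illustrative numbers).
hip = FourierJointSignal(0.1, [0.6, 0.15], [0.0, math.pi / 2], 2 * math.pi * 1.0)
samples = [hip(i / 60.0) for i in range(120)]  # two seconds sampled at 60 Hz
```

Because the representation is low-dimensional and smooth by construction, mutating a handful of amplitudes and phases always yields a continuous, rhythmic candidate motion for the optimiser to evaluate.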
Fig. 3. Timeline - Computer Graphics Cognitive Development Model (Geometric, Kinematic, Physical, Behavioural, and Cognitive) [9]. A simplified illustration of milestones over the years that have contributed novel animation solutions; it emphasises the gradual transition from kinematic and physical techniques to intelligent behavioural models. [A] [24]; [B] [20]; [C] [19]; [D] [25]; [E] [26]; [F] [27]; [G] [28]; [H] [18]; [I] [29]; [J] [30]; [K] [31]; [L] [32]; [M] [33]; [N] [34]; [O] [35]; [P] [8]; [Q] [36]; [R] [7]; [S] [5]; [T] [6]; [U] [37]; [V] [38].

Fig. 2. Homunculus Body Map - Reinert et al. [4] presented a graphical paper on mesh deformation to visualize the somato-sensory information of the brain-body. The figure conveys the importance of the neuronal homunculus, i.e., the relation of human body-part size to neural density and the brain.

II. BACKGROUND & RELATED WORK

Literature Gap: The research in this article brings together numerous diverse concepts; while in their individual fields they are well studied, taken as a whole and applied to virtual character animations there is a serious gap in the referential literature. Hence, we begin by exploring branches of research from cognitive psychology and bio-mechanics before carrying them across and combining them with computer animation and robotics concepts.

Autonomous Animation Solutions: Formal approaches to animation, such as genetic algorithms [18], [19], [20], may not converge to natural-looking motions without additional work, such as artist intervention or constrained/complex fitness functions. This limits and constrains the ‘automation’ factor. We see autonomy as the emergence of salient, novel action discovery through the self-organisation of high-level goal-directed orders. The behavioural aspect emerges from the physical (or virtual) constraints and fundamental low-level mechanisms. We adapt bodily motor controls (joint signals) from randomness to purposeful actions based on cognitive development (Lee [39] referred to this process as evolving from babbling to play). Interestingly, this intrinsic method of behavioural learning has also been demonstrated in biological models (known as action discovery) [40].

Navigation/Controllers/Mechanical: Synthesizing human movement that mimics real-world behaviours ‘automatically’ is a challenging and important topic. Typically, reactive approaches for navigation and pursuit [24], [41], [42], [27] may not readily accommodate task objectives, sensing costs, and cognitive principles. A cognitive solution adapts and learns (finds answers to unforeseen problems).

Expression/Emotion: Humans exhibit a wide variety of expressive actions, which reflect their personalities, emotions, and communicative needs [25], [26], [28]. These variations often influence the performance of simpler gestural or facial movements.

Components: The essential components are:
• Fourier analysis - subdivide actions into components; extract and identify behavioural characteristics [22]
• Heuristic optimisation [18] - adapt non-linear signals (with purpose)
• Physics-based simulation [43], [23] - torques and forces to control the model
• Parallel architecture - exploit massively parallel processor architectures, such as the graphics processing unit (GPU)
• Randomness - inject awareness and randomness (blood flow, respiratory signals, background noise) [44], [45]
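The ‘randomness’ component above can be illustrated with a toy generator: a sum of incommensurate sines standing in for Perlin-style noise [44], layered at low amplitude over a base joint signal so an idle character never freezes. A hedged sketch (the octave scaling and amplitude are assumptions):

```python
import math, random

class SmoothNoise:
    """Cheap band-limited noise (a sum of incommensurate sines) standing in
    for Perlin-style noise; adds breathing/sway-like micro-variation."""
    def __init__(self, octaves=3, seed=7):
        rng = random.Random(seed)
        self.phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(octaves)]

    def __call__(self, t):
        # Each octave is faster and weaker than the last, giving coherent drift.
        return sum(math.sin(1.7 ** k * t + p) / 2 ** k
                   for k, p in enumerate(self.phases))

noise = SmoothNoise()

def lively(base_signal, t, amplitude=0.02):
    """Base joint signal plus low-amplitude coherent noise."""
    return base_signal(t) + amplitude * noise(t)

print(lively(math.sin, 1.0))  # e.g., a 'breathing' sine
```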
Brain Body Map: As shown in Figure 1, we are able to map the mind’s awareness of different body parts. This is known as the homunculus body map. Why is it important for movement? It helps in understanding the neural mechanisms of human sensorimotor coordination and the cognitive connection. While we are a complex biological organism, we need feedback and information (input) to be able to move and thus live (i.e., movement is life). The motor part of the brain relies on information from the sensory systems, and the control signals change dynamically depending on our state. Simply put, the better the central representation, the better the motor output will be, and the more life-like and realistic the final animations will be. Our motor systems need to know the state of our body. If the situation is not known or not very clear, the movements will not be good, because the motor systems will be ‘afraid’ to go all out. This is very similar to driving a car on an unknown road in misty conditions with only an old, worn, worm-eaten map: we drive slowly and tensely, to avoid hitting something or going off the road. This is safety behaviour: safe, but taxing on the system.

Cognitive Science: The cognitive science of motion is an interdisciplinary scientific study of the mind and its processes. We examine what cognitive motion is, what it does, and how it works. This includes research into intelligence and behaviour, especially focusing on how information is represented, processed, and transformed (in faculties such as perception, language, memory, attention, reasoning, and emotion) within nervous systems (human or other animal) and machines (e.g., computers). Cognitive motion science spans multiple research disciplines, including robotics, psychology, artificial intelligence, philosophy, neuroscience, linguistics, and anthropology, and multiple levels of analysis, from low-level learning and decision mechanisms to high-level logic and planning; from neural circuitry to modular brain organization. However, the fundamental concept of cognitive motion is the understanding of instinctual thinking in terms of the structural mind and the computational procedures that operate on those structures. Importantly, cognitive solutions are not only adaptive but also anticipatory and prospective; that is, they need to have (by virtue of their phylogeny) or develop (by virtue of their ontogeny) some mechanism to rehearse hypothetical scenarios.

Neural Networks and Cognitive Simulators: Computational neuroscience [46], [29], [47] offers biologically inspired neural models for simulating information processing, cognition and behaviour. The majority of this research has focused on modelling ‘isolated components’. Cognitive architectures [48] use biologically based models for goal-driven learning and behaviours, and publicly available neural network simulators exist [49].
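As a toy illustration of the body-map idea (the density values below are invented placeholders, not measured data), the homunculus-style sensory representation could be used to weight how strongly each body part’s feedback corrections are applied:

```python
# Invented, illustrative sensory 'density' weights per body part (cf. Figure 1);
# finer-represented parts receive proportionally stiffer feedback corrections.
SENSORY_DENSITY = {"hand": 1.0, "face": 0.9, "foot": 0.5, "trunk": 0.2}

def feedback_gain(part, base_gain=50.0):
    """Scale a joint's corrective gain by its (normalised) map representation."""
    return base_gain * SENSORY_DENSITY.get(part, 0.3)

print(feedback_gain("hand"), feedback_gain("trunk"))  # 50.0 vs 10.0
```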
Motor Skills: Our brain sees the world in ‘maps’. The maps are distorted, depending on how we use each sense, but they are still maps. Almost every sense has a map; most senses have multiple maps. We have a ‘tonotopic’ map, a map of sound frequency from high-pitched to low-pitched, which is how our brain processes sound. We have a ‘retinotopic’ map, a reproduction of what we are seeing, which is how the brain processes sight. Our brain loves maps. Most importantly, we have maps of our muscles. The mapping from sensory information to motor movement is shown in Figure 1. For muscle movements, the finer and more detailed the movements are, the more brain space those muscles have. Hence, we can address which muscles take priority and under what circumstances (i.e., sensory input). This also opens the door to interesting and exciting questions, such as what happens to the maps if we lose a body part, such as a finger.

Psychology Aspect: A number of interesting facts are hidden in the psychology of movement that are often taken for granted or overlooked. Incorporating them in a dynamic system allows us to solve a number of problems; for example, when we observe movements which are slightly different from each other but possess similar characteristics. The work by Armstrong [50] showed that when a movement sequence is speeded up as a unit, the overall relative movement or ‘phasing’ remains constant. This led to the discovery of relative forces, the relationship among the forces in the muscles participating in the action.

How the Brain Controls Muscles: Let us pretend that we want to go to the kitchen, because we are hungry. First, an area in our brain called the parietal lobe comes up with lots of possible plans. We could get to the kitchen by skipping, sprinting, uncoordinated somersaulting, or walking. The parietal lobe sends these plans to another brain area called the basal ganglia. The basal ganglia picks ‘walking’ as the best plan (with uncoordinated somersaulting a close second) and tells the parietal lobe. The parietal lobe confirms it, and sends the ‘walk to kitchen’ plan down the spinal cord to the muscles. The muscles move. As they move, our cerebellum kicks into high gear, making sure we turn right before we crash into the kitchen counter, and that we jump over the dog. Part of the cerebellum’s job is to make quick changes to muscle movements while they are happening (see Figure 4).

Visualizing the Solution (Offline): We visualize a goal. In our mind, over and over and over again. We picture the movements. We see ourselves catching that ball, dancing that toe touch, swimming that breaststroke. We watch it in the movie of our mind whenever we can, and we scrutinize it. Is our wrist turning properly? Is our kick high enough? If not, we change the picture and see ourselves doing the movement perfectly. As far as our parietal lobe and basal ganglia are concerned, this is exactly the same as doing the movement. When we visualize the movement, we activate all those planning pathways. Those neurons fire, over and over again, which is what needs to happen for our synapses to strengthen. In other words, by picturing the movements, we are actually learning them. This makes it easier for the parietal lobe to send the right message to the muscles, so when we actually try to perform a movement, we get better, faster, and need less physical practice. This does not work for general fitness (i.e., increased strength): we still need to train our muscles, heart, and lungs to become strong. However, it is good for skilled movements, such as basketball lay-ups and gymnastics routines. For improved technique, visualization works. We train our brain, which makes it easier to control our muscles.
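Before relating this back to character simulation, note that Armstrong’s phasing observation above has a direct computational reading: speeding a sequence up ‘as a unit’ is a single uniform time-scaling that leaves the relative phase of its sub-movements untouched. A small sketch:

```python
def rescale_keyframes(times, speedup):
    """Speed a movement sequence up (or down) as a unit: every interval is
    scaled by the same factor, so the relative 'phasing' between the
    sub-movements is preserved, matching Armstrong's observation [50]."""
    t0 = times[0]
    return [t0 + (t - t0) / speedup for t in times]

# A reach-grasp-retract sequence played 1.5x faster keeps its phase ratios:
print(rescale_keyframes([0.0, 0.4, 0.6, 1.2], 1.5))  # ~[0.0, 0.267, 0.4, 0.8]
```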
What does this have to do with character simulations? We are able to mimic the ‘visualization’ approach by having our system constantly run simulations in the background, exploiting all that parallel processing power. We run large numbers of simulations one or two seconds in advance and see how the result plays out. If the character’s foot is a few centimetres forward, or if we use more torque on the knee muscle, how does this compare with the ideal animation we are aiming for? As we find solutions, we store them and improve upon them each time a similar situation arises.
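A minimal sketch of this background ‘mental rehearsal’ follows. The one-joint dynamics, the scoring, and the candidate range are stand-ins; a real system would roll out the full physics model:

```python
import random
from concurrent.futures import ThreadPoolExecutor

IDEAL_KNEE_ANGLE = 0.35  # radians at the end of the look-ahead (illustrative)

def rollout(state, extra_torque, horizon=2.0, dt=1.0 / 60.0):
    """Toy look-ahead: integrate a one-joint 'knee' forward with one candidate
    torque tweak and score how close it ends to the ideal pose."""
    theta, omega = state
    for _ in range(int(horizon / dt)):
        omega += (extra_torque - 4.0 * theta - 0.5 * omega) * dt  # damped spring
        theta += omega * dt
    return abs(theta - IDEAL_KNEE_ANGLE), extra_torque

def mental_rehearsal(state, n_candidates=64):
    """Run many short simulations 'in the background' (here, a thread pool);
    commit to the tweak whose outcome best matches the ideal animation."""
    candidates = [random.uniform(-2.0, 2.0) for _ in range(n_candidates)]
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda c: rollout(state, c), candidates))
    return min(results)[1]

best_tweak = mental_rehearsal((0.0, 0.0))
```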
Fig. 4. Brain and Actions - The phases (left to right) the human brain goes through, from thinking about a task to accomplishing it (e.g., walking to the kitchen to get a drink from the cupboard).

Fig. 5. Overview - High-level view of the interconnected components and their justifications. (a) We have a current (starting) state and a final state; the unknown middle transitioning states are what we are searching for. The transition is a dynamic problem specific to the situation; for instance, the terrain may vary (slopes, or crawling under obstacles). (b) A heuristic model trains a set of trigonometric functions (e.g., a Fourier series) to create rhythmic motions that accomplish the task, with the low-level task (fitness function) being a simple overall centre-of-mass trajectory. (c) On its own, (b) is plagued with issues, such as how to steer or control the type of motion and whether the final motion is ‘humanistic’ or ‘life-like’. Hence, we have a pre-defined library of motions that are chosen based on the type of animation we are leaning towards (a standard walk or hopping). The information from the animation is fed back into the fitness function in (b), providing a multi-objective problem: centre of mass, end-effectors, and frequency components for ‘style’. (d) The solution to each problem is stored in a sub-bank of the animation library and used for future problems. This builds upon previous knowledge to help solve new problems faster in a coherent manner (e.g., previous experiences will cause different characters to create slightly different solutions over time).
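The multi-objective fitness suggested in Figure 5(b)-(c) might be sketched as a weighted sum of trajectory, end-effector and style errors. The scalar per-frame samples (e.g., centre-of-mass height per frame) and the weights are illustrative, not the paper’s exact cost:

```python
def fitness(com_traj, com_target, effectors, effector_targets,
            amps, style_amps, w=(1.0, 0.5, 0.25)):
    """Multi-objective cost, lower is better:
    (1) centre-of-mass trajectory error (scalar samples per frame),
    (2) end-effector position error,
    (3) distance of the signal's frequency amplitudes from a reference
        'style' (e.g., a standard walk versus hopping)."""
    com_err = sum(abs(a - b) for a, b in zip(com_traj, com_target)) / len(com_traj)
    eff_err = sum(abs(a - b) for a, b in zip(effectors, effector_targets)) / len(effectors)
    sty_err = sum(abs(a - b) for a, b in zip(amps, style_amps)) / len(amps)
    return w[0] * com_err + w[1] * eff_err + w[2] * sty_err

cost = fitness([0.90, 0.95, 1.00], [1.0, 1.0, 1.0],   # COM height per frame
               [0.20, 0.80], [0.25, 0.75],            # end-effectors
               [0.60, 0.15], [0.55, 0.12])            # style (Fourier amplitudes)
```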
Physically Correct Model: Our solution controls a physics-based model using joint torques, as in the real world. This mimics the real world more closely: not only do we require the model to move in a realistic manner, it also has to control the joint muscles in sufficient ratios to achieve the final motion (e.g., balance control). Adjusting the physical model, for instance muscle strength or leg lengths, allows the model to retrain to achieve the action.

(Get-Up) Rise Animations: Animation is a diverse and complex area, so rather than trying to create solutions for every possible situation, we focus on a particular set of actions, that is, rising movements. Rise animations require a suitably diverse range of motor skills. We formulate a set of tasks to evaluate our algorithm, such as getting up from the front, getting up from the back, getting up on uneven ground, and so on. The model also encapsulates underlying properties, such as visual attention, expressive qualities (tired, unsure, eager) and human expressiveness. We consider a number of factors, such as inner and outer information, emotion, personality, and primary and secondary goals.

III. OVERVIEW

High-Level Elements: The system is driven by four key sources of information:
1) internal information (e.g., logistics of the brain, experience, mood);
2) the aim or action;
3) external input (e.g., environment, contacts, comfort, lighting);
4) memory and information retrieval (e.g., parallel models and associative memory).

Motion Capture Data (Control): We have a library of actions as reference material for look-up and comparison. As a form of ‘control’ and ‘input’ to steer the characters to perform actions in a particular way (i.e., instead of the artist creating a large look-up array of animations for every single possible situation), we provide fundamental poses and simple pre-recorded animations to ‘guide’ the learning algorithm. Search models are able to explore their diverse search-space to reach the goal (e.g., heuristically adjusting joint muscles); a reference ‘library’, however, allows us to steer the solution towards what is ‘natural-looking’, since there are a wide number of ways of accomplishing a task, but only some of them are ‘normal’ rather than ‘strange’ and uncomfortable. The key points we concentrate on are:
1) the animations require basic empirical information (e.g., reference key-poses) from human movement and cognitive properties;
2) the movement should not simply replay pre-recorded motions, but adapt and modify them to different contexts;
3) the solution must react to disturbances and changes in the world while completing the given task;
4) the senses provide unique pieces of information, which should be combined with internal personality and emotion mechanisms to create the desired actions and/or reactions.

Blending/Adapting Animation Libraries: During motor skill acquisition, the brain learns to map between ‘intended’ limb motion and the requisite muscular forces. We propose that regions (i.e., particular body segments) in the animation library are blended together to find a solution that is aesthetically pleasing (i.e., based upon pre-recorded motions instead of randomly searching).
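A minimal sketch of such per-segment blending follows (naive linear blending of angles; a production system would blend quaternions and time-warp the clips first):

```python
def blend_poses(poses, weights):
    """Weighted blend of reference poses (dicts of joint -> angle).
    Blending per body segment lets, e.g., the legs follow a 'step' clip
    while the arms follow a 'reach' clip."""
    total = sum(weights)
    return {joint: sum(w * p[joint] for p, w in zip(poses, weights)) / total
            for joint in poses[0]}

step_pose = {"hip": 0.5, "knee": 1.0, "shoulder": 0.1}
reach_pose = {"hip": 0.1, "knee": 0.2, "shoulder": 1.2}
print(blend_poses([step_pose, reach_pose], [0.7, 0.3]))
```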
Virtual Infant (or Baby): Imagine a baby with no knowledge or understanding. As explained, we take a bottom-up view: starting with nothing and educating the system to mimic humanistic (organic) qualities, with learning algorithms tuning skeletal motor signals to accomplish high-level tasks. As with a child, a ‘trial-and-error’ approach to learning, exploring what is possible and impossible, eventually reaches a solution. This requires continuously integrating corrective guidance (as with a child: without knowing what is right and wrong, the child will never learn). This guidance comes through fitness criteria and example motion clips (as children do: see and copy, or try to), performing multiple training exercises over and over again to learn skills, and having the algorithm actively improve (e.g., proprioception: how the brain understands the body). As we learn to perform motions, there are thousands of small adjustments that our body as a whole makes every millisecond to remain optimal (quickest, most energy-efficient, closest to the intended idea/style), constantly monitoring the body by sending and receiving sensory information (e.g., to and from every joint, limb, and contact). Over time, this experience strengthens the model’s ability to accomplish tasks more quickly and efficiently.

Stability: Autonomous systems have ‘stability’ issues (i.e., they are far from equilibrium stability) [51]. Due to the dynamic nature of a character’s actions, they are dependent on their environment (external factors) and require interaction; these are open processes (exhibiting closed self-organization). However, we can measure stability in relation to reference poses, energy, and balance to draw conclusions about the effectiveness of the learned solution.

Memory: The system learns through explorative searching (i.e., with quantitative measures for comfort, security, and satisfaction). While a character may find an ‘optimal’ solution that meets the specified criteria, it will continue to expand its memory repertoire of actions. This is a powerful component, increasing the efficiency of achieving a goal (e.g., the development of walking and the retention of balanced motion in different circumstances becomes more effective). The view that exploration and retention (memory) are crucial to ontogenetic development is supported by research findings in developmental psychology [52]. Hofsten [53] explains that it is not necessarily success at achieving task-specific goals that drives development, but the discovery of new ways of doing something (through exploration). This forms a solution that builds upon prior knowledge, with an increased reliance on machine learning and statistical evaluation (i.e., for tuning the system parameters), and leads to a model that constantly acquires new knowledge for both current and future tasks.
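A sketch of this memory component: a small experience bank keyed by a situation descriptor, with nearest-neighbour recall seeding the next search. The descriptor and stored parameters are illustrative assumptions:

```python
class ExperienceBank:
    """Stores solved motion problems keyed by a small situation descriptor
    (e.g., terrain slope, initial-pose id); the nearest stored situation
    seeds the next search, so familiar problems are solved faster."""
    def __init__(self):
        self.bank = []  # list of (situation_vector, solution_parameters)

    def store(self, situation, solution):
        self.bank.append((situation, solution))

    def recall(self, situation):
        if not self.bank:
            return None
        dist = lambda s: sum((a - b) ** 2 for a, b in zip(s, situation))
        return min(self.bank, key=lambda entry: dist(entry[0]))[1]

memory = ExperienceBank()
memory.store((0.0, 1.0), {"hip_amp": 0.6})   # flat ground, standing start
seed = memory.recall((0.1, 1.0))             # a gentle slope reuses the nearby solution
```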
IV. COMPLEXITY

We experiment with optimisation algorithms (i.e., different fitness criteria for specific situations); highly dynamic animations (jumping or flying through the air); close-proximity simulations (dancing, wrestling, getting in and out of a vehicle); and exploring ‘beyond’ human to creative creatures (multiple legs and arms). Instead of aesthetic qualities, we investigate ‘interesting’ behaviours as the system and training evolve to use a ‘control language’ for giving orders. The system is not limited to generic motions (i.e., walking and jumping), but has the ability to learn and search for solutions (whatever the method). We introduce risk, harm, and comfort to ‘limit’ the solutions to be more human and organic, and we avoid unsupervised learning, since it leads to random, unnatural and uncontrollable motions; simple examples (i.e., training data) steer the learning. We gather knowledge and extend the memory of experiences to help solve future problems (learning from past problems). This method is very promising for building organic, real-life systems (handling unpredictable situations in a logical, natural manner). The technique is scalable and generalizes across topologies. Learned solutions can be shared and transferred between characters (i.e., accelerated learning through sharing).

Fig. 6. Complexity - As animation and behavioural character models become increasingly complex, it becomes more challenging and time-consuming to customize and create solutions for specific environments/situations.

A physically correct, self-adapting, learning animation system that mimics human cognitive mechanics is a complex task embodying a wide range of biologically based concepts. A bottom-up approach (i.e., starting with nothing) forms a foundation on which greater detail can be added. As the model grows in complexity and detail, more expressive and autonomous animations appear, leading on to collaborative agents, i.e., social learning and interaction (behaviour in groups). The enormous complexity of the human brain and its ability to solve problems cannot be underestimated; however, through simple approximations we are able to develop autonomous animation models that embody humanistic qualities, such as cognitive and behavioural learning abilities. We tackle a complex problem: our movement allows us to express a vast array of behaviours, in addition to solving physical problems such as balance and locomotion. We have only scraped the surface of what is possible, constructing and explaining a simple solution (for a relatively complex neuro-behavioural model) to investigate a modular, extendible framework for synthesizing human movement (i.e., mapping functionality, problem solving, mapping of brain to anatomy, and learning/experience).

Body Language: The way we ‘move’ says a lot. How we stand and how we walk reveals ‘emotional’ details, and we humans are very good at spotting these underlying characteristics. These fundamental physiological motions are important in animation if we want to synthesize life-like characters. While these subtle underlying motions are aesthetic (i.e., sitting on top of the physical action or goal), they are nonetheless equally important. Emotional synthesis is often classified as a low-level biological process [54]: chemical reactions in the brain for stress and pain correlate with and modulate various behaviours (including motor control), with a vast array of effects influencing sensitivity, mood, and emotional responses. We have taken the view that the motion and learning are driven by a high-level cognitive model (avoiding the various underlying physiological and chemical parameters).
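Tying body language back to the Fourier view, emotional colouring can be sketched as moving a motion’s frequency components towards (or beyond) an ‘emotional’ reference, in the spirit of the emotion-from-frequency-components work cited above [22]. The numbers are illustrative assumptions:

```python
def stylise(amps, neutral_amps, emotion_amps, s):
    """Move a motion's Fourier amplitudes from a neutral reference towards
    (s = 1), or beyond (s > 1, exaggeration), an 'emotional' reference."""
    return [a + s * (e - n) for a, n, e in zip(amps, neutral_amps, emotion_amps)]

# A 'tired' walk: damp the higher harmonics of a neutral gait.
tired = stylise([0.60, 0.15], [0.60, 0.15], [0.40, 0.05], 1.0)
eager = stylise([0.60, 0.15], [0.60, 0.15], [0.75, 0.30], 1.2)  # exaggerated
```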
Input (Sensory Data): The brain has a vast array of sensory data, such as sight, sound, temperature, smell, and touch, feeding into the final decision. Technically, our simple assumption is analogous to a blind person taking lots of short exploratory motions to discover how to accomplish the task. We reduce the skeleton complexity compared to a full human model (numerical complexity), and use physical information from the environment, such as contacts, centre of mass, and end-effector locations. The output is motor control signals, together with behavioural selection, an example (learning) motion library, emotion, and fitness evaluation.

V. CONCLUSION

We have specified a set of simple constraints to steer and control the animation (e.g., get-up poses). We developed a model based on biology, cognitive psychology, and adaptive heuristics to create animations that control a physics-based skeleton, adapting and re-training parameters to meet changing situations (e.g., different physical and environmental information). We inject personality and behavioural components to create animations that capture life-like qualities (e.g., mood, tiredness, and fear).

This article raises several possibilities for future work. It would be valuable to test specific hypotheses and assumptions further by constructing more focused and rigorous experiments. However, these hypotheses are hard to state precisely, and we thus have mixed feelings, since we are trying to model humanistic cognitive abilities. A practical approach might be to directly compare and contrast real-world and synthesized situations: for instance, an experiment with an actor dealing with difficult situations, such as stepping over objects and walking under bridges. Younger children approach such problems in a different way, similar to our computer agent, learning through trial and error and behaving less mechanically and more consciously. Further, communication with a director (e.g., example animations and poses for control) might lead to more formal languages of commands. This would help us learn precisely what sorts of commands are needed and when they should be issued. Finally, we could go further by developing richer cognitive models and control languages for describing motion and style, to solve questions not yet even imagined.
We have taken a simplified view of cognitive modelling. We will continue to see cognitive architectures develop over the coming years that are capable of adapting and self-modifying, both in terms of parameter adjustment and of phylogenetic skills. This will happen through learning and, more importantly, through the modification of the very structure and organization of the system itself (memory and algorithm), so that it is capable of altering its system dynamics based on experience, expanding its repertoire of actions, and thereby adapting to new circumstances [52]. A variety of learning paradigms will need to be developed to accomplish these goals, including, but not necessarily limited to, unsupervised, reinforcement, and supervised learning.

Learning through watching: Providing the ability to translate 2D video images into 3D animation sequences would allow cognitive learning algorithms to constantly ‘watch’ and learn from people: watching people in the street walking and avoiding one another, climbing over obstacles, and interacting, in order to reproduce similar characteristics virtually.

REFERENCES

[1] D. Vogt, S. Grehl, E. Berger, H. B. Amor, and B. Jung, "A data-driven method for real-time character animation in human-agent interaction," in Intelligent Virtual Agents. Springer, 2014, pp. 463-476.
[2] T. Geijtenbeek and N. Pronost, "Interactive character animation using simulated physics: A state-of-the-art review," in Computer Graphics Forum, vol. 31, no. 8. Wiley Online Library, 2012, pp. 2492-2515.
[3] E. N. Marieb and K. Hoehn, Human Anatomy & Physiology. Pearson Education, 2007.
[4] B. Reinert, T. Ritschel, and H.-P. Seidel, "Homunculus warping: Conveying importance using self-intersection-free non-homogeneous mesh deformation," Computer Graphics Forum (Proc. Pacific Graphics 2012), vol. 5, no. 31, 2012.
[5] T. Conde and D. Thalmann, "Learnable behavioural model for autonomous virtual agents: low-level learning," in Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems. ACM, 2006, pp. 89-96.
[6] F. Amadieu, C. Mariné, and C. Laimay, "The attention-guiding effect and cognitive load in the comprehension of animations," Computers in Human Behavior, vol. 27, no. 1, 2011, pp. 36-40.
[7] E. Lach, "fact - animation framework for generation of virtual characters behaviours," in Information Technology, 2008. IT 2008. 1st International Conference on. IEEE, 2008, pp. 1-4.
[8] J.-S. Monzani, A. Caicedo, and D. Thalmann, "Integrating behavioural animation techniques," in Computer Graphics Forum, vol. 20, no. 3. Wiley Online Library, 2001, pp. 309-318.
[9] J. Funge, X. Tu, and D. Terzopoulos, "Cognitive modeling: knowledge, reasoning and planning for intelligent characters," in Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques. ACM Press/Addison-Wesley, 1999, pp. 29-38.
[10] S. Tak and H.-S. Ko, "A physically-based motion retargeting filter," ACM Transactions on Graphics (TOG), vol. 24, no. 1, 2005, pp. 98-117.
[11] S. Baek, S. Lee, and G. J. Kim, "Motion retargeting and evaluation for VR-based training of free motions," The Visual Computer, vol. 19, no. 4, 2003, pp. 222-242.
[12] J.-S. Monzani, P. Baerlocher, R. Boulic, and D. Thalmann, "Using an intermediate skeleton and inverse kinematics for motion retargeting," in Computer Graphics Forum, vol. 19, no. 3. Wiley Online Library, 2000, pp. 11-19.
[13] B. Kenwright, R. Davison, and G. Morgan, "Dynamic balancing and walking for real-time 3D characters," in Motion in Games. Springer, 2011, pp. 63-73.
[14] C. Balaguer, A. Giménez, J. M. Pastor, V. Padron, and M. Abderrahim, "A climbing autonomous robot for inspection applications in 3D complex environments," Robotica, vol. 18, no. 3, 2000, pp. 287-297.
[15] K. Grochow, S. L. Martin, A. Hertzmann, and Z. Popović, "Style-based inverse kinematics," in ACM Transactions on Graphics (TOG), vol. 23, no. 3. ACM, 2004, pp. 522-531.
[16] D. Tolani, A. Goswami, and N. I. Badler, "Real-time inverse kinematics techniques for anthropomorphic limbs," Graphical Models, vol. 62, no. 5, 2000, pp. 353-388.
[17] T. B. Moeslund, A. Hilton, and V. Krüger, "A survey of advances in vision-based human motion capture and analysis," Computer Vision and Image Understanding, vol. 104, no. 2, 2006, pp. 90-126.
[18] B. Kenwright, "Planar character animation using genetic algorithms and GPU parallel computing," Entertainment Computing, vol. 5, no. 4, 2014, pp. 285-294.
[19] K. Sims, "Evolving virtual creatures," in Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques. ACM, 1994, pp. 15-22.
[20] J. T. Ngo and J. Marks, "Spacetime constraints revisited," in Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques. ACM, 1993, pp. 343-350.
[21] J. A. Feldman and D. H. Ballard, "Connectionist models and their properties," Cognitive Science, vol. 6, no. 3, 1982, pp. 205-254.
[22] M. Unuma, K. Anjyo, and R. Takeuchi, "Fourier principles for emotion-based human figure animation," in Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques. ACM, 1995, pp. 91-96.
[23] P. Faloutsos, M. Van de Panne, and D. Terzopoulos, "Composable controllers for physics-based character animation," in Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques. ACM, 2001, pp. 251-260.
[24] H. Noser, O. Renault, D. Thalmann, and N. M. Thalmann, "Navigation for digital actors based on synthetic vision, memory, and learning," Computers and Graphics, vol. 19, no. 1, 1995, pp. 7-19.
[25] H. H. Vilhjálmsson, "Autonomous communicative behaviors in avatars," Ph.D. dissertation, Massachusetts Institute of Technology, 1997.
[26] J. Cassell, H. H. Vilhjálmsson, and T. Bickmore, "BEAT: the behavior expression animation toolkit," in Life-Like Characters. Springer, 2004, pp. 163-185.
[27] X. Tu and D. Terzopoulos, "Artificial fishes: physics, locomotion, perception, behavior," in Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques. ACM, 1994, pp. 43-50.
[28] J. Cassell, C. Pelachaud, N. Badler, M. Steedman, B. Achorn, T. Becket, B. Douville, S. Prevost, and M. Stone, "Animated conversation: rule-based generation of facial expression, gesture & spoken intonation for multiple conversational agents," in Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques. ACM, 1994, pp. 413-420.
[29] X. Yao, "Evolving artificial neural networks," Proceedings of the IEEE, vol. 87, no. 9, 1999, pp. 1423-1447.
[30] H. A. ElMaraghy, "Kinematic and geometric modelling and animation of robots," in Proc. of Graphics Interface '86. ACM, 1986, pp. 15-19.
[31] C. W. Reynolds, "Computer animation with scripts and actors," in ACM SIGGRAPH Computer Graphics, vol. 16, no. 3. ACM, 1982, pp. 289-296.
[32] N. Burtnyk and M. Wein, "Interactive skeleton techniques for enhancing motion dynamics in key frame animation," Communications of the ACM, vol. 19, no. 10, 1976, pp. 564-569.
[33] C. Csuri, R. Hackathorn, R. Parent, W. Carlson, and M. Howard, "Towards an interactive high visual complexity animation system," in ACM SIGGRAPH Computer Graphics, vol. 13, no. 2. ACM, 1979, pp. 289-299.
[34] R. A. Goldstein and R. Nagel, "3-D visual simulation," Simulation, vol. 16, no. 1, 1971, pp. 25-31.
[35] A. Bruderlin and T. W. Calvert, "Goal-directed, dynamic animation of human walking," ACM SIGGRAPH Computer Graphics, vol. 23, no. 3, 1989, pp. 233-242.
[36] I. Mlakar and M. Rojc, "Towards ECAs' animation of expressive complex behaviour," in Analysis of Verbal and Nonverbal Communication and Enactment: The Processing Issues. Springer, 2011, pp. 185-198.
[37] M. Soliman and C. Guetl, "Implementing intelligent pedagogical agents in virtual worlds: Tutoring natural science experiments in OpenWonderland," in Global Engineering Education Conference (EDUCON), 2013 IEEE. IEEE, 2013, pp. 782-789.
[38] J. Song, X.-W. Zheng, and G.-J. Zhang, "Method of generating intelligent group animation by fusing motion capture data," in Ubiquitous Computing Application and Wireless Sensor. Springer, 2015, pp. 553-560.
[39] M. H. Lee, "Intrinsic activity: from motor babbling to play," in Development and Learning (ICDL), 2011 IEEE International Conference on, vol. 2. IEEE, 2011, pp. 1-6.
[40] K. Gurney, N. Lepora, A. Shah, A. Koene, and P. Redgrave, "Action discovery and intrinsic motivation: a biologically constrained formalisation," in Intrinsically Motivated Learning in Natural and Artificial Systems. Springer, 2013, pp. 151-181.
[41] W.-Y. Lo, C. Knaus, and M. Zwicker, "Learning motion controllers with adaptive depth perception," in Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation. Eurographics Association, 2012, pp. 145-154.
[42] C. W. Reynolds, "Flocks, herds and schools: A distributed behavioral model," in ACM SIGGRAPH Computer Graphics, vol. 21, no. 4. ACM, 1987, pp. 25-34.
[43] K. Erleben, J. Sporring, K. Henriksen, and H. Dohlmann, Physics-Based Animation. Charles River Media, Hingham, 2005.
[44] K. Perlin, "Real time responsive animation with personality," IEEE Transactions on Visualization and Computer Graphics, vol. 1, no. 1, 1995, pp. 5-15.
[45] B. Kenwright, "Generating responsive life-like biped characters," in Proceedings of the Third Workshop on Procedural Content Generation in Games. ACM, 2012, p. 1.
[46] T. Trappenberg, Fundamentals of Computational Neuroscience. OUP Oxford, 2009.
[47] P. Dayan and L. Abbott, "Theoretical neuroscience: computational and mathematical modeling of neural systems," Journal of Cognitive Neuroscience, vol. 15, no. 1, 2003, pp. 154-155.
[48] A. V. Samsonovich, "Toward a unified catalog of implemented cognitive architectures," BICA, vol. 221, 2010, pp. 195-244.
[49] R. Brette, M. Rudolph, T. Carnevale, M. Hines, D. Beeman, J. M. Bower, M. Diesmann, A. Morrison, P. H. Goodman, F. C. Harris Jr. et al., "Simulation of networks of spiking neurons: a review of tools and strategies," Journal of Computational Neuroscience, vol. 23, no. 3, 2007, pp. 349-398.
[50] T. R. Armstrong, "Training for the production of memorized movement patterns," Ph.D. dissertation, The University of Michigan, 1970.
[51] M. H. Bickhard, "Autonomy, function, and representation," Communication and Cognition - Artificial Intelligence, vol. 17, no. 3-4, 2000, pp. 111-131.
[52] D. Vernon, G. Metta, and G. Sandini, "A survey of artificial cognitive systems: Implications for the autonomous development of mental capabilities in computational agents," IEEE Transactions on Evolutionary Computation, vol. 11, no. 2, 2007, p. 151.
[53] C. von Hofsten, On the Development of Perception and Action. London: Sage, 2003.
[54] M. Sagar, P. Robertson, D. Bullivant, O. Efimov, K. Jawed, R. Kalarot, and T. Wu, "A visual computing framework for interactive neural system models of embodied cognition and face to face social learning," in Unconventional Computation and Natural Computation. Springer, 2015, pp. 71-88.