The document describes a system for generating co-speech gestures for the humanoid robot NAO through the Behavior Markup Language (BML). The system extends an existing virtual agent system to generate communicative gestures for both virtual and physical agents like NAO. It takes as input a specification of multi-modal behaviors encoded in BML and synchronizes and realizes the verbal and nonverbal behaviors on the robot. The system includes a behavior planner that selects gestures from a repository and a behavior realizer that generates the animations displayed by the robot based on the BML output from the planner.
Generating Co-speech Gestures for the Humanoid Robot NAO through BML

Quoc Anh Le and Catherine Pelachaud
CNRS, LTCI Telecom ParisTech, France
{quoc-anh.le,catherine.pelachaud}@telecom-paristech.fr
Abstract. We extend and develop an existing virtual agent system to generate communicative gestures for different embodiments (i.e. virtual or physical agents). This paper presents our ongoing work on an implementation of this system for the NAO humanoid robot. From a specification of multi-modal behaviors encoded with the behavior markup language, BML, the system synchronizes and realizes the verbal and nonverbal behaviors on the robot.
Keywords: Conversational humanoid robot, expressive gestures, gesture-speech production and synchronization, Human-Robot Interaction, NAO, GRETA, FML, BML, SAIBA.
1 Introduction
We aim at building a model generating expressive communicative gestures for embodied agents such as the NAO humanoid robot [2] and the GRETA virtual agent [11]. To reach this goal, we extend and develop our GRETA system [11], which follows the SAIBA (i.e. Situation, Agent, Intention, Behavior, Animation) framework (cf. Figure 1). The GRETA system consists of three separate modules: the first module, Intent Planning, defines the communicative intents to be conveyed; the second, Behavior Planning, plans the corresponding multi-modal behaviors to be realized; and the third module, Behavior Realizer, synchronizes and realizes the planned behaviors.

Fig. 1. The SAIBA framework for generating multimodal behavior
The result of the first module is the input of the second module, presented through an interface described with a representation markup language named FML (i.e. Function Markup Language). The output of the second module is encoded with another representation language named BML (i.e. Behavior Markup Language) [6] and then sent to the third module. Both FML and BML are XML-based languages and do not refer to any particular agent specification (e.g. its wrist joint).
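As a concrete illustration, the sketch below assembles and prints a minimal BML-like message in Python. The tag and attribute names (bml, speech, sync, gesture, lexeme, stroke) follow the general conventions of the BML language [6]; the exact dialect produced by the GRETA Behavior Planner may differ.

```python
# Minimal sketch of a BML-like message, assuming BML-style tag names [6];
# the exact dialect emitted by the Behavior Planner may differ.
import xml.etree.ElementTree as ET

bml = ET.Element("bml", id="bml1")

# Speech with a time marker; the gesture below anchors its stroke to it.
speech = ET.SubElement(bml, "speech", id="s1")
text = ET.SubElement(speech, "text")
text.text = "And then the wolf "
ET.SubElement(text, "sync", id="tm1")  # marker on the stressed syllable

# A gesture whose stroke phase is tied to the time marker tm1.
ET.SubElement(bml, "gesture", id="g1", lexeme="greeting", stroke="s1:tm1")

print(ET.tostring(bml, encoding="unicode"))
```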
From given communicative intentions, the system selects and plans gestures taken from a repository of gestures, called Gestural Lexicon or Gestuary (cf. Figure 1). In the repository, gestures are described symbolically with an extension of the BML representation language. The system then calculates the timing of the selected gestures so that they are synchronized with speech. After that, the gestures are instantiated as robot joint values and sent to the robot, which executes the hand-arm movements.
Our aim is to be able to use the same system to control both agents (i.e. the virtual one and the physical one). However, the robot and the agent do not have the same movement capacities (e.g. the robot can move its legs and torso but has no facial expression and has very limited hand-arm movements compared to the agent). Therefore, the nonverbal behaviors to be displayed by the robot may differ from those of the virtual agent. For instance, the robot has only two hand configurations, open and closed; it cannot extend just one finger. Thus, to make a deictic gesture it can use its whole right arm to point at a target rather than an extended index finger as done by the virtual agent.
To control the communicative behaviors of the robot and the virtual agent while taking into account their physical constraints, we consider two repertoires of gestures, one for the robot and another for the agent. To ensure that both the robot and the virtual agent convey similar information, their gesture repertoires should have entries for the same list of communicative intentions. The elaboration of the repertoires builds on the notion of gesture family with variants proposed by Calbris [1]. Gestures from the same family convey similar meanings but may differ in their shape (i.e. the element deictic exists in both repertoires; it corresponds to an extended finger or to an arm extension). In the proposed model, therefore, the Behavior Planning module remains the same for both agents and unchanged from the GRETA system. From the BML scripts output by the Behavior Planner, we instantiate BML tags from either gesture repertoire. That is, given a set of intentions and emotions to convey, the GRETA system produces, through the Behavior Planning, the corresponding sequence of behaviors specified with BML. The Behavior Realizer module has been developed to create the animation for both agents with their different behavior capabilities. Figure 2 presents an overview of our system.
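The sketch below illustrates this two-repertoire idea in Python; the entry contents and field names are hypothetical, not the authors' file format.

```python
# Illustrative sketch: two gesture repertoires keyed by the same communicative
# intentions, so the same Behavior Planner output can drive either embodiment.
# Field names and values are hypothetical.
GRETA_LEXICON = {
    "deictic": {"articulator": "right_index_finger", "shape": "index_extended"},
    "greeting": {"articulator": "right_hand", "shape": "open",
                 "location": "above_head"},
}

NAO_LEXICON = {
    # Same intention, different variant of the gesture family (Calbris [1]):
    # NAO cannot extend a single finger, so it points with the whole arm.
    "deictic": {"articulator": "right_arm", "shape": "open"},
    "greeting": {"articulator": "right_hand", "shape": "open",
                 "location": "above_head"},
}

def select_gesture(intention: str, embodiment: str) -> dict:
    """Both repertoires expose the same keys, keeping the planner agnostic."""
    lexicon = NAO_LEXICON if embodiment == "nao" else GRETA_LEXICON
    return lexicon[intention]
```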
Fig. 2. An overview of the proposed system

In this paper, we present our current implementation of the proposed expressive gesture model for the NAO humanoid robot. This work is conducted within the framework of the French National Agency for Research project GVLEX (Gesture and Voice for an Expressive Lecture), whose objective is to build an expressive robot able to display communicative gestures with different behavior qualities while telling a story. While other partners of the project deal with expressive voice, our work focuses on expressive nonverbal behaviors, especially on gestures. In this project, we have elaborated a repository of gestures specific to the robot, based on gesture annotations extracted from a storytelling video corpus [8]. The model takes into account the physical characteristics of the robot, and each gesture is guaranteed to be executable by the robot. When gestures are realized, their expressivity is increased by considering a set of quality dimensions such as the amplitude (SPC), fluidity (FLD), power (PWR), or speed of gestures (TMP) that have been previously developed for the GRETA agent [3].
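A minimal sketch of these four expressivity dimensions as a data structure, modelled after the GRETA parameters [3]; the normalized value range is an assumption.

```python
# Expressivity parameters named in the text (SPC, FLD, PWR, TMP), after the
# GRETA dimensions [3]. The [-1, 1] normalized range is an assumption.
from dataclasses import dataclass

@dataclass
class Expressivity:
    spc: float = 0.0  # spatial extent: amplitude of the movement
    fld: float = 0.0  # fluidity: smoothness between consecutive movements
    pwr: float = 0.0  # power: acceleration of the stroke
    tmp: float = 0.0  # temporal extent: speed of the gesture

    def clamped(self) -> "Expressivity":
        """Clamp every dimension into the assumed [-1, 1] range."""
        c = lambda v: max(-1.0, min(1.0, v))
        return Expressivity(c(self.spc), c(self.fld), c(self.pwr), c(self.tmp))
```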
The paper is structured as follows. The next section, State of the Art, describes some recent initiatives in controlling humanoid robot hand-arm gestures. Then, Section 3 presents in detail the design and implementation of a gesture database and a behavior realizer for the robot. Section 4 concludes and proposes some future work.
2 State of the Art
Several initiatives have been proposed recently to control the multi-modal behaviors of a humanoid robot. Salem et al. [14] use the gesture engine of the MAX virtual agent to drive the ASIMO humanoid robot. Rich et al. [13] implement a system following an event-driven architecture to solve the problem of unpredictability in the performance of their MELVIN humanoid robot. Meanwhile, Ng-Thow-Hing et al. [10] develop a system that takes any text as input and selects and produces the corresponding gestures for the ASIMO robot. In this system, Ng-Thow-Hing added parameters such as the tension, smoothness and timing of gesture trajectories to make gestures more expressive. These parameters correspond to the power, fluidity and temporal extent in our system. In [7], Kushida et al. equip their robot with the capacity to produce deictic gestures when the robot gives a presentation on a screen. These systems have several common characteristics. They calculate the animation parameters of the robot from a symbolic description encoded with a script language such as MURML [14], BML [13] or MPML-HR [7]. The synchronization of gestures with speech is guaranteed by adapting the gesture movements to the timing of the speech [14,10]; this is also the method used in our system. Some systems have a feedback mechanism to receive and process feedback information from the robot in real time, which is then used to improve the smoothness of gesture movements [14] or the synchronization of gestures with speech [13].

Our system differs from these works in that it focuses not only on the coordination of gestures with speech but also on the signification and the expressivity of the gestures performed by the robot. The gesture signification is studied carefully when elaborating the repertoire of robot gestures, and the gesture expressivity is enhanced by adding a set of gesture dimension parameters such as the spatial extension (SPC) and the temporal extension (TMP).
3 System Design and Implementation
The proposed model is developed on the basis of the GRETA system. It uses the existing Behavior Planner module to select and plan multi-modal behaviors. A new Behavior Realizer module has been developed to adapt to the behavior capabilities of the robot. The main objective of this module is to generate, from the received BML messages, the animations to be displayed by the robot. This process is divided into two tasks: the first is to create a gesture database specific to the robot and the second is to realize the selected and planned gestures on the robot. Figure 3 shows the different steps of the system; a schematic skeleton of the two tasks is sketched below.

Fig. 3. Steps in the system
3.1 Gesture Database
Gestures are described symbolically using a BML-based extension language and are stored in a lexicon. All entries of the lexicon are tested to guarantee their realizability on the robot (e.g. to avoid collisions or conflicts between robot joints when doing a gesture, or singular positions that the robot hands cannot reach). The gestures are instantiated dynamically into joint values of the robot when creating the animation in real time. The instantiation varies according to the values of their expressivity parameters.
Gesture Annotations. The elaboration of the symbolic gestures in the robot lexicon is based on gesture annotations extracted from a storytelling video corpus, which was recorded and annotated by Jean-Claude Martin et al. [8], a partner of the GVLEX project. To create this corpus, six actors were videotaped while telling the French story "Three Little Pieces of Night" twice. Two cameras were used (front and side view) to capture postural expressions in three-dimensional space. The Anvil video annotation tool [5] was then used to annotate gesture information such as its category (i.e. iconic, beat, metaphoric or deictic), its duration, which hand is used, etc. (cf. Figure 4). Based on the shape of the gestures captured from the video and their annotated information, we have elaborated a corresponding symbolic gesture repository.

Fig. 4. Gestural annotations from a video corpus with the Anvil tool
Gesture Specification. We have proposed a new XML schema as an extension of the BML language to describe gestures symbolically in the repository (i.e. lexicon). The specification of a gesture relies on the gestural description of McNeill [9], the gesture hierarchy of Kendon [4] and some notions from the HamNoSys system [12]. As a result, a hand gesture action may be divided into several phases of wrist movements. The stroke phase carries the meaning of the gesture. It may be preceded by a preparatory phase, which takes the articulatory joints (i.e. hands and wrists) to the position ready for the stroke phase. After that, it may be followed by a retraction phase that returns the hands and arms to relax positions or to positions initialized for the next gesture (cf. Figure 7). In the lexicon, only the description of the stroke phase is specified for each gesture; the other phases are generated automatically by the system. A stroke phase is represented as a sequence of key poses, each of which is described with information on hand shape, wrist position, palm orientation, etc. The wrist position is always defined by three tags, namely the vertical location corresponding to the Y axis, the horizontal location corresponding to the X axis, and the location distance corresponding to the Z axis in a limited movement space.
Fig. 5. An example of the gesture specification
Following the gesture space proposed by McNeill [9], we have five horizontal values (XEP, XP, XC, XCC, XOppC), seven vertical values (YUpperEP, YUpperP, YUpperC, YCC, YLowerC, YLowerP, YLowerEP), and three distance values (Znear, Zmiddle, Zfar). By combining these values, we obtain 105 possible wrist positions. An example of the description of the greeting gesture is presented in Figure 5. In this gesture, the stroke phase consists of two key poses. These key poses represent the position of the right hand (here, above the head), the hand shape (open) and the palm orientation (forward). The two key poses differ by only one symbolic value, on the horizontal position, which displays a waving hand movement when greeting someone. The NAO robot cannot rotate its wrist (i.e. it has only the WristYaw joint). Consequently, there is no description of the wrist orientation in the gesture specification for the robot. However, this attribute can be added for other agents (e.g. the GRETA agent).
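Figure 5 itself is not reproduced in this text, so the fragment below is a hedged reconstruction of what such a lexicon entry could look like, using the tags named above (vertical-location, horizontal-location, location-distance, hand-shape, palm-orientation); the element nesting and attribute names are assumptions.

```python
# Hedged reconstruction of a lexicon entry for the greeting gesture: the two
# key poses of the stroke phase differ only in horizontal location, which
# produces the waving movement. Element nesting is assumed.
GREETING_ENTRY = """
<gesture id="greeting" hand="right">
  <phase type="stroke">
    <keypose>
      <vertical-location>YUpperEP</vertical-location>
      <horizontal-location>XEP</horizontal-location>
      <location-distance>Zmiddle</location-distance>
      <hand-shape>OPEN</hand-shape>
      <palm-orientation>FORWARD</palm-orientation>
    </keypose>
    <keypose>
      <vertical-location>YUpperEP</vertical-location>
      <horizontal-location>XP</horizontal-location><!-- only this value differs -->
      <location-distance>Zmiddle</location-distance>
      <hand-shape>OPEN</hand-shape>
      <palm-orientation>FORWARD</palm-orientation>
    </keypose>
  </phase>
</gesture>
"""
```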
Movement Space Specification. Each symbolic position is translated into concrete values of the robot joints when the gestures are realized. In our case, these include four NAO joints: ElbowRoll, ElbowYaw, ShoulderPitch and ShoulderRoll. In addition to the set of 105 possible wrist positions (i.e. following the gesture space of McNeill, see Table 1), two wrist positions are added to specify relax positions. These positions are used in the retraction phase of a gesture. The first position indicates a full relax position (i.e. the two hands hang loosely along the body) while the second one indicates a partial relax position (i.e. one or two hands are retracted partially). Depending on the time available for the retraction phase, one relax position is selected and used by the system.
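A minimal sketch of this choice, assuming the decision is made by comparing the available retraction time against the pre-estimated minimal travel time:

```python
# Relax-position choice described above: the full relax position is used only
# when the retraction phase leaves enough time to reach it. The lookup table
# of minimal durations corresponds to Table 2; key names are illustrative.
def choose_relax_position(current_pos: str, available_time: float,
                          min_duration: dict) -> str:
    """Return the relax position reachable within the available time."""
    if available_time >= min_duration[(current_pos, "FullRelax")]:
        return "FullRelax"
    return "PartialRelax"
```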
Table 1. Specification of key-arm positions
Table 2. Specification of gesture movement durations
Movement Speed Specification. Because the robot has limited movement speed, we need a procedure to verify the temporal feasibility of gesture actions. That means the system ought to estimate the minimal duration of a hand-arm movement that moves the robot wrist from one position to another within a gesture action, as well as between two consecutive gestures. However, the NAO robot does not allow us to predict these durations before realizing the real movements. Hence, we have pre-estimated the minimal time between any two hand-arm positions in the gesture movement space, as shown in Table 2. The results in this table are used to calculate the duration of gesture phases, to eliminate inappropriate gestures (i.e. those whose allocated time is less than the time necessary to perform them) and to coordinate gestures with speech.
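A sketch of this feasibility test, under the assumption that the pre-estimated durations are stored as a pairwise lookup table; the numeric values below are illustrative, not the paper's measurements.

```python
# Temporal feasibility check: a gesture is eliminated when the time allocated
# by the speech timing is shorter than the pre-estimated minimal duration of
# its wrist trajectory (Table 2). Values are illustrative.
MIN_DURATION = {
    ("YCC-XC-Znear", "YUpperEP-XEP-Zmiddle"): 0.62,   # seconds
    ("YUpperEP-XEP-Zmiddle", "YUpperEP-XP-Zmiddle"): 0.18,
}

def is_feasible(path: list, allocated_time: float) -> bool:
    """path: the sequence of symbolic wrist positions of one gesture."""
    needed = sum(MIN_DURATION[(a, b)] for a, b in zip(path, path[1:]))
    return needed <= allocated_time
```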
3.2 Gesture Realization
The main task of this module is to compute the animation described in the BML messages received from the Behavior Planner. An example of the format of a BML message is shown in Figure 6.

Fig. 6. An example of the BML description
In our system, we focus on the synchronization of gestures with speech. This synchronization is realized by adapting the timing of the gestures to the speech timing: the temporal information of gestures within BML tags is relative to the speech (cf. Figure 6) and is specified through time markers. As shown in Figure 7, these are encoded by seven sync points: start, ready, stroke-start, stroke, stroke-end, relax and end [6]. These sync points divide a gesture action into phases such as the preparation, stroke and retraction phases defined by Kendon [4]. The most meaningful part occurs between the stroke-start and the stroke-end (i.e. the stroke phase). According to McNeill's observations [9], a gesture always coincides with or slightly precedes speech. In our system, the synchronization between gesture and speech is ensured by forcing the starting time of the stroke phase to coincide with the stressed syllables. The system has to pre-estimate the time required for realizing the preparation phase, in order to make sure that the stroke happens on the stressed syllables of the speech. This pre-estimation is done by calculating the distance between the current hand-arm position and the next desired position, and by computing the time it takes to perform this trajectory, using the values in Tables 1 and 2.

Fig. 7. Gesture phases and synchronization points
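The scheduling rule described here can be summarized in a few lines of Python; the function and parameter names are illustrative.

```python
# Stroke-speech synchronization: the preparation phase is started early enough
# that the stroke lands exactly on the stressed syllable. min_duration is the
# pairwise lookup table of pre-estimated travel times (Tables 1 and 2).
def schedule_gesture(stroke_time: float, earliest_start: float,
                     current_pos: str, stroke_pos: str,
                     min_duration: dict):
    prep_needed = min_duration[(current_pos, stroke_pos)]
    prep_start = stroke_time - prep_needed
    if prep_start < earliest_start:
        return None  # allocated time too short: the gesture is eliminated
    return {"start": prep_start, "stroke": stroke_time}
```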
The last Execution module (cf. Figure 3) translates the gesture descriptions into joint values of the robot. The symbolic positions of the robot hand-arm (i.e. the combination of the three values within the BML tags horizontal-location, vertical-location and location-distance) are translated into concrete values of four robot joints, ElbowRoll, ElbowYaw, ShoulderPitch and ShoulderRoll, using Table 1. The shape of the robot hands (i.e. the value indicated within the hand-shape tag) is translated into the values of the RHand and LHand joints respectively. The palm orientation (i.e. the value specified within the palm-orientation tag) and the direction of the extended wrist concern the wrist joints. As NAO has only the WristYaw joint, there is no symbolic description of the direction of the extended wrist in the gesture description; the palm orientation is translated into a value of the WristYaw joint by comparing the current and the desired orientations of the palm. Finally, the joint values and the timing of the movements are sent to the robot. The animation is obtained by interpolating between joint values with the robot's built-in proprietary procedures [2]. Data to be sent to the robot (i.e. timed joint values) are placed in a waiting list. This mechanism allows the system to receive and process a series of BML messages continuously. Certain BML messages can be executed with a higher priority by using an attribute specifying their priority level. This can be used when the robot has to suspend its current actions to perform an exceptional gesture (e.g. make a greeting gesture to a new listener while telling a story).
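The sketch below puts these execution-step elements together: a symbolic-to-joint lookup (the real values come from Table 1; those here are invented) and a priority-ordered waiting list. The robot-side call is a placeholder for NAO's built-in interpolation procedures [2].

```python
# Execution sketch: symbolic key poses become timed joint values queued in a
# waiting list; a priority attribute lets an urgent BML message pre-empt the
# current one. Joint angles below are illustrative, not Table 1 entries.
import heapq
import itertools

POSITION_TABLE = {
    "YUpperEP-XEP-Zmiddle": {"ElbowRoll": 0.8, "ElbowYaw": 1.2,
                             "ShoulderPitch": -1.0, "ShoulderRoll": -0.3},
}

waiting_list = []         # entries: (priority, time, tie-breaker, joint dict)
_tie = itertools.count()  # keeps heap comparisons away from the dicts

def enqueue(keypose: str, t: float, priority: int = 1):
    heapq.heappush(waiting_list,
                   (priority, t, next(_tie), POSITION_TABLE[keypose]))

def run(robot):
    while waiting_list:
        _, t, _, joints = heapq.heappop(waiting_list)
        # Placeholder call: on NAO the timed joint values are interpolated by
        # the robot's built-in proprietary procedures [2].
        robot.interpolate(list(joints), list(joints.values()), t)
```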
4 Conclusion and Future Work
In this paper, we have presented an expressive gesture model for the humanoid robot NAO. The realization of the gestures is synchronized with speech. Intrinsic constraints (e.g. joint and speed limits) are also taken into account. Gesture expressivity is computed in real time by setting the values of a set of parameters that modulate the gestural animation. In the near future, we aim at improving the movement speed specification with Fitts' law (i.e. simulating human movement). So far, the model has been developed for arm-hand gestures only; in the next stage, we will extend the system to head and torso gestures. Then, the system needs to be equipped with a feedback mechanism, which is important to re-adapt to the actual state of the robot while scheduling gestures. Last but not least, we aim to validate our model through perceptive evaluations: we will test how expressive the robot is perceived to be when reading a story.
Acknowledgment. This work has been partially funded by the French ANR GVLEX project.
References
1. Calbris, G.: Contribution à une analyse sémiologique de la mimique faciale et gestuelle française dans ses rapports avec la communication verbale. Ph.D. thesis (1983)
2. Gouaillier, D., Hugel, V., Blazevic, P., Kilner, C., Monceaux, J., Lafourcade, P., Marnier, B., Serre, J., Maisonnier, B.: Mechatronic design of NAO humanoid. In: Robotics and Automation, ICRA 2009, pp. 769–774. IEEE Press (2009)
3. Hartmann, B., Mancini, M., Pelachaud, C.: Implementing Expressive Gesture Synthesis for Embodied Conversational Agents. In: Gibet, S., Courty, N., Kamp, J.-F. (eds.) GW 2005. LNCS (LNAI), vol. 3881, pp. 188–199. Springer, Heidelberg (2006)
4. Kendon, A.: Gesture: Visible Action as Utterance. Cambridge University Press (2004)
5. Kipp, M., Neff, M., Albrecht, I.: An annotation scheme for conversational gestures: How to economically capture timing and form. Language Resources and Evaluation 41(3), 325–339 (2007)
6. Kopp, S., Krenn, B., Marsella, S., Marshall, A.N., Pelachaud, C., Pirker, H., Thórisson, K.R., Vilhjálmsson, H.: Towards a Common Framework for Multimodal Generation: The Behavior Markup Language. In: Gratch, J., Young, M., Aylett, R.S., Ballin, D., Olivier, P. (eds.) IVA 2006. LNCS (LNAI), vol. 4133, pp. 205–217. Springer, Heidelberg (2006)
7. Kushida, K., Nishimura, Y., Dohi, H., Ishizuka, M., Takeuchi, J., Tsujino, H.: Humanoid robot presentation through multimodal presentation markup language MPML-HR. In: AAMAS 2005 Workshop on Creating Bonds with Humanoids. IEEE Press (2005)
8. Martin, J.C.: The contact video corpus (2009)
9. McNeill, D.: Hand and Mind: What Gestures Reveal about Thought. University of Chicago Press (1992)
10. Ng-Thow-Hing, V., Luo, P., Okita, S.: Synchronized gesture and speech production for humanoid robots. In: Intelligent Robots and Systems (IROS), pp. 4617–4624. IEEE Press (2010)
11. Pelachaud, C.: Modelling multimodal expression of emotion in a virtual agent. Philosophical Transactions of the Royal Society B: Biological Sciences 364(1535), 3539 (2009)
12. Prillwitz, S., Leven, R., Zienert, H., Hanke, T., Henning, J., et al.: HamNoSys Version 2.0: Hamburg Notation System for Sign Languages: An Introductory Guide, vol. 5. University of Hamburg (1989)
13. Rich, C., Ponsleur, B., Holroyd, A., Sidner, C.: Recognizing engagement in human-robot interaction. In: Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction, pp. 375–382. ACM Press (2010)
14. Salem, M., Kopp, S., Wachsmuth, I., Joublin, F.: Generating robot gesture using a virtual agent framework. In: Intelligent Robots and Systems (IROS), pp. 3592–3597. IEEE Press (2010)