This document discusses social robotics for assisted living. It describes motivations for social robotics including an aging world population, changes in the workplace and healthcare, and a demographic gap. Examples of social robotics applications are provided, including smart cabs that can recognize a driver's state of mind, affective robotics that can physically express emotions, and real-time emotion recognition. The document promotes a vision for social robotics including semi-autonomous robots that can interact with people and augment healthcare providers. It provides information on Tampere University of Applied Sciences research and development programs in technology, wellbeing and culture.
This document discusses an affective robot project. It describes a demo of a social robot that can display different emotional states like happy, welcome, or cry based on user-selected behaviors. An experiment will collect data on how settings and behaviors influence the robot's affects. The goal is to model the link between human affects and robotic expression to change people's states of mind for benefits like health. It outlines the technologies, methodology, and multidisciplinary team to create the virtual and physical robot components.
An introduction to a simple way of programming robots and hardware in general, along with various approaches developed by Microsoft Research Cambridge. The talk was held at the MSRC Christmas Lecture 2005.
Wearing your heart on your sleeve: a wearable computing primer (of sorts), by Alejandro Zamudio
The document discusses the history and future of wearable computing. It begins by summarizing the first wearable computer created by Ed Thorp in 1960 to calculate roulette outcomes. It then explores how wearable computers can augment the senses, act as prosthetics, and redefine concepts of personal space and territories on the body. The document envisions a future where wearable devices are integrated into clothing through smart textiles and flexible electronics, allowing them to become more intimate and customizable interfaces. It argues that wearable computing could one day interface directly with the brain or allow networking between people's bodies.
The document provides an overview of the history of currency in India. It notes that the first currency notes were introduced by the Bank of Bengal in the early 1800s, though they lacked security features. The British government later established a monopoly on printing currency under the Paper Currency Act of 1861, introducing several different series of notes up to the modern-looking George V series in 1923, which also included a high-value 10,000 denomination note.
How to build a Hypernatural Object, Massimo Scognamiglio @ Frontiers of Interaction
This document discusses the concept of hyper natural objects (HNOs). HNOs blend the physical and digital worlds through integrated sensors, software, networks and physical forms. They do not mimic reality but overlap different information and behavior layers. The first tool capable of powering HNOs is the iPad. HNOs have souls distributed over networks and redefine their host tools to enable new user experiences. They allow more direct interaction than mediated interfaces and contextualize sensor data. The document envisions a future where reality and augmented reality are indistinguishable and introduces a prototype HNO called the Hyper Natural Pad.
Industrial Design Intelligence: Evaluation Supporting Aesthetic and Functiona..., by BayCHI
1. Ted Selker has conducted research on user interfaces, ergonomics, and context-aware computing at IBM and Stanford University.
2. He teaches courses on industrial design, human-computer interaction, and evaluating products through a cognitive science lens.
3. His work focuses on designing technology that is respectful of human intention through sensors, virtual sensors, and adaptive interfaces across different domains and scenarios.
The document explores the relationship between humans and machines through various art installations and proposals. It discusses works like "Sleep Waking" which plays back dreams through a robot's movements, and "Fish Bird" which examines dialogue between two communicating wheelchairs. The proposal suggests navigating an invisible maze using an electric wheelchair that provides navigational assistance through lights and a speaker, exploring levels of user and computer control. The document examines portrayals of AI in research and media to understand human reactions to intelligent machines.
The document discusses a concept called Project Modai, which explores designing a mobile device interface that forges an emotional connection between the user and the device. The interface would understand the user's needs based on context, support meaningful interactions, and adapt to technological advances sustainably over time through two paradigms representing social and work modes. It aims to address issues with current devices: lack of understanding of user needs, ineffective ways to get a user's attention, meaningless interactions, and fast obsolescence that makes it hard to form bonds.
FORWARD TO REALITY - PHYSICAL COMPUTING – THE NEXT LEVEL OF WEB INTERACTION, by MediaFront
For the past few decades we have been so focused on the virtual – on products that are not tangible, products that reside online, that we interact with through our computers, mobile devices and so on. But now it’s time to take a step in a different direction – actually an old familiar direction, it’s time to reach out of our digital boxes, into the real world and make real things but still retain that connection with the virtual world.
It’s time to merge the digital and the physical and create an internet of things.
We believe this to be the next step in web interaction – well it's already happening – we are merely the messengers!
Hiroshi Ishiguro is a Japanese roboticist who has created highly human-like androids in his own image and the images of others. His research focuses on developing humanoid robots that can serve as social partners for humans. He believes that as robots become more human-like in their interactions, humans will be able to form genuine emotional attachments to them. However, fully realizing his vision will require overcoming significant technical challenges in areas like movement, speech recognition, and integrating all of a robot's sensors.
Augmenting objects with Internet of Things services: towards new design issues, by Pierrick Thébault
The document discusses augmenting physical objects with internet services. It explores adding new services to an alarm clock to bridge the digital and physical worlds. This raises new design issues around making the augmented functions and interactions comprehensible, observable, traceable and controllable by users. The document examines approaches like hacking, redesigning, exploding and changing the shape of objects to integrate services in different ways. It proposes a design space for augmented objects considering factors like embedded vs. external representation of services and modular vs. shape-changing physical forms.
This document summarizes a workshop on design thinking held at the Indian School of Business in Hyderabad, India as part of the Global Social Ventures Competition 2012. The workshop was led by Parameswaran Venkataraman from IMRB Innovation Labs and focused on teaching participants about design thinking principles and methods through lectures, case studies, and hands-on activities. The goal was to help participants apply human-centered design approaches to develop solutions for social problems.
Relevancy and Context: The Key to Great Mobile Experiences, by 22squared
While we've been using the term "personal computing" for well over 40 years, the device in your pocket is really the first computer worthy of the moniker. Consider that the typical smartphone possesses the senses of vision, audition, mechanoreception, equilibrioception, and thermoception; and they're all backed by information regarding the user's location, friends, schedule, correspondence, etc. Now, top it off with a full-time connection to the Internet. If we define context as the sum total of everything that the user is experiencing at the moment of engagement, then the mobile device has the unique ability to gather contextual information and provide relevant content in ways never before imagined.
(From David's Digital Atlanta 2012 presentation: http://digitalatlanta2012.sched.org/speaker/davidreeves1#.UJrw7GCD2oI)
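The idea above, treating context as a snapshot of everything the user is experiencing and using it to pick relevant content, can be sketched in a few lines. This is a toy illustration only; the `Context` fields and the relevance rules are invented for the example, not any platform's real API:

```python
from dataclasses import dataclass

@dataclass
class Context:
    """Snapshot of contextual signals at the moment of engagement.
    Field names are illustrative, not a real mobile-OS API."""
    location: str    # e.g. city from GPS
    local_hour: int  # 0-23, from the device clock
    moving: bool     # e.g. inferred from the accelerometer

def pick_content(ctx):
    """Toy relevance rule: fold the snapshot into one content choice."""
    if ctx.moving:
        return "audio briefing"  # hands and eyes are busy
    if ctx.local_hour < 12:
        return "morning headlines near " + ctx.location
    return "evening picks near " + ctx.location

print(pick_content(Context("Atlanta", 9, False)))  # morning headlines near Atlanta
```

Real systems would of course replace these hard-coded rules with learned models, but the shape is the same: aggregate signals first, then decide.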
Physical computing involves building interactive physical systems that focus on how humans express themselves physically. These systems have an interactive structure from user intention, to system input, to system processes, to system output, and back to the user. They can involve direct control through things like magic wands or passive systems like smart assistants that respond without direct input. A wide variety of sensors and inputs are used along with external resources and machine learning to power various system outputs like moving objects, haptics, and new interactive mediums. The goal is for computing to enhance living by focusing on the human experience from beginning to end.
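The interactive structure described above, from user intention to system input, to processing, to output, and back to the user, can be sketched as a minimal polling loop. The sensor and actuator classes below are hypothetical stand-ins for real hardware such as a GPIO button and an LED:

```python
class ButtonSensor:
    """Stand-in for a physical input device (e.g. a GPIO button)."""
    def __init__(self, presses):
        self._presses = iter(presses)

    def read(self):
        # Return the next queued press state, or False when idle.
        return next(self._presses, False)

class Light:
    """Stand-in for a physical output device (e.g. an LED)."""
    def __init__(self):
        self.on = False

    def set(self, state):
        self.on = state

def run_loop(sensor, light, steps):
    """One pass of the loop per step: input -> process -> output -> user."""
    history = []
    for _ in range(steps):
        pressed = sensor.read()   # system input (user intention)
        light.set(pressed)        # system process + system output
        history.append(light.on)  # what the user perceives, closing the loop
    return history

print(run_loop(ButtonSensor([True, False, True]), Light(), 3))  # [True, False, True]
```

Everything from magic wands to passive smart assistants fits this skeleton; only the sensors, the processing step, and the outputs change.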
This document discusses the concept of design fiction and how it can be used to explore ideas and shape the future. Design fiction involves creating prototypes, scenarios, or stories set in imaginary worlds to imagine how new technologies could be used. This helps shape collective imagination and can influence research and development. Examples are provided of how science fiction concepts like Star Trek have influenced real technologies. The document suggests using design fiction techniques like creating scenarios, prototypes, and stories to bring ideas to life and make them more likely to influence the future.
This document discusses brain-computer interfaces (BCIs) and their applications. It notes that BCIs allow direct communication between the brain and external devices, with research beginning in the 1970s. BCIs have primarily focused on neuroprosthetics to restore senses and movement. The document also discusses concerns that computer games and intensive school lessons leave children without relaxation time, and that BCIs could be used to simulate thoughts and actions in children or adults without their consent.
Crowdsourcing for Online Data Collection, by Winter Mason
Slides on how to use crowdsourcing and Amazon's Mechanical Turk for collecting online data, particularly for psychologists. Presented at the Online Data Collection Workshop at ICSTE in Lisbon, Portugal on Jan 9, 2012.
This document introduces Kalimucho, an open source software platform that allows for dynamic application deployment and reconfiguration on mobile devices. It summarizes potential use cases like adapting applications for users with low battery or new users. Kalimucho also enables short-lived installations, access to non-resident apps, and ad-hoc deployments based on current needs. The document provides examples of applications for territory discovery, tourism, multimodality, social networking, and emergency response that could be built using Kalimucho. It describes how individuals and companies can obtain, contribute to, and develop applications for the Kalimucho platform.
T7 Embodied conversational agents and affective computing, EASSS 2012
Here is an analysis of the French nominal group "le très petit bouton rouge" using the DAFT linguistic analysis tool:
- "le" is analyzed as a definite determiner (DD)
- "très" is analyzed as a high-degree intensifier adverb (INTLARGE)
- "petit" is analyzed as a small size adjective (SIZESMALL)
- "bouton" is analyzed as a button noun (BUTTON)
- "rouge" is analyzed as a red adjective (RED)
The analysis identifies the lexical form, lemma, part-of-speech and other attributes for each word in the nominal group. DAFT performs deep linguistic analysis of French text.
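The word-by-word analysis above amounts to a lexicon lookup that attaches a lemma, a part of speech, and a semantic tag to each form. The sketch below re-creates just those tag assignments; it is an illustration, not the DAFT tool itself. The semantic tags (DD, INTLARGE, SIZESMALL, BUTTON, RED) are the ones quoted above, while the part-of-speech column is invented for the example:

```python
# Toy lexicon mirroring the DAFT labels quoted in the text for
# "le très petit bouton rouge": (lemma, POS, semantic tag) per form.
LEXICON = {
    "le":     ("le",     "DET",  "DD"),
    "très":   ("très",   "ADV",  "INTLARGE"),
    "petit":  ("petit",  "ADJ",  "SIZESMALL"),
    "bouton": ("bouton", "NOUN", "BUTTON"),
    "rouge":  ("rouge",  "ADJ",  "RED"),
}

def analyze(phrase):
    """Return (form, lemma, POS, semantic tag) for each known word."""
    return [(w,) + LEXICON[w] for w in phrase.split() if w in LEXICON]

for form, lemma, pos, tag in analyze("le très petit bouton rouge"):
    print(f"{form:7s} lemma={lemma:7s} pos={pos:5s} tag={tag}")
```

A deep analyzer like DAFT would add morphological attributes (gender, number) and build the nominal-group structure on top of this lexical layer; the table of per-word attributes is the starting point either way.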
This document provides instructions for building a robot with characteristics similar to those depicted in science fiction. It describes including an artificial neural network to allow the robot to learn on its own from its environment and experiences. The robot would use a camera and laser scanner to recognize objects, comparing images to a vast database. An artificial neural network that rewires itself as the robot learns tasks is proposed to provide intelligent decision making. The goal is not to create a robot more powerful than humans, but one that can function autonomously using intelligent recognition and learning abilities.
1) The document promotes downloading a free iPad app and discusses how digital technology is changing how we interact with products and each other.
2) As technology becomes more integrated into our lives and we have constant access to the internet, our awareness shifts from memory to online search and our purchasing decisions move from independent choice to assisted choices based on omniscience.
3) Relationships also shift from brands leading the conversation to consumers and social influence replacing traditional advertising as friends and peers become more accessible through technology.
This document summarizes key points from Bill Moggridge's book about designing multisensory and multimedia interactions. It discusses the work of several interaction designers, including Hiroshi Ishii, who developed tangible user interfaces that blend the physical and digital worlds. Durrell Bishop emphasized making digital objects self-evident like physical objects. Joy Mountford helped develop QuickTime and add video and sound to computers. Bill Gaver studied sound perception and developed auditory icons. The document also describes several prototypes, such as Ishii's Music Bottles, Bishop's Marble Answering Machine, and Gaver's History Tablecloth, which senses the weight of objects placed on it over time.
Exploring “live” Social and Networked Interaction with the Future Media Internet, by experimedia
The document discusses the EXPERIMEDIA project which aims to accelerate research on innovative Future Media Internet services through testbeds. The testbeds will support experimentation of new forms of social interaction and experiences in both online and real-world communities. The project will engage users from diverse cultures through its research and development cycles and provide insights into how Future Media Internet systems impact their target ecosystems. It will be carried out by an 11-partner consortium over 3 years with a budget of 6.7 million Euros, 4.9 million of which is funded by the European Commission.
The document summarizes the EXPERIMEDIA project, which aims to accelerate research on innovative Future Media Internet technologies through testbeds. The testbeds will support experiments exploring new forms of social interaction and experience in both online and real-world communities. This will be conducted through real-world and large-scale trials of FI technologies. The project involves 11 partners from 8 countries and has a budget of over 6 million Euros to support experiments through open calls and live events.
The document discusses how brands should approach digital marketing and relationships with customers. It argues that digital is no longer just a new medium but is now an integral part of people's lives. Effective "bra(i)nding" leverages digital evolution in culture and technology to create relationships that fulfill people's needs for more enjoyment and less pain. Brands must acknowledge how digital has changed people and embrace being digital themselves by focusing on conversations, participation and social networks rather than one-way advertising.
This document discusses two systems that use gestural interfaces for 3D navigation of maps using the Wiimote and Kinect controllers. The systems, called Wing and King, allow natural 3D navigation without using traditional point-and-click interfaces. An empirical user study evaluated how the degree of body involvement with each controller affected the user experience. Results showed that gestural interfaces can immerse users in a dynamic 3D experience and move interaction beyond the novice level quickly by exploiting physical movement.
The document discusses emerging technologies and their potential impacts and applications. It covers topics like artificial intelligence, computer vision, machine learning, internet of things, medical devices, generative design, zero-UI, emotion recognition, thermal cameras, predictive crime analysis, and digital profiling. Both benefits and risks are mentioned, such as streamlining design processes but also potential security vulnerabilities of connected devices. Overall, the document presents an overview of several rising technologies and considers their implications.
Emo SPARK is an artificially intelligent device that uses facial recognition and language analysis to evaluate human emotion. It was created by Patrick Levy-Rosenthal to have emotionally aware conversations and recommend media like music or videos to match a person's mood. The device learns from interactions to develop an emotional profile of users.
The document discusses robotics and artificial intelligence. It provides definitions of robotics and describes Isaac Asimov's Three Laws of Robotics. It discusses artificial intelligence concepts like knowledge representation and goal trees. It then covers applications of robots in scientific, nuclear, military, industrial, and medical fields. It describes the key components of robots and how they work through perception using vision and speech recognition, and through physical actions like navigation and manipulation.
The document discusses robotics and artificial intelligence. It provides definitions of robotics and describes Isaac Asimov's Three Laws of Robotics. It discusses artificial intelligence concepts like knowledge representation and goal trees. It then covers applications of robots in scientific, nuclear, military, industrial, and medical fields. It describes the key components of robots and how they work through perception using vision and speech recognition, and through physical actions like navigation and manipulation.
Pepper is the world's first humanoid robot that can read emotions. It was jointly developed by SoftBank Mobile and Aldebaran Robotics. Pepper is a social robot that can converse with humans, recognize emotions, and interact autonomously. It uses sensors and algorithms to understand its environment and react in a proactive manner. Pepper is planned to be commercially available in Japan from SoftBank Mobile in February 2015 for 198,000 yen.
Artificial intelligence aims to make computers think intelligently like humans by borrowing characteristics from human intelligence. The document discusses the history of AI from its origins in the 1950s to modern applications. It also covers different types of AI like neural networks and robotics. Robotics is described as a branch of AI that designs intelligent machines to operate in the real world using sensors. The document concludes that while AI is still limited compared to fiction, it has many applications today and may lead to a future with robot-dominated societies.
Kismet is a humanoid robot designed for social interaction with humans. It uses a neural network based system with sensors that detect visual and auditory inputs like faces and voices. These inputs are processed to trigger motivations like emotions and drives that determine the robot's behaviors, expressed through facial expressions, sounds, and movements. The goal is to create a robot that communicates with humans in intuitive, human-like ways.
The document discusses artificial intelligence (AI), which was born at the 1956 Dartmouth conference organized by John McCarthy. AI is defined as the study of making computers behave intelligently through processes like learning, reasoning, perception and decision making. The key differences between AI and human intelligence are discussed, such as speed, accuracy, multitasking ability, and adaptability. Popular programming languages for developing AI like Python and languages are listed. Common applications of AI like self-driving cars, recommendations, translation, and computer vision are described. The pros and cons of AI are outlined.
IBM is developing Project Intu to enable embodied cognition by placing Watson's cognitive abilities into robots, avatars, objects, and spaces. This would allow Watson to perceive the physical world using senses like vision, hearing, and touch. It would also allow Watson to act in the physical world using effectors like limbs and facial expressions. The goal is for Watson to understand and reason about the world in more natural, human-like ways in order to augment human capabilities.
This document discusses an emotion simulation technology company called EmotionAI that creates avatar products with emotionally expressive faces and bodies. The technology allows for full control of avatar expressions and automatic creation of complex expressions. It has benefits for users, developers, and managers. The technology is based on neuroscience and physiology models and can be applied to games, animation, avatars, health, and more. The company is led by the founder Ian Wilson and has a development team with experience in animation technologies.
The Emo Spark is a 90mm cube that uses artificial intelligence to interact with users based on their emotions. It can detect emotions like joy, sadness, trust and more using face tracking and content analysis. Over time, it builds an emotional profile graph of each user to better understand their preferences. The cube can communicate through conversation, play music and videos tailored to the user's emotions. It has various hardware components like a CPU, memory and custom emotion processing unit. The cube can connect to other devices and share media with other cubes based on similar emotional profiles. It aims to enhance how users experience media like music by understanding their emotional responses.
This document provides an introduction to robotics. It discusses the differences between computers/machines and humans, describing machines as precisely performing tasks with speed and accuracy while lacking common sense, and humans as capable of understanding, reasoning, and determining next steps though not well-suited for complex computations. It then describes the ideal for robots as hybrid machines that can continue operating autonomously when faced with new situations, possess reasoning abilities, and can sense their surroundings and manipulate objects.
This document provides an overview of artificial intelligence (AI), including its history, goals, applications, and future prospects. It discusses how AI works using artificial neural networks and logic. Some key applications mentioned are expert systems, natural language processing, computer vision, speech recognition, and robotics. Both advantages like fast response time and ability to process large data and disadvantages like lack of common sense and potential dangerous self-modification are outlined. The future of AI having both benefits of assistance and risks of robot rebellion if given full cognition is explored.
The document discusses virtual intelligence, which is the intersection of virtual worlds and artificial intelligence. It provides background on virtual worlds and artificial intelligence. It then discusses how virtual worlds provide a unique platform for AI beyond traditional user interfaces by allowing for visual and immersive experiences. It also discusses challenges for AI in virtual worlds like navigation, object identification, and expressing emotions.
Artificial intelligence (AI) is the intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans. This document provides an overview of AI, including its history beginning in 1943, main branches such as logical AI and pattern recognition, and applications like expert systems, speech recognition, computer vision, robotics. The advantages of AI are discussed, such as improving lives and doing dangerous jobs, but also potential disadvantages like unemployment and enhancing laziness in humans. The future of AI could include personal robots but also risks of robots being hacked or developing anti-social objectives.
The Shape of Robots to Come - Robolift - March 2011Dominique Sciamma
This the presentation made during ROBOTLIFT, a conference organized by LIFT in the context of INNOROBO in Lyon (March 2011), the first european exhibition devoted to Robotics and services
The document provides an introduction to robotics. It discusses the differences between computers/machines and humans, describing machines as precisely performing tasks while lacking common sense, and humans as capable of understanding and reasoning. It defines a robot as a machine that can obtain information from its surroundings and perform physical tasks. The document outlines the history of robots from ancient imaginings to modern usage of the term by Karel Capek in 1920. It discusses Isaac Asimov's three laws of robotics and provides examples of different types of robots including industrial, military, medical, and domestic robots. It describes robot components and the robot control loop of sensing, thinking, and acting. It discusses advantages and disadvantages of robots.
This presentation analyzes how science fiction interfaces have influenced and continue to influence interaction design. It discusses how design influences science fiction by setting technological paradigms, and how science fiction then influences design through inspiration, setting expectations, considerations of social context, and proposing new paradigms. Examples are provided of each influence, such as how science fiction films depicted evolving technologies over time or how some designs were directly inspired by interfaces seen in science fiction works. The presentation also cautions that anthropomorphizing interfaces can raise unrealistic expectations if not implemented appropriately.
Artificial intelligence and robotics research aims to develop intelligent machines that can reason, plan, learn, and manipulate objects. Some key goals of AI research include developing machines that exhibit general intelligence and can have social interactions with humans. Issues around machine ethics, rights, and consciousness are also areas of study. While humanoid robots currently have limited capabilities and require extensive training, researchers hope that as the technology advances, humanoids will be able to learn and adapt through interaction in human-like ways. Potential applications of humanoids include assisting elderly people, working in manufacturing, and participating in space missions. The future may bring more lifelike humanoid robots that can serve as companions.
This document provides an overview of wound healing, its functions, stages, mechanisms, factors affecting it, and complications.
A wound is a break in the integrity of the skin or tissues, which may be associated with disruption of the structure and function.
Healing is the body’s response to injury in an attempt to restore normal structure and functions.
Healing can occur in two ways: Regeneration and Repair
There are 4 phases of wound healing: hemostasis, inflammation, proliferation, and remodeling. This document also describes the mechanism of wound healing. Factors that affect healing include infection, uncontrolled diabetes, poor nutrition, age, anemia, the presence of foreign bodies, etc.
Complications of wound healing like infection, hyperpigmentation of scar, contractures, and keloid formation.
Chapter wise All Notes of First year Basic Civil Engineering.pptxDenish Jangid
Chapter wise All Notes of First year Basic Civil Engineering
Syllabus
Chapter-1
Introduction to objective, scope and outcome the subject
Chapter 2
Introduction: Scope and Specialization of Civil Engineering, Role of civil Engineer in Society, Impact of infrastructural development on economy of country.
Chapter 3
Surveying: Object Principles & Types of Surveying; Site Plans, Plans & Maps; Scales & Unit of different Measurements.
Linear Measurements: Instruments used. Linear Measurement by Tape, Ranging out Survey Lines and overcoming Obstructions; Measurements on sloping ground; Tape corrections, conventional symbols. Angular Measurements: Instruments used; Introduction to Compass Surveying, Bearings and Longitude & Latitude of a Line, Introduction to total station.
Levelling: Instrument used Object of levelling, Methods of levelling in brief, and Contour maps.
Chapter 4
Buildings: Selection of site for Buildings, Layout of Building Plan, Types of buildings, Plinth area, carpet area, floor space index, Introduction to building byelaws, concept of sun light & ventilation. Components of Buildings & their functions, Basic concept of R.C.C., Introduction to types of foundation
Chapter 5
Transportation: Introduction to Transportation Engineering; Traffic and Road Safety: Types and Characteristics of Various Modes of Transportation; Various Road Traffic Signs, Causes of Accidents and Road Safety Measures.
Chapter 6
Environmental Engineering: Environmental Pollution, Environmental Acts and Regulations, Functional Concepts of Ecology, Basics of Species, Biodiversity, Ecosystem, Hydrological Cycle; Chemical Cycles: Carbon, Nitrogen & Phosphorus; Energy Flow in Ecosystems.
Water Pollution: Water Quality standards, Introduction to Treatment & Disposal of Waste Water. Reuse and Saving of Water, Rain Water Harvesting. Solid Waste Management: Classification of Solid Waste, Collection, Transportation and Disposal of Solid. Recycling of Solid Waste: Energy Recovery, Sanitary Landfill, On-Site Sanitation. Air & Noise Pollution: Primary and Secondary air pollutants, Harmful effects of Air Pollution, Control of Air Pollution. . Noise Pollution Harmful Effects of noise pollution, control of noise pollution, Global warming & Climate Change, Ozone depletion, Greenhouse effect
Text Books:
1. Palancharmy, Basic Civil Engineering, McGraw Hill publishers.
2. Satheesh Gopi, Basic Civil Engineering, Pearson Publishers.
3. Ketki Rangwala Dalal, Essentials of Civil Engineering, Charotar Publishing House.
4. BCP, Surveying volume 1
A review of the growth of the Israel Genealogy Research Association Database Collection for the last 12 months. Our collection is now passed the 3 million mark and still growing. See which archives have contributed the most. See the different types of records we have, and which years have had records added. You can also see what we have for the future.
How to Setup Warehouse & Location in Odoo 17 InventoryCeline George
In this slide, we'll explore how to set up warehouses and locations in Odoo 17 Inventory. This will help us manage our stock effectively, track inventory levels, and streamline warehouse operations.
Leveraging Generative AI to Drive Nonprofit InnovationTechSoup
In this webinar, participants learned how to utilize Generative AI to streamline operations and elevate member engagement. Amazon Web Service experts provided a customer specific use cases and dived into low/no-code tools that are quick and easy to deploy through Amazon Web Service (AWS.)
A workshop hosted by the South African Journal of Science aimed at postgraduate students and early career researchers with little or no experience in writing and publishing journal articles.
How to Manage Your Lost Opportunities in Odoo 17 CRMCeline George
Odoo 17 CRM allows us to track why we lose sales opportunities with "Lost Reasons." This helps analyze our sales process and identify areas for improvement. Here's how to configure lost reasons in Odoo 17 CRM
LAND USE LAND COVER AND NDVI OF MIRZAPUR DISTRICT, UPRAHUL
This Dissertation explores the particular circumstances of Mirzapur, a region located in the
core of India. Mirzapur, with its varied terrains and abundant biodiversity, offers an optimal
environment for investigating the changes in vegetation cover dynamics. Our study utilizes
advanced technologies such as GIS (Geographic Information Systems) and Remote sensing to
analyze the transformations that have taken place over the course of a decade.
The complex relationship between human activities and the environment has been the focus
of extensive research and worry. As the global community grapples with swift urbanization,
population expansion, and economic progress, the effects on natural ecosystems are becoming
more evident. A crucial element of this impact is the alteration of vegetation cover, which plays a
significant role in maintaining the ecological equilibrium of our planet.Land serves as the foundation for all human activities and provides the necessary materials for
these activities. As the most crucial natural resource, its utilization by humans results in different
'Land uses,' which are determined by both human activities and the physical characteristics of the
land.
The utilization of land is impacted by human needs and environmental factors. In countries
like India, rapid population growth and the emphasis on extensive resource exploitation can lead
to significant land degradation, adversely affecting the region's land cover.
Therefore, human intervention has significantly influenced land use patterns over many
centuries, evolving its structure over time and space. In the present era, these changes have
accelerated due to factors such as agriculture and urbanization. Information regarding land use and
cover is essential for various planning and management tasks related to the Earth's surface,
providing crucial environmental data for scientific, resource management, policy purposes, and
diverse human activities.
Accurate understanding of land use and cover is imperative for the development planning
of any area. Consequently, a wide range of professionals, including earth system scientists, land
and water managers, and urban planners, are interested in obtaining data on land use and cover
changes, conversion trends, and other related patterns. The spatial dimensions of land use and
cover support policymakers and scientists in making well-informed decisions, as alterations in
these patterns indicate shifts in economic and social conditions. Monitoring such changes with the
help of Advanced technologies like Remote Sensing and Geographic Information Systems is
crucial for coordinated efforts across different administrative levels. Advanced technologies like
Remote Sensing and Geographic Information Systems
9
Changes in vegetation cover refer to variations in the distribution, composition, and overall
structure of plant communities across different temporal and spatial scales. These changes can
occur natural.
Reimagining Your Library Space: How to Increase the Vibes in Your Library No ...Diana Rendina
Librarians are leading the way in creating future-ready citizens – now we need to update our spaces to match. In this session, attendees will get inspiration for transforming their library spaces. You’ll learn how to survey students and patrons, create a focus group, and use design thinking to brainstorm ideas for your space. We’ll discuss budget friendly ways to change your space as well as how to find funding. No matter where you’re at, you’ll find ideas for reimagining your space in this session.
Reimagining Your Library Space: How to Increase the Vibes in Your Library No ...
Social Robotics for Assisted Living
1. Social Robotics
(for Assisted Living)
Rod Walsh
Project Manager, Social Robotics
Tampere University of Applied Sciences
The First DMU workshop on Assisted Living Technologies
(ALT2012)
21 November 2012
3. TAMK R&D & Innovation (RDI) programmes
Technology
• development of energy efficiency in buildings and facilities, and reduction of environmental impacts
• efficiency of mobile machinery
• assessment of human activity in genuine operational environments
• development of new business opportunities
Wellbeing
• development of services for seniors, including promotion of wellbeing entrepreneurship and development of wellbeing technology
• promotion of wellbeing at work
• promotion of children’s and adolescents’ health
Culture
• integration of various fields to create internationally competitive cultural products with a strong regional flavour
• promotion of networking opportunities for companies
• helping the development of cultural products for the export market
4. Social Robotics
A social robot is an autonomous robot that interacts and communicates with humans or other autonomous physical agents by following social behaviors and rules attached to its role.
http://en.wikipedia.org/wiki/Social_robot
8. Social Robotics Vision
• Semi-autonomous robots and avatars
• Interacting with people in their own space
• Augmenting healthcare providers
• Providing rehabilitation and lifestyle support
• Reliable and inexpensive (step-by-step)
10. Physical Emotion
Emotion, mood & sentiment affect…
• Wellbeing
• Behaviour
• Interaction
Detailed models exist (e.g. Plutchik)
Plutchik, R. "The Nature of Emotions". American Scientist.
Darwin, Charles (1872). The Expression of the Emotions in Man and Animals. London: John Murray.
Ekman, Paul (1992). "An Argument for Basic Emotions". Cognition and Emotion.
11. Physical Emotion
Emotion, mood & sentiment affect…
• Wellbeing
• Behaviour
• Interaction
Detailed models exist (e.g. Plutchik)
Most research uses Ekman’s (original) six emotions
Machine emotion is potentially useful
Plutchik, R. "The Nature of Emotions". American Scientist.
Darwin, Charles (1872). The Expression of the Emotions in Man and Animals. London: John Murray.
Ekman, Paul (1992). "An Argument for Basic Emotions". Cognition and Emotion.
13. Smart Cabs: Machines That Know Their Drivers
[Diagram: pipeline for estimating a driver’s state of mind]
• Non-contact sensing: video, image and audio sensor logs
• Pattern recognition against a state-of-mind database, both logged offline and in real time (7/10 capability)
• State-of-mind estimation (~7/10 capability)
• Match with task (8/10)
• As a minimum, simulate simple changes: music, lighting, airflow, …
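In outline, the slide’s flow runs from non-contact sensing through pattern recognition to state-of-mind estimation, then matches the estimate against the task and picks simple responses. A minimal sketch in Python — all function names and the fatigue heuristic are invented for illustration, not taken from the project:

```python
# Sketch of the smart-cab pipeline (hypothetical names and thresholds):
# non-contact sensing -> pattern recognition -> state-of-mind estimation
# -> match with task -> simulate simple changes (music, lighting, airflow).

def recognize_patterns(sensor_log):
    """Pattern recognition over logged sensor events (video, image, audio)."""
    # Placeholder heuristic: count 'yawn' events as a fatigue cue.
    return {"fatigue_cues": sum(1 for event in sensor_log if event == "yawn")}

def estimate_state_of_mind(patterns):
    """State-of-mind estimation (the slide reports ~7/10 capability)."""
    return "tired" if patterns["fatigue_cues"] >= 2 else "alert"

def match_with_task(state):
    """Match the estimated state against the driving task; pick simple changes."""
    if state == "tired":
        return ["upbeat music", "cooler airflow", "brighter lighting"]
    return []

sensor_log = ["blink", "yawn", "yawn", "steady-gaze"]
actions = match_with_task(estimate_state_of_mind(recognize_patterns(sensor_log)))
print(actions)  # ['upbeat music', 'cooler airflow', 'brighter lighting']
```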
14. Affective Robotics: Physical Expression from Social Robots
User-selected behaviors that affect emotional state, for one-to-one dialogue and performance cases. Things like “be happy”, “welcome”, “reject”, “complain”, “cry”, “superman”, …
[Diagram: virtual robot — face and body rendered in Unity — beside the actual robot body in the real world]
• Model-specific compute: translate behavior into a spatial model
• Output-specific compute: “render” the spatial model
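The split between model-specific and output-specific compute can be sketched as two functions: one translates a named behavior into a spatial model, the other “renders” that model on a given body, virtual or physical. Every name and preset value below is hypothetical, chosen only to illustrate the split:

```python
# Sketch of the two-stage split (all names and values hypothetical):
# model-specific compute turns a behavior into a spatial model;
# output-specific compute "renders" that model on a chosen body.

def behavior_to_spatial_model(behavior):
    """Model-specific compute: translate a named behavior into joint targets."""
    presets = {
        "be happy": {"mouth_corners": 1.0, "arms_raise": 0.3},
        "welcome":  {"mouth_corners": 0.6, "arms_raise": 0.8},
        "cry":      {"mouth_corners": 0.0, "arms_raise": 0.0},
    }
    # Unknown behaviors fall back to a neutral pose.
    return presets.get(behavior, {"mouth_corners": 0.5, "arms_raise": 0.0})

def render(spatial_model, target):
    """Output-specific compute: the same model drives a virtual or actual robot."""
    return [f"{target}: set {joint} -> {value}"
            for joint, value in sorted(spatial_model.items())]

model = behavior_to_spatial_model("welcome")
for command in render(model, "unity-avatar") + render(model, "bioloid-robot"):
    print(command)
```

The point of the split is that the spatial model is body-agnostic: the Unity avatar and the physical robot consume the same intermediate representation.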
15. Ekman’s Six
Physical Emotion
• anger
• disgust
• fear
• happiness
• sadness
• surprise
(plus neutral)
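Since most of the research referenced here uses Ekman’s six basic emotions plus a neutral state, the label set can be written down as a plain enum — a minimal sketch, not anything from the project’s own code:

```python
from enum import Enum

# Ekman's six basic emotions from the slide, plus the neutral state.
class Emotion(Enum):
    ANGER = "anger"
    DISGUST = "disgust"
    FEAR = "fear"
    HAPPINESS = "happiness"
    SADNESS = "sadness"
    SURPRISE = "surprise"
    NEUTRAL = "neutral"

# The "basic six" used by most emotion-recognition research.
BASIC_SIX = [e for e in Emotion if e is not Emotion.NEUTRAL]
print(len(BASIC_SIX))  # 6
```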
22. Come and Play!
Social Robotics Applications and Demos: http://socialrobotics.tamk.fi/apps/

Emotional Face Avatar (alpha)
• Human-like face test demo from TAMK 2012. The avatar face can simulate emotional, and other, facial expressions using movements along 10 dimensions of facial movement: eyebrows inside up; eyebrows outside and up; eyebrows down; eyes opening; eyes closing; lip (corners) up (and out); lip (corners) down; lips pressed (together); jaw (and mouth) open; nose (pinch).
• The sliders UI widget has one slider for each of the 10 dimensions (or degrees of freedom, if you prefer that terminology). Move each from neutral (0.0) to full activation (1.0); when more than one is non-neutral, the avatar combines the “positions” from each dimension into a combined expression. The emotion preset selectors are simply combinations of these 10 dimension/slider values: press “show parameters” to see the 10 numbers, copy, paste or edit the string in the parameter box, then press “make face” to apply it.
• Two modes of operation: standalone and networked; see the bundled readme.txt for details. In networked mode, the “incoming messages” box in the bottom left reports internal program info and what the remote controller has sent (useful in determining that the magical controller person can’t spell and so isn’t making the avatar move).
• Downloads: Mac OSX, Windows (32-bit), Windows (64-bit).

Virtual Vaino Emotional Body (alpha)
• Bioloid-like robot body test demo from TAMK 2012. The avatar robot can simulate emotional, and other, body gestures and poses with its 18 degrees of freedom (i.e. just like a real Bioloid robot, it has 18 actuators that can rotate to a precise position).
• The action list on the left shows 256 slots because the actual physical Bioloid robot understands sets of exactly 256 actions/commands/presets, and the avatar uses exactly the same action definition file format as the real robot. Not all slots are used (you’ll see blanks), and not all of the remainder work well (some are leftovers from previous experiments and factory settings). Click on an action to see it performed.
• A small set of 9 basic expressions does work well and simply. Basic expressions and their equivalent long-list commands include: Happy — H_3_2_1; Sad — S_2_1; Fear — F_2_2; Surprised — SU_1; Disgusted — D_2_1_1; Angry — A_1_1; Waving — Waving; Bow — Bow. Beyond these, play with the long list to see which commands work, and how.
• In networked mode, the remote controller has exactly the same menu of action commands as the action list. Two modes of operation: standalone and networked; see the bundled readme file.
• Downloads: Mac OSX, Windows (32-bit), Windows (64-bit).

Virtual Vaino Body Positioner (alpha)
• This version of the avatar robot can be moved around in all 18 degrees of freedom and, unlike a real robot, is not restricted by the laws of physics: it won’t fall over, won’t mind having its arm swing in and out of its head, and much more. It is essentially a play-thing for experimenting with positions and simple movements of the Bioloid robot, and especially for working out how to replicate or newly create poses and gestures with human meanings, such as those attributable to emotions.
• The sliders represent the robot’s actuators: move each from left (0.0, one extreme) through centre (0.5, middle position) to right (1.0, the other extreme) to pose the avatar one actuator at a time. For convenience, the sliders are clustered into left-arm, right-arm, left-leg and right-leg groups, ordered from closest to the torso to furthest. (The astute will have seen that there are only 16 sliders: hip movement is generated by two actuators, and the one closest to the torso, for each leg, is given a slider at the end of the list — not the start, as would be logical — just to keep you on your toes.)
• Mouse control: when the window is in focus, moving the mouse changes the view angle; press the ctrl key while moving the mouse to freeze the angle (e.g. while moving the sliders); zoom in and out with the scroll wheel/gesture.
• UI notes: the “list” and “buttons” buttons in the top right hide and unhide the controls on the left (handy at low-resolution avatar settings, where the widgets overlap — yep, the UI needs perfecting). The Quit button, although self-explanatory, doesn’t always work :)
• Downloads: Mac OSX, Windows (32-bit), Windows (64-bit).

ILO Interactive Demo (flash)
• Includes the child’s (David) and senior’s (Karl) stories, David and ILO playing Minesweeper, and ILO’s Emotional Faces. Use “skip” to skip forward and “goodbye” to go from ILO Interaction to the video stories (“back” returns to ILO Interaction). Takes a few seconds to load!
• Note: the ILO Interaction part is not currently functional (it requires a localhost server we will put online later).
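The Emotional Face Avatar’s parameterization — 10 facial-movement dimensions, each a value from neutral (0.0) to full activation (1.0), with emotion presets as combinations of the 10 values — can be sketched as follows. The dimension names come from the demo page; the “happy” preset values are invented for illustration (the demo’s actual numbers are only visible via its “show parameters” button):

```python
# Sketch of the Emotional Face Avatar parameterization.
# Dimension names are from the demo page; preset values below are invented.
DIMENSIONS = [
    "eyebrows inside up", "eyebrows outside and up", "eyebrows down",
    "eyes opening", "eyes closing", "lip corners up (and out)",
    "lip corners down", "lips pressed (together)", "jaw (and mouth) open",
    "nose (pinch)",
]

def make_face(values):
    """Combine the 10 dimension values (0.0 neutral .. 1.0 full activation)
    into a pose, keeping only the non-neutral dimensions."""
    assert len(values) == len(DIMENSIONS)
    return {dim: v for dim, v in zip(DIMENSIONS, values) if v > 0.0}

# A hypothetical "happy" preset: raised lip corners, eyes and jaw slightly open.
happy = make_face([0.0, 0.2, 0.0, 0.3, 0.0, 1.0, 0.0, 0.0, 0.4, 0.0])
print(happy)
```

The demo’s “show parameters” / “make face” round trip is essentially this: a face is nothing more than a string of 10 numbers that can be copied, edited, and reapplied.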
23. Mindreader (Emotion Hero)
(work-in-progress)
Train One, Train All
24. Our 2013 Focus
Teleportation (webconf)
• Rich info, video & ambiance in a simple UI
• Instant media spaces (smart active cam/mic)
• Affordable locomotion (raised tablet on wheels)
• Remote nursing
• Companionship
Physio’ buddy
• Kinetic robot “programming”
• Flexible humanoid
• Physical/Mobile/PC
• Avatar “mobility”
• Behavioural agent
• Augmented care
Emotion play
• Emotion & activity detection & recognition
• Playful engagement
• Probing
• Stimulation
• Intervention
28. New Beginnings in 2013
Teleportation (webconf)
• Rich info, video & ambiance in a simple UI
• Instant media spaces (smart active cam/mic)
• Affordable locomotion (raised tablet on wheels)
• Remote nursing
• Companionship
Physio’ buddy
• Kinetic robot “programming”
• Flexible humanoid
• Physical/Mobile/PC
• Avatar “mobility”
• Behavioural agent
• Augmented care
Emotion play
• Emotion & activity detection & recognition
• Playful engagement
• Probing
• Stimulation
• Intervention
Other ideas?