Using Physiological sensing and scene reconstruction in remote collaboration – University of Auckland
In this research, we explore novel Augmented Virtual Teleportation (AVT) methods based on the hybrid technologies of Augmented Reality (AR), Virtual Reality (VR), 3D live scene capturing, and multimodal interaction. Natural behavioral cues (hand gestures, eye gaze, etc.) used in face-to-face communication play an essential role in effective collaboration. In contrast, most Mixed Reality (MR) remote collaboration systems have mainly investigated computer-generated visual cues rendered as graphic objects or text for delivering instructions. In this research, we first study the natural communication cues that people use in face-to-face collaboration. We then develop a novel remote collaboration system that enables people to communicate remotely as if face-to-face. The system will contain two main parts: 1) live scene capturing to enable real-time environment reconstruction and sharing of a user’s location, and 2) multimodal input such as gaze, gesture, and physiological signals to enhance remote communication. So far we have conducted two experiments studying collaboration between a person with an AR interface and a remote user with a VR interface using multimodal input. We found that the remote collaboration system could provide a significantly stronger sense of co-presence for both the local and remote users by combining gaze and gesture cues than by using the gaze cue alone. The combined cues were also rated significantly higher than gaze cues alone in terms of the ease of conveying spatial actions. We plan to extend this system to study the effect of incorporating physiological signals into communication, especially on co-presence and usability. There are many potential applications of this research in areas such as training, tourism, entertainment, gaming, and others. In conclusion, this thesis aims to study the effect of incorporating multimodal input and scene capture into remote collaboration systems in terms of presence, engagement, and task efficiency. This research will produce many benefits, such as design guidelines for future AVT systems, software libraries that make it easy to create AVT systems, sample datasets from the experiments conducted, research publications, and more.
Crowd Recognition System Based on Optical Flow Along with SVM classifier – IJECEIAES
This manuscript discusses abnormality detection in crowded scenes. There are few mechanisms in public places that can prevent a mishap or alert the concerned authorities about suspects in a crowd, and in a crowded scene there is always a chance of a mishap such as a terrorist attack or a crime. Our target is to find techniques to identify such activities and possibly prevent them. If a crowd member exhibits abnormal behavior, we can flag that person as a suspect so that the concerned authority can look into the matter. There are various methods to identify abnormal behavior; the proposed approach is based on an optical flow model, which can detect sudden changes in the motion of an individual within the crowd. First, the main region of motion is extracted with the help of a motion heat map. A Harris corner detector is then used to extract points of interest in the extracted motion area, and optical flow is estimated at these points. After analyzing the optical flow, which essentially provides an energy level for each frame, a threshold value is fixed. The thresholded flow-energy values are passed to an SVM classifier, which produces a strong result with 99.71% accuracy. This approach is very useful in real-time video surveillance systems, where a machine can monitor unwanted crowd activity.
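The pipeline described above (motion heat map, Harris interest points, per-frame optical-flow energy, SVM) can be sketched with OpenCV and scikit-learn. The snippet below is not the authors' implementation; the difference threshold, detector settings, and the single scalar "energy" feature are illustrative assumptions.

    # Minimal sketch of the described pipeline (illustrative, not the authors' code):
    # motion heat map -> Harris interest points -> sparse optical flow ->
    # per-frame flow "energy" -> SVM classification.
    import cv2
    import numpy as np
    from sklearn.svm import SVC

    def frame_energy(prev_gray, gray):
        """Return a scalar flow-energy feature for one pair of grayscale frames."""
        # Crude motion heat map: pixels that changed between consecutive frames.
        motion = cv2.absdiff(prev_gray, gray)
        _, mask = cv2.threshold(motion, 25, 255, cv2.THRESH_BINARY)
        # Harris-based interest points restricted to the moving region.
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01,
                                      minDistance=7, mask=mask, useHarrisDetector=True)
        if pts is None:
            return 0.0
        # Sparse Lucas-Kanade optical flow at those points.
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        ok = status.ravel() == 1
        old, new = pts[ok].reshape(-1, 2), nxt[ok].reshape(-1, 2)
        if len(old) == 0:
            return 0.0
        # "Energy" of the frame = mean squared displacement of the tracked points.
        return float(np.mean(np.sum((new - old) ** 2, axis=1)))

    # Training (hypothetical data): energies are per-frame features from labelled
    # normal/abnormal clips, labels are 0 (normal) or 1 (abnormal).
    # clf = SVC(kernel="rbf").fit(np.array(energies).reshape(-1, 1), labels)

In practice the per-frame energies would be collected over labelled normal and abnormal clips before fitting the classifier and fixing the decision threshold.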
This is a guest lecture given by Mark Billinghurst at the University of Sydney on March 27th 2024. It discusses some future research directions for Augmented Reality.
Mobile robot competitions are a vital way of bringing science and engineering to the worldwide public, and they are also an excellent way of testing and comparing different research approaches. This paper discusses how today's research challenges in intelligent and autonomous mobile robots are being addressed by the Autonomous Driving competition that takes place at the Portuguese Robotics Open, an annual mobile robotics competition. Karthick Vishal. K | Dr. S. Venkatesh Kumar, "A Study on Mobile Robotics in Robotics", published in International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN: 2456-6470, Volume-2, Issue-6, October 2018. URL: http://www.ijtsrd.com/papers/ijtsrd18649.pdf
Robust and Efficient Coupling of Perception to Actuation with Metric and Non-... – Darius Burschka
The talk motivates a rethinking of the way perception passes information to the control modules. Metric information is not the native space of the camera, and it apparently is not used in biology for navigation either. Early abstraction of information from images loses a lot of important information that can be used directly for following (visual servoing), motion estimation (motion blur), and collision relations (optical flow clustering). In this talk I present ways of using image information in a "classical" way that does not require any learning and runs on low-power CPUs.
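As one concrete illustration of the "collision relations (optical flow clustering)" idea, the sketch below clusters dense optical flow vectors so that independently moving image regions separate from the background motion. It is a generic example, not the speaker's method; the grid step, flow scaling, and cluster count are arbitrary assumptions.

    # Illustrative sketch: cluster dense optical flow into coherently moving regions.
    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    def flow_clusters(prev_gray, gray, n_clusters=3, step=8):
        # Dense Farneback flow: positional arguments are (prev, next, flow, pyr_scale,
        # levels, winsize, iterations, poly_n, poly_sigma, flags).
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        h, w = gray.shape
        ys, xs = np.mgrid[0:h:step, 0:w:step]            # subsampled pixel grid
        fx = flow[::step, ::step, 0]
        fy = flow[::step, ::step, 1]
        # Feature per sample: position plus (scaled) flow, so clusters stay spatially coherent.
        feats = np.stack([xs.ravel(), ys.ravel(),
                          10.0 * fx.ravel(), 10.0 * fy.ravel()], axis=1)
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
        return labels.reshape(xs.shape)                  # cluster id per grid cell

Regions whose flow cluster differs from the dominant (background) cluster are candidates for independently moving obstacles and hence for collision checks.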
Haptic Virtual Fixtures to Assist Endonasal Micro Robotic Surgery through Vir... – saulnml
Robot-assisted endonasal neurosurgery requires precise manipulation under restricted workspace and indirect visibility conditions. Virtual Fixtures (VF) are algorithms developed to assist human operators in man-machine cooperative systems. In a master-slave surgical system, VF are typically implemented by imposing kinematic constraints on the slave manipulator. Haptic Virtual Fixtures (HVF), in contrast, apply forces to the human operator on the master side, either to keep the tool out of a forbidden region (FRVF) or to provide guidance (GVF). Although VF prevent harmful manipulation, a trade-off between automation and interactivity is necessary. In this work, we propose an HVF framework that combines FRVF and GVF to constrain the movement of the micro robotic tool for endonasal surgery, using Virtual Reality (VR) simulation.
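A minimal sketch of how such a combined fixture could be rendered on the master device is given below. It is not the paper's controller; the spherical forbidden region, the stiffness gains, and the single guidance point are hypothetical placeholders.

    # Hedged sketch: combine a forbidden-region VF (repulsion from an unsafe zone)
    # with a guidance VF (attraction toward a planned path) into one master-side force.
    import numpy as np

    def hvf_force(tool_pos, sphere_center, sphere_radius, path_point,
                  k_frvf=200.0, k_gvf=30.0, margin=0.005):
        """Force (N) to render on the haptic master for a tool at tool_pos (m)."""
        force = np.zeros(3)
        # FRVF: push the tool outward when it comes within `margin` of the forbidden sphere.
        to_tool = tool_pos - sphere_center
        dist = np.linalg.norm(to_tool)
        penetration = (sphere_radius + margin) - dist
        if penetration > 0.0 and dist > 1e-9:
            force += k_frvf * penetration * (to_tool / dist)
        # GVF: gently pull the tool toward the nearest point of the planned path.
        force += k_gvf * (path_point - tool_pos)
        return force

The FRVF term is a one-sided spring that only acts inside the safety margin, while the GVF term acts everywhere; tuning the two gains sets the automation-versus-interactivity trade-off mentioned above.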
DragGAN is a new AI image editing tool that lets you manipulate images with simple drag controls, developed by researchers at the Max Planck Institute for Informatics and collaborators.
It uses generative AI to create realistic changes to the structure and appearance of objects in images. You can also rotate objects as if they were 3D models.
The user drags points on the image to indicate the desired edit, and DragGAN generates a new image that reflects those edits.
Background: Introduction to Augmented Reality
Projection-based Augmented Reality
Ongoing Research of the Speaker
Ending remarks: Further Research & Future Path
[ICRA 2019] Introduction to Tutorial on Dynamical System-based Learning from ... – Nadia Barbara
These slides were part of the ICRA 2019 Tutorial on Dynamical System-based Learning from Demonstration, given by researchers from the Learning Algorithms and Systems Laboratory (LASA) at EPFL.
Reactive Navigation of Autonomous Mobile Robot Using Neuro-Fuzzy System – Waqas Tariq
Neuro-fuzzy systems have been used for robot navigation because of their ability to exert human-like expertise and to utilize acquired knowledge to develop autonomous navigation strategies. In this paper, a neuro-fuzzy system is proposed for reactive navigation of a mobile robot using behavior-based control. The proposed algorithm uses discrete-sampling-based optimal training of a neural network. To ascertain the efficacy of the proposed system, its performance is compared to that of purely neural and purely fuzzy approaches. Simulation results, along with a detailed behavior analysis, show the effectiveness of our algorithm in all kinds of obstacle environments.
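For readers unfamiliar with the fuzzy, behavior-based part, the sketch below shows a hand-written fuzzy rule base for reactive obstacle avoidance. It is a generic illustration rather than the paper's system (which tunes its parameters with a neural network trained on sampled data); the distance ranges and rules are assumptions.

    # Generic fuzzy reactive-avoidance sketch: three distance readings (metres)
    # are mapped to a steering command in [-1, 1] by a tiny Mamdani-style rule
    # base with weighted-average defuzzification.
    import numpy as np

    def near(d):   # fully "near" at 0 m, no longer near beyond 1 m
        return float(np.clip(1.0 - d, 0.0, 1.0))

    def far(d):    # starts being "far" at 0.5 m, fully far beyond 2 m
        return float(np.clip((d - 0.5) / 1.5, 0.0, 1.0))

    def steer(left_dist, front_dist, right_dist):
        """Fuzzy steering command; positive values mean turn left."""
        turn_left   = min(far(left_dist),  near(right_dist))   # obstacle on the right
        turn_right  = min(near(left_dist), far(right_dist))    # obstacle on the left
        go_straight = far(front_dist)                          # path ahead is clear
        num = (+1.0) * turn_left + 0.0 * go_straight + (-1.0) * turn_right
        den = turn_left + go_straight + turn_right + 1e-9
        return num / den

    print(steer(left_dist=2.0, front_dist=1.5, right_dist=0.3))  # positive: turn left

A neuro-fuzzy version replaces the hand-picked membership breakpoints with parameters learned from sampled training data, which is the role of the neural network in the paper.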
A Central Pattern Generator based Nonlinear Controller to Simulate Biped Loco... – Waqas Tariq
This paper deals with designing a biological controller for a biped robot to generate biped locomotion inspired by human gait oscillation. The nonlinear dynamics of the biological controller are modeled by designing a Central Pattern Generator (CPG) built from coupled relaxation oscillators. In this work the CPG consists of four two-way coupled Rayleigh oscillators. The four major leg joints (i.e. two knee joints and two hip joints) are considered in this model. The CPG parameters are optimized using a Genetic Algorithm (GA) to match actual human locomotion captured by the Intelligent Gait Oscillation Detector (IGOD) biometric device. The limit-cycle behavior and the dynamic analysis of the biped robot have been successfully simulated on the Spring Flamingo robot in the YOBOTICS environment.
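To make the oscillator model concrete, the sketch below integrates two mutually coupled Rayleigh oscillators with forward Euler; the paper's CPG couples four such units (two hips, two knees) and tunes the parameters with a GA, whereas the values here are purely illustrative.

    # Two coupled Rayleigh oscillators, x'' = eps*(1 - x'^2)*x' - omega^2*x + coupling,
    # integrated with forward Euler. Parameters are illustrative, not GA-tuned.
    import numpy as np

    def simulate(T=20.0, dt=0.001, eps=1.0, omega=2.0 * np.pi, k=0.5):
        n = int(T / dt)
        x = np.array([0.1, -0.1])   # "joint angles" of the two oscillator units
        v = np.zeros(2)             # their velocities
        traj = np.empty((n, 2))
        for i in range(n):
            coupling = k * (x[::-1] - x)   # each unit is pulled toward the other;
                                           # positive k favors in-phase locking,
                                           # flip the sign for anti-phase gait patterns
            a = eps * (1.0 - v ** 2) * v - omega ** 2 * x + coupling
            v += dt * a
            x += dt * v
            traj[i] = x
        return traj  # limit-cycle trajectories usable as joint set-points

    trajectory = simulate()

Each oscillator settles onto a stable limit cycle regardless of its initial state, which is what makes such units attractive as rhythmic joint-trajectory generators.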
Advances in Mixed Reality (MR) technologies are reshaping collaborative practices. The seamless integration of physical and virtual elements enhances the perception of the working environment, providing a more enriched collaborative task experience. While revealing intriguing potential across various sectors, wearing head-mounted displays (HMDs) can pose challenges in communication and in understanding others’ behaviours. This paper analyses the main elements of collaborative augmented practices through the case study of Hololiver, an MR system developed to assist surgeons in planning laparoscopic liver surgeries. The work discusses guidelines for designing interfaces to preserve awareness in MR interactions.
This presentation deals with the combination of Sixth Sense technology and Robotics.
The autonomous robots are controlled using basic hand gestures through Sixth Sense technology.
Comparison of Human Machine Interfaces to Control a Robotized Wheelchair – Suzana Viana Mota
DEMO: https://www.youtube.com/watch?v=8c8PuASMdFk
Assistive robotics solutions help people recover their lost mobility and autonomy in daily life. This work presents a comparison between two Human Machine Interfaces (HMIs), based on head postures and facial expressions, for controlling a robotized wheelchair. Comparing both strategies, JoyFace proved to be the safest and easiest to use; RealSense, on the other hand, demanded more physical effort but may be the appropriate solution for people who have suffered severe trauma, as most of them cannot even move their heads. Although both HMIs need improvement, these strategies have shown themselves to be promising technologies for people paralyzed from the neck down to control a robotized wheelchair.
Significant progress in computer vision in recent years has excited a whole field of researchers. In robotics we are now able to use these techniques to build robotic systems that can observe, understand, and interact with the world; in short, we can build robots that grasp the world.
This is an overview of the efforts in the Australian Centre for Robotic Vision under the umbrella of "Robotic Manipulation", led by Dr. Juxi Leitner.
Slides used for a series of presentations in Australia and Europe in Sep/Oct 2018.
Feel free to reach out for opportunities to juxi@lyro.io
Similar to Dumb Robots for Smart People (Direct Control Interfaces for Robotics)
UiPath Test Automation using UiPath Test Suite series, part 4 – DianaGray10
Welcome to part 4 of the UiPath Test Automation using UiPath Test Suite series. In this session, we will cover a Test Manager overview along with the SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
The Art of the Pitch: WordPress Relationships and Sales – Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. You will get practical tips and strategies for successful relationship building that leads to closing the deal.
Pushing the limits of ePRTC: 100ns holdover for 100 days – Adtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
DevOps and Testing slides at DASA Connect – Kari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps means. We ended with a lovely workshop in which the participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf – Paige Cruz
Monitoring and observability aren’t traditionally found in software curriculums, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company’s observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring & observability to the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and I will share these foundational concepts to build on:
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... – DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
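As a small taste of the Python binding mentioned above, the snippet below (a hedged example, assuming the pypowsybl package is installed via pip install pypowsybl) loads the bundled IEEE 14-bus test network and runs an AC power flow.

    # Hedged example using the pypowsybl binding: load a bundled test network,
    # run an AC load flow, then inspect the resulting bus voltages.
    import pypowsybl as pp

    network = pp.network.create_ieee14()      # bundled IEEE 14-bus example grid
    results = pp.loadflow.run_ac(network)     # AC power flow
    print(results[0].status)                  # convergence status of the main component
    print(network.get_buses()[["v_mag", "v_angle"]].head())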
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 – Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Welcome to the first live UiPath Community Day Dubai! Join us for this unique occasion to meet our local and global UiPath Community and leaders. You will get a full view of the MEA region's automation landscape and the AI-powered automation technology capabilities of UiPath. Also, hosted by our local partner Marc Ellis, you will enjoy a half-day packed with industry insights and networking with automation peers.
📕 Curious about our agenda? Wait no more!
10:00 Welcome note - UiPath Community in Dubai
Lovely Sinha, UiPath Community Chapter Leader, UiPath MVPx3, Hyper-automation Consultant, First Abu Dhabi Bank
10:20 A UiPath cross-region MEA overview
Ashraf El Zarka, VP and Managing Director MEA, UiPath
10:35 Customer Success Journey
Deepthi Deepak, Head of Intelligent Automation CoE, First Abu Dhabi Bank
11:15 The UiPath approach to GenAI with our three principles: improve accuracy, supercharge productivity, and automate more
Boris Krumrey, Global VP, Automation Innovation, UiPath
12:15 Discover how Marc Ellis leverages tech-driven solutions in recruitment and managed services.
Brendan Lingam, Director of Sales and Business Development, Marc Ellis
PHP Frameworks: I want to break free (IPC Berlin 2024) – Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk aims to encourage a more independent use of PHP frameworks and a move towards more flexible, future-proof PHP development.
Removing Uninteresting Bytes in Software Fuzzing – Aftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined tools from two critical Linux packages -- libxml2's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
These are slides of the talk given at the IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), 2022.
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdf – Peter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
Essentials of Automations: The Art of Triggers and Actions in FME – Safe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Epistemic Interaction - tuning interfaces to provide information for AI support – Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
15. Daniel Rakita, Bilge Mutlu, and Michael Gleicher. 2017. A Motion Retargeting Method for Effective Mimicry-based Teleoperation of Robot Arms. HRI ’17.
Daniel Rakita, Bilge Mutlu, and Michael Gleicher. 2018. RelaxedIK: Real-time Synthesis of Accurate and Feasible Robot Arm Motion. RSS ’18.
Daniel Rakita, Bilge Mutlu, and Michael Gleicher. 2019. Effects of Onset Latency and Robot Speed Delays on Mimicry-Control Teleoperation. Submitted for publication.
45. Daniel Rakita, Bilge Mutlu, and Michael Gleicher. 2018. An Autonomous Dynamic Camera Method for Effective Remote Teleoperation. HRI ’18.
Daniel Rakita, Bilge Mutlu, and Michael Gleicher. 2019. Remote Telemanipulation with Adapting Viewpoints in Visually Complex Environments. RSS ’19.
Daniel Rakita, Bilge Mutlu, Michael Gleicher, and Laura M. Hiatt. 2019. Shared control–based bimanual robot manipulation. Science Robotics 4, 30 (May 2019).
47. It takes more than 2 one-handed systems
We must help with coordination
61. [Slide diagram labels: User Motion Input; Motion Retargeting Optimization; Manipulation Robot Configuration (per update); Camera Robot Motion Optimization; Camera Robot Configuration (per update); Live Video Stream.]
71. Pragathi Praveena, Guru Subramani, Bilge Mutlu, and Michael Gleicher. 2019. Characterizing Input Methods for Human-to-Robot Demonstrations. HRI ’19.
Guru Subramani, Michael Zinn, and Michael Gleicher. 2018. Inferring geometric constraints in human demonstrations. Conference on Robot Learning ’18.
Guru Subramani, Michael Hagenow, Michael Zinn, and Michael Gleicher. Constraint Inference Using Pose and Wrench Measurements. Submitted for publication.
Guru Subramani, Michael Hagenow, Bolun Zhang, Michael Zinn, and Michael Gleicher. Robust Replay of Human Demonstrations using Identified Constraints. Submitted for publication.
75. Photo by Meghan Schiereck on Unsplash
Photo by Mat Reding on Unsplash
Photo from Robotiq Website
76. Desirable properties in a demonstration method
Efficient – Prevent wasteful use of resource
Subjective performance – Demonstrator perception
Facile – Demonstrator ease of use
Amenable to analysis – Post-processing
Desired demonstrations – Experimenter objectives
Affords quality demonstrations – Data quality
Easy to learn – Process to proficiency
Preference – Demonstrator liking
Feedback – Access to demonstration performance
Plausibility – Equivalent strategies
Feasibility – Correspondence
Instrumentable – Measurement capabilities
77. Showing tasks to robots (common approaches)
Hand Demonstrations
• Easy for demonstrator
• High quality demonstrations
• Hard to instrument
• Hard to map to robot
Kinesthetic Teaching
• Hard for demonstrator
• Poor quality demonstrations
• Easy to instrument (with caveat)
• Feasible mappings (with caveats)