My talk at the Amazon Development Centre in Berlin, covering work on how to improve robotic reaching, grasping, and manipulation, and on getting away from chasing grasp success rates.
A guest lecture about (deep) reinforcement learning and ongoing projects, including those at QUT. This is for the Machine Learning course (CAB 420) at the Queensland University of Technology (QUT).
Deep reinforcement learning framework for autonomous driving (Gopika Gopinath)
Motivated by Google DeepMind's successful demonstrations of learning Atari games and Go, this work proposes a framework for autonomous driving using deep reinforcement learning.
It incorporates Recurrent Neural Networks for information integration, enabling the car to handle partially observable scenarios.
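As a rough illustration of that idea, here is a minimal sketch (my assumption, not the authors' code) of a recurrent Q-network for a partially observable driving task; the 84x84 grayscale input and the three-action set are illustrative choices:

# Minimal sketch: a recurrent value network for partially observable driving.
import torch
import torch.nn as nn

class RecurrentDrivingPolicy(nn.Module):
    def __init__(self, n_actions=3, hidden_size=256):
        super().__init__()
        # Small conv encoder for 84x84 grayscale frames (assumed input size).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.rnn = nn.GRU(input_size=64 * 9 * 9, hidden_size=hidden_size,
                          batch_first=True)
        self.q_head = nn.Linear(hidden_size, n_actions)

    def forward(self, frames, hidden=None):
        # frames: (batch, time, 1, 84, 84). The GRU integrates information
        # over time, which is what lets the agent cope with seeing only part
        # of the scene in any single frame.
        b, t = frames.shape[:2]
        feats = self.encoder(frames.view(b * t, *frames.shape[2:])).view(b, t, -1)
        out, hidden = self.rnn(feats, hidden)
        return self.q_head(out), hidden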
Team ACRV's experience at #AmazonPickingChallenge 2016 (Juxi Leitner)
Building on Repeatable Grasping Experiments
Team ACRV: Lessons Learned from the Amazon Picking Challenge 2016
Juxi Leitner, ACRV, Queensland University of Technology (Team ACRV, 2016, 2017)
We describe our entry into the 2016 Amazon Picking Challenge (APC) and the lessons learned from deploying a complex robotic system outside of the lab. To help future developments, we decided to create a new physical benchmark challenge for robotic picking to drive scientific progress and make research into (end-to-end) picking comparable. It consists of a set of 42 common objects, a widely available shelf, and exact guidelines for object arrangement using stencils.
More Related Content
Similar to Improving Robotic Manipulation with Vision and Learning @AmazonDevCentre Berlin
ACRV Research Fellow Intro/Tutorial [Vision and Action] (Juxi Leitner)
A short introduction about me and my work at the Queensland University of Technology (QUT) for the ARC Centre of Excellence for Robotic Vision.
Giving some background on Image-Based Visual Servoing (IBVS) and some research goals/ideas/avenues (a minimal IBVS sketch follows below)...
Keeping it light and simple (hopefully..)
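For readers new to IBVS, the classic control law drives the image-feature error to zero through the interaction matrix; here is a minimal sketch under standard assumptions (point features, known or estimated depth), not tied to any particular slide:

# IBVS sketch: camera velocity command v = -lambda * L^+ (s - s*).
import numpy as np

def ibvs_velocity(s, s_star, L, gain=0.5):
    """Classic image-based visual servoing law."""
    error = s - s_star                          # feature error in the image plane
    return -gain * np.linalg.pinv(L) @ error    # 6-DOF camera velocity command

def interaction_matrix_point(x, y, Z):
    """Interaction matrix of one normalised image point at depth Z."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x**2), y],
        [0, -1 / Z, y / Z, 1 + y**2, -x * y, -x],
    ])

Stacking one such 2x6 block per tracked point gives the full interaction matrix L used above.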
These slides were used in the guest lecture for QUT's Image processing class.
The two-part presentation covers our Amazon Robotics Challenge robot #Cartman and an introduction to (deep) reinforcement learning.
Visual Attention Model: Teach a Robot How to Watch the World by Assia Benbihi, PhD @ Thales & Georgia Institute of Technology, during the Paris Women in Machine Learning & Data Science meet-up in April 2018.
The Need For Robots To Grasp the World (Juxi Leitner)
These slides were used for a few talks over the last couple of months to excite people about intelligent robotic systems. In particular, why I believe it is important for robots to grasp the world, both in the sense of perceiving and understanding it, and in the physical sense of actually changing the state of the world by picking up objects and interacting with a wide range of items.
These slides (with slight variations) were presented at QUT, Uni Sydney, Uni Cambridge, DeepMind, Uni Birmingham, Amazon Robotics, ...
Significant progress in computer vision in recent years has excited a whole field of researchers. In robotics we are now able to use these techniques to build robotic systems that can observe, understand, and interact with the world; in short, we can build robots that grasp the world.
This is an overview of the efforts in the Australian Centre for Robotic Vision under the umbrella of "Robotic Manipulation", led by Dr. Juxi Leitner.
Slides used for a series of presentations in Australia and Europe in Sep/Oct 2018.
Feel free to reach out to juxi@lyro.io about opportunities.
Cartman, how to win the Amazon Robotics Challenge with robotic vision and deep learning #GTC18 S8842 (Juxi Leitner)
Douglas Morrison and Juxi Leitner
Australian Centre for Robotic Vision
roboticvision.org
ACRV Picking Benchmark: how to benchmark pick and place robotics research (Juxi Leitner)
Presented at the IROS workshop on "Development of Benchmarking Protocols for Robotic Manipulation".
http://ycbbenchmarks.org/IROS2017workshop.html
The ACRV Picking Benchmark has been developed over the last year to facilitate comparison of robotic systems in pick and place settings!
With the APB we propose a physical benchmark for robotic picking: overall design, objects, configuration, and guidance on appropriate technologies to solve it. Challenges are an important way to drive progress, but they occur only occasionally and the test conditions are difficult to replicate outside the challenge. This benchmark is motivated by our experience in the recent Amazon Picking Challenge and contains a commonly available shelf, 42 objects, a set of stencils, and standardized task setups.
A major focus throughout the design of this benchmark was to maximise reproducibility: a number of carefully chosen scenarios are defined, with precise instructions on how to place, orient, and align objects with the help of printable stencils. To make the benchmark as accessible as possible to the research community, a white IKEA shelf is used for all picking tasks. Furthermore, we carefully curated the set of 42 objects to ensure global availability and a reduced chance of import restrictions.
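To make the idea of standardized task setups concrete, here is a hypothetical sketch of how a single benchmark task might be encoded for reproducible runs; the field names and object names are illustrative, not the official APB specification:

# Hypothetical task encoding for reproducible picking runs.
from dataclasses import dataclass, field

@dataclass
class PickTask:
    task_id: str
    shelf_bin: str                # which bin of the IKEA shelf to pick from
    target_object: str            # one of the 42 curated objects
    stencil: str                  # printable stencil fixing the object's pose
    clutter: list = field(default_factory=list)  # other objects in the bin

tasks = [
    PickTask("apb-easy-01", "bin_A", "tennis_ball", "stencil_A1"),
    PickTask("apb-hard-07", "bin_C", "toothbrush", "stencil_C3",
             clutter=["duct_tape", "plastic_cups"]),
]
# Scoring then reduces to: same tasks, same stencil placements, count successful picks.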
How to place 6th in the Amazon Picking Challenge (ENB329, QUT) (Juxi Leitner)
A guest lecture about project management and how to organise a team for the Amazon Picking Challenge. This is for the mechatronics design project course (ENB 329) at the Queensland University of Technology (QUT).
LunaRoo: Designing a Hopping Lunar Science Payload #space #exploration (Juxi Leitner)
Presentation slides from the talk given at the IEEE Aerospace Conference (@IEEEAeroConf) 2016 in Big Sky, Montana, USA.
We describe a hopping science payload solution designed to exploit the Moon's lower gravity to leap up to 20 m above the surface. The entire solar-powered robot is compact enough to fit within a 10 cm cube, whilst providing unique observation and mission capabilities by creating imagery during the hop. The LunaRoo concept is a proposed payload to fly onboard a Google Lunar XPrize entry. Its compact form is specifically designed for lunar exploration and science missions within the constraints given by PTScientists. The core features of LunaRoo are its method of locomotion (hopping like a kangaroo) and its imaging system capable of unique over-the-horizon perception. The payload will serve as a proof of concept, highlighting the benefits of alternative mobility solutions, in particular enabling observation and exploration of terrain not traversable by wheeled robots, in addition to providing data for beyond line-of-sight planning and communications for surface assets, extending overall mission capabilities.
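As a back-of-the-envelope sanity check (mine, not from the paper), the vertical launch speed needed for a 20 m hop follows from v = sqrt(2*g*h):

# Launch speed for a 20 m apex under lunar gravity (no atmosphere, so no drag).
g_moon = 1.62                          # m/s^2, lunar surface gravity
h = 20.0                               # m, target apex of the hop
v = (2 * g_moon * h) ** 0.5
print(f"launch speed ~ {v:.1f} m/s")   # ~8.0 m/s; the same hop on Earth needs ~19.8 m/s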
Presentation about my current research in computer vision, machine learning, and robotics at the IEEE Queensland Computational Intelligence Society Colloquium at Griffith University.
My slides for the Hands-on part of the Robotic Vision Summer School 2015 in Kioloa, Australia.
This is part of the robotics workshop, aiming to teach the participants how to program the TurtleBot.
Reactive Reaching and Grasping on a Humanoid: Towards Closing the Action-Perception Loop (Juxi Leitner)
My presentation at the ICINCO 2014 (the 11th International Conference on Informatics in Control, Automation and Robotics)
Abstract: We propose a system incorporating a tight integration between computer vision and robot control modules on a complex, high-DOF humanoid robot. Its functionality is showcased by having our iCub humanoid robot pick up objects from a table in front of it. An important feature is that the system can avoid obstacles (other objects detected in the visual stream) while reaching for the intended target object. Our integration also allows for non-static environments, i.e. the reaching is adapted on the fly from the visual feedback received, e.g. when an obstacle is moved into the trajectory. Furthermore, we show that this system can be used both in autonomous and tele-operation scenarios.
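One simple way to realise such on-the-fly adaptation is a potential-field-style velocity command recomputed from visual feedback every control cycle; this is an illustrative sketch, not the paper's actual controller:

# Reactive reach: attracted to the target, repelled by detected obstacles.
import numpy as np

def reach_velocity(ee_pos, target_pos, obstacles, k_att=1.0, k_rep=0.3, margin=0.15):
    v = k_att * (target_pos - ee_pos)         # pull the end-effector toward the target
    for obs in obstacles:                     # obstacle positions from the visual stream
        d = ee_pos - obs
        dist = np.linalg.norm(d)
        if dist < margin:                     # repel only inside the safety margin
            v += k_rep * (1.0 / dist - 1.0 / margin) * d / dist**3
    return v

Because the command is recomputed each cycle, moving an obstacle into the trajectory immediately deflects the reach, which is the behaviour described above.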
Tele-operation of a Humanoid Robot Using Operator Bio-data (Juxi Leitner)
We present our work on tele-operating a complex humanoid robot with the help of bio-signals collected from the operator. The frameworks (for robot vision, collision avoidance, and machine learning) developed in our lab allow for a safe interaction with the environment when combined. This even works with noisy control signals, such as the operator's hand acceleration and their electromyography (EMG) signals. These bio-signals are used to execute equivalent actions (such as reaching and grasping of objects) on the 7 DOF arm.
Improving Robot Vision Models for Object Detection Through Interaction #ijcnn... (Juxi Leitner)
Presentation during WCCI 2014 in Beijing, China.
We propose a method for learning specific object representations that can be applied (and reused) in visual detection and identification tasks. A machine learning technique called Cartesian Genetic Programming (CGP) is used to create these models based on a series of images. Our research investigates how manipulation actions might allow for the development of better visual models and therefore better robot vision.
This paper describes how visual object representations can be learned and improved by performing object manipulation actions, such as poke, push, and pick-up, with a humanoid robot. The improvement can be measured and allows the robot to select and perform the 'right' action, i.e. the action with the best possible improvement of the detector.
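The action-selection idea can be sketched as follows; this is hypothetical code, with robot and detector standing in for the actual system components:

# Try each manipulation action and keep the one that improved the detector most.
def best_action(robot, detector, actions=("poke", "push", "pick_up")):
    gains = {}
    for action in actions:
        before = detector.score(robot.capture_images())
        robot.perform(action)                 # changes the object's pose and views
        images = robot.capture_images()
        detector.update(images)               # refine the (CGP-based) visual model
        gains[action] = detector.score(images) - before
    return max(gains, key=gains.get)          # the 'right' action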
How does it feel to be a SpaceMaster? [Erasmus Mundus - ACE Talk] (Juxi Leitner)
Last December I had the pleasure of taking part in the EM-ACE workshop held at the University of Porto, Portugal. I was invited to talk about my experience studying in the "Joint European Master in Space Science and Technology" (SpaceMaster) in front of about 60 students.
http://www.em-ace.eu/en/upload/public-docs/UPORTO_em-ace%20event_agenda.pdf
http://www.em-a.eu/en/home/rss-feed-detail/em-ace-student-event-university-of-porto-16-december-2013-1395.html
Towards Autonomous and Adaptive Humanoids [PhD Proposal @ Università della Svizzera Italiana] (Juxi Leitner)
The slides for my PhD proposal presentation in Nov 2013 at the Università della Svizzera Italiana (USI).
The proposal can be found on my webpage: http://Juxi.net/phd/
Humanoid Learns to Detect Its Own Hands #cec2013 (Juxi Leitner)
My presentation at the Congress on Evolutionary Computation (CEC) 2013 in Cancun, Mexico.
Abstract: Robust object manipulation is still a hard problem in robotics, even more so in high degree-of-freedom (DOF) humanoid robots. To improve performance, a closer integration of visual and motor systems is needed. We herein present a novel method for a robot to learn robust detection of its own hands and fingers, enabling sensorimotor coordination. It does so solely using its own camera images and does not require any external systems or markers. Our system, based on Cartesian Genetic Programming (CGP), allows us to evolve programs that perform this image segmentation task in real time on the real hardware. We show results for a Nao and an iCub humanoid, each detecting its own hands and fingers.
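A minimal sketch of the evolutionary loop this implies (illustrative, not the actual CGP implementation; the mutation operator is passed in as a placeholder):

# Evolve segmentation programs scored against labelled masks of the robot's hands.
import numpy as np

def iou(pred, mask):
    inter = np.logical_and(pred, mask).sum()
    union = np.logical_or(pred, mask).sum()
    return inter / union if union else 1.0

def fitness(program, frames, masks):
    # Overlap between the program's segmentation output and the labelled mask.
    return np.mean([iou(program(f), m) for f, m in zip(frames, masks)])

def evolve(parent, mutate, frames, masks, generations=100, children=4):
    # 1+lambda evolution strategy, the selection scheme commonly used with CGP.
    best, best_fit = parent, fitness(parent, frames, masks)
    for _ in range(generations):
        for child in (mutate(best) for _ in range(children)):
            f = fitness(child, frames, masks)
            if f >= best_fit:                 # accept equal fitness (neutral drift)
                best, best_fit = child, f
    return best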
Neuro-symbolic is not enough, we need neuro-*semantic* (Frank van Harmelen)
Neuro-symbolic (NeSy) AI is on the rise. However, simply doing machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as "predictable inference".
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Elevating Tactical DDD Patterns Through Object Calisthenics (Dorra Bartaguiz)
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what Testing in DevOps looks like. We closed with a lovely workshop in which the participants tried to find different ways to think about quality and testing in the different parts of the DevOps infinity loop.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... (UiPathCommunity)
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
- See how to accelerate model training and optimize model performance with active learning
- Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
- Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Key Trends Shaping the Future of Infrastructure.pdf (Cheryl Hung)
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The keynote covers the key trends across hardware, cloud, and open source, exploring how these areas are likely to mature and develop over the short and long term, and then considering how organisations can position themselves to adapt and thrive.
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
UiPath Test Automation using UiPath Test Suite series, part 4 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
- Execution from the Test Manager
- Orchestrator execution result
- Defect reporting
- SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Improving Robotic Manipulation with Vision and Learning @AmazonDevCentre Berlin
1. Juxi Leitner
arc centre of excellence for robotic vision
queensland university of technology
<j.leitner@qut.edu.au> http://Juxi.net
robotic vision and manipulation
ACRV @ UC Berkeley
& learning
7. Dalle Molle Institute for AI (IDSIA)
Work
Juxi Leitner
PhD Informatics / Intelligent Systems
MSc Space Robotics & Automation
BSc Information & Software Engineering
Intelligent (Space) Robots
European Space Agency (ESA)
Erasmus Intelligent Systems
Work (Humanoid) Robot Vision
Instituto Superior Técnico (IST)
Mobility Intelligent Space Systems Laboratory
About Me
Current Robotic Vision and Actions
http://Juxi.net
Queensland University of Technology (QUT)
22. The ACRV Picking Benchmark j.leitner@qut.edu.au
Jürgen ‘Juxi’ Leitner
Juxi
#ICRA2017
http://Juxi.net/acrv-picking-benchmark/
overcome limitations of current robotic system comparison
reproducible research on end-to-end TASKS
24. The ACRV Picking Benchmark j.leitner@qut.edu.au
Jürgen ‘Juxi’ Leitner
Juxi
#ICRA2017
http://Juxi.net/acrv-picking-benchmark/
overcome limitations of current robotic system comparison
reproducible research on end-to-end TASKS