A guest lecture about (deep) reinforcement learning and on-going projects includign at QUT. This is for the Machine Learningcourse (CAB 420) at the Queensland University of Technology (QUT)
Team ACRV's experience at #AmazonPickingChallenge 2016Juxi Leitner
Building on Repeatable Grasping Experiments
Team ACRV: Lessons Learned from the Amazon Picking Challenge 2016
Juxi Leitner, ACRV, Queensland University of Technology (Team ACRV, 2016, 2017)
We describe our entry into the 2016 Amazon Picking Challenge (APC) and the lessons learned from deploying a complex, robotic system outside of the lab. To help future developments decided to create a new physical benchmark challenge for robotic picking to drive scientific progress and make research into (end-to-end) picking comparable. It consists of a set of 42 common objects, a widely available shelf, and exact guidelines for object arrangement using stencils.
Improving Robotic Manipulation with Vision and Learning @AmazonDevCentre BerlinJuxi Leitner
My talk at the Amazon Development Centre in Berlin. Including work on how to improve robotic reaching, grasping and manipulation. And getting away from chasing grasp success rates.
These slides were used in the guest lecture for QUT's Image processing class.
The two part presentation consists of our Amazon Robotics Challenge robot #Cartman and some introduction to (deep) reinforcement learning.
This paper reports results of artificial neural network for robot navigation tasks. Machine
learning methods have proven usability in many complex problems concerning mobile robots
control. In particular we deal with the well-known strategy of navigating by “wall-following”.
In this study, probabilistic neural network (PNN) structure was used for robot navigation tasks.
The PNN result was compared with the results of the Logistic Perceptron, Multilayer
Perceptron, Mixture of Experts and Elman neural networks and the results of the previous
studies reported focusing on robot navigation tasks and using same dataset. It was observed the
PNN is the best classification accuracy with 99,635% accuracy using same dataset.
LEARNING OF ROBOT NAVIGATION TASKS BY PROBABILISTIC NEURAL NETWORKcsandit
This paper reports results of artificial neural network for robot navigation tasks. Machine
learning methods have proven usability in many complex problems concerning mobile robots
control. In particular we deal with the well-known strategy of navigating by “wall-following”.
In this study, probabilistic neural network (PNN) structure was used for robot navigation tasks.
The PNN result was compared with the results of the Logistic Perceptron, Multilayer
Perceptron, Mixture of Experts and Elman neural networks and the results of the previous
studies reported focusing on robot navigation tasks and using same dataset. It was observed the
PNN is the best classification accuracy with 99,635% accuracy using same dataset.
LEARNING OF ROBOT NAVIGATION TASKS BY PROBABILISTIC NEURAL NETWORKcscpconf
This paper reports results of artificial neural network for robot navigation tasks. Machine learning methods have proven usability in many complex problems concerning mobile robots
control. In particular we deal with the well-known strategy of navigating by “wall-following”. In this study, probabilistic neural network (PNN) structure was used for robot navigation tasks.
The PNN result was compared with the results of the Logistic Perceptron, Multilayer Perceptron, Mixture of Experts and Elman neural networks and the results of the previous
studies reported focusing on robot navigation tasks and using same dataset. It was observed the PNN is the best classification accuracy with 99,635% accuracy using same dataset.
Team ACRV's experience at #AmazonPickingChallenge 2016Juxi Leitner
Building on Repeatable Grasping Experiments
Team ACRV: Lessons Learned from the Amazon Picking Challenge 2016
Juxi Leitner, ACRV, Queensland University of Technology (Team ACRV, 2016, 2017)
We describe our entry into the 2016 Amazon Picking Challenge (APC) and the lessons learned from deploying a complex, robotic system outside of the lab. To help future developments decided to create a new physical benchmark challenge for robotic picking to drive scientific progress and make research into (end-to-end) picking comparable. It consists of a set of 42 common objects, a widely available shelf, and exact guidelines for object arrangement using stencils.
Improving Robotic Manipulation with Vision and Learning @AmazonDevCentre BerlinJuxi Leitner
My talk at the Amazon Development Centre in Berlin. Including work on how to improve robotic reaching, grasping and manipulation. And getting away from chasing grasp success rates.
These slides were used in the guest lecture for QUT's Image processing class.
The two part presentation consists of our Amazon Robotics Challenge robot #Cartman and some introduction to (deep) reinforcement learning.
This paper reports results of artificial neural network for robot navigation tasks. Machine
learning methods have proven usability in many complex problems concerning mobile robots
control. In particular we deal with the well-known strategy of navigating by “wall-following”.
In this study, probabilistic neural network (PNN) structure was used for robot navigation tasks.
The PNN result was compared with the results of the Logistic Perceptron, Multilayer
Perceptron, Mixture of Experts and Elman neural networks and the results of the previous
studies reported focusing on robot navigation tasks and using same dataset. It was observed the
PNN is the best classification accuracy with 99,635% accuracy using same dataset.
LEARNING OF ROBOT NAVIGATION TASKS BY PROBABILISTIC NEURAL NETWORKcsandit
This paper reports results of artificial neural network for robot navigation tasks. Machine
learning methods have proven usability in many complex problems concerning mobile robots
control. In particular we deal with the well-known strategy of navigating by “wall-following”.
In this study, probabilistic neural network (PNN) structure was used for robot navigation tasks.
The PNN result was compared with the results of the Logistic Perceptron, Multilayer
Perceptron, Mixture of Experts and Elman neural networks and the results of the previous
studies reported focusing on robot navigation tasks and using same dataset. It was observed the
PNN is the best classification accuracy with 99,635% accuracy using same dataset.
LEARNING OF ROBOT NAVIGATION TASKS BY PROBABILISTIC NEURAL NETWORKcscpconf
This paper reports results of artificial neural network for robot navigation tasks. Machine learning methods have proven usability in many complex problems concerning mobile robots
control. In particular we deal with the well-known strategy of navigating by “wall-following”. In this study, probabilistic neural network (PNN) structure was used for robot navigation tasks.
The PNN result was compared with the results of the Logistic Perceptron, Multilayer Perceptron, Mixture of Experts and Elman neural networks and the results of the previous
studies reported focusing on robot navigation tasks and using same dataset. It was observed the PNN is the best classification accuracy with 99,635% accuracy using same dataset.
How to Build a Research Roadmap (avoiding tempting dead-ends)Aaron Sloman
What's a Research Roadmap For?
Why do we need one?
How can we avoid the usual trap of making bold promises to do X, Y and Z,
then hope that our previous promises will not be remembered the next time we apply for funds to do X, Y and Z?
How can we produce a sensible, well informed roadmap?
Originally presented at the euCognition Research Roadmap discussion in Munich on 12 Jan 2007
This suggests a way to avoid tempting dead ends (repeating old promises that proved unrealistic) by examining many long term goals, including describing existing human and animal competences not yet achieved by robots, then working backwards systematically by investigating requirements for those competences, and requirements for meeting those requirements, etc. Insread of generating a single linear roadmap this should produce a partially ordered network of intermediate targets, leading back, to short term goals that may be achievable starting from where we are.
Such a roadmap will inevitably have mistakes: over-optimistic goals, missing preconditions, unrecognised opportunities. But if the work is done in many teams in a fully open manner with as much collaboration as possible, it should be possible to make faster, deeper, progress than can be achieved by brain-storming discussions of where we can get in a few years.
In this report, one of the main applications of fuzzy logic is proposed i.e in robotic navigation.
Starting from scratch to building up the fuzzy logic and its validation using the MATLAB fuzzy logic toolbox , everything is covered in this report. If you find it helpful do like and share it with your friends. Fuzzy logic finds its application in AGVs and autonomous vehicles etc. Nowadays it is employed to find out the instantaneous power split ratio between the Engine and battery in the parallel hybrid EV.
Introduction to the Special issue on ‘‘Future trends in robotics and autonomo...Anand Bhojan
Robotics is an extremely dynamic field with thriving advancement in its technology. As research progresses in robotic systems, more and more aspects of vision based processing, GPS enabled services, Autonomous techniques, very far distance communication in robots, dynamic environment handling, mobility techniques, multi-agent control and coordination techniques, multi-robot communication and coordination are explored to make robotics intelligent and to do specific tasks. Vision has helped in many areas for better services and fastens the process for localized results. Advancements in communication, positioning and localization techniques brought the robotics beyond the controlled industrial environments to more dynamic outdoor environments. Research in autonomous and other intelligent techniques has made robots capable of taking decisions in complex environments. The book covers future trends in robotics research topics including motion path planning, routing in dynamic environments, multi-agent control techniques, nature inspired algorithms and synchronization techniques with interesting applications.
A robot may need to use a tool to solve a complex problem. Currently, tool use must be pre-programmed by a human. However, this is a difficult task and can be helped if the robot is able to learn how to use a tool by itself. Most of the work in tool use learning by a robot is done using a feature-based representation. Despite many successful results, this representation is limited in the types of tools and tasks that can be handled. Furthermore, the complex relationship between a tool and other world objects cannot be captured easily. Relational learning methods have been proposed to overcome these weaknesses [1, 2]. However, they have only been evaluated in a sensor-less simulation to avoid the complexities and uncertainties of the real world. We present a real world implementation of a relational tool use learning system for a robot. In our experiment, a robot requires around ten examples to learn to use a hook-like tool to pull a cube from a narrow tube.
ACRV Research Fellow Intro/Tutorial [Vision and Action]Juxi Leitner
A short introduction about me and my work at the Queensland University of Technology (QUT) for the Australian Centre of Excellence for Robotic Vision.
Giving some background in Image Based Visual Servoing (IBVS) and some research goals/ideas/avenues...
Keeping it light and simple (hopefully..)
Page 1 of 14 ENS4152 Project Development Proposal a.docxkarlhennesey
Page 1 of 14
ENS4152 Project Development
Proposal and Risk Assessment Report
Baxter Research Robot: Solving a Rubik’s Cube
Chris Dawes
Student # 10282558
30 Mar 2015
Supervisor: Dr Alexander Rassau
Page 2 of 14
Abstract
Robotics is currently used to perform many tasks but many of these are simple repetition of a
predefined method. By combining AI with robotics we can greatly increase the applications of
robotics. An algorithm that combines the vision and servo systems of a Baxter Research Robot
with a solving solution for a Rubik’s cube will demonstrate that the use of even simple AI with
robotics allows complex tasks to be completed. Further integration of object recognition will
allow the task to be completed in a dynamic environment, and further increase the areas
robots are capable of working within.
1. Introduction
1.1. Motivation
The Baxter Research Robot by Rethink Robotics is a dual arm robot, with seven degrees of
freedom per arm, released in 2012. Developed to be affordable, flexible in its purpose, and
above all else safe, Baxter includes three cameras, one on each wrist and the other on its head,
and a screen for displaying information relating to Baxter’s current task. The robot is designed
to be a versatile research platform while containing the same hardware as its industry
counterpart, allowing research to translate into industrial applications (Rethink Robotics,
2015).
In general robotics artificial intelligence (AI) has been developed separately to robotics, but is
now starting to become integrated. Unfortunately current AI is fragmented as each application
focuses on one area, as opposed to making a true AI that thinks like a human (Bogue, 2014).
Current usable AI is more akin to ‘smart’ robotics where decisions are made and problems
solved by the robot in very specific applications. In industry, robots are expanding into areas
that require more flexibility allowing robots to fill many more positions in increasingly complex
areas (Hajduk, Jenčík, Jezný, & Vargovčík, 2013). Mobile robots are even becoming more
common place, allowing for dynamic and spread out workspaces. These are all due to adding
sensing and analysis to robots allowing them to react to dynamic environments.
To further robotics in industry, multi robot work cells have been designed that combine
several robots working on the same part while cooperatively performing either one task, such
as welding and the required handling, or multiple tasks at the same time (Hajduk, Jenčík, Jezný,
& Vargovčík, 2013). The number of activities these work cells can perform increases
Page 3 of 14
dramatically, as the complexity of the task or tasks can be higher while the robots don’t need
to be capable of performing the whole task individually.
For performing more human tasks, dual arm robots have begun to emerge (Hajduk, Jenčík,
Jezný, & Vargovčík, 2013 ...
Page 1 of 14 ENS4152 Project Development Proposal a.docxsmile790243
Page 1 of 14
ENS4152 Project Development
Proposal and Risk Assessment Report
Baxter Research Robot: Solving a Rubik’s Cube
Chris Dawes
Student # 10282558
30 Mar 2015
Supervisor: Dr Alexander Rassau
Page 2 of 14
Abstract
Robotics is currently used to perform many tasks but many of these are simple repetition of a
predefined method. By combining AI with robotics we can greatly increase the applications of
robotics. An algorithm that combines the vision and servo systems of a Baxter Research Robot
with a solving solution for a Rubik’s cube will demonstrate that the use of even simple AI with
robotics allows complex tasks to be completed. Further integration of object recognition will
allow the task to be completed in a dynamic environment, and further increase the areas
robots are capable of working within.
1. Introduction
1.1. Motivation
The Baxter Research Robot by Rethink Robotics is a dual arm robot, with seven degrees of
freedom per arm, released in 2012. Developed to be affordable, flexible in its purpose, and
above all else safe, Baxter includes three cameras, one on each wrist and the other on its head,
and a screen for displaying information relating to Baxter’s current task. The robot is designed
to be a versatile research platform while containing the same hardware as its industry
counterpart, allowing research to translate into industrial applications (Rethink Robotics,
2015).
In general robotics artificial intelligence (AI) has been developed separately to robotics, but is
now starting to become integrated. Unfortunately current AI is fragmented as each application
focuses on one area, as opposed to making a true AI that thinks like a human (Bogue, 2014).
Current usable AI is more akin to ‘smart’ robotics where decisions are made and problems
solved by the robot in very specific applications. In industry, robots are expanding into areas
that require more flexibility allowing robots to fill many more positions in increasingly complex
areas (Hajduk, Jenčík, Jezný, & Vargovčík, 2013). Mobile robots are even becoming more
common place, allowing for dynamic and spread out workspaces. These are all due to adding
sensing and analysis to robots allowing them to react to dynamic environments.
To further robotics in industry, multi robot work cells have been designed that combine
several robots working on the same part while cooperatively performing either one task, such
as welding and the required handling, or multiple tasks at the same time (Hajduk, Jenčík, Jezný,
& Vargovčík, 2013). The number of activities these work cells can perform increases
Page 3 of 14
dramatically, as the complexity of the task or tasks can be higher while the robots don’t need
to be capable of performing the whole task individually.
For performing more human tasks, dual arm robots have begun to emerge (Hajduk, Jenčík,
Jezný, & Vargovčík, 2013.
Page 1 of 14 ENS4152 Project Development Proposal a.docxjakeomoore75037
Page 1 of 14
ENS4152 Project Development
Proposal and Risk Assessment Report
Baxter Research Robot: Solving a Rubik’s Cube
Chris Dawes
Student # 10282558
30 Mar 2015
Supervisor: Dr Alexander Rassau
Page 2 of 14
Abstract
Robotics is currently used to perform many tasks but many of these are simple repetition of a
predefined method. By combining AI with robotics we can greatly increase the applications of
robotics. An algorithm that combines the vision and servo systems of a Baxter Research Robot
with a solving solution for a Rubik’s cube will demonstrate that the use of even simple AI with
robotics allows complex tasks to be completed. Further integration of object recognition will
allow the task to be completed in a dynamic environment, and further increase the areas
robots are capable of working within.
1. Introduction
1.1. Motivation
The Baxter Research Robot by Rethink Robotics is a dual arm robot, with seven degrees of
freedom per arm, released in 2012. Developed to be affordable, flexible in its purpose, and
above all else safe, Baxter includes three cameras, one on each wrist and the other on its head,
and a screen for displaying information relating to Baxter’s current task. The robot is designed
to be a versatile research platform while containing the same hardware as its industry
counterpart, allowing research to translate into industrial applications (Rethink Robotics,
2015).
In general robotics artificial intelligence (AI) has been developed separately to robotics, but is
now starting to become integrated. Unfortunately current AI is fragmented as each application
focuses on one area, as opposed to making a true AI that thinks like a human (Bogue, 2014).
Current usable AI is more akin to ‘smart’ robotics where decisions are made and problems
solved by the robot in very specific applications. In industry, robots are expanding into areas
that require more flexibility allowing robots to fill many more positions in increasingly complex
areas (Hajduk, Jenčík, Jezný, & Vargovčík, 2013). Mobile robots are even becoming more
common place, allowing for dynamic and spread out workspaces. These are all due to adding
sensing and analysis to robots allowing them to react to dynamic environments.
To further robotics in industry, multi robot work cells have been designed that combine
several robots working on the same part while cooperatively performing either one task, such
as welding and the required handling, or multiple tasks at the same time (Hajduk, Jenčík, Jezný,
& Vargovčík, 2013). The number of activities these work cells can perform increases
Page 3 of 14
dramatically, as the complexity of the task or tasks can be higher while the robots don’t need
to be capable of performing the whole task individually.
For performing more human tasks, dual arm robots have begun to emerge (Hajduk, Jenčík,
Jezný, & Vargovčík, 2013.
The Need For Robots To Grasp the WorldJuxi Leitner
These slides were used for a few talks in the last couple of months to excite people about the intelligent robotic systems. In particular, why I believe that it is important for robots to grasp the world, both in the sense of perceiving and understanding but also in the physical sense of actually changing the state of the world by picking objects and interacting with a wide range of items.
These slides (with slight variations) were presented at QUT, Uni Sydney, Uni Cambridge, DeepMind, Uni Birmingham, Amazon Robotics, ...
Significant progress in computer vision in the past years has excited a whole field of researchers. In robotics we are now able to use these techniques to build robotic systems that can observe, understand, and interact with the world, in short, we can build robots that grasp the world.
This is an overview of the efforts in the Australien Centre for Robotic Vision under the umbrella of "Robotic Manipulation" led by Dr. Juxi Leitner.
Slides used for a series of presentations in Australia and Europe in Sep/Oct 2018.
Feel free to reach out for opportunities to juxi@lyro.io
How to Build a Research Roadmap (avoiding tempting dead-ends)Aaron Sloman
What's a Research Roadmap For?
Why do we need one?
How can we avoid the usual trap of making bold promises to do X, Y and Z,
then hope that our previous promises will not be remembered the next time we apply for funds to do X, Y and Z?
How can we produce a sensible, well informed roadmap?
Originally presented at the euCognition Research Roadmap discussion in Munich on 12 Jan 2007
This suggests a way to avoid tempting dead ends (repeating old promises that proved unrealistic) by examining many long term goals, including describing existing human and animal competences not yet achieved by robots, then working backwards systematically by investigating requirements for those competences, and requirements for meeting those requirements, etc. Insread of generating a single linear roadmap this should produce a partially ordered network of intermediate targets, leading back, to short term goals that may be achievable starting from where we are.
Such a roadmap will inevitably have mistakes: over-optimistic goals, missing preconditions, unrecognised opportunities. But if the work is done in many teams in a fully open manner with as much collaboration as possible, it should be possible to make faster, deeper, progress than can be achieved by brain-storming discussions of where we can get in a few years.
In this report, one of the main applications of fuzzy logic is proposed i.e in robotic navigation.
Starting from scratch to building up the fuzzy logic and its validation using the MATLAB fuzzy logic toolbox , everything is covered in this report. If you find it helpful do like and share it with your friends. Fuzzy logic finds its application in AGVs and autonomous vehicles etc. Nowadays it is employed to find out the instantaneous power split ratio between the Engine and battery in the parallel hybrid EV.
Introduction to the Special issue on ‘‘Future trends in robotics and autonomo...Anand Bhojan
Robotics is an extremely dynamic field with thriving advancement in its technology. As research progresses in robotic systems, more and more aspects of vision based processing, GPS enabled services, Autonomous techniques, very far distance communication in robots, dynamic environment handling, mobility techniques, multi-agent control and coordination techniques, multi-robot communication and coordination are explored to make robotics intelligent and to do specific tasks. Vision has helped in many areas for better services and fastens the process for localized results. Advancements in communication, positioning and localization techniques brought the robotics beyond the controlled industrial environments to more dynamic outdoor environments. Research in autonomous and other intelligent techniques has made robots capable of taking decisions in complex environments. The book covers future trends in robotics research topics including motion path planning, routing in dynamic environments, multi-agent control techniques, nature inspired algorithms and synchronization techniques with interesting applications.
A robot may need to use a tool to solve a complex problem. Currently, tool use must be pre-programmed by a human. However, this is a difficult task and can be helped if the robot is able to learn how to use a tool by itself. Most of the work in tool use learning by a robot is done using a feature-based representation. Despite many successful results, this representation is limited in the types of tools and tasks that can be handled. Furthermore, the complex relationship between a tool and other world objects cannot be captured easily. Relational learning methods have been proposed to overcome these weaknesses [1, 2]. However, they have only been evaluated in a sensor-less simulation to avoid the complexities and uncertainties of the real world. We present a real world implementation of a relational tool use learning system for a robot. In our experiment, a robot requires around ten examples to learn to use a hook-like tool to pull a cube from a narrow tube.
ACRV Research Fellow Intro/Tutorial [Vision and Action]Juxi Leitner
A short introduction about me and my work at the Queensland University of Technology (QUT) for the Australian Centre of Excellence for Robotic Vision.
Giving some background in Image Based Visual Servoing (IBVS) and some research goals/ideas/avenues...
Keeping it light and simple (hopefully..)
Page 1 of 14 ENS4152 Project Development Proposal a.docxkarlhennesey
Page 1 of 14
ENS4152 Project Development
Proposal and Risk Assessment Report
Baxter Research Robot: Solving a Rubik’s Cube
Chris Dawes
Student # 10282558
30 Mar 2015
Supervisor: Dr Alexander Rassau
Page 2 of 14
Abstract
Robotics is currently used to perform many tasks but many of these are simple repetition of a
predefined method. By combining AI with robotics we can greatly increase the applications of
robotics. An algorithm that combines the vision and servo systems of a Baxter Research Robot
with a solving solution for a Rubik’s cube will demonstrate that the use of even simple AI with
robotics allows complex tasks to be completed. Further integration of object recognition will
allow the task to be completed in a dynamic environment, and further increase the areas
robots are capable of working within.
1. Introduction
1.1. Motivation
The Baxter Research Robot by Rethink Robotics is a dual arm robot, with seven degrees of
freedom per arm, released in 2012. Developed to be affordable, flexible in its purpose, and
above all else safe, Baxter includes three cameras, one on each wrist and the other on its head,
and a screen for displaying information relating to Baxter’s current task. The robot is designed
to be a versatile research platform while containing the same hardware as its industry
counterpart, allowing research to translate into industrial applications (Rethink Robotics,
2015).
In general robotics artificial intelligence (AI) has been developed separately to robotics, but is
now starting to become integrated. Unfortunately current AI is fragmented as each application
focuses on one area, as opposed to making a true AI that thinks like a human (Bogue, 2014).
Current usable AI is more akin to ‘smart’ robotics where decisions are made and problems
solved by the robot in very specific applications. In industry, robots are expanding into areas
that require more flexibility allowing robots to fill many more positions in increasingly complex
areas (Hajduk, Jenčík, Jezný, & Vargovčík, 2013). Mobile robots are even becoming more
common place, allowing for dynamic and spread out workspaces. These are all due to adding
sensing and analysis to robots allowing them to react to dynamic environments.
To further robotics in industry, multi robot work cells have been designed that combine
several robots working on the same part while cooperatively performing either one task, such
as welding and the required handling, or multiple tasks at the same time (Hajduk, Jenčík, Jezný,
& Vargovčík, 2013). The number of activities these work cells can perform increases
Page 3 of 14
dramatically, as the complexity of the task or tasks can be higher while the robots don’t need
to be capable of performing the whole task individually.
For performing more human tasks, dual arm robots have begun to emerge (Hajduk, Jenčík,
Jezný, & Vargovčík, 2013 ...
Page 1 of 14 ENS4152 Project Development Proposal a.docxsmile790243
Page 1 of 14
ENS4152 Project Development
Proposal and Risk Assessment Report
Baxter Research Robot: Solving a Rubik’s Cube
Chris Dawes
Student # 10282558
30 Mar 2015
Supervisor: Dr Alexander Rassau
Page 2 of 14
Abstract
Robotics is currently used to perform many tasks but many of these are simple repetition of a
predefined method. By combining AI with robotics we can greatly increase the applications of
robotics. An algorithm that combines the vision and servo systems of a Baxter Research Robot
with a solving solution for a Rubik’s cube will demonstrate that the use of even simple AI with
robotics allows complex tasks to be completed. Further integration of object recognition will
allow the task to be completed in a dynamic environment, and further increase the areas
robots are capable of working within.
1. Introduction
1.1. Motivation
The Baxter Research Robot by Rethink Robotics is a dual arm robot, with seven degrees of
freedom per arm, released in 2012. Developed to be affordable, flexible in its purpose, and
above all else safe, Baxter includes three cameras, one on each wrist and the other on its head,
and a screen for displaying information relating to Baxter’s current task. The robot is designed
to be a versatile research platform while containing the same hardware as its industry
counterpart, allowing research to translate into industrial applications (Rethink Robotics,
2015).
In general robotics artificial intelligence (AI) has been developed separately to robotics, but is
now starting to become integrated. Unfortunately current AI is fragmented as each application
focuses on one area, as opposed to making a true AI that thinks like a human (Bogue, 2014).
Current usable AI is more akin to ‘smart’ robotics where decisions are made and problems
solved by the robot in very specific applications. In industry, robots are expanding into areas
that require more flexibility allowing robots to fill many more positions in increasingly complex
areas (Hajduk, Jenčík, Jezný, & Vargovčík, 2013). Mobile robots are even becoming more
common place, allowing for dynamic and spread out workspaces. These are all due to adding
sensing and analysis to robots allowing them to react to dynamic environments.
To further robotics in industry, multi robot work cells have been designed that combine
several robots working on the same part while cooperatively performing either one task, such
as welding and the required handling, or multiple tasks at the same time (Hajduk, Jenčík, Jezný,
& Vargovčík, 2013). The number of activities these work cells can perform increases
Page 3 of 14
dramatically, as the complexity of the task or tasks can be higher while the robots don’t need
to be capable of performing the whole task individually.
For performing more human tasks, dual arm robots have begun to emerge (Hajduk, Jenčík,
Jezný, & Vargovčík, 2013.
Page 1 of 14 ENS4152 Project Development Proposal a.docxjakeomoore75037
Page 1 of 14
ENS4152 Project Development
Proposal and Risk Assessment Report
Baxter Research Robot: Solving a Rubik’s Cube
Chris Dawes
Student # 10282558
30 Mar 2015
Supervisor: Dr Alexander Rassau
Page 2 of 14
Abstract
Robotics is currently used to perform many tasks but many of these are simple repetition of a
predefined method. By combining AI with robotics we can greatly increase the applications of
robotics. An algorithm that combines the vision and servo systems of a Baxter Research Robot
with a solving solution for a Rubik’s cube will demonstrate that the use of even simple AI with
robotics allows complex tasks to be completed. Further integration of object recognition will
allow the task to be completed in a dynamic environment, and further increase the areas
robots are capable of working within.
1. Introduction
1.1. Motivation
The Baxter Research Robot by Rethink Robotics is a dual arm robot, with seven degrees of
freedom per arm, released in 2012. Developed to be affordable, flexible in its purpose, and
above all else safe, Baxter includes three cameras, one on each wrist and the other on its head,
and a screen for displaying information relating to Baxter’s current task. The robot is designed
to be a versatile research platform while containing the same hardware as its industry
counterpart, allowing research to translate into industrial applications (Rethink Robotics,
2015).
In general robotics artificial intelligence (AI) has been developed separately to robotics, but is
now starting to become integrated. Unfortunately current AI is fragmented as each application
focuses on one area, as opposed to making a true AI that thinks like a human (Bogue, 2014).
Current usable AI is more akin to ‘smart’ robotics where decisions are made and problems
solved by the robot in very specific applications. In industry, robots are expanding into areas
that require more flexibility allowing robots to fill many more positions in increasingly complex
areas (Hajduk, Jenčík, Jezný, & Vargovčík, 2013). Mobile robots are even becoming more
common place, allowing for dynamic and spread out workspaces. These are all due to adding
sensing and analysis to robots allowing them to react to dynamic environments.
To further robotics in industry, multi robot work cells have been designed that combine
several robots working on the same part while cooperatively performing either one task, such
as welding and the required handling, or multiple tasks at the same time (Hajduk, Jenčík, Jezný,
& Vargovčík, 2013). The number of activities these work cells can perform increases
Page 3 of 14
dramatically, as the complexity of the task or tasks can be higher while the robots don’t need
to be capable of performing the whole task individually.
For performing more human tasks, dual arm robots have begun to emerge (Hajduk, Jenčík,
Jezný, & Vargovčík, 2013.
The Need For Robots To Grasp the WorldJuxi Leitner
These slides were used for a few talks in the last couple of months to excite people about the intelligent robotic systems. In particular, why I believe that it is important for robots to grasp the world, both in the sense of perceiving and understanding but also in the physical sense of actually changing the state of the world by picking objects and interacting with a wide range of items.
These slides (with slight variations) were presented at QUT, Uni Sydney, Uni Cambridge, DeepMind, Uni Birmingham, Amazon Robotics, ...
Significant progress in computer vision in the past years has excited a whole field of researchers. In robotics we are now able to use these techniques to build robotic systems that can observe, understand, and interact with the world, in short, we can build robots that grasp the world.
This is an overview of the efforts in the Australien Centre for Robotic Vision under the umbrella of "Robotic Manipulation" led by Dr. Juxi Leitner.
Slides used for a series of presentations in Australia and Europe in Sep/Oct 2018.
Feel free to reach out for opportunities to juxi@lyro.io
Cartman, how to win the amazon robotics challenge with robotic vision and dee...Juxi Leitner
Cartman, how to win the amazon robotics challenge with robotic vision and deep learning #GTC18 S8842
Douglas Morrison and Juxi Leitner
Australian Centre for Robotic Vision
roboticvision.org
ACRV Picking Benchmark: how to benchmark pick and place robotics researchJuxi Leitner
Presented at the IROS workshop on "DEVELOPMENT OF BENCHMARKING PROTOCOLS FOR ROBOTIC MANIPULATION"
http://ycbbenchmarks.org/IROS2017workshop.html
The ACRV Picking Benchmark has been developed over the last year to facilitate comparison of robotic systems in pick and place settings!
With the ABP we propose a physical benchmark for robotic picking: overall design, objects, configuration, and guidance on appropriate technologies to solve it. Challenges are an important way to drive progress but they occur only occasionally and the test conditions are difficult to replicate outside the challenge. This benchmark is motivated by experience in the recent Amazon Picking Challenge and contains a commonly-available shelf, 42 objects, a set of stencils and standardized task setups.
A major focus through the design of this benchmark was to maximise reproducibility: a number of carefully chosen scenarios with precise instructions on how to place, orient, and align objects with the help of printable stencils are defined. To make the benchmark as accessible as possible to the research community, a white IKEA shelf is used for all picking tasks. Furthermore, we carefully curated a set of 42 objects to ensure global availability and reduced chance of import restrictions.
How to place 6th in the Amazon Picking Challenge (ENB329, QUT)Juxi Leitner
A guest lecture about project management and how to organise a team for the Amazon Picking Challenge. This is for the mechatronics design project course (ENB 329) at the Queensland University of Technology (QUT).
LunaRoo: Designing a Hopping Lunar Science Payload #space #explorationJuxi Leitner
Presentation slides from the talk given at the IEEE Aerospace Conference (@IEEEAeroConf) 2016 in Big Sky, Montana, USA.
We describe a hopping science payload solution de- signed to exploit the Moon’s lower gravity to leap up to 20m above the surface. The entire solar-powered robot is compact enough to fit within a 10cm cube, whilst providing unique observation and mission capabilities by creating imagery during the hop. The LunaRoo concept is a proposed payload to fly onboard a Google Lunar XPrize entry. Its compact form is specifically designed for lunar exploration and science mission within the constraints given by PTScientists. The core features of LunaRoo are its method of locomotion – hopping like a kangaroo - and its imaging system capable of unique over-the- horizon perception. The payload will serve as a proof of concept, highlighting the benefits of alternative mobility solutions, in particular enabling observation and exploration of terrain not traversable by wheeled robots. in addition providing data for beyond line-of-sight planning and communications for surface assets, extending overall mission capabilities.
Presenation about my current research in computer vision, machine learning and robotics at the IEEE Queensland Computational Intelligence Society Colloquium at Griffith University.
My slides for the Hands-on part of the Robotic Vision Summer School 2015 in Kioloa, Australia.
This is part of the robotics workshop, aiming to teach the participants how to program the turtlebot .
Reactive Reaching and Grasping on a Humanoid: Towards Closing the Action-Perc...Juxi Leitner
My presentation at the ICINCO 2014 (the 11th International Conference on Informatics in Control, Automation and Robotics)
Abstract: We propose a system incorporating a tight integration between computer vision and robot control modules on a complex, high-DOF humanoid robot. Its functionality is showcased by having our iCub humanoid robot pick-up objects from a table in front of it. An important feature is that the system can avoid obstacles – other objects detected in the visual stream – while reaching for the intended target object. Our integration also allows for non-static environments, i.e. the reaching is adapted on-the-fly from the visual feedback received, e.g. when an obstacle is moved into the trajectory. Furthermore we show that this system can be used both in autonomous and tele-operation scenarios.
Tele-operation of a Humanoid Robot, Using Operator Bio-dataJuxi Leitner
We present our work on tele-operating a complex hu- manoid robot with the help of bio-signals collected from the operator. The frameworks (for robot vision, collision avoidance and machine learning), developed in our lab, allow for a safe interaction with the environment, when combined. This even works with noisy control signals, such as, the operator’s hand acceleration and their elec- tromyography (EMG) signals. These bio-signals are used to execute equivalent actions (such as, reaching and grasp- ing of objects) on the 7 DOF arm.
Improving Robot Vision Models for Object Detection Through Interaction #ijcnn...Juxi Leitner
presentation during the WCCI 2014 in Beijing, China
We propose a method for learning specific object representations that can be applied (and reused) in visual detection and identification tasks. A machine learning technique called Cartesian Genetic Programming (CGP) is used to create these models based on a series of images. Our research investi- gates how manipulation actions might allow for the development of better visual models and therefore better robot vision.
This paper describes how visual object representations can be learned and improved by performing object manipulation actions, such as, poke, push and pick-up with a humanoid robot. The improvement can be measured and allows for the robot to select and perform the ‘right’ action, i.e. the action with the best possible improvement of the detector.
How does it feel to be a SpaceMaster? [Erasmus Mundus - ACE Talk]Juxi Leitner
Last December I had the pleasure to take part in the EM-ACE workshop held at the University of Porto, Portugal. I was invited to talk about my experience studying in the "Joint European Master in Space Science and Technology" (SpaceMaster) infront of about 60 students.
http://www.em-ace.eu/en/upload/public-docs/UPORTO_em-ace%20event_agenda.pdf
http://www.em-a.eu/en/home/rss-feed-detail/em-ace-student-event-university-of-porto-16-december-2013-1395.html
Towards Autonomous and Adaptive Humanoids [PhD Proposal @ Università della Sv...Juxi Leitner
The slides for my PhD proposal presentation in Nov 2013 at the Università della Svizzera Italiana (USI).
The proposal can be found on my webpage: http://Juxi.net/phd/
Humanoid Learns to Detect Its Own Hands #cec2013Juxi Leitner
My presentation at the Congress on Evolutionary Computation (CEC) 2013 in Cancun, Mexico.
Abstract—Robust object manipulation is still a hard problem in robotics, even more so in high degree-of-freedom (DOF) humanoid robots. To improve performance a closer integration of visual and motor systems is needed. We herein present a novel method for a robot to learn robust detection of its own hands and fingers enabling sensorimotor coordination. It does so solely using its own camera images and does not require any external systems or markers. Our system based on Cartesian Genetic Programming (CGP) allows to evolve programs to perform this image segmentation task in real-time on the real hardware. We show results for a Nao and an iCub humanoid each detecting its own hands and fingers.
Introduction:
RNA interference (RNAi) or Post-Transcriptional Gene Silencing (PTGS) is an important biological process for modulating eukaryotic gene expression.
It is highly conserved process of posttranscriptional gene silencing by which double stranded RNA (dsRNA) causes sequence-specific degradation of mRNA sequences.
dsRNA-induced gene silencing (RNAi) is reported in a wide range of eukaryotes ranging from worms, insects, mammals and plants.
This process mediates resistance to both endogenous parasitic and exogenous pathogenic nucleic acids, and regulates the expression of protein-coding genes.
What are small ncRNAs?
micro RNA (miRNA)
short interfering RNA (siRNA)
Properties of small non-coding RNA:
Involved in silencing mRNA transcripts.
Called “small” because they are usually only about 21-24 nucleotides long.
Synthesized by first cutting up longer precursor sequences (like the 61nt one that Lee discovered).
Silence an mRNA by base pairing with some sequence on the mRNA.
Discovery of siRNA?
The first small RNA:
In 1993 Rosalind Lee (Victor Ambros lab) was studying a non- coding gene in C. elegans, lin-4, that was involved in silencing of another gene, lin-14, at the appropriate time in the
development of the worm C. elegans.
Two small transcripts of lin-4 (22nt and 61nt) were found to be complementary to a sequence in the 3' UTR of lin-14.
Because lin-4 encoded no protein, she deduced that it must be these transcripts that are causing the silencing by RNA-RNA interactions.
Types of RNAi ( non coding RNA)
MiRNA
Length (23-25 nt)
Trans acting
Binds with target MRNA in mismatch
Translation inhibition
Si RNA
Length 21 nt.
Cis acting
Bind with target Mrna in perfect complementary sequence
Piwi-RNA
Length ; 25 to 36 nt.
Expressed in Germ Cells
Regulates trnasposomes activity
MECHANISM OF RNAI:
First the double-stranded RNA teams up with a protein complex named Dicer, which cuts the long RNA into short pieces.
Then another protein complex called RISC (RNA-induced silencing complex) discards one of the two RNA strands.
The RISC-docked, single-stranded RNA then pairs with the homologous mRNA and destroys it.
THE RISC COMPLEX:
RISC is large(>500kD) RNA multi- protein Binding complex which triggers MRNA degradation in response to MRNA
Unwinding of double stranded Si RNA by ATP independent Helicase
Active component of RISC is Ago proteins( ENDONUCLEASE) which cleave target MRNA.
DICER: endonuclease (RNase Family III)
Argonaute: Central Component of the RNA-Induced Silencing Complex (RISC)
One strand of the dsRNA produced by Dicer is retained in the RISC complex in association with Argonaute
ARGONAUTE PROTEIN :
1.PAZ(PIWI/Argonaute/ Zwille)- Recognition of target MRNA
2.PIWI (p-element induced wimpy Testis)- breaks Phosphodiester bond of mRNA.)RNAse H activity.
MiRNA:
The Double-stranded RNAs are naturally produced in eukaryotic cells during development, and they have a key role in regulating gene expression .
What is greenhouse gasses and how many gasses are there to affect the Earth.moosaasad1975
What are greenhouse gasses how they affect the earth and its environment what is the future of the environment and earth how the weather and the climate effects.
Cancer cell metabolism: special Reference to Lactate PathwayAADYARAJPANDEY1
Normal Cell Metabolism:
Cellular respiration describes the series of steps that cells use to break down sugar and other chemicals to get the energy we need to function.
Energy is stored in the bonds of glucose and when glucose is broken down, much of that energy is released.
Cell utilize energy in the form of ATP.
The first step of respiration is called glycolysis. In a series of steps, glycolysis breaks glucose into two smaller molecules - a chemical called pyruvate. A small amount of ATP is formed during this process.
Most healthy cells continue the breakdown in a second process, called the Kreb's cycle. The Kreb's cycle allows cells to “burn” the pyruvates made in glycolysis to get more ATP.
The last step in the breakdown of glucose is called oxidative phosphorylation (Ox-Phos).
It takes place in specialized cell structures called mitochondria. This process produces a large amount of ATP. Importantly, cells need oxygen to complete oxidative phosphorylation.
If a cell completes only glycolysis, only 2 molecules of ATP are made per glucose. However, if the cell completes the entire respiration process (glycolysis - Kreb's - oxidative phosphorylation), about 36 molecules of ATP are created, giving it much more energy to use.
IN CANCER CELL:
Unlike healthy cells that "burn" the entire molecule of sugar to capture a large amount of energy as ATP, cancer cells are wasteful.
Cancer cells only partially break down sugar molecules. They overuse the first step of respiration, glycolysis. They frequently do not complete the second step, oxidative phosphorylation.
This results in only 2 molecules of ATP per each glucose molecule instead of the 36 or so ATPs healthy cells gain. As a result, cancer cells need to use a lot more sugar molecules to get enough energy to survive.
Unlike healthy cells that "burn" the entire molecule of sugar to capture a large amount of energy as ATP, cancer cells are wasteful.
Cancer cells only partially break down sugar molecules. They overuse the first step of respiration, glycolysis. They frequently do not complete the second step, oxidative phosphorylation.
This results in only 2 molecules of ATP per each glucose molecule instead of the 36 or so ATPs healthy cells gain. As a result, cancer cells need to use a lot more sugar molecules to get enough energy to survive.
introduction to WARBERG PHENOMENA:
WARBURG EFFECT Usually, cancer cells are highly glycolytic (glucose addiction) and take up more glucose than do normal cells from outside.
Otto Heinrich Warburg (; 8 October 1883 – 1 August 1970) In 1931 was awarded the Nobel Prize in Physiology for his "discovery of the nature and mode of action of the respiratory enzyme.
WARNBURG EFFECT : cancer cells under aerobic (well-oxygenated) conditions to metabolize glucose to lactate (aerobic glycolysis) is known as the Warburg effect. Warburg made the observation that tumor slices consume glucose and secrete lactate at a higher rate than normal tissues.
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ...Sérgio Sacani
We characterize the earliest galaxy population in the JADES Origins Field (JOF), the deepest
imaging field observed with JWST. We make use of the ancillary Hubble optical images (5 filters
spanning 0.4−0.9µm) and novel JWST images with 14 filters spanning 0.8−5µm, including 7 mediumband filters, and reaching total exposure times of up to 46 hours per filter. We combine all our data
at > 2.3µm to construct an ultradeep image, reaching as deep as ≈ 31.4 AB mag in the stack and
30.3-31.0 AB mag (5σ, r = 0.1” circular aperture) in individual filters. We measure photometric
redshifts and use robust selection criteria to identify a sample of eight galaxy candidates at redshifts
z = 11.5 − 15. These objects show compact half-light radii of R1/2 ∼ 50 − 200pc, stellar masses of
M⋆ ∼ 107−108M⊙, and star-formation rates of SFR ∼ 0.1−1 M⊙ yr−1
. Our search finds no candidates
at 15 < z < 20, placing upper limits at these redshifts. We develop a forward modeling approach to
infer the properties of the evolving luminosity function without binning in redshift or luminosity that
marginalizes over the photometric redshift uncertainty of our candidate galaxies and incorporates the
impact of non-detections. We find a z = 12 luminosity function in good agreement with prior results,
and that the luminosity function normalization and UV luminosity density decline by a factor of ∼ 2.5
from z = 12 to z = 14. We discuss the possible implications of our results in the context of theoretical
models for evolution of the dark matter halo mass function.
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt...Sérgio Sacani
Since volcanic activity was first discovered on Io from Voyager images in 1979, changes
on Io’s surface have been monitored from both spacecraft and ground-based telescopes.
Here, we present the highest spatial resolution images of Io ever obtained from a groundbased telescope. These images, acquired by the SHARK-VIS instrument on the Large
Binocular Telescope, show evidence of a major resurfacing event on Io’s trailing hemisphere. When compared to the most recent spacecraft images, the SHARK-VIS images
show that a plume deposit from a powerful eruption at Pillan Patera has covered part
of the long-lived Pele plume deposit. Although this type of resurfacing event may be common on Io, few have been detected due to the rarity of spacecraft visits and the previously low spatial resolution available from Earth-based telescopes. The SHARK-VIS instrument ushers in a new era of high resolution imaging of Io’s surface using adaptive
optics at visible wavelengths.
A brief information about the SCOP protein database used in bioinformatics.
The Structural Classification of Proteins (SCOP) database is a comprehensive and authoritative resource for the structural and evolutionary relationships of proteins. It provides a detailed and curated classification of protein structures, grouping them into families, superfamilies, and folds based on their structural and sequence similarities.
Multi-source connectivity as the driver of solar wind variability in the heli...Sérgio Sacani
The ambient solar wind that flls the heliosphere originates from multiple
sources in the solar corona and is highly structured. It is often described
as high-speed, relatively homogeneous, plasma streams from coronal
holes and slow-speed, highly variable, streams whose source regions are
under debate. A key goal of ESA/NASA’s Solar Orbiter mission is to identify
solar wind sources and understand what drives the complexity seen in the
heliosphere. By combining magnetic feld modelling and spectroscopic
techniques with high-resolution observations and measurements, we show
that the solar wind variability detected in situ by Solar Orbiter in March
2022 is driven by spatio-temporal changes in the magnetic connectivity to
multiple sources in the solar atmosphere. The magnetic feld footpoints
connected to the spacecraft moved from the boundaries of a coronal hole
to one active region (12961) and then across to another region (12957). This
is refected in the in situ measurements, which show the transition from fast
to highly Alfvénic then to slow solar wind that is disrupted by the arrival of
a coronal mass ejection. Our results describe solar wind variability at 0.5 au
but are applicable to near-Earth observatories.
THE IMPORTANCE OF MARTIAN ATMOSPHERE SAMPLE RETURN.Sérgio Sacani
The return of a sample of near-surface atmosphere from Mars would facilitate answers to several first-order science questions surrounding the formation and evolution of the planet. One of the important aspects of terrestrial planet formation in general is the role that primary atmospheres played in influencing the chemistry and structure of the planets and their antecedents. Studies of the martian atmosphere can be used to investigate the role of a primary atmosphere in its history. Atmosphere samples would also inform our understanding of the near-surface chemistry of the planet, and ultimately the prospects for life. High-precision isotopic analyses of constituent gases are needed to address these questions, requiring that the analyses are made on returned samples rather than in situ.
Richard's aventures in two entangled wonderlandsRichard Gill
Since the loophole-free Bell experiments of 2020 and the Nobel prizes in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think however that it is a smoke screen, and the slogan "lost in math" comes to my mind. I will discuss some other recent disproofs of Bell's theorem using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
1. Juxi Leitner
arc centre of excellence for robotic vision
queensland university of technology
<j.leitner@qut.edu.au>
http://Juxi.net
reinforcement
learning
Juxi
Guest Lecture, CAB420 Machine Learning
(deep)
3. Dalle Molle Institute for AI (IDSIA)
Work
Juxi
Leitner
PhD Informatics / Intelligent Systems
MSc Space Robotics & Automation
BSc Information & Software Engineering
Intelligent (Space) Robots
European Space Agency (ESA)
Erasmus Intelligent Systems
Work (Humanoid) Robot Vision
Instituto Superior Técnico (IST)
Mobility Intelligent Space Systems Laboratory
About Me
Current Robotic Vision and Actions
Queensland University of Technology (QUT)
arc centre of excellence for robotic vision | qut
juxi.net | roboticvision.org | bne-robotics.net | brisbane.ai
8. BRISBANE.AI
defining AI
study of "intelligent agents”:
any device that perceives its environment and takes actions
that maximize its chance of success at some goal
18. http://roboticvision.org/
foundations
a policy, a reward signal, a value func,on,
and, op,onally, a model of the environment
http://cs.stanford.edu/people/karpathy/reinforcejs/http://karpathy.github.io/2016/05/31/rl/
21. http://roboticvision.org/
mdp
An information state (a.k.a. Markov state)
contains all useful information from the history.
i.e. the state is a sufficient statistic of the future
pomdp
what if: robot with camera vision isn’t told its absolute location
agent state != environment state
Formally this is a partially observable Markov decision process
(POMDP)
23. http://roboticvision.org/
Example 3.2: Pick-and-Place Robot
Consider using reinforcement learning to control the mo4on of a robot arm in a
repe44ve pick-and-place task. If we want to learn movements that are fast and smooth,
the learning agent will have to control the motors directly and have low-latency
informa4on about the current posi4ons and veloci4es of the mechanical linkages. The
ac4ons in this case might be the voltages applied to each motor at each joint, and the
states might be the latest readings of joint angles and veloci4es. The reward might be
+1 for each object successfully picked up and placed. To encourage smooth movements,
on each 4me step a small, nega4ve reward can be given as a func4on of the moment-
to-moment “jerkiness” of the mo4on.
35. http://roboticvision.org/
[Zhang et al, arxiv.org]
deep learning visual control
understanding limita,ons of deep nets,
reinforcement learning and transfer of knowledge
37. deep learning visual servoing
Perception Module Control Module
Conv1 Conv2 Conv3 FC_c2 FC_c3FC_c1
Q-values
7×7conv+ReLU
stride2
4×4conv+ReLU
stride2
3×3conv+ReLU
stride1
64 lters 64 lters 64 lters
fullyconn.
300units
9units
84×84
400units
fullyconn.
fullyconn.+ReLU
fullyconn.+ReLU
I BN
5units
θ
Bottleneck
Or
Occlusion
A B C ED
Occlusion Occlusion
Occlusion
[Zhang et al, arxiv.org]
understanding limita,ons of deep nets,
reinforcement learning and transfer of knowledge
38. ARC Centre of Excellence for Robotic Vision roboticvision.org
limita,ons of current robo,c systems
reproducible research on TASKS not datasets
picking benchmark
http://Juxi.net/dataset/acrv-picking-benchmark/
https://arxiv.org/abs/1609.05258
43. http://roboticvision.org/
Is NeuroEvolution coming back?
some recent papers:
"Evolving Deep Neural Networks" by Miikkulainen et al (Sentient Technologies)
"Large-scale Evolution of Image Classifiers" by Real et al (Google Brain)
"PathNet: Evolution Channels Gradient Descent in Super Neural Networks" by
Fernando et al. (DeepMind)
"Evolution Strategies as a Scalable Alternative to Reinforcement Learning" by
Salimans et al (OpenAI)
evolution and RL
Neuroevolution of augmented topologiesNEAT (2002):
46. BRISBANE.AI
new developments
arxiv-sanity, twitter & get your hands dirty
come to Brisbane.AI meetups! :)
how to keep in the loop?
http://Juxi.net/workshop/deep-learning-rss-2017/
Tools and toolboxes
Neuroscience vs Deep Learning
&
Evolutionary approaches
Generative Adversarial Networks
Unsupervised Learning, Embodied Learning
47. BRISBANE.AI
Jürgen ‘Juxi’ Leitner
arc centre of excellence for robotic vision | qut
juxi.net | roboticvision.org | bne-robotics.net | brisbane.ai
In which we try to explain why we consider ar,ficial
intelligence to be a subject most worthy of study, and
in which we try to decide what exactly it is, this
being a good thing to decide before embarking.
TUTORIAL ONE
BRISBANE ARTIFICIAL INTELLIGENCE
http://Juxi.net
<juxi.leitner@gmail.com>