Presentation about my current research in computer vision, machine learning, and robotics, given at the IEEE Queensland Computational Intelligence Society Colloquium at Griffith University.
Reactive Reaching and Grasping on a Humanoid: Towards Closing the Action-Perc... — Juxi Leitner
My presentation at the ICINCO 2014 (the 11th International Conference on Informatics in Control, Automation and Robotics)
Abstract: We propose a system incorporating a tight integration between computer vision and robot control modules on a complex, high-DOF humanoid robot. Its functionality is showcased by having our iCub humanoid robot pick-up objects from a table in front of it. An important feature is that the system can avoid obstacles – other objects detected in the visual stream – while reaching for the intended target object. Our integration also allows for non-static environments, i.e. the reaching is adapted on-the-fly from the visual feedback received, e.g. when an obstacle is moved into the trajectory. Furthermore we show that this system can be used both in autonomous and tele-operation scenarios.
Autonomous Learning of Robust Visual Object Detection & Identification on a H...Juxi Leitner
In this work we introduce a technique for a humanoid robot to autonomously learn the representations of objects in its visual environment. Our approach involves feature-based segmentation of the images, followed by learning to identify the object using Cartesian Genetic Programming. The learned identification is able to provide robust and fast segmentation of the objects without using features. To allow for autonomous learning, an attention mechanism is coupled with the training process. We showcase our system on a humanoid robot.
Design and implementation of smart guided glass for visually impaired people — IJECEIAES
The objective of this paper is to develop an innovative microprocessor-based smart glass for people who are visually impaired. Among existing devices on the market, the best available can help blind people only by sounding a buzzer when an object is detected; no single device provides object, hole, and barrier information together with distance, family-member recognition, and safety information. Our proposed guiding glass delivers all of that necessary information to the blind person as audio instructions. The proposed system relies on a Raspberry Pi 3 Model B, a Pi camera, and a NEO-6M global positioning system (GPS) module. We use TensorFlow and a faster region-based convolutional neural network (Faster R-CNN) approach for detecting objects and recognising the blind person's family members. The system provides voice information through headphones, enabling the blind individual to gain independence and freedom in both indoor and outdoor environments.
Fake currency notes pose a significant threat to financial systems, and counterfeit banknotes have a real impact on the economy. Digital Image Processing (DIP) techniques can be employed to identify counterfeit banknotes.
A BRIEF OVERVIEW ON DIFFERENT ANIMAL DETECTION METHODS — sipij
Research on animal detection plays a vital role in many real-life applications. Important examples include preventing animal-vehicle collisions on roads, preventing dangerous animal intrusions into residential areas, and studying the locomotive behaviour of targeted animals. Research related to animal detection remains limited; in this paper we discuss some of these areas of animal detection.
Technology is growing rapidly, yet everyone has some limitations, and one of them is visual disability. We present a system that helps visually disabled people. The framework combines object detection with voice assistance in an app, plus a hardware part attached to the blind person's stick for distance calculation. The app is designed to let a blind person explore freely wherever they want. The framework works by surveying the surroundings of the user with a camera and identifying what it sees. The app detects the objects present in the input video frames using the SSD algorithm, comparing them against a trained model; the captured video is partitioned into grids to detect object obstacles. In this way the details of detected objects are obtained, and their distances can also be calculated using dedicated algorithms. A text-to-speech (TTS) converter turns the information about detected objects into audio. At the press of a button, the application describes the scene the blind person is moving through in his or her regional language. The technologies used make the framework effective in practice. Sabin Khader | Meerakrishna M R | Reshma Roy | Willson Joseph C, "Godeye: An Efficient System for Blinds", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume 4, Issue 4, June 2020. URL: https://www.ijtsrd.com/papers/ijtsrd31631.pdf Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/31631/godeye-an-efficient-system-for-blinds/sabin-khader
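The abstract above does not specify which distance algorithm is used; a common choice for monocular setups, sketched here purely as an assumption, is the pinhole-camera similar-triangles model, where the distance to an object of known real-world width is inferred from the width of its detected bounding box.

```python
def estimate_distance_cm(focal_length_px: float,
                         real_width_cm: float,
                         bbox_width_px: float) -> float:
    """Pinhole-camera estimate: distance = f * W / w.

    focal_length_px: camera focal length in pixels (from calibration)
    real_width_cm:   known real-world width of the detected object class
    bbox_width_px:   width of the detector's bounding box in pixels
    """
    if bbox_width_px <= 0:
        raise ValueError("bounding box width must be positive")
    return focal_length_px * real_width_cm / bbox_width_px

# Example: a 15 cm wide object imaged as a 105 px box by a camera with a
# 700 px focal length is estimated to be 100 cm away.
print(estimate_distance_cm(700, 15, 105))
```

The function names and calibration values here are hypothetical illustrations, not taken from the Godeye paper; in a real system the focal length would come from a camera calibration step and the real-world width from a per-class lookup table.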
The Need For Robots To Grasp the World — Juxi Leitner
These slides were used for a few talks over the last couple of months to excite people about intelligent robotic systems; in particular, why I believe it is important for robots to grasp the world, both in the sense of perceiving and understanding it, and in the physical sense of changing the state of the world by picking up objects and interacting with a wide range of items.
These slides (with slight variations) were presented at QUT, Uni Sydney, Uni Cambridge, DeepMind, Uni Birmingham, Amazon Robotics, ...
Significant progress in computer vision in the past years has excited a whole field of researchers. In robotics we are now able to use these techniques to build robotic systems that can observe, understand, and interact with the world; in short, we can build robots that grasp the world.
This is an overview of the efforts in the Australian Centre for Robotic Vision under the umbrella of "Robotic Manipulation", led by Dr. Juxi Leitner.
Slides used for a series of presentations in Australia and Europe in Sep/Oct 2018.
Feel free to reach out with opportunities at juxi@lyro.io
Similar to Robotic Vision - Vision for Robotics #IEEE #QLD #CIS #Colloquium
Improving Robotic Manipulation with Vision and Learning @AmazonDevCentre Berlin — Juxi Leitner
My talk at the Amazon Development Centre in Berlin, including work on how to improve robotic reaching, grasping and manipulation, and on getting away from chasing grasp success rates.
Cartman, how to win the Amazon Robotics Challenge with robotic vision and deep learning — Juxi Leitner
Cartman, how to win the amazon robotics challenge with robotic vision and deep learning #GTC18 S8842
Douglas Morrison and Juxi Leitner
Australian Centre for Robotic Vision
roboticvision.org
These slides were used in the guest lecture for QUT's image processing class.
The two-part presentation consists of our Amazon Robotics Challenge robot #Cartman and an introduction to (deep) reinforcement learning.
ACRV Picking Benchmark: how to benchmark pick and place robotics research — Juxi Leitner
Presented at the IROS workshop on "DEVELOPMENT OF BENCHMARKING PROTOCOLS FOR ROBOTIC MANIPULATION"
http://ycbbenchmarks.org/IROS2017workshop.html
The ACRV Picking Benchmark has been developed over the last year to facilitate comparison of robotic systems in pick and place settings!
With the ACRV Picking Benchmark we propose a physical benchmark for robotic picking: overall design, objects, configuration, and guidance on appropriate technologies to solve it. Challenges are an important way to drive progress, but they occur only occasionally and their test conditions are difficult to replicate outside the challenge. This benchmark is motivated by experience in the recent Amazon Picking Challenge and contains a commonly available shelf, 42 objects, a set of stencils and standardized task setups.
A major focus throughout the design of this benchmark was to maximise reproducibility: a number of carefully chosen scenarios are defined, with precise instructions on how to place, orient, and align objects with the help of printable stencils. To make the benchmark as accessible as possible to the research community, a white IKEA shelf is used for all picking tasks. Furthermore, we carefully curated a set of 42 objects to ensure global availability and a reduced chance of import restrictions.
Team ACRV's experience at #AmazonPickingChallenge 2016 — Juxi Leitner
Building on Repeatable Grasping Experiments
Team ACRV: Lessons Learned from the Amazon Picking Challenge 2016
Juxi Leitner, ACRV, Queensland University of Technology (Team ACRV, 2016, 2017)
We describe our entry into the 2016 Amazon Picking Challenge (APC) and the lessons learned from deploying a complex robotic system outside of the lab. To help future developments, we decided to create a new physical benchmark challenge for robotic picking, to drive scientific progress and make research into (end-to-end) picking comparable. It consists of a set of 42 common objects, a widely available shelf, and exact guidelines for object arrangement using stencils.
A guest lecture about (deep) reinforcement learning and ongoing projects at QUT. This is for the Machine Learning course (CAB 420) at the Queensland University of Technology (QUT).
How to place 6th in the Amazon Picking Challenge (ENB329, QUT) — Juxi Leitner
A guest lecture about project management and how to organise a team for the Amazon Picking Challenge. This is for the mechatronics design project course (ENB 329) at the Queensland University of Technology (QUT).
LunaRoo: Designing a Hopping Lunar Science Payload #space #exploration — Juxi Leitner
Presentation slides from the talk given at the IEEE Aerospace Conference (@IEEEAeroConf) 2016 in Big Sky, Montana, USA.
We describe a hopping science payload solution designed to exploit the Moon's lower gravity to leap up to 20 m above the surface. The entire solar-powered robot is compact enough to fit within a 10 cm cube, whilst providing unique observation and mission capabilities by creating imagery during the hop. The LunaRoo concept is a proposed payload to fly onboard a Google Lunar XPrize entry. Its compact form is specifically designed for lunar exploration and science missions within the constraints given by PTScientists. The core features of LunaRoo are its method of locomotion, hopping like a kangaroo, and its imaging system capable of unique over-the-horizon perception. The payload will serve as a proof of concept, highlighting the benefits of alternative mobility solutions, in particular enabling observation and exploration of terrain not traversable by wheeled robots, and in addition providing data for beyond line-of-sight planning and communications for surface assets, extending overall mission capabilities.
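As a back-of-the-envelope check (not a figure from the paper) on how the Moon's lower gravity is exploited: under simple ballistic motion the vertical launch speed needed to reach a hop apex of height h is v = sqrt(2*g*h), so a 20 m apex requires far less speed on the Moon than the same hop would on Earth.

```python
import math

def launch_speed(apex_height_m: float, gravity_m_s2: float) -> float:
    """Vertical launch speed for a ballistic hop reaching apex_height_m.

    From energy conservation (0.5*m*v^2 = m*g*h): v = sqrt(2*g*h).
    Ignores drag (reasonable in vacuum) and assumes constant gravity.
    """
    return math.sqrt(2.0 * gravity_m_s2 * apex_height_m)

G_MOON = 1.62   # m/s^2, lunar surface gravity
G_EARTH = 9.81  # m/s^2, Earth surface gravity

v_moon = launch_speed(20.0, G_MOON)    # ~8.05 m/s
v_earth = launch_speed(20.0, G_EARTH)  # ~19.81 m/s
print(round(v_moon, 2), round(v_earth, 2))
```

The gravity values are standard constants; the calculation itself is a generic physics sketch rather than the payload's actual actuation model.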
My slides for the hands-on part of the Robotic Vision Summer School 2015 in Kioloa, Australia.
This is part of the robotics workshop, which aims to teach the participants how to program the TurtleBot.
ACRV Research Fellow Intro/Tutorial [Vision and Action] — Juxi Leitner
A short introduction about me and my work at the Queensland University of Technology (QUT) for the Australian Centre of Excellence for Robotic Vision.
Giving some background in Image Based Visual Servoing (IBVS) and some research goals/ideas/avenues...
Keeping it light and simple (hopefully..)
Tele-operation of a Humanoid Robot, Using Operator Bio-data — Juxi Leitner
We present our work on tele-operating a complex humanoid robot with the help of bio-signals collected from the operator. The frameworks (for robot vision, collision avoidance and machine learning) developed in our lab allow, when combined, for safe interaction with the environment. This even works with noisy control signals, such as the operator's hand acceleration and their electromyography (EMG) signals. These bio-signals are used to execute equivalent actions (such as reaching and grasping of objects) on the 7 DOF arm.
Improving Robot Vision Models for Object Detection Through Interaction #ijcnn... — Juxi Leitner
Presentation during WCCI 2014 in Beijing, China.
We propose a method for learning specific object representations that can be applied (and reused) in visual detection and identification tasks. A machine learning technique called Cartesian Genetic Programming (CGP) is used to create these models based on a series of images. Our research investigates how manipulation actions might allow for the development of better visual models and therefore better robot vision.
This paper describes how visual object representations can be learned and improved by performing object manipulation actions, such as poke, push and pick-up, with a humanoid robot. The improvement can be measured, which allows the robot to select and perform the 'right' action, i.e. the action with the best possible improvement of the detector.
How does it feel to be a SpaceMaster? [Erasmus Mundus - ACE Talk] — Juxi Leitner
Last December I had the pleasure of taking part in the EM-ACE workshop held at the University of Porto, Portugal. I was invited to talk about my experience studying in the "Joint European Master in Space Science and Technology" (SpaceMaster) in front of about 60 students.
http://www.em-ace.eu/en/upload/public-docs/UPORTO_em-ace%20event_agenda.pdf
http://www.em-a.eu/en/home/rss-feed-detail/em-ace-student-event-university-of-porto-16-december-2013-1395.html
Towards Autonomous and Adaptive Humanoids [PhD Proposal @ Università della Svizzera Italiana] — Juxi Leitner
The slides for my PhD proposal presentation in Nov 2013 at the Università della Svizzera Italiana (USI).
The proposal can be found on my webpage: http://Juxi.net/phd/
Richard's entangled adventures in wonderland — Richard Gill
Since the loophole-free Bell experiments of 2020 and the Nobel prizes in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think however that it is a smoke screen, and the slogan "lost in math" comes to my mind. I will discuss some other recent disproofs of Bell's theorem using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
THE IMPORTANCE OF MARTIAN ATMOSPHERE SAMPLE RETURN — Sérgio Sacani
The return of a sample of near-surface atmosphere from Mars would facilitate answers to several first-order science questions surrounding the formation and evolution of the planet. One of the important aspects of terrestrial planet formation in general is the role that primary atmospheres played in influencing the chemistry and structure of the planets and their antecedents. Studies of the martian atmosphere can be used to investigate the role of a primary atmosphere in its history. Atmosphere samples would also inform our understanding of the near-surface chemistry of the planet, and ultimately the prospects for life. High-precision isotopic analyses of constituent gases are needed to address these questions, requiring that the analyses are made on returned samples rather than in situ.
What are greenhouse gases, and how many gases affect the Earth? — moosaasad1975
What greenhouse gases are, how they affect the Earth and its environment, what the future of the environment and the Earth looks like, and how they influence the weather and the climate.
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a... — Ana Luísa Pinho
Functional Magnetic Resonance Imaging (fMRI) provides means to characterize brain activations in response to behavior. However, cognitive neuroscience has been limited to group-level effects referring to the performance of specific tasks. To obtain the functional profile of elementary cognitive mechanisms, the combination of brain responses to many tasks is required. Yet, to date, both structural atlases and parcellation-based activations do not fully account for cognitive function and still present several limitations. Further, they do not adapt overall to individual characteristics. In this talk, I will give an account of deep-behavioral phenotyping strategies, namely data-driven methods in large task-fMRI datasets, to optimize functional brain-data collection and improve inference of effects-of-interest related to mental processes. Key to this approach is the employment of fast multi-functional paradigms rich in features that can be well parametrized and, consequently, facilitate the creation of psycho-physiological constructs to be modelled with imaging data. Particular emphasis will be given to music stimuli when studying high-order cognitive mechanisms, due to their ecological nature and quality to enable complex behavior compounded by discrete entities. I will also discuss how deep-behavioral phenotyping and individualized models applied to neuroimaging data can better account for the subject-specific organization of domain-general cognitive systems in the human brain. Finally, the accumulation of functional brain signatures brings the possibility to clarify relationships among tasks and create a univocal link between brain systems and mental functions through: (1) the development of ontologies proposing an organization of cognitive processes; and (2) brain-network taxonomies describing functional specialization.
To this end, tools to improve commensurability in cognitive science are necessary, such as public repositories, ontology-based platforms and automated meta-analysis tools. I will thus discuss some brain-atlasing resources currently under development, and their applicability in cognitive as well as clinical neuroscience.
Robotic Vision - Vision for Robotics #IEEE #QLD #CIS #Colloquium
1. Robotic Vision
Jürgen ‘Juxi’ Leitner
ARC Centre of Excellence for Robotic Vision
Queensland University of Technology
a vision for robotics
j.leitner@qut.edu.au - http://Juxi.net
3. http://roboticvision.org/
recognising objects & stuff
recognising places
detecting motion
move to see
see to move
context for seeing
seeing for context
seeing creates memories
memory helps seeing
paying attention
recognizing humans, their activities and intent
Seeing
10. http://roboticvision.org/
About Me
Juxi Leitner - http://Juxi.net
PhD Informatics / Intelligent Systems - Dalle Molle Institute for AI (IDSIA)
MSc Space Robotics & Automation
BSc Information & Software Engineering
Work: Intelligent (Space) Robots - European Space Agency (ESA)
Work: (Humanoid) Robot Vision - Instituto Superior Técnico (IST), Erasmus Intelligent Systems
Work: Robotic Vision and Actions - Queensland University of Technology (QUT)
Mobility: Intelligent Space Systems Laboratory
57. http://roboticvision.org/
case #1: learning
from supervised to robotic-assisted unsupervised learning
Autonomous Learning Of Robust Visual Object Detection And Identification On A Humanoid. J. Leitner, P. Chandrashekhariah, S. Harding, M. Frank, G. Spina, A. Förster, J. Triesch, J. Schmidhuber. ICDL/EpiRob 2012.
61. http://roboticvision.org/
conclusions
a novel way of object segmentation
learning and teaching perception
integration action-perception side
reactive reaching/grasping
improving perception with (inter-)actions
learning neuro-controllers