ICAM2015 presentation poster: Object Recognition and Planning of Ring-type Caging for Scissors. Presented at the Int. Conf. on Advanced Mechatronics 2015.
The poster presents a series of methods for detecting scissors with a vision system and for generating robotic caging-grasp motions with Choreonoid, a motion-planning framework.
Abstract: Caging is a method of capturing an object geometrically with position-controlled robots, without any force or tactile sensors. Most previous studies focused on the caging constraints of objects; few addressed motion planning. In this paper, we present a motion planner for caging by a multifingered hand and a manipulator that produces the whole motion, from approaching a target object to capturing it, without any collisions. We derive sufficient conditions for the caging tasks for three caging patterns. Since the planner requires object properties, including the position and orientation of the object, we adopt object recognition using AR picture markers. We apply the proposed method to caging of four target objects: a cylinder, a ring, a mug and a dumbbell. Experimental results show that each motion is successfully planned and executed by the arm/hand system.
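The abstract does not give the planner's data flow in code; as a rough sketch of how a recognized marker pose could be turned into an approach pose for the hand (the frame names and the 15 cm offset below are hypothetical, not from the paper), a planar homogeneous-transform composition looks like this:

```python
import math

def se2(x, y, theta):
    """Homogeneous transform for a planar pose (x, y, heading theta)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]]

def compose(a, b):
    """Matrix product a @ b for 3x3 lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Pose of the AR marker in the camera frame (from recognition),
# and a fixed approach offset expressed in the marker frame.
T_cam_marker = se2(0.4, 0.1, math.pi / 2)
T_marker_approach = se2(0.0, -0.15, 0.0)  # stand off 15 cm along marker -y

# Approach pose for the hand, in the camera frame.
T_cam_approach = compose(T_cam_marker, T_marker_approach)
x, y = T_cam_approach[0][2], T_cam_approach[1][2]
```

The same composition extends to 4x4 transforms for the full 3-D case.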
Presented at ICMA 2012 in Chengdu, China.
This document summarizes the research of Satoshi Makita, an assistant professor specializing in robotic manipulation by incomplete grasping. His research topics include analyzing contact forces in robotic manipulation, caging grasps that use geometrical constraints imposed by position-controlled robots, and designing and controlling robot hands. Specifically, he studies graspless manipulation, where objects are supported by both the robot and the environment; caging, where objects are constrained but can still move; and planning caging grasps with motion planners. He also works to improve the manipulability of tendon-driven hands modeled on the human hand. Makita hopes to learn more about Korean robotics and make contacts in the field.
1) 3D multifingered caging involves using multiple robot fingers to constrain an object's movement in 3D space without grasping it.
2) The document discusses different types of caging patterns (envelope, ring, waist) and develops sufficient conditions for caging common shapes like spheres, disks, rings, and dumbbells.
3) It also introduces the concept of partial caging, which aims to constrain an object as much as possible even if a robot is unable to achieve complete caging due to limitations like low degrees of freedom. Partial caging could be useful for prosthetic hands.
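The paper's actual sufficient conditions are not reproduced in this summary; as a loose planar analogue only (the gap test and every number below are illustrative assumptions, not the paper's conditions), one can check that an object cannot slip between any pair of adjacent fingertips:

```python
import math

def max_fingertip_gap(fingertips):
    """Largest distance between adjacent fingertips, ordered by angle
    around their centroid (planar sketch)."""
    cx = sum(p[0] for p in fingertips) / len(fingertips)
    cy = sum(p[1] for p in fingertips) / len(fingertips)
    pts = sorted(fingertips, key=lambda p: math.atan2(p[1] - cy, p[0] - cx))
    return max(math.dist(pts[i], pts[(i + 1) % len(pts)])
               for i in range(len(pts)))

def caged(fingertips, object_diameter):
    """Simplified sufficient condition: the object cannot slip between
    any pair of adjacent fingertips."""
    return max_fingertip_gap(fingertips) < object_diameter

# Three fingertips on a unit circle around a disk-like object
tips = [(math.cos(a), math.sin(a)) for a in (0.0, 2.1, 4.2)]
```

A diameter-1.8 disk passes the test with these fingertips; a diameter-1.7 disk does not.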
For sightseeing, Northern Kyushu, Japan, has excellent cities and cultures! From Hakata, Fukuoka, which is the largest city in Kyushu, most of the cities in Kyushu can be easily reached by train or bus.
This document proposes a mechanical model to estimate the elasticity of the flexor digitorum muscle-tendon complex (MTC) in the human hand. It describes measuring the relationship between joint angle and the fingertip force applied during loading and unloading of the finger. The results show an exponential curve for loading and hysteresis between loading and unloading, consistent with tendon behavior. It also measures the relationship between joint angle and angular velocity when the finger is released, again showing hysteresis. This simple model allows estimating the elastic properties of the MTC without expensive equipment, providing insight into hand function and applications in sports coaching.
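The summary mentions an exponential loading curve; assuming a model of the form F = a·exp(bθ) (the paper's exact model may differ), its parameters can be estimated from joint-angle/force pairs by log-linear least squares:

```python
import math

def fit_exponential(thetas, forces):
    """Least-squares fit of F = a * exp(b * theta) via log-linearization:
    ln F = ln a + b * theta is an ordinary linear regression."""
    n = len(thetas)
    ys = [math.log(f) for f in forces]
    sx, sy = sum(thetas), sum(ys)
    sxx = sum(t * t for t in thetas)
    sxy = sum(t * y for t, y in zip(thetas, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = math.exp((sy - b * sx) / n)
    return a, b

# Synthetic loading data generated from F = 0.5 * exp(2 * theta)
thetas = [0.1 * i for i in range(1, 8)]
forces = [0.5 * math.exp(2.0 * t) for t in thetas]
a, b = fit_exponential(thetas, forces)
```

On noise-free synthetic data the fit recovers the generating parameters exactly.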
The Japanese slides report an approach to monitoring the Tsushima leopard cat (Tsushima-Yamaneko), a kind of wildcat, using UAVs.
In addition, an outreach activity for children was held in Tsushima, Nagasaki, Japan in August 2015.
A major challenge for the next decade is to design virtual and augmented reality systems (VR at large) for real-world use cases such as healthcare, entertainment, e-education, and high-risk missions. This requires VR systems to operate at scale, in a personalized manner, remaining bandwidth-tolerant whilst meeting quality and latency criteria. One key challenge to reach this goal is to fully understand and anticipate user behaviours in these mixed reality settings.
This can be accomplished only by a fundamental revolution of the network and VR systems, which have to put the interactive user at the heart of the system rather than at the end of the chain. With this goal in mind, in this talk we describe our current research on user-centric systems. First, we describe our viewport-based streaming strategies for 360-degree video. Then, we present in more detail our research on user behaviour analysis when users interact with 360-degree content. Specifically, we describe a set of metrics that allow us to identify key behaviours among users and to quantify the similarity of these behaviours, and we present our clique-based clustering methodology together with information-theoretic and trajectory-based in-depth analyses. Finally, we conclude with an overview of the extension of this work to navigation within volumetric video sequences.
Human action recognition with Kinect using a joint motion descriptor (Soma Boubou)
- We proposed a novel descriptor for the motion of skeleton joints.
- The proposed descriptor outperforms state-of-the-art descriptors such as HON4D and the descriptor of Chen et al. (2013).
- The proposed approach is effective for periodic actions (e.g., waving, walking, jogging, side-boxing).
- Grouping was effective for actions with unique joint trajectories (e.g., tennis serving, side kicking).
- Grouping joints into eight groups is consistently effective for actions in the MSR3D dataset.
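The descriptor's details are not in this summary; a minimal sketch of a velocity-based joint descriptor with joint grouping (the grouping, frame format, and data below are invented for illustration) might look like:

```python
def joint_speeds(frames):
    """Per-joint speed magnitudes between consecutive skeleton frames.
    frames: list of [(x, y, z), ...] with one tuple per joint."""
    speeds = []
    for prev, cur in zip(frames, frames[1:]):
        speeds.append([sum((c - p) ** 2 for c, p in zip(cj, pj)) ** 0.5
                       for pj, cj in zip(prev, cur)])
    return speeds

def group_descriptor(speeds, groups):
    """Average speed per joint group per frame, flattened into one vector."""
    return [sum(frame[j] for j in g) / len(g)
            for frame in speeds for g in groups]

# Two joints over three frames; joint 0 moves 1 unit/frame, joint 1 is still.
frames = [[(0, 0, 0), (1, 1, 1)],
          [(1, 0, 0), (1, 1, 1)],
          [(2, 0, 0), (1, 1, 1)]]
desc = group_descriptor(joint_speeds(frames), groups=[[0], [1]])
```

The resulting vector separates moving from stationary joint groups frame by frame.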
Automated Laser Scanning System For Reverse Engineering And Inspection (Jennifer Daniel)
This document summarizes an automated laser scanning system developed for reverse engineering and inspection of parts with freeform surfaces. The system generates optimal scan plans considering parameters like view angle, depth of field, and occlusion. It uses a laser scanner mounted on a motorized rotary table to automatically scan parts according to the generated scan plans. The point data is then automatically registered and evaluated by comparing to CAD models. The system aims to automate the scanning process for more efficient inspection and reverse engineering of complex parts.
This document summarizes an analysis of iris recognition based on false acceptance rate (FAR) and false rejection rate (FRR) using the Hough transform. It first provides an overview of iris recognition and its typical stages: image acquisition, localization/segmentation, normalization, feature extraction, and pattern matching. It then describes existing methods used in each stage, including the Hough transform and rubber-sheet model for localization and normalization. The proposed methodology applies Canny edge detection, the Hough transform for boundary detection, and normalization with the rubber-sheet model, then calculates metrics such as mean squared error, root mean squared error, signal-to-noise ratio, and root signal-to-noise ratio to evaluate the accuracy of iris recognition using FAR and FRR.
IRJET - Image Feature Extraction using Hough Transformation Principle (IRJET Journal)
The document describes an image processing technique that uses Hough transformation and contour detection to extract features from images and count objects. It proposes an integrated method to detect circular objects, detach overlapping objects, and count objects of any shape. The method applies Canny edge detection, contour detection, and circular Hough transform to segment overlapping circular objects. It then uses contour detection to count all objects regardless of shape. Experimental results show the method can successfully segment and count overlapping circular and non-circular objects in test images.
A Case Study: Circle Detection Using Circular Hough Transform (IJSRED)
This paper presents a case study on using the circular Hough transform (CHT) to detect circles in binary images. The CHT works by detecting edges using algorithms like Canny edge detection, and then applying the Hough transform to find parameter triplets (x, y, R) that correspond to circles. An accumulator matrix is used to tally parameter combinations that correspond to edges, with the highest tallies indicating detected circles. The paper applies this method to detect coins in an image, finding circles with 95% accuracy. It concludes the CHT is an effective algorithm for circle detection, though future work could optimize it and improve accuracy.
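The accumulator voting described above can be sketched for a fixed, known radius (a simplification of the full (x, y, R) search; the synthetic edge map below is made up for illustration):

```python
import math

def hough_circle(edge_points, radius, width, height):
    """Vote for circle centers at a fixed radius; return the best center.
    Each edge point votes for all candidate centers at distance `radius`."""
    acc = {}
    for (x, y) in edge_points:
        for t in range(0, 360, 5):
            a = int(round(x - radius * math.cos(math.radians(t))))
            b = int(round(y - radius * math.sin(math.radians(t))))
            if 0 <= a < width and 0 <= b < height:
                acc[(a, b)] = acc.get((a, b), 0) + 1
    return max(acc, key=acc.get)   # highest tally wins

# Synthetic edge map: points on a circle of radius 10 centered at (30, 25)
edges = [(int(round(30 + 10 * math.cos(math.radians(t)))),
          int(round(25 + 10 * math.sin(math.radians(t)))))
         for t in range(0, 360, 10)]
center = hough_circle(edges, radius=10, width=64, height=64)
```

In practice the radius is also swept, turning the accumulator into the (x, y, R) triplet space the paper describes.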
This document discusses training a CRS robotic arm using human arm imitation through the Kinect sensor. It describes how the Kinect can track human joints in 3D space using its skeletal API. The robotic arm has 6 degrees of freedom and the Denavit-Hartenberg parameters will be used to define reference frames for each link. Joint angles will be calculated using inverse kinematics and transmitted to the robot via serial communication. Local optimization can remove redundant states from the motion path to efficiently perform tasks.
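A standard Denavit-Hartenberg link transform, as used to define the reference frames mentioned above, can be written directly (the two-link planar example at the end is a generic illustration, not the CRS arm's actual parameters):

```python
import math

def dh_matrix(theta, d, a, alpha):
    """Standard Denavit-Hartenberg link transform (4x4, row-major lists)."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [[ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.0,      sa,       ca,      d],
            [0.0,     0.0,      0.0,    1.0]]

def matmul(A, B):
    """4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Two-link planar arm (d = 0, alpha = 0): forward kinematics by chaining
# one DH transform per link; the end-effector position is the last column.
T = matmul(dh_matrix(math.pi / 2, 0, 1.0, 0),
           dh_matrix(-math.pi / 2, 0, 1.0, 0))
x, y = T[0][3], T[1][3]
```

Chaining one such matrix per joint gives the forward kinematics that the inverse-kinematics step then inverts.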
This document describes a vision assisted pick and place robotic arm guided by image processing concepts for object sorting. It discusses introducing a robotic arm that can pick objects from one location and place them in another using machine vision. The document covers concepts like image acquisition, processing, object identification, and control signal transfer. It provides details on how a webcam captures images that are converted to grayscale and binary before edge detection and other processing to find object boundaries and centroids. This allows generating control signals to guide the robotic arm via a controller. Applications are in automated industries like assembly and potential enhancements are also discussed.
This document describes a vision assisted pick and place robotic arm guided by image processing concepts for object sorting. It discusses introducing a robotic arm that can pick objects from one location and place them in another using machine vision. The document covers key concepts like image acquisition, processing, object identification, and control signal transfer. It provides details on how a webcam captures images that are converted to grayscale and binary before edge detection and other processing to find object boundaries and centroids. Control signals are sent via an interface to guide the robotic arm based on image analysis. Potential applications and advantages like consistency and hazardous task handling are also summarized.
This document presents a methodology for real-time object tracking using a webcam. It combines Prewitt edge detection for object detection and Kalman filtering for tracking. Prewitt edge detection is used to detect the edges of the moving object in each video frame. Then, Kalman filtering is used to track the detected object across subsequent frames by predicting its location. Experiments show the approach can efficiently track objects under deformation, occlusion, and can track multiple objects simultaneously. The combination of Prewitt edge detection and Kalman filtering provides an effective method for real-time object tracking.
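A minimal constant-velocity Kalman filter for one coordinate of the tracked object illustrates the predict/update cycle described above (the noise parameters and synthetic measurements are arbitrary choices, not the paper's):

```python
def kalman_1d(zs, dt=1.0, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter for one coordinate of an object.
    zs: noisy position measurements; returns filtered positions."""
    x, v = zs[0], 0.0                      # state: position, velocity
    P = [[1.0, 0.0], [0.0, 1.0]]           # state covariance
    out = []
    for z in zs:
        # Predict with F = [[1, dt], [0, 1]]
        x, v = x + dt * v, v
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1], P[1][1] + q]]
        # Update with measurement z (H = [1, 0])
        S = P[0][0] + r
        K0, K1 = P[0][0] / S, P[1][0] / S
        y = z - x
        x, v = x + K0 * y, v + K1 * y
        P = [[(1 - K0) * P[0][0], (1 - K0) * P[0][1]],
             [P[1][0] - K1 * P[0][0], P[1][1] - K1 * P[0][1]]]
        out.append(x)
    return out

# Object moving at 1 unit/frame with small alternating measurement noise
zs = [i + n for i, n in zip(range(20), [0.1, -0.1] * 10)]
est = kalman_1d(zs)
```

The prediction step is what lets the tracker bridge short occlusions: it keeps estimating position even when a measurement is missing.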
Motion planning and controlling algorithm for grasping and manipulating movin... (ijscai)
Much robotic grasping research has focused on stationary objects. For dynamic, moving objects, researchers have used images captured in real time to locate objects dynamically. However, this approach to controlling the grasping process is quite costly, demanding substantial resources and image processing, so it is worth seeking simpler methods of handling. In this paper, we detail the requirements for manipulating a humanoid robot arm with 7 degrees of freedom to grasp and handle moving objects in a 3-D environment, with or without obstacles and without using cameras. We use the OpenRAVE simulation environment and a robot arm equipped with the Barrett hand. We also describe a randomized planning algorithm, an extension of RRT-JT, that combines exploration, using a Rapidly-exploring Random Tree, with exploitation, using Jacobian-based gradient descent, to instruct a 7-DoF WAM robotic arm to grasp a moving target while avoiding obstacles it may encounter. We present a simulation of a scenario that starts with tracking a moving mug, then grasping it, and finally placing the mug at a given position, achieving a high success rate in reasonable time.
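A bare-bones RRT conveys the exploration half of the algorithm (without the Jacobian-based exploitation step that makes RRT-JT); the workspace bounds, step size, goal bias, and obstacle below are arbitrary illustrative choices:

```python
import math, random

def rrt(start, goal, obstacles, step=0.5, iters=2000, seed=1):
    """Minimal 2-D RRT: grow a tree toward random samples, stop when the
    goal is within one step. obstacles: list of (cx, cy, radius)."""
    rng = random.Random(seed)
    free = lambda p: all(math.dist(p, (ox, oy)) > r for ox, oy, r in obstacles)
    nodes, parent = [start], {start: None}
    for _ in range(iters):
        # Sample the goal 10% of the time to bias growth toward it
        sample = (goal if rng.random() < 0.1
                  else (rng.uniform(0, 10), rng.uniform(0, 10)))
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not free(new):
            continue
        nodes.append(new)
        parent[new] = near
        if math.dist(new, goal) < step:     # reached: walk back up the tree
            path = [goal]
            while new is not None:
                path.append(new)
                new = parent[new]
            return path[::-1]
    return None

path = rrt((1.0, 1.0), (9.0, 9.0), obstacles=[(5.0, 5.0, 1.5)])
```

RRT-JT replaces a fraction of these random extensions with Jacobian-based steps that pull the end effector directly toward the (moving) grasp target.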
Localization and navigation are important tasks for mobile robots. Localization involves determining a robot's position and orientation, which can be done using global positioning systems outdoors or local sensor networks indoors. Navigation involves planning a path to reach a goal destination. Common navigation algorithms include Dijkstra's algorithm, A* algorithm, potential field method, wandering standpoint algorithm, and DistBug algorithm. Each algorithm has different requirements and approaches to planning paths between a starting point and goal.
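Of the algorithms listed, A* is compact enough to sketch; this grid version with a Manhattan heuristic is a generic illustration, not tied to any particular robot:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; grid[r][c] == 1 marks an obstacle.
    Returns the number of steps in a shortest path, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    open_set = [(h(start), 0, start)]        # (f = g + h, g, cell)
    best = {start: 0}
    while open_set:
        f, g, (r, c) = heapq.heappop(open_set)
        if (r, c) == goal:
            return g
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

# A wall with one opening forces a detour around it
grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
steps = astar(grid, (0, 0), (2, 0))
```

With the heuristic set to zero this degenerates to Dijkstra's algorithm, the other graph-search method named above.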
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
ABSTRACT: Feature extraction plays a vital role in the analysis and interpretation of remotely sensed data. Its two important components are image enhancement and information extraction. Image enhancement techniques improve the visibility of any portion or feature of the image, while information extraction techniques obtain statistical information about a particular feature or portion of the image. The presented work surveys various feature extraction techniques; optical character recognition is a particularly important area of image processing. Keywords: image character recognition, methods for feature extraction, basic Gabor filter, IDA, and PCA.
Applying edge density based region growing with frame difference for detectin... (eSAT Publishing House)
1. The document presents a method for detecting moving objects in video surveillance systems using edge density based region growing with frame difference.
2. It involves preprocessing frames through edge detection, frame differencing to eliminate stationary backgrounds, and applying edge density based region growing to connect regions of moving objects.
3. Experimental results on videos of a moving person and cylinder show the method can accurately detect moving objects in complex backgrounds.
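Frame differencing, the background-elimination step mentioned above, can be sketched on tiny synthetic grayscale frames (the threshold and frame data are made up for illustration):

```python
def motion_mask(prev, cur, thresh=20):
    """Binary mask of pixels whose intensity changed by more than `thresh`
    between two grayscale frames (lists of rows)."""
    return [[1 if abs(c - p) > thresh else 0 for p, c in zip(pr, cr)]
            for pr, cr in zip(prev, cur)]

def moving_pixels(mask):
    """Count of pixels flagged as moving."""
    return sum(sum(row) for row in mask)

# A bright 1-pixel "object" moves one column to the right between frames
prev = [[0, 200, 0, 0],
        [0,   0, 0, 0]]
cur  = [[0,   0, 200, 0],
        [0,   0,   0, 0]]
mask = motion_mask(prev, cur)
```

The mask flags both the vacated and the newly occupied pixel; the edge-density region growing described above then connects such fragments into whole moving objects.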
The document discusses multi-biometric identification using iris and periocular biometrics. It provides an overview of iris and periocular biometrics, including the structure of the eye, advantages of iris recognition, acquisition techniques, preprocessing, segmentation approaches, feature extraction methods, and current research directions. Some key points covered include the use of iris recognition in large-scale ID programs, the subsystems of ocular biometrics like iris, retina, cornea, techniques like hybrid biometrics that combine iris and other biometrics, and segmentation approaches such as using active contours, Hough transforms, graph cuts and cellular automata.
A new technique to fingerprint recognition based on partial window (Alexander Decker)
1) The document presents a new technique for fingerprint recognition based on analyzing a partial window around the core point of a fingerprint.
2) The technique first locates the core point of a fingerprint, then determines a window around the core point. Features are extracted from this window and input into an artificial neural network (ANN) to recognize fingerprints.
3) The technique aims to reduce computation time for fingerprint recognition by focusing the analysis on a partial window rather than the whole fingerprint image.
The document summarizes a project in which an industrial robot named Baxter, made by Rethink Robotics, plays pool using its vision sensors and inverse kinematics. The key steps are finding the desired orientation to strike the cue ball into the pocket using either a 3D sensor or Baxter's head camera, moving Baxter's arm to that orientation, and then using visual servoing with Baxter's hand camera to align the end effector with the center of the ball while maintaining orientation. Once aligned, Baxter moves its end effector linearly to strike the ball. Limitations were a restricted workspace, due to Baxter's fixed position, and insufficient force on the ball. Future work proposes making Baxter mobile and using linear actuators to increase the striking force.
EFFECTIVE INTEREST REGION ESTIMATION MODEL TO REPRESENT CORNERS FOR IMAGE (sipij)
One of the most important steps in describing local features is estimating the interest region around the feature location, to achieve invariance against different image transformations. The pixels inside the interest region are used to build the descriptor that represents the feature. Estimating the interest region around a corner location is a fundamental step in describing the corner feature, but the process is challenging under different image conditions. Most corner detectors derive appropriate scales to estimate the region used to build descriptors. In our approach, we propose a new local-maxima-based interest region detection method, which can be used to build descriptors that represent corners. A comparative analysis of feature-point matching against recent corner detectors shows that our method achieves better precision and recall than existing methods.
Using robots in cell production systems poses various problems; one of them is the assembly of three or more objects. In general, when multiple objects are assembled simultaneously, each target part is held independently by a robot arm or a jig. However, this approach requires as many robot arms or jigs as there are parts, and the more parts there are, the more wasteful it becomes in cost and installation space. To address this, 音𣷓 et al. analyzed the contact forces acting on the assembly targets and derived conditions under which objects not fixed by jigs are unlikely to move during the assembly operation; that is, they examined assembly task conditions by considering the robustness of ungrasped objects in the environment. Based on this strategy, this study aims to perform multi-object assembly with a single-arm manipulator. By taking object robustness into account, we propose a method for handling multiple temporarily assembled objects simultaneously. Taking pipe-joint assembly as the target task, we show that a single-arm manipulator can grasp multiple objects simultaneously with the aid of a simple tool. Furthermore, to improve the task success rate, we implement robot control and motion planning based on object position detection with an RGB-D camera.
This paper discusses assembly operations in which a single manipulator with a parallel gripper simultaneously grasps multiple objects and holds the group of temporarily assembled objects. Assembly tasks are generally performed by multiple robots and jigs that constrain the target objects mechanically or geometrically to prevent them from moving. To achieve such tasks with a single gripper, it is necessary to analyze the physical interaction between the objects that provides these constraints. In this paper, we focus on assembling pipe joints as an example and discuss constraining the motion of the objects. Our demonstration shows that a simple tool can facilitate holding multiple objects with a single gripper.
This paper provides a comprehensive overview of motion planning for robotic manipulation, encompassing grasp planning, motion planning, MoveIt in ROS, OMPL, RRT, forward and inverse kinematics, singularity of robotic manipulators, and manipulability.
A new technique to fingerprint recognition based on partial windowAlexander Decker
1) The document presents a new technique for fingerprint recognition based on analyzing a partial window around the core point of a fingerprint.
2) The technique first locates the core point of a fingerprint, then determines a window around the core point. Features are extracted from this window and input into an artificial neural network (ANN) to recognize fingerprints.
3) The technique aims to reduce computation time for fingerprint recognition by focusing the analysis on a partial window rather than the whole fingerprint image.
The document summarizes a project where an industrial robot named Baxter, made by Rethink Robotics, plays pool using its vision sensors and inverse kinematics. The key steps involved finding the desired orientation to strike the cue ball into the pocket using either a 3D sensor or Baxter's head camera, moving Baxter's arm to that orientation, then using visual servoing with Baxter's hand camera to align the end effector with the center of the ball while maintaining orientation. Once aligned, Baxter would linearly move its end effector to strike the ball. Some limitations were the workspace was limited due to Baxter's fixed position, and insufficient force was achieved on the ball. Future work proposed making Baxter mobile and using linear actuators to increase striking force
EFFECTIVE INTEREST REGION ESTIMATION MODEL TO REPRESENT CORNERS FOR IMAGE sipij
One of the most important steps to describe local features is to estimate the interest region around the feature location to achieve the invariance against different image transformation. The pixels inside the interest region are used to build the descriptor, to represent a feature. Estimating the interest region
around a corner location is a fundamental step to describe the corner feature. But the process is challenging under different image conditions. Most of the corner detectors derive appropriate scales to estimate the region to build descriptors. In our approach, we have proposed a new local maxima-based
interest region detection method. This region estimation method can be used to build descriptors to represent corners. We have performed a comparative analysis to match the feature points using recent corner detectors and the result shows that our method achieves better precision and recall results than
existing methods.
ICAM2015 presentation poster - Object Recognition and Planning of Ring-type Caging for Scissors
1. Object Recognition and Planning of Ring-type Caging for Scissors
S. Makita*1, S. Tsuji*2, T. Tsuji*3, K. Harada*4
*1 NIT Sasebo College, *2 Omron Corp., *3 Kyushu Univ., *4 AIST
Keywords
Caging, Grasping, Manipulation, Object recognition, Motion planning, Object with holes
Abstract
● Scissors, an object with holes, can be caged by a two-fingered hand
● The features necessary for caging scissors are:
● the position of the holes (the handle)
● the orientation of the scissors (especially the blades)
● Object recognition is based on OpenCV and SURF
● Choreonoid*[1], a motion planner, and graspPlugin*[2], a grasp planner, are used to generate the caging motion
Motivation
● Caging, a geometrical constraint imposed by robots, is a substitute for grasping, a force-control-based approach
● Ring-type caging can be achieved easily from simple visual features (loop shapes)
● A vision system has an advantage over RGB-D systems in terms of sensor size restrictions
Our proposal
● Obtain visual features for ring-type caging with webcams
● Ellipse approximation of the scissors handle
● Majority decision on the blade orientation from a Hough transform of line segments
● Motion planning with Choreonoid and graspPlugin, modified for caging and its sufficient conditions
Sufficient condition for ring-type caging
● Capture a loop of the object with the robot fingers so that the fingers and the loop form a Hopf link
RGB vs. RGB-D
● RGB-D images carry richer information about objects than RGB alone.
● RGB-D sensors are still larger than RGB cameras, which is a disadvantage for mounting them on robot hands.
Caging planning by Choreonoid*[1], a motion planner
● A motion planner using PRM (Probabilistic Roadmap Method)
graspPlugin*[2]
● A grasp planner for Choreonoid, including grasping, trajectory and task planning
Our modification
Reference: *[1] Choreonoid, http://choreonoid.org/en/, *[2] graspPlugin, http://choreonoid.org/GraspPlugin/i/?q=en
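The PRM idea named above can be illustrated with a toy sketch. The real planner searches the arm/hand joint space inside Choreonoid, so everything here (a 2-D unit-square configuration space, a disk obstacle, the sample count and connection radius) is an assumption for illustration only:

```python
import heapq
import math
import random

def prm_plan(start, goal, is_free, n_samples=300, radius=0.3, seed=1):
    """Minimal PRM sketch: sample configurations, keep collision-free
    ones, connect nearby pairs with collision-checked edges, then run
    Dijkstra over the roadmap (node 0 = start, node 1 = goal)."""
    random.seed(seed)
    nodes = [start, goal] + [(random.random(), random.random())
                             for _ in range(n_samples)]
    nodes = [q for q in nodes if is_free(q)]  # discard colliding samples

    def edge_free(a, b, steps=10):
        # collision-check the straight segment between two configurations
        return all(is_free(((1 - t) * a[0] + t * b[0],
                            (1 - t) * a[1] + t * b[1]))
                   for t in (k / steps for k in range(steps + 1)))

    def neighbors(i):
        return [j for j in range(len(nodes))
                if j != i and math.dist(nodes[i], nodes[j]) < radius
                and edge_free(nodes[i], nodes[j])]

    dist, prev = {0: 0.0}, {}
    pq = [(0.0, 0)]
    while pq:
        d, i = heapq.heappop(pq)
        if i == 1:
            break
        if d > dist.get(i, math.inf):
            continue
        for j in neighbors(i):
            nd = d + math.dist(nodes[i], nodes[j])
            if nd < dist.get(j, math.inf):
                dist[j], prev[j] = nd, i
                heapq.heappush(pq, (nd, j))
    if 1 not in prev:
        return None  # goal not connected to start on this roadmap
    path, i = [1], 1
    while i != 0:
        i = prev[i]
        path.append(i)
    return [nodes[i] for i in reversed(path)]

# toy obstacle: a disk in the middle of the unit square
free = lambda q: math.dist(q, (0.5, 0.5)) > 0.25
path = prm_plan((0.05, 0.05), (0.95, 0.95), free)
```

A fixed seed makes the roadmap reproducible; in practice the sample count and connection radius trade planning time against the chance of missing a narrow passage.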
2. Object recognition
Position of the scissors handle
1) Detect contours using the luminance gradient, and apply a closing operation to connect separated contours
2) Recognize loop contours from the binary image; the loops form a nested structure.
3) Approximate each loop surrounded by the outermost loop with an ellipse, using the least-squares method.
4) Estimate the major and minor axes of each ellipse and compare the axis lengths over all combinations of ellipses.
5) Take as the handle the two ellipses whose sum of axis differences is smallest among all combinations.
Orientation of the scissors
1) Define a direction vector between the centers of the two ellipses approximating the hollows of the handle
2) Detect the blades of the scissors as line segments recognized by a Hough transform applied to the contours
3) Define a direction vector from the handle to each line segment and examine their relative position with the outer (cross) product
4) Estimate the orientation of the scissors by majority decision
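The outer-product test and majority decision in steps 3)-4) can be sketched as follows; the function name and the segment-midpoint input format are assumptions, not from the poster:

```python
def blade_side(c1, c2, segment_midpoints):
    """Steps 3)-4): decide by majority vote on which side of the handle
    axis the blade segments lie, using the sign of the 2-D cross
    product.  c1, c2 are the centers of the two handle ellipses;
    segment_midpoints are midpoints of Hough line segments."""
    hx, hy = c2[0] - c1[0], c2[1] - c1[1]      # handle direction vector
    votes = 0
    for sx, sy in segment_midpoints:
        vx, vy = sx - c1[0], sy - c1[1]        # handle -> segment vector
        cross = hx * vy - hy * vx              # sign encodes the side
        votes += 1 if cross > 0 else -1
    return 1 if votes > 0 else -1              # majority decision

# toy example: two of three segments lie on the positive-cross side
print(blade_side((0, 0), (10, 0), [(5, 3), (6, 4), (7, -1)]))  # → 1
```

Voting over all detected segments makes the estimate robust to a few spurious Hough lines, which matches the poster's "majority decision".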
Distance estimation by stereo vision
A widely used method, with SURF features, epipolar geometry and stereo rectification
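After rectification, depth follows from the standard disparity relation Z = fB/d. A sketch with made-up camera numbers (focal length, baseline and pixel coordinates are illustrative, not the poster's calibration):

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """A matched SURF feature in a rectified stereo pair differs only
    in x; its depth is Z = f * B / d with disparity d = x_left - x_right."""
    d = x_left - x_right
    if d <= 0:
        return None  # invalid match or point at infinity
    return focal_px * baseline_m / d

# e.g. a 40 px disparity with a 700 px focal length and a 6 cm baseline
print(depth_from_disparity(420.0, 380.0, 700.0, 0.06))  # → 1.05 (metres)
```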
Motion planning by Choreonoid
1) Move the robot hand right above the midpoint of the handle and rotate it so that the gripper is vertical
2) Make the line segment between the two gripper fingers parallel to the line segment between the handle's loops
3) Approach the object with the gripper and close it
Note: the robot fingers and the table cooperatively form a "loop" that cages the scissors
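Steps 1)-2) amount to computing a hover pose from the two recognized hole centers. A sketch under assumed conventions (table-frame coordinates, a made-up hover height; the actual planner builds this pose inside Choreonoid/graspPlugin):

```python
import math

def gripper_goal(c1, c2, table_z, hover):
    """Steps 1)-2): place the gripper above the midpoint of the two
    handle holes, yawed so that the finger-to-finger line is parallel
    to the line through the holes."""
    mx = (c1[0] + c2[0]) / 2.0
    my = (c1[1] + c2[1]) / 2.0
    yaw = math.atan2(c2[1] - c1[1], c2[0] - c1[0])  # align fingers with handle axis
    return mx, my, table_z + hover, yaw

x, y, z, yaw = gripper_goal((0.30, 0.10), (0.30, 0.16), 0.0, 0.15)
# midpoint (0.30, 0.13), hovering 0.15 m above the table, yaw = pi/2
```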
[Bar chart: success rate [%] (0-100) of handle detection and orientation estimation for Scissors A-D]
Acknowledgment: This work was supported by JSPS KAKENHI Grant #23760248
Average planning time
● Generating the goal posture: 10 ms
● Path planning: 10 ms
Experimental results
● Scissors with decorated textures have lower success rates
● Orientation estimation is highly accurate when handle detection succeeds
● Some obstacles are removed in the labelling process
● Note: the scissors lie on a white, flat table
[Figure: a failure case]