Much of the research on robotic grasping has focused on stationary objects. For moving objects, researchers have typically relied on real-time captured images to locate the target dynamically. However, this way of controlling the grasping process is costly, requiring substantial resources and image processing. It is therefore worthwhile to seek a simpler method. In this paper, we detail the requirements for manipulating a humanoid robot arm with seven degrees of freedom to grasp and handle moving objects in a 3-D environment, with or without obstacles and without using cameras. We use the OpenRAVE simulation environment and a robot arm equipped with the Barrett hand. We also describe a randomized planning algorithm, an extension of RRT-JT, that combines exploration, using a Rapidly-exploring Random Tree, with exploitation, using Jacobian-based gradient descent, to steer a 7-DoF WAM robotic arm toward a moving target while avoiding obstacles encountered along the way. We present a simulation of a scenario that starts by tracking a moving mug, then grasps it, and finally places it at a predetermined position, achieving a maximal success rate in a reasonable time.
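The exploration/exploitation loop of the RRT-JT extension described above can be sketched as follows. This is a minimal illustrative sketch under simplifying assumptions, not the paper's implementation: a hypothetical 2-link planar arm stands in for the 7-DoF WAM, and obstacle avoidance and grasp execution are omitted.

```python
import math, random

# Toy 2-link planar arm standing in for the 7-DoF WAM (an assumption
# of this sketch, chosen so the Jacobian stays 2x2 and readable).
L1, L2 = 1.0, 1.0

def fk(q):
    """Forward kinematics: joint angles -> end-effector (x, y)."""
    x = L1 * math.cos(q[0]) + L2 * math.cos(q[0] + q[1])
    y = L1 * math.sin(q[0]) + L2 * math.sin(q[0] + q[1])
    return (x, y)

def jacobian_T(q):
    """Transpose of the 2x2 analytic Jacobian of fk."""
    s1, c1 = math.sin(q[0]), math.cos(q[0])
    s12, c12 = math.sin(q[0] + q[1]), math.cos(q[0] + q[1])
    return [[-L1 * s1 - L2 * s12, L1 * c1 + L2 * c12],
            [-L2 * s12,           L2 * c12]]

def rrt_jt(target, iters=600, step=0.05, p_explore=0.3, tol=0.05):
    """Grow a tree of joint configurations toward a workspace target.
    Exploration: extend the nearest node toward a random joint sample.
    Exploitation: Jacobian-transpose gradient step that shrinks the
    workspace error of the node currently closest to the target."""
    random.seed(0)
    tree = [(0.1, 0.1)]
    for _ in range(iters):
        best = min(tree, key=lambda q: math.dist(fk(q), target))
        if math.dist(fk(best), target) < tol:
            return best
        if random.random() < p_explore:                     # explore
            q_rand = (random.uniform(-math.pi, math.pi),
                      random.uniform(-math.pi, math.pi))
            near = min(tree, key=lambda q: math.dist(q, q_rand))
            d = math.dist(near, q_rand) or 1.0
            new = tuple(n + step * (r - n) / d for n, r in zip(near, q_rand))
        else:                                               # exploit
            ex, ey = fk(best)
            err = (target[0] - ex, target[1] - ey)
            Jt = jacobian_T(best)
            dq = [Jt[i][0] * err[0] + Jt[i][1] * err[1] for i in range(2)]
            new = (best[0] + step * dq[0], best[1] + step * dq[1])
        tree.append(new)
    return min(tree, key=lambda q: math.dist(fk(q), target))

q = rrt_jt((1.2, 0.8))
```

A moving target would be handled by re-evaluating `target` each iteration; the same loop structure applies.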
Object tracking using motion flow projection for pan-tilt configuration (IJECE, IAES)
We propose a new object tracking model for a two-degrees-of-freedom mechanism. Our model uses a reverse projection from the camera plane to the world plane, taking advantage of the optic flow technique by re-projecting the flow vectors from image space into world space. A pan-tilt (PT) mounting system is used to verify the performance of the model and maintain the tracked object within a region of interest (ROI). The system contains two servo motors that let a webcam rotate about the PT axes. The PT rotation angles are estimated from a rigid transformation of the optic flow vectors, using an idealized translation matrix followed by two rotation matrices about the PT axes. The model was tested and evaluated with different objects and motions. The results show that it can keep the target object within a given region of the camera view.
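The angle-estimation step can be illustrated with a simplified pinhole-model sketch. This is an assumption of the example, not the paper's full rigid-transformation formulation: a mean flow displacement of dx pixels at focal length f corresponds roughly to a pan rotation of atan(dx/f), and symmetrically for tilt.

```python
import math

def pt_correction(flow_dx, flow_dy, focal_px):
    """Map a mean image-plane flow vector (pixels) to the pan/tilt
    rotation (radians) that re-centers the tracked object, under a
    pinhole model: a point displaced dx pixels corresponds to a
    rotation of atan(dx / f) about the pan axis."""
    pan = math.atan2(flow_dx, focal_px)
    tilt = math.atan2(flow_dy, focal_px)
    return pan, tilt

# e.g. object drifted 40 px right and 25 px up at f = 800 px
pan, tilt = pt_correction(40.0, -25.0, 800.0)
```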
Real-time Moving Object Detection using SURF (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind, peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
We present a technique for moving-object extraction. Among the several approaches to moving-object extraction, clustering has a strong theoretical foundation and is used in many applications, but the extraction process demands high performance. We compare the K-Means and Self-Organizing Map methods for extracting moving objects, measuring performance with MSE and PSNR. The experimental results show that the MSE of K-Means is smaller than that of the Self-Organizing Map, and its PSNR is higher. This indicates that K-Means is a promising method for clustering pixels in moving-object extraction.
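The MSE and PSNR measures used for the comparison are standard; a minimal sketch over flat 8-bit pixel lists:

```python
import math

def mse(a, b):
    """Mean squared error between two equal-sized images given as
    flat lists of 8-bit pixel values."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means the extracted
    object mask is closer to the reference."""
    e = mse(a, b)
    return float('inf') if e == 0 else 10.0 * math.log10(peak * peak / e)

ref = [0, 0, 255, 255]    # hypothetical reference mask
mask = [0, 16, 255, 239]  # hypothetical extracted mask
```

With these toy values, `mse(ref, mask)` is 128 and `psnr(ref, mask)` is about 27.06 dB.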
Gait Based Person Recognition Using Partial Least Squares Selection Scheme (ijcisjournal)
Variations in viewing angle and intra-class variations among human beings have a great impact on gait recognition systems. This work presents an Arbitrary View Transformation Model (AVTM) for gait recognition. Gait energy image (GEI) based gait authentication is an effective approach to this problem; the method establishes an AVTM based on principal component analysis (PCA). Feature selection (FS) is performed using the partial least squares (PLS) method. Comparison of the AVTM-PLS method with existing methods shows significant advantages under viewing-angle variation and carrying and attire changes. Experiments on the CASIA gait database show that the proposed method improves recognition accuracy over existing methods.
HUMAN ACTION RECOGNITION IN VIDEOS USING STABLE FEATURES (sipij)
Human action recognition remains a challenging problem that researchers investigate using different techniques. We propose a robust approach for human action recognition, achieved by extracting stable spatio-temporal features, namely pairwise local binary patterns (P-LBP) and the scale-invariant feature transform (SIFT). These features are used to train an MLP neural network during the training stage, and the action classes are inferred from the test videos during the testing stage. The proposed features capture the motion of individuals and its consistency well, yielding higher accuracy on a challenging dataset. The experimental evaluation is conducted on a benchmark dataset commonly used for human action recognition. In addition, we show that our approach outperforms the individual features, i.e., using only spatial or only temporal features.
Effective Object Detection and Background Subtraction by using M.O.I (IJMTST Journal)
This paper proposes efficient motion detection and people counting based on background subtraction with a dynamic-threshold approach and mathematical morphology. Several methods are applied to object detection and their performance is compared in terms of detection accuracy; the techniques used are frame differencing and dynamic-threshold-based detection. After foreground detection, parameters of the motion such as speed and velocity are determined. Most previous methods rely on the assumption that the background is static over short time periods. In dynamic-threshold-based detection, morphological processing and filtering are also used to remove unwanted pixels from the background. The background frame is updated by comparing the current frame intensities with a reference frame. Alongside the dynamic threshold, mathematical morphology is applied, which can greatly attenuate color variations generated by background motion while still highlighting moving objects. The simulation results show that the approximate-median approach with mathematical morphology is more effective than prior background subtraction methods in dynamic texture scenes, and performance parameters of the moving object such as sensitivity, speed, and velocity are evaluated using MOI.
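A toy sketch of the dynamic-threshold differencing and morphological clean-up described above. This is illustrative only; the threshold rule (mean + k·std of the difference image) and the 3x3 erosion are assumptions of this example, not the paper's exact operators.

```python
import statistics as st

def detect(prev, curr, k=1.0):
    """Frame difference with a dynamic threshold: a pixel is
    foreground if its absolute difference exceeds mean + k*std of
    the whole difference image (a simplified dynamic threshold)."""
    diff = [[abs(c - p) for c, p in zip(cr, pr)]
            for cr, pr in zip(curr, prev)]
    flat = [v for row in diff for v in row]
    t = st.mean(flat) + k * st.pstdev(flat)
    return [[1 if v > t else 0 for v in row] for row in diff]

def erode(mask):
    """3x3 cross-shaped erosion: keep a pixel only if its whole
    4-neighbourhood is set, removing isolated noise pixels."""
    h, w = len(mask), len(mask[0])
    def on(y, x):
        return 0 <= y < h and 0 <= x < w and mask[y][x]
    return [[1 if mask[y][x] and on(y-1, x) and on(y+1, x)
                  and on(y, x-1) and on(y, x+1) else 0
             for x in range(w)] for y in range(h)]
```

Applied to a frame pair where a 3x3 block moved plus one noise pixel, the erosion keeps only the interior of the block and drops the noise.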
The peer-reviewed International Journal of Engineering Inventions (IJEI) was started with a mission to encourage contributions to research in science and technology and to motivate researchers in challenging areas of the sciences and technology.
An enhanced fireworks algorithm to generate prime key for multiple users in f... (journal BEEI)
This work presents a new method to enhance the performance of the fireworks algorithm for generating a prime key for multiple users. A thresholding technique from image segmentation is used as one of the major steps in processing the digital image; algorithms and methods for dividing and sharing an image, including measurement and recognition, are common. We propose a hybrid technique of fireworks and camel herd algorithms (HFCA), where the fireworks are based on 3-dimensional (3D) logistic chaotic maps. Both the Otsu method and the convolution technique are used in pre-processing the image for further analysis: Otsu segments the image and finds the threshold for each image, and convolution extracts the features of the images. The sample consists of two fingerprint images taken from the Biometric System Lab (University of Bologna). The performance of the proposed method is evaluated using the FVC2004 dataset. Building on the enhanced algorithm, a quick response code (QRcode) is used to generate a stream key from random text or numbers, a class of symmetric-key algorithm that operates on individual bits or bytes.
Motion Detection and Clustering Using PCA and NN in Color Image Sequence (TELKOMNIKA Journal)
This paper presents a motion detection method based on Principal Component Analysis. The method can detect and track moving objects in a sequence of images; the tested sequence is segmented with respect to movement. The concept of extracting significant information from a large amount of data is adopted to provide an effective method for tracking moving objects in the video image. The principal components differ in how much significant information they capture, a difference driven by the nature of the motion (the nature of the information); the algorithm distinguishes the nature of the motion and chooses the appropriate components to give the best segmentation.
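The PCA machinery such a method relies on can be illustrated with power iteration recovering the dominant principal component of mean-centered 2-D samples. This is a simplified stand-in for the paper's algorithm; treating the samples as coordinates of changed pixels is an assumption of the example.

```python
def pca_first_component(data, iters=100):
    """Power iteration for the dominant principal component of
    mean-centered 2-D samples (e.g. coordinates of changed pixels),
    giving the principal direction of the detected motion."""
    n = len(data)
    mx = sum(p[0] for p in data) / n
    my = sum(p[1] for p in data) / n
    cen = [(p[0] - mx, p[1] - my) for p in data]
    # entries of the 2x2 covariance matrix
    cxx = sum(x * x for x, _ in cen) / n
    cxy = sum(x * y for x, y in cen) / n
    cyy = sum(y * y for _, y in cen) / n
    v = (1.0, 0.0)
    for _ in range(iters):
        # multiply by the covariance matrix, then renormalize
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = (w[0] / norm, w[1] / norm)
    return v
```

For pixels drifting along the diagonal, the recovered component is the unit diagonal direction (about (0.707, 0.707)).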
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Leader follower formation control of ground vehicles using camshift based gui... (ijma)
Autonomous ground vehicles have been designed for formation control that relies on range and bearing information received from a forward-looking camera. A visual guidance control algorithm is designed in which real-time image processing provides the feedback signals. The vision subsystem and control subsystem work in parallel to accomplish formation control. Proportional navigation and line-of-sight guidance laws are used to estimate the range and bearing of the leader vehicle from the vision subsystem. The vision detection and localization algorithms used here are similar to approaches for many computer vision tasks, such as face tracking and detection, based on color and texture features and the non-parametric Continuously Adaptive Mean-shift algorithm to keep track of the leader; this is proposed for the first time in the leader-follower framework. The algorithms are simple but effective in real time and provide an alternative to traditional approaches such as the Viola-Jones algorithm. Further, to stabilize the follower onto the leader trajectory, a sliding mode controller is used to dynamically track the leader. The performance is demonstrated in simulation and in practical experiments.
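Continuously Adaptive Mean-shift (CAMShift) builds on the mean-shift window update, sketched below over a back-projection grid. This is illustrative only: real systems typically use a library implementation such as OpenCV's, and the window-size adaptation that distinguishes CAMShift from plain mean-shift is omitted here.

```python
def mean_shift(prob, win, iters=20):
    """One mean-shift search: move a rectangular window (x, y, w, h)
    to the centroid of the probability mass it covers (e.g. a colour
    back-projection of the leader vehicle) until it stops moving."""
    x, y, w, h = win
    H, W = len(prob), len(prob[0])
    for _ in range(iters):
        m = mx = my = 0.0
        for yy in range(max(0, y), min(H, y + h)):
            for xx in range(max(0, x), min(W, x + w)):
                p = prob[yy][xx]
                m += p
                mx += p * xx
                my += p * yy
        if m == 0:
            break                     # no mass under the window
        nx = int(round(mx / m - w / 2))
        ny = int(round(my / m - h / 2))
        if (nx, ny) == (x, y):
            break                     # converged
        x, y = nx, ny
    return (x, y, w, h)
```

Starting a 5x5 window near a bright 3x3 blob pulls the window over the blob in a few iterations.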
Design and implementation of video tracking system based on camera field of view (sipij)
The aim of this paper is to design and implement a video tracking system based on the Camera Field of View (CFOV). Otsu's method is used to detect targets such as vehicles and people. Whereas most algorithms spend a lot of time executing this process, an algorithm was developed to achieve it quickly. Histogram projection is used in both directions to detect the target within the search region; it is robust to varying light conditions in Charge-Coupled Device (CCD) camera images and saves computation time.
Our algorithm is based on background subtraction, and a normalized cross-correlation operation over a series of sequential sub-images estimates the motion vector. The camera field of view (CFOV) was determined and calibrated to find the relation between real distance and image distance. The system was tested by measuring the real position of an object in the laboratory and comparing it with the computed result. The results are promising for developing the system further.
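Motion-vector estimation by normalized cross-correlation can be sketched as block matching: slide a reference block over candidate shifts and keep the shift with maximal NCC. This is an illustrative toy, not the paper's exact pipeline.

```python
def ncc(a, b):
    """Normalized cross-correlation of two equal-length pixel lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def motion_vector(prev, curr, bx, by, bs, rng):
    """Estimate the motion of the bs x bs block at (bx, by) in `prev`
    by finding the shift within +/-rng in `curr` with maximal NCC."""
    def block(img, x, y):
        return [img[y + j][x + i] for j in range(bs) for i in range(bs)]
    ref = block(prev, bx, by)
    best, score = (0, 0), -2.0
    for dy in range(-rng, rng + 1):
        for dx in range(-rng, rng + 1):
            x, y = bx + dx, by + dy
            if 0 <= x and 0 <= y and x + bs <= len(curr[0]) and y + bs <= len(curr):
                s = ncc(ref, block(curr, x, y))
                if s > score:
                    best, score = (dx, dy), s
    return best
```

A distinctive patch shifted by one pixel in each direction is recovered as the motion vector (1, 1).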
GPGPU-Assisted Subpixel Tracking Method for Fiducial Markers (Naoki Shibata)
Aiming at highly accurate position estimation, we propose a method for efficiently and accurately detecting the 3D positions and poses of traditional black-framed fiducial markers in high-resolution images taken by ordinary web cameras. Our tracking method can be executed efficiently using GPGPU computation; to realize this, we devised a connected-component labeling method suited to GPGPU execution. To improve accuracy, we devised a method for detecting the 2D positions of marker corners with subpixel accuracy. We implemented our method in Java and OpenCL and confirmed that it provides better detection and measurement accuracy, that recognizing from high-resolution images is beneficial for accuracy, and that our method is more than twice as fast as the existing CPU-based method.
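Connected-component labeling schemes suited to GPUs typically replace sequential union-find with data-parallel label propagation: every pixel repeatedly takes the smallest label in its neighbourhood, with each sweep mapping to one kernel pass. A sequential sketch of that propagation idea (illustrative only, not the paper's OpenCL kernel):

```python
def ccl_min_propagation(mask):
    """Iterative minimum-label propagation over a binary mask.
    Each foreground pixel starts with a unique label (its linear
    index) and repeatedly adopts the smallest label among itself and
    its 4-neighbours until nothing changes."""
    h, w = len(mask), len(mask[0])
    lab = [[y * w + x if mask[y][x] else -1 for x in range(w)]
           for y in range(h)]
    changed = True
    while changed:
        changed = False
        for y in range(h):
            for x in range(w):
                if lab[y][x] < 0:
                    continue
                m = lab[y][x]
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and lab[ny][nx] >= 0:
                        m = min(m, lab[ny][nx])
                if m != lab[y][x]:
                    lab[y][x] = m
                    changed = True
    return lab
```

On a mask with two separate blobs, exactly two distinct labels survive.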
AN INNOVATIVE RESEARCH FRAMEWORK ON INTELLIGENT TEXT DATA CLASSIFICATION SYST... (ijaia)
Recent years have witnessed astronomical growth in the amount of textual information available both on the web and in institutional document repositories. As a result, text mining has become extremely prevalent, and processing textual information from such repositories is the focus of current research. Numerous cutting-edge applications are available for text mining; classification-oriented text mining in particular has been gaining attention as it targets measures like coverage and accuracy. With the huge volume of data, user expectations are growing far beyond human capacity, so automated and competitive intelligent systems are essential for reliable text analysis. Toward this, the authors propose an Intelligent Text Data Classification System (ITDCS), designed in the light of the biological nature of the genetic approach and able to acquire computational intelligence accurately. Initially, ITDCS focuses on preparing structured data from the huge volume of unstructured data through its procedural steps and filter methods. Subsequently, it classifies the text data into labelled classes using KNN classification based on the best features selected by a genetic algorithm. In this process, it concentrates on adding intelligence to the classifier through the biological parts of the genetic algorithm, namely the encoding strategy, fitness function, and operators. The integration of all biological components of the genetic algorithm in ITDCS significantly improves accuracy and reduces the misclassification rate in classifying text data.
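The KNN classification core named above can be sketched as follows. This is illustrative: cosine distance and k=3 are assumptions of this example, and the genetic-algorithm feature selection step is omitted.

```python
import math
from collections import Counter

def knn_classify(train, vec, k=3):
    """Plain k-nearest-neighbour majority vote over cosine distance.
    `train` is a list of (feature_vector, label) pairs; `vec` is the
    feature vector of the document to classify."""
    def cos_dist(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return 1.0 - (dot / (na * nb) if na and nb else 0.0)
    nearest = sorted(train, key=lambda tv: cos_dist(tv[0], vec))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# hypothetical tiny training set of term-weight vectors
train = [([1.0, 0.0, 0.0], "sport"), ([0.9, 0.1, 0.0], "sport"),
         ([0.0, 1.0, 0.0], "tech"), ([0.0, 0.9, 0.2], "tech")]
label = knn_classify(train, [1.0, 0.05, 0.0])
```

A GA-based feature selector would simply choose which vector components feed this distance.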
IJERA (International Journal of Engineering Research and Applications) is an international online, ... peer-reviewed journal. For more details or to submit your article, please visit www.ijera.com
Data mining model for the data retrieval from central server configuration (ijcsit)
A server that must keep track of heavy document traffic is unable to filter the documents that are most relevant and up to date for continuous text-search queries. This paper focuses on handling continuous text extraction under high document traffic. The main objective is to retrieve recently updated documents that are most relevant to the query by applying a sliding-window technique. Our solution indexes the streamed documents in main memory with a structure based on the principles of the inverted file, and processes document arrival and expiration events with an incremental threshold-based method. It also eliminates duplicate document retrieval using unsupervised duplicate detection. Documents are ranked based on user feedback, and higher-ranked documents are given priority for retrieval.
Systems variability modeling a textual model mixing class and feature conceptsijcsit
System’s reusability and cost are very important in software product line design area. Developers’ goal is
to increase system reusability and decreasing cost and efforts for building components from scratch for
each software configuration. This can be reached by developing software product line (SPL). To handle
SPL engineering process, several approaches with several techniques were developed. One of these
approaches is called separated approach. It requires separating the commonalities and variability for
system’s components to allow configuration selection based on user defined features. Textual notationbased
approaches have been used for their formal syntax and semantics to represent system features and
implementations. But these approaches are still weak in mixing features (conceptual level) and classes
(physical level) that guarantee smooth and automatic configuration generation for software releases. The
absence of methodology supporting the mixing process is a real weakness. In this paper, we enhanced
SPL’s reusability by introducing some meta-features, classified according to their functionalities. As a first
consequence, mixing class and feature concepts is supported in a simple way using class interfaces and
inherent features for smooth move from feature model to class model. And as a second consequence, the
mixing process is supported by a textual design and implementation methodology, mixing class and feature
models by combining their concepts in a single language. The supported configuration generation process
is simple, coherent, and complete.
A preliminary survey on optimized multiobjective metaheuristic methods for da...ijcsit
The present survey provides the state-of-the-art of research, copiously devoted to Evolutionary Approach
(EAs) for clustering exemplified with a diversity of evolutionary computations. The Survey provides a
nomenclature that highlights some aspects that are very important in the context of evolutionary data
clustering. The paper missions the clustering trade-offs branched out with wide-ranging Multi Objective
Evolutionary Approaches (MOEAs) methods. Finally, this study addresses the potential challenges of
MOEA design and data clustering, along with conclusions and recommendations for novice and
researchers by positioning most promising paths of future research.
The comparison of the text classification methods to be used for the analysis...ijcsit
Text classification is used for the purpose of preventing the leakage of the data which is highly important
within the institution through unallowed ways. The results obtained from the text classification process
should be integrated into the DLP architecture immediately. The data flowing through the net requires
instant control and the flow of the sensitive data should be prevented. The use of the machinery learning
methods is required to perform the text classification which will be integrated into the DLP architecture.
The experimental results of the comparison of text classification methods to be used in the interface written
on the ICAP protocol have been prepared in the networked architecture developed for the DLP system.
Also, the choice of the text classification method to be used in the instant control of the sensitive data has
been carried out. The DLP text classification architecture developed helps decide the classification method
through the examination of the data in motion. The method to be chosen for the text classification is applied
to the ICAP protocol, and the analysis of the sensitive data and confidentiality are provided.
Improving initial generations in pso algorithm for transportation network des...ijcsit
Transportation Network Design Problem (TNDP) aims to select the best project sets among a number of new projects. Recently, metaheuristic methods are applied to solve TNDP in the sense of finding better solutions sooner. PSO as a metaheuristic method is based on stochastic optimization and is a parallel revolutionary computation technique. The PSO system initializes with a number of random solutions and seeks for optimal solution by improving generations. This paper studies the behavior of PSO on account of improving initial generation and fitness value domain to find better solutions in comparison with previous attempts.
A privacy learning objects identity system for smartphones based on a virtu...ijcsit
Smartphones are widely used today, with many features such as GPS map navigation, capturing
photos with camera equipment such as digital camera, internet connection via wifi or 3G devices that
function as computers. These devices are being used for various purposes including online learning, where
learners can study from anywhere and anytime for example in the street, home, office and school. However,
identifing a method by which teachers in these virtural environements can remember their learners “faces”
in the classroom or manage "Identification Number Student" (ID student or user) is not reliable when the
teacher cannot see all of the learners in the class or know who is online from a particular account. In this
paper, we propose a system, Android Virtual Learner Identify (AVLI), which collects images captured by
the face of the learning object directly from the camera, the location of the learner by identifing where the
learner is studying and configuration of information including Time, Mac, IP addresses, IMEI number and
location via GPS. The systen then saves learner profiles to help the teacher or education managers on the
Virtual Learning Environment (VLE) identify learning object. We used the VLE that we built on
mobile.ona.vn domain. We implemented the AVLI prototype Android phone with solution password
encryption and images taken directly from the camera to ensure that the information is transmitted and
stored securely in the Virtual Learning Environment System Database (VLE Data) of learning objects while
preserving the ability to identify learning objects by a teacher or education manager.
Cardinal direction relations in qualitative spatial reasoningijcsit
Representation and reasoning with spatial information is a fundamental aspect of artificial
intelligence. Qualitative methods have become prominent in spatial reasoning. In geophysical explorations,
one of the aspects is to determine compass direction between the regions. In this paper, we present an
efficient approach to cardinal directions between free form regions. The development is very simple,
mathematically sound and can be implemented efficiently. The extension to 3D is seamless; it needs no
additional formulation for transition from 2D to 3D.It has no adverse impact on the computational
efficiency, as the technique is akin to 2D. This work is directly applicable to geographical information
systems for location determination, robot navigation, and spatio-temporal networks databases where
direction changes frequently.
We take some parts of a theoretical mobility model in a two-dimension grid proposed by Greenlaw and
Kantabutra to be our model. The model has eight necessary factors that we commonly use in a mobile
wireless network: sources or wireless signal providers, the directions that a source can move, users or
mobile devices, the given directions which define a user’s movement, the given directions which define a
source’s movement, source’s velocity, source’s coverage, and obstacles. However, we include only the
sources, source’s coverage, and the obstacles in our model. We define SQUARE GRID POINTS COVERAGE
(SGPC) problem to minimize number of sources with coverage radius of one to cover a square grid point
size of p with the restriction that all the sources must be communicable and proof that SGPC is in NPcomplete
class. We also give an APPROX-SQUARE-GRID-COVERAGE (ASGC) algorithm to compute the
approximate solution of SGPC. ASGC uses the rule that any number can be obtained from the addition of
3, 4 and 5 and then combines 3-gadgets, 4-gadgets and 5-gadgets to specify the position of sources to cover
a square grid point size of p. We find that the algorithm achieves an approximation ratio of
2
1 2 10 2
p
p .
Moreover, we state about the extension usage of our algorithm and show some examples. We show that if
we use ASPC on a square grid size of p and if sources can be moved, the area under the square grid can be
covered in eight-time-steps movement. We also prove that if we extend our source coverage radius to 1.59,
without any movement the area under the square gird will also be covered. Further studies are also
discussed and a list of some tentative problems is given in the conclusion.
International Journal on Soft Computing, Artificial Intelligence and Applications (IJSCAI), Vol.3, No. 3/4, November 2014
Motion Planning and Controlling Algorithm for
Grasping and Manipulating Moving Objects in the
Presence of Obstacles
Ali Chaabani1, Mohamed Sahbi Bellamine2 and Moncef Gasmi3
1National School of Engineering of Tunis, University of Manar, TUNISIA
2,3Computer Laboratory for Industrial Systems, National Institute of Applied Sciences
and Technology, University of Carthage, TUNISIA
ABSTRACT
Much of the research on robotic grasping has focused on stationary objects. For moving objects,
researchers have typically relied on images captured in real time to locate the object dynamically. However,
this way of controlling the grasping process is quite costly, demanding considerable resources and image
processing; it is therefore worth seeking a simpler method. In this paper, we detail the requirements for
manipulating a humanoid robot arm with 7 degrees of freedom to grasp and handle moving objects in a
3-D environment, with or without obstacles, and without using cameras. We use the OpenRAVE simulation
environment together with a robot arm equipped with the Barrett hand. We also describe a randomized
planning algorithm, an extension of RRT-JT, that combines exploration, using a Rapidly-exploring Random
Tree, with exploitation, using Jacobian-based gradient descent, to steer a 7-DoF WAM robotic arm toward
a moving target while avoiding obstacles encountered along the way. We present a simulation of a scenario
that starts by tracking a moving mug, then grasps it, and finally places it at a predetermined position,
achieving a high success rate in a reasonable time.
KEYWORDS
Grasping, moving object, trajectory planning, robot hand, obstacles.
1. INTRODUCTION
The problem of grasping a moving object in the presence of obstacles with a robotic manipulator has been addressed in several works, and there have been many studies on grasp motion planning for a manipulator that avoids obstacles [1], [2], [3]. One might want to apply a method designed for mobile robots, but this causes problems, since such methods focus only on the grasping motion of robot hands and the dimension of the configuration space is too large. Motion planning for a manipulator that avoids obstacles, taking into account the interference between the machine's joints and the obstacles, has nevertheless been studied extensively in recent years and has now reached a practical level. Grasping operations in environments with obstacles are now commonly conducted in industrial applications and by service robots.
Many robotic applications have been designed around the concept of "planning using visual information", i.e., controlling a robot manipulator via a servo loop that uses real-world images to make decisions [4], [5], [6]. Here, predictive algorithms at the core of the robot servo tend to offer better performance in the tracking and grasping process. [7] developed a system to grasp moving targets using a static camera and a precalibrated camera-to-manipulator transform. [8] proposed a control-theoretic approach to grasping using visual information. [9] presented a system that tracks and grasps an electric toy train moving on an oval path using calibrated static stereo cameras.
DOI: 10.5121/ijscai.2014.3401
The major challenges in visual servoing are how to reduce the robot's grasp response time, given the delay introduced by image processing, and how to resolve target occlusion, when obstacles may obstruct the view of the object. Predictive algorithms constitute one of the best approaches to escape such performance limits and to ensure smart tracking and grasping. [10] use a prediction module consisting of a linear predictor to predict the future location of a moving object and thus generate the control signal that moves the eyes of a humanoid robot, which can use behaviour models similar to those of human infants to track objects. [11] present a tracking algorithm based on a second-order linear prediction solved by the Maximum Entropy Method; it attempts to predict the centroid of the moving object in the next frame from several past centroid measurements. [12] represent the tracked object as a constellation of spatially localized linear predictors trained on a single image sequence; in a learning stage, sets of pixels whose intensities allow optimal prediction of the transformations are selected as support for the linear predictor. [13] implement tracking and capture of a moving object using a mobile robot.
Researchers who use visual servoing and cameras to grasp moving objects face many difficulties in recording and processing the images, because of the heavy computation involved, and those who use predictive algorithms face the complexity of algorithms based on extensive calculation and estimation [14]. In this research we want to grasp a moving object of limited velocity. This can be done by determining a desired position for the object; the robot then moves, aligns the end effector with the object, and reaches towards it. This paper presents motion planning and control of a humanoid robot arm for grasping and manipulating a moving object without cameras. We use an algorithm that controls the end-effector pose (position and orientation) with respect to the pose of objects moving in the workspace of the robot. The proposed algorithm successfully grasps a moving object in a reasonable time.
After introducing the grasping problem, we discuss how our solution differs from others published in the literature. A description of Rapidly-exploring Random Trees (RRT) is given in section 2. In section 3 we give a brief overview of the Jacobian transpose. The next section describes the WAM™ arm. Section 5 covers the robot dynamics, section 6 presents results, and section 7 draws conclusions from this work.
2. RAPIDLY-EXPLORING RANDOM TREES (RRT)
In previous work [15], [16], the motion planning problem was approached by placing the end effector at pre-configured locations, computed by applying inverse kinematics (IK) to some initial samples taken from the goal region. These locations are then set as goals for a randomized planner, such as an RRT or BiRRT [17], [18]. The solutions produced by this approach remain incomplete because the probabilistic aspect is not considered: the planner is forced to use a fixed set of configurations chosen a priori from the goal regions.
Another way to tackle grasp planning, for certain types of workspace goals, is to explore the configuration space of the robot with a heuristic search tree and try to push the exploration toward a goal region [19]. Nevertheless, the goal regions and heuristics presented in [20] are highly problem-specific, hard to generalize, and tricky to tune. Drumwright and Ng-Thow-Hing [21] employ a similar strategy of extending toward a randomly-generated IK solution for a workspace point. In [22], Vande Weghe et al. present the RRT-JT algorithm, which uses a forward-searching tree to explore the C-space and a gradient-descent heuristic based on the Jacobian transpose to bias the tree toward a workspace goal point.
[23] present two probabilistically complete planners: an extension of RRT-JT, and a new algorithm called IKBiRRT. Both algorithms work by interleaving exploration of the robot's C-space with exploitation of Workspace Goal Regions (WGRs). The extended RRT-JT (Figure 2) is designed for robots that lack analytical inverse kinematics; it combines the configuration-space exploration of RRTs with a workspace goal bias to produce direct paths through complex environments extremely efficiently, without the need for any inverse kinematics.
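The interleaving of configuration-space exploration and workspace exploitation described above can be sketched in code. The following is a minimal illustration on a toy planar 2-link arm, not the authors' implementation; the link lengths, step sizes, probabilities, and goal are assumed demo values:

```python
import numpy as np

rng = np.random.default_rng(0)
l1, l2 = 0.5, 0.4  # assumed link lengths of a toy 2-link arm

def fk(q):
    """End-effector position for joint angles q = (q1, q2)."""
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def jac(q):
    """2x2 manipulator Jacobian dx/dq."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def rrt_jt(q_start, x_goal, iters=4000, step=0.05, p_exploit=0.5, tol=0.02):
    nodes = [np.asarray(q_start, float)]
    for _ in range(iters):
        if rng.random() < p_exploit:
            # exploitation: extend the workspace-nearest node along J^T e
            i = min(range(len(nodes)),
                    key=lambda k: np.linalg.norm(x_goal - fk(nodes[k])))
            d = jac(nodes[i]).T @ (x_goal - fk(nodes[i]))
        else:
            # exploration: standard RRT extension toward a random sample
            q_rand = rng.uniform(-np.pi, np.pi, 2)
            i = min(range(len(nodes)),
                    key=lambda k: np.linalg.norm(q_rand - nodes[k]))
            d = q_rand - nodes[i]
        if np.linalg.norm(d) < 1e-9:
            continue
        q_new = nodes[i] + step * d / np.linalg.norm(d)
        nodes.append(q_new)
        if np.linalg.norm(x_goal - fk(q_new)) < tol:
            return q_new
    return None

q_reach = rrt_jt([0.3, 0.4], np.array([0.2, 0.6]))
```

With probability p_exploit the tree extends its workspace-nearest node along the Jacobian-transpose direction (exploitation); otherwise it performs a standard RRT extension toward a random configuration (exploration), which is what lets the planner escape local minima that pure gradient descent would get stuck in.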
Figure 1. Configuration space (C-space)
3. USING THE JACOBIAN
Given a robot arm configuration q in Q (the configuration space) and a desired end-effector goal xg in X, where X is the space of end-effector positions R3, we are interested in computing an extension in configuration space from q towards xg. Although the mapping from Q to X is often nonlinear, and hence expensive to invert, its derivative, the Jacobian, is a linear map from the tangent space of Q to that of X that can be computed easily (J q̇ = ẋ, where x is the end-effector position (or pose) corresponding to q). Ideally, to drive the end effector to a desired position xg (with dxg/dt ≈ 0: the object moves slowly), we could compute the error e(t) = xg − x and run a controller of the form q̇ = K J−1 e, where K is a positive gain. This simple controller can attain the target without considering possible obstacles or joint limits. However, it is an expensive controller, since the inverse of the Jacobian must be computed at each time step. To escape this cost, we instead use the transpose of the Jacobian, and the control law takes the form q̇ = K JT e. This controller eliminates the large overhead of computing the inverse by using the easy-to-compute Jacobian transpose instead. It is easy to show that, under the same obstacle-free requirements as the Jacobian-inverse controller, the Jacobian-transpose (JT) controller is also guaranteed to reach the goal. The instantaneous motion of the end effector is given by ẋ = J q̇ = J(K JT e). The inner product of this instantaneous motion with the error vector is eT ẋ = K eT J JT e ≥ 0. As this is always non-negative, under our obstacle-free assumptions we can ensure that the controller makes forward progress towards the target [24].
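A minimal numerical sketch of the control law q̇ = K JT e, run on an assumed planar 2-link arm rather than the 7-DoF WAM; link lengths, gain, and goal are demo values:

```python
import numpy as np

l1, l2 = 0.5, 0.4   # assumed link lengths
K, dt = 5.0, 0.01   # gain and Euler integration step

def fk(q):
    """End-effector position of the planar 2-link arm."""
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def jac(q):
    """2x2 manipulator Jacobian dx/dq."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

q = np.array([0.3, 0.4])            # start configuration
xg = np.array([0.2, 0.6])           # reachable workspace goal
for _ in range(2000):
    e = xg - fk(q)                  # workspace error e = xg - x
    q = q + dt * K * jac(q).T @ e   # q_dot = K J^T e, no matrix inversion
err = np.linalg.norm(xg - fk(q))
```

Note that no matrix is ever inverted: each step costs only one matrix-vector product, which is the point of the JT controller.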
Figure 2. Depiction of the RRT-JT algorithm searching in C-space, from the start configuration toward the WGRs. The forward-searching tree is shown with green nodes; the blue regions are obstacles [14].
4. THE WAM™ ARM
The WAM arm is a back-drivable robotic manipulator. It has stable joint-torque control with direct-drive capability, and offers zero backlash and near-zero friction to enhance the performance of today's robots. It comes in three main variants: 4-DoF and 7-DoF, both with human-like kinematics, and 4-DoF with a 3-DoF gimbal wrist. Its articulation ranges exceed those of conventional robotic arms [25].
We use WAM 7-DoF Arm with attached Barrett Hand.
Figure 3. WAM 7-DoF dimensions and D-H frames, [26].
Figure 3 presents the whole 7-DoF WAM system in the initial position. Positive joint motion follows the right-hand rule for each axis. The homogeneous transformation of Figure 4 is used to determine the transformation between axes k and k−1.
Figure 4. D-H generalized transform matrix
• ak−1 = the distance from Zk−1 to Zk, measured along Xk−1
• dk = the distance from Xk−1 to Xk, measured along Zk
• αk−1 = the angle between Zk−1 and Zk, measured about Xk−1
• θk = the angle between Xk−1 and Xk, measured about Zk
Table 1 contains the D-H parameters of the 7-DoF arm.
Table 1. 7-DoF WAM frame parameters
k      ak        αk        dk       θk
1      0         −π/2      0        θ1
2      0         π/2       0        θ2
3      0.045     −π/2      0.55     θ3
4      −0.045    π/2       0        θ4
5      0         −π/2      0.3      θ5
6      0         π/2       0        θ6
7      0         0         0.060    θ7
T      0         0         0
As with the previous example, we define the tool frame T for our specific end effector. By multiplying all of the transforms up to and including the final frame, we obtain the forward kinematics for any frame on the robot, and in particular the end-tip location and orientation.
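As an illustration, the forward kinematics can be computed by chaining the D-H transforms of Table 1. The sketch below assumes the modified (Craig) D-H convention suggested by the parameter definitions above; whether Barrett's tables use the classic or modified convention is an assumption here, and the zero joint-angle input is an arbitrary test configuration:

```python
import numpy as np

# (a_{k-1}, alpha_{k-1}, d_k) per row of Table 1; theta_k comes from the joints
DH = [(0.0,   -np.pi / 2, 0.0),
      (0.0,    np.pi / 2, 0.0),
      (0.045, -np.pi / 2, 0.55),
      (-0.045, np.pi / 2, 0.0),
      (0.0,   -np.pi / 2, 0.3),
      (0.0,    np.pi / 2, 0.0),
      (0.0,    0.0,       0.060)]

def dh_transform(a, alpha, d, theta):
    """Homogeneous transform from frame k-1 to frame k (modified D-H)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([[ct,      -st,       0.0,  a],
                     [st * ca,  ct * ca, -sa,  -d * sa],
                     [st * sa,  ct * sa,  ca,   d * ca],
                     [0.0,      0.0,      0.0,  1.0]])

def forward_kinematics(q):
    """Compose all joint transforms; T[:3, 3] is the end-tip position."""
    T = np.eye(4)
    for (a, alpha, d), theta in zip(DH, q):
        T = T @ dh_transform(a, alpha, d, theta)
    return T

T = forward_kinematics(np.zeros(7))   # end-tip pose at the zero configuration
```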
5. ROBOT DYNAMICS
The dynamic simulation uses the Lagrange equations to obtain the angular accelerations from the torque of each joint.
First, we compute the body Jacobian Ji of each joint, corresponding to Mi, where Mi is the i-th link's inertia matrix, so that the manipulator inertia matrix can be calculated as
M(θ) = Σi JiT Mi Ji
We also calculate the potential part V(θ), the gravitational potential energy of the links.
Second, we compute the torque of each joint using the Lagrange equation
τi = d/dt (∂L/∂θ̇i) − ∂L/∂θi
where τi represents the torque of joint i and L = T − V is the Lagrangian, the kinetic energy minus the potential energy. After expanding the components of the equation, it takes the form
τ = M(θ) θ̈ + C(θ, θ̇) θ̇ + N(θ)
where C(θ, θ̇) is the Coriolis and centrifugal force term, with entries
Cij = ½ Σk (∂Mij/∂θk + ∂Mik/∂θj − ∂Mkj/∂θi) θ̇k
and N(θ) = ∂V/∂θ is the gravity term.
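A minimal worked instance of the Lagrange equation above is a single pendulum (a point mass m at distance l from the joint, angle θ measured from the vertical) rather than the full 7-DoF arm; m and l are assumed demo values:

```python
import numpy as np

m, l, g = 1.0, 0.5, 9.81   # assumed mass (kg), length (m), gravity (m/s^2)

def torque(theta, theta_dot, theta_ddot):
    """tau = M(theta)*theta_ddot + C(theta, theta_dot)*theta_dot + N(theta)."""
    M = m * l ** 2                  # 1x1 inertia "matrix" of the point mass
    N = m * g * l * np.sin(theta)   # gravity term N(theta) = dV/dtheta
    C = 0.0                         # no Coriolis/centrifugal coupling for 1 DoF
    return M * theta_ddot + C * theta_dot + N

tau = torque(0.0, 0.0, 2.0)   # pure inertial torque at the vertical: m*l^2 * 2
```

For a single joint the Coriolis term vanishes, which makes the M θ̈ + C θ̇ + N structure easy to verify by hand before scaling up to the 7-DoF case.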
6. RESULTS AND ANALYSIS
To demonstrate the proposed procedure, we present an example in which the robot is equipped with a 7-DoF arm (see Figure 3) and a three-fingered Barrett hand (each experiment comprises three tests: test 1, test 2 and test 3). The goal is to follow a moving model mug, hold it stably, pick it up, and move it to the desired position while avoiding the existing obstacles. The mug moved along a straight-line trajectory in space with a velocity in the range 8-32 mm/s. The initial position of the end effector was (−0.730 m, 0.140 m, 2.168 m) and that of the moving object was (−0.005 m, −0.200 m, 1.105 m). To grasp the moving object stably and move it, the robot hand reaches the object and then closes its fingers.
6.1. Grasping object in the environment without obstacles
6.1.1. Object moves with velocity V1= 8mm/s:
The transformation equations are used to update the manipulator's joints until the distance between the end effector and the moving object is almost zero. Once the contact position is reached, the Barrett hand closes its fingers and grasps the object.
Figure 5: Successful grasping of a moving object
As shown in Figure 5, the object is grasped successfully. Figure 5.a shows the hand of the robot kept at a distance from the object: the Barrett hand and the object are in their initial positions. In Figure 5.b the object moves with velocity V1 = 8 mm/s; the robot moves to the position of the object's centroid, opens the fingers, closes them, and grasps the object. In Figure 5.c the robot picks up the object and moves it to the desired position.
To capture the moving object safely and lift it stably without slippage, the end effector needs to be controlled according to the relation between its position and the object's. The controller determines the actual position of the moving object and picks the closest distance between the end effector and the target.
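The chase-and-grasp logic just described can be sketched as a toy pursuit simulation: the end effector moves at a capped speed toward the current target position while the object travels in a straight line at 8 mm/s. The end-effector speed cap and capture radius are assumed demo values; the initial positions are those quoted above:

```python
import numpy as np

dt = 0.01                                 # simulation step (s)
v_ee, v_obj = 0.15, 0.008                 # assumed speeds (m/s); object at 8 mm/s
ee = np.array([-0.730, 0.140, 2.168])     # initial end-effector position (m)
obj = np.array([-0.005, -0.200, 1.105])   # initial object position (m)
heading = np.array([0.0, 0.0, 1.0])       # object moves in a straight line along Z

t, capture_radius = 0.0, 0.005
while np.linalg.norm(obj - ee) > capture_radius and t < 60.0:
    step = obj - ee                       # head toward the object's current position
    ee = ee + v_ee * dt * step / np.linalg.norm(step)
    obj = obj + v_obj * dt * heading      # the object keeps moving
    t += dt
caught = np.linalg.norm(obj - ee) <= capture_radius
```

Because the end-effector speed cap comfortably exceeds the object's 8 mm/s, the distance shrinks monotonically and the hand reaches grasping distance in a few seconds of simulated time.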
Figure 6: The trajectory of the object
These three figures show the same trajectory of the moving object, with the same velocity V1, in different dimensions. Figure 6.a shows the trajectory along the Z axis, figure 6.b shows it in the (Y,Z) plane, and figure 6.c in (X,Y,Z) space. The object moves in a straight line.
Figure 7: The trajectory of the object and the end effector
Figure 7.a shows the curves of the third test: the robot grasps the object, moving along the Z axis with velocity V1, at time Tgrasp = 3.75 s. Figure 7.b shows the curves of the first test: the robot grasps the object, moving in the (Y,Z) plane with velocity V1, at Tgrasp = 3.99 s. Figure 7.c shows the curves of the second test: the robot grasps the object, moving in (X,Y,Z) space with velocity V1, at Tgrasp = 2.81 s.
Table 2. Object moves with V1
         according (Z)axis     in(Y,Z)               in(X,Y,Z)
         Tgrasp(s)  Tend(s)    Tgrasp(s)  Tend(s)    Tgrasp(s)  Tend(s)
test1    2.91       9.94       3.99       10.08      3.22       8.27
test2    2.40       6.55       2.55       6.28       2.81       6.49
test3    3.75       6.8        3.21       8.15       4.17-11.3  15.26
Table 2 gives, separately, the time to grasp the moving object and the time to move it to the desired position, with the object moving at velocity V1. Times are close across the different tests. In test 3, where the object moves in space, we note two grasp times: the first grasping attempt fails, and the robot makes a second, successful grasp.
6.1.2. Object moves with velocity V2=4V1:
Figure 8: The trajectory of the end effector and the object
Figure 8.a shows the curves of the second test: the robot grasps the object, moving along the Z axis with velocity V2, at Tgrasp = 4.07 s. Figure 8.b shows the curves of the third test: the robot grasps the object, moving in the (Y,Z) plane with velocity V2, at Tgrasp = 3.48 s. Figure 8.c shows the curves of the second test: the robot grasps the object, moving in (X,Y,Z) space with velocity V2, at Tgrasp = 3.02 s.
Table 3. Object moves with V2
according (Z)axis in(Y,Z) in(X,Y,Z)
Tgrasp(s) Tend(s) Tgrasp(s) Tend(s) Tgrasp(s) Tend(s)
test1 3.89 8.57 2.9 7.54 3.75 8.73
test2 4.07 9.93 3.05 8.57 3.02 8.18
test3 3.51 11.4 3.48 7.48 3.21 11.8
Table 3 presents, separately, the time to grasp the moving object, which moves with velocity V2 = 4V1, and the time to move it to the desired position.
If we increase the velocity of the object, the times remain close but are slightly higher. Thus, increasing the speed affects the time to grasp the moving object, and the direction of motion of the object also affects the grasping time.
As the tables show, our algorithm picked the object up successfully 100% of the time, and the robot grasps the moving object in a reasonable time.
6.2. Grasping object in the presence of one obstacle in the environment:
6.2.1. Object moves with velocity V1= 8mm/s:
Figure 9. Grasping a moving object while avoiding an obstacle, with success.
As presented in Figure 9, the object is grasped successfully. In Figure 9.a, the Barrett hand and the object are in their initial locations; the object moves with a velocity of 8 mm/s. In Figure 9.b, the robot reaches the object's centroid while avoiding any encountered obstacles, closes the fingers, and grasps the object. In figure 9.c the robot picks up the object while avoiding the obstacle, and in figure 9.d it moves the object to the desired position.
To capture a moving object safely, without collision, and lift it stably without slippage, the end effector needs to be controlled while considering the relation between its position, the position of the moving object, and the position of the obstacle (here placed midway between the object and the end effector). The end effector selects the shortest distance from its current position to the moving object while avoiding the obstacle in the environment.
Figure 10. The trajectory of the end effector and the object
Figure 10.a shows the curves of the first test: the robot grasps the object, moving along the Z axis with velocity V1, at Tgrasp = 2.45 s. Figure 10.b shows the curves of the third test: the robot grasps the object, moving in the (Y,Z) plane with velocity V1, at Tgrasp = 3.03 s. Figure 10.c shows the curves of the second test: the robot grasps the object, moving in (X,Y,Z) space with velocity V1, at Tgrasp = 2.91 s.
Table 4. Object moves with V1 in the presence of an obstacle
         according (Z)axis     in(Y,Z)               in(X,Y,Z)
         Tgrasp(s)  Tend(s)    Tgrasp(s)  Tend(s)    Tgrasp(s)  Tend(s)
test1    2.45       8.91       3.11       7.63       5.16       11.38
test2    2.95       9.08       2.83       9.33       2.91       7.07
test3    3.18       9.74       3.03       7.6        4.24       8.77
Table 4 presents, separately, the time to grasp the moving object, which moves with velocity V1 while the obstacle is avoided, and the time to move it to the desired position. Times are close across the different tests. The direction of motion of the object affects both the grasping time (Tgrasp) and the time to move the object to the desired position (Tend).
6.2.2. Object moves with velocity V2=4V1:
Figure 11. The trajectory of the end effector and the object
Figure 11.a shows the curves of the second test: the robot grasps the object, moving along the Z axis with velocity V2, at Tgrasp = 2.43 s. Figure 11.b shows the curves of the second test: the robot grasps the object, moving in the (Y,Z) plane with velocity V2, at Tgrasp = 2.74 s. Figure 11.c shows the curves of the second test: the robot grasps the object, moving in (X,Y,Z) space with velocity V2, at Tgrasp = 2.42 s.
Table 5. Object moves with V2 in the presence of an obstacle
according (Z)axis in(Y,Z) in(X,Y,Z)
Tgrasp(s) Tend(s) Tgrasp(s) Tend(s) Tgrasp(s) Tend(s)
test1 2.96 9.47 3.58 9.57 3 9.51
test2 2.43 8.18 2.74 7.6 2.42 7.16
test3 2.37 6.87 2.63 8.13 2.54 7.35
Table 5 presents, separately, the time to grasp the moving object, which moves with velocity V2 = 4V1 while the obstacle is avoided, and the time to move it to the desired position.
If we increase the velocity of the object, the times remain close but are slightly higher. Thus, increasing the speed affects the grasping time, as does the direction of motion of the object; we also note that in the presence of obstacles the times are slightly higher than in their absence.
As the tables show, our algorithm picked the object up successfully 100% of the time, and the robot grasps the moving object in a reasonable time. The times recorded in the presence of the obstacle are slightly higher than those recorded in its absence.
6.3. Grasping object in the presence of two obstacles in the environment:
In the presence of obstacles, we plan a path in the 7-DoF configuration space that takes the end effector from the starting position to a goal position while avoiding the obstacles. To compute the goal orientation of the end effector and the configuration of the fingers, we use a criterion that attempts to minimize the opening of the hand without touching the object being grasped or other nearby obstacles. Finally, the planner finds the shortest path from the starting position to the possible target positions.
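A hedged sketch of this target-selection rule: among candidate grasp positions, pick the nearest one whose straight-line path from the end effector clears all obstacles, here approximated as spheres. All coordinates and radii below are assumed demo values, not the paper's data:

```python
import numpy as np

obstacles = [(np.array([0.0, 0.0, 1.5]), 0.10)]   # (center, radius) spheres

def segment_clear(p0, p1, center, radius):
    """True if the segment p0->p1 stays outside the sphere."""
    d = p1 - p0
    t = np.clip(np.dot(center - p0, d) / np.dot(d, d), 0.0, 1.0)
    closest = p0 + t * d                  # closest point of the segment to the center
    return np.linalg.norm(center - closest) > radius

def pick_goal(ee, candidates):
    """Nearest candidate reachable on a collision-free straight line."""
    ok = [c for c in candidates
          if all(segment_clear(ee, c, o, r) for o, r in obstacles)]
    return min(ok, key=lambda c: np.linalg.norm(c - ee)) if ok else None

ee = np.array([-0.730, 0.140, 2.168])
candidates = [np.array([0.0, 0.0, 1.4]),    # path grazes the obstacle: rejected
              np.array([-0.2, 0.3, 1.2])]   # clear path: selected
goal = pick_goal(ee, candidates)
```

A real planner works in configuration space rather than with straight workspace segments, but the same clearance-then-shortest-distance ordering applies.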
6.3.1. Two obstacles between the object and the desired position(see Figure 12):
Figure 12. Two obstacles between the object and the desired position
The presence of two obstacles between the object and the desired position affects the time to move the object to the desired position.
Figure 13. The trajectory of the end effector and the object while avoiding two obstacles between the object
and the desired position
Figure 13.a shows the curves of the second test: the robot grasps the object, moving along the Z axis with velocity V2, at Tgrasp = 3.67 s, with Tend = 8.99 s. Figure 13.b shows the curves of the second test: Tgrasp = 2.93 s and Tend = 8.83 s, the object moving in the (Y,Z) plane with velocity V2. Figure 13.c shows the curves of the second test: Tgrasp = 3.12 s and Tend = 10.62 s, the object moving in (X,Y,Z) space with velocity V2.
6.3.2. Two obstacles between the arm and the object(see Figure 14):
Figure 14. Two obstacles between the arm and the object
The presence of two obstacles between the arm and the object affects the time to grasp the moving object.
Figure 15. The trajectory of the end effector and the object while avoiding two obstacles between the arm
and the object
Figure 15.a shows the curves of the second test: the robot grasps the object, moving along the Z axis with velocity V2, at Tgrasp = 3.21 s, with Tend = 7.93 s. Figure 15.b shows the curves of the second test: Tgrasp = 3.37 s and Tend = 8.98 s, the object moving in the (Y,Z) plane with velocity V2. Figure 15.c shows the curves of the second test: Tgrasp = 3.47 s and Tend = 7.50 s, the object moving in (X,Y,Z) space with velocity V2.
6.4. Grasping object in the presence of three obstacles in the environment:
Figure 16. WAM Arm avoids three obstacles
In figure 16, the robot successfully grasps the item. Figure 16.a shows the hand of the robot kept at a distance from the object: the Barrett hand and the object are in their initial positions. In figure 16.b the object moves with velocity V1 = 8 mm/s and the robot moves to the position of the object's centroid, choosing the shortest way to the object between the obstacles; it opens the fingers, closes them, and grasps the object at Tgrasp = 2.46 s. In figure 16.c the robot picks the object up while avoiding the obstacles, and in figure 16.d it moves the object to the desired position at Tend = 6.68 s.
Table 6. Object moves with V1 in the presence of three obstacles
according (Z) axis in(Y,Z) in(X,Y,Z)
Tgrasp(s) Tend(s) Tgrasp(s) Tend(s) Tgrasp(s) Tend(s)
test1 2.38 7.14 3.20 8.43 2.88 9.08
test2 2.56 7.88 2.50 7.30 2.46 6.68
test3 2.13 7.20 2.50 8.20 2.67 7.04
Table 6 presents the time to grasp the moving object, which moves with velocity V1 while three obstacles are avoided, and the time to move it to the desired position.
If we increase the number of obstacles, the grasping time is reduced: the robot's choice of trajectories is restricted, and it takes the shortest trajectory to the object.
Figure 17. WAM Arm avoids three obstacles
As shown in the image sequence of figure 17, the robot successfully grasps the item. Figure 17.a shows the hand of the robot kept at a distance from the object: the Barrett hand and the object are in their initial positions. In figure 17.b the object moves with velocity V1 = 8 mm/s and the robot moves to the position of the object's centroid, opens the fingers, closes them, and grasps the object at Tgrasp = 2.78 s. In figure 17.c the robot picks the object up while avoiding the obstacles, choosing the shortest way to the object between the two obstacles, and in figure 17.d it moves the object to the desired position at Tend = 6.90 s.
7. CONCLUSIONS
We have presented a simulation of grasping a moving object at different velocities, in order to move it to a desired position while avoiding obstacles, using the 7-DoF robotic arm with the Barrett hand and the RRT-JT algorithm. This algorithm allows us to overcome the problem of inverse kinematics by exploiting the nature of the Jacobian as a transformation from the configuration space to the workspace.
We reported, separately, the time to grasp the moving object, travelling at different velocities in the presence and absence of obstacles, and the time to place the object at a desired position: the object first moves with velocity V1, and then with velocity V2 = 4V1. The proposed algorithm successfully grasps the moving object in a reasonable time and places it at the target position. Times are close across the different tests. The presence of obstacles restricts the choice of trajectory and can speed up grasping the object, while the direction of motion affects both the grasping time and the time to place the object; overall, the times recorded in the presence of obstacles are slightly higher than those recorded in their absence.
In this article, we have studied an algorithm dedicated to grasping a moving target while avoiding fixed obstacles. Future work will extend the present algorithm to moving obstacles.
REFERENCES
[1] F. Ruggiero, Grasp and Manipulation of Objects with a Multi-Fingered Hand in Unstructured Environments, Ph.D. Thesis, Università degli Studi di Napoli (2010).
[2] A. Leeper, K. Hsiao, Strategies for human-in-the-loop robotic grasping, Robot Manipulation and Programming (2012).
[3] K. Nagase, Y. Aiyama, Grasp motion planning with redundant DOF of grasping pose, Journal of Robotics and Mechatronics, Vol. 25, No. 3 (2013).
[4] R. Andersson, A robot ping pong player: optimized mechanics, high performance 3D vision, and intelligent sensor control, Robotersysteme, pp. 161-170 (1990).
[5] M.-S. Kim, Robot visual servo through trajectory estimation of a moving object using Kalman filter, in Emerging Intelligent Computing Technology and Applications, D.-S. Huang et al., Eds., Springer Berlin/Heidelberg, Vol. 5754 (2009), pp. 1122-1130.
[6] F. Husain, Real-time tracking and grasping of a moving object from range video (2013).
[7] N. Houshangi, Control of a robotic manipulator to grasp a moving target using vision, IEEE Int. Conf. Robotics and Automation (1990), pp. 604-609.
[8] A. Koivo, On adaptive vision feedback control of robotic manipulators, in Proc. IEEE Conf. Decision and Control, Vol. 2 (1991), pp. 1883-1888.
[9] B. Mirtich, J. Canny, Easily computable optimum grasps in 2-D and 3-D, in Proc. IEEE International Conference on Robotics and Automation (1994), pp. 739-747.
[10] B. Johansson, C. Balkenius, Event prediction and object motion estimation in the development of visual attention, Proceedings of the Fifth International Workshop on Epigenetic Robotics (2005).
[11] P. Yeoh, S. Abu-Bakar, Accurate real-time object tracking with linear prediction method, Proceedings of the International Conference on Image Processing, Vol. 3 (2003).
[12] K. Zimmermann, J. Matas, Learning efficient linear predictors for motion estimation, Proceedings of the 5th Indian Conference on Computer Vision, Graphics and Image Processing, Springer-Verlag, Madurai, India (2006).
[13] S.-J. Kim, J.-W. Park, J. Lee, Implementation of tracking and capturing a moving object using a mobile robot, International Journal of Control, Automation, and Systems, 3 (2005), p. 444.
[14] D. Berenson, Manipulation Planning with Workspace Goal Regions, The Robotics Institute, Carnegie Mellon University, USA (2009).
[15] M. Stilman, J.-U. Schamburek, J. Kuffner, T. Asfour, Manipulation planning among movable obstacles, in IROS (2007).
[16] Y. Hirano, K. Kitahama, S. Yoshizawa, Image-based object recognition and dexterous hand/arm motion planning using RRTs for grasping in cluttered scenes, in IROS (2005).
[17] S. LaValle, J. Kuffner, Rapidly-exploring random trees: Progress and prospects, in WAFR (2000).
[18] S. M. LaValle, Planning Algorithms, Cambridge, U.K.: Cambridge University Press, available at http://planning.cs.uiuc.edu/ (2006).
[19] D. Bertram, J. Kuffner, R. Dillmann, T. Asfour, An integrated approach to inverse kinematics and path planning for redundant manipulators, in ICRA (2006).
[20] E. Drumwright, V. Ng-Thow-Hing, Toward interactive reaching in static environments for humanoid robots, in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (2006), pp. 846-851.
[21] M. Vande Weghe, D. Ferguson, S. Srinivasa, Randomized path planning for redundant manipulators without inverse kinematics, in Humanoids (2007).
[22] D. Berenson, S. Srinivasa, Pittsburgh; supported by the National Science Foundation under Grant No. EEC-0540865 (2009).
[23] M. Vande Weghe, Randomized path planning for redundant manipulators without inverse kinematics.
[24] E. Lopez-Damian, Grasp planning for object manipulation by an autonomous robot, Master's thesis, National Institute of Applied Sciences of Toulouse (2006).
[25] M. W. Spong, M. Vidyasagar, Robot Dynamics and Control (1989).
[26] WAM Arm User's Guide, www.barrett.com
Authors
Ali CHAABANI was born in Kairouan,Tunisia, in 1987. He received, from the Sfax
National School of Engineering (ENIS), the Principal Engineering Diploma in
Computer science in 2011, from the National Institute of Applied Sciences and
Technology (INSAT), the Master of Computer Science and Automation in 2013. Now,
he is a PhD student in Tunis National School of Engineering (ENIT) and researcher in
Informatics Laboratory for Industrial Systems (LISI) at the National Institute of
Applied Sciences and Technology (INSAT).
Mohamed Sahbi BELLAMINE is an assistant professor in the National Institute of
Applied Sciences and Technology, University of Carthage. He is a member of
the Laboratory of Computer Science of Industrial Systems (LISI). His domains of interest
are Human-Robot Interaction and sociable robots.
Moncef GASMI was born in Tunis,Tunisia, in 1958. He received the Principal
Engineering Diploma in Electrical Engineering in 1984 from the Tunis National School
of Engineering (ENIT), the Master of Systems Analysis and Computational Treatment
in 1985, the Doctorate in Automatic Control in 1989 and the State Doctorate in
Electrical Engineering in 2001. Now, he teaches as a Professor and is Director of the
Informatics Laboratory for Industrial Systems (LISI) at the National Institute of Applied
Sciences and Technology (INSAT). His interests are about