IOSR Journal of Computer Engineering (IOSR-JCE)
e-ISSN: 2278-0661,p-ISSN: 2278-8727, Volume 18, Issue 2, Ver. I (Mar-Apr. 2016), PP 91-99
www.iosrjournals.org
DOI: 10.9790/0661-18219199
3D Object Recognition Using Centroidal Representation
Anwer Subhi Abdulhussein Oleiwi
Avicenna Center for E-learning, Al-Mustansiriyah University, Iraq
Abstract: Three-dimensional object recognition from a two-dimensional image is implemented in this work using 512 different poses of the object (represented by an airplane, a cube, or a satellite). The object is rotated in the three directions (x, y, z) in steps of 2π/8 rad (45°). A centroidal sample representation was used to represent the data extracted from the object. In addition, four or five variables are calculated for each object. These variables represent fast-calculated features of the object that can speed up the recognition process, and they form a scale- and position-invariant representation of the object. The mathematical model that represents the object was calculated by two methods: the Wavelet Transform Model and the Autoregression Model.
Keywords: Three Dimensional Objects, Two-Dimensional Image, the Mathematical Model, Wavelet
Transform, Autoregression Model
I. Introduction
Recognition of an object image requires associating that image with views of the object stored in computer memory (a database which contains just enough different poses of the object under consideration). These views are called the Object Model, and they are needed because, as in the real world, a person cannot recognize objects that he has never seen before [1]. The object is assigned to the model image with the minimum distance, provided that distance is less than a predefined value. In the real world, rendering an image is done at the speed of light shining on the objects and reflecting into human eyes, but rendering of virtual worlds takes much more time. This is why variable rendering quality has always been a biased balance between the computation time and the displayed effects; this setting differs according to the application [2]. When a large Object Model is used, searching becomes extremely time consuming and hence not realistic. Consequently, the main concern here is to solve the combinatorial time-complexity problem in an optimal manner, so that the correct matching (recognition) is found as fast as possible [3].
II. Proposed Method
The proposed method presents a new approach for recognizing a three-dimensional object from a two-dimensional image. The method is implemented in several stages, as shown in figure 1.
Filtering
Spatial filters have been used to remove various types of noise in digital images. These spatial filters typically operate on a small neighborhood, in the range from 3×3 pixels to 11×11 pixels. If the noise added to each pixel is assumed to be additive white Gaussian, then the intensity of each pixel can be represented as follows [4]:
d(x,y) = I(x,y) + n(x,y)   …(1)
where x and y are the coordinates of the pixel under consideration in the image,
d(x,y) = Degraded pixel intensity in the image.
I(x,y) = Original pixel intensity in the image.
n(x,y) = Additive White Gaussian function value with zero mean which represents the noise intensity added
to the pixel (x,y).
In this work, adaptive alpha-trimmed filtering is suggested, with the condition that if the filtering action would be applied over an edge pixel, the filtering action is canceled and that pixel keeps its former value in the new filtered image. A point is judged to be an edge point when the sum of the three maximum pixel values in the window exceeds the sum of the three minimum pixel values by a predefined threshold, which is determined experimentally. Figure 2 illustrates the filtered object.
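As an illustration of this filtering step, the following is a minimal sketch of an alpha-trimmed mean filter with the edge-skipping condition described above, assuming a NumPy grayscale image; the window size, trim count and edge threshold are illustrative values, not figures taken from this work.

```python
import numpy as np

def adaptive_alpha_trimmed(image, window=3, trim=1, edge_threshold=60):
    """Alpha-trimmed mean filter that skips pixels judged to lie on an edge.

    A pixel is treated as an edge point when, inside its window, the sum of
    the three largest values exceeds the sum of the three smallest values by
    more than `edge_threshold`; such pixels keep their original value.
    """
    half = window // 2
    padded = np.pad(image.astype(float), half, mode="edge")
    out = image.astype(float).copy()
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            block = np.sort(padded[r:r + window, c:c + window].ravel())
            if block[-3:].sum() - block[:3].sum() > edge_threshold:
                continue                                  # edge pixel: leave untouched
            trimmed = block[trim:len(block) - trim]       # drop the extreme values
            out[r, c] = trimmed.mean()
    return out
```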
Binary Conversion
In order to distinguish the object under consideration by its boundary, the image is converted to a binary image after the filtering process. The black level represents anything other than the object in the image, and the maximum white level represents the object. This procedure helps in eliminating the surface details of the object and keeps only its boundary information. In some applications this conversion may not be used. Figure 3 shows the binary image obtained from the filtered object.
Edge Detection
In this work, the main interest is in the presence of an edge, not in its direction, so the Roberts operator (a 2×2 operator) works well for the purpose of edge detection. This operator indicates the presence of an edge but not its direction: if its magnitude exceeds some predefined value, an edge point is detected. The magnitude of this operator is defined as:
Magnitude = |I(x,y) − I(x−1,y−1)| + |I(x,y−1) − I(x−1,y)|   …(2)
The Roberts operator is used here because of its computational efficiency. When the magnitude of this operator exceeds some predefined threshold, an edge is detected [5].
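A short sketch of the Roberts-cross test as written in equation (2) above; the threshold value is an arbitrary placeholder, not a value from the paper.

```python
import numpy as np

def roberts_edges(image, threshold=30):
    """Roberts cross edge map: an edge is flagged where the 2x2 gradient
    magnitude |I(x,y)-I(x-1,y-1)| + |I(x,y-1)-I(x-1,y)| exceeds `threshold`."""
    img = image.astype(float)
    mag = np.zeros_like(img)
    mag[1:, 1:] = (np.abs(img[1:, 1:] - img[:-1, :-1]) +
                   np.abs(img[1:, :-1] - img[:-1, 1:]))
    return mag > threshold
```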
Center Point Calculation
The fastest method to calculate the center of the object in an image is to imagine a rectangle that
surrounds the object. The cross point of the diagonals of the rectangle will be the center point of the object,
which can be calculated quickly by calculating its coordinates as follows [6]:
Xc = (x1 + x2) / 2   …(3)
Yc = (y1 + y2) / 2   …(4)
Where x1 and x2 are the X coordinates of the first and second vertical lines of the rectangle
respectively, which represent the X coordinates of the first left and the first right points of the object in the
image respectively. y1 and y2 are the Y coordinates of the first and second horizontal lines of the rectangle
respectively which represent the Y coordinates of the highest top and the lowest bottom points of the object
respectively.
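A sketch of the bounding-rectangle center computation of equations (3) and (4), assuming the object pixels are the nonzero entries of a binary image.

```python
import numpy as np

def object_center(binary):
    """Center of the axis-aligned rectangle that bounds the object pixels
    (nonzero entries) of a binary image: Xc = (x1+x2)/2, Yc = (y1+y2)/2."""
    ys, xs = np.nonzero(binary)
    x1, x2 = xs.min(), xs.max()   # first-left / first-right object columns
    y1, y2 = ys.min(), ys.max()   # top / bottom object rows
    return (x1 + x2) / 2.0, (y1 + y2) / 2.0
```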
Data Acquisition and Data Normalization
The data that represent some predefined features of the object are obtained from the object image. All the representations used are one-dimensional, in which a single set of variables represents the object under consideration.
Figure 2: Filtered Object
Figure 3: Binary Conversion
3D Object Recognition Using Centroidal Representation
DOI: 10.9790/0661-18219199 www.iosrjournals.org 93 | Page
The Centroidal Representation [7]: In this type of representation, the center of the object is determined first. Then the distance from that center to the object boundary is calculated every (2π/N) radians from a predefined starting point on the boundary, where N is the number of samples and must equal 2^n (n = 1, 2, …). In this work n = 4 and n = 5 were used (N = 16 and N = 32). The variables (samples) obtained in this type of representation are scale variant, so some form of normalization must be introduced [8].
Two types of normalization processes are suggested. The first type is an object-dependent normalization in which each sample is divided by the sample that follows it, and the last sample in the set is normalized by dividing it by the first one, as shown in figure 4. The second type of normalization is to calculate the average value of all samples and then divide every sample by that average, as shown in figure 5.
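The following is a sketch of the centroidal sampling and the two suggested normalizations, assuming the boundary is available as an array of (x, y) points. Picking, for each sampling angle, the boundary point whose direction from the center is closest to that angle is one simple way to realize the "every 2π/N" rule; it is an implementation choice, not a detail taken from the paper.

```python
import numpy as np

def centroidal_samples(boundary_xy, center, n_samples=16):
    """Distance from the object center to the boundary every 2*pi/N radians.

    boundary_xy is an (M, 2) array of boundary (x, y) points; for each
    sampling angle, the boundary point whose direction from the center is
    closest to that angle is used (an implementation choice).
    """
    cx, cy = center
    dx = boundary_xy[:, 0] - cx
    dy = boundary_xy[:, 1] - cy
    angles = np.mod(np.arctan2(dy, dx), 2 * np.pi)
    radii = np.hypot(dx, dy)
    samples = np.empty(n_samples)
    for k in range(n_samples):
        target = 2 * np.pi * k / n_samples
        gap = np.abs(np.angle(np.exp(1j * (angles - target))))  # wrapped angle gap
        samples[k] = radii[np.argmin(gap)]
    return samples

def normalize_by_average(samples):
    """Second normalization: divide every sample by the average of all samples."""
    return samples / samples.mean()

def normalize_by_proceeding_sample(samples):
    """First (object-dependent) normalization, as read from the text: each
    sample is divided by the sample that follows it, and the last sample is
    divided by the first one."""
    return samples / np.roll(samples, -1)
```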
Mathematical Models
After the data are extracted from the image of the object, data manipulation must be applied in order to obtain a suitable Mathematical Model for the object.
a) Autoregression Model
The measured data can be represented, or approximately represented, by a selected Autoregression Model. The parametric spectral estimate is obtained in a three-step procedure:
1- The model set is selected (AR, the all-pole Autoregression model).
2- The model parameters are estimated from the data.
3- The spectral estimate is formed from the estimated model parameters.
For linear systems with a random input, the measured output spectrum is related to the input spectrum by the factorization model [9]:
Syy(Z) = H(Z) H*(Z) Sxx(Z) = |H(Z)|² Sxx(Z)   …(5)
Figure 6 shows the schematic diagram of the AR model.
Figure 6 : Autoregression Model
The model parameter estimation proceeds as follows [10]:
1- Select the representative model (the AR Model).
2- Estimate the model parameters from the data; that is, given y(t), estimate A(q⁻¹), where for the AR model H(q⁻¹) = 1/A(q⁻¹) (all-pole model).
The all-pole model is defined as:
y(t) A(q⁻¹) = e(t)
where
y(t) is the measured data,
e(t) is the zero-mean white noise sequence with variance Ree,
and A(q⁻¹) is an Na-th order polynomial in the backward shift operator.
The basic parameter estimation problem for the Autoregression in the general case (infinite covariance) is given as the minimum (error) variance solution to
min_a J(t) = E{e²(t)}   …(6)
e(t) = y(t) − ŷ(t)   …(7)
     = {minimum error between the estimated and the real value},
where ŷ(t) is the minimum-variance estimate obtained from the AR model:
y(t) = Σ_{i=1}^{Na} a_i y(t−i) + e(t)   …(8)
ŷ(t) = Σ_{i=1}^{Na} a_i y(t−i)   …(9)
ŷ(t) is actually the one-step predictor: it estimates y(t | t−1) from the past data samples. This operation is called linear prediction. The coefficients of the predictor were calculated using Durbin's recursive algorithm. The order of each predictor depends on the number of samples (N) used in calculating the predictor coefficients and equals the first integer greater than ln(N).
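A compact sketch of the Levinson-Durbin recursion referred to above for estimating the all-pole AR coefficients of a sample vector, with the model order chosen as the first integer greater than ln(N); the biased autocorrelation estimate is an implementation choice.

```python
import math
import numpy as np

def ar_coefficients(samples):
    """Estimate all-pole AR coefficients with the Levinson-Durbin recursion.

    The model order follows the rule quoted above: the first integer greater
    than ln(N), where N is the number of samples.  Returns a with
    y(t) ~ a[0]*y(t-1) + a[1]*y(t-2) + ... as in equation (9).
    """
    x = np.asarray(samples, dtype=float)
    n = len(x)
    order = math.floor(math.log(n)) + 1                      # first integer > ln(N)
    # Biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[:n - lag], x[lag:]) / n for lag in range(order + 1)])
    a = np.zeros(order)
    err = r[0]
    for i in range(order):
        k = (r[i + 1] - np.dot(a[:i], r[i:0:-1])) / err      # reflection coefficient
        a_prev = a[:i].copy()
        a[i] = k
        a[:i] = a_prev - k * a_prev[::-1]
        err *= (1.0 - k * k)
    return a
```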
b) Wavelet Transform Model
Two forms of the discrete wavelet transform are used throughout this work: the Haar wavelet transform and the Daubechies wavelet transform.
Haar Wavelet Transform: the Haar wavelets are an old topic; their graphs are made from flat pieces (values of ±1 and 0). When dealing with a four-element one-dimensional signal, the Haar Wavelet Transform of this signal is one scaling function together with three wavelet coefficients. If these scaling and wavelet coefficients are multiplied by the Haar basis vectors, the original signal vector is retrieved. The four-element Haar basis vectors form a matrix of the following form:
H =
[  1    1    1    0 ]
[  1    1   -1    0 ]
[  1   -1    0    1 ]
[  1   -1    0   -1 ]          …(10)
As can be seen, the basis vectors (columns) of the Haar transform are orthogonal: the inner product of any basis vector with any other basis vector is zero. With D the signal vector and S the wavelet transform vector,
D = H S   …(11)
H⁻¹ D = H⁻¹ H S   …(12)
S = H⁻¹ D   …(13)
where
H⁻¹ =
[  0.25    0.25    0.25    0.25 ]
[  0.25    0.25   -0.25   -0.25 ]
[  0.5    -0.5     0       0    ]
[  0       0       0.5    -0.5  ]          …(14)
with
S = [S, W1, W2, W3]ᵀ  and  D = [a1, a2, a3, a4]ᵀ   …(15)
Throughout this work the fast Haar algorithm is used, which transforms the four-element one-dimensional signal vector into a scaling function and three wavelet coefficients. The transform is carried out with the pyramid algorithm. The components of the signal vector are a1, a2, a3 and a4; the pyramid algorithm is applied as in figure 7 [11].
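A short sketch of the single-level pyramid computation for a four-element signal, producing the scaling function and three wavelet coefficients consistent with H⁻¹ in equation (14); the divide-by-2/divide-by-4 averaging convention (rather than orthonormal 1/√2 factors) is the one implied by that matrix.

```python
def haar_pyramid4(a):
    """Pyramid (fast Haar) transform of a 4-element signal [a1, a2, a3, a4].

    Returns (S, W1, W2, W3) matching the averaging convention of H^-1 in
    equation (14): S is the overall mean, W1..W3 are the detail coefficients.
    """
    a1, a2, a3, a4 = a
    # First level: pairwise averages and differences
    s12, s34 = (a1 + a2) / 2.0, (a3 + a4) / 2.0
    w2, w3 = (a1 - a2) / 2.0, (a3 - a4) / 2.0
    # Second level: average and difference of the first-level averages
    s = (s12 + s34) / 2.0          # = (a1+a2+a3+a4)/4
    w1 = (s12 - s34) / 2.0         # = (a1+a2-a3-a4)/4
    return s, w1, w2, w3
```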
It was found through the mathematical calculations that the ratio of the scaling function to the first wavelet coefficient, together with the sum of the wavelet coefficients from the second to the last divided by the first wavelet coefficient, gives a good normalized (scale-invariant) mathematical representation of the data vector. This system was tested on different data sets having the same elements but in different orders, and it gives good discrimination between these vectors. The first ratio is:
Ratio1 = S / W1   …(16)
The second ratio is:
Ratio2 = (W2 + … + Wn−1) / W1   …(17)
where n is the number of elements in the data vector. This representation was extended to 32- and 64-element vectors.
Some malfunctions may appear in the theoretical calculation of the fast Haar transform, for example when dealing with the two data vectors (5,4,2,1) and (6,3,2,1). These two vectors have different elements but the same first ratio (S/W1), so the second ratio (W2+W3)/W1 must be used to distinguish between them. A modification of the Haar basis vectors is therefore proposed so that the two vectors can be distinguished from the first ratio alone. An example of this modification is given below for four elements.
H =
[  1   -2    1    0 ]
[  2    1    0    1 ]
[  2    1    0   -1 ]
[  1   -2   -1    0 ]          …(18)
H⁻¹ =
[  0.1    0.2    0.2    0.1 ]
[ -0.2    0.1    0.1   -0.2 ]
[  0.5    0      0     -0.5 ]
[  0      0.5   -0.5    0   ]          …(19)
From this four-element modified Haar transform, the scaling function and the three wavelet coefficients are calculated as follows:
S = (a1 + 2(a2 + a3) + a4) / 10   …(20)
W1 = (−2(a1 + a4) + a2 + a3) / 10   …(21)
W2 = (a1 − a4) / 2   …(22)
W3 = (a2 − a3) / 2   …(23)
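The sketch below implements the first modified transform as given by equations (20)-(23), with (21) read as written above (an interpretation consistent with the matrices (18)-(19)), and reproduces the example discussed earlier: the vectors (5,4,2,1) and (6,3,2,1) share the plain-Haar ratio S/W1 but are separated by the modified one.

```python
def modified_haar4(a):
    """First modified Haar transform of [a1, a2, a3, a4] per equations (20)-(23),
    reading (21) as W1 = (-2*(a1 + a4) + a2 + a3) / 10 so that the analysis
    matrix is the exact inverse of the basis matrix in equation (18)."""
    a1, a2, a3, a4 = a
    s = (a1 + 2 * (a2 + a3) + a4) / 10.0
    w1 = (-2 * (a1 + a4) + a2 + a3) / 10.0
    w2 = (a1 - a4) / 2.0
    w3 = (a2 - a3) / 2.0
    return s, w1, w2, w3

# The two vectors quoted above collide under the plain Haar first ratio
# but are separated by the modified transform:
for a1, a2, a3, a4 in [(5, 4, 2, 1), (6, 3, 2, 1)]:
    plain_s = (a1 + a2 + a3 + a4) / 4.0          # plain Haar scaling function
    plain_w1 = (a1 + a2 - a3 - a4) / 4.0         # plain Haar first wavelet coefficient
    s, w1, _, _ = modified_haar4((a1, a2, a3, a4))
    print((a1, a2, a3, a4),
          "plain S/W1 =", plain_s / plain_w1,
          "modified S/W1 =", round(s / w1, 3))
# plain S/W1 is 2.0 for both vectors; the modified ratio is -3.0 vs about -1.889
```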
The second proposed modified Haar vectors give the following form for a four-element signal vector:
H = [the 4×4 basis matrix of the second modification, with entries drawn from {0, ±1, ±2}]   …(24)
H⁻¹ = [its inverse, with entries drawn from {0, ±0.1, ±0.2, ±0.25, ±0.5}]   …(25)
These proposed wavelet transforms were extended to 16 and 64 element wavelets.
Daubechies wavelet transform: Three conditions must be satisfied when building the Daubechies matrix of basis vectors. These conditions are:
1- The basis vectors must be orthogonal, which means that the inner product of any two different vectors is zero.
2- The difference between the odd and even coefficients is zero.
3- The summation of the coefficients must be bounded (in the references, the selected bound is 2).
Three approaches for modified Daubechies wavelet basis vectors are proposed for the 16-element data vector; each uses conditions that differ from the others.
The proposed 16×16 matrix of basis vectors is built from the coefficients d1 … d16 as follows: the first two rows contain all sixteen coefficients (once in the order d1 … d16 and once reversed, d16 … d1); the next two rows contain the eight coefficients d9 … d16 (in forward and reversed order) padded with eight zeros; the following four rows contain the four coefficients d13 … d16 (alternately in forward and reversed order) padded with twelve zeros; and the last eight rows contain the pair d15, d16 (alternately as d15 d16 and d16 d15) padded with fourteen zeros.
The sets of equations that must be fulfilled in order to solve for the d's are:
d13d15 + d14d16 = 0   …(26)
d9d15 + d10d16 = 0   …(27)
d1d15 + d2d16 = 0   …(28)
d5d15 + d6d16 = 0   …(29)
d13d9 + d14d10 + d11d15 + d12d16 = 0   …(30)
d13d1 + d14d2 + d3d15 + d4d16 = 0   …(31)
d16d8 + d15d7 + d14d6 + d13d5 + d12d4 + d11d3 + d10d2 + d9d1 = 0   …(32)
The preceding equations satisfy the following Daubechies condition:
Σ_{k=1}^{N} d_k d_{k+2m} = 0,   m ≠ 0   …(33)
Where N is the number of elements in each one of the Daubechies basis vectors
d16+d15+d14+d13+d12+d11+d10+d9+d8+d7+d6+d5+d4+d3+d2+d1 = 2   …(34)
which satisfies the Daubechies condition for wavelet basis vectors elements:
Σ_{k=1}^{N} d_k = 2   …(35)
d16-d15+d14-d13+d12-d11+d10-d9+d8-d7+d6-d5+d4-d3+d2-d1= 0 …(36)
which satisfies the Daubechies condition for wavelet basis vectors elements:
Σ_{k=1}^{N} (−1)^k k^m d_k = 0,   for m = 0, 1, 2, 3, …, (N/2) − 1   …(37)
Equation (36) is equation (37) taken with m = 0.
d15d8-d16d7 = 0 …(38)
d15d4-d16d3 = 0 …(39)
d12d15-d11d16 = 0 …(40)
3D Object Recognition Using Centroidal Representation
DOI: 10.9790/0661-18219199 www.iosrjournals.org 97 | Page
d13d8-d14d7 + d6d15-d5d16 = 0 …(41)
d13d9+d14d10 + d11d15+d12d16 = 0 …(42)
d13+d14+d15+d16 = 2 …(43)
d16- d15+d14- d13 = 0 …(44)
d16-2 d15+3d14- 4d13 = 0 …(45)
In the first approach to building the Daubechies basis vectors, equation (35) is set equal to 8. In the second approach, equation (45) is replaced by the following non-linear equation:
d16 − 4d15 + 16d14 − 64d13 = 0   …(46)
In the third approach all the equations are used as they are given. With all modified Daubechies approaches, the ratio S/W1 is used as the mathematical model for the pose of the object.
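As a numerical illustration of conditions (33), (35) and (37), the classical 4-tap Daubechies coefficients, rescaled so that they sum to 2, are checked below; these are standard textbook coefficients used only to illustrate the conditions, not the 16-tap set solved for in this work.

```python
import math

# Classical 4-tap Daubechies coefficients, scaled so that their sum is 2,
# matching the bound used in the text above.
r3 = math.sqrt(3.0)
d = [(1 + r3) / 4, (3 + r3) / 4, (3 - r3) / 4, (1 - r3) / 4]
n = len(d)

# (35): the coefficients sum to 2
print(sum(d))                                                 # -> 2.0

# (33): orthogonality to even shifts, sum_k d_k * d_{k+2m} = 0 for m != 0
print(sum(d[k] * d[k + 2] for k in range(n - 2)))             # -> ~0.0

# (37): vanishing moments, sum_k (-1)^k * k^m * d_k = 0 for m = 0, 1
for m in (0, 1):
    print(sum(((-1) ** k) * (k ** m) * d[k - 1] for k in range(1, n + 1)))  # -> ~0.0
```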
Distance Measurement
When the Mathematical Model for the object is built, the recognition process can be carried out. The decision of whether the object is recognized depends on three types of distance measurement:
1- When dealing with a single-element feature, such as the variable Ratio, Angle1, Angle2, etc., the direct absolute difference is used, which is given as:
Distance = |R1 − R2|   …(47)
2- When dealing with a one-dimensional vector with more than one element, such as the Autoregression Model coefficients, the Euclidean distance is a good measure of the distance between two vectors. If the two vectors are (a1, a2, …, an) and (b1, b2, …, bn), then the Euclidean distance between them is defined as [7]:
Distance = √((a1 − b1)² + (a2 − b2)² + … + (an − bn)²)   …(48)
3- For the same preceding case, the non-metric similarity function is also a good distance measure. It is defined as:
S(A,B) = (A·B) / (|A| |B|) = cos θ   …(49)
This measurement gives a high value of cos θ when the similarity between the two vectors A and B is high.
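A sketch of the three distance measures (47)-(49); the cosine form of (49) follows the reading given above.

```python
import numpy as np

def absolute_distance(r1, r2):
    """Distance (47): direct absolute difference for single-value features."""
    return abs(r1 - r2)

def euclidean_distance(a, b):
    """Distance (48): Euclidean distance between two coefficient vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.sum((a - b) ** 2)))

def cosine_similarity(a, b):
    """Similarity (49): S(A,B) = A.B / (|A||B|) = cos(theta); close to 1 for
    similar vectors, so 1 - S(A,B) can serve as the matching distance."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```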
Object Model
The first step after the filtering and edge extraction processes is the calculation of four variables. These variables represent some fast-calculated features of the object, and they are the first four variables in the Mathematical Model of each pose in the Object Model. In the recognition process, each of these variables for the object under consideration is compared with the corresponding variable of the pose in the Object Model; if the distance between the two corresponding variables is higher than a predefined value, the recognition process with that pose is terminated. These four variables are defined as Ratio, Angle1, Angle2 and Ratio3. In some applications a fifth variable, the Average Crossing, is calculated. The variables are:
Ratio: The ratio of the length of the line connecting the highest pixel to the lowest pixel in the object image to the length of the line connecting the first left pixel to the first right pixel in the object image. (When more than one pixel appears on any one of the four sides, then for the top side it is the first left pixel, for the bottom side the first right pixel, for the right side the first bottom pixel, and for the left side the first top pixel.)
Angle1: The angle between the line connecting the top pixel to the bottom pixel in the object image and the horizontal axis.
Angle2: The angle between the line connecting the first left pixel to the first right pixel in the object image and the horizontal axis.
Ratio3: The ratio of the length of the chord between the object boundaries passing horizontally through the center of the object to that of the chord passing vertically through the center.
Average Crossing: One type of scale-invariant representation that may be used in building the Object Model is the Average Crossing, which is used with the centroidal representation of the object. This variable is calculated as follows: the average of the centroidal samples is calculated; then the number of times the centroidal samples switch between values higher and lower than that average is counted, which gives a scale-invariant representation of the object.
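A sketch of the Average Crossing count as described above, assuming the centroidal samples are held in a NumPy array.

```python
import numpy as np

def average_crossing(samples):
    """Number of times the centroidal samples switch between being above and
    below their own average value (a scale-invariant count, since scaling the
    object scales the samples and their average equally)."""
    samples = np.asarray(samples, dtype=float)
    above = samples > samples.mean()
    return int(np.count_nonzero(above[1:] != above[:-1]))
```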
III. Results
Every proposed algorithm in this work has its own sequential database, which represents the Object Model containing the mathematical representation of all 512 possible poses of the object. Different types of desired and undesired object images were tested. When distances larger than the following maximum allowable values were used, the algorithms failed to recognize the correct objects and to reject the undesired objects.
Maximum allowable distance for variable Ratio = 0.3.
Maximum allowable distance for variable Angle1 = .
Maximum allowable distance for variable Angle2 = 2.
Maximum allowable distance for variable Ratio3 = 0.35.
Maximum allowable distance for variable Average Crossing = 0.
For the Wavelet Transform Model and the Centroidal Representation with 16 samples and average normalization, the maximum allowable distances are as shown in Table 1.

Table 1: Maximum allowable distances when using 16 samples and average normalization (H.T = Haar transform; 1M.H.T, 2M.H.T = first and second modified Haar transforms; 1M.D.T, 2M.D.T, 3M.D.T = the three modified Daubechies transforms)
H.T (S/W1)   H.T ((W2+…+Wn)/W1)   1M.H.T   2M.H.T   1M.D.T   2M.D.T   3M.D.T
0.15         0.5                  0.2      0.2      0.15     0.05     0.07
For the Wavelet Transform Model and the Centroidal Representation with 16 samples and proceeding-sample normalization, the maximum allowable distances are as shown in Table 2.

Table 2: Maximum allowable distances when using 16 samples and proceeding-sample normalization
H.T (S/W1)   H.T ((W2+…+Wn)/W1)   1M.H.T   2M.H.T   1M.D.T   2M.D.T   3M.D.T
0.15         0.5                  0.25     0.25     0.25     0.1      0.09
For the Wavelet Transform Model and the Centroidal Representation with 32 samples and average normalization, the maximum allowable distances are as shown in Table 3.

Table 3: Maximum allowable distances when using 32 samples and average normalization
H.T (S/W1)   H.T ((W2+…+Wn)/W1)   1M.H.T   2M.H.T   1M.D.T   2M.D.T   3M.D.T
0.15         0.55                 0.25     0.3      0.1      0.09     0.75
For the Wavelet Transform Model and the Centroidal Representation with 32 samples and proceeding-sample normalization, the maximum allowable distances are as shown in Table 4.

Table 4: Maximum allowable distances when using 32 samples and proceeding-sample normalization
H.T (S/W1)   H.T ((W2+…+Wn)/W1)   1M.H.T   2M.H.T   1M.D.T   2M.D.T   3M.D.T
0.2          0.55                 0.25     0.35     0.13     0.05     0.085
For the Autoregression Model:
Maximum Euclidean distance for the centroidal representation of 16 samples with average normalization = 0.8.
Maximum Euclidean distance for the centroidal representation of 16 samples with proceeding-sample normalization = 0.85.
Maximum Euclidean distance for the centroidal representation of 32 samples with average normalization = 0.8.
Maximum Euclidean distance for the centroidal representation of 32 samples with proceeding-sample normalization = 1.
Maximum non-metric similarity distance for the centroidal representation with 16 samples and average normalization = 0.1.
Maximum non-metric similarity distance for the centroidal representation with 16 samples and proceeding-sample normalization = 0.15.
Maximum non-metric similarity distance for the centroidal representation with 32 samples and average normalization = 0.1.
Maximum non-metric similarity distance for the centroidal representation with 32 samples and proceeding-sample normalization = 0.1.
IV. Conclusion
In this work, simple, easily implemented and robust techniques for 3D object recognition are proposed. These techniques are Object Model based. The use of the adaptive filter in this work preserves the object edges and high-frequency components from being blurred. The fastest method for calculating the data describing the object was the centroidal representation with 16 samples. When dealing with objects with highly detailed
boundaries, this method fails to give a good representation, so 32, 64 or even more samples are needed, which increases the time consumption of the algorithms. The four or five variables used in the mathematical model of each pose in the Object Model are scale and position invariant since they represent ratios and angles.
The simple Haar wavelet transform is the fastest method for calculating the Mathematical Model of the object, since it does not contain any multiplication or division operations. The drawback of this type of transform is that it needs two ratios for its mathematical model. The modified Haar and Daubechies transforms proposed in this work give a good mathematical representation of the object using only the Ratio variable, S/W1. The modified Daubechies wavelet coefficients were calculated analytically rather than numerically, which simplifies the calculations.
References
[1]. Chuin-Mu Wang,”The human-height measurement scheme by using image processing techniques”, Information Security and
Intelligence Control (ISIC), International Conference on IEEE,2012.
[2]. Stefan Haas,” Computers & Graphics”, Volume 18, Issue 2, Pages 193–199, March–April 1994.
[3]. Hongbin Zha, Hideki Nanamegi, Tadashi Nagata, "Recognizing 3-D objects by using a Hopfield-style optimization algorithm for matching patch-based descriptions", Volume 31, Issue 6, Pages 727–741, June 1998.
[4]. Haitham Hasan , S. Abdul-Kareem,”Static hand gesture recognition using neural networks”, Artificial Intelligence Review,Volume
41, Issue 2, pp 147-181, January 2012.
[5]. Duan-Yu Chen , Yuan Ze Univ ,Chungli, Taiwan, “An online people counting system for electronic advertising
machines”,Multimedia and Expo, IEEE International Conference. ICME 2009.
[6]. Marc van Kreveld, Bettina Speckmann, “On rectangular cartograms” Volume 37, Issue 3, Pages 175–187, August 2007.
[7]. Tariq Zeyad Ismail, Zainab Ibrahim Abood” 3-D OBJECT RECOGNITION USING MULTI WAVELET AND NEURAL
NETWORK” Journal of Engineering, Number1,Volume 18 January 2012.
[8]. R. L. Kashyap and R. Chellappa, "Stochastic Model for Closed Boundary Analysis: Representation and Reconstruction", IEEE Trans. on Information Theory, Vol. IT-27, No. 5, 1981.
[9]. Yingbo Hua, “Blind methods of system identification”,Volume 21, Issue 1, pp 91-108,2002.
[10]. James V. Candy ,”Discrete Random Signals and Systems”, Wiley online library 7 OCT 2005.
[11]. Eloy Ramos-Cordoba, István Mayer, Eduard Matito, Pedro Salvador,”Toward a Unique Definition of the Local Spin” Journal of
chemical theory and computation,2012.

More Related Content

What's hot

Research Paper v2.0
Research Paper v2.0Research Paper v2.0
Research Paper v2.0Kapil Tiwari
 
AUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGES
AUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGESAUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGES
AUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGEScsitconf
 
AUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGES
AUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGESAUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGES
AUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGEScscpconf
 
FORGERY (COPY-MOVE) DETECTION IN DIGITAL IMAGES USING BLOCK METHOD
FORGERY (COPY-MOVE) DETECTION IN DIGITAL IMAGES USING BLOCK METHODFORGERY (COPY-MOVE) DETECTION IN DIGITAL IMAGES USING BLOCK METHOD
FORGERY (COPY-MOVE) DETECTION IN DIGITAL IMAGES USING BLOCK METHODeditorijcres
 
AUTOMATIC THRESHOLDING TECHNIQUES FOR OPTICAL IMAGES
AUTOMATIC THRESHOLDING TECHNIQUES FOR OPTICAL IMAGESAUTOMATIC THRESHOLDING TECHNIQUES FOR OPTICAL IMAGES
AUTOMATIC THRESHOLDING TECHNIQUES FOR OPTICAL IMAGESsipij
 
Denoising Process Based on Arbitrarily Shaped Windows
Denoising Process Based on Arbitrarily Shaped WindowsDenoising Process Based on Arbitrarily Shaped Windows
Denoising Process Based on Arbitrarily Shaped WindowsCSCJournals
 
FAN search for image copy-move forgery-amalta 2014
 FAN search for image copy-move forgery-amalta 2014 FAN search for image copy-move forgery-amalta 2014
FAN search for image copy-move forgery-amalta 2014SondosFadl
 
ijrrest_vol-2_issue-2_013
ijrrest_vol-2_issue-2_013ijrrest_vol-2_issue-2_013
ijrrest_vol-2_issue-2_013Ashish Gupta
 

What's hot (18)

PPT s04-machine vision-s2
PPT s04-machine vision-s2PPT s04-machine vision-s2
PPT s04-machine vision-s2
 
PPT s08-machine vision-s2
PPT s08-machine vision-s2PPT s08-machine vision-s2
PPT s08-machine vision-s2
 
Quatum fridge
Quatum fridgeQuatum fridge
Quatum fridge
 
PPT s07-machine vision-s2
PPT s07-machine vision-s2PPT s07-machine vision-s2
PPT s07-machine vision-s2
 
Research Paper v2.0
Research Paper v2.0Research Paper v2.0
Research Paper v2.0
 
Cj36511514
Cj36511514Cj36511514
Cj36511514
 
AUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGES
AUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGESAUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGES
AUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGES
 
AUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGES
AUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGESAUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGES
AUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGES
 
H1802054851
H1802054851H1802054851
H1802054851
 
FORGERY (COPY-MOVE) DETECTION IN DIGITAL IMAGES USING BLOCK METHOD
FORGERY (COPY-MOVE) DETECTION IN DIGITAL IMAGES USING BLOCK METHODFORGERY (COPY-MOVE) DETECTION IN DIGITAL IMAGES USING BLOCK METHOD
FORGERY (COPY-MOVE) DETECTION IN DIGITAL IMAGES USING BLOCK METHOD
 
AUTOMATIC THRESHOLDING TECHNIQUES FOR OPTICAL IMAGES
AUTOMATIC THRESHOLDING TECHNIQUES FOR OPTICAL IMAGESAUTOMATIC THRESHOLDING TECHNIQUES FOR OPTICAL IMAGES
AUTOMATIC THRESHOLDING TECHNIQUES FOR OPTICAL IMAGES
 
Image segmentation
Image segmentation Image segmentation
Image segmentation
 
50120130405020
5012013040502050120130405020
50120130405020
 
Denoising Process Based on Arbitrarily Shaped Windows
Denoising Process Based on Arbitrarily Shaped WindowsDenoising Process Based on Arbitrarily Shaped Windows
Denoising Process Based on Arbitrarily Shaped Windows
 
Image segmentation
Image segmentationImage segmentation
Image segmentation
 
FAN search for image copy-move forgery-amalta 2014
 FAN search for image copy-move forgery-amalta 2014 FAN search for image copy-move forgery-amalta 2014
FAN search for image copy-move forgery-amalta 2014
 
ijrrest_vol-2_issue-2_013
ijrrest_vol-2_issue-2_013ijrrest_vol-2_issue-2_013
ijrrest_vol-2_issue-2_013
 
G1802053147
G1802053147G1802053147
G1802053147
 

Viewers also liked

Study On Traffic Conlict At Unsignalized Intersection In Malaysia
Study On Traffic Conlict At Unsignalized Intersection In Malaysia Study On Traffic Conlict At Unsignalized Intersection In Malaysia
Study On Traffic Conlict At Unsignalized Intersection In Malaysia IOSR Journals
 
Properties of CdS Chemically Deposited thin films on the Effect of Ammonia Co...
Properties of CdS Chemically Deposited thin films on the Effect of Ammonia Co...Properties of CdS Chemically Deposited thin films on the Effect of Ammonia Co...
Properties of CdS Chemically Deposited thin films on the Effect of Ammonia Co...IOSR Journals
 
Batch Thermodynamics and Kinetic Study for Removal of Cationic Dye from Aqueo...
Batch Thermodynamics and Kinetic Study for Removal of Cationic Dye from Aqueo...Batch Thermodynamics and Kinetic Study for Removal of Cationic Dye from Aqueo...
Batch Thermodynamics and Kinetic Study for Removal of Cationic Dye from Aqueo...IOSR Journals
 
Potential Biodeteriogens of Indoor and Outdoor Surfaces (Coated With Gloss, E...
Potential Biodeteriogens of Indoor and Outdoor Surfaces (Coated With Gloss, E...Potential Biodeteriogens of Indoor and Outdoor Surfaces (Coated With Gloss, E...
Potential Biodeteriogens of Indoor and Outdoor Surfaces (Coated With Gloss, E...IOSR Journals
 
Exact Solutions of Axially Symmetric Bianchi Type-I Cosmological Model in Lyr...
Exact Solutions of Axially Symmetric Bianchi Type-I Cosmological Model in Lyr...Exact Solutions of Axially Symmetric Bianchi Type-I Cosmological Model in Lyr...
Exact Solutions of Axially Symmetric Bianchi Type-I Cosmological Model in Lyr...IOSR Journals
 
Stable endemic malaria in a rainforest community of Southeastern Nigeria
Stable endemic malaria in a rainforest community of Southeastern NigeriaStable endemic malaria in a rainforest community of Southeastern Nigeria
Stable endemic malaria in a rainforest community of Southeastern NigeriaIOSR Journals
 
Magnetic Properties and Interactions of Nanostructured CoCrTa Thin Films
Magnetic Properties and Interactions of Nanostructured CoCrTa Thin FilmsMagnetic Properties and Interactions of Nanostructured CoCrTa Thin Films
Magnetic Properties and Interactions of Nanostructured CoCrTa Thin FilmsIOSR Journals
 
Improving the Heat Transfer Rate for Multi Cylinder Engine Piston and Piston ...
Improving the Heat Transfer Rate for Multi Cylinder Engine Piston and Piston ...Improving the Heat Transfer Rate for Multi Cylinder Engine Piston and Piston ...
Improving the Heat Transfer Rate for Multi Cylinder Engine Piston and Piston ...IOSR Journals
 
Comparative Analysis of the Different Brassica OleraceaVarieties Grown on Jos...
Comparative Analysis of the Different Brassica OleraceaVarieties Grown on Jos...Comparative Analysis of the Different Brassica OleraceaVarieties Grown on Jos...
Comparative Analysis of the Different Brassica OleraceaVarieties Grown on Jos...IOSR Journals
 

Viewers also liked (20)

H010415362
H010415362H010415362
H010415362
 
C011131928
C011131928C011131928
C011131928
 
Study On Traffic Conlict At Unsignalized Intersection In Malaysia
Study On Traffic Conlict At Unsignalized Intersection In Malaysia Study On Traffic Conlict At Unsignalized Intersection In Malaysia
Study On Traffic Conlict At Unsignalized Intersection In Malaysia
 
D017122026
D017122026D017122026
D017122026
 
F010244149
F010244149F010244149
F010244149
 
D010411923
D010411923D010411923
D010411923
 
Properties of CdS Chemically Deposited thin films on the Effect of Ammonia Co...
Properties of CdS Chemically Deposited thin films on the Effect of Ammonia Co...Properties of CdS Chemically Deposited thin films on the Effect of Ammonia Co...
Properties of CdS Chemically Deposited thin films on the Effect of Ammonia Co...
 
D010121923
D010121923D010121923
D010121923
 
Batch Thermodynamics and Kinetic Study for Removal of Cationic Dye from Aqueo...
Batch Thermodynamics and Kinetic Study for Removal of Cationic Dye from Aqueo...Batch Thermodynamics and Kinetic Study for Removal of Cationic Dye from Aqueo...
Batch Thermodynamics and Kinetic Study for Removal of Cationic Dye from Aqueo...
 
Potential Biodeteriogens of Indoor and Outdoor Surfaces (Coated With Gloss, E...
Potential Biodeteriogens of Indoor and Outdoor Surfaces (Coated With Gloss, E...Potential Biodeteriogens of Indoor and Outdoor Surfaces (Coated With Gloss, E...
Potential Biodeteriogens of Indoor and Outdoor Surfaces (Coated With Gloss, E...
 
Exact Solutions of Axially Symmetric Bianchi Type-I Cosmological Model in Lyr...
Exact Solutions of Axially Symmetric Bianchi Type-I Cosmological Model in Lyr...Exact Solutions of Axially Symmetric Bianchi Type-I Cosmological Model in Lyr...
Exact Solutions of Axially Symmetric Bianchi Type-I Cosmological Model in Lyr...
 
Stable endemic malaria in a rainforest community of Southeastern Nigeria
Stable endemic malaria in a rainforest community of Southeastern NigeriaStable endemic malaria in a rainforest community of Southeastern Nigeria
Stable endemic malaria in a rainforest community of Southeastern Nigeria
 
Magnetic Properties and Interactions of Nanostructured CoCrTa Thin Films
Magnetic Properties and Interactions of Nanostructured CoCrTa Thin FilmsMagnetic Properties and Interactions of Nanostructured CoCrTa Thin Films
Magnetic Properties and Interactions of Nanostructured CoCrTa Thin Films
 
J010325764
J010325764J010325764
J010325764
 
C1103041623
C1103041623C1103041623
C1103041623
 
K017167076
K017167076K017167076
K017167076
 
Improving the Heat Transfer Rate for Multi Cylinder Engine Piston and Piston ...
Improving the Heat Transfer Rate for Multi Cylinder Engine Piston and Piston ...Improving the Heat Transfer Rate for Multi Cylinder Engine Piston and Piston ...
Improving the Heat Transfer Rate for Multi Cylinder Engine Piston and Piston ...
 
Comparative Analysis of the Different Brassica OleraceaVarieties Grown on Jos...
Comparative Analysis of the Different Brassica OleraceaVarieties Grown on Jos...Comparative Analysis of the Different Brassica OleraceaVarieties Grown on Jos...
Comparative Analysis of the Different Brassica OleraceaVarieties Grown on Jos...
 
G010314352
G010314352G010314352
G010314352
 
A1803050106
A1803050106A1803050106
A1803050106
 

Similar to N018219199

Basics of CT- Lecture 9.ppt
Basics of CT- Lecture 9.pptBasics of CT- Lecture 9.ppt
Basics of CT- Lecture 9.pptMagde Gad
 
A Case Study : Circle Detection Using Circular Hough Transform
A Case Study : Circle Detection Using Circular Hough TransformA Case Study : Circle Detection Using Circular Hough Transform
A Case Study : Circle Detection Using Circular Hough TransformIJSRED
 
Tracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance systemTracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance systemiaemedu
 
Tracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance systemTracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance systemIAEME Publication
 
Tracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance systemTracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance systemiaemedu
 
Tracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance systemTracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance systemiaemedu
 
Simple Pendulum Experiment and Automatic Survey Grading using Computer Vision
Simple Pendulum Experiment and Automatic Survey Grading using Computer VisionSimple Pendulum Experiment and Automatic Survey Grading using Computer Vision
Simple Pendulum Experiment and Automatic Survey Grading using Computer VisionAnish Patel
 
A New Key Stream Generator Based on 3D Henon map and 3D Cat map
A New Key Stream Generator Based on 3D Henon map and 3D Cat mapA New Key Stream Generator Based on 3D Henon map and 3D Cat map
A New Key Stream Generator Based on 3D Henon map and 3D Cat maptayseer Karam alshekly
 
Stereo Vision Distance Estimation Employing Canny Edge Detector with Interpol...
Stereo Vision Distance Estimation Employing Canny Edge Detector with Interpol...Stereo Vision Distance Estimation Employing Canny Edge Detector with Interpol...
Stereo Vision Distance Estimation Employing Canny Edge Detector with Interpol...ZaidHussein6
 
A Novel Approach for Moving Object Detection from Dynamic Background
A Novel Approach for Moving Object Detection from Dynamic BackgroundA Novel Approach for Moving Object Detection from Dynamic Background
A Novel Approach for Moving Object Detection from Dynamic BackgroundIJERA Editor
 
Comparative Analysis of Hand Gesture Recognition Techniques
Comparative Analysis of Hand Gesture Recognition TechniquesComparative Analysis of Hand Gesture Recognition Techniques
Comparative Analysis of Hand Gesture Recognition TechniquesIJERA Editor
 
Object Shape Representation by Kernel Density Feature Points Estimator
Object Shape Representation by Kernel Density Feature Points Estimator Object Shape Representation by Kernel Density Feature Points Estimator
Object Shape Representation by Kernel Density Feature Points Estimator cscpconf
 
Edge detection algorithm based on quantum superposition principle and photons...
Edge detection algorithm based on quantum superposition principle and photons...Edge detection algorithm based on quantum superposition principle and photons...
Edge detection algorithm based on quantum superposition principle and photons...IJECEIAES
 
Object tracking by dtcwt feature vectors 2-3-4
Object tracking by dtcwt feature vectors 2-3-4Object tracking by dtcwt feature vectors 2-3-4
Object tracking by dtcwt feature vectors 2-3-4IAEME Publication
 
Lec_3_Image Enhancement_spatial Domain.pdf
Lec_3_Image Enhancement_spatial Domain.pdfLec_3_Image Enhancement_spatial Domain.pdf
Lec_3_Image Enhancement_spatial Domain.pdfnagwaAboElenein
 

Similar to N018219199 (20)

mini prjt
mini prjtmini prjt
mini prjt
 
Basics of CT- Lecture 9.ppt
Basics of CT- Lecture 9.pptBasics of CT- Lecture 9.ppt
Basics of CT- Lecture 9.ppt
 
A Case Study : Circle Detection Using Circular Hough Transform
A Case Study : Circle Detection Using Circular Hough TransformA Case Study : Circle Detection Using Circular Hough Transform
A Case Study : Circle Detection Using Circular Hough Transform
 
Medial axis transformation based skeletonzation of image patterns using image...
Medial axis transformation based skeletonzation of image patterns using image...Medial axis transformation based skeletonzation of image patterns using image...
Medial axis transformation based skeletonzation of image patterns using image...
 
Tracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance systemTracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance system
 
Tracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance systemTracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance system
 
Tracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance systemTracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance system
 
Tracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance systemTracking and counting human in visual surveillance system
Tracking and counting human in visual surveillance system
 
T01022103108
T01022103108T01022103108
T01022103108
 
Simple Pendulum Experiment and Automatic Survey Grading using Computer Vision
Simple Pendulum Experiment and Automatic Survey Grading using Computer VisionSimple Pendulum Experiment and Automatic Survey Grading using Computer Vision
Simple Pendulum Experiment and Automatic Survey Grading using Computer Vision
 
A New Key Stream Generator Based on 3D Henon map and 3D Cat map
A New Key Stream Generator Based on 3D Henon map and 3D Cat mapA New Key Stream Generator Based on 3D Henon map and 3D Cat map
A New Key Stream Generator Based on 3D Henon map and 3D Cat map
 
W33123127
W33123127W33123127
W33123127
 
Stereo Vision Distance Estimation Employing Canny Edge Detector with Interpol...
Stereo Vision Distance Estimation Employing Canny Edge Detector with Interpol...Stereo Vision Distance Estimation Employing Canny Edge Detector with Interpol...
Stereo Vision Distance Estimation Employing Canny Edge Detector with Interpol...
 
A Novel Approach for Moving Object Detection from Dynamic Background
A Novel Approach for Moving Object Detection from Dynamic BackgroundA Novel Approach for Moving Object Detection from Dynamic Background
A Novel Approach for Moving Object Detection from Dynamic Background
 
Comparative Analysis of Hand Gesture Recognition Techniques
Comparative Analysis of Hand Gesture Recognition TechniquesComparative Analysis of Hand Gesture Recognition Techniques
Comparative Analysis of Hand Gesture Recognition Techniques
 
Object Shape Representation by Kernel Density Feature Points Estimator
Object Shape Representation by Kernel Density Feature Points Estimator Object Shape Representation by Kernel Density Feature Points Estimator
Object Shape Representation by Kernel Density Feature Points Estimator
 
Edge detection algorithm based on quantum superposition principle and photons...
Edge detection algorithm based on quantum superposition principle and photons...Edge detection algorithm based on quantum superposition principle and photons...
Edge detection algorithm based on quantum superposition principle and photons...
 
I0343065072
I0343065072I0343065072
I0343065072
 
Object tracking by dtcwt feature vectors 2-3-4
Object tracking by dtcwt feature vectors 2-3-4Object tracking by dtcwt feature vectors 2-3-4
Object tracking by dtcwt feature vectors 2-3-4
 
Lec_3_Image Enhancement_spatial Domain.pdf
Lec_3_Image Enhancement_spatial Domain.pdfLec_3_Image Enhancement_spatial Domain.pdf
Lec_3_Image Enhancement_spatial Domain.pdf
 

More from IOSR Journals (20)

A011140104
A011140104A011140104
A011140104
 
M0111397100
M0111397100M0111397100
M0111397100
 
L011138596
L011138596L011138596
L011138596
 
K011138084
K011138084K011138084
K011138084
 
J011137479
J011137479J011137479
J011137479
 
I011136673
I011136673I011136673
I011136673
 
G011134454
G011134454G011134454
G011134454
 
H011135565
H011135565H011135565
H011135565
 
F011134043
F011134043F011134043
F011134043
 
E011133639
E011133639E011133639
E011133639
 
D011132635
D011132635D011132635
D011132635
 
C011131925
C011131925C011131925
C011131925
 
B011130918
B011130918B011130918
B011130918
 
A011130108
A011130108A011130108
A011130108
 
I011125160
I011125160I011125160
I011125160
 
H011124050
H011124050H011124050
H011124050
 
G011123539
G011123539G011123539
G011123539
 
F011123134
F011123134F011123134
F011123134
 
E011122530
E011122530E011122530
E011122530
 
D011121524
D011121524D011121524
D011121524
 

Recently uploaded

Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsMark Billinghurst
 
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersEnhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersThousandEyes
 
Snow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter RoadsSnow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter RoadsHyundai Motor Group
 
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphSIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphNeo4j
 
Artificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraArtificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraDeakin University
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...shyamraj55
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Mattias Andersson
 
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr LapshynFwdays
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationRidwan Fadjar
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupFlorian Wilhelm
 
Key Features Of Token Development (1).pptx
Key  Features Of Token  Development (1).pptxKey  Features Of Token  Development (1).pptx
Key Features Of Token Development (1).pptxLBM Solutions
 
Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions
 
APIForce Zurich 5 April Automation LPDG
APIForce Zurich 5 April  Automation LPDGAPIForce Zurich 5 April  Automation LPDG
APIForce Zurich 5 April Automation LPDGMarianaLemus7
 
Bluetooth Controlled Car with Arduino.pdf
Bluetooth Controlled Car with Arduino.pdfBluetooth Controlled Car with Arduino.pdf
Bluetooth Controlled Car with Arduino.pdfngoud9212
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Enterprise Knowledge
 
Benefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksBenefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksSoftradix Technologies
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):comworks
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Patryk Bandurski
 

Recently uploaded (20)

Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR Systems
 
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersEnhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
 
Snow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter RoadsSnow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter Roads
 
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphSIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
 
Artificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraArtificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning era
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?
 
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 Presentation
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project Setup
 
Key Features Of Token Development (1).pptx
Key  Features Of Token  Development (1).pptxKey  Features Of Token  Development (1).pptx
Key Features Of Token Development (1).pptx
 
Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping Elbows
 
APIForce Zurich 5 April Automation LPDG
APIForce Zurich 5 April  Automation LPDGAPIForce Zurich 5 April  Automation LPDG
APIForce Zurich 5 April Automation LPDG
 
Bluetooth Controlled Car with Arduino.pdf
Bluetooth Controlled Car with Arduino.pdfBluetooth Controlled Car with Arduino.pdf
Bluetooth Controlled Car with Arduino.pdf
 
DMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special EditionDMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special Edition
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024
 
Benefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksBenefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other Frameworks
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
 
The transition to renewables in India.pdf
The transition to renewables in India.pdfThe transition to renewables in India.pdf
The transition to renewables in India.pdf
 

N018219199

  • 1. IOSR Journal of Computer Engineering (IOSR-JCE) e-ISSN: 2278-0661,p-ISSN: 2278-8727, Volume 18, Issue 2, Ver. I (Mar-Apr. 2016), PP 91-99 www.iosrjournals.org DOI: 10.9790/0661-18219199 www.iosrjournals.org 91 | Page 3d Object Recognition Using Centroidal Representation Anwer Subhi Abdulhussein Oleiwi Avicenna Center for E-learning, Al-Mustansiriyah University, Iraq Abstract: Three dimensional object recognition from two-dimensional image is implemented in this work with the use of 512 different poses of the object (which is represented by an airplane or a cube or a satellite). The object is rotated in the three directions (x,y,z) by an angle of 2/8 rad. (45). Centroidal sample representation was used to represent the data extracted from the object. Also four or five variables for each object are calculated. These variables represent fast-calculated features of the object that can speed up the recognition process. These variables are scaled and position in variant representation of the object. The mathematical model that represents the object was calculated in two methods. These were the Wavelet Transform Model and the Auto regression Model. Keywords: Three Dimensional Objects, Two-Dimensional Image, the Mathematical Model, Wavelet Transform, Autoregression Model I. Introduction Recognition of an object image requires associating that image with views of the object in computer memory (Database which contains just enough different poses of the object under consideration). These views are called the Object Model and it is needed because as in real world man cannot recognize objects that he had never seen before [1]. The object is attached to the model image with the minimum distance if the distance is less than a predefined value. In real world, rendering an image is done with speed of light shining on the objects and reflecting into human eyes, but rendering of virtual worlds take much more time. This is the reason that the variable rendering quality has always been a biased balance between the computation time and the displayed effects. This setting is differed according to the application [2]. When large object is used searching will become extremely time consuming and hence not realistic. Consequently, the main concern here is to solve the combinatorial time complexity problem in an optimal manner, so that, the correct matching (recognition) is found as fast as possible [3]. II. Proposed Method The proposed method presents new approach for recognition three- dimensional object from two- dimensional image. Many stages are implemented in this work as shown in figure 1. Filtering Spatial filters have been used to remove various types of noise in digital images. These spatial filters typically operate on a small neighborhood area in the range from 33 pixels to 1111 pixels. If the noise added to each pixel is assumed to be additive white Gaussian, then the intensity of each pixel can be represented as follows [4]: d(x,y)=I(x,y)+n(x,y) ...(1) where x and y are the coordinate of the pixel under consideration in the image. d(x,y) = Degraded pixel intensity in the image. I(x,y) = Original pixel intensity in the image.
  • 2. 3D Object Recognition Using Centroidal Representation DOI: 10.9790/0661-18219199 www.iosrjournals.org 92 | Page n(x,y) = Additive White Gaussian function value with zero mean which represents the noise intensity added to the pixel (x,y). Here through this work adaptive Alpha-trimmed filtering is suggested with the condition that if the filtering action is made over an edge pixel, then the filtering action will be canceled and the pixel point will have the same former value in the few filtered image. The decision for an edge point or not is made when the sum of the maximum three pixels values exceeds the sum of the three minimum pixels by a predefined threshold value which is determined experimentally. Figure 2 illustrates the filtered object. Binary Conversion In order to distinguish the object under consideration by its boundary, the image is transferred to a binary image after filtering process. The black level represents anything other than the object in the image and the maximum shiny white level represents the object. This procedure helps in eliminating the surface details of the object and keeps only the boundary information of the object. In some application this conversion may not be used. Figure 3 show binary image after process of filtered object. Edge Detection In this work, the main interest is in the presence of an edge not in its direction, so that Roberts operator (which is a (22) operator) would work well for the purpose of edge detection. This operator indicates the presence of an edge not its direction. If the magnitude of this operator exceeds some predefined value then an edge point will be detected. The magnitude of this operator is defined as: Magnitude =I(x,y)=I(x-1,y-1)+I(x,y-1)-I(x-1,y) ...(2) Robert operator is used here because of its computational efficiency. When the magnitude of this operator exceeds some predefined threshold, then an edge will be detected [5]. Center Point Calculation The fastest method to calculate the center of the object in an image is to imagine a rectangle that surrounds the object. The cross point of the diagonals of the rectangle will be the center point of the object, which can be calculated quickly by calculating its coordinates as follows [6]: 2 21 xx Xc   …(3) 2 21 yy Yc   …(4) Where x1 and x2 are the X coordinates of the first and second vertical lines of the rectangle respectively, which represent the X coordinates of the first left and the first right points of the object in the image respectively. y1 and y2 are the Y coordinates of the first and second horizontal lines of the rectangle respectively which represent the Y coordinates of the highest top and the lowest bottom points of the object respectively. Data Acquisition and Data Normalization The data that represent some predefined features of the object is obtained from the object image. All the representation used is one-dimensional representation in which a single set of variables represents the object under consideration. Figure 2: Filtered Object Figure 3: Binary Conversion
Data Acquisition and Data Normalization
The data that represent some predefined features of the object are obtained from the object image. All the representations used are one-dimensional, in which a single set of variables represents the object under consideration.

The Centroidal Representation [7]
In this type of representation, the center of the object is determined first. Then the distance from that center to the object boundary is calculated every (2π/N) radians from a predefined starting point on the boundary, where N is the number of samples and must equal 2^n (n = 1, 2, ...). Through this work n = 4 and n = 5 were used (N = 16 and N = 32). The variables (samples) obtained in this type of representation are scale variant, so some form of normalization must be introduced [8]. Two types of normalization are suggested. The first type is object-dependent normalization, in which each sample is divided by the sample that follows it, the last sample in the set being normalized by dividing it by the first one, as shown in Figure 4. The second type is to calculate the average value of all samples and then divide every sample by that average, as shown in Figure 5.
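A minimal sketch of the centroidal sampling and the two normalization schemes is given below, assuming the boundary is available as an array of (x, y) edge coordinates. The handling of empty angular bins and ties, and the exact direction of the sample-by-sample normalization, follow the reading adopted above and are assumptions for illustration.

```python
import numpy as np

def centroidal_samples(edge_points, center, n_samples=16):
    """Distance from the object center to the boundary every 2*pi/N radians.

    edge_points: array of shape (M, 2) holding (x, y) boundary coordinates.
    For each angular bin the farthest boundary point is taken; empty bins
    are set to zero (an assumption, not specified in the paper).
    """
    xc, yc = center
    dx = edge_points[:, 0] - xc
    dy = edge_points[:, 1] - yc
    angles = np.mod(np.arctan2(dy, dx), 2 * np.pi)
    dists = np.hypot(dx, dy)
    bins = (angles / (2 * np.pi / n_samples)).astype(int) % n_samples
    samples = np.zeros(n_samples)
    for b in range(n_samples):
        in_bin = dists[bins == b]
        samples[b] = in_bin.max() if in_bin.size else 0.0
    return samples

def normalize_object_dependent(samples):
    """First normalization: each sample divided by the sample that follows it,
    the last one divided by the first (the reading adopted above)."""
    s = np.asarray(samples, dtype=float)
    return s / np.roll(s, -1)

def normalize_by_average(samples):
    """Second normalization: every sample divided by the average of all samples."""
    s = np.asarray(samples, dtype=float)
    return s / s.mean()
```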
Mathematical Models
After the data extraction from the image of the object, the data must be manipulated in order to obtain a suitable Mathematical Model of the object.

a) Autoregression Model
The measured data can be represented, or approximately represented, by a selected Autoregression (AR) model. The parametric spectral estimate is made in a three-step procedure:
1- The model set is selected (AR, all-pole autoregression).
2- The model parameters are estimated from the data.
3- The spectral estimate is formed from the estimated model parameters.
For linear systems with random input, the output spectrum is related to the input spectrum by the factorization model [9]:
Syy(z) = H(z) H*(z) Sxx(z) = |H(z)|^2 Sxx(z) …(5)
Figure 6 shows the schematic diagram of the AR model.

Figure 6: Autoregression Model (block diagram: the white-noise input e(t) and the feedback term (1 - A(q^-1)) y(t) are summed to give y(t))

The model parameter estimation is [10]:
1- Select the representative model (the AR model).
2- Estimate the model parameters from the data; that is, given y(t), estimate A(q^-1), where for the AR model H(q^-1) = 1/A(q^-1) (all-pole model).
The all-pole model is defined as:
A(q^-1) y(t) = e(t)
where y(t) is the measured data and e(t) is a zero-mean white noise sequence with variance Ree. A(q^-1) is an Na-th order polynomial in the backward shift operator. The basic parameter estimation problem for the autoregression in the general case (infinite covariance) is given as the minimum (error) variance solution to:
min J = E{e^2(t)} …(6)
e(t) = y(t) - ŷ(t) …(7)
that is, the minimum error between the real and estimated values, where ŷ(t) is the minimum-variance estimate obtained from the AR model:
y(t) = Σ(i=1..Na) ai y(t-i) + e(t) …(8)
ŷ(t) = Σ(i=1..Na) ai y(t-i) …(9)
ŷ(t) is actually the one-step predictor: it estimates y(t|t-1) from the past data samples. This operation is called linear prediction. The coefficients of the predictor were calculated using Durbin's recursive algorithm. The order of each predictor depends on the number of samples (N) used in calculating the predictor coefficients and equals the first integer greater than ln(N).
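The sketch below shows one way the predictor coefficients of equations (8) and (9) can be estimated with Durbin's (Levinson-Durbin) recursion, using the order rule stated above (the first integer greater than ln(N)). The biased autocorrelation estimate and the function names are assumptions made for illustration; they are not specified in this work.

```python
import math
import numpy as np

def autocorrelation(x, max_lag):
    """Biased autocorrelation estimate r[0..max_lag] (an assumed estimator)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])

def levinson_durbin(r, order):
    """Solve the Yule-Walker equations recursively for the predictor coefficients."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for m in range(1, order + 1):
        acc = r[m] + np.dot(a[1:m], r[m - 1:0:-1])
        k = -acc / err                        # reflection coefficient
        a[1:m] = a[1:m] + k * a[m - 1:0:-1]
        a[m] = k
        err *= (1.0 - k * k)
    return -a[1:], err                        # a_i of equation (9) and residual variance

def ar_model(samples):
    """AR model of a centroidal sample set: order = first integer > ln(N)."""
    n = len(samples)
    order = math.floor(math.log(n)) + 1       # e.g. 3 for N = 16, 4 for N = 32
    r = autocorrelation(samples, order)
    return levinson_durbin(r, order)
```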
b) Wavelet Transform Model
Two forms of the discrete wavelet transform are used throughout this work: the Haar wavelet transform and the Daubechies wavelet transform.
Haar Wavelet Transform: the Haar wavelet is the oldest and simplest wavelet; its basis functions are piecewise constant, so their graphs are made from flat pieces. When dealing with a four-element one-dimensional signal, the Haar wavelet transform of this signal consists of one scaling coefficient and three wavelet coefficients. If these scaling and wavelet coefficients are multiplied by the Haar basis vectors, the original signal vector is retrieved. The four-element Haar basis vectors form a matrix of the following form:

        | 1   1   1   0 |
    H = | 1   1  -1   0 |   …(10)
        | 1  -1   0   1 |
        | 1  -1   0  -1 |

As can be seen, the basis vectors of the Haar transform are orthogonal: the inner product of any basis vector with any vector other than itself is zero. With D the signal vector and S the wavelet transform vector:
D = H S …(11)
H^-1 D = H^-1 H S = S …(12)
S = H^-1 D …(13)
where

           | 0.25   0.25   0.25   0.25 |
    H^-1 = | 0.25   0.25  -0.25  -0.25 |   …(14)
           | 0.5   -0.5    0      0    |
           | 0      0      0.5   -0.5  |

with
S = [S, W1, W2, W3]^T  and  D = [a1, a2, a3, a4]^T …(15)

Throughout this work the fast Haar algorithm is used, which transforms the four-element one-dimensional signal vector into a scaling coefficient and three wavelet coefficients by means of the pyramid algorithm. The components of the signal vector are a1, a2, a3 and a4, and the pyramid algorithm is applied as in Figure 7 [11].
It was found through the mathematical calculations that two ratios give a good normalized (scale-invariant) mathematical representation of the data vector: the ratio of the scaling coefficient to the first wavelet coefficient, and the sum of the wavelet coefficients from the second to the last divided by the first wavelet coefficient. This scheme was tested on different data sets having the same elements but in different orders, and it gave good discrimination between such vectors. The first ratio is:
Ratio1 = S/W1 …(16)
The second ratio is:
Ratio2 = (W2 + ... + Wn-1)/W1 …(17)
where n is the number of elements in the data vector. This representation was extended to 32- and 64-element vectors.
Some malfunctions may appear in the theoretical calculation of the fast Haar transform, for example with the two data vectors (5,4,2,1) and (6,3,2,1): these two vectors have different elements but the same first ratio S/W1, so the second ratio (W2+W3)/W1 must be used to distinguish between them.
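A minimal sketch of the fast (pyramid) Haar decomposition and the two ratios of equations (16) and (17) follows. The recursive averaging and differencing used here is one common form of the pyramid algorithm and is an assumption rather than a transcription of Figure 7; for the example vectors (5,4,2,1) and (6,3,2,1) it reproduces the equal first ratio S/W1 = 2 discussed above.

```python
import numpy as np

def fast_haar(signal):
    """Pyramid (fast Haar) decomposition for a vector whose length is a power
    of two: returns the scaling coefficient S and the wavelet coefficients
    [W1, ..., Wn-1], with W1 the coarsest detail."""
    s = np.asarray(signal, dtype=float)
    coeffs = []
    while len(s) > 1:
        even, odd = s[0::2], s[1::2]
        detail = (even - odd) / 2.0           # wavelet coefficients of this level
        coeffs = list(detail) + coeffs        # coarser levels end up first
        s = (even + odd) / 2.0                # averages passed to the next level
    return s[0], np.array(coeffs)

def haar_ratios(signal):
    """Ratio1 = S/W1 and Ratio2 = (W2 + ... + Wn-1)/W1, equations (16)-(17)."""
    S, W = fast_haar(signal)
    return S / W[0], W[1:].sum() / W[0]

# haar_ratios([5, 4, 2, 1]) and haar_ratios([6, 3, 2, 1]) both give Ratio1 = 2.0,
# while their Ratio2 values differ, as noted in the text.
```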
A modification of the Haar basis vectors is proposed so that such vectors can be distinguished from the first ratio alone. An example of this modification is given below for four elements:

        |  1  -2   1   0 |
    H = |  2   1   0   1 |   …(18)
        |  2   1   0  -1 |
        |  1  -2  -1   0 |

           |  0.1   0.2   0.2   0.1 |
    H^-1 = | -0.2   0.1   0.1  -0.2 |   …(19)
           |  0.5   0     0    -0.5 |
           |  0     0.5  -0.5   0   |

From this four-element modified Haar transform, the scaling coefficient and the three wavelet coefficients are calculated as follows:
S = (a1 + 2(a2 + a3) + a4)/10 …(20)
W1 = (-2(a1 + a4) + a2 + a3)/10 …(21)
W2 = (a1 - a4)/2 …(22)
W3 = (a2 - a3)/2 …(23)
The second proposed modified Haar basis gives the following form for a four-element signal vector:

        | 2   1   0   1 |
    H = | 1   2   2   0 |   …(24)
        | 2   1   0   1 |
        | 1   2   2   0 |

           | 0.1    0.2    0.2    0.1  |
    H^-1 = | 0.2    0.1    0.2    0.1  |   …(25)
           | 0.25   0      0.25   0    |
           | 0      0.5    0      0.5  |

These proposed wavelet transforms were extended to 16- and 64-element wavelets.

Daubechies Wavelet Transform: three conditions must be satisfied when building the Daubechies matrix of basis vectors. These conditions are:
1- The basis vectors must be orthogonal, which means that the inner product of any two different vectors is zero.
2- The difference between the odd and even coefficients is zero.
3- The summation of the coefficients must be bounded (through the references, the selected bounding value is 2).
Three approaches for modified Daubechies wavelet basis vectors are proposed for the 16-element data vector, each using different conditions from the others. The matrix of basis vectors is built from the coefficients d1, ..., d16: two rows contain all sixteen coefficients, in forward order (d1, ..., d16) and in reversed order (d16, ..., d1); two rows contain the eight coefficients d9, ..., d16 in forward and reversed order over half of the columns; four rows contain the four coefficients d13, ..., d16 in forward and reversed order, shifted across the columns; and the remaining eight rows contain the pair (d15, d16), in both orders, shifted across the columns.
The sets of equations that must be fulfilled in order to solve for the d's are:
d13 d15 + d14 d16 = 0 …(26)
d9 d15 + d10 d16 = 0 …(27)
d1 d15 + d2 d16 = 0 …(28)
d5 d15 + d6 d16 = 0 …(29)
d13 d9 + d14 d10 + d11 d15 + d12 d16 = 0 …(30)
d13 d1 + d14 d2 + d3 d15 + d4 d16 = 0 …(31)
d16 d8 + d15 d7 + d14 d6 + d13 d5 + d12 d4 + d11 d3 + d10 d2 + d9 d1 = 0 …(32)
The preceding equations satisfy the following Daubechies condition:
Σ(k=1..N) dk d(k+2m) = 0, for m ≠ 0 …(33)
where N is the number of elements in each of the Daubechies basis vectors.
d16 + d15 + d14 + d13 + d12 + d11 + d10 + d9 + d8 + d7 + d6 + d5 + d4 + d3 + d2 + d1 = 2 …(34)
which satisfies the Daubechies condition on the wavelet basis vector elements:
Σ(k=1..N) dk = 2 …(35)
d16 - d15 + d14 - d13 + d12 - d11 + d10 - d9 + d8 - d7 + d6 - d5 + d4 - d3 + d2 - d1 = 0 …(36)
which satisfies the Daubechies condition on the wavelet basis vector elements:
Σ(k=1..N) (-1)^k k^m dk = 0, for m = 0, 1, 2, 3, ..., (N/2)-1 …(37)
Equation (36) is equation (37) taken with m = 0.
d15 d8 - d16 d7 = 0 …(38)
d15 d4 - d16 d3 = 0 …(39)
d12 d15 - d11 d16 = 0 …(40)
d13 d8 - d14 d7 + d6 d15 - d5 d16 = 0 …(41)
d13 d9 + d14 d10 + d11 d15 + d12 d16 = 0 …(42)
d13 + d14 + d15 + d16 = 2 …(43)
d16 - d15 + d14 - d13 = 0 …(44)
d16 - 2 d15 + 3 d14 - 4 d13 = 0 …(45)
In the first approach to building the Daubechies basis vectors, equation (35) is set equal to 8. In the second approach, equation (45) is replaced by the following non-linear equation:
d16 - 4 d15 + 16 d14 - 64 d13 = 0 …(46)
In the third approach, all the equations are used as they are given. With all the modified Daubechies approaches, the ratio S/W1 is used as the mathematical model of the pose of the object.
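Since the d coefficients are derived analytically, a short numerical check of the stated conditions can still be useful when experimenting with candidate coefficient vectors. The sketch below tests the orthogonality condition of equation (33), the sum condition of equation (35) and the alternating-sum condition of equation (36); it is only a verification aid under the reading of those conditions given above, and the tolerance is an arbitrary choice.

```python
import numpy as np

def check_daubechies_conditions(d, coeff_sum=2.0, tol=1e-9):
    """Numerically check the stated conditions for a candidate vector d = (d1, ..., dN).

    coeff_sum is the bounding value of equation (35): 2 in general,
    set to 8 in the first approach described above.
    """
    d = np.asarray(d, dtype=float)
    N = len(d)
    # Equation (33): sum_k d_k d_{k+2m} = 0 for every m != 0 (non-overlapping terms dropped).
    orthogonal = all(abs(np.dot(d[:N - 2 * m], d[2 * m:])) < tol for m in range(1, N // 2))
    # Equation (35): the coefficients sum to the chosen bounding value.
    sums_ok = abs(d.sum() - coeff_sum) < tol
    # Equation (36), i.e. equation (37) with m = 0: the alternating sum vanishes.
    signs = np.array([(-1) ** k for k in range(1, N + 1)])
    alternating_ok = abs(np.dot(signs, d)) < tol
    return orthogonal, sums_ok, alternating_ok
```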
Distance Measurement
Once the Mathematical Model of the object is built, the recognition process can be carried out. The decision on whether the object is recognized depends on three types of distance measurement:
1- When dealing with a single-element set, such as the variable Ratio or Angle1 or Angle2, etc., the direct absolute difference is used, given as:
Distance = |R1 - R2| …(47)
2- When dealing with a one-dimensional vector of more than one element, such as the Autoregression Model, the Euclidean distance is a good measure of the distance between two vectors. If the two vectors are (a1, a2, ..., an) and (b1, b2, ..., bn), then the Euclidean distance between them is defined as [7]:
Distance = sqrt((a1 - b1)^2 + (a2 - b2)^2 + ... + (an - bn)^2) …(48)
3- For the same preceding case, the non-metric similarity measure is also a good distance measure. It is defined as:
S(A,B) = 1 - (A . B)/(|A| |B|) = 1 - cos(θ) …(49)
This measure gives a high value of cos(θ) when the similarity between the two vectors A and B is high.

Object Model
The first step after the filtering and edge extraction processes is the calculation of four variables. These variables represent some fast-calculated features of the imaged object and are the first four variables in the Mathematical Model of each pose in the Object Model. In the recognition process, each of these variables for the object under consideration is compared with the corresponding variable of a pose in the Object Model; if the distance between two corresponding variables is higher than a predefined value, the recognition process with that pose is terminated. These four variables are Ratio, Angle1, Angle2 and Ratio3. In some applications a fifth variable, the Average Crossing, is calculated. The variables are:
Ratio: the ratio of the length of the line connecting the highest pixel to the lowest pixel of the object image to the length of the line connecting the first left pixel to the first right pixel of the object image. (When more than one pixel appears at any of the four sides, then for the top side the first left pixel is taken, for the bottom side the first right pixel, for the right side the first bottom pixel, and for the left side the first top pixel.)
Angle1: the angle between the line connecting the top pixel to the bottom pixel of the object image and the horizontal axis.
Angle2: the angle between the line connecting the first left pixel to the first right pixel of the object image and the horizontal axis.
Ratio3: the ratio of the length of the line between the object boundaries passing horizontally through the center of the object to that of the line passing vertically through the center.
Average Crossing: one type of scale-invariant representation that may be used in building the Object Model is the Average Crossing, which is used with the centroidal representation of the object. This variable is calculated as follows: the average of the centroidal samples is calculated; then the number of times the centroidal samples switch between values higher and lower than that average is counted, which gives a scale-invariant representation of the object.
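The sketch below illustrates how the Average Crossing described above can be computed and how the distances of equations (47) and (48), together with per-variable maximum allowable distances, could gate the comparison of an object against a stored pose. The data layout, the function names and the reading of equation (49) as 1 - cos(θ) are assumptions for illustration, not this work's implementation.

```python
import numpy as np

def average_crossing(samples):
    """Number of times the centroidal samples cross their own average value;
    scale invariant, since scaling the samples scales the average equally."""
    s = np.asarray(samples, dtype=float)
    above = s > s.mean()
    return int(np.count_nonzero(above[1:] != above[:-1]))

def euclidean_distance(a, b):
    """Equation (48)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.sqrt(np.sum((a - b) ** 2)))

def nonmetric_distance(a, b):
    """Equation (49), read here as 1 - cos(theta): small for similar directions."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_pose(obj, pose, limits, model_limit):
    """Compare an object against one stored pose.

    obj and pose are dicts holding the fast variables ('Ratio', 'Angle1',
    'Angle2', 'Ratio3', 'AverageCrossing') and a 'model' vector; limits maps
    each fast variable to its maximum allowable distance (equation (47)),
    and model_limit bounds the Euclidean distance of equation (48).
    The dictionary layout and key names are hypothetical.
    """
    for name, limit in limits.items():
        if abs(obj[name] - pose[name]) > limit:
            return False                      # early termination for this pose
    return euclidean_distance(obj['model'], pose['model']) <= model_limit
```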
III. Results
Every proposed algorithm in this work has its own sequential database, which represents the Object Model containing the mathematical representation of all 512 possible poses of the object. Different types of desired and undesired object images were tested. When the following maximum allowable distances were used, the algorithms neither failed to recognize the correct objects nor failed to reject the undesired objects.
Maximum allowable distance for variable Ratio = 0.3.
Maximum allowable distance for variable Angle1 = .
Maximum allowable distance for variable Angle2 = 2.
Maximum allowable distance for variable Ratio3 = 0.35.
Maximum allowable distance for variable Average Crossing = 0.
For the Wavelet Transform Model and centroidal representation with 16 samples and average normalization, the maximum allowable distances are shown in Table 1.

Table 1: Maximum allowable distances when using 16 samples and average normalization
Transforms: H.T, 1M.H.T, 2M.H.T, 1M.D.T, 2M.D.T, 3M.D.T (Haar transform, first and second modified Haar transforms, and first, second and third modified Daubechies transforms)
Ratios: S/W1 and (W2+...+Wn)/W1
Distances: 0.15, 0.5, 0.2, 0.2, 0.15, 0.05, 0.07

For the Wavelet Transform Model and centroidal representation with 16 samples and proceeding-sample normalization, the maximum allowable distances are shown in Table 2.

Table 2: Maximum allowable distances when using 16 samples and proceeding-sample normalization
Transforms: H.T, 1M.H.T, 2M.H.T, 1M.D.T, 2M.D.T, 3M.D.T
Ratios: S/W1 and (W2+...+Wn)/W1
Distances: 0.15, 0.5, 0.25, 0.25, 0.25, 0.1, 0.09

For the Wavelet Transform Model and centroidal representation with 32 samples and average normalization, the maximum allowable distances are shown in Table 3.

Table 3: Maximum allowable distances when using 32 samples and average normalization
Transforms: H.T, 1M.H.T, 2M.H.T, 1M.D.T, 2M.D.T, 3M.D.T
Ratios: S/W1 and (W2+...+Wn)/W1
Distances: 0.15, 0.55, 0.25, 0.3, 0.1, 0.09, 0.75

For the Wavelet Transform Model and centroidal representation with 32 samples and proceeding-sample normalization, the maximum allowable distances are shown in Table 4.

Table 4: Maximum allowable distances when using 32 samples and proceeding-sample normalization
Transforms: H.T, 1M.H.T, 2M.H.T, 1M.D.T, 2M.D.T, 3M.D.T
Ratios: S/W1 and (W2+...+Wn)/W1
Distances: 0.2, 0.55, 0.25, 0.35, 0.13, 0.05, 0.085

For the Autoregression Model:
Maximum Euclidean distance for centroidal representation of 16 samples with average normalization = 0.8.
Maximum Euclidean distance for centroidal representation of 16 samples with proceeding-sample normalization = 0.85.
Maximum Euclidean distance for centroidal representation of 32 samples with average normalization = 0.8.
Maximum Euclidean distance for centroidal representation of 32 samples with proceeding-sample normalization = 1.
Maximum non-metric similarity distance for centroidal representation with 16 samples and average normalization = 0.1.
Maximum non-metric similarity distance for centroidal representation with 16 samples and proceeding-sample normalization = 0.15.
Maximum non-metric similarity distance for centroidal representation with 32 samples and average normalization = 0.1.
Maximum non-metric similarity distance for centroidal representation with 32 samples and proceeding-sample normalization = 0.1.

IV. Conclusion
In this work, simple, easily implemented and robust techniques for solving the 3D object recognition problem are proposed. These techniques are Object Model based. The use of an adaptive filter throughout this work keeps the object edges and high-frequency components from being blurred. The fastest method for calculating the data describing the object was the centroidal representation with 16 samples.
When dealing with objects with highly detailed boundaries, this method fails to give a good representation, and 32, 64 or even more samples must be used, which increases the time consumption of the algorithms. The four or five variables used in the mathematical model of each pose in the Object Model are scale and position invariant since they represent ratios and angles. The simple Haar wavelet transform is the fastest method for calculating the Mathematical Model of the object; it does not contain any multiplication or division processes. Its drawback is that it needs to calculate two ratios for its mathematical model. The modified Haar and Daubechies transforms proposed in this work give a good mathematical representation of the object using only the Ratio variable, which is S/W1. The modified Daubechies wavelet coefficients were calculated analytically rather than numerically, which simplifies the calculations.

References
[1]. Chuin-Mu Wang, "The human-height measurement scheme by using image processing techniques", Information Security and Intelligence Control (ISIC), International Conference on, IEEE, 2012.
[2]. Stefan Haas, Computers & Graphics, Volume 18, Issue 2, Pages 193-199, March-April 1994.
[3]. Hongbin Zha, Hideki Nanamegi, Tadashi Nagata, "Recognizing 3-D objects by using a Hopfield-style optimization algorithm for matching patch-based descriptions", Volume 31, Issue 6, Pages 727-741, June 1998.
[4]. Haitham Hasan, S. Abdul-Kareem, "Static hand gesture recognition using neural networks", Artificial Intelligence Review, Volume 41, Issue 2, pp. 147-181, January 2012.
[5]. Duan-Yu Chen, Yuan Ze Univ., Chungli, Taiwan, "An online people counting system for electronic advertising machines", Multimedia and Expo, IEEE International Conference (ICME), 2009.
[6]. Marc van Kreveld, Bettina Speckmann, "On rectangular cartograms", Volume 37, Issue 3, Pages 175-187, August 2007.
[7]. Tariq Zeyad Ismail, Zainab Ibrahim Abood, "3-D Object Recognition Using Multi Wavelet and Neural Network", Journal of Engineering, Number 1, Volume 18, January 2012.
[8]. R. L. Kashyap and R. Chellappa, "Stochastic Models for Closed Boundary Analysis: Representation and Reconstruction", Trans. on Information Theory, Vol. IT-27, No. 5, 1981.
[9]. Yingbo Hua, "Blind methods of system identification", Volume 21, Issue 1, pp. 91-108, 2002.
[10]. James V. Candy, "Discrete Random Signals and Systems", Wiley Online Library, 7 Oct 2005.
[11]. Eloy Ramos-Cordoba, István Mayer, Eduard Matito, Pedro Salvador, "Toward a Unique Definition of the Local Spin", Journal of Chemical Theory and Computation, 2012.