This document presents two numerical methods for evaluating integrals of the linear shape function times the 3D Green's function over a plane triangle. The conventional method splits the integral into an analytical and a numerical part, while the proposed alternative method evaluates the integral fully numerically. The alternative method is conceptually simpler and easier to implement, and numerical results show it achieves similar or better accuracy than the conventional method.
Method of Fracture Surface Matching Based on Mathematical Statistics – IJRESJOURNAL
ABSTRACT: Fracture surface matching is an important part of point cloud registration. In this paper, a method of fracture surface matching based on mathematical statistics is proposed. We construct a new coordinate system for the fractured-surface points and analyze the characteristics of the point cloud in that coordinate system using the theory of mathematical statistics. The general distribution of the points is determined, and the method can establish matching relations among point clouds.
A Novel Cosine Approximation for High-Speed Evaluation of DCT – CSCJournals
This article presents a novel cosine approximation for high-speed evaluation of the DCT (Discrete Cosine Transform) using Ramanujan ordered numbers. The proposed method uses Ramanujan ordered numbers to convert the angles of the cosine function to integers. These angles are then evaluated with a fourth-degree polynomial that approximates the cosine function with an approximation error on the order of 10^-3. The evaluation of the cosine function is explained through the computation of the DCT coefficients. High-speed evaluation at the algorithmic level is measured in terms of the computational complexity of the algorithm. The proposed cosine approximation increases the overhead in the number of adders by 13.6%. The algorithm avoids floating-point multipliers and requires (N/2) log2 N shifts and (3N/2) log2 N - N + 1 additions to evaluate the coefficients of an N-point DCT, thereby improving the speed of their computation.
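The core numerical idea, a fourth-degree polynomial matching the cosine to roughly 10^-3, can be sketched as follows. The polynomial here is a generic least-squares fit, not the specific coefficients from the article:

```python
import numpy as np

# Fit a degree-4 polynomial to cos(x) on [0, pi/2]; by symmetry and
# periodicity this range suffices for any cosine a DCT needs.
x = np.linspace(0.0, np.pi / 2, 1000)
coeffs = np.polyfit(x, np.cos(x), 4)

def cos_approx(t):
    # Evaluate the fitted polynomial (Horner's scheme inside polyval).
    return np.polyval(coeffs, t)

max_err = np.max(np.abs(cos_approx(x) - np.cos(x)))
```

A degree-4 fit over this interval already lands comfortably below the 10^-3 error level the abstract cites; a hardware version would additionally quantize the coefficients to shift-and-add form.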
A Dependent Set Based Approach for Large Graph Analysis – Editor IJCATR
Nowadays, social and computer networks produce graphs with thousands of nodes and millions of edges. Such large graphs are used to store and represent information. Because a graph is a complex data structure, it requires extra processing. Partitioning or clustering methods are used to decompose a large graph. In this paper, a dependent-set-based graph partitioning approach is proposed that decomposes a large graph into subgraphs. It creates uniform partitions with very few edge cuts and also prevents loss of information. The work also covers an approach that handles dynamic updates in a large graph and represents the graph in abstract form.
A Computationally Efficient Algorithm to Solve Generalized Method of Moments ... – Waqas Tariq
Generalized method of moments estimating functions enable one to estimate regression parameters consistently and efficiently. However, they involve one major computational problem: in complex data settings, solving a generalized method of moments estimating function via the Newton-Raphson technique often gives rise to non-invertible Jacobian matrices, making parameter estimation unreliable and computationally inefficient. To overcome this problem, we propose using a secant method based on vector divisions instead of the usual Newton-Raphson technique to estimate the regression parameters. The new method exhibits fewer non-convergent iterations than the Newton-Raphson technique and provides reliable estimates.
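The appeal of a secant update with vector divisions is that no Jacobian is formed or inverted. The toy sketch below applies a componentwise secant iteration to a simple pair of moment conditions; it is an illustrative variant, not the authors' exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.5, size=10_000)

def g(theta):
    # Moment conditions for the mean and variance of the sample.
    mu, var = theta
    return np.array([y.mean() - mu, (y ** 2).mean() - mu ** 2 - var])

# Componentwise secant iteration: the "vector division" replaces the
# Jacobian solve of Newton-Raphson, so no matrix inversion is needed.
theta_prev = np.array([0.5, 0.5])
theta = np.array([1.0, 1.0])
for _ in range(50):
    num = g(theta) * (theta - theta_prev)
    den = g(theta) - g(theta_prev)
    safe = np.abs(den) > 1e-12          # freeze components that converged
    step = np.where(safe, num / np.where(safe, den, 1.0), 0.0)
    theta_prev, theta = theta, theta - step
```

For this near-linear system the iteration converges in a handful of steps to the sample moments.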
An efficient hardware logarithm generator with modified quasi-symmetrical app... – IJECEIAES
This paper presents a low-error, low-area, FPGA-based hardware logarithm generator for digital signal processing systems that require high-speed, real-time logarithm operations. The proposed logarithm generator employs a modified quasi-symmetrical approach for an efficient hardware implementation. An error analysis and implementation results are also presented and discussed. The achieved results show that the proposed approach can reduce both the approximation error and the hardware area compared with traditional methods.
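The flavor of such hardware logarithm generators can be seen in Mitchell's classic piecewise-linear log2, which approximates the mantissa's logarithm by the mantissa itself; the paper's quasi-symmetrical method is a lower-error refinement in this spirit, not reproduced here:

```python
import math

def mitchell_log2(x: float) -> float:
    # log2(2**e * (1 + m)) is approximated by e + m; in hardware, e comes
    # from a leading-one detector and m is simply the remaining bits.
    e = math.floor(math.log2(x))
    m = x / (2.0 ** e) - 1.0      # fractional part, 0 <= m < 1
    return e + m

# Exact at powers of two; worst near m ~ 0.44 where the error
# approaches 0.0861, which is what refinements like the paper's reduce.
errors = [abs(mitchell_log2(v) - math.log2(v)) for v in (1.1, 1.44, 3.0, 10.0, 100.0)]
```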
COMPARISON OF VOLUME AND DISTANCE CONSTRAINT ON HYPERSPECTRAL UNMIXING – csandit
Algorithms based on the minimum volume constraint or the sum-of-squared-distances constraint are widely used in hyperspectral image unmixing, yet few works compare the two. In this paper, a comparative analysis of the two algorithms is presented to evaluate the performance of the two constraints under different situations. The comparison covers three aspects: flatness of the simplex, initialization effects, and robustness to noise. The analysis provides a guideline on which constraint should be adopted for a given task.
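The "volume" in the minimum-volume constraint is the volume of the simplex spanned by the endmembers, which can be computed in any ambient dimension from the Gram determinant. This is a generic sketch, independent of either unmixing algorithm:

```python
import math
import numpy as np

def simplex_volume(E: np.ndarray) -> float:
    """Volume of the simplex whose vertices are the columns of E.

    Uses the Gram determinant: V = sqrt(det(D^T D)) / (p - 1)!,
    where D holds the edge vectors e_i - e_1.
    """
    D = E[:, 1:] - E[:, :1]
    p_minus_1 = D.shape[1]
    return math.sqrt(abs(np.linalg.det(D.T @ D))) / math.factorial(p_minus_1)

# Sanity check: the unit right triangle in the plane has area 1/2.
tri = np.array([[0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0]])
```

Minimum-volume unmixing shrinks this quantity subject to the data lying inside the simplex, which is why simplex flatness is one of the comparison axes above.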
Chapter summary and solutions to end-of-chapter exercises for the book "Data Visualization: Principles and Practice" by Alexandru C. Telea
This chapter lays out a discussion of discrete data representation and continuous data sampling and reconstruction. Fundamental differences between continuous (sampled) and discrete data are outlined. It introduces basis functions, discrete meshes, and cells as means of constructing piecewise-continuous approximations from sampled data. One learns about the various types of datasets commonly used in visualization practice: their advantages, limitations, and constraints. The chapter gives an understanding of the trade-offs involved in choosing a dataset for a given visualization application, while focusing on the efficiency of implementing the most commonly used datasets, presented with cell types in d ∈ [0, 3] dimensions.
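The sampling-and-reconstruction pipeline the chapter describes can be sketched with the simplest basis, linear "hat" functions on a uniform 1-D mesh. This is an illustrative sketch, not code from the book:

```python
import numpy as np

# Sample a continuous signal at the mesh nodes...
nodes = np.linspace(0.0, 1.0, 11)
samples = np.sin(2 * np.pi * nodes)

def reconstruct(x):
    # ...then rebuild a piecewise-linear approximation. np.interp evaluates
    # exactly the hat-basis expansion sum_i f_i * phi_i(x): each phi_i is 1
    # at node i, 0 at the other nodes, and linear in between.
    return np.interp(x, nodes, samples)

# For smooth signals the reconstruction error shrinks as O(h^2).
xs = np.linspace(0.0, 1.0, 1001)
err = np.max(np.abs(reconstruct(xs) - np.sin(2 * np.pi * xs)))
```

Higher-dimensional datasets generalize this by replacing intervals with the cell types (triangles, quads, tetrahedra, hexahedra) the chapter catalogs.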
Single to multiple kernel learning with four popular SVM kernels (survey) – eSAT Journals
Abstract: Machine learning and pattern recognition have gained great attention recently because of the variety of applications that depend on machine learning techniques; these techniques can make many processes easier and reduce the amount of human interference (more automation). This paper investigates four of the most popular kernels used with Support Vector Machines (SVM) for classification. The survey uses Linear, Polynomial, Gaussian, and Sigmoid kernels, each on its own and all together as an un-weighted sum of kernels, a form of Multi-Kernel Learning (MKL), on eleven benchmark datasets with different types of features and different numbers of classes; some are therefore used for two-class (binary) classification and some for multi-class classification. The Shogun machine learning toolbox is used with the Python programming language to perform the classification and to handle pre-classification operations such as feature scaling (normalization). Cross-validation is used to find the best-performing of the suggested kernel methods. To compare the final results, two performance measures are used: classification accuracy and the Area Under the Receiver Operating Characteristic curve (AUC). The first part of the paper covers the general basics of SVM, the kernels used, and the classification parameters; experimental details are then explained step by step, followed by experimental results with histograms showing the differences in accuracy and AUC. Finally, the best methods obtained are applied to remote sensing datasets and the results are compared to state-of-the-art work published in the field on the same data.
Keywords: Machine Learning, Classification, SVM, MKL, Cross-Validation, ROC
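The un-weighted kernel sum used as the MKL combination amounts to adding the four Gram matrices entrywise. The NumPy sketch below shows just that construction; the parameter names gamma, degree, and coef0 are illustrative defaults, and the survey itself used Shogun rather than this sketch:

```python
import numpy as np

def summed_gram(X, gamma=0.1, degree=3, coef0=1.0):
    # Un-weighted sum of the four surveyed kernels on one data matrix X.
    lin = X @ X.T                                    # linear kernel
    poly = (gamma * lin + coef0) ** degree           # polynomial kernel
    sq = np.sum(X ** 2, axis=1)
    rbf = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * lin))  # Gaussian
    sig = np.tanh(gamma * lin + coef0)               # sigmoid kernel
    return lin + poly + rbf + sig

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
K = summed_gram(X)   # pass as a precomputed kernel to any kernel SVM
```

One caveat worth noting: the sigmoid kernel is not positive semidefinite for all parameter choices, so the summed matrix inherits that property only when the sigmoid term behaves.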
The Geometric Characteristics of the Linear Features in Close Range Photogram... – IJERD Editor
The accuracy of photogrammetry can be increased with better instruments, careful geometric characterization of the system, more observations, and rigorous adjustment. The main objective of this research is to develop a new mathematical model of two types of linear features (straight lines and spline curves) and to relate linear features in object space to image space using the Direct Linear Transformation (DLT). The second main objective is to study some geometric characteristics of the system when linear features are used in close-range photogrammetric reduction. The accuracy improvement is evaluated with assessment criteria: the positional discrepancies between the photogrammetrically calculated object-space coordinates of check points and the original check points of the test field, in terms of their respective RMS error values, together with the resulting least-squares estimated covariance matrices of the check points' object-space coordinates. To this end, experiments are performed with synthetic images. The results show significant improvements in the positional accuracy of close-range photogrammetry when the number of start, end, and interior nodes on the straight lines and spline curves is increased, with certain specifications regarding the location and magnitude of each type.
IMAGE REGISTRATION USING ADVANCED TOPOLOGY PRESERVING RELAXATION LABELING – csandit
This paper presents a relaxation labeling technique with newly defined compatibility measures for solving the general non-rigid point matching problem. A point matching method using relaxation labeling exists in the literature; however, its compatibility coefficients take only the binary values zero or one, depending on whether a point and a neighboring point have corresponding points. Our approach generalizes this relaxation labeling approach: the compatibility coefficients take n discrete values that measure the correlation between edges. We use a log-polar diagram to compute the correlations. Through simulations, we show that this topology-preserving relaxation method improves matching performance significantly compared with other state-of-the-art algorithms such as shape context, thin-plate-spline robust point matching, robust point matching by preserving local neighborhood structures, and coherent point drift.
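A log-polar diagram of the kind used above can be built as a shape-context-style histogram of a point's neighbors, binned into log-spaced radial shells and angular sectors. The bin counts and radii here are illustrative choices, not the paper's settings:

```python
import numpy as np

def log_polar_histogram(points, center, n_r=5, n_theta=12, r_min=0.125, r_max=2.0):
    # Bin neighboring points into log-spaced radial shells x angular sectors.
    d = points - center
    r = np.linalg.norm(d, axis=1)
    theta = np.arctan2(d[:, 1], d[:, 0]) % (2 * np.pi)
    r_edges = np.logspace(np.log10(r_min), np.log10(r_max), n_r + 1)
    r_bin = np.clip(np.searchsorted(r_edges, r) - 1, 0, n_r - 1)
    t_bin = np.minimum((theta / (2 * np.pi) * n_theta).astype(int), n_theta - 1)
    hist = np.zeros((n_r, n_theta))
    np.add.at(hist, (r_bin, t_bin), 1.0)   # accumulate one count per point
    return hist

rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(200, 2))
h = log_polar_histogram(pts, center=np.zeros(2))
```

Comparing two such histograms (e.g., by normalized correlation) yields the multi-valued compatibility coefficients the method relies on.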
Applied Numerical Methods Curve Fitting: Least Squares Regression, Interpolation – Brian Erandio
Corrects the misspelling of "Lagrange".
Credits to the owners of the background pictures (Fantasmagoria01, eugene-kukulka, vooga, etc.); I do not own all of the pictures used, and apologies to those who are not tagged.
The presentation contains topics from Applied Numerical Methods with MATLAB for Engineers and Scientists, 6th and International Editions.
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of engineering and technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in the fields of engineering and technology. We bring together scientists, academicians, field engineers, scholars, and students of related fields of engineering and technology.
This paper presents a geometric approach, called the Map Maker's algorithm, to the coordinatization of a measured space. The measured space is defined by a distance matrix for sites, which are reordered and mapped to points in a two-dimensional Euclidean space. The algorithm is tested on distance matrices created from 2D random point sets, and the resulting coordinatizations are compared with the original point sets for confirmation. Tolerance levels are set to deal with the cumulative numerical errors in the processing of the algorithm. The final point sets are found to be the same up to translations, reflections, and rotations, as expected. The algorithm also serves as a method for projecting higher-dimensional data to 2D.
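The coordinatization-from-distances task can be sketched with classical multidimensional scaling, which recovers a 2-D embedding from the distance matrix alone. The Map Maker's algorithm proceeds differently, via site reordering, but targets the same result up to rigid motions:

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.uniform(size=(20, 2))                                # original sites
D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)   # distance matrix

# Classical MDS: double-center the squared distances, then take the
# top-2 eigenpairs of the resulting Gram matrix as coordinates.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n                          # centering matrix
B = -0.5 * J @ (D ** 2) @ J
w, V = np.linalg.eigh(B)                                     # ascending order
coords = V[:, -2:] * np.sqrt(np.maximum(w[-2:], 0.0))

# Pairwise distances are preserved, so the embedding matches the original
# sites up to translation, rotation, and reflection.
D2 = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
```

For genuinely 2-D Euclidean distance data only two eigenvalues of B are nonzero, which is also why the same machinery projects higher-dimensional data down to 2D.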
CONCURRENT TERNARY GALOIS-BASED COMPUTATION USING NANO-APEX MULTIPLEXING NIBS... – VLSICS Design
Novel realizations of concurrent computation utilizing three-dimensional lattice networks and their corresponding carbon-based field-emission controlled switching are introduced in this article. The formal ternary nano-based implementation draws on recent findings in field emission and nano applications, including carbon nanotubes and nanotips, for three-valued lattice computing via field-emission methods. The presented work implements multi-valued Galois functions using concurrent nano-based lattice systems, which employ two-to-one controlled switching via the carbon-based field-emission devices (nano-apex carbon fibers and carbon nanotubes) presented in the first part of the article. The introduced computational extension using many-to-one carbon field-emission devices will be further applied to congestion-free architectures in the third part of the article. These emerging nano-based technologies form important directions in low-power, compact-size, regular lattice realizations, in which carbon-based devices switch at lower cost and more reliably, using much less power, than silicon-based devices. Applications include low-power VLSI circuit design for signal processing and the control of autonomous robots.
A Weighted Duality based Formulation of MIMO Systems – IJERA Editor
This work is based on the modeling and analysis of a multiple-input multiple-output (MIMO) downlink communication system. We build on recent work on ratios of quadratic forms to formulate the weight matrices of the quadratic norm in a duality structure. This enables exact solutions for a MIMO system operating under Rayleigh fading channels. We outline a couple of scenarios, dependent on the structure of the eigenvalues, to investigate the system behavior. The results are validated by Monte Carlo simulations.
Improvement of a tool for simulating propagation in the urban micro-cellular environment of the GSM mobile communication network (DCS 1800 standard) in the city of Bern, Switzerland.
The Effects of Mutual Coupling and Transformer Connection Type on Frequency R... – ijsrd.com
In this paper, a novel harmonic modeling technique based on the concept of multi-terminal components is presented and applied to frequency scan analysis in multiphase distribution systems. The proposed technique gathers same-phase buses and elements into separate groups (the phase grouping technique, PGT) and uses multi-terminal components to model a three-phase distribution system. Using multi-terminal components and PGT, distribution system elements, particularly lines and transformers, can be modeled effectively, even in the harmonic domain. The technique is applied to a test system for frequency scan analysis in order to show the frequency response of the test system under single- and three-phase conditions. The effects of mutual coupling and transformer connection type on three-phase frequency scan responses are then analyzed for symmetrical and asymmetrical line configurations.
International Journal of Engineering Research and Applications (IJERA) is an open-access, online, peer-reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nanotechnology & Science, Power Electronics, Electronics & Communication Engineering, Computational Mathematics, Image Processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low-Power VLSI Design, etc.
ROSSI AND CULLEN: LINEAR-SHAPE FUNCTION TIMES 3-D GREEN’S FUNCTION ON PLANE TRIANGLE 399
Fig. 1. Plane triangle T, local coordinate system (u, v), and auxiliary
systems (ua, va) and (, ).
where and are orthonormal vectors. For convenience,
(1)–(3) can be grouped in a more compact form as follows:
(6)
For completeness, we observe that the following chain of
inequalities holds:
where the circular domain is centered at the observation point
and has radius equal to the maximum distance between the
observation point and the three vertices of the triangle.
Thus, the integral exists and is finite; however, care must
be taken in its numerical evaluation due to the presence of
the singularity in the integrand.
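The observation above can be checked numerically. The following sketch is our own illustration, not the authors' code, and it uses an assumed geometry: the unit right triangle with the observation point at the vertex (0, 0), for which the static integral ∫_T dS/R has the closed form √2 ln(1 + √2). A naive midpoint rule on a uniform refinement stays finite and converges to this value, but only slowly, which is precisely why the singularity calls for careful treatment:

```python
import math

def singular_integral_midpoint(n):
    """Midpoint-rule estimate of  I = int_T dS / R  over the unit right
    triangle T = {(x, y): x >= 0, y >= 0, x + y <= 1}, with the
    observation point at the vertex (0, 0), so R = sqrt(x^2 + y^2).
    Sub-triangle centroids never coincide with the singular point,
    so the sum is always finite."""
    h = 1.0 / n
    area = 0.5 * h * h                     # area of each small sub-triangle
    total = 0.0
    for i in range(n):
        for j in range(n - i):
            # "lower" sub-triangle, vertices (ih, jh), ((i+1)h, jh), (ih, (j+1)h)
            cx = (3 * i + 1) * h / 3.0
            cy = (3 * j + 1) * h / 3.0
            total += area / math.hypot(cx, cy)
            # "upper" sub-triangle, when it lies inside T
            if j < n - i - 1:
                cx = (3 * i + 2) * h / 3.0
                cy = (3 * j + 2) * h / 3.0
                total += area / math.hypot(cx, cy)
    return total

exact = math.sqrt(2.0) * math.log(1.0 + math.sqrt(2.0))  # closed form, ~1.24645
```

The error is dominated by the cells adjacent to the singular point and decays only like O(1/n), which motivates the analytical or singularity-cancelling treatments discussed next.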
A. Conventional Method
The typical approach [3] to this calculation is to evaluate
two separate integrals, as follows:
(7)
(8)
The integral (7) can be calculated analytically; the resulting
expression (see, for instance, [3]) is
(9)
where the summation index runs over the sides of the triangle
and the functions appearing in (9) depend on geometrical
parameters, which are illustrated in Fig. 1.
In fact, a simpler expression can be obtained by
manipulating fundamental results from [5].
Since we have not seen this formulation published, we present
it here for convenience. We refer to the coordinate system
, centered at , . Let us define for each side of
the characteristic coefficients , , and of the line
containing the side itself so that either the equation
(10)
or the equation (in the case of a “vertical” line )
(11)
is satisfied by the coordinate pairs of the endpoints of .
Furthermore, referring to Fig. 1, let and be the angles
associated with the endpoints of . The reference axis for
measuring the angles is chosen to be with angles increasing
in the counterclockwise direction. The relevant angles are
shown in Fig. 1. The two subscripts attached to each angle
refer to the two endpoints of the corresponding side; the
endpoint indices are assigned cyclically (i.e., modulo 3) over
the three vertices of the triangle. It can then
be shown that
(12)
where we have introduced the following triplet of functions
of the angle :
with
400 IEEE TRANSACTIONS ON MICROWAVE THEORY AND TECHNIQUES, VOL. 47, NO. 4, APRIL 1999
when (10) is satisfied or
with
when (11) is satisfied. We notice that when , the th
term of the sum given in (12) is zero by definition. The results
achieved using (12) are identical to those obtained using (9).
The integral (8) can be evaluated numerically
without difficulty, since its integrand is bounded over the
integration domain. As described in [3], the numerical
multiple integration of a bounded function over a triangular
domain can always be deduced from an integral of the
following type:
(13)
where the so-called triangle area coordinates are related to
the Cartesian coordinates by a linear transformation. The
numerical integration of (13) can be performed by several
methods. For example, a generalized product rule (for instance,
combination of two Gaussian rules) can be applied as follows:
(14)
or, alternatively, an -point quadrature formula, as reported in
[6], can be considered as follows:
(15)
Thus, the final result is given by
(16)
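By way of illustration, a fixed-point rule of the type referred to in (15) can be written compactly in area coordinates. The sketch below is our own (the paper's implementations were in C) and uses the classical seven-point, degree-five rule whose weights and points are standard in the finite-element literature (cf. [6]):

```python
# Classical seven-point, degree-5 quadrature rule in triangle area
# (barycentric) coordinates.  Weights are normalized to sum to 1 and
# are scaled by the triangle area in quad7 below.
W = [0.225] + [0.125939180544827] * 3 + [0.132394152788506] * 3
A1, B1 = 0.797426985353087, 0.101286507323456
A2, B2 = 0.059715871789770, 0.470142064105115
POINTS = [(1/3, 1/3, 1/3),
          (A1, B1, B1), (B1, A1, B1), (B1, B1, A1),
          (A2, B2, B2), (B2, A2, B2), (B2, B2, A2)]

def triangle_area(p0, p1, p2):
    return 0.5 * abs((p1[0]-p0[0])*(p2[1]-p0[1]) - (p2[0]-p0[0])*(p1[1]-p0[1]))

def quad7(f, p0, p1, p2):
    """Integrate a bounded f(x, y) over the triangle (p0, p1, p2):
    exact for polynomials up to degree 5."""
    area = triangle_area(p0, p1, p2)
    total = 0.0
    for w, (l0, l1, l2) in zip(W, POINTS):
        # map area coordinates to Cartesian coordinates
        x = l0*p0[0] + l1*p1[0] + l2*p2[0]
        y = l0*p0[1] + l1*p1[1] + l2*p2[1]
        total += w * f(x, y)
    return area * total
```

Such a rule is only appropriate for the bounded integrand of (8); applying it directly to the singular integrand would defeat the splitting.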
In summary, to evaluate the integral, it is first split into two
parts: the first part is evaluated analytically, using either
(9) or, alternatively, (12), and the second is evaluated
numerically.
B. Alternative Method
In the previous section, we outlined Graglia’s procedure
for the evaluation of the integral and presented a modified
formulation of the analytical part of that method which we find useful.
In this section, we develop an alternative approach for
the evaluation of the integral (6), which we feel is both
conceptually simpler and also easier to apply. We also begin
by splitting the integral into two parts, but in our case, the
analytical part is zero. Observe that
with
(17)
(18)
is the intersection between a disk of radius centered at
the observation point , and . It is straightforward to
obtain
where the function , is defined as
and if is not located at a vertex; otherwise, is
the angle between the two sides of meeting at the vertex
where falls.
The integral (17) may now be expressed as
(19)
where the sum over extends to the three triangles
formed by the observation point and the endpoints of :
(see Fig. 1). is the domain given by
the intersection between and . We can now write
(20)
where
(21)
where is the distance between the integration and observation
points, the observation point being taken as the origin of a
polar coordinate system ( is the reference axis, as previously stated).
is the distance of any point of from the observation point
and is a function of the characteristic coefficients and
in (10) as follows:
or the coefficient in (11) as follows:
TABLE I
NUMERICAL AND REFERENCE RESULTS FOR FIVE DIFFERENT OBSERVATION POINTS
with and defined in the previous section. Once again,
we observe that if , then the terms associated with
in (21) are zero. The integrals in (21) are easily evaluated.
The results are
where
Now, the final step in the solution of the original problem is the
evaluation of the three integrals involving the functions ,
, and over the domains .
This can be achieved numerically employing, for example, a
Gaussian quadrature formula
(22)
where and are, respectively, the
sets of weights and abscissas adopted for each .
The first remark about the alternative approach presented
here relates to the integrand functions. For an accurate
implementation of the moment method, the longest side of a
triangular patch on which a basis current function is defined
must be a small fraction of the wavelength. Consequently,
the integration domains correspond to a relatively small
portion of the period of the oscillating factor in the integrand.
This guarantees sufficiently smooth behavior of the functions
to be integrated numerically, which, in turn, yields a closer
approximation.
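This smoothness argument can be checked directly: over an interval spanning a tenth of a period, a low-order Gauss rule already reproduces the oscillating factor essentially to machine precision. A minimal sketch (ours; the wavelength value is an arbitrary assumption, since only the ratio of patch size to wavelength matters):

```python
import cmath
import math
import numpy as np

wavelength = 1.0                    # assumed; only L / wavelength matters
k = 2.0 * math.pi / wavelength
L = wavelength / 10.0               # patch size: a tenth of the period

# 5-point Gauss-Legendre approximation of  int_0^L exp(-j k r) dr
x, w = np.polynomial.legendre.leggauss(5)
r = 0.5 * L * (x + 1.0)             # map nodes from [-1, 1] to [0, L]
approx = 0.5 * L * np.dot(w, np.exp(-1j * k * r))

# Closed form of the same integral
exact = (1.0 - cmath.exp(-1j * k * L)) / (1j * k)
```

With only five points the quadrature error is far below any practical accuracy target, illustrating why the one-dimensional Gaussian scheme of (22) converges so quickly on electrically small patches.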
Essentially, the alternative method presented in this section
is fully numerical since the analytical part evaluates to zero.
The problem of calculating the original integral is now reduced
to that of evaluating three triplets of one-dimensional integrals,
one triplet for each side of the triangle.
In the conventional approach outlined in the previous sections,
the integral is evaluated as the sum of three triplets of integrals
calculated analytically and one triplet of multiple integrals
carried out numerically. Thus, the alternative technique is
certainly simpler than the usual one: the difficulty of handling
numerical integration decreases in passing from two dimensions
to one.
Another factor to be taken into account is the accuracy of
the two methods. In the conventional approach, the numerical
part can be evaluated by employing a product rule or a simpler
quadrature formula with fewer sampling points, the latter being
faster but less accurate than the product rule. With our
alternative method, by contrast, the integral is calculated
straightforwardly by a one-dimensional Gaussian scheme, with
satisfactory results.
It is also worth noting that in the implementation of the
moment method, one faces the double integration of the type
where is the vector belonging to given by (5) and,
similarly, is the vector belonging to given by (4). is
the vector associated with one of the vertices of as follows:
Thus, introducing the vector defined as
we can write the following equation:
Recalling that the inner integral can be calculated as described
above for every pair of coordinates, it is now clear that the
integral given in (23) can be easily evaluated via a numerical
scheme of the following type:
Finally, it is straightforward to generalize the method presented
here to planar polygonal domains; in fact, a planar polygon
can always be represented as the union of disjoint plane
triangles.
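For a convex polygon, one such disjoint decomposition is the triangle fan rooted at a fixed vertex, after which any per-triangle integration routine extends to the polygon by summation. A minimal sketch (ours; the vertex list `square` is only an assumed example), verified on the trivial integrand 1, for which the sum of per-triangle results must reproduce the polygon area:

```python
def fan_triangles(poly):
    """Split a convex polygon (vertex list, in order) into the N-2
    disjoint triangles of the fan rooted at the first vertex."""
    return [(poly[0], poly[i], poly[i + 1]) for i in range(1, len(poly) - 1)]

def tri_area(p0, p1, p2):
    return 0.5 * abs((p1[0]-p0[0])*(p2[1]-p0[1]) - (p2[0]-p0[0])*(p1[1]-p0[1]))

def shoelace_area(poly):
    """Reference polygon area by the shoelace formula."""
    n = len(poly)
    s = sum(poly[i][0]*poly[(i+1) % n][1] - poly[(i+1) % n][0]*poly[i][1]
            for i in range(n))
    return 0.5 * abs(s)

# An integral over the polygon is the sum of the per-triangle integrals;
# with the integrand 1 this reduces to the area identity checked below.
square = [(0.0, 0.0), (2.0, 0.0), (2.0, 1.0), (0.0, 1.0)]
```

Non-convex polygons require a genuine triangulation (e.g., ear clipping) rather than a fan, but the summation principle is unchanged.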
C. Numerical Results
The two techniques described in the previous section have
been compared for a triangle having the following features:
, , (see Fig. 1 and
m). Table I displays five different results obtained for
five different observation points belonging to the line
. The first two rows refer to two different numerical
evaluations of the integral defined in (22). The next two rows
report results obtained by implementing (16), with the
numerical integral defined in (8) evaluated according to
two different algorithms: a seven-point rule described in [6]
and a product Gaussian rule [see (14)]. Each
algorithm has been implemented in the C programming
language. Finally, the last row shows the reference results
returned by the Mathematica 3.0 software
package.
III. CONCLUSIONS
An alternative method for the numerical integration of
a linear-shape function times a 3-D Green’s function on a
planar triangle has been presented. The alternative method has
been carefully compared with the conventional approach described
by Graglia [3] and shown to possess certain advantages.
The technique presented is suitable for an accurate evaluation
of the impedance matrix elements in 3-D electromagnetic
scattering problems.
REFERENCES
[1] S. M. Rao, D. R. Wilton, and A. W. Glisson, “Electromagnetic scattering
by surfaces of arbitrary shape,” IEEE Trans. Antennas Propagat., vol.
AP-30, pp. 409–418, May 1982.
[2] D. R. Wilton, S. M. Rao, A. W. Glisson, D. H. Schaubert, O. M. Al-
Bundak, and C. M. Butler, “Potential integrals for uniform and linear
source distributions on polygonal and polyhedral domains,” IEEE Trans.
Antennas Propagat., vol. AP-32, pp. 276–281, Mar. 1984.
[3] R. D. Graglia, “On the numerical integration of the linear shape
functions times the 3-D Green’s function or its gradient on a plane
triangle,” IEEE Trans. Antennas Propagat., vol. 41, pp. 1448–1455,
Oct. 1993.
[4] R. Klees, “Numerical calculation of weakly singular surface integrals,”
J. Geodesy, vol. 70, pp. 781–797, 1996.
[5] I. S. Gradshteyn and I. M. Ryzhik, Tables of Integrals, Series and
Products. New York: Academic, 1980.
[6] C. T. Reddy and D. J. Shippy, “Alternative integration formulae for
triangular finite elements,” Int. J. Numer. Methods Eng., vol. 17, pp.
133–139, 1981.
Luca Rossi was born in Carrara, Italy, in 1971. He
received the Laurea (Doctor) degree in telecommu-
nications engineering from the University of Pisa,
Pisa, Italy, in 1996, and is currently working toward
the Ph.D. degree at Trinity College, Dublin, Ireland.
Since September 1996, he has been with the De-
partment of Electrical and Electronic Engineering,
Trinity College. His research interests include com-
putational electromagnetics and its application to
high-frequency radio-wave propagation predictions.
Peter J. Cullen (M’95) has been a Lecturer of engi-
neering science at Trinity College, Dublin, Ireland,
since 1990. His research interests are mainly in
the field of electromagnetic-wave propagation and
scattering applied to radio communications. He is
a Director of Teltec Ireland, Trinity College, which
is an Irish Government program in advanced com-
munications technology. (Further details regarding
Teltec Ireland may be obtained from the URL:
http://www.mee.tcd.ie/mobile_radio.) He is on the
management committee of the European research
initiatives Cost 259 (wireless flexible personalized communications) and Cost
255 (satellite propagation at Ku-band and above).