The Map Maker algorithm, which converts survey data into geometric data with 2-dimensional Cartesian
coordinates, has been previously published. Analysis of the performance of this algorithm is continuing. The
algorithm is suitable for generating 2D maps, and it would be helpful to have it generalized to
generate 3D and higher-dimensional coordinates. The trigonometric approach of the Map Maker algorithm
does not extend well into higher dimensions; however, this paper reports on an algebraic approach which
solves the problem. A similar algorithm, called the Coordinatizator algorithm, has been published which
converts survey data defining a higher-dimensional space of measured sites into the lowest-dimensional
coordinatization accurately fitting the data. The Coordinatizator algorithm is therefore not a
projection transformation, whereas the n-dimensional Map Maker algorithm is.
A landing gear assembly consists of various components, viz. the lower side stay, upper side stay, locking actuators, extension actuators, tyres, and locking pins, to name a few. Each unit has a specific operation to perform; in this project the main unit being studied is the lower brace. The primary objective is to analyse the stresses in the elements of the lower brace unit using both the strength-of-materials (RDM) method and the Finite Element Method (FEM) and to compare the two. Using the obtained data, a suitable material is proposed for the component. The approach used here is to study the overall behaviour of the element by taking up each aspect and finally summing up the total effect of all the aspects on the functioning of the element.
An efficient VLSI architecture for barrel distortion correction in surveillance camera images is presented. The distortion correction model is based on the least-squares estimation method. To reduce the computing complexity, an odd-order polynomial is used to approximate the back-mapping expansion polynomial. By algebraic transformation, the approximated polynomial takes a monomial form which can be evaluated by Horner's algorithm. The proposed VLSI architecture achieves a frequency of 218 MHz with 1490 logic elements using 0.18 μm technology. Compared with previous techniques, the circuit reduces the hardware cost and the memory usage requirement.
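The back-mapping step described above reduces to evaluating an odd-order polynomial, which Horner's rule does with one multiply and one add per coefficient. A minimal sketch in Python; the distortion coefficients k1, k3, k5 are purely illustrative assumptions, not values from the paper:

```python
def horner(coeffs, x):
    """Evaluate a polynomial via Horner's rule.

    coeffs are given highest order first:
    [a_n, ..., a_1, a_0] represents a_n*x^n + ... + a_1*x + a_0.
    """
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

# Back-mapping of a distorted radius r_d to an undistorted radius r_u,
# using illustrative (not measured) odd-order coefficients:
# r_u = k5*r^5 + k3*r^3 + k1*r  ->  coefficients [k5, 0, k3, 0, k1, 0]
k1, k3, k5 = 1.0, 0.08, 0.002
r_d = 0.5
r_u = horner([k5, 0.0, k3, 0.0, k1, 0.0], r_d)
```

Only the odd powers carry nonzero coefficients, matching the odd-order approximation; the zero coefficients keep the Horner loop uniform, which is what makes the hardware datapath simple.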
Radix-3 Algorithm for Realization of Type-II Discrete Sine Transform (IJERA Editor)
In this paper, a radix-3 algorithm for computation of the type-II discrete sine transform (DST-II) of length N =
3^m (m = 1, 2, ...) is presented. The DST-II of length N can be realized from three DST-II sequences, each of
length N/3. A block diagram for computation of the radix-3 DST-II algorithm is given. A signal flow graph for
the DST-II of length N = 3^2 = 9 is shown to clarify the proposed algorithm.
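The radix-3 decomposition itself is not reproduced here, but the transform it computes can be pinned down with a direct O(N²) reference implementation. The sketch below assumes one common DST-II convention (the one used by SciPy, y[k] = 2 Σ x[n] sin(π(2n+1)(k+1)/(2N))); the paper may use a different normalization:

```python
import math

def dst2(x):
    """Direct O(N^2) type-II discrete sine transform.

    Uses the convention y[k] = 2 * sum_n x[n] * sin(pi*(2n+1)*(k+1)/(2N)),
    one of several DST-II normalizations in the literature.
    """
    N = len(x)
    return [2 * sum(x[n] * math.sin(math.pi * (2 * n + 1) * (k + 1) / (2 * N))
                    for n in range(N))
            for k in range(N)]

# For a unit impulse of length 3, y[k] = 2*sin(pi*(k+1)/6):
y = dst2([1.0, 0.0, 0.0])
```

A fast radix-3 algorithm such as the one in the paper would split the length-N input into three length-N/3 sub-transforms and combine them, reducing the cost from O(N²) to O(N log N); this direct form serves only as a correctness oracle.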
Image Interpolation Techniques with Optical and Digital Zoom Concepts - semina... (mmjalbiaty)
Full details about spatial and intensity resolution, optical and digital zoom concepts, and the three common interpolation algorithms for implementing zoom in image processing.
THE EVIDENCE THEORY FOR COLOR SATELLITE IMAGE COMPRESSION (cscpconf)
The color satellite image compression technique by vector quantization can be improved either
by acting directly on the step of constructing the dictionary or by acting on the quantization step
of the input vectors. In this paper, an improvement of the second step is proposed. The k-nearest
neighbor algorithm is used on each axis separately. The three classifications,
considered as three independent sources of information, are combined in the framework of
evidence theory. The best code vector is then selected; after the image is quantized, Huffman
compression is applied for encoding and decoding.
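The fusion step described above rests on Dempster's rule of combination for independent sources of evidence. A minimal sketch with two sources assigning belief masses over candidate code vectors; the names c1, c2 and the mass values are illustrative assumptions:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2 map focal sets (frozensets of hypotheses) to masses summing
    to 1. Intersecting focal sets reinforce each other; mass assigned to
    disjoint pairs is conflict, removed by renormalization.
    """
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Two sources voting over candidate code vectors {c1, c2}:
m1 = {frozenset({"c1"}): 0.6, frozenset({"c1", "c2"}): 0.4}
m2 = {frozenset({"c1"}): 0.5, frozenset({"c2"}): 0.5}
m = dempster_combine(m1, m2)
```

With three per-axis classifiers, as in the abstract, the rule is applied twice (it is associative); the code vector with the highest combined mass is then selected.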
Identification system of characters in vehicular plates (IJRES Journal)
In this paper, a system is proposed that identifies the location and dimensions of the characters
relative to the image of a vehicular plate, when the location of the plate has not been accurately
determined. The system is divided into 4 stages, each with a specific purpose: binarization by
thresholding, morphological filtering, identification of the largest area, and segmentation by
similarity. The first stage is used to find, to the extent possible, the region occupied by the plate
relative to the rest of the image. Then, the filtering step seeks to eliminate as far as possible the
noise that interferes with the identification of the characters. The third stage is used to identify the
region occupied by the plate. Finally, in the fourth and last stage, segmentation by similarity is used
to identify the position and dimensions of the characters in the image; in this stage a Kohonen neural
network is used.
COLOR IMAGE ENCRYPTION BASED ON MULTIPLE CHAOTIC SYSTEMS (IJNSA Journal)
This paper proposes a novel color image encryption scheme based on multiple chaotic systems. The
ergodicity property of chaotic systems is utilized to perform the permutation process, and a substitution
operation is applied to achieve the diffusion effect. In the permutation stage, the 3D color plain-image matrix
is converted to a 2D image matrix; then two generalized Arnold maps are employed to generate hybrid
chaotic sequences which depend on the plain-image's content. The generated chaotic sequences are
then applied to perform the permutation process. The encryption's key streams depend not only on the
cipher keys but also on the plain-image, and can therefore resist chosen-plaintext as well as
known-plaintext attacks. In the diffusion stage, four pseudo-random gray-value sequences are generated by
another generalized Arnold map. The gray-value sequences are applied to perform the diffusion process by
a bit-XOR operation with the permuted image row-by-row or column-by-column to improve the encryption
rate. Security and performance analyses have been performed, including key space analysis, histogram
analysis, correlation analysis, information entropy analysis, key sensitivity analysis, differential analysis,
etc. The experimental results show that the proposed image encryption scheme is highly secure thanks to its
large key space and efficient permutation-substitution operation, and it is therefore suitable for practical
image and video encryption.
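The permutation stage above relies on the generalized Arnold map, whose transfer matrix [[1, a], [b, ab+1]] has determinant 1 and is therefore a bijection on the pixel grid. A minimal sketch with illustrative parameters; the paper's exact parameterization and plain-image-dependent key schedule are not reproduced:

```python
def arnold_map(x, y, a, b, N):
    """One iteration of the generalized Arnold cat map on an N x N grid:
    x' = (x + a*y) mod N,  y' = (b*x + (a*b + 1)*y) mod N.
    Determinant 1 (mod N) makes the map a permutation of the grid.
    """
    return (x + a * y) % N, (b * x + (a * b + 1) * y) % N

def permute_image(img, a, b, rounds=1):
    """Permute a square image (list of lists) by iterating the map."""
    N = len(img)
    for _ in range(rounds):
        out = [[0] * N for _ in range(N)]
        for y in range(N):
            for x in range(N):
                nx, ny = arnold_map(x, y, a, b, N)
                out[ny][nx] = img[y][x]
        img = out
    return img

# 8x8 test image with distinct pixel values 0..63:
img = [[y * 8 + x for x in range(8)] for y in range(8)]
scrambled = permute_image(img, a=1, b=1, rounds=3)
```

Because the map is invertible, decryption permutes with the inverse matrix (or iterates until the map's period on the grid completes); the diffusion stage would then XOR chaotic gray-value sequences into the permuted pixels.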
Performance Improvement of Vector Quantization with Bit-parallelism Hardware (CSCJournals)
Vector quantization is an elementary technique for image compression; however, searching for the nearest codeword in a codebook is time-consuming. In this work, we propose a hardware-based scheme by adopting bit-parallelism to prune unnecessary codewords. The new scheme uses a “Bit-mapped Look-up Table” to represent the positional information of the codewords. The lookup procedure can simply refer to the bitmaps to find the candidate codewords. Our simulation results further confirm the effectiveness of the proposed scheme.
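The bit-mapped look-up table idea can be sketched in software: partition each vector dimension into bins, record per bin a bitmask of the codewords falling in or near it, and AND the masks across dimensions to prune the nearest-codeword search. The binning scheme and slack parameter below are assumptions for illustration, not the paper's hardware design:

```python
def build_bitmaps(codebook, bins, lo, hi, slack=1):
    """For each dimension and bin, a bitmask of codewords whose value in
    that dimension lies in the bin, widened by `slack` bins per side so
    near-boundary codewords are not missed."""
    dim = len(codebook[0])
    width = (hi - lo) / bins
    maps = [[0] * bins for _ in range(dim)]
    for i, cw in enumerate(codebook):
        for d in range(dim):
            b = min(bins - 1, int((cw[d] - lo) / width))
            for bb in range(max(0, b - slack), min(bins, b + slack + 1)):
                maps[d][bb] |= 1 << i
    return maps, width

def candidates(vec, maps, bins, lo, width):
    """AND the per-dimension bitmaps for the query's bins; the surviving
    bits index the candidate codewords for the full distance search."""
    mask = ~0
    for d, v in enumerate(vec):
        b = min(bins - 1, max(0, int((v - lo) / width)))
        mask &= maps[d][b]
    return [i for i in range(mask.bit_length()) if mask >> i & 1]

codebook = [(0.0, 0.0), (10.0, 10.0), (20.0, 20.0)]
maps, width = build_bitmaps(codebook, bins=3, lo=0.0, hi=30.0)
cands = candidates((1.0, 1.0), maps, 3, 0.0, width)
```

The AND of bitmaps is exactly the operation that maps cheaply to hardware; note that too small a slack can prune the true nearest codeword, which is the accuracy/cost trade-off such schemes tune.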
Moment Preserving Approximation of Independent Components for the Reconstruct... (rahulmonikasharma)
The application of Independent Component Analysis (ICA) has found considerable success in problems where sets of observed time series may be considered as results of linearly mixed instantaneous source signals. The Independent Components (ICs) or features can be used in the reconstruction of observed multivariate time series following an optimal ordering process. For trend discovery and forecasting, the generated ICs can be approximated for the purpose of noise removal and for the lossy compression of the signals. We propose a moment-preserving (MP) methodology for approximating ICs for the reconstruction of multivariate time series. The methodology is based on deriving the approximation in the signal domain while preserving a finite number of geometric moments in its Fourier domain. Experimental results are presented on the approximation of both artificial time series and actual time series of currency exchange rates. Our results show that the moment-preserving (MP) approximations of time series are superior to other usual interpolation approximation methods, particularly when the signals contain significant noise components. The results also indicate that the present MP approximations have significantly higher reconstruction accuracy and can be used successfully for signal denoising while at the same time achieving high packing ratios. Moreover, we find that quite acceptable reconstructions of observed multivariate time series can be obtained with only the first few MP-approximated ICs.
New Image Steganography by Secret Fragment Visible Mosaic Image for Secret Im... (ijsrd.com)
A new type of image similarity method is proposed, which automatically creates a mosaic by composing small fragments of a given secret image inside a target image. The mosaic image is yielded by dividing the secret image into fragments and transforming the colour characteristics of the secret image to those of the target image. Skilful techniques are used in the colour transformation process so that the secret image may be recovered nearly losslessly.
Traveling Salesman Problem in Distributed Environment (csandit)
In this paper, we focus on developing parallel algorithms for solving the traveling salesman problem (TSP) based on the algorithm published by Nicos Christofides in 1976. The parallel algorithm
is built in a distributed environment with multiple processors (master-slave). The algorithm is installed on the computer cluster system of the National University of Education in Hanoi,
Vietnam (ccs1.hnue.edu.vn) and uses the PJ (Parallel Java) library. The results are evaluated and compared with other works.
AUGMENTED REALITY IN VOLUMETRIC MEDICAL IMAGING USING STEREOSCOPIC... (ijcga)
This paper is written about augmented reality in medicine. Medical imaging equipment (CT, PET, MRI) produces 3D volumetric data, so using a stereoscopic 3D display the observer perceives depth. The major factors affecting depth perception, namely convergence, accommodation, and relative size, are tested. Convergence and accommodation affect depth perception, but relative size is negligible.
A WIRE PARAMETER AND REACTION MANAGER BASED BIPED CHARACTER SETUP AND RIGGI... (ijcga)
In this paper a unique way of rigging a bipedal character inside 3ds Max is discussed, using constraints, a wire parameter setup, and linking of the controls. 3ds Max is commonly used for game character modeling and rigging. For this purpose a built-in Biped setup is present, a pre-rigged structure which can be applied to any bipedal character type easily and efficiently. However, the built-in Biped system lacks the manual control and motion-retargeted manipulation mostly found in custom rigs. The proposed system builds a custom rig using a unique approach combining multiple tools and expressions.
Segmentation of cysts in kidney and 3-D volume calculation from CT images (ijcga)
This paper proposes a segmentation method and a three-dimensional (3-D) volume calculation method of
cysts in kidney from a number of computer tomography (CT) slice images. The input CT slice images
contain both sides of kidneys. There are two segmentation steps used in the proposed method: kidney
segmentation and cyst segmentation. For kidney segmentation, kidney regions are segmented from CT slice
images by using a graph-cut method that is applied to the middle slice of input CT slice images. Then, the
same method is used for the remaining CT slice images. In cyst segmentation, cyst regions are segmented
from the kidney regions by using fuzzy C-means clustering and level-set methods that can reduce noise of
non-cyst regions. For 3-D volume calculation, cyst volume calculation and 3-D volume visualization are
used. In cyst volume calculation, the area of the cyst in each CT slice image equals the number of pixels in
the cyst regions multiplied by spatial density of CT slice images, and then the volume of cysts is calculated
by multiplying the cyst area and thickness (interval) of CT slice images. In 3-D volume visualization, a 3-D
visualization technique is used to show the distribution of cysts in kidneys by using the result of cyst volume
calculation. The total 3-D volume is the sum of the calculated cyst volume in each CT slice image.
Experimental results show good performance of the 3-D volume calculation. The proposed cyst segmentation
and 3-D volume calculation methods can provide practical support for surgery options and for teaching
medical practice to medical students.
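The volume computation described above (pixel count times in-plane pixel area, times slice thickness, summed over slices) can be sketched directly. The mask values, pixel area, and slice spacing below are illustrative, not from the paper:

```python
def cyst_volume(masks, pixel_area_mm2, slice_thickness_mm):
    """Total cyst volume from per-slice binary masks.

    Each mask is a 2D list of 0/1 values; the area per slice is the
    pixel count times the in-plane pixel area, and the volume is that
    area times the slice thickness (interval), summed over slices.
    """
    total = 0.0
    for mask in masks:
        pixels = sum(sum(row) for row in mask)
        total += pixels * pixel_area_mm2 * slice_thickness_mm
    return total

# Two tiny slices with 3 and 2 cyst pixels, 0.25 mm^2 pixels, 5 mm spacing:
masks = [[[1, 1], [1, 0]], [[0, 1], [1, 0]]]
vol = cyst_volume(masks, pixel_area_mm2=0.25, slice_thickness_mm=5.0)
```

In practice the pixel area and slice interval would come from the CT metadata (e.g., the DICOM PixelSpacing and slice spacing attributes) rather than constants.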
This paper presents a geometric approach to the coordinatization of a measured space called the Map
Maker’s algorithm. The measured space is defined by a distance matrix for sites which are reordered and
mapped to points in a two-dimensional Euclidean space. The algorithm is tested on distance matrices
created from 2D random point sets and the resulting coordinatizations compared with the original point
sets for confirmation. Tolerance levels are set to deal with the cumulative numerical errors in the
processing of the algorithm. The final point sets are found to be the same apart from translations,
reflections and rotations as expected. The algorithm also serves as a method for projecting higher
dimensional data to 2D.
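The published Map Maker algorithm is not reproduced here, but its core task, recovering planar coordinates from a distance matrix up to translation, rotation, and reflection, can be illustrated with a simple trilateration sketch. This assumes exactly 2D-realizable distances and non-degenerate first three points, and is an illustration rather than the paper's method:

```python
import math

def coordinatize_2d(D):
    """Recover 2D points (up to rotation/reflection/translation) from a
    full distance matrix D, assuming the distances are exactly realizable
    in the plane. Point 0 goes to the origin, point 1 onto the +x axis;
    the rest are trilaterated from points 0 and 1.
    """
    n = len(D)
    pts = [(0.0, 0.0), (D[0][1], 0.0)]
    x1 = D[0][1]
    for i in range(2, n):
        # Circle intersection: |p| = D[0][i] and |p - (x1, 0)| = D[1][i]
        x = (D[0][i] ** 2 - D[1][i] ** 2 + x1 ** 2) / (2 * x1)
        y = math.sqrt(max(0.0, D[0][i] ** 2 - x ** 2))
        # The sign of y is ambiguous for i == 2 (global reflection);
        # afterwards pick the sign that best matches D[2][i].
        if i > 2:
            px, py = pts[2]
            if abs(math.hypot(x - px, -y - py) - D[2][i]) < \
               abs(math.hypot(x - px, y - py) - D[2][i]):
                y = -y
        pts.append((x, y))
    return pts

# Round-trip check: distances of a known point set are reproduced.
pts0 = [(0.0, 0.0), (3.0, 0.0), (1.0, 2.0), (4.0, 1.0)]
D = [[math.hypot(p[0] - q[0], p[1] - q[1]) for q in pts0] for p in pts0]
rec = coordinatize_2d(D)
```

A tolerance-aware version, as the abstract describes, would replace the exact comparisons with thresholds to absorb cumulative numerical error.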
The midpoint method or technique is a "measurement", and like every measurement it has a tolerance; but
worst of all it can be invalid, called Out-of-Control or OoC. The core of all midpoint methods is the accurate
measurement of the difference of the squared distances of two points to the "polar" of their midpoint
with respect to the conic. When this measurement is valid, it also measures the difference of the squared
distances of these points to the conic, although it may be inaccurate, called Out-of-Accuracy or OoA. The
primary condition is the necessary and sufficient condition that a measurement is valid. It is completely
new, and it can be checked ultra fast and before the actual measurement starts.
Modeling an incremental algorithm shows that the curve must be subdivided into "piecewise monotonic"
sections and that the start point must be optimal, and it explains that the 2D incremental method can find, locally,
the global least-squares distance. Locally means that there are at most three candidate points for a given
monotonic direction; therefore the 2D midpoint method has, locally, at most three measurements.
When all the possible measurements are invalid, the midpoint method cannot be applied, and in that case
the ultra-fast "OoC rule" selects the candidate point. This guarantees, for the first time, a 100% stable,
ultra-fast, berserkless midpoint algorithm, which can easily be transformed to hardware. The new algorithm
is on average (26.5±5)% faster than Mathematica, using the same resolution and tested on 42
different conics. Both programs are written entirely in Mathematica, and only ContourPlot[] has been
replaced with a module that generates the grid points, drawn with Mathematica's
Graphics[Line[gridpoints]] function.
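The general-conic midpoint method with its OoC/OoA validity checks is beyond a short sketch, but the underlying midpoint idea, steering each step by evaluating the curve function at the midpoint between the two candidate pixels, is the classic integer circle rasterizer:

```python
def midpoint_circle(r):
    """Integer midpoint rasterization of the circle x^2 + y^2 = r^2.

    Returns one octant's points (x >= y >= 0). The decision variable d
    tracks the circle function evaluated at the midpoint between the
    two candidate pixels; its sign is the 'measurement' that steers
    the walk along the curve.
    """
    points = []
    x, y = r, 0
    d = 1 - r  # decision value at the first midpoint
    while x >= y:
        points.append((x, y))
        y += 1
        if d < 0:
            d += 2 * y + 1          # midpoint inside: keep x
        else:
            x -= 1
            d += 2 * (y - x) + 1    # midpoint outside: step x inward
    return points

pts = midpoint_circle(10)
```

The circle is "piecewise monotonic" by octant, which is why one octant suffices here (the rest follows by symmetry); for general conics the monotonic sections must be found first, as the text above describes.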
AUTOMATIC DETECTION OF LOGOS IN VIDEO AND THEIR REMOVAL USING INPAINTING (ijcga)
In this paper, we propose an algorithm that automatically detects and removes moving logos in video. The
proposed algorithm consists of logo detection, tracking and extraction, and removal. In the logo detection
step, we automatically detect moving logos using the saliency map and the scale invariant feature transform.
In the logo tracking and extraction step, an improved mean shift tracking method and backward tracking
technique are used, in which only logo regions are extracted by using color information. Finally, in the logo
removal step, the detected logo region is filled from its neighborhood using exemplar-based
inpainting that can fill a large detected region without artifacts. These steps are effectively interconnected
using control flags. Experimental results with various test sequences show that the proposed algorithm is
effective to detect, track, and remove moving logos in video.
Terra formation control or how to move mountains
The new Uplift Model of terrain generation is generalized here and provides new possibilities for terra
formation control unlike previous fractal terrain generation methods. With the Uplift Model fine-grained
editing is possible allowing the designer to move mountains and small hills to more suitable locations
creating gaps or valleys or deep bays rather than only being able to accept the positions dictated by the
algorithm itself. Coupled with this is a compressed file storage format considerably smaller in size than the
traditional height field or height map storage requirements.
3D MESH SEGMENTATION USING LOCAL GEOMETRY
We develop an algorithm to segment a 3D surface mesh into visually meaningful regions based on
analyzing the local geometry of vertices. We begin with a novel characterization of mesh vertices as
convex, concave or hyperbolic depending upon their discrete local geometry. Hyperbolic and concave
vertices are considered potential feature region boundaries. We propose a new region growing technique
starting from these boundary vertices leading to a segmentation of the surface that is subsequently simplified
by a region-merging method. Experiments indicate that our algorithm segments a broad range of 3D
models at a quality comparable to existing algorithms. Its use of methods that belong naturally to discretized
surfaces and ease of implementation make it an appealing alternative in various applications.
One of the most efficient ways of generating goal-directed walking motions is synthesising the final motion
based on footprints. Nevertheless, current implementations have not examined the generation of continuous
motion based on footprints, where different behaviours can be generated automatically. Therefore, in this
paper a flexible approach for footprint-driven locomotion composition is presented. The presented solution
is based on the ability to generate footprint-driven locomotion, with flexible features like jumping, running,
and stair stepping. In addition, the presented system examines the ability of generating the desired motion
of the character based on predefined footprint patterns that determine which behaviour should be
performed. Finally, the generation of transition patterns is examined, based on the velocity of the root and
the number of footsteps required to achieve the target behaviour smoothly and naturally.
IMAGE QUALITY ASSESSMENT OF TONE MAPPED IMAGES
This paper proposes an objective assessment method for perceptual image quality of tone mapped images.
Tone mapping algorithms are used to display high dynamic range (HDR) images on standard display
devices that have low dynamic range (LDR). The proposed method implements visual attention to define
perceived structural distortion regions in LDR images, so that a reasonable measurement of distortion
between HDR and LDR images can be performed. Since the human visual system is sensitive to structural
information, quality metrics that can measure structural similarity between HDR and LDR images are
used. Experimental results with a number of HDR and tone mapped image pairs show the effectiveness of
the proposed method.
Efficient video indexing for fast motion video
Due to advances in recent multimedia technologies, various digital video contents have become available from different multimedia sources. Efficient management, storage, coding, and indexing of video are required because video contains a large amount of visual information and requires a large amount of memory. This paper proposes an efficient video indexing method for video with rapid motion or fast illumination change, in which motion information and feature points of specific objects are used. For accurate shot boundary detection, we make use of two steps: a block matching algorithm to obtain accurate motion information and a modified displaced frame difference to compensate for the error in existing methods. We also propose an object matching algorithm based on the scale invariant feature transform, which uses feature points to group shots semantically. Computer simulation with five fast-motion videos shows the effectiveness of the proposed video indexing method.
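The shot-boundary step above relies on block matching for motion information. A minimal full-search sketch of that general technique (an illustration only, not the paper's implementation; block size and search range are arbitrary choices here) might look like:

```python
import numpy as np

def block_match(prev, curr, block=8, search=4):
    """Full-search block matching: for each block of `curr`, find the
    displacement into `prev` that minimizes the sum of absolute
    differences (SAD). Returns {(block_y, block_x): (dy, dx)}."""
    h, w = curr.shape
    motion = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = curr[by:by + block, bx:bx + block].astype(int)
            best, best_sad = (0, 0), None
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        ref = prev[y:y + block, x:x + block].astype(int)
                        sad = np.abs(target - ref).sum()
                        if best_sad is None or sad < best_sad:
                            best_sad, best = sad, (dy, dx)
            motion[(by, bx)] = best
    return motion
```

Large SAD values at the best match across many blocks are one simple indicator of a shot boundary, since no displacement explains the frame difference.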
Visual hull construction from semitransparent coloured silhouettesijcga
This paper attempts to create coloured
semi
-
transparent shadow images that can be projected onto
multiple screens simultaneously from different viewpoints. The inputs to this approach are a set of coloured
shadow images and view angles, projection information and light configurations for the f
inal projections.
We propose a method to convert coloured semi
-
transparent shadow images to a 3D visual hull
. A
shadowpix type method is used to incorporate varying ratio RGB values for each voxel. This computes the
desired image independently for each vie
wpoint from an arbitrary angle. An attenuation factor is used to
curb the coloured shadow images beyond a certain distance. The end result is a continuous animated
image that changes due to the rotated projection of the transparent visual hull.
Ghost free image using blur and noise estimation
This paper presents an efficient image enhancement method by fusion of two different exposure images in
low-light condition. We use two degraded images with different exposures: one is a long-exposure image
that preserves the brightness but contains blur and the other is a short-exposure image that contains a lot of
noise but preserves object boundaries. The weight map used for image fusion without artifacts of blur and
noise is computed using blur and noise estimation. To get a blur map, edges in a long-exposure image are
detected at multiple scales and the amount of blur is estimated at detected edges. Also, we can get a noise
map by noise estimation using a denoised short-exposure image. Ghost effect between two successive
images is avoided according to the moving object map that is generated by a sigmoid comparison function
based on the ratio of two input images. We can get result images by fusion of two degraded images using
the weight maps. The proposed method can be extended to high dynamic range imaging without using
information of a camera response function or generating a radiance map. Experimental results with various
sets of images show the effectiveness of the proposed method in enhancing details and removing ghost
artifacts.
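As a rough illustration of the fusion idea just described, the two exposures can be mixed per pixel with weights that penalize blur in the long exposure and noise in the short one. The exponential weighting below is an assumption for illustration; the paper derives its weight maps from multi-scale blur estimation, noise estimation and a sigmoid-based moving object map.

```python
import numpy as np

def fuse_exposures(long_exp, short_exp, blur_map, noise_map):
    """Toy weighted fusion of a long and a short exposure.
    blur_map penalizes the long exposure where it is blurred;
    noise_map penalizes the short exposure where it is noisy."""
    w_long = np.exp(-blur_map)    # down-weight blurred regions
    w_short = np.exp(-noise_map)  # down-weight noisy regions
    total = w_long + w_short
    return (w_long * long_exp + w_short * short_exp) / total
```

Where the long exposure is sharp and the short one noisy, the result follows the long exposure, and vice versa.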
Terrain generation finds many applications such as in CGI movies, animations and video games. This
paper describes a new and simple-to-implement terrain generator called the Uplift Model. It is based on
the theory of crustal deformations by uplifts in Geology. When a number of uplifts are made on the Earth’s
surface, the final net effect is an average of the influence of each uplift at each point on the terrain. The
result of applying this model from Nature is a very realistic looking effect in the generated terrains. The
model uses 6 parameters which allow for a great variety in landscape types produced. Comparisons are
made with other existing terrain generation algorithms. Averaging causes erosion of the surface whereas
fractal surfaces tend to be very jagged and more suited to alien worlds.
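The averaging of uplift influences described above can be sketched as follows. The Gaussian influence profile and the per-uplift parameters are assumptions for illustration; they are not the paper's six-parameter model.

```python
import numpy as np

def uplift_terrain(width, height, uplifts):
    """Uplift-Model-style terrain sketch: the height at each grid point
    is the average of every uplift's influence there.
    Each uplift is (cx, cy, amplitude, radius); a Gaussian influence
    profile is assumed here for illustration."""
    ys, xs = np.mgrid[0:height, 0:width]
    terrain = np.zeros((height, width))
    for cx, cy, amp, radius in uplifts:
        d2 = (xs - cx) ** 2 + (ys - cy) ** 2
        terrain += amp * np.exp(-d2 / (2.0 * radius ** 2))
    # Averaging smooths (erodes) the surface, as the abstract notes.
    return terrain / len(uplifts)

heights = uplift_terrain(64, 64, [(16, 16, 100.0, 8.0), (48, 40, 60.0, 12.0)])
```

Because each uplift's contribution is divided by the number of uplifts, adding more uplifts flattens individual peaks, which is the smoothing effect contrasted with jagged fractal surfaces.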
MESH SIMPLIFICATION VIA A VOLUME COST MEASURE
We develop a polygonal mesh simplification algorithm based on a novel analysis of the mesh geometry.
Particularly, we propose first a characterization of vertices as hyperbolic or non-hyperbolic depending
upon their discrete local geometry. Subsequently, the simplification process computes a volume cost for
each non-hyperbolic vertex, in analogy with spherical volume, to capture the loss of fidelity if that vertex
is decimated. Vertices of least volume cost are then successively deleted and the resulting holes
re-triangulated using a method based on a novel heuristic. Preliminary experiments indicate a performance
comparable to that of the best known mesh simplification algorithms.
3D-COMPUTER ANIMATION FOR A YORUBA NATIVE FOLKTALE
Computer graphics has a wide range of applications, including computer animation and computer
modeling. Since the invention of computer graphics, researchers have not paid much attention to the
possibility of converting oral tales, otherwise known as folktales, into animated cartoon videos. This paper
describes how to develop cartoons of local folktales that will be of huge benefit to Nigerians. The activities
were divided into 5 stages: analysis, design, development, implementation and evaluation, which involved
various processes and the use of various specialized software and hardware. After the implementation of
this project, the video characteristics were evaluated using a
Likert scale. Analysis of 30 user responses indicated that 17 users (56.7%) rated the image quality as
excellent, the video and image synchronization was rated as excellent by 9 users (30%), the Background
noise was rated excellent by 18 users (60%), the Character Impression was rated Excellent by 11 users
(36.67%), the general assessment of the storyline was rated excellent by 17 users (56.7%), the video
Impression was rated excellent by 11 users (36.67%) and the voice quality was rated by 10 users (33.33%)
as excellent.
Analysis of design principles and requirements for procedural rigging of bipe...
Character rigging is a process of endowing a character with a set of custom manipulators and controls
making it easy for animators to animate. These controls consist of simple joints, handles, or even
separate character selection windows. This research paper presents an automated rigging system for
quadruped characters with custom controls and manipulators for animation. The full character rigging
mechanism is procedurally driven based on various principles and requirements used by riggers and
animators. The automation is achieved initially by creating widgets according to the character type. These
widgets can then be customized by the rigger according to the character's shape, height and proportion.
Then joint locations for each body part are calculated and widgets are replaced programmatically. Finally,
a complete and fully operational procedurally generated character control rig is created and attached to
the underlying skeletal joints. The functionality and feasibility of the rig were analyzed from various
sources of actual character motion and the requirements criteria were met. The final rigged character
provides an efficient and easy-to-manipulate control rig with no lagging and at a high frame rate.
Ghost and Noise Removal in Exposure Fusion for High Dynamic Range Imaging
For producing a single high dynamic range image (HDRI), multiple low dynamic range images (LDRIs)
are captured with different exposures and combined. In high dynamic range (HDR) imaging, local motion
of objects and noise in a set of LDRIs can influence a final HDRI: local motion of objects causes the ghost
artifact and LDRIs, especially captured with under-exposure, make the final HDRI noisy. In this paper, we
propose a ghost and noise removal method for HDRI using exposure fusion with a subband architecture, in
which a Haar wavelet filter is used. The proposed method blends the weight map of exposure fusion in the
subband pyramid, where the weight map is produced for ghost artifact removal as well as exposure fusion.
Then, the noise is removed using multi-resolution bilateral filtering. After removing the ghost artifact and
noise in subband images, details of the images are enhanced using a gain control map. Experimental
results with various sets of LDRIs show that the proposed method effectively removes the ghost artifact and
noise, enhancing the contrast in a final HDRI.
This paper presents a geometric approach to the coordinatization of a measured space called the Map
Maker’s algorithm. The measured space is defined by a distance matrix for sites which are reordered and
mapped to points in a two-dimensional Euclidean space. The algorithm is tested on distance matrices
created from 2D random point sets and the resulting coordinatizations compared with the original point
sets for confirmation. Tolerance levels are set to deal with the cumulative numerical errors in the
processing of the algorithm. The final point sets are found to be the same apart from translations,
reflections and rotations as expected. The algorithm also serves as a method for projecting higher
dimensional data to 2D.
A new algorithm is presented which determines the dimensionality and signature of a measured space. The
algorithm generalizes the Map Maker’s algorithm from 2D to n dimensions and works the same for 2D
measured spaces as the Map Maker’s algorithm but with better efficiency. The difficulty of generalizing the
geometric approach of the Map Maker’s algorithm from 2D to 3D and then to higher dimensions is
avoided by using this new approach. The new algorithm preserves all distances of the distance matrix and
also leads to a method for building the curved space as a subset of the N-1 dimensional embedding space.
This algorithm has direct application to Scientific Visualization for data viewing and searching based on
Computational Geometry.
Time of arrival based localization in wireless sensor networks a non linear ...
In this paper, we aim to obtain the location information of a sensor node deployed in a Wireless Sensor Network (WSN). Here, a Time of Arrival based localization technique is considered. We calculate the position information of an unknown sensor node using non-linear techniques. The performances of the techniques are compared with the Cramer-Rao Lower Bound (CRLB). Non-linear Least Squares and Maximum Likelihood are the non-linear techniques that have been used to estimate the position of the unknown sensor node. Each of these non-linear techniques is implemented with iterative approaches, namely the Newton-Raphson, Gauss-Newton and Steepest Descent estimates, for comparison. Based on the
results of the simulation, the approaches have been compared. From the simulation study, localization
based on the Maximum Likelihood approach has higher localization accuracy.
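A minimal Gauss-Newton sketch for this kind of ToA range problem follows; the anchor layout and noise-free ranges are assumptions for illustration, not the authors' simulation setup.

```python
import numpy as np

def gauss_newton_toa(anchors, ranges, x0, iters=20):
    """Gauss-Newton estimate of a node position from time-of-arrival
    range measurements.
    anchors: (N, 2) known anchor positions; ranges: (N,) measured
    distances; x0: initial position guess (must not coincide with an
    anchor, to keep the Jacobian finite)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diff = x - anchors                    # (N, 2)
        dist = np.linalg.norm(diff, axis=1)   # predicted ranges
        r = ranges - dist                     # residuals
        J = -diff / dist[:, None]             # Jacobian d(residual)/dx
        # Gauss-Newton step: solve J * step ~= -r in least squares.
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
    return x

# Example: four corner anchors, node at (3, 4), noise-free ranges.
anchors = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], dtype=float)
ranges = np.linalg.norm(anchors - np.array([3.0, 4.0]), axis=1)
print(gauss_newton_toa(anchors, ranges, x0=[5.0, 5.0]))
```

With noisy ranges the same iteration converges to the non-linear least squares estimate rather than the exact position, which is what the CRLB comparison in the abstract evaluates.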
Comparison of Distance Transform Based Features
The distance transform based features are widely used in pattern recognition applications. A distance transform
assigns to each background pixel in a binary image a value equal to its distance to the nearest foreground pixel
according to a defined metric. Among these metrics the Chessboard, Euclidean, Chamfer and City-block are
popular. The role of a feature extraction method is quite important in pattern recognition applications. Before
applying a feature, it is essential to judge its performance on the given application. In this research work, a
study of the performance of the above-mentioned distance transform based features is made. We have
conducted experiments with 500 hand-printed characters per class and the study has been performed on 43
classes. The classifiers used are k-NN, MLP, SVM and PNN.
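As a quick illustration of these metrics, SciPy's distance-transform functions can compute all three. Note that they assign distances to the non-zero elements of their input, so the binary image is inverted here to match the definition above (background pixels receive distances to the nearest foreground pixel).

```python
import numpy as np
from scipy import ndimage

# Small binary image: 1 = foreground, 0 = background.
img = np.zeros((5, 5), dtype=np.uint8)
img[2, 2] = 1

# Invert so background pixels are the non-zero elements whose
# distance to the nearest foreground pixel is computed.
bg = (img == 0)
euclidean = ndimage.distance_transform_edt(bg)
chessboard = ndimage.distance_transform_cdt(bg, metric='chessboard')
cityblock = ndimage.distance_transform_cdt(bg, metric='taxicab')
```

For the corner pixel (0,0), two rows and two columns from the foreground pixel, the three metrics give 2*sqrt(2) (Euclidean), 2 (Chessboard) and 4 (City-block), which shows how the choice of metric changes the feature values fed to the classifiers.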
Perimetric Complexity of Binary Digital Images
Perimetric complexity is a measure of the complexity of binary pictures. It is defined as the sum of inside and outside perimeters of the foreground, squared, divided by the foreground area, divided by 4π. Difficulties arise when this definition is applied to digital images composed of binary pixels. In this article we identify these problems and propose solutions. Perimetric complexity is often used as a measure of visual complexity, in which case it should take into account the limited resolution of the visual system. We propose a measure of visual perimetric complexity that meets this requirement.
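A direct sketch of this definition, assuming the standard 4π normalization (under which an ideal disc, the least complex shape, scores exactly 1):

```python
import math

def perimetric_complexity(perimeter, area):
    """Perimetric complexity: (inside + outside perimeter) squared,
    divided by the foreground area and by 4*pi (assumed standard
    normalization). Higher values mean more complex shapes."""
    return perimeter ** 2 / (4.0 * math.pi * area)

# An ideal disc of radius r: perimeter 2*pi*r, area pi*r**2.
r = 3.0
disc = perimetric_complexity(2 * math.pi * r, math.pi * r * r)  # = 1.0
```

The difficulties the abstract mentions arise because a pixelated boundary overestimates the true perimeter, so digital measurements need correction before applying this formula.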
3D Reconstruction from Multiple uncalibrated 2D Images of an Object
3D reconstruction is the process of capturing the shape and appearance of real objects. In this project we are using passive methods, which only use sensors to measure the radiance reflected or emitted by the object's surface to infer its 3D structure.
A STUDY AND ANALYSIS OF DIFFERENT EDGE DETECTION TECHNIQUES
In the first study [1], a combination of K-means, watershed segmentation method, and Difference In Strength (DIS) map were used to perform image segmentation and edge detection
tasks. We obtained an initial segmentation based on K-means clustering technique. Starting from this, we used two techniques; the first is watershed technique with new merging
procedures based on mean intensity value to segment the image regions and to detect their boundaries. The second is an edge strength technique to obtain accurate edge maps of our images without using the watershed method. With this technique we solved the problem of undesirable over-segmentation results produced by the watershed algorithm when used directly with raw data images. Also, the edge maps we obtained have no broken lines over the entire image. In the second study, level set methods are used for the implementation of curve/interface evolution under various forces. In the third study the main idea is to detect region (object) boundaries, to isolate and extract individual components from a medical image. This is done using active contours to detect regions in a given image, based on techniques of curve evolution, the Mumford–Shah functional for segmentation and level sets. Once we have classified our images into different intensity regions based on a Markov Random Field, we detect regions whose boundaries are not necessarily defined by gradient by minimizing an energy of the Mumford–Shah functional for segmentation, where, in the level set formulation, the problem becomes a mean-curvature flow which will stop on the desired boundary. The stopping term does not depend on the gradient of the image, as in the classical active contour. The initial curve of the level set can be anywhere in the image, and interior contours are automatically detected. The final image segmentation is one
closed boundary per actual region in the image.
Dense Visual Odometry Using Genetic Algorithm
Our work aims to estimate the motion of a camera mounted on the head of a mobile robot or a moving object from RGB-D images in a static scene. The problem of motion estimation is transformed into a nonlinear least squares function. Methods for solving such problems are iterative. Various classic methods give an iterative solution by linearizing this function. We can also use a metaheuristic optimization method to solve this problem and improve the results. In this paper, a new algorithm is developed for visual odometry using a sequence of RGB-D images. The algorithm is based on a genetic algorithm. The proposed iterative genetic algorithm searches using particles to estimate the optimal motion, which is then compared to the traditional methods. To evaluate our method, we use the root mean square error to compare it with the energy-based method and another metaheuristic method. We demonstrate the efficiency of our algorithm on a large set of images.
Hierarchical Digital Twin of a Naval Power System
A hierarchical digital twin of a Naval DC power system has been developed and experimentally verified. Similar to other state-of-the-art digital twins, this technology creates a digital replica of the physical system executed in real-time or faster, which can modify hardware controls. However, its advantage stems from distributing computational efforts by utilizing a hierarchical structure composed of lower-level digital twin blocks and a higher-level system digital twin. Each digital twin block is associated with a physical subsystem of the hardware and communicates with a singular system digital twin, which creates a system-level response. By extracting information from each level of the hierarchy, power system controls of the hardware were reconfigured autonomously. This hierarchical digital twin development offers several advantages over other digital twins, particularly in the field of naval power systems. The hierarchical structure allows for greater computational efficiency and scalability while the ability to autonomously reconfigure hardware controls offers increased flexibility and responsiveness. The hierarchical decomposition and models utilized were well aligned with the physical twin, as indicated by the maximum deviations between the developed digital twin hierarchy and the hardware.
International Journal of Computer Graphics & Animation (IJCGA) Vol.4, No.4, October 2014
THE N-DIMENSIONAL MAP MAKER ALGORITHM
John R Rankin
Department of Computer Science and Computer Engineering
La Trobe University, Australia
ABSTRACT
The Map Maker algorithm, which converts survey data into geometric data with 2-dimensional Cartesian
coordinates, has been previously published. Analysis of the performance of this algorithm is continuing. The
algorithm is suitable for generating 2D maps, and it would be helpful to have this algorithm generalized to
generate 3D and higher dimensional coordinates. The trigonometric approach of the Map Maker algorithm
does not extend well into higher dimensions; however, this paper reports on an algebraic approach which
solves the problem. A similar algorithm called the Coordinatizator algorithm has been published which
converts survey data defining a higher dimensional space of measured sites into the lowest
dimensional coordinatization accurately fitting the data. Therefore the Coordinatizator algorithm is not a
projection transformation, whereas the n-dimensional Map Maker algorithm is.
KEYWORDS
N-dimensional space, Projections, Distance Matrices, Map Maker, Coordinatizations
1. INTRODUCTION
The Map Maker algorithm [1] serves a fundamental purpose. Given geodetic survey data on N
sites Pi for i = 1 to N in the form of an NxN distance matrix D the algorithm generates 2D
coordinates for each point i.e. it determines xi and yi such that Pi = (xi,yi) satisfies the distance
matrix D, according to
Dij² = (xi − xj)² + (yi − yj)²                                                          (1)
for all i and j = 1 to N. The points Pi however may not in reality all lie on a single flat plane and
geodetic survey data is well-known to be non-planar. For example we can test the Map Maker
algorithm by starting with a given three dimensional point set Pi = (xi,yi,zi) for i = 1 to N and from
these generate a distance matrix D via Eq(1) using the 3-dimensional Euclidean distance measure.
Then subsequently we may take this distance matrix D and feed it through the Map Maker
algorithm to produce N points Qi for i = 1 to N in 2 dimensions. The generated points Qi can then
be plotted on the flat screen or output page. Taken together these steps constitute a projection of
points Pi in 3D to corresponding points Qi in 2D. This ‘projection’ is not a formula applicable to
single points as it requires a set of N ≥ 3 points to perform the process. The first point P1 is fixed
at the origin so that Q1 = (0,0). The second point P2 is placed along the x-axis in the positive
direction so that Q2 = (D12,0). The third point P3 is placed such that Q3 has 2-dimensional distance
D13 from point Q1 and 2-dimensional distance D23 from point Q2 and a positive y component. This
is equivalent to selecting a plane in 3D space which is the plane of the 3D triangle P1P2P3, then
selecting an origin in the plane as point P1, the x-axis direction as the vector P1P2 and the y axis
perpendicular to this such that P3 is on the positive y side of the plane. All other points Pi for i = 4
to N are projected onto this plane by rotating each triangle PiP1P2 in 3D to the triangle Pi’P1P2
where Pi’ lies in the plane of P1P2P3. Whether Pi’ is placed in the positive y half of the plane or
DOI : 10.5121/ijcga.2014.4402
the negative side is based on which choice has a closer distance value to the distance D3i. In this
way all distances D1i and D2i are preserved and D3i is as close to preserved as possible in the
projection. Denoting the distance matrix of the projected points as D’ we therefore have D1i = D1i’
and D2i = D2i’ for i = 1 to N. The other elements of the D and D’ matrices will in general differ
and the Map Maker algorithm tries to minimize these differences. A measure of the effectiveness
of this minimization is the root mean square error (RMSE) in the differences of D and D’. Work
in reference [1] suggests that reordering the points which results in a permutation of the D matrix
can result in lower RMSE values. The RMSE values are also dependent on N, the range of
coordinate values that were used to generate D and their dimensionality.
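The placement rules above can be sketched as follows. This is an assumed reading of the published algorithm covering only the point-placement step (no tolerance handling or point reordering): Q1 at the origin, Q2 on the positive x-axis, Q3 with positive y, and each later point taking the y sign whose distance to Q3 is closer to D3i.

```python
import numpy as np

def map_maker_2d(D):
    """Sketch of the 2D Map Maker placement (an assumed implementation,
    not the published pseudo-code). D is the NxN distance matrix;
    returns N projected 2D points Q."""
    N = D.shape[0]
    Q = np.zeros((N, 2))
    Q[1] = (D[0, 1], 0.0)                      # Q2 on the positive x-axis
    for i in range(2, N):
        d1, d2 = D[0, i], D[1, i]
        # Intersecting circles about Q1 and Q2 preserves D1i and D2i.
        x = (d1**2 - d2**2 + D[0, 1]**2) / (2.0 * D[0, 1])
        y = np.sqrt(max(d1**2 - x**2, 0.0))
        if i == 2:
            Q[i] = (x, y)                      # Q3 takes the positive y root
        else:
            # Pick the y sign whose distance to Q3 is closer to D3i.
            up, down = np.array([x, y]), np.array([x, -y])
            err_up = abs(np.linalg.norm(up - Q[2]) - D[2, i])
            err_down = abs(np.linalg.norm(down - Q[2]) - D[2, i])
            Q[i] = up if err_up <= err_down else down
    return Q
```

For a distance matrix generated from genuinely planar points, this placement recovers the original configuration up to translation, rotation and reflection; for non-planar input the distances beyond D1i and D2i are only approximately preserved, which is what the RMSE measures.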
Two other projection operations were proposed by Lee in [2] called the Second Nearest
Neighbour approach and the Reference Point approach although he does not attempt to compute
coordinates for the mapped points. While the Map Maker algorithm preserves the distances of all
points to P1 and P2, the Second Nearest Neighbour approach preserves the distances of the next
mapped point Pk to the two nearest previously mapped points Pi and Pj and the Reference Point
approach preserves distances of the next mapped point Pk to P1 and the previously mapped nearest
point Pi (to point Pk). Thus Lee’s methods lead to different spatial locations and therefore
coordinatizations of the sites Pi unless the sites are intrinsically two-dimensional from the start.
Other authors ([3-12]) have shown the importance of locating objects in space given the distance
matrix alone. Rankin in [13] has shown the remarkable result that the distance matrix can
determine the dimension of the embedding space and that this space can be determined to be
either Euclidean or pseudo-Euclidean from the distance matrix alone. This is achieved through
the Coordinatizator algorithm described in that paper.
Youldash and Rankin [14] have shown the methodology adopted for evaluating the speed and
accuracy of the Map Maker algorithm and preliminary results were given. Graphing the results
was found to be difficult due to the wide statistical variations coming from random point sets so
that statistical methods need to be applied. While the 2D Map Maker algorithm generates a 2D
coordinatization of the survey sites from their distance matrix, the 3D Map Maker algorithm
generates a 3D coordinatization of the survey sites from their distance matrix. It is therefore
reasonable to expect that if the survey sites are intrinsically located in a 3-dimensional space,
the 3D Map Maker algorithm should in general produce lower RMSE values. The first aim of this
research was to create the 3D Map Maker algorithm and thence test this conjecture. This is
achieved in section 2 below with the pseudo-code in section 3 and testing results in section 6. The
next aim of the research was to carry this process forward into n-dimensional space and sections 4
and 5 below report on this work.
2. THE 3D MAP MAKER ALGORITHM
Consider N sites Pi with distance matrix D, for which we want to create 3D coordinates for each site.
We will proceed as in the Map Maker algorithm by declaring the first site P1 to be located at the
origin so that
P1 = (0, 0, 0) (2)
Next P2 is declared to define the direction of the x-axis preserving the distance D12 so that
P2 = (D12, 0, 0) (3)
Next site P3 is declared to define the direction of the y-axis by having P3y > 0 and P3z = 0. As in the
Map Maker algorithm we compute the angle P2P1P3 as α so that
P3 = (D13 cos α, D13 sin α, 0) (4)
Finally we define the sense of the z-axis perpendicular to the plane of P1P2P3 by asserting that P4
has a positive z component. Figure 1 shows the geometry where C is the point subtended by P4 on
the plane of triangle P1P2P3. Thus
P4 = (x4, y4, z4) (5)
where z4 > 0. We need to relate x4, y4 and z4 to the components of D. Thus the problem is to
solve the tetrahedron for x4, y4 and z4 when all the side lengths D12, D13, D14, D23, D24 and D34 of
the tetrahedron are known. The solution is:
x4 = (D12² - D24² + D14²) / (2 D12) (6)
y4 = (D13² - 2 x4 D13 cos α - D34² + D14²) / (2 D13 sin α) (7)
z4 = ±√(D14² - x4² - y4²) (8)
where α is the same angle used above and also used in the 2D Map Maker algorithm.
We now enter a loop to place points Pi for i = 5 to N in 3D space. To place point Pi, i.e. to
determine xi, yi and zi, we use the same formulas as Equations (6) to (8) except replacing index 4
with i. For placing P4 the positive root is taken in equation (8), but for index i > 4 we must decide
for each point Pi whether to flip the point and use the negative root instead. As with the 2D Map
Maker algorithm, the decision is made by seeing which point, Pi or its flipped image, has a distance
to P4 which is closer in value to the distance D4i.
Figure 1. Deriving the coordinates of Pi for i = 4 to N.
3. 3D MAP MAKER PSEUDO-CODE
The 3D Map Maker algorithm can be expressed in the following pseudo-code which is readily
converted to C++, Java or other suitable programming language. It assumes only that the NxN
real matrix D is available.
Set P[1] = (0,0,0)
Set P[2] = (D[1,2],0,0)
Set P[3] = (D[1,3]*cos_alpha,D[1,3]*sin_alpha,0)
for i = 4 to N
    Set x = (Sq(D[1,2]) - Sq(D[2,i]) + Sq(D[1,i]))/(2*D[1,2])
    Set y = (Sq(P[3].x) + Sq(P[3].y) - 2*P[3].x*x - Sq(D[3,i]) + Sq(D[1,i]))/(2*P[3].y)
    Set z = SqRoot(Sq(D[1,i]) - Sq(x) - Sq(y))
    Set P[i] = (x,y,z)
    // test for point flipping:
    if i > 4 then
        Set Flipped = (x,y,-z)
        if Abs(D[4,i] - Distance(P[4],Flipped)) < Abs(D[4,i] - Distance(P[4],P[i])) then
            Set P[i] = Flipped
This pseudo-code assumes that 3D points are stored as C++ structs with components called x, y
and z and that the point array is dimensioned to hold elements 1 to N. The terms cos_alpha and
sin_alpha are computed trigonometrically (once only, before the loop over the sites) as in the Map Maker
algorithm. The code has been implemented and undergone extensive testing which is reported in
section 6.
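The pseudo-code above translates directly into a runnable sketch. The following Python version is one such translation (0-indexed rather than 1-indexed, with our own function names, and with a max(0, ·) guard added against rounding before the square root):

```python
import math

def dist(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def map_maker_3d(D):
    """3D Map Maker sketch: builds 3D coordinates for N sites from their
    N x N distance matrix D (0-indexed, unlike the pseudo-code)."""
    N = len(D)
    # cos/sin of the angle P2-P1-P3 via the cosine rule, computed once.
    cos_a = (D[0][1] ** 2 + D[0][2] ** 2 - D[1][2] ** 2) / (2 * D[0][1] * D[0][2])
    sin_a = math.sqrt(max(0.0, 1.0 - cos_a ** 2))
    P = [(0.0, 0.0, 0.0),                          # P1 at the origin, eq. (2)
         (D[0][1], 0.0, 0.0),                      # P2 on the x-axis, eq. (3)
         (D[0][2] * cos_a, D[0][2] * sin_a, 0.0)]  # P3 in the xy-plane, eq. (4)
    for i in range(3, N):
        # equations (6)-(8) with index 4 replaced by i
        x = (D[0][1] ** 2 - D[1][i] ** 2 + D[0][i] ** 2) / (2 * D[0][1])
        p3x, p3y = P[2][0], P[2][1]
        y = (p3x ** 2 + p3y ** 2 - 2 * p3x * x - D[2][i] ** 2 + D[0][i] ** 2) / (2 * p3y)
        z = math.sqrt(max(0.0, D[0][i] ** 2 - x ** 2 - y ** 2))
        P.append((x, y, z))
        if i > 3:  # flip test: which sign of z better matches the distance to P4?
            flipped = (x, y, -z)
            if abs(D[3][i] - dist(P[3], flipped)) < abs(D[3][i] - dist(P[3], P[i])):
                P[i] = flipped
    return P
```

For a distance matrix derived from genuinely 3-dimensional points, the output reproduces all pairwise distances up to rounding error, consistent with the test results reported in section 6.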
4. THE N-DIMENSIONAL MAP MAKER ALGORITHM
In 2D we used the first 3 points to define the 2D coordinate system: P1 defines the origin of
coordinates, P2 defines the x-axis and P3 defines the y-axis. In 3D we used the first 4 points to
define the 3D coordinate system: P1 defines the origin of coordinates, P2 defines the x-axis, P3
defines the y-axis and P4 defines the z-axis. So in n dimensions we need to use the first n+1 points
to define the coordinate system and then the remaining points can be located inside this
coordinate system. Therefore the first n+1 points will be coordinatized as follows:
P1 = (0, 0, 0, ..., 0)
P2 = (x21, 0, 0, ..., 0)
P3 = (x31, x32, 0, ..., 0)
...
Pn+1 = (xn+1,1, xn+1,2, ..., xn+1,n) (9)
In these formulas it is required in defining axial directions that
xi,i-1 > 0 (10)
for i = 2 to n+1. There is no sign restriction on the other xij values. It is also clear that in selecting
a projection dimension n we are restricted to the requirement:
n ≤ N - 1 (11)
In the case of equality in (11), the problem reduces to the Coordinatizator algorithm solved in
[13]. For the n-dimensional Map Maker algorithm therefore we will assume that n < N-1. The
components xij for i = 2 to n+1 and j = 1 to i-1 are n(n+1)/2 unknowns to be determined first.
After the xij are determined then the components of any point Pi for i > n+1 can be determined in
terms of the xij and D. To see how the components of Pi are determined in terms of the xij and D,
for simplicity denote the components of Pi as xi for i = 1 to n. Then form the n equations:
Dj+1,i² = Σk=1..j (xj+1,k - xk)² + Σk=j+1..n xk² (12)
for j = 1 to n. These are n equations in n unknowns, the xi for i = 1 to n. The general solution is
given by the following recurrence relation:
xk = (Σj=1..k xk+1,j² - Dk+1,i² + D1i² - 2 Σj=1..k-1 xj xk+1,j) / (2 xk+1,k) (13)
for k = 1 to n-1 and finally
xn = ±√(D1i² - Σj=1..n-1 xj²) (14)
The plus or minus sign in equation (14) indicates that a flip decision has to be made for this
component to minimize the RMSE. The decision is based on which point, flipped or unflipped,
results in a distance value closer to Dn+1,i. Equations (13) and (14) are the general solution to
finding the coordinates of Pi for i > n+1 in terms of the components xij and the given distance
matrix D. Equation (13) is the generalization of the cosine rule as used in Map Maker and
equation (14) is the generalization of the sine formula used in the Map Maker algorithm.
Furthermore this procedure for deriving the coordinates of Pi also applies to finding the xij in the
first place, by sequentially solving for the xij down the list given in (9) and always taking the
positive root in equation (14). However in doing this care must be taken to compute only the
coordinates up to index i-2 via equation (13), leaving the remainder zero. This process is shown in
the algorithm presented in the next section.
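As a consistency check (our own derivation, not taken from the original presentation): setting k = 1 in the recurrence (13) makes both sums collapse to terms involving only x21 = D12, recovering the x-coordinate formula (6) of the 3D algorithm:

```latex
x_1 = \frac{x_{21}^2 - D_{2i}^2 + D_{1i}^2}{2\,x_{21}}
    = \frac{D_{12}^2 - D_{2i}^2 + D_{1i}^2}{2\,D_{12}}
```

This confirms that the n-dimensional formulas specialize to the 2D and 3D cases already given.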
5. ALGORITHM
The n-dimensional Map Maker algorithm can be implemented from the following pseudo-code.
For this algorithm each point is a dynamic array of floating point
numbers (doubles) of length n and the set of all points is a dynamic array of length N of this point
type.
for i = 1 to N
    for j = 1 to n
        Set P[i,j] = 0
Set P[2,1] = D[1,2]
Set P[3,1] = D[1,3]*cos_alpha
Set P[3,2] = D[1,3]*sin_alpha
for i = 4 to n+1
    for j = 1 to n
        Set Pi[j] = 0
    for k = 1 to i-2
        Set sum1 = 0
        Set sum2 = 0
        for j = 1 to k
            Set sum1 = sum1 + Sq(P[k+1,j])
            Set sum2 = sum2 + Pi[j]*P[k+1,j]
        Set Pi[k] = (sum1 - Sq(D[k+1,i]) + Sq(D[1,i]) - 2*sum2)/(2*P[k+1,k])
    Set sum1 = Sq(D[1,i])
    for j = 1 to i-2
        Set sum1 = sum1 - Sq(Pi[j])
    Set Pi[i-1] = SqRoot(sum1)
    for j = 1 to n
        Set P[i,j] = Pi[j]
for i = n+2 to N
    for j = 1 to n
        Set Pi[j] = 0
    for k = 1 to n-1
        Set sum1 = 0
        Set sum2 = 0
        for j = 1 to k
            Set sum1 = sum1 + Sq(P[k+1,j])
            Set sum2 = sum2 + Pi[j]*P[k+1,j]
        Set Pi[k] = (sum1 - Sq(D[k+1,i]) + Sq(D[1,i]) - 2*sum2)/(2*P[k+1,k])
    Set sum1 = Sq(D[1,i])
    for j = 1 to n-1
        Set sum1 = sum1 - Sq(Pi[j])
    Set Pi[n] = SqRoot(sum1)
    for j = 1 to n-1
        Set Flipped[j] = Pi[j]
    Set Flipped[n] = -Pi[n]
    if Abs(D[n+1,i] - Distance(P[n+1],Flipped)) < Abs(D[n+1,i] - Distance(P[n+1],Pi)) then
        Set Pi = Flipped
    for j = 1 to n
        Set P[i,j] = Pi[j]
The algorithm for the n-dimensional Map Maker is clearly longer and more detailed than either
the Map Maker or 3D Map Maker algorithms and requires more memory. The next section will
present the initial results and implications of this algorithm.
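The pseudo-code can be condensed into a single loop, since the axis-defining points i ≤ n+1 and the remaining points differ only in how many components are computed and in whether the flip test applies. The following Python sketch (0-indexed, with our own function names, and a rounding guard before the square root) illustrates this:

```python
import math

def ndist(a, b):
    """Euclidean distance between two points of equal dimension."""
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def map_maker_nd(D, n):
    """n-dimensional Map Maker sketch: builds n-D coordinates for N sites
    from their N x N distance matrix D (0-indexed, unlike the pseudo-code)."""
    N = len(D)
    P = [[0.0] * n for _ in range(N)]  # P[0] stays at the origin
    for i in range(1, N):
        m = min(i, n)  # number of nonzero components for this point
        p = [0.0] * n
        for k in range(m - 1):  # recurrence, equation (13)
            s1 = sum(P[k + 1][j] ** 2 for j in range(k + 1))
            s2 = sum(p[j] * P[k + 1][j] for j in range(k))
            p[k] = (s1 - D[k + 1][i] ** 2 + D[0][i] ** 2 - 2 * s2) / (2 * P[k + 1][k])
        # last nonzero component, equation (14); max() guards against rounding
        rad = D[0][i] ** 2 - sum(p[j] ** 2 for j in range(m - 1))
        p[m - 1] = math.sqrt(max(0.0, rad))
        if i > n:  # flip test against the point that fixed the last axis
            flipped = p[:n - 1] + [-p[n - 1]]
            if abs(D[n][i] - ndist(P[n], flipped)) < abs(D[n][i] - ndist(P[n], p)):
                p = flipped
        P[i] = p
    return P
```

For a distance matrix built from genuinely n-dimensional points, the reconstructed distance matrix agrees with the input up to rounding, matching the behaviour described in section 6.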
6. DISCUSSION
Normally when making maps from survey distance information we expect that the maps will be
drawn on a flat 2D sheet of paper or flat screen and the Map Maker algorithm performs this
service. However the concept is generalizable so that given the distance matrix for a set of N sites
one can now generate Cartesian coordinates for each site for any specified dimension n decided in
advance. For geodetic survey data in our 3-dimensional world we could expect that n = 3 is
sufficient. The 3D output when displayed graphically shows the positional relationships between
the survey sites and the user can view this 3D data from many different angles and viewpoints.
Research is underway to convert the 3D coordinatized point data into a height map that can show
the detailed topography and lie of the land.
To test the 3D Map Maker algorithm, many sets of random 3D points were constructed. These
sets were each converted into distance matrices and stored in files. The distance matrix files were
then given to the 3D Map Maker algorithm which converted the distance matrix input to 3D
coordinates for each survey point. The output 3D point sets do not directly compare with the
initial 3D point sets that were constructed because the algorithm applies a translation taking
P1 to the origin and then reflections and rotations of the points about P1 so that P1P2 aligns with the x-axis,
P3 is in the x-y plane with positive y component and P4 has positive z component. Therefore
instead of comparing the point sets, we compute a new distance matrix based on the output points
from the algorithm and compare this with the original distance matrix. The results varied based
on the number N of points in the set. As an example with N = 5, a total RMS absolute error value
of 0.000104 was obtained with a total RMS relative error of 0.00000581. For N = 10 a sample
result was 0.00145 for the absolute error and 0.00337 for the relative error.
A similar testing process was used on the n-dimensional Map Maker algorithm. For N = 10 and n
= 5 the total RMS absolute error was 0.0056 with a total RMS relative error of 0.00577. Low errors
suggest that the n-dimensional Map Maker algorithm is doing its job correctly. If the distance
matrix used in the algorithm was created from a point set of dimensionality different from n then
we could expect the error rate to rise. This is subject to future testing.
It should be noted that the n-dimensional Map Maker algorithm produces enantiomorphs because
handedness (i.e. parity) cannot be transmitted in the distance matrix. This means that the spatial
arrangement of the nodes can be any of 2^(n-1) possible mirror images produced by parity inversions,
i.e. reflections along any axis other than the first, and all enantiomorphs have the same distance
matrix D, making them indistinguishable in the Map Maker algorithms.
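This parity loss is easy to demonstrate numerically. The following sketch (numpy, with our own example data) mirrors a 3D point set along the z-axis and compares the two distance matrices:

```python
import numpy as np

def dmat(p):
    """Pairwise distance matrix of an (N, 3) point array."""
    d = p[:, None, :] - p[None, :, :]
    return np.sqrt((d ** 2).sum(axis=-1))

pts = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0],
                [1.0, 2.0, 0.0], [2.0, 1.0, 4.0]])
mirrored = pts * np.array([1.0, 1.0, -1.0])  # parity inversion along z
# pts and mirrored are enantiomorphs: opposite handedness, identical D.
```

Since no rigid rotation maps one set onto the other while every pairwise distance is unchanged, no algorithm working from D alone can distinguish them.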
CONCLUSIONS
The n-dimensional Map Maker’s algorithm converts spatial distance survey data to point
coordinates in n dimensions. The first n+1 points get mapped to prescribed positions to define the
positive directions of the n axes. In so doing these first n+1 points contain n(n+1)/2 zeros among their
coordinates and preserve n(n+1)/2 distances. If the distance matrix was originally derived not by
survey measurement but from N points in a k-dimensional space then the n-dimensional Map
Maker’s algorithm can be viewed as a projection transformation from k dimensions to n
dimensions. If N > n+1 and k > n we can expect to see distance preservation failures indicated by
a significant positive RMSE. If N ≤ n+1 or k ≤ n there should be no significant systemic errors so
that only numerical calculation errors should appear. Continuing work in this area includes testing
of the speed of the algorithm and the behaviour of the RMS errors as well as applications of the
algorithm to real world problems.
REFERENCES
[1] Rankin J, “Map Making From Tables”, IJCGA, April 2013, Vol 3, Nr 2, pp 11-17.
http://airccse.org/journal/ijcga/current2013.html
[2] Lee RTC et al, “A Triangulation Method for the Sequential Mapping of Points from N-Space to Two-
Space”, IEEE Transactions on Computers, March 1977, pp288-292.
[3] Yin H, “Nonlinear Dimensionality Reduction and Data Visualization: A Review”, International
Journal of Automation and Computing 04(3), July 2007, pp294-303.
[4] Sammon J W Jr, “A Nonlinear Mapping for Data Structure Analysis”, IEEE Transactions on
Computers, Vol C-18, No. 5, May 1969, pp 401-409.
[5] Shisko JF et al, “IGODS: An Important New Tool for Managing and Visualizing Spatial Data”,
OCEANS 2010, pp1-10.
[6] Li Q et al, “A Chunking Method for Euclidean Distance Matrix Calculation on Large Dataset Using
Multi-GPU”, IEEE Machine Learning and Applications (ICMLA) Dec 2010, pp 208-213.
[7] Hyo-Sung A, “Command Coordination In Multi-Agent Formation: A Distance Matrix Approach”,
IEEE Control Automation and Systems (ICCAS) 2010, pp 1592-1597.
[8] Srinkanthan S, “Accelerating the Euclidean Distance Matrix Computation Using GPUs”, IEEE
Electronics Computer Technology (ICECT) 2011, pp422-426.
[9] Del Bue A et al, “2D-3D Registration of Deformable Shapes with Manifold Projection”, 2009, pp
1061-1064.
[10] Drineas P et al, “Distance Matrix Reconstruction from Incomplete Distance Information for Sensor
Network Localization”, Sensor and Ad Hoc Communications and Networks 2006, pp 536-544.
[11] Ayhan S et al “Implementing Geospatially Enabled Aviation Web Services”, Integrated
Communications, Navigation and Surveillance (ICNS) 2008, pp1-8.
[12] Xiao Q et al,“Application of Visualization Technology in Spatial Data Mining”, Computing, Control
and Industrial Engineering (CCIE), 2010, pp153-157.
[13] Rankin J, “Visualization of General Defined Space Data”, International Journal of Computer
Graphics and Animation, October 2013, Vol 3, Nr 4. http://airccse.org/journal/ijcga/current2013.html
[14] Youldash MM, Rankin J, “Evaluation of A Geometric Approach To Coordinatization of Metric
Spaces”, 8th International Conference on Information Technology and Applications, ICITA 2013,
Sydney, July 2013.