TEXTURE REPRESENTATION
Dept. of ECE, SJCIT 1 2021-22
CHAPTER 1
PREAMBLE
1.1 INTRODUCTION
Our visual world is richly filled with a great variety of textures, present in images ranging from
multispectral satellite views to microscopic pictures of tissue samples. As a powerful visual cue like
color, texture provides useful information in identifying objects or regions of interest in images.
Texture is different from color in that it refers to the spatial organization of a set of basic elements or
primitives (textons), the fundamental microstructures in natural images and the atoms of pre-attentive
human visual perception. A textured region will obey some statistical properties, exhibiting
periodically repeated textons with some degree of variability in their appearance and relative position.
Textures may range from purely stochastic to perfectly regular and everything in between.
The representation of textures opens the door to several diverse and appealing applications. Some
recent works that utilized the representation and analysis of texture are in areas such as object
recognition, robotics/autonomous navigation, quality inspection and assessment, scene understanding,
facial image analysis, image and video editing, crowd behaviour analysis, remote sensing, geological
structure interpretation, and medical image analysis.
Texture is recognizable in both tactile and visual ways. Tactile texture refers to the tangible feel
of a surface, while visual texture refers to the perceived shape or contents of an image. Recognizing
texture is easy for the human visual system, but machine vision and image processing have their
own complexity. In image processing, texture can be defined as a function of the spatial variation
of the brightness intensity of the pixels. Texture captures gray-level variations, measuring
characteristics such as smoothness, coarseness and regularity of a surface along different
directions. Textural images in image processing and machine vision are images in which a specific
pattern of distribution and dispersion of the pixel intensities is repeated throughout the image.
In "Texture Shape Extraction", the objective is to recover the 3D shape of surfaces that appear in a
picture covered with a specific texture. This field studies the structure and shape of the elements in
the image by analyzing their textural properties and their spatial relationships with each other.
The purpose of "Texture Synthesis" is to produce images that have the same texture as an input
texture. Applications of this field include the creation of graphic images and computer games;
removing part of an image and filling it with the background texture; creating a scene with
different lighting and viewing angle; and creating artistic effects on images, such as embossed
textures. A related task is texture segmentation: the features of candidate boundaries and regions
are compared, and if their texture characteristics are sufficiently different, a boundary is declared.
New transform techniques that specifically address the problems of image enhancement and
compression, edge and feature extraction, and texture analysis have received much attention in
recent years, especially in biomedical imaging. These techniques are often found under the names
multiresolution analysis, time-frequency analysis, pyramid algorithms, and wavelet transforms.
They have become competitors to the traditional Fourier transform, whose basis functions are
sinusoids. The wavelet transform is based on wavelets, which are small waves of varying frequency
and limited duration. Unlike the traditional Fourier transform, they provide not only frequency but
also temporal information about the signal.
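The joint time-frequency behaviour described above can be illustrated with a minimal sketch of one level of the 1-D Haar wavelet transform (the simplest wavelet; the function name and example signal below are illustrative, not from the text):

```python
import numpy as np

def haar_1d(signal):
    """One level of the 1-D Haar wavelet transform.

    Splits the signal into approximation (low-pass) and detail
    (high-pass) coefficients, so each coefficient carries both
    frequency content and a position in time, unlike a plain
    Fourier coefficient.
    """
    s = np.asarray(signal, dtype=float)
    evens, odds = s[0::2], s[1::2]
    approx = (evens + odds) / np.sqrt(2)   # local averages (low frequency)
    detail = (evens - odds) / np.sqrt(2)   # local differences (high frequency)
    return approx, detail

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_1d(x)
# The transform is orthonormal, so signal energy is preserved:
# sum(x**2) == sum(a**2) + sum(d**2)
```

Applying `haar_1d` recursively to the approximation coefficients yields the multiresolution pyramid mentioned above.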
"Visual textures" are regions of images that exhibit some form of spatial regularity. They include the
so-called "regular" or "quasi-regular" textures and "stochastic" textures, possibly deformed either
in the domain or in the range. Analysis of textures has been used for image and video representation
[4], while synthesis has proven useful for image super-resolution [11], hole-filling [9] and
compression [5]. For such applications, large textures carry a high cost in storage and computation.
State-of-the-art texture descriptors such as [7] are computationally prohibitive to use on large
textures. These issues are especially acute for video, where the amount of data is significantly
larger. Typically, these descriptors, as well as texture synthesis algorithms, assume that the input
texture is small, yet large enough to compute statistics that are representative of the entire texture
domain. Few works in the literature deal with how to infer this smaller texture automatically from
an image, and even fewer from a video. In most cases, it is assumed that the input texture is given,
usually by manually cropping a larger one. [7] proposes an inverse texture synthesis algorithm
where, given an input texture I, a compaction is synthesized that allows subsequent re-synthesis of
a new instance Î. The method achieves good results, but it is semi-automatic, since it relies on
external information, such as a control map (e.g. an orientation field or other contextual
information) to synthesize time-varying textures, and on manual adjustment of the scale of the
neighborhoods sampled from the compaction.
Here an alternative scheme is proposed, which avoids using any external information by
automatically inferring a compact time-varying texture representation. The algorithm also
automatically determines the scale of the local neighborhoods, which is necessary for texture
synthesis [9]. Since our representation consists of samples from the input texture, for applications
such as classification [7] we are able to avoid the synthesis biases that affect other methods [3].
Our contributions are to:
i. Summarize an image/video into a representation that requires significantly less storage than the
input.
ii. Use our representation for synthesis using the texture optimization technique.
iii. Extend this framework to video using a causal scheme and show results for multiple
time-varying textures; synthesize multiple textures simultaneously on video without explicitly
computing a segmentation map, unlike [4], which is useful for hole-filling and video compression.
iv. Propose a criterion, the "Texture Qualitative Criterion" (TQC), that measures structural and
statistical dissimilarity between textures.
1.2 BACKGROUND INFORMATION
The goal of texture representation or texture feature extraction is to transform the input
texture image into a feature vector that describes the properties of a texture, facilitating subsequent
tasks such as texture classification, as illustrated in Fig. 4. Since texture is a spatial phenomenon,
texture representation cannot be based on a single pixel, and generally requires the analysis of patterns
over local pixel neighborhoods. Therefore, a texture image is first transformed to a pool of local features,
which are then aggregated into a global representation for an entire image or region. Since the properties
of texture are usually translationally invariant, most texture representations are based on an orderless
aggregation of local texture features, such as a sum or max operation.
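The pipeline above, local features first, then an orderless aggregation, can be sketched minimally as follows (the patch feature here is just mean and standard deviation of intensity, an illustrative choice; real systems use far richer descriptors):

```python
import numpy as np

def local_features(image, patch=3):
    """Extract a simple 2-D feature (mean, std of intensity) from
    every patch x patch neighborhood of a grayscale image, since
    texture cannot be measured from a single pixel."""
    h, w = image.shape
    feats = []
    for i in range(h - patch + 1):
        for j in range(w - patch + 1):
            window = image[i:i + patch, j:j + patch]
            feats.append([window.mean(), window.std()])
    return np.array(feats)

def orderless_pool(feats):
    """Aggregate local features with an orderless (mean) operation,
    so the global descriptor ignores where each patch occurred."""
    return feats.mean(axis=0)

img = np.arange(25, dtype=float).reshape(5, 5)
desc = orderless_pool(local_features(img))   # 2-D global descriptor
```

Replacing the mean pool with a max or sum gives the other orderless aggregations mentioned above.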
Texture analysis can be divided into four areas: classification, segmentation, synthesis, and shape from
texture. Texture classification deals with designing algorithms for declaring a given texture region or
image as belonging to one of a set of known texture categories of which training samples have been
provided. Texture classification may also be a binary hypothesis testing problem, such as
differentiating a texture as being within or outside of a given class, for example distinguishing between
healthy and pathological tissues in medical image analysis. The goal of texture segmentation is to
partition a given image into disjoint regions of homogeneous texture. Texture synthesis is the process of
generating new texture images which are perceptually equivalent to a given texture sample. As
textures provide powerful shape cues, approaches for shape from texture attempt to recover the
three-dimensional shape of a textured object from its image. It should be noted that the concept of
"texture" may have different connotations or definitions depending on the given objective. Classification,
segmentation, and synthesis are closely related and widely studied, with shape from texture receiving
comparatively less attention. Nevertheless, texture representation is at the core of these four problems.
It is generally agreed that the extraction of powerful texture features plays a relatively more important
role, since if poor features are used even the best classifier will fail to achieve good results. While this
survey is not explicitly concerned with texture synthesis, studying synthesis can be instructive, for
example, classification of textures via analysis by synthesis in which a model is first constructed for
synthesizing textures and then inverted for the purposes of classification. Texture is an important
parameter for analyzing medical images. The simplicity of GLCM-based texture inspection motivates
researchers to use it for diagnosis from medical images. Texture analysis can extract information from
magnetic resonance images (MRI) that is not accessible visually. In the literature, the GLCM has been
used to analyze the texture pattern in brain images; this technique has been applied to brain images of
patients suffering from Alzheimer's disease. The application of the GLCM is not limited to MRI
images; it has also proved helpful in the detection of other health-related conditions. The features
calculated from the GLCM constructed at the 0° orientation are best suited to discriminate mass
tissues. The GLCM is now combined with other methods to extend its scope and features; for example,
Gabor filters and the GLCM are combined to obtain improved quantification of texture features.
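A minimal sketch of the GLCM at the 0° orientation (offset of one pixel to the right) and one Haralick feature may make the idea concrete; the tiny 4-level image below is illustrative:

```python
import numpy as np

def glcm_0deg(img, levels):
    """Gray-level co-occurrence matrix for horizontally adjacent
    pixels (offset (0, 1), i.e. the 0-degree direction)."""
    glcm = np.zeros((levels, levels), dtype=float)
    for row in img:
        for a, b in zip(row[:-1], row[1:]):
            glcm[a, b] += 1          # count each left->right pair
    glcm /= glcm.sum()               # normalise to joint probabilities
    return glcm

def contrast(glcm):
    """Haralick contrast: sum over (i - j)^2 * p(i, j); large when
    neighbouring pixels differ strongly in gray level."""
    i, j = np.indices(glcm.shape)
    return np.sum((i - j) ** 2 * glcm)

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = glcm_0deg(img, levels=4)
c = contrast(P)
```

Other directions (45°, 90°, 135°) are obtained the same way with different pixel offsets, and further Haralick features (energy, homogeneity, correlation) are sums over the same matrix.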
Texture representation, together with texture classification, forms the primary focus of this survey.
As a classical pattern recognition problem, texture classification consists of two critical
sub-problems: texture representation and classification.
1.3 TECHNOLOGY
1. Deep Learning
Deep learning-based methods have been explored for texture representation and have gained traction
in the research community. Texture/material recognition is generally challenging in that it demands an
orderless representation of micro-structures (i.e., texture encoding). Previous research generally
combines concatenated global CNN activations with a fully connected layer, which fails to meet the
need for a geometry-invariant representation describing local feature distributions. To overcome this
drawback, a Fisher-vector CNN descriptor has been proposed, which significantly boosts performance
for texture-analysis-related vision tasks.
Owing to their deep architectures and large parameter sets, convolutional neural networks are
displacing traditional methods in image recognition as state-of-the-art methods in an increasing
number of applications. Training large and complex convolutional neural networks completely from
scratch can be prohibitively costly, even assuming sufficient data for training are available in the first place.
Therefore, the use of pretrained networks is a potentially attractive approach to automated image
recognition. Texture analysis in images is important in a wide range of industries. It is not precisely
defined, since image texture is not precisely defined, but intuitively, image texture analysis attempts to
quantify qualities such as roughness, smoothness, heterogeneity, regularity, etc., as a function of the
spatial variation in pixel intensities. In materials, image texture analysis can be used to derive
quantitative descriptors of the distributions of the orientations and sizes of grains in polycrystalline
materials. Almost all engineering materials have texture, which is strongly correlated with their
properties, such as mechanical strength, resistance to stress corrosion cracking and radiation damage,
etc. In this sense, image textures and textures in materials are closely related. In the case of
metalliferous ores or rocks, image texture provides critical information with regard to the response of
the materials during mining and mineral processing [1,2]. For example, more energy is generally
required to liberate finely disseminated minerals from ores, i.e., ores with fine textures, than is the
case with ores with coarse textures.
2. Texture Representation and Pipeline Analysis
The goal of a general texture analysis pipeline is to transform an image patch or an entire image
into a compact feature vector that describes texture structures or properties. The extracted feature
vectors are then adapted for visual tasks under different scenarios. The transformation from image
pixels to feature vectors usually involves two major steps: feature extraction and feature encoding.
Feature extraction: local descriptors, robust against rotations and translations of the image, provide
discriminative features for describing local image regions.
Feature encoding: a texton dictionary (codebook) is learned, and local texture representations are
pooled against it into a global feature representation of the image.
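A common concrete instance of this encoding step is a bag-of-textons: cluster local features with k-means to form the dictionary, then pool hard assignments into a normalised histogram. The sketch below uses plain k-means and random 2-D features purely for illustration:

```python
import numpy as np

def build_codebook(features, k, iters=20, seed=0):
    """Learn a texton dictionary with plain k-means (a simple,
    common choice; real systems may use much larger dictionaries)."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = features[labels == c].mean(axis=0)
    return centers

def encode(features, centers):
    """Hard-assign each local feature to its nearest texton and pool
    the assignments into a normalised histogram, giving an orderless
    global representation of the image."""
    d = np.linalg.norm(features[:, None] - centers[None], axis=2)
    hist = np.bincount(d.argmin(axis=1), minlength=len(centers))
    return hist / hist.sum()

rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0, 0.1, (50, 2)),    # stand-in local features
                   rng.normal(5, 0.1, (50, 2))])
codebook = build_codebook(feats, k=2)
h = encode(feats, codebook)   # global histogram descriptor
```

Soft assignments or Fisher-vector statistics replace the hard `argmin` in more powerful encoders.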
A computational pipeline is an integral part of Texture Analysis study in Medical Imaging. It was
developed to combine texture analysis and pattern classification algorithms for investigating
associations between high-resolution MRI / MRE features and clinical patient data, and also between
MRI / MRE features and histological data [2]. A typical Pipeline design structure consists of three
main stages i.e., Preprocessing, Feature extraction and Analysis. Figure 5 illustrates the pictorial
representation of a medical imaging pipeline.
3. Food Texture Analyzer (FRTS Series)
This is a food texture analyzer which quantifies food texture. It allows measurements to be performed
by simply selecting a food or standard and following the guidance on the touch screen. The supplied
software enables visual analysis of the measurement results by graphing. It quantifies texture as force
to evaluate the textural properties of food such as firmness, tackiness, cohesion, elasticity, etc.
It also reduces testing time: selecting a food sample or a test standard and a preset condition from the
touch screen confirms the measuring conditions. It is easy to perform food measurements complying with the
corresponding part of ISO and more.
4. Best First Technique
It is a selection technique that combines both forward selection and backward elimination rules. It is a
method that does not simply terminate when the performance starts to drop, but keeps a list of all
attribute subsets evaluated so far, sorted in order of the performance measure, so that it can revisit an
earlier configuration instead. Given enough time it will explore the entire space, unless this is
prevented by some kind of stopping criterion. It can search forward from the empty set of attributes,
backward from the full set, or start at an intermediate point and search in both directions by
considering all possible single-attribute additions and deletions.
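The "sorted list of all subsets evaluated so far" maps naturally onto a priority queue. Below is a sketch of the forward variant; `evaluate` is an assumed user-supplied scoring function, and the toy score at the end is purely illustrative:

```python
import heapq
from itertools import count

def best_first(all_feats, evaluate, max_evals=100):
    """Forward best-first feature-subset search. Every evaluated
    subset stays in a priority queue ordered by score, so the search
    can revisit an earlier configuration instead of stopping when
    performance first drops."""
    tie = count()                     # tiebreaker so the heap never compares sets
    start = frozenset()
    best = start
    best_score = evaluate(start)
    frontier = [(-best_score, next(tie), start)]   # max-heap via negated score
    seen = {start}
    evals = 1
    while frontier and evals < max_evals:          # stopping criterion
        _, _, subset = heapq.heappop(frontier)     # most promising subset so far
        for f in sorted(all_feats - subset):       # all single-attribute additions
            cand = subset | {f}
            if cand in seen:
                continue
            seen.add(cand)
            s = evaluate(cand)
            evals += 1
            if s > best_score:
                best_score, best = s, cand
            heapq.heappush(frontier, (-s, next(tie), cand))
    return best, best_score

# Toy score: features 'a' and 'c' are useful, the others are noise.
score = lambda s: len(s & {'a', 'c'}) - 0.1 * len(s - {'a', 'c'})
subset, val = best_first({'a', 'b', 'c', 'd'}, score)
```

The backward variant is symmetric: start from the full set and generate single-attribute deletions instead.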
5. Greedy Stepwise Technique
Greedy stepwise searches greedily through the space of attribute subsets. Like best first technique it
may progress forward from the empty set or backward from the full set. Unlike best first technique, it
does not backtrack but terminates as soon as adding or deleting the best remaining attribute
decreases the evaluation metric.
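The contrast with best-first search can be seen in a short sketch of greedy forward selection; as above, `evaluate` is an assumed user-supplied scoring function and the toy score is illustrative:

```python
def greedy_forward(all_feats, evaluate):
    """Greedy stepwise (forward) selection: repeatedly add the single
    best remaining attribute, and terminate as soon as no addition
    improves the score. Unlike best-first search, it never backtracks."""
    selected = set()
    score = evaluate(selected)
    while True:
        gains = [(evaluate(selected | {f}), f)
                 for f in sorted(all_feats - selected)]
        if not gains:
            break                        # all attributes already selected
        best_score, best_f = max(gains)
        if best_score <= score:
            break                        # stop at the first non-improvement
        selected.add(best_f)
        score = best_score
    return selected, score

# Toy score: features 'a' and 'c' are useful, the others are noise.
score_fn = lambda s: len(s & {'a', 'c'}) - 0.1 * len(s - {'a', 'c'})
sel, val = greedy_forward({'a', 'b', 'c', 'd'}, score_fn)
```

The backward variant starts from the full set and deletes the least useful attribute each round instead.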
The goal of texture analysis is to transform an image patch or an entire image into a compact
feature vector that describes texture structures or properties. The extracted feature vectors are then
adapted for visual tasks under different scenarios. The transformation from image pixels to feature
vectors usually involves two major steps: feature extraction and feature encoding.
Almost all engineering materials have texture, which is strongly correlated with their properties,
such as mechanical strength, resistance to stress corrosion cracking and radiation damage, etc. In this
sense, image textures and textures in materials are closely related. In the case of metalliferous ores
or rocks, image texture provides critical information with regard to the response of the materials
during mining and mineral processing.
6. Data Preprocessing and Data Cleaning
One of the most important steps of model building is preprocessing of the dataset. For data
preprocessing, a good understanding of the dataset is very important. Data preprocessing consists of
the following steps: data cleaning, transformation, normalization, feature extraction and feature
selection. The data preprocessing step is considered important as it can have a significant impact on
how a supervised machine learning algorithm performs.
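The cleaning and normalization steps above can be sketched as follows. This is a minimal NumPy illustration, assuming missing values are encoded as NaN and that z-score normalization is the chosen scheme:

```python
import numpy as np

def clean_and_normalize(X):
    """Replace missing values (NaN) with the column mean (data cleaning),
    then z-score normalize each column (normalization)."""
    X = np.array(X, dtype=float)
    col_mean = np.nanmean(X, axis=0)
    nan_rows, nan_cols = np.where(np.isnan(X))
    X[nan_rows, nan_cols] = col_mean[nan_cols]   # fill missing entries
    mu, sigma = X.mean(axis=0), X.std(axis=0)
    sigma[sigma == 0] = 1.0                      # guard against constant columns
    return (X - mu) / sigma
```

After this step each feature column has zero mean and unit variance, which many supervised learners benefit from.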
7. Automatic Transmit Power Control
Cable Free microwave links feature Automatic Transmit Power Control (ATPC), which
automatically increases the transmit power during "fade" conditions such as heavy rainfall. ATPC
can be used separately from ACM or together with it to maximise link uptime, stability and
availability. When the "fade" conditions (rainfall) are over, the ATPC system reduces the transmit
power again. This reduces the stress on the microwave power amplifiers, which reduces power
consumption and heat generation and increases expected lifetime (MTBF). Automatic Transmit
Power Control is a key technology to ensure reliable transmission in all weather conditions,
especially in regions with high rainfall and on longer links with fading/ducting [43].
8. Intra-Chip Free-Space Optical Interconnect
Optical interconnect is a promising long-term solution. However, while significant progress in
optical signaling has been made in recent years, networking issues for on-chip optical interconnect
still require much investigation. Taking the underlying optical signaling systems as a drop-in
replacement for conventional electrical signaling while maintaining conventional packet-switching
architectures is unlikely to realize the full potential of optical interconnects. This work proposes
and studies the design of a fully distributed interconnect architecture based on free-space optics.
The architecture leverages a suite of newly developed or emerging devices, circuits, and optics
technologies. The interconnect avoids packet relay altogether, offers ultra-low transmission latency
and scalable bandwidth, and provides fresh opportunities for coherency substrate designs and
optimizations [44].
9. Smart and cognitive radios
Cognitive radios (CRs) can scan and analyze their environment and adapt their transmission/reception
parameters to better convey and protect transmitted data [19]. CRs can be divided into two main
categories: smart individual radios and smart networks (largely considered as cognitive radios). A
smart radio can dynamically be auto-programmed and configured. Smart networks optimize the
total use of the available physical resources among their members. In the case of a cognitive radio,
the main decision function can be made in the central unit while the scanning and analysis procedures
can be done in each individual unit (a transmission unit can be assigned to a primary or a secondary
user). In order to optimally share the physical resources, CR classifies the transmitters (the users)
into two categories: primary and secondary users. A primary user (PU) is the user holding a license
for a defined spectrum. He is allowed to use his bandwidth at any time as long as he respects the
coverage area and the transmission power. As many primary users do not broadcast all the time, their
protected bandwidths are not used optimally. Therefore, an opportunistic user (i.e. a secondary user
(SU)) can use the best available bandwidth as long as his signal does not interfere with the signal of
the PU at any
time [47].
Feature extraction methods that are robust against rotations and translations of images are able to
provide discriminative features for describing local image regions.
10. PSTN
The public switched telephone network, or PSTN, is the world's collection of
interconnected voice-oriented public telephone networks. It is a combination of telephone networks
used worldwide, including telephone lines, fiber optic cables, switching centers, cellular networks,
satellites and cable systems [49].
11. Visible Light Communication(VLC)
VLC, a subset of optical wireless communication (OWC), has emerged as a promising technology in
the past decade. VLC based on LEDs or laser diodes (LDs) can be a promising solution for upcoming
high-density and high-capacity 5G wireless networks. IoT is becoming increasingly important because
it allows a large number of devices to be connected for sensing, monitoring, and resource sharing,
and VLC could play an important role to this end [51]. VLC technology offers 10,000 times more
bandwidth capacity than RF-based technologies. Moreover, it is not harmful to humans. LEDs can
switch between different light intensity levels at a very fast rate, which allows data to be modulated
through the LED at a speed that the human eye cannot detect [49].
12. UWSN
Underwater wireless sensor networks (UWSNs) are envisioned to enable applications for a
wide variety of purposes such as tsunami warnings, offshore exploration, tactical surveillance,
monitoring of oil and gas spills, assisted navigation, pollution monitoring, and many commercial
purposes. To make those applications viable, there is a need to enable communications among
underwater devices. One proposed approach is the distance based segmentation (DBS) clustering
protocol, which is a variant of the LEACH protocol.
1.4 METHODOLOGY
i. Statistical Methods, in which the texture is characterized by the statistical distribution of
intensity values. Examples of these methods are the histogram, GLCM, and run length matrix.
ii. Structural Methods, where the texture is characterized by feature primitives and their spatial
arrangements.
iii. Mathematical model based Methods, such as fractal models, which usually generate an
empirical model of each pixel in the image based on the weighted average of the pixel intensities in
its neighborhood.
iv. Transform based Methods, where the image is converted into a new form using the spatial
frequency properties of the pixel intensity variations. Some examples of this method are the Wavelet
Transform, Fourier Transform and S-transform.
Each of these methodologies is briefly described below:
(a) Statistical Methods: In statistical methods, texture is described by a collection of
statistics of selected features. The statistical approach to texture analysis primarily describes the
texture of regions in an image using higher order moments of their grayscale histograms [2]. Selecting
various textural features from a gray level co-occurrence matrix (GLCM) is apparently the most
commonly cited method for texture analysis [3]. In addition to the traditional statistical texture
analysis methods, multivariate statistical techniques have also been considered for extraction of
textural features. If we consider an image as a matrix, the singular value decomposition (SVD)
spectrum of the image texture is a summary vector represented by its singular values [3].
Alternatively, the run length matrix (RLM) includes higher-order statistics of the gray level
histogram of an image. The RLM approach to texture analysis distinguishes fine textures of an
image as having few pixels in a constant gray level run and coarse textures as having many pixels in
such a run [3].
(i) Histograms:
In digital images, the allowed values for the grey level that can be given to a pixel are limited. The
grey value is usually an integer ranging from 0 to 2^b − 1, where b denotes the number of bits of the
image [22]. The histogram of an image is drawn by counting the number of pixels in the image that
possess a given grey-level value. For example, in a 12-bit image the histogram may be represented
by a graph whose x-coordinates range from 0 to 4095 and whose y-coordinates represent the
corresponding pixel counts [22]. From the histogram many parameters may be derived, such as its
mean, variance and percentiles.
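As a brief sketch, the histogram and its derived parameters can be computed with NumPy; the random 8-bit image below is an assumed stand-in for real data:

```python
import numpy as np

# Synthetic 8-bit image: grey values range from 0 to 2^8 - 1 = 255.
rng = np.random.RandomState(0)
image = rng.randint(0, 256, size=(64, 64))

# Count the pixels possessing each grey-level value (one bin per level).
hist, _ = np.histogram(image, bins=256, range=(0, 256))
levels = np.arange(256)
p = hist / hist.sum()                              # normalized histogram
hist_mean = np.sum(levels * p)                     # mean grey level
hist_var = np.sum((levels - hist_mean) ** 2 * p)   # variance
p50 = np.percentile(image, 50)                     # 50th percentile (median)
```

Because the bins align with the integer grey levels, the histogram-derived mean and variance agree exactly with the pixel statistics.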
(ii) Run Length Matrix:
The run-length matrix is a technique where we search the image, always along a particular
direction, for runs of pixels that have the same grey-level value. Given a particular direction (for
example, the vertical direction), the run-length matrix computes, for each allowed grey-level value,
how many runs there are of, for example, 2 consecutive pixels with the same grey-level value. Next it
repeats the same for 3 consecutive pixels, then for 4, 5 and so on [22]. Thus, from a single image,
typically four matrices are generated, for the vertical, horizontal and two diagonal directions [22].
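The run counting described above can be sketched for one direction (horizontal) as follows; this is an illustrative implementation written from the description, not code from the report:

```python
import numpy as np

def run_length_matrix(image, levels, max_run=None):
    """Grey-level run-length matrix for horizontal runs.
    rlm[g, r - 1] counts the runs of length r having grey level g."""
    image = np.asarray(image)
    max_run = max_run or image.shape[1]
    rlm = np.zeros((levels, max_run), dtype=int)
    for row in image:
        run_val, run_len = row[0], 1
        for v in row[1:]:
            if v == run_val:
                run_len += 1            # run continues
            else:
                rlm[run_val, run_len - 1] += 1
                run_val, run_len = v, 1  # new run starts
        rlm[run_val, run_len - 1] += 1   # close the final run of the row
    return rlm
```

Applying the same function to the transposed image gives the vertical matrix; diagonal directions need a diagonal traversal.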
(iii) Haralick’s co-occurrence matrix: The Haralick co-occurrence matrix is a method that
helps us gather statistical information about an image or an image ROI based on the distribution of
its pixels. It is calculated by defining a direction and a distance, i.e., by considering the pairs of
pixels separated by this distance along that direction. The co-occurrence matrix is an Ng × Ng
matrix, where Ng is the total number of gray levels in the image. The element (i, j) of the matrix
is produced by counting the total occasions a pixel with value i is adjacent to a pixel with value j
and then subsequently dividing the whole matrix by the total number of such comparisons that are
made. Each entry in the resulting matrix is therefore the probability that a pixel with value i is
to be found adjacent to a pixel of value j.
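This computation can be sketched as follows; the one-pixel horizontal offset and the symmetric counting convention are assumptions for illustration:

```python
import numpy as np

def glcm(image, levels, offset=(0, 1), symmetric=True):
    """Grey-level co-occurrence matrix for the given pixel offset,
    normalized so each entry is a joint probability."""
    image = np.asarray(image)
    dr, dc = offset
    P = np.zeros((levels, levels), dtype=float)
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                P[image[r, c], image[r2, c2]] += 1   # count the pixel pair
    if symmetric:
        P += P.T                  # count each pair in both orders
    P /= P.sum()                  # divide by the total number of comparisons
    return P

def contrast(P):
    """One classic Haralick feature derived from the matrix."""
    i, j = np.indices(P.shape)
    return float(np.sum(P * (i - j) ** 2))
```

A constant image yields zero contrast, while alternating columns give the maximum for two grey levels.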
(b) Structural Methods: This texture analysis technique characterizes a texture as the
combination of well-defined texture elements such as regularly spaced parallel lines. The image
texture is defined by the properties and placement rules of the texture elements. Different structural
texture analysis approaches have been recommended, ranging from utilizing different shapes of
structuring elements to understanding real textures as distorted versions of ideal textures. However,
as far as practical application of these methods is concerned, they are in limited use since they can
only describe very regular textures [5].
(c) Mathematical Model Based Methods: In this approach to texture analysis, a texture in an
image is represented using sophisticated mathematical models (such as stochastic or fractal models).
The model parameters are estimated and used for the image analysis [72]. Mathematical model based
texture analysis techniques generate an empirical model of each pixel in the image based on a
weighted average of the pixel intensities in its neighborhood [7]. The disadvantage of these models
is that the estimation of their parameters is computationally very complex [2]. The estimated
parameters of the image models are used as textural feature descriptors. Examples of such
model-based texture descriptors are autoregressive (AR) models, Markov random fields (MRF) and
fractal models [8].
(d) Transform based Methods: Finally, the transform-based texture analysis method converts
the image into a new form by using the spatial frequency properties of the pixel intensity variations.
The success of these modern techniques is largely due to the type of transform they use to extract
textural features from the image. In this method the texture properties of an image may be analyzed
in the scale space or the frequency space. Transform based methods are based on the Gabor, Fourier,
or Wavelet transforms [1].
(i) Wavelet Transform:
The wavelet transform is a spatial/frequency analytical tool that has been used extensively during
the past ten years and has been an active area of research. The wavelet transform is a traditional
pyramid-type transform that decomposes signals into sub-signals in low frequency channels [8].
However, a drawback is that the most significant information of a textured image often appears in the
middle frequency channels, so the conventional wavelet transform does not work properly in the
texture context. To rectify this drawback, the transform is modified and an energy function is used
to characterize the strength of a sub-signal contained in a frequency channel requiring further
decomposition. This idea leads to the formation of the tree-structured wavelet transform.
Fig 1.1: Wavelet transform of the image
Figure 1.1 shows an example of a wavelet transform of an image. The top left corner depicts the
low-frequency, small-scale version of the original image, whereas the other sub-images in Figure 1.1
represent higher-frequency versions of the original image at different scales [2]. An example of a
parameter derived from the wavelet transform is the wavelet energy associated with a given scale and
a given direction. This parameter gives a measure of the frequency content of the image on a given
scale and in a given direction [2].
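One level of the decomposition and the per-sub-band wavelet energy can be sketched with the Haar wavelet; the orthonormal scaling below is an implementation choice, and sub-band naming conventions vary between libraries:

```python
import numpy as np

def haar2d(image):
    """One level of an orthonormal 2-D Haar decomposition.
    Returns (LL, LH, HL, HH): low-frequency approximation plus
    horizontal, vertical and diagonal detail sub-bands."""
    a = image[0::2, 0::2].astype(float)
    b = image[0::2, 1::2].astype(float)
    c = image[1::2, 0::2].astype(float)
    d = image[1::2, 1::2].astype(float)
    LL = (a + b + c + d) / 2.0   # small-scale version of the image
    LH = (a - b + c - d) / 2.0
    HL = (a + b - c - d) / 2.0
    HH = (a - b - c + d) / 2.0
    return LL, LH, HL, HH

def wavelet_energy(sub):
    """Energy of one sub-band: the feature described in the text."""
    return float(np.sum(sub ** 2))
```

Because the scaling is orthonormal, the four sub-band energies sum to the total energy of the image, so the energies describe how that energy is distributed over scale and direction.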
(ii) S-Transform
The S-transform (ST) relates closely to the continuous wavelet transform as it uses the complex Morlet
mother wavelet and therefore it measures directly the local frequency composition in an image for
each and every pixel. The S-transform has been successful in analyzing signals in various
applications, such as ground vibrations, seismic recordings, gravitational waves, power system
analysis and hydrology.
The 1D S-transform has proved to be a useful tool for analyzing medical signals, such as laser
Doppler flowmetry, EEG and functional magnetic resonance imaging. The S-transform works
satisfactorily for texture analysis of images in the medical field due to its optimum space-frequency
resolution and close connection to the Fourier transform (FT). A drawback of S-transform analysis
for 2D images, however, has been its redundant nature. In order to calculate and store the texture
features of large medical images, extensive calculation time and a large memory space are required
[8]. As a result, the S-transform of a 256×256 MR image takes almost one and a half hours to
calculate on one computer, with memory requirements of almost 32 GB [8].
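To make the redundancy concrete, here is a sketch of the 1D discrete S-transform computed via the standard frequency-domain (FFT-based) formulation; it is illustrative only, and the output already stores one complex row per positive frequency, i.e. O(N²) values for an N-sample signal, which is what makes the 2D case so expensive:

```python
import numpy as np

def stockwell_1d(x):
    """Discrete 1-D S-transform (frequency-domain formulation).
    Returns an (N//2 + 1, N) complex matrix: one voice per frequency."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    X = np.fft.fft(x)
    m = np.fft.fftfreq(N) * N                 # signed frequency indices
    S = np.zeros((N // 2 + 1, N), dtype=complex)
    S[0, :] = x.mean()                        # zero-frequency voice = mean
    for n in range(1, N // 2 + 1):
        gauss = np.exp(-2.0 * np.pi ** 2 * m ** 2 / n ** 2)  # Gaussian window
        S[n, :] = np.fft.ifft(np.roll(X, -n) * gauss)        # shift and localize
    return S
```

For a pure sinusoid the voice with the largest average magnitude is the one at the sinusoid's frequency, which is a quick sanity check of the implementation.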
(iii) Discrete Orthonormal S Transform (DOST)
The discrete orthonormal space-frequency transform (DOST) is a relatively new and effective
approach for describing an image texture [8]. In order to obtain a rotationally consistent set of
texture features, the DOST components can be combined together, which in turn accurately
distinguishes between a series of texture patterns [8]. The DOST is highly efficient, as it provides
the multi-scale information and computational efficiency of wavelet transforms while providing the
texture features as Fourier frequencies. It outperforms other leading wavelet-based texture analysis
techniques and is more efficient than the classical Haralick co-occurrence matrix [8].
(iv) Fast Time Frequency Transform (FTFT)
FTFT is a method developed by Chun Hing Cheng and Ross Mitchell from the Mayo Clinic. It is a
fast and accurate way to generate a highly compressed form of the S-transform values directly. It is
used when N is so large that we cannot compute and store the ST values first. It encodes the
time-frequency representation (TFR) information uniformly and so can be used for analyzing the TFR
correctly and processing the data efficiently and effectively. The compression that FTFT provides
can help with the storage, transmission and visualization of the S-transform, whose values can be
calculated using FTFT. It is highly efficient, as it provides the multi-scale information and
computational efficiency of wavelet transforms while providing the texture features as Fourier
frequencies, and it improves on other leading techniques.
1.5 ALGORITHMS
Fig 1.2: Flowchart of texture image classification [12]
Figure 1.2 shows the flowchart of texture image classification. Texture classification means the
assignment of a sample image to a previously defined texture group. This classification usually
involves a two-step process.
A. The first stage, the feature extraction phase:
In this stage, textural properties are extracted. The goal is to create a model for each one of the
textures that exist in the training set. The features extracted at this stage can be numerical values,
discrete histograms, empirical distributions, or texture properties such as contrast, spatial
structure, direction, etc. These features are used for training the classifier. So far, many ways to
categorize texture have been proposed, and the efficiency of these methods depends to a great extent
on the type of features extracted. The most common ones can be divided into four main groups:
"statistical methods", "structural methods", "model-based methods" and "transform methods", each of
which extracts various features of the texture [3, 4]. It is worth noting that today it is difficult
to place some methods in a particular group, due to the complexity of the methods and their use of
combined properties, because most of them fall into several groups. The widely used and popular
feature extraction methods were described in detail in the preceding section.
B. The second stage, the classification phase:
In this phase, the test sample image texture is first analyzed using the same technique used in
In this phase, the test sample image texture is first analyzed using the same technique used in the
previous step and then, using a classification algorithm, the extraction features of the test image are
compared with the train imagery and its class is determined. The general flowchart of methods for the
texture images classification is indicated in Fig 1.2, based on the two preceeding phases.
In the second stage, the texture classification is based on the use of supervised machine learning
(classification) algorithms: the appropriate class for each image is selected by comparing the
feature vector extracted in the training phase with the feature vector of the test image, and its
class is determined. This step is repeated for each image in the test phase. At the end, the
estimated classes are matched against their actual classes and the recognition rate is calculated,
which indicates the efficiency of the implemented method; the recognition rate of each algorithm is
used to compare its efficiency with other available methods.
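The recognition rate described above is simply the fraction of test images whose estimated class matches the actual class; as a minimal sketch:

```python
import numpy as np

def recognition_rate(predicted, actual):
    """Fraction of test images whose estimated class equals the actual class."""
    predicted, actual = np.asarray(predicted), np.asarray(actual)
    return float((predicted == actual).mean())
```

For example, three correct labels out of four test images give a recognition rate of 0.75.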
Machine Learning Algorithms:
a) Decision Trees (J48 & Random Forest):
A decision tree is a simple yet widely used classification technique. Decision trees follow a
nonparametric approach to building classification models. In other words, they do not require any
previous assumptions regarding the type of probability distributions that the class and other
attributes should satisfy. In a decision tree, every leaf node has an assigned class label. Attribute
test conditions are used to separate records having different characteristics at the non-terminal
nodes, which consist of the root node and other internal nodes. Decision trees, especially
smaller-sized trees, are relatively easy to interpret. They are quite robust to the presence of
noise, especially when methods for avoiding overfitting are employed. The accuracy of a decision
tree is not adversely affected by the presence of redundant attributes.
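As an illustration, a random forest (an ensemble of decision trees) can be applied directly with scikit-learn; the feature vectors below are synthetic stand-ins for extracted texture features, not data from this report:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for texture feature vectors of two classes.
rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)   # recognition rate on held-out samples
```

On well-separated classes like these, the held-out recognition rate is close to 1.0.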
b) ADA Boost:
AdaBoost is an iterative technique that adaptively changes the distribution of training samples,
which helps the base classifiers to concentrate on examples that are difficult to classify. The
AdaBoost algorithm assigns equal weights to all instances in the training data at the beginning. It
then calls the learning algorithm to develop a classifier for this data and reweights each instance
according to the classifier output, so that the weight of instances that were correctly classified is
decreased and that of misclassified ones is increased.
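The reweighting step can be sketched as follows; the exponential update with classifier weight alpha is the standard AdaBoost form, and the numbers in the usage are illustrative:

```python
import numpy as np

def adaboost_reweight(weights, correct, alpha):
    """One AdaBoost-style weight update: decrease the weights of correctly
    classified instances, increase those of misclassified ones, then
    renormalize the weights to sum to 1."""
    correct = np.asarray(correct)
    w = weights * np.exp(np.where(correct, -alpha, alpha))
    return w / w.sum()
```

Starting from equal weights, a single misclassified instance ends up with a larger weight than any correctly classified one, so the next base classifier focuses on it.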
c) Bagging:
Bagging is also known as bootstrap aggregating. It is a technique that repeatedly samples from a
dataset, with replacement, in accordance with a uniform probability distribution. Every bootstrap
sample has the same size as the original dataset. Since the sampling is done with replacement, some
of the instances may appear more than once in the same training
set, while others might get eliminated from the training set. Bagging improves the generalization
error by reducing the variance of the base classifiers. The stability of the base classifier decides the
performance of the bagging method. Bagging does not focus on any particular instance of the training
data, since every sample has an equal probability of being selected. It is therefore less prone to
over-fitting the model when applied to noisy data.
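The bootstrap sampling step can be sketched as follows; with replacement, roughly 63% of the original instances are expected to appear in each sample, while the rest are eliminated:

```python
import numpy as np

def bootstrap_sample(X, y, rng):
    """Draw one bootstrap sample: same size as the original dataset,
    sampled uniformly with replacement, so some instances repeat
    and others drop out."""
    idx = rng.randint(0, len(X), size=len(X))
    return X[idx], y[idx]
```

A bagging ensemble then trains one base classifier per bootstrap sample and aggregates their votes.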
d) Support Vector Machines (SVM):
A classification technique that has received considerable attention is the support vector machine (SVM).
This technique originated from statistical learning theory and has shown promising
results in many practical applications. SVM works well with high-dimensional data and is not
affected by the dimensionality problem. SVM performs capacity control by maximizing the margin of
the decision boundary. Nevertheless, the user must still provide other parameters, such as the type of
kernel function to use and the cost parameter C that penalizes each slack variable. SVM works well for
binary classification problems. The support vector machine is one of the most popular
supervised learning algorithms and is used for classification as well as regression problems;
primarily, however, it is used for classification problems in machine learning.
The goal of the SVM algorithm is to create the best line or decision boundary that can segregate n-
dimensional space into classes so that we can easily put the new data point in the correct category in
the future. This best decision boundary is called a hyperplane.
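The hyperplane idea can be made concrete: a linear SVM's boundary is w · x + b = 0, points are classified by the sign of w · x + b, and the margin the training procedure maximizes equals 2/||w||. The weight vector and bias below are assumed to be the result of training, not computed here.

```python
import numpy as np

# Sketch of classification with a separating hyperplane w . x + b = 0.
# The weights are hypothetical "learned" values, used only to show how
# the sign of the decision function assigns a class and how the margin
# relates to ||w||.

w = np.array([2.0, -1.0])   # hypothetical learned normal vector
b = -0.5                    # hypothetical learned bias

def classify(x):
    return 1 if np.dot(w, x) + b >= 0 else -1

margin = 2.0 / np.linalg.norm(w)   # width of the maximized margin

print(classify(np.array([1.0, 0.0])), classify(np.array([0.0, 1.0])))  # -> 1 -1
print(round(margin, 3))                                                # -> 0.894
```

Non-linear SVMs apply the same rule after mapping x through a kernel; the choice of that kernel and of C is exactly the user-supplied input mentioned above.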
e) Artificial Neural Network (ANN):
The study of artificial neural network (ANN) got its inspiration from simulation models on biological
neural systems. Similar to the human brain's structure, an ANN comprises an interconnected network of
nodes and directed links. Multilayer neural networks with at least one hidden layer are universal
approximators, i.e., they can be used to approximate any target function. ANNs can handle redundant
features because the weights are automatically learned during the training step. The disadvantage of
ANNs is that they are sensitive to the presence of noise in the training data and that training is a time-
consuming process, especially when the number of hidden nodes is large.
The term "Artificial neural network" refers to a biologically inspired sub-field of artificial intelligence
modeled after the brain. An Artificial neural network is usually a computational network based on
biological neural networks that mimic the structure of the human brain. Just as a human brain
has neurons interconnected with each other, artificial neural networks also have neurons that are
linked to each other in various layers of the network. These neurons are known as nodes.
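A tiny forward pass makes the "network of nodes and directed links" concrete: one hidden layer of two sigmoid nodes feeding a single output node, computing XOR. The weights are set by hand (large values that saturate the sigmoid), not learned by training.

```python
import numpy as np

# Sketch of a multilayer feed-forward network: input -> hidden layer ->
# output node, with hand-chosen weights that realize XOR. Directed links
# are the entries of the weight matrices; nodes apply a sigmoid.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = np.array([[20.0, 20.0],    # hidden node 1 ~ logical OR
               [20.0, 20.0]])   # hidden node 2 ~ logical AND
b1 = np.array([-10.0, -30.0])
W2 = np.array([20.0, -20.0])    # output node ~ OR and not AND
b2 = -10.0

def forward(x):
    h = sigmoid(W1 @ x + b1)    # hidden layer activations
    return sigmoid(W2 @ h + b2) # output node activation

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, round(float(forward(np.array(x, dtype=float)))))
```

Training (e.g. backpropagation) would find such weights automatically; this is what makes the training step time-consuming when the hidden layer is large.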
CHAPTER 2
2.1 LITERATURE SURVEY
1) Haralick et al. proposed a texture descriptor, the gray level co-occurrence matrix (GLCM),
which extracts co-occurring gray scale values and collects their statistics. There are three texture
descriptors: the homogeneous texture descriptor, the non-homogeneous texture descriptor and the
texture browsing descriptor. The homogeneous descriptor describes the directionality and regularity
of patterns in texture images. The non-homogeneous texture descriptor, also known as the edge
histogram descriptor, captures the spatial distribution of edges within an image, and the texture
browsing descriptor provides a coarser description of the texture than that obtained using the
homogeneous texture descriptor. A statistical method of examining texture that considers the
spatial relationship of pixels is the gray-level co-occurrence matrix (GLCM), also known as the
gray-level spatial dependence matrix. The GLCM functions characterize the texture of an image
by calculating how often pairs of pixels with specific values and in a specified spatial relationship
occur in an image, creating a GLCM, and then extracting statistical measures from this matrix.
(The texture filter functions, described in Calculate Statistical Measures of Textures, cannot
provide information about shape, that is, the spatial relationships of pixels in an image.) Haralick
defined the GLCM and a set of 14 different features for classifying the texture of an image. He
developed a spatial relationship of image pixels as a statistical function of the gray level, which is
used as a quantitative measurement for the texture of an image. Julesz first used the spatial
relationship of gray levels and their co-occurrences in statistical form for texture description. The
investigation of Sutton and Hall is also based on the statistics of gray level pairs, but Deutsch and
Belknap presented a more elaborate version of the GLCM which combines the information in 2 × 2
matrices using different separations and orientations between gray level pairs. The GLCM is
calculated by using the frequency of occurrence of image pixels followed by the same or another
gray level. The GLCM created from an input image of dimension 4 × 4, displaying the frequency
of occurrence of gray level pairs, is shown in Figure 1B, and the normalized form of the GLCM is
shown in Figure 1C. The mathematical notation of the statistical features obtained from the
Haralick GLCM is expressed in Equations (1) to (14); more details about the GLCM can be found
in the literature. Here p(i, j) represents the pixel of an image having x rows and y columns with
coordinates (i, j). A gray tone "i" followed by another gray tone "j" at distance "d" with a relative
frequency is expressed as p(i, j).
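The construction described above can be sketched for the horizontal "pixel followed by its right neighbour" relationship (distance d = 1, angle 0°), using a small 4 × 4 image. The sample image is illustrative, not the one in Figure 1B, and only two of Haralick's 14 features are computed.

```python
import numpy as np

# Sketch of GLCM construction: count how often gray tone i is followed
# by gray tone j at offset (0, +1), normalize to relative frequencies,
# then derive statistical features from the matrix.

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
levels = 4

glcm = np.zeros((levels, levels), dtype=int)
for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
    glcm[i, j] += 1            # gray tone i followed by gray tone j

p = glcm / glcm.sum()          # normalized GLCM (relative frequencies)

# two of the 14 Haralick features
ii, jj = np.indices(p.shape)
contrast = np.sum((ii - jj) ** 2 * p)
energy = np.sum(p ** 2)

print(glcm)
print(round(float(contrast), 3), round(float(energy), 3))   # -> 0.583 0.167
```

Changing the offset (e.g. (+1, 0) for 90°, or a larger d) produces the family of matrices over distance and angle that Haralick's features are computed from.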
2) Bela Julesz proposed the concept of the "texton", which stresses the importance of
microstructures (edges, corners and blobs) for pre-attentive human perception and texture
discrimination. In texton-based texture classifiers, texture is viewed as a probabilistic generator of
textons. The underlying probability distribution of the generator is estimated by means of a texton
frequency histogram that measures the relative frequency of textons from the codebook in a texture
image. A texton frequency histogram is constructed from a texture image by scanning over the texture
image and extracting small texture patches. The small texture patches are converted to the image
representation that is used in the codebook in order to obtain a collection of textons. Each extracted
texton is compared to the textons in the codebook in order to identify the most similar texton from the
codebook, and the texton frequency histogram bin corresponding to this texton is incremented. After
normalization, the texton frequency histogram forms a feature vector that models the texture, and
can be used in order to train a classifier. [2]
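The histogram construction above can be sketched directly: each extracted patch is matched to its most similar codebook texton, that texton's bin is incremented, and the histogram is normalized into a feature vector. The codebook and image here are random stand-ins for a learned codebook and a real texture.

```python
import numpy as np

# Sketch of a texton frequency histogram: scan the image, extract small
# patches, assign each to the nearest codebook texton, increment that
# bin, and normalize. Codebook and image contents are illustrative.

rng = np.random.default_rng(1)
codebook = rng.random((8, 9))          # 8 textons, 3x3 patches as 9-vectors
image = rng.random((16, 16))

hist = np.zeros(len(codebook))
for y in range(image.shape[0] - 2):
    for x in range(image.shape[1] - 2):
        patch = image[y:y+3, x:x+3].ravel()
        dists = np.linalg.norm(codebook - patch, axis=1)
        hist[np.argmin(dists)] += 1    # increment the most similar texton's bin

hist /= hist.sum()                      # normalize: relative frequencies
print(hist.round(3))
```

The resulting normalized histogram is the feature vector that a classifier would be trained on; in practice the codebook itself is obtained by clustering patches from training textures.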
3) Leung and Malik introduced "pre-defined filter banks" with various scales and orientations applied to
local regions, and utilized the distribution of local filter responses to characterize textural information.
The motivation for introducing these filter sets is twofold. The first is to overcome the limitations of
traditional rotationally invariant filters, which do not respond strongly to oriented image patches and
thus do not provide good features for anisotropic textures. The second motivation arises out of a
concern about the dimensionality of the filter response space. They pioneered the problem of
classifying textures under varying viewpoint and illumination. The LM filters used for local texture
feature extraction are illustrated in Fig. S. In particular, they marked a milestone by giving an
operational definition of textons: the cluster centers of the filter response vectors. Their work has been
widely followed by other researchers [35]. To handle 3D effects caused by imaging, they proposed 3D
textons, which were cluster centers of filter responses over a stack of images with representative
viewpoints and lighting, as illustrated in Fig. 9. In their texture classification algorithm, 20 images of
each texture were geometrically registered and transformed into 48D local features with the LM
filters. Then the 48D filter response vectors of the 20 selected images at the same pixel were
concatenated to obtain a 960D feature vector as the local texture representation. Subsequently, these
960D feature vectors were input into a BoW pipeline for texture classification. A downside of the
method is that it is not suitable for classifying a single texture image under unknown imaging
conditions, which usually arises in practical applications.
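The filter-response idea can be sketched with a small bank of oriented first-derivative-of-Gaussian (edge) filters applied to one local patch, stacking the responses into a feature vector in the spirit of the LM responses. The filter construction and sizes are illustrative, not the exact LM bank.

```python
import numpy as np

# Sketch of oriented filter-bank feature extraction: build edge filters
# at several orientations and collect one response per filter for a local
# patch. Clustering many such response vectors would yield textons.

def oriented_edge_filter(size, theta, sigma=1.5):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    u = xx * np.cos(theta) + yy * np.sin(theta)   # coordinate along theta
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    f = -u * g                                    # derivative of Gaussian
    return f - f.mean()                           # zero-mean edge filter

rng = np.random.default_rng(2)
patch = rng.random((7, 7))                        # stand-in local region

orientations = np.linspace(0, np.pi, 6, endpoint=False)
responses = np.array([np.sum(patch * oriented_edge_filter(7, t))
                      for t in orientations])     # one response per filter

print(responses.shape)   # -> (6,)
```

The full LM bank adds bar filters, Gaussians and Laplacians at several scales (48 filters in total), so each pixel gets a 48-D response vector rather than the 6-D one shown here.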
4) Varma and Zisserman introduced "Patch Descriptors", challenging the dominant role of
filter banks [132, 170] in texture analysis. They instead developed a simple Patch Descriptor that
simply keeps the raw pixel intensities of a square neighborhood to form a feature vector, as
illustrated in Fig. 10. By replacing the MR8 filter responses with the Patch Descriptor in texture
classification, Varma and Zisserman [214] observed very good classification performance using
extremely compact neighborhoods (3 × 3), and found that for any fixed size of neighborhood the
Patch Descriptor leads to superior classification compared to filter banks with the same support. A
clear limitation of the Patch Descriptor itself is its sensitivity to nearly any change (brightness,
rotation). The simplest way to describe the neighborhood around an interest point is to write down
the list of intensities to form a feature vector.
5) The Maximum Response (MR8) filter bank consists of 38 root filters but only 8 filter
responses: only the maximum filter response across all orientations is recorded, in order to achieve
rotation invariance. The root filters are a subset of the LM filters [10], retaining the two rotationally
symmetric filters, and the edge filter and the bar filter at 3 scales and 6 orientations. Recording only
the maximum response across orientations reduces the number of responses from 38 to 8 (3 scales
for 2 anisotropic filters, plus 2 isotropic filters), resulting in the so-called MR8 filter bank.
6) Weszka et al. extended Haralick's concepts on the co-occurrence matrix. Haralick et al.
developed the GLCM as a function of distance and angle; Weszka showed that features derived from
the GLCM give improved results when a distance of small magnitude is used between the gray
level pairs. Zucker and Terzopoulos explained the procedure for selecting the best distance and
angle for the co-occurrence matrix.
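The MR8 reduction described above (38 responses collapsed to 8 by taking the maximum over orientations) is a simple array operation. Random numbers stand in for real filter outputs here; only the bookkeeping is the point.

```python
import numpy as np

# Sketch of the MR8 reduction: for the 2 anisotropic filters (edge, bar)
# at 3 scales and 6 orientations, keep only the maximum response across
# orientations, then append the 2 isotropic responses (Gaussian and
# Laplacian of Gaussian), collapsing 36 + 2 = 38 responses to 8.

rng = np.random.default_rng(3)
aniso = rng.random((2, 3, 6))   # (filter type, scale, orientation)
iso = rng.random(2)             # isotropic responses

max_over_orient = aniso.max(axis=2).ravel()   # 2 x 3 = 6 responses
mr8 = np.concatenate([max_over_orient, iso])  # 8-dimensional descriptor

print(mr8.shape)   # -> (8,)
```

Because the orientation of the strongest response is discarded, the 8-D descriptor is the same for rotated copies of a patch, which is exactly the rotation invariance claimed for MR8.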
9) Davis et al. proposed a new approach beyond the GLCM called the generalized co-occurrence
matrix (GCM). The GCM calculates features that give more accurate results in comparison with the
GLCM for texture analysis of images. However, it did not become more popular than the GLCM. By
1980, GLCM-derived statistical features had begun to be used in analyzing aerial, terrain, microscopic,
and satellite images. Haralick et al. used features calculated from the GLCM to identify aerial images
and eight other terrain classes with 82% accuracy. Chen and Pavlidis state that the GLCM can be
combined with a "split and merge algorithm" and used for textural segmentation of images. In the
early 1980s, many spatial statistical approaches came into existence, competing with the GLCM, but
the GLCM proved successful in maintaining its reliable application in various fields and even found
applicability in the field of remote sensing.
10) Hsu et al. introduced another statistical method referred to as the simple statistical transformation
(SST); this method is computationally simple compared with the GLCM. Jensen compared the
performance of SST and GLCM by classifying six land-use types.
11) Wang and He introduced another new statistical approach for extracting spatial
information, referred to as the texture spectrum (TS). This was developed primarily to overcome the
limitations of the GLCM. Gong et al. made a comparison between three spatial feature extraction
methods, that is, GLCM, SST, and TS, for land-use classification of high-resolution visible (HRV)
SPOT satellite multispectral data. The result of this comparison indicates that the GLCM produces
better classification results than the other methods.
12) Arivazhagan et al. derived curvelet statistical features and curvelet co-occurrence features from
the subbands of the curvelet decomposition and used them for classification of texture. As a result, a
high degree of success in classification was obtained. Nosaka et al. introduced an improvement of the
GLCM for extracting image features by extending the LBP with the assistance of the GLCM. Results
proved that their proposed method for face recognition through texture classification shows better
performance than the conventional LBP. Many statistical methods are used for the inspection of
machined surfaces, as statistical texture analysis has become popular in machine surface analysis over
structural texture analysis methods. [12]
13) Castellano et al. state that GLCM-based texture features are useful in the analysis of medical
images. GLCM-derived texture features can differentiate mass and non-mass tissue in digital
mammograms. The features calculated from the GLCM constructed at 0° are best suited to
discriminate mass tissues. The GLCM is now combined with other methods to extend its scope
and features; for example, the Gabor filter and the GLCM are combined with other methods to
obtain improved quantification of texture features. [13]
15) Nur Haedzerin Md Noor et al., "Performance Analysis of a Free Space Optics Link with
Multiple Transmitters/Receivers": multiple transmitters/receivers (TX/RX) are used to improve the
quality of Free Space Optics (FSO) communication systems. With the current needs of this technology
for longer-distance communication, qualitative analysis of the system has become essential. In this
work, the received power level (PR) and bit error rate (BER) are considered to influence the FSO
link performance. The relationship between the two parameters is investigated and analysed.
Furthermore, the received power for various numbers of TXs and RXs is experimentally measured
and compared with the values obtained from theoretical calculations. The first part of the work deals
with the theoretical calculation and simulation design of multiple laser beams based on the
commercial FSO used at actual sites. The second part describes the practical work and the analysis of
the system's performance [18].
16) Armi and Fekri-Ershad developed a methodology for texture classification by studying
variation in the "energy" feature. In their proposed method, three texture classification
techniques are used, that is, GLCM, LBP, and edge segmentation. The three methods are applied
individually to the image, and the energy feature is extracted from each and combined. The energy
feature is then extracted from the original image, and finally the energy features are compared for
both cases [19].
17) Du et al. developed a novel method of texture classification that is robust to rotation and
illumination. They proposed a new descriptor for image texture known as "LSP". This proposed
descriptor uses a 2D artificial neural network, the "spiking cortical neuron model" developed by Du
for texture classification. They used the "Outex texture dataset" in their research work and came to
the conclusion that LSP outperforms other texture descriptors [45].
18) Chang et al. developed a new texture classification technique using "singular value
decomposition (SVD) and discrete wavelet transform (DWT)". This technique uses a support vector
machine algorithm to perform image classification, and is named SRITCSD. The SVD is applied in
the DWT domain, while particle swarm optimization is used to optimize the method. Results
conclude that the SRITCSD method can outperform other methods for texture classification.
19) Liu et al. analyzed the change in texture through inter-class variation. This variation
includes illumination, rotation, and viewpoint. They noticed that a slight change in contrast can
change the texture appearance completely, and that texture variation due to changing scale is the
"hardest to handle". They proposed a network called GANet, which uses a genetic algorithm to
change the background filter. They developed a new dataset called "Extreme-Scale Variation
Texture" to test the performance of their system. Their system outperforms several existing texture
classification techniques by more than 10%.
CHAPTER 3
3.1 Inference
In this section we discuss how to infer a (minimal) representation {ω, ω̄, θ_ω̄} from a given
texture image {Ω, I}, and how to synthesize a novel texture image Î from it. We start from the
latter, since the algorithm that infers the representation utilizes the synthesis procedure.
Algorithm 1: Texture Synthesis
0  Initialize Î^(0) to a random texture;
1  Set ν_ω̂s = 1 for s = 1,…,S and j_max = 20, b = 0.7;
2  for j = 1,…,j_max do
3    for s = 1,…,S do
4      ω_s^(j) = nrst_nghbr(θ_ω̄, ω̄, Î^(j−1)(ω̂_s));
5    Let Î^(j) = argmin_Î E(Î, {ω_s^(j)}_{s=1,…,S});
6    ν_ω̂s = ‖Î^(j)(ω̂_s) − I(ω_s^(j))‖^(b−2) for s = 1,…,S;
7    if (∀ω̂_s ∈ Ω̂_S : Î^(j)(ω̂_s) = Î^(j−1)(ω̂_s)) then break;
Function nrst_nghbr(θ_ω̄, ω̄, Î(ω̂)):
8    Let s be the index of the nearest neighbor of Î(ω̂) in θ_ω̄;
9    Retrieve ω_s within ω̄;
10   return ω_s;
Table 3.1: Algorithm for texture synthesis
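The loop of Algorithm 1 can be sketched in Python under simplifying assumptions: instead of minimizing the full energy E in step 5, overlapping nearest-neighbour patches are blended by weighted averaging, with the IRLS-style weight of step 6 computed per patch. All names, the grid spacing, and the blending are illustrative, not the exact implementation.

```python
import numpy as np

def synthesize(theta, out_shape, r, j_max=20, b=0.7, seed=0):
    """Simplified sketch of Algorithm 1 (nearest-neighbour texture synthesis).

    theta     : array (Lambda, r, r) of stored example patches I(omega_lambda)
    out_shape : (H, W) of the synthesized image I_hat
    r         : neighbourhood (patch) size
    """
    rng = np.random.default_rng(seed)
    I_hat = rng.random(out_shape)                 # step 0: random texture
    flat = theta.reshape(len(theta), -1)          # codebook patches as vectors
    step = max(r // 4, 1)                         # grid spacing (assumed)
    for _ in range(j_max):                        # step 2
        prev = I_hat.copy()
        acc = np.zeros(out_shape)
        cnt = np.zeros(out_shape)
        for y in range(0, out_shape[0] - r + 1, step):
            for x in range(0, out_shape[1] - r + 1, step):
                patch = I_hat[y:y+r, x:x+r].ravel()
                d = np.linalg.norm(flat - patch, axis=1)
                s = int(d.argmin())               # step 4: nrst_nghbr
                w = (d[s] + 1e-8) ** (b - 2)      # step 6: IRLS-style weight
                acc[y:y+r, x:x+r] += w * theta[s]
                cnt[y:y+r, x:x+r] += w
        I_hat = np.where(cnt > 0, acc / np.maximum(cnt, 1e-8), I_hat)  # ~step 5
        if np.allclose(I_hat, prev):              # step 7: converged
            break
    return I_hat

# toy usage: patches cut from a small random "texture" act as the codebook
rng = np.random.default_rng(0)
texture = rng.random((12, 12))
theta = np.array([texture[y:y+4, x:x+4]
                  for y in range(0, 9, 4) for x in range(0, 9, 4)])
out = synthesize(theta, (16, 16), r=4, j_max=5)
print(out.shape)   # -> (16, 16)
```

The weight w grows as the nearest-neighbour distance shrinks (b − 2 < 0), so well-matched patches dominate the blend, mimicking the outlier down-weighting of iteratively re-weighted least squares.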
3.2 Image texture synthesis
Given a representation {ω, ω̄, θ_ω̄}, we can synthesize novel instances of the texture
by sampling from dP(I(ω)) within ω̄. This is straightforward in a non-parametric setting,
where the representation is itself a collection of samples. One can simply select
neighborhoods ω_λ within ω̄, and populate a new lattice with patches I(ω_λ), ensuring
compatibility along patch boundaries and intersections. Efros et al. [9] proposed a causal
sampling scheme that satisfies such compatibility conditions, but fails to respect the Markov structure.
compatibility conditions and by construction also respect the Markov structure. We perform
this selection and simultaneously also infer Î. We do so by first initializing Î at random. We
select neighborhoods ω̂_s on a grid on the domain of the synthesized texture with spacing r/4. We
let Ω̂_S = {ω̂_s}_{s=1,…,S} denote the collection of the selected ω̂_s, Ω_S = {ω_s}_{s=1,…,S} denote the
chosen neighborhoods within ω̄, and I(ω_s) ∈ θ_ω̄ denote the nearest neighbor of Î(ω̂_s).
We minimize the following function with respect to {ω_s}_{s=1,…,S} and Î.
The procedure to minimize the above energy function is given in Alg. 1. The weight ν_ω̂s,
defined in Alg. 1, is used to reduce the effect of outliers, as is typically done in iteratively
re-weighted least squares [19]. The process is performed in a multiscale fashion, by repeating
the procedure over 3 neighborhood sizes: |ω̂_s|, |ω̂_s|/2, |ω̂_s|/4. By first
synthesizing at scale |ω̂_s| = r, we capture the Markov structure of the texture. Subsequent
repetitions refine the synthesized texture by adding finer details. We also repeat this process over
a number of different output image sizes.
3.3 Video texture synthesis
The texture synthesis algorithm in [19] was extended to temporal textures, which however relied
on the availability of optical flow. Unfortunately, optical flow is expensive to store, as encoding
it is more costly than encoding the original images. We propose a temporal texture synthesis
algorithm that relies on neighborhoods ω_λ that extend in time. We take the input video {I_t}_{t=1,…,T}
and compute a compact representation θ_ω̄t, from which we synthesize {Î_t}_{t=1,…,T}. In this
section we assume we have θ_ω̄t, and in Sec. 4.5 we explain how to infer it. We re-define all
quantities to have domains that extend in time. To reduce computational complexity we fix the
temporal extension of the neighborhoods to 2 frames, although longer choices are possible.
Hence for t > 1, ω_λ^t ⊂ (1,1,t−1) : (X,Y,t), which makes it a 3-D neighborhood, and ω̄
becomes ω̄^t := ∪_{λ=1,…,Λ} ω_λ^t, a union of 3-D neighborhoods. I_t(ω_λ^t) is therefore defined on
the 3-D lattice, and θ_ω̄t := {I_t(ω_1^t),…,I_t(ω_Λ^t)}. For t = 1, ω_λ^{t=1}, ω̄^{t=1} and θ_ω̄^{t=1} remain 2-D.
Fig 3.1: Temporal texture synthesis
The above figure shows temporal texture synthesis. In order to achieve compatibility with the
already synthesized textures, we unmask all neighborhoods and use them to minimize the energy
function.
3.4 Synthesizing Multiple Textures Simultaneously
We demonstrate how multiple textures can be synthesized simultaneously for video and images
without computing a segmentation map. This is useful for applications such as video compression
(where {ω, ω̄, θ_ω̄} can be used to synthesize the textures of the input video) or for image processing
tasks such as hole-filling and frame interpolation. To place the textures in their corresponding
locations in a video (or image) we implicitly define their boundaries by partitioning each frame into
two types of regions: textures and their complementary region type, structures. Structures are regions
of images that trigger isolated responses of a feature detector. These include blobs, corners, edges,
junctions and other sparse features. We determine which regions are structures by using a feature
point tracker such as [24]. Partitioning images or video into two types of regions has been previously
proposed by several works [29] using a single image. In our framework, if a region at some scale
triggers an isolated response of a feature detector (i.e., it is a structure at that scale), then the
underlying process is, by definition, not stationary at that scale. Therefore, it is not a texture. It also
implies that any region ω of size |ω̄| is not sufficient to predict the image outside that region. This of
course does not prevent the region from being a texture at a much larger scale σ. Within a region of
size σ there may be multiple such structures, spatially distributed in a way that is
stationary/Markovian. Vice versa, if a region of an image is a texture with σ = |ω̄|, it cannot have a
unique (isolated) extremum within ω̄. Of course, it could have multiple extrema, each isolated within
a region of size much smaller than σ. We conclude that, for any given scale of observation σ, a region
is either a texture or a structure.
One must impose boundary conditions so that the texture regions fill in around structure regions
seamlessly. To perform texture extrapolation, we follow an approach similar to the one used for video
texture synthesis. The video is initialized to a random texture. At locations where the structures were
detected and tracked, we place the actual image (intensity) values. We select ω̂_s^t ∈ Ω̂_S^t as
before, on a 3-D grid over the synthesized frames, but with the added restriction that ω̂_s^t needs to
have at least one pixel in the texture domain (otherwise it is entirely determined, being a
structure). The patches that lie entirely in the texture domain need to be synthesized. The
patches that straddle the texture/structure partition are used as boundary conditions and are synthesized
causally.
3.5 Texture Qualitative Criterion
To evaluate the quality of the texture synthesis algorithm, we need a criterion that measures the
similarity of the input texture, I, and the synthesized texture, Î. The peak signal-to-noise ratio (PSNR)
is typically used as the criterion for evaluating the quality of a reconstruction. However, when the
final user is the human visual system, PSNR is known to be a poor criterion, especially for
textures, as imperceptible differences can cause large PSNR changes. Works such as [36]
operate on general images and do not exploit properties of textures. To address this issue, we
introduce the Texture Qualitative Criterion (TQC), denoted E_TQC, which is composed of two
terms. The first one, E_1(Î, I), penalizes structural dissimilarity, whereas E_2(Î, I) penalizes
statistical dissimilarity. We let ω̂_s/ω_i be patches within Ω̂/Ω, the domains of Î/I, and their
nearest neighbors be ω_s/ω̂_i, which are selected within the domains of I/Î. I/Î can correspond to
the input/synthesized textures, or simply to two textures which we wish to compare. For E_1(Î, I), we
select N_S patches ω̂_s ⊂ Ω̂ and N_I patches ω_i ⊂ Ω on a dense grid in the domain of the
synthesized and input images respectively. We let Î(ω̂_s) and I(ω_i) correspond to the intensity
values in the synthesized and input neighborhoods respectively. We use the selected patches to
compute the following cost function.
Fig 3.2: Domain transformation of the input texture
The above figure shows the domain transformation of the input texture. Note that this expression
resembles Eq. (5), with one change: there is an added summation, which is over patches in the
input image. The need for both of these terms has also been noted by others [37] and is illustrated in
the above figure. The first term identifies domain/range deformations of the input texture, whereas
the second term identifies artifacts in the synthesized texture. We compute this cost function
over multiple scales (typically 3) and average over all scales. This makes the cost function more
robust, as it is able to compute the similarity of patches at multiple scales.
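The structural term E_1 can be sketched as a symmetric nearest-neighbour patch distance between the two textures, at a single scale and without weights. This is a simplified stand-in for the actual cost function, with illustrative names and parameters.

```python
import numpy as np

# Sketch of the structural term E_1: for each synthesized patch, the
# distance to its nearest input patch (penalizing artifacts), plus, for
# each input patch, the distance to its nearest synthesized patch
# (penalizing missing content / deformations). Single scale only.

def patch_set(img, r, step):
    H, W = img.shape
    return np.array([img[y:y+r, x:x+r].ravel()
                     for y in range(0, H - r + 1, step)
                     for x in range(0, W - r + 1, step)])

def tqc_structural(I_hat, I, r=4, step=2):
    P_hat, P = patch_set(I_hat, r, step), patch_set(I, r, step)
    d1 = np.min(np.linalg.norm(P_hat[:, None] - P[None], axis=2), axis=1).mean()
    d2 = np.min(np.linalg.norm(P[:, None] - P_hat[None], axis=2), axis=1).mean()
    return d1 + d2

rng = np.random.default_rng(4)
A = rng.random((12, 12))
B = rng.random((12, 12))
print(tqc_structural(A, A))                      # identical textures: 0.0
print(tqc_structural(A, B) > tqc_structural(A, A))   # -> True
```

Averaging this score over several patch sizes would give the multiscale robustness described above; the statistical term E_2 would be added on top of it.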
3.2 Inference of Texture Representation
Given a complexity constraint, we have a bound on the number, Λ := Λ(r), of samples ωλ that
can be stored in ¯ω, which depends on r, the scale of ωλ. To estimate r, the inference algorithm
involves two levels of computation. The first level fixes rcand, a candidate value of r, and
computes which samples ωλ ⊂ Ω should be stored in ¯ω. This computation is repeated for a
finite number of candidates rcand. To choose ˆr, an estimate of r, we use TQC to rank the
resulting representations and select the best one according to this ranking. In this section, we
describe the procedure in greater detail. For each rcand, we use Alg. 1 (see Sec. 4.1) (or its
variant if the input is a video) to synthesize a novel instance of the texture at the single scale
rcand. Upon convergence, each ˆωs is assigned an ωs (its nearest neighbor). The set
ΩS = {ωs}, s = 1, ..., S, denotes the collection of nearest neighbors within Ω, and it is the
entire data that the algorithm needs to synthesize the texture.
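The two-level computation above amounts to a ranking loop over candidate scales. The sketch below assumes a `synthesize` callable standing in for Alg. 1 and a `tqc` callable standing in for the Texture Qualitative Criterion; both names are placeholders for illustration, and lower TQC is taken to mean a better representation.

```python
def infer_scale(input_texture, candidate_scales, synthesize, tqc):
    """For each candidate patch scale r_cand, synthesize a texture instance
    and keep the scale whose result scores best (lowest) under TQC."""
    best_scale, best_cost = None, float("inf")
    for r_cand in candidate_scales:
        synth = synthesize(input_texture, r_cand)   # Alg. 1 (or its video variant)
        cost = tqc(synth, input_texture)            # rank the representation
        if cost < best_cost:
            best_scale, best_cost = r_cand, cost
    return best_scale, best_cost
```

Because the candidate set is finite, the loop is an exhaustive ranking rather than an optimization, so it needs no gradient information from the synthesis procedure.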
TEXTURE REPRESENTATION.pdf
TEXTURE REPRESENTATION.pdf
TEXTURE REPRESENTATION.pdf
TEXTURE REPRESENTATION.pdf
TEXTURE REPRESENTATION.pdf

More Related Content

Similar to TEXTURE REPRESENTATION.pdf

Content Based Image Retrieval Using Dominant Color and Texture Features
Content Based Image Retrieval Using Dominant Color and Texture FeaturesContent Based Image Retrieval Using Dominant Color and Texture Features
Content Based Image Retrieval Using Dominant Color and Texture FeaturesIJMTST Journal
 
International Journal of Engineering Research and Development
International Journal of Engineering Research and DevelopmentInternational Journal of Engineering Research and Development
International Journal of Engineering Research and DevelopmentIJERD Editor
 
DOMAIN SPECIFIC CBIR FOR HIGHLY TEXTURED IMAGES
DOMAIN SPECIFIC CBIR FOR HIGHLY TEXTURED IMAGESDOMAIN SPECIFIC CBIR FOR HIGHLY TEXTURED IMAGES
DOMAIN SPECIFIC CBIR FOR HIGHLY TEXTURED IMAGEScseij
 
High Resolution Mri Brain Image Segmentation Technique Using Holder Exponent
High Resolution Mri Brain Image Segmentation Technique Using Holder Exponent  High Resolution Mri Brain Image Segmentation Technique Using Holder Exponent
High Resolution Mri Brain Image Segmentation Technique Using Holder Exponent ijsc
 
HIGH RESOLUTION MRI BRAIN IMAGE SEGMENTATION TECHNIQUE USING HOLDER EXPONENT
HIGH RESOLUTION MRI BRAIN IMAGE SEGMENTATION TECHNIQUE USING HOLDER EXPONENTHIGH RESOLUTION MRI BRAIN IMAGE SEGMENTATION TECHNIQUE USING HOLDER EXPONENT
HIGH RESOLUTION MRI BRAIN IMAGE SEGMENTATION TECHNIQUE USING HOLDER EXPONENTijsc
 
Texture Unit based Monocular Real-world Scene Classification using SOM and KN...
Texture Unit based Monocular Real-world Scene Classification using SOM and KN...Texture Unit based Monocular Real-world Scene Classification using SOM and KN...
Texture Unit based Monocular Real-world Scene Classification using SOM and KN...IDES Editor
 
Maximizing Strength of Digital Watermarks Using Fuzzy Logic
Maximizing Strength of Digital Watermarks Using Fuzzy LogicMaximizing Strength of Digital Watermarks Using Fuzzy Logic
Maximizing Strength of Digital Watermarks Using Fuzzy Logicsipij
 
Image Enhancement and Restoration by Image Inpainting
Image Enhancement and Restoration by Image InpaintingImage Enhancement and Restoration by Image Inpainting
Image Enhancement and Restoration by Image InpaintingIJERA Editor
 
Comparative analysis and evaluation of image imprinting algorithms
Comparative analysis and evaluation of image imprinting algorithmsComparative analysis and evaluation of image imprinting algorithms
Comparative analysis and evaluation of image imprinting algorithmsAlexander Decker
 
ADOPTING AND IMPLEMENTATION OF SELF ORGANIZING FEATURE MAP FOR IMAGE FUSION
ADOPTING AND IMPLEMENTATION OF SELF ORGANIZING FEATURE MAP FOR IMAGE FUSIONADOPTING AND IMPLEMENTATION OF SELF ORGANIZING FEATURE MAP FOR IMAGE FUSION
ADOPTING AND IMPLEMENTATION OF SELF ORGANIZING FEATURE MAP FOR IMAGE FUSIONijistjournal
 
11.comparative analysis and evaluation of image imprinting algorithms
11.comparative analysis and evaluation of image imprinting algorithms11.comparative analysis and evaluation of image imprinting algorithms
11.comparative analysis and evaluation of image imprinting algorithmsAlexander Decker
 
Image Segmentation Using Pairwise Correlation Clustering
Image Segmentation Using Pairwise Correlation ClusteringImage Segmentation Using Pairwise Correlation Clustering
Image Segmentation Using Pairwise Correlation ClusteringIJERA Editor
 
2.[7 12]combined structure and texture image inpainting algorithm for natural...
2.[7 12]combined structure and texture image inpainting algorithm for natural...2.[7 12]combined structure and texture image inpainting algorithm for natural...
2.[7 12]combined structure and texture image inpainting algorithm for natural...Alexander Decker
 
2.[7 12]combined structure and texture image inpainting algorithm for natural...
2.[7 12]combined structure and texture image inpainting algorithm for natural...2.[7 12]combined structure and texture image inpainting algorithm for natural...
2.[7 12]combined structure and texture image inpainting algorithm for natural...Alexander Decker
 
11.combined structure and texture image inpainting algorithm for natural scen...
11.combined structure and texture image inpainting algorithm for natural scen...11.combined structure and texture image inpainting algorithm for natural scen...
11.combined structure and texture image inpainting algorithm for natural scen...Alexander Decker
 
2.[7 12]combined structure and texture image inpainting algorithm for natural...
2.[7 12]combined structure and texture image inpainting algorithm for natural...2.[7 12]combined structure and texture image inpainting algorithm for natural...
2.[7 12]combined structure and texture image inpainting algorithm for natural...Alexander Decker
 

Similar to TEXTURE REPRESENTATION.pdf (20)

Content Based Image Retrieval Using Dominant Color and Texture Features
Content Based Image Retrieval Using Dominant Color and Texture FeaturesContent Based Image Retrieval Using Dominant Color and Texture Features
Content Based Image Retrieval Using Dominant Color and Texture Features
 
International Journal of Engineering Research and Development
International Journal of Engineering Research and DevelopmentInternational Journal of Engineering Research and Development
International Journal of Engineering Research and Development
 
DOMAIN SPECIFIC CBIR FOR HIGHLY TEXTURED IMAGES
DOMAIN SPECIFIC CBIR FOR HIGHLY TEXTURED IMAGESDOMAIN SPECIFIC CBIR FOR HIGHLY TEXTURED IMAGES
DOMAIN SPECIFIC CBIR FOR HIGHLY TEXTURED IMAGES
 
High Resolution Mri Brain Image Segmentation Technique Using Holder Exponent
High Resolution Mri Brain Image Segmentation Technique Using Holder Exponent  High Resolution Mri Brain Image Segmentation Technique Using Holder Exponent
High Resolution Mri Brain Image Segmentation Technique Using Holder Exponent
 
HIGH RESOLUTION MRI BRAIN IMAGE SEGMENTATION TECHNIQUE USING HOLDER EXPONENT
HIGH RESOLUTION MRI BRAIN IMAGE SEGMENTATION TECHNIQUE USING HOLDER EXPONENTHIGH RESOLUTION MRI BRAIN IMAGE SEGMENTATION TECHNIQUE USING HOLDER EXPONENT
HIGH RESOLUTION MRI BRAIN IMAGE SEGMENTATION TECHNIQUE USING HOLDER EXPONENT
 
Texture Unit based Monocular Real-world Scene Classification using SOM and KN...
Texture Unit based Monocular Real-world Scene Classification using SOM and KN...Texture Unit based Monocular Real-world Scene Classification using SOM and KN...
Texture Unit based Monocular Real-world Scene Classification using SOM and KN...
 
Image Inpainting
Image InpaintingImage Inpainting
Image Inpainting
 
Maximizing Strength of Digital Watermarks Using Fuzzy Logic
Maximizing Strength of Digital Watermarks Using Fuzzy LogicMaximizing Strength of Digital Watermarks Using Fuzzy Logic
Maximizing Strength of Digital Watermarks Using Fuzzy Logic
 
Image Enhancement and Restoration by Image Inpainting
Image Enhancement and Restoration by Image InpaintingImage Enhancement and Restoration by Image Inpainting
Image Enhancement and Restoration by Image Inpainting
 
Comparative analysis and evaluation of image imprinting algorithms
Comparative analysis and evaluation of image imprinting algorithmsComparative analysis and evaluation of image imprinting algorithms
Comparative analysis and evaluation of image imprinting algorithms
 
ADOPTING AND IMPLEMENTATION OF SELF ORGANIZING FEATURE MAP FOR IMAGE FUSION
ADOPTING AND IMPLEMENTATION OF SELF ORGANIZING FEATURE MAP FOR IMAGE FUSIONADOPTING AND IMPLEMENTATION OF SELF ORGANIZING FEATURE MAP FOR IMAGE FUSION
ADOPTING AND IMPLEMENTATION OF SELF ORGANIZING FEATURE MAP FOR IMAGE FUSION
 
93 98
93 9893 98
93 98
 
11.comparative analysis and evaluation of image imprinting algorithms
11.comparative analysis and evaluation of image imprinting algorithms11.comparative analysis and evaluation of image imprinting algorithms
11.comparative analysis and evaluation of image imprinting algorithms
 
Image Segmentation Using Pairwise Correlation Clustering
Image Segmentation Using Pairwise Correlation ClusteringImage Segmentation Using Pairwise Correlation Clustering
Image Segmentation Using Pairwise Correlation Clustering
 
N42018588
N42018588N42018588
N42018588
 
2.[7 12]combined structure and texture image inpainting algorithm for natural...
2.[7 12]combined structure and texture image inpainting algorithm for natural...2.[7 12]combined structure and texture image inpainting algorithm for natural...
2.[7 12]combined structure and texture image inpainting algorithm for natural...
 
2.[7 12]combined structure and texture image inpainting algorithm for natural...
2.[7 12]combined structure and texture image inpainting algorithm for natural...2.[7 12]combined structure and texture image inpainting algorithm for natural...
2.[7 12]combined structure and texture image inpainting algorithm for natural...
 
11.combined structure and texture image inpainting algorithm for natural scen...
11.combined structure and texture image inpainting algorithm for natural scen...11.combined structure and texture image inpainting algorithm for natural scen...
11.combined structure and texture image inpainting algorithm for natural scen...
 
2.[7 12]combined structure and texture image inpainting algorithm for natural...
2.[7 12]combined structure and texture image inpainting algorithm for natural...2.[7 12]combined structure and texture image inpainting algorithm for natural...
2.[7 12]combined structure and texture image inpainting algorithm for natural...
 
40120140505017
4012014050501740120140505017
40120140505017
 

Recently uploaded

AfRESFullPaper22018EmpiricalPerformanceofRealEstateInvestmentTrustsandShareho...
AfRESFullPaper22018EmpiricalPerformanceofRealEstateInvestmentTrustsandShareho...AfRESFullPaper22018EmpiricalPerformanceofRealEstateInvestmentTrustsandShareho...
AfRESFullPaper22018EmpiricalPerformanceofRealEstateInvestmentTrustsandShareho...yordanosyohannes2
 
Independent Lucknow Call Girls 8923113531WhatsApp Lucknow Call Girls make you...
Independent Lucknow Call Girls 8923113531WhatsApp Lucknow Call Girls make you...Independent Lucknow Call Girls 8923113531WhatsApp Lucknow Call Girls make you...
Independent Lucknow Call Girls 8923113531WhatsApp Lucknow Call Girls make you...makika9823
 
VIP Call Girls LB Nagar ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With Room...
VIP Call Girls LB Nagar ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With Room...VIP Call Girls LB Nagar ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With Room...
VIP Call Girls LB Nagar ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With Room...Suhani Kapoor
 
Call Girls In Yusuf Sarai Women Seeking Men 9654467111
Call Girls In Yusuf Sarai Women Seeking Men 9654467111Call Girls In Yusuf Sarai Women Seeking Men 9654467111
Call Girls In Yusuf Sarai Women Seeking Men 9654467111Sapana Sha
 
Call Girls Service Nagpur Maya Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Maya Call 7001035870 Meet With Nagpur EscortsCall Girls Service Nagpur Maya Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Maya Call 7001035870 Meet With Nagpur Escortsranjana rawat
 
OAT_RI_Ep19 WeighingTheRisks_Apr24_TheYellowMetal.pptx
OAT_RI_Ep19 WeighingTheRisks_Apr24_TheYellowMetal.pptxOAT_RI_Ep19 WeighingTheRisks_Apr24_TheYellowMetal.pptx
OAT_RI_Ep19 WeighingTheRisks_Apr24_TheYellowMetal.pptxhiddenlevers
 
(办理学位证)加拿大萨省大学毕业证成绩单原版一比一
(办理学位证)加拿大萨省大学毕业证成绩单原版一比一(办理学位证)加拿大萨省大学毕业证成绩单原版一比一
(办理学位证)加拿大萨省大学毕业证成绩单原版一比一S SDS
 
Vip B Aizawl Call Girls #9907093804 Contact Number Escorts Service Aizawl
Vip B Aizawl Call Girls #9907093804 Contact Number Escorts Service AizawlVip B Aizawl Call Girls #9907093804 Contact Number Escorts Service Aizawl
Vip B Aizawl Call Girls #9907093804 Contact Number Escorts Service Aizawlmakika9823
 
Financial institutions facilitate financing, economic transactions, issue fun...
Financial institutions facilitate financing, economic transactions, issue fun...Financial institutions facilitate financing, economic transactions, issue fun...
Financial institutions facilitate financing, economic transactions, issue fun...Avanish Goel
 
VIP Call Girls Service Begumpet Hyderabad Call +91-8250192130
VIP Call Girls Service Begumpet Hyderabad Call +91-8250192130VIP Call Girls Service Begumpet Hyderabad Call +91-8250192130
VIP Call Girls Service Begumpet Hyderabad Call +91-8250192130Suhani Kapoor
 
(办理原版一样)QUT毕业证昆士兰科技大学毕业证学位证留信学历认证成绩单补办
(办理原版一样)QUT毕业证昆士兰科技大学毕业证学位证留信学历认证成绩单补办(办理原版一样)QUT毕业证昆士兰科技大学毕业证学位证留信学历认证成绩单补办
(办理原版一样)QUT毕业证昆士兰科技大学毕业证学位证留信学历认证成绩单补办fqiuho152
 
High Class Call Girls Nashik Maya 7001305949 Independent Escort Service Nashik
High Class Call Girls Nashik Maya 7001305949 Independent Escort Service NashikHigh Class Call Girls Nashik Maya 7001305949 Independent Escort Service Nashik
High Class Call Girls Nashik Maya 7001305949 Independent Escort Service NashikCall Girls in Nagpur High Profile
 
Instant Issue Debit Cards - High School Spirit
Instant Issue Debit Cards - High School SpiritInstant Issue Debit Cards - High School Spirit
Instant Issue Debit Cards - High School Spiritegoetzinger
 
Malad Call Girl in Services 9892124323 | ₹,4500 With Room Free Delivery
Malad Call Girl in Services  9892124323 | ₹,4500 With Room Free DeliveryMalad Call Girl in Services  9892124323 | ₹,4500 With Room Free Delivery
Malad Call Girl in Services 9892124323 | ₹,4500 With Room Free DeliveryPooja Nehwal
 
VIP Kolkata Call Girl Jodhpur Park 👉 8250192130 Available With Room
VIP Kolkata Call Girl Jodhpur Park 👉 8250192130  Available With RoomVIP Kolkata Call Girl Jodhpur Park 👉 8250192130  Available With Room
VIP Kolkata Call Girl Jodhpur Park 👉 8250192130 Available With Roomdivyansh0kumar0
 
Russian Call Girls In Gtb Nagar (Delhi) 9711199012 💋✔💕😘 Naughty Call Girls Se...
Russian Call Girls In Gtb Nagar (Delhi) 9711199012 💋✔💕😘 Naughty Call Girls Se...Russian Call Girls In Gtb Nagar (Delhi) 9711199012 💋✔💕😘 Naughty Call Girls Se...
Russian Call Girls In Gtb Nagar (Delhi) 9711199012 💋✔💕😘 Naughty Call Girls Se...shivangimorya083
 
BPPG response - Options for Defined Benefit schemes - 19Apr24.pdf
BPPG response - Options for Defined Benefit schemes - 19Apr24.pdfBPPG response - Options for Defined Benefit schemes - 19Apr24.pdf
BPPG response - Options for Defined Benefit schemes - 19Apr24.pdfHenry Tapper
 
Classical Theory of Macroeconomics by Adam Smith
Classical Theory of Macroeconomics by Adam SmithClassical Theory of Macroeconomics by Adam Smith
Classical Theory of Macroeconomics by Adam SmithAdamYassin2
 
(DIYA) Bhumkar Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(DIYA) Bhumkar Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(DIYA) Bhumkar Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(DIYA) Bhumkar Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 
Log your LOA pain with Pension Lab's brilliant campaign
Log your LOA pain with Pension Lab's brilliant campaignLog your LOA pain with Pension Lab's brilliant campaign
Log your LOA pain with Pension Lab's brilliant campaignHenry Tapper
 

Recently uploaded (20)

AfRESFullPaper22018EmpiricalPerformanceofRealEstateInvestmentTrustsandShareho...
AfRESFullPaper22018EmpiricalPerformanceofRealEstateInvestmentTrustsandShareho...AfRESFullPaper22018EmpiricalPerformanceofRealEstateInvestmentTrustsandShareho...
AfRESFullPaper22018EmpiricalPerformanceofRealEstateInvestmentTrustsandShareho...
 
Independent Lucknow Call Girls 8923113531WhatsApp Lucknow Call Girls make you...
Independent Lucknow Call Girls 8923113531WhatsApp Lucknow Call Girls make you...Independent Lucknow Call Girls 8923113531WhatsApp Lucknow Call Girls make you...
Independent Lucknow Call Girls 8923113531WhatsApp Lucknow Call Girls make you...
 
VIP Call Girls LB Nagar ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With Room...
VIP Call Girls LB Nagar ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With Room...VIP Call Girls LB Nagar ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With Room...
VIP Call Girls LB Nagar ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With Room...
 
Call Girls In Yusuf Sarai Women Seeking Men 9654467111
Call Girls In Yusuf Sarai Women Seeking Men 9654467111Call Girls In Yusuf Sarai Women Seeking Men 9654467111
Call Girls In Yusuf Sarai Women Seeking Men 9654467111
 
Call Girls Service Nagpur Maya Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Maya Call 7001035870 Meet With Nagpur EscortsCall Girls Service Nagpur Maya Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Maya Call 7001035870 Meet With Nagpur Escorts
 
OAT_RI_Ep19 WeighingTheRisks_Apr24_TheYellowMetal.pptx
OAT_RI_Ep19 WeighingTheRisks_Apr24_TheYellowMetal.pptxOAT_RI_Ep19 WeighingTheRisks_Apr24_TheYellowMetal.pptx
OAT_RI_Ep19 WeighingTheRisks_Apr24_TheYellowMetal.pptx
 
(办理学位证)加拿大萨省大学毕业证成绩单原版一比一
(办理学位证)加拿大萨省大学毕业证成绩单原版一比一(办理学位证)加拿大萨省大学毕业证成绩单原版一比一
(办理学位证)加拿大萨省大学毕业证成绩单原版一比一
 
Vip B Aizawl Call Girls #9907093804 Contact Number Escorts Service Aizawl
Vip B Aizawl Call Girls #9907093804 Contact Number Escorts Service AizawlVip B Aizawl Call Girls #9907093804 Contact Number Escorts Service Aizawl
Vip B Aizawl Call Girls #9907093804 Contact Number Escorts Service Aizawl
 
Financial institutions facilitate financing, economic transactions, issue fun...
Financial institutions facilitate financing, economic transactions, issue fun...Financial institutions facilitate financing, economic transactions, issue fun...
Financial institutions facilitate financing, economic transactions, issue fun...
 
VIP Call Girls Service Begumpet Hyderabad Call +91-8250192130
VIP Call Girls Service Begumpet Hyderabad Call +91-8250192130VIP Call Girls Service Begumpet Hyderabad Call +91-8250192130
VIP Call Girls Service Begumpet Hyderabad Call +91-8250192130
 
(办理原版一样)QUT毕业证昆士兰科技大学毕业证学位证留信学历认证成绩单补办
(办理原版一样)QUT毕业证昆士兰科技大学毕业证学位证留信学历认证成绩单补办(办理原版一样)QUT毕业证昆士兰科技大学毕业证学位证留信学历认证成绩单补办
(办理原版一样)QUT毕业证昆士兰科技大学毕业证学位证留信学历认证成绩单补办
 
High Class Call Girls Nashik Maya 7001305949 Independent Escort Service Nashik
High Class Call Girls Nashik Maya 7001305949 Independent Escort Service NashikHigh Class Call Girls Nashik Maya 7001305949 Independent Escort Service Nashik
High Class Call Girls Nashik Maya 7001305949 Independent Escort Service Nashik
 
Instant Issue Debit Cards - High School Spirit
Instant Issue Debit Cards - High School SpiritInstant Issue Debit Cards - High School Spirit
Instant Issue Debit Cards - High School Spirit
 
Malad Call Girl in Services 9892124323 | ₹,4500 With Room Free Delivery
Malad Call Girl in Services  9892124323 | ₹,4500 With Room Free DeliveryMalad Call Girl in Services  9892124323 | ₹,4500 With Room Free Delivery
Malad Call Girl in Services 9892124323 | ₹,4500 With Room Free Delivery
 
VIP Kolkata Call Girl Jodhpur Park 👉 8250192130 Available With Room
VIP Kolkata Call Girl Jodhpur Park 👉 8250192130  Available With RoomVIP Kolkata Call Girl Jodhpur Park 👉 8250192130  Available With Room
VIP Kolkata Call Girl Jodhpur Park 👉 8250192130 Available With Room
 
Russian Call Girls In Gtb Nagar (Delhi) 9711199012 💋✔💕😘 Naughty Call Girls Se...
Russian Call Girls In Gtb Nagar (Delhi) 9711199012 💋✔💕😘 Naughty Call Girls Se...Russian Call Girls In Gtb Nagar (Delhi) 9711199012 💋✔💕😘 Naughty Call Girls Se...
Russian Call Girls In Gtb Nagar (Delhi) 9711199012 💋✔💕😘 Naughty Call Girls Se...
 
BPPG response - Options for Defined Benefit schemes - 19Apr24.pdf
BPPG response - Options for Defined Benefit schemes - 19Apr24.pdfBPPG response - Options for Defined Benefit schemes - 19Apr24.pdf
BPPG response - Options for Defined Benefit schemes - 19Apr24.pdf
 
Classical Theory of Macroeconomics by Adam Smith
Classical Theory of Macroeconomics by Adam SmithClassical Theory of Macroeconomics by Adam Smith
Classical Theory of Macroeconomics by Adam Smith
 
(DIYA) Bhumkar Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(DIYA) Bhumkar Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(DIYA) Bhumkar Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(DIYA) Bhumkar Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
Log your LOA pain with Pension Lab's brilliant campaign
Log your LOA pain with Pension Lab's brilliant campaignLog your LOA pain with Pension Lab's brilliant campaign
Log your LOA pain with Pension Lab's brilliant campaign
 

TEXTURE REPRESENTATION.pdf

  • 1. TEXTURE REPRESENTATION Dept. of ECE, SJCIT 1 2021-22 CHAPTER 1 PREAMBLE 1.1 INTRODUCTION Our visual world is richly filled with a great variety of textures, present in images ranging from multispectral satellite views to microscopic pictures of tissue samples. As a powerful visual cue like color, texture provides useful information in identifying objects or regions of interest in images. Texture is different from color in that it refers to the spatial organization of a set of basic elements or primitives (textons), the fundamental microstructures in natural images and the atoms of pre-attentive human visual perception. A textured region will obey some statistical properties, exhibiting periodically repeated textons with some degree of variability in their appearance and relative position. Textures may range from purely variational to perfectly regular and everything in between. The representation of textures open the door to several diverse and appealing applications. Some recent works that utilized the representation and analysis of texture are in areas such as object recognition, robotics/autonomous navigation, quality inspection and assessment, scene understanding, facial image analysis, image and video editing, crowd behaviour analysis, remote sensing, geological structure interpretation, and medical image analysis. The texture is recognizable in both tactile and optical ways. Tactile texture refers to the tangible feel of a surface and visual texture refers to see the shape or contents of the image. Diagnosis of texture in a human vision system is easily feasible but in the machine vision domain and image processing have their own complexity. In the image processing, the texture can be defined as a function of spatial variation of the brightness intensity of the pixels. 
The texture represents the variations of each level, which measures characteristics such as smoothness, smoothness, coarseness and regularity of each surface in different order directions. Textural images in the image processingand machine vision refer to the images in which a specific pattern of distribution and dispersion of the intensity of the pixel illumination is repeated sequentially throughout the image. In the “Texture Shape Extraction”, the objective is to extract 3D images which are covered in a picture with a specific texture. This field studies the structure and shape of the elements in the image by analyzing their textual properties and the spatial relationship each with each other.
  • 2. TEXTURE REPRESENTATION Dept. of ECE, SJCIT 2 2021-22 The purpose of “Texture Synthesis” is to produce images that have the same texture as the input texture. Applications of this field are creation of graphic images and computer games. Eliminate of a part of the image and stow it with the background texture, creation of a scene with lighting and a different viewing angle, creation of artistic effects on images like embossed textures are other. Segmentation. In other words, in texture segmentation, the features of the boundaries and areas are compared and if their texture characteristics are sufficiently different, the boundary range has been found. State-of-the-art texture descriptors such as [7] are computationally prohibitive to use on large textures. These issues are especially acute for video, where the amount of data is significantly larger. Typically, these descriptors as well as texture synthesis algorithms assume that the size of the input texture is small, and yet large enough to compute statistics that are representative of the entire texture domain. Few works in the literature deal with how to infer automatically this smaller texturefrom an image and even fewer from a video. In most cases, it is assumed that the input texture is given, usually manually by cropping a larger one[7] propose an inverse texture synthesis algorithm, where given an input texture I, a compaction is synthesized that allows subsequent re-synthesis of a new instance ˆI. The method achieves good results, but it is semi-automatic, since it relies on external information such as a control map (e.g. an orientation field or other contextual information) to synthesize time-varying textures and on manual adjustment of the scale of neighborhoods for sampling from the compaction. The texture is recognizable in both tactile and optical ways. Tactile texture refers to the tangible feel of a surface and visual texture refers to see the shape or contents of the image. 
Diagnosis of texture in a human vision system is easily feasible but in the machine vision domain and image processing have their own complexity. In the image processing, the texture can be defined as a function of spatial variation of the brightness intensity of the pixels. The texture represents the variations of each level, which measures characteristics such as smoothness, smoothness, coarseness and regularity of each surface in different order directions. . Textural images in the image processingand machine vision refer to the images in which a specific pattern of distribution and dispersion of the intensity of the pixel illumination is repeated sequentially throughout the image. . This field studies the structure and shape of the elements in the image
  • 3. TEXTURE REPRESENTATION Dept. of ECE, SJCIT 3 2021-22 segmentation. In other words, in texture segmentation, the features of the boundaries and areas are compared and if their texture characteristics are sufficiently different, the boundary range has been found. New transform techniques that specifically address the problems of image enhancement and compression, edge and feature extraction, and texture analysis received much attention in recent years especially in biomedical imaging. These techniques are often found under the names multi resolution analysis, time-frequency analysis, pyramid algorithms, and wavelet transforms. They became competitors to the traditional Fourier transform, whose basis functions are sinusoids. The wavelet transform is based on wavelets, which are small waves of varying frequency and limited duration. In addition to the traditional Fourier transform, they provide not only frequency but also temporal information on the signal. “Visual textures” are regions of images that exhibit some form of spatial regularity. They include the so-called “regular” or “quasi-regular” textures, “stochastic” textures (top-right), possibly deformed either in the domain (bottom-left) or range (bottom-right). Analysis of textures has been used for image and video representation [4], while synthesis has proven useful for image super resolution [11], hole-filling [9] and compression [5]. For such applications, large textures carry a high cost on storage and computation. State-of-the-art texture descriptors such as [7] are computationally prohibitive to use on large textures. These issues are especially acute for video, where the amount of data is significantly larger. Typically, these descriptors as well as texture synthesis algorithms assume that the size of the input texture is small, and yet large enough to compute statistics that are representative of the entire texture domain. 
Few works in the literature deal with how to infer automatically this smaller texturefrom an image and even fewer from a video. In most cases, it is assumed that the input texture is given, usually manually by cropping a larger one[7] propose an inverse texture synthesis algorithm, where given an input texture I, a compaction is synthesized that allows subsequent re-synthesis of a new instance ˆI. The method achieves good results, but it is semi- automatic, since it relies on external information such as a control map (e.g. an orientation field or other contextual information) to synthesize time-varying textures and on manual adjustment of the scale of neighborhoods for sampling from the compaction. Here proposed an alternative scheme, which avoids using any external information by automatically inferring a compact time-varying texture representation. The algorithm also automatically determines the scale of local neighborhoods, which is necessary for texture synthesis [9]. Since our representation consists of samples from the input texture, for applications such as classification [7], we are able toavoid synthesis biases that affect other methods [3].
  • 4. TEXTURE REPRESENTATION Dept. of ECE, SJCIT 4 2021-22 Our contributions are to i. Summarize an image/video into a representation that requires significantly less storage than the input ii. Use our representation for synthesis using the texture optimization technique iii. Extend this framework to video using a causal scheme and show results for multiple time-varying textures, synthesize multiple textures simultaneously on video without explicitly computing a segmentation map, unlike [4], which is useful for hole-filling and video compression and propose a criterion (“Texture Qualitative Criterion” (TQC)) that measures structural and statistical dissimilarity between textures. The regions of images that exhibit some form of spatial regularity. They include the so-called “regular” or “quasi-regular” textures, “stochastic” textures (top-right), possibly deformed either in the domain (bottom-left) or range (bottom-right). Analysis of textures has been used for image and video representation [4], while synthesis has proven useful for image super resolution [11], hole-filling [9] and compression [5]. For such applications, large textures carry a high cost on storage and computation. State-of-the-art texture descriptors such as [7] are computationally prohibitive to use on large textures. These issues are especially acute for video, where the amount of data is significantly larger. Typically, these descriptors as well as texture synthesis algorithms assume that the size of the input texture is small, and yet large enough to compute statistics that are representative of the entire texture domain. Few works in the literature deal with how to infer automatically this smaller texture from an image and even fewer from a video. 
1.2 BACKGROUND INFORMATION
Text classification is one of the popular tasks in NLP that allows a program to classify free-text documents based on pre-defined classes. The classes can be based on topic, genre, or sentiment. Today's emergence of large volumes of digital documents makes the text classification task more crucial, especially for companies seeking to maximize their workflow or even profits. Recently, NLP research on text classification has reached the state of the art (SOTA), achieving excellent results and showing deep learning methods to be the cutting-edge technology for such tasks. Hence, assessing the performance of SOTA deep learning models for text classification is essential not only for academic purposes but also for AI practitioners and professionals who need guidance and benchmarks on similar projects. The goal of texture representation or texture feature extraction is to transform the input texture image into a feature vector that describes the properties of a texture, facilitating subsequent tasks such as texture classification, as illustrated in Fig. 4. Since texture is a spatial phenomenon, texture representation cannot be based on a single pixel, and generally requires the analysis of patterns over local pixel neighborhoods. Therefore, a texture image is first transformed to a pool of local features, which are then aggregated into a global representation for an entire image or region.
Since the properties of texture are usually translationally invariant, most texture representations are based on an orderless aggregation of local texture features, such as a sum or max operation. Texture analysis can be divided into four areas: classification, segmentation, synthesis, and shape from texture. Texture classification deals with designing algorithms for declaring a given texture region or image as belonging to one of a set of known texture categories for which training samples have been provided. Texture classification may also be a binary hypothesis testing problem, such as differentiating a texture as being within or outside of a given class, for instance distinguishing between healthy and pathological tissues in medical image analysis. The goal of texture segmentation is to partition a given image into disjoint regions of homogeneous texture. Texture synthesis is the process of generating new texture images which are perceptually equivalent to a given texture sample. As textures provide powerful shape cues, approaches for shape from texture attempt to recover the three-dimensional shape of a textured object from its image. It should be noted that the concept of "texture" may have different connotations or definitions depending on the given objective. Classification,
segmentation, and synthesis are closely related and widely studied, with shape from texture receiving comparatively less attention. Nevertheless, texture representation is at the core of these four problems. It is generally agreed that the extraction of powerful texture features plays a relatively more important role, since if poor features are used even the best classifier will fail to achieve good results. While this survey is not explicitly concerned with texture synthesis, studying synthesis can be instructive; for example, textures can be classified via analysis by synthesis, in which a model is first constructed for synthesizing textures and then inverted for the purposes of classification. Texture is an important parameter for analyzing medical images. The simplicity of GLCM-based texture inspection motivates researchers to use it in diagnoses from medical images. Magnetic resonance images (MRI) that cannot be assessed visually can be mined for information using texture analysis methods. GLCM has been used to analyze texture patterns in brain images; this technique has been applied to brain images of patients suffering from Alzheimer's disease. However, the GLCM is not limited to MRI images, and has also proved helpful in the detection of other health-related conditions. The features calculated from a GLCM constructed at 0° are best suited to discriminating mass tissues. GLCM is now combined with other methods to extend its scope and features; for example, Gabor filters and GLCM have been combined to obtain improved quantification of texture features.
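The orderless aggregation of local features mentioned above can be sketched in a few lines. This is a toy illustration only; the descriptor values are made up, and the point is simply that sum and max pooling are invariant to the order of the patches:

```python
import numpy as np

# hypothetical local texture descriptors, one row per image patch
local_feats = np.array([[0.2, 1.0, 0.0],
                        [0.4, 0.0, 1.0],
                        [0.0, 2.0, 1.0]])

sum_pool = local_feats.sum(axis=0)  # orderless: a sum over patches
max_pool = local_feats.max(axis=0)  # orderless: an element-wise max

# shuffling the patches leaves both pooled representations unchanged
shuffled = local_feats[::-1]
assert np.allclose(shuffled.sum(axis=0), sum_pool)
assert np.allclose(shuffled.max(axis=0), max_pool)
```

Because the pooled vector forgets where each patch came from, it captures the statistics of the texture rather than the layout of any particular image.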
Texture representation, together with texture classification, forms the primary focus of this survey. As a classical pattern recognition problem, texture classification primarily consists of two critical subproblems: texture representation and classification.
1.3 TECHNOLOGY
1. DEEP LEARNING
Deep learning-based methods have been widely explored for texture representation. Texture/material recognition is generally challenging in that it demands an orderless representation of micro-structures (i.e., texture encoding). Previous research generally combines concatenated global CNN activations with a fully connected layer, which fails to meet the need for a geometry-invariant representation describing local feature distributions. To overcome this drawback, a Fisher-vector CNN descriptor has been proposed which significantly boosts performance for texture analysis-related vision tasks. Owing to their deep architectures and large parameter sets, convolutional neural networks are displacing traditional methods in image recognition as state-of-the-art methods in an increasing number of applications. Training large and complex convolutional neural networks completely from scratch can be prohibitively costly, even when sufficient data for training are available in the first place. Therefore, the use of pretrained networks is a potentially attractive approach to automated image recognition. Texture analysis in images is important in a wide range of industries. It is not precisely defined, since image texture itself is not precisely defined, but intuitively, image texture analysis attempts to quantify qualities such as roughness, smoothness, heterogeneity, regularity, etc., as a function of the spatial variation in pixel intensities. In materials, image texture analysis can be used to derive quantitative descriptors of the distributions of the orientations and sizes of grains in polycrystalline materials. Almost all engineering materials have texture, which is strongly correlated with their properties, such as mechanical strength, resistance to stress corrosion cracking and radiation damage, etc. In this sense, image textures and textures in materials are closely related.
In the case of metalliferous ores or rocks, image texture provides critical information with regard to the response of the materials during mining and mineral processing [1,2]. For example, more energy is generally required to liberate finely disseminated minerals from ores, i.e., ores with fine textures, than is the case with ores with coarse textures.
2. Texture Representation and Pipeline Analysis
The goal of a general texture analysis pipeline is to transform an image patch or an entire image into a compact feature vector that describes texture structures or properties. The extracted feature vectors are then adapted for visual tasks under different scenarios. The transformation from image pixels to feature vectors usually involves two major steps: feature extraction and feature encoding. Feature extraction computes local descriptors which are robust against rotations and translations of images and provide discriminative features for describing local image regions.
Feature encoding learns a texton dictionary (codebook) and, together with feature pooling, links the local texture representations into a global feature representation of an image. A computational pipeline is an integral part of a texture analysis study in medical imaging. It was developed to combine texture analysis and pattern classification algorithms for investigating associations between high-resolution MRI/MRE features and clinical patient data, and also between MRI/MRE features and histological data [2]. A typical pipeline design consists of three main stages, i.e., preprocessing, feature extraction and analysis. Figure 5 illustrates the pictorial representation of a medical imaging pipeline.
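A minimal sketch of the feature-encoding step described above, assuming a texton codebook has already been learned (for example by k-means); the codebook and local features below are made up for illustration:

```python
import numpy as np

def encode(local_feats, codebook):
    """Assign each local feature to its nearest texton, then pool the
    assignments into a normalized histogram (the global representation)."""
    # pairwise distances between features (rows) and textons (rows)
    d = np.linalg.norm(local_feats[:, None, :] - codebook[None, :, :], axis=2)
    assign = d.argmin(axis=1)                        # nearest texton per feature
    hist = np.bincount(assign, minlength=len(codebook)).astype(float)
    return hist / hist.sum()                         # orderless global descriptor

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])        # two hypothetical textons
feats = np.array([[0.1, 0.0], [0.9, 1.0], [1.0, 0.9]])
h = encode(feats, codebook)                          # -> [1/3, 2/3]
```

The resulting histogram is the "bag of textons" view of the image: it records how often each dictionary atom occurs, discarding where it occurs.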
3. Food Texture Analyzer (FRTS Series)
This is a food texture analyzer which quantifies food texture. It allows measurements to be performed by simply selecting a food or a standard, following the guidance on the touch screen. The supplied software enables visual analysis of measurement results by graphing. It quantifies texture in terms of force to evaluate the textural properties of food such as firmness, tackiness, cohesion, elasticity, etc. It also reduces testing time: selecting a food sample or a test standard and a preset condition from the touch screen confirms the measuring conditions. It makes it easy to perform food measurements complying with the
corresponding parts of ISO standards and more.
4. Best First Technique
This is a selection technique that combines both forward selection and backward elimination rules. It is a method that does not simply terminate when the performance starts to drop, but keeps a list of all attribute subsets evaluated so far, sorted in order of the performance measure, so that it can revisit an earlier configuration instead. Given enough time it will explore the entire space, unless this is prevented by some kind of stopping criterion. It can search forward from the empty set of attributes, backward from the full set, or start at an intermediate point and search in both directions by considering all possible single-attribute additions and deletions.
5. Greedy Stepwise Technique
Greedy stepwise searches greedily through the space of attribute subsets. Like the best first technique, it may progress forward from the empty set or backward from the full set. Unlike the best first technique, it does not backtrack but terminates as soon as adding or deleting the best remaining attribute decreases the evaluation metric.
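The greedy stepwise (forward) search described above can be sketched as follows. `evaluate` stands in for whatever performance measure the wrapper uses, and the toy metric at the bottom is invented purely for illustration:

```python
def greedy_forward_select(attributes, evaluate):
    """Greedily add the best remaining attribute each round; unlike best
    first, terminate as soon as no addition improves the evaluation metric."""
    selected, best = [], float("-inf")
    remaining = list(attributes)
    while remaining:
        score, attr = max((evaluate(selected + [a]), a) for a in remaining)
        if score <= best:      # no improvement: stop, with no backtracking
            break
        best = score
        selected.append(attr)
        remaining.remove(attr)
    return selected

# toy metric: reward attributes in {"a", "b"}, penalize subset size
score_fn = lambda subset: len(set(subset) & {"a", "b"}) - 0.1 * len(subset)
chosen = greedy_forward_select(["a", "b", "c"], score_fn)  # selects a and b only
```

Best first would differ only in keeping every evaluated subset in a sorted list so that it can back up to an earlier configuration instead of stopping at the first non-improving step.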
6. Data Preprocessing and Data Cleaning
One of the most important steps of model building is preprocessing of the dataset. For data preprocessing, a good understanding of the dataset is very important. Data preprocessing consists of the following steps: data cleaning, transformation, normalization, feature extraction and feature selection. The data preprocessing step is considered important because it can have a significant impact on how a supervised machine learning algorithm performs.
7. Automatic Transmit Power Control
Cable Free microwave links feature Automatic Transmit Power Control (ATPC), which automatically increases the transmit power during "fade" conditions such as heavy rainfall. ATPC can be used separately from ACM or together with it to maximise link uptime, stability and availability. When the "fade" conditions (rainfall) are over, the ATPC system reduces the transmit power again. This reduces the stress on the microwave power amplifiers, which reduces power consumption and heat generation and increases expected lifetime (MTBF). Automatic Transmit Power Control is a key technology to ensure reliable transmission in all weather conditions, especially in regions with high rainfall and on longer links with fading/ducting [43].
8. Intra-Chip Free-Space Optical Interconnect
Optical interconnect is a promising long-term solution. However, while significant progress in optical signaling has been made in recent years, networking issues for on-chip optical interconnect still require much investigation. Taking the underlying optical signaling systems as a drop-in replacement for conventional electrical signaling while maintaining conventional packet-switching architectures is unlikely to realize the full potential of optical interconnects. Here, the design of a fully distributed interconnect architecture based on free-space optics is proposed and studied.
The architecture leverages a suite of newly developed or emerging devices, circuits, and optics technologies. The interconnect avoids packet relay altogether, offers ultra-low transmission latency and scalable bandwidth, and provides fresh opportunities for coherency substrate designs and optimizations [44].
9. Smart and Cognitive Radios
Cognitive radios (CRs) can scan and analyze their environment and adapt their transmission/reception parameters to better convey and protect transmitted data [19]. CRs can be divided into two main categories: smart individual radios and smart networks (largely considered as cognitive radios). A
smart radio can dynamically be auto-programmed and configured. Smart networks optimize the total use of available physical resources among their members. In the case of a cognitive radio, the main decision function can be made in the central unit while the scanning and analysis procedures can be done in each individual unit (a transmission unit can be assigned to a primary or a secondary user). In order to optimally share the physical resources, CR classifies the transmitters (the users) into two categories: primary and secondary users. A primary user (PU) is the user holding a license for a defined spectrum. He is allowed to use his bandwidth at any time as long as he respects the coverage area and the transmission power. As many primary users do not broadcast all the time, their protected bandwidths are not used optimally. Therefore, an opportunistic user (i.e., a secondary user (SU)) can use the best available bandwidth as long as his signal does not interfere with the signal of the PU at any time [47].
10. PSTN
The public switched telephone network, or PSTN, is the world's collection of interconnected voice-oriented public telephone networks. A public switched telephone network is a combination of telephone networks used worldwide, including telephone lines, fiber optics, switching centers, cellular networks, satellites and cable systems [49].
11. Visible Light Communication (VLC)
VLC, a subset of OWC, has emerged as a promising technology in the past decade. VLC based on LEDs or LDs can be a promising solution for upcoming high-density and high-capacity 5G wireless networks. IoT is becoming increasingly important because it allows a large number of devices to be connected for sensing, monitoring, and resource sharing, and VLC could play an important role to this end [51]. VLC technology offers 10,000 times more bandwidth capacity than RF-based technologies. Moreover, it is not harmful to humans. LEDs can switch between different light intensity levels at a very fast rate, which allows data to be modulated through the LED at a speed that the human eye cannot detect [49].
12.
UWSN
Underwater wireless sensor networks (UWSNs) are envisioned to enable applications for a wide variety of purposes such as tsunami warnings, offshore exploration, tactical surveillance, monitoring of oil and gas spills, assisted navigation, pollution monitoring, and many commercial purposes. To make those applications viable, there is a need to enable communications among underwater devices. One approach is the distance-based segmentation (DBS) clustering protocol, which is a variant of the LEACH protocol.
1.4 METHODOLOGY
i. Statistical Methods, in which the texture is characterized by the statistical distribution of intensity values. Examples of these methods are the histogram, GLCM, and run-length matrix.
ii. Structural Methods, where the texture is characterized by feature primitives and their spatial arrangements.
iii. Mathematical model-based Methods, such as fractal models, which usually generate an empirical model of all the pixels contained within an image by considering the weighted average of the pixel intensities in each neighborhood.
iv. Transform-based Methods, where the image is converted into a new form using the spatial-frequency properties of the pixel intensity variations. Some examples of this method are the Wavelet Transform, Fourier Transform and S-transform.
Each of these methodologies is briefly described as follows:
(a) Statistical Methods: In statistical methods, texture is described by a collection of statistics of selected features. The statistical approach to texture analysis primarily describes the texture of regions in an image using higher-order moments of their grayscale histogram values [2]. Selecting various textural features from a gray-level co-occurrence matrix (GLCM) is apparently the most commonly cited method for texture analysis [3]. In addition to the traditional statistical texture analysis methods, multivariate statistical techniques have also been considered for the extraction of textural features. If we consider an image as a matrix, the singular value decomposition (SVD) spectrum of the image texture is a summary vector represented by its singular values [3]. Alternatively, the run-length matrix (RLM) includes higher-order statistics of the gray-level histogram for an image. The RLM approach to texture analysis distinguishes fine textures of an image as having few pixels in a constant gray-level run and coarse textures as having many pixels in such a run [3].
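As a small example of the SVD spectrum mentioned above, the singular values of an image matrix can serve as a compact texture summary. The 3×3 patch here is made up for illustration:

```python
import numpy as np

patch = np.array([[2.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.5]])

# singular-value spectrum: a summary vector describing the patch's texture
spectrum = np.linalg.svd(patch, compute_uv=False)  # -> [2.0, 1.0, 0.5]
```

For a real texture patch the decay of the singular values reflects how much of the patch's energy is captured by a few dominant spatial patterns.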
(i) Histograms: In digital images, the allowed values for the grey level that can be given to a pixel are limited. The grey value is usually an integer ranging from 0 to 2^b − 1, where b denotes the number of bits of the image [22]. The histogram of an image is drawn by counting the number of pixels in the image that possess a given grey-level value. For example, in a 12-bit image the histogram may be represented by a graph whose x-coordinates range from 0 to 4095 and whose y-coordinates represent the corresponding pixel counts [22]. From the histogram many parameters may be derived, such as its mean, variance and percentiles.
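As a concrete illustration, the following numpy sketch builds the grey-level histogram of a small 8-bit image and derives the mean and variance from it (the toy image and function names are illustrative, not taken from the original text):

```python
import numpy as np

def grey_histogram(img, bits=8):
    """Count how many pixels take each grey level 0 .. 2**bits - 1."""
    levels = 2 ** bits
    return np.bincount(img.ravel(), minlength=levels)[:levels]

def histogram_stats(img, bits=8):
    """First-order statistics derived from the grey-level histogram."""
    hist = grey_histogram(img, bits)
    levels = np.arange(hist.size)
    p = hist / hist.sum()                    # normalise to a probability mass
    mean = float((levels * p).sum())
    var = float(((levels - mean) ** 2 * p).sum())
    return mean, var

# Toy 4x4 image with grey levels 0..3 (stored as 8-bit values).
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]], dtype=np.uint8)
mean, var = histogram_stats(img)
```

Percentiles could be read off the cumulative sum of the same histogram in the same way.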
(ii) Run Length Matrix: The run-length matrix is a technique in which we search the image, always along a particular direction, for runs of pixels that share the same grey-level value. Given a particular direction (for example, the vertical direction), the run-length matrix records, for each allowed grey-level value, how many runs there are of, say, 2 consecutive pixels with that grey-level value; it then repeats the count for runs of 3 consecutive pixels, then 4, 5 and so on [22]. Thus from a single image, typically four matrices are generated: for the vertical, horizontal and two diagonal directions [22].
(iii) Haralick’s co-occurrence matrix: Haralick’s co-occurrence matrix is a method that gathers statistical information about an image (or an image ROI) from the distribution of pairs of its pixels. It is calculated by defining a direction and a distance, and considering the pairs of pixels separated by that distance along that direction. The matrix has size Ng × Ng, where Ng is the total number of gray levels in the image. The element (i, j) of the matrix is produced by counting the total number of occasions on which a pixel with value i is adjacent to a pixel with value j, and then dividing the whole matrix by the total number of such comparisons made. Each entry of the resulting matrix can thus be considered as the probability that a pixel with value i is found adjacent to a pixel with value j. The co-occurrence matrix is therefore a good descriptor of the distribution of grey levels in relation to other grey levels.
(b) Structural Methods: This texture analysis technique characterizes a texture as a combination of well-defined texture elements, such as regularly spaced parallel lines. The image texture is defined by the properties and placement rules of the texture elements.
Different structural texture analysis approaches have been recommended, ranging from utilizing different shapes of structuring elements to understanding real textures as distorted versions of ideal textures. As far as practical application is concerned, however, these methods are in limited use, since they can only describe very regular textures [5].
(c) Mathematical Model Based Methods: In this approach to texture analysis, a texture in an image is represented using sophisticated mathematical models (such as stochastic or fractal models). The model parameters are estimated and used for the image analysis [72]. Mathematical model based texture analysis techniques generate an empirical model of each pixel in the image based on a weighted average of the pixel intensities in its neighborhood [7]. The disadvantage of these models is that the estimation of the parameters is computationally very complex [2]. The estimated parameters of the image models are used as textural feature descriptors. Examples of such model-based texture descriptors are autoregressive (AR) models, Markov random fields (MRF) and fractal models [8].
(d) Transform Based Methods: Finally, the transform-based texture analysis methods convert the image into a new form by using the spatial-frequency properties of the pixel intensity variations. The success of these modern techniques is largely due to the type of transform they use to extract textural features from the image. In this approach the texture properties of an image may be analyzed in the scale space or the frequency space. Transform-based methods are based on the Gabor, Fourier or wavelet transforms [1].
(i) Wavelet Transform: The wavelet transform is a spatial/frequency analytical tool that has been used extensively during the past ten years and has been an active area of research. The wavelet transform is a traditional pyramid-type transform that decomposes a signal into sub-signals in low-frequency channels [8]. A drawback, however, is that the most significant information of a textured image often appears in the middle-frequency channels, so the conventional wavelet transform does not work properly in the texture context. To rectify this drawback, the transform is modified and an energy function is used to characterize the strength of a sub-signal contained in a frequency channel requiring further decomposition. This idea leads to the formation of the tree-structured wavelet transform.
Fig 1.1: Wavelet transform of the image
Figure 1.1 shows an example of a wavelet transform of an image. The top left corner depicts the low-frequency, small-scale version of the original image, whereas the other sub-images represent higher-frequency versions of the original image at different scales [2]. An example of a parameter derived from the wavelet transform is the wavelet energy associated with a given scale and a given direction.
This parameter gives a measure of the frequency content of the image at a given scale and in a given direction [2].
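The wavelet-energy feature described above can be sketched with a single level of the 2-D Haar transform (numpy only; a real system would typically use a wavelet library, and the Haar filter is simply the most compact choice for illustration):

```python
import numpy as np

def haar_level(img):
    """One level of the 2-D Haar wavelet transform.

    Returns the approximation (LL) and the three detail sub-bands
    (LH, HL, HH); image sides are assumed even.
    """
    a = np.asarray(img, dtype=float)
    # Filter along rows: average / difference of horizontal pixel pairs.
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # Filter along columns on the row-filtered results.
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def subband_energies(img):
    """Wavelet energy per sub-band: mean of the squared coefficients."""
    return [float(np.mean(b ** 2)) for b in haar_level(img)]

# Vertical stripes alternate along each row, so their energy lands in the
# sub-band built from horizontal pair differences (hl here).
stripes = np.tile(np.array([0.0, 1.0]), (4, 2))   # 4x4 image of stripes
energies = subband_energies(stripes)              # [LL, LH, HL, HH]
```

The per-band energies are exactly the kind of scale- and direction-indexed features the text describes.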
(ii) S-Transform: The S-transform (ST) is closely related to the continuous wavelet transform: it uses the complex Morlet mother wavelet and therefore directly measures the local frequency composition of an image at each and every pixel. The S-transform has been successful in analyzing signals in various applications, such as ground vibrations, seismic recordings, gravitational waves, power system analysis and hydrology. Almost all engineering materials have texture, which is strongly correlated with their properties, such as mechanical strength, resistance to stress corrosion cracking and radiation damage; in this sense, image textures and textures in materials are closely related. In the case of metalliferous ores or rocks, image texture provides critical information about the response of the materials during mining and mineral processing. The transformation from image pixels to feature vectors usually involves two major steps: feature extraction and feature encoding. Feature extraction methods that are robust against rotations and translations of images are able to provide discriminative features for describing local image regions. The 1D S-transform has proved to be a useful tool for analyzing medical signals, such as laser Doppler flowmetry, EEG and functional magnetic resonance imaging.
The S-transform works satisfactorily for texture analysis of images in the medical field owing to its optimum space-frequency resolution and its close connection to the Fourier transform (FT). The main drawback of S-transform analysis for 2D images, however, has been its redundant nature: to calculate and store the texture features of large medical images, extensive calculation time and a large memory space are required [8]. As a result, the S-transform of a 256×256 MR image takes almost one and a half hours to calculate on one computer, with memory requirements of almost 32 GB [8].
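To make the cost concrete, here is a minimal numpy sketch of the discrete 1-D S-transform in its usual frequency-domain formulation (illustrative only; the O(N²) output size is exactly the redundancy that motivates the faster variants described next):

```python
import numpy as np

def s_transform(h):
    """Direct discrete 1-D S-transform (frequency-domain formulation).

    Row n of the result holds the local spectrum at frequency n for every
    sample position; row 0 is the signal mean.  The output has
    (N/2 + 1) x N complex entries, which is why large 2-D images become
    expensive in both time and memory.
    """
    h = np.asarray(h, dtype=float)
    N = h.size
    H = np.fft.fft(h)
    m = np.fft.fftfreq(N) * N                 # signed frequency index
    S = np.zeros((N // 2 + 1, N), dtype=complex)
    S[0, :] = h.mean()
    for n in range(1, N // 2 + 1):
        gauss = np.exp(-2.0 * np.pi ** 2 * m ** 2 / n ** 2)  # Gaussian voice
        S[n, :] = np.fft.ifft(np.roll(H, -n) * gauss)
    return S

# A 64-sample sine at 5 cycles: the S-transform localizes its energy in
# the n = 5 voice, with |S| equal to half the amplitude.
N = 64
sig = np.sin(2 * np.pi * 5 * np.arange(N) / N)
S = s_transform(sig)
```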
(iii) Discrete Orthonormal S-Transform (DOST): The discrete orthonormal space-frequency transform (DOST) is a relatively new and effective approach for describing image texture [8]. To obtain a rotationally consistent set of texture features, the DOST components can be combined together, which in turn accurately distinguishes between a series of texture patterns [8]. The DOST is highly efficient: it provides the multi-scale information and computational efficiency of wavelet transforms while delivering the texture features as Fourier frequencies. It outperforms other leading wavelet-based texture analysis techniques and is more efficient than the classical Haralick co-occurrence matrix [8].
(iv) Fast Time Frequency Transform (FTFT): The FTFT is a method developed by Chun Hing Cheng and Ross Mitchell of the Mayo Clinic. It is a fast and accurate way to generate a highly compressed form of the S-transform values directly, and is used when N is so large that the ST values cannot first be computed and stored. It encodes the time-frequency representation (TFR) information uniformly, and so can be used for analyzing the TFR correctly and processing the data efficiently and effectively.
The compression that the FTFT provides helps with the storage, transmission and visualization of the S-transform, whose values can be calculated directly from the FTFT representation.
1.5 ALGORITHMS
Fig 1.2: Flowchart of the texture images [12]
Figure 1.2 shows the flowchart of texture image classification. Texture classification means the assignment of a sample image to a previously defined texture group. This classification usually involves a two-step process.
A. The first stage, the feature extraction phase: In this stage, textural properties are extracted. The goal is to create a model for each of the textures present in the training set. The features extracted at this stage can be numerical values, discrete histograms, empirical distributions, or texture properties such as contrast, spatial structure and direction. These features are used to train the classifier. Many ways to categorize texture have been proposed, and the efficiency of these methods depends to a great extent on the type of features extracted. The most common ones can be divided into four main groups, "statistical methods", "structural methods", "model-based methods" and "transform methods", each of which extracts different features of the texture [3, 4]. It is worth noting that today it is difficult to place some methods in a particular group, because of the complexity of the methods and their use of combined properties; many of them fall into several groups. The widely used and popular feature extraction methods are described in detail in the next section.
B. The second stage, the classification phase: In this phase, the texture of the test sample image is first analyzed using the same technique used in the previous step; then, using a classification algorithm, the extracted features of the test image are compared with those of the training images and its class is determined. The general flowchart of texture image classification, based on these two phases, is shown in Fig 1.2.
In the second stage, texture classification is based on supervised machine learning (classification) algorithms: the appropriate class for each image is selected by comparing the feature vector extracted in the training phase with the feature vector of the test image, and its class is thereby determined. This step is repeated for each image in the test set. At the end, the classes estimated for the test images are matched against their actual classes and the recognition rate is calculated. The recognition rate indicates the efficiency of the implemented method, and is used to compare its efficiency with that of other available methods.
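The two-phase scheme above can be sketched end to end with a deliberately simple nearest-neighbour rule and the recognition-rate computation (the toy feature vectors are hypothetical; any of the classifiers described below could replace the 1-NN rule):

```python
import numpy as np

def one_nn_classify(train_feats, train_labels, test_feats):
    """Assign each test vector the label of its nearest training vector."""
    preds = []
    for x in test_feats:
        d = np.linalg.norm(train_feats - x, axis=1)   # Euclidean distances
        preds.append(train_labels[int(np.argmin(d))])
    return np.array(preds)

def recognition_rate(pred, actual):
    """Fraction of test images whose estimated class matches the actual one."""
    pred, actual = np.asarray(pred), np.asarray(actual)
    return float((pred == actual).mean())

# Toy 2-D feature vectors (e.g. contrast, energy) for two texture classes.
train_feats = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]])
train_labels = np.array([0, 0, 1, 1])
test_feats = np.array([[0.15, 0.85], [0.85, 0.15]])
pred = one_nn_classify(train_feats, train_labels, test_feats)
rate = recognition_rate(pred, [0, 1])
```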
Machine Learning Algorithms:
a) Decision Trees (J48 & Random Forest): The decision tree is a simple yet widely used classification technique. It follows a nonparametric approach to building classification models; in other words, it does not require any prior assumptions about the probability distributions that the class and the other attributes should satisfy. In a decision tree, every leaf node has an assigned class label. Attribute test conditions are used to separate records having different characteristics at the non-terminal nodes, which consist of the root node and the other internal nodes. Decision trees, especially smaller ones, are relatively easy to interpret. They are quite robust to the presence of noise, especially when methods for avoiding overfitting are employed. The accuracy of a decision tree is not adversely affected by the presence of redundant attributes.
b) AdaBoost: AdaBoost is an iterative technique that adaptively changes the distribution of training samples, which helps the base classifiers concentrate on examples that are difficult to classify. The AdaBoost algorithm begins by assigning equal weights to all instances in the training data.
It then calls the learning algorithm to develop a classifier for this data and reweights each instance according to the classifier's output: the weight of instances that were correctly classified is decreased and that of misclassified ones is increased.
c) Bagging: Bagging, also known as bootstrap aggregating, is a technique that repeatedly samples from a dataset, with replacement, according to a uniform probability distribution. Every bootstrap sample has the same size as the original dataset. Because the sampling is done with replacement, some of the instances may appear more than once in the same training
set, while others may be eliminated from the training set. Bagging improves the generalization error by reducing the variance of the base classifiers. The stability of the base classifier decides the
performance of the bagging method. Bagging does not focus on any particular instance of the training data, since every sample has an equal probability of being selected. It is therefore less prone to over-fitting the model when applied to noisy data.
d) Support Vector Machines (SVM): A classification technique that has received considerable attention is the support vector machine (SVM). The technique originated in statistical learning theory and has shown promising results in many practical applications. SVM works well with high-dimensional data and is not affected by the dimensionality problem. SVM performs capacity control by maximizing the margin of the decision boundary. Nevertheless, the user must still provide other parameters, such as the type of kernel function to use and the cost C of introducing each slack variable. SVM works well for binary categorical class indicators. The SVM is one of the most popular supervised learning algorithms; although it can be used for both classification and regression problems, it is primarily used for classification in machine learning. The goal of the SVM algorithm is to create the best line or decision boundary that can segregate n-dimensional space into classes, so that new data points can easily be placed in the correct category in the future; this best decision boundary is called a hyperplane.
e) Artificial Neural Network (ANN): The study of artificial neural networks (ANN) took its inspiration from simulation models of biological neural systems. Similar to the structure of the human brain, an ANN comprises an interconnected network of nodes and directed links. Multilayer neural networks with at least one hidden layer are universal approximators, i.e., they can be used to approximate any target function.
An ANN can handle redundant features because the weights are learned automatically during the training step. The disadvantage of ANNs is that they are sensitive to the presence of noise in the training data, and training is time-consuming, especially when the number of hidden nodes is large. The term "artificial neural network" refers to a biologically inspired sub-field of artificial intelligence modeled after the brain. An artificial neural network is a computational network based on the biological neural networks that form the structure of the human brain. Just as the human brain has neurons interconnected with each other, artificial neural networks have neurons, known as nodes, linked to each other in the various layers of the network.
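As a minimal illustration of the multilayer network just described, the following numpy sketch runs a forward pass through one hidden layer (random weights stand in for trained ones; in practice the weights would be learned by backpropagation):

```python
import numpy as np

def sigmoid(z):
    """Logistic activation, mapping any real input into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a one-hidden-layer network: input -> hidden -> output."""
    h = sigmoid(x @ w_hidden + b_hidden)   # hidden-node activations
    return sigmoid(h @ w_out + b_out)      # output score in (0, 1)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))                # 4 samples, 3 texture features each
w_hidden = rng.normal(size=(3, 5))         # 5 hidden nodes
b_hidden = np.zeros(5)
w_out = rng.normal(size=(5, 1))            # single output node
b_out = np.zeros(1)
y = mlp_forward(x, w_hidden, b_hidden, w_out, b_out)
```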
CHAPTER 2
2.1 LITERATURE SURVEY
1) Haralick et al. proposed a texture descriptor, the gray-level co-occurrence matrix (GLCM), which extracts co-occurring gray-scale values and collects their statistics. There are three texture descriptors: the homogeneous texture descriptor, the non-homogeneous texture descriptor and the texture browsing descriptor. The homogeneous descriptor describes the directionality and regularity of patterns in texture images. The non-homogeneous texture descriptor, also known as the edge histogram descriptor, captures the spatial distribution of edges within an image. The texture browsing descriptor provides a coarser description of the texture than that obtained using the homogeneous texture descriptor. A statistical method of examining texture that considers the spatial relationship of pixels is the GLCM, also known as the gray-level spatial dependence matrix. The GLCM functions characterize the texture of an image by calculating how often pairs of pixels with specific values in a specified spatial relationship occur in the image, creating a GLCM, and then extracting statistical measures from this matrix. (The texture filter functions, described in Calculate Statistical Measures of Textures, cannot provide information about shape, that is, the spatial relationships of pixels in an image.) Haralick proposed the GLCM together with a set of 14 different features for classifying the texture of an image, developing the spatial relationship of image pixels as a statistical function of the gray level, used as a quantitative measurement of image texture. Julesz first used the spatial relationship of gray levels and their co-occurrences in statistical form for texture description.
The investigation of Sutton and Hall is also based on the statistics of gray-level pairs, but Deutsch and Belknap presented a more elaborate version of the GLCM which combines the information in 2 × 2 matrices using different separations and orientations between gray-level pairs. The GLCM is calculated from the frequency with which an image pixel is followed by a pixel of the same or another gray level. The GLCM created from an input image of dimension 4 × 4, displaying the frequency of occurrence of gray-level pairs, is shown in Figure 1B, and its normalized form in Figure 1C. The mathematical notation of the statistical features obtained from the Haralick GLCM is expressed in Equations (1) to (14); more details about the GLCM can be found in the literature. Here p(i, j) represents the pixel of an image having x rows and y columns with coordinates (i, j): the relative frequency with which gray tone i is followed by gray tone j at distance d is expressed as p(i, j).
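The counting-and-normalizing procedure just described can be sketched directly (the toy image, the offset and the contrast feature shown are illustrative choices, not from the original text):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Grey-level co-occurrence matrix for the pixel offset (dx, dy).

    Entry (i, j) counts pairs in which a pixel of value i has a neighbour
    of value j at the given offset; dividing by the total number of pairs
    yields the probability form p(i, j).
    """
    img = np.asarray(img)
    rows, cols = img.shape
    M = np.zeros((levels, levels), dtype=float)
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                M[img[r, c], img[r2, c2]] += 1
    return M / M.sum()                       # normalised GLCM

def glcm_contrast(P):
    """One of Haralick's features: sum of (i - j)^2 * p(i, j)."""
    i, j = np.indices(P.shape)
    return float(((i - j) ** 2 * P).sum())

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
P = glcm(img, dx=1, dy=0)                    # horizontal neighbours, distance 1
```

Repeating the computation for the vertical and the two diagonal offsets gives the usual four-direction set of matrices.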
2) Bela Julesz proposed the concept of the "texton", which stresses the importance of microstructures (edges, corners and blobs) for pre-attentive human perception and texture discrimination. In texton-based texture classifiers, texture is viewed as a probabilistic generator of textons. The underlying probability distribution of the generator is estimated by means of a texton frequency histogram that measures the relative frequency of textons from a codebook in a texture image. A texton frequency histogram is constructed from a texture image by scanning over the image and extracting small texture patches. The small texture patches are converted to the image representation used in the codebook in order to obtain a collection of textons. Each extracted texton is compared to the textons in the codebook to identify the most similar one, and the corresponding texton frequency histogram bin is incremented. After normalization, the texton frequency histogram forms a feature vector that models the texture and can be used to train a classifier [2].
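The texton frequency histogram construction described above can be sketched as follows (the 2 × 2 patches and the three-texton codebook are hypothetical; real codebooks are learned, e.g. by clustering filter responses or patches):

```python
import numpy as np

def extract_patches(img, size=2):
    """Slide a size x size window over the image and flatten each patch."""
    rows, cols = img.shape
    return np.array([img[r:r + size, c:c + size].ravel()
                     for r in range(rows - size + 1)
                     for c in range(cols - size + 1)], dtype=float)

def texton_histogram(img, codebook, size=2):
    """Assign each patch to its nearest codebook texton and count the hits."""
    patches = extract_patches(img, size)
    hist = np.zeros(len(codebook))
    for p in patches:
        d = np.linalg.norm(codebook - p, axis=1)   # distance to each texton
        hist[int(np.argmin(d))] += 1
    return hist / hist.sum()                       # normalised frequency histogram

# Hypothetical codebook: a "flat" texton and two vertical-edge phases.
codebook = np.array([[5.0, 5.0, 5.0, 5.0],
                     [0.0, 9.0, 0.0, 9.0],
                     [9.0, 0.0, 9.0, 0.0]])
img = np.array([[0, 9, 0, 9],
                [0, 9, 0, 9],
                [0, 9, 0, 9]])
hist = texton_histogram(img, codebook)
```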
3) Leung and Malik introduced pre-defined filter banks with various scales and orientations, applied them to local regions, and used the distribution of local filter responses to characterize textural information. The motivation for introducing these filter sets is twofold. The first is to overcome the limitation of traditional rotationally invariant filters, which do not respond strongly to oriented image patches and thus do not provide good features for anisotropic textures. The second arises from a concern about the dimensionality of the filter response space, quite apart from the extra processing it entails. Leung and Malik pioneered the problem of classifying textures under varying viewpoint and illumination; the LM filters used for local texture feature extraction are illustrated in Fig. S. In particular, they marked a milestone by giving an operational definition of textons: the cluster centers of the filter response vectors. Their work has been widely followed by other researchers [35]. To handle 3D effects caused by imaging, they proposed 3D textons, which were cluster centers of filter responses over a stack of images with representative viewpoints and lighting, as illustrated in Fig. 9. In their texture classification algorithm, 20 images of each texture were geometrically registered and transformed into 48D local features with the LM filters. Then the 48D filter response vectors of the 20 selected
  • 28. TEXTURE REPRESENTATION Dept. of ECE, SJCIT 16 2021-22 images of the same pixel were concatenated to obtain a 960D feature vector as the local texture representation. Subsequently, these 960D feature vectors were input into a BoW pipeline for texture classification. Downside of the methodis that it is not suitable for classify in a single texture image under unknown imaging condition which usually arises in practical applications. 4) Varma and Zisserman introduced “Patch Descriptors” challenged the dominant role of the filter banks [132, 170] in texture analysis, and instead developed a simple Patch Descriptor simply keeping the raw pixel intensities of a square neighborhood to form a feature vector, as illustrated in Fig. 10. Byreplacing the MR8 filter responses with the Patch Descriptor in texture classification, Varma and Zisserman [214] observed very good classification performance using extremely compact neighborhoods (3 × 3), and that for any fixed size of neighborhood the Patch Descriptor leads to superior classification as compared to filter banks with the same support. A clear limitation of the Patch. Descriptor itself is sensitivity to nearly any change (brightness, rotation). The simplest way to describe the neighborhood around an interest point is to write down the list of intensities to form a feature vector.Maximum Response (MR8) Filters, consist of 38 root filters but only 8 filter. 5) filter response across all orientations, in order to achieve rotation in-variance. The root filters are a subset of the LM Filters [10], retaining the two rotational symmetry filters, the edge filter, and the bar filter at 3 scales and 6 orientations. Recording only the maximum response across orientations reduces the number of responses from 38 to 8 (3 scales for 2 anisotropic filters, plus 2 isotropic), resulting the so-called MR8 filter bank. 
6) Weszka et al. extended Haralick's concepts on the co-occurrence matrix. Haralick et al. [9] developed the GLCM as a function of distance and angle; Weszka showed that features derived from the GLCM give improved results when a small distance is used between the gray-level pairs. Zucker and Terzopoulos explained how to select the best distance and angle for the co-occurrence matrix.
9) Davis et al. proposed a new approach over the GLCM called the generalized co-occurrence matrix (GCM). The GCM computes features that give more accurate results than the GLCM for texture analysis; however, it never became as popular as the GLCM. By 1980, GLCM-derived statistical features had begun to be used in analyzing aerial, terrain, microscopic, and satellite images. Haralick et al. used features calculated from the GLCM to identify aerial images and eight other terrain classes with 82% accuracy. Chen and Pavlidis showed that the GLCM can be combined with a "split and merge algorithm" for textural segmentation of an image.
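The GLCM construction and Haralick-style features discussed above can be sketched directly. The offset convention and the two features below are a minimal sketch of the standard definitions, not a full Haralick feature set.

```python
import numpy as np

def glcm(img, dx, dy, levels):
    """Gray-level co-occurrence matrix for offset (dx, dy).
    `img` holds integer gray levels in [0, levels). Entry (a, b) counts
    pixel pairs where img[y, x] == a and img[y + dy, x + dx] == b,
    normalized to a joint probability."""
    H, W = img.shape
    M = np.zeros((levels, levels))
    for y in range(H):
        for x in range(W):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < H and 0 <= x2 < W:
                M[img[y, x], img[y2, x2]] += 1
    return M / M.sum()

def haralick_features(P):
    """Two classic Haralick features from a normalized GLCM."""
    idx = np.arange(P.shape[0])
    a, b = np.meshgrid(idx, idx, indexing='ij')
    contrast = np.sum((a - b) ** 2 * P)
    energy = np.sum(P ** 2)       # also called angular second moment
    return contrast, energy
```

A constant image yields zero contrast and unit energy, while a checkerboard at offset (1, 0) concentrates all mass off the diagonal, giving high contrast.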
In the early 1980s, many spatial statistical approaches came into existence to compete with the GLCM, but the GLCM proved successful in maintaining reliable application in various fields and even found applicability in remote sensing. 10) Hsu et al. introduced another statistical method, referred to as the simple statistical transformation (SST); this method is computationally simpler than the GLCM. Jensen compared the performance of the SST and the GLCM by classifying six land-use types. 11) Wang and He introduced another statistical approach for extracting spatial information, referred to as the texture spectrum (TS), developed primarily to overcome the limitations of the GLCM. Gong et al. compared three spatial feature extraction methods, that is, GLCM, SST, and TS, for land-use classification of high-resolution visible (HRV) SPOT satellite multispectral data. The comparison indicates that the GLCM produces better classification results than the other methods. 12) Arivazhagan et al. derived curvelet statistical features and curvelet co-occurrence features from
the subbands of the curvelet decomposition and used them for texture classification, obtaining a high degree of success. Nosaka et al. improved GLCM-based feature extraction by extending the LBP with the assistance of the GLCM; their results showed that the proposed method for face recognition through texture classification performs better than the conventional LBP. Many statistical methods are used for the inspection of machined surfaces, as statistical texture analysis has become more popular than structural texture analysis methods in machine surface analysis. [12] 13) Castellano et al. state that GLCM-based texture features are useful in the analysis of medical images. GLCM-derived texture features can differentiate mass and non-mass tissue in digital mammograms; the features calculated from the GLCM constructed at 0° are best suited to discriminate mass tissue. The GLCM is now combined with other methods to extend its scope and features; for example, the Gabor filter and the GLCM are combined to obtain improved quantification of texture features. [13] 15) Nur Haedzerin Md Noor et al. performed a performance analysis of a free space optics link with multiple transmitters/receivers. Multiple transmitters/receivers (TX/RX) are used to improve the quality of Free Space Optics (FSO) communication systems. With the current need for longer-distance communication, qualitative analysis of such systems has become essential. In this work, the received power level (PR) and bit error rate (BER) are considered to influence the FSO link performance; the relationship between the two parameters is investigated and analysed. Furthermore, the received power for various numbers of TXs and RXs is experimentally measured and compared with values obtained from theoretical calculations. The first part of the work deals with the theoretical calculation and simulation design of multiple laser beams based on the commercial FSO systems used at actual sites. The second part describes the practical work and the analysis of the system's performance [18].
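The LBP descriptor, which Nosaka et al. extended with the GLCM and which reappears in the survey items below, can be sketched in its basic 3×3 form. The neighbor ordering and the 256-bin histogram are a minimal sketch of the standard formulation.

```python
import numpy as np

def lbp_8(img):
    """Basic 3x3 Local Binary Pattern: threshold the 8 neighbors of each
    interior pixel against the center and pack the bits into a code."""
    H, W = img.shape
    # Neighbor offsets in a fixed clockwise order
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros((H - 2, W - 2), dtype=int)
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            c = img[y, x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                if img[y + dy, x + dx] >= c:
                    code |= 1 << bit
            codes[y - 1, x - 1] = code
    return codes

def lbp_histogram(codes):
    """Normalized 256-bin histogram of LBP codes: the texture descriptor."""
    h = np.bincount(codes.ravel(), minlength=256).astype(float)
    return h / h.sum()
```

On a constant image every neighbor ties with the center, so every pixel gets code 255 and the histogram concentrates in a single bin, illustrating why LBP responds to local intensity structure rather than absolute brightness.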
16) Armi and Fekri-Ershad developed a methodology for texture classification by studying variation in the "energy" feature. In their proposed method, three texture classification techniques, that is, GLCM, LBP, and edge segmentation, are applied to the image individually and the energy feature extracted from each is combined; the energy feature is also extracted from the original image, and finally the energy features are compared for both cases [19]. 17) Du et al. developed a novel method of texture classification robust to rotation and illumination. They proposed a new image texture descriptor known as "LSP", which uses a 2D artificial neural network based on the "spiking cortical neuron model" developed by Du for texture classification [45]. Using the "Outex texture dataset", they concluded that the LSP outperforms other texture descriptors. 18) Chang et al. developed a new texture classification technique, named SRITCSD, using "singular value decomposition (SVD) and discrete wavelet transform (DWT)". The technique applies the SVD in the DWT domain, uses particle swarm optimization to tune the method, and performs the final classification with a support vector machine. Results show that the SRITCSD method can outperform other texture classification methods. 19) Liu et al. analyzed changes in texture appearance caused by variations in illumination, rotation, and viewpoint. They noticed that a slight change in contrast can change the texture appearance completely.
They found texture variation due to changing scale the "hardest to handle". They proposed a network called GANet, which uses a genetic algorithm to change the background filter, and developed a new dataset called "Extreme-Scale Variation Texture" to test their system. Their system outperforms several existing texture classification techniques by more than 10%.
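Two of the building blocks behind the SRITCSD technique of item 18, the DWT and the SVD, can be sketched with a single-level Haar transform and a singular-value signature. This is an illustrative sketch of those two ingredients only, not the SRITCSD pipeline itself.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar wavelet transform (image sides must be even).
    Returns the approximation (LL) and detail (LH, HL, HH) subbands."""
    a = (img[0::2] + img[1::2]) / 2.0      # average adjacent rows
    d = (img[0::2] - img[1::2]) / 2.0      # difference of adjacent rows
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def svd_signature(subband, k=3):
    """Top-k singular values of a subband: a tiny descriptor in the
    spirit of combining the SVD with the DWT."""
    s = np.linalg.svd(subband, compute_uv=False)
    return s[:k]
```

An image with identical rows produces no vertical detail, so the HL subband vanishes; the singular values of the remaining subbands then summarize the texture's dominant variation.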
CHAPTER 3

3.1 Inference

In this section we discuss how to infer a (minimal) representation {ω, ω̄, θω̄} from a given texture image {Ω, I}, and how to synthesize a novel texture image Î from it. We start from the latter, since the algorithm that infers the representation utilizes the synthesis procedure.

Algorithm 1: Texture Synthesis
0: Initialize Î(0) to a random texture
1: Set ν ω̂s = 1 for s = 1,...,S and jmax = 20, b = 0.7
2: for j = 1,...,jmax do
3:   for s = 1,...,S do
4:     ωs(j) = nrst_nghbr(θω̄, ω̄, Î(j−1)(ω̂s))
5:   Let Î(j) = argmin over Î of E(Î, {ωs(j), s = 1,...,S})
6:   ν ω̂s = ||Î(j)(ω̂s) − I(ωs(j))||^(b−2) for s = 1,...,S
7:   if (for all ω̂s in Ω̂S : Î(j)(ω̂s) = Î(j−1)(ω̂s)) then break

Function nrst_nghbr(θω̄, ω̄, Î(ω̂))
8: Let s be the index of the nearest neighbor of Î(ω̂) in θω̄
9: Retrieve ωs within ω̄
10: return ωs

Table 3.1: Algorithm for texture synthesis

3.2 Image texture synthesis

Given a representation {ω, ω̄, θω̄}, we can synthesize novel instances of the texture by sampling from dP(I(ω)) within ω̄. This is straightforward in a non-parametric setting, where the representation is itself a collection of samples: one can simply select neighborhoods ωλ within ω̄ and populate a new lattice with patches I(ωλ), ensuring compatibility along patch boundaries and intersections. Efros et al. [9] proposed a causal sampling scheme that satisfies such compatibility conditions, but fails to respect the Markov structure.
Our selection scheme instead satisfies the compatibility conditions and, by construction, also respects the Markov structure. We perform this selection and simultaneously infer Î. We do so by first initializing Î at random. We select neighborhoods ω̂s on a grid over the domain of the synthesized texture, spaced every √r/4. We let Ω̂S = {ω̂s, s = 1,...,S} denote the collection of the selected ω̂s, ΩS = {ωs, s = 1,...,S} denote the chosen neighborhoods within ω̄, and I(ωs) ∈ θω̄ denote the nearest neighbor of Î(ω̂s). We then minimize an energy function with respect to {ωs} and Î. The procedure to minimize this energy function is given in Alg. 1. The weight ν ω̂s, defined in Alg. 1, is used to reduce the effect of outliers, as is typical in iteratively re-weighted least squares [19]. The process is performed in a multiscale fashion, by repeating the procedure over 3 neighborhood sizes: |ω̂s|, |ω̂s|/2, |ω̂s|/4. By first synthesizing at scale |ω̂s| = r, we capture the Markov structure of the texture; subsequent repetitions refine the synthesized texture by adding finer details. We also repeat this process over a number of different output image sizes.

3.3 Video texture synthesis

The texture synthesis algorithm in [19] was extended to temporal textures, but relied on the availability of optical flow. Unfortunately, optical flow is expensive to store, as encoding it is more costly than encoding the original images. We propose a temporal texture synthesis algorithm that relies on neighborhoods ωλ that extend in time. We take the input video {It, t = 1,...,T} and compute a compact representation θω̄t, from which we synthesize {Ît, t = 1,...,T}. In this section we assume we have θω̄t; in Sec. 4.5 we explain how to infer it. We re-define all quantities to have domains that extend in time.
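The core loop of Alg. 1 can be sketched in a drastically simplified form: non-overlapping 1-D patches, no IRLS weights, and plain nearest-neighbor matching. The function names and the toy setting are illustrative, not the actual implementation; only the match/update/convergence structure mirrors the algorithm.

```python
import numpy as np

def nearest_neighbor(patch, dictionary):
    """Index of the dictionary patch closest to `patch`
    (the role of nrst_nghbr, lines 8-10 of Alg. 1)."""
    d = [np.sum((patch - p) ** 2) for p in dictionary]
    return int(np.argmin(d))

def synthesize(dictionary, num_patches, j_max=20, seed=0):
    """Toy version of Alg. 1 on non-overlapping 1-D patches: repeatedly
    match each synthesized patch to its nearest dictionary patch and
    replace it, stopping when nothing changes (line 7)."""
    rng = np.random.default_rng(seed)
    plen = dictionary[0].shape[0]
    # Line 0: initialize the synthesized texture at random
    synth = [rng.random(plen) for _ in range(num_patches)]
    for _ in range(j_max):
        # Lines 3-4: nearest-neighbor assignment for every patch
        matches = [nearest_neighbor(p, dictionary) for p in synth]
        # Line 5: with non-overlapping patches, the energy minimizer
        # is simply the matched dictionary patch itself
        new = [dictionary[m].copy() for m in matches]
        if all(np.array_equal(a, b) for a, b in zip(new, synth)):
            break                      # line 7: converged
        synth = new
    return np.concatenate(synth)
```

With overlapping patches, line 5 would instead blend the matched patches where they overlap, which is what makes the full algorithm non-trivial.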
To reduce computational complexity we fix the temporal extension of the neighborhoods to 2 frames, although longer choices are possible. Hence for t > 1, ωλt ⊂ (1, 1, t−1) : (X, Y, t), which makes it a 3-D neighborhood, and ω̄ becomes ω̄t := the union of ωλt over λ = 1,...,Λ, a union of 3-D neighborhoods. It(ωλt) is therefore defined on the 3-D lattice, and θω̄t := {It(ω1t),...,It(ωΛt)}. For t = 1, ωλt=1, ω̄t=1 and θω̄t=1 remain 2-D.
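The two-frame neighborhoods described above can be sketched as follows; the array layout (time, row, column) and the function name are assumptions for illustration.

```python
import numpy as np

def temporal_patch(video, t, y, x, r):
    """Extract a two-frame neighborhood: an r x r patch at (y, x) in
    frame t stacked with the same patch in frame t-1. For the first
    frame (t = 0 here) the neighborhood stays 2-D, mirroring the
    t = 1 case in the text."""
    if t == 0:
        return video[0, y:y+r, x:x+r]
    # Shape (2, r, r): a 3-D neighborhood on the spatio-temporal lattice
    return video[t-1:t+1, y:y+r, x:x+r]
```

Fixing the temporal extent to 2 frames keeps the nearest-neighbor searches cheap while still capturing frame-to-frame dynamics.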
Fig 3.1: Temporal texture synthesis

The figure illustrates temporal texture synthesis: in order to achieve compatibility with the already synthesized textures, the relevant neighborhoods are first masked; we then unmask all neighborhoods and use them to minimize the energy function.

3.4 Synthesizing Multiple Textures Simultaneously

We demonstrate how multiple textures can be synthesized simultaneously for video and images without computing a segmentation map. This is useful for applications such as video compression (where {ω, ω̄, θω̄} can be used to synthesize the textures of the input video) or for image processing tasks such as hole-filling and frame interpolation. To place the textures in their corresponding locations in a video (or image), we implicitly define their boundaries by partitioning each frame into two types of regions: textures and their complementary region type, structures. Structures are regions of images that trigger isolated responses of a feature detector; these include blobs, corners, edges, junctions and other sparse features. We determine which regions are structures by using a feature point tracker such as [24]. Partitioning images or video into two types of regions has been previously proposed by several works [29] using a single image. In our framework, if a region at some scale ℓ triggers an isolated response of a feature detector (i.e., it is a structure at scale ℓ), then the underlying process is, by definition, not stationary at the scale |ω| = ℓ; therefore, it is not a texture at that scale. It also implies that any region ω of size |ω̄| is not sufficient to predict the image outside that region. This of course does not prevent the region from being a texture at a scale σ >> ℓ: within a region of scale σ there may be multiple such regions of size ℓ, spatially distributed in a way that is stationary/Markovian. Vice-versa, if a region of an image is a texture with σ = |ω̄|, it cannot have a unique (isolated) extremum within ω̄.
Of course, it could have multiple extrema, each isolated within a region of size ℓ << σ. We conclude that, for any given scale of observation σ, each region of an image is either a texture or a structure.
One must impose boundary conditions so that the texture regions fill in around structure regions seamlessly. To perform texture extrapolation, we follow an approach similar to the one used for video texture synthesis. The video is initialized to a random texture. At locations where structures were detected and tracked, we place the actual image (intensity) values. We select ω̂st ∈ Ω̂St as before, on a 3-D grid over the synthesized frames, but with the added restriction that ω̂st must have at least one pixel in the texture domain (otherwise it is entirely determined by the structures). Patches lying entirely in the texture domain need to be synthesized; patches that straddle the texture/structure partition are used as boundary conditions and are synthesized causally.
3.5 Texture Qualitative Criterion

To evaluate the quality of the texture synthesis algorithm, we need a criterion that measures the similarity of the input texture I and the synthesized texture Î. The peak signal-to-noise ratio (PSNR) is typically used as the criterion for evaluating the quality of a reconstruction. However, when the final user is the human visual system, PSNR is known to be a poor criterion, especially for textures, as imperceptible differences can cause large PSNR changes. Works such as [36] operate on general images and do not exploit properties of textures. To address this issue, we introduce the Texture Qualitative Criterion (TQC), denoted ETQC, which is composed of two terms: the first, E1(Î, I), penalizes structural dissimilarity, whereas the second, E2(Î, I), penalizes statistical dissimilarity. We let ω̂s/ωi be patches within Ω̂/Ω, the domains of Î/I, and their nearest neighbors be ωs/ω̂i, selected within the domains of I/Î. Here I/Î can correspond to the input/synthesized textures, or simply to two textures we wish to compare. For E1(Î, I), we select NS patches ω̂s ⊂ Ω̂ and NI patches ωi ⊂ Ω on a dense grid in the domains of the synthesized and input images, respectively. We let Î(ω̂s) and I(ωi) denote the intensity values in the synthesized and input neighborhoods, respectively, and use the selected patches to compute the cost function.

Fig 3.2: Domain transformation of the input texture

Note that this expression resembles Eq. (5), with one change: there is an added summation over patches in the input image. The need for both of these terms has also been noted by others [37] and is illustrated in the figure above: the first term identifies domain/range deformations of the input texture, whereas the second term identifies artifacts in the synthesized texture.
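The bidirectional structure of the criterion can be sketched as two nearest-neighbor patch costs, one in each direction. This is a toy stand-in for the E1-style term and its reverse, not the full TQC (it omits the statistical term E2, the dense-grid subsampling, and the multiscale averaging).

```python
import numpy as np

def patches(img, r):
    """All r x r patches of img, flattened into row vectors."""
    H, W = img.shape
    return np.array([img[y:y+r, x:x+r].ravel()
                     for y in range(H - r + 1) for x in range(W - r + 1)])

def nn_cost(A, B):
    """Mean squared distance from each patch in A to its nearest patch in B."""
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return d.min(axis=1).mean()

def tqc_sketch(synth, inp, r=3):
    """Toy TQC-style cost: the synthesized-to-input term catches artifacts
    in the synthesis, while the input-to-synthesized term catches content
    of the input that the synthesis fails to reproduce; both directions
    are needed, as noted in the text."""
    S, I = patches(synth, r), patches(inp, r)
    return nn_cost(S, I) + nn_cost(I, S)
```

Identical textures score zero, and any patch present in only one of the two images raises exactly one of the two terms, which is why a one-directional cost alone is insufficient.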
We compute this cost function over multiple scales (typically 3) and average over all scales. This makes the cost function more robust, as it is able to compute the similarity of patches at multiple scales.

3.6 Inference of Texture Representation

Given a complexity constraint, we have a bound on the number Λ := Λ(r) of samples ωλ that can be stored in ω̄, which depends on r, the scale of ωλ. To estimate r, the inference algorithm involves two levels of computation. The first level involves fixing rcand, a candidate value of r, and
computing which samples ωλ ⊂ Ω should be stored in ω̄. This computation is repeated for a finite number of candidates rcand. To choose r̂, an estimate of r, we use the TQC to rank the representations and choose the best one according to this ranking. In this section, we describe this procedure in greater detail. For each rcand, we use Alg. 1 (see Sec. 4.1), or its variant if the input is a video, to synthesize a novel instance of the texture at just one scale, rcand. Upon convergence, each ω̂s has an assigned nearest neighbor ωs. The set ΩS = {ωs, s = 1,...,S} denotes the collection of nearest neighbors within Ω, and it is all the data the algorithm needs to synthesize the texture.
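The two-level scale-selection procedure can be sketched as a model-selection loop: synthesize at each candidate scale, score the result, and keep the best-scoring scale. The callback signatures below are assumptions for illustration; in the text the synthesizer is Alg. 1 and the score is the TQC.

```python
import numpy as np

def infer_scale(texture, candidates, synthesize, score):
    """Model selection for the neighborhood scale r: for each candidate
    scale, synthesize a texture instance and rank the result with a
    quality criterion; return the best-scoring scale r-hat.
    `synthesize(texture, r)` and `score(synth, texture)` are supplied
    by the caller."""
    best_r, best_cost = None, np.inf
    for r in candidates:
        synth = synthesize(texture, r)   # first level: fix r_cand, synthesize
        cost = score(synth, texture)     # second level: rank by the criterion
        if cost < best_cost:
            best_r, best_cost = r, cost
    return best_r
```

Because the loop only consumes the two callbacks, the same skeleton works for images or video, with the video variant of the synthesizer swapped in.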