Cracks on a concrete surface are among the earliest symptoms of structural degradation, and detecting them is fundamental to upkeep, since continued exposure leads to severe damage. Manual inspection is the accepted approach to crack inspection: a sketch of the crack is prepared by hand and the condition of the irregularities is noted. Because the manual strategy depends entirely on the inspector's expertise and experience, it lacks objectivity in quantitative analysis. Automated image-based crack detection is therefore proposed as a replacement. The proposed system comprises image processing and data acquisition methodologies for crack detection and evaluation of surface degradation. The results obtained show that deploying image processing effectively is a key step toward the inspection of large infrastructures.
This paper presents crack detection in concrete structures based on fuzzy logic. Safety inspection of concrete structures is very important since it is closely related to structural health and reliability. Automated structural health monitoring systems have become a necessity in recent years, which has encouraged extensive research in this area. The cheap availability of digital cameras makes research in this field easier. This paper presents an efficient crack detection technique for concrete structures based on digital image processing and fuzzy logic. Features are extracted from digital images of the concrete structure using morphological image processing, and the extracted features are then fed to a fuzzy logic system to accurately identify the crack.
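The features-to-fuzzy-rules pipeline described above can be illustrated with a toy fuzzy inference step. This is a minimal sketch, not the paper's actual system: the feature names (`edge_density`, `elongation`), the membership ranges, and the single rule are all illustrative assumptions.

```python
def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def crack_score(edge_density, elongation):
    """Toy Mamdani-style rule (illustrative values, not from the paper):
    IF edge density is high AND the region is elongated
    THEN crack confidence is high. AND is taken as min."""
    high_density = triangular(edge_density, 0.0, 0.5, 1.0)
    elongated = triangular(elongation, 2.0, 6.0, 10.0)
    return min(high_density, elongated)
```

A real system would aggregate several such rules and defuzzify the result; the min-combination shown here is just the simplest fuzzy AND.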
This document discusses machine vision systems and their components. A basic machine vision system includes a camera, a light source, a frame grabber, circuitry and programming, and a computer. Key components of machine vision systems are the image, camera, frame grabber, preprocessor, memory, processor, and output interface. The document also describes CCD and vidicon cameras, their advantages and disadvantages, and the functions of frame grabbers in sampling and quantizing images. Object properties that can be analyzed from pixel grey values include color, specular properties, non-uniformities, and lighting. Applications of machine vision systems are also mentioned.
A comprehensive tutorial on Convolutional Neural Networks (CNNs) that discusses the motivation behind CNNs and deep learning in general, followed by a description of the various components involved in a typical CNN layer. It explains the theory behind the different variants used in practice and also gives a big picture of the whole network by putting everything together.
Next, there's a discussion of the various state-of-the-art frameworks being used to implement CNNs to tackle real-world classification and regression problems.
Finally, the implementation of CNNs is demonstrated by implementing the paper 'Age and Gender Classification Using Convolutional Neural Networks' by Levi and Hassner (2015).
These slides give a brief introduction to image restoration techniques: how to estimate the degradation function, and the noise models and their probability density functions.
Fundamental concepts and basic techniques of digital image processing. Algorithms and recent research in image transformation, enhancement, restoration, encoding and description. Fundamentals and basic techniques of pattern recognition.
Slides, thesis dissertation defense, deep generative neural networks for nove... — Mehdi Cherti
In recent years, significant advances in deep neural networks enabled the creation of groundbreaking technologies such as self-driving cars and voice-enabled personal assistants. Almost all successes of deep neural networks are about prediction, whereas the initial breakthroughs came from generative models. Today, although we have very powerful deep generative modeling techniques, these techniques are essentially being used for prediction or for generating known objects (i.e., good-quality images of known classes): any generated object that is a priori unknown is considered a failure mode (Salimans et al., 2016) or spurious (Bengio et al., 2013b). In other words, when prediction seems to be the only possible objective, novelty is seen as an error that researchers have been trying hard to eliminate. This thesis defends the point of view that, instead of trying to eliminate these novelties, we should study them and the generative potential of deep nets to create useful novelty, especially given the economic and societal importance of creating new objects in contemporary societies. The thesis sets out to study novelty generation in relationship with data-driven knowledge models produced by deep generative neural networks. Our first key contribution is the clarification of the importance of representations and their impact on the kind of novelties that can be generated: a key consequence is that a creative agent might need to re-represent known objects to access various kinds of novelty. We then demonstrate that traditional objective functions of statistical learning theory, such as maximum likelihood, are not necessarily the best theoretical framework for studying novelty generation. We propose several other alternatives at the conceptual level. A second key result is the confirmation that current models, with traditional objective functions, can indeed generate unknown objects. This also shows that even though objectives like maximum likelihood are designed to eliminate novelty, practical implementations do generate novelty. Through a series of experiments, we study the behavior of these models and the novelty they generate. In particular, we propose a new task setup and metrics for selecting good generative models. Finally, the thesis concludes with a series of experiments clarifying the characteristics of models that can exhibit novelty. Experiments show that sparsity, a high noise level, and restricting the capacity of the net eliminate novelty, and that models that are better at recognizing novelty are also good at generating novelty.
Anna university-ug-pg-ppt-presentation-format — Veera Victory
This document proposes creating an institutional repository for Anna University to make its scholarly works more accessible. It details collecting and organizing the university's publications from 1989-2012 across departments and categorizing them by type, subject, and country of publication. Statistics show Anna University publishes nearly 10,000 documents annually, with most being articles published in India. The repository would make Anna University's research more visible globally and promote open access to knowledge. It would cost an estimated 8.42 lakhs over two years to develop and maintain.
Homomorphic filtering is a technique used to remove multiplicative noise from images by transforming the image into the logarithmic domain, where the multiplicative components become additive. This allows the use of linear filters to separate the illumination and reflectance components, with a high-pass filter used to remove low-frequency illumination variations while preserving high-frequency reflectance edges. The filtered image is then transformed back to restore the original domain. Homomorphic filtering is commonly used to correct non-uniform illumination and simultaneously enhance contrast in grayscale images.
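The log-transform idea described above can be sketched in code. A faithful implementation filters in the frequency domain; the version below is a simplified spatial-domain sketch that uses a naive box blur as the low-pass illumination estimate, which keeps it self-contained. The gain parameters `gamma_low` and `gamma_high` and the blur radius are illustrative assumptions.

```python
import math

def box_blur(img, radius):
    """Naive box blur (local mean), used here as a crude low-pass filter."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        total += img[yy][xx]
                        count += 1
            out[y][x] = total / count
    return out

def homomorphic_filter(img, radius=2, gamma_low=0.5, gamma_high=1.5):
    """log -> split into illumination (low-pass) and reflectance (residual)
    -> attenuate illumination, boost reflectance -> exp back."""
    log_img = [[math.log(1.0 + p) for p in row] for row in img]
    illum = box_blur(log_img, radius)  # low-frequency illumination estimate
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            reflect = log_img[y][x] - illum[y][x]  # high-frequency detail
            out[y][x] = math.exp(gamma_low * illum[y][x]
                                 + gamma_high * reflect) - 1.0
    return out
```

With `gamma_low < 1 < gamma_high`, slow illumination gradients are compressed while edges and texture are amplified, which is exactly the illumination/reflectance separation the log domain enables.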
Image enhancement is the process of adjusting digital images so that the results are more suitable for display or further image analysis. For example, you can remove noise, sharpen, or brighten an image, making it easier to identify key features.
Here are some useful examples and methods of image enhancement:
Filtering with morphological operators, histogram equalization, noise removal using a Wiener filter, linear contrast adjustment, median filtering, unsharp mask filtering, contrast-limited adaptive histogram equalization (CLAHE), and decorrelation stretch.
Digital image processing denotes the processing of digital images with a digital computer. Digital images contain various types of noise, which reduce image quality. Noise can be removed by various enhancement techniques. Image smoothing is a key image enhancement technology that can remove noise in images.
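Median filtering, one of the smoothing methods listed above, can be sketched in a few lines. This is an illustrative pure-Python version on a list-of-lists image (borders are left unfiltered for simplicity); real code would use a library routine.

```python
def median_filter3(img):
    """3x3 median filter: replace each interior pixel with the median of its
    neighborhood. Removes impulse (salt-and-pepper) noise while keeping edges."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # borders are copied unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]  # median of the 9 neighborhood values
    return out
```

Unlike a mean filter, the median discards an outlier spike entirely instead of smearing it into its neighbors, which is why it suits impulse noise.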
This document discusses image histogram equalization. It begins by defining an image histogram as a graphical representation of the number of pixels at each intensity value. Histogram equalization automatically determines a transformation function to produce a new image with a uniform histogram and increased contrast. This technique works by mapping the intensity values of the input image to a new range of values such that the histogram of the output image is uniform. The document provides an example of performing histogram equalization on an image and assigns related homework on digital image processing applications.
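The CDF-based mapping described above can be written directly. This is a minimal sketch of the classic equalization formula for a grayscale image stored as a list of lists of integer intensities; libraries such as OpenCV provide optimized equivalents.

```python
def equalize_histogram(img, levels=256):
    """Map intensities through the normalized cumulative histogram so the
    output histogram is approximately uniform (maximum contrast)."""
    h, w = len(img), len(img[0])
    n = h * w
    hist = [0] * levels
    for row in img:
        for p in row:
            hist[p] += 1
    cdf, run = [0] * levels, 0
    for i in range(levels):
        run += hist[i]
        cdf[i] = run
    cdf_min = next(c for c in cdf if c > 0)
    # Classic formula: rescale the CDF so the occupied range spans 0..levels-1.
    lut = [round((cdf[i] - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
           for i in range(levels)]
    return [[lut[p] for p in row] for row in img]
```

A low-contrast image whose intensities cluster in a narrow band gets stretched across the full range, exactly the effect the slides describe.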
This document provides an overview of pattern classification and clustering algorithms. It defines key concepts like pattern recognition, supervised and unsupervised learning. For pattern classification, it discusses algorithms like decision trees, kernel estimation, K-nearest neighbors, linear discriminant analysis, quadratic discriminant analysis, naive Bayes classifier and artificial neural networks. It provides examples to illustrate decision tree classification and information gain calculation. For clustering, it mentions hierarchical, K-means and KPCA clustering algorithms. The document is a guide to pattern recognition models and algorithms for classification and clustering.
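Of the classifiers listed above, K-nearest neighbors is the simplest to show end to end. This is an illustrative sketch (squared Euclidean distance, majority vote); the training-set format is an assumption for the example.

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """k-nearest neighbors: label the query by majority vote among the k
    training points closest in squared Euclidean distance.
    train is a list of (feature_tuple, label) pairs."""
    ranked = sorted(train,
                    key=lambda item: sum((a - b) ** 2
                                         for a, b in zip(item[0], query)))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]
```

There is no training phase at all: the "model" is the data, which is why k-NN appears alongside kernel estimation as a non-parametric method.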
This document presents information on face detection techniques. It discusses image segmentation as a preprocessing step for face detection. Some common segmentation methods are thresholding, edge-based segmentation, and region-based segmentation. Face detection can be classified as implicit/pattern-based or explicit/knowledge-based. Implicit methods use techniques like templates, PCA, LDA, and neural networks, while explicit methods exploit cues like color, motion, and facial features. One method discussed is human skin color-based face detection, which filters for skin-colored regions and finds facial parts within those regions. Advantages include speed and independence from training data, while disadvantages include sensitivity to lighting and accessories.
Performance analysis of automated brain tumor detection from MR imaging and CT scans using basic image processing techniques based on various hard and soft computing methods has been performed in our work. Moreover, we applied six traditional classifiers to detect brain tumors in the images. Then we applied a CNN for brain tumor detection to include a deep learning method in our work. We compared the result of the traditional classifier with the best accuracy (SVM) against the result of the CNN. Furthermore, our work presents a generic method of tumor detection and extraction of its various features.
This document discusses adaptive control systems for machining. It defines adaptive control as a feedback system that automatically adjusts machining variables like cutting speed and feed rate based on actual process conditions. The three main functions of adaptive control are identification, decision, and modification. Adaptive control systems are classified as adaptive control with optimization, which uses a performance index, or adaptive control with constraints, which maximizes variables within set limits. Benefits include increased production and tool life, while limitations include lack of reliable tool sensors and standardized interfaces with CNC units.
Machine Learning for Medical Image Analysis: What, where and how? — Debdoot Sheet
Career advice for EECS (electrical, electronics, and computer science) graduates interested in machine vision, and some advice for a PhD career in medical image analysis.
The document provides an overview of convolutional neural networks (CNNs) and their layers. It begins with an introduction to CNNs, noting they are a type of neural network designed to process 2D inputs like images. It then discusses the typical CNN architecture of convolutional layers followed by pooling and fully connected layers. The document explains how CNNs work using a simple example of classifying handwritten X and O characters. It provides details on the different layer types, including convolutional layers which identify patterns using small filters, and pooling layers which downsample the inputs.
This document discusses various point processing and gray level transformation techniques used in image enhancement. It describes point processing as operating directly on pixel intensity values individually to alter them using transformation functions. The document outlines several basic gray level transformations including linear, logarithmic and power law. It also discusses piecewise linear transformations such as contrast stretching, intensity level slicing, and bit plane slicing. These transformations are used to enhance images by modifying their brightness, contrast and emphasis on certain gray levels.
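The power-law transformation mentioned above is a one-liner per pixel. This sketch applies s = c·r^γ on intensities normalized to [0, 1]; the 8-bit range and rounding are assumptions for the example.

```python
def gamma_transform(img, gamma, c=1.0, max_val=255):
    """Power-law (gamma) point transform: s = c * r**gamma on normalized
    intensities. gamma < 1 brightens dark regions; gamma > 1 darkens them."""
    return [[round(c * (p / max_val) ** gamma * max_val) for p in row]
            for row in img]
```

Because the mapping depends only on each pixel's own value, this is point processing in the document's sense: the same lookup could be precomputed once into a table and applied to every pixel.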
Image Interpolation Techniques with Optical and Digital Zoom Concepts — mmjalbiaty
Digital image concepts and interpolation techniques for optical and digital zoom are discussed. There are three main types of interpolation used for resizing images: nearest neighbor, bilinear, and bicubic. Nearest neighbor is the simplest but produces the lowest quality, while bicubic is the most complex but highest quality. Optical zoom uses lens magnification before sensing, whereas digital zoom interpolates after sensing, resulting in lower quality than optical zoom. Interpolation methods assign pixel values to new locations during resizing based on weighting patterns around the original pixel values.
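The bilinear case described above can be sketched directly: each output pixel is mapped back into the input grid and computed as a distance-weighted average of the four surrounding input pixels. The corner-aligned coordinate mapping used here is one common convention, chosen for simplicity.

```python
def resize_bilinear(img, new_h, new_w):
    """Bilinear interpolation resize of a list-of-lists grayscale image."""
    h, w = len(img), len(img[0])
    out = [[0.0] * new_w for _ in range(new_h)]
    for y in range(new_h):
        for x in range(new_w):
            # Map the output coordinate back into the input grid
            # (corners of the two grids are aligned).
            src_y = y * (h - 1) / (new_h - 1) if new_h > 1 else 0
            src_x = x * (w - 1) / (new_w - 1) if new_w > 1 else 0
            y0, x0 = int(src_y), int(src_x)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            fy, fx = src_y - y0, src_x - x0
            # Weighted average of the 4 neighbors: rows first, then columns.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            out[y][x] = top * (1 - fy) + bot * fy
    return out
```

Nearest neighbor would simply round `src_y`/`src_x` and copy one pixel, which is why it is faster but blockier; bicubic extends the same idea to a 4x4 neighborhood.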
This document summarizes Dynamic Time Warping (DTW), a technique for measuring similarity between two sequences that may vary in speed. It works by finding an optimal match between the two sequences via a local warping path within a defined window. The warping path is restricted by monotonicity, continuity, boundary conditions, and a warping window to limit how far sequences can be warped in time.
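The constraints listed above (monotonicity, continuity, boundary conditions, warping window) translate directly into the standard dynamic-programming recurrence. This is an illustrative implementation with an optional Sakoe-Chiba-style band as the warping window.

```python
def dtw_distance(a, b, window=None):
    """Dynamic Time Warping: cost of the optimal monotonic alignment between
    two sequences that may vary in speed. window (if given) limits how far
    the path may stray from the diagonal (Sakoe-Chiba band)."""
    inf = float("inf")
    n, m = len(a), len(b)
    w = max(window, abs(n - m)) if window is not None else max(n, m)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0  # boundary condition: paths start at (0, 0)
    for i in range(1, n + 1):
        for j in range(max(1, i - w), min(m, i + w) + 1):
            d = abs(a[i - 1] - b[j - 1])
            # Continuity + monotonicity: extend a match, insertion, or deletion.
            cost[i][j] = d + min(cost[i - 1][j],
                                 cost[i][j - 1],
                                 cost[i - 1][j - 1])
    return cost[n][m]  # boundary condition: paths end at (n, m)
```

A sequence aligned against a slowed-down copy of itself gets cost zero, which Euclidean point-by-point distance cannot achieve; that is the point of warping.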
Image segmentation involves grouping similar image components, such as pixels, into segments. It has applications in medical imaging, satellite imagery, and video summarization. Common methods include thresholding, k-means clustering, and region-based approaches. Thresholding segments an image based on pixel intensity values, while k-means clustering groups pixels into a specified number of clusters based on color or other feature similarity. Region-based methods grow or merge regions of similar pixels. Watershed segmentation treats an image as a topographic surface and finds boundaries between regions.
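Thresholding, the first method listed above, is often paired with Otsu's rule for choosing the threshold automatically. This sketch implements Otsu's criterion (maximize between-class variance over the intensity histogram) for an 8-bit list-of-lists image.

```python
def otsu_threshold(img, levels=256):
    """Otsu's method: pick the threshold t maximizing the between-class
    variance of the background/foreground split of the histogram."""
    hist, total = [0] * levels, 0
    for row in img:
        for p in row:
            hist[p] += 1
            total += 1
    total_sum = sum(i * hist[i] for i in range(levels))
    best_t, best_var = 0, -1.0
    w_bg, sum_bg = 0, 0.0
    for t in range(levels):
        w_bg += hist[t]            # pixels at or below t
        if w_bg == 0:
            continue
        w_fg = total - w_bg        # pixels above t
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (total_sum - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def threshold(img, t):
    """Binarize: pixels above t become foreground (255), the rest 0."""
    return [[255 if p > t else 0 for p in row] for row in img]
```

On a bimodal histogram the criterion lands between the two modes; on images with uneven illumination it fails, which is when region-based or watershed methods become preferable.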
At the end of this lesson, you should be able to:
describe the energy and the EM spectrum.
describe image acquisition methods.
discuss image formation model.
express sampling and quantization.
define dynamic range and image representation.
Histogram Processing
Histogram Equalization
Histogram Matching
Local Histogram processing
Using histogram statistics for image enhancement
Uses for Histogram Processing
Histogram Equalization
Histogram Matching
Local Histogram Processing
Basics of Spatial Filtering
Image restoration and degradation model — AnupriyaDurai
This document discusses image restoration and degradation. It provides an overview of image restoration techniques which attempt to reverse degradation processes and restore lost image information. Several types of image degradation are described, including motion blur, noise, and misfocus. Common noise models are explained, such as Gaussian, salt and pepper, Erlang, exponential, and uniform noise. Methods for estimating degradation models from observed images are also summarized, including using image observations, experimental replication of degradation, and mathematical modeling.
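Two of the noise models named above are easy to simulate, which is how restoration methods are usually evaluated. This is an illustrative sketch using only the standard library; the default parameters are assumptions for the example.

```python
import random

def add_gaussian_noise(img, sigma=10.0, rng=None):
    """Additive Gaussian noise: perturb each pixel by N(0, sigma^2),
    clipping to the valid 8-bit range."""
    rng = rng or random.Random(0)
    return [[min(255, max(0, p + rng.gauss(0, sigma))) for p in row]
            for row in img]

def add_salt_pepper(img, prob=0.05, rng=None):
    """Salt-and-pepper (impulse) noise: with probability prob a pixel is
    replaced by 0 (pepper) or 255 (salt), half each."""
    rng = rng or random.Random(0)
    out = []
    for row in img:
        new_row = []
        for p in row:
            r = rng.random()
            if r < prob / 2:
                new_row.append(0)      # pepper
            elif r < prob:
                new_row.append(255)    # salt
            else:
                new_row.append(p)
        out.append(new_row)
    return out
```

Degrading a clean image with a known model and then measuring how well a filter recovers it is the standard way to compare the restoration techniques the document surveys.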
Lec5: Pre-Processing Medical Images (III) (MRI Intensity Standardization) — Ulaş Bağcı
2017 Spring, UCF Medical Image Computing. CAVA: Computer Aided Visualization and Analysis • CAD: Computer Aided Diagnosis • Definitions and Terminologies • Coordinate Systems • Pre-Processing Images – Volume of Interest – Region of Interest – Intensity of Interest – Image Enhancement • Filtering • Smoothing • Introduction to Medical Image Computing and Toolkits • Image Filtering, Enhancement, Noise Reduction, and Signal Processing • Medical Image Registration • Medical Image Segmentation • Medical Image Visualization • Machine Learning in Medical Imaging • Shape Modeling/Analysis of Medical Images • Deep Learning in Radiology
Wavelet Applications in Image Denoising Using MATLAB — ajayhakkumar
The document discusses digital image processing and noise reduction techniques. It covers the following key points:
- Digital image processing uses computer algorithms to process digital images, with advantages over analog processing like a wider range of algorithms.
- Noise reduction is important as images can be contaminated by noise during acquisition, storage, or transmission, degrading quality. Common noise types include Gaussian, salt and pepper, and speckle noise.
- Filtering techniques for noise reduction include spatial filters, frequency domain filters, and wavelet domain techniques like thresholding, with the goal of removing noise while preserving useful image information.
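The wavelet-thresholding idea mentioned above can be shown end to end in one dimension. This is a minimal sketch using a single level of the Haar wavelet and soft thresholding (the document's context is MATLAB and 2D images; the 1D Python version here is an illustrative simplification, and the even-length requirement is an assumption of this toy transform).

```python
def haar_dwt(signal):
    """One level of the Haar transform: pairwise sums (approximation) and
    differences (detail), scaled by 1/sqrt(2). Length must be even."""
    s = 2 ** 0.5
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Exact inverse of haar_dwt."""
    s = 2 ** 0.5
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s, (a - d) / s])
    return out

def soft_threshold(coeffs, t):
    """Shrink coefficients toward zero by t; small (noise-like) details vanish."""
    return [max(abs(v) - t, 0.0) * (1 if v >= 0 else -1) for v in coeffs]

def wavelet_denoise(signal, t):
    """Transform, threshold the detail coefficients, transform back."""
    approx, detail = haar_dwt(signal)
    return haar_idwt(approx, soft_threshold(detail, t))
```

The key property is the one the document states: noise spreads into many small detail coefficients while the signal concentrates in a few large ones, so thresholding removes noise while preserving the useful image information.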
This document discusses medical image processing and its application to breast cancer detection. It provides an overview of digital image processing techniques used in medical imaging like X-rays, mammography, ultrasound, MRI and CT. Computer-aided diagnosis (CAD) helps in tasks like visualization, detection, localization, segmentation and classification of medical images. For breast cancer detection specifically, the document discusses mammography and challenges in detecting tumors in dense breast tissue. It also reviews several published methods for segmenting and analyzing lesions in mammograms and evaluates their performance based on parameters like true positives, false positives, etc.
A New Approach for Multi Index Automatic Change Detection in HR Remotely Sens... — IJLT EMAS
In recent years technology has been growing rapidly and the country has been developing very fast, and as a result infrastructure is increasing. The changes can be noted using a new generation of Earth observation sensors with high spatial resolution, or high resolution (HR), which provide detailed information for change detection. The widely used methods for high-resolution image change detection rely on textural/structural features. In order to obtain high-resolution images for viewing the areas, in this project a multi-index automatic change detection method is proposed for high-resolution imagery. The advantages are as follows: 1) the information (images) sent by the satellite is very small in size and low in pixels, so to improve the viewing ability we use high-dimensional but low-level features (e.g., textural and structural features), i.e., the multi-index representation method. The multi-index representation refers to the enhanced vegetation index, the water index, and the recently developed morphological building index. I am going to use this technique for implementation in military areas, where it has a very large application. Moreover, the traditional methods based on state-of-the-art textural/morphological features were also implemented for the purpose of comparison, which further validates the advantages of my project.
The document discusses reverse engineering and provides details on the process. It is summarized as follows:
[1] Reverse engineering involves scanning a physical part to create a 3D CAD model and then manufacturing a new part. It is used to provide replacement parts when technical data is lost or to duplicate competitor's parts.
[2] The main steps are obtaining the part geometry through 3D scanning, identifying the material, generating a 3D CAD model, and manufacturing the new part. Geometry can be captured through contact methods like CMM or non-contact like 3D laser scanning. Material identification uses techniques like mass spectrometry and SEM.
[3] Once scanned, reverse engineering software converts the 3D image
Image enhancement is the process of adjusting digital images so that the results are more suitable for display or further image analysis. For example, you can remove noise, sharpen, or brighten an image, making it easier to identify key features.
Here are some useful examples and methods of image enhancement:
Filtering with morphological operators, Histogram equalization, Noise removal using a Wiener filter, Linear contrast adjustment, Median filtering, Unsharp mask filtering, Contrast-limited adaptive histogram equalization (CLAHE). Decorrelation stretch
Digital Image Processing denotes the process of digital images with the use of digital computer. Digital images are contains various types of noises which are reduces the quality of images. Noises can be removed by various enhancement techniques. Image smoothing is a key technology of image enhancement, which can remove noise in images.
This document discusses image histogram equalization. It begins by defining an image histogram as a graphical representation of the number of pixels at each intensity value. Histogram equalization automatically determines a transformation function to produce a new image with a uniform histogram and increased contrast. This technique works by mapping the intensity values of the input image to a new range of values such that the histogram of the output image is uniform. The document provides an example of performing histogram equalization on an image and assigns related homework on digital image processing applications.
This document provides an overview of pattern classification and clustering algorithms. It defines key concepts like pattern recognition, supervised and unsupervised learning. For pattern classification, it discusses algorithms like decision trees, kernel estimation, K-nearest neighbors, linear discriminant analysis, quadratic discriminant analysis, naive Bayes classifier and artificial neural networks. It provides examples to illustrate decision tree classification and information gain calculation. For clustering, it mentions hierarchical, K-means and KPCA clustering algorithms. The document is a guide to pattern recognition models and algorithms for classification and clustering.
This document presents information on face detection techniques. It discusses image segmentation as a preprocessing step for face detection. Some common segmentation methods are thresholding, edge-based segmentation, and region-based segmentation. Face detection can be classified as implicit/pattern-based or explicit/knowledge-based. Implicit methods use techniques like templates, PCA, LDA, and neural networks, while explicit methods exploit cues like color, motion, and facial features. One method discussed is human skin color-based face detection, which filters for skin-colored regions and finds facial parts within those regions. Advantages include speed and independence from training data, while disadvantages include sensitivity to lighting and accessories.
Performance analysis of automated brain tumor detection from MR imaging and CT scan using basic image processing techniques based on various hard and soft computing has been performed in our work. Moreover, we applied six traditional classifiers to detect brain tumor in the images. Then we applied CNN for brain tumor detection to include deep learning method in our work. We compared the result of the traditional one having the best accuracy (SVM) with the result of CNN. Furthermore, our work presents a generic method of tumor detection and extraction of its various features.
This document discusses adaptive control systems for machining. It defines adaptive control as a feedback system that automatically adjusts machining variables like cutting speed and feed rate based on actual process conditions. The three main functions of adaptive control are identification, decision, and modification. Adaptive control systems are classified as adaptive control with optimization, which uses a performance index, or adaptive control with constraints, which maximizes variables within set limits. Benefits include increased production and tool life, while limitations include lack of reliable tool sensors and standardized interfaces with CNC units.
Machine Learning for Medical Image Analysis:What, where and how?Debdoot Sheet
A great career advice for EECS (Electrical, electronics and computer science) graduates interested in machine vision and some advice for a PhD career in Medical Image Analysis.
The document provides an overview of convolutional neural networks (CNNs) and their layers. It begins with an introduction to CNNs, noting they are a type of neural network designed to process 2D inputs like images. It then discusses the typical CNN architecture of convolutional layers followed by pooling and fully connected layers. The document explains how CNNs work using a simple example of classifying handwritten X and O characters. It provides details on the different layer types, including convolutional layers which identify patterns using small filters, and pooling layers which downsample the inputs.
This document discusses various point processing and gray level transformation techniques used in image enhancement. It describes point processing as operating directly on pixel intensity values individually to alter them using transformation functions. The document outlines several basic gray level transformations including linear, logarithmic and power law. It also discusses piecewise linear transformations such as contrast stretching, intensity level slicing, and bit plane slicing. These transformations are used to enhance images by modifying their brightness, contrast and emphasis on certain gray levels.
Image Interpolation Techniques with Optical and Digital Zoom Concepts – mmjalbiaty
Digital image concepts and interpolation techniques for optical and digital zoom are discussed. There are three main types of interpolation used for resizing images: nearest neighbor, bilinear, and bicubic. Nearest neighbor is the simplest but produces the lowest quality, while bicubic is the most complex but highest quality. Optical zoom uses lens magnification before sensing, whereas digital zoom interpolates after sensing, resulting in lower quality than optical zoom. Interpolation methods assign pixel values to new locations during resizing based on weighting patterns around the original pixel values.
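As a minimal illustration of the simplest of the three interpolation methods, nearest-neighbour resizing can be sketched as follows; the 2x2 input is a toy example standing in for a "digital zoom" of a sensed image.

```python
def resize_nearest(img, new_h, new_w):
    """Nearest-neighbour resize: each output pixel copies the closest input pixel."""
    h, w = len(img), len(img[0])
    return [[img[min(h - 1, int(i * h / new_h))][min(w - 1, int(j * w / new_w))]
             for j in range(new_w)]
            for i in range(new_h)]

small = [[10, 20],
         [30, 40]]
big = resize_nearest(small, 4, 4)   # 2x "digital zoom" of a 2x2 image
```

The blocky repetition visible in the output is why nearest neighbour is the lowest-quality method: no new information is created, unlike optical zoom which magnifies before sensing.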
This document summarizes Dynamic Time Warping (DTW), a technique for measuring similarity between two sequences that may vary in speed. It works by finding an optimal match between the two sequences via a local warping path within a defined window. The warping path is restricted by monotonicity, continuity, boundary conditions, and a warping window to limit how far sequences can be warped in time.
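The DTW cost-matrix recurrence can be sketched directly. The boundary, monotonicity, and continuity constraints are implicit in the three allowed moves; the warping-window restriction is omitted here for brevity, and the example sequences are invented.

```python
def dtw_distance(a, b):
    """Classic DTW: fill a cost matrix where each cell extends the cheapest
    of the three allowed moves (match, insertion, deletion)."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

# Two sequences with the same shape but different speed: DTW aligns them at zero cost.
slow = [1, 1, 2, 3, 3, 4]
fast = [1, 2, 3, 4]
```

Because the warping path may dwell on one index while advancing the other, the stretched sequence matches the fast one perfectly, which a point-by-point (Euclidean) comparison could not do.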
Image segmentation involves grouping similar image components, such as pixels, into segments. It has applications in medical imaging, satellite imagery, and video summarization. Common methods include thresholding, k-means clustering, and region-based approaches. Thresholding segments an image based on pixel intensity values, while k-means clustering groups pixels into a specified number of clusters based on color or other feature similarity. Region-based methods grow or merge regions of similar pixels. Watershed segmentation treats an image as a topographic surface and finds boundaries between regions.
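Thresholding, the simplest of the listed segmentation methods, reduces to a one-line test per pixel. A toy sketch (the image and the threshold value are invented):

```python
def threshold(img, t):
    """Binary thresholding: pixels brighter than t become foreground (1), others 0."""
    return [[1 if p > t else 0 for p in row] for row in img]

img = [[12, 200, 15],
       [190, 210, 18],
       [14, 205, 11]]
mask = threshold(img, 100)   # the bright pixels form the segment
```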
At the end of this lesson, you should be able to:
describe energy and the EM spectrum.
describe image acquisition methods.
discuss the image formation model.
explain sampling and quantization.
define dynamic range and image representation.
Histogram Processing
Histogram Equalization
Histogram Matching
Local Histogram processing
Using histogram statistics for image enhancement
Uses for Histogram Processing
Basics of Spatial Filtering
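Histogram equalization, the first technique in the list above, maps each grey level through the image's normalized cumulative histogram (CDF). A minimal sketch; the 8-pixel "image" is invented to show how clustered levels get spread apart:

```python
def equalize(pixels, levels=256):
    """Histogram equalization via the normalized cumulative histogram (CDF)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    n = len(pixels)
    # Map each grey level through the scaled CDF.
    lut = [round((c * (levels - 1)) / n) for c in cdf]
    return [lut[p] for p in pixels]

dark = [50, 50, 51, 52, 52, 52, 53, 60]   # low-contrast, clustered levels
flat = equalize(dark)                      # levels spread over the full range
```

The output spans nearly the whole 0..255 range, which is the contrast enhancement these slides describe.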
Image restoration and degradation model – AnupriyaDurai
This document discusses image restoration and degradation. It provides an overview of image restoration techniques which attempt to reverse degradation processes and restore lost image information. Several types of image degradation are described, including motion blur, noise, and misfocus. Common noise models are explained, such as Gaussian, salt and pepper, Erlang, exponential, and uniform noise. Methods for estimating degradation models from observed images are also summarized, including using image observations, experimental replication of degradation, and mathematical modeling.
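Of the noise models listed, salt-and-pepper noise is the easiest to sketch: each pixel is independently replaced by black or white with some probability. The probability, image, and fixed seed below are arbitrary choices for illustration.

```python
import random

def add_salt_pepper(img, prob, seed=0):
    """Replace each pixel, with probability prob, by pepper (0) or salt (255).
    A fixed seed keeps the corruption pattern reproducible."""
    rng = random.Random(seed)
    out = []
    for row in img:
        new_row = []
        for p in row:
            r = rng.random()
            if r < prob / 2:
                new_row.append(0)      # pepper
            elif r < prob:
                new_row.append(255)    # salt
            else:
                new_row.append(p)      # pixel survives
        out.append(new_row)
    return out

plain = [[100] * 4 for _ in range(4)]
noisy = add_salt_pepper(plain, prob=0.4)
```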
Lec5: Pre-Processing Medical Images (III) (MRI Intensity Standardization) – Ulaş Bağcı
2017 Spring, UCF Medical Image Computing. Topics: CAVA (Computer Aided Visualization and Analysis) • CAD (Computer Aided Diagnosis) • Definitions and terminologies • Coordinate systems • Pre-processing images (volume of interest, region of interest, intensity of interest, image enhancement) • Filtering • Smoothing • Introduction to medical image computing and toolkits • Image filtering, enhancement, noise reduction, and signal processing • Medical image registration • Medical image segmentation • Medical image visualization • Machine learning in medical imaging • Shape modeling/analysis of medical images • Deep learning in radiology
Wavelet Applications in Image Denoising Using MATLAB – ajayhakkumar
The document discusses digital image processing and noise reduction techniques. It covers the following key points:
- Digital image processing uses computer algorithms to process digital images, with advantages over analog processing like a wider range of algorithms.
- Noise reduction is important as images can be contaminated by noise during acquisition, storage, or transmission, degrading quality. Common noise types include Gaussian, salt and pepper, and speckle noise.
- Filtering techniques for noise reduction include spatial filters, frequency domain filters, and wavelet domain techniques like thresholding, with the goal of removing noise while preserving useful image information.
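A median filter, the spatial filter most commonly used against salt-and-pepper noise, can be sketched as follows. Assumptions for brevity: a 3x3 window and borders left untouched.

```python
def median_filter3(img):
    """3x3 median filter: replace each interior pixel with the median of its
    neighbourhood, which suppresses isolated salt-and-pepper impulses."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]          # borders copied unchanged
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = sorted(img[i + di][j + dj]
                            for di in (-1, 0, 1) for dj in (-1, 0, 1))
            out[i][j] = window[4]          # middle of the 9 sorted values
    return out

degraded = [[10, 10, 10],
            [10, 255, 10],   # a single salt impulse in the centre
            [10, 10, 10]]
restored = median_filter3(degraded)
```

The outlier never appears in the output because the median of the window ignores extreme values, which is why median filtering removes impulses while preserving edges better than averaging.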
This document discusses medical image processing and its application to breast cancer detection. It provides an overview of digital image processing techniques used in medical imaging like X-rays, mammography, ultrasound, MRI and CT. Computer-aided diagnosis (CAD) helps in tasks like visualization, detection, localization, segmentation and classification of medical images. For breast cancer detection specifically, the document discusses mammography and challenges in detecting tumors in dense breast tissue. It also reviews several published methods for segmenting and analyzing lesions in mammograms and evaluates their performance based on parameters like true positives, false positives, etc.
A New Approach for Multi Index Automatic Change Detection in HR Remotely Sens... – IJLT EMAS
In recent years technology has been growing rapidly and national development has accelerated, so infrastructure is expanding. These changes can be observed with a new generation of Earth-observation sensors whose high spatial resolution (HR) provides detailed information for change detection. The widely used methods for high-resolution image change detection rely on textural/structural features. In this project, a multi-index automatic change detection method is proposed for high-resolution imagery. Its advantages are as follows: 1) the images sent by a satellite are small and low-resolution, so to improve viewing ability we use high-dimensional but low-level features (e.g., textural and structural features), i.e., the multi-index representation method. The multi-index representation refers to the enhanced vegetation index, the water index, and the recently developed morphological building index. I am going to apply this technique to military sites, where it has very wide application. Moreover, the traditional methods based on state-of-the-art textural/morphological features were also implemented for comparison, which further validates the advantages of my project.
The document discusses reverse engineering and provides details on the process. It is summarized as follows:
[1] Reverse engineering involves scanning a physical part to create a 3D CAD model and then manufacturing a new part. It is used to provide replacement parts when technical data is lost or to duplicate competitor's parts.
[2] The main steps are obtaining the part geometry through 3D scanning, identifying the material, generating a 3D CAD model, and manufacturing the new part. Geometry can be captured through contact methods like CMM or non-contact like 3D laser scanning. Material identification uses techniques like mass spectrometry and SEM.
[3] Once the part is scanned, reverse engineering software converts the 3D image data into a CAD model.
Building extraction from remote sensing imageries by data fusion techniques – eSAT Journals
Abstract: This paper presents a data fusion approach for extracting man-made objects from high-resolution IKONOS satellite images. Buildings can take various complex forms and have roofs of various materials, so their automatic extraction from imagery is a very difficult problem; ordinary image processing methods cannot achieve satisfactory performance, especially on high-resolution satellite images. The approach is based on edge maps derived from IKONOS data. Local changes or variations in image intensity (such as edges and corners) are important information for image processing and pattern recognition. K-means clustering is one of the most popular techniques for classifying satellite images. Coupled with Canny edge detection, whose double-threshold technique is less easily fooled by noise, it forms a very good tool for detecting man-made features. These techniques are applied to one-metre IKONOS imagery of the highly urbanized Singapore city to detect building edges within the scene. Keywords: Canny edge detection, RGB color matrices, Gaussian filter, K-means clustering, non-maximum suppression.
Building extraction from remote sensing imageries by data fusion techniques – eSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Frequency Domain Blockiness and Blurriness Meter for Image Quality Assessment – CSCJournals
Image and video compression introduces distortions (artefacts) to the coded image. The most prominent artefacts added are blockiness and blurriness. Many existing quality meters are normally distortion-specific. This paper proposes an objective quality meter for quantifying the combined blockiness and blurriness distortions in frequency domain. The model first applies edge detection and cancellation, then spatial masking to mimic the characteristics of the human visual system. Blockiness is then estimated by transforming image into frequency domain, followed by finding the ratio of harmonics to other AC components. Blurriness is determined by comparing the high frequency coefficients of the reference and coded images due to the fact that blurriness reduces the high frequency coefficients. Then, both blockiness and blurriness distortions are combined for a single quality metric. The meter is tested on blocky and blurred images from the LIVE image database, with a correlation coefficient of 95-96%.
This document describes a system for visualizing deviations on arbitrary surfaces for quality assurance and process control. The system combines 3D measurement devices like terrestrial laser scanners with a projector. This allows detected deviations between a reference model (like CAD data) and measured data to be projected directly onto the measured object. The key steps are determining the relative orientation of the measurement device and projector, computing an inspection map of deviations, and projecting this map. A novel approach called "virtual targets" is presented for establishing the relative orientation without using physical targets when laser scanning. Sample applications for quality inspection and object alignment are demonstrated.
IRJET- Estimation of Propagation Time of Microwave Signal in Different Enviro... – IRJET Journal
1. The document analyzes the theoretical estimation of propagation time of microwave signals in different environments.
2. It discusses how the dielectric constant of walls and other materials affects the speed of propagation and time of propagation of microwave signals. Higher dielectric constants result in lower speeds and longer propagation times.
3. MATLAB simulations were used to model signal propagation through walls and analyze the relationships between dielectric constant, reflection coefficient, speed of propagation, and time of propagation. The results show that changes in dielectric constant directly impact these propagation parameters.
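The relationship summarized above follows from v = c/√εr: a higher dielectric constant lowers the propagation speed and lengthens the propagation time. A small sketch of that calculation; the 0.3 m distance and εr ≈ 6 for a concrete wall are assumed values for illustration, not figures from the paper.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def propagation_time(distance_m, rel_permittivity):
    """Time for a microwave signal to cross a medium of relative
    permittivity eps_r: v = c / sqrt(eps_r), so t = d * sqrt(eps_r) / c."""
    v = C / math.sqrt(rel_permittivity)
    return distance_m / v

t_air = propagation_time(0.3, 1.0)       # ~1 ns over 30 cm of air
t_concrete = propagation_time(0.3, 6.0)  # slower through a higher-eps_r wall
```

Doubling εr does not double the delay; the propagation time scales with √εr, which matches the trend described in the summary.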
A Review on Deformation Measurement from Speckle Patterns using Digital Image... – IRJET Journal
This document reviews digital image correlation (DIC) for deformation measurement using speckle patterns. DIC is a non-contact optical method that uses digital images of a speckle pattern on a surface before and after deformation. By comparing the speckle patterns in the images, DIC can determine displacement and strain fields with high accuracy. The document discusses speckle pattern types, the DIC process, related works that have improved DIC methods, and applications of DIC such as for high-temperature testing. DIC provides full-field measurements and greater accuracy compared to conventional contact methods.
Satellite image processing is a technique for enhancing raw images received from cameras or sensors on satellites, space probes, and aircraft, or pictures taken in everyday applications. It includes the creation of thematic maps showing the spatial distribution of particular information, structured by spectral bands; the bands have constant density, and where they overlap their densities add. It performs image analysis on images at multiple scales and captures comprehensive information about a system for different applications. Example themes are soil, vegetation, water depth, and air. Supervising such critical events requires a huge volume of surveillance data and extremely powerful real-time processing infrastructure.
IRJET - Change Detection in Satellite Images using Convolutional Neural N... – IRJET Journal
The document describes a method for detecting changes in satellite images using convolutional neural networks. It discusses how existing methods have limitations in accuracy and speed. The proposed method uses preprocessing techniques such as median filtering and non-local means filtering, then applies convolutional neural networks to extract compressed image features and classify detected changes. The method forms a difference image without explicitly training on change images, making it unsupervised. Testing achieved 91.63% accuracy in change detection, showing the effectiveness of the proposed convolutional neural network approach.
Performance of Various Order Statistics Filters in Impulse and Mixed Noise Re... – sipij
Remote sensing images (ranging from satellite to seismic) are affected by a number of noise types, such as interference, impulse, and speckle noise. Image denoising is one of the traditional problems in digital image processing and plays a vital role as a pre-processing step in many image and video applications. It remains a challenging research area because noise removal introduces artifacts and blurs the image. This study was done with the intention of designing the best algorithm for impulsive noise reduction in an industrial environment. Typical impulsive noise reduction systems based on order statistics are reviewed and particularized for the described situation. Finally, computational aspects are analyzed in terms of PSNR values and some solutions are proposed.
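PSNR, the quality measure used in the study, compares two images through their mean squared error. A minimal sketch; the 2x2 images below are invented.

```python
import math

def psnr(ref, test, max_val=255):
    """Peak signal-to-noise ratio between two equal-sized images, in dB."""
    flat_ref = [p for row in ref for p in row]
    flat_test = [p for row in test for p in row]
    mse = sum((a - b) ** 2 for a, b in zip(flat_ref, flat_test)) / len(flat_ref)
    if mse == 0:
        return float("inf")               # identical images
    return 10 * math.log10(max_val ** 2 / mse)

ref = [[100, 100], [100, 100]]
degraded_img = [[100, 100], [100, 110]]
quality = psnr(ref, degraded_img)   # higher dB means closer to the reference
```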
Generating three dimensional photo-realistic model of archaeological monument... – eSAT Journals
Abstract
Stimulated by the growing demand for three-dimensional (3D) photo-realistic models of archaeological monuments, people are looking for efficient and more precise methods to generate them. However, no single method yet gives adequate results in all situations, especially with respect to high geometric accuracy, automation, photorealism, flexibility, and efficiency. This paper describes a method for generating a 3D photo-realistic model of the Bukit Batu Pahat shrine by means of multi-sensor data integration. A Global Positioning System (GPS), a total station, and a terrestrial laser scanner (Leica ScanStation C10) were used to record spatial data of the shrine. However, because of the low quality of the coloured point cloud captured by the scanner, a digital camera (Nikon D300s) was used to capture photos of the shrine for surface texturing. A photo-realistic 3D model of the shrine with high geometric accuracy was generated through spatial and image data integration, and feature mapping accuracy and geometric mapping accuracy were assessed to analyse the quality of the model. Based on the results obtained, the integration of ScanStation C10 and image data is capable of providing a 3D photo-realistic model of the Bukit Batu Pahat shrine: feature mapping accuracy showed that the model was 90.56 percent similar to the real object, and the geometric accuracy of the model generated from ScanStation C10 data was a very convincing ±4 millimetres. In summary, the goal of this research was successfully achieved: a 3D photo-realistic model of the Bukit Batu Pahat shrine with good geometry was generated through multi-sensor data integration.
Keywords: archaeological monument, terrestrial laser scanner, digital photo, 3D photo-realistic model
This document is a project report on noise reduction in images using filters. It was submitted by 4 students - Priya M, Dondla Leela Vasundhara, Inderpreet Kaur, and Nisha Mathew - to the Department of Computer Science at Mount Carmel College in Bengaluru, India. The report discusses image processing techniques including different types of noise, noise reduction methods, and the use of filters to reduce noise in digital images.
This paper proposes a method for image denoising using wavelet thresholding while preserving edge information. It first detects edges in the noisy image using Canny edge detection. It then applies a wavelet transform and thresholds the coefficients, preserving values near detected edges. Two thresholding methods are discussed: Visushrink for sparse images and Sureshrink for others. The inverse wavelet transform is applied to obtain the denoised image with preserved edges. The goal is to remove noise while maintaining important image features like edges. The method is described to provide better denoising than alternatives that oversmooth edges.
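The coefficient thresholding step can be illustrated with soft thresholding, one common shrinkage rule; the coefficients and threshold below are invented, and VisuShrink and SureShrink differ mainly in how the threshold t is chosen, not in this shrinkage step.

```python
def soft_threshold(coeffs, t):
    """Wavelet-style soft thresholding: shrink every coefficient toward zero
    by t; small (noise-like) coefficients are zeroed outright."""
    out = []
    for c in coeffs:
        if c > t:
            out.append(c - t)
        elif c < -t:
            out.append(c + t)
        else:
            out.append(0.0)
    return out

# Large coefficients (signal and edges) survive; small ones (noise) vanish.
denoised = soft_threshold([9.0, -0.4, 0.2, -7.5, 0.1], t=0.5)
```

Preserving coefficients near detected edges, as the paper proposes, amounts to exempting those locations from this shrinkage.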
Laboratory X-ray CT Applied to the Characterisation of Tubular Composite Spec... – Fabien Léonard
This study used X-ray computed tomography (XCT) to characterize defects in a tubular composite specimen. Standard measurements found voids made up 3.1% of the specimen volume. Advanced analysis mapped voids spatially, finding they were concentrated in three radial locations. Vertical cracks near the inner surface were attributed to removal from the manufacturing mandrel, while circumferential cracks further out were linked to thermal stresses during manufacturing. Overall, the study demonstrated how XCT can provide both standard and advanced 3D characterization of defects in tubular composites.
Detection of urban tree canopy from very high resolution imagery using an ob... – IJECEIAES
Trees growing within towns, cities, and suburban areas collectively make up the urban forest, which affects urban water, pollution, and heat. Nowadays we are experiencing drastic climatic change because trees are cut down for our growth, and an increasing population leads to the expansion of roads, towers, and airports. Individual tree crown detection is necessary to map the forest and enable feasible planning for urban areas. In this study, trees in a specific area are detected from WorldView-2 imagery with an object-based image analysis (OBIA) approach; the improved spatial and spectral resolution of WorldView-2 imagery brings out urban features with better accuracy. The aim of this research is to illustrate how an object-based method can be applied to the available data to accurately identify vegetation, which can then be sub-classified to obtain the area under tree canopy. The result gives the area under tree canopy with an accuracy of 92.43% and a Kappa coefficient of 0.80.
Localization of free 3D surfaces by the mean of photometric – IAEME Publication
This document discusses a photometric stereovision technique to localize free 3D surfaces using multiple images.
It presents a photometric model linking image intensity to surface relief and reflectance properties. This model results in a system of equations with the relief variations and reflectance as unknowns. Solving the system requires 3 images under different lighting conditions.
The document applies this technique to extract the 3D relief of a corrugated plastic surface from images, demonstrating the feasibility of measuring free surfaces with the photometric stereovision approach. Accuracy of the measured surface shape is evaluated.
This document provides an overview of digital image acquisition, sampling, and quantization. It begins with an introduction to the course on digital image processing, including the course objectives and outcomes. It then covers the processes of image acquisition and representation, including a simple image formation model. It discusses image sampling and quantization, which allow analog images to be converted to digital form. Different methods of image interpolation are also introduced, such as nearest neighbor, bilinear, and bicubic interpolation. Homework is assigned on applying interpolation techniques to an example image.
Similar to Crack Detection of Wall Using MATLAB
Understanding the Impact and Challenges of Corona Crisis on Education Sector... – vivatechijri
In the second week of March 2020, the governments of all states in the country suddenly declared the temporary closure of all colleges and schools as an immediate measure to stop the spread of the novel coronavirus pandemic. Almost a month has now passed with no certainty about when they will reopen. A disruption like this sounds alarm bells in the field of education, where a huge impact can be seen on the teaching and learning process and on the entire education sector; it has also given today's educators time to really think about the sector. In this research article, the author highlights the possible impact of coronavirus on the education sector, the future challenges it faces, and possible suggestions.
LEADERSHIP ONLY CAN LEAD THE ORGANIZATION TOWARDS IMPROVEMENT AND DEVELOPMENT – vivatechijri
This document discusses the importance of leadership in leading an organization towards improvement and development. It states that leadership is responsible for providing a clear vision and strategy to successfully achieve that vision. Effective leadership can impact the success of an organization by controlling its direction and motivating employees. Leadership is different from traditional management in that it guides employees towards organizational goals through open communication and motivation, rather than simply directing work. The paper concludes that only leadership can lead an organization to change according to its evolving environment, while management may simply follow old rules. Leadership is key to adapting to new market needs and trends.
The assignment problem is a classical problem in mathematics with wide application in the real physical world. In this paper we implement a new method for solving assignment problems, with an algorithm and solution steps. Using the new method and computing with two existing methods, we analyse a numerical example and compare the optimal solutions obtained by the new method and the two current methods. The proposed method is a standardized technique that is simple to apply to assignment problems.
Structural and Morphological Studies of Nano Composite Polymer Gel Electroly... – vivatechijri
The document summarizes research on a nano composite polymer gel electrolyte containing SiO2 nanoparticles. Key points:
1. Polyvinylidene fluoride-co-hexafluoropropylene polymer was used as the base polymer mixed with propylene carbonate, magnesium perchlorate, and SiO2 nanoparticles to synthesize the nano composite polymer gel electrolyte.
2. The electrolyte was characterized using XRD, SEM, and FTIR which confirmed the homogeneous dispersion of SiO2 nanoparticles and increased amorphous nature of the electrolyte, enhancing its ion conductivity.
3. XRD showed decreased crystallinity and disappearance of polymer peaks upon addition of SiO2. SEM revealed
Theoretical study of two dimensional Nano sheet for gas sensing application – vivatechijri
This study focuses on various two-dimensional materials for sensing various gases, from a theoretical viewpoint, for new research in gas sensing applications. In this paper we review various two-dimensional sheets such as graphene, boron nitride nanosheet, and MXene, and their application in sensing various gases present in the atmosphere.
METHODS FOR DETECTION OF COMMON ADULTERANTS IN FOOD – vivatechijri
Food is essential for living. Food adulteration deceives consumers and can endanger their health. The purpose of this document is to list methods for detecting common food adulterants found in India. An adulterant is a substance found in other substances such as food, cosmetics, pharmaceuticals, fuels, or other chemicals that compromises the safety or effectiveness of that substance. The addition of adulterants is called adulteration. The most common reason for adulteration is the use by manufacturers of undeclared materials that are cheaper than the correct, declared ones. Adulterants can be harmful, can reduce the effectiveness of the product, or can be harmless.
Novel ideas are the key for anyone entering the entrepreneurial hustle, but developing an idea from its core requires a systematic plan, time management, time investment, and most importantly attention to the client. The time required for development varies with the idea and the strength of the team. Leadership, to build a team and manage it through the peak of development, is the main quality required. Innovation and techniques for clearing the hurdles are another aspect of business development and client retention.
Innovation for supporting well-being has long been a focus of numerous disciplines, including computer science, psychology, and human-computer interaction. However, the meaning of well-being is not always clear, and this has implications for how we design and evaluate technologies that aim to foster it. Here we discuss current definitions of well-being and how it relates to, and sometimes results from, self-transcendence. We then focus on how technologies can support well-being through experiences of self-transcendence, closing with possible future directions.
An Alternative to Hard Drives in the Coming Future: DNA-BASED DATA STORAGE – vivatechijri
Demand for data storage is growing exponentially, but the capacity of existing storage media is not keeping up, so a storage medium is needed with high capacity, high storage density, and the ability to withstand extreme environmental conditions. According to research in 2018, every minute Google conducted 3.88 million searches, people posted 49,000 photos on Instagram, sent 159,362,760 e-mails, tweeted 473,000 times, and watched 4.33 million videos on YouTube. For 2020 it was estimated that 1.7 megabytes of data were created per second per person globally, which translates to about 418 zettabytes in a single year. The magnetic or optical data-storage systems that currently hold this volume of 0s and 1s typically cannot last for more than a century, and running data centres takes vast amounts of energy. In short, we are about to have a substantial data-storage problem that will only become more severe over time. Deoxyribonucleic acid (DNA) can potentially be used for these purposes, because it is not much different from the binary method used in a computer: all data can be encoded at the molecular level and then stored in a medium that will last and will not become outdated like floppy disks. DNA's information density is notable: 215 petabytes, or 215 million gigabytes, of data can be stored in just one gram of DNA. Thanks to improved techniques for reading and writing DNA, a rapid increase is being observed in the amount of data that can be stored in DNA.
The usage of chatbots has increased tremendously over the past few years. A conversational interface is an interface the user can interact with by means of a conversation, which can occur by speech or by text input; when a chatty interface uses text, it is also described as a chatbot or a conversational medium. In this study, the user experience factors of these so-called chatbots were investigated. The prime objective is "to identify the state of the art in chatbot usability and applied human-computer interaction methodologies, and to research how to assess chatbot usability". Two chatbots were formulated, one with and one without personalisation factors. The design of this research is a two-by-two factorial design: the independent variables are the two chatbots (unpersonalised versus personalised) and the specific task or goal the user can carry out with the chatbot in the financial field (a simple versus a complex task). The results show no noteworthy interaction effect between personalisation and task on the user experience of chatbots. A significant difference was found between the two tasks with regard to the user experience of chatbots, but this difference was not due to personalisation.
Smart glasses technology (SGT) in wearable computing aims to bring computing devices into today's world. Smart glasses are wearable computer glasses that add information alongside what the wearer sees, and some can change their optical properties at runtime. SGT is one of the modern computing devices that unite humans and machines with the help of information and communication technology. Smart glasses mainly consist of an optical head-mounted display, or embedded wireless glasses with a transparent heads-up display or augmented reality (AR) overlay. In recent years they have been used in medical and gaming applications, and also in the education sector. This report focuses on smart glasses, a category of wearable computing that is currently very popular in the media and is expected to be a big market in the coming years. It evaluates the differences between smart glasses and other smart devices, introduces many possible applications from different companies for different audiences, and gives an overview of the smart glasses that are available now and those that will become available in the next few years.
Future Applications of Smart Iot Devicesvivatechijri
With the Internet of Things (IoT) gradually emerging as the next stage in the evolution of the Internet, it becomes important to recognise the various potential domains for IoT applications and the research challenges associated with them, ranging from smart cities to healthcare services, smart agriculture, logistics, and retail. IoT is expected to permeate practically all aspects of our day-to-day life. Although the current IoT enabling technologies have greatly improved in recent years, there are still numerous issues that require attention. Since the IoT concept arises from heterogeneous technologies, many research challenges will emerge; accordingly, IoT is opening up new dimensions of research. This paper presents the ongoing development of IoT technologies and examines future applications.
Cross Platform Development Using Fluttervivatechijri
Today the development of cross-platform mobile application has under the state of compromise. The developers are not willing to choose an alternative of either building the similar app many times for many operating systems or to accept a lowest common denominator and optimal solution that will going to trade the native speed, accuracy for portability. The Flutter is an open-source SDK for creating high-performance, high fidelity mobile apps for the development of iOS and Android. Few significant features of flutter are - Just-in-time compilation (JIT), Ahead- of-time compilation (AOT compilation) into a native (system-dependent) machine code so that the resulting binary file can execute natively. The Flutter’s hot reload functionality helps us to understand quickly and easily experiment, build UIs, add features, and fix bugs. Hot reload works by injecting updated source code files into the running Dart Virtual Machine (VM). With the help of Flutter, we believe that we would be having a solution that gives us the best of both worlds: hardware accelerated graphics and UI, powered by native ARM code, targeting both popular mobile operating systems.
The Internet, today, has become an important part of our lives. The World Wide Web that was once a small and inaccessible data storage service is now large and valuable. Current activities partially or completely integrated into the physical world can be made to a higher standard. All activities related to our daily life are mapped and linked to another business in the digital world. The world has seen great strides in the Internet and in 3D stereoscopic displays. The time has come to unite the two to bring a new level of experience to the users. 3D Internet is a concept that is yet to be used and requires browsers to be equipped with in-depth visualization and artificial intelligence. When this material is included, the Internet concept of material may become a reality discussed in this paper. In this paper we have discussed the features, possible setting methods, applications, and advantages and disadvantages of using the Internet. With this paper we aim to provide a clear view of 3D Internet and the potential benefits associated with this obviously cost the amount of investment needed to be used.
Recommender System (RS) has emerged as a significant research interest that aims to assist users to seek out items online by providing suggestions that closely match their interests. Recommender system, an information filtering technology employed in many items is presented in internet sites as per the interest of users, and is implemented in applications like movies, music, venue, books, research articles, tourism and social media normally. Recommender systems research is usually supported comparisons of predictive accuracy: the higher the evaluation scores, the higher the recommender. One amongst the leading approaches was the utilization of advice systems to proactively recommend scholarly papers to individual researchers. In today's world, time has more value and therefore the researchers haven't any much time to spend on trying to find the proper articles in line with their research domain. Recommender Systems are designed to suggest users the things that best fit the user needs and preferences. Recommender systems typically produce an inventory of recommendations in one among two ways -through collaborative or content-based filtering. Additionally, both the general public and also the non-public used descriptive metadata are used. The scope of the advice is therefore limited to variety of documents which are either publicly available or which are granted copyright permits. Recommendation systems (RS) support users and developers of varied computer and software systems to beat information overload, perform information discovery tasks and approximate computation, among others.
The study LiFi (Light Fidelity) demonstrates about how can we use this technology as a medium of communication similar to Wifi . This is the latest technology proposed by Harold Haas in 2011. It explains about the process of transmitting data with the help of illumination of an Led bulb and about its speed intensity to transmit data. Basically in this paper, author will discuss about the technology and also explain that how we can replace from WiFi to LiFi . WiFi generally used for wireless coverage within the buildings while LiFi is capable for high intensity wireless data coverage in limited areas with no obstacles .This research paper represents introduction of the Lifi technology,performance,modulation and challenges. This research paper can be used as a reference and knowledge to develop some of LiFitechnology.
Social media platform and Our right to privacyvivatechijri
The advancement of Information Technology has hastened the ability to disseminate information across the globe. In particular, the recent trends in ‘Social Networking’ have led to a spark in personally sensitive information being published on the World Wide Web. While such socially active websites are creative tools for expressing one’s personality it also entails serious privacy concerns. Thus, Social Networking websites could be termed a double edged sword. It is important for the law to keep abreast of these developments in technology. The purpose of this paper is to demonstrate the limits of extending existing laws to battle privacy intrusions in the Internet especially in the context of social networking. It is suggested that privacy specific legislation is the most appropriate means of protecting online privacy. In doing so it is important to maintain a balance between the competing right of expression, the failure of which may hinder the reaping of benefits offered by Internet technology
THE USABILITY METRICS FOR USER EXPERIENCEvivatechijri
THE USABILITY METRICS FOR USER EXPERIENCE was innovatively created by Google engineers and it is ready for production in record time. The success of Google is to attributed the efficient search algorithm, and also to the underlying commodity hardware. As Google run number of application then Google’s goal became to build a vast storage network out of inexpensive commodity hardware. So Google create its own file system, named as THE USABILITY METRICS FOR USER EXPERIENCE that is GFS. THE USABILITY METRICS FOR USER EXPERIENCE is one of the largest file system in operation. Generally THE USABILITY METRICS FOR USER EXPERIENCE is a scalable distributed file system of large distributed data intensive apps. In the design phase of THE USABILITY METRICS FOR USER EXPERIENCE, in which the given stress includes component failures , files are huge and files are mutated by appending data. The entire file system is organized hierarchically in directories and identified by pathnames. The architecture comprises of multiple chunk servers, multiple clients and a single master. Files are divided into chunks, and that is the key design parameter. THE USABILITY METRICS FOR USER EXPERIENCE also uses leases and mutation order in their design to achieve atomicity and consistency. As of there fault tolerance, THE USABILITY METRICS FOR USER EXPERIENCE is highly available, replicas of chunk servers and master exists.
Google File System was innovatively created by Google engineers and it is ready for production in record time. The success of Google is to attributed the efficient search algorithm, and also to the underlying commodity hardware. As Google run number of application then Google’s goal became to build a vast storage network out of inexpensive commodity hardware. So Google create its own file system, named as Google File System that is GFS. Google File system is one of the largest file system in operation. Generally Google File System is a scalable distributed file system of large distributed data intensive apps. In the design phase of Google file system, in which the given stress includes component failures , files are huge and files are mutated by appending data. The entire file system is organized hierarchically in directories and identified by pathnames. The architecture comprises of multiple chunk servers, multiple clients and a single master. Files are divided into chunks, and that is the key design parameter. Google File System also uses leases and mutation order in their design to achieve atomicity and consistency. As of there fault tolerance, Google file system is highly available, replicas of chunk servers and master exists.
A Study of Tokenization of Real Estate Using Blockchain Technologyvivatechijri
Real estate is by far one of the most trusted investments that people have preferred, being a lucrative investment it provides a steady source of income in the form of lease and rents. Although there are numerous advantages, one of the key downsides of real estate investments is lack of liquidity. Thus, even though global real estate investments amount to about twice the size of investments in stock markets, the number of investors in the real estate market is significantly lower. Block chain technology has real potential in addressing the issues of liquidity and transparency, opening the market to even retail investors. Owing to the functionality and flexibility of creating Security Tokens, which are backed by real-world assets, real estate can be made liquid with the help of Special Purpose Vehicles. Tokens of ERC 777 standard, which represent fractional ownership of the real estate can be purchased by an investor and these tokens can also be listed on secondary exchanges. The robustness of Smart Contracts can enable the efficient transfer of tokens and seamless distribution of earnings amongst the investors. This work describes Ethereum blockchainbased solutions to make the existing Real Estate investment system much more efficient.
Home security is of paramount importance in today's world, where we rely more on technology, home
security is crucial. Using technology to make homes safer and easier to control from anywhere is
important. Home security is important for the occupant’s safety. In this paper, we came up with a low cost,
AI based model home security system. The system has a user-friendly interface, allowing users to start
model training and face detection with simple keyboard commands. Our goal is to introduce an innovative
home security system using facial recognition technology. Unlike traditional systems, this system trains
and saves images of friends and family members. The system scans this folder to recognize familiar faces
and provides real-time monitoring. If an unfamiliar face is detected, it promptly sends an email alert,
ensuring a proactive response to potential security threats.
Generative AI Use cases applications solutions and implementation.pdfmahaffeycheryld
Generative AI solutions encompass a range of capabilities from content creation to complex problem-solving across industries. Implementing generative AI involves identifying specific business needs, developing tailored AI models using techniques like GANs and VAEs, and integrating these models into existing workflows. Data quality and continuous model refinement are crucial for effective implementation. Businesses must also consider ethical implications and ensure transparency in AI decision-making. Generative AI's implementation aims to enhance efficiency, creativity, and innovation by leveraging autonomous generation and sophisticated learning algorithms to meet diverse business challenges.
https://www.leewayhertz.com/generative-ai-use-cases-and-applications/
Use PyCharm for remote debugging of WSL on a Windo cf5c162d672e4e58b4dde5d797...shadow0702a
This document serves as a comprehensive step-by-step guide on how to effectively use PyCharm for remote debugging of the Windows Subsystem for Linux (WSL) on a local Windows machine. It meticulously outlines several critical steps in the process, starting with the crucial task of enabling permissions, followed by the installation and configuration of WSL.
The guide then proceeds to explain how to set up the SSH service within the WSL environment, an integral part of the process. Alongside this, it also provides detailed instructions on how to modify the inbound rules of the Windows firewall to facilitate the process, ensuring that there are no connectivity issues that could potentially hinder the debugging process.
The document further emphasizes on the importance of checking the connection between the Windows and WSL environments, providing instructions on how to ensure that the connection is optimal and ready for remote debugging.
It also offers an in-depth guide on how to configure the WSL interpreter and files within the PyCharm environment. This is essential for ensuring that the debugging process is set up correctly and that the program can be run effectively within the WSL terminal.
Additionally, the document provides guidance on how to set up breakpoints for debugging, a fundamental aspect of the debugging process which allows the developer to stop the execution of their code at certain points and inspect their program at those stages.
Finally, the document concludes by providing a link to a reference blog. This blog offers additional information and guidance on configuring the remote Python interpreter in PyCharm, providing the reader with a well-rounded understanding of the process.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
Mechatronics is a multidisciplinary field that refers to the skill sets needed in the contemporary, advanced automated manufacturing industry. At the intersection of mechanics, electronics, and computing, mechatronics specialists create simpler, smarter systems. Mechatronics is an essential foundation for the expected growth in automation and manufacturing.
Mechatronics deals with robotics, control systems, and electro-mechanical systems.
Build the Next Generation of Apps with the Einstein 1 Platform.
Rejoignez Philippe Ozil pour une session de workshops qui vous guidera à travers les détails de la plateforme Einstein 1, l'importance des données pour la création d'applications d'intelligence artificielle et les différents outils et technologies que Salesforce propose pour vous apporter tous les bénéfices de l'IA.
Null Bangalore | Pentesters Approach to AWS IAMDivyanshu
#Abstract:
- Learn more about the real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester. So let us proceed with a brief discussion of IAM as well as some typical misconfigurations and their potential exploits in order to reinforce the understanding of IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles, using hands on approach.
#Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For hands on lab create account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenario Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
- Exploiting IAM PassRole Misconfiguration
-Allows a user to pass a specific IAM role to an AWS service (ec2), typically used for service access delegation. Then exploit PassRole Misconfiguration granting unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation by creating a role with administrative privileges and allow a user to assume this role.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole vs AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
VIVA-Tech International Journal for Research and Innovation Volume 1, Issue 4 (2021)
ISSN (Online): 2581-7280 Article No. X
PP XX-XX
VIVA Institute of Technology
9th National Conference on Role of Engineers in Nation Building – 2021 (NCRENB-2021)
www.viva-technology.org/New/IJRI
Crack Detection of Wall Using MATLAB
Yash Kore, Sneha Divekar, Sushant Bhostekar, Sanchit Dhivare
(Bachelor of Engineering, Department of Civil Engineering, Mumbai University / VIVA Institute of Technology, Mumbai-401305)
––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
Abstract: Cracks on the concrete surface are one of the earliest symptoms of degradation of a structure, and detecting them is fundamental to its upkeep, since continued exposure leads to severe damage to the structure and its environment. Manual inspection is the established approach to crack inspection: a sketch of the crack is prepared by hand and the condition of each irregularity is noted. Since this manual strategy depends entirely on the specialist's expertise and experience, it lacks objectivity in quantitative analysis. Automated image-based crack detection is therefore proposed as a replacement. The proposed system combines image processing and data acquisition methodologies for crack detection and evaluation of surface degradation. The results obtained show that the effective deployment of image processing is a key step towards the inspection of large infrastructures.
Keywords - Crack Detection, Surface Degradation, Image Processing, Morphological Operations.
––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
I. Introduction
Visual inspection combined with image processing is becoming an important area in civil and construction engineering. The state of a structure has to remain as designed for its complete life span, although this can be altered through normal ageing, through the action of the environment, and by accidental events. Inspecting such structures in the early stages of their degradation is integral to their maintenance, as unattended damage may result in further degradation. The degradation of concrete takes place essentially through earthquakes, frost damage, salt erosion, rain water, and dry shrinkage. Cracks on the concrete surface are one of the earliest warning signs of degradation. Crack detection is necessary for the inspection, diagnosis, and maintenance of concrete structures; however, common strategies cannot achieve very precise detection, because an image of a concrete surface contains a number of kinds of noise due to factors such as concrete blebs, stains, inadequate contrast, and shading. The predominant strategy for crack inspection is to prepare a detailed map of the cracks and to simultaneously measure the condition of the concrete manually. However, these time-consuming manual methods require expertise and involve computational complexity, and conventional strategies that do not use the crack characteristics are unable to distinguish cracks from noisy images, which results in misidentification.
Objective:
To detect cracks in a wall using image processing.
To detect the depth of a crack using MATLAB.
To detect and measure a crack autonomously, without any technical person.
To acquire an image of the wall with a camera in order to detect the crack.
To detect cracks causing leakages in a structure with great ease using this method.
To compare the mathematical calculations of this method against other NDT methods.
1.1 Fundamentals of Image Processing
An image refers to a 2D light-intensity function f(x, y), where (x, y) denotes spatial coordinates and the value of f at any point (x, y) is proportional to the brightness or grey level of the image at that point.
A digital image is an image f(x, y) that has been discretized in both spatial coordinates and brightness. The elements of such a digital array are called picture elements, or pixels.
1.2 Image Processing Definitions
A digital image a[m,n] described in a 2D discrete space is derived from an analog image a(x,y) in a 2D continuous space through a sampling process that is frequently referred to as digitization. We will look at some basic definitions associated with the digital image. The 2D continuous image a(x,y) is divided into N rows and M columns. The intersection of a row and a column is termed a pixel. The value assigned to the integer coordinates [m,n] with {m=0,1,2,…,M–1} and {n=0,1,2,…,N–1} is a[m,n]. In fact, in most cases a(x,y) – which we might consider to be the physical signal that impinges on the face of a 2D sensor – is actually a function of many variables including depth (z), color (λ), and time (t).
1.3 Applications of image processing
Interest in digital image processing methods stems from two principal application areas:
(1) Improvement of pictorial information for human interpretation, and
(2) Processing of scene data for autonomous machine perception.
In the second application area, interest focuses on procedures for extracting, from an image, information in a
form suitable for computer processing.
Examples include automatic character recognition, industrial machine vision for product assembly and
inspection, military reconnaissance, automatic processing of fingerprints, etc.
1.4 Fundamental Steps of Digital Image Processing
There are several fundamental steps, and each of them may have sub-steps.
a) Image Acquisition
b) Image Enhancement
c) Color Image Processing
d) DWT (Discrete Wavelet Transform)
e) Compression
f) Morphological Processing
g) Segmentation
h) Representation and Description
i) Object Recognition
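As a small illustration of how two of these stages, color image processing (c) and segmentation (g), might chain together, the sketch below (Python rather than MATLAB; the BT.601 luma weights and the fixed threshold of 128 are standard illustrative choices, not values from this paper) converts a hypothetical RGB image to grayscale and then binarizes it:

```python
def to_grayscale(rgb):
    """Color processing stage: weighted average of R, G, B channels
    (ITU-R BT.601 luma weights)."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb]

def threshold(gray, t):
    """A crude segmentation stage: binarize at a fixed grey level t."""
    return [[1 if v >= t else 0 for v in row] for row in gray]

# A hypothetical 2x2 RGB image run through the two stages above.
rgb = [[(200, 200, 200), (10, 10, 10)],
       [(90, 90, 90), (255, 0, 0)]]
gray = to_grayscale(rgb)
mask = threshold(gray, 128)
```

In the real pipeline the fixed threshold would be replaced by an adaptive one, such as Otsu's method used in Section 2.2.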
1.5 Proposed System
We have proposed a model for efficient and reliable crack detection, which combines the best features of the Canny edge detection algorithm and the Hyperbolic Tangent (HBT) filtering method using an efficient Max-Mean image fusion rule. The detection architecture consists of the following crucial steps:
(1) Acquisition of the wall image of interest.
(2) Crack detection using the two efficient algorithms.
(3) Wavelet decomposition and fusion.
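The Max-Mean fusion rule of step (3) is not spelled out in this paper, so the following is only one plausible reading (an assumption on our part): average the wavelet approximation bands of the two detection results, and keep the larger-magnitude coefficient in the detail bands, where edge evidence lives:

```python
def fuse_max_mean(approx_a, approx_b, detail_a, detail_b):
    """Illustrative Max-Mean fusion: mean the approximation (low-frequency)
    bands, keep the larger-magnitude coefficient in the detail
    (high-frequency) bands."""
    approx = [[(x + y) / 2 for x, y in zip(ra, rb)]
              for ra, rb in zip(approx_a, approx_b)]
    detail = [[x if abs(x) >= abs(y) else y for x, y in zip(ra, rb)]
              for ra, rb in zip(detail_a, detail_b)]
    return approx, detail

# Hypothetical 2x2 wavelet bands from the Canny and HBT branches.
A1 = [[10, 20], [30, 40]]
A2 = [[20, 20], [10, 0]]
D1 = [[1, -5], [0, 2]]
D2 = [[-3, 4], [1, -2]]
approx, detail = fuse_max_mean(A1, A2, D1, D2)
```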
II. METHODOLOGY
2.1 GENERAL:
(1) Acquisition of concerned wall image:-
Since the quality of the detection result depends dominantly on the quality of the acquisition process, the choice of the acquisition system must be made carefully. Normally, image acquisition by means of 2D sensors requires an image processing technique. In our experimental work, the cracked-wall image sample is acquired by means of a camera with a focal length of 4 mm, an exposure time of 0.002 s, and a maximum aperture of f/3.5. The lighting system must be designed so as to preserve the crack edges, which may otherwise have poor contrast and be negligible compared to the rest of the wall image. The illumination problem can be solved by means of a stereoscopic system.
(2) Crack Detection:-
The edge detection algorithm [7] is based on the real profiles of image edges, and it optimizes only two of Canny's criteria, i.e. good edge detection and localization. It does not include the third criterion, minimal response: a given edge in the image should be marked only once, and, where possible, image noise should not create false edges. So we have chosen the Canny detector to fulfil the third criterion. Again, from the spatial and frequency properties of the hyperbolic tangent (HBT) filter, it is found that the family of FIR HBT filters has a narrow bandwidth, indicating better noise reduction compared to Canny's Gaussian first derivative. Hence, our proposed edge detection structure provides an optimal result through the fusion of the common as well as complementary features of the Canny and HBT based edge detection techniques.
(A) CANNY EDGE DETECTION
Canny considered three criteria desired for any edge detector: good detection, good localization, and minimal response. The approach is in effect a feature synthesis. The image is smoothed using Gaussian convolution, followed by a 2D first-derivative operator. Then a non-maximal suppression method is applied, using two thresholds. Usually, for a good result, the upper tracking threshold can be set quite high and the lower threshold rather low [8]. A large Gaussian kernel reduces the detector's sensitivity to noise. The edges detected by the Canny operator are much straighter, and as a result the detector has increased tolerance to noise. So in this paper we have used the Canny detector.
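The two-threshold stage described above can be illustrated in code. This Python sketch applies Canny-style double thresholding (hysteresis) to a small, made-up gradient-magnitude map: pixels above the upper threshold are kept as strong edges, and weak pixels survive only if they are 8-connected to a strong one. The threshold values and the map are arbitrary assumptions for illustration:

```python
from collections import deque

def hysteresis(mag, low, high):
    """Canny-style double thresholding: mark pixels >= high as strong
    edges, then grow each strong edge through 8-connected neighbours
    whose magnitude is >= low."""
    N, M = len(mag), len(mag[0])
    out = [[0] * M for _ in range(N)]
    q = deque((i, j) for i in range(N) for j in range(M) if mag[i][j] >= high)
    for i, j in q:                      # seed the map with strong pixels
        out[i][j] = 1
    while q:                            # breadth-first growth into weak pixels
        i, j = q.popleft()
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if 0 <= ni < N and 0 <= nj < M and not out[ni][nj] \
                        and mag[ni][nj] >= low:
                    out[ni][nj] = 1
                    q.append((ni, nj))
    return out

mag = [[0, 40, 0],
       [0, 60, 0],
       [0, 45, 90]]
edges = hysteresis(mag, low=30, high=80)
```

Note how the weak pixel of magnitude 40 is kept because a chain of weak pixels connects it to the strong 90; the same pixel in isolation would be discarded as noise.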
(B) HBT FILTERING EDGE DETECTION
An edge-similarity-measure based algorithm introduced by Saravana Kumar [7] offers better end results than the GM and AN methods. The method is robust irrespective of variations in illumination, contrast, and noise level. The filtering method highlights the regions of similarity between image adjacencies and a directional finite impulse response by using the well-known behaviour of the hyperbolic tangent function. This edge detection technique, based essentially on similarity measurement, results in a more reliable identification of image edges.
2.2 PROPOSED SYSTEM:
2.2.1 Steps Required To Find Cracks In The Proposed Method:
1. The original RGB image is read directly from the camera.
2. The RGB image is converted to a grayscale image.
3. A median filter is applied to the grayscale image to smooth the surface.
4. Sobel's edge detectors are applied to intensify edges in the image.
5. Otsu's thresholding method is applied to obtain the binary image.
6. Connected components with an area less than 200 are identified and removed.
7. Connected components with an orientation of 0, 90, and -90 degrees are identified and removed.
8. The morphological operation "majority" is applied to connect the objects and fill the holes in them.
9. Objects with fewer than 50 total pixels are detected and removed.
10. The components of the original image in the HSV color space are calculated.
11. Pixels within the connected components in the HSV color space are kept as candidate crack pixels.
12. A new thresholding value T is defined based on the S values of the candidate crack pixels:
T = min(S) + std(S)   (3-1)
where min(S) is the minimum value of all S, and std(S) is the standard deviation of S.
13. If a candidate crack pixel's S value is less than the threshold value calculated by Eq. (3-1), the pixel is added to the background (non-cracked).
14. If a candidate crack pixel's S value is equal to or greater than the threshold value calculated by Eq. (3-1), the pixel is preserved in the binary image (cracked).
15. The cracked pixels are superimposed on the original image, and the total number of crack pixels is computed.
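Steps 5 and 12 are the two thresholding rules of this pipeline, and both can be sketched compactly (in Python rather than MATLAB; the test histogram is made up, and since the paper does not state whether std(S) is the population or sample standard deviation, the population form is assumed here):

```python
import statistics

def otsu_threshold(gray_values, levels=256):
    """Otsu's method (step 5): pick the grey level that maximizes the
    between-class variance of the intensity histogram."""
    hist = [0] * levels
    for v in gray_values:
        hist[v] += 1
    total = len(gray_values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w_b, sum_b = 0, -1.0, 0, 0.0
    for t in range(levels):
        w_b += hist[t]                  # background weight (values <= t)
        if w_b == 0:
            continue
        w_f = total - w_b               # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        mu_b = sum_b / w_b              # background mean
        mu_f = (sum_all - sum_b) / w_f  # foreground mean
        var_between = w_b * w_f * (mu_b - mu_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def saturation_threshold(S):
    """Step 12, Eq. (3-1): T = min(S) + std(S) over candidate crack pixels."""
    return min(S) + statistics.pstdev(S)

# Two well-separated grey populations: Otsu should split between them.
pixels = [10] * 50 + [200] * 50
t = otsu_threshold(pixels)
```

On this bimodal histogram Otsu places the threshold at the upper edge of the dark population, so binarizing with v > t separates the two classes exactly.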
III. Figures and TABLES
3.1 Wall Crack Detection Image
IV. Conclusion
Different strategies have been emerging in the field of crack detection and assessment of structures. This survey paper overviewed various methods related to crack detection. More research is needed, though, in order to resolve the prevailing issues with regard to visual inspection. This paper discussed strategies such as detection based on edge detection, the morphological top-hat transform, the improved K-means algorithm, and image fusion. Detection through grey-scale imaging was found to result in misidentification of cracks, so a colour-based model with morphological operations appears more accurate in this investigation.
The difficulties arising from the problems discussed above caused some setbacks. However, the crack detection algorithm performed at a fair level. The mesh models can still be examined manually to determine where there are cracks on the specimens. The crack measurements that were made compared well to the real crack measurements, and the scaling performed on the models was accurate. The overall technique took both computational and manual time. The refinement and incorporation of digital crack detection and digital crack measurement can help in fields such as quality control and quality assurance, field inspections, research, and post-disaster reconnaissance by increasing safety.
References
[1] Baohua Shan, Shijie Zheng, Jinping Ou, "A stereovision-based crack width detection approach for concrete surface assessment", KSCE J. Civ. Eng. 20 (2) (2016) 803–812. Springer.
[2] Shivprakash Iyer, Sunil K. Sinha, "A robust approach for automatic detection and segmentation of cracks in underground pipeline images", Image Vis. Comput. 23 (10) (2005) 931–933.
[3] Yuan-Sen Yang, Chung-Ming Yang, Chang-Wei Huang, "Thin crack observation in a reinforced concrete bridge pier test using image processing and analysis", Adv. Eng. Softw. 83 (2015) 99–108.
[4] Sunil K. Sinha, Paul W. Fieguth, "Automated detection of cracks in buried concrete pipe images", Autom. Constr. 15 (1) (2006) 58–72.
[5] Ahmed Mahgoub Ahmed Talab, Zhangcan Huang, Fan Xi, Liu Hai Ming, "Detection crack in image using Otsu method and multiple filtering in image processing techniques", Optik – Int. J. Light Electron Opt. 127 (3) (2016) 1030–1033.
[6] G. E. Hinton, S. Osindero, and Y.-W. Teh, "A fast learning algorithm for deep belief nets", Neural Computation, vol. 18, no. 7, pp. 1527–1554, 2006.
[7] Ahmed, K. Yu, W. Xu, Y. Gong, and E. Xing, "Training hierarchical feed-forward visual recognition models using transfer learning from pseudo-tasks", in Proceedings of the European Conference on Computer Vision, pp. 69–82, Springer, Marseille, France, October 2008.
[8] Laurent, John, Jean François Hébert, and Mario Talbot, "Automated detection of sealed cracks using 2d and 3d road surface data."
[9] U. F. Eze, I. Emmanuel and E. Stephen, "Fuzzy Logic Model for Traffic Congestion", Journal of Mobile Computing and Application, 2014, pp. 15–20.
[10] Y. LeCun and Y. Bengio, "Convolutional networks for images, speech, and time series", The Handbook of Brain Theory and Neural Networks, vol. 3361, no. 10, p. 1995, 1995.
[11] L. Zhang, F. Yang, Y. Daniel Zhang, and Y. J. Zhu, "Road crack detection using deep convolutional neural network", in Proceedings of the IEEE International Conference on Image Processing (ICIP '16), pp. 3708–3712, Phoenix, Ariz, USA, September 2016.
[12] Leo Pauly, Harriet Peel, Shan Luo, David Hogg and Raul Fuentes, Applied Computational Intelligence and Soft
Computing, Volume 2017
[13]Xiaoming Sun, Jianping Huang, Wanyu Liu, MantaoXu,Pavement crack characteristic detection based on
sparserepresentation, EURASIP J. Adv. Signal Process. (2012).
[14]Hoang-Nam Nguyen, Tai-Yan Kam, Pi-Ying Cheng, Anautomatic approach for accurate edge detection of
concrete crack utilizing 2D geometric features of crack, J. Signal Process.Syst. 77 (3) (2014) 221– 240.
[15]J. Zakrzewski, N. Chigarev, V. Tournat, V. Gusev, Combinedphotoacoustic–acoustic technique for crack
imaging, Int. J. Thermophys. 31 (1) (2010) 199–2.