Implemented an advanced 2D Otsu method for image segmentation that addresses the traditional Otsu method's sensitivity to noise and shadow. Wrote and debugged the program in C on the VisionX system.
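The core of the 2D variant is to pair each pixel's gray level with its local neighborhood mean and pick the threshold pair that maximizes the between-class scatter of the joint histogram, which is what gives the robustness to noise. A minimal numpy sketch of that criterion (not the original C/VisionX code; folding the off-diagonal histogram quadrants into the foreground class is a simplifying assumption):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def otsu_2d(img, bins=64):
    # Joint histogram of (gray level, 3x3 local mean)
    g = uniform_filter(img.astype(float), size=3)
    H, _, _ = np.histogram2d(img.ravel(), g.ravel(),
                             bins=bins, range=[[0, 255], [0, 255]])
    P = H / H.sum()
    i = np.arange(bins)
    # 2D prefix sums of probability mass and first moments
    W = P.cumsum(0).cumsum(1)
    Mi = (P * i[:, None]).cumsum(0).cumsum(1)
    Mj = (P * i[None, :]).cumsum(0).cumsum(1)
    mu_i, mu_j = Mi[-1, -1], Mj[-1, -1]            # global mean vector
    eps = 1e-12
    w0, w1 = W, 1.0 - W
    mu0_i, mu0_j = Mi / (w0 + eps), Mj / (w0 + eps)
    mu1_i, mu1_j = (mu_i - Mi) / (w1 + eps), (mu_j - Mj) / (w1 + eps)
    # Trace of the between-class scatter matrix at every candidate (s, t)
    tr = (w0 * ((mu0_i - mu_i) ** 2 + (mu0_j - mu_j) ** 2)
          + w1 * ((mu1_i - mu_i) ** 2 + (mu1_j - mu_j) ** 2))
    s, t = np.unravel_index(np.argmax(tr), tr.shape)
    return s * (256 // bins), t * (256 // bins)    # thresholds in gray units
```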
This document describes a computational endomicroscopy platform that uses compressed sensing to achieve higher resolution images than the physical sensor resolution allows. It uses a digital micromirror device as a spatial light modulator to modulate scenes at a conjugate image plane. A camera then collects multiple coded measurements to reconstruct higher resolution images through compressed sensing algorithms. Experiments demonstrate reconstructing higher resolution images than the individual fiber spacing of fiber optic bundles used in endomicroscopy. Future work aims to further reduce measurements needed and apply the techniques to fiber bundle platforms.
The document summarizes a research project on single image haze removal using a variable fog-weight. It begins with an introduction on how haze degrades image quality and the need for haze removal techniques. It then discusses the motivation, literature review, objective, and main contribution of the proposed method. The method uses the dark channel prior to estimate the transmission map and atmospheric light. It then applies a variable fog-weight to modify the transmission map and reduce halo artifacts. A guided filter is used for transmission refinement before recovering the haze-free scene radiance. The method aims to improve on existing techniques by reducing time complexity and halo artifacts while enhancing image visibility.
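For orientation, the dark-channel step the summary refers to looks roughly like this; a constant fog-weight `w` stands in for the paper's variable one, and the guided-filter refinement stage is omitted (the image is assumed to be float RGB in [0, 1]):

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze(img, w=0.95, patch=15, t0=0.1):
    # Dark channel: per-pixel channel minimum, then a patch minimum
    dark = minimum_filter(img.min(axis=2), size=patch)
    # Atmospheric light A: mean color of the brightest 0.1% dark-channel pixels
    n = max(1, int(dark.size * 0.001))
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[idx].mean(axis=0)
    # Transmission estimate t(x) = 1 - w * dark_channel(I / A)
    t = 1.0 - w * minimum_filter((img / A).min(axis=2), size=patch)
    t = np.clip(t, t0, 1.0)[..., None]            # floor avoids division blow-up
    return np.clip((img - A) / t + A, 0.0, 1.0)   # scene radiance J = (I - A)/t + A
```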
A Review over Different Blur Detection Techniques in Image Processing (paperpublications3)
Abstract: In the last few years there has been considerable development and attention in the area of blur detection techniques. Blur detection techniques are very helpful in real-life applications and are used in image segmentation, image restoration, and image enhancement. They are used to remove blur from a blurred region of an image caused by camera defocus or object motion. In this literature review we present several blur detection techniques, such as blind image deconvolution, low depth of field, edge sharpness analysis, and low directional high-frequency energy. After studying these techniques, we find that considerable future work is still required to develop a fully effective blur detection technique.
International Journal of Engineering Research and Applications (IJERA) aims to cover the latest outstanding developments in the field of all Engineering Technologies & science.
International Journal of Engineering Research and Applications (IJERA) is a team of researchers, not a publication service or private publisher running journals for monetary benefit; we are an association of scientists and academics who focus on supporting authors who want to publish their work. The articles published in our journal can be accessed online, and all articles are archived for real-time access.
Our journal system primarily aims to bring out the research talent and the work done by scientists, academics, engineers, practitioners, scholars, and postgraduate students of engineering and science. The journal covers scientific research in a broad sense rather than a niche area, enabling researchers from various verticals to publish their papers. It also aims to provide a platform for researchers to publish in a shorter time, enabling them to continue their work. All articles published are freely available to scientific researchers in government agencies, educators, and the general public. We are making serious efforts to promote our journal across the globe, and we are confident it will act as a scientific platform for all researchers to publish their work online.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
This document provides an overview of 3D printing techniques and applications in optics and photonics. It discusses several common 3D printing methods like fused deposition modeling, polyjet printing, and direct ink writing. It also outlines key materials used in 3D printing like metals, polymers, and ceramics. The document reviews applications of 3D printing in areas like optics, photonics, metamaterials, and terahertz components. It also highlights some challenges of 3D printing like high costs and limited applications for large structures. In closing, it provides examples of recent work on 3D printed dielectric reflectarrays and spiral phase plates.
Image Denoising of various images Using Wavelet Transform and Thresholding Te... (IRJET Journal)
The document discusses image denoising using wavelet transforms and thresholding techniques. It first provides background on image denoising and wavelet transforms. It then reviews several existing studies that used wavelet transforms like Haar, db4, and sym4 along with thresholding to denoise images corrupted with Gaussian and salt-and-pepper noise. Next, it describes the proposed denoising algorithm which involves adding noise to test images, decomposing the noisy images using different wavelet transforms, applying thresholding, and calculating metrics like PSNR to evaluate performance. The algorithm aims to eliminate noise in the wavelet domain using soft and hard thresholding followed by reconstruction.
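A minimal sketch of that decompose-threshold-reconstruct loop using PyWavelets, with the universal threshold as an assumed rule (the reviewed studies may tune it differently):

```python
import numpy as np
import pywt

def wavelet_denoise(img, wavelet="db4", level=2, mode="soft"):
    # Decompose into approximation + detail bands
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Noise sigma estimated from the finest diagonal band (MAD estimator)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(img.size))     # universal threshold
    # Shrink only the detail bands; keep the approximation untouched
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode=mode) for c in band)
        for band in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)
```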
Microstructural Analysis and Machine Learning (PFHub)
This document discusses using machine learning for microstructural analysis and semantic segmentation of x-ray tomography data. It describes using a convolutional neural network (CNN) trained on phase field simulated microstructures to perform semantic segmentation of x-ray tomography images of dendritic solidification in aluminum alloys. The CNN was able to achieve 99% accuracy when trained on 1000 small cropped images from the tomography data. Phase field modeling offers control over features to match the tomography and help determine the needed amount and size of training images for the CNN.
This document summarizes a project that developed an automatic classification model for remote sensing images using texture features and classifiers. It presents techniques used like Gabor and Gaussian filters for feature extraction. A system was designed and implemented in MATLAB for image processing and RapidMiner for classification using KNN, SVM and neural networks. The models were applied to classify land cover types in images and evaluate changes over time. Evaluation showed SVM achieved the best accuracy of 97.94% while requiring less time than the neural network. Applications of the automatic classification model include soil assessment, land use mapping and monitoring of environmental changes.
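As a rough illustration of the Gabor feature extraction plus classifier pipeline (the filter frequencies and orientations here are arbitrary assumptions, not the project's settings):

```python
import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC

def gabor_features(img, freqs=(0.1, 0.2, 0.4), thetas=(0, np.pi / 4, np.pi / 2)):
    # Mean and std of the Gabor magnitude response per (frequency, angle)
    feats = []
    for f in freqs:
        for th in thetas:
            real, imag = gabor(img, frequency=f, theta=th)
            mag = np.hypot(real, imag)
            feats += [mag.mean(), mag.std()]
    return np.array(feats)

# Hypothetical usage: patches and labels come from the labeled land-cover data
# X = np.stack([gabor_features(p) for p in patches])
# clf = SVC().fit(X, y)
```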
Pixel Recursive Super Resolution
Ryan Dahl, Mohammad Norouzi & Jonathon Shlens
Google Brain
Abstract: We present a pixel recursive super resolution model that synthesizes realistic details into images while enhancing their resolution. A low resolution image may correspond to multiple plausible high resolution images; thus modeling the super resolution process with a pixel-independent conditional model often results in averaging different details, hence blurry edges. By contrast, our model is able to represent a multimodal conditional distribution by properly modeling the statistical dependencies among the high resolution image pixels, conditioned on a low resolution input. We employ a PixelCNN architecture to define a strong prior over natural images and jointly optimize this prior with a deep conditioning convolutional network. Human evaluations indicate that samples from our proposed model look...
This document reviews different techniques for thinning images, including the Zhang and Suen algorithm and neural networks. It provides an overview of existing thinning approaches, such as iterative algorithms, and proposes a new approach using neural networks. The proposed approach aims to perform thinning invariant to rotations while being less sensitive to noise than existing methods. It evaluates techniques based on execution time, thinning rate, and other performance measures. The document concludes that neural networks may provide better results than existing techniques in terms of metrics like PSNR and MSE, while also reducing execution time for skeletonization.
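For reference, the Zhang and Suen method mentioned above is the classic two-subiteration parallel thinning scheme; an unoptimized sketch (assumes a binary 0/1 array with a one-pixel background border):

```python
import numpy as np

def zhang_suen(img):
    I = img.astype(np.uint8).copy()
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            marks = []
            for y in range(1, I.shape[0] - 1):
                for x in range(1, I.shape[1] - 1):
                    if I[y, x] == 0:
                        continue
                    # Neighbors p2..p9, clockwise starting from north
                    p = [I[y-1, x], I[y-1, x+1], I[y, x+1], I[y+1, x+1],
                         I[y+1, x], I[y+1, x-1], I[y, x-1], I[y-1, x-1]]
                    B = sum(p)                                    # nonzero neighbors
                    A = sum(p[k] == 0 and p[(k + 1) % 8] == 1
                            for k in range(8))                    # 0->1 transitions
                    if not (2 <= B <= 6 and A == 1):
                        continue
                    if step == 0 and p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0:
                        marks.append((y, x))
                    if step == 1 and p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0:
                        marks.append((y, x))
            for y, x in marks:
                I[y, x] = 0        # delete marked pixels in parallel
            changed = changed or bool(marks)
    return I
```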
An Analysis and Comparison of Quality Index Using Clustering Techniques for S... (CSCJournals)
This document presents a proposed methodology for microarray image segmentation using clustering techniques. The methodology involves three main steps: preprocessing, gridding, and segmentation. Segmentation is performed using an enhanced fuzzy c-means clustering algorithm (EFCMC) that uses neighborhood pixel information and gray levels. EFCMC can accurately detect absent spots and is tolerant to noise. The methodology is tested on real microarray images and its segmentation quality is assessed using a quality index. Results show EFCMC improves the quality index compared to k-means clustering and fuzzy c-means clustering.
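The standard fuzzy c-means loop that EFCMC builds on alternates membership and center updates; a plain-gray-level sketch (the enhanced variant additionally folds in neighborhood pixel information, omitted here):

```python
import numpy as np

def fuzzy_cmeans(x, c=3, m=2.0, iters=100):
    # Cluster gray levels; u[i, k] = membership of pixel k in cluster i
    x = x.ravel().astype(float)
    centers = np.linspace(x.min(), x.max(), c)
    for _ in range(iters):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9   # c x N distances
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum(axis=0)                                 # normalize memberships
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)                # weighted center update
    return centers, u
```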
This document provides a survey of single scalar point multiplication algorithms for elliptic curves over prime fields. It discusses the background of elliptic curve cryptography and point multiplication. Point multiplication is the dominant operation in ECC and can be computed using on-the-fly techniques or precomputation if the point is fixed. The efficiency of point multiplication depends on the recoding method used to represent the scalar and the composite elliptic curve operations employed. Various recoding methods and point multiplication algorithms are analyzed, including binary, signed binary using NAF representation, and window methods.
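The baseline those recodings improve on is left-to-right binary double-and-add over affine coordinates; a toy sketch over F_p (illustrative only, with no side-channel hardening):

```python
def point_add(P, Q, a, p):
    # Affine addition on y^2 = x^3 + a*x + b over F_p; None = point at infinity
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mult(k, P, a, p):
    # Left-to-right double-and-add: one doubling per bit, one add per 1-bit
    R = None
    for bit in bin(k)[2:]:
        R = point_add(R, R, a, p)
        if bit == "1":
            R = point_add(R, P, a, p)
    return R
```

NAF recoding reduces the expected number of additions from about L/2 to L/3 for an L-bit scalar, which is the kind of trade-off the survey analyzes.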
This document describes a simple method for fabricating plastic microlens arrays with controllable shape and high fill-factor using 3D diffuser lithography and plastic replication. A diffuser is inserted into the conventional lithography process to randomize the UV light paths and form lens-like 3D latent images in thick photoresist. Microlens molds are then replicated by casting liquid PDMS onto the photoresist patterns. The focal length of the fabricated hemispherical microlenses ranges from 13-88 μm depending on UV exposure dose. Curing PDMS at 85°C produces smoother molds with surface roughness of 2.6 nm compared to room temperature curing.
This document summarizes a project to develop a structure-property linkage model for glass fibre reinforced polymer (GFRP) composites. The project involves collecting micro-CT scan data from GFRP samples, segmenting the scans into fibre and matrix phases, simulating microstructures and properties using finite element analysis, extracting 2-point statistics from the microstructures, using principal component analysis for dimensionality reduction, and developing a regression model to link microstructure properties to predict physical properties. The initial model is validated against a limited number of samples and further work is suggested to improve the model with more real samples and experimental testing.
Amalgamation of contour, texture, color, edge, and spatial features for effic... (eSAT Journals)
Abstract: Over the past few years, content-based image retrieval (CBIR) has been a progressive and active research area. Image retrieval is the process of extracting, from an available image database, the set of images resembling a query image. Many CBIR techniques have been proposed for relevant image retrieval; however, most of them are based on a single feature extraction, such as texture-based or color-based retrieval. Here in this paper we put forward a novel technique for image retrieval based on the integration of contour, texture, color, edge, and spatial features. Contourlet decomposition is employed for the extraction of contour features such as energy and standard deviation. Directionality and anisotropy are the properties of the contourlet transformation that make it an efficient technique. After feature extraction of the query and database images, similarity measures such as squared Euclidean and Manhattan distance were used to obtain the top N image matches. Simulation results in Matlab show that the proposed technique offers better image retrieval, and a satisfactory precision-recall rate is also maintained. Keywords: Contourlet Decomposition, Local Binary Pattern, Squared Euclidean Distance, Manhattan Distance
Currently, magnetic resonance imaging (MRI) is utilized extensively to obtain high-contrast medical images, owing to its safety under repeated application. To extract important information from MRI medical images, efficient image segmentation or edge detection is required. Edges are important contour features in a medical image, since they are the boundaries where distinct intensity changes or discontinuities occur. In practice, however, it is difficult to design an edge detector capable of finding all the true edges in an image, given ever-present noise and the subjectivity of edge sensitivity. Many traditional algorithms have been proposed to detect edges, such as Canny, Sobel, Prewitt, Roberts, Zerocross, and Laplacian of Gaussian (LoG). Moreover, much research has shown the potential of Artificial Neural Networks (ANN) for edge detection. Although many edge detection algorithms exist for medical images, their computational cost and subjective image quality could be further improved. Therefore, the objective of this paper is to develop a fast ANN-based edge detection algorithm for MRI medical images. First, features are developed based on horizontal, vertical, and diagonal differences. Then, a Canny edge detector is used to provide the training output. Finally, optimized parameters are obtained, including the number of hidden layers and the output threshold. The resulting edge maps are analyzed for subjective quality and computational cost. Results showed that the proposed algorithm provided better image quality while running around three times faster than traditional algorithms such as the Sobel and Canny edge detectors.
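A sketch of how such a pixel classifier might be assembled, with difference features as described and Canny output as training labels (the layer size and exact feature layout are assumptions):

```python
import numpy as np
from skimage.feature import canny
from sklearn.neural_network import MLPClassifier

def diff_features(img):
    # Per-pixel horizontal, vertical, and diagonal first differences
    f = img.astype(float)
    h = np.abs(np.roll(f, -1, 1) - f)               # horizontal difference
    v = np.abs(np.roll(f, -1, 0) - f)               # vertical difference
    d = np.abs(np.roll(f, (-1, -1), (0, 1)) - f)    # diagonal difference
    return np.stack([h, v, d], axis=-1).reshape(-1, 3)

# Hypothetical training: Canny edge map supplies the target labels
# X = diff_features(img); y = canny(img).ravel()
# clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=300).fit(X, y)
# edges = clf.predict(diff_features(new_img)).reshape(new_img.shape)
```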
Review on Optimal image fusion techniques and Hybrid technique (IRJET Journal)
This document reviews various image fusion techniques and proposes a hybrid technique. It discusses pixel-level, feature-level, and decision-level image fusion. Spatial domain methods like average fusion and temporal domain methods like discrete wavelet transform are described. The limitations of existing techniques like ringing artifacts and shift-variance are covered. A hybrid technique using set partitioning in hierarchical trees (SPIHT) and self-organizing migrating algorithm (SOMA) is proposed to improve fusion quality and efficiency over existing methods. This technique is presented as easier to implement and suitable for real-time applications.
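For contrast with the proposed hybrid, the plain DWT fusion baseline it improves on can be sketched as a max-absolute-coefficient rule (inputs assumed to be same-size grayscale arrays):

```python
import numpy as np
import pywt

def dwt_fuse(a, b, wavelet="db2", level=2):
    # Average the coarse approximations, keep the stronger detail coefficient
    ca = pywt.wavedec2(a, wavelet, level=level)
    cb = pywt.wavedec2(b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2]
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in ((ha, hb), (va, vb), (da, db))))
    return pywt.waverec2(fused, wavelet)
```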
IMPROVED PARALLEL THINNING ALGORITHM TO OBTAIN UNIT-WIDTH SKELETON (ijma)
To extract the creditable features in a fingerprint image, many people use a thinning algorithm, which plays a very important role in preprocessing. In this paper, we propose a robust parallel thinning algorithm that can preserve the connectivity of the binarized fingerprint image while producing the thinnest possible skeleton, only 1 pixel wide, which gets extremely close to the medial axis. The proposed thinning method repeats three sub-iterations. The first sub-iteration takes off only the outermost boundary pixels using the inner points. To extract the one-sided skeletons, the second sub-iteration seeks the skeletons with a 2-pixel width. The third sub-iteration prunes the needless 2-pixel-wide pixels remaining in the obtained skeletons. The proposed thinning algorithm shows robustness against rotation and noise and produces a balanced medial axis. To evaluate its performance, we compare it against previous algorithms and analyze the results.
1. The document proposes a new method for shadow detection and removal in satellite images using image segmentation, shadow feature extraction, and inner-outer outline profile line (IOOPL) matching.
2. Key steps include segmenting the image, detecting suspected shadows, eliminating false shadows through analysis of color and spatial properties, extracting shadow boundaries, and obtaining homogeneous sections through IOOPL similarity matching to determine radiation parameters for shadow removal.
3. Experimental results showed the method could successfully detect shadows and remove them to improve image quality for applications like object classification and change detection.
Unsupervised Building Extraction from High Resolution Satellite Images Irresp... (CSCJournals)
The document discusses an unsupervised method for extracting buildings from high resolution satellite images regardless of rooftop structures. The method first calculates NDVI and chromaticity ratios to segment vegetation and shadows. Rooftops and roads are then detected and eliminated. Principal component analysis and area analysis are performed to accurately extract buildings. The algorithm aims to eliminate inhomogeneities caused by varying building hierarchies by focusing on eliminating non-building regions rather than detecting building regions of interest. The methodology is tested on Quickbird satellite imagery and results indicate it can extract buildings in complex environments irrespective of rooftop shape.
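The NDVI step is a simple band ratio; a sketch (the 0.3 vegetation cutoff is an illustrative assumption, not the paper's value):

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    # NDVI = (NIR - R) / (NIR + R), in [-1, 1]; high values indicate vegetation
    v = (nir.astype(float) - red) / (nir.astype(float) + red + eps)
    return v, v > 0.3   # value map and a simple vegetation mask
```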
IRJET- Image Segmentation Techniques: A Review (IRJET Journal)
1. The document discusses and reviews various techniques for image segmentation, including edge detection, threshold-based, region-based, and neural network-based methods.
2. Edge detection separates images by detecting changes in pixel intensity or color to find edges and boundaries. Threshold-based methods segment images based on pixel intensity levels compared to a threshold. Region-based methods partition images into homogeneous regions of connected pixels. Neural network-based methods can perform automated segmentation through supervised or unsupervised machine learning.
3. Prior research has evaluated these techniques, finding that edge detection works best with clear edges but struggles with noise or smooth boundaries, and thresholding methods can miss details but are simple to implement. Region-based and neural network...
Deep learning for image super resolution (Prudhvi Raj)
Using deep convolutional networks, the machine can learn an end-to-end mapping between low- and high-resolution images. Unlike traditional methods, this method jointly optimizes all the layers of the mapping. A lightweight CNN structure is used, which is simple to implement and provides a favorable trade-off compared to existing methods.
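In the spirit of that description, a three-layer mapping network might look like the following PyTorch sketch (the 9-1-5 kernel and 64-32 filter layout follow the well-known SRCNN design, which this summary appears to describe):

```python
import torch.nn as nn

class SRCNNLike(nn.Module):
    """Lightweight end-to-end LR->HR mapping; layer sizes are illustrative."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 9, padding=4), nn.ReLU(),   # patch extraction
            nn.Conv2d(64, 32, 1),           nn.ReLU(),   # non-linear mapping
            nn.Conv2d(32, 1, 5, padding=2),              # reconstruction
        )

    def forward(self, x):
        # x: a bicubically upscaled low-resolution image, 1 channel
        return self.net(x)
```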
The document discusses clustering images based on their properties. Images are converted into intensity, contrast, Weibull and fractal images. Eight properties are calculated for each image type, including brightness, standard deviation, entropy, skewness, kurtosis, separability, spatial frequency and visibility. The properties are normalized and clustered using k-means clustering. Tables show normalized property values for different image types. The clustering groups similar images based on their discriminative properties.
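A sketch of the normalize-then-cluster step (the file name and cluster count are hypothetical):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import minmax_scale

# rows = images, columns = the eight properties (brightness, std, entropy, ...)
props = np.loadtxt("properties.csv", delimiter=",")   # hypothetical input file
X = minmax_scale(props)                               # normalize each property to [0, 1]
labels = KMeans(n_clusters=4, n_init=10).fit_predict(X)
```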
Fpga implementation of image segmentation by using edge detection based on so... (eSAT Publishing House)
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Image noise reduction by deep learning methods (IJECEIAES)
Image noise reduction is an important task in the fields of computer vision and image processing. Traditional noise filtering methods may be limited in their ability to preserve image details. The purpose of this work is to study and apply deep learning methods to reduce noise in images. The main tasks of noise reduction are the removal of Gaussian noise, salt-and-pepper noise, line and stripe noise, compression noise, and noise caused by equipment defects. In this paper, noise such as raindrops, dust, and traces of snow on images was considered, along with complex patterns and high noise density. A deep learning algorithm, the decomposition method with and without preprocessing, is considered and its effectiveness for noise reduction evaluated. It is expected that the results of the study will confirm the effectiveness of deep learning methods in reducing noise in images. This may lead to the development of more accurate and versatile image processing methods capable of preserving details and improving the visual quality of images in various fields, including medicine, photography, and video.
This document summarizes a research paper on techniques for binarizing degraded document images. It discusses how degraded documents often have mixed foreground and background pixels that need to be separated. The proposed method uses contrast adjustment, grey scale edge detection, thresholding and post-processing to binarize degraded images. It first inverts the image contrast, then uses grey scale detection to find text stroke edges. Pixels are classified and thresholding is used to create a binary image. Post-processing removes background pixels to output a clean image with only text strokes. The method is tested on degraded novel and book images and produces separated, readable text from the backgrounds.
Binarization of Degraded Text documents and Palm Leaf Manuscripts (IRJET Journal)
This document proposes a technique for binarizing degraded text documents and palm leaf manuscripts. It involves taking the average pixel value of the image as a threshold to distinguish foreground from background. The algorithm first computes the average value of the original image and uses it to set pixels above the threshold to black, removing background. It then computes the average of the remaining image, excluding black pixels, and uses that value as a new threshold to set remaining pixels above it to white, extracting the foreground. The technique is tested on old documents and manuscripts, showing improvement over existing methods based on metrics like peak signal-to-noise ratio. While effective for documents, it needs improvement for palm leaf manuscripts with non-uniform degradation.
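A sketch of the two-pass averaging idea; the polarity conventions in the summary are ambiguous, so white paper and black text are assumed here:

```python
import numpy as np

def two_pass_binarize(img):
    f = img.astype(float)
    t1 = f.mean()                          # pass 1: global average threshold
    rest = f[f <= t1]                      # pixels not yet classified as paper
    t2 = rest.mean() if rest.size else t1  # pass 2: average of the remainder
    return np.where(f > t2, 255, 0).astype(np.uint8)   # text -> black
```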
IRJET- Implementation of Histogram based Tsallis Entropic Thresholding Segmen... (IRJET Journal)
This document discusses image segmentation techniques for plasma detection in visible images of tokamaks. It compares Gray Level Local Variance (GLLV), Gray Level Local Entropy (GLLE), and Gray Level Spatial Correlation (GLSC) based 2D histogram segmentation methods using Tsallis entropy thresholding. These methods construct 2D histograms using pixel gray levels combined with local variance, entropy, or spatial correlation features. The document implements these methods on visible tokamak images and evaluates the results using an unsupervised uniformity value metric. It finds that the GLSC method provides better segmentation in terms of uniformity value compared to the other techniques.
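For intuition, the Tsallis criterion in one dimension picks the threshold maximizing the pseudo-additive entropy sum; the paper's methods extend this to 2D histograms (gray level paired with a local variance, entropy, or spatial-correlation feature):

```python
import numpy as np

def tsallis_threshold(img, q=0.8):
    # 1-D illustration only; requires q != 1
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_s = 0, -np.inf
    for t in range(1, 255):
        pa, pb = p[:t].sum(), p[t:].sum()
        if pa == 0 or pb == 0:
            continue
        sa = (1 - ((p[:t] / pa) ** q).sum()) / (q - 1)   # class-A Tsallis entropy
        sb = (1 - ((p[t:] / pb) ** q).sum()) / (q - 1)   # class-B Tsallis entropy
        s = sa + sb + (1 - q) * sa * sb                  # pseudo-additivity rule
        if s > best_s:
            best_t, best_s = t, s
    return best_t
```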
Noisy image enhancements using deep learning techniques (IJECEIAES)
This article explores the application of deep learning techniques to improve the accuracy of feature enhancements in noisy images. A multitasking convolutional neural network (CNN) learning model architecture has been proposed that is trained on a large set of annotated images. Various techniques have been used to process noisy images, including the use of data augmentation, the application of filters, and the use of image reconstruction techniques. As a result of the experiments, it was shown that the proposed model using deep learning methods significantly improves the accuracy of object recognition in noisy images. Compared to single-tasking models, the multi-tasking model showed the superiority of this approach in performing multiple tasks simultaneously and saving training time. This study confirms the effectiveness of using multitasking models using deep learning for object recognition in noisy images. The results obtained can be applied in various fields, including computer vision, robotics, automatic driving, and others, where accurate object recognition in noisy images is a critical component.
IMAGE SEGMENTATION BY USING THRESHOLDING TECHNIQUES FOR MEDICAL IMAGES (cseij)
Image binarization is the process of separating pixel values into two groups: black as background and white as foreground. Thresholding can be categorized into global thresholding and local thresholding. This paper describes a locally adaptive thresholding technique that removes the background by using the local mean and standard deviation. The most common and simplest approach to segmenting an image is thresholding. In this work we present an efficient implementation of thresholding and give a detailed comparison of the Niblack and Sauvola local thresholding algorithms, implemented on medical images. The quality of the segmented image is measured by statistical parameters: the Jaccard Similarity Coefficient and Peak Signal to Noise Ratio (PSNR).
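Both rules (Niblack: T = m + k·s; Sauvola: T = m·(1 + k·(s/R − 1)), where m and s are the local mean and standard deviation) ship with scikit-image, so a comparison harness is short (the file name, window size, and k are assumptions):

```python
from skimage import io
from skimage.filters import threshold_niblack, threshold_sauvola

img = io.imread("scan.png", as_gray=True)       # hypothetical input image
t_nib = threshold_niblack(img, window_size=25, k=0.2)
t_sau = threshold_sauvola(img, window_size=25, k=0.2)
binary_nib = img > t_nib                        # per-pixel local thresholds
binary_sau = img > t_sau
```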
An effective and robust technique for the binarization of degraded document i... (eSAT Publishing House)
A NOVEL APPROACH FOR SEGMENTATION OF SECTOR SCAN SONAR IMAGES USING ADAPTIVE ... (ijistjournal)
SAR and SAS images are perturbed by a multiplicative noise called speckle, due to the coherent nature of the scattering phenomenon. When the background of an image is uneven, a fixed threshold is not suitable for segmentation, and an adaptive thresholding method is required. In this paper a new adaptive thresholding method is proposed to reduce speckle noise while preserving the structural features and textural information of Sector Scan SONAR (Sound Navigation and Ranging) images. Given the massive proliferation of SONAR images, the proposed method is very appealing for underwater applications; in fact it is a pre-treatment required in any SONAR image analysis system. The results obtained from the proposed method were compared quantitatively and qualitatively with those of other speckle reduction techniques, demonstrating higher performance for speckle reduction in SONAR images.
A Review On Single Image Depth Prediction with Wavelet Decomposition (IRJET Journal)
This document summarizes research on single image depth prediction using wavelet decomposition. It begins with an abstract describing how wavelet-based methods can accurately predict depth from RGB images with reduced computational complexity compared to other methods. The document then reviews related work applying wavelets to tasks like image classification, disparity estimation, and demoireing. It describes how wavelet decomposition can cut the number of operations in the decoder in half with less than 2% accuracy drop for monocular depth estimation. Finally, it concludes that wavelet methods allow depth prediction from single images by combining wavelet representation with deep learning to progressively upsample and refine depth maps.
Image Segmentation Techniques for Remote Sensing Satellite Images (nagwaAboElenein)
The use of satellite imagery has become an integral aspect of planning in multiple domains, including disaster management and analysis of natural calamity images, snow cover mapping, smart city development, etc. Extraction of urban information from satellite images, such as linear features (roads), structured features (buildings, dams, man-made structures), and boundaries of water bodies, has now become an important area in remote sensing studies.

Not every part of a digital image is useful for a particular purpose, hence the image needs to be segmented. Various methods for image segmentation have been proposed, but the choice of a particular method depends upon the requirement.
Removal of Gaussian noise on the image edges using the Prewitt operator and t... (IOSR Journals)
Abstract: An edge detection algorithm is applied to remove Gaussian noise introduced into an image during capture or transmission, using a method that combines the Prewitt operator with a threshold function technique. This method performs better than one that combines the Prewitt operator with mean filtering alone. In this paper, mean filtering is first used to remove the initial Gaussian noise, the Prewitt operator is then used to detect edges in the image, and finally a threshold function technique is applied together with the Prewitt operator.
Keywords: Gaussian noise, Prewitt operator, edge detection, threshold function
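As a rough illustration of the pipeline this abstract describes (mean filtering, then Prewitt edge detection, then thresholding), here is a minimal sketch; the filter size and edge threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter, prewitt

def denoise_and_detect_edges(img, mean_size=3, edge_thresh=50.0):
    # Step 1: mean filtering to suppress Gaussian noise.
    smoothed = uniform_filter(img.astype(float), size=mean_size)
    # Step 2: Prewitt operator along both axes, combined into a magnitude.
    gx = prewitt(smoothed, axis=0)
    gy = prewitt(smoothed, axis=1)
    magnitude = np.hypot(gx, gy)
    # Step 3: threshold function keeps only the strong edge responses.
    return magnitude > edge_thresh
```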
An Efficient Image Denoising Approach for the Recovery of Impulse Noise (journalBEEI)
Image noise is one of the key issues in image processing applications today. Noise affects the quality of the image and thus degrades the actual information it carries. Visual quality is a prerequisite for many imagery applications such as remote sensing. In recent years, the significance of noise assessment and the recovery of noisy images have been increasing. Impulse noise is characterized by the replacement of a portion of an image's pixel values with random values; such noise can be introduced by transmission errors. Accordingly, this paper focuses on the effect of impulse noise on the visual quality of images during transmission. A hybrid statistical noise suppression technique is developed for improving the quality of impulse-noisy color images, and its performance is demonstrated using advanced performance metrics.
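The paper's hybrid statistical technique is not reproduced here, but a common baseline for impulse (salt-and-pepper) noise, detecting extreme-valued pixels and replacing only those with a local median, can be sketched as follows for a grayscale image:

```python
from scipy.ndimage import median_filter

def suppress_impulse_noise(img, low=0, high=255):
    """Replace only impulse-like pixels with the local median.

    Leaving non-impulse pixels untouched preserves more detail than
    median-filtering the whole image.
    """
    med = median_filter(img, size=3)
    impulses = (img <= low) | (img >= high)
    out = img.copy()
    out[impulses] = med[impulses]
    return out
```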
Noise removal in image processing by fuzzy logic (Rucku)
This document summarizes a research paper that proposes a two-stage technique for removing impulse noise from digital images using neural networks and fuzzy logic. In the first stage, a neural network is used to detect and remove noise while preserving image details. In the second stage, fuzzy decision rules inspired by the human visual system are used to further process pixels and enhance image quality, especially in sensitive regions. The technique aims to remove noise cleanly without blurring edges or destroying important information. It is presented as an improvement over conventional noise removal methods.
Object Recognition Based on Undecimated Wavelet Transform (IJCOAiir)
Object Recognition (OR) is the computer vision task of finding a specified object in an image or video sequence. An efficient method for recognizing objects in an image based on the Undecimated Wavelet Transform (UWT) is proposed. In this system, the undecimated coefficients are used as features to recognize the objects. The given image is decomposed using the UWT, and all coefficients are taken as features for the classification process. This is applied to all training images, and the extracted features of an unknown object are used as input to a K-Nearest Neighbor (K-NN) classifier to recognize the object. The system is evaluated on the Columbia Object Image Library (COIL-100) database.
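A minimal sketch of the recognition pipeline described above, assuming pywt for the undecimated (stationary) wavelet transform and scikit-learn for the K-NN classifier; the wavelet, decomposition level, and neighbor count are illustrative, and swt2 requires image dimensions divisible by 2**level.

```python
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier

def uwt_features(img, wavelet="haar", level=1):
    """Flatten all undecimated wavelet coefficients into one feature vector."""
    coeffs = pywt.swt2(img.astype(float), wavelet, level=level)
    return np.concatenate([np.ravel(c) for cA, details in coeffs
                           for c in (cA, *details)])

# Hypothetical usage with training images/labels and one test image:
# knn = KNeighborsClassifier(n_neighbors=3)
# knn.fit([uwt_features(im) for im in train_images], train_labels)
# prediction = knn.predict([uwt_features(test_image)])
```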
This document discusses image segmentation techniques for hyperspectral images. It first defines image segmentation as extracting particular regions from images, and lists common segmentation methods like edge detection, contour extraction, and clustering. The objective is then stated as improving noise-free estimation for noisy hyperspectral images by fully utilizing correlated spectral information and providing rich spectral-spatial information. Finally, the document surveys several papers on hyperspectral image denoising and restoration, discussing their purposes, methodologies, and limitations.
IRJET - Contrast and Color Improvement based Haze Removal of Underwater Image... (IRJET Journal)
This document proposes a method for removing haze from underwater images using fusion techniques. It involves three main steps:
1. Removal of haze from the input underwater image using a water shield filter to extract a dehazed image.
2. Denoising the dehazed image using a sequential algorithm to compensate for uneven lighting and enhance image features.
3. Fusing the dehazed and denoised images to produce a clear output image with both haze and noise removed.
The method aims to improve underwater image visibility and contrast correction in a simple and effective manner. Evaluation on sample images demonstrates reduced haze and artifacts after processing.
Comparative study on image fusion methods in spatial domain (IAEME Publication)
This document provides a comparative study of various image fusion methods in the spatial domain. It begins by introducing image fusion and its applications. Section 2 then describes several common fusion algorithms in the spatial domain, including average, select maximum/minimum, Brovey transform, intensity hue saturation (IHS), and principal component analysis (PCA). Section 3 defines image fusion quality measures such as entropy, mean squared error, and normalized cross correlation. Section 4 provides a comparative analysis of the spatial domain fusion techniques based on parameters like simplicity, type of resources, and disadvantages. It finds that spatial domain methods provide high spatial resolution but suffer from issues such as image blurring and less informative outputs. The document concludes that while the best algorithm depends on the problem at hand, spatial domain methods trade these weaknesses for their simplicity.
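To make the simplest of these spatial-domain methods concrete, here is a minimal sketch of the average and select-maximum fusion rules together with one of the quality measures (mean squared error); the function names are illustrative, not from the paper.

```python
import numpy as np

def fuse_average(a, b):
    """Pixel-wise average: smooth, but tends to lower contrast."""
    return (a.astype(float) + b.astype(float)) / 2.0

def fuse_select_maximum(a, b):
    """Pixel-wise maximum: keeps the brighter, often more salient, detail."""
    return np.maximum(a, b)

def mse(reference, fused):
    """Mean squared error between a reference image and the fused result."""
    diff = reference.astype(float) - fused.astype(float)
    return float(np.mean(diff ** 2))
```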
Advanced 2D Otsu Method
Xin Li (xl553)*, Jingyao Ren (jr986)**
* Department of Biomedical Engineering, Cornell University
** Department of Electrical and Computer Engineering, Cornell University
ECE 5470 Final Project, Group 13, Fall 2015

Introduction
In this project, we implemented the traditional Otsu method [1] for image thresholding. Next, motivated by the weaknesses of the traditional Otsu method, such as its sensitivity to noise and shadow, we implemented an Advanced 2D Otsu method [2,3]. Different tests were conducted to evaluate the performance of the Advanced 2D Otsu method. Compared to the traditional Otsu method, the 2D Otsu method is a better automatic thresholding method for noisy and shadowed images, and it produces more accurate thresholds.
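For reference, the traditional 1D Otsu threshold can be computed directly from the gray-level histogram. The sketch below is a minimal NumPy formulation of the standard between-class-variance criterion, not the project's original implementation; the bin count assumes an 8-bit grayscale image.

```python
import numpy as np

def otsu_1d(img, bins=256):
    """Return the gray level that maximizes the between-class variance."""
    hist, _ = np.histogram(img, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    w0 = np.cumsum(p)                    # class-0 probability up to each level
    mu = np.cumsum(p * np.arange(bins))  # cumulative mean
    mu_T = mu[-1]                        # global mean
    eps = 1e-12                          # avoid division by zero at the ends
    sigma_b = (mu_T * w0 - mu) ** 2 / (w0 * (1.0 - w0) + eps)
    return int(np.argmax(sigma_b))
```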
Description of 2D Otsu Method
The key point of the 2D Otsu method is to use both the distribution of pixel grayscales and the spatial information to decide the threshold [2,3]. The 1D Otsu method considers only the gray levels of the image and ignores the spatial information. To overcome this problem, we can use a 2D variance-based technique that uses the local neighborhood as well as the pixel information, maximizing the between-class variance defined on the 2D histogram. This improves the noise robustness of 1D Otsu.
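As a concrete sketch of this idea, the code below maps each pixel to a (gray value, local mean) pair, builds the 2D histogram, and selects the threshold pair that maximizes the trace of the between-class scatter matrix, following the formulation of [2,3]. The bin count and the 3x3 neighborhood are illustrative choices, and an 8-bit grayscale image is assumed.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def otsu_2d(img, bins=64):
    """2D Otsu sketch: threshold on (gray value, local mean) pairs."""
    scale = 256.0 / bins
    g = np.clip((img / scale).astype(int), 0, bins - 1)
    local = uniform_filter(img.astype(float), size=3)  # 3x3 neighborhood mean
    m = np.clip((local / scale).astype(int), 0, bins - 1)

    hist = np.zeros((bins, bins))
    np.add.at(hist, (g.ravel(), m.ravel()), 1)         # 2D histogram
    p = hist / hist.sum()

    idx = np.arange(bins)
    w0 = p.cumsum(0).cumsum(1)                         # mass of the low quadrant
    mu_i = (p * idx[:, None]).cumsum(0).cumsum(1)      # partial mean, gray axis
    mu_j = (p * idx[None, :]).cumsum(0).cumsum(1)      # partial mean, local-mean axis
    mu_Ti = (p * idx[:, None]).sum()                   # global mean, gray axis
    mu_Tj = (p * idx[None, :]).sum()                   # global mean, local-mean axis

    eps = 1e-12
    trace = ((mu_Ti * w0 - mu_i) ** 2
             + (mu_Tj * w0 - mu_j) ** 2) / (w0 * (1.0 - w0) + eps)
    s, t = np.unravel_index(np.argmax(trace), trace.shape)
    return s * scale, t * scale  # thresholds for gray value and local mean
```

A pixel is then labeled foreground when both its gray value and its 3x3 local mean exceed the returned thresholds.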
Conclusion
Compared to the traditional Otsu method, 2D Otsu shows better performance at eliminating shadows and background noise. However, compared to traditional Otsu with a Mean Filter, the 2D Otsu method shows similar performance and only limited improvement in denoising. In conclusion, 2D Otsu is a good choice for removing shadows but not a perfect choice for denoising. Future work is needed to improve the denoising ability of the 2D Otsu method.
References
[1] Nobuyuki Otsu (1979). "A threshold selection method from gray-level histograms." IEEE Trans. Sys., Man., Cyber. 9(1): 62-66.
[2] Liu Jianzhuang, Li Wenqing, and Tian Yupeng (1991). "Automatic thresholding of gray-level pictures using two-dimension Otsu method." 1991 International Conference on Circuits and Systems, China. IEEE.
[3] Nie, F., Wang, Y., Pan, M., Peng, G., & Zhang, P. (2013). "Two-dimensional extension of variance-based thresholding for image segmentation." Multidimensional Systems and Signal Processing, 24(3), 485-501.
[4] Berkeley Segmentation Dataset and Benchmarks 500 (BSDS500): http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/resources.html
[Figures: the 2D histogram, its 1D projection, and the MSE functions of 2D Otsu (maximizing Z* achieves the best threshold)]
Experiment Design
Design:
• Dataset: images with noise (e.g., Gaussian noise, Salt-and-Pepper noise) and shadow from the BSDS500 database [4].
• Ground truth: marked manually.
• Evaluation: Jaccard and Dice indices (see the sketch after this list); direct comparison of the result images.
Additional test:
• Compare the thresholding performance of 2D Otsu on noisy images with that of the original Otsu method on mean-filtered images.
Hypothesis:
• The 2D Otsu method has better thresholding performance than the original Otsu method, especially on images with noise and shadow.
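Both evaluation indices are standard overlap measures between a binary segmentation and its ground truth; a minimal sketch, assuming NumPy boolean masks:

```python
import numpy as np

def jaccard_index(pred, truth):
    """Intersection over union of two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    return np.logical_and(pred, truth).sum() / union if union else 1.0

def dice_index(pred, truth):
    """Twice the overlap divided by the total size of both masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    total = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / total if total else 1.0
```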
Experiment Results of Shadow Image
[Figures: Fish, Mp, and Beaver images segmented by the original Otsu method and by 2D Otsu]
Image Name | Jaccard (Otsu) | Jaccard (2D Otsu) | Dice (Otsu) | Dice (2D Otsu) | Normalized Jaccard (Otsu; 2D Otsu = 1) | Improvement (Jaccard)
Fish   | 0.01542 | 0.07785 | 0.03038 | 0.14446 | 0.5075 | 49.24%
Mp     | 0.05208 | 0.17051 | 0.09901 | 0.29135 | 0.5260 | 47.40%
Beaver | 0.19720 | 0.33464 | 0.32943 | 0.50146 | 0.5986 | 40.14%
Cup    | 0.06186 | 0.10345 | 0.11651 | 0.18750 | 0.5309 | 46.91%
[Figure: Cup image segmented by Otsu and by 2D Otsu]
Experiment Results of Noise Image
[Figures: a clean Rectangle image and Rectangle + Gaussian noise, thresholded by Otsu only, by 2D Otsu, and by Mean Filter + Otsu]
The 2D Otsu method can act as a low-pass filter that removes noise. To evaluate this ability, we also compared the thresholding quality of 2D Otsu against Mean Filter + Otsu.
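The baseline in that comparison, mean filtering followed by a global Otsu threshold, can be sketched as below; skimage's threshold_otsu is assumed as a stand-in for the project's own Otsu implementation, and the window size is illustrative.

```python
from scipy.ndimage import uniform_filter
from skimage.filters import threshold_otsu

def mean_filter_otsu(img, size=3):
    """Smooth with a mean filter, then apply a global Otsu threshold."""
    smoothed = uniform_filter(img.astype(float), size=size)
    return smoothed > threshold_otsu(smoothed)
```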
[Figures: Eagle + Salt-and-Pepper noise, thresholded by Otsu only, by 2D Otsu, and by Mean Filter + Otsu]