The document describes research done at IIT Madras on texture analysis and image segmentation. It discusses techniques such as the discrete wavelet transform, Gabor filtering, fuzzy c-means clustering, and constraint satisfaction neural networks (CSNNs) for region-based texture segmentation and edge detection. The proposed methodology integrates region-based and edge-based information using a CSNN: the network is initialized with fuzzy c-means clustering, then the neuron probabilities and edge maps are iteratively updated to refine the segmentation. Experimental results demonstrate improved segmentation from combining region and edge information.
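The fuzzy c-means initialization mentioned above can be sketched in a few lines of NumPy. This is an illustrative re-implementation, not the paper's code; the toy 1-D "texture features" are made up for the example:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means: soft memberships U (n x c) and cluster centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)                     # memberships sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]    # fuzzily weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))                  # closer center -> higher membership
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# Toy 1-D "texture features" for six pixels, forming two clear groups
X = np.array([[0.10], [0.20], [0.15], [0.90], [1.00], [0.95]])
U, centers = fuzzy_c_means(X)
labels = U.argmax(axis=1)   # defuzzified labels, the kind used to seed a CSNN
```

The soft memberships in `U` (rather than the hard `labels`) are what make this a natural initializer for probability-based neurons.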
This document summarizes an academic paper that proposes a method for incrementally training object detection models to classify unseen object classes in real-time. It begins by providing background on object detection techniques like YOLO and SSD that can perform detection in a single pass. The paper aims to improve these single-shot detectors through incremental learning to classify new object classes without retraining the entire model from scratch. It conducted experiments on YOLO and VGG16 to investigate how well they can classify objects from unseen classes and whether their performance is affected by factors like background, bounding box size, or network architecture. The goal is to develop a more robust object detection method that can easily adapt to new classes of objects in real-time applications.
This document provides an overview of convolutional neural networks (CNNs) for image and video recognition. It discusses that CNNs have greatly improved image classification accuracy on ImageNet over the years. CNNs consist of convolutional layers that apply filters to extract features, pooling layers that reduce the spatial size, and fully connected layers for classification. Training involves tuning parameters through backpropagation, while inference uses a trained model for classification. Example networks discussed include AlexNet, VGG16, GoogLeNet and ResNet, which contain increasing numbers of parameters and computational operations.
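The growth in parameter counts across those example networks follows directly from the layer arithmetic. A small sketch, using the well-known shapes of VGG16's first convolutional and first fully connected layers purely as an illustration:

```python
def conv_params(kh, kw, cin, cout):
    """Parameters of a conv layer: one kh x kw x cin filter per output channel, plus biases."""
    return kh * kw * cin * cout + cout

def fc_params(nin, nout):
    """Parameters of a fully connected layer: one weight per input-output pair, plus biases."""
    return nin * nout + nout

first_conv = conv_params(3, 3, 3, 64)      # VGG16's first conv layer: 1,792 params
first_fc = fc_params(7 * 7 * 512, 4096)    # VGG16's first FC layer: ~103M params
```

The fully connected layers dominate the parameter count, which is why later architectures such as GoogLeNet and ResNet largely replace them with pooling.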
This presentation is an analysis of the paper, "SCRDet++: Detecting Small, Cluttered and Rotated Objects via Instance-Level Feature Denoising and Rotation Loss Smoothing".
PyData London 2015 - Localising Organs of the Fetus in MRI Data Using Python (Kevin Keraudren)
This document summarizes an automated method for localizing fetal organs in magnetic resonance images. The method uses machine learning to sequentially localize the brain, heart, lungs and liver. It first normalizes fetal size based on gestational age. It then localizes the brain, uses this to search for the heart between two spheres. The heart location guides searching inside a third sphere for the lungs and liver. Features incorporate spatial relationships modeled by Gaussian distributions. Classification predicts organ candidates, regression refines locations, and spatial optimization selects the final detection by maximizing votes and relative organ positions. Training involves extracting random cube features around labeled pixels to classify organs.
What Is Deep Learning? | Introduction to Deep Learning | Deep Learning Tutori... (Simplilearn)
This Deep Learning presentation will help you understand what deep learning is, why we need it, and its applications, along with a detailed explanation of neural networks and how they work. Deep learning is inspired by the workings of the human brain, specifically as modeled by artificial neural networks. These networks, which represent the decision-making process of the brain, use complex algorithms that process data in a non-linear way, learning in an unsupervised manner to make choices based on the input. This deep learning tutorial is ideal for professionals with beginner to intermediate levels of experience. Now, let us dive deep into this topic and understand what deep learning actually is.
Below topics are explained in this Deep Learning Presentation:
1. What is Deep Learning?
2. Why do we need Deep Learning?
3. Applications of Deep Learning
4. What is Neural Network?
5. Activation Functions
6. Working of Neural Network
Simplilearn’s Deep Learning course will transform you into an expert in deep learning techniques using TensorFlow, the open-source software library designed for machine learning and deep neural network research. With our deep learning course, you’ll master deep learning and TensorFlow concepts, learn to implement algorithms, build artificial neural networks, and traverse layers of data abstraction to understand the power of data, preparing you for your new role as a deep learning scientist.
Why Deep Learning?
TensorFlow is one of the most popular software platforms used for deep learning and contains powerful tools to help you build and implement artificial neural networks.
Advancements in deep learning are being seen in smartphone applications, creating efficiencies in the power grid, driving advancements in healthcare, improving agricultural yields, and helping us find solutions to climate change. With this TensorFlow course, you’ll build expertise in deep learning models, learn to operate TensorFlow to manage neural networks, and interpret the results.
You can gain in-depth knowledge of Deep Learning by taking our Deep Learning certification training course. With Simplilearn’s Deep Learning course, you will prepare for a career as a Deep Learning engineer as you master concepts and techniques including supervised and unsupervised learning, mathematical and heuristic aspects, and hands-on modeling to develop algorithms.
There is booming demand for skilled deep learning engineers across a wide range of industries, making this deep learning course with TensorFlow training well-suited for professionals at the intermediate to advanced level of experience. We recommend this deep learning online course particularly for the following professionals:
1. Software engineers
2. Data scientists
3. Data analysts
4. Statisticians with an interest in deep learning
The document provides an overview of deep learning examples and applications including computer vision tasks like image classification and object detection from images, speech recognition from audio, and natural language processing on text. It then discusses common deep learning network structures like convolutional neural networks and how they are applied to tasks like handwritten digit recognition. Finally, it outlines Intel's portfolio of AI tools and libraries for deep learning including frameworks, libraries, and hardware.
The document discusses several case studies of applying machine learning to different problems:
1) Classifying images as day or night using a convolutional neural network.
2) Face verification using deep neural networks to encode faces and compare encodings.
3) Neural style transfer to generate artistic images by combining the content of one image and the style of another, using neural network features and gram matrices.
4) Various techniques are discussed for problems like face identification, clustering, and trigger word detection from audio. The case studies illustrate different modeling decisions needed for machine learning projects.
The document discusses finding initial parameters for neural networks used in data clustering. It presents a modified K-means fast learning artificial neural network (K-FLANN) algorithm that uses differential evolution to optimize the vigilance and tolerance parameters of K-FLANN. Differential evolution is used to select parameter values that yield good clustering performance. The modified K-FLANN algorithm is evaluated on several data sets and shown to provide promising results in terms of convergence rate and accuracy compared to other algorithms. Comparisons are also made between the original and modified versions of K-FLANN.
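Differential evolution itself is a simple population-based search. A minimal DE/rand/1/bin sketch, with a stand-in objective function instead of K-FLANN's clustering error (the vigilance/tolerance optimum below is hypothetical, for illustration only):

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, gens=100, seed=0):
    """Minimal DE/rand/1/bin minimizer over box-constrained parameters."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)   # DE/rand/1 mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True             # keep at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:                       # greedy selection
                pop[i], fit[i] = trial, f_trial
    return pop[fit.argmin()], fit.min()

# Stand-in objective: pretend the clustering error is minimized at
# vigilance = 0.5, tolerance = 1.5 (hypothetical values)
best, err = differential_evolution(lambda p: (p[0] - 0.5) ** 2 + (p[1] - 1.5) ** 2,
                                   bounds=[(0.0, 1.0), (0.0, 3.0)])
```

In the paper's setting, `f` would run K-FLANN with the candidate parameters and return a clustering-quality score.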
SLIC Superpixel Based Self Organizing Maps Algorithm for Segmentation of Micr...IJAAS Team
Microarray technology enables the simultaneous monitoring of thousands of genes in parallel. Based on these measurements, microarray technology has proven powerful in gene expression profiling for discovering new types of diseases and for predicting the type of a disease. Gridding, intensity extraction, enhancement and segmentation are important steps in microarray image analysis. This paper presents a simple linear iterative clustering (SLIC) based self-organizing maps (SOM) algorithm for segmentation of microarray images. Clusters of pixels which share similar features are called superpixels; they can be used as mid-level units to decrease the computational cost in many vision applications. The proposed algorithm uses superpixels as clustering objects instead of individual pixels. Qualitative and quantitative analysis shows that the proposed method produces better segmentation quality than k-means, fuzzy c-means and self-organizing maps clustering methods.
The document discusses neural networks, generative adversarial networks, and image-to-image translation. It begins by explaining how neural networks learn through forward propagation, calculating loss, and using the loss to update weights via backpropagation. Generative adversarial networks are introduced as a game between a generator and discriminator, where the generator tries to fool the discriminator and vice versa. Image-to-image translation uses conditional GANs to translate images from one domain to another, such as maps to aerial photos.
In this talk we detail the steps to create a visual search engine for 1M Amazon products using MXNet Gluon and the k-nearest-neighbor search library HNSW.
For implementation details, check this repository: https://github.com/ThomasDelteil/VisualSearch_MXNet
Video available here:
https://www.youtube.com/watch?v=9a8MAtfFVwI
Demo website available here:
https://thomasdelteil.github.io/VisualSearch_MXNet/
An image can be seen as a matrix I, where I(x, y) is the brightness of the pixel at coordinates (x, y). In a convolutional neural network, the kernel is a filter used to extract features from the image.
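That idea can be shown directly: slide a kernel over the image matrix and take a weighted sum at each position. The Sobel kernel below is a classic hand-crafted edge filter; a CNN learns kernels like it from data:

```python
import numpy as np

def convolve2d(I, K):
    """Apply kernel K to image I (valid mode, cross-correlation as in CNNs)."""
    kh, kw = K.shape
    H, W = I.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(I[y:y + kh, x:x + kw] * K)
    return out

# Vertical step edge: left half dark, right half bright
I = np.zeros((5, 6))
I[:, 3:] = 1.0
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
edges = convolve2d(I, sobel_x)   # responds strongly at the brightness step
```

The output map is largest where the kernel's pattern (a left-to-right brightness change) matches the image, and zero in flat regions.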
- The document presents a neural network model for recognizing handwritten digits. It uses a dataset of 20x20 pixel grayscale images of digits 0-9.
- The proposed neural network has an input layer of 400 nodes, a hidden layer of 25 nodes, and an output layer of 10 nodes. It is trained using backpropagation to classify images.
- The model achieves an accuracy of over 96.5% on test data after 200 iterations of training, outperforming a logistic regression model which achieved 91.5% accuracy. Future work could involve classifying more complex natural images.
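The 400-25-10 architecture described above amounts to the following forward pass. The weights here are random, purely to show the shapes; in the actual model they come from backpropagation training:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    """One forward pass through a 400-25-10 network."""
    h = sigmoid(W1 @ x + b1)       # hidden layer: 25 units
    o = sigmoid(W2 @ h + b2)       # output layer: 10 digit scores
    return o.argmax(), o

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 0.1, (25, 400)), np.zeros(25)
W2, b2 = rng.normal(0, 0.1, (10, 25)), np.zeros(10)
x = rng.random(400)                # one flattened 20x20 grayscale image
digit, scores = forward(x, W1, b1, W2, b2)
```

Training adjusts `W1, b1, W2, b2` so that the score for the correct digit dominates.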
Scratch to Supercomputers: Bottoms-up Build of Large-scale Computational Lens... (inside-BigData.com)
In this deck from the 2018 Swiss HPC Conference, Gilles Fourestey from EPFL presents: Scratch to Supercomputers: Bottoms-up Build of Large-scale Computational Lensing Software.
"LENSTOOL is a gravitational lensing software that models mass distribution of galaxies and clusters. It was developed by Prof. Kneib, head of the LASTRO lab at EPFL, et al., starting from 1996. It is used to obtain sub-percent precision measurements of the total mass in galaxy clusters and constrain the dark matter self-interaction cross-section, a crucial ingredient to understanding its nature.
However, LENSTOOL lacks efficient vectorization and only uses OpenMP, which limits its execution to one node and can lead to execution times that exceed several months. Therefore, the LASTRO and the EPFL HPC group decided to rewrite the code from scratch and in order to minimize risk and maximize performance, a bottom-up approach that focuses on exposing parallelism at hardware and instruction levels was used. The result is a high performance code, fully vectorized on Xeon, Xeon Phis and GPUs that currently scales up to hundreds of nodes on CSCS’ Piz Daint, one of the fastest supercomputers in the world."
Watch the video: https://wp.me/p3RLHQ-ili
Learn more: https://infoscience.epfl.ch/record/234382/files/EPFL_TH8338.pdf?subformat=pdfa
and
http://www.hpcadvisorycouncil.com/events/2018/swiss-workshop/agenda.php
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
U-Net is a convolutional neural network used for biomedical image segmentation. It takes in an input image and outputs a segmentation map identifying nuclei pixels. The U-Net architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. The model was trained on microscopy images annotated with nuclei masks to achieve an intersection over union score of 85% after data augmentation was applied.
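The intersection-over-union score used to evaluate the segmentation is computed, for binary masks, as the overlap divided by the combined area:

```python
import numpy as np

def iou(pred, target):
    """Intersection over union for two binary masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:3] = True        # predicted nuclei: 4 pixels
target = np.zeros((4, 4), dtype=bool)
target[1:4, 1:4] = True      # ground truth: 9 pixels, 4 of them overlapping
score = iou(pred, target)    # 4 / 9
```

A perfect segmentation gives 1.0; the 85% figure above is this score averaged over the test images.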
Fuzzy Entropy Based Optimal Thresholding Technique for Image Enhancement (ijsc)
Soft computing is likely to play a progressively important role in many applications, including image enhancement. The paradigm for soft computing is the human mind, and the soft computing critique has been particularly strong with fuzzy logic. Fuzzy logic represents facts as rules for the management of uncertainty. In this paper the multi-dimensional optimization problem is addressed by discussing optimal thresholding using fuzzy entropy for image enhancement. This technique is compared with bi-level and multi-level thresholding, and optimal threshold values are obtained for different levels of speckle-noisy and low-contrast images. The fuzzy entropy method produced better results than the bi-level and multi-level thresholding techniques.
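One common fuzzy-entropy thresholding formulation (a Huang-Wang-style sketch; the paper's exact definition may differ) scores each candidate threshold by how ambiguous the pixel memberships are, and keeps the threshold that minimizes that entropy:

```python
import numpy as np

def fuzzy_entropy(img, t):
    """Total fuzzy entropy of thresholding img at t (Huang-Wang style)."""
    fg, bg = img[img > t], img[img <= t]
    if fg.size == 0 or bg.size == 0:
        return np.inf
    C = img.max() - img.min()
    # membership of each pixel to its own class: 1 when exactly at the class mean
    mu = np.where(img > t,
                  1.0 / (1.0 + np.abs(img - fg.mean()) / C),
                  1.0 / (1.0 + np.abs(img - bg.mean()) / C))
    mu = np.clip(mu, 1e-12, 1 - 1e-12)
    # Shannon entropy of the memberships: lowest when pixels fit their class well
    return float(-(mu * np.log(mu) + (1 - mu) * np.log(1 - mu)).mean())

# Toy 8-bit intensities with a dark group and a bright group
img = np.array([10, 12, 11, 200, 205, 198, 11, 202], dtype=float)
best_t = min(range(256), key=lambda t: fuzzy_entropy(img, t))
```

On this toy image every threshold that cleanly separates the two groups scores equally well, and the search returns one of them.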
HYBRIDIZATION OF DCT BASED STEGANOGRAPHY AND RANDOM GRIDS (IJNSA Journal)
With the increasing popularity of information technology in communication network, security has become an inseparable but vital issue for providing for confidentiality, data security, entity authentication and data origin authentication. Steganography is the scheme of hiding data into a cover media to provide confidentiality and secrecy without risking suspicion of an intruder. Visual cryptography is a new technique which provides information security using simple algorithm unlike the complex, computationally intensive algorithms used in other techniques like traditional cryptography. This technique allows visual information to be encrypted in such a way that their decryption can be performed by the Human Visual System (HVS), without any complex cryptographic algorithms. To provide a better secured system that ensures high data capacity and information security, a multilevel security system can be thought for which can be built by incorporating the principles of steganography and visual cryptography.
HYBRIDIZATION OF DCT BASED STEGANOGRAPHY AND RANDOM GRIDS (IJNSA Journal)
The document discusses a hybrid approach to steganography and visual cryptography for improved data security. It proposes combining principles of steganography, which hides data in a cover media, and visual cryptography, which encrypts images in a way that can be decrypted by human vision without algorithms. Specifically, it describes generating two random grids from a secret image that reveal the image when overlaid but hide it individually. The random grids are created by inverting or substituting pixels based on the secret image. This hybrid approach aims to provide stronger security than either technique alone by incorporating advantages of both.
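The random-grid construction described here can be sketched directly: the first grid is pure noise, and the second copies or inverts each pixel depending on the secret:

```python
import numpy as np

def random_grids(secret, seed=0):
    """Split a binary secret (1 = black) into two random grids.

    Each grid alone is uniform noise; overlaying them with a pixel-wise OR
    turns black secret pixels fully black while white areas stay 50% noise.
    """
    rng = np.random.default_rng(seed)
    g1 = rng.integers(0, 2, size=secret.shape)
    g2 = np.where(secret == 1, 1 - g1, g1)   # invert where the secret is black
    return g1, g2

secret = np.array([[1, 0],
                   [0, 1]])
g1, g2 = random_grids(secret)
overlay = g1 | g2   # what the human visual system sees when the grids are stacked
```

This is why no cryptographic computation is needed for decryption: physically stacking the two transparencies performs the OR.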
This document summarizes a study of deep learning models and Bayesian statistics. It discusses the history of artificial intelligence and machine learning before introducing restricted Boltzmann machines, deep belief networks, and Bayesian statistics. It describes experiments applying restricted Boltzmann machines to classify movies and generate images, and using a deep belief network to classify images from multiple datasets with 100% accuracy. The conclusion states that deep learning has advanced artificial intelligence by allowing algorithms to perform multiple tasks and taken us closer to the original goal of general artificial intelligence.
IRJET - Real-Time Object Detection using Deep Learning: A Survey (IRJET Journal)
This document summarizes recent advances in real-time object detection using deep learning. It first provides an overview of object detection and deep learning. It then reviews popular object detection models including CNNs, R-CNNs, Fast R-CNN, Faster R-CNN, YOLO, and SSD. The document proposes modifications to existing models to improve small object detection accuracy. Specifically, it proposes using Darknet-53 with feature map upsampling and concatenation at multiple scales to detect objects of different sizes. It also describes using k-means clustering to select anchor boxes tailored to each detection scale.
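The anchor-box selection step can be sketched as k-means on box shapes with 1 − IoU as the distance, as popularized by the YOLO papers. The toy boxes and the simple farthest-point initialization below are illustrative, not the paper's data:

```python
import numpy as np

def iou_wh(box, clusters):
    """IoU between one (w, h) box and each cluster box, all anchored at the origin."""
    inter = np.minimum(box[0], clusters[:, 0]) * np.minimum(box[1], clusters[:, 1])
    union = box[0] * box[1] + clusters[:, 0] * clusters[:, 1] - inter
    return inter / union

def kmeans_anchors(boxes, iters=20):
    """Two-cluster k-means on box shapes with 1 - IoU as the distance."""
    # farthest-point init: the first box, then the box least similar to it
    clusters = np.stack([boxes[0],
                         boxes[np.argmin(iou_wh(boxes[0], boxes))]]).astype(float)
    for _ in range(iters):
        assign = np.array([np.argmax(iou_wh(b, clusters)) for b in boxes])
        for j in range(len(clusters)):
            if np.any(assign == j):             # only update non-empty clusters
                clusters[j] = boxes[assign == j].mean(axis=0)
    return clusters

# Toy ground-truth boxes (width, height): a small group and a large group
boxes = np.array([[10., 12.], [11., 10.], [9., 11.],
                  [80., 90.], [85., 95.], [78., 88.]])
anchors = kmeans_anchors(boxes)
```

Using IoU rather than Euclidean distance keeps large and small boxes on an equal footing, which is exactly what anchor selection per detection scale needs.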
Random Matrix Theory and Machine Learning - Part 4 (Fabian Pedregosa)
Deep learning models with millions or billions of parameters should overfit according to classical theory, but they do not. The emerging theory of double descent seeks to explain why larger neural networks can generalize well. Random matrix theory provides a tractable framework to model double descent through random feature models, where the number of random features controls model capacity. In the high-dimensional limit, the test error of random feature regression exhibits a double descent shape that can be computed analytically.
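The interpolation threshold at the heart of double descent can be seen in a few lines: with fewer random features than training points the model cannot fit the data, while past the threshold the minimum-norm solution interpolates the training set exactly. This is a toy sketch; reproducing the full double-descent test-error curve requires a held-out set and averaging over many draws:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 30, 5
X = rng.normal(size=(n, d))         # training inputs
y = rng.normal(size=n)              # arbitrary targets

def train_mse(p):
    """Training error of min-norm least squares on p random ReLU features of X."""
    W = rng.normal(size=(d, p))     # fixed random projection
    F = np.maximum(X @ W, 0.0)      # random feature map; p controls capacity
    coef, *_ = np.linalg.lstsq(F, y, rcond=None)
    return float(np.mean((F @ coef - y) ** 2))

under = train_mse(10)    # p < n: too few features to interpolate
over = train_mse(100)    # p > n: interpolates the training set exactly
```

In the random-feature model, test error peaks near p = n and then descends again as p grows, which is the analytically computable shape referred to above.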
This document provides an overview of game theory and its applications to neural networks. It begins by discussing deductive and inductive reasoning, and how algorithms like weighted majority and gradient descent can be understood through the lens of game theory. Specifically, it notes that gradient descent achieves low regret when viewed as playing against an adversarial environment. It then discusses how neural networks achieve superhuman performance despite being non-convex problems, which required decades of engineering tweaks. Finally, it suggests game theory can provide insights into modeling populations of neural networks or "experts" that distribute knowledge effectively.
Image Quality Feature Based Detection Algorithm for Forgery in Images (ijcga)
This document summarizes a research paper that proposes an algorithm to detect image forgeries using image quality features and moment-based features. The algorithm extracts 18 image quality metrics related to mean errors, correlations, spectral errors, and HSV norms from image regions. It also applies discrete wavelet transforms and calculates moments from the characteristic functions of histogram sub-bands. Discrete cosine transforms are applied and the coefficients are used to extract additional features. The features are then used to train an SVM classifier to detect forged and authentic images. The algorithm was tested on over 1800 images and achieved accuracy rates over 90% depending on the percentage of images used for training.
Object Detection using Deep Neural Networks (Usman Qayyum)
Recent talk at PI School covering the following contents:
Object Detection
Recent Architecture of Deep NN for Object Detection
Object Detection on Embedded Computers (or for edge computing)
SqueezeNet for embedded computing
TinySSD (object detection for edge computing)
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024 (Sinan KOZAK)
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration through Gradle build-cache optimizations, recounting the team's journey of solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found along the way, the talk demonstrates the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw... (IJECEIAES)
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to precisely delineate tumor boundaries from magnetic resonance imaging (MRI) scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The model is rigorously trained and evaluated, exhibiting remarkable performance metrics, including an impressive global accuracy of 99.286%, a high class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of our proposed model. These findings underscore the model's competence in precise brain tumor localization, underscoring its potential to revolutionize medical image analysis and enhance healthcare outcomes. This research paves the way for future exploration and optimization of advanced CNN models in medical imaging, emphasizing addressing false positives and resource efficiency.
Fuzzy Entropy Based Optimal Thresholding Technique for Image Enhancement ijsc
Soft computing is likely to play aprogressively important role in many applications including image enhancement. The paradigm for soft computing is the human mind. The soft computing critique has been particularly strong with fuzzy logic. The fuzzy logic is facts representationas a rule for management of uncertainty. Inthis paperthe Multi-Dimensional optimized problem is addressed by discussing the optimal thresholding usingfuzzyentropyfor Image enhancement. This technique is compared with bi-level and multi-level thresholding and obtained optimal thresholding values for different levels of speckle noisy and low contrasted images. The fuzzy entropy method has produced better results compared to bi-level and multi-level thresholding techniques.
HYBRIDIZATION OF DCT BASED STEGANOGRAPHY AND RANDOM GRIDSIJNSA Journal
With the increasing popularity of information technology in communication network, security has become an inseparable but vital issue for providing for confidentiality, data security, entity authentication and data origin authentication. Steganography is the scheme of hiding data into a cover media to provide confidentiality and secrecy without risking suspicion of an intruder. Visual cryptography is a new technique which provides information security using simple algorithm unlike the complex, computationally intensive algorithms used in other techniques like traditional cryptography. This technique allows visual information to be encrypted in such a way that their decryption can be performed by the Human Visual System (HVS), without any complex cryptographic algorithms. To provide a better secured system that ensures high data capacity and information security, a multilevel security system can be thought for which can be built by incorporating the principles of steganography and visual cryptography.
HYBRIDIZATION OF DCT BASED STEGANOGRAPHY AND RANDOM GRIDSIJNSA Journal
The document discusses a hybrid approach to steganography and visual cryptography for improved data security. It proposes combining principles of steganography, which hides data in a cover media, and visual cryptography, which encrypts images in a way that can be decrypted by human vision without algorithms. Specifically, it describes generating two random grids from a secret image that reveal the image when overlaid but hide it individually. The random grids are created by inverting or substituting pixels based on the secret image. This hybrid approach aims to provide stronger security than either technique alone by incorporating advantages of both.
This document summarizes a study of deep learning models and Bayesian statistics. It discusses the history of artificial intelligence and machine learning before introducing restricted Boltzmann machines, deep belief networks, and Bayesian statistics. It describes experiments applying restricted Boltzmann machines to classify movies and generate images, and using a deep belief network to classify images from multiple datasets with 100% accuracy. The conclusion states that deep learning has advanced artificial intelligence by allowing algorithms to perform multiple tasks and taken us closer to the original goal of general artificial intelligence.
IRJET- Real-Time Object Detection using Deep Learning: A SurveyIRJET Journal
This document summarizes recent advances in real-time object detection using deep learning. It first provides an overview of object detection and deep learning. It then reviews popular object detection models including CNNs, R-CNNs, Fast R-CNN, Faster R-CNN, YOLO, and SSD. The document proposes modifications to existing models to improve small object detection accuracy. Specifically, it proposes using Darknet-53 with feature map upsampling and concatenation at multiple scales to detect objects of different sizes. It also describes using k-means clustering to select anchor boxes tailored to each detection scale.
Random Matrix Theory and Machine Learning - Part 4Fabian Pedregosa
Deep learning models with millions or billions of parameters should overfit according to classical theory, but they do not. The emerging theory of double descent seeks to explain why larger neural networks can generalize well. Random matrix theory provides a tractable framework to model double descent through random feature models, where the number of random features controls model capacity. In the high-dimensional limit, the test error of random feature regression exhibits a double descent shape that can be computed analytically.
This document provides an overview of game theory and its applications to neural networks. It begins by discussing deductive and inductive reasoning, and how algorithms like weighted majority and gradient descent can be understood through the lens of game theory. Specifically, it notes that gradient descent achieves low regret when viewed as playing against an adversarial environment. It then discusses how neural networks achieve superhuman performance despite being non-convex problems, which required decades of engineering tweaks. Finally, it suggests game theory can provide insights into modeling populations of neural networks or "experts" that distribute knowledge effectively.
Image Quality Feature Based Detection Algorithm for Forgery in Images ijcga
This document summarizes a research paper that proposes an algorithm to detect image forgeries using image quality features and moment-based features. The algorithm extracts 18 image quality metrics related to mean errors, correlations, spectral errors, and HSV norms from image regions. It also applies discrete wavelet transforms and calculates moments from the characteristic functions of histogram sub-bands. Discrete cosine transforms are applied and the coefficients are used to extract additional features. The features are then used to train an SVM classifier to detect forged and authentic images. The algorithm was tested on over 1800 images and achieved accuracy rates over 90% depending on the percentage of images used for training.
Object Detection using Deep Neural NetworksUsman Qayyum
Recent Talk at PI school covering following contents
Object Detection
Recent Architecture of Deep NN for Object Detection
Object Detection on Embedded Computers (or for edge computing)
SqueezeNet for embedded computing
TinySSD (object detection for edge computing)
1. "All truths are easy to understand once they are discovered; the point is to discover them." - Galileo
Work done in IIT Madras
Image Texture Analysis
Lalit Gupta, Scientist, Philips Research
2. Texture Analysis
[Diagram: textured image → region-based texture segmentation + texture edge detection]
3. Region-Based Texture Segmentation
4. Image histograms
[Figure: two textured images, each divided into regions R1, R2, R3, R4, with the corresponding image histograms]
5. Classification using Proposed Methodology
[Diagram of the pipeline:]
Image → DWT (Daubechies), 1st-level decomposition → subband images A1, V1, H1, D1
→ filtering with DCT (9 masks) → filtered images Dj
→ Gaussian filtering (smoothing) → smoothed images Gj
→ feature extraction → feature images Fj
→ FCM (unsupervised classification)
DWT: Discrete wavelet transform. DCT: Discrete cosine transform. Ref: [Randen99]
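The final stage of the pipeline is unsupervised classification with fuzzy c-means. A minimal numpy sketch of standard FCM (not the deck's exact implementation; the fuzzifier value of 2 and the toy data are assumptions for illustration):

```python
import numpy as np

def fcm(X, c, fuzz=2.0, iters=100, seed=0):
    """Fuzzy c-means. X: (n_samples, n_features) feature vectors, c: clusters.
    Returns (centers, U), where U[i, k] is the membership of sample i in cluster k.
    `fuzz` is the fuzzifier (called m in the FCM literature; here the deck
    uses m for the number of classes, so a different name is used)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per sample
    for _ in range(iters):
        W = U ** fuzz                            # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ik = d_ik^(-2/(fuzz-1)) / sum_j d_ij^(-2/(fuzz-1))
        U = 1.0 / (d ** (2 / (fuzz - 1)) * np.sum(d ** (-2 / (fuzz - 1)), axis=1, keepdims=True))
    return centers, U

# toy example: two well-separated 2-D "feature" clusters (hypothetical data)
X = np.vstack([np.zeros((10, 2)), np.full((10, 2), 5.0)])
centers, U = fcm(X, c=2)
labels = U.argmax(axis=1)
```

In the deck's pipeline, each pixel's row of X would hold its 36 feature-image responses, and the membership matrix U is what initializes the CSNN probabilities later on.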
6. Steps of Processing
[Figure: input image → DWT (A1, V1, H1, D1) → DCT → smoothing → mean → 36 feature images → FCM]
7. Results using various Filtering Techniques
[Figure: (a) input image, (b) DWT, (c) Gabor filter, (d) DWT+Gabor, (e) GMRF, (f) DWT+MRF, (g) DCT, (h) DWT+DCT]
Ref: [Ng92], [Rao2004], [Cesmeli2001]
8. Results (Cont.)
[Figure: input images I1–I10]
9. Results (Cont.)
[Figure: error in classification (%) vs. image index (1–10) for DWT+Gabor, DWT+MRF, DCT and DWT+DCT; errors range from 0 to 40%]
Error in classification = (number of pixels incorrectly classified) / (total number of pixels)
10. Texture Edge Detection
11. Proposed Methodology
Input image
→ Filtering using a 1-D discrete wavelet transform and a 1-D Gabor filter bank (16 filtered images, 8 each along the horizontal and vertical parallel lines of the image)
→ Smoothing using a 2-D asymmetric Gaussian filter (smoothed images)
→ Self-Organizing feature Map (SOM): the 16-dimensional feature vector is mapped onto a one-dimensional feature map
→ Smoothing using a 2-D symmetric Gaussian filter (smoothed image)
→ Edge detection using the Canny operator (edge map)
→ Edge linking (final edge map)
Ref: [Liu99], [Canny86], [Yegnanarayana98]
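The SOM stage collapses each pixel's 16-dimensional feature vector to a position on a one-dimensional map, giving a scalar "texture coordinate" per pixel. A minimal 1-D SOM sketch (the learning-rate and neighborhood schedules and the toy data are assumptions, not the deck's settings):

```python
import numpy as np

def som_1d(X, n_units=16, iters=200, seed=0):
    """Train a 1-D self-organizing map on feature vectors X (n_samples, n_features).
    Returns the map-unit index (0..n_units-1) of each vector's best-matching unit."""
    rng = np.random.default_rng(seed)
    W = rng.random((n_units, X.shape[1]))        # unit weight vectors
    pos = np.arange(n_units)                     # unit positions on the 1-D map
    for t in range(iters):
        lr = 0.5 * (1 - t / iters)               # decaying learning rate (assumed)
        sigma = max(n_units / 2 * (1 - t / iters), 0.5)  # shrinking neighborhood
        x = X[rng.integers(len(X))]              # random training sample
        bmu = int(np.argmin(np.linalg.norm(W - x, axis=1)))  # best-matching unit
        h = np.exp(-((pos - bmu) ** 2) / (2 * sigma ** 2))   # neighborhood function
        W += lr * h[:, None] * (x - W)
    return np.array([int(np.argmin(np.linalg.norm(W - x, axis=1))) for x in X])

# toy example: two distinct 16-D "texture feature" groups (hypothetical data)
X = np.vstack([np.zeros((20, 16)), np.ones((20, 16))])
feature_map = som_1d(X, n_units=16)
```

Because the map is one-dimensional and topology-preserving, pixels with similar textures land on nearby units, so the resulting scalar image can be smoothed and fed to a standard Canny operator, as the slide describes.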
12. Steps of Processing
[Figure: input image → filtered images → smoothed images → feature map → smoothed image → edge map]
13. Results
[Figure: three input images with their corresponding edge maps]
14. Integrating Region and Edge Information for Texture Segmentation
We have used a modified constraint satisfaction neural network, termed the Constraint Satisfaction Neural Network for Complementary Information Integration (CSNN-CII), which integrates the region-based and edge-based approaches.
[Diagram: segmented map + edge map → integrated result]
15. Dynamic Window
[Figure: a dynamic window positioned within the image]
16. Constraint Satisfaction Neural Networks for Image Segmentation
[Figure: 3-D grid of neurons indexed (i, j, k), with 1 ≤ i ≤ n, 1 ≤ j ≤ n, 1 ≤ k ≤ m]
Size of image: n × n. Number of labels/classes: m. Ref: [Lin92]
17. Constraint Satisfaction Neural Network for Complementary Information Integration (CSNN-CII)
Each neuron in the CSNN-CII contains two fields: a probability and a rank.
Probability: the probability that the pixel belongs to the segment represented by the corresponding layer.
Rank: the position of that neuron's probability when the probabilities across the layers are sorted in decreasing order.
Example: probabilities (0.1, 0.5, 0.4) across three layers give ranks (3, 1, 2).
18. The weight between the kth layer's (i, j)th neuron, U_ijk, and the lth layer's (q, r)th neuron, U_qrl, is computed as:
W_ij,qr,k,l = (1/p) · (1 − 2·|R_ijk − R_qrl| / m)
Weights in the CSNN can be interpreted as constraints. Weights are determined based on the heuristic that a neuron excites other neurons representing the labels of similar intensities and inhibits other neurons representing labels of quite different intensities.
Where,
p: number of neurons in the 2-D neighborhood (dynamic window).
m: number of layers (classes).
U_ijk: the kth layer's (i, j)th neuron.
R_ijk: rank of the (i, j)th neuron in the kth layer, i.e. of neuron U_ijk.
Ref: [Lin 92]
[Figure: neurons U_ijk and U_qrl connected by weight W_ij,qr,k,l]
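The weight formula can be checked against the worked example on a later slide, where p = 5 and m = 2: equal ranks give an excitatory weight of 1/5, and ranks differing by one give 0. A minimal sketch (the function name is ours, for illustration):

```python
def csnn_weight(R_ijk, R_qrl, p, m):
    """Rank-based CSNN-CII weight: W = (1/p) * (1 - 2*|R_ijk - R_qrl| / m)."""
    return (1.0 / p) * (1.0 - 2.0 * abs(R_ijk - R_qrl) / m)

w_same = csnn_weight(1, 1, p=5, m=2)   # equal ranks: excitatory, 1/5
w_diff = csnn_weight(1, 2, p=5, m=2)   # ranks differ by one with m = 2: weight 0
```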
19. Algorithm
• Phase 1:
– Initialize the CSNN neurons using the fuzzy c-means results.
• The probability values obtained from FCM are assigned to the nodes of the CSNN. Ranks for each neuron are also computed on the basis of the initial class probabilities.
FCM output (probabilities):
Layer-1: 0.2 0.2 0.8 | 0.3 0.6 0.2 | 0.6 0.3 0.6
Layer-2: 0.8 0.8 0.2 | 0.7 0.4 0.8 | 0.4 0.7 0.4
CSNN-CII (probability, rank):
Layer-1: (0.2, 2) (0.2, 2) (0.8, 1) | (0.3, 2) (0.6, 1) (0.2, 2) | (0.6, 1) (0.3, 2) (0.6, 1)
Layer-2: (0.8, 1) (0.8, 1) (0.2, 2) | (0.7, 1) (0.4, 2) (0.8, 1) | (0.4, 2) (0.7, 1) (0.4, 2)
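Rank assignment from the FCM probabilities can be sketched with an argsort over the layer axis; the example below reuses this slide's 3×3, two-layer window (the helper name is ours):

```python
import numpy as np

def assign_ranks(P):
    """P: (m, H, W) per-layer class probabilities for each pixel.
    Returns integer ranks of the same shape, 1 = largest probability at that pixel."""
    order = np.argsort(-P, axis=0)               # descending order along the layer axis
    ranks = np.empty_like(order)
    # ranks[order[0]] = 1 (largest), ranks[order[1]] = 2, ...
    np.put_along_axis(ranks, order, np.arange(1, P.shape[0] + 1)[:, None, None], axis=0)
    return ranks

# the slide's FCM output: layer-1 and layer-2 probabilities
P = np.array([[[0.2, 0.2, 0.8], [0.3, 0.6, 0.2], [0.6, 0.3, 0.6]],
              [[0.8, 0.8, 0.2], [0.7, 0.4, 0.8], [0.4, 0.7, 0.4]]])
R = assign_ranks(P)
```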
20. Algorithm (Cont.)
– Iterate and update the probabilities and the edge map, and determine the winner label.
The net input to neuron U_ijk is:
H^t_ijk = Σ_{U_qrl ∈ N_ij} W_ij,qr,k,l · O^t_qrl
Where,
H_ijk: sum of the inputs from all neighboring neurons.
O_ijk: the probability of the (i, j)th pixel having label k (the probability value assigned to neuron U_ijk).
N_ij: the set of neurons in the 3-D neighborhood of the (i, j)th neuron (considering the dynamic window).
[Figure: neuron U_ijk accumulating inputs H_ijk from its neighborhood across layers 1…m]
21. Algorithm (Cont.)
Using the CSNN-CII probabilities and ranks from Phase 1, with p = 5 and m = 2:
For neurons with rank = 1: W = (1/5) · (1 − 2·|1 − 1| / 2) = 1/5
For neurons with rank = 2 (relative to a rank-1 neuron): W = (1/5) · (1 − 2·|1 − 2| / 2) = 0
H_a = 0 · 0.2 + (1/5) · 0.8 + … + (1/5) · 0.8 + … = 0.74
H_b = 0.26
Edge information:
1 0 0
1 0 0
1 0 0
22. Algorithm (Cont.)
The probabilities are updated and renormalized as:
O^{t+1}_ijk = Pos(O^t_ijk + ΔO^t_ijk) / Σ_{l=1}^{m} Pos(O^t_ijl + ΔO^t_ijl)
Where,
Pos(X) = X if X > 0, and 0 otherwise
ΔO^t_ijk = +η if H^t_ijk = max_l(H^t_ijl), and −η otherwise
For the center neuron of the example window: H_a = 0.74 and H_b = 0.26, so H_a > H_b; the current probabilities are O_a = 0.6 and O_b = 0.4.
23. Algorithm (Cont.)
With η = 0.1: ΔO_a = +0.1 and ΔO_b = −0.1, giving the updated probability values O_a = 0.7 and O_b = 0.3.
Labels are assigned to each pixel of the image as:
Y^{t+1}_ij = argmax_l O^{t+1}_ijl, where 1 ≤ l ≤ m
Output label map Y:
2 2 1
2 1 2
1 2 1
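The update rule of the previous two slides can be sketched in a few lines, reproducing the worked values (H_a = 0.74, H_b = 0.26, O = (0.6, 0.4), η = 0.1 → (0.7, 0.3)); the function name is ours:

```python
import numpy as np

def update_probabilities(O, H, eta=0.1):
    """One CSNN probability update for a single pixel.
    O, H: per-layer probability and net input. Delta is +eta for the layer
    with the largest H and -eta otherwise; Pos(x) = max(x, 0); the result
    is renormalized so the layer probabilities sum to 1."""
    delta = np.where(H == H.max(), eta, -eta)
    pos = np.maximum(O + delta, 0.0)
    return pos / pos.sum()

# the slide's worked example for the center neuron
O_new = update_probabilities(np.array([0.6, 0.4]), np.array([0.74, 0.26]))
```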
24. Algorithm (Cont.)
Updating the edge map:
B: edge map obtained using the lower threshold.
E: edge map obtained using the higher threshold.
M_ij: the set of pixels in the neighborhood of pixel (i, j) in the output image Y, of size 2v+1, excluding edge pixels in E:
M^t_ij = { Y^t_qr : i − v ≤ q ≤ i + v, j − v ≤ r ≤ j + v, E^t_qr = 0 }
The edge map at each iteration is computed as:
E^{t+1}_ij = 1 if E^t_ij = 1, or if B_ij = 1 and min(M_ij) ≠ max(M_ij); 0 otherwise
[Figure: Y with min(M_ij) ≠ max(M_ij) where a region boundary passes through the window]
25. Algorithm (Cont.)
The edge map at each iteration is computed as:
E^{t+1}_ij = 1 if E^t_ij = 1, or if B_ij = 1 and min(M_ij) ≠ max(M_ij); 0 otherwise
– Check the convergence condition, i.e., the number of pixels updated in Y at each iteration. If there is any update, go to the second step.
[Figure: B, Y, E and M combining into the updated edge map E]
26. Algorithm (Cont.)
• Phase 2
– Iterate, and update the edge map E by removing extra edge pixels and by adding new edge pixels.
L_ij is considered as:
L_ij = { Y^t_qr : i − 1 ≤ q ≤ i + 1, j − 1 ≤ r ≤ j + 1, E_qr = 0 }
The edge map E is updated (removing extra edge pixels) as:
E_ij = 0 if E_ij = 1 and min(L_ij) = max(L_ij); E_ij otherwise
[Figure: Y with min(L_ij) = max(L_ij) around a spurious edge pixel]
27. Algorithm (Cont.)
Finally, new edge pixels are added where E_ij = 0 and min(L_ij) ≠ max(L_ij):
E_ij = 1 if E_ij = 0 and min(L_ij) ≠ max(L_ij); E_ij otherwise
– Merge the edge map and the segmented map to get the final output.
[Figure: E and Y combining into the updated edge map E]
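Phase 2's refinement removes edge pixels whose non-edge 3×3 neighborhood carries a single label and adds edge pixels where it carries more than one. A sketch under the same assumptions as before (square image, reconstructed rules; the label maps below are hypothetical illustration data):

```python
import numpy as np

def refine_edge_map(E, Y):
    """One Phase 2 pass. L_ij: non-edge labels in the 3x3 neighborhood of (i, j).
    Remove E[i,j]=1 pixels whose L_ij agrees (min = max);
    add E[i,j]=0 pixels whose L_ij disagrees (min != max)."""
    n = E.shape[0]
    E_new = E.copy()
    for i in range(n):
        for j in range(n):
            qs = slice(max(i - 1, 0), min(i + 2, n))
            rs = slice(max(j - 1, 0), min(j + 2, n))
            L = Y[qs, rs][E[qs, rs] == 0]        # non-edge labels around (i, j)
            if L.size == 0:
                continue
            if E[i, j] == 1 and L.min() == L.max():
                E_new[i, j] = 0                  # extra edge pixel: neighborhood agrees
            elif E[i, j] == 0 and L.min() != L.max():
                E_new[i, j] = 1                  # missed edge pixel: neighborhood disagrees
    return E_new

# removal: a spurious edge pixel inside a uniform region disappears
E_sp = np.zeros((4, 4), dtype=int); E_sp[1, 1] = 1
E_ref = refine_edge_map(E_sp, np.ones((4, 4), dtype=int))

# addition: boundary pixels between two labels are filled in
E_add = refine_edge_map(np.zeros((4, 4), dtype=int), np.array([[1, 1, 2, 2]] * 4))
```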
28. Final Output
– Merge the edge map and the segmented map to get the final output.
[Figure: segmented map and edge map]
29. Results
[Figure: input image; segmented map before integration (Ref: [Rao2004]); edge map before integration (Ref: [Lalit2006]); segmented map and edge map after integration]
30. Results
[Figure: input image; segmented map before integration (Ref: [Rao2004]); edge map before integration (Ref: [Lalit2006]); segmented map and edge map after integration]