The document discusses using LIDAR data and the Normalized Difference Tree Index (NDTI) to extract building footprints from point cloud data. It describes collecting LIDAR data with aircraft-mounted sensors and processing it to generate nDSM, intensity, and NDTI rasters. Image segmentation and rule-based classification are then used to classify objects in the rasters as buildings or non-buildings. The results achieved an overall accuracy of 94% and a kappa value of 0.84, showing high agreement with the reference building footprints. Machine learning techniques are recommended for more accurate extraction at smaller scale factors.
Spectroscopy, or hyperspectral imaging, is the acquisition, analysis, and extraction of spectral information measured over a specific region or object using an airborne or satellite device. Hyperspectral imaging has recently become an active field of research. One way of analysing such data is through clustering. However, due to the high dimensionality of the data and the small distance between different material signatures, clustering such data is a challenging task. In this paper, we empirically compare five clustering techniques on different hyperspectral data sets: K-means, K-medoids, fuzzy C-means, hierarchical clustering, and density-based spatial clustering of applications with noise (DBSCAN). Four data sets are used for this purpose: Botswana, Kennedy Space Center, Pavia, and Pavia University. Besides accuracy, we adopt four further similarity measures: the Rand statistic, Jaccard coefficient, Fowlkes-Mallows index, and Hubert index. In terms of accuracy, we found that fuzzy C-means performs best on the Botswana and Pavia data sets, K-means and K-medoids give better results on the Kennedy Space Center data set, and hierarchical clustering is best on Pavia University.
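As a rough illustration of two of the compared methods, the sketch below implements plain K-means and fuzzy C-means in NumPy and runs them on toy two-material "spectra"; the data, seed, and parameters are invented for illustration and are not from the paper.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's k-means on pixel spectra (rows of X)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        # assign each spectrum to its nearest center, then recompute means
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def fuzzy_cmeans(X, k, m=2.0, iters=50, seed=0):
    """Fuzzy C-means: soft memberships u[i, j] with fuzzifier m."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), k))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(iters):
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers, axis=-1) + 1e-12
        u = 1.0 / (d ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=1, keepdims=True)
    return u.argmax(axis=1)   # hardened labels for comparison

# Toy "spectra": two well-separated material signatures plus noise.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (50, 8)), rng.normal(1.0, 0.1, (50, 8))])
hard = kmeans(X, 2)
soft = fuzzy_cmeans(X, 2)
```

On real hyperspectral cubes the pixels would first be reshaped to an (n_pixels, n_bands) matrix; the small inter-signature distances the abstract mentions are exactly what makes this harder than the toy case.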
COLOR IMAGE ENCRYPTION BASED ON MULTIPLE CHAOTIC SYSTEMS (IJNSA Journal)
This paper proposes a novel color image encryption scheme based on multiple chaotic systems. The ergodicity property of chaotic systems is exploited to perform the permutation process, and a substitution operation is applied to achieve the diffusion effect. In the permutation stage, the 3D color plain-image matrix is converted to a 2D image matrix, and two generalized Arnold maps are then employed to generate hybrid chaotic sequences that depend on the plain-image's content. The generated chaotic sequences are then used to perform the permutation. The encryption key streams depend not only on the cipher keys but also on the plain-image, so the scheme can resist chosen-plaintext as well as known-plaintext attacks. In the diffusion stage, four pseudo-random gray-value sequences are generated by another generalized Arnold map and applied in the diffusion process by a bitwise XOR operation with the permuted image, row-by-row or column-by-column, to improve the encryption rate. Security and performance analyses have been performed, including key space, histogram, correlation, information entropy, key sensitivity, and differential analysis. The experimental results show that the proposed image encryption scheme is highly secure thanks to its large key space and efficient permutation-substitution operation, and is therefore suitable for practical image and video encryption.
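A minimal sketch of the permutation-plus-diffusion pipeline described above; note this toy substitutes a logistic map for the paper's generalized Arnold maps (whose parameters are not given), and its keystream, unlike the paper's, does not depend on the plain-image:

```python
import numpy as np

def logistic_stream(x0, n, r=3.99):
    """Chaotic sequence from the logistic map (a stand-in for the
    generalized Arnold maps used in the paper)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def encrypt(img, key=0.3141):
    flat = img.flatten()
    n = flat.size
    # permutation stage: the sort order of a chaotic sequence scrambles pixels
    perm = np.argsort(logistic_stream(key, n))
    permuted = flat[perm]
    # diffusion stage: XOR with a chaotic gray-value keystream
    ks = (logistic_stream(key / 2, n) * 256).astype(np.uint8)
    return permuted ^ ks, perm, ks

def decrypt(cipher, perm, ks):
    permuted = cipher ^ ks
    flat = np.empty_like(permuted)
    flat[perm] = permuted   # invert the permutation
    return flat

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
cipher, perm, ks = encrypt(img)
```

Decryption simply reverses the two stages, which is why the XOR diffusion and index permutation are both chosen to be trivially invertible given the key-derived sequences.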
A Novel Algorithm for Watermarking and Image Encryption (cscpconf)
Digital watermarking is a method of copyright protection for audio, images, video, and text. We propose a new robust watermarking technique based on the contourlet transform and singular value decomposition. The paper also proposes a novel encryption algorithm to store a signed double matrix as an RGB image. The entropy of the watermarked image and the correlation coefficient of the extracted watermark image are very close to their ideal values, demonstrating the correctness of the proposed algorithm. Experimental results also show the scheme's resilience against large blurring attacks such as mean and Gaussian filtering, linear filtering (high-pass and low-pass), non-linear filtering (median filtering), addition of a constant offset to the pixel values, and local exchange of pixels, proving the security, effectiveness, and robustness of the proposed watermarking algorithm.
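A toy sketch of the singular-value embedding that SVD-based watermarking of this kind builds on; it omits the contourlet transform entirely, and the host matrix, mark, and strength `alpha` are invented for illustration:

```python
import numpy as np

def embed(host, mark, alpha=0.1):
    """Add alpha*mark to the singular values of the host block."""
    U, s, Vt = np.linalg.svd(host, full_matrices=False)
    return U @ np.diag(s + alpha * mark) @ Vt, s

def extract(watermarked, s_orig, alpha=0.1):
    """Non-blind extraction: compare singular values against the originals."""
    s_w = np.linalg.svd(watermarked, compute_uv=False)
    return (s_w - s_orig) / alpha

# Host built with well-separated singular values so the perturbed
# values keep their order and extraction is exact.
rng = np.random.default_rng(0)
Q1, _ = np.linalg.qr(rng.normal(size=(6, 6)))
Q2, _ = np.linalg.qr(rng.normal(size=(6, 6)))
host = Q1 @ np.diag([12.0, 10.0, 8.0, 6.0, 4.0, 2.0]) @ Q2.T
mark = rng.random(6)
wm, s0 = embed(host, mark)
recovered = extract(wm, s0)
```

Robustness against filtering attacks comes from the fact that singular values change little under small perturbations; the transform-domain step in the paper strengthens this further.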
Semantic Segmentation on Satellite Imagery (Rahul Bhojwani)
This is an image semantic segmentation project targeting satellite imagery. The goal was to predict the pixel-wise segmentation map for various objects in satellite imagery, including buildings, water bodies, and roads. The data was taken from the Kaggle competition <https://www.kaggle.com/c/dstl-satellite-imagery-feature-detection>.
We implemented the FCN, U-Net, and SegNet deep learning architectures for this task.
Object Detection using Deep Neural Networks (Usman Qayyum)
A recent talk at the PI School covering the following contents:
Object Detection
Recent Architecture of Deep NN for Object Detection
Object Detection on Embedded Computers (or for edge computing)
SqueezeNet for embedded computing
TinySSD (object detection for edge computing)
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Slides for a study session given by Ryosuke Sasaki at Arithmer Inc.
It is a summary of recent methods for object pose estimation in robotics using deep learning.
He entered the Ph.D. course at the University of Tokyo in April 2020.
Arithmer began at the University of Tokyo Graduate School of Mathematical Sciences. Today, our research of modern mathematics and AI systems has the capability of providing solutions when dealing with tough complex issues. At Arithmer we believe it is our job to realize the functions of AI through improving work efficiency and producing more useful results for society.
Sub-windowed Laser Speckle Image Velocimetry by Fast Fourier Transform Technique
Abstract
In this work, laser speckle velocimetry, an optical method for velocity measurement of fluid flow, is described. A laser sheet is developed and illuminated on microscopic seeded particles to produce a speckle pattern at the recording plane. Double-frame, single-exposure speckle images are captured in such a way that the second speckle image is shifted exactly in a known direction. The auto-correlation method suffers from an ambiguity in the direction of flow; to resolve this, the spatial shift of the second image is predetermined. Cross-correlation of sub-interrogation areas is obtained by the Fast Fourier Transform technique. Four sub-windows are processed to obtain the velocity information precisely, with vector map analysis.
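The FFT-based cross-correlation step can be sketched as follows; the frames and shift are synthetic, and only the integer-pixel displacement is recovered (real PIV/speckle processing adds sub-pixel peak fitting):

```python
import numpy as np

def shift_by_xcorr(a, b):
    """Estimate the integer pixel shift of frame b relative to frame a
    via the correlation theorem: corr = IFFT(FFT(b) * conj(FFT(a)))."""
    corr = np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # fold peaks in the upper half back to negative shifts (circular FFT)
    return tuple(int(p) - n if p > n // 2 else int(p)
                 for p, n in zip(peak, corr.shape))

# Synthetic speckle pattern and a known displacement between exposures.
rng = np.random.default_rng(0)
frame1 = rng.random((32, 32))
frame2 = np.roll(frame1, (3, -5), axis=(0, 1))
shift = shift_by_xcorr(frame1, frame2)
```

In the sub-windowed scheme described above, this routine would be applied to each interrogation sub-window to build the vector map.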
We consider the problem of finding anomalies in high-dimensional data using popular PCA based anomaly scores. The naive algorithms for computing these scores explicitly compute the PCA of the covariance matrix which uses space quadratic in the dimensionality of the data. We give the first streaming algorithms
that use space that is linear or sublinear in the dimension. We prove general results showing that any sketch of a matrix that satisfies a certain operator norm guarantee can be used to approximate these scores. We instantiate these results with powerful matrix sketching techniques such as Frequent Directions and random projections to derive efficient and practical algorithms for these problems, which we validate over real-world data sets. Our main technical contribution is to prove matrix perturbation
inequalities for operators arising in the computation of these measures.
-Proceedings: https://arxiv.org/abs/1804.03065
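One common PCA-based anomaly score of the kind discussed above is the residual left after projecting each point onto the top-k principal subspace. A naive (non-streaming, explicitly materialized) NumPy sketch, with synthetic data:

```python
import numpy as np

def pca_residual_scores(X, k):
    """Anomaly score = squared norm of the component of each row that
    lies outside the top-k principal subspace of the centered data."""
    Xc = X - X.mean(axis=0)
    # top-k right singular vectors span the principal subspace
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:k]                # (k, d), orthonormal rows
    proj = Xc @ P.T @ P       # projection onto the subspace
    return ((Xc - proj) ** 2).sum(axis=1)

# Data lying in a 2-D subspace of R^10, plus one off-subspace outlier.
rng = np.random.default_rng(0)
basis = rng.normal(size=(2, 10))
X = rng.normal(size=(200, 2)) @ basis
X[0] += 5.0 * rng.normal(size=10)
scores = pca_residual_scores(X, 2)
```

This is exactly the quadratic-space baseline the paper improves on: the sketching algorithms approximate these scores without forming the covariance or full SVD.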
Application of Image Retrieval Techniques to Understand Evolving Weather (ijsrd.com)
Multispectral satellite images provide valuable information for understanding the evolution of various weather systems, such as tropical cyclones, shifting of the intertropical convergence zone, and movements of various troughs; accurate prediction and estimation will save lives and property. This work deals with the development of an application that enables users to search an image database using gray-level, texture, or shape features for meteorological satellite image retrieval. The gray-level feature is extracted using the histogram method. The texture feature is extracted using the gray-level co-occurrence method and a wavelet approach. The shape feature vector is extracted using morphological operations. The similarity between the query image and database images is calculated using Euclidean distance. The performance of the system is evaluated using precision.
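The gray-level branch of the retrieval pipeline (histogram feature plus Euclidean-distance ranking) can be sketched as follows, with synthetic images standing in for satellite data:

```python
import numpy as np

def gray_histogram(img, bins=16):
    """Normalized gray-level histogram feature of an 8-bit image."""
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    return h / h.sum()

def retrieve(query, database, bins=16):
    """Rank database images by Euclidean distance between histogram features."""
    q = gray_histogram(query, bins)
    dists = [np.linalg.norm(q - gray_histogram(img, bins)) for img in database]
    return np.argsort(dists)   # indices, best match first

# Synthetic stand-ins: dark, mid-gray, and bright 32x32 images.
rng = np.random.default_rng(0)
dark = rng.integers(0, 64, (32, 32))
mid = rng.integers(96, 160, (32, 32))
bright = rng.integers(192, 256, (32, 32))
order = retrieve(rng.integers(0, 64, (32, 32)), [bright, mid, dark])
```

The texture and shape branches in the abstract would produce additional feature vectors that are compared with the same distance-and-rank machinery.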
DAOR - Bridging the Gap between Community and Node Representations: Graph Emb... (Artem Lutov)
Slides of the presentation given at BigData'19, special session on Information Granulation in Data Science and Scalable Computing.
The fully automatic (i.e., without any manual tuning) graph embedding (i.e., network representation learning, unsupervised feature extraction) performed in near-linear time is presented. The resulting embeddings are interpretable, preserve both low- and high-order structural proximity of the graph nodes, are computed (i.e., learned) orders of magnitude faster, and perform competitively with the best manually tuned state-of-the-art embedding techniques evaluated on diverse graph analysis tasks.
FPGA Implementation of 2-D DCT & DWT Engines for Vision Based Tracking of Dyn... (IJERA Editor)
Real-time motion estimation for tracking is a challenging task. Several techniques can transform an image into the frequency domain, such as the DCT, DFT, and wavelet transform. Direct implementation of the 2-D DCT takes N^4 multiplications for an N x N image, which is impractical. The proposed architecture for the 2-D DCT uses look-up tables to store pre-computed vector products, completely eliminating the multiplier. This makes the architecture highly time-efficient, and the routing delay and power consumption are also reduced significantly. Another approach, 2-D discrete wavelet transform based motion estimation (DWT-ME), provides substantial improvements in quality and area. The proposed architecture uses the Haar wavelet transform for motion estimation. In this paper, we compare the performance of the discrete cosine transform and the discrete wavelet transform for implementation in motion estimation.
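For reference, the separability that hardware implementations exploit: the 2-D DCT factors into two matrix products, O(N^3) multiplies instead of the direct O(N^4) quadruple sum (the LUT architecture above then removes the remaining multipliers entirely). A NumPy sketch of the orthonormal DCT-II:

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II basis matrix C, so that Y = C @ X @ C.T."""
    n = np.arange(N)
    # C[k, m] = scale(k) * cos(pi * (2m + 1) * k / (2N))
    C = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    C[0] *= np.sqrt(1.0 / N)
    C[1:] *= np.sqrt(2.0 / N)
    return C

def dct2(X):
    """Separable 2-D DCT: rows then columns as two matrix products."""
    C = dct_matrix(len(X))
    return C @ X @ C.T

C = dct_matrix(8)
X = np.arange(64.0).reshape(8, 8)
Y = dct2(X)
```

Because C is orthonormal, the inverse transform is simply `C.T @ Y @ C`, which is what makes the row-column decomposition attractive in hardware.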
A STUDY AND ANALYSIS OF DIFFERENT EDGE DETECTION TECHNIQUES (cscpconf)
In the first study [1], a combination of K-means, the watershed segmentation method, and a Difference In Strength (DIS) map were used to perform image segmentation and edge detection. We obtained an initial segmentation based on the K-means clustering technique. Starting from this, we used two techniques: the first is the watershed technique with new merging procedures based on mean intensity value to segment the image regions and detect their boundaries; the second is an edge-strength technique that obtains accurate edge maps of our images without using the watershed method. With this technique we solved the problem of the undesirable over-segmentation produced by the watershed algorithm when applied directly to raw image data, and the edge maps we obtained have no broken lines across the entire image. In the second study, level set methods are used to implement curve/interface evolution under various forces. In the third study, the main idea is to detect region (object) boundaries and to isolate and extract individual components from a medical image. This is done using active contours to detect regions in a given image, based on techniques of curve evolution, the Mumford-Shah functional for segmentation, and level sets. We first classify our images into different intensity regions based on a Markov Random Field, then detect regions whose boundaries are not necessarily defined by gradients by minimizing an energy of the Mumford-Shah functional for segmentation; in the level set formulation, the problem becomes a mean-curvature flow that stops on the desired boundary. The stopping term does not depend on the image gradient, as it does in classical active contours; the initial level set curve can be placed anywhere in the image, and interior contours are detected automatically. The final image segmentation is one closed boundary per actual region in the image.
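The initial K-means segmentation and boundary detection steps can be sketched on gray levels as follows; the image is a synthetic two-region example, not from the studies:

```python
import numpy as np

def kmeans_1d(values, k, iters=30):
    """1-D k-means on gray levels (the initial segmentation step)."""
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels

def region_boundaries(label_img):
    """Mark pixels whose right or lower neighbor has a different label."""
    edges = np.zeros(label_img.shape, dtype=bool)
    edges[:, :-1] |= label_img[:, :-1] != label_img[:, 1:]
    edges[:-1, :] |= label_img[:-1, :] != label_img[1:, :]
    return edges

img = np.zeros((16, 16))
img[:, 8:] = 200.0   # two flat regions with one vertical boundary
labels = kmeans_1d(img.ravel(), 2).reshape(img.shape)
edges = region_boundaries(labels)
```

The merging and edge-strength refinements in the study address exactly what this naive version lacks: robustness to noise and over-segmentation on real data.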
This presentation is an analysis of the paper, "SCRDet++: Detecting Small, Cluttered and Rotated Objects via Instance-Level Feature Denoising and Rotation Loss Smoothing".
Unsupervised Building Extraction from High Resolution Satellite Images Irresp... (CSCJournals)
Extraction of geospatial data from photogrammetric sensing images becomes more and more important with advances in technology. Today, Geographic Information Systems are used in a large variety of applications in engineering, city planning, and the social sciences. Geospatial data such as roads, buildings, and rivers are the most critical feeds of a GIS database. However, extracting buildings is one of the most complex and challenging tasks, as there is a lot of inhomogeneity due to varying hierarchy: the types of buildings and the shapes of rooftops vary widely, and in some areas buildings are placed irregularly or too close to each other. For these reasons, even with high-resolution IKONOS and QuickBird satellite imagery, the quality of building extraction remains low. This paper proposes a solution to the problem of automatic and unsupervised extraction of building features, irrespective of rooftop structure, in multispectral satellite images. Instead of detecting the region of interest, the algorithm eliminates areas other than the region of interest, which extracts the rooftops completely irrespective of their shapes. Extensive tests indicate that the methodology performs well at extracting buildings in complex environments.
Validation Study of Dimensionality Reduction Impact on Breast Cancer Classifi... (ijcsit)
A fundamental problem in machine learning is identifying the most representative subset of features from which we can construct a predictive model for a classification task. This paper presents a validation study of the effect of dimensionality reduction on the classification accuracy of mammographic images. The studied dimensionality reduction methods were: locality-preserving projection (LPP), locally linear embedding (LLE), isometric mapping (ISOMAP), and spectral regression (SR). We achieved high classification rates; in some combinations the classification rate was 100%, but in most cases it is about 95%. It was also found that the classification rate increases with the size of the reduced space, and the optimal value of the space dimension is 60. We proceeded to validate the obtained results by measuring several validation indices: the Xie-Beni index, Dunn index, and alternative Dunn index. The measurement of these indices confirms that the optimal value of the reduced space dimension is d=60.
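A sketch of the reduce-then-classify experiment shape, using PCA as a stand-in for the LPP/LLE/ISOMAP/SR methods in the paper and a nearest-centroid classifier; the data and dimensions are synthetic:

```python
import numpy as np

def pca_reduce(X, d):
    """Project rows of X onto the top-d principal directions
    (a PCA stand-in for the manifold methods studied in the paper)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:d].T

def nearest_centroid_accuracy(Z, y):
    """Classify each sample by the nearest class centroid in the
    reduced space and report training accuracy."""
    classes = np.unique(y)
    cents = np.array([Z[y == c].mean(axis=0) for c in classes])
    pred = classes[np.argmin(np.linalg.norm(Z[:, None] - cents, axis=-1), axis=1)]
    return float((pred == y).mean())

# Two synthetic "mammographic feature" classes in 50 dimensions.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (40, 50)), rng.normal(1.5, 1.0, (40, 50))])
y = np.array([0] * 40 + [1] * 40)
acc = nearest_centroid_accuracy(pca_reduce(X, 5), y)
```

The paper's experiment sweeps the reduced dimension d and adds cluster-validity indices (Xie-Beni, Dunn) on top of this basic loop to justify the optimum at d=60.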
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... (UiPathCommunity)
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
UiPath Test Automation using UiPath Test Suite series, part 4 (DianaGray10)
Welcome to the UiPath Test Automation using UiPath Test Suite series, part 4. In this session, we will cover a Test Manager overview along with the SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Neuro-symbolic is not enough, we need neuro-*semantic* (Frank van Harmelen)
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... (Ramesh Iyer)
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes real work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
2. LIDAR – Light Detection and Ranging
Lidar, which stands for Light Detection and Ranging, is a remote sensing method that uses light in the form of a pulsed laser to measure ranges to the Earth. The data is collected as point cloud data by a scanner sensor mounted on a platform.
3. LIDAR components
The main components of a LIDAR system are:
• Lidar scanner
• Global Positioning System (GPS)
• Inertial Measurement Unit (IMU)
• Computer system
LIDAR-based systems can be:
• Airborne
• Ground based
• Satellite based
5. Objective
The objective of the study is to extract building footprints from LIDAR-scanned data in a complex urban environment using the Normalized Difference Tree Index (NDTI).
Accurate building footprints are difficult to obtain, yet urban planning and development tasks need accurate building footprint data to plan services in a more realistic, multi-dimensional scenario.
6. Methodology of the study
Inputs (Lidar intensity data, nDSM data (FR & LR), NDTI data) → Rule-based object-oriented classification → Output
In the study, Lidar intensity data, the normalized digital surface model (nDSM) of the first and last returns, and the normalized difference tree index (NDTI) derived from the two returns are used to extract building footprints from point cloud data using rule-based object-oriented classification.
7. Data Collection
The LIDAR data is collected using an Optech ALTM 3100 sensor with the help of the Applied Geomatics Research Group.
Data metrics:
1. Altitude – 1000 m
2. Scanning rate – 39 Hz
3. Scanning angle – ±20 degrees
4. Laser frequency – 70 Hz
5. Site location – University of Western Ontario
Data acquisition – 15 flight lines about 750 m wide, with 50% strip overlap.
8. Process Flowchart
Raw Lidar Point Cloud → Data Pre-processing → Image Segmentation → Rule-Based Classification → Post-Processing → Building Footprints
Data Pre-processing
The Digital Surface Model (DSM) represents the earth's surface including all objects on it; the Digital Terrain Model (DTM) represents the elevation of the earth's surface with all objects removed.
nDSM – the difference model between DSM and DTM, representing the height of objects above the bare surface: nDSM = DSM − DTM.
DSM and DTM rasters with 0.6 m resolution are constructed by interpolation for the first and the last return.
Intensity raster layer – created by interpolating the intensity attribute of the Lidar data.
NDTI = (nDSM(FR) − nDSM(LR)) / (nDSM(FR) + nDSM(LR)), where FR = first return, LR = last return.
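The raster derivation above can be sketched with NumPy; the function name, array arguments, and the zero-denominator handling for bare-ground cells are illustrative assumptions, not part of the original workflow:

```python
import numpy as np

def derive_rasters(dsm_fr, dtm_fr, dsm_lr, dtm_lr):
    """Compute the nDSM of each return and the NDTI.

    nDSM = DSM - DTM gives object height above the bare surface;
    NDTI = (nDSM(FR) - nDSM(LR)) / (nDSM(FR) + nDSM(LR)).
    Inputs are 2-D arrays on the same grid (0.6 m in the study).
    """
    ndsm_fr = dsm_fr - dtm_fr
    ndsm_lr = dsm_lr - dtm_lr
    denom = ndsm_fr + ndsm_lr
    # Guard against division by zero on bare-ground cells (NDTI set to 0 there).
    safe = np.where(denom == 0.0, 1.0, denom)
    ndti = np.where(denom == 0.0, 0.0, (ndsm_fr - ndsm_lr) / safe)
    return ndsm_fr, ndsm_lr, ndti
```

Since trees scatter the first return from the canopy and the last return from lower surfaces, NDTI is high over vegetation and near zero over solid roofs, which is what the later classification rules exploit.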
10. Image Segmentation
Segmentation is the method of creating image objects large enough to have meaningful geometric or spectral values, but small enough that they do not represent more than one feature class in the same classification.
Two-step approach – multiresolution segmentation & spectral difference segmentation.
Multiresolution segmentation – the method aims to grow regions within the data based on a segment scale parameter, which is a threshold for the heterogeneity of the objects. Different sets of nDSM (FR & LR) & NDTI are used as inputs, and different scale parameters are tested to arrive at the optimum result. A scale parameter of 10 is chosen in order to limit class intermixing and excessive fragmentation.
Comparison of different scale parameters: (a) 20, (b) 15, (c) 10, (d) 5.
11. Spectral Difference Segmentation
It is used to merge neighboring objects based on their mean layer intensity values. The layers used – nDSM(FR), nDSM(LR), NDTI, and intensity data – are given layer weights 1:1:2:1 (K1, K2, K3, K4).
Normalized layer weights KN_i = K_i / (K1 + K2 + K3 + K4) are calculated.
Merge expression: Σ_{i=1..4} KN_i · (K_i1 − K_i2), where K_i1 − K_i2 is the difference of the mean intensity values of the two neighboring objects in layer i.
A higher weight for the NDTI data is used to separate trees from buildings.
The loop is reiterated multiple times.
Figure: nDSM of the last return as a visual reference, and the segmentation output.
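The merge criterion above can be sketched as follows; the function and variable names are illustrative, and the merge threshold and iteration logic are not specified on the slide:

```python
import numpy as np

# Layer weights for nDSM(FR), nDSM(LR), NDTI, intensity in the 1:1:2:1
# ratio from the slide; NDTI gets the higher weight to help separate
# trees from buildings.
LAYER_WEIGHTS = np.array([1.0, 1.0, 2.0, 1.0])

def spectral_difference(means_obj1, means_obj2, weights=LAYER_WEIGHTS):
    """Weighted spectral difference between two neighbouring objects.

    means_obj1/means_obj2 hold the per-layer mean values K_i1 and K_i2.
    Weights are first normalised, KN_i = K_i / (K1 + K2 + K3 + K4),
    then the criterion sum(KN_i * (K_i1 - K_i2)) is evaluated.
    """
    kn = weights / weights.sum()
    diff = np.asarray(means_obj1, dtype=float) - np.asarray(means_obj2, dtype=float)
    return float(np.sum(kn * diff))
```

In use, neighbouring objects whose difference falls below a chosen threshold would be merged and the pass repeated until no further merges occur, matching the reiterated loop mentioned above.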
12. Rule based classification
The classification has three steps:
Assign basic building objects – this step classifies the basic buildings based on shape and the small height difference between first and last returns: NDTI <= 0.001 & nDSM(LR) >= 2.5, or nDSM(LR) >= 30, with a further check for roundness <= 0.41 & area <= 190.
Assign adjacent building objects – used to identify objects adjacent to buildings. Rel. border to building is calculated from the border length shared between two adjacent objects and is used to classify them.
Assign building edge objects – the last step identifies the building edge objects: rel. border to building >= 0.4.
At each step, if the query is true the object proceeds as a building candidate; if it is false the object is placed into the unclassified category and passed to the next step.
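The three steps can be sketched for a single segmented object as below. The attribute names are assumptions for illustration, not the original eCognition identifiers, and in the actual workflow rel. border to building is recomputed between steps as new buildings are assigned, whereas here it is taken as a fixed input:

```python
def classify_object(obj):
    """Sketch of the three-step rule set for one segmented object.

    `obj` maps illustrative attribute names (ndti, ndsm_lr, roundness,
    area, rel_border_to_building) to per-object values.
    """
    # Step 1: basic buildings - low NDTI with sufficient last-return
    # height (or very tall objects), plus a shape check.
    if (obj["ndti"] <= 0.001 and obj["ndsm_lr"] >= 2.5) or obj["ndsm_lr"] >= 30:
        if obj["roundness"] <= 0.41 and obj["area"] <= 190:
            return "building"
    # Step 2: adjacent objects sharing >= 0.6 of their border with a building.
    if obj["rel_border_to_building"] >= 0.6:
        return "building"
    # Step 3: edge objects sharing >= 0.4 of their border with a building.
    if obj["rel_border_to_building"] >= 0.4:
        return "building"
    return "unclassified"
```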
13. Rule based classification
Fig (a) – In step (1) the major buildings are classified; objects left unclassified in this step are sent to step (2). The highlighted areas are the classified buildings.
Fig (b) – In step (2), adjacent buildings are identified using the shared-boundary criterion of >= 0.6 with respect to the parent object; the buildings classified in this step are darkened further.
Fig (c) – In step (3), edge buildings sharing an even lower boundary with the parent object (>= 0.4) are filtered, with the further conditions roundness >= 2.8 and nDSM(LR) >= 6. Because light penetrates at the edges of buildings, some small buildings at the edges may be missed, so an additional check with 3.5 <= nDSM <= 5.3 and rel. border >= 0.5 is used to collect them.
The remaining objects are assigned to the unclassified class.
14. Post processing & Accuracy assessment
In post processing, a geometric check using area and a rectangular-fit parameter is used to further filter the data and arrive at the final result.
Accuracy assessment of the data is done using a confusion matrix.
A confusion matrix is a table that is often used to describe the performance of a classification model (or "classifier") on a set of test data for which the true values are known.
Kappa value – it measures the agreement between the classification and the true values, corrected for chance agreement.

Classified \ Reference data     Building    Non-Building
Building                        (d)         (c)
Non-Building                    (e)         (f)

User's accuracy – buildings: d/(d + c); non-buildings: f/(e + f)
Producer's accuracy – buildings: d/(d + e); non-buildings: f/(c + f)
With N = c + d + e + f:
Kappa = (N(d + f) − [(d + c)(d + e) + (e + f)(c + f)]) / (N² − [(d + c)(d + e) + (e + f)(c + f)])
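The overall accuracy and kappa computation can be sketched directly from the 2×2 confusion matrix cells; the function name is an assumption for illustration:

```python
def accuracy_and_kappa(d, c, e, f):
    """Overall accuracy and kappa from the 2x2 confusion matrix.

    Cell layout (rows = classified, columns = reference):
    building/building = d, building/non-building = c,
    non-building/building = e, non-building/non-building = f.
    """
    n = c + d + e + f
    observed = (d + f) / n  # overall accuracy: agreement on the diagonal
    # Chance agreement from the row and column marginals.
    expected = ((d + c) * (d + e) + (e + f) * (c + f)) / n ** 2
    kappa = (observed - expected) / (1.0 - expected)
    return observed, kappa
```

Kappa near 1 means agreement far beyond chance; the study's 0.84 falls in the range usually read as very strong agreement.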
15. Result
Overall accuracy – 94%, commission error – 6.3%, kappa – 0.84.
This means that there is a high correlation between the predicted data and the actual data.
Fig (1) – samples from the reference building footprints; Fig (2) – results from the analysis.
Summary
We have a clear path for moving beyond the traditional optical image formats.
The NDTI index has proved to be a game changer in the analysis and can be used in various ways to extract meaningful data.
Different sets of algorithms and techniques can be utilized to assess the output, and object-based classification proved to be the winner.
16. Recommendations
Machine learning techniques can be used for object classification and segmentation, allowing the scale factor to be reduced to 5, making the data more accurate without compromising the time taken. This can be done using automated feature extraction from point clouds, an advanced machine learning technique.
17. References
Congalton, R. G. 1991. "A Review of Assessing the Accuracy of Classifications of Remotely Sensed Data." Remote Sensing of Environment 37: 35–46.
Definiens. 2010. Definiens Developer 8.0.1 Reference Book. München: Definiens AG.
El-Ashmawy, N., A. Shaker, and W. Y. Yan. 2011. "Pixel vs Object-Based Image Classification Techniques for LiDAR Intensity Data." ISPRS Workshop on Laser Scanning, Calgary, August 29–31.
Frauman, E., and E. Wolff. 2005. "Segmentation of Very High Spatial Resolution Satellite Images in Urban Areas for Segment-Based Classification." Proceedings of the 3rd International Symposium Remote Sensing and Data Fusion Over Urban Areas, Tempe, AZ, March 14–16.