The presentation discusses a new method for imaging seismic data that was recently implemented on the SmartGeo cloud computing portal (https://smartgeo.crs4.it/enginframe/eiagrid/eiagrid.xml). The method is particularly suited for near-surface applications such as geotechnical engineering or environmental studies. It is shown that, instead of limiting the stacking velocity analysis to single common-midpoint (CMP) gathers, groups of neighboring CMP gathers are considered to identify entire reflection surfaces in the data. As a result, the extracted kinematic properties of the subsurface, e.g. wave-propagation velocities, are more reliable, and the final data stacking leads to a more detailed subsurface image even in the case of noisy prestack data and strong lateral velocity variations. In the second part of the presentation, the successful application of the proposed method is discussed in a case study based on an ultra-shallow seismic SH-wave data set recorded close to Teulada, Sardinia, Italy.
The use of the EIAGRID/SmartGEO platform is presented and discussed in detail (in English) through two case studies relevant to geotechnical and environmental applications. By the end, interested users should be able to use the platform autonomously through the SmartGEO portal.
Compressive sampling (CS) aims at acquiring a signal at a sampling rate below the Nyquist rate by exploiting prior knowledge that the signal is sparse or correlated in some domain. Despite the remarkable progress in the theory of CS, the sampling rate required by CS on a single image is still very high in practice. In this presentation, a non-local compressive sampling (NLCS) recovery method is proposed to further reduce the sampling rate by exploiting non-local patch correlation and local piecewise smoothness present in natural images. Two non-local sparsity measures, i.e., non-local wavelet sparsity and non-local joint sparsity, are proposed to exploit the patch correlation in NLCS. An efficient iterative algorithm is developed to solve the NLCS recovery problem, which is shown to have stable convergence behavior in experiments. The experimental results show that our NLCS significantly improves on the state of the art in image compressive sampling.
In this deck from ATPESC 2019, Yunong Shi from the University of Chicago presents: SW/HW co-design for near-term quantum computing.
"The Argonne Training Program on Extreme-Scale Computing (ATPESC) provides intensive, two weeks of training on the key skills, approaches, and tools to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future."
Watch the video: https://wp.me/p3RLHQ-lpv
Learn more: https://extremecomputingtraining.anl.gov/archive/atpesc-2019/agenda-2019/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Many great resources exist for those who wish to understand quantum computing; however, nearly all of these resources fail to cater to the novice. Many simply assume that the reader is already proficient with both linear and matrix algebra, and intrinsically understands how a quantum system should function. These resources are all well and good, and probably provide a great deal of insight to those equipped to digest them (physicists, mathematicians), but for the rest of us they appear dense and impenetrable.
How can you begin to understand something this complex in the first instance?
What is needed is an introduction to the introduction, some bridging information that illustrates these concepts in 'normal' language, and allows the reader to go from ‘Novice’ to ‘Somewhat Prepared’.
Join me to get the skinny on quantum computing and quantum information fundamentals, as seen from an OO perspective.
PhD defence public presentation, Bayesian methods for inverse problems with point clouds: applications to single-photon lidar, ENSEEIHT, Toulouse, France
OPTIMIZED RATE ALLOCATION OF HYPERSPECTRAL IMAGES IN COMPRESSED DOMAIN USING ... – Pioneer Natural Resources
This paper studies the application of bit allocation using JPEG2000 for compressing multi-dimensional remote sensing data. Past experiments have shown that the Karhunen-Loève transform (KLT) along with rate-distortion-optimal (RDO) bit allocation produces good compression performance. However, this model has the unavoidable disadvantage of paying a price in terms of implementation complexity. In this research we address this complexity problem by using the discrete wavelet transform (DWT) instead of the KLT as the decorrelator. Further, we have incorporated a mixed model (MM) to find the rate-distortion curves instead of the prior method of using experimental rate-distortion curves for RDO bit allocation. We compared our results to the traditional high-bit-rate quantizer bit allocation model based on the logarithm of variances among the bands. Our comparisons show that using the MM-RDO bit allocation method results in a lower mean squared error (MSE) compared to the traditional bit allocation scheme. Our approach also has the additional advantage of using the DWT as a computationally efficient decorrelator when compared to the KLT.
Capacitated Kinetic Clustering in Mobile Networks by Optimal Transportation T... – Chien-Chun Ni
Presented in INFOCOM 2016
http://www3.cs.stonybrook.edu/~chni/publication/optran/
--
We consider the problem of capacitated kinetic clustering in which n mobile terminals and k base stations with respective operating capacities are given. The task is to assign the mobile terminals to the base stations such that the total squared distance from each terminal to its assigned base station is minimized and the capacity constraints are satisfied. This paper focuses on the development of distributed and computationally efficient algorithms that adapt to the motion of both terminals and base stations. Suggested by optimal transportation theory, we exploit the structural property of the optimal solution, which can be represented by a power diagram on the base stations such that the total usage of nodes within each power cell equals the capacity of the corresponding base station. Using the kinetic data structure framework, we show the first analytical upper bound on the number of changes in the optimal solution, i.e., its stability. On the algorithm side, using the power diagram formulation we show that the solution can be represented in size proportional to the number of base stations and can be computed by an iterative, local algorithm. In particular, this algorithm can naturally exploit the continuity of motion and is orders of magnitude faster than existing solutions using min-cost matching and linear programming, and thus is able to handle large-scale data under mobility.
Reading group - Week 2 - Trajectory Pooled Deep-Convolutional Descriptors (TDD) – Saimunur Rahman
This presentation was prepared for the ViPr Reading Group at Multimedia University, Cyberjaya. The goal of this presentation was to make the lab members aware of recent advancements in action recognition.
An improved fading Kalman filter in the application of BDS dynamic positioning – IJRES Journal
Aiming at the poor dynamic performance and low navigation precision of the traditional fading Kalman filter in BDS dynamic positioning, an improved fading Kalman filter based on a fading factor vector is proposed. The fading factor is extended to a fading factor vector, and each element of the vector corresponds to one state component. Based on the difference between the actual observed quantity and the predicted one, the value of the vector is adjusted automatically. The memory length of each channel is changed in real time according to the dynamic property of the corresponding state component. Actual BDS observation data is used to test the algorithm. The experimental results show that, compared with the traditional fading Kalman filter and the method of the third reference, the positioning precision of the algorithm is improved by 46.3% and 23.6%, respectively.
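The paper's exact algorithm is not reproduced above, but the core idea can be illustrated with a minimal sketch: one scalar Kalman filter per state component, each carrying its own fading factor that inflates the predicted covariance when the innovation grows larger than its predicted spread. All names, the identity state model, and the innovation-based fading rule here are illustrative assumptions, not the authors' method.

```python
def fading_kf_step(x, P, z, q, r, lam_min=1.0):
    """One scalar Kalman step with an adaptive fading factor (illustrative).

    The fading factor lam >= 1 inflates the predicted covariance when the
    innovation looks too large, shortening the filter's memory for that
    state component.
    """
    # Prediction (identity state transition in this sketch)
    x_pred, P_pred = x, P + q
    innov = z - x_pred
    S = P_pred + r                          # predicted innovation variance
    # Fading factor: grow covariance if the innovation exceeds its spread
    lam = max(lam_min, (innov * innov) / S)
    P_pred = lam * P_pred
    # Standard measurement update
    K = P_pred / (P_pred + r)
    x_new = x_pred + K * innov
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

def fading_kf_vector(zs, q=0.01, r=0.25):
    """Run independent scalar fading filters, one per state component,
    so each component has its own element of the fading factor vector."""
    n = len(zs[0])
    xs, Ps = list(zs[0]), [1.0] * n
    for z in zs[1:]:
        for i in range(n):
            xs[i], Ps[i] = fading_kf_step(xs[i], Ps[i], z[i], q, r)
    return xs
```

Feeding measurements that oscillate around [10, -5] yields estimates near those values; a component with a persistent mismatch would see its fading factor rise and its memory shorten.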
Robust Orthonormal Subspace Learning: Efficient Recovery of Corrupted Low-ran... – shuxianbiao
Low-rank matrix recovery from a corrupted observation has many applications in computer vision. Conventional methods address this problem by iterating between nuclear norm minimization and sparsity minimization. However, iterative nuclear norm minimization is computationally prohibitive for large-scale data (e.g., video) analysis. In this paper, we propose a Robust Orthonormal Subspace Learning (ROSL) method to achieve efficient low-rank recovery. Our intuition is a novel rank measure on the low-rank matrix that imposes group sparsity on its coefficients under an orthonormal subspace. We present an efficient sparse coding algorithm to minimize this rank measure and recover the low-rank matrix at quadratic complexity in the matrix size. We give a theoretical proof that this rank measure is lower bounded by the nuclear norm and has the same global minimum as the latter. To further accelerate ROSL to linear complexity, we also describe a faster version (ROSL+) empowered by random sampling. Our extensive experiments demonstrate that both ROSL and ROSL+ provide superior efficiency against the state-of-the-art methods at the same level of recovery accuracy.
A Novel Methodology for Designing Linear Phase IIR Filters – IDES Editor
This paper presents a novel technique for designing an Infinite Impulse Response (IIR) filter with a linear phase response. The design of an IIR filter is always a challenging task because a linear phase response is not realizable for this class of filters. The conventional techniques involve a large number of samples and a higher-order filter for better approximation, resulting in complex hardware for the implementation. In addition, extensive computational resources are required for obtaining the inverses of huge matrices. We propose a technique that uses frequency-domain sampling along with the linear programming concept to achieve a filter design giving the best approximation to a linear phase response. The proposed method can give the closest response with fewer samples (only 10) and is computationally simple. We present the filter design along with its formulation and solving methodology. Numerical results are used to substantiate the efficiency of the proposed method.
Elevating Tactical DDD Patterns Through Object Calisthenics – Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... – James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Securing your Kubernetes cluster: a step-by-step guide to success! – KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Epistemic Interaction - tuning interfaces to provide information for AI support – Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... – BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... – UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... – Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes much work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Generating a custom Ruby SDK for your web service or Rails API using Smithy – g2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
UiPath Test Automation using UiPath Test Suite series, part 3 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
State of ICS and IoT Cyber Threat Landscape Report 2024 preview – Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
DevOps and Testing slides at DASA Connect – Kari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We finished with a lovely workshop in which the participants tried to find different ways to think about quality and testing in the different parts of the DevOps infinity loop.
GraphRAG is All You need? LLM & Knowledge Graph – Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Neuro-symbolic is not enough, we need neuro-*semantic* – Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Recursive Bilateral Filtering
1. Introduction
• The bilateral filter is a robust edge-preserving filter introduced by Tomasi and Manduchi.
• The bilateral filter can be implemented recursively as long as the spatial filter kernel can be implemented recursively and the range filter kernel can be decomposed into a recursive product.
3. Recursive Filtering:
• Let x denote the one-dimensional (1D) input signal of a causal recursive system of order n, and y denote the output; then
  y_i = Σ_{l=0..n−1} a_l·x_{i−l} + Σ_{k=1..n} b_k·y_{i−k}
• 1st-order recursive filtering:
  y_i = a_0·x_i + b_1·y_{i−1}    Ex: y_i = (1−a)·x_i + a·y_{i−1}
• 2nd-order recursive filtering:
  y_i = a_0·x_i + a_1·x_{i−1} + b_1·y_{i−1} + b_2·y_{i−2}
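The first-order recursion above is easy to sketch directly; the function name below is illustrative, and y_{−1} = 0 is assumed:

```python
def recursive_filter_1st(x, a):
    # First-order recursive (IIR) filter: y_i = (1 - a) * x_i + a * y_{i-1},
    # assuming y_{-1} = 0. One multiply-accumulate pair per sample,
    # regardless of how wide the effective smoothing kernel is.
    y, prev = [], 0.0
    for xi in x:
        prev = (1.0 - a) * xi + a * prev
        y.append(prev)
    return y
```

On a constant unit input the output rises geometrically toward 1, which shows the infinite (exponentially decaying) impulse response produced by the single feedback coefficient.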
4. This recursive system is then characterized by the following transfer function
  H^a(z) = (Σ_{l=0..n−1} a_l·z^{−l}) / (1 − Σ_{k=1..n} b_k·z^{−k}),
where {h^a_k} denote the impulse response of the recursive system whose Z-transform is H^a(z).
5. • Deriche proposed recursively implementing the Gaussian and its derivatives:
  G_σ(i) = (1 / (√(2π)·σ)) · exp(−i² / (2σ²))
• 2nd-order recursive implementation:
  Causal:      y_i = a_0·x_i + a_1·x_{i−1} + b_1·y_{i−1} + b_2·y_{i−2}
  Anti-causal: y^a_i = a_2·x_{i+1} + a_3·x_{i+2} + b_1·y^a_{i+1} + b_2·y^a_{i+2}
8. Spatial Parameter
• The spatial kernel is the Gaussian G_σ(i) = (1 / (√(2π)·σ)) · exp(−i² / (2σ²)).
• Small σ_s: limited smoothing of the input; large σ_s: strong smoothing.
9. How to set σ_s:
• Depends on the application.
• Common strategy for σ_s: proportional to image size
  – e.g. 2% of the image diagonal
  – property: independent of image resolution
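The 2%-of-the-diagonal rule of thumb is a one-line computation; the helper name and default fraction below are illustrative:

```python
import math

def sigma_s_from_image_size(width, height, fraction=0.02):
    # sigma_s proportional to image size: a fraction of the image diagonal,
    # so the relative amount of smoothing does not depend on resolution.
    return fraction * math.hypot(width, height)
```

For a 1920x1080 image this gives roughly 44 pixels, and doubling the resolution doubles sigma_s, keeping the smoothing scale-invariant.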
10. • Bilateral filtering:
  y_i = (Σ_k S_{k,i}·R_{k,i}·x_k) / (Σ_k S_{k,i}·R_{k,i}),
where R_{k,i} = R(x_k, x_i) is the range filter kernel for measuring the range similarity of pixels k and i, and S_{k,i} = S(k, i) is the spatial filter kernel for measuring their spatial similarity.
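As a reference point, this definition can be sketched directly with Gaussian spatial and range kernels; this is the brute-force O(n²) 1D form, with illustrative names and parameters:

```python
import math

def bilateral_1d(x, sigma_s, sigma_r):
    # Direct bilateral filter: each output is a normalized sum of inputs,
    # weighted by spatial closeness S and range (intensity) similarity R.
    n = len(x)
    out = []
    for i in range(n):
        num = den = 0.0
        for k in range(n):
            S = math.exp(-((k - i) ** 2) / (2.0 * sigma_s ** 2))
            R = math.exp(-((x[k] - x[i]) ** 2) / (2.0 * sigma_r ** 2))
            num += S * R * x[k]
            den += S * R
        out.append(num / den)
    return out
```

On a step edge with a small sigma_r, pixels across the edge get negligible range weight, so the edge survives the smoothing. The recursive formulation discussed in these slides exists precisely to avoid this quadratic cost.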
11. Modified range kernel
The proposed method measures the range distance by accumulating the color difference between every two neighboring pixels on the path between k and i.
12. The new range filter kernel R_{k,i} measures the range distance between pixels k and i by accumulating the range distance between every two neighboring pixels on the path between k and i. The range filtering kernel is often Gaussian,
  R_{j,j+1} = exp(−|x_j − x_{j+1}|² / (2σ_R²)),
where |x_j − x_{j+1}|² denotes the range cost of traveling from pixel j to j + 1 (or from j + 1 to j) and R_{j,j} = 1; then
  R_{k,i} = Π_{j=k..i−1} R_{j,j+1}.
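The accumulated kernel can be sketched as a product of neighbor-to-neighbor Gaussian weights along the path (function name illustrative):

```python
import math

def path_range_kernel(x, k, i, sigma_r):
    # R_{k,i}: product of exp(-|x_j - x_{j+1}|^2 / (2 sigma_r^2)) for each
    # neighboring pair on the path between pixels k and i (R_{j,j} = 1).
    lo, hi = (k, i) if k <= i else (i, k)
    R = 1.0
    for j in range(lo, hi):
        R *= math.exp(-((x[j] - x[j + 1]) ** 2) / (2.0 * sigma_r ** 2))
    return R
```

Over a flat region every factor is 1 so R stays 1, while a single large neighbor difference anywhere on the path collapses the whole product: once the path crosses an edge, the weight stays near zero no matter how similar the endpoint intensities are.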
13. Using the new range filtering kernel, a recursive implementation of the bilateral filter can be obtained with a small modification of the coefficients (a_l and b_k) of the recursive system defined by the spatial filter kernel at each pixel location, where n ≥ 1.
14. • The output of this modified recursive system is then
  y_i = Σ_{l=0..n−1} (R_{i,i−l}·a_l)·x_{i−l} + Σ_{k=1..n} (R_{i,i−k}·b_k)·y_{i−k},
• with the initial condition that y_0 = a_0·x_0, and x_i = 0 when i < 0. Apparently, this is a bilateral filter where R_{i,k} is the range filter kernel and S_{i,k} = Σ_{m=0..n−1} λ^{i−m−k}·a_m is the spatial filter kernel.
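A first-order (n = 1) sketch of this modified recursion follows. The normalization by a second pass over an all-ones signal is a practical assumption added here, not something stated on the slide, and all names are illustrative:

```python
import math

def recursive_bilateral_1d(x, lam, sigma_r):
    # n = 1 sketch: y_i = a0*x_i + (R_{i,i-1} * b1) * y_{i-1},
    # with a0 = 1 - lam, b1 = lam, and initial condition y_0 = a0 * x_0.
    a0, b1 = 1.0 - lam, lam
    # Neighbor-to-neighbor range weights R_{i,i-1}
    r = [1.0] + [math.exp(-((x[i] - x[i - 1]) ** 2) / (2.0 * sigma_r ** 2))
                 for i in range(1, len(x))]

    def causal_pass(sig):
        out, prev = [], 0.0
        for i, v in enumerate(sig):
            prev = (a0 * v + r[i] * b1 * prev) if i else a0 * v
            out.append(prev)
        return out

    num = causal_pass(x)
    den = causal_pass([1.0] * len(x))  # normalization pass (assumption)
    return [n_ / d_ for n_, d_ in zip(num, den)]
```

The range weight r multiplies the feedback term, so the recursion is cut at edges and restarts cleanly on the other side. This sketch is causal only (it smooths from the left); a symmetric result would add an anti-causal pass, as in the Deriche scheme.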
15. For any bilateral filter containing the new range filter kernel and any spatial filter kernel that can be recursively implemented, an exact recursive implementation can be obtained by simply altering the coefficients of the recursive system defined by the spatial filter kernel at each pixel location.
• Recursive implementation of the spatial filter:
  y_i = Σ_{l=0..n−1} a_l·x_{i−l} + Σ_{k=1..n} b_k·y_{i−k}
• Recursive bilateral filter:
  y_i = Σ_{l=0..n−1} (R_{i,i−l}·a_l)·x_{i−l} + Σ_{k=1..n} (R_{i,i−k}·b_k)·y_{i−k}
16. Complexity Analysis:
• Recursive implementation of the spatial filter,
  y_i = Σ_{l=0..n−1} a_l·x_{i−l} + Σ_{k=1..n} b_k·y_{i−k}:
2n multiplication operations and 2n−1 addition and subtraction operations are required.
• Recursive bilateral filter,
  y_i = Σ_{l=0..n−1} (R_{i,i−l}·a_l)·x_{i−l} + Σ_{k=1..n} (R_{i,i−k}·b_k)·y_{i−k}:
the new range kernel can be computed recursively,
  R_{i,i−k} = R_{i,i−1} · R_{(i−1),(i−k)}.
17. • Only 3n−2 additional multiplication operations and n operations for measuring the range distance between two neighboring pixels are needed.
• The recursive implementation method is independent of kernel size and only depends on the number of pixels in an image.
18. 2D Recursive Bilateral Filtering:
• Performing the proposed 1D recursive bilateral filter both horizontally and vertically extends the 1D filter to 2D.
• When the horizontal pass is performed first, the vertical pass is applied to the result produced by the horizontal one (and vice versa).
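The two-pass scheme can be sketched by applying a first-order 1D recursive bilateral filter (with a practical normalization pass; all names and parameters are illustrative assumptions) row-wise and then column-wise:

```python
import math

def rbf_1d(x, lam, sigma_r):
    # First-order 1D recursive bilateral sketch with a normalization pass.
    a0, b1 = 1.0 - lam, lam
    w = [1.0] + [math.exp(-((x[i] - x[i - 1]) ** 2) / (2.0 * sigma_r ** 2))
                 for i in range(1, len(x))]

    def run(sig):
        out, prev = [], 0.0
        for i, v in enumerate(sig):
            prev = (a0 * v + w[i] * b1 * prev) if i else a0 * v
            out.append(prev)
        return out

    num, den = run(x), run([1.0] * len(x))
    return [a / b for a, b in zip(num, den)]

def rbf_2d(img, lam, sigma_r):
    # Horizontal pass on each row, then vertical pass on each column
    # of the horizontally filtered result.
    h = [rbf_1d(row, lam, sigma_r) for row in img]
    cols = [rbf_1d(list(col), lam, sigma_r) for col in zip(*h)]
    return [list(row) for row in zip(*cols)]
```

A flat image passes through unchanged, which is a quick sanity check that the normalization is consistent across both passes.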