PhD defence public presentation, Bayesian methods for inverse problems with point clouds: applications to single-photon lidar, ENSEEIHT, Toulouse, France
Compressive sampling (CS) aims to acquire a signal at a sampling rate below the Nyquist rate by exploiting prior knowledge that the signal is sparse or correlated in some domain. Despite remarkable progress in the theory of CS, the sampling rate required by CS on a single image is still very high in practice. In this presentation, a non-local compressive sampling (NLCS) recovery method is proposed to further reduce the sampling rate by exploiting the non-local patch correlation and local piecewise smoothness present in natural images. Two non-local sparsity measures, non-local wavelet sparsity and non-local joint sparsity, are proposed to exploit the patch correlation in NLCS. An efficient iterative algorithm is developed to solve the NLCS recovery problem and is shown to have stable convergence behavior in experiments. Experimental results show that NLCS significantly improves the state of the art in image compressive sampling.
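The non-local grouping that NLCS-style recovery relies on can be sketched with plain NumPy. The function below is an illustrative assumption, not the authors' implementation: it collects the k patches most similar to a reference patch within a local search window, the raw material on which a non-local wavelet or joint sparsity measure would then be imposed.

```python
import numpy as np

def group_similar_patches(image, ref_yx, patch=8, search=20, k=10):
    """Collect the k patches most similar to the reference patch
    (Euclidean distance) inside a local search window -- the
    non-local grouping step that NLCS-style recovery builds on."""
    H, W = image.shape
    ry, rx = ref_yx
    ref = image[ry:ry + patch, rx:rx + patch]
    candidates = []
    y0, y1 = max(0, ry - search), min(H - patch, ry + search)
    x0, x1 = max(0, rx - search), min(W - patch, rx + search)
    for y in range(y0, y1 + 1):
        for x in range(x0, x1 + 1):
            cand = image[y:y + patch, x:x + patch]
            d = np.sum((cand - ref) ** 2)
            candidates.append((d, y, x))
    candidates.sort(key=lambda t: t[0])
    return [(y, x) for _, y, x in candidates[:k]]
```

The grouped patches would then be stacked and jointly thresholded in a transform domain; that step depends on the chosen sparsity measure and is omitted here.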
Nonlinear Transformation Based Detection And Directional Mean Filter to Remo... - IJMER
In this paper, a novel two-stage algorithm for the removal of random-valued impulse noise from images is presented. In the first stage, noisy pixels are detected using an exponential nonlinear function; the transformation widens the gap between noisy and noise-free candidates, which leads to efficient detection. In the second stage, the differences between pixels along the four main directions are calculated, and each noisy pixel value is replaced with the mean of the pixels lying in the direction of minimum difference. Experimental results show that the proposed method is superior to conventional methods in peak signal-to-noise ratio.
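The second stage described above can be sketched as follows. This is a minimal illustration of the directional rule, not the authors' code; the four direction pairs and the 3x3 window are assumptions consistent with the abstract.

```python
import numpy as np

# Neighbour offsets for the four main directions through a 3x3 window:
# horizontal, vertical, and the two diagonals.
DIRECTIONS = [((0, -1), (0, 1)),
              ((-1, 0), (1, 0)),
              ((-1, -1), (1, 1)),
              ((-1, 1), (1, -1))]

def directional_mean(image, y, x):
    """Replace pixel (y, x) with the mean of the two neighbours lying
    along the direction of minimum absolute intensity difference."""
    best_pair, best_diff = None, np.inf
    for (dy1, dx1), (dy2, dx2) in DIRECTIONS:
        a = float(image[y + dy1, x + dx1])
        b = float(image[y + dy2, x + dx2])
        if abs(a - b) < best_diff:
            best_diff, best_pair = abs(a - b), (a, b)
    return 0.5 * (best_pair[0] + best_pair[1])
```

In the full algorithm this replacement is applied only at pixels flagged as noisy by the first (detection) stage.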
DTAM: Dense Tracking and Mapping in Real-Time, Robot Vision Group - Lihang Li
These are the slides on DTAM from my group-meeting report; I hope they help anyone who wants to implement DTAM and needs to understand it deeply.
IJRET-V1I1P2 - A Survey Paper On Single Image and Video Dehazing Methods - ISAR Publications
Most computer applications use digital images, and digital image processing plays an important role in the analysis and interpretation of data in digital form. Images taken in foggy weather often suffer from poor visibility and clarity. After studying several fast dehazing methods, such as Tan's technique, Fattal's technique, and He et al.'s technique, the Dark Channel Prior (DCP) proposed by He et al. emerges as the most substantive technique for dehazing. This survey studies the various existing methods used for dehazing, such as polarization, the dark channel prior, and depth-map-based methods.
This paper analyses different haze removal methods. Haze causes trouble for many computer graphics/vision applications, as it reduces the visibility of the scene. Airlight and attenuation are the two basic phenomena of haze: airlight enhances the whiteness in the scene, while attenuation reduces the contrast. Haze removal techniques recover the colour and contrast of the scene, and many applications, such as object detection, surveillance, and consumer electronics, apply them. This paper focuses on methods for effectively eliminating haze from digital images and also indicates the demerits of current techniques.
The single image dehazing based on efficient transmission estimation - AVVENIRE TECHNOLOGIES
We propose a novel haze imaging model for single image haze removal. The model is formulated using the dark channel prior (DCP), scene radiance, intensity, atmospheric light, and the transmission medium. The dark channel prior is based on statistics of outdoor haze-free images: in most local regions that do not cover the sky, some pixels (called dark pixels) very often have very low intensity in at least one colour (RGB) channel. In hazy images, the intensity of these dark pixels in that channel is contributed mainly by the airlight, so these dark pixels can directly provide an accurate estimate of the haze transmission. Combining the haze imaging model with an interpolation method, we can recover a high-quality haze-free image and produce a good depth map.
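The dark-channel and transmission estimates the abstract refers to can be sketched in NumPy. This is an illustrative implementation under common DCP assumptions; the 15x15 patch and the weighting factor omega = 0.95 are conventional choices, not taken from this work.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over RGB, then a minimum filter over a
    patch x patch neighbourhood (the 'dark channel')."""
    mins = img.min(axis=2)
    H, W = mins.shape
    r = patch // 2
    padded = np.pad(mins, r, mode='edge')
    out = np.empty_like(mins)
    for y in range(H):
        for x in range(W):
            out[y, x] = padded[y:y + patch, x:x + patch].min()
    return out

def estimate_transmission(img, A, omega=0.95, patch=15):
    """t(x) = 1 - omega * dark_channel(I / A): the DCP transmission
    estimate, given an atmospheric light estimate A."""
    return 1.0 - omega * dark_channel(img / A, patch)
```

The coarse transmission map produced this way is then refined (here, by the interpolation method the abstract mentions) before inverting the haze imaging model.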
Image reconstruction in CT is mostly a mathematical process; however, this presentation tries to explain the complicated process of image reconstruction in a visual way, focusing mainly on filtered back-projection, iterative reconstruction, and AI-based image reconstruction.
Single Image Fog Removal Based on Fusion Strategy - csandit
Images of outdoor scenes are degraded by absorption and scattering caused by suspended particles and water droplets in the atmosphere. The light travelling from a scene towards the camera is attenuated by fog and blended with the airlight, which adds whiteness to the scene. Fog removal is highly desired in computer vision applications, since removing fog can significantly increase the visibility of the scene and is more visually pleasing. In this paper, we propose a method that can handle both homogeneous and heterogeneous fog and has been tested on several types of synthetic and real images. We formulate the restoration problem as a fusion strategy that combines two images derived from a single foggy image: one derived using a contrast-based method, the other using a statistics-based approach. These derived images are then weighted by a specific weight map to restore the image. We have performed a qualitative and quantitative evaluation on 60 images, using mean squared error and peak signal-to-noise ratio as the performance metrics to compare our technique with state-of-the-art algorithms. The proposed technique is simple and shows comparable or even slightly better results than the state-of-the-art algorithms used for defogging a single image.
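The two performance metrics used in the evaluation can be computed as follows; a straightforward sketch, with a peak value of 255 assumed for 8-bit images.

```python
import numpy as np

def mse(ref, restored):
    """Mean squared error between a reference and a restored image."""
    return float(np.mean((ref.astype(float) - restored.astype(float)) ** 2))

def psnr(ref, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to ref."""
    m = mse(ref, restored)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```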
Noise Removal in SAR Images using Orthonormal Ridgelet Transform - IJERA Editor
Reducing speckle noise in digital and satellite images is a challenging task for image processing applications, and many algorithms have previously been proposed to de-speckle such images. In this article we present experimental results on de-speckling Synthetic Aperture Radar (SAR) images. SAR has wide applications in remote sensing and in mapping planetary surfaces, and can also be operated as "inverse SAR" by observing a moving target over a substantial time with a stationary antenna; denoising SAR images is therefore an essential task for viewing the information they contain. We introduce a transformation technique called the "ridgelet", an extension of the wavelet: ridgelet analysis is performed analogously to wavelet analysis in the Radon domain, as it translates singularities along lines into point singularities at different frequencies. Simulation results show that the proposed work is more reliable than other de-speckling processes, and the quality of the de-speckled image is measured in terms of peak signal-to-noise ratio and mean squared error.
CSTalks - Object detection and tracking - 25th May - cstalks
Object detection is a fundamental step in most video analysis applications, and there are many research challenges in automatic object detection, depending on the scenario. The most prevalent application of object detection is multimedia surveillance. In this talk we will discuss the common problems of object detection in surveillance video, and then the Gaussian Mixture Model (GMM) based object detection method. While object detection is the basic step of video analysis, higher-level semantic interpretation of the scene requires trajectory information, and most suspicious-event detection methods use tracking as the basic building block. In the second part of the talk, we will discuss a particle-filter-based method of object tracking. To summarize, the aim of the talk is two-fold: (1) discuss common problems in object detection and tracking, and (2) give hands-on experience of how to use the classical methods of GMM and particle filtering in problem solving.
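The per-pixel idea behind GMM background subtraction can be illustrated with a single running Gaussian per pixel; this is a deliberate simplification of the mixture model, and the learning rate alpha and threshold k below are illustrative assumptions, not values from the talk.

```python
import numpy as np

class RunningGaussianBackground:
    """Per-pixel single-Gaussian background model -- a simplified
    stand-in for the full GMM: each pixel keeps a running mean and
    variance, and is flagged foreground when the new value falls
    outside k standard deviations of its model."""
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full(first_frame.shape, 15.0 ** 2)
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        frame = frame.astype(float)
        d = frame - self.mean
        fg = d ** 2 > (self.k ** 2) * self.var
        # Update the model only where the pixel matched the background.
        bg = ~fg
        self.mean[bg] += self.alpha * d[bg]
        self.var[bg] += self.alpha * (d[bg] ** 2 - self.var[bg])
        return fg
```

A full GMM keeps several such Gaussians per pixel with mixing weights, which handles multi-modal backgrounds (e.g. waving trees) that a single Gaussian cannot.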
Accelerated Joint Image Despeckling Algorithm in the Wavelet and Spatial Domains - CSCJournals
Noise is one of the most widespread problems in nearly all imaging applications. In spite of the sophistication of recently proposed methods, most denoising algorithms have not yet attained a desirable level of applicability. This paper proposes a two-stage algorithm for speckle noise reduction jointly in the wavelet and spatial domains. In the first stage, the optimal parameter value of the spatial speckle reduction filter is estimated, based on edge pixel statistics and the noise variance. The optimized filter is then used in the second stage to additionally smooth the approximation image of the wavelet sub-band. A complexity reduction algorithm for wavelet decomposition is also proposed. The results are highly encouraging in terms of image quality, which paves the way towards using the proposed algorithm to enhance the performance of the Block Matching and 3D Filtering (BM3D) algorithm on multiplicative speckle noise.
Evaluating effectiveness of radiometric correction for optical satellite imag... - Dang Le
One of our published research papers, presented at the 31st ACRS in Hanoi.
It has been used in our project on processing optical satellite imagery to detect environmental pollution.
Two Dimensional Image Reconstruction Algorithms - mastersrihari
The Convolution Back-Projection (CBP) algorithm was used for image reconstruction. Performance was compared by implementing the algorithm with the Ram-Lak filter, the Shepp-Logan filter, and with no filter.
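The filtering step that distinguishes the Ram-Lak case can be sketched in the frequency domain. This is an illustrative sketch of ramp filtering of a sinogram's projection rows, not the code used in the slides.

```python
import numpy as np

def ram_lak_filter(sinogram):
    """Apply the Ram-Lak (ramp) filter to each projection row in the
    frequency domain -- the filtering step of filtered back-projection.
    sinogram: (n_angles, n_detectors) array of projections."""
    n = sinogram.shape[1]
    freqs = np.fft.fftfreq(n)
    ramp = np.abs(freqs)            # |f| response of the Ram-Lak filter
    spectrum = np.fft.fft(sinogram, axis=1)
    return np.real(np.fft.ifft(spectrum * ramp, axis=1))
```

The Shepp-Logan variant multiplies this ramp by a sinc window to suppress high-frequency noise; "no filter" back-projects the raw sinogram, producing the characteristic blur.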
Super resolution in deep learning era - Jaejun Yoo
Abstract (Eng/Kor):
Image restoration (IR) is one of the fundamental problems, which includes denoising, deblurring, super-resolution, etc. Among those, in today's talk I will focus on the super-resolution task. There are two main streams in super-resolution studies: traditional model-based optimization and discriminative learning methods. I will present the pros and cons of both methods and their recent developments in the research field. Finally, I will provide a mathematical view that explains both methods in a single holistic framework, achieving the best of both worlds. The last slide summarizes the remaining problems that are yet to be solved in the field.
Image restoration (IR) is one of the fundamental problems treated as highly important in low-level vision, covering a variety of image processing problems such as denoising, deblurring, and super-resolution. In today's talk, I will focus on the super-resolution problem within image restoration. For the traditional model-based optimization approach and the approach that applies deep learning, I will introduce the pros and cons of each and the recent flow of research developments. Finally, I will present a unified viewpoint that connects the two, review related studies, and then summarize the problems that still remain open in the super-resolution field.
Nucleation and avalanches in films with labyrinthine magnetic domains - Andrea Benassi
Experimental investigations of the scaling behavior of Barkhausen avalanches in out-of-plane ferromagnetic films yield widely different results for the values of the critical exponents despite similar labyrinthine domain structures, suggesting that universality may not hold for this class of materials. Analyzing a phase-field model for magnetic reversal, we show that avalanche scaling is bounded by characteristic length scales arising from the competition between dipolar forces and exchange interactions. We compare our results with the experiments and find good qualitative and quantitative agreement, reconciling apparent contradictions. Finally, we make some predictions, amenable to experimental verification, on the dependence of the avalanche behavior on film thickness and disorder.
Andrey V. Savchenko - Sequential Hierarchical Image Recognition based on the ... - AIST
Andrey V. Savchenko (National Research University Higher School of Economics), Vladimir Milov (N. Novgorod State Technical University), Natalya Belova (NRU HSE, Moscow) - Sequential Hierarchical Image Recognition based on the Pyramid Histograms of Oriented Gradients with Small Samples
AIST Conference 2015 http://aistconf.org
Investigation of repeated blasts at Aitik mine using waveform cross correlation - Ivan Kitov
We present results of signal detection from repeated events at the Aitik and Kiruna mines in Sweden based on waveform cross correlation. Several advanced methods based on tensor Singular Value Decomposition are applied to waveforms measured at the seismic array ARCES, which consists of three-component sensors.
We investigate the use of stochastic parametrization to account for model errors due to sub-grid scales in data assimilation of chaotic systems. Using data from fine simulations of the system, the stochastic parametrization leads to a non-Markovian model that captures the key statistical and dynamical properties of the full system. The non-Markovian model can then be used in data assimilation algorithms to improve the performance of state estimation and prediction. Tests on the two-layer Lorenz 96 model show that such a non-Markovian stochastic parametrization approach improves data assimilation, and it outperforms the techniques of localization and inflation in the ensemble Kalman filter with perturbed observations.
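The ensemble Kalman filter with perturbed observations mentioned above has a compact analysis step. The sketch below is a generic textbook version, not tied to the two-layer Lorenz 96 setup of the study.

```python
import numpy as np

def enkf_update(ensemble, y, H, R, rng):
    """Ensemble Kalman filter analysis step with perturbed observations.
    ensemble: (n, N) array of N state members; y: observation vector;
    H: linear observation operator; R: observation error covariance."""
    n, N = ensemble.shape
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    HX = H @ ensemble
    HA = HX - HX.mean(axis=1, keepdims=True)
    # Kalman gain estimated from the ensemble sample covariances.
    Pxy = X @ HA.T / (N - 1)
    Pyy = HA @ HA.T / (N - 1) + R
    K = Pxy @ np.linalg.inv(Pyy)
    # Each member assimilates its own perturbed copy of the observation,
    # which keeps the analysis ensemble spread statistically consistent.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return ensemble + K @ (Y - HX)
```

Localization and inflation, which the abstract compares against, are modifications applied to the sample covariances in this same update.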
Professional air quality monitoring systems provide immediate, on-site data for analysis, compliance, and decision-making.
They monitor common gases, weather parameters, and particulates.
THE IMPORTANCE OF MARTIAN ATMOSPHERE SAMPLE RETURN - Sérgio Sacani
The return of a sample of near-surface atmosphere from Mars would facilitate answers to several first-order science questions surrounding the formation and evolution of the planet. One of the important aspects of terrestrial planet formation in general is the role that primary atmospheres played in influencing the chemistry and structure of the planets and their antecedents. Studies of the martian atmosphere can be used to investigate the role of a primary atmosphere in its history. Atmosphere samples would also inform our understanding of the near-surface chemistry of the planet, and ultimately the prospects for life. High-precision isotopic analyses of constituent gases are needed to address these questions, requiring that the analyses are made on returned samples rather than in situ.
This PDF is about schizophrenia.
For more details, visit the YouTube channel @SELF-EXPLANATORY:
https://www.youtube.com/channel/UCAiarMZDNhe1A3Rnpr_WkzA/videos
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a... - Ana Luísa Pinho
Functional Magnetic Resonance Imaging (fMRI) provides a means to characterize brain activations in response to behavior. However, cognitive neuroscience has been limited to group-level effects referring to the performance of specific tasks. To obtain the functional profile of elementary cognitive mechanisms, the combination of brain responses to many tasks is required. Yet, to date, both structural atlases and parcellation-based activations do not fully account for cognitive function and still present several limitations. Further, they do not adapt overall to individual characteristics. In this talk, I will give an account of deep-behavioral phenotyping strategies, namely data-driven methods in large task-fMRI datasets, to optimize functional brain-data collection and improve inference of effects of interest related to mental processes. Key to this approach is the employment of fast multi-functional paradigms rich in features that can be well parametrized and, consequently, facilitate the creation of psycho-physiological constructs to be modelled with imaging data. Particular emphasis will be given to music stimuli when studying high-order cognitive mechanisms, due to their ecological nature and their capacity to enable complex behavior compounded by discrete entities. I will also discuss how deep-behavioral phenotyping and individualized models applied to neuroimaging data can better account for the subject-specific organization of domain-general cognitive systems in the human brain. Finally, the accumulation of functional brain signatures brings the possibility to clarify relationships among tasks and create a univocal link between brain systems and mental functions through: (1) the development of ontologies proposing an organization of cognitive processes; and (2) brain-network taxonomies describing functional specialization.
To this end, tools to improve commensurability in cognitive science are necessary, such as public repositories, ontology-based platforms and automated meta-analysis tools. I will thus discuss some brain-atlasing resources currently under development, and their applicability in cognitive as well as clinical neuroscience.
Multi-source connectivity as the driver of solar wind variability in the heli... - Sérgio Sacani
The ambient solar wind that fills the heliosphere originates from multiple sources in the solar corona and is highly structured. It is often described as high-speed, relatively homogeneous plasma streams from coronal holes and slow-speed, highly variable streams whose source regions are under debate. A key goal of ESA/NASA's Solar Orbiter mission is to identify solar wind sources and understand what drives the complexity seen in the heliosphere. By combining magnetic field modelling and spectroscopic techniques with high-resolution observations and measurements, we show that the solar wind variability detected in situ by Solar Orbiter in March 2022 is driven by spatio-temporal changes in the magnetic connectivity to multiple sources in the solar atmosphere. The magnetic field footpoints connected to the spacecraft moved from the boundaries of a coronal hole to one active region (12961) and then across to another region (12957). This is reflected in the in situ measurements, which show the transition from fast to highly Alfvénic and then to slow solar wind that is disrupted by the arrival of a coronal mass ejection. Our results describe solar wind variability at 0.5 au but are applicable to near-Earth observatories.
Introduction:
RNA interference (RNAi) or Post-Transcriptional Gene Silencing (PTGS) is an important biological process for modulating eukaryotic gene expression.
It is a highly conserved process of post-transcriptional gene silencing in which double-stranded RNA (dsRNA) causes sequence-specific degradation of mRNA sequences.
dsRNA-induced gene silencing (RNAi) has been reported in a wide range of eukaryotes, including worms, insects, mammals, and plants.
This process mediates resistance to both endogenous parasitic and exogenous pathogenic nucleic acids, and regulates the expression of protein-coding genes.
What are small ncRNAs?
micro RNA (miRNA)
short interfering RNA (siRNA)
Properties of small non-coding RNA:
Involved in silencing mRNA transcripts.
Called “small” because they are usually only about 21-24 nucleotides long.
Synthesized by first cutting up longer precursor sequences (like the 61nt one that Lee discovered).
Silence an mRNA by base pairing with some sequence on the mRNA.
Discovery of siRNA?
The first small RNA:
In 1993, Rosalind Lee (Victor Ambros lab) was studying a non-coding gene in C. elegans, lin-4, that was involved in silencing another gene, lin-14, at the appropriate time in the development of the worm.
Two small transcripts of lin-4 (22 nt and 61 nt) were found to be complementary to a sequence in the 3' UTR of lin-14.
Because lin-4 encoded no protein, she deduced that it must be these transcripts that cause the silencing, through RNA-RNA interactions.
Types of RNAi (non-coding RNA)
miRNA
Length: 23-25 nt
Trans-acting
Binds its target mRNA with mismatches
Causes translation inhibition
siRNA
Length: 21 nt
Cis-acting
Binds its target mRNA with a perfectly complementary sequence
piRNA (Piwi-interacting RNA)
Length: 25-36 nt
Expressed in germ cells
Regulates transposon activity
MECHANISM OF RNAi:
First, the double-stranded RNA teams up with a protein complex named Dicer, which cuts the long RNA into short pieces.
Then another protein complex called RISC (RNA-induced silencing complex) discards one of the two RNA strands.
The RISC-docked, single-stranded RNA then pairs with the homologous mRNA and destroys it.
THE RISC COMPLEX:
RISC is a large (>500 kDa) RNA-multiprotein complex that triggers degradation of the target mRNA.
The double-stranded siRNA is unwound by an ATP-independent helicase.
The active component of RISC is the Argonaute (Ago) protein, an endonuclease that cleaves the target mRNA.
DICER: an endonuclease of the RNase III family
Argonaute: central component of the RNA-induced silencing complex (RISC)
One strand of the dsRNA produced by Dicer is retained in the RISC complex in association with Argonaute.
ARGONAUTE PROTEIN:
1. PAZ (Piwi/Argonaute/Zwille): recognition of the target mRNA
2. PIWI (P-element induced wimpy testis): breaks the phosphodiester bond of the mRNA (RNase H activity)
miRNA:
Double-stranded RNAs are naturally produced in eukaryotic cells during development, and they play a key role in regulating gene expression.
ICASSP19
1. 3D reconstruction using single-photon Lidar data, exploiting the widths of the returns
J. Tachella1, Y. Altmann1, J.Y. Tourneret2 and S. McLaughlin1
1School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh, UK
2INP-ENSEEHIT-IRIT-TeSA, University of Toulouse, Toulouse, France
2. Outline
The single-photon Lidar data 3D reconstruction problem
• Challenges
• State-of-the-art
New Bayesian 3D reconstruction algorithm
• Multiple surfaces per pixel
• Broadening of the instrumental response
• Highly-scattering media
Experiments using real Lidar data
• Long range (kilometres)
• Underwater
4. Challenges
• Few detected photons: s_t ≪ 1 (low signal)
• High background: b ≫ s_t (the background level must be estimated)
• No target: s_t = 0 (a target detection problem of unknown dimension)
• Multiple surfaces: s_t = Σ_n r_n h(t − t_n) (the returns must be unmixed)
• Broadening of the IRF: h_w(t − t_n) (additional parameters to estimate)
• Highly scattering environments: exponential attenuation e^(−α t_n) r_n
• Spatial correlations between neighbouring pixels
5. Recent algorithms
MANIPOP algorithm
J. Tachella, Y. Altmann, X. Ren, A. McCarthy, G. S. Buller, J.-Y. Tourneret, and S. McLaughlin,
“Bayesian 3D reconstruction of complex scenes from single-photon Lidar data” SIAM Journal on Imaging Sciences, 2019
Comparison of algorithms (Shin 2016, Altmann 2016, Shin 2016, Rapp 2017, Halimi 2017a, Halimi 2017b, Lindell 2018, Ren 2018, Tachella 2019 and the proposed method) in terms of: few photons, high background, target detection, multiple surfaces, broadening of the IRF, and attenuating media.
7. Point process model
We model each return as a point in 3D space with marks:
Φ = {(c_n, r_n, w_n) | n = 1, …, N}
where c_n = (x_n, y_n, t_n)^T ∈ ℝ³ is the position, r_n ∈ ℝ₊ is the reflectivity and w_n ∈ (1, +∞) is the width.
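As a rough sketch of this data structure (the class and field names below are my own, not from the presentation), each return can be stored as a position plus its reflectivity and width marks:

```python
from dataclasses import dataclass

@dataclass
class LidarReturn:
    """One point of the configuration: position c_n = (x_n, y_n, t_n),
    reflectivity mark r_n > 0 and width mark w_n > 1."""
    x: float
    y: float
    t: float
    r: float
    w: float

# A toy configuration with two returns in the same pixel (two surfaces).
phi = [
    LidarReturn(x=10.0, y=5.0, t=120.3, r=0.8, w=1.2),
    LidarReturn(x=10.0, y=5.0, t=250.7, r=0.4, w=1.6),
]
```

Keeping the marks with each point, rather than as per-pixel arrays, is what lets the number of surfaces vary from pixel to pixel.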
9. Prior distributions
1. Point positions
Prior knowledge:
• Correlation between points within a surface
• Sparsity in depth
• Unknown number of points
p(Φ) = f₁(Φ) f₂(Φ) π_c(Φ)
where f₁ is an area interaction process, f₂ is a Strauss process and π_c is the Poisson reference measure.
Prior distribution: area interaction process + Strauss process
10. Prior distributions
2. Background levels
Prior knowledge:
• Correlation between neighbouring points
• Positivity constraint
• Fixed dimension
p(B | α_B) ∝ ∏_{i,j} b_{i,j}^(α_B − 1) exp(−α_B b_{i,j} / b̄_{i,j})
Prior distribution: gamma Markov random field
where b̄_{i,j} is a low-pass version of b_{i,j} and α_B is a hyperparameter.
Dikmen and Cemgil (2010), "Gamma Markov random fields for audio source modelling," IEEE Trans. on Audio, Speech, and Language Processing
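A minimal sketch of evaluating such a prior. Both the density form p(B | α_B) ∝ ∏ b^(α_B−1) exp(−α_B b/b̄) and the choice of a 3×3 box filter as the low-pass operator are my assumptions for illustration, not necessarily the paper's exact choices:

```python
import numpy as np

def gamma_mrf_log_prior(B, alpha_B):
    """Unnormalised log-prior of a gamma Markov random field (assumed form:
    p(B) ∝ prod b^{alpha_B-1} * exp(-alpha_B * b / b_bar), where b_bar is a
    low-pass version of B, here a 3x3 box filter with edge replication)."""
    H, W = B.shape
    padded = np.pad(B, 1, mode="edge")          # replicate edges
    b_bar = sum(padded[i:i + H, j:j + W]        # 3x3 box filter
                for i in range(3) for j in range(3)) / 9.0
    return float(np.sum((alpha_B - 1.0) * np.log(B) - alpha_B * B / b_bar))
```

The prior rewards background maps that agree with their local average, encoding the spatial correlation while keeping all b_{i,j} positive.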
11. Prior distributions
3. Point reflectivity
Prior knowledge:
• Correlation between neighbouring points within a surface
• Positivity constraint
m_n = log r_n
p(m | σ_m, β_m) ∝ N(0, σ_m² P⁻¹)
Prior distribution: Gaussian Markov random field
where P is the Laplacian operator w.r.t. the manifold
and σ_m, β_m are hyperparameters
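A compact sketch of this prior's unnormalised log-density. The chain-graph Laplacian below is a hypothetical neighbourhood used only for illustration; in the algorithm, P is built over the neighbours of each point on the estimated surface:

```python
import numpy as np

def gmrf_log_prior(m, L, sigma):
    """Unnormalised log-density of N(0, sigma^2 * P^{-1}) at m, with
    precision L / sigma^2 given by a graph Laplacian L (an improper
    prior: L is singular, so constant vectors cost nothing)."""
    return float(-0.5 * m @ L @ m / sigma**2)

# Hypothetical neighbourhood: 4 points on a chain, Laplacian L = D - A.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

m = np.log(np.array([0.8, 0.7, 0.75, 0.9]))   # m_n = log r_n
val = gmrf_log_prior(m, L, sigma=1.0)          # higher for smoother surfaces
```

The quadratic form m^T L m sums the squared differences (m_i − m_j)² over neighbouring points, so smooth reflectivity across a surface is favoured; the log transform enforces the positivity constraint on r_n.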
12. Prior distributions
4. Broadening of IRF
Prior knowledge:
• Correlation between neighbouring points within a surface
• Positivity constraint
w̃_n = log(w_n − 1)
p(w̃ | σ_w, β_w) ∝ N(0, σ_w² P⁻¹)
Prior distribution: Gaussian Markov random field
where P is the Laplacian operator w.r.t. the manifold
and σ_w, β_w are hyperparameters
13. Inference
We use the MAP estimator for Φ:
Φ̂ = argmax_Φ p(Φ, B | Z)
and the minimum mean squared error estimator for B:
B̂ = E{B | Z}
No analytical expressions are available.
If we gather samples (Φ^(s), B^(s)) according to p(Φ, B | Z) for s = 1, …, N_mc, then
Φ̂ ≈ argmax over Φ^(s) of p(Φ^(s), B^(s) | Z)
B̂ ≈ (1/N_mc) Σ_{s=1}^{N_mc} B^(s)
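These sample-based approximations are straightforward to express in code; a small sketch, where the log-posterior values and background samples are hypothetical placeholders:

```python
import numpy as np

def estimators_from_samples(log_posteriors, B_samples):
    """Approximate the MAP configuration (index of the best-scoring sample)
    and the MMSE background estimate (empirical posterior mean)."""
    map_index = int(np.argmax(log_posteriors))
    B_mmse = np.mean(B_samples, axis=0)
    return map_index, B_mmse

# Hypothetical unnormalised log-posterior values and background samples B^(s).
log_posteriors = [-120.4, -118.9, -119.7]
B_samples = np.array([[1.0, 2.0],
                      [3.0, 4.0],
                      [2.0, 3.0]])
map_index, B_hat = estimators_from_samples(log_posteriors, B_samples)
# map_index == 1, B_hat == [2.0, 3.0]
```

Only the log-posterior up to a constant is needed to pick the MAP sample, since the normalising constant is the same for every s.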
14. Reversible jump MCMC
• How do we gather samples Φ^(s)?
– The number of points determines the dimension of the model
– Classical Monte Carlo methods sample from a fixed-dimensional model
Reversible jump Markov chain Monte Carlo (Green, 1995)
… or MCMC for variable-dimension models
15. Reversible jump MCMC
– Birth: Proposes a new point in 3D space at random
– Death: Tries to remove one existing point at random
– Shift: Proposes a new position for an existing point
– Mark move: Proposes a new mark for an existing point
– Split: Separates one existing point into two new ones
– Merge: Fuses two existing points into one
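As a toy illustration of dimension-changing moves (not the presentation's actual sampler: the target here is a simple 1-D point process with density proportional to λⁿ relative to a uniform reference), a birth/death step with the dimension-matching acceptance ratio looks like:

```python
import math
import random

def birth_death_step(points, log_density, window=(0.0, 1.0)):
    """One reversible-jump birth/death step on a 1-D point configuration.
    Birth adds a uniformly drawn point; death removes a uniformly chosen
    one. The acceptance ratio includes the proposal (Jacobian-free here)
    dimension-matching terms."""
    lo, hi = window
    vol = hi - lo
    if random.random() < 0.5:  # birth move
        new = points + [random.uniform(lo, hi)]
        # reverse (death) picks 1 of len(new) points; forward density is 1/vol
        log_a = log_density(new) - log_density(points) + math.log(vol / len(new))
        return new if math.log(random.random()) < log_a else points
    if not points:             # death proposed on an empty configuration
        return points          # nothing to remove: null move
    cand = list(points)
    cand.pop(random.randrange(len(cand)))
    log_a = log_density(cand) - log_density(points) + math.log(len(points) / vol)
    return cand if math.log(random.random()) < log_a else points

# Target: unnormalised density lambda^n, whose point count is Poisson(lambda).
lam = 3.0
log_density = lambda pts: len(pts) * math.log(lam)
```

Iterating this step leaves the point count distributed as Poisson(λ·|window|); in the actual sampler, log_density would combine the Poisson likelihood of the Lidar histograms with the point-process priors, and the shift, mark, split and merge moves improve mixing.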
16. Experiments
Goal: Long range building reconstruction
Data size: 123x96x800
Detections per pixel: 913 photons
Scattering coefficient (α): ≈ 0
Signal-to-background ratio: 1.64
18. Experiments
Goal: Underwater 3D reconstruction
Data size: 120x120x2500
Scattering coefficients (α): 0.6, 3.9 and 4.8
(Scene: an underwater pipe imaged by the Lidar)
19. Experiments
Execution times (MANIPOP vs. proposed method):
α = 0.6, 4740 photons per pixel, SBR 24.2: MANIPOP 410 s, proposed 329 s
α = 3.9, 282 photons per pixel, SBR 0.4: MANIPOP 263 s, proposed 318 s
α = 4.8, 198 photons per pixel, SBR 0.05: MANIPOP 212 s, proposed 240 s
20. Conclusions and future work
We adapted MANIPOP to account for peak broadening and underwater conditions
• Non-trivial to adapt other existing models
• Negligible increase of execution time, similar to optimization-based methods
• General structured sparsity formulation
• Carefully tailored RJ-MCMC moves
Current work
• Real-time reconstruction
• Multiple-view 3D reconstruction
• Multispectral single-photon Lidar