Praveen, Paurush, and Holger Fröhlich. "Boosting probabilistic
graphical model inference by incorporating prior knowledge
from multiple sources." PloS one 8.6 (2013): e67410.
@St_Hakky
This is a summary written while reading the paper; it is not a paper I wrote myself.
Introduction
• Graphical models are useful for extracting meaningful biological insights from experimental data in life science research.
• (Dynamic) Bayesian Networks
• Gaussian Graphical Models
• Poisson Graphical Models
• And so many other methods…
• These models can infer features of cellular networks in a
data driven manner.
Problem
• Network inference from experimental data is
challenging because of …
• the typically low signal-to-noise ratio
• high dimensionality
• the low number of replicates and noisy measurements (i.e., the sample size is very small)
• The inference of regulatory networks from such data is hence challenging and often fails to reach the desired level of accuracy.
How to deal with this problem?
• One can work at the experimental level by increasing the sample size.
• This is the best way, but in practice it is very difficult
• because of the limitations of cost and time.
• Alternatively, we can deal with this problem at the inference level, which is what we consider here.
There is a lot of knowledge in biology
• There is a lot of knowledge in biology: pathway databases [6–8], Gene Ontology [9] and others.
• So, integrating known information from databases and the biological literature as prior knowledge appears to be beneficial.
Integrating heterogeneous information into the learning process is not straightforward
• However, biological knowledge covers many different aspects and is widely distributed across multiple knowledge resources.
• So, integrating heterogeneous information into the learning process is not straightforward.
The useful databases
• Pathway databases [6–8]
• Gene Ontology [9]
• Promoter sequences
• Protein-protein interactions
• Evolutionary information
• KEGG pathway
• GO annotation
How to deal with the problem at
inference level
• Generate sample data from a single information source
• Integrate one or more particular information sources into the learning process
• For example, gene regulatory networks have been inferred from a combination of gene expression data with transcription factor binding motifs
• or with promoter sequences, protein-protein interactions, evolutionary information, KEGG pathways, GO annotation
Several approaches have been
proposed
• On the technical side, several approaches for integrating prior knowledge into the inference of probabilistic graphical models have been published.
Prior Research
• In [16] and [17] the authors only generate candidate structures with
significance above a certain threshold according to prior knowledge.
• [16] Larsen, Peter, et al. "A statistical method to incorporate biological knowledge for
generating testable novel gene regulatory interactions from microarray
experiments." BMC bioinformatics 8.1 (2007): 1.
• [17] Almasri, Eyad, et al. “Incorporating literature knowledge in Bayesian network for
inferring gene networks with gene expression data.” International Symposium on
Bioinformatics Research and Applications. Springer Berlin Heidelberg, 2008.
• Another idea is to introduce a probabilistic Bayesian prior over network structures. [18] introduced a prior for individual edges based on an a-priori assumed degree of belief.
• [18] Fröhlich, Holger, et al. “Large scale statistical inference of signaling pathways from RNAi and microarray data.” BMC bioinformatics 8.1 (2007): 1.
Prior Research
• [19] describes a more general set of priors, which can also capture
global network properties, such as scale-free behavior.
• [19] Mukherjee, Sach, and Terence P. Speed. "Network inference using informative
priors." Proceedings of the National Academy of Sciences 105.38 (2008): 14313-
14318.
• [20] uses a similar form of prior to [18], but additionally combines multiple information sources via a linear weighting scheme.
• [20] Werhli, Adriano V., and Dirk Husmeier. "Reconstructing gene regulatory networks with Bayesian networks by combining expression data with multiple sources of prior knowledge." Statistical applications in genetics and molecular biology 6.1 (2007).
Prior Research
• [21] treats different information sources as statistically independent, and consequently the overall prior is just the product of the priors for the individual information sources.
• [21] Gao, Shouguo, and Xujing Wang. "Quantitative utilization of prior biological knowledge in the Bayesian
network modeling of gene expression data." BMC bioinformatics 12.1 (2011): 1.
We focus on…
• The focus of this paper is on the construction of consensus priors from multiple, heterogeneous knowledge sources.
Proposed Methods
• Latent Factor Model (LFM)
• LFM relies on the idea of a generative process for the
individual information sources and uses Bayesian
inference to estimate a consensus prior.
• Noisy-OR
• The second model integrates different information
sources via a Noisy-OR gate.
Materials and Methods
• 1.1 : Edge-wise Priors for Data Driven Network Inference
• 1.2 : Latent Factor Model (LFM)
• 1.3 : Noisy-OR Model (NOM)
• 1.4 : Information Sources
Edge-wise Priors for Data Driven
Network Inference
• D : experimental data
• Φ : network graph (represented by an m×m
adjacency matrix) to infer from data.
• Bayes rule : P(Φ | D) ∝ P(D | Φ) P(Φ)
Edge-wise Priors for Data Driven
Network Inference
• P(Φ) is the prior. We assume P(Φ) can be decomposed edge-wise, i.e. P(Φ) ∝ ∏(i,j) P(Φij) (a toy sketch of how such a prior enters the score follows below).
• Φ (with entries Φij) is the matrix of prior edge confidences.
• Φij = 1 : high prior degree of belief in the existence of the edge i→j
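To make the role of such an edge-wise prior concrete, here is a minimal Python sketch of how prior edge confidences could enter a structure score. The Bernoulli-style prior, the function names and the generic log-likelihood hook are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def log_prior(adjacency, edge_confidence, eps=1e-9):
    """Edge-wise structure prior: a present edge i->j contributes log(Phi_ij),
    an absent one log(1 - Phi_ij)."""
    A = np.asarray(adjacency, dtype=float)                         # m x m, entries in {0, 1}
    P = np.clip(np.asarray(edge_confidence, dtype=float), eps, 1 - eps)
    return float(np.sum(A * np.log(P) + (1 - A) * np.log(1 - P)))

def log_posterior(adjacency, edge_confidence, log_likelihood):
    """Bayes rule on the log scale:
    log P(graph | data) = log P(data | graph) + log P(graph) + const."""
    return log_likelihood(adjacency) + log_prior(adjacency, edge_confidence)
```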
Our purpose
• Our purpose is to compile Φ in a consistent manner from the n available resources.
• X^(1), X^(2), …, X^(n) : the n edge-confidence matrices obtained from the n information sources.
Latent Factor Model(LFM)
• The LFM is based on the idea that the prior information encoded in the matrices X^(1), X^(2), …, X^(n) all originates from the true but unknown network Φ.
Latent Factor Model(LFM)
• We use this notion to conduct joint Bayesian inference on Φ (denoted W in the paper) as well as on the additional parameters θ = (α, β), given X^(1), X^(2), …, X^(n).
• The idea is that we can identify Φ with the posterior P(Φ, θ | X^(1), X^(2), …, X^(n)).
Latent Factor Model(LFM)
• The entries of each matrix X^(k) are assumed to follow a beta distribution.
• Details are omitted here, but since the α and β parameters can be adjusted, the importance of each information source can be controlled independently (one possible way to write this down is sketched below).
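One plausible way to write down this generative assumption is sketched below. The specific Beta parameterization (Beta(α_k, 1) for existing edges, Beta(1, β_k) for absent ones) and the function names are assumptions for illustration and may differ from the parameterization used in the paper.

```python
import numpy as np
from scipy.stats import beta

def log_likelihood_sources(Phi, sources, alphas, betas, eps=1e-9):
    """Log-likelihood of the confidence matrices X^(1..n) given the latent
    network Phi, under an assumed Beta model:
      X^(k)_ij | Phi_ij = 1  ~  Beta(alpha_k, 1)   (mass pushed towards 1)
      X^(k)_ij | Phi_ij = 0  ~  Beta(1, beta_k)    (mass pushed towards 0)
    alpha_k and beta_k thus control how strongly source k is trusted."""
    Phi = np.asarray(Phi, dtype=bool)
    total = 0.0
    for X, a, b in zip(sources, alphas, betas):
        X = np.clip(np.asarray(X, dtype=float), eps, 1 - eps)
        ll_edge = beta.logpdf(X, a, 1.0)      # density if the edge truly exists
        ll_none = beta.logpdf(X, 1.0, b)      # density if it does not
        total += float(np.sum(np.where(Phi, ll_edge, ll_none)))
    return total
```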
MCMC to get Φ and 𝜃 = (α, β)
• We employ an adaptive Markov chain Monte Carlo (MCMC) strategy to learn the latent variable Φ together with the parameters θ = (α, β); the two proposal kernels are sketched below.
• In network space MCMC moves are edge insertion,
deletion and reversal.
• In parameter space α and β are adapted on log-scale
using a multivariate Gaussian transition kernel.
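A toy illustration of the two kinds of proposal moves is given below. The paper's sampler is adaptive and more involved; the isotropic Gaussian step and the function names here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def propose_structure(Phi):
    """One random structure move on the adjacency matrix:
    edge insertion, deletion, or reversal."""
    m = Phi.shape[0]
    proposal = Phi.copy()
    i, j = rng.integers(m), rng.integers(m)
    while i == j:
        i, j = rng.integers(m), rng.integers(m)
    if proposal[i, j] == 1 and rng.random() < 0.5:
        proposal[i, j], proposal[j, i] = 0, 1        # reverse the edge i->j
    else:
        proposal[i, j] = 1 - proposal[i, j]          # insert or delete i->j
    return proposal

def propose_parameters(log_theta, step=0.1):
    """Random-walk proposal for theta = (alpha, beta) on the log scale,
    here with an isotropic Gaussian transition kernel."""
    return log_theta + rng.normal(scale=step, size=np.shape(log_theta))
```

Each proposal would then be accepted or rejected with a Metropolis-Hastings step against the posterior described on the previous slides.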
What is MCMC?
• Chapter 11 of PRML and around Chapter 8 of the "green book" are good references.
Noisy-OR model(NOM)
• The NOM represents a non-deterministic disjunctive relation between an effect and its possible causes, and has been used extensively in artificial intelligence.
Noisy-OR model(NOM)
• The Noisy-OR principle is governed by two hallmarks:
• First, each cause has a probability of producing the effect.
• Second, the probability of each cause being sufficient to produce the effect is independent of the presence of the other causes.
Noisy-OR model(NOM)
• In our case X^(1), X^(2), …, X^(n) are interpreted as the causes and Φij as the effect. The link between the two is given by the Noisy-OR gate (see the sketch below).
• Φij = 1 : high prior degree of belief in the existence of the edge i→j.
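A minimal sketch of the basic Noisy-OR combination is shown below. It matches the description on the next slide (the product over (1 − X^(k)_ij) terms), but the paper's exact formulation may include additional weights or a leak term.

```python
import numpy as np

def noisy_or_prior(sources):
    """Combine the edge-confidence matrices X^(1..n) with a Noisy-OR gate:
    Phi_ij = 1 - prod_k (1 - X^(k)_ij).
    Phi_ij gets close to 1 as soon as at least one source assigns the edge
    a high confidence, because the product then gets close to 0."""
    X = np.stack([np.asarray(x, dtype=float) for x in sources])   # n x m x m
    return 1.0 - np.prod(1.0 - X, axis=0)
```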
LFM vs NOM
• LFM
• A high level of agreement between the information sources is required in order to achieve high values in Φ
• NOM
• Φij becomes close to 1 if the edge i→j has a high confidence in at least one information source, because the product then gets close to 0.
NOM.RNK
• In addition to NOM, we also experimented with a
variant based on relative ranks.
Information Sources
• We use the following information as prior
information
• GO annotation
• Pathway Databases
• KEGG, PathwayCommons
• Protein domain annotation
• InterPro
• Protein domain interaction
• DOMINE
GO annotation
• Interacting proteins often function in the same
biological process.
• So, two proteins acting in the same biological
process are more likely to interact than two
proteins involved in different processes.
• We used the default similarity measure for gene
products implemented in the R-package GOSim.
Pathway Databases
• KEGG
• R-package : KEGGgraph
• PathwayCommons database
• To compute a confidence value for each interaction between a pair of genes/proteins we can work at the level of this graph in different ways:
• Shortest path distance between the two entities
• Dijkstra’s algorithm
• R-package : RBGL
• In this paper, we use this one because we observed that the use of the diffusion kernel in place of the shortest path measure did not affect the results much.
• Diffusion kernels
• R-package : pathClass
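As an illustration of the shortest-path idea, here is a small Python analogue using networkx; the paper works with KEGGgraph/RBGL in R, and the distance-to-confidence mapping 1/(1+d) used here is an assumed, illustrative choice.

```python
import networkx as nx
import numpy as np

def shortest_path_confidence(graph, genes):
    """Turn shortest-path distances in a pathway graph into edge confidences:
    the closer two genes are in the graph, the higher the confidence."""
    m = len(genes)
    conf = np.zeros((m, m))
    lengths = dict(nx.all_pairs_shortest_path_length(graph))
    for a, ga in enumerate(genes):
        for b, gb in enumerate(genes):
            if a == b:
                continue
            d = lengths.get(ga, {}).get(gb)               # None if unreachable
            conf[a, b] = 0.0 if d is None else 1.0 / (1.0 + d)
    return conf

# Toy pathway graph: A - B - C
G = nx.Graph([("A", "B"), ("B", "C")])
print(shortest_path_confidence(G, ["A", "B", "C"]))
```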
Protein domain annotation
• Protein domain annotation was compared on the
basis of a binary vector representation via the
cosine similarity.
• Proteins with similar domains are more likely to act in similar biological pathways.
• The confidence of an interaction between two proteins can thus be seen as a function of the similarity of the domain annotations of the proteins.
Protein domain annotation
• Protein domain annotation can be retrieved from the InterPro database.
• A “1" in a component thus indicates that the
protein is annotated with the corresponding
domain. Otherwise a “0" is filled in.
• The similarity between two binary vectors u, v
(domain signatures) is presented in terms of the
cosine similarity
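The cosine similarity of two binary domain signatures can be computed as follows (a small self-contained sketch; function names are illustrative).

```python
import numpy as np

def domain_cosine_similarity(u, v):
    """Cosine similarity between two binary domain-annotation vectors
    (1 = protein annotated with the domain, 0 = not annotated)."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return 0.0 if denom == 0 else float(np.dot(u, v) / denom)

# Two proteins sharing one of their two annotated domains each
print(domain_cosine_similarity([1, 0, 1, 0], [1, 1, 0, 0]))   # 0.5
```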
Protein domain interaction
• The relative frequency of interacting protein domain pairs was taken as another confidence measure for an edge a–b.
• Two proteins are more likely to interact if they contain domains that can potentially interact.
• The DOMINE database collates known and predicted
domain-domain interactions
• where H is the number of hit pairs found in the DOMINE database and DA and DB are the number of domains in proteins A and B, respectively.
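Reading "relative frequency" as hits divided by the number of possible domain pairs gives the following small sketch; the exact normalization used in the paper may differ.

```python
def domine_confidence(hits, n_domains_a, n_domains_b):
    """Relative frequency of interacting domain pairs between proteins A and B:
    H hit pairs found in DOMINE out of D_A * D_B possible domain pairs."""
    possible_pairs = n_domains_a * n_domains_b
    return 0.0 if possible_pairs == 0 else hits / possible_pairs

print(domine_confidence(hits=2, n_domains_a=3, n_domains_b=4))   # ~0.167
```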
Results
• 2.1 : Correlation of Prior Edge Confidences with
True Biological Network
• Network Sampling
• Simulated Information Sources
• Weighting of Information Sources
• Real Information Sources
• 2.2 : Enhancement of Data Driven Network
Reconstruction Accuracy
• Simulated Data
• Application to Breast Cancer
• Application to Yeast Heat-Shock Network
Results
• 2.1 : Correlation of Prior Edge Confidences with
True Biological Network
• Network Sampling
• Simulated Information Sources
• Weighting of Information Sources
• Real Information Sources
• 2.2 : Enhancement of Data Driven Network
Reconstruction Accuracy
• Simulated Data
• Application to Breast Cancer
• Application to Yeast Heat-Shock Network
• (Skipped in this presentation)
Enhancement of Data Driven
Network Reconstruction Accuracy
• We next investigated to what extent our priors could enhance the reconstruction performance of Bayesian Networks learned from data.
• From the set of best-fitting network structures, the one with the minimum BIC value was selected (a minimal BIC sketch follows below).
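For reference, a minimal sketch of BIC-based selection; the likelihood values, parameter counts and function names are placeholders, not the paper's scoring code.

```python
import numpy as np

def bic(log_likelihood, n_params, n_samples):
    """Bayesian Information Criterion: -2 * logL + k * log(N); lower is better."""
    return -2.0 * log_likelihood + n_params * np.log(n_samples)

def select_best(candidates, n_samples):
    """candidates: list of (log_likelihood, n_params) tuples, one per fitted
    network structure. Returns the index of the structure with minimum BIC."""
    scores = [bic(ll, k, n_samples) for ll, k in candidates]
    return int(np.argmin(scores))
```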
Network reconstruction for the
breast cancer data
• (Right) Without using any prior
• (Left) Using the NOM prior
• Black edges could be verified with established literature knowledge
• Grey edges could not be verified.
Network reconstruction for the
breast cancer data
Yeast heat-shock response network obtained
via Bayesian network reconstruction
• (a) Network without any prior knowledge
• (b) The gold standard network from YEASTRACT database
• (c) Network reconstructed with prior knowledge (NOM).
Reconstruction performance of Yeast heat-shock
response network with Bayesian Networks and
different priors (NP = No Prior).
Conclusion
• We propose two methods:
• Latent Factor Model (LFM)
• Noisy-OR Model (NOM)
• Result :
• Both of our models yielded priors which were
significantly closer to the true biological network than
competing methods.
• They could significantly enhance the reconstruction
performance of Bayesian Networks compared to a
situation without any prior.
Conclusion
• Our methods were also superior to purely using STRING edge confidence scores as prior information.
• The proposed methods are robust with respect to network size: they can handle small, medium, and large networks.
• The LFM's computational cost is higher than the NOM's, so the NOM should be preferred for large networks.
Editor's Notes
1. This specifically implies that direct correlations between edge confidences across the matrices can be explained by this hidden dependency. In other words, W (the latent network, called Φ in these slides) is a latent factor explaining the correlations between the X^(k).
2. The detailed reason why the entries are assumed to follow a beta distribution was not explained.