An improved SPFA algorithm for single source shortest path problem using forw... (IJMIT JOURNAL)
We present an improved SPFA algorithm for the single-source shortest path problem. For a random graph, the empirical average time complexity is O(|E|), where |E| is the number of edges of the input network. SPFA maintains a queue of candidate vertices and adds a vertex to the queue only if that vertex is relaxed. In the improved SPFA, the MinPoP principle is employed to improve the quality of the queue. We theoretically analyse the advantage of this new algorithm and experimentally demonstrate that the algorithm is efficient.
Improving Variational Inference with Inverse Autoregressive Flow (Tatsuya Shirakawa)
This slide deck was created for a NIPS 2016 study meetup. IAF and other related research are briefly explained.
Paper: Diederik P. Kingma et al., "Improving Variational Inference with Inverse Autoregressive Flow", 2016
https://papers.nips.cc/paper/6581-improving-variational-autoencoders-with-inverse-autoregressive-flow
GP (Gaussian Processes) is a staple of ML areas such as robot gait optimization, gesture recognition, optimal control, hyperparameter optimization, and optimal data sampling strategies for new drug and material development, yet it is not easy to understand; this deck introduces the basic theory of GP along with Matlab code.
Slides for the presentation at ENBIS 2018 of "Deep k-Means: Jointly Clustering with k-Means and Learning Representations" by Thibaut Thonet. Joint work with Maziar Moradi Fard and Eric Gaussier.
A generalized class of normalized distance functions called Q-Metrics is described in this presentation. The Q-Metrics approach relies on a unique functional, using a single bounded parameter (Lambda), which characterizes the conventional distance functions in a normalized per-unit metric space. In addition to this coverage property, a distinguishing and extremely attractive characteristic of the Q-Metric function is its low computational complexity. Q-Metrics satisfy the standard metric axioms. Novel networks for classification and regression tasks are defined and constructed using Q-Metrics. These new networks are shown to outperform conventional feed forward back propagation networks with the same size when tested on real data sets.
https://telecombcn-dl.github.io/2017-dlai/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale annotated datasets and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which were previously addressed with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks or Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles of deep learning from both algorithmic and computational perspectives.
Talk on Optimization for Deep Learning, which gives an overview of gradient descent optimization algorithms and highlights some current research directions.
MVPA with SpaceNet: sparse structured priors (Elvis DOHMATOB)
The GraphNet (aka S-Lasso), as well as other "sparsity + structure" priors like TV (Total Variation), TV-L1, etc., are not easily applicable to brain data because of technical problems relating to the selection of the regularization parameters. Also, in their own right, such models lead to challenging high-dimensional optimization problems. In this manuscript, we present some heuristics for speeding up the overall optimization process: (a) early stopping, whereby one halts the optimization process when the test score (performance on left-out data) for the internal cross-validation for model selection stops improving, and (b) univariate feature screening, whereby irrelevant (non-predictive) voxels are detected and eliminated before the optimization problem is entered, thus reducing the size of the problem. Empirical results with GraphNet on real MRI (Magnetic Resonance Imaging) datasets indicate that these heuristics are a win-win strategy, as they add speed without sacrificing the quality of the predictions. We expect the proposed heuristics to work on other models like TV-L1, etc.
This deck presents various approximation schemes, including absolute approximation and epsilon approximation, as well as some polynomial-time approximation schemes and some probabilistically good algorithms.
Simplified Runtime Analysis of Estimation of Distribution Algorithms (PK Lehre)
We describe how to estimate the optimisation time of the UMDA, an estimation of distribution algorithm, using the level-based theorem. The paper was presented at GECCO 2015 in Madrid.
Solving inverse problems via non-linear Bayesian Update of PCE coefficients (Alexander Litvinenko)
We derive a non-linear approximation of the Bayesian update for PCE coefficients, avoiding Markov chain Monte Carlo for computing the posterior.
TMPA-2015: Implementing the MetaVCG Approach in the C-light System (Iosif Itkin)
Alexei Promsky, Dmitry Kondtratyev, A.P. Ershov Institute of Informatics Systems, Novosibirsk
12 - 14 November 2015
Tools and Methods of Program Analysis in St. Petersburg
To describe the dynamics taking place in networks that structurally change over time, we propose an approach to search for attributes whose value changes impact the topology of the graph. In several applications, it appears that the variations of a group of attributes are often followed by some structural changes in the graph that one may assume they generate. We formalize the triggering pattern discovery problem as a method jointly rooted in sequence mining and graph analysis. We apply our approach on three real-world dynamic graphs of different natures - a co-authoring network, an airline network, and a social bookmarking system - assessing the relevancy of the triggering pattern mining approach.
Acorn Recovery: Restore IT infra within minutes (IP ServerOne)
Introducing Acorn Recovery as a Service, a simple, fast, and secure managed disaster recovery (DRaaS) offering by IP ServerOne: a DR solution that helps restore your IT infrastructure within minutes.
This presentation, created by Syed Faiz ul Hassan, explores the profound influence of media on public perception and behavior. It delves into the evolution of media from oral traditions to modern digital and social media platforms. Key topics include the role of media in information propagation, socialization, crisis awareness, globalization, and education. The presentation also examines media influence through agenda setting, propaganda, and manipulative techniques used by advertisers and marketers. Furthermore, it highlights the impact of surveillance enabled by media technologies on personal behavior and preferences. Through this comprehensive overview, the presentation aims to shed light on how media shapes collective consciousness and public opinion.
0x01 - Newton's Third Law: Static vs. Dynamic Abusers (OWASP Beja)
If you offer a service on the web, odds are that someone will abuse it. Be it an API, a SaaS, a PaaS, or even a static website, someone somewhere will try to figure out a way to use it for their own needs. In this talk we'll compare measures that are effective against static attackers with ways to battle a dynamic attacker who adapts to your counter-measures.
About the Speaker
===============
Diogo Sousa, Engineering Manager @ Canonical
An opinionated individual with an interest in cryptography and its intersection with secure software development.
This presentation by Morris Kleiner (University of Minnesota), was made during the discussion “Competition and Regulation in Professions and Occupations” held at the Working Party No. 2 on Competition and Regulation on 10 June 2024. More papers and presentations on the topic can be found out at oe.cd/crps.
This presentation was uploaded with the author’s consent.
Have you ever wondered how search works while visiting an e-commerce site, internal website, or searching through other types of online resources? Look no further than this informative session on the ways that taxonomies help end-users navigate the internet! Hear from taxonomists and other information professionals who have first-hand experience creating and working with taxonomies that aid in navigation, search, and discovery across a range of disciplines.
Sharpen existing tools or get a new toolbox? Contemporary cluster initiatives... (Orkestra)
UIIN Conference, Madrid, 27-29 May 2024
James Wilson, Orkestra and Deusto Business School
Emily Wise, Lund University
Madeline Smith, The Glasgow School of Art
International Workshop on Artificial Intelligence in Software Testing
Gradient Estimation Using Stochastic Computation Graphs
1. Gradient Estimation Using Stochastic Computation Graphs
Yoonho Lee
Department of Computer Science and Engineering
Pohang University of Science and Technology
May 9, 2017
13. Stochastic Gradient Estimation
SF vs PD
R(θ) = E_{w∼P_θ(dw)}[f(θ, w)]
Score Function (SF) Estimator:
∇_θ R(θ)|_{θ=θ₀} = E_{w∼P_{θ₀}(dw)}[f(w) ∇_θ ln P_θ(dw)|_{θ=θ₀}]
Pathwise Derivative (PD) Estimator:
∇_θ R(θ) = E_{w∼P(dw)}[∇_θ f(θ, w)]
SF allows discontinuous f.
SF requires only sample values of f; PD requires the derivatives of f.
SF takes derivatives over the probability distribution of w; PD takes derivatives over the function f.
SF has high variance on high-dimensional w.
PD has high variance on rough f.
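The SF/PD contrast above can be checked numerically. The sketch below is illustrative only: the cost f(w) = w² and the distribution w ∼ N(θ, 1) are my choices, not from the slides. Both estimators target the same true gradient d/dθ E[w²] = 2θ.

```python
import numpy as np

# Toy comparison of SF and PD estimators for R(theta) = E_{w~N(theta,1)}[w^2].
# True gradient: d/dtheta (theta^2 + 1) = 2*theta.
rng = np.random.default_rng(0)
theta, n = 1.5, 200_000
eps = rng.standard_normal(n)
w = theta + eps                  # reparameterized sample w ~ N(theta, 1)

f = w ** 2
sf = f * (w - theta)             # f(w) * d/dtheta log N(w; theta, 1)
pd = 2 * w                       # d/dtheta f(theta + eps) = 2*(theta + eps)

print(sf.mean(), pd.mean())      # both approximate 2*theta = 3.0
print(sf.var(), pd.var())        # SF has far higher per-sample variance here
```

Since f is smooth here, PD is the lower-variance choice; SF would still work if f were a step function, where PD fails.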
18. Stochastic Computation Graphs
Stochastic computation graphs are a design choice rather than an intrinsic property of a problem
Stochastic nodes 'block' gradient flow
21. Stochastic Computation Graphs
Deriving Unbiased Estimators
Condition 1 (Differentiability Requirements). Given input node θ ∈ Θ, for all edges (v, w) which satisfy θ ≺^D v and θ ≺^D w, the following condition holds: if w is deterministic, ∂w/∂v exists, and if w is stochastic, (∂/∂v) p(w | PARENTS_w) exists.
Theorem 1. Suppose θ ∈ Θ satisfies Condition 1. The following equations hold:
(∂/∂θ) E[Σ_{c∈C} c] = E[ Σ_{w∈S, θ≺^D w} (∂/∂θ) log p(w | DEPS_w) · Q̂_w + Σ_{c∈C, θ≺^D c} (∂/∂θ) c(DEPS_c) ]
= E[ Σ_{c∈C} ĉ · Σ_{w∈S, θ≺^D w} (∂/∂θ) log p(w | DEPS_w) + Σ_{c∈C, θ≺^D c} (∂/∂θ) c(DEPS_c) ]
22. Stochastic Computation Graphs
Surrogate loss functions
We have this equation from Theorem 1:
(∂/∂θ) E[Σ_{c∈C} c] = E[ Σ_{w∈S, θ≺^D w} (∂/∂θ) log p(w | DEPS_w) · Q̂_w + Σ_{c∈C, θ≺^D c} (∂/∂θ) c(DEPS_c) ]
Note that by defining
L(Θ, S) := Σ_{w∈S} log p(w | DEPS_w) · Q̂_w + Σ_{c∈C} c(DEPS_c)
we have
(∂/∂θ) E[Σ_{c∈C} c] = E[(∂/∂θ) L(Θ, S)]
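As a sanity check on the surrogate-loss idea, here is a minimal single-stochastic-node sketch; the Gaussian p and the downstream cost f = sin are assumed for illustration. Differentiating L with the samples held fixed (here done analytically, where an autodiff library would normally do it) reproduces the score-function estimate of the true gradient.

```python
import numpy as np

# Surrogate loss for one stochastic node w ~ N(theta, 1) with cost f(w):
#   L(theta, S) = mean_i log p(w_i | theta) * f(w_i),  samples w_i held fixed.
# For the Gaussian, d/dtheta log p(w; theta) = (w - theta), so the gradient
# of the surrogate is exactly the score-function estimator.
rng = np.random.default_rng(1)
theta, n = 0.5, 100_000
w = rng.normal(theta, 1.0, n)        # fixed samples from p(w | theta)
f = np.sin(w)                        # cost downstream of the stochastic node

grad_surrogate = np.mean((w - theta) * f)

# Closed form for this toy case: d/dtheta E[sin(w)] = exp(-1/2) * cos(theta).
true_grad = np.exp(-0.5) * np.cos(theta)
print(grad_surrogate, true_grad)     # the two values should be close
```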
27. Policy Gradient
Surrogate Loss
The policy gradient algorithm is Theorem 1 applied to the MDP graph.
(Richard S Sutton et al., "Policy Gradient Methods for Reinforcement Learning with Function Approximation", NIPS 1999)
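A minimal instance of Theorem 1 on a one-step MDP is REINFORCE on a two-armed bandit. The sketch below is illustrative (the arm rewards, noise scale, and learning rate are assumptions, not from the slides): softmax logits are nudged along the score-function gradient until the policy concentrates on the better arm.

```python
import numpy as np

# REINFORCE on a 2-armed bandit with mean rewards [0.0, 1.0].
# Policy: softmax over two logits; gradient of log pi(a) w.r.t. the
# logits is onehot(a) - p, the textbook score function for a softmax.
rng = np.random.default_rng(2)
mean_reward = np.array([0.0, 1.0])
logits = np.zeros(2)
lr = 0.1

for _ in range(2000):
    p = np.exp(logits - logits.max()); p /= p.sum()
    a = rng.choice(2, p=p)                      # sample an action
    r = mean_reward[a] + rng.normal(0, 0.1)     # noisy reward
    grad = (np.eye(2)[a] - p) * r               # score-function gradient
    logits += lr * grad

p = np.exp(logits - logits.max()); p /= p.sum()
print(p)  # probability mass should concentrate on arm 1
```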
28. Stochastic Value Gradient
SVG(0)
Learn Q_φ, and update
Δθ ∝ ∇_θ Σ_{t=1}^{T} Q_φ(s_t, π_θ(s_t, ζ_t))
where ζ_t is the policy noise.
(Nicolas Heess et al., "Learning Continuous Control Policies by Stochastic Value Gradients", NIPS 2015)
29. Stochastic Value Gradient
SVG(1)
Learn V_φ and a dynamics model f such that s_{t+1} = f(s_t, a_t) + δ_t.
Given transition (s_t, a_t, s_{t+1}), infer the noise δ_t.
Δθ ∝ ∇_θ Σ_{t=1}^{T} [r_t + γ V_φ(f(s_t, a_t) + δ_t)]
(Nicolas Heess et al., "Learning Continuous Control Policies by Stochastic Value Gradients", NIPS 2015)
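The noise-inference step can be sketched as follows; the linear dynamics model and noise scale here are hypothetical stand-ins. Given an observed transition, δ_t is just the model residual, so re-evaluating the graph with the inferred noise reproduces s_{t+1} exactly and gradients can flow through f.

```python
import numpy as np

# Noise inference in SVG(1): delta_t = s_{t+1} - f(s_t, a_t).
def f_model(s, a):
    # Assumed linear dynamics model, purely for illustration.
    return 0.9 * s + 0.5 * a

rng = np.random.default_rng(3)
s, a = 1.0, 0.2
s_next = f_model(s, a) + rng.normal(0, 0.05)   # environment adds noise

delta = s_next - f_model(s, a)                 # inferred noise for this step
# Re-evaluating the model plus inferred noise recovers the observed state:
print(np.allclose(f_model(s, a) + delta, s_next))
```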
30. Stochastic Value Gradient
SVG(∞)
Learn f, infer all noise variables δ_t. Differentiate through the whole graph.
(Nicolas Heess et al., "Learning Continuous Control Policies by Stochastic Value Gradients", NIPS 2015)
34. Neural Variational Inference
where
L(x, θ, φ) = E_{h∼Q_φ(h|x)}[log P_θ(x, h) − log Q_φ(h|x)]
Score function estimator
(Andriy Mnih and Karol Gregor, "Neural Variational Inference and Learning in Belief Networks", ICML 2014)
35. Variational Autoencoder
Pathwise derivative estimator applied to the exact same graph as NVIL
(Diederik P Kingma and Max Welling, "Auto-Encoding Variational Bayes", ICLR 2014)
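The PD estimator here is the reparameterization trick: sampling h ∼ N(μ, σ²) is rewritten as h = μ + σ·ε with ε ∼ N(0, 1), so gradients with respect to (μ, σ) flow through a deterministic function of the noise. A toy check (values assumed for illustration) on E[h²] = μ² + σ², whose exact gradients are 2μ and 2σ:

```python
import numpy as np

# Reparameterization (pathwise derivative) check for h ~ N(mu, sigma^2)
# and the cost h^2: d/dmu E[h^2] = 2*mu, d/dsigma E[h^2] = 2*sigma.
rng = np.random.default_rng(4)
mu, sigma, n = 0.3, 0.8, 200_000
eps = rng.standard_normal(n)
h = mu + sigma * eps               # deterministic in (mu, sigma) given eps

grad_mu = np.mean(2 * h)           # pathwise d/dmu of h^2 (dh/dmu = 1)
grad_sigma = np.mean(2 * h * eps)  # pathwise d/dsigma of h^2 (dh/dsigma = eps)
print(grad_mu, grad_sigma)         # approx 2*mu = 0.6 and 2*sigma = 1.6
```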
36. DRAW
Sequential version of the VAE
Same graph as SVG(∞) with (known) deterministic dynamics (e = state, z = action, L = reward)
Similarly, the VAE can be seen as SVG(∞) with one timestep and deterministic dynamics
(Karol Gregor et al., "DRAW: A Recurrent Neural Network For Image Generation", ICML 2015)
37. GAN
The connection to SVG(0) has been established by Pfau and Vinyals [8].
The actor has no access to state. The observation is a random choice between a real and a fake image. The reward is 1 if real, 0 if fake. D learns the value function of the observation; G's learning signal is D.
(David Pfau and Oriol Vinyals, "Connecting Generative Adversarial Networks and Actor-Critic Methods", 2016)
38. Bayes by Backprop
Our cost function f is:
f(D, θ) = E_{w∼q(w|θ)}[f(w, θ)] = E_{w∼q(w|θ)}[−log p(D|w) + log q(w|θ) − log p(w)]
From the point of view of stochastic computation graphs, weights and activations are no different.
Estimate the ELBO gradient of an activation with the PD estimator: VAE.
Estimate the ELBO gradient of a weight with the PD estimator: Bayes by Backprop.
(Charles Blundell et al., "Weight Uncertainty in Neural Networks", ICML 2015)
39. Summary
Stochastic computation graphs explain many different algorithms with one gradient estimator (Theorem 1)
Stochastic computation graphs give us insight into the behavior of estimators
Stochastic computation graphs reveal equivalences:
NVIL [7] = Policy gradient [12]
VAE [6] / DRAW [4] = SVG(∞) [5]
GAN [3] = SVG(0) [5]
40. References I
[1] Charles Blundell et al. "Weight Uncertainty in Neural Networks". In: ICML (2015).
[2] Michael Fu. "Stochastic Gradient Estimation". In: (2005).
[3] Ian J. Goodfellow et al. "Generative Adversarial Networks". In: NIPS (2014).
[4] Karol Gregor et al. "DRAW: A Recurrent Neural Network For Image Generation". In: ICML (2015).
[5] Nicolas Heess et al. "Learning Continuous Control Policies by Stochastic Value Gradients". In: NIPS (2015).
[6] Diederik P Kingma and Max Welling. "Auto-Encoding Variational Bayes". In: ICLR (2014).
[7] Andriy Mnih and Karol Gregor. "Neural Variational Inference and Learning in Belief Networks". In: ICML (2014).
41. References II
[8] David Pfau and Oriol Vinyals. "Connecting Generative Adversarial Networks and Actor-Critic Methods". In: (2016).
[9] Tim Salimans et al. "Evolution Strategies as a Scalable Alternative to Reinforcement Learning". In: arXiv (2017).
[10] John Schulman et al. "Gradient Estimation Using Stochastic Computation Graphs". In: NIPS (2015).
[11] David Silver et al. "Deterministic Policy Gradient Algorithms". In: ICML (2014).
[12] Richard S Sutton et al. "Policy Gradient Methods for Reinforcement Learning with Function Approximation". In: NIPS (1999).