1. Motivation: why do we need low-rank tensors
2. Tensors of the second order (matrices)
3. CP, Tucker and tensor train tensor formats
4. Many classical kernels have (or can be approximated in) a low-rank tensor format
5. Post processing: Computation of mean, variance, level sets, frequency
Processing Reachability Queries with Realistic Constraints on Massive Network... - BigMine
Massive graphs are ubiquitous in various application domains, such as social networks, road networks, communication networks, biological networks, RDF graphs, and so on. Such graphs are massive (for example, with hundreds of millions of nodes and edges or even more) and contain rich information (for example, node/edge weights, labels and textual contents). In such massive graphs, an important class of problems is to process various graph structure related queries. Graph reachability, as an example, asks whether a node can reach another in a graph. However, the large graph scale presents new challenges for efficient query processing.
In this talk, I will introduce two new yet important types of graph reachability queries: weight constraint reachability, which imposes an edge-weight constraint on the answer path, and k-hop reachability, which imposes a length constraint on the answer path. With such realistic constraints, we can find more meaningful and practically feasible answers. These two reachability queries have wide applications in many real-world problems, such as QoS routing and trip planning.
The presentation is an introduction to decision making with approximate Bayesian Methods. It consists of a review of Bayesian Decision Theory and Variational Inference along with a description of Loss Calibrated Variational Inference.
In the study of probabilistic integrators for deterministic ordinary differential equations, one goal is to establish the convergence (in an appropriate topology) of the random solutions to the true deterministic solution of an initial value problem defined by some operator. The challenge is to identify the right conditions on the additive noise with which one constructs the probabilistic integrator, so that the convergence of the random solutions has the same order as the underlying deterministic integrator. In the context of ordinary differential equations, Conrad et al. (Stat. Comput., 2017) established the mean-square convergence of the solutions for globally Lipschitz vector fields, under the assumptions of i.i.d., state-independent, mean-zero Gaussian noise. We extend their analysis by considering vector fields that need not be globally Lipschitz, and by considering non-Gaussian, non-i.i.d. noise that can depend on the state and that can have nonzero mean. A key assumption is a uniform moment bound condition on the noise. We obtain convergence in the stronger topology of the uniform norm, and establish results that connect this topology to the regularity of the additive noise. Joint work with A. M. Stuart (Caltech), T. J. Sullivan (Free University of Berlin).
Image sciences, image processing, image restoration, photo manipulation. Image and video representation. Digital versus analog imagery. Quantization and sampling. Sources and models of noise in digital CCD imagery: photon, thermal, and readout noise. Sources and models of blur. Convolutions and point spread functions. Overview of other standard models, problems, and tasks: salt-and-pepper and impulse noise, halftoning, inpainting, super-resolution, compressed sensing, high dynamic range imagery, demosaicing. Short introduction to other types of imagery: SAR, sonar, ultrasound, CT, and MRI. Linear and ill-posed restoration problems.
A generalized class of normalized distance functions called Q-Metrics is described in this presentation. The Q-Metrics approach relies on a unique functional, using a single bounded parameter (Lambda), which characterizes the conventional distance functions in a normalized per-unit metric space. In addition to this coverage property, a distinguishing and extremely attractive characteristic of the Q-Metric function is its low computational complexity. Q-Metrics satisfy the standard metric axioms. Novel networks for classification and regression tasks are defined and constructed using Q-Metrics. These new networks are shown to outperform conventional feed forward back propagation networks with the same size when tested on real data sets.
Robust Control of Uncertain Switched Linear Systems based on Stochastic Reach... - Leo Asselborn
This presentation proposes an approach to algorithmically synthesize control strategies for set-to-set transitions of uncertain discrete-time switched linear systems, based on a combination of tree search and reachable-set computations in a stochastic setting. For given Gaussian distributions of the initial states and disturbances, state sets which are reachable to a chosen confidence level under the effect of time-variant hybrid control laws are computed using principles of the ellipsoidal calculus. The proposed algorithm iterates over sequences of the discrete states and LMI-constrained semi-definite programming (SDP) problems to compute stabilizing controllers, while polytopic input constraints are considered. An example for illustration is included.
Sensors and Samples: A Homological Approach - Don Sheehy
In their seminal work on homological sensor networks, de Silva and Ghrist showed the surprising fact that it is possible to certify the coverage of a coordinate-free sensor network even with very minimal knowledge of the space to be covered. We give a new, simpler proof of the de Silva-Ghrist Topological Coverage Criterion that eliminates any assumptions about the smoothness of the boundary of the underlying space, allowing the results to be applied to much more general problems. The new proof factors the geometric, topological, and combinatorial aspects of this approach. This factoring reveals an interesting new connection between the topological coverage condition and the notion of weak feature size in geometric sampling theory. We then apply this connection to the problem of showing that for a given scale, if one knows the number of connected components and the distance to the boundary, one can also infer the higher Betti numbers or provide strong evidence that more samples are needed. This is in contrast to previous work, which merely assumed a good sample and gives no guarantees if the sampling condition is not met.
Slides from my introductory talk on Lossy Kernelization at the Parameterized Complexity Summer School 2017, co-located with ALGO 2017 and held at TU Wien, Vienna.
We propose a new stochastic first-order algorithmic framework to solve stochastic composite nonconvex optimization problems that covers both finite-sum and expectation settings. Our algorithms rely on the SARAH estimator and consist of two steps: a proximal gradient step and an averaging step, making them different from existing nonconvex proximal-type algorithms. The algorithms only require an average smoothness assumption on the nonconvex objective term, and an additional bounded variance assumption if applied to expectation problems. They work with both constant and adaptive step-sizes, while allowing single samples and mini-batches. In all these cases, we prove that our algorithms can achieve the best-known complexity bounds. One key step of our methods is the new constant and adaptive step-sizes that help to achieve the desired complexity bounds while improving practical performance. Our constant step-size is much larger than in existing methods, including proximal SVRG schemes, in the single-sample case. We also specialize the algorithm to the non-composite case, where it covers existing state-of-the-art methods in terms of complexity bounds. Our update also allows one to trade off between step-sizes and mini-batch sizes to improve performance. We test the proposed algorithms on two composite nonconvex problems and neural networks using several well-known datasets.
Predicates & Quantifiers
CMSC 56 | Discrete Mathematical Structure for Computer Science
September 1, 2018
Instructor: Allyn Joy D. Calcaben
College of Arts & Sciences
University of the Philippines Visayas
In this talk, I address two new ideas in sampling geometric objects. The first is a new take on adaptive sampling with respect to the local feature size, i.e., the distance to the medial axis. We recently proved that such samples can be viewed as uniform samples with respect to an alternative metric on the Euclidean space. The second is a generalization of Voronoi refinement sampling. There, one also achieves an adaptive sample while simultaneously "discovering" the underlying sizing function. We show how to construct such samples that are spaced uniformly with respect to the $k$th nearest neighbor distance function.
How to find a cheap surrogate to approximate Bayesian Update Formula and to a... - Alexander Litvinenko
We suggest a new vision of the classical Bayesian update formula. We expand all ingredients in a Polynomial Chaos Expansion (PCE) and write out a new formula for the Bayesian update of the PCE coefficients. This formula is derived from Minimum Mean Square Estimation. One starts with the prior PCE, takes measurements into account, and obtains the posterior PCE coefficients, without any MCMC sampling.
"The Metropolis adjusted Langevin Algorithm for log-concave probability measures in high dimensions", talk by Andreas Elberle at the BigMC seminar, 9th June 2011, Paris.
Control of Discrete-Time Piecewise Affine Probabilistic Systems using Reachab... - Leo Asselborn
This presentation proposes an algorithmic approach to synthesize stabilizing control laws for discrete-time piecewise affine probabilistic (PWAP) systems based on computations of probabilistic reachable sets. The considered class of systems contains probabilistic components (with Gaussian distribution) modeling additive disturbances and state initialization. The probabilistic reachable state sets contain all states that are reachable with a given confidence level under the effect of time-variant control laws. The control synthesis uses principles of the ellipsoidal calculus, and it considers that the system parametrization depends on the partition of the state space. The proposed algorithm uses LMI-constrained semi-definite programming (SDP) problems to compute stabilizing controllers, while polytopic input constraints and transitions between regions of the state space are considered. The formulation of the SDP is adopted from a previous work in [1] for switched systems, in which the switching of the continuous dynamics is triggered by a discrete input variable. Here, as opposed to [1], the switching occurs autonomously and an algorithmic procedure is suggested to synthesize a stabilizing controller. An example for illustration is included.
In this talk, we give an overview of results on numerical integration in Hermite spaces. These spaces contain functions defined on $\mathbb{R}^d$, and can be characterized by the decay of their Hermite coefficients. We consider the case of exponentially as well as polynomially decaying Hermite coefficients. For numerical integration, we either use Gauss-Hermite quadrature rules or algorithms based on quasi-Monte Carlo rules. We present upper and lower error bounds for these algorithms, and discuss their dependence on the dimension $d$. Furthermore, we comment on open problems for future research.
The Art of the Pitch: WordPress Relationships and Sales - Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that lead to closing the deal.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... - Ramesh Iyer
In today's fast-changing business world, companies must adapt and embrace new ideas to keep up with the competition. However, fostering a culture of innovation takes work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... - Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
GraphRAG is All You Need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
UiPath Test Automation using UiPath Test Suite series, part 3 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Connector Corner: Automate dynamic content and events by pushing a button - DianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Smart TV Buyer Insights Survey 2024 - 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
JMeter webinar - integration with InfluxDB and Grafana - RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
3. Near Neighbor Problem
Given a set of points V, a distance metric d, and a query point q: is there any point x close to the query point q, i.e., d(x, q) ≤ R? This is easy in low dimension, but the complexity increases exponentially with the dimension.
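As a baseline before any hashing, the exact near neighbor query can be answered by a plain linear scan over all N points. A minimal Python sketch (the function name and the choice of Euclidean distance as d are ours, for illustration only):

```python
import math

def near_neighbor(points, q, R):
    """Linear scan: return any point x with d(x, q) <= R, else None.

    Costs O(N) distance evaluations per query; this is the cost the
    hashing scheme below is designed to beat."""
    for x in points:
        if math.dist(x, q) <= R:  # Euclidean metric as an example of d
            return x
    return None
```

For example, `near_neighbor([(0, 0), (3, 4)], (0, 1), 1.5)` returns `(0, 0)`.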
4. Approximate Near Neighbor Problem
Given a set of points V, a distance metric d, and a query point q, the (c, R)-approximate near neighbor problem requires that if there exists a point x such that d(x, q) ≤ R, then one must find a point x' such that d(x', q) ≤ cR, with probability > 1 − δ for a given δ > 0.
The technique that we will use to solve it is Locality Sensitive Hashing.
5. Locality Sensitive Hashing
A family of hash functions H is said to be (c, R, p1, p2)-sensitive for a distance metric d when:
1. Pr_{h∼H}[h(x) = h(y)] ≥ p1 for all x and y such that d(x, y) ≤ R
2. Pr_{h∼H}[h(x) = h(y)] ≤ p2 for all x and y such that d(x, y) > cR
For H to be an LSH family, we need p1 > p2.
6. Locality Sensitive Hashing: Example
Let V ⊆ {0, 1}^n and let d(x, y) be the Hamming distance between x and y. Let R << n and cR << n, and define H = {h1, h2, ..., hn} such that hi(x) = xi. Then p1 ≥ 1 − R/n and p2 ≤ 1 − cR/n.
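The bit-sampling family from the example is easy to simulate. The sketch below (all names are ours) draws h uniformly from H and estimates the collision probability for two points at Hamming distance r, which for this family equals exactly 1 − r/n:

```python
import random

def sample_bit_hash(n, rng):
    """Draw h uniformly from H = {h_1, ..., h_n}, where h_i(x) = x_i."""
    i = rng.randrange(n)
    return lambda x: x[i]

n = 20
x = (0,) * n
y = (1,) * 5 + (0,) * (n - 5)  # Hamming distance between x and y is r = 5
rng = random.Random(0)

trials = 100_000
collisions = 0
for _ in range(trials):
    h = sample_bit_hash(n, rng)  # same h applied to both points
    if h(x) == h(y):
        collisions += 1
# collisions / trials ≈ 1 - r/n = 0.75, consistent with p1 ≥ 1 - R/n
```

The empirical collision rate concentrates around 1 − r/n, matching the bounds on p1 and p2 above.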
7. Locality Sensitive Hashing for solving the (c,R)-NN problem
Let H be a (c, R, p1, p2)-sensitive family. Draw h_{i,j} ∼ H for i ∈ [1, K], j ∈ [1, L], and define g_j = (h_{1,j}, h_{2,j}, ..., h_{K,j}) for all j ∈ [1, L].
8. Locality Sensitive Hashing for solving the (c,R)-NN problem
Preprocessing: for all x ∈ V and for all j ∈ [L], add x to bucket_j(g_j(x)). Time = O(NLK).
Query(q):
for j = 1, 2, ..., L
  for all x ∈ bucket_j(g_j(q))
    if d(x, q) ≤ cR then return x
return none
Time = O(KL + NLF), where F is the probability, for any given j, that a point x with d(x, q) > cR is hashed by g_j to the same bucket as q.
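The preprocessing and query steps above can be sketched in Python for the bit-sampling family over {0, 1}^n (class and variable names are ours; this is an illustrative implementation under those assumptions, not the speaker's code):

```python
import random
from collections import defaultdict

class HammingLSH:
    """(c, R)-NN index for points in {0,1}^n using bit-sampling hashes.

    Each g_j concatenates K sampled coordinates; L independent hash
    tables are maintained."""

    def __init__(self, n, K, L, seed=0):
        rng = random.Random(seed)
        # bits[j] lists the K coordinates defining g_j
        self.bits = [[rng.randrange(n) for _ in range(K)] for _ in range(L)]
        self.tables = [defaultdict(list) for _ in range(L)]

    def _g(self, j, x):
        return tuple(x[i] for i in self.bits[j])

    def index(self, points):
        # Preprocessing: for all x and all j, add x to bucket_j(g_j(x)). O(NLK).
        for x in points:
            for j, table in enumerate(self.tables):
                table[self._g(j, x)].append(x)

    def query(self, q, cR):
        # Scan the L buckets q falls into; return any point within distance cR.
        for j, table in enumerate(self.tables):
            for x in table[self._g(j, q)]:
                if sum(a != b for a, b in zip(x, q)) <= cR:  # Hamming distance
                    return x
        return None
```

Note that the query checks the true distance of every candidate, so a returned point is always within cR of q; only the probability of finding one depends on K and L.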
9. Locality Sensitive Hashing for solving the (c,R)-NN problem
How large is F? Given x and y with d(x, y) > cR,
F = Pr[g_j(x) = g_j(y) | d(x, y) > cR] = ∏_{i=1}^{K} Pr[h_{i,j}(x) = h_{i,j}(y) | d(x, y) > cR] ≤ p2^K.
Hence the query time is O(KL + N L p2^K).
11. Locality Sensitive Hashing for solving the (c,R)-NN problem
How to choose K and L: set L = 1/p1^K. The success probability becomes 1 − 1/e; if δ = 1/e, we are happy!
To minimize the query cost, make the second term O(L) by setting N p2^K = 1. We then have
N = 1/p2^K = (1/p1^K)^{log(1/p2)/log(1/p1)} = L^{log(1/p2)/log(1/p1)},
so L = N^ρ with ρ = log(1/p1)/log(1/p2).
Example: p1 = 0.1, p2 = 0.01 leads to ρ = 0.5, L = √N, K = O(log N). Preprocessing time = O(N√N log N), query time = O(√N log N).
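The parameter choice above can be computed directly from p1, p2, and N. A short sketch (the function name is ours; ceilings are added so K and L come out as integers):

```python
import math

def lsh_params(N, p1, p2):
    """Choose K so that N * p2**K ≈ 1, then L = N**rho tables,
    with rho = log(1/p1) / log(1/p2)."""
    K = math.ceil(math.log(N) / math.log(1 / p2))
    rho = math.log(1 / p1) / math.log(1 / p2)
    L = math.ceil(N ** rho)
    return K, L, rho
```

For p1 = 0.1, p2 = 0.01, and N = 10000, this gives ρ = 0.5 and L ≈ √N = 100, matching the example above.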