My talk at the SIAM NetSci workshop (2015) on our new spacey random walk and spacey random surfer models and how we derived them. There are many potential extensions and opportunities to use this for analyzing big data as tensors.
Localized methods in graph mining exploit the local structures in a graph instead of attempting to find global structures. These methods are widely successful at all sorts of problems, including community detection, label propagation, and a few others.
PageRank Centrality of dynamic graph structures - David Gleich
A talk I gave at the SIAM Annual Meeting Mini-symposium on the mathematics of the power grid organized by Mahantesh Halappanavar. I discuss a few ideas on how our dynamic centrality could help analyze such situations.
Localized methods for diffusions in large graphs - David Gleich
I describe a few ongoing research projects on diffusions in large graphs and how we can design matrix computations to evaluate them efficiently.
Anti-differentiating Approximation Algorithms: PageRank and MinCut - David Gleich
We study how Google's PageRank method relates to mincut and a particular type of electrical flow in a network. We also explain the details of how the "push method" for computing PageRank helps to accelerate it. This has implications for semi-supervised learning and machine learning, as well as social network analysis.
Spacey random walks and higher-order data analysis - David Gleich
My talk at TMA 2016 (The workshop on Tensors, Matrices, and their Applications) on the relationship between a spacey random walk process and tensor eigenvectors
Anti-differentiating approximation algorithms: A case study with min-cuts, sp... - David Gleich
This talk covers the idea of anti-differentiating approximation algorithms, which is an idea to explain the success of widely used heuristic procedures. Formally, this involves finding an optimization problem solved exactly by an approximation algorithm or heuristic.
Higher-order organization of complex networks - David Gleich
A talk I gave at the Park City Mathematics Institute about our recent work on using motifs to analyze and cluster networks. This involves a higher-order Cheeger inequality in terms of motifs.
Using Local Spectral Methods to Robustify Graph-Based Learning - David Gleich
This is my KDD2015 talk on robustness in semi-supervised learning. The paper is already on Michael Mahoney's website: http://www.stat.berkeley.edu/~mmahoney/pubs/robustifying-kdd15.pdf See the KDD paper for all the details, which this talk is a bit light on.
A copy of my slides from the SILO Seminar at UW Madison on our recent developments for the NEO-K-Means methods including new optimization routines and results.
Big data matrix factorizations and Overlapping community detection in graphs - David Gleich
In a talk at the Chinese Academy of Sciences Institute of Automation, I discuss some of the MapReduce and community detection methods I've worked on.
Spectral clustering with motifs and higher-order structures - David Gleich
I presented these slides at the #strathna meeting in Glasgow in June 2017. They are an updated and enhanced version of the earlier talks on the subject.
Fast relaxation methods for the matrix exponential - David Gleich
The matrix exponential is a matrix computation primitive used in link prediction and community detection. We describe a fast method to compute it using relaxation on a large linear system of equations. This enables us to compute a column of the matrix exponential in sublinear time, or under a second on a standard desktop computer.
Correlation clustering and community detection in graphs and networks - David Gleich
We show a new relationship between various community detection objectives and a correlation clustering framework. This enables us to detect communities with good bounds on the solution.
Gaps between the theory and practice of large-scale matrix-based network comp... - David Gleich
I discuss some runtimes for the personalized PageRank vector and how it relates to open questions in how we should tackle these network based measures via matrix computations.
Relaxation methods for the matrix exponential on large networks - David Gleich
My talk from the Stanford ICME seminar series on doing network analysis and link prediction using a fast algorithm for the matrix exponential on graph problems.
1. Y. Gal, Uncertainty in Deep Learning, 2016
2. P. McClure, Representing Inferential Uncertainty in Deep Neural Networks Through Sampling, 2017
3. G. Kahn et al., Uncertainty-Aware Reinforcement Learning for Collision Avoidance, 2016
4. B. Lakshminarayanan et al., Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles, 2017
5. A. Kendall and Y. Gal, What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?, 2017
6. S. Choi et al., Uncertainty-Aware Learning from Demonstration Using Mixture Density Networks with Sampling-Free Variance Modeling, 2017
7. Anonymous, Bayesian Uncertainty Estimation for Batch Normalized Deep Networks, 2017
In this presentation, we provide a quick intro to Bayesian inference and Gaussian processes, and then relate them to the latest state-of-the-art research on Bayesian deep learning, in order to include uncertainty in deep neural network predictions.
In the presence of relevant physical observations, one can usually calibrate a computer model, and even estimate systematic discrepancies of the model from reality. Estimating and quantifying the uncertainty in this model discrepancy can lead to reliable predictions - so long as the prediction "is similar to" the available physical observations. Exactly how to define "similar" has proven difficult in many applications. Clearly it depends on how well the computational model captures the relevant physics in the system, as well as how portable the model discrepancy is in going from the available physical data to the prediction. This talk will discuss these concepts using computational models ranging from simple to very complex.
How does Google Google: A journey into the wondrous mathematics behind your f... - David Gleich
A talk I gave at the annual meeting for the MetroNY section of the MAA about how Google works from a link-ranking perspective. (http://sections.maa.org/metrony/)
Based on a talk by Margot Gerritsen (which used elements from another talk I gave years ago, yay co-author improvements!)
Fast matrix primitives for ranking, link-prediction and more - David Gleich
I gave this talk at Netflix about some of the recent work I've been doing on fast matrix primitives for link prediction and also some non-standard uses of the nuclear norm for ranking.
Overlapping clusters for distributed computation - David Gleich
My talk from WSDM2012. See the paper on my webpage: http://www.cs.purdue.edu/homes/dgleich/publications/Andersen%202012%20-%20overlapping.pdf
And the codes http://www.cs.purdue.edu/homes/dgleich/codes/overlapping/
A history of PageRank from the numerical computing perspective - David Gleich
We'll survey some of the underlying ideas from Google's PageRank algorithm along the lines of Massimo Franceschet's CACM history.
There are some slight liberties I've taken to make it more accessible.
This talk is a new update based on some of our recent results on doing Tall and Skinny QRs in MapReduce. In particular, the "fast" iterative refinement approximation based on a sample is new.
MapReduce Tall-and-skinny QR and applications - David Gleich
A talk at the SIMONS workshop on Parallel and Distributed Algorithms for Inference and Optimization on how to do tall-and-skinny QR factorizations on MapReduce using a communication avoiding algorithm.
Vertex neighborhoods, low conductance cuts, and good seeds for local communit... - David Gleich
My talk from KDD2012 about vertex neighborhoods and low conductance cuts. See the paper here: http://arxiv.org/abs/1112.0031 and http://dl.acm.org/citation.cfm?id=2339628
A new approximation for full waveform inversion (FWI) that does not require any forward modeling to estimate the gradient, making the routine cheaper and faster while achieving competitive resolution. It also opens the possibility for a post-stack approximation.
Project presented at the GeoConvention 2017 in Calgary, AB, Canada:
http://www.geoconvention.com/uploads/2017abstracts/039_GC2017_FWI_without_tears.pdf
Markov Chain Monitoring - Application to demand prediction in bike sharing sy... - Harshal Chaudhari
The presentation accompanying the paper at SDM 2018 - https://epubs.siam.org/doi/abs/10.1137/1.9781611975321.50
Github: https://github.com/chdhr-harshal/mc-monitor
In networking applications, one often wishes to obtain estimates about the number of objects at different parts of the network (e.g., the number of cars at an intersection of a road network or the number of packets expected to reach a node in a computer network) by monitoring the traffic in a small number of network nodes or edges. We formalize this task by defining the Markov Chain Monitoring problem. Given an initial distribution of items over the nodes of a Markov chain, we wish to estimate the distribution of items at subsequent times. We do this by asking a limited number of queries that retrieve, for example, how many items transitioned to a specific node or over a specific edge at a particular time. We consider different types of queries, each defining a different variant of the Markov Chain Monitoring problem. For each variant, we design efficient algorithms for choosing the queries that make our estimates as accurate as possible. In our experiments with synthetic and real datasets we demonstrate the efficiency and the efficacy of our algorithms in a variety of settings.
Introduction to search and optimisation for the design theorist - Akin Osman Kazakci
A historically important design theory is the state-space search model of Herbert Simon. Over the years, the importance of this model has been consistently downplayed for various reasons. Today, it is not used or discussed very frequently, except to downplay its significance even more, usually without an in-depth analysis.
However, the younger generation of (design) researchers does not know the underlying formalism well enough, nor how it can be used to interpret design phenomena.
This short introduction intends to give the basics of search, optimisation and problem-solving formalisms in a very intuitive way - which also helps to understand more complicated formal models of design.
High-performance graph analysis is unlocking knowledge in computer security, bioinformatics, social networks, and many other data integration areas. Graphs provide a convenient abstraction for many data problems beyond linear algebra. Some problems map directly to linear algebra. Others, like community detection, look eerily similar to sparse linear algebra techniques. And then there are algorithms that strongly resist attempts at making them look like linear algebra. This talk will cover recent results with an emphasis on streaming graph problems where the graph changes and results need to be updated with minimal latency. We’ll also touch on issues of sensitivity and reliability where graph analysis needs to learn from numerical analysis and linear algebra.
Welcome to ViralQR, your best QR code generator - ViralQR
Welcome to ViralQR, your best QR code generator available on the market!
At ViralQR, we design static and dynamic QR codes. Our mission is to make business operations easier and customer engagement more powerful through the use of QR technology. Be it a small-scale business or a huge enterprise, our easy-to-use platform provides multiple choices that can be tailored according to your company's branding and marketing strategies.
Our Vision
We are here to make the process of creating QR codes easy and smooth, thus enhancing customer interaction and making business more fluid. We very strongly believe in the ability of QR codes to change the world for businesses in their interaction with customers and are set on making that technology accessible and usable far and wide.
Our Achievements
Since its inception, ViralQR has successfully served many clients by offering QR codes for their marketing, service delivery, and feedback collection across various industries. Our platform has been recognized for its ease of use and rich features, which help businesses make QR codes.
Our Services
At ViralQR, we offer a comprehensive suite of services that caters to your needs:
Static QR Codes: Create free static QR codes. These QR codes are able to store significant information such as URLs, vCards, plain text, emails and SMS, Wi-Fi credentials, and Bitcoin addresses.
Dynamic QR codes: These also have all the advanced features but are subscription-based. They can directly link to PDF files, images, micro-landing pages, social accounts, review forms, business pages, and applications. In addition, they can be branded with CTAs, frames, patterns, colors, and logos to enhance your branding.
Pricing and Packages
Additionally, there is a 14-day free trial of ViralQR, which is an excellent opportunity for new users to get a feel for the platform. One can easily subscribe from there and experience the full power of dynamic QR codes. The subscription plans are priced flexibly so that practically every business can afford to benefit from our service.
Why choose us?
ViralQR provides services for marketing, advertising, catering, retail, and the like. The QR codes can be posted on fliers, packaging, merchandise, and banners, and can substitute for cash and cards in a restaurant or coffee shop. With QR codes integrated into your business, you can improve customer engagement and streamline operations.
Comprehensive Analytics
Subscribers of ViralQR receive detailed analytics and tracking tools that give a clear view of QR code performance. Our analytics dashboard shows aggregate views and unique views, as well as detailed information about each impression, including time, device, browser, and estimated location by city and country.
So, thank you for choosing ViralQR; we offer nothing but the best in QR code services to meet diverse business needs!
Securing your Kubernetes cluster: a step-by-step guide to success! - KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Generative AI Deep Dive: Advancing from Proof of Concept to Production - Aggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
The new frontiers of AI in RPA with UiPath Autopilot™ - UiPathCommunity
In this free online event, organized by the Italian UiPath Community, you can explore the new features of Autopilot, the tool that integrates Artificial Intelligence into the development and use of Automations.
📕 Together we will look at some examples of using Autopilot in various tools of the UiPath Suite:
Autopilot per Studio Web
Autopilot per Studio
Autopilot per Apps
Clipboard AI
GenAI applied to Document Understanding
👨🏫👨💻 Speakers:
Stefano Negro, UiPath MVPx3, RPA Tech Lead @ BSP Consultant
Flavio Martinelli, UiPath MVP 2023, Technical Account Manager @UiPath
Andrei Tasca, RPA Solutions Team Lead @NTT Data
The Art of the Pitch: WordPress Relationships and Sales - Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that lead to closing the deal.
Elevating Tactical DDD Patterns Through Object Calisthenics - Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... - Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas struggle to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf - Paige Cruz
Monitoring and observability aren’t traditionally found in software curriculums, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company’s observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring and observability to ops, infra, and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
GraphRAG is All You need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Spacey random walks and higher order Markov chains
1. Spacey Random Walks on Higher-Order Markov Chains
David F. Gleich, Purdue University
Joint work with Austin Benson and Lek-Heng Lim; supported by NSF CAREER CCF-1149756 and IIS-1422918
SIAM NetSci15 · David Gleich · Purdue
3. WARNING! This talk presents the "forward" explicit derivation (i.e., lots of little steps) rather than the implicit "backwards" derivation (i.e., big intuitive leaps).
4. PageRank: The initial condition
My dissertation: Models & Algorithms for PageRank Sensitivity
The essence of PageRank: take any Markov chain P, and PageRank creates a related chain with great "utility":
• Unique stationary distribution
• Fast convergence
• Modeling flexibility
(I − αP) x = (1 − α) v
See "PageRank beyond the Web," arXiv:1407.5107, and the article by Jessica Leber in Fast Magazine.
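As a concrete sketch of the equation above: the PageRank vector solves a linear system, so for a small chain we can compute it directly. The 3-state transition matrix, α = 0.85, and uniform v below are made-up illustrative values, not from the talk.

```python
import numpy as np

# Solve the PageRank system (I - alpha*P) x = (1 - alpha) v for a tiny,
# made-up column-stochastic chain P and uniform teleportation vector v.
P = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])   # each column sums to 1
alpha = 0.85
v = np.ones(3) / 3

x = np.linalg.solve(np.eye(3) - alpha * P, (1 - alpha) * v)

# x is the stationary distribution of the PageRank-modified chain:
# nonnegative entries that sum to 1.
print(x)
```

For this symmetric example the answer is the uniform distribution; with a non-uniform v or asymmetric P, the same solve gives the personalized PageRank vector.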
5. Be careful about what you discuss after a talk…
I gave a talk at the Univ. of Chicago and visited Lek-Heng Lim. He told me about a new idea in Markov chain analysis and tensor eigenvalues.
6. Approximate stationary distributions of higher-order Markov chains
A higher-order Markov chain depends on the last few states:
P(X_{t+1} = i | history) = P(X_{t+1} = i | X_t = j, X_{t−1} = k)
These become Markov chains on the product state space, with P(X = [i, j]) = X_{i,j}, but that's usually too large for stationary distributions. The approximation is that we form a rank-1 approximation of that stationary distribution object:
P(X = [i, j]) = x_i x_j
Due to Michael Ng and collaborators.
7. Why? We want to analyze higher-order relationships and multi-way data, things like:
• Enron emails
• Regular hypergraphs
And there are three or more indices, so it's a higher-order Markov chain.
[Figure: multidimensional, multi-faceted data from informatics and simulations]
8. Approximate stationary distributions of higher-order Markov chains
The new problem of computing an approximate stationary distribution is a tensor eigenvector:
x_i = Σ_{jk} P_{ijk} x_j x_k, or x = P x²
For the new problem:
• existence is guaranteed under mild conditions
• uniqueness and convergence require heroic algebra (and are hard to check)
Due to Michael Ng and collaborators.
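To make the tensor eigenvector equation concrete, here is a minimal fixed-point sketch of x = P x² on a made-up 2-state stochastic hypermatrix. As the slide warns, convergence of such iterations is not guaranteed in general; it happens to work on this toy example.

```python
import numpy as np

# Naive fixed-point iteration for x_i = sum_{jk} P_ijk x_j x_k, i.e. x = P x^2.
# P is a made-up order-3 stochastic hypermatrix: each column (j,k) is a
# probability distribution over i. Convergence is NOT guaranteed in general.
n = 2
P = np.zeros((n, n, n))
P[:, 0, 0] = [0.7, 0.3]
P[:, 0, 1] = [0.2, 0.8]
P[:, 1, 0] = [0.2, 0.8]
P[:, 1, 1] = [0.6, 0.4]

x = np.ones(n) / n
for _ in range(200):
    x_new = np.einsum('ijk,j,k->i', P, x, x)   # apply P x^2
    x_new /= x_new.sum()                       # keep x a probability vector
    if np.linalg.norm(x_new - x, 1) < 1e-12:
        break
    x = x_new

residual = np.linalg.norm(np.einsum('ijk,j,k->i', P, x, x) - x, 1)
print(x, residual)
```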
9. Some small quick notes
A stochastic matrix M is a Markov chain. A stochastic hypermatrix / tensor / probability table P is a higher-order Markov chain.
10. PageRank to the rescue! What if we looked at these approximate stationary distributions of a PageRank-modified higher-order chain?
Multilinear PageRank:
x = αP x² + (1 − α) v
• Formally the Li & Ng approximate stationary distribution of the PageRank-modified higher-order Markov chain
• Guaranteed existence!
• Fast convergence? When α < 1/order!
• Uniqueness? When α < 1/order!
"Multilinear PageRank," Gleich, Lim, and Yu, arXiv:1409.1465
11. One nagging question… Is there a stochastic process that underlies this approximation?
12. Meanwhile… Spectral clustering of tensors
Austin Benson (a colleague) asked if there were any interesting method to "cluster" tensors.
"Recall" spectral clustering on graphs:
graph → random walk → second eigenvector (Mᵀ y = λ₂ y) → sweep cut partition minimizing conductance:
min_S φ(S) = min_S #(edges cut) / min(vol(S), vol(S̄))
SIAM Data Mining 2015, arXiv:1502.05058
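The recalled pipeline (random walk → second eigenvector → sweep cut) can be sketched on a toy graph. The two-triangle graph below is a made-up example; the sweep scans prefix sets of the eigenvector ordering and keeps the one with smallest conductance.

```python
import numpy as np

# Sweep cut on a made-up graph: two triangles {0,1,2} and {3,4,5} joined
# by the edge 2-3. We take the second eigenvector of the random walk and
# scan prefixes S of its ordering for the smallest conductance phi(S).
A = np.array([[0,1,1,0,0,0],
              [1,0,1,0,0,0],
              [1,1,0,1,0,0],
              [0,0,1,0,1,1],
              [0,0,0,1,0,1],
              [0,0,0,1,1,0]], dtype=float)
d = A.sum(axis=1)
N = A / np.sqrt(np.outer(d, d))          # symmetric normalized adjacency
vals, vecs = np.linalg.eigh(N)           # eigenvalues ascending
y = vecs[:, -2] / np.sqrt(d)             # second eigenvector of D^{-1} A
order = np.argsort(y)

best_phi, best_S = np.inf, None
vol_total = d.sum()
for size in range(1, len(order)):
    S, comp = order[:size], order[size:]
    cut = A[np.ix_(S, comp)].sum()       # edges leaving S
    phi = cut / min(d[S].sum(), vol_total - d[S].sum())
    if phi < best_phi:
        best_phi, best_S = phi, set(int(i) for i in S)
print(best_S, best_phi)
```

The sweep recovers one of the two triangles, the natural cluster, with conductance 1/7.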
13. Meanwhile… Spectral clustering of tensors
Austin Benson (a colleague) asked if there were any interesting method to "cluster" tensors.
"Conjecture" spectral clustering on tensors:
graph/tensor → higher-order random walk (??????) → second eigenvector → sweep cut partition
SIAM Data Mining 2015, arXiv:1502.05058
14. We tried many
• a priori good and
• retrospectively bad
ideas for the second eigenvector.
15. Austin and I were talking one day …
... about the problem of the process. (He was using Multilinear
PageRank as the “first” eigenvector.) He observed that
One of the five algorithms for multilinear PageRank uses a seq. of Markov chains.
Is there some way to turn this into a random walk?
x_{k+1} = stat. dist. of Markov chain based on α, v, P, and x_k
17. The spacey random walk
Consider a higher-order Markov chain.
If we were perfect, we’d figure out the stationary
distribution of that. But we are spacey!
• On arriving at state j, we promptly
“space out” and forget we came from k.
• But we still believe we are “higher-order”
• So we invent a state k by drawing a random
state from our history.
P(X_{t+1} = i | history) = P(X_{t+1} = i | X_t = j, X_{t−1} = k)
18. The spacey random walk
This is a vertex-reinforced random walk!
e.g. Polya’s urn.
Pemantle, 1992; Benaïm, 1997; Pemantle 2007
P(X_{t+1} = i | X_t = j and the right filtration on history)
= Σ_k P_{i,j,k} · C_k(t) / (t + n)

Let C_k(t) = 1 + Σ_{s=1}^{t} Ind{X_s = k}
(how often we’ve visited state k in the past)
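A direct simulation of this process is short. This is a sketch, assuming P[i, j, k] = Prob(next = i | current = j, invented past = k) and the add-one convention for C_k(t) above; the function name is hypothetical.

```python
import numpy as np

def spacey_random_walk(P, steps, x0=0, rng=None):
    """Simulate a spacey random walk on an n x n x n transition tensor P.
    At each step we 'space out': draw a past state k from the empirical
    history counts C_k(t) = 1 + # visits, then transition via P[:, j, k]."""
    rng = np.random.default_rng() if rng is None else rng
    n = P.shape[0]
    counts = np.ones(n)              # the add-one convention for C_k(t)
    state = x0
    visits = np.zeros(n)
    for _ in range(steps):
        k = rng.choice(n, p=counts / counts.sum())   # invented past state
        state = rng.choice(n, p=P[:, state, k])      # spacey transition
        counts[state] += 1
        visits[state] += 1
    return visits / steps            # empirical occupation frequencies
```

For a completely uniform tensor the occupation frequencies converge to the uniform distribution, which is a quick sanity check.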
19. Stationary distributions of vertex
reinforced random walks
A vertex-reinforced random walk at time t transitions
according to a Markov matrix M given the observed
frequencies.
This has a stationary distribution iff the dynamical system converges.
dx/dt = π[M(x)] − x

P(X_{t+1} = i | X_t = j and the right filtration on history)
= [M(t)]_{i,j} = [M(c(t))]_{i,j}

π[M] is a map to the stationary distribution.
M. Benaïm 1997
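Evolving this dynamical system numerically is the natural algorithm. Here is a forward-Euler sketch where π[·] is computed from the Perron eigenvector; the function names, step size, and tolerances are my own choices.

```python
import numpy as np

def stationary(M):
    """pi[M]: the Perron (stationary) vector of a column-stochastic M."""
    w, V = np.linalg.eig(M)
    x = np.abs(np.real(V[:, np.argmax(np.real(w))]))
    return x / x.sum()

def spacey_stationary(P, h=0.1, tol=1e-10, max_iter=10000):
    """Forward-Euler integration of dx/dt = pi[M(x)] - x,
    where M(x) = sum_k P[:, :, k] * x[k] is column-stochastic."""
    n = P.shape[0]
    x = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        M = np.einsum('ijk,k->ij', P, x)
        dx = stationary(M) - x
        x = x + h * dx
        if np.abs(dx).max() < tol:
            break
    return x
```

Each step keeps x on the probability simplex (π[M(x)] and x both sum to one), so the iterates remain valid distributions.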
20. The Markov matrix for
Spacey Random Walks
A necessary condition for a stationary distribution
(otherwise makes no sense)
Property B. Let P be an order-m, n-dimensional probability table. Then P has property B if there is a unique stationary distribution associated with all stochastic combinations of the last m − 2 modes. That is, M = Σ_{k,ℓ,...} P(:, :, k, ℓ, ...) β_{k,ℓ,...} defines a Markov chain with a unique Perron root when all the β's are positive and sum to one.
dx/dt = π[M(x)] − x, where M = Σ_k P(:, :, k) x_k
This is the transition probability associated
with guessing the last state based on history!
21. We have all sorts of cool results on spacey
random walks… e.g.
Suppose you have a Polya urn with memory…
Then it always has a stationary distribution!
22. Back to Multilinear PageRank
The Multilinear PageRank problem is what we call a
spacey random surfer model.
• This is a spacey random walk
• We add random jumps with probability (1 − α)
It’s also a vertex-reinforced random walk.
Thus, it has a stationary probability if the dynamical system converges:
dx/dt = π[M(x)] − x, with M(x) = α Σ_k P(:, :, k) x_k + (1 − α) v

Which occurs when α < 1/order!
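Forming the surfer's Markov matrix for a given history vector x is a one-liner. This is a sketch with a hypothetical function name; the teleportation vector v is broadcast across columns so M(x) stays column-stochastic.

```python
import numpy as np

def surfer_matrix(P, x, v, alpha):
    """M(x) = alpha * sum_k P[:, :, k] * x[k] + (1 - alpha) * v 1^T,
    the Markov matrix the spacey random surfer follows given history x."""
    n = len(v)
    M_walk = np.einsum('ijk,k->ij', P, x)        # spacey part
    return alpha * M_walk + (1 - alpha) * np.outer(v, np.ones(n))
```

Because each slice P(:, :, k) is column-stochastic and x and v sum to one, every column of M(x) sums to one for any history x.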
23. Some interesting notes about vertex
reinforced random walks
• The power method is NOT the natural algorithm! The natural algorithm is to evolve the ODE.
• It’s unclear if there are any structural
properties that guarantee a stationary
distribution (except for something like the
Multilinear PageRank equation)
• Can be tough to analyze the resulting ODEs
• Asymptotically creates a Markov chain!
24. … back to spectral clustering …
25. Meanwhile …
Spectral clustering of tensors
Austin Benson (a colleague) asked
if there were any interesting method to “cluster” tensors.
“Conjecture” spectral clustering on tensors!
SIAM Data Mining 2015, arXiv:1502.05058
graph/tensor → higher-order random walk → second eigenvector → sweep cut partition
??????!
26. Meanwhile …
Spectral clustering of tensors
Austin Benson (a colleague) asked
if there were any interesting method to “cluster” tensors.
“Conjecture” spectral clustering on tensors!
SIAM Data Mining 2015, arXiv:1502.05058
graph/tensor → higher-order random walk → second eigenvector → sweep cut partition
M(x)^T y = λ₂ y
Use the asymptotic Markov matrix!
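Extracting that second eigenvector is standard linear algebra. Here is a sketch on an ordinary column-stochastic matrix (in the talk, M would be the asymptotic matrix M(x)); the function name is my own. On two triangles joined by an edge, the sign pattern of y recovers the two clusters.

```python
import numpy as np

def second_left_eigenvector(M):
    """For a column-stochastic M, return the eigenvector y with
    M^T y = lambda_2 y, lambda_2 the second-largest real eigenvalue."""
    w, V = np.linalg.eig(M.T)
    order = np.argsort(-np.real(w))
    return np.real(V[:, order[1]])
```

The sign of y splits the nodes; feeding y into the usual sweep cut then yields the partition.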
27. Problem: current methods only consider edges
… and that is not enough for current problems
In social networks, we want to penalize cutting triangles more than
cutting edges. The triangle motif represents stronger social ties.
28. Problem: current methods only consider edges
[Figure: transcription network with a feedforward-loop motif among SPT16, HO, CLN1, CLN2, and SWI4_SWI6]
In transcription networks, the “feedforward loop” motif represents biological function. Thus, we want to look for clusters of this structure.
29. An example with a layered flow network
[Figure: layered network on nodes 0–11]
• The network “flows” downward
• Use directed 3-cycles to model flow
[Figure: directed 3-cycle motifs on i, j, k with weights 1, 1, 1, 2]
• Tensor spectral clustering: {0,1,2,3}, {4,5,6,7}, {8,9,10,11}
• Standard spectral: {0,1,2,3,4,5,6,7}, {8,10,11}, {9}
30.
WAW2015 · EURANDOM · Eindhoven · Netherlands
Workshop on Algorithms and Models for the Web Graph
(but it’s grown to be all types of network analysis)
December 10–11
Winter School on Complex Network and Graph Models
December 7–8
Submissions Due July 25th!
31. Time for Lots of Questions!
Manuscripts!
Li, Ng. On the limiting probability distribution of a transition
probability tensor. Linear & Multilinear Algebra 2013.
Gleich. PageRank beyond the Web. (accepted at SIAM Review)
Gleich, Lim, Yu. Multilinear PageRank. (under review…)
Benson, Gleich, Leskovec. Tensor Spectral Clustering for partitioning higher order network structures. SDM 2015, arXiv:1502.05058.
https://github.com/arbenson/tensor-sc
Benson, Gleich, Leskovec. Forthcoming. (Much better method…)
Benson, Gleich, Lim. The Spacey Random Walk. In prep.