This document discusses applying topic modeling to analyze political discourse from two years of Dail debates. It outlines the 7 key stages to transform unstructured discussion data into insightful narratives: 1) collate data, 2) create a "bag-of-words", 3) model parameters, 4) determine optimum topics, 5) name topics, 6) visualize the model, and 7) garner insight. The document then provides details on implementing latent Dirichlet allocation to statistically group documents based on word probabilities and determine topics.
Einstein published his ideas and became pivotal in shifting the way we think about physics, from the Newtonian model to the quantum; in turn this changed the way we think about the world and allowed us to develop new ways of engaging with it.
We are at a similar juncture. The development of computational technologies allows us to think about astronomical volumes of data and to make meaning of that data.
The mindshift that occurs is that “the machine is our friend”. The computer, like all machines, extends our capabilities. As a consequence the types of thinking now required in industry are those that get away from thinking like a computer and shift towards creative engagement with possibilities. Logical thinking is still necessary but it starts to be driven by imagination.
Computational thinking and data science change the way we think about defining and solving problems.
The age of creativity - which increasingly extends its impact from arts applications to business, scientific, technological, entrepreneurship, political, and other contexts.
In a world where many of our digital spaces are becoming more closed than ever, open data is a concept that is rapidly on the rise.
In this talk we explore what open data is (and what it isn't), and why we should care about it. We'll look at how you can introduce it into your projects with regards to practical publication and consumption, and discuss some useful tools and reference points.
Open data isn't just dry and technical - it gives us great scope to be creative, and throughout this talk we'll go through some of the amazing things that it has been used for globally in the hope that it will inspire you to create something amazing yourself.
Data-Ed: A Framework for NoSQL and Hadoop (Data Blueprint)
Big Data and NoSQL continue to make headlines everywhere. However, most of what has been written about these topics is focused on the hardware, services, and scale out. But what about a Big Data and NoSQL Strategy, one that supports your business strategy? Virtually every major organization thinking about these data platforms is faced with the challenge of figuring out the appropriate approach and the requirements. This presentation will provide guidance on how to think about and establish realistic Big Data management plans and expectations. We will introduce a framework for evaluating the various choices when it comes to implementing and succeeding with Big Data/NoSQL and show how to demonstrate a sample use case.
Data-Ed Webinar: A Framework for Implementing NoSQL, Hadoop (DATAVERSITY)
Takeaways:
A Framework for evaluating Big Data techniques
Deciding on a Big Data platform – How do you know which one is a good fit for you?
The means by which big data techniques can complement existing data management practices
The prototyping nature of practicing big data techniques
The distinct ways in which utilizing Big Data can generate business value
This presentation was given by Christian Reimsbach-Kounatze of the OECD at the CERI Conference on Innovation, Governance and Reform in Education on 5 November 2014 during session 6.b: The Role of “Big Data”.
Talk given at Apps World London in November 2015 - a shorter version of a previously given presentation - see my Fronteers slides for the full version.
Basic Overview of XBRL
“… in this networked world, there is a need for greater flow of knowledge across boundaries.
…. knowledge needs structure - without structure you cannot retrieve it efficiently and you cannot personalize
…the internet has facilitated access to information, but complicated the task of identifying relevant knowledge
Third presentation in our seminar on business intelligence dashboards. Derek Murphy works for National Grid and relates learning points from over 30 years' experience of delivering business intelligence projects.
Presentation also available on YouTube https://www.youtube.com/watch?v=Er90qIA2S7U
Presentation by Kasper Kisjes (Rijkswaterstaat) and Christoph Balduck (Data T...), Patrick Van Renterghem
Kasper and Christoph explained how to apply data governance, data discovery and data cataloging across 200+ data sources, and how to deal with a connect vs. collect strategy in a centralized hub of data.
Slides from my talk at the 13th European Conference on e-Government. Preliminary testing of Yu and Robinson's framework for evaluating characteristics of public sector data using data from the UK's data.gov.uk portal.
CIM 4.0 is part of 4IR, but it is not properly understood or explained. This presentation offers some additional views and perspectives to aid students to understand the value and impact of CIM 4.0.
All the hype today is about Big Data and Analytics, but many people seem to ignore the fact that if you didn't get the "Small Data" right, there is absolutely no way that "Big Data" will add value to your organization; it will just add confusion.
R A Longhorn Presentation at Taiwan Open Data Forum, Taipei, 9 July 2014 (GSDI Association)
Big Data Meets Open Data: Challenges and Issues presentation of Roger Longhorn, Operations & Communications Manager, GSDI Association, delivered at the Taiwan Open Data Forum, 9 July 2014 in Taipei
New Horizons for a Data-Driven Economy – A Roadmap for Big Data in Europe (inside-BigData.com)
In this video from the ISC Big Data'14 Conference, Edward Curry from the NUI Galway & Nuria de Lama Sanchez from Atos present: New Horizons for a Data-Driven Economy – A Roadmap for Big Data in Europe.
"In this talk we summarize the results of the BIG project including analysis of foundational Big Data research technologies, technology and strategy roadmaps to enable business to understand the potential of Big Data technologies across different sectors, together with the necessary collaboration and dissemination infrastructure to link technology suppliers, integrators and leading user organizations."
Learn more:
http://www.isc-events.com/bigdata14/schedule.html
and
http://big-project.eu/
Watch the video presentation: http://wp.me/p3RLEV-37G
S ba0881 big-data-use-cases-pearson-edge2015-v7 (Tony Pearson)
IBM is a market leader in big data and analytics solutions. This session explains the basics of Big Data, with actual use cases of clients who have benefited from IBM solutions in this space, followed by architectures with IBM BigInsights, BigSQL, Platform Symphony and Spectrum Scale.
A Practical Application of LDA using Dail Debate Records
1. A practical application of topic modelling
Using 2 years of Dail Debate
conor@fabrikatyr.com
+353851201772919 Jan 2014 FABRIKATYR – TOPIC MODELLING POLITICAL DISCOURSE
@Fabrikatyr
@Conr – Conor Duke
@Tcarnus – Dr. Tim Carnus
#UCDDataPol
2. Transforming raw data to insight for a particular audience is not about algorithms alone
Data → Insight
Good Data Science makes 'The Gap' as small as possible
3. Finding the most suitable application of Topic modelling for 'unstructured discussion' is critical
Approaches to topic modelling: Semantic (subject-matter corpus vs. general corpus), Statistical (word probability, paragraph structure, word distance), or a mixture of all?
Analysing political debate discourse has the following issues:
• Few / little 'training' texts
• Highly variable sentence length
• Distinct word distributions
• Statistical word probability has readily available implementations and can resolve these challenges
4. What is statistical 'Topic Modelling' using 'Word Frequency'?
Grouping documents based on the probability of words occurring in each document
http://people.cs.umass.edu/~wallach/talks/priors.pdf
Each topic is described by a group of words. To 'topic model' we:
1. Calculate what 'words' are in a 'topic'
2. Predict the probability of a single response being a particular topic
3. Group the responses into 'topics'
5. We applied 'Latent Dirichlet Allocation' (LDA) to a 'bag-of-words' using the 'Gibbs sampling' method to topic model
Bag-of-words converts each response into a set of unordered single words
LDA repeatedly examines the probability of the words in each response and establishes 'common sets' (topics)
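The bag-of-words conversion and the Gibbs sampling loop described above can be sketched in a few dozen lines. This is a minimal, illustrative collapsed Gibbs sampler on an invented toy corpus, not the implementation used in the talk (which used off-the-shelf R/Python packages); the hyperparameters `alpha` and `beta` are conventional defaults chosen for the sketch.

```python
import random
from collections import Counter

def bag_of_words(response):
    """Convert a response into an unordered list of single words."""
    return response.lower().split()

def gibbs_lda(docs, k, alpha=0.1, beta=0.01, iters=200, seed=1):
    """Randomly assign each word a topic, then repeatedly resample
    each assignment from P(topic | everything else)."""
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    v = len(vocab)
    nkw = [Counter() for _ in range(k)]   # topic -> word counts
    ndk = [[0] * k for _ in docs]         # doc -> topic counts
    nk = [0] * k                          # words per topic
    z = []                                # topic assignment per word position
    for di, d in enumerate(docs):
        zd = []
        for w in d:
            t = rng.randrange(k)
            zd.append(t)
            nkw[t][w] += 1; ndk[di][t] += 1; nk[t] += 1
        z.append(zd)
    for _ in range(iters):
        for di, d in enumerate(docs):
            for wi, w in enumerate(d):
                t = z[di][wi]
                nkw[t][w] -= 1; ndk[di][t] -= 1; nk[t] -= 1
                # P(topic j) ∝ (doc-topic count + alpha) *
                #              (topic-word count + beta) / (topic size + V*beta)
                weights = [(ndk[di][j] + alpha) * (nkw[j][w] + beta) / (nk[j] + v * beta)
                           for j in range(k)]
                r = rng.random() * sum(weights)
                new_t = k - 1
                for j, wgt in enumerate(weights):
                    r -= wgt
                    if r <= 0:
                        new_t = j
                        break
                z[di][wi] = new_t
                nkw[new_t][w] += 1; ndk[di][new_t] += 1; nk[new_t] += 1
    return nkw, ndk

docs = [bag_of_words(r) for r in [
    "bank debt mortgage arrears bank",
    "mortgage bank repossession arrears",
    "water charge meter water charge",
    "water meter charge irish water",
]]
topic_words, doc_topics = gibbs_lda(docs, k=2)
```

With two clearly separated vocabularies like this, the sampler tends to settle with the 'bank' responses and the 'water' responses dominated by different topics.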
6. This is a practical talk… so we are going to focus on making that gap as narrow as possible
7. How to practically apply topic modelling?
There are 7 key stages to transform unstructured discussion into an insightful narrative:
1. Collate data
2. Create 'bag-of-words'
3. Model parameters
4. Optimum topics
5. Name topics
6. Visualise
7. Garner insight
8. Sourcing the data
• Using a publicly available crawler was unsuccessful at both publicly and privately run websites
• Modern websites will provide an 'API'* to bulk query the data in XML or JSON format
*A way of requesting data without using your web browser
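Consuming such an API usually means parsing a JSON (or XML) payload into a list of responses. A stdlib-only sketch, where the payload shape and field names (`results`, `speaker`, `text`) are hypothetical; a real debates API documents its own schema, and in practice you would fetch the payload with `urllib.request` rather than hardcoding it:

```python
import json

# Hypothetical API response body, stood in for a real HTTP fetch
payload = """
{
  "results": [
    {"speaker": "Deputy A", "text": "The banks have a veto."},
    {"speaker": "Deputy B", "text": "Thousands of families are facing eviction."}
  ]
}
"""

records = json.loads(payload)["results"]
# Keep only the spoken text: this becomes the input to the bag-of-words step
responses = [r["text"] for r in records]
```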
9. Data Carpentry – converting the extract to a 'bag-of-words' can be complicated
• No data set is ever ready to operate on 'out-of-the-box'
• Challenges included: character encoding, Irish characters
'tm' (R) or pandas (Python) & regular expressions to the rescue!
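A stdlib-only sketch of this carpentry step, normalising encoding (including accented Irish characters) and stripping non-word noise with `re` before tokenising; the talk used 'tm' or pandas for this, so treat this as an illustration of the idea rather than their pipeline:

```python
import re
import unicodedata

def clean(text):
    # Fold accented characters (e.g. Irish fadas) to ASCII where possible
    text = unicodedata.normalize("NFKD", text)
    text = text.encode("ascii", "ignore").decode("ascii")
    # Lower-case and keep only letters and whitespace
    text = re.sub(r"[^a-z\s]", " ", text.lower())
    # Collapse runs of whitespace left behind by the substitution
    return re.sub(r"\s+", " ", text).strip()

print(clean("Dáil Éireann, 19 Jan 2014"))
```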
10. A few 'tricks' to lighten the processing load
Stop-word removal
'Function words' such as the, of, to, and, a, etc. will pervade the learned topics, hiding the actual topics
Stemming
A semantic method which simplifies things, but dialects can distort
Non-function stop-word lists must often be domain-dependent, and there are inevitably cases where filtering causes noise or misses patterns that may be of interest
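Both tricks can be sketched with a hand-rolled stop-word list and a deliberately naive suffix stemmer. Real projects would use a domain-tuned list and a proper stemmer (e.g. Porter), and both caveats above apply: stemming can distort dialect forms, and over-aggressive filtering can hide patterns of interest.

```python
# A tiny, illustrative stop-word list of function words
STOPWORDS = {"the", "of", "to", "and", "a", "in", "is", "that", "will", "be"}

def stem(word):
    # Naive suffix stripping, purely illustrative (not Porter stemming)
    for suffix in ("ing", "ions", "ion", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def preprocess(tokens):
    # Drop function words, then stem what remains
    return [stem(t) for t in tokens if t not in STOPWORDS]

tokens = "the banks will be dealing with the restructuring of arrears".split()
print(preprocess(tokens))
```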
11. There are a number of published methods for optimum topic number selection, but the biggest constraint is the level of granularity the research question requires
• Harmonic Mean – "sum of lowest average probability" for each topic distribution
• AIC – balance of "harmonic mean" against model complexity
• Entropy – least amount of disorder in the topics
Using Kullback–Leibler divergence we can spot local minima and pick the optimum number based on how many topics we want to 'name'
Local minima provide a chance to explore the trade-off between granularity and consistency
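The harmonic-mean criterion above can be sketched as follows: approximate the marginal likelihood of the corpus for each candidate K by the harmonic mean of the per-sample likelihoods from the Gibbs chain, and prefer the K that maximises it. The log-likelihood values below are illustrative placeholders, not real model output.

```python
import math

def harmonic_mean_log(log_likelihoods):
    """Log of the harmonic mean of likelihoods, given their logs,
    computed stably via log-sum-exp on the negated values."""
    n = len(log_likelihoods)
    neg = [-ll for ll in log_likelihoods]
    m = max(neg)
    log_sum = m + math.log(sum(math.exp(x - m) for x in neg))
    return math.log(n) - log_sum

# Hypothetical per-iteration log-likelihoods for three candidate topic counts
samples = {
    10: [-1052.1, -1049.8, -1050.5],
    20: [-1031.4, -1030.2, -1032.0],
    40: [-1040.9, -1043.3, -1041.7],
}
best_k = max(samples, key=lambda k: harmonic_mean_log(samples[k]))
```

As the slide notes, a mechanically 'best' K is only a starting point; the research question's required granularity decides how many topics are worth naming.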
12. The topics need to be 'named'; this is a manual process which requires both the 'most probable' words in each topic and the 'most likely' responses for each topic

Responses for 'Bank' topic (response | topic score):
• "The banks have a veto." | 1
• "The banks were not dealing with them. Will there be independent oversight of the deals the banks strike with individuals? Will this oversight have teeth and ensure that the restructuring that may take place will be effective and will have real meaning for people who find themselves in arrears?" | 0.953571429
• "There are not 30,000 families facing eviction, nor will there be. However, there are 30,000 families who would be facing that prospect if we had not put in place the measures we have put in place to deal with the issue of mortgage arrears." | 0.948684211
• "Thousands of Irish families are facing eviction. Ulster Bank alone has 4,700 repossession cases before the courts." | 0.925
• "David Hall has said that up to 50 new repossession cases are coming before the courts every month." | 0.91875
• "There is enormous relief for distressed mortgage holders and their families when it becomes clear that they are not in danger of losing their houses." | 0.91875

Most probable words per topic (word, probability):
Magdelene Laundry: woman 0.103, mother 0.019, laundry 0.018, institution 0.015, home 0.015, report 0.012, life 0.011, baby 0.01
Non-topic: one 0.013, u 0.013, time 0.01, would 0.008, know 0.008, many 0.008, year 0.008, say 0.008
Bank Debit: debt 0.021, bank 0.02, billion 0.013, fiscal 0.013, economy 0.012, economic 0.011, year 0.011, irish 0.011
Shannon Airport No fly: force 0.034, defence 0.029, airport 0.018, member 0.013, data 0.012, Shannon 0.012, operation 0.011, international 0.011
Water Charges: water 0.061, charge 0.03, irish 0.024, company 0.016, tax 0.013, service 0.011, pay 0.011, bord 0.01
13. The model still needs to be visualised
Again we use Kullback–Leibler divergence to map the topics against each other. Each word has a measure of saliency.
Saliency is a compromise between a word's overall frequency and its distinctiveness. A word's distinctiveness is a measure of that word's distribution over topics.
By visualising the word distributions in each topic we understand them better
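The saliency measure described above (popularised by Chuang et al.'s Termite work) can be computed directly: a word's saliency weights its overall frequency by its distinctiveness, where distinctiveness is the KL divergence between the topic distribution given the word, P(T|w), and the marginal topic distribution P(T). The probabilities below are toy values, not output from the Dail model.

```python
import math

def distinctiveness(p_topic_given_word, p_topic):
    # KL( P(T|w) || P(T) ): how far a word's topic usage is from average
    return sum(pw * math.log(pw / pt)
               for pw, pt in zip(p_topic_given_word, p_topic) if pw > 0)

def saliency(p_word, p_topic_given_word, p_topic):
    # Frequency times distinctiveness
    return p_word * distinctiveness(p_topic_given_word, p_topic)

p_topic = [0.5, 0.5]                              # marginal topic distribution
spread = saliency(0.02, [0.5, 0.5], p_topic)      # word used evenly across topics
focused = saliency(0.02, [0.95, 0.05], p_topic)   # word concentrated in one topic
```

A word spread evenly over topics has zero distinctiveness, so even a frequent function word gets low saliency; a word concentrated in one topic scores higher at the same frequency.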
14. After "general discourse" has been filtered out (>50% of corpus), some topics dominate
Named topics include: Jobs and enterprise; Taxation and the Budget; Rent, property and local housing; Regional discussion; HSE; Medical Cards; Credit Union; Family and child services; Irish_discourse; Europe; Northern Ireland; Criminal Justice System; Garda / GSOC etc.; Flood damage; Geopolitics / foreign policy; Agriculture; Coillte; School and education; Social_housing; Mortgage crisis; Bank Debit; Water Charges; Funding for government projects; Magdelene Laundry; Shannon Airport No fly

Topic | % occurrence
Dail_order of business | 33.89%
Passing Legislation | 11.84%
Jobs and enterprise | 7.43%
Taxation and the Budget | 5.96%
Deputies being called out | 5.62%
Question time | 4.60%
Rent, property and local housing | 4.56%
Parties naming each other | 4.54%
Regional discussion | 3.31%
HSE | 2.73%
Non-topic | 1.97%
Medical Cards | 1.42%
Credit Union | 1.40%

Once we have named the topics, we can examine the body of text in more detail
15. While there are many topics, the incumbent party contributes the most frequently
Chart detailing frequency of contribution by party:
Fine Gael 30.8% | Fianna Fail 23.4% | Labour 15.3% | Independent 13.3% | Sinn Fein 11.6% | People Before Profit Alliance 3.3% | Socialist Party 2.0% | United Left 0.2%
Topics shown: General Discourse; Jobs and enterprise; Taxation and the Budget; Rent, property and local housing; Regional discussion; HSE; Family and child services; Medical Cards; Irish_discourse; Europe; Northern Ireland; Criminal Justice System; Garda / GSOC etc.; Coillte; Mortgage crisis; Bank Debit
16. Jobs, Taxation, budget and property remain consistently in focus; the top 3 dominate
Topics (chart legend): Bank Debit; Mortgage crisis; Coillte; Garda / GSOC etc.; Criminal Justice System; Northern Ireland; Europe; Irish_discourse; Family and child services; Medical Cards; HSE; Regional discussion; Rent, property and local housing; Taxation and the Budget; Jobs and enterprise
17. An exploration was made into using 'cloud based' resources to accelerate modelling time
Threading!
Applying a 30GB RAM R-studio environment: at peak load it was only 25% utilised. The computational load is in moving the data around, NOT in using RAM to calculate.
Multi-threading topic modelling solutions need to be explored (Gensim – models.ldamulticore – parallelized Latent Dirichlet Allocation)
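The pattern behind parallel LDA implementations such as gensim's models.ldamulticore is chunk-and-merge: workers score chunks of the corpus independently and their partial statistics are merged. A minimal stdlib sketch of that pattern, with a stand-in scoring function (word counting) in place of real per-document topic inference:

```python
from concurrent.futures import ThreadPoolExecutor
from collections import Counter

def score_chunk(chunk):
    # Stand-in for the per-document inference step: here we just count
    # words; a real worker would compute topic sufficient statistics.
    c = Counter()
    for doc in chunk:
        c.update(doc)
    return c

def parallel_count(docs, workers=4):
    # Split the corpus into roughly equal chunks, one stream per worker
    size = max(1, len(docs) // workers)
    chunks = [docs[i:i + size] for i in range(0, len(docs), size)]
    merged = Counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for partial in pool.map(score_chunk, chunks):
            merged.update(partial)
    return merged

docs = [["bank", "debt"], ["water", "charge"],
        ["bank", "arrears"], ["water", "meter"]]
counts = parallel_count(docs, workers=2)
```

This also illustrates the slide's observation: the cost is dominated by moving data to workers and merging results, not by raw RAM for the calculation.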
18. Comparison of LDA implementations
The 'number of topics' is the key parameter, but there are a few other parameters which are important:
• Learning rate (decay)
• 'Passes' of the Bayesian sampling function can also affect the model
• Priors matter – a function of document count and length, used to 'bootstrap' small bodies of text
• Gensim in Python currently has the most extensive set of parameters; however, topicmodels in R has some good visualisation examples
• 'Online' LDA implementations are crucial for 'social listening' on evolving political commentary
'Honourable mention' implementations:
• Vowpal Wabbit – machine learning
• Mallet – focus on text modelling
• Stanford – great resource
19. Context is key. Blind application of complex modelling will yield poor results. The final deliverable and audience must be defined before embarking on the analysis.

20. Thank you, and questions?
There are 7 key stages to transform unstructured discussion into an insightful narrative:
1. Collate data
2. Create 'bag-of-words'
3. Model parameters
4. Optimum topics
5. Name topics
6. Visualise
7. Garner insight
22. LDA in layman's terms
1. Suppose you’ve just moved to a new city. You want to know where
the other hipsters and data science geeks tend to hang out. Of
course, as a hipster, you know you can’t just ask, so what do you
do?
2. So you scope out a bunch of different establishments
(documents) across town, making note of the people (words)
hanging out in each of them (e.g., Alice hangs out at the mall and
at the park, Bob hangs out at the movie theater and the park, and
so on).
3. Crucially, you don’t know the typical interest groups (topics) of
each establishment, nor do you know the different interests of
each person.
4. So you pick some number K of interests to learn (i.e., you want to
learn the K most important kinds of interests people may have),
and start by making a guess as to why you see people where you
do.
5. Go through each place and person over and over again; your
guesses keep getting better and better.
6. For each interest (‘topic’), you can count the people assigned to
that interest to figure out what people have this particular interest.
By looking at the people themselves, you can interpret the
category as well (e.g., if category X contains lots of tall people
wearing jerseys and carrying around basketballs, you might
interpret X as the “basketball players” group).
23. Latent Dirichlet Allocation

LDA represents documents as mixtures of topics that 'spit out' words with certain probabilities. It posits a generative model for a collection of documents; LDA then tries to backtrack from the documents to find a set of topics that are likely to have generated the collection. Word probabilities are maximized by dividing the words among the topics (more terms means more mass to be spread around). In a mixture, this is enough to find clusters of co-occurring words.
24. Latent Dirichlet Allocation (2/2)

In probability and statistics, the Dirichlet distribution (after Peter Gustav Lejeune Dirichlet), often denoted Dir(α), is a family of continuous multivariate probability distributions parametrized by a vector α of positive reals. It is the multivariate generalization of the beta distribution.[1] Dirichlet distributions are very often used as prior distributions in Bayesian statistics; in fact, the Dirichlet distribution is the conjugate prior of the categorical distribution and multinomial distribution. That is, its probability density function returns the belief that the probabilities of K rival events are x_i, given that each event has been observed α_i − 1 times. The infinite-dimensional generalization of the Dirichlet distribution is the Dirichlet process.

The Dirichlet on the topic proportions can encourage sparsity, i.e., a document is penalized for using many topics. Loosely, this can be thought of as softening the strict definition of "co-occurrence" in a mixture model. This flexibility leads to sets of terms that more tightly co-occur.
25. Latent Dirichlet Allocation (1/2)

LDA represents documents as mixtures of topics that 'spit out' words with certain probabilities. It assumes that documents are produced in the following fashion: when writing each document, you
1. Decide on the number of words N the document will have (say, according to a Poisson distribution).
2. Choose a topic mixture for the document (according to a Dirichlet distribution over a fixed set of K topics). For example, assuming we have the two topics 'food' and 'cute animals', you might choose the document to consist of 1/3 food and 2/3 cute animals.
3. Generate each word w_i in the document by:
   - First picking a topic (according to the multinomial distribution sampled above; for example, you might pick the food topic with 1/3 probability and the cute animals topic with 2/3 probability).
   - Using the topic to generate the word itself (according to the topic's multinomial distribution). For example, if we selected the food topic, we might generate the word "broccoli" with 30% probability, "bananas" with 15% probability, and so on.

Assuming this generative model for a collection of documents, LDA then tries to backtrack from the documents to find a set of topics that are likely to have generated the collection. Word probabilities are maximized by dividing the words among the topics (more terms means more mass to be spread around). In a mixture, this is enough to find clusters of co-occurring words. In LDA, the Dirichlet on the topic proportions can encourage sparsity, i.e., a document is penalized for using many topics. Loosely, this can be thought of as softening the strict definition of "co-occurrence" in a mixture model. This flexibility leads to sets of terms that more tightly co-occur.
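The generative story above can be sketched directly in code. This is a toy illustration, not the deck's pipeline: the two topics and their word probabilities are the slide's 'food'/'cute animals' example, and since Python's standard library has neither a Poisson nor a Dirichlet sampler, the Poisson draw uses Knuth's method and the Dirichlet draw uses normalised Gamma variates.

```python
import random

def sample_poisson(lam, rng):
    # Knuth's method: multiply uniforms until the product drops below exp(-lam)
    limit, k, p = pow(2.718281828459045, -lam), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def sample_dirichlet(alphas, rng):
    # A Dirichlet draw is a vector of Gamma draws, normalised to sum to 1
    gammas = [rng.gammavariate(a, 1.0) for a in alphas]
    total = sum(gammas)
    return [g / total for g in gammas]

def generate_document(topics, alpha, mean_length, rng):
    n_words = max(1, sample_poisson(mean_length, rng))       # 1. length ~ Poisson
    mixture = sample_dirichlet([alpha] * len(topics), rng)   # 2. topic mix ~ Dirichlet
    names = list(topics)
    doc = []
    for _ in range(n_words):                                 # 3. for each word:
        topic = rng.choices(names, weights=mixture)[0]       #    pick a topic,
        words, probs = zip(*topics[topic].items())
        doc.append(rng.choices(words, weights=probs)[0])     #    then a word from it
    return doc

# Toy topics; the word probabilities are illustrative, as on the slide
topics = {
    "food": {"broccoli": 0.30, "bananas": 0.15, "spinach": 0.55},
    "cute_animals": {"kitten": 0.5, "puppy": 0.5},
}
rng = random.Random(42)
doc = generate_document(topics, alpha=1.0, mean_length=10, rng=rng)
```

Running the generator forward like this is exactly what LDA inverts: given only `doc`, it tries to recover the topic mixture and word distributions.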
27. Optimum number of topics (1/2): model selection by harmonic mean (Martin Ponweiser)

The drawback, however, is that the harmonic-mean estimator might have infinite variance. The underlying problem is that adding more parameters to the model will always increase the likelihood.
28. Optimum number of topics (2/2): Akaike information criterion (AIC)

AIC deals with the trade-off between the goodness of fit of the model and the complexity of the model:

AIC = 2k − 2 ln(L)

where k = number of parameters and L = maximized value of the likelihood function; lower AIC is better. (Akaike was a winner of the Kyoto Prize in 2006.)
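The trade-off can be sketched as follows. The candidate topic counts, parameter counts and log-likelihoods below are invented for illustration; in practice they would come from fitted models. Note how the likelihood keeps improving with more topics, but AIC stops rewarding that once the gain no longer covers the parameter penalty.

```python
import math

def aic(n_params, log_likelihood):
    # AIC = 2k - 2*ln(L); log_likelihood is already ln(L)
    return 2 * n_params - 2 * log_likelihood

# Hypothetical fits: (number of topics, free parameters, log-likelihood).
fits = [
    (10, 10_000, -61_500.0),
    (15, 15_000, -55_000.0),
    (20, 20_000, -52_000.0),
    (25, 25_000, -50_500.0),
]
scores = {k: aic(p, ll) for k, p, ll in fits}
best_k = min(scores, key=scores.get)  # → 15 for these illustrative numbers
```

With these numbers, moving from 10 to 15 topics improves the log-likelihood by more than the penalty for 5,000 extra parameters, but further increases do not, so 15 topics minimises AIC.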
29. Each response can have a likelihood of multiple topics; for brevity we need to select the most likely.

@gamma gives the probability of each topic for each response; by taking the max value, each TD's response can be tagged with the single 'most likely' topic.
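The tagging step is a simple per-row argmax over the gamma matrix. The topic names and probabilities below are made up for illustration; in the real pipeline the matrix would come from the fitted model's posterior.

```python
# gamma: one row per response, one probability per topic (rows sum to 1).
topic_names = ["Jobs and enterprise", "HSE", "Taxation and the Budget"]
gamma = [
    [0.10, 0.70, 0.20],
    [0.55, 0.15, 0.30],
    [0.25, 0.25, 0.50],
]

def tag_most_likely(gamma_rows, names):
    # Tag each response with its single most probable topic
    return [names[max(range(len(row)), key=row.__getitem__)] for row in gamma_rows]

tags = tag_most_likely(gamma, topic_names)
# tags: ["HSE", "Jobs and enterprise", "Taxation and the Budget"]
```

Collapsing to the argmax loses the mixture information, which is the 'for brevity' trade-off the slide acknowledges.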
30. Other methods of optimum topic selection

Perplexity: a measurement of how well a probability distribution or probability model predicts a sample; a weighted geometric average of the inverses of the probabilities.

Hierarchical Dirichlet Processes: a common base distribution is selected which represents the countably-infinite set of possible topics for the corpus, and then the finite distribution of topics for each document is sampled from this base distribution.
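The perplexity definition above can be written out directly: it is the exponential of the average negative log-probability the model assigns to held-out observations, i.e. the geometric mean of the inverse probabilities. A useful sanity check: a model that assigns uniform probability 1/k to everything has perplexity exactly k.

```python
import math

def perplexity(probabilities):
    # exp(-(1/N) * sum(log p_i)): the weighted geometric average
    # of the inverse probabilities over N held-out observations
    n = len(probabilities)
    return math.exp(-sum(math.log(p) for p in probabilities) / n)

# A model "as confused as an 8-sided die": perplexity 8
uniform = [1 / 8] * 100
# A model that is certain and correct every time: perplexity 1
certain = [1.0] * 5
```

When comparing candidate topic counts, the model with the lower held-out perplexity predicts unseen text better.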
32. Sampling methods

Gibbs sampling: commonly used as a means of statistical inference, especially Bayesian inference. It is a randomized algorithm (i.e. an algorithm that makes use of random numbers, and hence may produce different results each time it is run). With Gibbs sampling, we can approximate the posterior probability as tightly as we want.

How does it work? Technically, LDA Gibbs sampling works because we intentionally set up a Markov chain that converges to the posterior distribution of the model parameters, or word–topic assignments. In the equations for collapsed Gibbs sampling there is a factor for words and another for documents. Probabilities are higher for assignments that "don't break document boundaries": words appearing in the same document have slightly higher odds of ending up in the same topic. The same holds for document assignments, which to a degree follow "word boundaries". These effects mix and spread over clusters of documents and words.
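A minimal collapsed Gibbs sampler makes the two factors concrete. This is a toy sketch, not the deck's implementation: the symmetric priors `alpha` and `beta`, the corpus, and the iteration count are all illustrative. The document factor `(n_dk + alpha)` and the word factor `(n_kw + beta)/(n_k + V*beta)` in the resampling weights are exactly the "document boundary" and "word boundary" effects described above.

```python
import random
from collections import defaultdict

def collapsed_gibbs_lda(docs, n_topics, alpha=0.1, beta=0.01, iterations=100, seed=0):
    """Toy collapsed Gibbs sampler for LDA over tokenised documents."""
    rng = random.Random(seed)
    vocab_size = len({w for doc in docs for w in doc})
    doc_topic = [[0] * n_topics for _ in docs]                # n_dk
    topic_word = [defaultdict(int) for _ in range(n_topics)]  # n_kw
    topic_total = [0] * n_topics                              # n_k
    assignments = []
    for d, doc in enumerate(docs):                # random initial assignment
        zs = []
        for w in doc:
            k = rng.randrange(n_topics)
            zs.append(k)
            doc_topic[d][k] += 1
            topic_word[k][w] += 1
            topic_total[k] += 1
        assignments.append(zs)
    for _ in range(iterations):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = assignments[d][i]             # remove current assignment
                doc_topic[d][k] -= 1
                topic_word[k][w] -= 1
                topic_total[k] -= 1
                # document factor * word factor
                weights = [
                    (doc_topic[d][t] + alpha)
                    * (topic_word[t][w] + beta)
                    / (topic_total[t] + vocab_size * beta)
                    for t in range(n_topics)
                ]
                k = rng.choices(range(n_topics), weights=weights)[0]
                assignments[d][i] = k             # resample and restore counts
                doc_topic[d][k] += 1
                topic_word[k][w] += 1
                topic_total[k] += 1
    return assignments, doc_topic

docs = [
    ["tax", "budget", "tax", "spending"],
    ["budget", "tax", "spending", "tax"],
    ["hospital", "doctor", "hospital", "nurse"],
    ["doctor", "nurse", "hospital", "doctor"],
]
assignments, doc_topic = collapsed_gibbs_lda(docs, n_topics=2)
```

Because the algorithm is randomized, different seeds may label the clusters differently or produce different results, which is exactly the behaviour noted above.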
33. VEM (variational expectation–maximization)

EM is a partially non-Bayesian, maximum-likelihood method. Its final result gives a probability distribution over the latent variables (in the Bayesian style) together with a point estimate for θ (either a maximum-likelihood estimate or a posterior mode). We may want a fully Bayesian version, giving a probability distribution over θ as well as the latent variables. In fact, the Bayesian approach to inference is simply to treat θ as another latent variable. In this paradigm, the distinction between the E and M steps disappears. If we use a factorized Q approximation (variational Bayes), we may iterate over each latent variable (now including θ) and optimize them one at a time. There are now k steps per iteration, where k is the number of latent variables. For graphical models this is easy to do, as each variable's new Q depends only on its Markov blanket, so local message passing can be used for efficient inference.
34. Appendix 4: LDA vs. LSI
35. LDA vs. latent semantic indexing (LSI)

LDA:
1. Topics typically look more coherent and are easier to interpret.
2. Better at classifying static corpora.
3. Ability to shard and map-reduce.
4. Priors are critical.

LSI:
1. Requires relatively high computational performance and memory.
2. Topics are difficult to interpret.
3. Better for continuous feeds ('online') as models have larger scope to change.

LSI is based on the principle that words that are used in the same contexts tend to have similar meanings. A key feature of LSI is its ability to extract the conceptual content of a body of text by establishing associations between those terms that occur in similar contexts.
37. Kullback–Leibler divergence

In probability theory and information theory, the Kullback–Leibler divergence[1][2][3] (also information divergence, information gain, relative entropy, or KLIC; here abbreviated as KL divergence) is a non-symmetric measure of the difference between two probability distributions P and Q. Specifically, the Kullback–Leibler divergence of Q from P, denoted D_KL(P‖Q), is a measure of the information lost when Q is used to approximate P:[4]

D_KL(P‖Q) = Σ_i P(i) log( P(i) / Q(i) )

The KL divergence measures the expected number of extra bits required to code samples from P when using a code based on Q, rather than using a code based on P. Typically P represents the "true" distribution of data, observations, or a precisely calculated theoretical distribution. The measure Q typically represents a theory, model, description, or approximation of P.

Although it is often intuited as a metric or distance, the KL divergence is not a true metric; for example, it is not symmetric: the KL divergence from P to Q is generally not the same as that from Q to P. However, its infinitesimal form, specifically its Hessian, is a metric tensor: it is the Fisher information metric.

KL divergence is a special case of a broader class of divergences called f-divergences. It was originally introduced by Solomon Kullback and Richard Leibler in 1951 as the directed divergence between two distributions. It can be derived from a Bregman divergence.
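The definition and the asymmetry are easy to check numerically. The distributions below are arbitrary examples; using log base 2 gives the divergence in bits, matching the "extra bits" interpretation above.

```python
import math

def kl_divergence(p, q):
    # D_KL(P || Q) = sum_i P(i) * log2(P(i) / Q(i)), in bits
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.25, 0.25]      # "true" distribution P
q = [1 / 3, 1 / 3, 1 / 3]  # model Q approximating it

forward = kl_divergence(p, q)   # information lost using Q to code P
reverse = kl_divergence(q, p)   # generally a different number
```

Both directions are non-negative, zero only when the distributions coincide, and in general unequal, which is why KL divergence is not a true distance metric.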
38. Saliency
In the visualization, a bar chart shows the most "important" keywords and their corresponding frequencies given the current state. By default, the most important keywords are defined by a measure of saliency (Chuang, Heer, Manning 2012).

Saliency is a compromise between a word's overall frequency and its distinctiveness. A word's distinctiveness is a measure of that word's distribution over topics (relative to the marginal distribution over topics). A word is highly distinctive if its mass is concentrated on a small number of topics. Very obscure (or rare) words are usually highly distinctive, which is why the overall frequency of the word is incorporated in the weight.

For example, on the y-axis of the bar chart we see the words "gorbachev" and "last". "Last" is not very distinctive (it appears in many different topics), but it is still salient since it has such a high overall frequency. "Gorbachev", on the other hand, does not have a very high overall frequency, but is highly distinctive since it appears almost solely in topic 17. The change in the size of the circles reflects the difference in distinctiveness between "gorbachev" and "last".
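This compromise can be sketched numerically. Following Chuang, Heer and Manning's formulation, saliency is overall frequency times distinctiveness, where distinctiveness is the KL divergence between the word's topic distribution P(T|w) and the marginal topic distribution P(T). The two-topic statistics below are invented to mimic the "last" vs. "gorbachev" contrast; they are not from the actual model.

```python
import math

def saliency(p_word, p_topic_given_word, p_topic):
    # distinctiveness(w) = KL( P(T|w) || P(T) )
    distinctiveness = sum(
        ptw * math.log(ptw / pt)
        for ptw, pt in zip(p_topic_given_word, p_topic)
        if ptw > 0
    )
    return p_word * distinctiveness

p_topic = [0.5, 0.5]  # marginal distribution over 2 topics

# A frequent word spread almost evenly over topics: low distinctiveness
s_common = saliency(0.05, [0.55, 0.45], p_topic)
# A rare word concentrated in one topic: highly distinctive
s_rare = saliency(0.001, [0.99, 0.01], p_topic)
```

In this toy setup the rare, concentrated word ends up more salient than the frequent, evenly spread one; with a less extreme concentration the frequency term can dominate instead, which is the trade-off the paragraph above describes.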
39. Conditioning

Our definition of "important" words changes depending on whether the user indicates an interest in a particular cluster or topic. Here we define the relevance of a word given a topic as:

relevance(word | topic) = λ · log p(word | topic) + (1 − λ) · log( p(word | topic) / p(word) )

The relevance of a word given a cluster can be defined in a similar way:

relevance(word | cluster) = λ · log p(word | cluster) + (1 − λ) · log( p(word | cluster) / p(word) )

where p(word | cluster) = Σ_topic p(word | topic).

Once the top relevant words are chosen, they are ordered according to their frequency within the relevant topic/cluster. To keep these relevant words in place on the bar chart, the user must click the desired cluster region or topic circle. To return to the default plots, click the "clear selection" text above the scatterplot. In the left-hand screenshot below, cluster 2 is selected via click, then "soviet" is hovered upon to expose its conditional distribution over topics. Similarly, in the right-hand screenshot, topic 17 is selected via click, then "union" is hovered upon.
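The role of λ in the relevance formula is easiest to see at its extremes. The probabilities below are hypothetical values for a single word in a single topic: λ = 1 ranks purely by within-topic probability, while λ = 0 ranks purely by "lift" over the word's corpus-wide frequency.

```python
import math

def relevance(p_word_given_topic, p_word, lam):
    # lambda * log p(w|t) + (1 - lambda) * log( p(w|t) / p(w) )
    return lam * math.log(p_word_given_topic) + (1 - lam) * math.log(
        p_word_given_topic / p_word
    )

# Hypothetical values: the word takes 4% of the topic's mass
# but only 0.2% of the whole corpus
p_wt, p_w = 0.04, 0.002

r_freq = relevance(p_wt, p_w, lam=1.0)  # pure within-topic probability
r_lift = relevance(p_wt, p_w, lam=0.0)  # pure lift: log(0.04 / 0.002)
```

Intermediate λ values blend the two rankings; words that are both probable in the topic and rare elsewhere score well under any λ.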
For LDA the choice of prior is surprisingly important:
– Asymmetric prior for document-specific topic distributions
– Symmetric prior for topic-specific word distributions