This document provides an overview and agenda for a brown bag presentation on analytics services. The presentation includes an introduction of the analytics team, a discussion of why analytics are important for both business and practical reasons, and case studies on identifying smugglers and analyzing text data. The presentation emphasizes a philosophy of not being "data agnostic" and using modes of inquiry like induction and abduction rather than deduction.
Data Tactics Data Science Brown Bag (April 2014), by Rich Heimann
This is a presentation we deliver internally every quarter as part of our Data Science Brown Bag Series. It covers different types of soft clustering techniques, all of which the team currently applies depending on the complexity of the data and of customer problems. If you are interested in learning more about working with L-3 Data Tactics, or in joining the L-3 Data Tactics Data Science team, please contact us. Thank you.
- What are clustering, honeypots, and density-based clustering?
- What is OPTICS clustering, how is it different from density-based clustering, and how can it be used for outlier detection?
- What is so-called soft clustering, how is it different from hard clustering, and how can it be used for outlier detection?
Big Social Data: The Spatial Turn in Big Data (Video available soon on YouTube), by Rich Heimann
Big Social Data: The Spatial Turn in Big Data
By Richard Heimann & Abe Usher
University of Maryland Baltimore County Webinar Description:
Increased access to spatial data and the improved application of spatial analytical methods hold real potential for social scientific research. This webinar focuses on substantive social science research perspectives while showing the rewards of applying geographic information systems (GIS), Big Data, and spatial analytics in researchers' own work.
As the hype of Web 2.0 has worn off and collaborative use of the Internet has become a societal norm, we are witnessing an unprecedented explosion in the creation and analysis of geospatial data. Just as major governments are reducing their investments in location intelligence, individuals and non-governmental organizations are fueling a bonfire of innovation in the world of GIS data.
Traditional spatial analyses grew up in an era of sparse data and very weak computational power. Today, both of those circumstances are reversed, and many of the old solutions are no longer suitable to answer today's questions.
"Big Social Data: The Spatial Turn in Big Data" reflects this change and combines two things which, until recently, engaged quite different groups of researchers and practitioners. Together, they require particular techniques and a sophisticated understanding of the special problems associated with spatial social data. Geographic Data Mining, or Geographic Knowledge Discovery, is not new, but is developing and changing rapidly as both more, and different, data becomes available, and people see new applications. The days of ‘Big Data’ require fresh thinking.
The webinar will highlight connections between spatial concepts and data availability. Newly emerging social media data will be favored over traditional social science data, as they better reflect some of the more recent developments in Big Data - most notably the socially critical exploration of such data.
Neural Information Retrieval: In search of meaningful progress, by Bhaskar Mitra
The emergence of deep learning based methods for search poses several challenges and opportunities not just for modeling, but also for benchmarking and measuring progress in the field. Some of these challenges are new, while others have evolved from existing challenges in IR benchmarking, exacerbated by the scale at which deep learning models operate. Evaluation efforts such as the TREC Deep Learning track and the MS MARCO public leaderboard are intended to encourage research and track our progress, addressing big questions in our field. The goal is not simply to identify which run is "best" but to move the field forward by developing new robust techniques that work in many different settings and are adopted in research and practice. This entails a wider conversation in the IR community about what constitutes meaningful progress, how benchmark design can encourage or discourage certain outcomes, and about the validity of our findings. In this talk, I will present a brief overview of what we have learned from our work on MS MARCO and the TREC Deep Learning track, and reflect on the state of the field and the road ahead.
Benchmarking for Neural Information Retrieval: MS MARCO, TREC, and Beyond, by Bhaskar Mitra
The emergence of deep learning-based methods for information retrieval (IR) poses several challenges and opportunities for benchmarking. Some of these are new, while others have evolved from existing challenges in IR exacerbated by the scale at which deep learning models operate. In this talk, I will present a brief overview of what we have learned from our work on MS MARCO and the TREC Deep Learning track, and reflect on the road ahead.
Conformer-Kernel with Query Term Independence @ TREC 2020 Deep Learning Track, by Bhaskar Mitra
We benchmark Conformer-Kernel models under the strict blind evaluation setting of the TREC 2020 Deep Learning track. In particular, we study the impact of incorporating: (i) explicit term matching to complement matching based on learned representations (i.e., the “Duet principle”), (ii) query term independence (i.e., the “QTI assumption”) to scale the model to the full retrieval setting, and (iii) the ORCAS click data as an additional document description field. We find evidence that all three strategies can lead to improved retrieval quality.
The ultimate goal of a recommender system is to suggest interesting and not obvious items (e.g., products to buy, people to connect with, movies to watch, etc.) to users, based on their preferences.
The advent of the Linked Open Data (LOD) initiative in the Semantic Web gave birth to a variety of open knowledge bases freely accessible on the Web. They provide a valuable source of information that can improve conventional recommender systems, if properly exploited.
Here I present several approaches to recommender systems that leverage Linked Data knowledge bases such as DBpedia. In particular, content-based and hybrid recommendation algorithms will be discussed.
For full details about the presented approaches please refer to the full papers mentioned in this presentation.
Adversarial and reinforcement learning-based approaches to information retrieval, by Bhaskar Mitra
Traditionally, machine learning based approaches to information retrieval have taken the form of supervised learning-to-rank models. Recently, other machine learning approaches—such as adversarial learning and reinforcement learning—have started to find interesting applications in retrieval systems. At Bing, we have been exploring some of these methods in the context of web search. In this talk, I will share a couple of our recent works in this area that we presented at SIGIR 2018.
Neural Models for Information Retrieval, by Bhaskar Mitra
In the last few years, neural representation learning approaches have achieved very good performance on many natural language processing (NLP) tasks, such as language modelling and machine translation. This suggests that neural models will also yield significant performance improvements on information retrieval (IR) tasks, such as relevance ranking, addressing the query-document vocabulary mismatch problem by using semantic rather than lexical matching. IR tasks, however, are fundamentally different from NLP tasks, leading to new challenges and opportunities for existing neural representation learning approaches for text.
We begin this talk with a discussion of text embedding spaces for modelling different types of relationships between items, which makes them suitable for different IR tasks. Next, we present how topic-specific representations can be more effective than global embeddings. Finally, we conclude with an emphasis on dealing with rare terms and concepts in IR, and how embedding based approaches can be augmented with neural models for lexical matching for better retrieval performance. While our discussions are grounded in IR tasks, the findings and insights covered in this talk should be generally applicable to other NLP and machine learning tasks.
Basic introduction to recommender systems + Implementing a content-based recommender system by leveraging knowledge encoded into Linked Open Data datasets
The World Wide Web is moving from a Web of hyper-linked documents to a Web of linked data. Thanks to the Semantic Web technological stack and to the more recent Linked Open Data (LOD) initiative, a vast amount of RDF data has been published in freely accessible datasets connected with each other to form the so-called LOD cloud. As of today, we have tons of RDF data available in the Web of Data, but only a few applications really exploit its potential power. The availability of such data is certainly an opportunity to feed personalized information access tools such as recommender systems. We will show how to plug Linked Open Data into a recommendation engine in order to build a new generation of LOD-enabled applications.
(Lecture given @ the 11th Reasoning Web Summer School - Berlin - August 1, 2015)
5 Lessons Learned from Designing Neural Models for Information Retrieval, by Bhaskar Mitra
Slides from my keynote talk at the Recherche d'Information SEmantique (RISE) workshop at the CORIA-TALN 2018 conference in Rennes, France.
(Abstract)
Neural Information Retrieval (or neural IR) is the application of shallow or deep neural networks to IR tasks. Unlike classical IR models, these machine learning (ML) based approaches are data-hungry, requiring large scale training data before they can be deployed. Traditional learning to rank models employ supervised ML techniques—including neural networks—over hand-crafted IR features. By contrast, more recently proposed neural models learn representations of language from raw text that can bridge the gap between the query and the document vocabulary.
Neural IR is an emerging field, and research publications in the area have been increasing in recent years. While the community explores new architectures and training regimes, a new set of challenges, opportunities, and design principles is emerging in the context of these new IR models. In this talk, I will share five lessons learned from my personal research in the area of neural IR. I will present a framework for discussing different unsupervised approaches to learning latent representations of text. I will cover several challenges to learning effective text representations for IR and discuss how latent space models should be combined with observed feature spaces for better retrieval performance. Finally, I will conclude with a few case studies that demonstrate the application of neural approaches to IR that go beyond text matching.
A fundamental goal of search engines is to identify, given a query, documents that have relevant text. This is intrinsically difficult because the query and the document may use different vocabulary, or the document may contain query words without being relevant. We investigate neural word embeddings as a source of evidence in document ranking. We train a word2vec embedding model on a large unlabelled query corpus, but in contrast to how the model is commonly used, we retain both the input and the output projections, allowing us to leverage both the embedding spaces to derive richer distributional relationships. During ranking we map the query words into the input space and the document words into the output space, and compute a query-document relevance score by aggregating the cosine similarities across all the query-document word pairs.
We postulate that the proposed Dual Embedding Space Model (DESM) captures evidence on whether a document is about a query term in addition to what is modelled by traditional term-frequency based approaches. Our experiments show that the DESM can re-rank top documents returned by a commercial Web search engine, like Bing, better than a term-matching based signal like TF-IDF. However, when ranking a larger set of candidate documents, we find the embeddings-based approach is prone to false positives, retrieving documents that are only loosely related to the query. We demonstrate that this problem can be solved effectively by ranking based on a linear mixture of the DESM and the word counting features.
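For illustration, here is a minimal Python sketch of the DESM score as summarized above; the embedding lookups and the normalize-then-average centroid are assumptions based on this description, and all names are illustrative.

```python
import numpy as np

def desm_score(query_terms, doc_terms, emb_in, emb_out):
    """Dual Embedding Space Model: average cosine similarity between
    each query term (IN space) and the centroid of the document's
    term vectors (OUT space). emb_in/emb_out map term -> np.ndarray."""
    # Normalize each document term vector, then average into a centroid.
    doc_vecs = np.stack([emb_out[t] / np.linalg.norm(emb_out[t])
                         for t in doc_terms])
    centroid = doc_vecs.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    # Average cosine similarity across the query terms.
    sims = [(emb_in[t] / np.linalg.norm(emb_in[t])) @ centroid
            for t in query_terms]
    return float(np.mean(sims))
```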
Summary: Graphs are structures commonly used in computer science that model the interactions among entities. I will start by introducing the basic formulations of graph-based machine learning, which has been a popular topic of research in the past decade and has led to a powerful set of techniques. In particular, I will show examples of how it acts as a generic data mining and predictive analytics tool. In the second part, I am going to discuss applications of such learning techniques in media analytics: (1) image analysis, where visually coherent objects are isolated from images; (2) social analysis of videos, where actors' social properties are predicted from videos. Materials in this part are based on our recent publications in highly selective venues (papers on https://sites.google.com/site/leiding2010/ ).
Bio: Lei Ding is a researcher making sense of large amounts of data in all media types. He currently works at Intent Media as a scientist, focusing on data analytics and applied machine learning in online advertising. Previously, he worked at several research institutions, including Columbia University, UIUC, and IBM Research, on digital and social media analysis and understanding. He received a Ph.D. degree in Computer Science and Engineering from The Ohio State University, where he was a Distinguished University Fellow.
Models such as latent semantic analysis and those based on neural embeddings learn distributed representations of text, and match the query against the document in the latent semantic space. In traditional information retrieval models, on the other hand, terms have discrete or local representations, and the relevance of a document is determined by the exact matches of query terms in the body text. We hypothesize that matching with distributed representations complements matching with traditional local representations, and that a combination of the two is favourable. We propose a novel document ranking model composed of two separate deep neural networks, one that matches the query and the document using a local representation, and another that matches the query and the document using learned distributed representations. The two networks are jointly trained as part of a single neural network. We show that this combination or ‘duet’ performs significantly better than either neural network individually on a Web page ranking task, and significantly outperforms traditional baselines and other recently proposed models based on neural networks.
I argue that Computer Science (or better: Informatics) is a "natural science" in the same sense that physics, astronomy, biology, psychology, and sociology are natural sciences: they study a part of the world around us. In that same sense, Informatics too studies a part of the world around us.
For a similar talk (including script), but more aimed at a Semantic Web audience in particular, see http://www.cs.vu.nl/~frankh/spool/ISWC2011Keynote/
(or http://videolectures.net/iswc2011_van_harmelen_universal/ for a video registration)
I will try to explain what QA is, how we can get answers to questions posed in natural language, and how successful the field has been.
I have gained all of my knowledge from the three papers discussed and what I read around them.
From the webinar presentation "Data Science: Not Just for Big Data", hosted by Kalido and presented by:
David Smith, Data Scientist at Revolution Analytics, and
Gregory Piatetsky, Editor, KDnuggets
These are the slides for David Smith's portion of the presentation.
Watch the full webinar at:
http://www.kalido.com/data-science.htm
Myths and Mathemagical Superpowers of Data Scientists, by David Pittman
Some people think data scientists are mythical beings, like unicorns, or they are some sort of nouveau fad that will quickly fade. Not true, says IBM big data evangelist James Kobielus. In this engaging presentation, with artwork created by Angela Tuminello, Kobielus debunks 10 myths about data scientists and their role in analytics and big data. You might also want to read the full blog by Kobielus that spawned this presentation: "Data Scientists: Myths and Mathemagical Superpowers" - http://ibm.co/PqF7Jn
For more information, visit http://www.ibmbigdatahub.com
How To Interview a Data Scientist
Daniel Tunkelang
Presented at the O'Reilly Strata 2013 Conference
Video: https://www.youtube.com/watch?v=gUTuESHKbXI
Interviewing data scientists is hard. The tech press sporadically publishes “best” interview questions that are cringe-worthy.
At LinkedIn, we put a heavy emphasis on the ability to think through the problems we work on. For example, if someone claims expertise in machine learning, we ask them to apply it to one of our recommendation problems. And, when we test coding and algorithmic problem solving, we do it with real problems that we’ve faced in the course of our day jobs. In general, we try as hard as possible to make the interview process representative of actual work.
In this session, I’ll offer general principles and concrete examples of how to interview data scientists. I’ll also touch on the challenges of sourcing and closing top candidates.
Synonyms for Learning Lunches include, but are not limited to, Lunch and Learn and brown-bag seminar. These training sessions (formal or informal) occur during a lunch period, during which participants receive information, collaborate, or are trained at a specific time and location.
The purpose is to utilize normal breaks, such as the lunch break, to provide information to attendees in a voluntary and informal setting. The session is often followed by a discussion of the topic. These sessions are common at universities and private companies as a medium for knowledge management and internal communications.
In universities, especially for graduate students, brown-bag seminars are often offered to update the research community about ongoing work. Usually held by schools, universities, and governmental institutions, they involve lectures, presentations, or talks by researchers, mostly professors, about their ongoing research. Professors may visit from other universities to talk about their research.
Your ideas are truly beautiful only inside your head; every time you try to share an idea, the other person doesn't get it.
I want to teach you how to:
- generate many creative ideas
- share your ideas with others
- verify if they are valid
- get feedback on them properly
- present them
- create a prototype of your application in a minute
If you are interested in the topics covered, further reading may include:
"Sketching User Experiences" by Bill Buxton
"Design is a Job" by Mike Monteiro
Data Science, Machine Learning and Neural Networks, by BICA Labs
A lecture briefly surveying the state of the art in Data Science, Machine Learning, and Neural Networks. It covers the main Artificial Intelligence technologies, Data Science algorithms, neural network architectures, and the cloud computing facilities enabling the whole stack.
Intro to Data Science for Enterprise Big Data, by Paco Nathan
If you need a different format (PDF, PPT) instead of Keynote, please email me: pnathan AT concurrentinc DOT com
An overview of Data Science for Enterprise Big Data. In other words, how to combine structured and unstructured data, leveraging the tools of automation and mathematics, for highly scalable businesses. We discuss management strategy for building Data Science teams, basic requirements of the "science" in Data Science, and typical data access patterns for working with Big Data. We review some great algorithms, tools, and truisms for building a Data Science practice, plus some great references for further study.
Presented initially at the Enterprise Big Data meetup at Tata Consultancy Services, Santa Clara, 2012-08-20 http://www.meetup.com/Enterprise-Big-Data/events/77635202/
How to Become a Data Scientist
SF Data Science Meetup, June 30, 2014
Video of this talk is available here: https://www.youtube.com/watch?v=c52IOlnPw08
More information at: http://www.zipfianacademy.com
Zipfian Academy @ Crowdflower
Tutorial on Deep learning and Applications, by NhatHai Phan
In this presentation, I review basic techniques, models, and applications in deep learning. I hope you find the slides interesting. Further information about my research can be found at "https://sites.google.com/site/ihaiphan/."
NhatHai Phan
CIS Department,
University of Oregon, Eugene, OR
This talk is about how we applied deep learning techniques to achieve state-of-the-art results in various NLP tasks, like sentiment analysis and aspect identification, and how we deployed these models at Flipkart.
Data By The People, For The People
Daniel Tunkelang
Director, Data Science at LinkedIn
Invited Talk at the 21st ACM International Conference on Information and Knowledge Management (CIKM 2012)
LinkedIn has a unique data collection: the 175M+ members who use LinkedIn are also the content those same members access using our information retrieval products. LinkedIn members performed over 4 billion professionally-oriented searches in 2011, most of those to find and discover other people. Every LinkedIn search and recommendation is deeply personalized, reflecting the user's current employment, career history, and professional network. In this talk, I will describe some of the challenges and opportunities that arise from working with this unique corpus. I will discuss work we are doing in the areas of relevance, recommendation, and reputation, as well as the ecosystem we have developed to incent people to provide the high-quality semi-structured profiles that make LinkedIn so useful.
Bio:
Daniel Tunkelang leads the data science team at LinkedIn, which analyzes terabytes of data to produce products and insights that serve LinkedIn's members. Prior to LinkedIn, Daniel led a local search quality team at Google. Daniel was a founding employee of faceted search pioneer Endeca (recently acquired by Oracle), where he spent ten years as Chief Scientist. He has authored fourteen patents, written a textbook on faceted search, created the annual workshop on human-computer interaction and information retrieval (HCIR), and participated in the premier research conferences on information retrieval, knowledge management, databases, and data mining (SIGIR, CIKM, SIGMOD, SIAM Data Mining). Daniel holds a PhD in Computer Science from CMU, as well as BS and MS degrees from MIT.
Presentation given by Dr. Diego Kuonen, CStat PStat CSci, on November 20, 2013, at the "IBM Developer Days 2013" in Zurich, Switzerland.
ABSTRACT
There is no question that big data has hit the business, government and scientific sectors. The demand for skills in data science is unprecedented in sectors where value, competitiveness and efficiency are driven by data. However, there is plenty of misleading hype around the terms big data and data science. This presentation gives a professional statistician's view on these terms and illustrates the connection between data science and statistics.
The presentation is also available at http://www.statoo.com/BigDataDataScience/.
R, Data Wrangling & Kaggle Data Science Competitions, by Krishna Sankar
Presentation for my tutorial at Big Data Tech Con http://goo.gl/ZRoFHi
This is the R version of my PyCon tutorial plus a few updates.
It is a work in progress; I will update it with daily snapshots until it is done.
Gives a background on Data Science and Artificial Intelligence to better understand the current state of the art (SOTA) for Large Language Models (LLMs) and Generative AI, then starts a discussion on the direction things are going in the future.
Collective Mind infrastructure and repository to crowdsource auto-tuning (c-m..., by Grigori Fursin
Open access vision publication for this presentation: http://arxiv.org/abs/1308.2410
Designing, analyzing, and optimizing applications for rapidly evolving computer systems is often a tedious, ad-hoc, costly, and error-prone process, due to an enormous number of available design and optimization choices combined with complex interactions between all components. Auto-tuning, run-time adaptation, and machine learning based techniques have been investigated for more than a decade to address some of these challenges, but are still far from widespread production use. This is due not only to large optimization spaces, but also to the lack of a common methodology to discover, preserve, and share knowledge about the behavior of existing computer systems with ever-changing interfaces of analysis and optimization tools.
In this talk I present a new version of the modular, open source Collective Mind Framework and Repository (cTuning.org, c-mind.org/repo) for collaborative and statistical analysis and optimization of program and architecture behavior. Motivated by physics, biology, and AI, this framework helps researchers gradually expose tuning choices, properties, and characteristics at multiple granularity levels in existing systems through multiple plugins. These plugins can be easily combined like LEGO bricks to build customized collaborative or private in-house repositories of shared data (applications, data sets, codelets, micro-benchmarks, and architecture descriptions), modules (classification, predictive modeling, run-time adaptation), and statistics from multiple program executions. Collected data is continuously analyzed and extrapolated using online learning to predict better optimizations or hardware configurations to effectively balance performance, power consumption, and other characteristics.
This approach was initially validated in the MILEPOST project to remove the training phase of a machine learning based self-tuning compiler, and later extended in the Intel Exascale Lab to connect various tuning tools with an in-house customized repository. During the talk, I demonstrate auto-tuning using the new version of this framework on off-the-shelf mobile phones, while describing encountered challenges and possible solutions.
Presented to "Managing the Material: Tackling Visual Arts as Research Data" workshop, organised by Visual Arts Data Service (VADS) in conjunction with the Digital Curation Centre (DCC), through the JISC-funded KAPTUR project. London, 14 September 2012
This talk was given at Velocity '13 in Santa Clara by Abe Stanway and Jon Cowie. It covers how Etsy makes sense of the 250k metrics they gather, using their new Kale stack.
Big Data Analytics: Discovering Latent Structure in Twitter; A Case Study in ..., by Rich Heimann
Big Social Data Analysis: Using location & Twitter to explore the tragic aftermath of the Sandy Hook Elementary School shooting.
The growth of social media over the last decade has revolutionized the way individuals interact and industries conduct business. Individuals produce data at an unprecedented rate by interacting, sharing, and consuming content through social media. However, analyzing this ever-growing pile of data is quite tricky and, if done erroneously, could lead to wrong inferences.
In this webinar you will gain, by example, insights into mining social media data and exposing underlying latent structures relating to ideology and sentiment, as well as space and time.
Human Terrain Analysis at George Mason University (DAY 1), by Rich Heimann
First lecture in a three day class on Human Terrain Analysis. The lecture is a state of the discipline talk with historical and contemporary examples of HTA.
Ethnobotany and Ethnopharmacology:
Ethnobotany in herbal drug evaluation,
Impact of Ethnobotany in traditional medicine,
New development in herbals,
Bio-prospecting tools for drug discovery,
Role of Ethnopharmacology in drug evaluation,
Reverse Pharmacology.
The Art Pastor's Guide to Sabbath, by Steve Thomason
What is the purpose of the Sabbath law in the Torah? It is interesting to compare how the context of the law shifts from Exodus to Deuteronomy. Who gets to rest, and why?
The French Revolution, which began in 1789, was a period of radical social and political upheaval in France. It marked the decline of absolute monarchies, the rise of secular and democratic republics, and the eventual rise of Napoleon Bonaparte. This revolutionary period is crucial in understanding the transition from feudalism to modernity in Europe.
For more information, visit www.vavaclasses.com
Palestine last event orientation.pptx, by RaedMohamed3
An EFL lesson about the current events in Palestine. It is intended for intermediate students who wish to improve their listening skills through a short PowerPoint lesson.
How to Make a Field invisible in Odoo 17, by Celine George
It is possible to hide certain fields in Odoo, commonly by using the “invisible” attribute in the field definition. This slide shows how to make a field invisible in Odoo 17.
Unit 8 - Information and Communication Technology (Paper I).pdf, by Thiyagu K
These slides describe the basic concepts of ICT, the basics of email, emerging technologies, and digital initiatives in education. This presentation aligns with the UGC Paper I syllabus.
The Roman Empire: A Historical Colossus.pdf, by kaushalkr1407
The Roman Empire, a vast and enduring power, stands as one of history's most remarkable civilizations, leaving an indelible imprint on the world. It emerged from the Roman Republic, transitioning into an imperial powerhouse under the leadership of Augustus Caesar in 27 BCE. This transformation marked the beginning of an era defined by unprecedented territorial expansion, architectural marvels, and profound cultural influence.
The empire's roots lie in the city of Rome, founded, according to legend, by Romulus in 753 BCE. Over centuries, Rome evolved from a small settlement to a formidable republic, characterized by a complex political system with elected officials and checks on power. However, internal strife, class conflicts, and military ambitions paved the way for the end of the Republic. Julius Caesar’s dictatorship and subsequent assassination in 44 BCE created a power vacuum, leading to a civil war. Octavian, later Augustus, emerged victorious, heralding the Roman Empire’s birth.
Under Augustus, the empire experienced the Pax Romana, a 200-year period of relative peace and stability. Augustus reformed the military, established efficient administrative systems, and initiated grand construction projects. The empire's borders expanded, encompassing territories from Britain to Egypt and from Spain to the Euphrates. Roman legions, renowned for their discipline and engineering prowess, secured and maintained these vast territories, building roads, fortifications, and cities that facilitated control and integration.
The Roman Empire’s society was hierarchical, with a rigid class system. At the top were the patricians, wealthy elites who held significant political power. Below them were the plebeians, free citizens with limited political influence, and the vast numbers of slaves who formed the backbone of the economy. The family unit was central, governed by the paterfamilias, the male head who held absolute authority.
Culturally, the Romans were eclectic, absorbing and adapting elements from the civilizations they encountered, particularly the Greeks. Roman art, literature, and philosophy reflected this synthesis, creating a rich cultural tapestry. Latin, the Roman language, became the lingua franca of the Western world, influencing numerous modern languages.
Roman architecture and engineering achievements were monumental. They perfected the arch, vault, and dome, constructing enduring structures like the Colosseum, Pantheon, and aqueducts. These engineering marvels not only showcased Roman ingenuity but also served practical purposes, from public entertainment to water supply.
This is a presentation by Dada Robert in a Your Skill Boost masterclass organised by the Excellence Foundation for South Sudan (EFSS) on Saturday, the 25th and Sunday, the 26th of May 2024.
He discussed the concept of quality improvement, emphasizing its applicability to various aspects of life, including personal, project, and program improvements. He defined quality as doing the right thing at the right time in the right way to achieve the best possible results and discussed the concept of the "gap" between what we know and what we do, and how this gap represents the areas we need to improve. He explained the scientific approach to quality improvement, which involves systematic performance analysis, testing and learning, and implementing change ideas. He also highlighted the importance of client focus and a team approach to quality improvement.
1. DT Brown Bag: A Primer in Analytics
WELCOME!
R² = 500; p < Marty’s 1-mile time
asymptotically approaching perfect
2. Outline
•EAT, Guten Appetit, Bon appetit, Buen apetito, Buon appetito!
•Words from the VP
•Why this brown-bag?
•Analytics Services:
•Team Introduction; About YOU!
•Why Analytics!?
•Philosophy...
•Case Studies:
•Case Study (Nathan D.)
•Localview (Marty A.)
•Case Study (me)
•Core Values: Analytical Insights
•On the horizon...
3. Why this brown bag??
Learning [close] at a pace similar to the pace at which we learn.
Learning and Educating from/to PMs, SWE, and OPs.
PM: Provide insights from RFIs/RFPs.
PM: Atmospherics from our customers.
SWE: Accessing data spaces.
SWE: Integrating algorithms.
OP: How do you best consume the outputs of models?
OP: What models are best to present to OPs?
PM: Program Managers, SWE: Software Engineers, OP: Operators
5. Data Tactics Analytics Practice
The Team:
(Nathan D., Shrayes R., David P., Adam VE., Andrew T., Geoffrey B., Rich H.)
Graduates from top universities...
Degrees include: mathematics, computer science, aeronautical engineering, astrophysics, electrical engineering, mechanical engineering, statistics, social science(s).
Base competencies (horizontals): Clustering, Association Rules, Regression, Naive Bayesian Classifier, Decision Trees, Time-Series, Text Analysis.
Going beyond the base (verticals)...
6. Data Tactics Analytics Practice
ABOUT YOU:
28 confirmed, 18 WebEx, 14 tentative (n = 60, representing > 25% of the company)
21 confirmed within the first 60 minutes....
Monsee Wood & Steve Moccio 1st
Charles Fuller & Lenesto Page Last
Chris Zilligen: 3,120 (Longest resume)
Catherine Schymanski: 284 (shortest resume)
Linguistic standard (FK = Flesch-Kincaid):
Jack Gustafson (FK: -126)
Shrayes Ramesh (FK: -38)
...analytics team below the company average!! :)
7. Horizontals & Verticals
Clustering || Regression || Decision Trees || Text Analysis
Association Rules || Naive Bayesian Classifier || Time Series Analysis
Verticals (word cloud): econometrics, spatial econometrics, graph theory, algorithms, astrophysical time-series analysis, path planning algorithms, Bayesian statistics, constrained optimizations, numerical integration techniques, PCA, GLM, hierarchical models, IRT, DLISA, latent class analysis, structural equation modeling, mixture models, SVM, maxent, CART, naive Bayes classifier, ICA
8. Data Tactics Analytics Practice
(Venn diagram, after Drew Conway's data science Venn diagram [2]:)
Circles: Programming & Scripting Skills; Mathematics & Statistics; Domain Expertise.
Center: DT Analytics.
Overlaps: Mathematics & Statistics + Domain Expertise = Traditional Research; Programming & Scripting + Mathematics & Statistics = ML; Programming & Scripting + Domain Expertise = Danger Zone! (~statisticulation [1])
[2] http://drewconway.com/zia/2013/3/26/the-data-science-venn-diagram
[1] Statisticulation “How to Lie with Statistics” Darrell Huff
[3] https://portal.data-tactics-corp.com/sites/analytics/Wiki/AnalyticsFAQ.aspx
9. Why Analytics [Business]???
Why are analytics important?
(Business, Analytics, Practical)
"We need to stop reinventing the cloud
and start using it!"
(Dave Boyd)
Thursday, August 22, 13
10. Why are analytics important?
(Business, Analytics, Practical)
Analytics:
No Free Lunch (NFL) theorems: no algorithm performs better than any other when performance is averaged uniformly over all possible problems of a particular type. Algorithms must be designed for a particular domain or style of problem; there is no such thing as a general-purpose algorithm.
Why Analytics [Analytics]???
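For reference (this equation is not on the slide), the Wolpert and Macready (1997) formulation: for any two algorithms a_1 and a_2 and any sample size m, summing performance over all objective functions f gives the same value:

```latex
% No Free Lunch (Wolpert & Macready, 1997), paraphrased.
% d_m^y is the sequence of m cost values the algorithm has sampled.
\sum_{f} P\left(d_m^{y} \mid f, m, a_1\right)
  = \sum_{f} P\left(d_m^{y} \mid f, m, a_2\right)
```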
11. Marty doesn’t scale - none of us do.
Data Scales
Web Scales
Academic Publications Scale
IC Scales
(chart: N growing over t)
Why Analytics [Practical]???
12. Why Analytics [Practical]???
Why are analytics important?
(Business, Analytics, Practical)
“…the alternative to good statistics is not “no statistics,” it’s bad statistics. People who argue against statistical reasoning often end up backing up their arguments with whatever numbers they have at their command, over- or under-adjusting in their eagerness to avoid anything systematic.” (Bill James)
13. "companies that have massive amounts of data
without massive amounts of clue are going to be
displaced by startups that have less data but more
clue" (Tim O’Reilly)
Philosophy:
14. Philosophy:
We are NOT “Data Agnostic”
...this should represent an early warning system about our culture. The IT notion of data is dead.
16. “Analytics in Perspective” reflects how people arrive at decisions.
GOOD: Induction, Abduction, Circumscription, Counterfactuals.
BAD: Deduction, Speculation, Justification, Groupthink.
Analytics in Perspective...
18. Background: The Strait of Hormuz
Importance:
• Oil
• Embargo
• Smuggling
19. How to Catch Smugglers
In order to stop smugglers, we must identify:
1. Which boats are undertaking illicit activities
2. Where illicit activities are taking place
3. Points of departure/arrival of suspicious ships
20. A Difficult Task: Too Much Data
AIS (transponder) provides ship-level data:
• Ship location (lat-long)
• Ship speed
• Ship bearing
• Ship “purpose”
• Time stamp
About 0.5M pings from 1,300 boats between March 2012 and January 2013.
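For concreteness, a minimal sketch of one such ping as a record, using the fields listed above; the names, types, and values are illustrative, not the project's actual schema.

```python
from dataclasses import dataclass

@dataclass
class AisPing:
    """One AIS transponder report (illustrative schema only)."""
    ship_id: int
    lat: float        # ship location: latitude, decimal degrees
    lon: float        # ship location: longitude, decimal degrees
    speed: float      # ship speed
    bearing: float    # ship bearing
    purpose: str      # self-reported ship "purpose"
    timestamp: int    # time stamp, e.g. 1203221230 as in the output table

# One of roughly 0.5M such pings from ~1,300 boats:
ping = AisPing(623432, 24.546, 55.005, 9.8, 180.0, "cargo", 1203221230)
```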
22. A Difficult Task: Too Little Data
Individual pings or tracks are not useful: there is no point of comparison.
Similarly, small-duration plots are too thin to provide analytic leverage.
23. A Difficult Task: Too Little Data
A single boat: (figure)
24. A Difficult Task: Too Little Data
A single day: (figure)
26. Solution: Analytics
Use a statistical model to discover patterns in the data…
…then identify observations (boat-times) that do not fit those patterns.
Goal: Identify boats, places, and times that exhibit or house discrepant behavior.
27. Characteristics of a Good Model
A good model for this data should:
• Leverage all of the available data
• Take advantage of local information (not global patterns)
• Be able to accommodate a variety of patterns (shipping, fishing, etc.)
• Be able to identify ships that are only occasionally deviant
• Identify place-times where deviant activity occurs
• Be estimable with reasonable computational resources
28. The Model
A local, unsupervised-as-supervised learning,
bagged, probability model.
A LUBaP model?
29. The Model
A local, unsupervised-as-supervised learning,
bagged, probability model.
We want to compare apples to apples; that is,
compare boats that are nearby in space and time,
not boats that are far-flung.
Assign each observation to a geographically
constrained grid square.
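A minimal sketch of that assignment, assuming square cells of a fixed size in degrees (the deck does not state the actual cell size or origin):

def grid_key(lat, lon, cell=0.1, lat0=24.0, lon0=54.0):
    # Integer (row, col) key of the grid square containing this position;
    # the cell size and origin are assumptions for illustration.
    return (int((lat - lat0) // cell), int((lon - lon0) // cell))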
30. The Model
A local, unsupervised-as-supervised learning,
bagged, probability model.
31. The Model
A local, unsupervised-as-supervised learning,
bagged, probability model.
Let m denote the number of observations in a particular grid
square. Then, in each square, add m additional observations
with the following characteristics:
•position, drawn from a bivariate uniform distribution over the square
•speed, drawn with replacement from the empirical speed distribution
•time of observation, drawn from a uniform distribution
Now the task is no longer unsupervised, but supervised:
→ model the probability that each observation is a “real” boat.
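A sketch of the augmentation step, assuming the pings for one grid square arrive as a pandas DataFrame with lat, lon, speed, and t columns (the column names are mine):

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

def add_fakes(square):
    # Append m synthetic "fake" boats to the m real pings in one grid square.
    m = len(square)
    fakes = square.sample(m, replace=True, random_state=0).copy()  # speed: empirical, with replacement
    for col in ("lat", "lon", "t"):                                # position and time: uniform
        fakes[col] = rng.uniform(square[col].min(), square[col].max(), size=m)
    fakes["real"] = 0
    real = square.copy()
    real["real"] = 1
    return pd.concat([real, fakes], ignore_index=True)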
34. The Model
A local, unsupervised-as-supervised learning,
bagged, probability model.
•Turned outlier detection, a poorly structured problem, into
modeling a binary target, a very well-understood problem
•Now, simply model the probability that each boat is “real”
•Apply logistic regression to each grid square
•Allow the flexibility (order) of the model fit (splines,
interactions) to depend on the data density in each square
(more data, richer model).
•logit(“real”) = f(speed, location, time)
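A bare-bones version of the per-square fit; the deck grows the model order (splines, interactions) with data density, which this fixed linear specification deliberately glosses over:

from sklearn.linear_model import LogisticRegression

FEATURES = ["speed", "lat", "lon", "t"]

def fit_square(df):
    # logit("real") = f(speed, location, time) within one grid square
    return LogisticRegression(max_iter=1000).fit(df[FEATURES], df["real"])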
35. The Model
A local, unsupervised-as-supervised learning,
bagged, probability model.
Problem: Predictions may be arbitrary due to
random assignment and grid coarseness.
36. The Model
A local, unsupervised-as-supervised learning,
bagged, probability model.
Problem: Predictions may be arbitrary due to
random assignment and grid coarseness.
Solution:
1. Create multiple grids with different positions.
2. Re-run the local model in each square, for
each different grid.
3. Aggregate the predicted probabilities for each
observation, in each grid, by averaging.
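Putting the pieces together, a hedged sketch of that bagging loop; the six grid offsets are invented for illustration, and grid_key, add_fakes, and fit_square are the helpers sketched above.

from collections import defaultdict
import numpy as np

OFFSETS = [(0.0, 0.0), (0.05, 0.0), (0.0, 0.05),
           (0.05, 0.05), (0.025, 0.075), (0.075, 0.025)]   # six shifted grids

def bagged_probabilities(df, offsets=OFFSETS):
    # Average each ping's predicted Pr(real) across the shifted grids.
    total = np.zeros(len(df))
    for dlat, dlon in offsets:
        cells = defaultdict(list)                           # grid key -> row positions
        for i, (la, lo) in enumerate(zip(df["lat"], df["lon"])):
            cells[grid_key(la + dlat, lo + dlon)].append(i)
        p = np.empty(len(df))
        for rows in cells.values():
            sq = add_fakes(df.iloc[rows])
            reals = sq[sq["real"] == 1]                     # score only the real pings
            p[np.array(rows)] = fit_square(sq).predict_proba(reals[FEATURES])[:, 1]
        total += p
    return total / len(offsets)

On the deck's data this is roughly 300 squares times 6 grids, i.e. the ~1,800 local fits mentioned on the next slide.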
37. Computational Efficiency
Estimating a flexible model in each of ~300 grid squares, for
each of 6 grids, means estimating ~1,800 logistic models!
Not a problem, because:
• each one has limited amounts of data (most algorithms slow
down superlinearly as data grows)
• each local model is separate, allowing for parallel
processing
Computation on my laptop takes ~4 minutes after simple
parallelization across cores.
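Because every local fit is independent, the parallel step can be as simple as the following joblib sketch (joblib is my choice for illustration, not necessarily what was used; df and cells are reused from the sketch above):

from joblib import Parallel, delayed

# One task per grid square; n_jobs=-1 uses all available cores.
models = Parallel(n_jobs=-1)(
    delayed(fit_square)(add_fakes(df.iloc[rows]))
    for rows in cells.values()
)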
38. What is the Output from this Model?
•Predicted probability of each boat-time (i.e. observation)
being a real boat.
•High probabilities indicate observations doing something
“normal” or “predictable.”
•Low probabilities indicate observations doing something
“discrepant.”
Ship ID  Lat     Long    Speed  Timestamp   Pr(real)
623432   24.546  55.005   9.8   1203221230  0.78
874627   24.716  55.108  12.4   1209242230  0.08
523881   25.128  54.807   4.2   1206120947  0.64
41. Value III: Prioritized List of Suspect Boats
•Model generates probabilities on an interval scale
•Facilitates efficient use of scarce enforcement resources
42. Lessons Learned
Analytics is a powerful tool for identifying patterns in big data.
Identifying outliers is predicated on identifying patterns.
LUBaP models are a powerful tool for outlier detection.
This model uses no subject-matter expertise, only a simple
probability model (implications: portable across domains; fast).
43. What’s the Next Hot Thing?
Unsupervised Scaling of Text Data
44. Analyzing Text is Important
The preponderance of data created today is free text, not
structured numerical data.
One thing people want to do with text is “scale” it; that is, rank
order it according to an underlying continuum.
Examples:
-put a numerical value on what each product reviewer thinks of
a particular product
-generate a measure of the extremism of Iranian clerics based
on their writings
45. Analyzing Text is Difficult
Text data is unstructured and messy.
“I thought I would love the iPhone, but it’s actually not that
great.”
Standard approaches:
1. Dictionary: Create a numeric value for many content-laden
words; compare texts to the dictionary.
2. Estimation: Hand-score many texts; use the scores as a
basis for training a statistical model for other texts.
46. A New Approach
Each author’s use of a word implies that they “support” that
word, as opposed to the words they do not use. The
model, originally developed for scaling the ideological positions
of legislators from their votes, can be applied to word use.
Benefits:
1: No dictionary!
2: Language invariant!
https://github.com/DataTacticsCorp/text-analysis
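The deck does not spell out the estimator, so treat the following as a stand-in: correspondence analysis of the document-term matrix is one classic unsupervised way to place texts on a single latent dimension, and is closely related to the legislator-scaling models referenced above.

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

def scale_texts(texts):
    # One-dimensional unsupervised scaling via correspondence analysis.
    # Assumes every text retains at least one token after min_df filtering.
    X = CountVectorizer(min_df=2).fit_transform(texts).toarray().astype(float)
    P = X / X.sum()                                      # correspondence matrix
    r, c = P.sum(axis=1), P.sum(axis=0)                  # row / column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))   # standardized residuals
    U, s, _ = np.linalg.svd(S, full_matrices=False)
    return U[:, 0] * s[0] / np.sqrt(r)                   # document scores, dim 1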
47. Preliminary Example
Pulled down 2000 tweets, 1000 each with the hashtags #prolife
and #prochoice.
Drop the hashtags (no cheating!), pre-process the text data, and
run the model.
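In that spirit, a tiny hypothetical preprocessing step; tweets is an assumed list of raw tweet strings, and scale_texts is the sketch from the previous slide:

import re

def drop_label_hashtags(text):
    # Remove the class-defining hashtags so the model cannot "cheat".
    return re.sub(r"#pro(?:life|choice)\b", " ", text, flags=re.IGNORECASE)

scores = scale_texts([drop_label_hashtags(t) for t in tweets])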
53. ...by the numbers
7 volunteer, part-time team members (NO OVERHEAD)
first DEMO delivered in 86 days
832 hours of research & development time
54. The Team
Roles: backend development, frontend development, data analysis & development
Members: Marty A, Joe A, Joon K, Annie W, Dave P, Rich H, Shenoa H
60. Directional Space Time Analytics
Data Tactics has been working on a set of problems that
require considered solutions. The following method compares
distributions at two points in time, focusing both on changes
in the overall morphology of the distribution and on the
mobility of individual observations within it over the same
period, while contextually accounting for neighborhood effects.
These dynamics communicate change over time and explicitly
account for the underlying spatial dimension (the spatial lag, Wy).
By integrating a dynamic local space-time indicator with
directional statistics, these methods provide insight into the
role of spatial dependence and uncontrolled variance over
time and space.
61. Directional Space Time Analytics
This analysis demonstrates the utility of directional space time analytics
on regional stability distribution dynamics. Drawing on recent advances
in geovisualization [1], we suggest a spatially explicit view of mobility.
Based on the integration of a dynamic local indicator of spatial
association with directional statistics, mapping a movement vector
to each observation, this framework provides new insights into the
role of spatial dependence in regional stability and change.
These approaches have been illustrated with state-level incomes in the
U.S. (1969-2008), Gross Domestic Product (1960-2011), the Failed State
Index (2010-2012), and GMTI data (t0, t1).
[1] Murray, A. T., Liu, Y., Rey, S. J., and Anselin, L. (2010). Exploring movement object patterns.
62. Per Capita Gross Domestic Product
A measure of the total output of a country that takes the gross domestic product (GDP)
and divides it by the number of people in the country. The per capita GDP is especially
useful when comparing one country to another because it shows the relative
performance of the countries. A rise in per capita GDP signals growth in the economy
and tends to translate into an increase in productivity.
GDP is widely used by economists to gauge economic recession and recovery and an
economy's general monetary ability to address externalities. It is not meant to measure
externalities. It serves as a general metric for a nominal monetary standard of living and
is not adjusted for costs of living within a region.
Gross Domestic Product
GDP = private consumption + gross investment + government spending + (exports − imports); in shorthand, GDP = C + I + G + (X − M).
63. GDP per Capita
Time Span: 1960 to 2011 (51 temporal bins, 1-year intervals); 2000 to 2011 (12 temporal
bins, 1-year intervals);
Spatial Area: Global;
Original Sample: 202 obs;
Data Processing: imputation;
Pruned Sample: 145 observations;
Method: Directional Local Indicator of Spatial Autocorrelation (Moran’s I) with space-time
classifications High-High (HH), Low-Low (LL), High-Low (HL), Low-High (LH);
Spatial Weights: knn4;
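The deck does not name its software, but in the modern PySAL stack this workflow might look roughly as follows; Rose in giddy implements a directional LISA, while country_centroids and the two per-capita GDP vectors are assumed inputs.

import numpy as np
import libpysal
from giddy.directional import Rose

coords = np.asarray(country_centroids)              # assumed (n, 2) array of locations
w = libpysal.weights.KNN.from_array(coords, k=4)    # the slide's knn4 weights
w.transform = "r"                                   # row-standardize

Y = np.column_stack([gdp_pc_2000, gdp_pc_2011])     # value at t0 and t1 per country (assumed)
rose = Rose(Y, w, k=8)                              # movement vectors in (y, Wy) space
# rose.theta holds each observation's direction of movement;
# rose.plot_vectors() draws the space-time movement vectors.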
64. Directional Space Time Analytics
> describe(dlisa$yr2000)
> describe(dlisa$yr2011)

Var     n    mean   sd     median  mad   min  max     range   skew  kurtosis
yr2000  145  5759   9534   1491    1831  87   46453   46366   2.12  3.72
yr2011  145  13292  20621  4666    5841  231  114232  114001  2.46  6.54
66. Directional Space Time Analytics
2000:2011 (12 temporal bin(s), 1 year intervals);
67. Directional Space Time Analytics
What is wrong with Vermont [1]?
- Seemingly nothing!
- It lies within the head of an approximately normal distribution
- It is not an outlier in the classical statistical sense
- Vermont remains below the US average, but it is
closing the gap.
[1] State Median Income
68. State Median Income
Time Span: 1969 to 2008 (40 temporal bin(s), 1 year intervals)
Spatial Area: Contiguous United States;
Original Sample: 48 obs;
Method: Directional Local Indicator of Spatial Autocorrelation (Moran’s I) with space-time
classifications High-High (HH), Low-Low (LL), High-Low (HL), Low-High (LH);
Spatial Weights: Rook Contiguity;
69. Directional Space Time Analytics
1969:2008 (40 temporal bin(s), 1 year intervals)
70. Directional Space Time Analytics
1969:2008 (40 temporal bin(s), 1 year intervals)
71. Directional Space Time Analytics
1969:2008 (40 temporal bin(s), 1 year intervals)
73. Core Values:
Localview as an ecosystem:
Most existing big-data analyses of social media are confined to a
single platform. However, most of the topics of interest to such
studies, such as influence or information flow, can rarely be confined
to the Internet, let alone to a single platform. The understandable
difficulty of obtaining high-quality multi-platform data does not mean
that we can treat a single platform as a closed and insular system,
as if human information flows were gases in a chamber.
“Shapes of stories into computers...” (Kurt Vonnegut)
Nate Silver - Cognition2; Small Multiples; Tukey vs. Tufte
http://kottke.org/11/09/kurt-vonnegut-explains-the-shapes-of-stories
74. Core Values:
Open-source software where possible.
-Bigger data means bigger cost.
-Scientific Python and the R language reached maturity years ago.
Data = Rough + Smooth Qualities
Rough = the impulsive, spiky part of the signal (outliers); Smooth = the pervasive pattern.
Leverage analytics to help understand both the patterns in data and the outliers, the
so-called smooth and rough elements of data. Both are informative, depending on the
specific questions customers have.
Local, as opposed to global or whole-map statistics:
We believe that micro-level, local patterns are often of key interest, and can be
obscured or distorted by attempts to fit global models to local data.
Analytical Pluralism:
Multi-method approaches dominate single-method approaches. Rather than craft a single
statistical model to answer a customer question, we attack problems from several angles
simultaneously, deriving insights from areas of overlap and divergence in the pattern of findings.
Methodological pathways:
Blend nomothetic and idiographic approaches.
79. ...on the horizon.
...On the Horizon:
DT & the USMA Department of Systems Engineering are partnering to leverage
the Advanced Individual Academic Development Program.
Rstudio: analytics.data-tactics-corp.com; PostgreSQL: analytics.data-tactics-corp.com Port: 5432
https://github.com/rheimann/kiva-master
80. Data Tactics & US Military Academy:
A Primer in Microfinance using KIVA
Rstudio: analytics.data-tactics-corp.com; PostgreSQL: analytics.data-tactics-corp.com Port: 5432
Understanding the complex nature of microfinance more completely:
The US military is directly involved in microfinance (Iraq & Afghanistan), working primarily
through Provincial Reconstruction Teams (PRTs). Funded by the DoD and DoS, the
operational requirements of these agencies create a need to demonstrate quick impact on
economic recovery; the goal is therefore to report high numbers of loans.
Technical complexities separate this data from other datasets:
Heterogeneous forms: structured/unstructured; nominal, ordinal, and quantitative; temporal;
geographic; multi-lingual; multiple relationships (lenders to recipients); multiple sectors;
missing data. Data cleansing is hard!
Big Data(ish): $420M (USD), 1.1 million lenders, 580,000 loans, 250 partners, 4.1M
transactions, 3 WHOLE GBs. (https://vimeo.com/28413747)
Broad appeal:
...government to defense to finance to banking to non-profit organizations to THE POOR.
https://github.com/rheimann/kiva-master
81. ...on the horizon.
...On the Horizon:
DT & The Institute for the Study of War will collaborate on a balanced but largely
quantitative approach to analyzing revolutions and the role social media plays, with
particular focus on the Iraq Spring.
82. ...on the horizon.
...on the Horizon:
Data Science for Program Managers (late September / early October)
Analytics Brown Bag Volume II (October / Early November)