Creating a “single source of truth”
Combining disparate data sources of potential donors, volunteers and voters (email, postal, telephone, mobile and social contacts with historical voting records, polling and fundraising data).
They built a single view of individuals that informed their strategies for raising funds, mobilizing volunteers and securing votes.
Source: http://connect.icrossing.co.uk/obamas-big-data-election-victory_9423
Profiling and predicting
Demographics and data collected by fieldwork on the campaign trail were added to the mix, allowing predictive modelling to score people on their likelihood to donate or vote for the Democrats. Channels of communication were optimized, and the type of messaging was tailored to maximize the likelihood of response.
Source: http://connect.icrossing.co.uk/obamas-big-data-election-victory_9423
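The scoring step described above can be sketched as a simple logistic model. Everything here (the feature names, weights and contacts) is invented for illustration; the campaign's actual models were fit to its historical voting, polling and fundraising data.

```python
import math

# Hypothetical feature weights for a donor-propensity model; a real model
# would learn these from historical donation and voting records.
WEIGHTS = {"past_donor": 2.0, "volunteer": 1.2, "contacted_recently": 0.5}
BIAS = -2.5

def donation_score(person):
    """Logistic score in (0, 1): estimated likelihood of donating."""
    z = BIAS + sum(WEIGHTS[k] * person.get(k, 0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

# Rank contacts so outreach can prioritise the most likely donors,
# and so messaging can be tailored per segment.
contacts = [
    {"name": "A", "past_donor": 1, "volunteer": 1, "contacted_recently": 1},
    {"name": "B", "past_donor": 0, "volunteer": 0, "contacted_recently": 1},
]
ranked = sorted(contacts, key=donation_score, reverse=True)
```

The same scoring idea applies to the "likelihood to vote" models; only the features and training targets differ.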
Turning data into the human touch
The power of localised networks and neighbourhoods.
Using centralized data to provide geo-targeted insight, campaign volunteers could base themselves in the areas that mattered most, talking to the voters they had got to know since the start of the 2008 campaign, and deliver their message from within communities.
As a result, they received double the votes in the marginal states compared with 2008.
Source: http://connect.icrossing.co.uk/obamas-big-data-election-victory_9423
Turning data into the human touch
More than two million small donors contributed over $427 million to his campaign.
About 55% of the funds raised came from donations under $200.
Source: http://connect.icrossing.co.uk/obamas-big-data-election-victory_9423
Focus on the swing states
Regular polling of states like Ohio throughout the campaign provided valuable data for the team to process and analyze trends.
For example, the analysts could track the impact of the three TV debates on the Democratic vote in real time and were able to identify specific segments to target with campaign material, split by region, demographics and the profile scoring that had been modeled in the new database. One Democratic official commented that they scenario-tested the election 66,000 times every night in order to calculate predicted outcomes for swing states.
Campaign resources were then allocated appropriately to persuade undecided voters most likely to pledge their allegiance to Obama.
By the time election day came around, the Democrats had a clear idea of how voting in the swing states was looking.
Source: http://connect.icrossing.co.uk/obamas-big-data-election-victory_9423
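The nightly 66,000-run scenario testing is, in essence, Monte Carlo simulation: repeatedly sample an election outcome from current polling estimates and count how each state falls. A toy sketch, with invented state win probabilities and electoral-vote counts, and far fewer trials than the campaign used:

```python
import random

# Invented per-state (electoral_votes, win_probability) pairs; a real run
# would derive these from the latest polling and profile-scoring data.
SWING_STATES = {"Ohio": (18, 0.55), "Florida": (29, 0.50), "Virginia": (13, 0.52)}

def simulate(n_trials=10_000, seed=42):
    """Return the fraction of simulated elections won in each state."""
    rng = random.Random(seed)
    wins = {state: 0 for state in SWING_STATES}
    for _ in range(n_trials):
        for state, (_, p) in SWING_STATES.items():
            if rng.random() < p:  # sample this state's outcome once
                wins[state] += 1
    return {state: wins[state] / n_trials for state in SWING_STATES}

results = simulate()
```

With enough trials the simulated win fractions converge to the input probabilities; the value of the exercise comes from feeding in fresh polling every night and watching how the predicted outcomes move.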
Data science involvement in the election wasn’t just restricted to the candidates’ teams.
Nate Silver, applying the statistical modelling he first developed for sabermetrics, correctly predicted the outcome of all 50 state votes.
Source: http://connect.icrossing.co.uk/obamas-big-data-election-victory_9423
Big Data – What Is It?
Volume. Variety. Velocity. Taken together, these three “Vs” of Big Data were originally posited by Gartner’s Doug Laney in a 2001 research report; variability and complexity are often added as further dimensions.
“It’s difficult to imagine the power that you’re going to have when so many different sorts of data are available”
Tim Berners-Lee
Mass Opinion Business Intelligence (MOBI) analyzes and classifies comments made online and distills the information into a pre-defined, structured database.
The MOBI methodology combines online measurement, cloud computing and market research to provide live consumer sentiment data around brands, products and purchase-influencing factors, using decision-support information from millions of unsolicited opinions.
http://en.wikipedia.org/wiki/WiseWindow
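The "classify comments and distill into a structured database" step can be sketched with a toy keyword classifier. The word lists and comments are invented, and MOBI's real pipeline is of course far more sophisticated than keyword matching:

```python
# Invented sentiment lexicons for illustration only.
POSITIVE = {"love", "great", "excellent", "reliable"}
NEGATIVE = {"hate", "broken", "terrible", "slow"}

def classify(comment):
    """Label one free-text comment as positive, negative or neutral."""
    words = set(comment.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def distill(comments):
    """Aggregate raw comments into a structured count per sentiment class."""
    table = {"positive": 0, "negative": 0, "neutral": 0}
    for comment in comments:
        table[classify(comment)] += 1
    return table

summary = distill(["love this phone", "terrible battery life", "ok I guess"])
```

The point is the shape of the pipeline: unsolicited free text goes in, a pre-defined structured record (here, sentiment counts) comes out, ready for live querying at scale.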
Financial Services Industry: Bloomberg and WiseWindow use social media and big data to improve investment returns.
http://en.wikipedia.org/wiki/WiseWindow
Natural disasters: Twitter was a richer and more up-to-date source of information about the 5.8 magnitude quake in Virginia.
Twitter traffic after the Japan earthquake
http://youtu.be/PThAriHjk10
Automotive Industry: Big data analysis of social media comments can predict trends in automotive equipment failures.
Telecommunications: T-Mobile used big data integrated with its transaction systems and social media to dramatically cut customer defections in one quarter.
Energy/Utility Industry: GE is going to use social media reports to track outages faster and better.
Advertising Industry: Dachis Group used big data analysis of social media to create a more up-to-date and accurate ranking of social engagement at large companies.
Marketing: Nestle is using social media listening and analytics to engage at scale in the market using its big-data-powered central command center.
Education Industry: DoSomething.org engaged 200,000 people worldwide on Facebook to combat bullying in schools and analyzed their sentiments.
Criminal Justice: Police departments across the United States now use social media analysis extensively to fight crime.
Health Care Industry: Using social media and big data to track cholera outbreaks in Haiti faster and more accurately.
Top 5 Myths about Big Data
1. Big Data is Only About Massive Data Volume
Generally speaking, experts consider petabytes of data volume as the starting point for Big Data, although this volume indicator is a moving target. Therefore, while volume is important, the next two “Vs” are better individual indicators. Variety refers to the many different data and file types that are important to manage and analyze more thoroughly, but for which traditional relational databases are poorly suited. Some examples of this variety include sound and movie files, images, documents, geo-location data, web logs, and text strings. Velocity is about the rate of change in the data and how quickly it must be used to create real value. Traditional technologies are especially poorly suited to storing and using high-velocity data, so new approaches are needed. The more quickly the data in question is created and aggregated, and the more swiftly it must be used to uncover patterns and problems, the greater the velocity and the more likely you have a Big Data opportunity.
Top 5 Myths about Big Data
2. Big Data Means Hadoop
Hadoop is the Apache open-source software framework for working with Big Data. It was derived from Google technology and put into practice by Yahoo and others. But Big Data is too varied and complex for a one-size-fits-all solution. While Hadoop has surely captured the greatest name recognition, it is just one of three classes of technologies well suited to storing and managing Big Data. The other two classes are NoSQL and Massively Parallel Processing (MPP) data stores. (See myth number five below for more about NoSQL.) Examples of MPP data stores include EMC’s Greenplum, IBM’s Netezza, and HP’s Vertica.
Top 5 Myths about Big Data
3. Big Data Means Unstructured Data
Big Data is probably better termed “multi-structured”, as it could include text strings, documents of all types, audio and video files, metadata, web pages, email messages, social media feeds, form data, and so on. The consistent trait of these varied data types is that the data schema isn’t known or defined when the data is captured and stored. Rather, a data model is often applied at the time the data is used.
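This "model applied at the time the data is used" idea is commonly called schema-on-read, and can be sketched in a few lines. The records and field names below are invented: raw multi-structured records are stored exactly as they arrive, and a schema (with defaults for missing fields) is imposed only when a consumer reads them.

```python
import json

# Raw records stored as-is; note the two records do not share a schema.
raw_store = [
    '{"user": "ann", "text": "great product", "lang": "en"}',
    '{"user": "bob", "rating": 4}',
]

def read_as_feedback(raw):
    """Apply a feedback schema at read time, defaulting missing fields."""
    record = json.loads(raw)
    return {
        "user": record.get("user", "unknown"),
        "text": record.get("text", ""),
        "rating": record.get("rating"),  # None when absent
    }

feedback = [read_as_feedback(raw) for raw in raw_store]
```

A different consumer could apply a different model to the same stored bytes, which is precisely what a schema-on-write relational database does not allow.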
Top 5 Myths about Big Data
4. Big Data is for Social Media Feeds and Sentiment Analysis
Simply put, if your organization needs to broadly analyze web traffic, IT system logs, customer sentiment, or any other type of digital shadow being created in record volumes each day, Big Data offers a way to do this. Even though the early pioneers of Big Data have been the largest, web-based social media companies (Google, Yahoo, Facebook), it was the volume, variety, and velocity of data generated by their services that required a radically new solution, rather than the need to analyze social feeds or gauge audience sentiment.
Top 5 Myths about Big Data
5. NoSQL means No SQL
NoSQL means “not only SQL”, because these types of data stores offer domain-specific access and query techniques in addition to SQL or SQL-like interfaces. Technologies in this NoSQL category include key-value stores, document-oriented databases, graph databases, big table structures, and caching data stores. The specific native access methods to stored data provide a rich, low-latency approach, typically through a proprietary interface. SQL access has the advantage of familiarity and compatibility with many existing tools, although this usually comes at some expense of latency, driven by the interpretation of the query into the native “language” of the underlying system. For example, Cassandra, the popular open-source key-value store offered in commercial form by DataStax, not only includes native APIs for direct access to Cassandra data, but also CQL (its SQL-like interface) as its emerging preferred access mechanism. It is important to choose a NoSQL technology that fits both the business problem and the data type, and the many categories of NoSQL technologies offer plenty of choice.
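The latency trade-off described above can be illustrated with a toy in-memory store: a plain Python dict stands in for a system like Cassandra, with keys and rows invented for the example. A native get is a direct key lookup; a SQL-like predicate query must be interpreted and evaluated against every row.

```python
# Toy key-value store: key -> row. Contents are invented.
store = {
    "user:1": {"name": "ann", "city": "rome"},
    "user:2": {"name": "bob", "city": "milan"},
}

def native_get(key):
    """Native access path: direct, low-latency lookup by key."""
    return store[key]

def query_where(field, value):
    """SQL-like access path: interpret a predicate and scan every row,
    analogous to a WHERE clause over a non-key column."""
    return [row for row in store.values() if row.get(field) == value]
```

The native path costs one hash lookup regardless of store size; the query path costs a full scan here, which is why real systems translate such queries into indexed access plans when they can.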
Data Integration
Building permits, causes of death, criminal records, hospital admissions, municipal resolutions, industries by ATECO code, environmental data, healthcare spending, regional measures, maps, politicians’ statements
Data Integration
Building permits, causes of death, criminal records, hospital admissions, municipal resolutions, industries by ATECO code, environmental data, healthcare spending, regional measures, geographic data, politicians’ statements