This presentation analyzes the TED Talk by Sebastian Wernicke
on "How to use data to make a hit TV show". It examines the importance of human judgment in decision making rather than relying purely on data.
Roy Price conducted a competition to select a TV show based on data collected from millions of viewer reactions, but the show that was selected was only average. Another team also looked at the data but took a risk in greenlighting a different project, which became a hit show. While data provides useful insights, making decisions and taking risks is still important, as the full picture may not be clear from data alone. Experts are able to integrate different data points and make good conclusions even with incomplete information.
When and how to use statistics in a UX world - Niki Lin
Setting up a correct plan with statistics can be hard for a number of reasons. This presentation is a primer about good use of statistics and setting up a correct plan for using statistics in a UX environment.
The little spreadsheet overconfidently took on a complex task that was too difficult for it, despite its normal work of valuing simple projects. Spreadsheet errors often arise from overconfidence, where the creator thinks they can handle a more complex problem than they realistically can. There are three main types of spreadsheet errors: mechanical errors from mistyping numbers, logic errors from using the wrong formula, and omission errors by leaving something out. It is important to check work at the end of developing a spreadsheet to reduce errors, as error rates remain consistent even for experienced spreadsheet users. Framing the design carefully and double checking work can help recognize and avoid spreadsheet errors.
So much attention is focused on how technology makes us sad, lonely, addicted, lazy, and maybe a little stupid. At the same time, we know that technology is actually making all of us feel smart, whole, and connected. What if we could intentionally design technologies for positive emotions and positive outcomes? This is at the heart of happy design.
Learn how to measure for the one thing that matters - happiness - and how to identify the five elements of positive design.
Martina Pugliese, a data science lead, discusses remaining sane in the age of data. She warns of common traps when analyzing and reporting on data, including confirmation bias, Simpson's paradox, and correlation not implying causation. Pugliese emphasizes inspecting data for biases, looking at sources critically, and questioning interests and motivations behind data reporting. While data can provide insights, true science requires scrutinizing data collection and analysis methods.
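One of the traps Pugliese names, Simpson's paradox, is easy to demonstrate: a trend can hold in every subgroup yet reverse in the aggregate when group sizes are imbalanced. A minimal sketch (the figures are the well-known kidney-stone study numbers, used here purely for illustration, not taken from the talk):

```python
# Illustrative example of Simpson's paradox: a treatment beats the control
# within every subgroup, yet loses when the subgroups are pooled, because
# the treatment was applied mostly to the harder ("severe") cases.

groups = {
    # group: (treated_successes, treated_total, control_successes, control_total)
    "mild":   (81, 87, 234, 270),
    "severe": (192, 263, 55, 80),
}

def rate(successes, total):
    return successes / total

# Within each subgroup, treatment outperforms control...
for name, (ts, tt, cs, ct) in groups.items():
    assert rate(ts, tt) > rate(cs, ct), name

# ...but aggregated across groups, the ordering flips.
ts_all = sum(g[0] for g in groups.values())  # 273
tt_all = sum(g[1] for g in groups.values())  # 350
cs_all = sum(g[2] for g in groups.values())  # 289
ct_all = sum(g[3] for g in groups.values())  # 350
print(rate(ts_all, tt_all) < rate(cs_all, ct_all))  # True: the paradox
```

This is exactly why the talk urges scrutinizing how data was collected and segmented before trusting an aggregate comparison.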
This document outlines a three-week school project where students in an 11th grade history class created infographics about environmental disasters. The project involved asking questions, forming hypotheses, designing experiments, analyzing data, drawing conclusions, and formulating final hypotheses. It discusses students presenting the infographics they created and exploring patterns and connections in the data. The goal was for the visualizations to tell stories with the information and focus on important details.
This document discusses ten common "dramatic instincts" that can lead people to misinterpret facts and see issues as more dramatic than they really are. These include the gap, negativity, fear, size, destiny, single perspective, blame, and urgency instincts. It provides rules of thumb for controlling each instinct, such as checking for actual gaps and risks rather than assuming them, focusing on gradual changes rather than destinies, considering multiple perspectives rather than single views, and resisting urges to blame or act urgently. The goal is to recognize instincts that distort reality and instead take a fact-based, measured approach to understanding issues and making decisions.
This document discusses key metrics and analysis that are important for data scientists. It emphasizes that daily active users, retention rates, time spent, and revenue are classic good metrics to analyze. Good analysis should describe user needs and inform decisions. Complexity should be avoided as an anti-goal. Measuring impact and having observable consequences are important. Assumptions, small data, talking to users, and making predictions can help overcome objections around data and analysis. Confidence intervals, testing intuitions with data, and the value of information are also discussed.
Scientific notation and scaling are used to help understand numbers and sizes that are too large or small for direct comprehension. Scientific notation converts very large or small numbers into a more readable format with an exponent. Scaling creates comparable models to study objects too big, like planets, or too small, like molecules, to directly observe. While less accurate and requiring unit conversions, these techniques are important for studying the vast universe whose real sizes are beyond direct human experience or reference.
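Both ideas can be sketched in a few lines. The specific quantities and the scale factor below are illustrative assumptions, not values from the document:

```python
# A minimal sketch of scientific notation and scaling.
earth_sun_km = 149_600_000          # average Earth-Sun distance, in km
hydrogen_radius_m = 0.000000000053  # Bohr radius, roughly 5.3e-11 m

# Python's format spec renders numbers in scientific notation directly,
# turning unreadable strings of zeros into a mantissa and an exponent.
print(f"{earth_sun_km:.3e}")       # 1.496e+08
print(f"{hydrogen_radius_m:.1e}")  # 5.3e-11

# Scaling: build a comparable model by dividing by a chosen scale factor.
scale = 1e7  # assume 1 model unit = 10,000,000 km for this sketch
print(earth_sun_km / scale)        # 14.96 model units
```

The scale factor is where the unit conversions (and the loss of precision the document mentions) come in: every measurement in the model must be divided by the same factor to keep comparisons meaningful.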
Can automated feature engineering prevent target leaks? - Meir Maor
In this talk we will review common and subtle ways in which problem definitions can go wrong. Drawing on cases we encounter in the field, we will discuss target leaks (the use of information that cannot be available at prediction time), address sampling bias, and consider ways to identify and tackle them.
You'll hear many real-life examples of how these issues manifested and see how introducing automated feature engineering can change the way data scientists discover and treat them.
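The notion of a target leak can be sketched in a few lines. The feature below is hypothetical and deliberately leaky, constructed as a noisy copy of the label (for instance, a column logged only after the outcome occurred); it is not an example from the talk:

```python
# Hypothetical sketch of a target leak: a "feature" that encodes information
# unavailable at prediction time, here built directly from the label.
import random

random.seed(0)
labels = [random.randint(0, 1) for _ in range(1000)]

# leaky_feature is secretly a 95%-faithful copy of the label itself,
# e.g. a field written to the database after the outcome was known.
leaky_feature = [y if random.random() < 0.95 else 1 - y for y in labels]

# A trivial "model" that just echoes the feature scores suspiciously well.
accuracy = sum(f == y for f, y in zip(leaky_feature, labels)) / len(labels)
print(accuracy)  # close to 0.95: near-perfect accuracy is a classic symptom
```

In practice the leak is rarely this blatant; the talk's point is that automated feature engineering, by generating and evaluating features systematically, can surface exactly this kind of too-good-to-be-true signal for inspection.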
The document discusses challenges that can arise when implementing G7 color management. It notes that initial excitement about G7 can fade when difficulties emerge, just as in marriage, and lists some common attitude and practice obstacles encountered. These include running to a visual proof match before hitting accurate numbers, adjusting densities without communication, and expecting one curve to work for multiple conditions. It acknowledges that while G7 aims for consistency, some flexibility is still required in the real world. Overall the document provides an overview of challenges in transitioning to G7 and maintaining successful implementation.
This is the final pitch presentation of Bastiaan Andriessen, Maarten Bamelis, Andreas De Lille and Ruben Meul during #oSoc14 TDD, the final event of open Summer of code 2014. Zerkzoeker is a tombstone searching tool for West Flemish cemeteries, an oSoc14 challenge from our partners Leiedal and the Opening Up project. More info on http://summerofcode.be
DevOps Days SLC 16: Stop running with sharp metrics - Julia Wester
There are a thousand metrics floating around and it is difficult to tell what is truly important. Whether you’re the person who is being measured by something that doesn’t quite make sense or the leader that is trying to figure out just how the heck to show to others that her team is successful, there are a lot of questions out there and a lot of people that are just feeling injured by metrics.
Julia Wester will share examples of good and bad techniques for using data when coaching teams. Come, listen and learn how to avoid the pitfalls of managing by numbers, including how to identify and avoid vanity metrics, how to choose metrics that drive desired behaviors, and ways to visualize balanced team metrics that enable continuous improvement.
This document provides information about recognizing common cognitive biases and misconceptions when interpreting facts and data. It lists 10 common "dramatic instincts" that can lead us to perceive things as more dramatic than they are, such as the size instinct, negativity instinct, and blame instinct. The document encourages questioning narratives that trigger these instincts. It also provides "rules of thumb" to help control each dramatic instinct, such as checking proportions to control the size instinct and noticing slow changes to control perceptions of destiny or inevitability. The overall message is that recognizing our cognitive biases can help us achieve a more fact-based understanding of the world.
As content marketers, we're rich with data but still far too poor with insights. Find out how to see both the forest and the trees, and to create "lead" metrics that look forward, instead of always just measuring success and failure in the past.
The document summarizes key ideas from an Agile Portugal conference presentation on rebooting Agile. It discusses evolving versions of principles from George Orwell's Animal Farm and the Agile Manifesto. It then presents strategies for addressing malnutrition in Vietnam, focusing on positive deviance pioneered by Jerry Sternin of using existing successful local solutions rather than outsider recommendations. The presentation concludes with encouragement to keep trying and learn from failures.
Data analysis provides insights into audience preferences that can help create successful TV shows and make good business decisions. Game of Thrones, for example, drew on analysis of audience demand for the fantasy genre as well as reactions to sample episodes. Both Amazon and Netflix released sample episodes to collect data on viewer behaviors such as pausing and rewatching. Netflix was more successful because, in addition to data analysis, it allowed intuition and risk-taking in decision making. While data provides insights, human judgment is still needed to solve problems and make effective choices.
Netflix used data to understand their viewers' preferences for shows, actors, and genres. They analyzed this data and decided to license "House of Cards", a political drama about a senator, rather than their originally planned sitcom about four Republican senators. "House of Cards" was hugely popular with ratings of 9.1 while their other show "Alpha House" received average ratings of only 7.5. The document argues that Netflix's success was due to their use of data to understand their audience but also relying on human judgment and creativity rather than being driven solely by data when making final decisions.
1. The document provides tips on how to use data analysis to create a hit TV show by studying trends in user activities and experiences while watching shows.
2. It discusses using data to break problems down, but notes that data analysis alone is not suited to putting solutions back together; the brain is better at drawing conclusions.
3. The document advises Indian managers to use intellectual skills to combine data analysis with personal decision making to solve problems.
The document discusses how Amazon and Netflix used data analysis to develop successful TV shows. Amazon held a competition to select TV show pilots, then analyzed viewer data like ratings and histories to develop shows. They concluded a sitcom about Republican senators would do well but "Alpha House" was only average. Netflix's Chief Content Officer looked at their viewer data to make "House of Cards", betting on a drama about a senator, which became a hit with a 9.1 rating. However, the document notes that while data analysis is useful, it does not always lead to optimal results, and following data alone can lead to wrong decisions. Complex problems require both deep analysis of parts and combining them insightfully.
1) Netflix used data analysis to break down the problem of finding a hit TV show, while Amazon Studios used data directly to arrive at the solution of making a sitcom about four Republican Senators.
2) While Amazon's approach was not very successful, Netflix was successful because they used data to understand audience preferences but then made the creative decision to produce "House of Cards", a drama about a Senator, which was not directly suggested by the data.
3) Relying too heavily on data-driven decision making can leave little room for errors, which could have serious consequences like wrongly denying parole or greenlighting mediocre TV shows. Data is best for understanding problems, but human judgment is still needed to arrive at sound decisions.
You are the ultimate data wrangler. The polyglot master of Python and R. You know all about the differences between linear and logistic regression. You know when to use a dimensionality reduction algorithm and when to use a neural net. You have petabytes of data taking structured form at your command, and you have the R-squared score to prove it!
But all of your data wrangling and number crunching won't matter if the decision makers ignore your data.
The tools to communicate the message in your data are simple, yet they can be hard to learn. So, let’s talk about the five critical communication tools you need to master "The Art of Speaking Data."
1) The document discusses how data analysis is used to make hit TV shows.
2) It analyzes data from IMDB on 2500 TV shows' ratings and finds that while House of Cards was in the top 2% of rated shows, Alpha House had a slightly above average rating despite similar data.
3) The key insight is that data analysis alone is not enough - human judgment from expert brains is needed to properly analyze and draw conclusions from data, and risks must be taken to achieve success.
Business decisions are based not only on data but also on an individual's risk-taking ability. Sebastian Wernicke illustrates this with examples from Netflix, Amazon, and Google.
Are you data-driven or addicted to data? - Tony Clement
Are you addicted to data? Here's a self diagnosis to see if you're on the path to bad data habits, and some counter measures to put into action if you are.
1) The document discusses how data analysis can be used to create hit TV shows by studying user activities and experiences while watching shows.
2) It provides examples of how Amazon and Netflix used data collected from free pilot episodes and viewership trends to inform their decisions about which shows to develop into full series.
3) The key lessons are that data analysis alone cannot determine the solution - intellectual decisions are also needed to interpret the data and take wise risks that can lead to extraordinary success. Managers should use both data and decision-making skills to solve problems.
Although analyzing “big data” has the power to transform your business, the ease of doing so has been over-stated. In reality, harnessing big data is still a messy and labor-intensive business. We are incredibly excited by what we can do with data but also think some of the hype is doing brands a disservice, because it creates a false expectation of how easy this work is going to be. Most things in life that are important and worthwhile are difficult, and the analysis of Big Data is no different. Don’t believe these commonly heard myths…
This document discusses how data and analysis can help create hit TV shows. Roy Price at Amazon Studios released the first episodes of several shows online for free to gather viewer data. Ted Sarandos at Netflix analyzed past viewer data on shows, actors, and directors to select content. While data provides insights, human judgment is still needed to interpret the data and put the pieces back together. Effective use of both data and human creativity can help managers make smarter decisions by predicting customer behavior and emerging trends. Taking risks by testing new show ideas, as Amazon did, can also lead to success despite imperfect data analysis.
Effective Presenting with ‘Think, Feel, Do!’ - Ray Poynter
Effective research needs to result in outcomes, and changes that are beneficial to the organisation commissioning the research.
The ability of the research to help bring about change depends on how it is communicated. In many cases, the only part of the research that has any impact or visibility is the presentation, i.e. the actual presentation and any report, ‘leave-behind’, or ‘take-aways’.
In this webinar, Ray Poynter focuses on how to use the ‘Think, Feel, Do!’ approach to create effective communications, i.e. communications that result in actions.
Automated Decision making with Predictive Applications - Big Data Hamburg - Lars Trieloff
Most businesses make most decisions the way lizards do: based on very simple reflex-response patterns, letting cognitive biases taint their decision making. Instead of letting gut feel and biases take over, predictive applications make decisions that are fast, cheap, and fact-based.
Big Data Berlin – Automating Decisions is the Next Frontier for Big Data – Lars Trieloff
Just collecting, storing and analyzing data is not enough. In order to benefit from it, you have to overcome organizational and human inertia and establish automated processes that use insights gained from your data.
This presentation was given at http://dataconomy.com/28-august-2014-big-data-berlin/
1. The document discusses various techniques for analyzing data and presenting findings, including finding "data breaks", building stories around key events or changes in numbers, and using cause-and-effect frameworks.
2. It emphasizes starting with the consumer perspective and focusing on relative changes in metrics over time rather than absolute numbers.
3. The examples show how to structure analyses around identifying patterns in the data, explaining causal relationships, and drawing conclusions supported by evidence.
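The second point above can be made concrete. As a toy illustration (not taken from the document, all numbers invented), comparing period-over-period relative changes puts a small-but-fast-growing metric and a large-but-slow-growing one on the same footing:

```python
# Toy example: focus on relative changes over time rather than absolute numbers.

def relative_changes(values):
    """Return period-over-period relative changes for a metric series."""
    return [(curr - prev) / prev for prev, curr in zip(values, values[1:])]

# Monthly visits for two hypothetical products.
product_a = [1000, 1100, 1210]  # large absolute numbers, +10% per month
product_b = [50, 65, 84.5]      # tiny absolute numbers, +30% per month

print([round(c, 2) for c in relative_changes(product_a)])  # [0.1, 0.1]
print([round(c, 2) for c in relative_changes(product_b)])  # [0.3, 0.3]
```

In absolute terms product A adds far more visits, but the relative view reveals that product B is the one changing fastest, which is the kind of "data break" a story can be built around.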
The document summarizes the learning process of a student team ("Team Fishreel") over 10 weeks as they worked to understand a national security problem and develop a solution for the NSA. It traces their progression from an initial focus on technical solutions like building models with NSA data, to conducting over 50 interviews to understand the problem from the perspective of analysts, and mapping the different stakeholders within the NSA that would be involved in deploying their solution. It shows how their understanding of the problem expanded from automating a specific task to providing entity insights, and how they developed an understanding of the complex deployment landscape at the NSA.
A presentation given at UberConf 2012 in Broomfield, Colorado, USA.
Further game theory resources can be found at https://gist.github.com/matthewmccullough/2721876 and http://ambientideas.com/blog/index.php/2011/04/game-theory-and-softwaredev/
Similar to "How to use data to make a hit TV show"
This presentation reports a statistical analysis of TED Talks. It covers the parameters that define the success, and similarly the failure, of a TED Talk. It is a summary of the TED Talk "Lies, Damned Lies and Statistics" by Sebastian Wernicke.
This presentation analyzes the HBR article "Big Data Hype (and Reality)" by Gregory Piatetsky-Shapiro. It emphasizes the slow improvement of the technology, but ends by identifying the areas where big data is useful.
The document discusses challenges with hiring data scientists and suggests alternative approaches. It recommends empowering small cross-functional data-oriented teams explicitly tasked with delivering measurable business benefits. This builds internal data capabilities rather than just hiring expertise. It also stresses the importance of making data science a cultural value throughout the organization so that all employees understand basic principles and practices of data science.
This document discusses how to spot bad statistics. It provides three questions to ask: 1) Can you see uncertainty in the data? Many visualizations overstate certainty. 2) Can I see myself in the data? Data needs context about how it relates to people's lives. 3) How was the data collected? It's important to understand how surveys and studies were conducted. Bad statistics can mislead decision making, so it's crucial to evaluate data collection methods and understand limitations to get full context. Statistics are still important for policymaking, but they must be questioned and interpreted carefully.
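The first question above, seeing uncertainty in the data, can be sketched with a small calculation. As a hedged illustration (the survey numbers are invented), the 95% confidence interval for a reported proportion, using the standard normal approximation, shows how much wiggle room hides behind a single headline percentage:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """95% confidence interval for a survey proportion (normal approximation)."""
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return p - margin, p + margin

# Hypothetical poll: 520 of 1000 respondents agree.
lo, hi = proportion_ci(520, 1000)
print(f"52% agree, 95% CI: {lo:.1%} to {hi:.1%}")  # roughly 48.9% to 55.1%
```

A visualization that shows only "52%" overstates certainty: with this sample size the true figure could plausibly be below 50%, which is exactly the kind of context the document says readers should demand.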
The presentation analyzes the HBR article "A Predictive Analytics Primer" by Tom Davenport. It gathers insights on how we can predict better with better assumptions.
Hans Rosling gives a TED talk debunking myths about developing countries with compelling statistics. Thomas Davenport argues that data is useless without good communication. Hans Rosling advocates making publicly available data searchable and visualized to improve understanding. A manager learns that companies should transform database data into logical infographics while protecting confidential information, and encourage using data insights. Better visualizations of available information can improve decision making.
The document discusses the importance of communicating data effectively. It notes that there is a pressing need for more businesspeople who can make decisions based on data analysis. While it is not necessary for managers to crunch numbers themselves, they must be able to communicate quantitative insights to diverse audiences. Effective communication involves understanding the audience, presenting the appropriate level of detail, and focusing on implications rather than just results. When data is communicated well, it can help organizations make better decisions.
The document discusses how data-driven companies are more profitable and provides insights into becoming data-driven. It recommends making decisions throughout the organization to free up senior time. It also stresses investing in quality data sources that others can trust to align with decisions. Managers should push decision making down, invest in quality data, and bring new data technologies into their organizations to reap the profit benefits of a data-driven approach.
Jer Thorp is a data artist who adds humanity to technology by discovering relationships on the internet and building narratives from pieces of information. He argues that numbers represent real world things and are inherently human. His insights are that data gains meaning when put in a human context by bringing the human element into stories, which builds empathy and respect missing from technology. There is a need for more inclusion of artists, poets and writers to highlight humanity in data science.
The document discusses the challenges of drawing insights from big data. It notes that interpreting big data requires critical thinking to understand human expression and account for uncertainty. Managers can better understand data by asking focused questions, considering language and cultural differences, and using multiple disciplines like linguistics and ethics. While big data offers opportunities, organizations must thoughtfully source, analyze, and communicate data to earn and maintain public trust.
The presentation talks about "Data Science being the sexiest job of the 21st century". Its main theme is the challenges faced by the industry and how to overcome them.
This presentation analyses Alan Smith's beautiful TED Talk "Why you should love statistics". Gathering insights and employing them is the major task of this presentation.
End-to-end pipeline agility – Berlin Buzzwords 2024 – Lars Albertsson
We describe how we achieve high change agility in data engineering by eliminating the fear of breaking downstream data pipelines through end-to-end pipeline testing, and by using schema metaprogramming to safely eliminate boilerplate involved in changes that affect whole pipelines.
A quick poll on agility in changing pipelines from end to end indicated a huge span in capabilities. For the question "How long does it take for all downstream pipelines to be adapted to an upstream change?", the median response was 6 months, but some respondents could do it in less than a day. When quantitative data engineering differences between the best and worst are measured, the span is often 100x-1000x, sometimes even more.
A long time ago, we suffered at Spotify from fear of changing pipelines due to not knowing what the impact might be downstream. We made plans for a technical solution to test pipelines end-to-end to mitigate that fear, but the effort failed for cultural reasons. We eventually solved this challenge, but in a different context. In this presentation we will describe how we test full pipelines effectively by manipulating workflow orchestration, which enables us to make changes in pipelines without fear of breaking downstream.
Making schema changes that affect many jobs also involves a lot of toil and boilerplate. Using schema-on-read mitigates some of it, but has drawbacks since it makes it more difficult to detect errors early. We will describe how we have rejected this tradeoff by applying schema metaprogramming, eliminating boilerplate but keeping the protection of static typing, thereby further improving agility to quickly modify data pipelines without fear.
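The talk's actual system is not shown in this abstract. As a minimal, hypothetical sketch of the end-to-end testing idea (the jobs, names, and fixture data below are all invented), a chain of transform "jobs" can be run in dependency order on a tiny fixture input, so that any upstream change that breaks a downstream job fails a single fast test:

```python
# Hypothetical three-job pipeline; each "job" is a pure transform on records.
def ingest(raw_lines):
    # Parse raw CSV-ish lines into (user, plays) string pairs.
    return [tuple(line.split(",")) for line in raw_lines]

def clean(rows):
    # Drop malformed records and convert play counts to integers.
    return [(user, int(plays)) for user, plays in rows if plays.isdigit()]

def aggregate(rows):
    # Sum plays per user, the pipeline's final output.
    totals = {}
    for user, plays in rows:
        totals[user] = totals.get(user, 0) + plays
    return totals

def run_pipeline(raw_lines):
    # Stand-in for a workflow orchestrator running jobs in dependency order.
    return aggregate(clean(ingest(raw_lines)))

def test_pipeline_end_to_end():
    fixture = ["alice,3", "bob,oops", "alice,2"]
    assert run_pipeline(fixture) == {"alice": 5}

test_pipeline_end_to_end()
```

The point of the sketch is only the shape: because the assertion sits on the *final* output, a change to `ingest` or `clean` that silently breaks `aggregate` is caught before anything ships, which is what removes the fear of changing pipelines.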
Analysis insight about a Flyball dog competition team's performance – roli9797
Insights from my analysis of a Flyball dog competition team's performance last year. Find more: https://github.com/rolandnagy-ds/flyball_race_analysis/tree/main
Enhanced Enterprise Intelligence with your personal AI Data Copilot – GetInData
Recently we have observed the rise of open-source Large Language Models (LLMs) that are community-driven or developed by the AI market leaders, such as Meta (Llama3), Databricks (DBRX) and Snowflake (Arctic). On the other hand, there is a growth in interest in specialized, carefully fine-tuned yet relatively small models that can efficiently assist programmers in day-to-day tasks. Finally, Retrieval-Augmented Generation (RAG) architectures have gained a lot of traction as the preferred approach for LLMs context and prompt augmentation for building conversational SQL data copilots, code copilots and chatbots.
In this presentation, we will show how we built upon these three concepts a robust Data Copilot that can help to democratize access to company data assets and boost performance of everyone working with data platforms.
Why do we need yet another (open-source) Copilot?
How can we build one?
Architecture and evaluation
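The abstract above does not include the copilot's code; as a minimal, hypothetical sketch of the retrieval step in such a RAG architecture (table names, schemas, and the scoring rule are all invented, and no real model is called), relevant schema snippets can be retrieved by overlap with the question and assembled into the prompt an LLM would receive:

```python
# Hypothetical RAG retrieval step for a SQL data copilot:
# pick the schema snippets most relevant to the question, then build the prompt.
SCHEMAS = {
    "orders": "orders(order_id, customer_id, amount, created_at)",
    "customers": "customers(customer_id, name, country)",
    "pageviews": "pageviews(session_id, url, viewed_at)",
}

def tokenize(schema):
    # Split a schema string into a set of identifier-like tokens.
    return set(schema.replace("(", " ").replace(")", " ").replace(",", " ").split())

def retrieve(question, k=2):
    # Naive stand-in for vector search: rank schemas by word overlap.
    words = set(question.lower().split())
    ranked = sorted(SCHEMAS.values(), key=lambda s: -len(words & tokenize(s)))
    return ranked[:k]

def build_prompt(question):
    # Augment the prompt with the retrieved context before calling a model.
    context = "\n".join(retrieve(question))
    return f"Given these tables:\n{context}\nWrite SQL for: {question}"

print(build_prompt("total amount per customer_id"))
```

A production copilot would replace the word-overlap ranking with embedding search over the company's data catalog, but the augment-then-generate shape is the same.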
Predictably Improve Your B2B Tech Company's Performance by Leveraging Data – Kiwi Creative
Harness the power of AI-backed reports, benchmarking and data analysis to predict trends and detect anomalies in your marketing efforts.
Peter Caputa, CEO at Databox, reveals how you can discover the strategies and tools to increase your growth rate (and margins!).
From metrics to track to data habits to pick up, enhance your reporting for powerful insights to improve your B2B tech company's marketing.
- - -
This is the webinar recording from the June 2024 HubSpot User Group (HUG) for B2B Technology USA.
Watch the video recording at https://youtu.be/5vjwGfPN9lw
Sign up for future HUG events at https://events.hubspot.com/b2b-technology-usa/
ViewShift: Hassle-free Dynamic Policy Enforcement for Every Data Lake – Walaa Eldin Moustafa
Dynamic policy enforcement is becoming an increasingly important topic in today’s world where data privacy and compliance is a top priority for companies, individuals, and regulators alike. In these slides, we discuss how LinkedIn implements a powerful dynamic policy enforcement engine, called ViewShift, and integrates it within its data lake. We show the query engine architecture and how catalog implementations can automatically route table resolutions to compliance-enforcing SQL views. Such views have a set of very interesting properties: (1) They are auto-generated from declarative data annotations. (2) They respect user-level consent and preferences (3) They are context-aware, encoding a different set of transformations for different use cases (4) They are portable; while the SQL logic is only implemented in one SQL dialect, it is accessible in all engines.
#SQL #Views #Privacy #Compliance #DataLake
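The slides' first property, views auto-generated from declarative data annotations, can be illustrated with a small sketch. This is a hypothetical toy, not ViewShift's actual code, annotation format, or policy rules; the table, columns, and transforms are invented:

```python
# Hypothetical illustration: generate a compliance-enforcing SQL view
# from declarative per-column annotations.
ANNOTATIONS = {
    "user_id": "plain",     # passes through unchanged
    "email": "mask",        # hashed before exposure
    "ip_address": "drop",   # never exposed in the view
    "country": "plain",
}

TRANSFORMS = {
    "plain": lambda col: col,
    "mask": lambda col: f"sha256({col}) AS {col}",
}

def compliance_view(table, annotations):
    cols = [
        TRANSFORMS[policy](col)
        for col, policy in annotations.items()
        if policy != "drop"  # dropped columns simply vanish from the view
    ]
    return f"CREATE VIEW {table}_compliant AS SELECT {', '.join(cols)} FROM {table}"

print(compliance_view("events", ANNOTATIONS))
# CREATE VIEW events_compliant AS SELECT user_id, sha256(email) AS email, country FROM events
```

Pointing query engines at `events_compliant` instead of `events` is what makes enforcement "dynamic": changing an annotation regenerates the view, with no change to any consuming query.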
4th Modern Marketing Reckoner by MMA Global India & Group M: 60+ experts on W... – Social Samosa
The Modern Marketing Reckoner (MMR) is a comprehensive resource packed with POVs from 60+ industry leaders on how AI is transforming the 4 key pillars of marketing – product, place, price and promotions.
3. CASE STUDY: AMAZON’S “ALPHA HOUSE” VS. NETFLIX’S “HOUSE OF CARDS”
Where did Amazon go wrong? OR Where did Netflix go right?
4. SENIOR EXECUTIVE AT AMAZON STUDIOS – ROY PRICE’S APPROACH TO DECIDING ON “ALPHA HOUSE”
1. Select 8 candidates for the TV show and broadcast the 1st episode of each for free.
2. Observe the response of the audience through millions of data points.
3. Make a “data-driven” decision to come up with a sitcom about four US Republican Senators.
5. CHIEF CONTENT OFFICER OF NETFLIX – TED SARANDOS’ APPROACH TO DECIDING ON “HOUSE OF CARDS”
1. Looked at all the data they already had about Netflix viewers.
2. Used that data to discover all of these little bits and pieces about the audience.
3. Took a leap of faith and decided to license a drama series about a single Senator.
6. INSIGHTS: “How to use data to make a hit TV show!” – Sebastian Wernicke at TEDxCambridge
7. Whenever you’re solving a complex problem, you’re doing essentially two things.
1. You take that problem apart into its bits and pieces.
2. You put all of these bits and pieces back together again to come to your conclusion.
INSIGHT #1: Data can only help you rip the pieces apart.
8. • The crucial thing is that data and data analysis are only good for the first part.
• They are not suited to putting those pieces back together again to come to a conclusion.
9. • If there’s one thing a brain is good at, it’s putting bits and pieces back together again,
• even when you have incomplete information.
INSIGHT #2: THE BRAIN IS THE BEST TOOL TO PUT THE BITS AND PIECES BACK TOGETHER
10. • It helps you reach a good conclusion,
• especially if it’s the brain of an expert.
11. “Data is of course a massively useful tool to make better decisions, but I believe that things go wrong when data is starting to drive those decisions” – Sebastian Wernicke
13. REMEMBER THIS EQUATION!!!
Trust the expert’s logic to group the bits + Never be afraid to take risks = AWARDING RESULTS IN YOUR FIST
15. “Even in the face of huge amounts of data, it still pays off to make decisions, to be an expert in what you’re doing and take risks. Because in the end, it’s not data, it’s risks that will land you on the right end of the curve.”