This document appears to be notes from a class on data mining and spatial analysis. It discusses several key topics: location and accuracy of data; bias; subjectivity; the project framework of data selection, preprocessing, mining, interpretation and feature selection; and different types of projects like analytical, predictive, narrative and exploratory. It also provides resources on spatial analysis, data filtering and privacy.
This presentation goes over Data Mining the City, a course taught at Columbia University GSAPP. This lecture also covers complexity, cybernetics, and agent-based modeling.
1. The document discusses a reading group meeting about data mining cities and using cities as platforms. It provides an agenda and links for the meeting.
2. Cities can be thought of as platforms that are programmed through various codes, rules, and standards that shape their form and function over time. How data and algorithms increasingly program city systems and behaviors is discussed.
3. Examples are given of how urban data is collected and analyzed to understand patterns in human movement and behavior, and how this influences the programming and optimization of systems within cities.
2024 State of Marketing Report – by HubSpot (Marius Sescu)
https://www.hubspot.com/state-of-marketing
· Scaling relationships and proving ROI
· Social media is the place for search, sales, and service
· Authentic influencer partnerships fuel brand growth
· The strongest connections happen via call, click, chat, and camera.
· Time saved with AI leads to more creative work
· Seeking: A single source of truth
· TL;DR: Get on social, try AI, and align your systems.
· More human marketing, powered by robots
ChatGPT has been a revolutionary addition to the world since its introduction in 2022, driving a big shift in how information is gathered and processed. What is the story of ChatGPT? How does the bot respond to prompts and generate content? Swipe through these slides, prepared by Expeed Software, a web development company, on the development and technical intricacies of ChatGPT!
Product Design Trends in 2024 | Teenage Engineering (Pixeldarts)
The realm of product design is a constantly changing environment where technology and style intersect. Every year introduces fresh challenges and exciting trends that mold the future of this captivating art form. In this piece, we delve into the significant trends set to influence the look and functionality of product design in the year 2024.
How Race, Age and Gender Shape Attitudes Towards Mental Health (ThinkNow)
Mental health has been in the news quite a bit lately. Dozens of U.S. states are currently suing Meta for contributing to the youth mental health crisis by inserting addictive features into their products, while the U.S. Surgeon General is touring the nation to bring awareness to the growing epidemic of loneliness and isolation. The country has endured periods of low national morale, such as in the 1970s when high inflation and the energy crisis worsened public sentiment following the Vietnam War. The current mood, however, feels different. Gallup recently reported that national mental health is at an all-time low, with few bright spots to lift spirits.
To better understand how Americans are feeling and their attitudes towards mental health in general, ThinkNow conducted a nationally representative quantitative survey of 1,500 respondents and found some interesting differences among ethnic, age and gender groups.
Technology
For example, 52% agree that technology and social media have a negative impact on mental health, but when broken out by race, 61% of Whites felt technology had a negative effect, and only 48% of Hispanics thought it did.
While technology has helped us keep in touch with friends and family in faraway places, it appears to have degraded our ability to connect in person. Staying connected online is a double-edged sword since the same news feed that brings us pictures of the grandkids and fluffy kittens also feeds us news about the wars in Israel and Ukraine, the dysfunction in Washington, the latest mass shooting and the climate crisis.
Hispanics may have a built-in defense against the isolation technology breeds, owing to their large, multigenerational households, strong social support systems, and tendency to use social media to stay connected with relatives abroad.
Age and Gender
When asked to rate their mental health, men rate it higher than women by 11 percentage points, and Baby Boomers rank it highest, with 83% saying it's good or excellent vs. 57% of Gen Z saying the same.
Gen Z spends the most time on social media, which appears consistent with the notion that social media negatively affects mental health. Unfortunately, Gen Z is also the generation least comfortable discussing mental health concerns with healthcare professionals: only 40% say they're comfortable discussing their issues with a professional, compared to 60% of Millennials and 65% of Boomers.
Race Affects Attitudes
As seen in previous research conducted by ThinkNow, Asian Americans lag other groups when it comes to awareness of mental health issues. Twenty-four percent of Asian Americans believe that having a mental health issue is a sign of weakness compared to the 16% average for all groups. Asians are also considerably less likely to be aware of mental health services in their communities (42% vs. 55%) and most likely to seek out information on social media (51% vs. 35%).
AI Trends in Creative Operations 2024 by Artwork Flow (marketingartwork)
Creative operations teams expect increased AI use in 2024. Currently, over half of tasks are not AI-enabled, but this is expected to decrease in the coming year. ChatGPT is the most popular AI tool currently. Business leaders are more actively exploring AI benefits than individual contributors. Most respondents do not believe AI will impact workforce size in 2024. However, some inhibitions still exist around AI accuracy and lack of understanding. Creatives primarily want to use AI to save time on mundane tasks and boost productivity.
Enhanced Enterprise Intelligence with your personal AI Data Copilot (GetInData)
Recently we have observed the rise of open-source Large Language Models (LLMs) that are community-driven or developed by the AI market leaders, such as Meta (Llama3), Databricks (DBRX) and Snowflake (Arctic). On the other hand, there is a growth in interest in specialized, carefully fine-tuned yet relatively small models that can efficiently assist programmers in day-to-day tasks. Finally, Retrieval-Augmented Generation (RAG) architectures have gained a lot of traction as the preferred approach for LLMs context and prompt augmentation for building conversational SQL data copilots, code copilots and chatbots.
In this presentation, we will show how we built upon these three concepts a robust Data Copilot that can help to democratize access to company data assets and boost performance of everyone working with data platforms.
Why do we need yet another (open-source) Copilot?
How can we build one?
Architecture and evaluation
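The retrieval-augmentation step at the heart of such a copilot can be sketched in a few lines. Everything below is illustrative, not GetInData's implementation: the corpus, the bag-of-words "embedding," and the prompt template are assumptions, and a real system would use a vector model and an actual LLM call instead.

```python
import math
import re
from collections import Counter

# Hypothetical in-memory corpus standing in for a company's data-asset docs.
DOCS = {
    "orders": "Table orders: one row per customer order, with columns order_id, customer_id, total.",
    "events": "Table events: clickstream events with columns event_id, user_id, timestamp.",
    "revenue": "View revenue: daily aggregated order totals joined with refunds.",
}

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real copilot would use a vector model."""
    return Counter(re.findall(r"[a-z0-9_]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank documented assets by similarity to the question; keep the top k."""
    q = embed(question)
    ranked = sorted(DOCS, key=lambda name: cosine(q, embed(DOCS[name])), reverse=True)
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Augment the user's question with retrieved schema context for the LLM."""
    context = "\n".join(DOCS[name] for name in retrieve(question))
    return f"Answer using only this schema context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("Which table has customer order totals?")
```

The augmented prompt is what makes such a copilot conversational over company data: the model answers from retrieved schema context rather than from its training data alone.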
The Ipsos AI Monitor 2024 Report (Social Samosa)
According to Ipsos AI Monitor's 2024 report, 65% of Indians said that products and services using AI have profoundly changed their daily lives in the past 3-5 years.
ViewShift: Hassle-free Dynamic Policy Enforcement for Every Data Lake (Walaa Eldin Moustafa)
Dynamic policy enforcement is becoming an increasingly important topic in today’s world, where data privacy and compliance are top priorities for companies, individuals, and regulators alike. In these slides, we discuss how LinkedIn implements a powerful dynamic policy enforcement engine, called ViewShift, and integrates it within its data lake. We show the query engine architecture and how catalog implementations can automatically route table resolutions to compliance-enforcing SQL views. Such views have a set of very interesting properties: (1) they are auto-generated from declarative data annotations; (2) they respect user-level consent and preferences; (3) they are context-aware, encoding a different set of transformations for different use cases; (4) they are portable: while the SQL logic is implemented in only one SQL dialect, it is accessible in all engines.
#SQL #Views #Privacy #Compliance #DataLake
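The first property, auto-generating views from declarative annotations, can be illustrated with a toy generator. The annotation format, the policy names, and the `_compliant` view suffix below are all hypothetical; the slides do not spell out ViewShift's actual annotation schema or SQL dialect.

```python
# Hypothetical declarative annotations: each column carries a policy that the
# generator turns into a SQL projection expression.
ANNOTATIONS = {
    "profiles": {
        "email": {"policy": "redact"},
        "age": {"policy": "coarsen", "expr": "FLOOR(age / 10) * 10"},
        "name": {"policy": "allow"},
    }
}

def compliance_view(table: str) -> str:
    """Generate a compliance-enforcing view from a table's column annotations."""
    cols = []
    for col, ann in ANNOTATIONS[table].items():
        if ann["policy"] == "redact":
            cols.append(f"CAST(NULL AS VARCHAR) AS {col}")   # drop the value, keep the shape
        elif ann["policy"] == "coarsen":
            cols.append(f"{ann['expr']} AS {col}")           # reduce precision
        else:
            cols.append(col)                                 # pass through unchanged
    return f"CREATE VIEW {table}_compliant AS SELECT {', '.join(cols)} FROM {table}"

sql = compliance_view("profiles")
```

Because the view is plain SQL, a catalog can transparently resolve `profiles` to `profiles_compliant` for any engine that speaks that dialect, which is the routing idea the slides describe.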
Global Situational Awareness of A.I. and Where It's Headed (Vikram Sood)
You can see the future first in San Francisco.
Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans. Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might. By the end of the decade, American electricity production will have grown tens of percent; from the shale fields of Pennsylvania to the solar farms of Nevada, hundreds of millions of GPUs will hum.
The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be unleashed, and before long, The Project will be on. If we’re lucky, we’ll be in an all-out race with the CCP; if we’re unlucky, an all-out war.
Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them. Nvidia analysts still think 2024 might be close to the peak. Mainstream pundits are stuck on the wilful blindness of “it’s just predicting the next word”. They see only hype and business-as-usual; at most they entertain another internet-scale technological change.
Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them. A few years ago, these people were derided as crazy—but they trusted the trendlines, which allowed them to correctly predict the AI advances of the past few years. Whether these people are also right about the next few years remains to be seen. But these are very smart people—the smartest people I have ever met—and they are the ones building this technology. Perhaps they will be an odd footnote in history, or perhaps they will go down in history like Szilard and Oppenheimer and Teller. If they are seeing the future even close to correctly, we are in for a wild ride.
Let me tell you what we see.
Beyond the Basics of A/B Tests: Highly Innovative Experimentation Tactics You... (Aggregage)
This webinar will explore cutting-edge, less familiar but powerful experimentation methodologies which address well-known limitations of standard A/B Testing. Designed for data and product leaders, this session aims to inspire the embrace of innovative approaches and provide insights into the frontiers of experimentation!
End-to-end pipeline agility – Berlin Buzzwords 2024 (Lars Albertsson)
We describe how we achieve high change agility in data engineering by eliminating the fear of breaking downstream data pipelines through end-to-end pipeline testing, and by using schema metaprogramming to safely eliminate boilerplate involved in changes that affect whole pipelines.
A quick poll on agility in changing pipelines from end to end indicated a huge span in capabilities. For the question "How long does it take for all downstream pipelines to be adapted to an upstream change?", the median response was 6 months, but some respondents could do it in less than a day. When quantitative data engineering differences between the best and worst are measured, the span is often 100x-1000x, sometimes even more.
A long time ago, we suffered at Spotify from fear of changing pipelines due to not knowing what the impact might be downstream. We made plans for a technical solution to test pipelines end-to-end to mitigate that fear, but the effort failed for cultural reasons. We eventually solved this challenge, but in a different context. In this presentation we will describe how we test full pipelines effectively by manipulating workflow orchestration, which enables us to make changes in pipelines without fear of breaking downstream.
Making schema changes that affect many jobs also involves a lot of toil and boilerplate. Using schema-on-read mitigates some of it, but has drawbacks since it makes it more difficult to detect errors early. We will describe how we have rejected this tradeoff by applying schema metaprogramming, eliminating boilerplate but keeping the protection of static typing, thereby further improving agility to quickly modify data pipelines without fear.
Predictably Improve Your B2B Tech Company's Performance by Leveraging Data (Kiwi Creative)
Harness the power of AI-backed reports, benchmarking and data analysis to predict trends and detect anomalies in your marketing efforts.
Peter Caputa, CEO at Databox, reveals how you can discover the strategies and tools to increase your growth rate (and margins!).
From metrics to track to data habits to pick up, enhance your reporting for powerful insights to improve your B2B tech company's marketing.
- - -
This is the webinar recording from the June 2024 HubSpot User Group (HUG) for B2B Technology USA.
Watch the video recording at https://youtu.be/5vjwGfPN9lw
Sign up for future HUG events at https://events.hubspot.com/b2b-technology-usa/
Organizational culture includes values, norms, systems, symbols, language, assumptions, beliefs, and habits that influence employee behaviors and how people interpret those behaviors. It is important because culture can help or hinder a company's success. Some key aspects of Netflix's culture that help it achieve results include hiring smartly so every position has stars, focusing on attitude over just aptitude, and having a strict policy against peacocks, whiners, and jerks.
PepsiCo Presentation to CAGNY Conference Feb 2024 (Neil Kimberley)
PepsiCo provided a safe harbor statement noting that any forward-looking statements are based on currently available information and are subject to risks and uncertainties. It also provided information on non-GAAP measures and directed readers to its website for disclosure and reconciliation. The document then discussed PepsiCo's business overview, including that it is a global beverage and convenient food company with iconic brands, $91 billion in net revenue in 2023, and nearly $14 billion in core operating profit. It operates through a divisional structure with a focus on local consumers.
Content Methodology: A Best Practices Report (Webinar) (Contently)
This document provides an overview of content methodology best practices. It defines content methodology as establishing objectives, KPIs, and a culture of continuous learning and iteration. An effective methodology focuses on connecting with audiences, creating optimal content, and optimizing processes. It also discusses why a methodology is needed due to the competitive landscape, proliferation of channels, and opportunities for improvement. Components of an effective methodology include defining objectives and KPIs, audience analysis, identifying opportunities, and evaluating resources. The document concludes with recommendations around creating a content plan, testing and optimizing content over 90 days.
How to Prepare for a Successful Job Search for 2024 (Albert Qian)
The document provides guidance on preparing a job search for 2024. It discusses the state of the job market, focusing on growth in AI and healthcare but also continued layoffs. It recommends figuring out what you want to do by researching interests and skills, then conducting informational interviews. The job search should involve building a personal brand on LinkedIn, actively applying to jobs, tailoring resumes and interviews, maintaining job hunting as a habit, and continuing self-improvement. Once hired, the document advises setting new goals and keeping skills and networking active in case of future opportunities.
A report by thenetworkone and Kurio.
The contributing experts and agencies are (in alphabetical order): Sylwia Rytel, Social Media Supervisor, 180heartbeats + JUNG v MATT (PL), Sharlene Jenner, Vice President - Director of Engagement Strategy, Abelson Taylor (USA), Alex Casanovas, Digital Director, Atrevia (ES), Dora Beilin, Senior Social Strategist, Barrett Hoffher (USA), Min Seo, Campaign Director, Brand New Agency (KR), Deshé M. Gully, Associate Strategist, Day One Agency (USA), Francesca Trevisan, Strategist, Different (IT), Trevor Crossman, CX and Digital Transformation Director; Olivia Hussey, Strategic Planner; Simi Srinarula, Social Media Manager, The Hallway (AUS), James Hebbert, Managing Director, Hylink (CN / UK), Mundy Álvarez, Planning Director; Pedro Rojas, Social Media Manager; Pancho González, CCO, Inbrax (CH), Oana Oprea, Head of Digital Planning, Jam Session Agency (RO), Amy Bottrill, Social Account Director, Launch (UK), Gaby Arriaga, Founder, Leonardo1452 (MX), Shantesh S Row, Creative Director, Liwa (UAE), Rajesh Mehta, Chief Strategy Officer; Dhruv Gaur, Digital Planning Lead; Leonie Mergulhao, Account Supervisor - Social Media & PR, Medulla (IN), Aurelija Plioplytė, Head of Digital & Social, Not Perfect (LI), Daiana Khaidargaliyeva, Account Manager, Osaka Labs (UK / USA), Stefanie Söhnchen, Vice President Digital, PIABO Communications (DE), Elisabeth Winiartati, Managing Consultant, Head of Global Integrated Communications; Lydia Aprina, Account Manager, Integrated Marketing and Communications; Nita Prabowo, Account Manager, Integrated Marketing and Communications; Okhi, Web Developer, PNTR Group (ID), Kei Obusan, Insights Director; Daffi Ranandi, Insights Manager, Radarr (SG), Gautam Reghunath, Co-founder & CEO, Talented (IN), Donagh Humphreys, Head of Social and Digital Innovation, THINKHOUSE (IRE), Sarah Yim, Strategy Director, Zulu Alpha Kilo (CA).
Trends In Paid Search: Navigating The Digital Landscape In 2024Search Engine Journal
The search marketing landscape is evolving rapidly with new technologies, and professionals, like you, rely on innovative paid search strategies to meet changing demands.
It’s important that you’re ready to implement new strategies in 2024.
Check this out and learn the top trends in paid search advertising that are expected to gain traction, so you can drive higher ROI more efficiently in 2024.
You’ll learn:
- The latest trends in AI and automation, and what this means for an evolving paid search ecosystem.
- New developments in privacy and data regulation.
- Emerging ad formats that are expected to make an impact next year.
Watch Sreekant Lanka from iQuanti and Irina Klein from OneMain Financial as they dive into the future of paid search and explore the trends, strategies, and technologies that will shape the search marketing landscape.
If you’re looking to assess your paid search strategy and design an industry-aligned plan for 2024, then this webinar is for you.
5 Public speaking tips from TED - Visualized summarySpeakerHub
From their humble beginnings in 1984, TED has grown into the world’s most powerful amplifier for speakers and thought-leaders to share their ideas. They have over 2,400 filmed talks (not including the 30,000+ TEDx videos) freely available online, and have hosted over 17,500 events around the world.
With over one billion views in a year, it’s no wonder that so many speakers are looking to TED for ideas on how to share their message more effectively.
The article “5 Public-Speaking Tips TED Gives Its Speakers”, by Carmine Gallo for Forbes, gives speakers five practical ways to connect with their audience, and effectively share their ideas on stage.
Whether you are gearing up to get on a TED stage yourself, or just want to master the skills that so many of their speakers possess, these tips and quotes from Chris Anderson, the TED Talks Curator, will encourage you to make the most impactful impression on your audience.
See the full article and more summaries like this on SpeakerHub here: https://speakerhub.com/blog/5-presentation-tips-ted-gives-its-speakers
See the original article on Forbes here:
http://www.forbes.com/forbes/welcome/?toURL=http://www.forbes.com/sites/carminegallo/2016/05/06/5-public-speaking-tips-ted-gives-its-speakers/&refURL=&referrer=#5c07a8221d9b
ChatGPT and the Future of Work - Clark Boyd Clark Boyd
Everyone is in agreement that ChatGPT (and other generative AI tools) will shape the future of work. Yet there is little consensus on exactly how, when, and to what extent this technology will change our world.
Businesses that extract maximum value from ChatGPT will use it as a collaborative tool for everything from brainstorming to technical maintenance.
For individuals, now is the time to pinpoint the skills the future professional will need to thrive in the AI age.
Check out this presentation to understand what ChatGPT is, how it will shape the future of work, and how you can prepare to take advantage.
The document provides career advice for getting into the tech field, including:
- Doing projects and internships in college to build a portfolio.
- Learning about different roles and technologies through industry research.
- Contributing to open source projects to build experience and network.
- Developing a personal brand through a website and social media presence.
- Networking through events, communities, and finding a mentor.
- Practicing interviews through mock interviews and whiteboarding coding questions.
Google's Just Not That Into You: Understanding Core Updates & Search IntentLily Ray
1. Core updates from Google periodically change how its algorithms assess and rank websites and pages. This can impact rankings through shifts in user intent, site quality issues being caught up to, world events influencing queries, and overhauls to search like the E-A-T framework.
2. There are many possible user intents beyond just transactional, navigational and informational. Identifying intent shifts is important during core updates. Sites may need to optimize for new intents through different content types and sections.
3. Responding effectively to core updates requires analyzing "before and after" data to understand changes, identifying new intents or page types, and ensuring content matches appropriate intents across video, images, knowledge graphs and more.
A brief introduction to DataScience with explaining of the concepts, algorithms, machine learning, supervised and unsupervised learning, clustering, statistics, data preprocessing, real-world applications etc.
It's part of a Data Science Corner Campaign where I will be discussing the fundamentals of DataScience, AIML, Statistics etc.
Time Management & Productivity - Best PracticesVit Horky
Here's my presentation on by proven best practices how to manage your work time effectively and how to improve your productivity. It includes practical tips and how to use tools such as Slack, Google Apps, Hubspot, Google Calendar, Gmail and others.
The six step guide to practical project managementMindGenius
The six step guide to practical project management
If you think managing projects is too difficult, think again.
We’ve stripped back project management processes to the
basics – to make it quicker and easier, without sacrificing
the vital ingredients for success.
“If you’re looking for some real-world guidance, then The Six Step Guide to Practical Project Management will help.”
Dr Andrew Makar, Tactical Project Management
Unlocking the Power of ChatGPT and AI in Testing - A Real-World Look, present...Applitools
During this webinar, Anand Bagmar demonstrates how AI tools such as ChatGPT can be applied to various stages of the software development life cycle (SDLC) using an eCommerce application case study. Find the on-demand recording and more info at https://applitools.info/b59
Key takeaways:
• Learn how to use ChatGPT to add AI power to your testing and test automation
• Understand the limitations of the technology and where human expertise is crucial
• Gain insight into different AI-based tools
• Adopt AI-based tools to stay relevant and optimize work for developers and testers
* ChatGPT and OpenAI belong to OpenAI, L.L.C.
The document discusses various AI tools from OpenAI like GPT-3 and DALL-E 2, as well as ChatGPT. It explores how search engines are using AI and things to consider around AI-generated content. Potential SEO uses of ChatGPT are also presented, such as generating content at scale, conducting topic research, and automating basic coding tasks. The document encourages further reading on using ChatGPT for SEO purposes.
More than Just Lines on a Map: Best Practices for U.S Bike Routes
This session highlights best practices and lessons learned for U.S. Bike Route System designation, as well as how and why these routes should be integrated into bicycle planning at the local and regional level.
Presenters:
Presenter: Kevin Luecke Toole Design Group
Co-Presenter: Virginia Sullivan Adventure Cycling Association
Ride the Storm: Navigating Through Unstable Periods / Katerina Rudko (Belka G...DevGAMM Conference
Has your project been caught in a storm of deadlines, clashing requirements, and the need to change course halfway through? If yes, then check out how the administration team navigated through all of this, relocating 160 people from 3 countries and opening 2 offices during the most turbulent time in the last 20 years. Belka Games’ Chief Administrative Officer, Katerina Rudko, will share universal approaches and life hacks that can help your project survive unstable periods when there seem to be too many tasks and a lack of time and people.
Ride the Storm: Navigating Through Unstable Periods / Katerina Rudko (Belka G...
Week 2 - Data Mining the City
1. DATA MINING THE CITY
Weds 7p-9p 200 Buell
Violet Whitney, vw2205@columbia.edu
please submit your attendance
and medium profile:
http://shoutkey.com/beer
2. New room except…
...Nov 8 in Avery 114
Attendance is taken sometime near the end of class, and the link will expire
Zach White
Windows : (
75. Inductive
to examine empirical evidence in the search for patterns that might support new theories or general principles
Deductive
focusing on the testing of known theories or principles against data
Normative
using spatial analysis to develop or prescribe new or better designs
Spatial Analysis
The project
(づ ̄ ³ ̄)づ
85. Fold a paper into 8 sections
8 min: 1 problem per 1 min
4 min (2 min each): discuss w/ partner how you would get data
1 min: decide on most interesting
d<>d
(☞゚ヮ゚)☞ ☜(゚ヮ゚☜)
86. DATA MINING THE CITY
Weds 7p-9p 200 Buell
Violet Whitney, vw2205@columbia.edu
Week 2 course feedback:
http://shoutkey.com/taro
Next week we have guests coming, which is super exciting. We're going to learn how to solve mapping, routing, and path problems.
Allan William Martin, a computational designer and former Head of Product at Floored (automated floorplan layouts), now a Product Manager in cloud computing
Jeff Tarr, a senior engineer working on indoor localization and understanding paths at Sidewalk Labs
We're super lucky to have both of them. They are both going to be making content specifically for you, to help you in your own projects, so that's even more exciting.
In this portion we’ll contextualize the criticisms of spatial analytics …. so we understand how we are operating and we can be critical and aware of our own approaches
Across many fields, analytics have been applied spatially. A couple examples: regional science is a field of the social sciences concerned with analytical approaches to problems that are specifically urban, rural, or regional. Spatial economics deals with what is where, and why.
But there are major critiques of these methods.
In his landmark text "Explanation in Geography," David Harvey critiques these emerging fields for being devoid of theory and ethnographic perspective, and for being co-opted for political purposes. The book could also have been called "The Role of Theory in Scientific Explanation," as Harvey recognizes that these scientific fields are deeply interwoven with theoretical issues.
Another critique of spatial analysis is that it looks at problems from a God’s Eye View, and is overly reductive of individual experience.
In history, Subaltern Studies examines the history of the masses: what happens at the base levels of society rather than among the elite. It fundamentally questions who speaks for whom, as when histories of the masses are written by elite historians rather than by the masses themselves. When a map is drawn, do the masses speak for themselves, or does an elite planner, economist, or government official speak for them?
Spatial analytics borrows objective and subjective tactics, but it is critiqued for a God's Eye View perspective that can lack first-person understanding, because it reduces multiple individual occurrences into a single model, trend, or behavior pattern.
Objectivity is a central philosophical concept, related to reality and truth. It is concerned with finding agreed understanding of the natural world often through measurement.
Subjectivity is based on or influenced by personal feelings, tastes, or opinions.
The distinction between subjectivity and objectivity can be a fine line, but data mining borrows both objective and subjective methods to understand spatial phenomena.
There's further criticism of the obsession with data. You may have heard the term "quantrapreneurs," which mocks companies built on data.
The quantified self also known as lifelogging, is a movement to incorporate technology into data acquisition on aspects of a person's daily life in terms of inputs (food consumed, quality of surrounding air), states (mood, arousal, blood oxygen levels), and performance, whether mental or physical. In short, quantified self is self-knowledge through self-tracking with technology.
But there are skeptics of the quantified self.
In a quote from a skeptic:
"Quantified self" practitioners as a group are not necessarily curious about human values or an understanding of what makes us human. They're more interested in anything that can be measured and given a number. They believe the maxim that only the things that are measured can be improved. But I see a lot of measuring, but not much improvement....
The skeptic continues...
Quantifying the number of times we eat, sleep, or tweet doesn’t somehow reveal something more truthful about ourselves over just experiencing it. Are we actually learning something more fundamental about ourselves? Why do we think there’s something more true in the numbers than how I feel?
Coffee argument
It seems somewhat impossible to describe everything…
And even when we describe and categorize the world, the boundaries can be somewhat arbitrary.
How do you distinguish a cell in the small intestine from one in the descending colon? The cell doesn't know that it's part of that system, or even of the digestive system. Humans only delineate it that way.
To this degree, boundaries and categories can be somewhat arbitrary in nature.
Can "reality" be described? For Nietzsche and nihilists there is no reason to describe it, because there is no objective order or structure in the world except what we give it.
“Every belief, every considering something true, is necessarily false because there is simply no true world.”
The perspective is that humans search and attribute meaning in a meaningless world.
The phenomenon of humans perceiving correlations that don't exist, such as the faces we imagine in trees or clouds, or in the circles and lines on the screen, is called apophenia.
In data science it is called illusory correlation: the phenomenon of perceiving a relationship between variables (typically people, events, or behaviors) even when no such relationship exists.
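Illusory correlation is easy to reproduce with code. The sketch below, using purely invented random noise, computes Pearson correlations between many pairs of small, completely unrelated samples; some pairs will look strongly "related" by chance alone:

```python
import random

def pearson(xs, ys):
    # Pearson correlation coefficient, computed from scratch
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(7)
strongest = 0.0
for _ in range(1000):
    # Two completely unrelated random samples of 8 points each
    a = [random.random() for _ in range(8)]
    b = [random.random() for _ in range(8)]
    strongest = max(strongest, abs(pearson(a, b)))

# Across many small samples, some random pairs look strongly "correlated"
print(f"strongest correlation found in pure noise: {strongest:.2f}")
```

The lesson for data mining: if you test many variable pairs, some will correlate by accident, so a strong correlation alone is not evidence of a real relationship.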
So what is the distinction between "reality" and our simulation of it in our models, maps, and data? If we had all of the data points about the feel, color, location, and size of everything in the natural world and modeled it, what would be the difference between reality and our simulation? Simulation theory (as popularized by Neuromancer and The Matrix) is the hypothesis that reality could be simulated, and that we couldn't tell the difference anyway if we were in a simulation.
You may have heard of the Borges Map which describes a 1:1 scale map which has been used again and again by other authors and artists.
In 1893, Lewis Carroll, author of Alice in Wonderland, imagined a fictional map that had "the scale of a mile to the mile."
In his passage:
“And then came the grandest idea of all! We actually made a map of the country, on the scale of a mile to the mile!”
“Have you used it much?” I enquired.
“It has never been spread out, yet,” said Mein Herr: “the farmers objected: they said it would cover the whole country, and shut out the sunlight! So we now use the country itself, as its own map, and I assure you it does nearly as well.”
Today the 1:1 map and the blend between reality and simulation is closer than ever through the Internet and Internet of things
A reading from the Artist, Hito Steyerl
The Whorf hypothesis holds that reality is embedded in a culture's language, and that language controls thought and cultural norms. Some languages create the capacity to discuss concepts that don't exist or can't be comprehended in other languages.
The fact that we describe colors categorically, or that we think of counting sequentially, shapes our capacity to understand those things. But many things are less clearly classified and often fit into gradient fields. You can imagine a language made up not of categorical words but of songs, rhythm, pitch, and tone. Instead of saying "bluish green," would we hum somewhere between the pitch of blue and the pitch of green? Could volume indicate intensity of saturation?
This would fundamentally shift our understanding of what is.
….questions?
Words and language create a shared understanding of what something is; however, a word is inherently reductive. So we reduce at the level of the word, but we also reduce when we model a concept.
In data science we use models, trend lines and patterns to understand what happened and to predict what will happen.
When we have a set of data points from individual events, we attempt to understand it by creating a generalized model. In machine learning this is called fitting. When we underfit a model, it is too general to be useful. Our ideal spot is in the middle: specific enough to be useful, but general enough to apply to other data samples. Overfitting hugs the data set too closely; when the model is then applied to another dataset, it isn't relevant.
Because big data uses huge data sets, it's useful to build a model around a percentage of that data to represent all of it. However, if a model is overfit to the sample data, it won't apply to the rest of your data.
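The fitting trade-off can be shown with a toy sketch. The data here is invented around a true trend of y = 2x plus noise: a simple least-squares line generalizes to held-out points, while a polynomial forced through every training point (the overfit model) hugs the noise and fails badly on new data:

```python
import random

def fit_line(pts):
    # Ordinary least-squares line: the "simple" model
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    slope = sum((x - mx) * (y - my) for x, y in pts) / sum((x - mx) ** 2 for x, _ in pts)
    intercept = my - slope * mx
    return lambda x: intercept + slope * x

def fit_interpolating(pts):
    # Lagrange polynomial through every training point: zero training error,
    # i.e. the model "hugs" the noise in the sample
    def f(x):
        total = 0.0
        for i, (xi, yi) in enumerate(pts):
            term = yi
            for j, (xj, _) in enumerate(pts):
                if i != j:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return f

def mse(model, pts):
    return sum((model(x) - y) ** 2 for x, y in pts) / len(pts)

random.seed(0)
def noisy(x):
    return 2 * x + random.gauss(0, 1)  # true trend y = 2x, plus noise

train = [(x, noisy(x)) for x in range(8)]
held_out = [(x + 0.5, noisy(x + 0.5)) for x in range(8)]

line = fit_line(train)
wiggle = fit_interpolating(train)
# wiggle has ~zero error on train, but much larger error than line on held_out
```

The interpolating model is "perfect" on the sample it was fit to, which is exactly why it is useless on the rest of the data.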
Modeling income in NYC
Overfitting is relevant in examples in your life as well:
I looked for wall hooks on Amazon once, and now its recommendation engine thinks I love wall hooks, because it's overfitting to the sample data it was given.
You can also think of examples of overfitting and underfitting in architecture.
Modernist architecture was intended to universalize design around a standard man. But we often find that we are misfits in the models that are built for us. A modernist chair might be too large for us. Or the seat on a plane is too small for a tall person.
In Goldilocks and the Three Bears, Goldilocks finds that the chairs (let's call them models) don't all fit her.
When comparing and modeling data from different sources, we need to find common ground between the data sets. Data synthesis is a method that uses statistical techniques to combine results from different studies and obtain a quantitative estimate of the overall effect of a particular intervention or variable on a defined outcome; i.e., it is a statistical process for pooling data from many clinical trials to glean a clear answer.
A planner’s first stop in describing the existing conditions of a community is usually the Census Bureau. To protect the privacy of respondents, Census data is delivered at different geographies and across different periods of time. For example, the best estimate of the number of households in a community may be available for each Census block from the Decennial Census (last conducted in 2010), and the best estimate of household income may be the five-year rolling data product from the American Community Survey for each Census tract. Combining these disparate data sets to create a coherent and complete representation of what is happening in a community at any point in time is difficult. It’s a bit like trying to completely understand a subject from photos that are taken from different angles, at different points in time, from different distances. Further complicating the problem, urban planners like to use non-Census data sets, such as school quality, that may introduce yet another set of geographies (e.g., school districts).
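One concrete version of that joining problem: Census block GEOIDs nest inside tract GEOIDs (state 2 digits + county 3 + tract 6 = the first 11 digits of a 15-digit block GEOID), so block-level and tract-level products can be linked by truncating the identifier. A minimal sketch; all GEOIDs and values below are invented for illustration:

```python
# Hypothetical inputs: household counts per Census block (Decennial Census)
# and median income per tract (ACS five-year estimates)
households_by_block = {
    "360610155001000": 120,
    "360610155001001": 85,
    "360610156002000": 40,
}
income_by_tract = {
    "36061015500": 62000,
    "36061015600": 48000,
}

def tract_of(block_geoid):
    # Block GEOIDs nest inside tracts: state(2) + county(3) + tract(6) = 11 digits
    return block_geoid[:11]

# Join the two geographies: attach the tract-level income to each block record
merged = {
    block: {"households": hh, "tract_income": income_by_tract[tract_of(block)]}
    for block, hh in households_by_block.items()
}
```

Note that truncation only handles nesting; it does not resolve the harder problems of differing vintages, rolling time windows, or non-Census geographies like school districts.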
Let's talk about how location and accuracy affect our data.
So there are all these various words or mechanisms to describe and track objects and their behavior. We can describe an object's location with a unique identifier number (in 3D modeling programs this is called a GUID), and we can describe its color, length, height, and width, but each of these descriptors changes the way we understand that object.
But it's not just the word we record about an object that changes its definition; its definition is also shaped by the tool or method we use to describe it. You can think of our senses as tools for measuring the natural environment. You can see, smell, hear, taste, and touch, each of which describes a different aspect of a thing. Likewise, we can use tools with sensors to understand and record events and to sense objects. Each tool or sensor has limited agency in what it can record.
A camera, for example, is limited by its resolution, its distance from what it's recording, the range of color it can capture, whether its view is obscured, and so on. All of these factors affect the outcome of what is recorded. A camera placed at a different height with greater resolution would capture very different results. An infrared camera would capture a lower range of spectral colors. And if I used a lidar sensor to detect the distance of the object, I would have no information about the object's color and would only know its position relative to the lidar receiver.
Take these examples of how location is tracked to understand how the tool impacts the data that is recorded.
GPS - Global Positioning System is a radio navigation system that allows land, sea, and airborne users to determine their exact geographic location, velocity, and time, 24 hours a day, in all weather conditions, anywhere in the world.
GPS is made up of 29 satellites orbiting the earth.
GPS is based on the mathematical principle of trilateration: the receiver's position is determined from distance measurements to satellites. Three satellites are used to trace the location, and a fourth is used to confirm the target location. GPS consists of satellites, control and monitor stations, and receivers; the receiver takes timing information from the satellites and uses trilateration to determine a user's exact position.
Agency of GPS: it's accurate to about 10 meters, which isn't very useful for indoor positioning. It works best with a direct line of sight, so it's much worse inside brick buildings or when obscured by foliage.
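The geometric core of trilateration can be sketched in two dimensions: given known anchor positions and measured distances, subtracting one circle equation from the others leaves a small linear system. (Real GPS works in 3D and uses the fourth range to also solve for the receiver's clock error; this toy version, with made-up coordinates, shows only the geometry.)

```python
import math

def trilaterate(anchors, dists):
    # Position from distances to three known anchors: subtracting the first
    # circle equation from the other two leaves a 2x2 linear system.
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# A receiver actually at (3, 4), with exact distances to three anchors
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
truth = (3.0, 4.0)
dists = [math.dist(a, truth) for a in anchors]
x, y = trilaterate(anchors, dists)  # recovers approximately (3.0, 4.0)
```

In practice the measured distances are noisy, which is where the ~10 m error budget comes from; the geometry itself is this simple.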
Wifi pinging - With this method, a signal is sent through a wifi hotspot to a user's device, such as a smartphone, smart watch, or computer, to relate what IP addresses are within range. The strength of a signal, or its mere presence within a wifi network, can indicate where people are located.
While wifi is usually limited to individual networks, larger connected networks can be used to track the movement of a device through the city, using time stamps of when a unique device address shows up at various places.
Agency:
- About 10 meters of accuracy
- MAC addresses are randomized every so often, so it's hard to track who is in range
Beacons - These are small devices that work similarly to wifi pinging, but with a major difference: they can track who a person is (their unique profile). They can track while not connected to the web (via Bluetooth), and their accuracy is much better, around ±1 m.
Beacons are often used commercially to communicate with a shopper’s smartphone to improve the in-store experience. Beacons use Bluetooth to detect nearby smartphones with the intent of sending them ads, coupons, or product information, or to track how a customer moves through a store. Companies like Apple and McDonald's have used beacons to deliver in-store deals to customers phones.
This is also sometimes referred to as geo-fencing.
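Both wifi and beacon positioning often turn signal strength into an estimated distance with a log-distance path-loss model. A sketch; the reference power and path-loss exponent are illustrative assumptions, not calibrated values, and in practice they vary per device and per environment:

```python
def rssi_to_distance(rssi_dbm, ref_power_dbm=-40.0, path_loss_exp=2.7):
    # Log-distance path-loss model. ref_power_dbm is the expected RSSI at
    # 1 meter, and path_loss_exp describes how fast the signal decays
    # (walls, furniture, and bodies all change it). Both defaults here
    # are illustrative assumptions.
    return 10 ** ((ref_power_dbm - rssi_dbm) / (10 * path_loss_exp))

near = rssi_to_distance(-45)  # stronger signal, shorter estimated distance
far = rssi_to_distance(-75)   # weaker signal, longer estimated distance
```

This is one reason beacon accuracy degrades in crowded rooms: the model's constants shift with the environment, so the same RSSI maps to different distances.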
Computer vision - CV is a method for analyzing imagery, often through pattern and object recognition. Computer vision can be used to identify a particular person using facial recognition; it can understand various objects like cars, trees or people, or to recognize gestures or even the mood of someone given his or her expression. These methods are often applied to video surveillance, but can also be applied to analyze stock image or video footage.
Computer vision can be used to track where pedestrians walk.
Agency -
While CV is the most accurate at tracking where someone goes, down to the inch, it has other major limitations. It's not always accurate at defining what is a person: it can track two people as one, or think something non-human is human. It also requires translating a 2D video into a plan view, which is difficult to do accurately.
So data is changed by how it is recorded with a tool, but also how it is translated when it is communicated and visualized.
Maps cannot be created without map projections. All map projections necessarily distort the surface in some fashion. Depending on the purpose of the map, some distortions are acceptable and others are not; therefore, different map projections exist in order to preserve some properties of the sphere-like body at the expense of other properties.
https://en.wikipedia.org/wiki/List_of_map_projections#pseudocylindrical
All projections try to solve the problem of flattening a spherical surface so that it is more computable and easier to reproduce.
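For example, the spherical Web Mercator projection behind most web map tiles flattens latitude and longitude into planar meters, preserving angles locally at the cost of badly inflating area toward the poles. A minimal sketch of the forward projection:

```python
import math

def web_mercator(lat, lon):
    # Spherical Web Mercator: x is proportional to longitude, while y
    # stretches with latitude, so areas near the poles are inflated.
    R = 6378137.0  # earth radius in meters (spherical model)
    x = R * math.radians(lon)
    y = R * math.log(math.tan(math.pi / 4 + math.radians(lat) / 2))
    return x, y

# Lower Manhattan, roughly
x, y = web_mercator(40.7128, -74.0060)
```

The y formula is why Greenland looks as big as Africa on web maps: equal steps of latitude map to ever-larger steps of y as you move away from the equator.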
So accuracy can be distorted when data is translated to a map, and it can be lost via the tool it is tracked with. In images, and especially in satellite photography, this is an issue when one pixel can represent anywhere from a foot to a mile.
https://www.artforum.com/video/mode=large&id=51651
Data that is lost or changed by its means of recording is similar to history. Think about how history is recorded.
After a series of events such as a presidential election, someone must record this history based on their own observations and interpretations. How might someone's record of history differ if they record through an aerial camera, through a camera on the ground, or if they hear an event remotely through a radio?
But even if all observations and understanding is subjective, is it worthwhile to start somewhere?
History Helps Us Understand Change and How the Society We Live in Came to Be
The second reason history is inescapable as a subject of serious study follows closely on the first. The past causes the present, and so the future.
Models can be problematic but they can also help us make better decisions
John Snow...not the game of thrones character
During the cholera epidemic in London, John Snow used surveys and mapping to show that cholera was spread through germs (e.g., feces in water) rather than through bad air (miasma theory). Before this, germs were not accepted as a cause of sickness, and many didn't want to believe it.
By talking to local residents, he identified the source of the outbreak as the public water pump on Broad Street. Although Snow's chemical and microscope examination of a water sample from the Broad Street pump did not conclusively prove its danger, his studies of the pattern of the disease were convincing enough to persuade the local council to disable the well pump by removing its handle. This action has been commonly credited as ending the outbreak.
He also used statistics to illustrate the connection between the quality of the water source and cholera cases. He showed that the Southwark and Vauxhall Waterworks Company was taking water from sewage-polluted sections of the Thames and delivering the water to homes thus increasing incidences of cholera.
Data from NYC Open Data - how it's reduced or abstracted
10 min exercise !!!!!!!!!!!!!
Bias has several definitions, and its common usage is decidedly negative. We typically use it to mean systematic favoritism of a group. Generally speaking, “bias” is derived from the ancient Greek word that describes an oblique line (i.e., a deviation from the horizontal). In Data Science, bias is a deviation from expectation in the data. More fundamentally, bias refers to an error in the data. But, the error is often subtle or goes unnoticed. So, why does bias occur in the first place?
Over the next posts in this series, we will briefly define and describe common statistical and cognitive biases, as listed below:
Selection (or sample) Bias
Seasonal Bias
Linearity Bias
Confirmation Bias
Recall Bias
Survivor Bias
Observer Bias
Reinforcement Bias
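Selection bias, the first on that list, is easy to simulate: apply a biased selection rule to a population and the sample estimate deviates systematically from expectation. The population here is invented Gaussian data, standing in for something like building heights:

```python
import random

random.seed(1)
# Invented population: 10,000 values drawn around a true mean of 30
# (standing in for, say, building heights in a neighborhood)
population = [random.gauss(30, 10) for _ in range(10_000)]
true_mean = sum(population) / len(population)

# A biased selection rule: we only ever observe values above 35,
# like surveying heights only along a commercial avenue
biased_sample = [v for v in population if v > 35]
biased_mean = sum(biased_sample) / len(biased_sample)

print(f"population mean ~{true_mean:.1f}, biased sample mean ~{biased_mean:.1f}")
```

No amount of extra data fixes this: collecting more observations under the same selection rule just makes the wrong answer more precise.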
Weapons of Math Destruction
how data is used to lie
Verizon map video
Regional science is a field of the social sciences concerned with analytical approaches to problems that are specifically urban, rural, or regional.
An overly reductive model is no longer useful; an overfit model is so specific that it no longer fits multiple people. I like to think of a sock vs. a toe sock: one may fit many feet, while the other is so formed to one type of foot that it doesn't fit my stubby toes.
Stephanie Dinkins is an artist focused on artificial intelligence as it intersects race, gender, aging and our future histories.
examination of the codification of social, cultural and future histories
Exercise on hypothesis...
A quick overview from the last class…
We learned what Data Mining is…
With Data Selection you take raw data from websites, a database somewhere, or a website’s API - we got data in our exercise last class by manually scraping Google Maps
With Pre-Processing & Cleaning Data, you clean the data for whatever purposes you need. In our case we had addresses, which needed to be turned into latitudes and longitudes so that they were useful to us.
With Feature Selection, you select which features are useful for your data mining and visualization. We did this by selecting the location attributes (latitude and longitude data) and deleting attributes that weren't relevant, such as a store's open hours or its rating.
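That feature-selection step can be as simple as whitelisting the attributes you need. A sketch with made-up records shaped like the Google Maps exercise; every name, address, and value here is hypothetical:

```python
# Invented scraped records: each place has many attributes, but only
# name and location matter for the mapping step
places = [
    {"name": "Cafe A", "address": "123 Broadway", "lat": 40.7484,
     "lon": -73.9857, "hours": "8-6", "rating": 4.2},
    {"name": "Cafe B", "address": "456 Amsterdam Ave", "lat": 40.7870,
     "lon": -73.9754, "hours": "7-5", "rating": 3.9},
]

KEEP = {"name", "lat", "lon"}  # feature selection: drop hours, rating, etc.
features = [{k: v for k, v in p.items() if k in KEEP} for p in places]
```

Keeping a named whitelist (rather than deleting columns one by one) makes the selection explicit and repeatable when you re-scrape the data.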
With Data Mining we visualized our data set by getting images from Google Street View. There are a number of ways we could begin to sort these images to understand correlations such as how many people appear in each image or how many windows are in each image. What we graph or analyze in these images depends entirely on what we are trying to understand.
Lastly, in data mining we interpret and evaluate our results. We have yet to do this...
We learned about encoding and decoding information, and about what APIs are
What skills do they need
Analytical: based on a history of events, describes something that happened,
such as: public and private funding for bus stations in 2015 in New York City was lower in neighborhoods with a lower average median income.
Analytical….
To understand hurricane damage
Analytical….
To understand how different cultures think about objects or concepts
Analytical….
Forensic to understand an event
https://www.nytimes.com/interactive/2016/09/25/us/charlotte-scott-shooting-video.html?mtrref=undefined
Analytical….
Forensic…
To understand deaths from a fire
Predictive: uses a historical set of data to predict what might happen in the future.
For example, when a McDonalds becomes a new tenant at any location, foot-traffic on the street in the surrounding 5 blocks increases 10%.
This can also be used to make arguments or decisions, such as: because foot-traffic increases 10% any time a new McDonalds moves in, a new subway stop should be built here to capture larger traffic.
Historical analysis could also be used to implement dynamic zoning based on what has happened in a location in the past.
Predictive: continued
https://quickdraw.withgoogle.com/data
Predictive: continued why we won’t have jobs anymore…
...so pay attention
https://affinelayer.com/pixsrv/index.html
Narrative: has the intent of telling a story; it can also be argumentative, but is not intended to draw a scientific conclusion.
Exploratory: intended to explore how data might be related, without necessarily an end goal of arguing anything in particular. For example, the project On Broadway visualizes Instagram images along Broadway, what colors are used in the images, and how many images are posted at each location throughout the day, all visualized side by side with the street view locations.
http://www.on-broadway.nyc/
Exploratory continued…
You can imagine doing exquisite corpses of buildings or streets, or building interiors with streetview
Exploratory continued…
Composite images: pulling together multiple datasets
http://mpkelley.photography/?category=airportraits
WHERE DO WE START?!?!
If we come back to our data mining diagram, you’ll notice it starts at the phase of data selection...but before we start collecting our data, we need to know what we’re collecting it for
Data analysis starts first with a questions phase with a problem you want to solve:
Such as: "What are the characteristics of buildings that are most heavily Instagrammed? What are the characteristics of diverse neighborhoods?"
The wrangling phase includes data selection, pre-processing and cleaning data as well as feature selection - this is a moment to investigate and understand data and its attributes and may require going through multiple data sets to find out if they are useful
The next step is to look for patterns and correlations in the data. This is often done through graphing various features of the data to see how they might be correlated
You'll often need to go back and forth between the wrangling phase of getting and cleaning data and exploring the data, to make sure that it's useful.
The next step is to draw conclusions or make predictions from that data set - this often includes machine learning and statistics. Making broad and accurate conclusions takes a lot of research and vetting. Your projects will scratch the surface here but will make provocative claims supported by data.
Finally the analysis is communicated (often via blog posts, reports which include data visualization).
SO AGAIN….WHERE DO WE START?!?!
Exercise to get started
This isn’t the only way to start a project….just one way to get ourselves moving
Example problems in the form of a question:
"What are the characteristics of buildings that are most heavily Instagrammed? What are the characteristics of diverse neighborhoods?"