The document discusses different visualization techniques throughout history and their impacts. It begins by describing cave paintings from 35,000 BC and then discusses the invention of word clouds in 1992 AD. It next talks about how John Snow's visualization of cholera deaths in 1854 London helped identify the water pump responsible for spreading the disease. The document concludes by examining the controversial "hockey stick graph" showing modern warming is unprecedented, and how visualization design choices can influence data interpretation.
AI BOOT CAMP PROJECT ONE DECK - Preview v3.pdf (John Andrews)
UNC AI Bootcamp Project 1: using public datasets for future prediction. In this project, our team analyzed various datasets using Python tools to predict sea level rise in Wilmington, NC over the next decade. We also created a codebase to compare any two coastal cities over the next decade.
Abstract: This program provides a comparison of sea level rise between two cities. It offers detailed visualizations of historical sea level data. Additionally, it projects future trends based on various scenarios and highlights potential impacts on coastal communities, empowering users to make informed decisions regarding adaptation and mitigation strategies.
This program was originally designed to help New Hanover County allocate funds to mitigate the effects of sea level rise. All data are monthly averages in the format yyyy-mm-01; every value is assumed to fall on the first of the month. However, the program is written to compare any two locations in the world where sea level data are available. city1 is location 1: its sea level rise is compared against northern hemisphere temperature, and a correlation between sea level rise and temperature rise is determined. city2 is location 2: the sea level rise series from location 1 and location 2 are plotted, and their correlation is determined. The PSMSL.org data are used because, at some sites, sea level records span more than 80 years. PSMSL.org is a reputable organization known for providing comprehensive and reliable sea level data, recognized at numerous coastal sites worldwide, and its commitment to data integrity and accuracy makes it a trusted source for scientific research and policy-making.
Wilmington is among the sites monitored by PSMSL.org, with sea level data available since 1935. Remarkably, only 17 data points are missing from this extensive dataset, ensuring a robust analysis of sea level trends in the region. The reference point established in 1970, set at -7000 mm, serves as a valuable baseline for understanding mean sea level variations over time. Missing data points, denoted by -9999, have been systematically cleaned to maintain the integrity of the analysis.
The full project can be accessed here: https://github.com/GuyInFreezer/project-1/tree/main
Team Members: Jay Requarth, John Andrews, Dr. Dick Davis II, Nee Buntoum, Nam Yeongjin, Matt Nicholas and Jana Avery
Ben Kirtman, Director, Cooperative Institute for Marine & Atmospheric Studies, University of Miami, Rosenstiel School of Marine and Atmospheric Science
UCAR Congressional Briefing - April 2018
Monitoring Changes in Antarctic Sea Ice Phenology: 1990-2015 (priscillaahn)
Tan & Le Drew (2016) presented an innovative methodology to detect both temporal and spatial changes in Arctic sea ice phenology using sea ice concentration (SIC) indices derived from remotely sensed data. This project attempts to apply this analysis to the still ambiguous Antarctic sea-ice pack for the years 1990-2015.
Climate change extremes by season in the United States (Zachary Labe)
11 September 2023
Hershey Horticulture Society (Presentation): Climate change extremes by season in the United States, Hershey, PA, USA.
References...
Eischeid, J.K., M.P. Hoerling, X.-W. Quan, A. Kumar, J. Barsugli, Z.M. Labe, K.E. Kunkel, C.J. Schreck III, D.R. Easterling, T. Zhang, J. Uehling, and X. Zhang (2023). Why has the summertime central U.S. warming hole not disappeared? Journal of Climate, DOI:10.1175/JCLI-D-22-0716.1
Labe, Z.M., T.R. Ault, and R. Zurita-Milla (2016). Identifying Anomalously Early Spring Onsets in the CESM Large Ensemble Project. Climate Dynamics, DOI:10.1007/s00382-016-3313-2
Labe, Z.M., N.C. Johnson, and T.L. Delworth (2023). Changes in United States summer temperatures revealed by explainable neural networks. Preprint. DOI:10.22541/essoar.168987129.98069596/v1
ICLR Forecast: 2019 Wildfire Season (May 17, 2019) (glennmcgillivray)
On May 17, 2019, ICLR hosted a forecast of the 2019 wildfire season, led by Richard Carr of the Canadian Forest Service. The interactive webinar summarized current conditions in Canada and provided an outlook for the 2019 wildfire season.
Richard Carr provides fire weather processing for the Canadian Wildland Fire Information System (CWFIS) and international projects. He also provides fire weather briefings to the Canadian Forest Service (CFS) fire group, the Canadian Interagency Forest Fire Centre (CIFFC), and to federal emergency response personnel, and helps provide seasonal forecasting of fire risk. Richard helps provide information to the North American Seasonal Fire Assessment Outlook, and the North American Drought Monitor via AAFC’s Canadian Drought Monitor. Richard represents NRCan-CFS in the CIFFC Forest and Fire Meteorology Working Group.
GEOG 1112 Lab 4 Insolation and Net Radiation.docx (joyjonna282)
GEOG 1112
Lab 4 Insolation and Net Radiation
Due: 11:59pm, Sept. 12, 2015
Please type your answers in Word, print out the blank graphs, draw the graphs by hand, and then scan the completed graphs (or take pictures of your answers if you don't have a scanner) as images. Then insert the images into the Word file. Submit it to the Lab 4 folder in the Dropbox on the course homepage on D2L in the following format: if Mary Smith is submitting lab 4, name the file MarySmith_lab4.doc.
Purpose
The purpose of this lab is for you to understand the annual cycle of insolation and the daily cycle of net radiation.
Background
Insolation is the amount of solar radiation that reaches the Earth's surface. The annual cycle of insolation at a given location on the globe depends on two factors: (a) the latitude at which the observer is located and (b) the sun's changing angle above the horizon at noon, which is caused by the movement of the subsolar point.
Net radiation is the difference between incoming and outgoing flows of solar radiation. The daily cycle of net radiation at a given location on the globe depends on the changing incoming and outgoing flows of radiation over the course of a day. There is a radiation surplus when incoming radiation exceeds outgoing radiation, and a radiation deficit when outgoing exceeds incoming. Air temperature change is associated with the change in net radiation.
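The surplus/deficit bookkeeping above can be sketched as a short illustration. The hourly flux values below are made up for demonstration only; they are not lab data.

```python
# Net radiation = incoming - outgoing; positive means surplus, negative means deficit.
# Hypothetical hourly fluxes (W/m^2) over a simplified 12-step day, for illustration.
incoming = [0, 0, 50, 300, 600, 700, 600, 300, 50, 0, 0, 0]
outgoing = [80, 80, 90, 150, 250, 300, 280, 200, 120, 100, 90, 85]

net = [i - o for i, o in zip(incoming, outgoing)]
surplus_hours = [h for h, n in enumerate(net) if n > 0]
deficit_hours = [h for h, n in enumerate(net) if n < 0]

print("net radiation by step:", net)
print("surplus steps:", surplus_hours)  # midday: incoming exceeds outgoing
print("deficit steps:", deficit_hours)  # night: outgoing exceeds incoming
```

The surplus steps cluster around midday and the deficit steps around night, matching the daily cycle the background section describes.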
Assignment
1 The Annual Cycle of Insolation
(1) Table 1 gives the latitude of the subsolar point at ten-day intervals throughout the year. Plot these points on the blank graph (A) and connect them with a smooth curve. (10 points)
(2) Table 2 gives the intensity of insolation at various latitudes throughout the year. Plot these
monthly values for the following latitudes: 0°, 20°N, 40°S, 60°N, 90°N on the blank graph
(B). Enter the data point in the middle of the month as shown on the partially drawn line for
60°N. Connect the points with a smooth curve and label the latitude. (20 points)
Table 1 The latitude of subsolar point throughout the year
Table 2 Insolation throughout the Year (one unit is 889 gram calories per square
centimeter)
Date | Subsolar Point | Date | Subsolar Point (table values and blank graphs are not reproduced in this extract)
(3) Study the insolation curves in relation to the annual curve of the subsolar point.
Compare times of maximum and minimum values at 60°N and 40°S. Explain how these two
curves differ in timing. (10 points)
(4) Explain how the maximum and minimum of insolation at the equator are related to the
curve of the subsolar point. (10 points)
(5) Compare the insolation curves of 20°N and 90°N. Explain how these two curves differ
in timing. (10 points)
2 The Daily Cycle of Net Radiation
(1) Table 3 gives the net radiation at the ground and air temperat ...
Michal Mucha: Build and Deploy an End-to-end Streaming NLP Insight System | P... (PyData)
At this workshop, you will build your own messaging insights system - data ingestion from a live data source (Reddit), queueing, deploying a machine learning model, and serving messages with insights to your mobile phone!
Unit testing data with marbles - Jane Stewart Adams, Leif Walsh (PyData)
In the same way that we need to make assertions about how code functions, we need to make assertions about data, and unit testing is a promising framework. In this talk, we'll explore what is unique about unit testing data, and see how Two Sigma's open source library Marbles addresses these unique challenges in several real-world scenarios.
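Marbles builds on Python's unittest to attach rich context to failed data assertions. As a library-agnostic sketch of the underlying idea (this is plain unittest, not the Marbles API, and the rows and field names are invented), assertions about data look like:

```python
import unittest

# Toy rows standing in for real ingested records; field names are hypothetical.
ROWS = [
    {"city": "Wilmington", "sea_level_mm": 7021},
    {"city": "Charleston", "sea_level_mm": 6988},
]

class TestSeaLevelData(unittest.TestCase):
    def test_no_missing_sentinels(self):
        # A -9999 sentinel marks missing data and should never reach analysis.
        self.assertTrue(all(r["sea_level_mm"] != -9999 for r in ROWS))

    def test_values_in_plausible_range(self):
        for r in ROWS:
            self.assertTrue(0 < r["sea_level_mm"] < 20000,
                            msg=f"implausible sea level for {r['city']}")

# Run the data checks programmatically.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestSeaLevelData))
print("data checks passed:", result.wasSuccessful())
```

The difference Marbles adds is richer failure messages (e.g. annotating which records and why), so a failed data assertion reads like documentation rather than a bare traceback.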
Similar to Winning Ways for your Visualization Plays by Mark Grundland
The TileDB Array Data Storage Manager - Stavros Papadopoulos, Jake Bolewski (PyData)
TileDB is an open-source storage manager for multi-dimensional sparse and dense array data. It has a novel architecture that addresses some of the pain points in storing array data on “big-data” and “cloud” storage architectures. This talk will highlight TileDB’s design and its ability to integrate with analysis environments relevant to the PyData community such as Python, R, Julia, etc.
Using Embeddings to Understand the Variance and Evolution of Data Science... (PyData)
In this talk I will discuss exponential family embeddings, which are methods that extend the idea behind word embeddings to other data types. I will describe how we used dynamic embeddings to understand how data science skill-sets have transformed over the last 3 years using our large corpus of jobs. The key takeaway is that these models can enrich analysis of specialized datasets.
Deploying Data Science for Distribution of The New York Times - Anne Bauer (PyData)
How many newspapers should be distributed to each store for sale every day? The data science group at The New York Times addresses this optimization problem using custom time series modeling and analytical solutions, while also incorporating qualitative business concerns. I'll describe our modeling and data engineering approaches, written in Python and hosted on Google Cloud Platform.
Graph Analytics - From the Whiteboard to Your Toolbox - Sam Lerma (PyData)
However, graph theory jargon can make graph analytics seem more intimidating for self-study than is necessary. In this talk, the audience will be exposed to some of the basic concepts of graph theory (no prerequisite math knowledge needed!) and a few of the Python tools available for graph analysis.
Do Your Homework! Writing tests for Data Science and Stochastic Code - David ... (PyData)
To productionize data science work (and have it taken seriously by software engineers, CTOs, clients, or the open source community), you need to write tests! Except… how can you test code that performs nondeterministic tasks like natural language parsing and modeling? This talk presents an approach to testing probabilistic functions in code, illustrated with concrete examples written for Pytest.
RESTful Machine Learning with Flask and TensorFlow Serving - Carlo Mazzaferro (PyData)
Those of us who use TensorFlow often focus on building the model that's most predictive, not the one that's most deployable. So how to put that hard work to work? In this talk, we'll walk through a strategy for taking your machine learning models from Jupyter Notebook into production and beyond.
Mining dockless bikeshare and dockless scootershare trip data - Stefanie Brod... (PyData)
In September 2017, dockless bikeshare joined the transportation options in the District of Columbia. In March 2018, scooter share followed. During the pilot of these technologies, Python has helped District Department of Transportation answer some critical questions. This talk will discuss how Python was used to answer research questions and how it supported the evaluation of this demonstration.
Avoiding Bad Database Surprises: Simulation and Scalability - Steven Lott (PyData)
There are many stories of developers creating databases that don't operate at scale. The application is good, but the database won't work at realistic volumes of data. It's like a horror movie where they never looked behind the door, ran into the dark forest at night, and discovered the database was the monster killing their application. How can we leverage Python to avoid scaling problems?
Machine learning often requires us to think spatially and make choices about what it means for two instances to be close or far apart. So which is best - Euclidean? Manhattan? Cosine? It all depends! In this talk, we'll explore open source tools and visual diagnostic strategies for picking good distance metrics when doing machine learning on text.
End-to-End Machine learning pipelines for Python driven organizations - Nick ... (PyData)
The recent advances in machine learning and artificial intelligence are amazing! Yet, in order to have real value within a company, data scientists must be able to get their models off of their laptops and deployed within a company’s data pipelines and infrastructure. In this session, I'll demonstrate how one-off experiments can be transformed into scalable ML pipelines with minimal effort.
We will be using Beautiful Soup to web-scrape the IMDb website and create a function that returns a dictionary of specific metadata from the IMDb profile for any IMDb ID you pass as an argument.
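The parsing step can be sketched as follows. Note the caveats: the HTML snippet, CSS classes, and field names below are invented stand-ins (IMDb's real markup differs), and a real scraper would first fetch the page for a given IMDb ID over HTTP.

```python
from bs4 import BeautifulSoup

# Stand-in for HTML fetched from one title page; real IMDb markup differs.
HTML = """
<div class="title-page">
  <h1 class="title">Example Movie</h1>
  <span class="year">1999</span>
  <span class="rating">8.3</span>
</div>
"""

def extract_metadata(html):
    """Build a dict of metadata fields from a title page's HTML."""
    soup = BeautifulSoup(html, "html.parser")
    return {
        "title": soup.select_one("h1.title").get_text(strip=True),
        "year": int(soup.select_one("span.year").get_text(strip=True)),
        "rating": float(soup.select_one("span.rating").get_text(strip=True)),
    }

meta = extract_metadata(HTML)
print(meta)
```

Keeping the fetch and the parse in separate functions makes the parser testable against saved HTML, without hitting the live site.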
1D Convolutional Neural Networks for Time Series Modeling - Nathan Janos, Jef... (PyData)
This talk describes an experimental approach to time series modeling using 1D convolution filter layers in a neural network architecture. This approach was developed at System1 for forecasting marketplace value of online advertising categories.
Extending Pandas with Custom Types - Will Ayd (PyData)
Pandas v.0.23 brought to life a new extension interface through which you can extend NumPy's type system. This talk will explain what that means in more detail and provide practical examples of how the new interface can be leveraged to drastically improve your reporting.
Machine learning models are increasingly used to make decisions that affect people’s lives. With this power comes a responsibility to ensure that model predictions are fair. In this talk I’ll introduce several common model fairness metrics, discuss their tradeoffs, and finally demonstrate their use with a case study analyzing anonymized data from one of Civis Analytics’s client engagements.
What's the Science in Data Science? - Skipper Seabold (PyData)
The gold standard for validating any scientific assumption is to run an experiment. Data science isn’t any different. Unfortunately, it’s not always possible to design the perfect experiment. In this talk, we’ll take a realistic look at measurement using tools from the social sciences to conduct quasi-experiments with observational data.
Applying Statistical Modeling and Machine Learning to Perform Time-Series For... (PyData)
Forecasting time-series data has applications in many fields, including finance, health, etc. There are potential pitfalls when applying classic statistical and machine learning methods to time-series problems. This talk will give folks the basic toolbox to analyze time-series data and perform forecasting using statistical and machine learning models, as well as interpret and convey the outputs.
Solving very simple substitution ciphers algorithmically - Stephen Enright-Ward (PyData)
A historical text may now be unreadable, because its language is unknown, or its script forgotten (or both), or because it was deliberately enciphered. Deciphering needs two steps: Identify the language, then map the unknown script to a familiar one. I’ll present an algorithm to solve a cartoon version of this problem, where the language is known, and the cipher is alphabet rearrangement.
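A toy illustration of the alphabet-rearrangement setting described above (the key, seed, and text are invented; the talk's actual solving algorithm, which must recover the key without knowing it, is not reproduced here):

```python
import random
import string

def make_cipher(seed=0):
    """Random alphabet rearrangement: a bijective letter-to-letter key."""
    letters = list(string.ascii_lowercase)
    shuffled = letters[:]
    random.Random(seed).shuffle(shuffled)
    return str.maketrans(dict(zip(letters, shuffled)))

def invert(table):
    """Invert a maketrans table so decryption undoes encryption."""
    return {ord(v): chr(k) for k, v in table.items()}

key = make_cipher()
plaintext = "a historical text may now be unreadable"
ciphertext = plaintext.translate(key)
recovered = ciphertext.translate(invert(key))

print(ciphertext)
print(recovered)  # matches the original plaintext by construction
```

The hard part the talk addresses is inverting the table when only the ciphertext is known, typically by scoring candidate keys against the letter statistics of the identified language.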
The Face of Nanomaterials: Insightful Classification Using Deep Learning - An... (PyData)
Artificial intelligence is emerging as a new paradigm in materials science. This talk describes how physical intuition and (insightful) machine learning can solve the complicated task of structure recognition in materials at the nanoscale.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Communications Mining Series - Zero to Hero - Session 1 (DianaGray10)
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
State of ICS and IoT Cyber Threat Landscape Report 2024 preview (Prayukth K V)
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
A tale of scale & speed: How the US Navy is enabling software delivery from l... (sonjaschweigert1)
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... (SOFTTECHHUB)
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
4. Visualization is as old as art,
but it is just getting started
“I am here”
Hand cloud, invented 35,000 B.C.
http://en.wikipedia.org/wiki/Cueva_de_las_Manos
http://kristybuddeneducation.com/2012/07/ and http://www.tagxedo.com/app.html
5. Visualization is as old as art,
but it is just getting started
“I am here”
Hand cloud, invented 35,000 B.C.
“I blog here”
Word cloud, invented 1992 A.D.
http://en.wikipedia.org/wiki/Cueva_de_las_Manos
http://kristybuddeneducation.com/2012/07/ and http://www.tagxedo.com/app.html
6. Seeing the pattern in the data can
change how we view our world
Modern epidemiology started with plotting dots on a map.
http://en.wikipedia.org/wiki/1854_Broad_Street_cholera_outbreak
7. Seeing the pattern in the data can
change how we view our world
Modern epidemiology started with plotting dots on a map.
In 1854, a cholera outbreak in Soho killed over 600 people.
John Snow plotted the locations of the deaths to show that
they were clustered around the neighborhood water pump.
http://en.wikipedia.org/wiki/1854_Broad_Street_cholera_outbreak
8. Communicating data effectively can
change what we do with our world
The hockey stick graph was arguably the most controversial
chart in science: our future depends on how we read it.
http://www.ipcc.ch/pdf/climate-changes-2001/synthesis-syr/english/summary-policymakers.pdf
[IPCC TAR Figure 1: Variations of the Earth’s surface temperature over (a) the past 140 years (global; thermometer data) and (b) the past 1,000 years (Northern Hemisphere; thermometers in red, proxy data from tree rings, corals, ice cores and historical records in blue, with the 95% confidence range shaded in grey). Over both the last 140 years and the last 100 years, the best estimate is that the global average surface temperature has increased by 0.6 ± 0.2 °C. The rate and duration of 20th-century warming are much greater than in any of the previous nine centuries, and it is likely that the 1990s were the warmest decade and 1998 the warmest year of the millennium. Based upon Chapter 2, Figures 2.7c and 2.20.]
9. Communicating data effectively can
change what we do with our world
The hockey stick graph was arguably the most controversial
chart in science: our future depends on how we read it.
http://www.ipcc.ch/pdf/climate-changes-2001/synthesis-syr/english/summary-policymakers.pdf
http://www.amazon.com/dp/023115254X/ and http://www.amazon.com/dp/1906768358/
10. How can we display our data
without distorting the truth?
The hockey stick graph was arguably the most controversial
chart in science: our future depends on how we read it.
http://www.ipcc.ch/pdf/climate-changes-2001/synthesis-syr/english/summary-policymakers.pdf
11. How can we display our data
without distorting the truth?
The visualization design decisions we make affect which
interpretations of the data are facilitated or impeded.
http://www.takepart.com/an-inconvenient-truth/film
http://www.ipcc.ch/pdf/climate-changes-2001/synthesis-syr/english/summary-policymakers.pdf
12. How can we select
the aspect ratio?
• Use 1:1 … Why? It is fair and square.
• Use 3:2 … Why? It is wider than it is tall, like a landscape photo.
• Use the golden ratio…
Why? It is a most pleasing proportion found in nature and art.
• Make the average slope of all line segments 45º…
Why? It is perceptually optimal for orientation discrimination.
• Minimize arc length, keeping the area under the plot constant…
Why? It is short, sweet, and mathematically optimal.
• Take the screen size or the window size as given…
Why? It fits, so obviously this must be what the user wants.
• It depends on the situation…
Why? It depends on the story the user is meant to believe.
http://vis.berkeley.edu/papers/banking/2006-Banking-InfoVis.pdf and http://vis.stanford.edu/papers/arclength-banking
http://www.manorpress.co.uk/printing%20info.html
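The 45º banking rule above can be sketched in a few lines. A minimal NumPy sketch (the helper name `bank_to_45` is mine, not from the slides), using the median absolute slope rather than the mean orientation of the cited papers:

```python
import numpy as np

def bank_to_45(x, y):
    """Height:width aspect ratio that makes the median absolute
    slope of the plotted line segments 45 degrees (a simplified
    'banking to 45' heuristic)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    dx = np.diff(x) / (x.max() - x.min())  # run, in normalized axis units
    dy = np.diff(y) / (y.max() - y.min())  # rise, in normalized axis units
    slopes = np.abs(dy / dx)
    # Multiplying the plot height by 1/median(|slope|) makes the
    # median segment slope exactly 1, i.e. 45 degrees.
    return 1.0 / np.median(slopes)
```

For y = x the ratio is exactly 1; a slowly rising series banks to a short, wide plot.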
13. How can we select
the map projection?
Representing the Earth on a flat map must in some way
distort distances, directions, angles, shapes, and/or areas.
http://maps.google.com/
http://earthobservatory.nasa.gov/Features/10thAnniversary/top10.php
Mercator Projection: preserves angles but not areas
14. How can we select
the map projection?
Representing the Earth on a flat map must in some way
distort distances, directions, angles, shapes, and/or areas.
http://maps.google.com/
http://cdn.static-economist.com/sites/default/files/true-size-of-africa.jpg
Mercator Projection: preserves angles but not areas
15. How can we select
the map projection?
Representing the Earth on a flat map must in some way
distort distances, directions, angles, shapes, and/or areas.
• The Tissot indicatrix measures geometric distortion by showing
how circles on the globe appear as ellipses on the map.
http://wordpress.mrreid.org/2011/06/07/tissots-indicatrix/
http://www.progonos.com/furuti/MapProj/Normal/TOC/cartTOC.html
Mercator Projection: preserves angles but not areas
Robinson Projection: nearly preserves areas but not angles
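The Mercator trade-off can be made concrete in a few lines of Python. A minimal sketch on the unit sphere (the function names are mine): the forward projection preserves angles, and the sec²(latitude) factor shows how quickly areas inflate toward the poles.

```python
import math

def mercator(lat_deg, lon_deg):
    """Forward Mercator projection of a point on the unit sphere."""
    x = math.radians(lon_deg)
    y = math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y

def mercator_area_scale(lat_deg):
    """Local area inflation: sec^2(latitude). A Tissot circle at this
    latitude is stretched by sec(lat) in both directions, so its
    area grows by the square of that factor."""
    return 1.0 / math.cos(math.radians(lat_deg)) ** 2
```

At 60º the area scale is exactly 4, and around 70ºN it is roughly 8.5, which is why Greenland looks comparable to Africa on a Mercator map.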
16. How can we select
the map projection?
Representing the Earth on a flat map must in some way
distort distances, directions, angles, shapes, and/or areas.
• Cartograms distort the size and shape of regions in order to
make their area proportional to a given variable of interest.
• Computed using density diffusion or cellular automata.
http://www.worldmapper.org/
http://arxiv.org/pdf/physics/0401102v1.pdf
Land Mass: equal-area cartogram
17. How can we select
the map projection?
Representing the Earth on a flat map must in some way
distort distances, directions, angles, shapes, and/or areas.
• Cartograms distort the size and shape of regions in order to
make their area proportional to a given variable of interest.
• Computed using density diffusion or cellular automata.
http://www.worldmapper.org/
http://arxiv.org/pdf/physics/0401102v1.pdf
Population: equal-area cartogram
GDP Wealth: equal-area cartogram
18. How can we visually
represent values?
Numeric values express absolute magnitudes, but
visual perception makes relative judgments.
• Position
• Shape
• Length
• Orientation
• Area and volume
• Hue, saturation, brightness
• Texture and transparency
• Alignment and proximity
• Containment and connection
• Labels and glyphs
• Motion and flicker
http://www2.parc.com/istl/projects/uir/publications/items/UIR-1986-02-Mackinlay-TOG-Automati
http://visualfunhouse.com/altered_reality/14-perfect-shot-photography-illusions.html
19. How can we visually
represent values?
Numeric values express absolute magnitudes, but
visual perception makes relative judgments, and not very well.
Which is bigger? And by how much?
http://michaelbach.de/ot/
20. How can we visually
represent values?
Numeric values express absolute magnitudes, but
visual perception makes relative judgments, and not very well.
Which is bigger? And by how much? (+20%)
http://michaelbach.de/ot/
21. How can we visually
represent values?
Numeric values express absolute magnitudes, but
visual perception makes relative judgments, and not very well.
Equity Market Heat Map
http://www.marketwatch.com/tools/stockresearch/marketmap
22. How can we visually
represent values?
Numeric values express absolute magnitudes, but
visual perception makes relative judgments, and not very well.
Which is bigger? And by how much?
http://en.wikipedia.org/wiki/Ebbinghaus_illusion
23. How can we visually
represent values?
Numeric values express absolute magnitudes, but
visual perception makes relative judgments, and not very well.
Which is bigger? And by how much?
http://en.wikipedia.org/wiki/Ebbinghaus_illusion
24. How can we visually
represent values?
Numeric values express absolute magnitudes, but
visual perception makes relative judgments, and not very well.
Which is bigger? And by how much?
http://en.wikipedia.org/wiki/Ebbinghaus_illusion
25. How can we visually
represent values?
Numeric values express absolute magnitudes, but
visual perception makes relative judgments, and not very well.
UK Government Spending Bubble Chart
http://www.theguardian.com/news/datablog/2010/oct/18/government-spending-department-2009-10
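Bubble charts like the one above illustrate a classic area-encoding pitfall. A minimal sketch (the helper name is mine): sizing bubbles by the square root of the value keeps the *area*, which is what the eye compares, proportional to the value; mapping the value to the radius directly would square the apparent difference.

```python
import math

def bubble_radius(value, max_value, max_radius=50.0):
    """Radius (in pixels) for a bubble encoding `value`.
    Scaling radius by sqrt(value) keeps circle AREA proportional
    to the value, so twice the value reads as twice the ink."""
    return max_radius * math.sqrt(value / max_value)
```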
26. How can we visually
represent values?
Numeric values express absolute magnitudes, but
visual perception makes relative judgments, and not very well.
Which is brighter? And by how much?
http://www.handprint.com/HP/WCL/tech13.html
27. How can we visually
represent values?
Numeric values express absolute magnitudes, but
visual perception makes relative judgments, and not very well.
Which is brighter? And by how much? (+36%)
http://www.handprint.com/HP/WCL/tech13.html
28. How can we visually
represent values?
Numeric values express absolute magnitudes, but
visual perception makes relative judgments, and not very well.
US Presidential Election Map
http://bsumaps.blogspot.co.uk/2012/11/cartographic-election-resources.html
29. How can we visually
represent values?
Numeric values express absolute magnitudes, but
visual perception makes relative judgments, and not very well.
US Presidential Election Map
http://statchatva.org/2013/01/14/lower-turnout-in-2012-makes-the-case-for-political-realignment-in-2008/
30. How can we visually
represent values?
Help users by labeling data and adding trend indicators.
US Presidential Election Map
http://www.theguardian.com/news/datablog/2012/nov/07/us-elections-2012-results-map
31. How can we visually
represent values?
Help users by labeling data and adding trend indicators.
Online Ad Campaign Performance Bar Chart
http://www.FunctionalElegance.com
32. How can we visually
represent values?
Help users by labeling data and adding trend indicators.
Online Ad Campaign Performance Traffic Light Bar Chart
http://www.FunctionalElegance.com
33. How can we visually
represent values?
Use perceptually uniform color gradients for continuous values.
Avoid rainbow color maps:
• Hue order is not obvious.
• Hue changes make edges.
• Yellows make highlights.
• Detail is harder to see.
• Eyes are more sensitive to brightness than to hue.
http://blog.visual.ly/rainbow-color-scales/
http://researchweb.watson.ibm.com/people/l/lloydt/color/color.HTM
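The point about brightness sensitivity suggests a simple alternative to rainbow maps: a ramp that is uniform in CIE L* lightness. A minimal stdlib sketch (the function name is mine); equal steps in L* are approximately equal steps in perceived lightness, unlike equal steps in raw RGB:

```python
def lstar_to_srgb_gray(lstar):
    """Map CIE lightness L* (0-100) to an sRGB gray level (0-255)."""
    # inverse CIE lightness function: L* -> relative luminance Y
    y = ((lstar + 16.0) / 116.0) ** 3 if lstar > 8.0 else lstar / 903.3
    # linear luminance -> gamma-encoded sRGB component
    c = 12.92 * y if y <= 0.0031308 else 1.055 * y ** (1.0 / 2.4) - 0.055
    return round(255.0 * c)

# Perceptually uniform gray ramp: lightness rises in equal steps.
ramp = [lstar_to_srgb_gray(l) for l in range(0, 101, 10)]
```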
34. How can we visually
represent values?
Use transparency to overlay information layers.
• Normally, transparent layers are composited using linear
interpolation, an averaging operation that reduces variation.
• Blending by linear interpolation can result in reduced contrast,
dull colors, detail loss, and a lack of selective emphasis.
Satellite Map Overlay
http://lsntap.org/blogs/gis-mapping-part-i
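The averaging behavior of ordinary alpha blending can be shown in a few lines, assuming 8-bit RGB pixels and a constant alpha (the helper name is mine); the result always sits between the two inputs, which is exactly why dense overlays lose contrast:

```python
def blend_lerp(top, bottom, alpha):
    """Composite two RGB pixels by linear interpolation (constant-
    alpha 'over'). Averaging pulls both layers toward each other,
    which reduces contrast and dulls colors in dense overlays."""
    return tuple(round(alpha * t + (1.0 - alpha) * b)
                 for t, b in zip(top, bottom))

# 50% pure red over pure blue lands on a muted purple.
mixed = blend_lerp((255, 0, 0), (0, 0, 255), 0.5)
```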
35. How can we visually
represent values?
Use transparency to overlay information layers.
• Apply image blending operators that are designed to produce
composite images that preserve key visual characteristics of
their components: contrast, color, detail, and salience.
http://www.FunctionalElegance.com/Portfolio/Publications.html
36. How can we visually
represent values?
Use transparency to overlay information layers.
• Apply image blending operators that are designed to produce
composite images that preserve key visual characteristics of
their components: contrast, color, detail, and salience.
http://www.FunctionalElegance.com/Portfolio/Publications.html
37. How can we visually
represent values?
Use transparency to overlay information layers.
• Render the arc of each node connection in order of decreasing
length, using a color gradient that emphasizes short links.
Facebook Friend Connection Map
http://paulbutler.org/archives/visualizing-facebook-friends/
38. How can we visually
represent values?
Familiar visual metaphors make interpretation easier.
SnapShot News Analysis Tool
http://www.grapeshot.co.uk/snapshot/snapshot.html
http://www.FunctionalElegance.com
39. How can we visually
represent values?
Familiar visual metaphors make interpretation easier.
SnapShot News Radar
http://www.grapeshot.co.uk/snapshot/snapshot.html
http://www.FunctionalElegance.com
40. How can we visually
represent values?
Familiar visual metaphors make interpretation easier.
SnapShot News Radar
http://www.grapeshot.co.uk/snapshot/snapshot.html
http://www.FunctionalElegance.com
41. How can we visually
represent values?
Familiar visual metaphors make interpretation easier.
SnapShot News Radar
http://www.grapeshot.co.uk/snapshot/snapshot.html
http://www.FunctionalElegance.com
42. How can we visually
represent connections?
Use proximity to find connections in a cloud of points.
• Take a sphere of influence around each point, with radius
equal to its nearest-neighbor distance, and connect every
pair of points whose spheres of influence intersect.
Sphere-of-influence graphs work in Rⁿ for any Lp metric.
http://dx.doi.org/10.1016/S0166-218X(02)00246-9
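The sphere-of-influence construction described above can be sketched directly; a minimal O(n²) Python version using the Euclidean (L2) metric (the function name is mine):

```python
import math
from itertools import combinations

def sphere_of_influence_graph(points):
    """Connect two points whenever their spheres of influence
    (radius = each point's nearest-neighbor distance) intersect,
    i.e. when dist(p_i, p_j) <= r_i + r_j."""
    radius = [min(math.dist(p, q) for q in points if q is not p)
              for p in points]
    return [(i, j)
            for i, j in combinations(range(len(points)), 2)
            if math.dist(points[i], points[j]) <= radius[i] + radius[j]]

edges = sphere_of_influence_graph([(0.0, 0.0), (1.0, 0.0), (5.0, 0.0)])
```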
43. How can we visually
represent connections?
Use tree drawing to see patterns in hierarchical data.
• Coloring each node according to its data type reveals the
structure of expression trees, such as XML, JSON, and HTML.
Website Home Pages as HTML Trees
http://www.generatorx.no/20060528/websites-as-graphs/
http://www.flickr.com/photos/davezilla/155045024/
44. How can we visually
represent connections?
Use word clouds that place their terms meaningfully.
• TextArc writes the sentences of a text along a circular arc and
places each term according to its average position in the text.
“Alice in Wonderland” TextArc
http://www.textarc.org/
http://www.nytimes.com/2002/04/15/arts/15ARTS.html
45. How can we visually
represent connections?
Use word clouds that place their terms meaningfully.
• Self-organizing maps (SOMs) are neural networks that can
create knowledge maps that cluster closely associated topics.
Self-organizing Map of a Mailing List
http://dx.doi.org/10.1371/journal.pone.0058779
http://wiki.sugarlabs.org/go/Sugar_Labs/SOM
46. How can we visually
represent connections?
Use graph drawing tools to see how data is connected.
• Graph drawing algorithms, such as simulated annealing or
spring systems, make it easier to follow how data is related.
Graph Drawing of Medical Knowledge
http://cs.brown.edu/~rt/gdhandbook/
http://visualization.geblogs.com/visualization/network/
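A spring-system layout of the kind mentioned here can be sketched in pure Python. A minimal Fruchterman–Reingold-style sketch (names and constants are mine; a production layout engine would add cooling schedules and spatial indexing):

```python
import math, random
from itertools import combinations

def spring_layout(nodes, edges, iters=100, k=1.0, step=0.002):
    """Force-directed layout: all node pairs repel (f ~ k^2/d),
    edge endpoints attract (f ~ d^2/k), and positions take a
    small damped step each iteration."""
    random.seed(0)  # deterministic starting positions for the demo
    pos = {n: [random.random(), random.random()] for n in nodes}
    for _ in range(iters):
        force = {n: [0.0, 0.0] for n in nodes}
        for a, b in combinations(nodes, 2):      # pairwise repulsion
            dx = pos[a][0] - pos[b][0]
            dy = pos[a][1] - pos[b][1]
            d = math.hypot(dx, dy) or 1e-9
            f = k * k / d
            force[a][0] += f * dx / d; force[a][1] += f * dy / d
            force[b][0] -= f * dx / d; force[b][1] -= f * dy / d
        for a, b in edges:                       # spring attraction
            dx = pos[a][0] - pos[b][0]
            dy = pos[a][1] - pos[b][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k
            force[a][0] -= f * dx / d; force[a][1] -= f * dy / d
            force[b][0] += f * dx / d; force[b][1] += f * dy / d
        for n in nodes:                          # small damped update
            pos[n][0] += step * force[n][0]
            pos[n][1] += step * force[n][1]
    return pos

layout = spring_layout(["a", "b", "c"], [("a", "b"), ("b", "c")])
```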
47. How can we visually
represent connections?
Use graph drawing tools to see how data is connected.
• Hierarchical edge bundles group connections belonging to
related nodes, which can be placed radially along a circle.
Graph Drawing of Medical Knowledge
http://www.win.tue.nl/~dholten/papers/bundles_infovis.pdf
http://visualization.geblogs.com/visualization/network/
48. What is the secret to great
information visualization design?
Start with the user, not the data and not the graphic.
• What will this allow you to do that you can't do now?
• What difference can you observe in your business?
• What value do you expect this will add to your business?
• What will this let your customers do that they can't do now?
• What difference can your customers observe in their business?
• What value can your customers expect this will add?
• How does this fit in with your other strategic plans?
• How will you know that this has been a success?
• How will you build on this success?
• So what?
http://www.FunctionalElegance.com
49. What does the future hold for
information visualization?
In an information economy, there is no shortage of
information; only understanding is in short supply.
• Will interactive charts become more common as letting people
play with the data drives engagement?
• Will subtly animated infographics become more common as
graphic designers compete for attention?
• Will augmented reality glasses superimpose an information
visualization layer on our everyday lives?
• Will cheap displays make ambient visualization ubiquitous?
• Will virtual reality and gesture interfaces have an impact?
Let me know!
Mark@FunctionalElegance.com
http://www.FunctionalElegance.com
50. Online information
visualization resources
Thank you for making this presentation possible!
Visualization Galleries:
• Tree Visualizations (Hans-Jörg Schulz): http://vcg.informatik.uni-rostock.de/~hs162/treeposter/poster.html
• Time Series Visualizations (Christian Tominski & Wolfgang Aigner): http://survey.timeviz.net/
• Visual Complexity (Manuel Lima): http://www.visualcomplexity.com/vc/
• D3 JavaScript Visualization Library: https://github.com/mbostock/d3/wiki/Gallery
• WebdesignerDepot.com Examples (Cameron Chapman): http://bit.ly/1nYR89L
Visualization Courses:
• University of Utah (Miriah Meyer): http://www.sci.utah.edu/~miriah/cs6964/
• University of British Columbia (Tamara Munzner): http://www.cs.ubc.ca/~tmm/courses/533-09/
• University of California Berkeley (Michael Porath): http://blogs.ischool.berkeley.edu/i247s13/
• University of Washington (Jeffrey Heer): https://courses.cs.washington.edu/courses/cse512/14wi/
• Georgia Institute of Technology (John Stasko): http://www.cc.gatech.edu/~stasko/7450/syllabus.html
Visualization Tutorials:
• Storytelling with Data (Jonathan Corum): http://style.org/tapestry/
• Visualization Analysis and Design (Tamara Munzner): http://www.cs.ubc.ca/~tmm/courses/533-11/book/
• Principles of Information Visualization (Jessie Kennedy): http://mkweb.bcgsc.ca/vizbi/2012/
• Information Visualization for Knowledge Discovery (Ben Shneiderman): http://bit.ly/1cw3oa2
• Data Visualization Best Practices (Jen Underwood): http://www.slideshare.net/idigdata/
• Data Visualization (Jan Willem Tulp): http://www.slideshare.net/janwillemtulp/
• Information Visualization Primer (Xavier Tricoche): https://www.cs.purdue.edu/homes/cs530/new/slides/Infovis.pdf
• Visual Techniques for Exploring Databases (Daniel A. Keim): http://www.dbs.informatik.uni-muenchen.de/~daniel/KDD97.pdf
Visualization Sites:
• Perceptual Edge (Stephen Few): http://www.perceptualedge.com/
• Envisioning Information (Edward Tufte): http://www.edwardtufte.com/tufte/
• Information is Beautiful (David McCandless): http://www.informationisbeautiful.net/
• Information Aesthetics (Andrew Vande Moere): http://infosthetics.com/
• Flowing Data (Nathan Yau): http://flowingdata.com/learning/