The document discusses data visualization and effective communication of quantitative information through visuals. It emphasizes that visualization is important but often done poorly, costing time and money. The key is understanding how humans perceive and think about visual information to design displays that match this process and avoid weaknesses. The document provides principles for effective visualization including showing only relevant information, maximizing "data ink" ratio, and not including visual differences that don't reflect real differences in the data.
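The "data-ink ratio" principle mentioned above is easy to make concrete. A minimal sketch follows; the ink quantities are hypothetical numbers invented for illustration, not taken from the document:

```python
# Tufte's data-ink ratio: the share of a graphic's ink that actually encodes data.
# Erasing non-data ink (heavy gridlines, backgrounds, 3D effects) raises the ratio.

def data_ink_ratio(data_ink: float, non_data_ink: float) -> float:
    """Return data-ink divided by total ink for a chart."""
    return data_ink / (data_ink + non_data_ink)

# Hypothetical ink budgets for the same data plotted two ways.
cluttered = data_ink_ratio(data_ink=20, non_data_ink=80)  # chartjunk-heavy design
cleaned = data_ink_ratio(data_ink=20, non_data_ink=5)     # non-data ink erased

print(f"cluttered: {cluttered:.2f}, cleaned: {cleaned:.2f}")
# prints "cluttered: 0.20, cleaned: 0.80"
```

The principle carries over to any plotting library: start from the data marks and add back only the ink the reader needs.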
Gave a talk at StartCon about the future of Growth. I touch on viral marketing / referral marketing, fake news and social media, and marketplaces. Finally, the slides go through future technology platforms and how things might evolve there.
The Six Highest Performing B2B Blog Post Formats - Barry Feldman
If your B2B blogging goals include earning social media shares and backlinks to boost your search rankings, this infographic lists the six best approaches.
32 Ways a Digital Marketing Consultant Can Help Grow Your Business - Barry Feldman
How can a digital marketing consultant help your business? In this resource we'll count the ways. 24 additional marketing resources are bundled for free.
A comparison to a previous-generation workstation and a current-generation competitor
Saving time at any point in the machine learning process can prove invaluable. Our medical 3D imaging, NLP, and computer vision tests show that the HP Z8 Fury G5 classified more samples in less time than the other workstations we tested. With these time savings, scientists, analysts, and engineers could act on valuable insights sooner, integrating them into technologies, diagnoses, or business strategies.
Just Because You Can Doesn't Mean You Should - Graphing Data Well - figarsd
The objective of this presentation is to provide the audience with the principles gleaned from such giants as Cleveland and Tufte within the context of JMP in an effort to combat graphic entropy. Presented at Discovery Summit 2010.
PMICOS Webinar: Building a Sound Schedule in an Enterprise Environment - Acumen
Dr. Dan Patterson presented a one-hour webinar on effective scheduling using metrics analysis. He reviewed some of the common problems found in schedules and the research that backs the claim that, in the end, the schedule drives project success.
Spreadmart To Data Mart BISIG Presentation - Dan English
Presentation at the North Central BI Special Interest Group (BISIG) going over a case study of converting an Excel spreadmart solution to an SSAS data mart solution.
Stratigent and Klipfolio have been partners for years and are taking this opportunity to educate the industry via a joint webinar focused on helping organizations maximize the value of their data silos and bring best-in-class visualizations to life in an automated way.
Data Driven Practice in e-MDs. This covers building custom Crystal Reports from scratch, slicing and dicing data in Excel, visualizing data, and understanding that change isn't really a technical problem.
Slides from my presentation at the Data Intelligence conference in Washington DC (6/23/2017). See this link for the abstract: http://www.data-intelligence.ai/presentations/36
State-of-the-Art Machine Learning Algorithms and How They Are Affected by Near-Term Technology Trends - inside-BigData.com
In this deck from the HPC Knowledge Portal 2017 Conference, Rob Farber from TechEnablement presents: State-of-the-Art Machine Learning Algorithms and How They Are Affected by Near-Term Technology Trends.
"Industry and Wall Street projections indicate that Machine Learning will touch every piece of data in the data center by 2020. This has created a technology arms race and algorithmic competition as IBM, NVIDIA, Intel, and ARM strive to dominate the retooling of the computer industry to support ubiquitous machine learning workloads over the next 3-4 years. Similarly, algorithm designers compete to create faster and more accurate training and inference techniques that can address complex problems spanning speech, image recognition, image tagging, self-driving cars, data analytics and more. The challenges for researchers and technology providers encompass big data, massive parallelism, distributed processing, and real-time processing. Deep-learning and low-precision inference (based on INT8 and FP16 arithmetic) are current hot topics."
Watch the video: https://wp.me/p3RLHQ-i2K
Learn more: http://www.hpckp.org/index.php/conference/2017
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
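The low-precision inference mentioned in the abstract above (INT8) reduces to mapping floating-point values onto 8-bit integers via a scale factor. Below is a library-free sketch of symmetric INT8 quantization; the weight values and single-scale scheme are my own illustrative assumptions, not taken from the deck:

```python
# Symmetric INT8 quantization: map floats in [-max, max] to integers in [-127, 127].

def quantize_int8(values):
    """Quantize a list of floats to INT8 with a single symmetric scale factor."""
    scale = max(abs(v) for v in values) / 127.0
    return [round(v / scale) for v in values], scale

def dequantize(qvalues, scale):
    """Map INT8 values back to approximate floats."""
    return [q * scale for q in qvalues]

weights = [0.91, -0.42, 0.05, -1.27]   # hypothetical model weights
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Rounding error is bounded by half a quantization step (scale / 2).
max_err = max(abs(w - r) for w, r in zip(weights, restored))
assert max_err <= scale / 2 + 1e-12
print(q)  # integers in [-127, 127]
```

Real frameworks refine this with per-channel scales and calibration, but the memory and bandwidth savings that make INT8 attractive come from exactly this 4x reduction versus FP32.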
Nagios Conference 2013 - Andy Brist - Data Visualizations and Nagios XI - Nagios
Andy Brist's presentation on Data Visualizations and Nagios XI.
The presentation was given during the Nagios World Conference North America held Sept 20-Oct 2, 2013 in Saint Paul, MN. For more information on the conference (including photos and videos), visit: http://go.nagios.com/nwcna
PyData 2015 Keynote: "A Systems View of Machine Learning" - Joshua Bloom
Despite the growing abundance of powerful tools, building and deploying machine-learning frameworks into production continues to be a major challenge, in both science and industry. I'll present some particular pain points and cautions for practitioners as well as recent work addressing some of the nagging issues. I advocate for a systems view, which, when expanded beyond the algorithms and codes to the organizational ecosystem, places some interesting constraints on the teams tasked with development and stewardship of ML products.
About: Dr. Joshua Bloom is an astronomy professor at the University of California, Berkeley where he teaches high-energy astrophysics and Python for data scientists. He has published over 250 refereed articles largely on time-domain transient events and telescope/insight automation. His book on gamma-ray bursts, a technical introduction for physical scientists, was published recently by Princeton University Press. He is also co-founder and CTO of wise.io, a startup based in Berkeley. Josh has been awarded the Pierce Prize from the American Astronomical Society; he is also a former Sloan Fellow, Junior Fellow of the Harvard Society of Fellows, and Hertz Foundation Fellow. He holds a PhD from Caltech and degrees from Harvard and Cambridge University.
This month, we will dive into the world of data analysis and visualization. As data continues to proliferate in our lives and work, the question of how to make sense of it and turn it into information and knowledge becomes more and more challenging. At the same time, powerful tools are becoming available to help analysts sift through data and present it in a way that draws attention to key bits of knowledge that can be derived. As such, the skills related to using these tools effectively have become highly sought-after as organizations seek to dig out the treasures hidden in their data troves.
Presentation by Stephen Lett (Procter & Gamble)
My presentation at The Richmond Data Science Community (Jan 2018). The slides are slightly different than what I had presented last year at The Data Intelligence Conference.
S&OP Volume Mix: FORECAST LESS AND GET BETTER RESULTS - Steelwedge
This paper challenges conventional wisdom in the area of forecasting and planning: that companies need to project forecasts and plans far into the future at a detailed, highly granular level.
Our position is that, in most cases, this is not necessary. Rather, detailed forecasts and plans are normally needed only inside of what’s called the Planning Time Fence. This is a point in the future: the cumulative lead time to acquire material and to build the product, plus a short time allowance for planning and order releasing. Inside of the Planning Time Fence, high granularity is needed. But, most often, not beyond it.
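The Planning Time Fence described above is simple date arithmetic: the cumulative lead time to acquire material and build the product, plus a short planning and order-release allowance. A sketch with hypothetical durations (the paper defines the concept; these specific numbers are invented for illustration):

```python
from datetime import date, timedelta

def planning_time_fence(start: date,
                        material_lead_days: int,
                        build_lead_days: int,
                        planning_allowance_days: int) -> date:
    """Return the Planning Time Fence date: cumulative lead time to acquire
    material and build the product, plus a planning/order-release allowance."""
    total_days = material_lead_days + build_lead_days + planning_allowance_days
    return start + timedelta(days=total_days)

# Hypothetical example: 45 days material lead, 15 days build, 5 days allowance.
fence = planning_time_fence(date(2024, 1, 1), 45, 15, 5)
print(fence)  # 2024-03-06: detailed, granular plans are needed only up to here
```

Beyond the fence date, the paper argues, aggregate volume forecasts suffice.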
To analyse why operationalising AI is so challenging, it’s important to understand the full lifecycle of an AI project and identify the stakeholders involved.
Through 2023, Gartner estimates that 50% of IT leaders will struggle to move their AI projects past proof of concept (POC) to a production level of maturity.
To reduce this high failure rate, organisations need to build the right roles for AI success. In many organisations, data scientists are still wearing too many hats due to a dearth of talent across other roles.
This session will highlight how, in order to successfully operationalise and scale AI POCs, organisations must build diverse AI roles and skills within a collaborative structure.
DataOps - Big Data and AI World London - March 2020 - Harvinder Atwal
Title
DataOps, the secret weapon for delivering AI, data science, and business intelligence value at speed.
Synopsis
● According to recent research, just 7.3% of organisations say the state of their data and analytics is excellent, and only 22% of companies are currently seeing a significant return from data science expenditure.
● Poor returns on data & analytics investment are often the result of applying 20th-century thinking to 21st-century challenges and opportunities.
● Modern data science and analytics require secure, efficient processes to turn raw data from multiple sources and in numerous formats into useful inputs to a data product.
● Developing, orchestrating and iterating modern data pipelines is an extremely complex process requiring multiple technologies and skills.
● Other domains have successfully overcome the challenge of delivering high-quality products at speed in complex environments. DataOps applies proven agile principles, lean thinking and DevOps practices to the development of data products.
● A DataOps approach aligns data producers, analytical data consumers, processes and technology with the rest of the organisation and its goals.
Productionising Machine Learning to automate the enterprise. Conference research question: How can you pinpoint which core business processes to transform with increased automation and streamline daily workflows to boost in-house efficiencies?
DataOps: Nine steps to transform your data science impact - Strata London May 18 - Harvinder Atwal
According to Forrester Research, only 22% of companies are currently seeing a significant return from data science expenditures. Most data science implementations are high-cost IT projects, local applications that are not built to scale for production workflows, or laptop decision support projects that never impact customers. Despite this high failure rate, we keep hearing the same mantra and solutions over and over again. Everybody talks about how to create models, but not many people talk about getting them into production where they can impact customers.
Harvinder Atwal offers an entertaining and practical introduction to DataOps, a new and independent approach to delivering data science value at scale, used at companies like Facebook, Uber, LinkedIn, Twitter, and eBay. The key to adding value through DataOps is to adapt and borrow principles from Agile, Lean, and DevOps. However, DataOps is not just about shipping working machine learning models; it starts with better alignment of data science with the rest of the organization and its goals. Harvinder shares experience-based solutions for increasing your velocity of value creation, including Agile prioritization and collaboration, new operational processes for an end-to-end data lifecycle, developer principles for data scientists, cloud solution architectures to reduce data friction, self-service tools giving data scientists freedom from bottlenecks, and more. The DataOps methodology will enable you to eliminate daily barriers, putting your data scientists in control of delivering ever-faster cutting-edge innovation for your organization and customers.
Machine learning - What they don't teach you on Coursera - ODSC London 2016 - Harvinder Atwal
I’ll show some examples of live models at MoneySuperMarket. However, the main theme will be that there is far more to successful implementation of machine learning than just creating good algorithms. There needs to be just as much effort, if not more, put into selling the benefits to the business, working with developers and engineers to put the model into production, building testing into the process, and ongoing maintenance of the solution.
Case Study Interactive: How To Work With Structured And Unstructured Data To Increase Customer Acquisition And Reduce Churn With Relevant Communication
How can analytics improve your attribution model accuracy to highlight and transform your most successful marketing channels?
How can you introduce predictive analytics to increase your customer segmentation competency?
How can insights from consumer data help you to predict customer lifetime value and focus on your top customers?
How can split testing consumer data help to improve your customer offering and boost retention rates?
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdf - Peter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
Pushing the limits of ePRTC: 100ns holdover for 100 days - Adtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Generative AI Deep Dive: Advancing from Proof of Concept to Production - Aggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
The Metaverse and AI: how can decision-makers harness the Metaverse for their... - Jen Stirrup
The Metaverse is popularized in science fiction, and now it is becoming closer to being a part of our daily lives through the use of social media and shopping companies. How can businesses survive in a world where Artificial Intelligence is becoming the present as well as the future of technology, and how does the Metaverse fit into business strategy when futurist ideas are developing into reality at accelerated rates? How do we do this when our data isn't up to scratch? How can we move towards success with our data so we are set up for the Metaverse when it arrives?
How can you help your company evolve, adapt, and succeed using Artificial Intelligence and the Metaverse to stay ahead of the competition? What are the potential issues, complications, and benefits that these technologies could bring to us and our organizations? In this session, Jen Stirrup will explain how to start thinking about these technologies as an organisation.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf - 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Securing your Kubernetes cluster: a step-by-step guide to success! - KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... - UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Slides from Rik Marselis and me at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We closed with a lovely workshop in which participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
A tale of scale & speed: How the US Navy is enabling software delivery from l... - sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
UiPath Test Automation using UiPath Test Suite series, part 4 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Welcome to the first live UiPath Community Day Dubai! Join us for this unique occasion to meet our local and global UiPath Community and leaders. You will get a full view of the MEA region's automation landscape and the AI Powered automation technology capabilities of UiPath. Also, hosted by our local partners Marc Ellis, you will enjoy a half-day packed with industry insights and automation peers networking.
📕 Curious on our agenda? Wait no more!
10:00 Welcome note - UiPath Community in Dubai
Lovely Sinha, UiPath Community Chapter Leader, UiPath MVPx3, Hyper-automation Consultant, First Abu Dhabi Bank
10:20 A UiPath cross-region MEA overview
Ashraf El Zarka, VP and Managing Director MEA, UiPath
10:35: Customer Success Journey
Deepthi Deepak, Head of Intelligent Automation CoE, First Abu Dhabi Bank
11:15 The UiPath approach to GenAI with our three principles: improve accuracy, supercharge productivity, and automate more
Boris Krumrey, Global VP, Automation Innovation, UiPath
12:15 To discover how Marc Ellis leverages tech-driven solutions in recruitment and managed services.
Brendan Lingam, Director of Sales and Business Development, Marc Ellis
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Enhancing Performance with Globus and the Science DMZGlobus
ESnet has led the way in helping national facilities—and many other institutions in the research community—configure Science DMZs and troubleshoot network issues to maximize data transfer performance. In this talk we will present a summary of approaches and tips for getting the most out of your network infrastructure using Globus Connect Server.
2. Agenda
Warm-Up 5 mins
Data Visualisation: Why it matters 5 mins
The Rules 10 mins
Seven Common Quantitative Relationships 5 mins
The best means to encode quantitative data in charts 5 mins
Step by Step Guide 10 mins
Test 5 mins
2
4. Even the best information is useless if its story is poorly told
Most presentations of quantitative business data are poorly designed –
painfully so, often to the point of misinformation.
Anyone can start drawing charts in Excel and use PowerPoint, but
hardly anyone is trained to do so effectively.
The effective display of quantitative information involves two
fundamental challenges:
1. Selecting the right medium of display (for example, a table or a
graph, and the appropriate kind of either)
2. Designing the individual visual components of the selected medium
to display the information and its message as clearly as possible
4
5. Bad Data Visualisation can have tragic consequences
In January 1986 NASA had to decide whether to launch the Challenger
shuttle in a "100-year cold".
Morton Thiokol engineers produced a chart and recommended that
shuttles not be flown below 53°F because of potential damage to the
O-rings in the booster rockets.
Morton Thiokol managers accepted the recommendation and passed it on
to NASA.
NASA asked for the recommendation to be reconsidered.
Morton Thiokol managers agreed to the flight.
5
6. The engineers at Morton Thiokol came up with this chart
Looking at the O-ring damage over the previous 24 shuttle missions, the data was
presented in chronological order, showing the location and extent of the damage
sustained to the left and right boosters and the temperature at launch time.
6
7. The Morton Thiokol engineers failed to convince their
management and NASA with fatal consequences
7
8. Would this chart have been more convincing?
If instead we remove all the extraneous data and do a simple plot of
temperature vs damage, then the pattern becomes much clearer:
ALWAYS damage below 66°F; never damage above 76°F.
8
9. WTF!? How many hours of valuable management time have been
wasted trying to understand a badly drawn chart?
How many £billions have been wasted on
incorrect decisions because someone
has misinterpreted a chart message?
9
10. To communicate effectively visually you need to understand
visual perception and cognition.
Present your message in a way that takes advantage of the
strengths of visual perception while avoiding its
weaknesses - matching the human thought process.
You can develop a simple set of skills (graphicacy) based on
this knowledge.
This is mostly NOT art; it is based on clear-cut principles about
what works and what doesn't.
10
11. Agenda
Warm-Up 5 mins
Data Visualisation: Why it matters 5 mins
The Rules 10 mins
Seven Common Quantitative Relationships 5 mins
The best means to encode quantitative data in charts 5 mins
Step by Step Guide 10 mins
Test 5 mins
11
12.
Research Finding: Communication is most
effective when you say neither more nor
less than what is relevant to your message.
Principle #1: Display neither more nor less
than what is relevant to your message.
13. Tufte’s data-ink ratio is the single most important
concept in data visualisation
Data-ink ratio =
data-ink / total ink used to print the graphic
= proportion of a graphic’s ink devoted to the
non-redundant display of data-information
= 1.0 − proportion of a graphic that can be erased
without loss of data-information.
(The Visual Display of Quantitative Information, Edward R.
Tufte, Graphics Press, Cheshire CT, 1983, p.93)
13
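Tufte's definition is a simple arithmetic identity, which a short sketch can make concrete. The ink quantities below are invented for illustration (think of them as pixel counts per chart element); they are not from Tufte.

```python
# Toy illustration of Tufte's data-ink ratio. The "ink" quantities are
# hypothetical counts (e.g. pixels devoted to each element), not real data.

def data_ink_ratio(data_ink: float, total_ink: float) -> float:
    """Proportion of a graphic's ink devoted to the non-redundant display of data."""
    return data_ink / total_ink

# Suppose a chart spends 300 units of ink on the data itself and
# 700 on background, borders, 3-D effects, and other decoration.
ratio = data_ink_ratio(data_ink=300, total_ink=1000)
erasable = 1 - ratio  # proportion that could be erased without losing data

print(ratio)     # 0.3
print(erasable)  # 0.7
```

A ratio of 0.3 means 70% of the graphic could be erased without losing any information, which is exactly the junk the next slides target.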
14. Eliminate all redundant visual information!
You wouldn’t write a document like this using multiple
fonts, gratuitous formatting, redundant
excessive highlighting, variable colours, difficult to
read italics, pointless underlining, desperate shadows in
multiple sizes.
● Yet every day you see the graphical equivalent as people
try to make their charts "interesting" instead of useful!
14
15. How many items of redundant visual information can you
see in this chart?
[Chart: "Sales and Appointments by Region", showing appointment and
sales volumes for Wales and West, London and South East, Scotland and
North, and Midlands. The annotated junk includes: grey background,
underlining, border, legend key, vertical lines, excessive tick
marks, 3-D effect, data labels, border on legend, border on bars,
floor, and highlighting for no reason.]
15
16. Less is more; the same chart de-junked…
[Chart: the same "Sales and Appointments by Region" chart with all of
the redundant visual elements removed – plain axis labels, a simple
legend, and nothing but the data itself.]
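The same de-junking can be done programmatically. A minimal matplotlib sketch (the library and the pairing of values with regions are my assumptions; the numbers are taken from the labels visible on the original slide):

```python
# De-junking a bar chart in matplotlib: remove borders, gridlines,
# and the legend frame, leaving mostly data ink.
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

regions = ["Wales and West", "London and South East",
           "Scotland and North", "Midlands"]
appointments = [183, 150, 112, 100]  # pairing with regions assumed
sales = [97, 91, 85, 75]

fig, ax = plt.subplots()
x = range(len(regions))
ax.bar([i - 0.2 for i in x], appointments, width=0.4, label="Appointments")
ax.bar([i + 0.2 for i in x], sales, width=0.4, label="Sales")
ax.set_xticks(list(x))
ax.set_xticklabels(regions, fontsize=8)

# De-junking: no top/right border, no gridlines, no legend frame.
for side in ("top", "right"):
    ax.spines[side].set_visible(False)
ax.grid(False)
ax.legend(frameon=False)
ax.set_title("Sales and Appointments by Region")
```

Every removal here deletes ink that encoded no data, raising the data-ink ratio without losing a single value.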
17. Research Finding: People perceive visual
differences in an information display as
differences in meaning.
Principle #2: Do not include visual
differences in a graph that do not correspond
to actual differences in the data.
17
18. What is the meaning of the different colours that appear on the
bars? The answer is “nothing.”
Don’t confuse people and waste their time by including visual
differences that are meaningless.
18
19. Research Finding: The visual properties that work
best for representing quantitative values are the
length or 2-D location of objects.
Principle #3: Use the lengths or 2-D locations of
objects to encode quantitative values in graphs
unless they have already been used for other
variables.
19
26. Bar B is actually only 10% bigger than A, not 100%
[Chart: two bars, A and B, plotted against a quantitative scale that
runs from 470 to 560 rather than starting at zero, which visually
exaggerates the difference between them.]
26
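The distortion is simple arithmetic. A sketch with illustrative values (A = 500 and B = 550 are my assumption, since the slide doesn't label the bars; the 470 baseline matches its axis):

```python
# How a non-zero baseline exaggerates bar differences.
# A and B are illustrative values; 470 is the slide's axis minimum.

def actual_increase(a: float, b: float) -> float:
    """True proportional difference between the two values."""
    return (b - a) / a

def apparent_increase(a: float, b: float, baseline: float) -> float:
    """Proportional difference between the drawn bar lengths."""
    return ((b - baseline) - (a - baseline)) / (a - baseline)

a, b, baseline = 500, 550, 470
print(round(actual_increase(a, b), 3))              # 0.1   -> B is 10% bigger
print(round(apparent_increase(a, b, baseline), 3))  # 1.667 -> B looks ~167% bigger
```

With a zero baseline the two quantities coincide, which is exactly why bar scales must include zero.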
27. Research Finding: People perceive differences in the
lengths or 2-D locations of objects fairly accurately
and interpret them as differences in the actual values
that they represent.
Principle #4: Differences in the visual properties that
represent values (that is, differences in their lengths
or 2-D locations) should accurately correspond to
the actual differences in the values they represent.
27
28. Research Finding: People perceive things that
appear connected as wholes and things that appear
disconnected as discrete.
Principle #5: Do not visually connect values that are
discrete, thereby suggesting a relationship that does
not exist in the data.
28
29. The regions are discrete, so values that measure something going on in
these regions should be displayed as discrete.
Connecting discrete items with a line is misleading. Doing so forms a
pattern of upwards and downwards slopes that is utterly meaningless.
29
30. Research Finding: People pay most attention to and
consider most important those parts of a visual
display that are most salient.
Principle #6: Make the information that is most
important to your message more visually salient in a
graph than information that is less important.
30
31. Some information is more important to your
message than others
You can communicate this
fact in a graph by making
those items that are most
important more visually
dominant (salient).
It is your job to direct
people’s eyes to the most
important parts of the
display, so they
adequately focus on them.
31
32. Research Finding: Short-term memory is limited to
about four chunks of information at a time.
Principle #7: Augment people’s short-term memory
by combining multiple facts into a single visual
pattern that can be stored as a chunk of memory and
by presenting all the information they need to
compare within eye span.
32
33. By presenting quantitative information visually as
patterns, more information can be simultaneously stored in
short-term memory.
Each of the two lines in
this line graph combines
12 different sales
figures, one per
month, into a single
pattern of upward and
downward sloping line
segments.
When encoded in a visual
pattern such as this, these
12 numbers can be stored
together as a single chunk
of information in short-
term memory
33
34. Agenda
Warm-Up 5 mins
Data Visualisation: Why it matters 5 mins
The Rules 10 mins
Seven Common Quantitative Relationships 5 mins
The best means to encode quantitative data in charts 5 mins
Step by Step Guide 10 mins
Test 5 mins
34
35. Seven common quantitative relationships in graphs and
how to display them
Meaningful quantitative information always
involves relationships. With rare exceptions in
business graphs, these relationships always boil
down to one or more of the seven relationships
described on the following slides.
35
36. Time Series
Expresses the rise and fall of
values through time.
– Use lines to emphasize
overall pattern.
– Use bars to emphasize
individual values.
– Use points connected by
lines to slightly emphasize
individual values while still
highlighting the overall
pattern.
– Always place time on the
horizontal axis.
36
37. Ranking
Expresses values in order by
size.
Use bars only (horizontal or
vertical).
– To highlight high
values, sort in descending
order.
– To highlight low values, sort
in ascending order.
37
38. Part-to-Whole
Expresses the portion of each
part relative to the whole.
– Use bars only (horizontal
or vertical).
– Use stacked bars only
when you must display
measures of the whole
38
39. Deviation
Expresses how and the degree to
which one or more things differ from
another.
– Use lines to emphasize the overall
pattern only when displaying
deviation and timeseries
relationships together.
– Use points connected by lines to
slightly emphasize individual data
points while also highlighting the
overall pattern when displaying
deviation and time-series
relationships together.
– Use bars to emphasize individual
values, but limit to vertical bars
when a time series relationship is
included.
– Always include a reference line to
compare the measures of deviation
against.
39
40. Distribution
Expresses a range of values as well as the
shape of the distribution across that range.
Single distribution:
– Use vertical bars to emphasize individual
values
– Use lines to emphasize the overall shape.
Multiples distributions:
– Use vertical or horizontal bars (a.k.a.
range bars or boxes) to encode the full
range from the low value to the high
value, or some meaningful portion of the
range (for example, 90% of the values).
– Use points or lines together to encode
measures of centre (for example, the
median).
40
41. Correlation
Expresses how two paired
sets of values vary in relation
to one another.
– Use points and a trend
line in the form of a
scatter plot.
41
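The trend line in a scatter plot is an ordinary least-squares fit, which can be computed directly. A minimal pure-Python sketch with invented paired data (the data and function names are mine, not from the deck):

```python
# Least-squares trend line for a scatter plot: fit y = slope * x + intercept.

def trend_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical paired observations lying exactly on y = 2x + 1.
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]
slope, intercept = trend_line(xs, ys)
print(slope, intercept)  # 2.0 1.0
```

Plotting the points plus this fitted line is the scatter-plot-with-trend-line form the slide recommends for correlation.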
42. Nominal Comparison
Simply expresses the
comparative sizes of multiple
related but discrete values in no
particular order.
– Use bars only (horizontal or
vertical).
42
43. Agenda
Warm-Up 5 mins
Data Visualisation: Why it matters 5 mins
The Rules 10 mins
Seven Common Quantitative Relationships 5 mins
The best means to encode quantitative data in charts 5 mins
Step by Step Guide 10 mins
Test 5 mins
43
44. Four types of objects work best for encoding quantitative
values in graphs: points, lines, bars, and boxes.
[Illustration: examples of points, lines, bars, and boxes.]
44
45. Points and Lines
Points are the smallest of the objects that are used to encode values in graphs. They can take
the shape of dots, squares, triangles, Xs, dashes, and other simple objects. They have two
primary strengths:
(1) they can be used to encode quantitative values along two quantitative
scales simultaneously, as in a scatter plot, and
(2) they can be used in place of bars when the quantitative scale does not
begin at zero. Unlike lines, points emphasize individual values, rather
than the shape of those values as they move up and down.
Lines connect the individual values in a series, emphasizing the shape of the data
as it moves from value to value. As such, they are superb for showing the shape
of data as it moves and changes through time. Trends, patterns, and exceptions
stand out clearly.
You should only use lines to encode data along an interval scale.
45
46. Do not use lines for Nominal or Ordinal scales!
[Charts: two line graphs labelled "Wrong" – one plots sales across a
nominal scale of regions (Wales and West, London and South East,
Scotland and North), the other across product tiers (Extra-Value,
Standard, Branded, Finest).]
In nominal and ordinal scales, the individual items are not related closely enough to be linked with lines, so
you should use bars or points instead. Lines suggest change from one item to the next, but change isn’t
happening if the items aren’t closely related as sequential subdivisions of a continuous range of values. For
instance, it is appropriate to use lines to display change from one day to the next or from one price range to
the next, but not from one community bank to the next.
46
47. Use lines only for Interval scales
With interval scales, you are not forced in all cases to use lines;
you can use bars and points as well. If you want to emphasize the
overall shape of the data or changes from one item to the next, lines
work best.
If, however, you want to emphasize individual items, such as
individual months, or to support discrete comparisons of multiple
values at the same location along the interval scale, such as
revenues and expenses for individual months, then bars or points work
best.
[Charts: a line graph of sales across quarters Q1 to Q4, labelled
"Right", and the equivalent bar graph.]
47
48. Bars encode data in a way that emphasizes individual
values powerfully
This ability is due in part to the fact that bars encode quantitative values in two ways:
(1) the 2-D position of the bar’s endpoint in relation to the quantitative scale, and
(2) the length of the bar.
You probably recognize that these two characteristics correspond precisely to the two visual attributes that can be used
to encode data in graphs. When you want to draw focus to individual values or to support the comparison of individual
values to one another (see figure 19), bars are an ideal choice. They don’t, however, do as well as lines in revealing the
overall shape of the data. Bars may be oriented vertically or horizontally.
[Chart: vertical bars comparing Budget and Actual for Rewards and
Exchange, on a scale from 0 to 100.]
48
49. Whenever you use bars, your quantitative scale must
include zero
The lengths of the bars encode their values, but won't do so
accurately if those values don't begin at zero. Notice what happens
when you narrow the quantitative scale and use bars below: actual
sales appear to be half of planned sales, but in fact they are 90% of
the plan.
When you would normally use bars, but wish to narrow the quantitative
scale to show differences between the values in greater detail, you
should switch from bars to points, because points encode values
merely as 2-D location in relation to the quantitative scale, which
eliminates the need to begin the scale at zero.
[Charts: a Budget vs Actual bar graph with a scale narrowed to run
from 70 to 100, and the A/B example re-plotted with points on a scale
from 470 to 560.]
49
50. Boxes
Boxes are a lot like bars, except that
both ends encode quantitative values.
When bars are used in this way, they are
sometimes called range bars. They are
used to encode a range of
values, usually from the highest to the
lowest, rather than a single value.
In the 1970s John Tukey invented a
method of using rectangles (bars with or
without fill colors) in combination with
individual data points (often a short line)
and thin bars to encode several facts
about a distribution of values, including
the median (middle value), middle
50%, etc.
He called his invention a box plot (a.k.a.
box-and-whisker plot).
50
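The facts a Tukey box plot encodes can be computed directly. A sketch using Python's statistics module (the sample values are invented for illustration):

```python
import statistics

# The quantities a box-and-whisker plot encodes: minimum, lower
# quartile, median, upper quartile, and maximum of a distribution.

def five_number_summary(data):
    # statistics.quantiles with n=4 returns the three quartile cut points.
    q1, median, q3 = statistics.quantiles(data, n=4)
    return min(data), q1, median, q3, max(data)

# Invented sample of seven values for illustration.
sample = [1, 2, 3, 4, 5, 6, 7]
print(five_number_summary(sample))  # (1, 2.0, 4.0, 6.0, 7)
```

The box spans Q1 to Q3, a mark inside it shows the median, and the whiskers extend toward the extremes; these five numbers are all the drawing needs.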
51. Agenda
Warm-Up 5 mins
Data Visualisation: Why it matters 5 mins
The Rules 10 mins
Seven Common Quantitative Relationships 5 mins
The best means to encode quantitative data in charts 5 mins
Step by Step Guide 10 mins
Test 5 mins
51
52. Step 1: Determine your message
Determine your message. Don't just turn your data into a chart! Think
about what your data means, what you want to communicate and, most
importantly, your audience's needs.
Will the data be used to look up and compare individual values, or
will the data need to be precise? If so, you should display it in a
table.
Is the message contained in the shape of the data – in trends,
patterns, exceptions, or comparisons that involve more than a few
values? If so, you should display it in a graph.
Or, do both.
52
53. Step 2: Determine the best means to encode the values
Ask yourself: what am I trying to represent?
Nominal comparison. Bars (horizontal or vertical). Points (if the
quantitative scale does not include zero).
Time series. Lines to emphasize the overall shape of the data. Bars
to emphasize and support comparisons between individual values.
Points connected by lines to slightly emphasize individual values
while still highlighting the overall shape of the data.
Ranking. Bars (horizontal or vertical). Points (if the quantitative
scale does not include zero).
Part-to-whole. Bars (horizontal or vertical). Note: pie charts are
commonly used to display part-to-whole relationships, but they don't
work nearly as well as bar graphs because it is much harder to
compare the sizes of slices than the lengths of bars. Use stacked
bars only when you must display measures of the whole as well as the
parts.
Deviation. Lines to emphasize the overall shape of the data (only
when displaying deviation and time-series relationships together).
Points connected by lines to slightly emphasize individual data
points while also highlighting the overall shape (only when
displaying deviation and time-series relationships together).
Frequency distribution. Bars (vertical only) to emphasize individual
values; this kind of graph is called a histogram. Lines to emphasize
the overall shape of the data; this kind of graph is called a
frequency polygon.
Correlation. Points and a trend line in the form of a scatter plot.
53
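This decision table can be captured as a small lookup, which is handy when reviewing chart choices in code. A sketch (the dictionary wording paraphrases the slide; the structure and function name are mine):

```python
# The seven quantitative relationships and their recommended encodings,
# paraphrased from the step-by-step guide above.
ENCODINGS = {
    "nominal comparison": ["bars", "points (scale without zero)"],
    "time series": ["lines (overall shape)", "bars (individual values)",
                    "points connected by lines"],
    "ranking": ["bars", "points (scale without zero)"],
    "part-to-whole": ["bars", "stacked bars (only with measures of the whole)"],
    "deviation": ["lines (with time series)", "points connected by lines",
                  "bars (with reference line)"],
    "frequency distribution": ["bars (histogram)", "lines (frequency polygon)"],
    "correlation": ["points with trend line (scatter plot)"],
}

def recommend(relationship: str) -> list:
    """Look up the recommended encodings for a relationship, case-insensitively."""
    return ENCODINGS[relationship.lower()]

print(recommend("Ranking"))  # ['bars', 'points (scale without zero)']
```

Bars appear in almost every entry, and lines only where the scale is continuous, which mirrors the rules of the preceding slides.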
54. Step 3: Determine where to display each variable – One
Variable
Place the categorical variable on the X axis if your graph will
include ONE categorical variable and any one of the following is
true:
• The categorical scale is an interval scale
• You are using lines to encode the data
• You are using bars to encode the data and the labels are not long
or many
If you are using bars, place the categorical variable on the Y axis
when either of these two conditions exists:
• The text labels associated with the bars are long
• There are many bars
[Charts: a horizontal bar graph whose long category labels (Beef,
Fresh pork, Lamb, Bacon, Sausage, and so on) read cleanly is better
than a vertical bar graph whose many long labels become illegible.]
54
55. Step 3: Determine where to display each variable – Two or
three variables
If the graph involves two or three variables, you must decide which
to display along the axes and which to encode using distinct versions
of another visual attribute, such as colour.
With a line graph, place the variable that is most important to your
message along the X axis.
With a bar graph, encode the variable whose items you want to make it
easiest to compare using a method other than association with an
axis. Notice how much easier it is to compare appointments and sales
than the regions, because they are positioned next to one another.
[Chart: appointments and sales by region, with the two measures as
paired bars within each region, on a scale from 0 to 200.]
55
56. Step 3: Determine where to display each variable - the
problem of the fourth variable
This solution involves a series of small graphs, arranged in the same way as a graph with three
variables, all arranged together in a way that can be seen simultaneously. Each graph is
alike, including consistent scales, differing only in that each features a different item of a
categorical variable. Each graph varies according to a fourth variable, which is sales channel
(e.g. product).
Using small multiples to support an additional variable is a powerful technique. Graphs can be
arranged horizontally, vertically, or even in a matrix of columns and rows. If you need to display
one more variable than you can fit into a single graph, select this approach.
[Charts: small multiples – three side-by-side bar graphs (Face-Value,
Rewards, Big Exchange), each showing appointments and sales by region
for 2010 and 2011 on a shared 0–200 scale.]
56
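Small multiples are straightforward to build with shared axes. A minimal matplotlib sketch (the library choice and all the numbers are my assumptions; only the channel and region names come from the slide):

```python
# Small multiples: one panel per category, identical scales throughout.
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

# Hypothetical sales figures per channel and region (invented numbers).
channels = {"Face-Value": [120, 150, 90, 60],
            "Rewards": [80, 110, 70, 40],
            "Big Exchange": [60, 95, 55, 30]}
regions = ["Wales and West", "London and South East",
           "Scotland and North", "Midlands"]

# sharex/sharey keep every panel on an identical scale, which is what
# makes the panels directly comparable.
fig, axes = plt.subplots(1, len(channels), sharex=True, sharey=True)
for ax, (name, values) in zip(axes, channels.items()):
    ax.barh(range(len(regions)), values)
    ax.set_title(name, fontsize=9)
axes[0].set_yticks(range(len(regions)))
axes[0].set_yticklabels(regions, fontsize=8)
```

Because the scales are shared, each panel only varies by the fourth variable (channel), exactly as the slide describes.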
57. Step 4: Determine the best design for the remaining
objects - Scale
It's now time to make the series of design decisions that remain,
including the scales and text. These decisions are concerned with the
placement and visual appearance of items.
If the graph will be used for analysis purposes that require seeing
the differences between values in as much detail as possible,
narrowing the scale can be useful. Generally, you should adjust the
scale so that it extends a little below the lowest data value and a
little above the highest.
If you are using bars to encode the data, but your message could be
better communicated by narrowing the scale, remember to switch from
bars to points!
[Charts: quarterly sales on a scale narrowed to 800–1800, and the A/B
example plotted with points on a scale from 470 to 560.]
57
58. Step 4: Determine the best design for the remaining
objects - Legend
If a legend is required and you are using lines, label the lines
directly.
If you are using bars, place the legend above the plot area with the
labels arranged side-by-side in the same order as the bars.
[Charts: a line graph with "London and South East" and "Wales and
West" labelled directly at the lines, and a bar graph with a
side-by-side "Budget / Actual" legend above the plot area.]
58
59. Step 4: Determine the best design for the remaining
objects – Tick Marks and Scales
Tick marks are only necessary on quantitative scales, for they serve no real purpose
on categorical scales. Between 5 and 10 tick marks usually do the job;
too many clutter the graph and too few fail to give the level of detail needed to
interpret the values.
If the graph can be read with the scale in only one place (left, right, top, or
bottom), place it nearest the data you want to emphasise or make easiest to
read.
If the graph is so large it cannot be read with only one scale, place it in both
positions (top and bottom, or left and right).
59
60. Step 4: Determine the best design for the remaining
objects – Gridlines
Unless they are necessary to understand your message or to divide a scatter plot
into sections, leave them off, and when used, subdue them visually. Bear in mind
graphs display patterns and relationships. If you want to communicate data with a
high degree of quantitative accuracy, use a table.
[Chart: quarterly sales plotted without gridlines.]
60
61. Step 4: Determine the best design for the remaining
objects – Descriptive Text
Although the primary message of a graph is carried in the picture it
provides, text is always required to some degree to clarify the
meaning of that picture. Some text is often needed, including:
– A descriptive title
– Axis titles (unless the nature of the scale and its unit of
measure are already clear)
Numbers in the form of text along quantitative scales are always
necessary, and legends often are. It is often useful to include one
or more notes to describe what is going on in the graph, what ought
to be examined in particular, or how to read the graph, whenever
these bits of important information are not otherwise obvious.
[Chart: "Widget Sales by Region and Calendar Quarter (2007)", with a
note reading "Widget sales in London and South East have been ahead
of Wales and West with the exception of Q3".]
61
62. Step 5: Determine if particular data should be featured, and
if so, how
The final major stage in the process involves highlighting particular data if some data is more important
than the rest.
Whatever the reason, you have a number of possible ways to make selected data stand out.
One of the best and simplest ways is to encode those items using bright or dark colours, which will stand
out clearly if you’ve used soft colours for everything else. Other methods include:
–When bars are used, place borders only around those bars that should be highlighted.
–When lines are used, make the lines that must stand out thicker.
–When points are used, make the featured points larger or include fill colour in them alone.
[Charts: examples of the highlighting techniques above, applied to
the A/B comparison and to quarterly sales graphs.]
62
63. Remember to follow this process for graph selection and design in
order to communicate your information in the most effective manner
Determine your message and identify your data
Determine if a table, graph, or combination of both is
needed to communicate your message
Determine the best means to encode the values
Determine where to display each variable
Determine the best design for the remaining objects
Determine if particular data should be featured, and
if so, how
63
64. Summary
Whenever you create a graph, you have a choice to
make — to communicate or not. That’s what it all comes
down to. If you have something important to say, then
say it clearly and accurately. These guidelines are
designed to help you do just that.
65. Agenda
Warm-Up 5 mins
Data Visualisation: Why it matters 5 mins
The Rules 10 mins
Seven Common Quantitative Relationships 5 mins
The best means to encode quantitative data in charts 5 mins
Step by Step Guide 10 mins
Test 5 mins
65
66. Which graph makes it easier to determine whether Mid-Cap US
stocks or Small-Cap US stocks have a greater share?
A B
66
69. Which graph makes it easier to focus on the pattern of change
through time, instead of the individual values?
A
B
69
70. Only one of these graphs accurately encodes the values. The other skews the
values in a misleading manner. Which graph presents the data accurately?
A B
70
71. Which map makes it easier to find all of the counties with
positive growth rates?
A B
71
72. Which graph makes it easier to determine R&D’s travel
expense?
A
B
72
Once you know what you want to say, effective visual communication is achieved by displaying information in a way that enables people to clearly see an accurate representation of your message and understand what they see. To do this, you must understand a few things about how people see (visual perception) and how people think (cognition).
A common problem with tables and graphs is the excessive presence of visual content that doesn’t represent actual data. Whenever quantitative information is presented, the data itself should stand out clearly, without distraction. This involves eliminating anything that doesn’t represent data, except for visual devices that support the data in a necessary way (for example, axes in a graph), in which case they should be displayed in muted fashion so as to not distract from the data itself.
Because differences in visual properties, such as color, are used to communicate actual differences in the information itself, visual differences should never be used arbitrarily. When people notice visual differences, they try to discern the meaning of those differences. Don't confuse people and waste their time by including visual differences that are meaningless.
It is usually best to encode the third variable using distinct colours, rather than any of the other available methods, such as different line or fill patterns. Just be careful to use colours that are still distinct, even when photocopied.
Keeping the quantitative scale consistent makes it easy to compare the charts.
Be careful whenever you narrow the scale to make sure that it is obvious to your audience that you've done so, so they won't misread big differences between lines and points on the graph as big differences in their values, which might not be the case. Points aren't as visually prominent as bars and consequently don't emphasize individual values quite as forcefully, but points are a fine substitute for bars when you need to narrow the quantitative scale.
The more directly you can label data, the better. For instance, in a line graph with multiple lines, if you can label the lines directly (for example, at the ends of the lines), the graph will be much easier to read. In a bar graph with multiple sets of bars, you usually need a legend, but you can make it much easier to read by arranging the labels to match the arrangement of the bars, rather than the more usual way on the right. Notice also that the legend doesn't need a border around it; it simply isn't necessary.
Even on quantitative scales, only major tick marks are necessary, with rare exceptions. When the quantitative scale corresponds to the Y axis, it can be placed on the left side, right side, or on both sides of the graph. When it corresponds to the X axis, it can be placed on the top, bottom, or both. It is usually sufficient to place the quantitative scale in one place, but if the graph is so large that some values are positioned too far from the scale to adequately determine their values, placing the scale on both the left and the right, or the top and the bottom, will solve the problem. When it only needs to appear in one place, the best choice of position depends on which values you want to emphasize or make easier to read; placing the scale nearest to those values will accomplish this. Avoid placing the scale on the right side of the graph, however, unless it is really necessary to serve this purpose, because the scale so rarely appears only on the right that this might momentarily disorient those who use the graph. If the quantitative scale ranges between positive and negative values, the axis line should be positioned at zero, but the labels should be placed elsewhere so they won't interfere with the data. For instance, when the quantitative scale is on the X axis, it is usually best to place the text labels just below the plot area of the graph.
Grid lines in graphs are mostly a vestige of the old days when graphs had to be drawn by hand on grid paper. Today, with computer-generated graphs, grid lines are only useful when one of the following conditions exists:
• Values cannot be interpreted with the necessary degree of accuracy
• A subset of points in multiple related scatter plots must be compared
Bear in mind that it is not the purpose of a graph to communicate data with a high degree of quantitative accuracy; that is handled better by a table. Graphs display patterns and relationships. If a bit more accuracy than can be easily discerned is necessary, however, you may include grid lines, but when you do, you should subdue them visually, making them just barely visible enough to do the job. When you are using multiple related scatter plots and wish to make it easy for people to compare the same subset of values in two or more graphs, a subtle matrix of vertical and horizontal grid lines neatly divides the graphs into sections, making it easy to isolate particular ranges of values.