Guidelines for data visualisation: eye vegetables and eye candy, by Jen Stirrup
What are your data visualization vegetables? What's your candy? This session looks at the theory and practice of hot data visualization topics, such as: how do you choose which chart to use, and when?
How can you best structure your dashboard?
What about pie charts? What is the fuss about, and when are they best used?
Color blindness - how can you cater for the 1 in 12 color-blind males (not forgetting the 1 in 100 color-blind females)?
To 3D or not to 3D? Why is it missing in Power View? And any other data visualization topics you care to mention! Come along for dataviz fun, and to learn the "why" along with practical advice.
Using effective visual aids is important for getting across your message when describing data. This can be in a presentation, poster or paper. This talk goes through some basic design tips that can help your visual aids look professional and work effectively.
Written for the Enabling Excellence ETN. https://eetraining.wordpress.com/
AMIA 2015 Visual Analytics in Healthcare Tutorial Part 1, by David Gotz
A concise introduction to the topic of visualization. Designed for beginners with no prior experience with visualization. These slides were the first part of a half-day tutorial on Visual Analytics held in conjunction with the 2015 AMIA Annual Symposium. It was sponsored by the AMIA Visual Analytics Working Group. For more information, please see www.visualanalyticshealthcare.org or contact the author of the slides: David Gotz @ http://gotz.web.unc.edu
Best Practices for Killer Data Visualization, by Qualtrics
There’s something special about simple, powerful visualizations that tell a story. In fact, 65% of people are visual learners.
Join Qualtrics and Sasha Pasulka from Tableau as we illuminate the world of data visualization and give you clear takeaways to help you tell a better story with data. Getting executive buy-in or that seat at the table may come down to who can visualize data in a way that excites and enlightens the audience.
Visualizing and Communicating High-dimensional Data, by Stefan Kühn
Slides from my talk at Data Natives, starting with the different modes of perception and the components of visualization and graphics, and how to convey information efficiently; then giving examples of how modern approximation techniques (manifold learning, principal curves) and visualization techniques (pair plots, correlation plots, parallel coordinates, the grand tour) can be used to approach complex multi-dimensional data.
Data visualization: a dataviz superpower! Guidelines on applying best-practice data visualization principles in Power BI, Excel, SSRS, Tableau, and other great tools!
Presented at #H2OWorld 2017 in Mountain View, CA.
Enjoy the video: https://youtu.be/bas3-Ue2qxc.
Learn more about H2O.ai: https://www.h2o.ai/.
Follow @h2oai: https://twitter.com/h2oai.
- - -
Abstract:
Auto Visualization involves the problem of producing meaningful graphics when presented with data. Relevant to this task are the strategies that expert statisticians and data analysts use to gain insights through visualization, as well as the portfolio of diagnostic methods devised by statisticians in the last 50 years. While some researchers and companies may claim to do automatic visualization, the problem is much deeper than simply producing collections of histograms, bar charts, and scatterplots. The deeper problem is what subset of these graphics is critical to recognizing anomalies, outliers, unusual distributions, missing values, and so on. This talk will cover aspects of this deeper problem and will introduce H2O software that implements some of these algorithms.
Leland Wilkinson is Chief Scientist at H2O.ai and Adjunct Professor of Computer Science at the University of Illinois Chicago. He received an A.B. degree from Harvard in 1966, an S.T.B. degree from Harvard Divinity School in 1969, and a Ph.D. from Yale in 1975. Wilkinson wrote the SYSTAT statistical package and founded SYSTAT Inc. in 1984. After the company grew to 50 employees, he sold SYSTAT to SPSS in 1994 and worked there for ten years on research and development of visualization systems. Wilkinson subsequently worked at Skytree and Tableau before joining H2O.ai. Wilkinson is a Fellow of the American Statistical Association, an elected member of the International Statistical Institute, and a Fellow of the American Association for the Advancement of Science. He has won the best speaker award at the National Computer Graphics Association and the Youden prize for best expository paper in the statistics journal Technometrics. He has served on the Committee on Applied and Theoretical Statistics of the National Research Council and is a member of the Boards of the National Institute of Statistical Sciences (NISS) and the Institute for Pure and Applied Mathematics (IPAM). In addition to authoring journal articles, the original SYSTAT computer program and manuals, and patents in visualization and distributed analytic computing, Wilkinson is the author (with Grant Blank and Chris Gruber) of Desktop Data Analysis with SYSTAT. He is also the author of The Grammar of Graphics, the foundation for several commercial and open-source visualization systems (IBM RAVE, Tableau, R's ggplot2, and Python's Bokeh).
This slide deck is from a workshop that took place at the UNC Chapel Hill Davis Library Research Hub.
Collecting data is now easier than it has ever been. But as data becomes more prolific, datasets become larger and more complex. How do we find meaningful patterns in our data? How can we communicate those patterns to others? Data visualization allows us to make sense of today’s ever-evolving information landscape.
This workshop will introduce the history and basic principles of data visualization. Learn about best practices and resources for making an impact with your data through compelling charts, graphs and maps.
33. The most important measurement should exploit the highest ranked encoding possible.
• Position along a common scale
• Position on identical but nonaligned scales
• Length
• Angle or Slope
• Area
• Volume or Density or Color saturation
• Color hue
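The ranking above can be sketched as a simple lookup. This is a minimal Python illustration (the deck's own code is R/ggplot2, and the measurement names here are hypothetical): measurements, ordered by importance, are matched to the highest-ranked encodings still available.

```python
# The slide's ranking of visual encodings, ordered from most to least
# accurately decoded by readers.
ENCODING_RANK = [
    "position along a common scale",
    "position on identical but nonaligned scales",
    "length",
    "angle or slope",
    "area",
    "volume, density, or color saturation",
    "color hue",
]

def assign_encodings(measurements):
    """Map measurements (most important first) onto the best encodings."""
    if len(measurements) > len(ENCODING_RANK):
        raise ValueError("more measurements than ranked encodings")
    return dict(zip(measurements, ENCODING_RANK))

# Hypothetical measurement names: the most important one gets position.
channels = assign_encodings(["revenue", "forecast_error", "region"])
print(channels["revenue"])  # → position along a common scale
```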
36. “The first rule of color: do not talk about color!” - Tamara Munzner
87. “Piecharts are the information visualization equivalent of a roofing hammer to the frontal lobe. They have no place in the world of grownups, and occupy the same semiotic space as short pants, a runny nose, and chocolate smeared on one’s face. They are as professional as a pair of assless chaps.”
http://blog.codahale.com/2006/04/29/google-analytics-the-goggles-they-do-nothing/
92. “Tables are preferable to graphics for many small data sets. A table is nearly always better than a dumb pie chart; the only thing worse than a pie chart is several of them, for then the viewer is asked to compare quantities located in spatial disarray both within and between pies… Given their low data-density and failure to order numbers along a visual dimension, pie charts should never be used.”
-Edward Tufte, The Visual Display of Quantitative Information
94. Who do you think did a better job in tonight’s debate?
                      Clinton  Trump
  Among Democrats        99%     1%
  Among Republicans      53%    47%
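The poll numbers above make Tufte's point concrete: for a data set this small, a plain-text table communicates everything a pair of pie charts would, without the spatial disarray. A minimal Python sketch (the formatting choices are my own, not from the deck):

```python
# The debate-poll data from the slide, rendered as a small plain-text
# table rather than two pie charts.
rows = [
    ("Among Democrats", 99, 1),
    ("Among Republicans", 53, 47),
]

def render_table(rows):
    # Header row, then one right-aligned percentage pair per subgroup.
    lines = [f"{'':<20}{'Clinton':>8}{'Trump':>8}"]
    for label, clinton, trump in rows:
        lines.append(f"{label:<20}{clinton:>7}%{trump:>7}%")
    return "\n".join(lines)

print(render_table(rows))
```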
119. Cleveland’s three visual operations of pattern perception:
1. Detection
2. Assembly
3. Estimation
162. R/ggplot2 code for every plot in this presentation is available at http://goo.gl/xH5PLV
The rendered document is at http://rpubs.com/jrauser/hhsd_notes
This presentation is at https://goo.gl/LuDNje
179. Choose your own adventure
A. What are gridlines for?
B. Humans are bad at vertical distance
C. Do I have to include 0 on my y-axis?
D. What aspect ratio should I use?
180. Weber’s law: the “Just Noticeable Difference” is proportional to the size of the initial stimulus.
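Weber's law can be written as JND = k · I, where k is the Weber fraction. A minimal Python sketch (the value of k is an illustrative assumption, not a figure from the deck):

```python
# Weber's law: the just-noticeable difference (JND) is proportional to
# the size of the initial stimulus, JND = k * I.
K = 0.02  # illustrative Weber fraction; real values depend on the task

def just_noticeable_difference(stimulus, k=K):
    return k * stimulus

# Doubling the stimulus doubles the change needed before a viewer
# notices any difference at all.
assert just_noticeable_difference(200) == 2 * just_noticeable_difference(100)
print(just_noticeable_difference(100))
```

This is why small differences between large values are hard to read: the bigger the bars, the bigger the gap must be before anyone perceives it.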
209. Q: Should I include 0 on my scale?
A: It depends.
210. Q: Should I include 0 on my scale?
A: Relying on the pre-attentive perception of size or intensity? Yes, otherwise you will mislead. Using position? It’s up to you.
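One way to quantify the "otherwise you will mislead" case is Tufte's lie factor (a concept from The Visual Display of Quantitative Information, not named on this slide): the ratio of the effect shown in the graphic to the effect in the data. A sketch:

```python
# Lie factor for a bar chart whose axis starts at `baseline` instead of
# 0: bar heights encode (value - baseline), so the visual change is
# exaggerated relative to the change in the data.
def lie_factor(a, b, baseline=0):
    data_effect = (b - a) / a
    visual_effect = (b - a) / (a - baseline)
    return visual_effect / data_effect

# Bars of 50 and 55 on an axis starting at 45: a 10% change in the data
# reads as a 100% change in bar height, a tenfold exaggeration.
print(lie_factor(50, 55, baseline=45))  # → 10.0
```

With a baseline of 0 the lie factor is 1.0, which is why size-encoded charts should include 0 while position-encoded ones may not need to.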
218. “Above all else, show the variation in the data.” -Rauser (via Tufte)