Learn how you can use Reveal’s R & Python scripting capability to bring advanced data preparation, deeper analytics, and richer visualizations to your users!
3. Today’s Agenda
• Advanced Scripting in Reveal
• Pre-Requisites for R & Python
• Visualization Customizations
• Live Demo
• Wrap Up
Housekeeping
• Recording and slides will be available after the webinar. We’ll send a follow-up email.
• Please ask questions in the Questions window
4. Summer of BI Series …
• June 10th: Dashboard Best Practices, Do’s and Don’ts
• June 24th: Building a Data Driven Org
• July 8th: Advanced Analytics: Using R & Python
• July 22nd: Advanced Analytics: Machine Learning with Reveal
• Aug 5th: Embedded Analytics: 5 Steps to App Modernization
https://www.infragistics.com/webinars
6. Using R & Python in Custom Visualizations
• Using Reveal, you are not limited to “what’s in the box” in terms of chart types and visualizations
• We’ve enabled widely used scripting / programming languages like R and Python, which are common among data scientists
• You and your power-user co-workers are limited only by your imagination (and what custom libraries offer) in terms of data visualizations
• Both R & Python can do more than data visualizations – you can perform any action on your available data
7. Common Scenarios for R & Python Visualizations
1. There is a visualization you require that we do not ship in Reveal
2. You require advanced scripting or data-preparation tasks to be applied to the data you are working with
3. You are a citizen data scientist or developer and you have created visualizations that you want to re-use
4. You are using a different visualization product and want to create mashups with existing Reveal dashboards
8. Reveal Data Visualizations
40 Data Visualizations in 7 Categories
Compare Data
Part to Whole
Data Distribution
Data Trend Analysis
Data Relationships
KPIs and Gauges
Geospatial Data
9. Custom Visualizations
Endless options … consider that Python’s most popular library is Matplotlib, and it has many extensions …
• Biggles, Chaco, DISLIN, GNU Octave, Gnuplot-py, PLplot, PyCha, PyPlotter, SageMath, SciPy, wxPython, Plotly, Bokeh … and more!
11. System Requirements
• Advanced Scripting is supported in the following
platforms:
• WPF Desktop Client
• WPF Desktop SDK
• Python 2.7 or higher (3.8 or higher preferred)
• R 3.x or higher (4.0 or higher preferred)
17. Matplotlib
• The most popular open source graphics library for Python
• Thousands of examples online to inspire you to extend what is possible with Reveal
• Explore the gallery at https://matplotlib.org/3.1.0/gallery/index.html
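As a quick way to get a feel for Matplotlib outside Reveal, a minimal standalone script follows; the chart type, title, and values are purely illustrative, and the `Agg` backend stands in for Reveal's own image rendering:

```python
import matplotlib
matplotlib.use("Agg")  # off-screen backend; Reveal likewise captures the figure as an image
import matplotlib.pyplot as plt

# Build a simple bar chart with made-up quarterly values
fig, ax = plt.subplots()
ax.bar(["Q1", "Q2", "Q3", "Q4"], [120, 150, 90, 180])
ax.set_title("Quarterly Spend")
fig.savefig("chart.png")  # the saved image is what a canvas would display
```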
20. Creating a Custom Visualization
• Connect to your data source as if you were using a built-in Reveal chart, and drag / drop the fields you require for your visualization
21. Creating a Custom Visualization
• Select Python from the Change Visualization dropdown
22. Setting Up Your Script
• Switch to the Settings tab and click the Edit Script button
• Note the libraries and fields that are available by default
• Your data is referenced in the data object
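Because the script receives its input through the `data` object, it can help to prototype against a stand-in DataFrame before pasting a script into Reveal. The sketch below assumes `data` behaves like a pandas DataFrame; the column names are invented for illustration, not Reveal's actual schema:

```python
import pandas as pd

# Stand-in for Reveal's injected `data` object (column names are illustrative)
data = pd.DataFrame({
    "Date": pd.date_range("2021-01-01", periods=4, freq="MS"),
    "Sum of Spend": [100, 120, 90, 150],
})

# Any pandas operation can run before (or instead of) plotting
total = data["Sum of Spend"].sum()
over_budget = data[data["Sum of Spend"] > 100]
```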
24. Watch the Magic Happen!
• By default, Python with Matplotlib renders as an image on the Reveal visualization canvas
25. Clean Up the Visualization
• Apply a Data Filter
• All settings, filters, etc. in Reveal work across any custom visualization
27. Area Chart
# Reveal injects the query result as `data`; plot two series on one shared axes
import matplotlib.pyplot as plt

ax = plt.gca()
data.plot(kind='area', x='Date', y='Sum of Spend', ax=ax)
data.plot(kind='area', x='Date', y='Sum of Budget', color='green', ax=ax)
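To try the area-chart snippet outside Reveal, the injected `data` object can be replaced with a hand-built DataFrame; everything below except the two `data.plot` calls is scaffolding for that assumption, and the values are arbitrary:

```python
import matplotlib
matplotlib.use("Agg")  # off-screen backend so the script runs headless
import matplotlib.pyplot as plt
import pandas as pd

# Stand-in for Reveal's injected `data` object; column names match the slide's snippet
data = pd.DataFrame({
    "Date": pd.date_range("2021-01-01", periods=6, freq="MS"),
    "Sum of Spend": [10, 12, 9, 15, 11, 14],
    "Sum of Budget": [12, 12, 12, 14, 14, 14],
})

# The two calls from the slide: overlay both series on one shared axes
ax = plt.gca()
data.plot(kind="area", x="Date", y="Sum of Spend", ax=ax)
data.plot(kind="area", x="Date", y="Sum of Budget", color="green", ax=ax)
```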
29. Heatmap Chart
# Reveal injects the query result as `data`; NumPy and Matplotlib do the rest
import numpy as np
import matplotlib.pyplot as plt

campaignid = np.unique(np.array(data['CampaignID']))
territory = np.unique(np.array(data['Territory']))
spend = np.array(data['Sum of Spend']).reshape((7, 5))  # 7 campaigns x 5 territories
fig, ax = plt.subplots(figsize=(5.5, 6.5))
im = ax.imshow(spend)
# Show all ticks...
ax.set_xticks(np.arange(len(territory)))
ax.set_yticks(np.arange(len(campaignid)))
# ... and label them with the respective list entries
ax.set_xticklabels(territory)
ax.set_yticklabels(campaignid)
# Loop over data dimensions and create text annotations.
for i in range(len(campaignid)):
    for j in range(len(territory)):
        text = ax.text(j, i, spend[i, j],
                       ha="center", va="center", color="w")
ax.set_title("Campaign Spend (dollars)")
fig.tight_layout()
32. Reveal – Simple and Beautiful Visualizations
• Using Reveal, you are not limited to “what’s in the box” in terms of chart types and visualizations
• Don’t limit the experience you can deliver to your customer
• Use R & Python to super-charge your dashboards with custom visualizations
• Learn R: https://www.tutorialspoint.com/r/r_boxplots.htm
• Learn Python: https://matplotlib.org/devdocs/gallery/index.html
33. Use Reveal to Enable Advanced Scripting and Custom Data Visualizations
Try Today at revealbi.io
Contact Us for a Personalized Demo! sales@infragistics.com
34. Email Us with Questions!
Jason Beres
Senior VP, Developer Tools
jasonb@Infragistics.com
Casey McGuigan
Product Manager, Reveal
cmcguigan@Infragistics.com
revealbi.io
https://www.codespeedy.com/how-to-change-line-color-in-matplotlib/
https://matplotlib.org/3.1.0/gallery/color/named_colors.html
Color Demo - https://matplotlib.org/3.1.0/gallery/color/color_demo.html
https://matplotlib.org/3.1.0/gallery/lines_bars_and_markers/linestyles.html#sphx-glr-gallery-lines-bars-and-markers-linestyles-py
• an RGB or RGBA (red, green, blue, alpha) tuple of float values in [0, 1] (e.g., (0.1, 0.2, 0.5) or (0.1, 0.2, 0.5, 0.3));
• a hex RGB or RGBA string (e.g., '#0f0f0f' or '#0f0f0f80'; case-insensitive);
• a string representation of a float value in [0, 1] inclusive for gray level (e.g., '0.5');
• one of {'b', 'g', 'r', 'c', 'm', 'y', 'k', 'w'};
• an X11/CSS4 color name (case-insensitive);
• a name from the xkcd color survey, prefixed with 'xkcd:' (e.g., 'xkcd:sky blue'; case-insensitive);
• one of the Tableau Colors from the 'T10' categorical palette (the default color cycle): {'tab:blue', 'tab:orange', 'tab:green', 'tab:red', 'tab:purple', 'tab:brown', 'tab:pink', 'tab:gray', 'tab:olive', 'tab:cyan'} (case-insensitive);
• a "CN" color spec, i.e. 'C' followed by a number, which is an index into the default property cycle (matplotlib.rcParams['axes.prop_cycle']); the indexing is intended to occur at rendering time, and defaults to black if the cycle does not include color.