PlotlyJS.jl is a Julia wrapper for the interactive JavaScript plotting library plotly.js. It provides two main layers: 1) a faithful representation of the plotly.js API, allowing plots and visualizations to be constructed programmatically in JSON format, and 2) convenience functions and syntax that make common plotting tasks more natural in Julia, such as plotting data with a single function call or combining multiple plots into subplots. The library aims to make publication-quality, interactive visualization easy from Julia.
The document discusses Chaco, an interactive graphics library for Python. It provides examples of using Chaco to create simple static plots as well as more complex, interactive visualizations. Code samples are given to demonstrate line plots, scatter plots, and image plots built with Chaco.
This document discusses Go programming language concepts including:
- Key types (constants and structures) and three important built-in functions: len(), append(), and copy()
- Structures, embedding structures, anonymous structures, and interfaces
- Control structures like if/else, for loops, and switch statements
- Functions, variables, slices, and maps
- Examples of using len(), append(), and copy() functions
This document provides a list of 12 programming problems to solve involving various computer science concepts like functions, structures, classes, inheritance, file handling, SQL queries, and stacks and queues. The problems cover generating series using functions, creating structures to store student and employee records, working with classes and objects, inheritance between person and employee classes, file input/output operations, SQL create, insert, update and select statements, and implementing stacks and queues using both arrays and linked lists. The document serves as a guide for practicing and reinforcing different programming concepts.
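The stack-and-queue problems mentioned above can be sketched briefly; the following Python version is my own illustration (class and method names are not from the document), showing an array-backed stack and a deque-backed queue:

```python
from collections import deque

class Stack:
    """LIFO stack backed by a Python list (array-style implementation)."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()  # raises IndexError when empty
    def is_empty(self):
        return not self._items

class Queue:
    """FIFO queue backed by collections.deque for O(1) operations at both ends."""
    def __init__(self):
        self._items = deque()
    def enqueue(self, item):
        self._items.append(item)
    def dequeue(self):
        return self._items.popleft()
    def is_empty(self):
        return not self._items

s, q = Stack(), Queue()
for x in (1, 2, 3):
    s.push(x)
    q.enqueue(x)
print(s.pop(), q.dequeue())  # stack yields 3 (last in), queue yields 1 (first in)
```

A linked-list variant, as the problems also ask for, would replace the backing list with nodes holding a value and a next pointer.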
Python Made Easy and Simple: Solving Everyday Tasks Elegantly (Maxim Kulsha)
The document discusses various techniques for iteration in Python. It covers iterating over lists, dictionaries, files and more. It provides examples of iterating properly to avoid errors like modifying a list during iteration. Context managers are also discussed as a clean way to handle resources like file objects. Overall the document shares best practices for writing efficient and robust iteration code in Python.
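A brief sketch of the kinds of pitfalls and fixes such a document typically covers (these examples are my own, not taken from the document):

```python
# Pitfall: removing items from a list while iterating over it skips elements,
# because the loop's index shifts under it. Build a new list instead.
nums = [1, 2, 3, 4, 5, 6]
odds = [n for n in nums if n % 2 != 0]

# Iterating over a dictionary yields its keys; use .items() for key/value pairs.
ages = {"ada": 36, "alan": 41}
pairs = [(name, age) for name, age in ages.items()]

# Context managers close resources even when the body raises an exception.
with open("example.txt", "w") as fh:   # file name is illustrative
    fh.write("one line\n")
with open("example.txt") as fh:
    for line in fh:                    # files iterate lazily, line by line
        print(line.rstrip())
```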
The document discusses building and training a convolutional neural network (CNN) model for image classification using Keras and TensorFlow, and then converting the model to Core ML format.
It first loads image data, preprocesses it, and sets up a CNN model in Keras with Conv2D, MaxPooling2D, Flatten, and Dense layers. It then compiles and trains the model on the image data. Next, it evaluates the trained model and exports it to Core ML format for use on iOS devices. Finally, it shows sample Core ML code for performing predictions using the converted model on an image input.
Monoids - Part 1 - with examples using Scalaz and Cats (Philip Schwarz)
A monoid is an algebraic structure consisting of a set and a binary operation that is associative and has an identity element. Some key properties of monoids include:
1) A monoid consists of a type A, a binary operation op that combines two values of type A, and a zero value that acts as an identity.
2) The binary operation must be associative, meaning op(op(x,y),z) = op(x,op(y,z)).
3) The zero value must satisfy the identity laws: op(x, zero) = x and op(zero, x) = x.
4) Common examples of monoids include string concatenation (identity: the empty string) and integer addition (identity: 0).
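These laws can be checked directly. The document's own examples are in Scala, so the following Python version is only an analogy of mine, using string concatenation as the monoid:

```python
from functools import reduce

# A monoid is (A, op, zero): op is associative and zero is an identity.
op = lambda x, y: x + y   # string concatenation
zero = ""                 # identity element

x, y, z = "foo", "bar", "baz"

# Associativity: op(op(x, y), z) == op(x, op(y, z))
assert op(op(x, y), z) == op(x, op(y, z))

# Identity laws: op(x, zero) == x and op(zero, x) == x
assert op(x, zero) == x
assert op(zero, x) == x

# The same structure works for integer addition with zero = 0, which is
# why folding/reducing over any monoid is well defined:
assert reduce(op, ["a", "b", "c"], zero) == "abc"
```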
Functional programming avoids changing-state and mutable data. Referential transparency means expressions can be replaced without affecting observable behavior. Pure functions only depend on argument values and have no other effects. Case classes provide functionality like equals, hashCode and pattern matching out of the box. Futures allow running blocking operations asynchronously and chaining results with map, flatMap and for comprehensions. Implicits allow type conversions and providing parameters implicitly. Sealed classes allow exhaustive pattern matching of a type hierarchy.
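The pure-function and referential-transparency points can be illustrated concisely. The talk's own examples are in Scala; the following Python analogies are mine (frozen dataclasses standing in loosely for case classes):

```python
from dataclasses import dataclass

# Pure: the result depends only on the arguments and there are no side
# effects, so any call can be replaced by its value (referential transparency).
def area(width: float, height: float) -> float:
    return width * height

assert area(2, 3) == 6
assert area(2, 3) + area(2, 3) == 2 * area(2, 3)  # calls are substitutable

# Impure: the result depends on hidden, mutable state.
_counter = 0
def next_id() -> int:
    global _counter
    _counter += 1
    return _counter

assert next_id() != next_id()  # same call, different results

# Frozen dataclasses play a role similar to Scala case classes:
# structural equality and hashing come for free, and fields are immutable.
@dataclass(frozen=True)
class Point:
    x: int
    y: int

assert Point(1, 2) == Point(1, 2)
```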
This document discusses SQL and database connectivity using Python. It covers SQL statements like SELECT, INSERT, UPDATE, DELETE. It describes database tables like orders and parts with sample data. It also explains how to connect to databases using Python DB-API modules, execute queries, retrieve and manipulate result sets. Key methods like cursor.execute, fetchall are demonstrated along with transaction handling.
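The DB-API workflow described above can be sketched with Python's built-in sqlite3 module; the table and column names below are illustrative, not taken from the document:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
cur = conn.cursor()

cur.execute("CREATE TABLE parts (id INTEGER PRIMARY KEY, name TEXT, price REAL)")
cur.executemany(
    "INSERT INTO parts (name, price) VALUES (?, ?)",
    [("bolt", 0.10), ("nut", 0.05), ("gear", 4.50)],
)
conn.commit()  # transaction handling: make the inserts durable

cur.execute("UPDATE parts SET price = price * 1.1 WHERE name = ?", ("gear",))
cur.execute("SELECT name, price FROM parts ORDER BY price DESC")
rows = cur.fetchall()  # retrieve the full result set
print(rows)

conn.close()
```

The `?` placeholders keep values out of the SQL string itself, which is the DB-API's guard against SQL injection.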
Traverse allows running actions over data structures and accumulating results. It traverses data like lists and vectors, running an applicative functor like Future or Option for each element. Variations include parallel, non-empty, unordered, and flat traversals. Traverse underpins many operations and is a powerful abstraction for representing imperative loops in a functional way.
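The idea can be sketched in Python with Optional standing in for the Option functor; this is a loose analogy of mine, not the Scala/Cats API:

```python
from typing import Callable, Optional, TypeVar

A = TypeVar("A")
B = TypeVar("B")

def traverse(xs: list[A], f: Callable[[A], Optional[B]]) -> Optional[list[B]]:
    """Run an Option-like action for each element; if every action
    succeeds, collect the results, otherwise fail as a whole."""
    out: list[B] = []
    for x in xs:
        y = f(x)
        if y is None:          # one failure fails the whole traversal
            return None
        out.append(y)
    return out

def parse_int(s: str) -> Optional[int]:
    return int(s) if s.isdigit() else None

assert traverse(["1", "2", "3"], parse_int) == [1, 2, 3]
assert traverse(["1", "x", "3"], parse_int) is None
```

This is the "imperative loop as a functional abstraction" point: the loop's plumbing (accumulation and early exit on failure) lives in traverse once, not in every caller.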
N-Queens Combinatorial Problem - Polyglot FP for fun and profit - Haskell and... (Philip Schwarz)
Learn how to write FP code that displays a graphical representation of all of the N-Queens solutions for N = 4, 5, 6, 7, 8.
See how to neatly solve the problem by exploiting its self-similarity and using a divide and conquer approach.
Make light work of assembling multiple images into a whole, by exploiting Doodle’s facilities for combining images using a relative layout.
See relevant FP functions, like Foldable’s intercalate and intersperse, in action.
Code for part 3: https://github.com/philipschwarz/n-queens-combinatorial-problem-scala-part-3
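The two Foldable functions named above can be sketched in Python (the talk itself uses Haskell and Scala; these definitions follow the usual Haskell semantics):

```python
def intersperse(sep, xs):
    """Place sep between consecutive elements: [a, sep, b, sep, c]."""
    out = []
    for i, x in enumerate(xs):
        if i:
            out.append(sep)
        out.append(x)
    return out

def intercalate(sep, xss):
    """Insert the list sep between the lists in xss and flatten the result."""
    return [x for chunk in intersperse(sep, xss) for x in chunk]

assert intersperse(0, [1, 2, 3]) == [1, 0, 2, 0, 3]
assert intercalate([0], [[1], [2], [3]]) == [1, 0, 2, 0, 3]
# With strings this is essentially str.join:
assert "".join(intersperse(", ", ["a", "b"])) == "a, b"
```

In the talk's setting, the elements being interspersed are images rather than list items, which is what makes the functions useful for assembling the N-Queens boards.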
After 10 years of Object-Oriented Java, 2 years of Functional Programming in Scala was enough to convince me that I could never switch back. But why? The answer is simple: Functional Scala lets you think less. It reduces the number of moving parts you need to hold in your head, lets you stay focused and saves your mental stack from overflowing.
In this talk I'll show you how to stop treating Scala as a better Java and start exploring the world of Functional Programming. I'll use code examples to demonstrate a four step path that'll let you ease yourself into the world of Functional Programming while continuing to deliver production quality code.
With each step we'll learn a new technique and understand how it leaves you with less to think about. Hopefully this talk will convince you that Functional Scala leads to code that's easier to hold in your head, and leave you excited about learning a new paradigm.
The document discusses concepts of object-oriented programming including classes, objects, encapsulation, inheritance, polymorphism, and abstraction. It defines a Shape class and subclasses like Rectangle, Circle, and Triangle. It demonstrates inheritance by having the subclasses inherit attributes and behaviors from the Shape class. It also shows polymorphism through method overriding to calculate the area for different shapes.
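A compact Python sketch of the Shape hierarchy described above (class and method names are illustrative of the document's design, not copied from it):

```python
import math
from abc import ABC, abstractmethod

class Shape(ABC):                      # abstraction: a common interface
    @abstractmethod
    def area(self) -> float: ...

class Rectangle(Shape):                # inheritance from Shape
    def __init__(self, w: float, h: float):
        self.w, self.h = w, h
    def area(self) -> float:           # polymorphism via method overriding
        return self.w * self.h

class Circle(Shape):
    def __init__(self, r: float):
        self.r = r
    def area(self) -> float:
        return math.pi * self.r ** 2

class Triangle(Shape):
    def __init__(self, base: float, height: float):
        self.base, self.height = base, height
    def area(self) -> float:
        return 0.5 * self.base * self.height

shapes: list[Shape] = [Rectangle(3, 4), Circle(1), Triangle(6, 2)]
areas = [s.area() for s in shapes]     # one call site, three behaviors
print(areas)
```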
Some languages, like SML, Haskell, and Scala, have built-in support for pattern matching, which is a generic way of branching based on the structure of data.
While not without its drawbacks, pattern matching can help eliminate a lot of boilerplate, and it's often cited as a reason why functional programming languages are so concise.
In this talk, John A. De Goes talks about the differences between built-in patterns, and so-called first-class patterns (which are "do-it-yourself" patterns implemented using other language features).
Unlike built-in patterns, first-class patterns aren't magical, so you can store them in variables and combine them in lots of interesting ways that aren't always possible with built-in patterns. In addition, almost every programming language can support first-class patterns (albeit with differing levels of effort and type-safety).
During the talk, you'll watch as a mini-pattern matching library is developed, and have the opportunity to follow along and build your own pattern matching library in the language of your choice.
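In the spirit of the talk, here is a tiny first-class-pattern sketch; the talk builds its own library, so this is only my minimal approximation, where a pattern is a function returning captured bindings on success and None on failure:

```python
# A pattern is a function: value -> dict of bindings on success, None on failure.

def lit(expected):
    return lambda v: {} if v == expected else None

def var(name):
    return lambda v: {name: v}

def pair(p1, p2):
    def match(v):
        if not (isinstance(v, tuple) and len(v) == 2):
            return None
        m1, m2 = p1(v[0]), p2(v[1])
        if m1 is None or m2 is None:
            return None
        return {**m1, **m2}
    return match

def alt(p1, p2):
    """First-class patterns compose: try p1, fall back to p2."""
    return lambda v: p1(v) if p1(v) is not None else p2(v)

# Patterns are ordinary values: store them, pass them around, combine them.
on_y_axis = pair(lit(0), var("y"))
assert on_y_axis((0, 5)) == {"y": 5}
assert on_y_axis((1, 5)) is None
assert alt(lit("quit"), var("cmd"))("go") == {"cmd": "go"}
```

Being ordinary values is exactly what built-in `match`/`case` patterns are not: you cannot put a built-in pattern in a variable or pass it to a function.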
This document provides an overview of algebraic structures like monoids, lattices, groups, and rings. It discusses how these structures appear at both the value level and type level in Scala. At the value level, concepts like monoids, semigroups, and groups are demonstrated with examples. At the type level, product and sum types are shown to form monoids, and union/intersection types form lattices. Higher kinded type classes are used to abstract over these structures. Applications to data processing and error handling are also covered. In conclusion, algebraic structures provide useful abstractions both mathematically and in Scala's type system.
The document provides an overview of the Seaborn Python library for statistical data visualization. It discusses preparing data, controlling figure aesthetics, basic plot types like scatter plots and histograms, customizing plots, and using built-in datasets. Key steps include importing libraries, setting the style, loading datasets, and calling plotting functions to visualize relationships in the data.
The document provides an overview of the Python library Bokeh for interactive data visualization. It summarizes the basic steps to create plots which include preparing data, creating a plot, adding renderers to visualize the data, specifying the output, and showing or saving the results. It also describes various plot types that can be created like bar charts, box plots, histograms, and scatter plots. Additionally, it covers options for customizing plots, arranging multiple plots, and embedding plots.
This document summarizes a presentation about type classes in Scala. It introduces type classes like Monoid and Functor. It provides examples of Monoid instances for types like Int and Option. It explains how to define type classes for new types. It shows how to generalize functions like sequence and traverse to work for any Applicative functor using type classes. Finally, it discusses related concepts like Semigroup, Semigroupal functors, and NonEmptyList.
Scala, Haskell and LISP are examples of programming languages using the functional programming paradigm. Join us in this TechTalk to learn why functional programming is so important, how to implement some of its core concepts in your existing programming languages, and how functional programming inspired Google's MapReduce, Twitter's Algebird, and many other technologies.
By Mohammad Ghabboun - Senior Software Engineer, SOUQ.com
Visualisation alone is not enough to solve most data analysis challenges. The data may be too big or too messy to show in a single plot. In this talk, I'll outline my current thinking about how the synthesis of visualisation, modeling, and data manipulation allows you to effectively explore and understand large and complex datasets. There are three key ideas:
1. Using tidyr to make nested data frames, where one column is a list of data frames.
2. Using purrr to apply functional programming tools instead of writing for loops.
3. Visualising models by converting them to tidy data with broom (by David Robinson).
This work is embedded in R so I'll not only talk about the ideas, but show concrete code for working with large sets of models. You'll see how you can combine the dplyr and purrr packages to fit many models, then use tidyr and broom to convert to tidy data which can be visualised with ggplot2.
With the availability of powerful but relatively low-level plotting libraries like d3.js, plot.ly, and matplotlib, it is easier than it has ever been to create beautiful visualizations. However, these plotting libraries must be very general and thus quite complicated to accommodate arbitrarily complex plotting and visualization tasks. In this talk, I describe the plotting system used by yt, an analysis and visualization platform for volumetric data written in python. The yt plotting system wraps matplotlib, creating a domain-specific API for creating publication quality plots that matches users' intuition for how they would like to explore and visualize their data. I will provide tips for designing and testing domain-specific plotting APIs so that the resulting plots are beautiful by default, but still modifiable with the full power of the underlying plotting library.
This document discusses using data to build new products and solve business problems. It outlines linking different data sets together and adding to existing data to gain new insights. Examples are given of tying demographic data to interest data to better understand audiences. Specific examples discussed include analyzing over 3.6 million tweets to understand trends around Halloween and using social listening, demographics, interests and history to inform dating predictions. The importance of clear visualizations and designing products around user workflows is emphasized.
PLOTCON NYC: The Future of Business Intelligence: Data Visualization (Plotly)
This document discusses the importance and rise of data visualization. It notes that we are in an era of "big data" where vast amounts of data are being generated and collected daily through activities like searching, browsing, communicating, shopping, and more. However, simply having data is not enough - the data needs to be easier to understand and act upon. The document argues that data visualization is an essential skill for communicating information to others in an efficient and effective way. It discusses some of the challenges in designing good visualizations that are readable, interpretable, meaningful, relevant and timely for audiences. The document provides tips on designing visualizations with the audience and comprehension in mind through techniques like annotation and animation.
PLOTCON NYC: Behind Every Great Plot There's a Great Deal of Wrangling (Plotly)
If you are struggling to make a plot, tear yourself away from stackoverflow for a moment and ... take a hard look at your data. Is it really in the most favorable form for the task at hand? Time and time again I have found that my visualization struggles are really a symptom of unfinished data wrangling. R has long had excellent facilities for data aggregation or "split-apply-combine": split an object into pieces, compute on each piece, and glue the result back together again. Recent developments, especially in the purrr package, have made "split-apply-combine" even easier and more general. But this requires a certain comfort level with lists, especially with lists that are columns inside a data frame. This is unfamiliar to most of us. I give an overview of this set of problems and match them up with solutions based on grouped, nested, and split data frames.
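The split-apply-combine idea is language-agnostic; the talk itself uses R and purrr, but the same pattern can be sketched with the Python standard library (data and key names below are mine):

```python
from itertools import groupby
from statistics import mean

records = [
    ("a", 1), ("a", 3), ("b", 2), ("b", 4), ("b", 6),
]

# Split: group rows by key (itertools.groupby needs sorted input).
records.sort(key=lambda r: r[0])
groups = {k: [v for _, v in g] for k, g in groupby(records, key=lambda r: r[0])}

# Apply: compute on each piece.
means = {k: mean(vs) for k, vs in groups.items()}

# Combine: the per-group results are glued back into one structure.
print(means)
```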
PLOTCON NYC: Custom Colormaps for Your Field (Plotly)
Visualizations can be clear or obscure depending on the color scheme used to represent the data, and careful use of color can also be attractive. However, colormaps have not generally received the attention they deserve, given their significance. The colors used carry the responsibility of conveying data honestly and accurately. They should generally be perceptually uniform so that equal steps through the dataset are represented by equal perceptual jumps in the colormap. They should be intuitive to help support quick, natural understanding of the data. They should match basic properties of the data, like showing the presence of information (sequential) or anomalies in a field (diverging). Additionally, just as different variables are typically represented with different specific Greek letters when written, different variables should also be represented with different colormaps when plotted. A suite of colormaps called cmocean have been developed to meet the needs of oceanographers, and can be used by any plotter out there. The suite is freely available for many different software packages (including Python and R). You can use these colormaps to help convey your data honestly and accurately.
PLOTCON NYC: Interactive Visual Statistics on Massive Datasets (Plotly)
Visualization is oftentimes the best way to explore raw data. But as data grows to include millions and billions of points, traditional visualization techniques break down. Whether you're loading the data into limited memory, or separating the signal from the noise when thousands of data points occupy each pixel, as data gets big, visualization gets challenging.
In this talk, Peter will describe an approach called "datashading" that deconstructs the classical infovis pipeline to place statistical processing at the heart of the visualization task. The result is a scalable, interactive system that is easy to use and produces perceptually accurate renderings of extremely large datasets. He will show the open-source Datashader library, which implements these ideas, and makes them available within Jupyter notebooks and Bokeh data applications.
PLOTCON NYC: Get Your Point Across: The Art of Choosing the Right Visualizati... (Plotly)
Why does one decide to visualize data? And once they have decided to visualize their data, how do they know the best way to tell their story? To answer these questions, this talk will focus on the iterative design process, visualization fundamentals, and storytelling techniques. We will then anchor these principles in effective visualization examples.
Of particular importance is the ability and willingness to refine and redefine your objectives as you determine the right visualization for you (or your client/audience). This talk will walk through a Datascope client case study in order to convey the importance of a flexible and collaborative process when approaching data visualization problems in order to deliver the best end result.
The audience will walk away armed with helpful visualization techniques and an understanding of the iterative design process.
PLOTCON NYC: New Data Viz in Data Journalism (Plotly)
In this talk I present a survey of forms and tools that are used by practicing data journalists. I walk through examples of different techniques used by journalists to convey complex information to readers, including static charts and graphics, probabilistic models, simulations, and others. I discuss the tools that are available for creating such storytelling devices, examining their successes and shortcomings, and speculate on future directions. I also look at how open source software has impacted journalism. The audience should walk away with a better understanding of how data journalists work in practice, what tools are available for citizen data journalists, and how journalists can work together with the open source community.
Functional programming avoids changing-state and mutable data. Referential transparency means expressions can be replaced without affecting observable behavior. Pure functions only depend on argument values and have no other effects. Case classes provide functionality like equals, hashCode and pattern matching out of the box. Futures allow running blocking operations asynchronously and chaining results with map, flatMap and for comprehensions. Implicits allow type conversions and providing parameters implicitly. Sealed classes allow exhaustive pattern matching of a type hierarchy.
This document discusses SQL and database connectivity using Python. It covers SQL statements like SELECT, INSERT, UPDATE, DELETE. It describes database tables like orders and parts with sample data. It also explains how to connect to databases using Python DB-API modules, execute queries, retrieve and manipulate result sets. Key methods like cursor.execute, fetchall are demonstrated along with transaction handling.
Traverse allows running actions over data structures and accumulating results. It traverses data like lists and vectors, running an applicative functor like Future or Option for each element. Variations include parallel, non-empty, unordered, and flat traversals. Traverse underpins many operations and is a powerful abstraction for representing imperative loops in a functional way.
N-Queens Combinatorial Problem - Polyglot FP for fun and profit - Haskell and...Philip Schwarz
Learn how to write FP code that displays a graphical representation of all the numerous N-Queens solutions for N=4,5,6,7,8 .
See how to neatly solve the problem by exploiting its self-similarity and using a divide and conquer approach.
Make light work of assembling multiple images into a whole, by exploiting Doodle’s facilities for combining images using a relative layout.
See relevant FP functions, like Foldable’s intercalate and intersperse, in action.
Code for part 3: https://github.com/philipschwarz/n-queens-combinatorial-problem-scala-part-3
After 10 years of Object Orientated Java, 2 years of Functional Programming in Scala was enough to convince me that I could never switch back. But why? The answer is simple: Functional Scala lets you think less. It reduces the number of moving parts you need to hold in your head, lets you stay focussed and saves your mental stack from overflowing.
In this talk I'll show you how to stop treating Scala as a better Java and start exploring the world of Functional Programming. I'll use code examples to demonstrate a four step path that'll let you ease yourself into the world of Functional Programming while continuing to deliver production quality code.
With each step we'll learn a new technique, and understand how it leaves you with less to think about: Hopefully this talk will convince you that Functional Scala leads to code that's easier to hold in your head, and leave you excited about learning a new paradigm.
The document discusses concepts of object-oriented programming including classes, objects, encapsulation, inheritance, polymorphism, and abstraction. It defines a Shape class and subclasses like Rectangle, Circle, and Triangle. It demonstrates inheritance by having the subclasses inherit attributes and behaviors from the Shape class. It also shows polymorphism through method overriding to calculate the area for different shapes.
Some languages, like SML, Haskell, and Scala, have built-in support for pattern matching, which is a generic way of branching based on the structure of data.
While not without its drawbacks, pattern matching can help eliminate a lot of boilerplate, and it's often cited as a reason why functional programming languages are so concise.
In this talk, John A. De Goes talks about the differences between built-in patterns, and so-called first-class patterns (which are "do-it-yourself" patterns implemented using other language features).
Unlike built-in patterns, first-class patterns aren't magical, so you can store them in variables and combine them in lots of interesting ways that aren't always possible with built-in patterns. In addition, almost every programming language can support first-class patterns (albeit with differing levels of effort and type-safety).
During the talk, you'll watch as a mini-pattern matching library is developed, and have the opportunity to follow along and build your own pattern matching library in the language of your choice.
This document provides an overview of algebraic structures like monoids, lattices, groups, and rings. It discusses how these structures appear at both the value level and type level in Scala. At the value level, concepts like monoids, semigroups, and groups are demonstrated with examples. At the type level, product and sum types are shown to form monoids, and union/intersection types form lattices. Higher kinded type classes are used to abstract over these structures. Applications to data processing and error handling are also covered. In conclusion, algebraic structures provide useful abstractions both mathematically and in Scala's type system.
The document provides an overview of the Seaborn Python library for statistical data visualization. It discusses preparing data, controlling figure aesthetics, basic plot types like scatter plots and histograms, customizing plots, and using built-in datasets. Key steps include importing libraries, setting the style, loading datasets, and calling plotting functions to visualize relationships in the data.
The document provides an overview of the Python library Bokeh for interactive data visualization. It summarizes the basic steps to create plots which include preparing data, creating a plot, adding renderers to visualize the data, specifying the output, and showing or saving the results. It also describes various plot types that can be created like bar charts, box plots, histograms, and scatter plots. Additionally, it covers options for customizing plots, arranging multiple plots, and embedding plots.
This document summarizes a presentation about type classes in Scala. It introduces type classes like Monoid and Functor. It provides examples of Monoid instances for types like Int and Option. It explains how to define type classes for new types. It shows how to generalize functions like sequence and traverse to work for any Applicative functor using type classes. Finally, it discusses related concepts like Semigroup, Semigroupal functors, and NonEmptyList.
Scala, Haskell and LISP are examples of programming languages using the functional programming paradigm. Join us in this TechTalk to know why functional programming is so important, how to implement some of its core concepts in your existing programming languages, and how functional programming inspired Google's Map Reduce, Twitter's Algebird, and many other technologies.
By Mohammad Ghabboun - Senior Software Engineer, SOUQ.com
Visualisation alone is not enough to solve most data analysis challenges. The data may be too big or too messy to show in a single plot. In this talk, I'll outline my current thinking about how the synthesis of visualisation, modeling, and data manipulation allows you to effectively explore and understand large and complex datasets. There are three key ideas:
1. Using tidyr to make nested data frame, where one column is a list of data frames.
2. Using purrr to use function programming tools instead of writing for loops
3. Visualising models by converting them to tidy data with broom, by David Robinson.
This work is embedded in R so I'll not only talk about the ideas, but show concrete code for working with large sets of models. You'll see how you can combine the dplyr and purrr packages to fit many models, then use tidyr and broom to convert to tidy data which can be visualised with ggplot2.
With the availability of powerful but relatively low-level plotting libraries like d3.js, plot.ly, and matplotlib, it is easier than it has ever been to create beautiful visualizations. However, these plotting libraries must be very general and thus quite complicated to accommodate arbitrarily complex plotting and visualization tasks.In this talk, I describe the plotting system used by yt, an analysis and visualization platform for volumetric data written in python. The yt plotting system wraps matplotlib, creating a domain-specific API for creating publication quality plots that matches users' intuition for how they would like to explore and visualize their data. I will provide tips for designing and testing domain-specific plotting APIs so that the resulting plots are beautiful by default, but still modifiable with the full power of the underlying plotting library.
This document discusses using data to build new products and solve business problems. It outlines linking different data sets together and adding to existing data to gain new insights. Examples are given of tying demographic data to interest data to better understand audiences. Specific examples discussed include analyzing over 3.6 million tweets to understand trends around Halloween and using social listening, demographics, interests and history to inform dating predictions. The importance of clear visualizations and designing products around user workflows is emphasized.
PLOTCON NYC: The Future of Business Intelligence: Data VisualizationPlotly
This document discusses the importance and rise of data visualization. It notes that we are in an era of "big data" where vast amounts of data are being generated and collected daily through activities like searching, browsing, communicating, shopping, and more. However, simply having data is not enough - the data needs to be easier to understand and act upon. The document argues that data visualization is an essential skill for communicating information to others in an efficient and effective way. It discusses some of the challenges in designing good visualizations that are readable, interpretable, meaningful, relevant and timely for audiences. The document provides tips on designing visualizations with the audience and comprehension in mind through techniques like annotation and animation.
PLOTCON NYC: Behind Every Great Plot There's a Great Deal of WranglingPlotly
If you are struggling to make a plot, tear yourself away from stackoverflow for a moment and ... take a hard look at your data. Is it really in the most favorable form for the task at hand? Time and time again I have found that my visualization struggles are really a symptom of unfinished data wrangling. R has long had excellent facilities for data aggregation or "split-apply-combine": split an object into pieces, compute on each piece, and glue the result back together again. Recent developments, especially in the purrr package, have made "split-apply-combine" even easier and more general. But this requires a certain comfort level with lists, especially with lists that are columns inside a data frame. This is unfamiliar to most of us. I give an overview of this set of problems and match them up with solutions based on grouped, nested, and split data frames.
PLOTCON NYC: Custom Colormaps for Your FieldPlotly
Visualizations can be clear or obscure depending on the color scheme used to represent the data, and careful use of color can also be attractive. However, colormaps have not generally received the attention they deserve, given their significance. The colors used carry the responsibility of conveying data honestly and accurately. They should generally be perceptually uniform so that equal steps through the dataset are represented by equal perceptual jumps in the colormap. They should be intuitive to help support quick, natural understanding of the data. They should match basic properties of the data, like showing the presence of information (sequential) or anomalies in a field (diverging). Additionally, just as different variables are typically represented with different specific Greek letters when written, different variables should also be represented with different colormaps when plotted. A suite of colormaps called cmocean have been developed to meet the needs of oceanographers, and can be used by any plotter out there. The suite is freely available for many different software packages (including Python and R). You can use these colormaps to help convey your data honestly and accurately.
PLOTCON NYC: Interactive Visual Statistics on Massive DatasetsPlotly
Visualization is oftentimes the best way to explore raw data. But as data grows to include millions and billions of points, traditional visualization techniques break down. Whether you're loading the data into limited memory, or separating the signal from the noise when thousands of data points occupy each pixel, as data gets big, visualization gets challenging.
In this talk, Peter will describe an approach called "datashading" that deconstructs the classical infovis pipeline to place statistical processing at the heart of the visualization task. The result is a scalable, interactive system that is easy to use and produces perceptually accurate renderings of extremely large datasets. He will show the open-source Datashader library, which implements these ideas, and makes them available within Jupyter notebooks and Bokeh data applications.
PLOTCON NYC: Get Your Point Across: The Art of Choosing the Right Visualizati...Plotly
Why does one decide to visualize data? And once they have decided to visualize their data, how do they know the best way to tell their story? To answer these questions, this talk will focus on the iterative design process, visualization fundamentals, and storytelling techniques. We will then anchor these principles in effective visualization examples.
Of particular importance is the ability and willingness to refine and redefine your objectives as you determine the right visualization for you (or your client/audience). This talk will walk through a Datascope client case study in order to convey the importance of a flexible and collaborative process when approaching data visualization problems in order to deliver the best end result.
The audience will walk away armed with helpful visualization techniques and an understanding of the iterative design process.
PLOTCON NYC: New Data Viz in Data JournalismPlotly
In this talk I present a survey of forms and tools that are used by practicing data journalists. I walk through examples of different techniques used by journalists to convey complex information to readers, including static charts and graphics, probabilistic models, simulations, and others. I discuss the tools that are available for creating such storytelling devices, examining their successes and shortcomings, and speculate on future directions. I also look at how open source software has impacted journalism. The audience should walk away with a better understanding of how data journalists work in practice, what tools are available for citizen data journalists, and how journalists can work together with the open source community.
PLOTCON NYC: Data Science in the Enterprise From Concept to ExecutionPlotly
Data science can create incredible value for companies. Those that do it well use it as a tool for strategic differentiation in the market. However, generating value from data science, whether by embedding it into an actual product or using it to drive business strategy and operations, can be complex. Particularly with strategy and operations, delivering value in the enterprise from data science has a unique set of challenges. In this context, value is created only after the results of an analysis have led to actions in the line of business. To achieve success, this requires a complementary set of skills in addition to data analysis and modeling: business acumen, persuasion, coordination, processes, and execution. In this talk, we will discuss the concept of analytics to execution at Red Hat, and how delivering value from data science in the enterprise extends far beyond the traditional data science workflow.
PLOTCON NYC: Building a Flexible Analytics StackPlotly
ABSTRACT
Board rooms and business reports have long been stuffed with basic graphs and ugly charts. Now, inspired by the work that appears in today's newspapers and scientific journals, and powered by tools like Plotly, businesses are doing much more to find answers and insight in their data. This talk will outline both how companies, and particularly leading tech startups, are combining--and often building--technologies that collect, prepare, move, transform, and visualize data, and the problems these businesses are solving with this new stack of data tools.
PLOTCON NYC: Mapping Networked Attention: What We Learn from Social DataPlotly
At a time when attention is a scarce commodity, true power lies in understanding the networked nature of digital audiences - who is a central authority, who resides at the periphery, and how friends, followers and fans are inter-connected. It is no longer possible to demand one's attention, or even expect it at a certain point in time. For a message to spread, it must be picked out from overflowing streams of updates, photos and links, and chosen to be reposted by each individual. The networked nature of social media may give some messages an overwhelming boost in popularity, but in most cases they fade as fast as they were created. It is imperative that we use available data to better model, track and gain insight about our audience in order to make the optimal decision at any given time.
PLOTCON NYC: The Architecture of Jupyter: Protocols for Interactive Data Expl...Plotly
Project Jupyter, evolved from the IPython environment, provides a platform for interactive computing that is widely used today in research, education, journalism and industry. The core premise of the Jupyter architecture is to design tools around the experience of interactive computing, building an environment, protocol, file format and libraries optimized for the computational process when there is a human in the loop, in a live iteration with ideas and data assisted by the computer.
In this talk, I will discuss what are the basic ideas that underpin Jupyter, and how they provide "lego blocks" that enable the project team, and the broader community, to develop a variety of tools and approaches to problems in interactive computing, data science, visualization and more.
Dev Concepts: Object-Oriented ProgrammingSvetlin Nakov
What Is Object-Oriented Programming?
Watch the video lesson from Svetlin Nakov and learn more at:
https://softuni.org/dev-concepts/what-is-object-oriented-programming
This document provides an introduction to the Python programming language. It covers topics such as data types, control statements, functions, input/output, errors and exceptions, object oriented programming, modules and packages. The document is presented over multiple slides with code examples.
Introduction to Pylab and Matplotlib. yazad dumasia
This document provides an introduction and overview of the Pylab module in Python. It discusses how Pylab is embedded in Matplotlib and provides a MATLAB-like experience for plotting and visualization. The document then provides examples of basic plotting libraries that can be used with Matplotlib like NumPy. It also demonstrates how to install Matplotlib on different operating systems like Windows, Ubuntu Linux, and CentOS Linux. Finally, it showcases various basic plot types like line plots, scatter plots, histograms, pie charts, and subplots with code examples.
Sierpinski Triangle - Polyglot FP for Fun and Profit - Haskell and ScalaPhilip Schwarz
Take the very first baby steps on the path to doing graphics in Haskell and Scala.
Learn about a simple yet educational recursive algorithm producing images that are pleasing to the eye.
Learn how functional programs deal with the side effects required to draw images.
See how libraries like Gloss and Doodle make drawing Sierpinski’s triangle a doddle.
Code for this slide deck:
https://github.com/philipschwarz/sierpinski-triangle-haskell-gloss
https://github.com/philipschwarz/sierpinski-triangle-scala-cats-io
https://github.com/philipschwarz/sierpinski-triangle-scala-awt-and-doodle
Errata:
1. the title 'Sierpinski Triangle' on the front slide could be improved by replacing it with 'Sierpinski's Triangle'.
2. a couple of typos on two slides
3. the triangles drawn using Doodle are not equilateral, as intended, but isosceles.
(UPDATE 2021-06-15 I opened PR https://github.com/creativescala/doodle/pull/99 and as a result, an equilateral triangle has now been added to Doodle: https://github.com/creativescala/doodle/commit/30d20efebcc2016942e9cdbae85fefca5b95fa3c).
Here is a corrected version of the deck: https://www.slideshare.net/pjschwarz/sierpinski-triangle-polyglot-fp-for-fun-and-profit-haskell-and-scala-with-minor-corrections
Fifth part of the Course "Java Open Source GIS Development - From the building blocks to extending an existing GIS application." held at the University of Potsdam in August 2011
Functional Core and Imperative Shell - Game of Life Example - Haskell and ScalaPhilip Schwarz
See a program structure flowchart used to highlight how an FP program breaks down into a functional core and imperative shell
View a program structure flowchart for the Game of Life
See the code for Game of Life’s functional core and imperative shell, both in Haskell and in Scala.
Code:
https://github.com/philipschwarz/functional-core-imperative-shell-scala
https://github.com/philipschwarz/functional-core-imperative-shell-haskell
The Julia programming language is a high-level, high-performance dynamic programming language for technical computing. It can be applied to data science, machine learning tasks, the web, and more. These slides are a brief introduction to this amazing language, which facilitates my daily activities as a data scientist and software engineer. For more information about the language, visit http://julialang.org/.
The document discusses developing a Python scraping API that extracts data from various sources like databases, spreadsheets, PDFs, and text files. It outlines the key steps as:
1. Connecting to databases and extracting data using Python libraries like PyMySQL and Pandas.
2. Extracting data from spreadsheets using openpyxl and extracting text, links, images from PDFs using libraries like PyPDF2, PdfPlumber, and PyMuPDF.
3. Processing and storing the extracted data in a MySQL database with tables created using SQL commands.
Introduction to Python 01-08-2023.pon DRVaibhavmeshram1
The Python language is used in engineering.
Story adapted from Stephen Covey (2004), "The Seven Habits of Highly Effective People" (Simon & Schuster).
“Management is doing things right, leadership is doing the right things”
(Warren Bennis and Peter Drucker)
The Sponsor:
Champion and advocates for the change at their level in the organization.
A Sponsor is the person who won’t let the change initiative die from lack of attention, and is willing to use their political capital to make the change happen
The Role model:
Behaviors and attitudes demonstrated by them are looked upon by everyone else. Hence, they must be willing to go first.
Employees watch leaders for consistency between words and actions to see if they should believe the change is really going to happen.
The decision maker:
Leaders usually control resources such as people, budgets, and equipment, and thus have the authority to make decisions (as per their span of control) that affect the initiative.
During change, leaders must leverage their decision-making authority and choose the options that will support the initiative.
The Decision-Maker is decisive and sets priorities that support change.
Contract-driven development with OpenAPI 3 and Vert.x | DevNation Tech TalkRed Hat Developers
Have you ever been frustrated by developing and documenting an HTTP API? When it comes down to defining the HTTP interface between frontend and backend, have you ever had problems specifying the parameters or the shape of the body without misunderstandings? In this talk we’ll introduce you to "Contract Driven Development" (or API Design First approach), a methodology that uses declarative API Contracts to enable developers to efficiently design, communicate, and evolve their HTTP APIs, while automating API implementation phases where possible. In order to implement this methodology, we’ll show you how to develop an API contract using OpenAPI 3 and how you can easily implement the HTTP endpoints using Vert.x Web OpenAPI.
These questions will be a bit advanced level 2sadhana312471
These questions will be a bit advanced (intermediate) in terms of Python interviews.
This is a continuation of Nail the Python Interview Questions.
The fields that these questions will help you in are:
• Python Developer
• Data Analyst
• Research Analyst
• Data Scientist
This document provides a tutorial on data science in Python. It discusses Python's history and the Jupyter notebook interface. It also demonstrates how to import Python packages, load data, inspect data, and munge data for analysis. Specific techniques shown include importing datasets, checking data types and dimensions, selecting rows and columns, and obtaining summary information about the data.
The slides shown here have been used for talks given to scientists in informal contexts.
Python is introduced as a valuable tool for both producing and evaluating data.
The talk is essentially a guided tour of the author's favourite parts of the Python ecosystem. Besides the Python language itself, NumPy and SciPy as well as Matplotlib are mentioned.
A last part of the talk concerns itself with code execution speed. With this problem in sight, Cython and f2py are introduced as means of glueing different languages together and speeding Python up.
The source code for the slides, code snippets and further links are available in a git repository at
https://github.com/aeberspaecher/PythonForScientists
The document discusses installing Python 3 on Ubuntu and Windows systems. It provides step-by-step instructions for installing Python 3.8 using apt on Ubuntu and downloading/running the installer on Windows. Basic Python data visualization techniques like line plots, bar charts, histograms, box plots, and scatter plots are then introduced using the Matplotlib library. Code examples are given for creating each type of plot.
Kotlin is a JVM language developed by JetBrains. Its version 1.0 (production ready) was released at the beginning of the year and generated some buzz within the Android community. This session proposes to discover this language, which takes up some aspects of Groovy or Scala, and is very close to Swift in syntax and concepts. We will see how Kotlin boosts the productivity of Java & Android application development and how well it accompanies reactive development.
https://www.dmdiploma.com/studymaterial?id=5/python-for-data-science
This Python course provides a beginner-friendly introduction to Python for Data Science.
The document discusses properties in Python classes. Properties allow accessing attributes through normal attribute syntax, while allowing custom behavior through getter and setter methods. This avoids directly accessing attributes and allows for validation in setters. Properties are defined using the @property and @setter decorators, providing a cleaner syntax than regular getter/setter methods. They behave like regular attributes but allow underlying method calls.
This document summarizes key aspects of iteration in Python based on the provided document:
1. Python supports multiple ways of iteration including for loops and generators. For loops are preferred for iteration over finite collections while generators enable infinite iteration.
2. Common iteration patterns include iterating over elements, indices, or both using enumerate(). Numerical iteration can be done with for loops or while loops.
3. Functions are first-class objects in Python and can be passed as arguments or returned as values, enabling functional programming patterns like mapping and filtering.
Similar to PLOTCON NYC: PlotlyJS.jl: Interactive plotting in Julia (20)
Global Situational Awareness of A.I. and where it's headedvikram sood
You can see the future first in San Francisco.
Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans. Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might. By the end of the decade, American electricity production will have grown tens of percent; from the shale fields of Pennsylvania to the solar farms of Nevada, hundreds of millions of GPUs will hum.
The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be unleashed, and before long, The Project will be on. If we’re lucky, we’ll be in an all-out race with the CCP; if we’re unlucky, an all-out war.
Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them. Nvidia analysts still think 2024 might be close to the peak. Mainstream pundits are stuck on the wilful blindness of “it’s just predicting the next word”. They see only hype and business-as-usual; at most they entertain another internet-scale technological change.
Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them. A few years ago, these people were derided as crazy—but they trusted the trendlines, which allowed them to correctly predict the AI advances of the past few years. Whether these people are also right about the next few years remains to be seen. But these are very smart people—the smartest people I have ever met—and they are the ones building this technology. Perhaps they will be an odd footnote in history, or perhaps they will go down in history like Szilard and Oppenheimer and Teller. If they are seeing the future even close to correctly, we are in for a wild ride.
Let me tell you what we see.
Enhanced Enterprise Intelligence with your personal AI Data Copilot.pdfGetInData
Recently we have observed the rise of open-source Large Language Models (LLMs) that are community-driven or developed by the AI market leaders, such as Meta (Llama3), Databricks (DBRX) and Snowflake (Arctic). On the other hand, there is a growth in interest in specialized, carefully fine-tuned yet relatively small models that can efficiently assist programmers in day-to-day tasks. Finally, Retrieval-Augmented Generation (RAG) architectures have gained a lot of traction as the preferred approach for LLMs context and prompt augmentation for building conversational SQL data copilots, code copilots and chatbots.
In this presentation, we will show how we built upon these three concepts a robust Data Copilot that can help to democratize access to company data assets and boost performance of everyone working with data platforms.
Why do we need yet another (open-source ) Copilot?
How can we build one?
Architecture and evaluation
Predictably Improve Your B2B Tech Company's Performance by Leveraging DataKiwi Creative
Harness the power of AI-backed reports, benchmarking and data analysis to predict trends and detect anomalies in your marketing efforts.
Peter Caputa, CEO at Databox, reveals how you can discover the strategies and tools to increase your growth rate (and margins!).
From metrics to track to data habits to pick up, enhance your reporting for powerful insights to improve your B2B tech company's marketing.
- - -
This is the webinar recording from the June 2024 HubSpot User Group (HUG) for B2B Technology USA.
Watch the video recording at https://youtu.be/5vjwGfPN9lw
Sign up for future HUG events at https://events.hubspot.com/b2b-technology-usa/
STATATHON: Unleashing the Power of Statistics in a 48-Hour Knowledge Extravag...sameer shah
"Join us for STATATHON, a dynamic 2-day event dedicated to exploring statistical knowledge and its real-world applications. From theory to practice, participants engage in intensive learning sessions, workshops, and challenges, fostering a deeper understanding of statistical methodologies and their significance in various fields."
The Building Blocks of QuestDB, a Time Series Databasejavier ramirez
Talk Delivered at Valencia Codes Meetup 2024-06.
Traditionally, databases have treated timestamps just as another data type. However, when performing real-time analytics, timestamps should be first class citizens and we need rich time semantics to get the most out of our data. We also need to deal with ever growing datasets while keeping performant, which is as fun as it sounds.
It is no wonder time-series databases are now more popular than ever before. Join me in this session to learn about the internal architecture and building blocks of QuestDB, an open source time-series database designed for speed. We will also review a history of some of the changes we have gone over the past two years to deal with late and unordered data, non-blocking writes, read-replicas, or faster batch ingestion.
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Data and AI
https://www.meetup.com/unstructured-data-meetup-new-york/
This meetup is for people working in unstructured data. Speakers present related topics such as vector databases, LLMs, and managing data at scale. The intended audience of this group includes roles like machine learning engineers, data scientists, data engineers, software engineers, and PMs. This meetup was formerly Milvus Meetup, and is sponsored by Zilliz, maintainers of Milvus.
Natural Language Processing (NLP), RAG and its applications .pptxfkyes25
In the realm of Natural Language Processing (NLP), knowledge-intensive tasks such as question answering, fact verification, and open-domain dialogue generation require the integration of vast and up-to-date information. Traditional neural models, though powerful, struggle with encoding all necessary knowledge within their parameters, leading to limitations in generalization and scalability. The paper "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks" introduces RAG (Retrieval-Augmented Generation), a novel framework that synergizes retrieval mechanisms with generative models, enhancing performance by dynamically incorporating external knowledge during inference.
4th Modern Marketing Reckoner by MMA Global India & Group M: 60+ experts on W...Social Samosa
The Modern Marketing Reckoner (MMR) is a comprehensive resource packed with POVs from 60+ industry leaders on how AI is transforming the 4 key pillars of marketing – product, place, price and promotions.
Beyond the Basics of A/B Tests: Highly Innovative Experimentation Tactics You...Aggregage
This webinar will explore cutting-edge, less familiar but powerful experimentation methodologies which address well-known limitations of standard A/B Testing. Designed for data and product leaders, this session aims to inspire the embrace of innovative approaches and provide insights into the frontiers of experimentation!
6. In [2]: f(x, y) = "Two arguments: $(x) and $(y)"
f("hello", "plotcon")
Out[2]: "Two arguments: hello and plotcon"
7. In [3]: f(x::Number, y) = "First arg is a number ($(x)), second isn't ($(y))"
Out[3]: f (generic function with 2 methods)
In [4]: # still old method
f("hello", "plotcon")
Out[4]: "Two arguments: hello and plotcon"
In [5]: # new method
f(2, "plotcon")
Out[5]: "First arg is a number (2), second isn't (plotcon)"
In [6]: # Generics
# also new method, but this time with floating point first argument
f(2.0, "plotcon")
Out[6]: "First arg is a number (2.0), second isn't (plotcon)"
8. In [7]: # longer function syntax
function f(x::Number, y::Number)
    "Two numbers: ($(x), $(y))"
end
Out[7]: f (generic function with 3 methods)
In [8]: # newest method
f(2.0, 2)
Out[8]: "Two numbers: (2.0, 2)"
In [9]: # unsigned 8 bit int and BigInt
f(0x81, big(4))
Out[9]: "Two numbers: (129, 4)"
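Slides 6 through 8 together define three methods of a single generic function f. The sequence is short enough to run end-to-end as a self-contained script in plain Julia; the assertions below simply restate the Out[] values shown on the slides:

```julia
# All three methods from the slides. Julia dispatches each call to the
# most specific method whose signature matches the argument types.
f(x, y) = "Two arguments: $(x) and $(y)"
f(x::Number, y) = "First arg is a number ($(x)), second isn't ($(y))"
function f(x::Number, y::Number)
    "Two numbers: ($(x), $(y))"
end

# Neither argument is a Number: falls back to the untyped method.
@assert f("hello", "plotcon") == "Two arguments: hello and plotcon"
# First argument is a Number, second is not.
@assert f(2, "plotcon") == "First arg is a number (2), second isn't (plotcon)"
# Both arguments are Numbers: most specific method wins.
@assert f(2.0, 2) == "Two numbers: (2.0, 2)"
# One generic function, three methods.
@assert length(methods(f)) == 3
```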
21. In [18]: # layout optional
plot(trace2)
Out[18]:
22. In [19]: # more than one trace
plot([trace1, trace3], layout)
Out[19]:
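Before the convenience API is introduced, it helps to remember what a trace actually is: plotly.js consumes plain JSON, so each trace is just structured key-value data following the plotly.js schema. A rough sketch in base Julia, using a Dict as a stand-in for PlotlyJS.jl's trace type (the field values here are made up for illustration):

```julia
# Sketch: a plotly trace is essentially a JSON object. A Dict stands in
# for PlotlyJS.jl's trace type; keys follow the plotly.js attribute names.
trace1 = Dict("type" => "scatter", "x" => [1, 2, 3], "y" => [10, 15, 13])
trace2 = Dict("type" => "scatter", "x" => [1, 2, 3], "y" => [16, 5, 11])
layout = Dict("title" => "hypothetical example")

# A figure pairs a list of traces with a layout, mirroring the slides'
# plot([trace1, trace3], layout) call shape.
figure = Dict("data" => [trace1, trace2], "layout" => layout)

@assert length(figure["data"]) == 2
@assert figure["data"][1]["type"] == "scatter"
```

Serializing `figure` to JSON yields exactly the structure plotly.js renders, which is why PlotlyJS.jl can stay a thin, faithful wrapper over the JavaScript library.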
23. Convenience API
The plot function has a number of other methods that try to make it a bit easier to
construct simple plots (remember multiple dispatch? :) )
In [20]: methods(plot)
Out[20]: 13 methods for generic function plot:
plot{T<:Number,T2<:Number}(x::AbstractArray{T,1}, y::AbstractArray{T2,2}) at /Users/sglyon/.julia/v0.5/PlotlyJS/src/convenience_api.jl:31
plot{T<:Number,T2<:Number}(x::AbstractArray{T,1}, y::AbstractArray{T2,2}, l::PlotlyJS.Layout; style, kwargs...) at /Users/sglyon/.julia/v0.5/PlotlyJS/src/convenience_api.jl:31
plot{T<:Number,T2<:Number}(x::AbstractArray{T,2}, y::AbstractArray{T2,2}) at /Users/sglyon/.julia/v0.5/PlotlyJS/src/convenience_api.jl:40
plot{T<:Number,T2<:Number}(x::AbstractArray{T,2}, y::AbstractArray{T2,2}, l::PlotlyJS.Layout; style, kwargs...) at /Users/sglyon/.julia/v0.5/PlotlyJS/src/convenience_api.jl:40
...
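The mechanism behind this convenience layer is ordinary multiple dispatch: one generic function gains a method per input shape, and Julia routes each call to the most specific match. A toy sketch with a hypothetical myplot function (illustrative names only, not PlotlyJS.jl's actual internals):

```julia
# Hypothetical `myplot` generic: each method handles one input shape,
# mirroring how a convenience API can route vectors vs. matrices.
myplot(y::AbstractVector{<:Number}) = "1 trace, $(length(y)) points"
myplot(x::AbstractVector{<:Number}, y::AbstractMatrix{<:Number}) =
    "$(size(y, 2)) traces sharing one x axis"
myplot(y::AbstractMatrix{<:Number}) = "$(size(y, 2)) traces"

@assert myplot([1, 2, 3]) == "1 trace, 3 points"              # vector input
@assert myplot([1, 2], [10 20; 30 40]) == "2 traces sharing one x axis"
@assert myplot([1 2 3; 4 5 6]) == "3 traces"                  # one trace per column
@assert length(methods(myplot)) == 3
```

The same pattern, scaled up, is how a single `plot` entry point can accept traces, arrays, or arrays plus a Layout without any runtime type-checking code.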
42. Electron
We use Electron to provide an app for PlotlyJS.jl
This buys us at least 2 things:
1. Dedicated GUI that we completely control
2. Full 2-way communication with JavaScript
JavaScript interop enables:
Live updates of trace or layout attributes
Extending traces or adding new traces to a displayed plot
Raw SVG output from d3.js for conversion to pdf, png, jpeg, eps, etc.
More...
Demo
Blink.jl Electron