A lecture given for Stats 285 at Stanford on October 30, 2017. I discuss how OSS technology developed at Anaconda, Inc. has helped to scale Python to GPUs and Clusters.
Talk given at the first OmniSci user conference, where I discuss cooperating with open-source communities to ensure you get useful answers quickly from your data. I also get a chance to introduce OpenTeams in this talk and discuss how it can help companies cooperate with communities.
Keynote talk at PyCon Estonia 2019 where I discuss how to extend CPython and how that has led to a robust ecosystem around Python. I then discuss the need to define and build a Python extension language, which I later proposed as EPython on OpenTeams: https://openteams.com/initiatives/2
At my first visit to SciPy in Latin America, I was able to review the history of PyData, SciPy, and NumFOCUS, and discuss how to grow these communities and cooperate in the future. I also introduce OpenTeams as a way for open-source contributors to grow their reputation and build businesses.
Travis Oliphant, "Python for Speed, Scale, and Science" (Fwdays)
Python is sometimes discounted as slow because of its dynamic typing and interpreted nature and not suitable for scale because of the GIL. But, in this talk, I will show how with the help of talented open-source contributors around the world, we have been able to build systems in Python that are fast and scalable to many machines and how this has helped Python take over Science.
Intro to TensorFlow and PyTorch Workshop at Tubular Labs (Kendall)
These are some introductory slides for the Intro to TensorFlow and PyTorch workshop at Tubular Labs. The GitHub code is available at:
https://github.com/PythonWorkshop/Intro-to-TensorFlow-and-PyTorch
An analysis of TensorFlow
- What is TensorFlow?
- Background
- DistBelief
- Tutorial - Logistic regression
- TensorFlow - under the hood
- Tutorial - CNN, RNN
- Benchmarks
- Other open-source frameworks
- If you are considering TensorFlow
- Installation
- References
Data Science and Deep Learning on Spark with 1/10th of the Code with Roope As... (Databricks)
Scalability and interactivity make Spark an excellent platform for data scientists who want to analyze very large datasets and build predictive models. However, the productivity of data scientists is hampered by a lack of abstractions for building models for diverse types of data. For example, processing text or image data requires low-level data coercion and transformation steps, which are not easy to compose into complex workflows for production applications. There is also a lack of domain-specific libraries, for example for computer vision and image processing.
We present an open-source Spark library which simplifies common data science tasks such as feature construction and hyperparameter tuning, and allows data scientists to iterate and experiment on their models faster. The library integrates seamlessly with the SparkML pipeline object model, and is installable through spark-packages.
The library brings deep learning and image processing to Spark through CNTK, OpenCV and TensorFlow in a frictionless manner, thus enabling scenarios such as training on GPU-enabled nodes, deep neural net featurization, and transfer learning on large image datasets. We discuss the design and architecture of the library, and show examples of building machine learning models for image classification.
What is TensorFlow? | Introduction to TensorFlow | TensorFlow Tutorial For Be... (Simplilearn)
This presentation on TensorFlow will help you understand what exactly TensorFlow is and how it is used in Deep Learning. TensorFlow is a software library developed by Google for the purposes of conducting machine learning and deep neural network research. In this tutorial, you will learn the fundamentals of TensorFlow concepts, functions, and operations required to implement deep learning algorithms and leverage data like never before. This TensorFlow tutorial is ideal for beginners who want to pursue a career in Deep Learning. Now, let us dive into this TensorFlow tutorial and understand what TensorFlow actually is and how to use it.
Below topics are explained in this TensorFlow presentation:
1. What is Deep Learning?
2. Top Deep Learning Libraries
3. Why TensorFlow?
4. What is TensorFlow?
5. What are Tensors?
6. What is a Data Flow Graph?
7. Program Elements in TensorFlow
8. Use case implementation using TensorFlow
Simplilearn’s Deep Learning course will transform you into an expert in deep learning techniques using TensorFlow, the open-source software library designed to conduct machine learning & deep neural network research. With our deep learning course, you’ll master deep learning and TensorFlow concepts, learn to implement algorithms, build artificial neural networks and traverse layers of data abstraction to understand the power of data and prepare you for your new role as deep learning scientist.
Why Deep Learning?
It is one of the most popular software platforms used for deep learning and contains powerful tools to help you build and implement artificial neural networks.
You can gain in-depth knowledge of Deep Learning by taking our Deep Learning certification training course. With Simplilearn’s Deep Learning course, you will prepare for a career as a Deep Learning engineer as you master concepts and techniques including supervised and unsupervised learning, mathematical and heuristic aspects, and hands-on modeling to develop algorithms. Those who complete the course will be able to:
1. Understand the concepts of TensorFlow, its main functions, operations and the execution pipeline
2. Implement deep learning algorithms, understand neural networks and traverse the layers of data abstraction which will empower you to understand data like never before
3. Master and comprehend advanced topics such as convolutional neural networks, recurrent neural networks, training deep networks and high-level interfaces
4. Build deep learning models in TensorFlow and interpret the results
5. Understand the language and fundamental concepts of artificial neural networks
6. Troubleshoot and improve deep learning models
7. Build your own deep learning project
8. Differentiate between machine learning, deep learning and artificial intelligence
Learn more at: https://www.simplilearn.com
Fixes a typo in the original tutorial slide.
Introduction to PyTorch
Explains PyTorch usage through a CNN example.
Describes the PyTorch modules (torch, torch.nn, torch.optim, etc.) and the use of multi-GPU processing.
Also gives examples of Recurrent Neural Networks and Transfer Learning.
Slides from my talk at Facebook Developer Circle: Pune Launch on Aug 19 2017. Supplementary code available here: https://github.com/mayurbhangale/pytorch_notebook
Introduction to Python by Mohamed Hegazy. In these slides you will find some code samples. These slides were first presented at TensorFlow Dev Summit 2017 Extended by GDG Helwan.
Josh Patterson, Advisor, Skymind – Deep learning for Industry at MLconf ATL 2016 (MLconf)
DL4J and DataVec for Enterprise Deep Learning Workflows: Applications in NLP, sensor processing (IoT), image processing, and audio processing have all emerged as prime deep learning applications. In this session we will take a practical look at building secure Deep Learning workflows in the enterprise. We’ll see how DL4J’s DataVec tool enables scalable ETL and vectorization pipelines to be created for a single machine or scaled out to Spark on Hadoop. We’ll also see how Deep Networks such as Recurrent Neural Networks are able to leverage DataVec to more quickly process data for modeling.
TensorFrames: Google TensorFlow on Apache Spark (Databricks)
Presentation at the Bay Area Spark Meetup by Databricks Software Engineer and Spark committer Tim Hunter.
This presentation covers how you can use TensorFrames with TensorFlow to do distributed computing on GPUs.
Language translation with Deep Learning (RNN) with TensorFlow (S N)
The author is going to take you into the realm of Recurrent Neural Networks (RNN). He will be training a sequence-to-sequence model on a dataset of English and French sentences that can translate new (unseen) sentences from English to French.
This will be a walkthrough of an end-to-end technique to train a deep RNN model. You will learn to build the various components necessary for a sequence-to-sequence model.
You will learn the fundamentals of Deep Learning, mainly the RNN concepts that are required in this solution. A familiarity with Deep Learning concepts would be handy, but most of the concepts used in this example will be covered during the demo.
Technologies to be used:
Python, Jupyter, TensorFlow, FloydHub
Source code: https://github.com/syednasar/deeplearning/blob/master/language-translation/dlnd_language_translation.ipynb
...
( Python Training: https://www.edureka.co/python )
This Edureka Python NumPy tutorial (Python Tutorial Blog: https://goo.gl/wd28Zr) explains what exactly NumPy is and how it is better than lists. It also explains various NumPy operations with examples.
Check out our Python Training Playlist: https://goo.gl/Na1p9G
This tutorial helps you to learn the following topics:
1. What is NumPy?
2. NumPy vs Lists
3. NumPy Operations
4. NumPy Special Functions
Deep Learning with TensorFlow: Understanding Tensors, Computations Graphs, Im... (Altoros)
1. The elements of Neural Networks: Weights, Biases, and Gating functions
2. MNIST (handwriting recognition) using a simple NN in TensorFlow (introduces Tensors, Computation Graphs)
3. MNIST using Convolution NN in TensorFlow
4. Understanding words and sentences as Vectors
5. word2vec in TensorFlow
Tom Peters, Software Engineer, Ufora at MLconf ATL 2016 (MLconf)
Say What You Mean: Scaling Machine Learning Algorithms Directly from Source Code: Scaling machine learning applications is hard. Even with powerful systems like Spark, TensorFlow, and Theano, the code you write has more to do with getting these systems to work at all than it does with your algorithm itself. But it doesn’t have to be this way!
In this talk, I’ll discuss an alternate approach we’ve taken with Pyfora, an open-source platform for scalable machine learning and data science in Python. I’ll show how it produces efficient, large scale machine learning implementations directly from the source code of single-threaded Python programs. Instead of programming to a complex API, you can simply say what you mean and move on. I’ll show some classes of problem where this approach truly shines, discuss some practical realities of developing the system, and I’ll talk about some future directions for the project.
Pysense: wireless sensor computing in Python? (Davide Carboni)
PySense aims at bringing wireless sensor (and "internet of things") macroprogramming to the audience of Python programmers. WSN macroprogramming is an emerging approach where the network is seen as a whole and the programmer focuses only on the application logic. The PySense runtime environment partitions the code and transmits code snippets to the right nodes, finding a balance between energy consumption and computing performance.
EuroPython 2011 High Performance Python (Ian Ozsvald)
I ran this as a 4-hour tutorial at EuroPython 2011 to teach High Performance Python coding.
Techniques covered include bottleneck analysis by profiling, bytecode analysis, converting to C using Cython and ShedSkin, use of the numerical numpy library and numexpr, multi-core and multi-machine parallelisation and using CUDA GPUs.
Write-up with 49 page PDF report: http://ianozsvald.com/2011/06/29/high-performance-python-tutorial-v0-1-from-my-4-hour-tutorial-at-europython-2011/
Natural Language Processing with CNTK and Apache Spark with Ali Zaidi (Databricks)
Apache Spark provides an elegant API for developing machine learning pipelines that can be deployed seamlessly in production. However, one of the most intriguing and performant families of algorithms – deep learning – remains difficult for many groups to deploy in production, both because of the need for tremendous compute resources and also because of the inherent difficulty in tuning and configuring.
In this session, you’ll discover how to deploy the Microsoft Cognitive Toolkit (CNTK) inside of Spark clusters on the Azure cloud platform. Learn about the key considerations for administering GPU-enabled Spark clusters, configuring such workloads for maximum performance, and techniques for distributed hyperparameter optimization. You’ll also see a real-world example of training distributed deep learning algorithms for speech recognition and natural language processing.
With Dask and Numba, you can write NumPy-like and Pandas-like code and have it run very fast on multi-core systems as well as at scale on many-node clusters.
From Zero to Hero - All you need to do serious deep learning stuff in R (Kai Lichtenberg)
Slides from my talk at the useR Group Münster on 04/17/18 on how to get started with GPU-enabled deep learning in R. First I show how to create an NVIDIA Docker-based image with RStudio, TensorFlow and Keras for R, and then give an introduction to deep learning (classic MNIST classification with MLP and CNN).
https://telecombcn-dl.github.io/2017-dlcv/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale annotated datasets and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which were previously addressed with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks and Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles and applications of deep learning to computer vision problems, such as image classification, object detection or image captioning.
The search for faster computing remains of great importance to the software community. Relatively inexpensive modern hardware, such as GPUs, allows users to run highly parallel code on thousands, or even millions, of cores on distributed systems.
Building efficient GPU software is not a trivial task, often requiring a significant amount of engineering hours to attain the best performance. Similarly, distributed computing systems are inherently complex. In recent years, several libraries were developed to address such problems. However, they often target a single aspect of computing, such as GPU computing with libraries like CuPy, or distributed computing with Dask.
Libraries like Dask and CuPy tend to provide great performance while abstracting away the complexity from non-experts, making them great candidates for developers writing software for many different applications. Unfortunately, they are often difficult to combine, at least efficiently.
With the recent introduction of NumPy community standards and protocols, it has become much easier to integrate libraries that share the already well-known NumPy API. These changes allow libraries like Dask, known for its easy-to-use parallelization and distributed computing capabilities, to defer some of that work to other libraries such as CuPy, giving users the benefits of both distributed and GPU computing with little to no change to their existing software built on the NumPy API.
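To make the pattern concrete, here is a minimal sketch (array sizes and chunking are hypothetical; the point is that the NumPy-style expression is unchanged):

import cupy
import dask.array as da

# Build a chunked Dask array, then move each chunk to the GPU with CuPy.
x = da.random.random((20000, 20000), chunks=(5000, 5000))
gx = x.map_blocks(cupy.asarray)

# The same NumPy-style expression now runs chunk-wise on the GPU.
result = (gx - gx.mean(axis=0)).std(axis=0).compute()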
Linux and Open Source in Math, Science and Engineering (PDE1D)
Covers a brief history of open-source math, science and engineering software on Linux. A look at the software tools currently available for mathematical analysis and plotting for math, science and engineering. Presented at 2011 Ohio LinuxFest.
With Anaconda (in particular Numba and Dask) you can scale up your NumPy and Pandas stack to many CPUs and GPUs as well as scale out to run on clusters of machines, including Hadoop.
Python is dominating the fast-growing data-science landscape. This talk provides a foundational overview of the practice of data science and some of the most popular Python libraries for doing data science. It also provides an overview of how Anaconda brings it all together.
Making NumPy-style and Pandas-style code faster and running it in parallel. Continuum has been working on scaled versions of NumPy and Pandas for 4 years. This talk describes how Numba and Dask provide scaled Python today.
Using Anaconda to light up dark data. My talk given to the Berkeley Institute of Data Science describing Anaconda and the Blaze ecosystem for bringing a virtual analytical database to your data.
Conda is a cross-platform package manager that lets you quickly and easily build environments containing complicated software stacks. It was built to manage the NumPy stack in Python but can be used to manage any complex software dependencies.
Blaze: a large-scale, array-oriented infrastructure for Python (Travis Oliphant)
This talk gives a high-level overview of the motivation, design goals, and status of the Blaze project from Continuum Analytics, which is a large-scale array object for Python.
1. Scaling Python to GPUs and CPUs
Stanford Stats 285
October 30, 2017
Travis E. Oliphant
President, Chief Data Scientist, Co-founder Anaconda, Inc.
1
2. My Background
2
• MS/BS degrees in Elec. Comp. Engineering
• PhD from Mayo Clinic in Biomedical Engineering
(Ultrasound and MRI)
• Creator and Developer of SciPy (1998-2009)
• Professor at BYU (2001-2007) Inverse Problems
• Creator and Developer of NumPy (2005-2012)
• Started Numba (2012)
• Founder of NumFOCUS / PyData
• Python Software Foundation Director (2012)
• Co-founder of Continuum Analytics => Anaconda, Inc.
• CEO (2012) => Chief Data Scientist (2017)
SciPy
3. 3
Empower domain experts with high-level tools that exploit modern hardware
Array Oriented Computing
expertise
4. Why Array-oriented computing
4
• Express domain knowledge directly in arrays (tensors, matrices, vectors) --- easier to teach programming in the domain
• Can take advantage of parallelism and accelerators
• Array expressions
[Diagram: six separate objects, each carrying its own Attr1/Attr2/Attr3, contrasted with one columnar table of Attr1, Attr2, Attr3 across Object1-Object6.]
5. More reasons for array-oriented
5
• Today’s vector machines (and vector co-processors, or GPUs) were made for array-oriented computing.
• The software stack has just not caught up --- unfortunate because APL came out in 1963.
• There is a reason Fortran remains popular.
14. Data Science Workflow
14
[Diagram: Getting Data feeds Understand Data (Notebooks, Exploratory Data Analysis and Viz, Models), which produces Data Products (Reports, Microservices, Dashboards, Applications) and Decisions and Actions that Understand the World; New Data feeds back into the loop.]
15. 15
[The same Data Science Workflow diagram as the previous slide.]
21. Embrace Innovation Without Anarchy
21
From http://www.slideshare.net/RevolutionAnalytics/r-at-microsoft
Reproducibility
22. 22
Conda: a cross-platform and language-agnostic package and environment manager
Conda Forge: a community-led collection of recipes, build infrastructure, and packages for conda
Conda Environments: custom isolated software sandboxes to allow easy reproducibility and sharing of data-science work
Anaconda Project: reproducible, executable project directories
23. • Language independent
• Platform independent
• No special privileges required
• No VMs or containers
• Enables:
- Reproducibility
- Collaboration
- Scaling
“conda – package everything”
23
[Diagram: Conda sandboxing technology. A single conda install manages isolated environments side by side, e.g. A: Python 2.7 with NumPy 1.10 and Pandas 0.16; B: Python 3.4 with NumPy 1.11, Pandas 0.18 and Jupyter; C: R with R Essentials.]
25. Basic Conda Usage
25
Install a package: conda install sympy
List all installed packages: conda list
Search for packages: conda search llvm
Create a new environment: conda create -n py3k python=3
Remove a package: conda remove nose
Get help: conda install --help
26. Advanced Conda Usage
26
Install a package in an environment: conda install -n py3k sympy
Update all packages: conda update --all
Export list of packages: conda list --export packages.txt
Install packages from an export: conda install --file packages.txt
See package history: conda list --revisions
Revert to a revision: conda install --revision 23
Remove unused packages and cached tarballs: conda clean -pt
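Putting a few of the commands above together, a minimal sketch of a reproducible round trip (the environment and package names are just examples):

conda create -n analysis python=3 numpy pandas     # build an environment
conda list -n analysis --export > packages.txt     # capture exact versions
conda create -n analysis-copy --file packages.txt  # recreate it anywhere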
29. Without NumPy
29
functions.py:

from math import sin, pi

def sinc(x):
    if x == 0:
        return 1.0
    else:
        pix = pi*x
        return sin(pix)/pix

def step(x):
    if x > 0:
        return 1.0
    elif x < 0:
        return 0.0
    else:
        return 0.5

>>> import functions as f
>>> xval = [x/3.0 for x in range(-10, 10)]
>>> yval1 = [f.sinc(x) for x in xval]
>>> yval2 = [f.step(x) for x in xval]

Python is a great language but needed a way to operate quickly and cleanly over multi-dimensional arrays.
30. With NumPy
30
functions2.py:

from numpy import sin, pi
from numpy import vectorize
import functions as f

vsinc = vectorize(f.sinc)
def sinc(x):
    pix = pi*x
    val = sin(pix)/pix
    val[x==0] = 1.0
    return val

vstep = vectorize(f.step)
def step(x):
    y = x*0.0
    y[x>0] = 1
    y[x==0] = 0.5
    return y

>>> import functions2 as f
>>> from numpy import *
>>> x = r_[-10:10]/3.0
>>> y1 = f.sinc(x)
>>> y2 = f.step(x)

Offers N-D array, element-by-element functions, and basic random numbers, linear algebra, and FFT capability for Python
http://numpy.org
Fiscally sponsored by NumFOCUS
31. NumPy: an Array Extension of Python
31
• Data: the array object
– slicing and shaping
– data-type map to bytes
• Fast Math (ufuncs):
– vectorization
– broadcasting
– aggregations
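A small sketch of these pieces in action (shapes and values chosen arbitrarily):

import numpy as np

x = np.arange(12, dtype=np.float64).reshape(3, 4)
col_means = x.mean(axis=0)   # aggregation down each column
centered = x - col_means     # broadcasting a (4,) vector across a (3, 4) array
block = centered[1:, ::2]    # slicing: rows 1 onward, every other column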
35. Summary
35
• Provides a foundational N-dimensional array composed of homogeneous elements of a particular “dtype”
• The dtype of the elements is extensive (but difficult to extend)
• Arrays can be sliced and diced with simple syntax to provide easy manipulation and selection.
• Provides fast and powerful math, statistics, and linear algebra functions that operate over arrays.
• Utilities for sorting, reading and writing data are also provided.
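As a small illustration of the dtype machinery mentioned above, a single element can pack several named fields into fixed bytes:

import numpy as np

# Each element is 12 bytes: a float64 'time' field followed by an int32 'flag'.
rec = np.zeros(3, dtype=[('time', 'f8'), ('flag', 'i4')])
rec['time'] = [0.0, 0.5, 1.0]
rec['flag'][2] = 1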
37. Scale Up vs Scale Out
37
[Diagram: two axes. Scale Up (bigger nodes: big memory and many cores / GPU box) is served by Numba; Scale Out (more nodes: many commodity nodes in a cluster) is served by Dask; the best of both (e.g. a GPU cluster) is served by Dask with Numba.]
40. Numba (compile Python to CPUs and GPUs)
40
conda install numba
[Diagram: the Numba pipeline. A parsing frontend takes Python source to an intermediate representation (IR); LLVM acts as the code-generation backend, emitting x86, ARM, or PTX machine code.]
41. 41
@jit('void(f8[:,:],f8[:,:],f8[:,:])')
def filter(image, filt, output):
    M, N = image.shape
    m, n = filt.shape
    for i in range(m//2, M-m//2):
        for j in range(n//2, N-n//2):
            result = 0.0
            for k in range(m):
                for l in range(n):
                    result += image[i+k-m//2, j+l-n//2]*filt[k, l]
            output[i,j] = result

~1500x speed-up
Image Processing
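A hypothetical driver for the jitted kernel above (array sizes and the box filter are invented for illustration; the declared signature requires 2-D float64 arrays):

import numpy as np

image = np.random.rand(512, 512)   # f8 2-D input, matching the signature
filt = np.ones((5, 5)) / 25.0      # a simple box blur
output = np.zeros_like(image)
filter(image, filt, output)        # first call compiles; later calls are fast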
42. Numba Features
42
Works with and does not replace the standard Python interpreter (all of your existing Python libraries are still available)
44. Example: Filter an array
44
[Code screenshot, annotated: array allocation; looping over ndarray x as an iterator; using numpy math functions; returning a slice of the array; Numba decorator (nopython=True not required).]
2.7x Speedup over NumPy!
45. NumPy UFuncs and GUFuncs
45
NumPy ufuncs (and gufuncs) are functions that operate “element-wise” (or “sub-dimension-wise”) across an array without an explicit loop.
This implicit loop (which is in machine code) is at the core of why NumPy is fast.
Dispatch is done internally to a particular code-segment based on the type of the array. It is a very powerful abstraction in the PyData stack.
Making new fast ufuncs used to be only possible in C — painful!
With numba.vectorize and numba.guvectorize it is now easy!
The inner secrets of NumPy are now at your fingertips for you to make your own magic!
46. Simple Ufunc
46
@vectorize
def dot2(a, b, x, y):
    return a*x + b*y

>>> a, b, x, y = np.random.randn(4, 1000)
>>> z = a * x + b * y
>>> z2 = dot2(a, b, x, y)  # faster

Faster (especially) as N grows because it does not create temporaries: NumPy creates temporary arrays for intermediate results. Numba creates a fast machine-code kernel from the Python template and calls it for every element in the arrays.
47. Generalized Ufunc
47
@guvectorize(['void(f8[:], f8[:], f8[:])'], '(n),(n)->()')
def dot2(a, b, c):
    c[0] = a[0]*b[0] + a[1]*b[1]

>>> a, b = np.random.randn(10000, 2), np.random.randn(10000, 2)
>>> z1 = np.einsum('ij,ij->i', a, b)
>>> z2 = dot2(a, b)  # uses the last dimension as n in each kernel

3.8x faster
This can create quite a bit of computation with very little code. Numba creates a fast machine-code kernel from the Python template and calls it for every element in the arrays.
48. Example: Making a windowed compute filter
48
Perform a computation on a finite window of the input.
For a linear system, this is a FIR filter and what np.convolve or sp.signal.lfilter can do.
But what if you want some arbitrary computation, like a windowed median filter?
49. Example: Making a windowed compute filter
49
[Code screenshots: a hand-coded implementation, then building a ufunc for the kernel, which is faster for large arrays!]
This can now run easily on the GPU with target='cuda' and on many cores with target='parallel'.
Array-oriented!
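The slide’s code is only a screenshot, so here is a minimal sketch of the idea with numba.guvectorize; the dummy window argument that carries the kernel width is an illustrative convention, not taken from the slide:

import numpy as np
from numba import guvectorize

@guvectorize(['void(f8[:], f8[:], f8[:])'], '(n),(w)->(n)')
def windowed_median(x, window, out):
    half = window.shape[0] // 2
    for i in range(x.shape[0]):
        lo = max(0, i - half)
        hi = min(x.shape[0], i + half + 1)
        piece = np.sort(x[lo:hi])            # np.sort returns a sorted copy
        out[i] = piece[piece.shape[0] // 2]  # upper median for even windows

x = np.random.randn(1000)
smoothed = windowed_median(x, np.empty(7))   # 7-point windowed median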
51. How to Use Numba
51
1. Create a realistic benchmark test case. (Do not use your unit tests as a benchmark!)
2. Run a profiler on your benchmark. (cProfile is a good choice; see the sketch below.)
3. Identify hotspots that could potentially be compiled by Numba with a little refactoring. (See the online documentation.)
4. Apply @numba.jit, @numba.vectorize, and @numba.guvectorize as needed to critical functions. (Small rewrites may be needed to work around Numba limitations.)
5. Re-run the benchmark to check if there was a performance improvement.
6. Use target=parallel to get access to multiple cores (or target=cuda if you have a GPU).
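A minimal harness for step 2 (benchmark() is a stand-in for your own realistic workload):

import cProfile
import pstats

def benchmark():
    # Stand-in workload; replace with the realistic benchmark from step 1.
    return sum(i * i for i in range(1_000_000))

cProfile.run('benchmark()', 'bench.prof')
stats = pstats.Stats('bench.prof')
stats.sort_stats('cumulative').print_stats(10)   # show the top 10 hotspots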
52. How Numba works
52
@jit
def do_math(a, b):
    …

>>> do_math(x, y)

[Diagram: the Python function’s bytecode plus the runtime function-argument types go through bytecode analysis and type inference to produce Numba IR; after IR rewrites and lowering, LLVM IR is JIT-compiled to machine code, which is cached and executed.]
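The per-type specialization this pipeline produces can be observed directly; a small sketch:

from numba import jit

@jit(nopython=True)
def do_math(a, b):
    return a * b + 1

do_math(2.0, 3.0)          # triggers type inference and a float64 compile
do_math(2, 3)              # new argument types trigger a second specialization
print(do_math.signatures)  # one entry per cached machine-code version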
53. 7 things about Numba you may not know
53
1. Numba is 100% Open Source
2. Numba + Jupyter = Rapid CUDA Prototyping
3. Numba can compile for the CPU and the GPU at the same time
4. Numba makes array processing easy with @(gu)vectorize
5. Numba comes with a CUDA Simulator
6. You can send Numba functions over the network
7. Numba developers are working on a GPU DataFrame (pygdf)
54. Numba is quite popular!
54
A numba mailing list post reports experiments of a SciPy author who got a 2x speed-up by removing their Cython type annotations and surrounding the function with numba.jit (with a few minor changes needed to the code).
With Numba’s ahead-of-time compilation one can use Numba to create a library that you ship to others (who then don’t need to have Numba installed). This is not as clean as it could be.
SciPy (and NumPy) would look very different if Numba had existed 16 years ago when SciPy was getting started — and the PyPy crowd would be happier.
57. CUDA Python (in open-source Numba!)
57
CUDA development using Python syntax for optimal performance!
10-20x faster than CPU
You have to understand CUDA at least a little — writing kernels that launch in parallel on the GPU.
58. Classic Example
58
from numba import jit

@jit
def mandel(x, y, max_iters):
    c = complex(x, y)
    z = 0j
    for i in range(max_iters):
        z = z*z + c
        if z.real * z.real + z.imag * z.imag >= 4:
            return 255 * i // max_iters
    return 255

Mandelbrot
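A hypothetical driver that renders the fractal by calling the jitted function once per pixel (bounds and resolution chosen arbitrarily):

import numpy as np

width, height = 400, 300
image = np.zeros((height, width), dtype=np.uint8)
for j, y in enumerate(np.linspace(-1.0, 1.0, height)):
    for i, x in enumerate(np.linspace(-2.0, 1.0, width)):
        image[j, i] = mandel(x, y, 20)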
60. Other topics
60
CUDA Python — write general GPU kernels with Python
Device Arrays — manage memory transfer from host to GPU
Streaming — manage asynchronous and parallel GPU compute streams
CUDA Simulator in Python — to help debug your kernels
HSA Support — early support for HSA-based GPUs and APUs
Pyculib — access to cuFFT, cuBLAS, cuSPARSE, cuRAND, CUDA Sorting
https://github.com/ContinuumIO/gtc2017-numba
62. • Designed to parallelize the Python ecosystem
• Handles complex algorithms
• Co-developed with Pandas/SKLearn/Jupyter teams
• Familiar APIs for Python users
• Scales
• Scales from multicore to 1000-node clusters
• Resilient, responsive, and real-time
63. • Parallelizes NumPy, Pandas, SKLearn
• Satisfies a subset of these APIs
• Uses these libraries internally
• Co-developed with these teams
• Task scheduler supports custom algorithms
• Parallelize existing code
• Build novel real-time systems
• Arbitrary task graphs with data dependencies
• Same scalability
64. demo video
• High level: Scaling Pandas
• Same Pandas look and feel
• Uses Pandas under the hood
• Scales nicely onto many machines
• Low level: Arbitrary task scheduling
• Parallelize normal Python code
• Build custom algorithms
• React in real-time
• Demo deployed with dask-kubernetes on Google Compute Engine
• github.com/dask/dask-kubernetes
• YouTube link: https://www.youtube.com/watch?v=ods97a5Pzw0
65. Why do people choose Dask?
• Familiar with Python:
• Drop-in NumPy/Pandas/SKLearn APIs
• Native memory environment
• Easy debugging and diagnostics
• Have complex problems:
• Parallelize existing code without expensive rewrites
• Sophisticated algorithms and systems
• Real-time response to small-data
• Scales up and down:
• Scales to 1000-node clusters
• Also runs cheaply on a laptop
#import pandas as pd
import dask.dataframe as dd
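The drop-in swap hinted at above, fleshed out as a small sketch (the file pattern and column names are hypothetical):

import dask.dataframe as dd

df = dd.read_csv('data/2017-*.csv')      # many files, one logical dataframe
out = df.groupby('name').amount.mean()   # the same Pandas-style expression
print(out.compute())                     # triggers the parallel computation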
66. Dask
66
• Started as part of Blaze in early 2014.
• General parallel programming engine
• Flexible and therefore highly suited for
• Commodity Clusters
• Advanced Algorithms
• Wide community adoption and use
conda install -c conda-forge dask
pip install dask[complete] distributed --upgrade
69. Dask: Parallel Data Processing
69
Synthetic views of NumPy ndarrays
Synthetic views of Pandas DataFrames with HDFS support
DAG construction and workflow manager
70. Dask is a Python parallel computing library that is:
• Familiar: Implements parallel NumPy and Pandas objects
• Fast: Optimized for demanding numerical applications
• Flexible: for sophisticated and messy algorithms
• Scales up: Runs resiliently on clusters of 100s of machines
• Scales down: Pragmatic in a single process on a laptop
• Interactive: Responsive and fast for interactive data science
Dask complements the rest of Anaconda. It was developed with NumPy, Pandas, and scikit-learn developers.
Overview of Dask
70
71. Dask Collections: Familiar Expressions and API
71
Dask array (mimics NumPy): x.T - x.mean(axis=0)
Dask dataframe (mimics Pandas): df.groupby(df.index).value.mean()
Dask delayed (wraps custom code): def load(filename): ... def clean(data): ... def analyze(result): ...
Dask bag (collection of data): b.map(json.loads).foldby(...)
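A small sketch of the delayed interface wiring custom functions like those above into a task graph (file names and function bodies are placeholders):

import dask

@dask.delayed
def load(filename):
    return open(filename).read()

@dask.delayed
def clean(data):
    return data.strip().lower()

@dask.delayed
def analyze(results):
    return sum(len(r) for r in results)

result = analyze([clean(load(f)) for f in ['a.txt', 'b.txt']])
print(result.compute())   # walks the graph, running independent tasks in parallel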
73. New Spark/Hadoop clusters
• Create and provision a Spark/Hadoop cluster with a few simple steps
• Work on the cloud or with your existing in-house servers
Dask Graphs: Example Machine Learning Pipeline
73
74. Example 1: Using Dask DataFrames on a cluster with CSV data
74
• Built from Pandas DataFrames
• Match Pandas interface
• Access data from HDFS, S3, local, etc.
• Fast, low latency
• Responsive user interface
76. Example 3: Using Dask Arrays with global temperature data
76
• Built from NumPy n-dimensional arrays
• Matches NumPy interface (subset)
• Solve medium-large problems
• Complex algorithms
79. • Single machine with multiple threads or processes
• On a cluster with SSH (dcluster)
• Resource management: YARN (knit), SGE, Slurm
• On the cloud with Amazon EC2 (dec2)
• On a cluster with Anaconda for cluster management
• Manage multiple conda environments and packages on bare-metal or cloud-based clusters
Using Anaconda and Dask on your Cluster
79
80. • The scheduler, workers, and clients pass messages between each other. Semantically these messages encode commands, status updates, and data, like the following:
• Please compute the function sum on the data x and store in y
• The computation y has been completed
• Be advised that a new worker named alice is available for use
• Here is the data for the keys 'x' and 'y'
Dask Protocol
80
In practice we represent these messages with dictionaries/mappings
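Purely illustrative, those messages rendered as Python mappings; the keys and values here are made up to show the shape, not the exact wire format:

compute_request = {'op': 'compute', 'function': 'sum', 'args': ('x',), 'key': 'y'}
task_finished = {'op': 'task-finished', 'key': 'y'}
worker_joined = {'op': 'register-worker', 'name': 'alice'}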
81. • The protocol is a combination of msgpack for instructions and headers
• pickle/cloudpickle for arbitrary Python objects
• byte strings (with optional compression) for large data.
• Prefer LZ4 to Snappy, but either will be tried for messages above 1000B
• The protocol is extensible by registering serializers.
Dask Protocol
81
http://distributed.readthedocs.io/en/latest/protocol.html
82. Dask Protocol - Scheduler
82
However, the Scheduler never uses the language-specific serialization and instead only deals with MsgPack. If the client sends a pickled function up to the scheduler, the scheduler will not unpack the function but will instead keep it as bytes. Eventually those bytes will be sent to a worker, which will then unpack the bytes into a proper Python function. Because the Scheduler never unpacks language-specific serialized bytes, it may be in a different language.
• The Scheduler is protected from unpickling unsafe code
• The Scheduler can be run under pypy for improved performance. This is only useful for larger clusters.
• We could conceivably implement workers and clients for other languages (like R or Julia) and reuse the Python scheduler. The worker and client code is fairly simple and much easier to reimplement than the scheduler, which is complex.
• The scheduler might someday be rewritten in more heavily optimized C or Go
83. • Scheduling arbitrary graphs is hard.
• Optimal graph scheduling is NP-hard
• Scalable scheduling requires linear-time solutions
• Fortunately dask does well with a lot of heuristics
• … and a lot of monitoring and data about sizes
• … and how long functions take.
Dask Scheduler
83
85. What makes Dask different?
Let's look at some pictures of directed graphs
86. [directed graph image]
87. [directed graph image]
88. Most Parallel Framework Architectures
[Diagram: User API → High Level Representation (Logical Plan) → Low Level Representation (Physical Plan) → Task scheduler for execution.]
99. By dropping the high level representation
Costs
• Lose specialization
• Lose opportunities for high level optimization
Benefits
• Become generalists
• More flexibility for new domains and algorithms
• Access to smarter algorithms
• Better task scheduling: resource constraints, GPUs, multiple clients, async-real-time, etc.
101. Scalable Pandas DataFrames
• Same API
import dask.dataframe as dd
df = dd.read_parquet('s3://bucket/accounts/2017')
df.groupby(df.name).value.mean().compute()
• Efficient Timeseries Operations
df.loc['2017-01-01']             # Uses the Pandas index…
df.value.rolling(10).std()       # for efficient…
df.value.resample('10m').mean()  # operations.
• Co-developed with Pandas and by the Pandas developer community
102. Scalable NumPy Arrays
• Same API
import dask.array as da
x = da.from_array(my_hdf5_file)
y = x.dot(x.T)
• Applications
• Atmospheric science
• Satellite imagery
• Biomedical imagery
• Optimization algorithms
check out dask-glm
103. Parallelize Scikit-Learn/Joblib
• Scikit-Learn parallelizes with Joblib
estimator = RandomForest(…)
estimator.fit(train_data, train_labels, n_jobs=8)
• Joblib can use Dask
from sklearn.externals.joblib import parallel_backend
with parallel_backend('dask', scheduler='…'):
    estimator.fit(train_data, train_labels)
https://pythonhosted.org/joblib/
http://distributed.readthedocs.io/en/latest/joblib.html
Joblib
Thread pool
104. Parallelize Scikit-Learn/Joblib
• Scikit-Learn parallelizes with Joblib
estimator = RandomForest(…)
estimator.fit(train_data, train_labels, n_jobs=8)
• Joblib can use Dask
from sklearn.externals.joblib import parallel_backend
with parallel_backend('dask', scheduler='…'):
    estimator.fit(train_data, train_labels)
https://pythonhosted.org/joblib/
http://distributed.readthedocs.io/en/latest/joblib.html
Joblib
Dask
105. Many Other Libraries in Anaconda
• Scikit-Image uses Dask to break down images and speed up algorithms with overlapping regions
• Geopandas can use Dask to partition data spatially and accelerate spatial joins
106. Dask Scales Up
• Thousand-node clusters
• Cloud computing
• Supercomputers
• Gigabyte/s bandwidth
• 200 microsecond task overhead
Dask Scales Down (the median cluster size is one)
• Can run in a single Python thread pool
• Almost no performance penalty (microseconds)
• Lightweight
• Few dependencies
• Easy install
107. Parallelize Web Backends
• Web servers process thousands of small computations asynchronously for web pages or REST endpoints
• Dask provides dynamic, heterogeneous computation
• Supports small data
• 10ms roundtrip times
• Dynamic scaling for different loads
• Supports asynchronous Python (like GoLang)

async def serve(request):
    future = dask_client.submit(process, request)
    result = await future
    return result
108. Debugging support
• Clean Python tracebacks when user code breaks
• Connect to remote workers with IPython sessions for advanced debugging
109. Resource constraints
• Define limited hardware resources for workers
• Specify resource constraints when submitting tasks
$ dask-worker … --resources GPU=2
$ dask-worker … --resources GPU=2
$ dask-worker … --resources special-db=1
future = client.submit(my_function, resources={'GPU': 1})
• Used for GPUs, big-memory machines, special hardware, database connections, I/O machines, etc.
110. Collaboration
• Many users can share the same cluster simultaneously
• Define public datasets
• Repeated computation and data use is shared among everyone
df = dd.read_parquet(…).persist()
client.publish_dataset(accounts=df)
df = client.get_dataset('accounts')
113. Dask’s limitations
• Dask is not a SQL database. It does Pandas well, but won’t optimize complex queries.
• Dask is not MPI. It is very fast, but does leave some performance on the table: 200us task overhead and a couple of copies in the network stack.
• Dask is not a JVM technology. It’s a Python library (although Julia bindings are available).
• Dask is not always necessary. You may not need parallelism.
How do you recreate your analysis and models? With Anaconda you have the ability to encapsulate data science into packages, notebooks, and entire environments.
Essential for technical conversations and the SE team.
Does not replace the standard Python interpreter!
End user does not need a C or C++ compiler (a compiler is required to build Numba packages)
< 70 MB package
Uses LLVM to do final optimization and machine code generation.
Supports a wide range of Python language constructs and numeric data types.