The document provides an overview of Chainer, a Python-based deep learning framework developed by Preferred Networks. Some key points:
- Chainer uses an approach called "Define-by-Run" where the computational graph is constructed on the fly during forward computation rather than being predefined. This provides flexibility for complex neural network architectures.
- Chainer is designed to be efficient for research and development use cases with small to medium sized datasets. It focuses on flexibility for rapid prototyping rather than scalability to large datasets.
- CuPy is introduced as a NumPy-compatible GPU library that Chainer is built upon, analogous to how other frameworks use NumPy on the CPU. This allows Chainer to leverage GPUs for acceleration while keeping a NumPy-like interface.
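The Define-by-Run idea can be illustrated with a toy autodiff sketch: the graph is recorded as operations execute, then walked backwards for gradients. This is a minimal illustration of the concept, not Chainer's actual API.

```python
# Toy Define-by-Run autodiff: the computational graph is recorded
# on the fly as Python expressions run (illustration only, not Chainer's API).

class Variable:
    def __init__(self, value, parents=(), grad_fns=()):
        self.value = value
        self.parents = parents      # upstream Variables (the recorded graph)
        self.grad_fns = grad_fns    # local derivative w.r.t. each parent
        self.grad = 0.0

    def __mul__(self, other):
        return Variable(self.value * other.value,
                        parents=(self, other),
                        grad_fns=(lambda g: g * other.value,
                                  lambda g: g * self.value))

    def __add__(self, other):
        return Variable(self.value + other.value,
                        parents=(self, other),
                        grad_fns=(lambda g: g, lambda g: g))

    def backward(self, g=1.0):
        self.grad += g
        for parent, fn in zip(self.parents, self.grad_fns):
            parent.backward(fn(g))

x = Variable(3.0)
y = Variable(4.0)
# The "graph" for z = x*y + x exists only after this line has executed:
z = x * y + x
z.backward()
print(x.grad)  # dz/dx = y + 1 = 5.0
print(y.grad)  # dz/dy = x = 3.0
```

Because the graph is built by ordinary Python control flow, loops and conditionals can change the network's structure on every forward pass, which is the flexibility the summary above refers to.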
This presentation was given at the Green500 BoF at SC21, in which PFN's VP of Computing Infrastructure Yusuke Doi discussed the power measurement for PFN's MN-3 supercomputer with MN-Core™ accelerators and how the company improved MN-3's power efficiency from 29.7GF/W to 39.38GF/W in 5 months.
More about MN-Core: https://projects.preferred.jp/mn-core/en/
More about MN-3: https://projects.preferred.jp/supercomputers/en/
Slides from the TensorFlow meetup at eBay NYC 06/07/2016 based on my blog https://medium.com/@st553/using-transfer-learning-to-classify-images-with-tensorflow-b0f3142b9366
In this talk, I briefly look back at TensorFlow's development over the past year and introduce the features that will be added to TensorFlow under its upcoming roadmap. I also discuss machine learning framework development trends and directions for 2017 and 2018.
Intro to TensorFlow and PyTorch Workshop at Tubular Labs (Kendall)
These are some introductory slides for the Intro to TensorFlow and PyTorch workshop at Tubular Labs. The Github code is available at:
https://github.com/PythonWorkshop/Intro-to-TensorFlow-and-PyTorch
Josh Patterson, Advisor, Skymind – Deep learning for Industry at MLconf ATL 2016 (MLconf)
DL4J and DataVec for Enterprise Deep Learning Workflows: Applications in NLP, sensor processing (IoT), image processing, and audio processing have all emerged as prime deep learning applications. In this session we will take a practical look at building secure Deep Learning workflows in the enterprise. We’ll see how DL4J’s DataVec tool enables scalable ETL and vectorization pipelines to be created for a single machine or scaled out to Spark on Hadoop. We’ll also see how deep networks such as Recurrent Neural Networks are able to leverage DataVec to more quickly process data for modeling.
A lecture given for Stats 285 at Stanford on October 30, 2017. I discuss how OSS technology developed at Anaconda, Inc. has helped to scale Python to GPUs and Clusters.
Profiling PyTorch for Efficiency & Sustainability (geetachauhan)
From my talk at the Data & AI Summit: the latest update on the PyTorch Profiler and how you can use it to optimize for efficiency. The talk also dives into the future and what we need to do together as an industry to move towards Sustainable AI.
Suggestions:
1) For best quality, download the PDF before viewing.
2) Open at least two windows: one for the YouTube video, one for the screencast (link below), and optionally one for the slides themselves.
3) The YouTube video is shown on the first page of the slide deck; for the slides themselves, just skip to page 2.
Screencast: http://youtu.be/VoL7JKJmr2I
Video recording: http://youtu.be/CJRvb8zxRdE (Thanks to Al Friedrich!)
In this talk, we take Deep Learning to task with real world data puzzles to solve.
Data:
- Higgs binary classification dataset (10M rows, 29 cols)
- MNIST 10-class dataset
- Weather categorical dataset
- eBay text classification dataset (8500 cols, 500k rows, 467 classes)
- ECG heartbeat anomaly detection
- Powered by the open source machine learning software H2O.ai. Contributors welcome at: https://github.com/h2oai
- To view videos on H2O open source machine learning software, go to: https://www.youtube.com/user/0xdata
On-device machine learning: TensorFlow on Android (Yufeng Guo)
Machine learning has traditionally been performed solely on servers and high-performance machines, but there is great value in having on-device machine learning on mobile devices. ML inference on mobile devices has huge potential and is still in its early stages; however, it is already more powerful than most realize.
In this demo-oriented talk, you will see some examples of deep learning models used for local prediction on mobile devices. Learn how to use TensorFlow to implement a machine learning model that is tailored to a custom dataset, and start making delightful experiences today!
An Introduction to TensorFlow architecture (Mani Goswami)
Introduces you to the internals of TensorFlow and dives deep into the distributed version of TensorFlow. Refer to https://github.com/manigoswami/tensorflow-examples for examples.
[AI07] Revolutionizing Image Processing with Cognitive Toolkit (de:code 2017)
Deep Learning has revolutionized the field of image processing. I'll show real-world examples using CNTK, from anomaly classification using CNNs to generation using Generative Adversarial Networks.
Products/Technologies: AI (Artificial Intelligence) / Deep Learning / Microsoft Azure / Machine Learning
Michael Lanzetta
Microsoft Corporation
Developer Experience and Evangelism
Principal Software Development Engineer
Rajat Monga, Engineering Director, TensorFlow, Google at MLconf 2016 (MLconf)
Machine Learning with TensorFlow: TensorFlow has enabled cutting-edge machine learning research at the top AI labs in the world. At the same time it has made the technology accessible to a large audience leading to some amazing uses. TensorFlow is used for classification, recommendation, text parsing, sentiment analysis and more. This talk will go over the design that makes it fast, flexible, and easy to use, and describe how we continue to make it better.
Squeezing Deep Learning Into Mobile Phones (Anirudh Koul)
A practical talk by Anirudh Koul on how to run Deep Neural Networks on memory- and energy-constrained devices such as smartphones. Highlights some frameworks and best practices.
Highly-scalable Reinforcement Learning RLlib for Real-world Applications (Bill Liu)
website: https://learn.xnextcon.com/event/eventdetails/W20051110
video: https://www.youtube.com/watch?v=8tG8PJC6oaU
In reinforcement learning (RL), an agent learns how to optimize performance solely by collecting experience in the real world or via a simulator. RL is being applied to problems such as decision making, process optimization (e.g., manufacturing and supply chains), ad serving, recommendations, self-driving cars, and algorithmic trading.
In this talk, I will discuss RLlib, a reinforcement learning library built on Ray with a strong focus on large-scale execution and scalability, ease-of-use for general users, as well as customizability for developers and researchers.
RLlib offers autonomous task-learning via many common RL algorithms, and it scales from a laptop to a cluster with hundreds of machines. It is used by dozens of organizations, from startups to research labs to large enterprises. You will see RLlib in action with a live demo.
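The experience-driven loop described above can be sketched without Ray or RLlib. The following is a self-contained tabular Q-learning toy on a hypothetical 5-state corridor; it is a stand-in illustration of the RL idea, not RLlib's API.

```python
import random

# Toy RL loop: an agent improves its policy purely from collected
# experience. Plain tabular Q-learning on a 5-state corridor
# (illustration only, not RLlib).

N_STATES, GOAL = 5, 4          # states 0..4, reward only at state 4
ACTIONS = (-1, +1)             # move left / move right
alpha, gamma, eps = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
random.seed(0)

for episode in range(2000):
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == GOAL else 0.0
        # Q-learning update from this single transition
        best_next = max(Q[(s_next, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

# After training, the greedy policy should move right in every state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)}
print(policy)
```

Frameworks like RLlib wrap exactly this collect-experience/update-policy cycle, but distribute the experience collection across many workers.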
Slides for Part One of "Deep learning implementations and frameworks", presented as a tutorial at PAKDD (Pacific Asia Knowledge Discovery and Data Mining Conference) 2016.
The presentation took place on April 19, 2016 at Auckland, New Zealand.
http://pakdd16.wordpress.fos.auckland.ac.nz/technical-program/tutorials/
PyTorch Python Tutorial | Deep Learning Using PyTorch | Image Classifier Usin... (Edureka!)
( ** Deep Learning Training: https://www.edureka.co/ai-deep-learning-with-tensorflow ** )
This Edureka PyTorch Tutorial (Blog: https://goo.gl/4zxMfU) will help you understand various important basics of PyTorch. It also includes a use case in which we will create an image classifier using PyTorch and evaluate its accuracy on an image dataset.
Below are the topics covered in this tutorial:
1. What is Deep Learning?
2. What are Neural Networks?
3. Libraries available in Python
4. What is PyTorch?
5. Use-Case of PyTorch
6. Summary
Follow us to never miss an update in the future.
Instagram: https://www.instagram.com/edureka_learning/
Facebook: https://www.facebook.com/edurekaIN/
Twitter: https://twitter.com/edurekain
LinkedIn: https://www.linkedin.com/company/edureka
License Plate Recognition System using Python and OpenCV (Vishal Polley)
License plate recognition (LPR) is a technology, mainly software, that enables computer systems to automatically read the registration number (license number) of vehicles from digital pictures.
ML Platform Q1 Meetup: Airbnb's End-to-End Machine Learning Infrastructure (Fei Chen)
ML platform meetups are quarterly meetups, where we discuss and share advanced technology on machine learning infrastructure. Companies involved include Airbnb, Databricks, Facebook, Google, LinkedIn, Netflix, Pinterest, Twitter, and Uber.
Perform live Twitter sentiment stream analysis: classify the sentiment of a given text and further analyze people's sentiments or emotions towards the entity.
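As a toy stand-in for the classifier described above (a real system would use a trained model and the Twitter streaming API), here is a minimal lexicon-based scorer; the word lists are tiny illustrative samples, not a real sentiment lexicon.

```python
# Minimal lexicon-based sentiment scorer (illustrative word lists only;
# production systems use trained classifiers over streamed tweets).

POSITIVE = {"love", "great", "good", "happy", "awesome"}
NEGATIVE = {"hate", "bad", "terrible", "sad", "awful"}

def sentiment(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this awesome product"))  # positive
print(sentiment("terrible service, very sad"))   # negative
```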
Monitoring of GPU Usage with TensorFlow Models Using Prometheus (Databricks)
Understanding the dynamics of GPU utilization and workloads in containerized systems is critical to creating efficient software systems. We create a set of dashboards to monitor and evaluate GPU performance in the context of TensorFlow, monitoring performance in real time to gain insight into GPU load, GPU memory, and temperature metrics in a GPU-enabled Kubernetes system. Visualizing TensorFlow training job metrics in real time using Prometheus allows us to tune and optimize GPU usage. Also, because TensorFlow jobs can have both GPU and CPU implementations, it is useful to view detailed real-time performance data from each implementation and choose the best one. To illustrate our system, we will show a live demo gathering and visualizing GPU metrics on a GPU-enabled Kubernetes cluster with Prometheus and Grafana.
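As a rough illustration of what Prometheus actually scrapes, the sketch below renders GPU gauges in Prometheus's plain-text exposition format. The metric name `gpu_memory_used_bytes` and the label values are hypothetical examples, not the exact metrics used in the talk (those typically come from NVIDIA's exporters).

```python
# Render a labeled gauge metric in the Prometheus text exposition format
# (the format a /metrics endpoint serves for scraping). Metric and label
# names here are hypothetical examples.

def prometheus_gauge(name, help_text, samples):
    """samples: iterable of (label_dict, value) pairs."""
    lines = [f"# HELP {name} {help_text}", f"# TYPE {name} gauge"]
    for labels, value in samples:
        label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        lines.append(f"{name}{{{label_str}}} {value}")
    return "\n".join(lines)

page = prometheus_gauge(
    "gpu_memory_used_bytes",
    "GPU memory currently in use.",
    [({"gpu": "0", "pod": "tf-train-1"}, 8589934592),
     ({"gpu": "1", "pod": "tf-train-1"}, 1073741824)],
)
print(page)
```

In a real deployment an exporter serves such text over HTTP and Prometheus scrapes it on an interval; Grafana then queries Prometheus to draw the dashboards described above.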
While the adoption of machine learning and deep learning techniques continue to grow, many organizations find it difficult to actually deploy these sophisticated models into production. It is common to see data scientists build powerful models, yet these models are not deployed because of the complexity of the technology used or lack of understanding related to the process of pushing these models into production.
As part of this talk, I will review several deployment design patterns for both real-time and batch use cases. I’ll show how these models can be deployed as scalable, distributed deployments within the cloud, scaled across Hadoop clusters, as APIs, and deployed within streaming analytics pipelines. I will also touch on topics related to security, end-to-end governance, pitfalls, challenges, and useful tools across a variety of platforms. This presentation will involve demos and sample code for the deployment design patterns.
Python Data Science and Machine Learning at Scale with Intel and Anaconda (Intel® Software)
Python is the number 1 language for data scientists, and Anaconda is the most popular Python platform. Intel and Anaconda have partnered to bring scalability and near-native performance to Python with simple installations. Learn how data scientists can now access oneAPI-optimized Python packages such as NumPy, Scikit-Learn, Modin, Pandas, and XGBoost directly from the Anaconda repository through simple installation and minimal code changes.
Conquering the Lambda architecture in LinkedIn metrics platform with Apache C... (Khai Tran)
Metrics play an important role in data-driven companies like LinkedIn, where we leverage them extensively for reporting, experimentation, and in-product applications. We built an offline platform to help people define and produce metrics driven through their transformation code, mostly in Pig or Hive, and metadata-rich configurations. Many of our users would like to look at these metrics in a real-time fashion. To support this, we recently built an extension to the platform that auto-generates Samza real-time flow from existing offline transformation code with just a single command. Combining with the existing offline platform, we delivered Lambda architecture without maintaining multiple code bases.
In this talk, we will describe how we use Apache Calcite to translate our offline logic, served as the single source of truth, into both Samza code and configuration for real-time execution.
In this deck from the University of Houston CACDS HPC Workshop, Jeff Larkin from Nvidia presents: The Past, Present, and Future of OpenACC.
"OpenACC is an open specification for programming accelerators with compiler directives. It aims to provide a simple path for accelerating existing applications for a wide range of devices in a performance-portable way. This talk will discuss the history and goals of OpenACC, how it is being used today, and what challenges it will address in the future."
Watch the video presentation: http://wp.me/p3RLHQ-dTm
Despite the increase in deep learning practitioners and researchers, many of them do not use GPUs, which can lead to long training/evaluation cycles and impractical research.
In his talk, Lior shares how to get started with GPUs and some of the best practices that helped him during research and work. The talk is for everyone who works with machine learning (deep learning experience is NOT mandatory!). It covers the very basics of how a GPU works, CUDA drivers, IDE configuration, training, inference, and multi-GPU training.
Continuous Delivery to the Cloud: Automate Thru Production with CI + SpinnakerVMware Tanzu
To continuously deliver software to the cloud, companies must adopt critical capabilities that ensure the safety, security, scalability and traceability of deployed applications—from development hand-off through production release.
Pivotal built a deep integration with Spinnaker, an open source continuous delivery platform, and Cloud Foundry (CF) to automate the full path to production. Spinnaker complements and extends the capabilities of continuous integration (CI) tools, including Concourse, to enable developers to ship code rapidly with increased confidence and greater visibility, as well as provide full auditability and operational control of applications.
Spinnaker functions as an application-centric control plane, abstracting the details of cloud platforms not relevant to developers and organizing cloud resources around applications. It provides opinionated building blocks to perform common actions and allow deployment pipelines to be assembled consistently and as needed. Spinnaker’s pipeline workflows support more advanced rollout mechanisms like blue/green deployments, conditional deployments, time window restrictions, and automated canary analysis. Spinnaker also facilitates “in production” application testing, stressing and scaling.
In this webinar, you will learn how:
- Continuous delivery practices complement and extend continuous integration, enabling consistent, safe production releases
- Spinnaker works with CI solutions to execute complex, rule-driven, cloud-provider-integrated, high-volume deployments
- Spinnaker’s multi-cloud asset inventory supports construction of further operational tools like chaos engineering, zero-day security vulnerability scanning, and autoscalers
We’ll also demo a Spinnaker pipeline so you can see continuous delivery to PCF in action.
Presenters : Jon Schneider, Olga Kundzich, Pat Johnson from Pivotal
Big Data for Testing - Heading for Post Process and Analytics (OPNFV)
Yujun Zhang, ZTE Corporation, Donald Hunter, Cisco, Trevor Cooper, Intel
The testing community has created tens of testing projects, hundreds of test cases, and thousands of testing jobs, producing a huge amount of testing data. What comes next, then?
The testing community has put in place tools and procedures to declare test cases and projects, and to normalize and upload results. These tools and procedures have been adopted, so we now have lots of data covering many scenarios, hardware platforms, and installers.
In this presentation, we shall discuss the stakes and challenges of result post-processing:
* How analytics can provide valuable inputs to the community, end users, or upstream projects.
* How we can produce accurate indicators, reports, and graphs, and focus on interpreting and consuming test results.
* How we can get the best out of our mine of results.
NIPS 2013 Reading Group: More Effective Distributed ML via a Stale Synchronous Parallel P... (Shohei Hido)
Presentation slides from the NIPS 2013 reading group.
Qirong Ho et al, "More Effective Distributed ML via a Stale Synchronous Parallel Parameter Server", NIPS2013.
http://media.nips.cc/nipsbooks/nipspapers/paper_files/nips26/631.pdf
UiPath Test Automation using UiPath Test Suite series, part 4 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Key Trends Shaping the Future of Infrastructure (Cheryl Hung)
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
A look at the key trends across hardware, cloud, and open source: how these areas are likely to mature and develop over the short and long term, and how organisations can position themselves to adapt and thrive.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
Welcome to ViralQR, your best QR code generator (ViralQR)
Welcome to ViralQR, your best QR code generator available on the market!
At ViralQR, we design static and dynamic QR codes. Our mission is to make business operations easier and customer engagement more powerful through QR technology. Whether you run a small business or a huge enterprise, our easy-to-use platform provides multiple options that can be tailored to your company's branding and marketing strategies.
Our Vision
We are here to make creating QR codes easy and smooth, enhancing customer interaction and making business run more fluidly. We strongly believe in the ability of QR codes to change how businesses interact with their customers, and we are set on making that technology accessible and usable far and wide.
Our Achievements
Since our inception, we have served many clients, providing QR codes for marketing, service delivery, and feedback collection across various industries. Our platform has been recognized for its ease of use and the features that help businesses create QR codes.
Our Services
At ViralQR, we offer a comprehensive suite of services that caters to your needs:
Static QR codes: Create free static QR codes. These can store information such as URLs, vCards, plain text, emails and SMS, Wi-Fi credentials, and Bitcoin addresses.
Dynamic QR codes: These also have all the advanced features but are subscription-based. They can directly link to PDF files, images, micro-landing pages, social accounts, review forms, business pages, and applications. In addition, they can be branded with CTAs, frames, patterns, colors, and logos to enhance your branding.
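For the Wi-Fi credential use case above, a common convention (popularized by ZXing's barcode apps) encodes the network in a `WIFI:` payload before it is rendered as a QR image. A minimal sketch, assuming WPA authentication and the usual backslash escaping of special characters:

```python
# De-facto payload format for Wi-Fi network QR codes:
#   WIFI:T:<auth>;S:<ssid>;P:<password>;;
# Special characters in fields are backslash-escaped.

def escape(field):
    for ch in ('\\', ';', ',', ':', '"'):  # escape backslash first
        field = field.replace(ch, '\\' + ch)
    return field

def wifi_qr_payload(ssid, password, auth="WPA", hidden=False):
    payload = f"WIFI:T:{auth};S:{escape(ssid)};P:{escape(password)};"
    if hidden:
        payload += "H:true;"
    return payload + ";"

print(wifi_qr_payload("Cafe;Guest", "p:ss"))
# WIFI:T:WPA;S:Cafe\;Guest;P:p\:ss;;
```

A QR generator then encodes this string like any other text payload; phone cameras recognize the `WIFI:` prefix and offer to join the network.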
Pricing and Packages
Additionally, ViralQR offers a 14-day free trial, an excellent opportunity for new users to get a feel for the platform. From there, you can easily subscribe and experience the full power of dynamic QR codes. The subscription plans are priced flexibly so that businesses of every size can afford to benefit from our service.
Why choose us?
ViralQR provides services for marketing, advertising, catering, retail, and the like. QR codes can be placed on fliers, packaging, merchandise, and banners, or substitute for cash and cards in a restaurant or coffee shop. By integrating QR codes into your business, you can improve customer engagement and streamline operations.
Comprehensive Analytics
Subscribers of ViralQR receive detailed analytics and tracking tools that give a clear view of QR code performance. Our analytics dashboard shows aggregate views and unique views, as well as detailed information about each impression, including time, device, browser, and estimated location by city and country.
Thank you for choosing ViralQR; we offer nothing but the best in QR code services to meet the diverse needs of businesses!
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori (Peter Spielvogel)
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed in releasing software to market, along with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™UiPathCommunity
In questo evento online gratuito, organizzato dalla Community Italiana di UiPath, potrai esplorare le nuove funzionalità di Autopilot, il tool che integra l'Intelligenza Artificiale nei processi di sviluppo e utilizzo delle Automazioni.
📕 Vedremo insieme alcuni esempi dell'utilizzo di Autopilot in diversi tool della Suite UiPath:
Autopilot per Studio Web
Autopilot per Studio
Autopilot per Apps
Clipboard AI
GenAI applicata alla Document Understanding
👨🏫👨💻 Speakers:
Stefano Negro, UiPath MVPx3, RPA Tech Lead @ BSP Consultant
Flavio Martinelli, UiPath MVP 2023, Technical Account Manager @UiPath
Andrei Tasca, RPA Solutions Team Lead @NTT Data
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
3. Preferred Networks (PFN)
A startup that applies deep learning to industrial IoT
• Founded: March 2014
• Headquarters: Tokyo, Japan
• U.S. subsidiary: San Mateo, California
• Company size: 35 engineers & researchers
• Investors: Toyota, FANUC, NTT
[Diagram: Deep learning × Industrial IoT — Manufacturing, Automotive, Healthcare]
4. Partnering with world-leading companies using Chainer
• R&D collaboration on industrial problems with real-world data
  – Specific requirements, modified algorithms, many trials and errors, etc.
  – Different from making a general-purpose recognition system
[Partner logos: Toyota, FANUC, Panasonic, NTT, Cisco, NVIDIA]
5. Two types of background behind DL frameworks
1. Scalability-oriented
• Use cases in mind
  – Image/speech recognition systems
  – Fast DL as a service in the cloud
• Problem type
  – A few general applications
  – 10+ million training samples
  – 10+ node cluster w/ fast network
• Possible bottleneck
  – Tuning of well-known algorithms
  – Distributed computation for model/data-parallel training
2. Flexibility-oriented
• Use cases in mind
  – Algorithm research
  – R&D projects for new products
• Problem type
  – Various specific applications
  – 10k+ training samples
  – 1 node with multiple GPUs
• Possible bottleneck
  – Trial-and-error in prototyping
  – Debugging, profiling & refactoring
  – (wait time during compilation)
6. Designed for efficient research & development
• Flexible: new kinds of complex models for various applications
• Intuitive: rapid prototyping and efficient trial-and-error
• Powerful: comparable performance for 1 node & multiple GPUs
[Diagram: spectrum from scalability-oriented to flexibility-oriented]
8. Neural network and computation
[Diagram: inputs x1..xN → hidden units h1..hH → ... → k1..kM → outputs y1..yM.
Forward computation runs left to right; backward computation (backpropagation) runs right to left.
Example inputs: text, image, sensor. Example outputs: object "Tulip", anomaly score 0.35, category "Sports"]
9. Chainer focuses on network representation/training
• Design choices for deep learning frameworks
  – How to build neural networks?
  – How to train neural networks?
  – Which text format/language for modeling?
  – Which language for computing?
  – Run with a GPU?
  – Run on multiple GPUs?
  – Run on multiple compute nodes?
10. Building and training neural networks:
Computational graph construction is the key
1. Construct a computational graph
  – Based on the network definition given by users
  – Chains of functions and operations on input variables
2. Compute loss and gradients
  – Forward computation calculates the loss for a minibatch
  – Backpropagation gives gradients for all parameters
3. Optimize the model
  – Update each parameter with its gradient
  – Repeat until convergence
Step 1 is the most important, and there are many approaches
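The three steps above can be sketched without any framework. A minimal pure-Python example (illustrative only, not the Chainer API) that fits y = w·x to data with squared loss:

```python
# Minimal sketch of the three training steps, in plain Python
# (illustrative only; not the Chainer API).

def train(xs, ts, w=0.0, lr=0.1, epochs=100):
    for _ in range(epochs):
        for x, t in zip(xs, ts):
            # Steps 1-2: forward computation builds the (tiny) graph
            # and computes the loss for one sample
            y = w * x
            loss = 0.5 * (y - t) ** 2
            # Step 2: backpropagation gives the gradient dloss/dw
            grad_w = (y - t) * x
            # Step 3: update the parameter with its gradient
            w -= lr * grad_w
    return w

# Data generated from y = 3x; training should recover w close to 3
xs = [1.0, 2.0, 3.0]
ts = [3.0, 6.0, 9.0]
w = train(xs, ts)
```

Every framework in this deck implements exactly this loop; the design question is how step 1, the graph construction, is expressed.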
11. Building blocks
• These functionalities are very similar between frameworks
• But the structure, abstraction level, and interface are different
• It comes down to the design of a domain-specific language for NN
[Diagram: array data structure (vector/matrix/tensor) → operations & functions → network (computational graph) → optimizer (SGD/AdaGrad/Adam)]
12. Types of domain-specific language for neural networks
• Text DSL
  – Ex. Caffe (prototxt)
  – Ex. CNTK (NDL)
• Symbolic program
  – Operations on symbols
  – Ex. Theano
  – Ex. TensorFlow
• Imperative program
  – Direct computations on raw data arrays
  – Ex. Torch.nn
  – Ex. Chainer

%% Definition in text
f: {
  "A": "Variable",
  "B": "Variable",
  "C": ["B", "*", "A"],
  "ret": ["C", "+", 1]
}
# Compile
f = compile("f.txt")
d = f(A=np.ones(10),
      B=np.ones(10) * 2)

# Symbolic definition
A = Variable('A')
B = Variable('B')
C = B * A
D = C + Constant(1)
# Compile
f = compile(D)
d = f(A=np.ones(10),
      B=np.ones(10) * 2)

# Imperative declaration
a = np.ones(10)
b = np.ones(10) * 2
c = b * a
d = c + 1

Ex. MXNet
13. Comparison of DSL types

DSL type          Pros                                Cons
Text DSL          • Human-readable definition         • Users must study the format
                  • Non-programmers can easily        • Format might have to be
                    edit the network                    extended for new algorithms
Internal DSL:     • Static analysis at compile time   • Users must study special syntax
  Symbolic        • Optimization before training      • May need more effort to
                  • Easy to parallelize                 implement new algorithms
Internal DSL:     • Less effort to learn syntax       • Hard to optimize in advance
  Imperative      • Easy debugging and profiling      • Less efficient in memory
                  • Suitable for new algorithms         allocation and parallelization
                    with complex logic

Chainer is at the extreme end of imperative programs, for high flexibility
15. Chainer as an open-source project
• https://github.com/pfnet/chainer
• 50 contributors
• 1,277 stars & 255 forks
• 3,708 commits
• Active development & releases for the last 10 months
  – v1.0.0 (June 2015) to v1.7.2 (March 2016)
Original developer: Seiya Tokui
16. Chainer software stack
• Chainer is built on top of NumPy and CUDA
• CuPy is also introduced as an equivalent of NumPy on GPU
[Stack diagram: Chainer → NumPy with BLAS (CPU) and CuPy with CUDA/cuDNN (NVIDIA GPU)]
17. Graph build scheme (1/2) - Define-and-Run:
Most frameworks use this scheme (Chainer does not)
• Define: build a computational graph from the network definition
• Run: update the model (parameters) using the training dataset
[Diagram: Define — the network definition is turned, via auto differentiation, into a computational graph with a gradient function and parameters. Run — training data flows through the graph to produce loss & gradients, which update the parameters]
18. Graph build scheme (2/2) - Define-by-Run:
Computational graph construction on the fly
• No graph is constructed before training
• Instead, the graph is built at each forward computation
• The computational graph can be modified dynamically
  for each iteration/sample, or depending on some conditions
[Diagram: the model definition and training data together produce the computational graph, gradient function, and parameter updates, with dynamic changes driven by conditions]
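The Define-by-Run idea can be illustrated with a toy automatic-differentiation sketch (hypothetical code, not Chainer's internals): each operation records its inputs as it executes, so the graph exists only as the remembered history of the forward pass.

```python
# Toy Define-by-Run autodiff (illustrative sketch, not Chainer's
# actual implementation). The graph is just the recorded history.

class Var:
    def __init__(self, data, parents=()):
        self.data = data
        self.parents = parents   # recorded during forward computation
        self.grad_fn = None      # how to push gradients to parents
        self.grad = 0.0

    def __mul__(self, other):
        out = Var(self.data * other.data, (self, other))
        out.grad_fn = lambda g: [(self, g * other.data),
                                 (other, g * self.data)]
        return out

    def __add__(self, other):
        out = Var(self.data + other.data, (self, other))
        out.grad_fn = lambda g: [(self, g), (other, g)]
        return out

    def backward(self, g=1.0):
        self.grad += g
        if self.grad_fn:
            for parent, pg in self.grad_fn(g):
                parent.backward(pg)

# The graph is built on the fly during this forward computation:
a, b = Var(3.0), Var(4.0)
d = a * b + a       # d = 3*4 + 3 = 15
d.backward()        # dd/da = b + 1 = 5, dd/db = a = 3
```

Because the graph is rebuilt on every forward pass, the Python control flow (loops, conditionals) that produced it can differ from iteration to iteration, which is exactly the flexibility this slide describes.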
19. Define-by-Run example: MLP for MNIST
• Only the transformations between units are set up before training
• Connections are given as the forward computation
l1 = Linear(784, n_units)
l2 = Linear(n_units, 10)

def forward(x):
    h1 = ReLU(l1(x))
    return l2(h1)
[Diagram: x → Linear l1 (W, bias) → ReLU → h1 → Linear l2 (W, bias) → y (digits 0, 5, 9, ...)]
20. Define-by-Run:
An interpreted language for neural networks
• Idea
  – The forward computation actually goes through the computational graph
  – By remembering the history, the actual graph can be obtained
• Advantages
  – Flexibility for new algorithms with complex components
    ◦ Ex. recurrent, recursive, attention, memory, adversarial, etc.
  – Intuitive coding with a highly imperative network definition
    ◦ Ex. stochastic networks whose graph changes for each iteration
• Current drawbacks
  – The graph is regenerated every time, even for fixed networks
  – No optimization, even for static parts of graphs
    ◦ JIT-like analysis and subgraph caching might be useful
21. Basic components (1/2): Variable and Function
• Variable
  – Variable wraps arrays (.data)
  – It remembers its parent function (.creator)
  – It will be assigned a gradient (.grad)
  – It keeps track of not only data but also computations
• Function
  – A transformation between Variables
  – Stateless
  – e.g. sigmoid, tanh, ReLU, max pooling, dropout
[Diagram: Variable x → Function → Variable y]
22. Basic components (2/2): Link and Chain
• Link = function with state
  – Parameters are also Variables, and gradients will be assigned to them
  – e.g. Linear (fully-connected), LSTM, Convolution2D, word embedding
• Chain = network
  – A Chain has a set of child Links
  – Forward computation is defined in .__call__()
  – e.g. MLP2, AlexNet, GoogLeNet, RNNLM, seq2seq
[Diagram: Link (Linear): y = f(W*x + b). Chain (MLP2): x → Linear l1 → ReLU → h1 → Linear l2 → y]
23. Backpropagation through the computational graph
• Consider an objective (using Link.Linear): L = f(x * W + b)
• Computing the value of L in the forward computation simultaneously builds the following computational graph
• The gradient of L can be computed with respect to any variable by backpropagation
• Then the optimizer updates the values of the parameters
[Graph: x, W → * → + (with b) → f → L; circles are Variables, boxes are Functions]
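A worked instance of this graph, in plain Python (f chosen as the sigmoid purely for illustration), with the backpropagated gradient checked against a finite-difference approximation:

```python
import math

# Worked example of L = f(x*W + b), with f = sigmoid
# (an arbitrary choice for illustration).

def f(u):
    return 1.0 / (1.0 + math.exp(-u))

x, w, b = 2.0, 0.5, -0.25

# Forward computation through the graph: x,W -> * -> + (b) -> f -> L
u = x * w + b
L = f(u)

# Backpropagation: chain rule through each Function node
dL_du = L * (1.0 - L)   # sigmoid'(u) = f(u) * (1 - f(u))
dL_dw = dL_du * x       # through the * node
dL_db = dL_du           # through the + node
dL_dx = dL_du * w

# Check dL/dW against a central finite difference
eps = 1e-6
num = (f(x * (w + eps) + b) - f(x * (w - eps) + b)) / (2 * eps)
```

The two values of dL/dW agree to many decimal places, which is the standard sanity check for any backpropagation implementation.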
24. Code sample (1/4): Multi-layer perceptron
class MLP2(Chain):
    def __init__(self):
        super(MLP2, self).__init__(
            l1=L.Linear(784, 100),
            l2=L.Linear(100, 10),
        )
    def __call__(self, x):
        h1 = F.relu(self.l1(x))
        y = self.l2(h1)
        return y

class Classifier(Chain):
    def __init__(self, predictor):
        super(Classifier, self).__init__(predictor=predictor)
    def __call__(self, x, t):
        y = self.predictor(x)
        self.accuracy = F.accuracy(y, t)
        self.loss = F.softmax_cross_entropy(y, t)
        return self.loss, self.accuracy

# Model and optimizer setup
model = Classifier(MLP2())
optimizer = optimizers.SGD()
optimizer.setup(model)

# Training loop with minibatches
for i in range(0, datasize, batchsize):
    x = Variable(x_tr[i:i+batchsize])
    t = Variable(y_tr[i:i+batchsize])
    model.zerograds()
    loss, acc = model(x, t)
    loss.backward()
    optimizer.update()

[Diagram: Chain (MLP2): x → Linear l1 (W, bias) → ReLU → h1 → Linear l2 (W, bias) → y]
25. Code sample (2/4): Convolutional neural network
class AlexNet(Chain):
    def __init__(self):
        super(AlexNet, self).__init__(
            conv1=L.Convolution2D(3, 96, 11, stride=4),
            conv2=L.Convolution2D(96, 256, 5, pad=2),
            conv3=L.Convolution2D(256, 384, 3, pad=1),
            conv4=L.Convolution2D(384, 384, 3, pad=1),
            conv5=L.Convolution2D(384, 256, 3, pad=1),
            fc6=L.Linear(9216, 4096),
            fc7=L.Linear(4096, 4096),
            fc8=L.Linear(4096, 1000),
        )
    def __call__(self, x, t):
        h = F.max_pooling_2d(F.relu(
            F.local_response_normalization(self.conv1(x))), 3, stride=2)
        h = F.max_pooling_2d(F.relu(
            F.local_response_normalization(self.conv2(h))), 3, stride=2)
        h = F.relu(self.conv3(h))
        h = F.relu(self.conv4(h))
        h = F.max_pooling_2d(F.relu(self.conv5(h)), 3, stride=2)
        h = F.dropout(F.relu(self.fc6(h)), train=self.train)
        h = F.dropout(F.relu(self.fc7(h)), train=self.train)
        y = self.fc8(h)
        return y

* ImageNet Classification with Deep Convolutional Neural Networks
http://www.image-net.org/challenges/LSVRC/2012/supervision.pdf
[Diagram: 5 conv2d layers followed by 3 linear layers]
26. Code sample (3/4): Recurrent neural network
class SimpleRNN(Chain):
    def __init__(self, n_vocab, n_units):
        super(SimpleRNN, self).__init__(
            embed=L.EmbedID(n_vocab, n_units),
            x2h=L.Linear(n_units, n_units),
            h2h=L.Linear(n_units, n_units),
            h2y=L.Linear(n_units, n_vocab),)
        self.h = None
    def __call__(self, x):
        y, h_new = self.fwd_one_step(x, self.h)
        self.h = h_new
        return y
    def fwd_one_step(self, x, h):
        x = F.tanh(self.embed(x))
        if h is None:
            h = F.tanh(self.x2h(x))
        else:
            h = F.tanh(self.x2h(x) + self.h2h(h))
        y = F.softmax(self.h2y(h))
        return y, h

[Diagram: unrolled RNN — input words x_1..x_4 → recurrent state h → outputs y_1..y_4; BPTT length = 3]

# Truncated BPTT (length=3)
for i in range(0, datasize, batchsize):
    ...
    accum_loss += model(x, t)
    if i % bptt_length == 0:
        model.zerograds()
        accum_loss.backward()
        accum_loss.unchain_backward()
        optimizer.update()
27. Code sample (4/4): Deep Networks with Stochastic Depth
A paper published on arXiv, March 30, 2016 (G. Huang et al., http://arxiv.org/abs/1603.09382v2)
• A variant of Residual Net that skips connections stochastically
  – Outperformed the original Residual Net (ImageNet 2015 winner, MSR)
  – Stochastic skip, with survival probability p[i] per layer
# Mock code in Chainer
class StochasticResNet(Chain):
    def __init__(self, prob, size, …):
        super(StochasticResNet, self).__init__(
            ## Define f[i] the same as for Residual Net )
        self.p = prob  # Survival probabilities
    def __call__(self, h):
        for i in range(self.size):
            b = numpy.random.binomial(1, self.p[i])
            c = self.f[i](h) + h if b == 1 else h
            h = F.relu(c)
        return h
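The stochastic skip can also be shown framework-free. A runnable sketch (hypothetical: the residual functions here are plain scalar callables standing in for f[i], and ReLU is max(0, ·)):

```python
import random

def relu(v):
    return max(0.0, v)

def stochastic_resnet_forward(h, fs, ps, rng=random):
    # Each residual block is kept with survival probability ps[i];
    # when dropped, only the identity path h survives (stochastic skip).
    for f, p in zip(fs, ps):
        b = 1 if rng.random() < p else 0
        c = f(h) + h if b == 1 else h
        h = relu(c)
    return h

# Two toy residual functions on scalars, stand-ins for f[i]
fs = [lambda h: 0.5 * h, lambda h: h + 1.0]

# With survival probability 1.0 every block is kept...
out_keep = stochastic_resnet_forward(1.0, fs, [1.0, 1.0])
# ...and with probability 0.0 the network reduces to the identity
out_skip = stochastic_resnet_forward(1.0, fs, [0.0, 0.0])
```

Because the kept/dropped decision is drawn inside the forward pass, the graph differs per call, which is exactly the kind of model Define-by-Run handles naturally.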
28. Miscellaneous
• Other features
  – Install with pip in one line: $ pip install chainer
  – Multi-GPU support by explicitly selecting the device ID to use
  – Pre-trained Caffe model import from the Model Zoo
  – Model serialization (save & load): HDF5 or NumPy npz
• Future directions (not only for Chainer)
  – JIT-like optimization during Define-by-Run
  – Memory consumption reduction (GPU memory is still small)
  – Handling variable-length inputs without minibatches
  – Maximizing performance in multi-node & multi-GPU environments
30. CuPy: a (partially) NumPy-compatible GPU library
• Motivation: NumPy + CUDA = CuPy
  – NumPy is the standard library for numerical computation in Python
  – CUDA is the standard API for using GPUs for high performance
  – Unfortunately, NumPy does NOT work with CUDA
• CuPy supports:
  – Fast computation using NVIDIA's cuBLAS and cuDNN
  – Array indexing, slicing, transpose, and reshape
  – Most of the operations/functions in NumPy
    ◦ Chainer v1.7.2 already supports more than 170 functions
  – User-defined functions and kernels
  – All dtypes, broadcasting, memory pools, etc.
31. How to use CuPy
• Usage of CuPy: just replace NumPy with CuPy
import numpy, cupy
enable_cupy = True
xp = cupy if enable_cupy else numpy
• Conversion between numpy.ndarray and cupy.ndarray
w_c = cupy.asarray(numpy.ones(10))  # cupy.ndarray
w_n = cupy.asnumpy(cupy.ones(10))   # numpy.ndarray
• Ex. a CPU/GPU-agnostic logsumexp function
def logsumexp(x, axis=None):
    xp = cuda.get_array_module(x)  # Get CuPy or NumPy
    x_max = x.max(axis)
    exp_sum = xp.exp(x - x_max).sum(axis)
    return x_max + xp.log(exp_sum)
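The agnostic pattern can be tried without a GPU at all. A NumPy-only variant (hypothetical: the `xp` module is passed as a parameter here instead of being detected via `cuda.get_array_module`):

```python
import numpy

# NumPy-only illustration of the CPU/GPU-agnostic logsumexp pattern;
# passing cupy as xp (on arrays already on the GPU) would be the
# drop-in GPU version.

def logsumexp(x, xp=numpy, axis=None):
    # Subtracting the max keeps exp() from overflowing
    x_max = x.max(axis)
    exp_sum = xp.exp(x - x_max).sum(axis)
    return x_max + xp.log(exp_sum)

x = numpy.array([1000.0, 1000.0])   # naive exp(1000) would overflow
val = logsumexp(x)                  # = 1000 + log(2)
```

The max-subtraction trick is why logsumexp is written this way at all: it returns a finite answer where `log(exp(x).sum())` would overflow to inf.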
32. CuPy implementation:
Optimized for performance & NumPy compatibility
• Uses Cython for cupy.core & cupy.cuda
• Dynamic code generation & compilation
  – CUDA code is generated for the specific tensor dimensions & data types
  – On-the-fly compilation by nvcc, with a binary cache (faster after first use)
[Stack diagram: cupy (tensor operations & functions) → cupy.core (ndarray; ufunc, elementwise, reduction) → cupy.cuda (CUDA Python wrapper) → CUDA libraries (cuBLAS, cuRAND, cuDNN)]
33. CuPy performance on linear algebra:
5 to 25 times faster than NumPy
def test(xp):
    a = xp.arange(1000000).reshape(1000, -1)
    return a.T * 2

test(numpy)  # warm-up run
t1 = datetime.datetime.now()
for i in range(1000):
    test(numpy)
t2 = datetime.datetime.now()
print(t2 - t1)

test(cupy)  # warm-up run
t1 = datetime.datetime.now()
for i in range(1000):
    test(cupy)
t2 = datetime.datetime.now()
print(t2 - t1)

                    msec    speed-up
NumPy               2,929   1.0
CuPy                585     5.0
CuPy + memory pool  123     23.8

Intel Core i7-4790 @3.60GHz, 32GB RAM, GeForce GTX 970
34. Use CuPy for GPU-based computation
• Supports three patterns as wrappers
  – ElementwiseKernel: for element-wise computation
  – ReductionKernel: for reduce operations along an axis
  – ufunc: universal functions as in NumPy
• Ex. definition of an element-wise function
squared_diff = cupy.ElementwiseKernel(
    'float32 x, float32 y',   # Input
    'float32 z',              # Output
    'z = (x - y) * (x - y)',  # Operation
    'squared_diff')           # Name
• Usage (automatic broadcasting and type checking are supported)
squared_diff(cupy.arange(10), 10)
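On the CPU, the same element-wise computation is a one-liner thanks to NumPy broadcasting; a hypothetical reference implementation to compare the kernel's output against:

```python
import numpy

# NumPy reference for the squared_diff kernel above: broadcasting
# expands the scalar 10 across the array, just as the CuPy kernel
# does on the GPU.

def squared_diff(x, y):
    return (x - y) * (x - y)

z = squared_diff(numpy.arange(10, dtype=numpy.float32), 10)
# z[i] = (i - 10)**2 for i = 0..9
```

This NumPy/CuPy symmetry is the point of the design: the same call site works with either array module.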
36. Public benchmark results (CNN):
Chainer shows comparable performance
• Forward computation is almost the same as TensorFlow
• Training with backward computation is slower, but this can be
  offset by having no compilation time while debugging/tuning
[Bar charts: forward and backward computation time (msec, 0-1200) for AlexNet, GoogLeNet, VGG-A, and OverFeat, comparing Torch, TensorFlow, Chainer, and Caffe (native)]
Taken from https://github.com/soumith/convnet-benchmarks, using cuDNN except for Caffe
37. Chainer can benefit from the latest CUDA libraries:
Ex. the Winograd algorithm in cuDNN v5
• Conv3x3 is common in CNNs & is now computed with Winograd
• State-of-the-art CNN models (e.g., GoogLeNet, VGG-A)
  can be accelerated up to 2.0x at test time (forward only)
[Bar charts: forward and backward computation time (msec, 0-600) for AlexNet, GoogLeNet, VGG-A, and OverFeat, comparing cuDNN v4 and cuDNN v5]
Independently measured with a modified version of soumith/convnet-benchmarks
cuDNN v5 can be used in Chainer v1.8.0
38. Algorithm implementation in Chainer:
A Neural Algorithm of Artistic Style (Gatys et al., 2015)
• https://github.com/mattya/chainer-gogh
[Images: content image (cat) + style image = new artistic image]
Main code: 45 lines