This document provides an overview of ONNX and ONNX Runtime. ONNX is an open format for machine learning models that allows models to be shared across different frameworks and tools. ONNX Runtime is a cross-platform open source inference engine that runs ONNX models. It supports hardware acceleration and has a modular design that allows for custom operators and execution providers to extend its capabilities. The document discusses how ONNX helps with deploying machine learning models from research to production and how ONNX Runtime performs high performance inference through optimizations and hardware acceleration.
14. ONNX
ONNX is an open source format for representing machine learning models.
ONNX establishes a standard set of operators - the building blocks of machine learning and deep learning models - as well as a standard file format, allowing AI developers to use models across a range of frameworks, tools, runtimes, and compilers.
15. ONNX is now on
● Training - CPU, GPU
● Deployment - Edge/Mobile
17. How do I get an ONNX model?
1. ONNX Model Zoo
2. Model creation services such as Azure Custom Vision and AutoML
18. How do I get an ONNX model?
1. ONNX Model Zoo
2. Model creation services such as Azure Custom Vision and AutoML
3. Convert an existing model from another framework
19. How do I get an ONNX model?
1. ONNX Model Zoo
2. Model creation services such as Azure Custom Vision and AutoML
3. Convert an existing model from another framework
4. End-to-end machine learning with Azure Machine Learning service
20. You can also get them from
● Native export from PyTorch and CNTK
● Converters
https://github.com/microsoft/OLive
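To make the native-export path concrete, here is a minimal sketch of exporting a PyTorch model to ONNX with torch.onnx.export. The model choice, input shape, output file name, and opset version are illustrative assumptions, not something specified in the deck.

# Minimal sketch: native export of a PyTorch model to ONNX.
# The model, input shape, file name and opset version are placeholders.
import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True).eval()
dummy_input = torch.randn(1, 3, 224, 224)   # example input used to trace the graph

torch.onnx.export(
    model,                    # model to export
    dummy_input,              # example input that defines the input shapes
    "resnet18.onnx",          # output file
    input_names=["input"],    # optional names for the graph inputs/outputs
    output_names=["output"],
    opset_version=11,         # target ONNX opset
)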
21. ML models: research to production
Pipeline: data collection → training → conversion → inference and deployment
23. ONNX Runtime
● Available on all platforms
● Hardware vendors provide support
https://github.com/microsoft/DirectML
https://github.com/onnx/tutorials
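For illustration, a minimal sketch of loading and running an ONNX model with the ONNX Runtime Python API; the model path, input name, and input shape below are placeholder assumptions.

# Minimal sketch: inference with ONNX Runtime from Python.
# "model.onnx" and the input shape are placeholders.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")

# The model reports its expected input names, shapes and types
input_meta = session.get_inputs()[0]
print(input_meta.name, input_meta.shape, input_meta.type)

# Run inference; the feed dict maps input names to numpy arrays
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_meta.name: x})   # None = return all outputs
print(outputs[0].shape)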
31. Common uses today
● Word - grammar check
● Bing - Q&A
● Cognitive Services
And many more
32. Technical Design Principles
● Support new architectures
● Also support traditional ML
● Backward compatibility
● Compact
● Cross-platform representation for serialization
33. ONNX Specification
ONNX is an open specification that consists of the following components:
● A definition of an extensible computation graph model
● Definitions of standard data types
● Definitions of built-in operators, organized into versioned operator sets (schemas)
34. ONNX Model File Format
Model
a. Version info
b. Metadata
c. Acyclic computation dataflow graph
35. ONNX Model File Format
Graph
a. Inputs and outputs
b. List of computation nodes
c. Graph name
36. ONNX Model File Format
Computation node
a. Zero or more inputs of defined types
b. One or more outputs of defined types
c. Operator
d. Operator parameters (attributes)
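The Model → Graph → Node structure described on slides 34-36 can be inspected with the onnx Python package. A minimal sketch, assuming a placeholder model.onnx file on disk:

# Minimal sketch: walking the ONNX file format with the onnx package.
# "model.onnx" is a placeholder path.
import onnx

model = onnx.load("model.onnx")           # ModelProto: version info, metadata, graph
print("IR version:", model.ir_version)
print("Producer:", model.producer_name)

graph = model.graph                       # GraphProto: name, inputs, outputs, nodes
print("Graph name:", graph.name)
print("Inputs:", [i.name for i in graph.input])
print("Outputs:", [o.name for o in graph.output])

for node in graph.node:                   # NodeProto: operator, inputs, outputs, attributes
    print(node.op_type, list(node.input), "->", list(node.output))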
37. ONNX Supported Data Types
Tensor data types
● int8, int16, int32, int64
● uint8, uint16, uint32, uint64
● float16, float32 (float), float64 (double)
● bool
● string
● complex64, complex128
39. Operators in ONNX
https://github.com/onnx/onnx/blob/master/docs/Operators.md
● An operator is identified by <name, domain, version>
● Core ops (ONNX and ONNX-ML)
○ Should be supported by ONNX-compatible products
○ Generally cannot be meaningfully further decomposed
○ Currently 124 ops in the ai.onnx domain and 18 in ai.onnx.ml
○ Support many scenarios/problem areas including image classification, recommendation, natural language processing, etc.
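To show how operators, data types, and the graph model fit together, here is a minimal sketch that builds and checks a one-node model with the onnx.helper API. The choice of operator (Relu), the shapes, and the opset version are arbitrary assumptions made for the example.

# Minimal sketch: build a one-node ONNX model in memory and validate it.
# Operator, shapes and opset version are arbitrary example choices.
import onnx
from onnx import TensorProto, helper

node = helper.make_node("Relu", inputs=["x"], outputs=["y"])          # one core op
x = helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 4])     # graph input
y = helper.make_tensor_value_info("y", TensorProto.FLOAT, [1, 4])     # graph output

graph = helper.make_graph([node], "tiny_relu_graph", [x], [y])
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 11)])

onnx.checker.check_model(model)           # verify the model against the spec
onnx.save(model, "tiny_relu.onnx")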
40. ONNX - Custom Ops
● Ops specific to a framework or runtime
● Indicated by a custom domain name
● Primarily meant to be a safety valve
41. ONNX versioning
Done at 3 levels
● IR version (file format)
● Opset version: ONNX models declare which operator sets they require as a list of two-part operator set ids (domain, opset_version)
● Operator version: a given operator is identified by a three-tuple (domain, op_type, op_version)
https://github.com/onnx/onnx/blob/master/docs/Versioning.md
These versions evolve over time
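A minimal sketch of reading these three versioning levels from a model with the onnx Python package (the file name is a placeholder):

# Minimal sketch: the three versioning levels as seen from a loaded model.
# "model.onnx" is a placeholder path.
import onnx

model = onnx.load("model.onnx")

# 1. IR version (file format)
print("IR version:", model.ir_version)

# 2. Operator sets the model declares it requires: (domain, opset_version) pairs
for opset in model.opset_import:
    print("domain:", opset.domain or "ai.onnx", "opset version:", opset.version)

# 3. Operators actually referenced by the graph nodes; their concrete version
#    is resolved from the declared opsets
print({node.op_type for node in model.graph.node})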
42. ONNX Runtime - Design Principles
● Provide a complete implementation of the ONNX standard - implement all versions of the operators (since opset 7)
● Backward compatibility
● High performance
● Cross-platform
● Leverage custom accelerators and runtimes to enable maximum performance (execution providers)
● Support hybrid execution of models
● Extensible through pluggable modules
46. Graph Partitioning
Given a mutable graph, the graph partitioner assigns graph nodes to each execution provider according to its capability; the goal is to reach the best performance in a heterogeneous environment.
ONNX Runtime uses a "greedy" node assignment mechanism:
● Users specify a preferred execution provider list in priority order
● ONNX Runtime goes through the list in order, checks each provider's capability, and assigns nodes to it if it can run them.
FUTURE:
● Manually tuned partitioning
● ML-based partitioning
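In the Python API the preferred execution provider list is passed when the session is created; a minimal sketch, assuming a placeholder model and a machine where the CUDA provider is installed:

# Minimal sketch: ordered execution-provider preference list.
# Nodes are assigned greedily to the first provider that can run them,
# with CPUExecutionProvider as the fallback. "model.onnx" is a placeholder.
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

# Providers actually in use for this session, in priority order
print(session.get_providers())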
47. Rewrite rule
An interface for finding patterns (with specific nodes) and applying rewrite rules to a sub-graph.
49. Transformer levels
Level 0: Transformers that are always applied after graph partitioning (e.g. cast insertion, memory copy insertion)
Level 1: General transformers not specific to any execution provider (e.g. dropout elimination)
Level 2: Execution-provider-specific transformers (e.g. transpose insertion for FPGA)
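These transformer levels are surfaced in the Python API through the session's graph optimization level. The exact mapping of the public enum values to the internal levels above is an assumption here, but a minimal sketch of setting it looks like this:

# Minimal sketch: choosing how far ONNX Runtime's graph transformers go.
# "model.onnx" and the output path are placeholders.
import onnxruntime as ort

opts = ort.SessionOptions()
# Options: ORT_DISABLE_ALL, ORT_ENABLE_BASIC, ORT_ENABLE_EXTENDED, ORT_ENABLE_ALL
opts.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_EXTENDED

# Optionally dump the optimized model to inspect what the transformers did
opts.optimized_model_filepath = "model.optimized.onnx"

session = ort.InferenceSession("model.onnx", sess_options=opts)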
52. Execution Provider
An interface to a hardware accelerator that lets the runtime query its capability and get corresponding executables.
● Kernel-based execution providers
These execution providers provide implementations of individual operators defined in ONNX (e.g. CPUExecutionProvider, CudaExecutionProvider, MKLDNNExecutionProvider, etc.)
● Runtime-based execution providers
These execution providers may not implement individual ONNX ops, but they can run a whole or partial ONNX graph; that is, they can run several ONNX ops (a sub-graph) together as one function (e.g. TensorRTExecutionProvider, nGraphExecutionProvider, etc.)
54. Extending ONNX Runtime
● Execution providers
○ Implement the IExecutionProvider interface
■ Examples: TensorRT, OpenVINO, nGraph, Android NNAPI, etc.
● Custom operators
○ Support operators outside the ONNX standard
○ Support for writing custom ops in both C/C++ and Python
● Graph optimizers
○ Implement the GraphTransformer interface
55. ONNX Go Live (OLive)
● Automates the process of shipping ONNX models
● Integrates
○ model conversion
○ correctness testing
○ performance tuning
into a single pipeline and outputs a production-ready ONNX model with ONNX Runtime configurations (execution provider + optimization options)
https://github.com/microsoft/OLive
56. To learn more, see:
https://azure.microsoft.com/en-in/blog/onnx-runtime-is-now-open-source/
https://www.onnxruntime.ai/python/tutorial.html