Fine tune and deploy Hugging Face NLP modelsOVHcloud
Are you currently managing AI projects that require a lot of GPU power?
Are you tired of managing the complexity of your infrastructures, GPU instances and your Kubeflow yourself?
Need flexibility for your AI platform or SaaS solution?
OVHcloud innovates in AI by offering simple and turnkey solutions to train your models and put them into production.
Thomas Wolf "An Introduction to Transfer Learning and Hugging Face"Fwdays
In this talk I'll start by introducing the recent breakthroughs in NLP that resulted from the combination of Transfer Learning schemes and Transformer architectures. The second part of the talk will be dedicated to an introduction of the open-source tools released by Hugging Face, in particular our transformers, tokenizers, and NLP libraries as well as our distilled and pruned models.
An introduction to computer vision with Hugging FaceJulien SIMON
In this code-level talk, Julien will show you how to quickly build and deploy computer vision applications based on Transformer models. Along the way, you'll learn about the portfolio of open source and commercial Hugging Face solutions, and how they can help you deliver high-quality solutions faster than ever before.
An introduction to the Transformers architecture and BERTSuman Debnath
The Transformer is one of the most popular state-of-the-art (SOTA) deep learning architectures, used mostly for natural language processing (NLP) tasks. Since its advent, it has replaced RNNs and LSTMs for many tasks. The Transformer created a major breakthrough in NLP and paved the way for revolutionary architectures such as BERT.
Dmitry Kan, Principal AI Scientist at Silo AI and host of the Vector Podcast [1], will give an overview of the landscape of vector search databases and their role in NLP, along with the latest news and his view on the future of vector search. Further, he will share how he and his team participated in the Billion-Scale Approximate Nearest Neighbor Challenge and improved recall by 12% over a baseline FAISS.
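Recall gains like the 12% figure above are measured by comparing an approximate nearest-neighbor result set against the exact one. A minimal stdlib sketch of recall@k, with toy 2-D vectors and brute-force search standing in for FAISS (all data here is made up for illustration):

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def top_k(query, vectors, k):
    """Return indices of the k vectors closest to the query (exact search)."""
    ranked = sorted(range(len(vectors)), key=lambda i: euclidean(query, vectors[i]))
    return ranked[:k]

def recall_at_k(approx_ids, exact_ids):
    """Fraction of the true nearest neighbors the approximate search recovered."""
    return len(set(approx_ids) & set(exact_ids)) / len(exact_ids)

vectors = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0), (6.0, 5.0)]
query = (0.1, 0.1)
exact = top_k(query, vectors, k=3)   # ground truth: [0, 1, 2]
approx = [0, 1, 4]                   # pretend ANN output that missed one neighbor
print(recall_at_k(approx, exact))    # 2 of 3 found -> 0.666...
```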
Presented at https://www.meetup.com/open-nlp-meetup/events/282678520/
YouTube: https://www.youtube.com/watch?v=RM0uuMiqO8s&t=179s
Follow Vector Podcast to stay up to date on this topic: https://www.youtube.com/@VectorPodcast
Yurii Pashchenko: Zero-shot learning capabilities of CLIP model from OpenAILviv Startup Club
Yurii Pashchenko: Zero-shot learning capabilities of CLIP model from OpenAI
AI & BigData Online Day 2021
Website - https://aiconf.com.ua/
Youtube - https://www.youtube.com/startuplviv
FB - https://www.facebook.com/aiconf
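At its core, CLIP's zero-shot classification embeds an image and a set of text prompts into a shared space and picks the prompt with the highest cosine similarity. A stdlib sketch of that final step, with toy embeddings standing in for CLIP's encoders (the vectors below are invented for illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def zero_shot_classify(image_emb, label_embs):
    """Pick the label whose text embedding is most similar to the image embedding."""
    return max(label_embs, key=lambda label: cosine(image_emb, label_embs[label]))

# Toy embeddings standing in for CLIP's image and text encoders
image_emb = [0.9, 0.1, 0.2]
label_embs = {
    "a photo of a cat": [0.8, 0.2, 0.1],
    "a photo of a dog": [0.1, 0.9, 0.3],
}
print(zero_shot_classify(image_emb, label_embs))  # -> "a photo of a cat"
```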
The Transformer is an established architecture in natural language processing that combines a self-attention framework with a deep learning approach.
This presentation was delivered under the mentorship of Mr. Mukunthan Tharmakulasingam (University of Surrey, UK), as a part of the ScholarX program from Sustainable Education Foundation.
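The self-attention framework mentioned above can be sketched in plain Python for a single head (toy token vectors, no learned projections; real implementations use tensor libraries and multiple heads):

```python
import math

def softmax(xs):
    """Numerically stable softmax; the result always sums to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(queries, keys, values):
    """Scaled dot-product attention over a toy sequence (one head)."""
    d = len(keys[0])
    out = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        # Each output is a weighted mix of the value vectors
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

seq = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # toy token embeddings
result = self_attention(seq, seq, seq)
print(result[0])
```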
Build an LLM-powered application using LangChain.pdfAnastasiaSteele10
LangChain is an advanced framework that allows developers to create language model-powered applications. It provides a set of tools, components, and interfaces that make building LLM-based applications easier. With LangChain, managing interactions with language models, chaining together various components, and integrating resources like APIs and databases is a breeze. The platform includes a set of APIs that can be integrated into applications, allowing developers to add language processing capabilities without having to start from scratch.
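The chaining idea at the heart of the framework is easy to illustrate in plain Python. This is a sketch of the concept only, not the actual LangChain API; `fake_llm` is a hypothetical stand-in for a real model call:

```python
def prompt_template(template):
    """Return a step that fills the template with its input dict."""
    return lambda inputs: template.format(**inputs)

def fake_llm(prompt):
    """Stand-in for a real language model call."""
    return f"[model answer to: {prompt}]"

def chain(*steps):
    """Compose steps so each one's output feeds the next."""
    def run(inputs):
        result = inputs
        for step in steps:
            result = step(result)
        return result
    return run

qa_chain = chain(prompt_template("Answer briefly: {question}"), fake_llm)
print(qa_chain({"question": "What is LangChain?"}))
```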
End to end Machine Learning using Kubeflow - Build, Train, Deploy and ManageAnimesh Singh
With the sheer breadth of functionality that needs to be addressed in the Machine Learning world around building, training, serving and managing models, getting it done in a consistent, composable, portable, and scalable manner is hard. The Kubernetes framework is well suited to address these issues, which is why it's a great foundation for deploying ML workloads. Kubeflow is designed to take advantage of these benefits. In this talk, we address how to make it easy for everyone to develop, deploy, and manage portable, scalable ML everywhere, and to support the full Machine Learning lifecycle using open source technologies like Kubeflow, TensorFlow, PyTorch, Tekton, Knative, Istio and others. We discuss how to enable distributed training of models, model serving, canary rollouts, drift detection, model explainability, metadata management, pipelines and more. Additionally, we discuss Watson productization in progress based on Kubeflow Pipelines and Tekton, and point to Kubeflow Dojo materials and follow-on workshops.
Presenting the landscape of AI/ML in 2023 by introducing a quick summary of the last 10 years of its progress, current situation, and looking at things happening behind the scene.
This material was prepared for a lab seminar on the "Transformer", the architecture underlying recent NLP × deep learning research. Care was taken to cite references accurately; please point out any errors.
MLOps (a compound of “machine learning” and “operations”) is a practice for collaboration and communication between data scientists and operations professionals to help manage the production machine learning lifecycle. Similar to the DevOps term in the software development world, MLOps looks to increase automation and improve the quality of production ML while also focusing on business and regulatory requirements. MLOps applies to the entire ML lifecycle - from integrating with model generation (software development lifecycle, continuous integration/continuous delivery), orchestration, and deployment, to health, diagnostics, governance, and business metrics.
To watch the full presentation click here: https://info.cnvrg.io/mlopsformachinelearning
In this webinar, we'll discuss core practices in MLOps that help data science teams scale to the enterprise level. You'll learn the primary functions of MLOps and which tasks can accelerate your team's machine learning pipeline. Join a discussion with cnvrg.io Solutions Architect Aaron Schneider, and learn how teams use MLOps for more productive machine learning workflows.
- Reduce friction between science and engineering
- Deploy your models to production faster
- Health, diagnostics and governance of ML models
- Kubernetes as a core platform for MLOps
- Support advanced use-cases like continual learning with MLOps
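One of the practices listed above, monitoring model health, often starts with drift detection: comparing a live feature distribution against the training baseline. A toy mean-shift check in stdlib Python (production systems typically use more robust tests such as PSI or Kolmogorov-Smirnov):

```python
import statistics

def drifted(baseline, live, threshold=2.0):
    """Flag drift when the live mean moves more than `threshold` baseline stdevs."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(live) - mu) > threshold * sigma

baseline = [10.0, 11.0, 9.5, 10.5, 10.2]
print(drifted(baseline, [10.1, 10.4, 9.9]))   # similar data -> False
print(drifted(baseline, [25.0, 26.5, 24.8]))  # shifted data -> True
```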
The Steep Forces Driving Cloud ComputingCraig Rispin
Craig Rispin, Business Futurist presentation for executives of Oracle and Telstra on - "The Steep Forces Driving Cloud Computing" on 19th Nov 2009 in Melbourne, Australia.
For more info on Craig Rispin, CSP:
www.FutureTrendsGroup.com
DevOps Done Right The How and Why of Versioning Environment ArtifactsPerforce
If you have ever been tasked with figuring out how to deal with large sets of files or large binaries in a version control system, you’ve probably had problems with performance, reliability and scalability. While there are workarounds that can address some of these issues, the workarounds introduce their own complexity that can be difficult to implement and support.
Gene Kim has talked many times (most recently at PuppetConf 2014) about the importance of not only versioning source code but other assets, too. Version control is more important than ever as more companies embrace DevOps and Continuous Delivery and Deployment.
2019 Performance Monitoring and Management Trends and InsightsOpsRamp
Join 451 Research's Senior Analyst Nancy Gohring and OpsRamp's Vice President of Marketing Darren Cunningham as they discuss the latest trends in IT monitoring and management.
This interactive webinar will review the latest research and feature a live Q&A on what's hot, what's new, and what's next in this dynamic and distributed market. Sponsored by OpsRamp, this webinar will also provide an overview of OpsRamp's service-centric AIOps platform and how OpsRamp customers are controlling the chaos with a new approach to IT operations as a service.
To learn more, visit https://www.opsramp.com/about-opsramp...
Also, follow us on social media channels to learn about product highlights, news, announcements, events, conferences and more -
Twitter - https://www.twitter.com/OpsRamp
LinkedIn - https://www.linkedin.com/company/opsramp
Facebook - https://www.facebook.com/OpsRampHQ/
In the software engineering world, change is the only constant. And in the course of the last decades, the frequency of that change has exploded. What Agile has brought to software teams, DevOps is now bringing to the entire organization. And the results speak for themselves. The DevOps high-performers are killing it. Insane deploy frequencies of features, high reliability of applications, and high productivity of cross-functional teams have amplified the speed at which ideas become a reality.
In parallel, Application Security was doing its own thing and to a large part remained oblivious to all the impressive improvements that were happening in software engineering. Because breaking an application doesn’t need any knowledge of how it was created in the first place.
This talk will cover anti-patterns that are preventing application security from being adopted by development teams, such as:
* Signals versus Noise
* Lost in Translation
* Make it easy
In our recap of the final day of International Supercomputing 2016, we explore how OpenPOWER members are working in the HPC industry and continue the conversation around cognitive computing, deep learning, and machine learning.
Teaching Elephants to Dance (Federal Audience): A Developer's Journey to Digi...Burr Sutter
We can be brilliant developers, but we won’t succeed—and won’t lead our organizations to succeed—without a new perspective (if you will) and new assumptions about the components of the “technology ecosystem” that are fundamentally critical to our success. This includes the operators, QA team, DBAs, security folks, and even the pure business contingent—in most cases, each of these individuals and groups plays a critical role in the success of what we create and give birth to as developers. What we do in isolation might be genius, but if we insulate ourselves—especially with arrogance—from these colleagues, neither our code nor our organizations will realize their full potential, and most will fail. The bottom line is that our old ways are no longer viable, and as the elite within our industry, we will be the leaders and heroes who discard old assumptions and adopt a new perspective in this exciting journey to digital transformation—where the impossible can become reality.
Clarisse Hedglin from IBM presented this as part of a three-day international summit. She shared scenarios AI can solve for today using the IBM AI infrastructure.
Selected Topics
Modern Artificial Intelligence 1980s-2021 and Beyond
A Vision for the Next Decade of Computing
The Next Decade in AI: Four Steps Toward Robust Artificial Intelligence
Keras and TensorFlow: The Next Five Years
A Vision for the Future of ML Frameworks
AI Implementation at Scale: Lessons from the Front Lines
A Future with Self-Driving Vehicles
Advances in Renewable Energy: Enabling Our Decarbonized Energy Future with Technology Innovations and Smart Operations
Accelerating Health Care at Bayer with Science@Scale and Federated Learning
Large-Scale Deep Learning Recommendation Models at Facebook
Is AI at the Edge the Killer App for 5G?
Deep Learning for Anomaly Detection
From Storytelling to StoryLiving: A Vision for the Future of Immersive Entertainment
A New Era in Virtual Cinematography
Digital Transformation Is Here: Augmenting Human Capacity with Exponential Compute
Rethinking Drug Discovery in the Era of Digital Biology
Representation Learning for Autonomous Robots
Architecting the Secure Accelerated Data Center of the Future
Convergence of AI and HPC to Solve Grand Challenge Science Problems
Presenting US HHS Artificial Intelligence Strategy 2021: AI Mission and Ambition Commentary by the CAIO
How could OpenAI, a small organization of just 200 employees, manage to shake the foundations of large companies like Google and Meta? Everyone dreams of being a unicorn: razor-sharp focus, high talent density, and rapid innovation. In reality, though, even startups quickly become slow organizations. Why does this happen?
Technology changes and process changes in how people build and manage Internet systems have driven a need for a new approach to monitoring. We talk about why, what and how.
Presentation at Osaka Institute of Technology Technofrontia on Electronics.
Introduction on the latest trends in ICT space. Thoughts about the role of technology and the importance of innovation for Japan economy. Introduction of the the idea of Global Democracy as a product of Information Revolution.
Monitoring of GPU Usage with Tensorflow Models Using PrometheusDatabricks
Understanding the dynamics of GPU utilization and workloads in containerized systems is critical to creating efficient software systems. We create a set of dashboards to monitor and evaluate GPU performance in the context of TensorFlow. We monitor performance in real time to gain insight into GPU load, GPU memory and temperature metrics in a Kubernetes GPU enabled system. Visualizing TensorFlow training job metrics in real time using Prometheus allows us to tune and optimize GPU usage. Also, because Tensor flow jobs can have both GPU and CPU implementations it is useful to view detailed real time performance data from each implementation and choose the best implementation. To illustrate our system, we will show a live demo gathering and visualizing GPU metrics on a GPU enabled Kubernetes cluster with Prometheus and Grafana.
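Under the hood, Prometheus scrapes plain-text metrics over HTTP in its exposition format. A stdlib sketch that renders hypothetical GPU gauges in that format (a real deployment would use a client library such as prometheus_client or a dedicated exporter; the metric and label names below are invented):

```python
def render_gauge(name, help_text, samples):
    """Render one gauge in Prometheus text exposition format."""
    lines = [f"# HELP {name} {help_text}", f"# TYPE {name} gauge"]
    for labels, value in samples:
        label_str = ",".join(f'{k}="{v}"' for k, v in labels.items())
        lines.append(f"{name}{{{label_str}}} {value}")
    return "\n".join(lines)

metrics = render_gauge(
    "gpu_utilization_percent",
    "GPU utilization reported per device.",
    [({"gpu": "0"}, 87), ({"gpu": "1"}, 34)],
)
print(metrics)
```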
Starting your AI/ML project right (May 2020)Julien SIMON
In this talk, we’ll see how you can put your AI/ML project on the right track from the get-go. Applying common sense and proven best practices, we’ll discuss skills, tools, methods, and more. We’ll also look at several real-life projects built by AWS customers in different industries and startups.
Building Machine Learning Inference Pipelines at Scale (July 2019)Julien SIMON
Talk at OSCON, Portland, 18/07/2019
Real-life Machine Learning applications require more than a single model. Data may need pre-processing: normalization, feature engineering, dimensionality reduction, etc. Predictions may need post-processing: filtering, sorting, combining, etc.
Our goal: build scalable ML pipelines with open source (Spark, Scikit-learn, XGBoost) and managed services (Amazon EMR, AWS Glue, Amazon SageMaker)
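The pre-process, predict, post-process pattern described above can be sketched as composed stages. The normalizer and model stub here are toy placeholders; real pipelines would wire equivalent stages into Spark, Scikit-learn, or SageMaker:

```python
def normalize(values):
    """Pre-processing: scale values into [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def predict(values):
    """Model stub: score each value (a real model would run here)."""
    return [round(v * 100) for v in values]

def top_scores(scores, k=2):
    """Post-processing: keep the k highest scores, sorted descending."""
    return sorted(scores, reverse=True)[:k]

def pipeline(raw):
    """Chain the three stages end to end."""
    return top_scores(predict(normalize(raw)))

print(pipeline([3.0, 9.0, 6.0, 12.0]))  # -> [100, 67]
```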
Optimize your Machine Learning Workloads on AWS (July 2019)Julien SIMON
Talk at Floor 28, Tel Aviv.
Infrastructure, tips to speed up training, hyperparameter optimization, model compilation, Amazon SageMaker Neo, cost optimization, Amazon Elastic Inference
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Generating a custom Ruby SDK for your web service or Rails API using Smithyg2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
DevOps and Testing slides at DASA ConnectKari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We closed with a lovely workshop in which participants explored different ways to think about quality and testing across the DevOps infinity loop.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. Fostering a culture of innovation, however, takes much work: vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report was prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence gathering facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on countries – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
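JMeter ships samples to InfluxDB using the line protocol, a simple text format that is easy to sketch. The measurement, tag, and field names below are hypothetical, chosen only to illustrate the format:

```python
def line_protocol(measurement, tags, fields, timestamp_ns):
    """Format one InfluxDB line-protocol point: measurement,tags fields timestamp."""
    tag_str = ",".join(f"{k}={v}" for k, v in tags.items())
    field_str = ",".join(f"{k}={v}" for k, v in fields.items())
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

point = line_protocol(
    "jmeter",
    {"transaction": "login"},
    {"avg_ms": 123.4, "count": 42},
    1622548800000000000,
)
print(point)  # jmeter,transaction=login avg_ms=123.4,count=42 1622548800000000000
```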
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
JMeter webinar - integration with InfluxDB and Grafana
Reinventing Deep Learning with Hugging Face Transformers
1. Reinventing Deep Learning
with Hugging Face Transformers
Julien Simon, Chief Evangelist, Hugging Face
julsimon@huggingface.co
2. 2022: Transformers are eating Deep Learning
BERT
ROBERTA
GPT-2
GPT-3
VISION TRANSFORMER
WAV2VEC2
CLIP
SEGFORMER
STABLE DIFFUSION
"Transformers are emerging as a general-purpose architecture for ML"
https://www.stateof.ai/
RNN and CNN usage down, Transformers usage up!
https://www.kaggle.com/kaggle-survey-2021
RNN
CNN
LSTM
5. Deep Learning: how it's going
87% of data science projects never make it into production
https://venturebeat.com/2019/07/19/why-do-87-of-data-science-projects-never-make-it-into-production
Only 25% of companies report widespread adoption
https://www.pwc.com/us/en/tech-effect/ai-analytics/ai-predictions.html
7. Hugging Face: one of the fastest-growing open source projects
https://github.com/huggingface/transformers/
8. The Hugging Face Hub: The Github of Machine Learning
73,000+ models
10,000+ datasets
10,000+ organizations
100,000+ users
1M+ downloads daily
https://huggingface.co
10. Machine Learning Hardware
• A new generation of chips designed for ML workloads
• Faster training
• 6-hour training 2x faster? Several iterations in a single day!
• Faster prediction
• Lower latency, increased throughput
• Better cost-performance ratio
• Hugging Face partners with hardware innovators
https://github.com/huggingface/optimum
11. Developer Tools
[Diagram: the path from hub to production — 73,000+ models and 10,000+ datasets on the hub (Models, Datasets); no-code AutoML; HW-accelerated training (Amazon SageMaker, Accelerate, Optimum); HW-accelerated inference (Optimum); HW-accelerated managed API (Hugging Face Endpoints for Azure); hosted ML applications; Transformers; model in production]
13. Summing things up
• ML is complicated because we love to make it complicated
• Make sure your teams focus on the right things
1. Define business goals and KPIs
2. Save time with pre-trained models and transfer learning
3. Don't reinvent tools, platform, infrastructure
4. Collaborate on models and datasets
14. Learn more about Hugging Face and Transformers
https://huggingface.co/tasks
https://huggingface.co/course
https://github.com/huggingface
Hugging Face Endpoints launch event (27/9): https://shorturl.at/gJQSX
Learn more about our solutions
https://huggingface.co/support - Expert Acceleration Program, from modeling to production
https://huggingface.co/platform - Private Hub, running on your infrastructure (on prem or cloud)
Getting started
Stay in touch!
@julsimon
julsimon.medium.com
youtube.com/c/juliensimonfr