This presentation shows how to introduce the CQRS pattern into an existing application, step by step, without breaking changes or holding up development.
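The core of the pattern can be shown in a minimal sketch (the class and event names here are my own illustration, not from the presentation): commands go through a write model that validates and records events, while queries hit a separate, denormalized read model updated from those events.

```python
from dataclasses import dataclass

# Write side: commands mutate state and emit events.
@dataclass
class CreateOrder:
    order_id: str
    total: float

class OrderWriteModel:
    def __init__(self) -> None:
        self.events: list[tuple[str, str, float]] = []

    def handle(self, cmd: CreateOrder) -> None:
        # Validate, then record an event for the read side to consume.
        if cmd.total < 0:
            raise ValueError("total must be non-negative")
        self.events.append(("OrderCreated", cmd.order_id, cmd.total))

# Read side: a denormalized view, updated only from events.
class OrderReadModel:
    def __init__(self) -> None:
        self.by_id: dict[str, float] = {}

    def apply(self, event: tuple[str, str, float]) -> None:
        kind, order_id, total = event
        if kind == "OrderCreated":
            self.by_id[order_id] = total

write = OrderWriteModel()
read = OrderReadModel()
write.handle(CreateOrder("o-1", 99.5))
for e in write.events:
    read.apply(e)
print(read.by_id["o-1"])  # 99.5
```

Because the two models only share events, the read side can be added next to an existing write path without breaking it, which is the incremental migration the talk describes.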
A Context Map visualizes your system: cluttered models, too much or too little communication, and dependencies on other systems are just some of the insights you'll gain once you start using one.
OpenAI’s GPT-3 Language Model - guest Steve Omohundro (Numenta)
In this research meeting, guest Stephen Omohundro gave a fascinating talk on GPT-3, the new massive OpenAI Natural Language Processing model. He reviewed the network architecture, training process, and results in the context of past work. There was extensive discussion on the implications for NLP and for Machine Intelligence / AGI.
Link to GPT-3 paper: https://arxiv.org/abs/2005.14165
Link to YouTube recording of Steve's talk: https://youtu.be/0ZVOmBp29E0
We hear a lot about microservices vs. SOA, but in reality most companies have both. In this session, learn how you can introduce microservices into your existing infrastructure and where microservices make the most sense. Topics include how API management and the integration platform help you introduce microservices without the anarchy. See how products such as Oracle API Platform Cloud Service and Oracle Service Bus can support traditional integration styles as well as microservices.
Presented by Luis Weir, Principal and Oracle ACE Director, Capgemini, at Oracle OpenWorld 2016.
AI, or artificial intelligence, is powering a massive shift in how engineers, scientists, and programmers develop and improve products and services. 85% of executives expect to gain or strengthen their competitive advantage through the use of AI, but is AI really poised to transform your research, products, or business?
Learn how an AI system can be designed to perceive its environment, make decisions, and take action. Get an overview of AI for engineers and discover how it fits into an engineering workflow. You will also learn how MATLAB and Simulink® are giving engineers and scientists AI capabilities that were once available only to highly specialized software developers and data scientists.
AZConf 2023 - Considerations for LLMOps: Running LLMs in Production (Saradindu Sengupta)
With the recent explosion in development of and interest in large language, vision, and speech models, it has become apparent that running large models in production will be a key driver of enterprise adoption of ML. Traditional MLOps, i.e. running machine learning models in production, already has many variables to address, from data integrity and data drift to model optimization. Running a large model (language or vision) in production while keeping business requirements in mind is different altogether. In this talk, I will explain a general framework for LLMOps and certain considerations when designing a system for inference on a large model.
The talk is organized into these sub-topics:
1. Model optimization
2. Model fine-tuning
3. Model editing
4. Model serving and deployment
5. Model metrics monitoring
6. Embedding and artifact management
For each sub-topic, I will also briefly cover the current open-source tool sets, so that tool-chain selection becomes a bit easier.
The SOLID principles were introduced by Robert C. Martin in his 2000 paper “Design Principles and Design Patterns”.
SOLID => Single Responsibility, Open/Closed, Liskov Substitution, Interface Segregation, Dependency Inversion.
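As an illustration of the last of these, Dependency Inversion, here is a minimal Python sketch (the class names are my own): high-level code depends on an abstraction, and the concrete implementations depend on that same abstraction rather than the other way around.

```python
from abc import ABC, abstractmethod

# The abstraction that high-level code depends on.
class Notifier(ABC):
    @abstractmethod
    def send(self, message: str) -> str: ...

# Low-level details implement the abstraction.
class EmailNotifier(Notifier):
    def send(self, message: str) -> str:
        return f"email: {message}"

class SmsNotifier(Notifier):
    def send(self, message: str) -> str:
        return f"sms: {message}"

# High-level policy: knows only the Notifier interface,
# so transports can be swapped without touching this class.
class OrderService:
    def __init__(self, notifier: Notifier) -> None:
        self.notifier = notifier

    def confirm(self, order_id: str) -> str:
        return self.notifier.send(f"order {order_id} confirmed")

print(OrderService(EmailNotifier()).confirm("42"))  # email: order 42 confirmed
```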
Understanding LLMOps - Large Language Model Operations (My Gen Tec)
The GPT (Generative Pre-trained Transformer) models created by OpenAI and the BERT (Bidirectional Encoder Representations from Transformers) models created by Google are two of the most well-known large language models operated with LLMOps practices. These models have produced cutting-edge results in a variety of applications, including text summarization, chatbots, and language translation.
Shparkley: Scaling Shapley with Apache Spark (Databricks)
The Shapley algorithm is an interpretation algorithm well recognized by both industry and academia. However, given its exponential runtime complexity, existing implementations take a very long time to generate feature contributions for a single instance, so it has found limited practical use in industry.
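The exponential cost comes from averaging each feature's marginal contribution over all feature orderings. A common workaround, sketched below in plain Python (a toy Monte Carlo sampler, not the Shparkley implementation), is to sample random permutations instead of enumerating them:

```python
import random

def shapley_sampling(model, x, baseline, n_samples=2000, seed=0):
    """Approximate Shapley values by averaging each feature's marginal
    contribution over randomly sampled feature orderings."""
    rng = random.Random(seed)
    n = len(x)
    phi = [0.0] * n
    for _ in range(n_samples):
        order = list(range(n))
        rng.shuffle(order)
        z = list(baseline)          # start from the baseline instance
        prev = model(z)
        for i in order:             # switch features on one by one
            z[i] = x[i]
            cur = model(z)
            phi[i] += cur - prev    # marginal contribution of feature i
            prev = cur
    return [p / n_samples for p in phi]

# Toy linear model: Shapley values equal coefficient * (x - baseline).
model = lambda v: 2.0 * v[0] + 3.0 * v[1]
print(shapley_sampling(model, x=[1.0, 1.0], baseline=[0.0, 0.0]))  # [2.0, 3.0]
```

Shparkley's contribution, as the title suggests, is distributing this kind of sampling across a Spark cluster so that it scales to real workloads.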
A Beginner's Guide to Large Language Models (Ajitesh Kumar)
Large Language Models (LLMs) are a type of deep learning model designed to process and understand vast amounts of natural language data. Built on neural network architectures, particularly the transformer architecture, LLMs have revolutionized the field of natural language processing. In this presentation, we will explore the world of LLMs, their significance, and the different types of LLMs based on the transformer architecture, such as autoregressive language models (e.g., GPT), autoencoding language models (e.g., BERT), and combined models (e.g., T5). Join us as we delve into the world of LLMs and discover their potential in shaping the future of natural language processing.
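The architectural difference between the autoregressive and autoencoding families comes down largely to the attention mask: GPT-style models hide future tokens from each position, while BERT-style models attend in both directions. A tiny sketch (pure Python, illustrative only):

```python
def attention_mask(seq_len, causal):
    """Return a visibility matrix: entry [i][j] is 1 if position j
    is visible to position i under the given masking scheme."""
    return [[1 if (j <= i or not causal) else 0 for j in range(seq_len)]
            for i in range(seq_len)]

# Autoregressive (GPT-style): each token sees only itself and the past.
print(attention_mask(3, causal=True))   # [[1, 0, 0], [1, 1, 0], [1, 1, 1]]
# Autoencoding (BERT-style): every token sees the whole sequence.
print(attention_mask(3, causal=False))  # [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
```

The causal mask is what lets GPT-style models generate text left to right, while the bidirectional mask suits BERT-style models for understanding tasks; encoder-decoder models like T5 combine both.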
Generative AI, Game Development and the Future of Civilization (Jon Radoff)
This is my talk from Gamescom Congress in 2023. The topic is the use of generative AI in game development, but the context is much broader: the next stage of human civilization, where our minds and our creativity are extended through AI tools and agents. My talk is not only about creativity but about empowerment: tools that act upon our goals and reflect our individuality.
This is a somewhat condensed and updated version of a lecture I presented at the MIT Media Lab course on Metaverse (MAS.S61).
Decoupling your application using Symfony Messenger and events (hmmonteiro)
Web applications get more complex over time.
We start with a simple application and build a business on top of it. We start hiring people, and all of a sudden the code no longer speaks the same language as the business. It becomes harder to change.
Strategies like Domain-Driven Design show how to put business rules into your code and publish domain events that another service can consume asynchronously.
For example, when a booking is made, we want to send an invoice, reacting to a "booking was confirmed" domain event.
Symfony Messenger helps us decouple the code with domain events. It allows us to publish and react to those domain events, no matter where we publish them. We can even create alarms for specific events that are important to our business, and define specific retry strategies.
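Symfony Messenger itself is a PHP component, but the underlying idea of routing domain events to handlers can be sketched in a few lines of Python (all names here are invented for illustration):

```python
from dataclasses import dataclass

# A domain event: a fact about something that happened in the business.
@dataclass
class BookingConfirmed:
    booking_id: str
    amount: float

class MessageBus:
    """Minimal in-memory dispatcher, loosely in the spirit of
    Symfony Messenger's message/handler model."""
    def __init__(self):
        self.handlers = {}

    def subscribe(self, event_type, handler):
        self.handlers.setdefault(event_type, []).append(handler)

    def dispatch(self, event):
        # Every handler registered for this event type reacts to it.
        for handler in self.handlers.get(type(event), []):
            handler(event)

invoices = []

def send_invoice(event: BookingConfirmed):
    # In a real system this handler would run asynchronously via a queue.
    invoices.append(f"invoice for booking {event.booking_id}: {event.amount}")

bus = MessageBus()
bus.subscribe(BookingConfirmed, send_invoice)
bus.dispatch(BookingConfirmed("b-7", 120.0))
print(invoices[0])  # invoice for booking b-7: 120.0
```

The booking code never calls the invoicing code directly; it only publishes the event, which is what makes the two sides independently changeable.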
Build an LLM-powered application using LangChain (StephenAmell4)
LangChain is an advanced framework that allows developers to create language model-powered applications. It provides a set of tools, components, and interfaces that make building LLM-based applications easier. With LangChain, managing interactions with language models, chaining together various components, and integrating resources like APIs and databases is a breeze. The platform includes a set of APIs that can be integrated into applications, allowing developers to add language processing capabilities without having to start from scratch.
I will talk about Generative AI and its applications to 2D art production in the gaming industry. We will explore the Stable Diffusion neural net and concepts such as Prompt Engineering, Image-to-Image, ControlNet, and Dreambooth and how they can enhance game development. Moreover, we will compare the pros and cons of Stable Diffusion with Midjourney. As a result, you will better understand the potential benefits of incorporating generative AI into your game development workflow.
An introduction to computer vision with Hugging Face (Julien SIMON)
In this code-level talk, Julien will show you how to quickly build and deploy computer vision applications based on Transformer models. Along the way, you'll learn about the portfolio of open source and commercial Hugging Face solutions, and how they can help you deliver high-quality solutions faster than ever before.
This presentation introduces Microsoft's generative AI application tools and provides examples of their use in the medical field. It is presented by Raymond Tsai, a Principal Technical Program Manager in the Azure HPC & AI Engineering group.
For EFL teachers' training in China.
Thanks to Andy for the English version of the presentation; the Chinese version was translated by John Wu from Shaoguan, China (johnwuchina@gmail.com).
1. NVIDIA GTC Taiwan 2017
Robot Arms with Integrated Smart Vision Systems
Techman Robot, 黃鐘賢 (David)
2017/10/26
A leading company in collaborative robot and vision technologies.
TECHMAN ROBOT INC.
2. Confidential
Outline
- Introduction to Techman Robot – Smart, Simple and Safe
- Be Smart at Edge – robot arms with hand-eye-force coordination
- Be Smart among Robots – multi-arm collaboration
- Be Smart at Cloud – smart factory
- Be Smart in the Future – deep-learning-based robot arm applications
17. Smart
TM5 Vision System
- Built-in vision system; no need to worry about integrating complex vision components
- Automatic calibration of vision-system parameters
- Object detection, image enhancement, barcode reading
- Automate tasks through a simple user interface
- Core recognition and localization modules accelerated by GPU
- Deep-learning-based text recognition (OCR) module
24. Be Smart at Edge – robot arms with hand-eye-force coordination
Intruder detection is running on the GPU at 30 frames/s
30. AI is coming!
Extension of vision technology into deep learning, by acquisition/investment partner or in-house R&D:
Category | Company | Deep-learning adoption
Machine Vision | Cognex | ViDi
Machine Vision | Halcon | In-house R&D (deep-learning OCR introduced in 2016)
Industrial Robots | Fanuc | Preferred Network
Industrial Robots | ABB | Vicarious
Industrial Robots | Kuka | Huawei
Taiwanese vendors (TAIROS 2017) | Solomon (所羅門) | In-house R&D (Inspection, Bin Picking)
Taiwanese vendors (TAIROS 2017) | Delta (台達) | In-house R&D (Inspection)
- Cognex acquired ViDi, a machine-vision deep-learning company (2017/04)
- Fanuc invested in deep-learning technology developer Preferred Network (PFN)
- ABB invested in AI startup Vicarious
- Kuka began a deep-learning collaboration with Huawei
31. Is the paradigm shifting? From sense-plan-act to AI
- Google: deep learning for robots (early 2016)
- Over 800,000 grasp attempts (3,000 robot-hours of practice)
Source: https://research.googleblog.com/2016/03/deep-learning-for-robots-learning-from.html
35. Is AI ready for commercialization?
- The traditional approach:
  1. Camera and arm calibration
  2. 3D object modeling, or import from a CAD file
  3. Recognition parameter tuning
  4. Teaching grasp poses and bin collision-avoidance poses
- High product variability; must be matched with the gripper or suction-cup design
- An object may have anywhere from one to several graspable poses
- Symmetric workpieces
- The point cloud may not image perfectly
- Tied to arm control
- Bin collision avoidance (the deeper the bin, the harder)
- Trajectory planning to the grasp point
- End customers cannot tolerate long training and learning times
- Simple teaching, with a gradually improving success rate
- HI (Human Intelligence) + AI
36. Smart 2D/3D Sensor
AI-powered Robot
2D/3D Vision
• Object detection
• Pose estimation
• Inspection
Hand/Eye/Force
• Force control
• Path planning
• Grasping point identification
• Task scheduling
Zero-shot Learning
• Detect and grasp unknown objects
Simulation
• Training/learning in a virtual world
• Knowledge transfer
Cooperation among neural networks
* Inspired by Prof. Chun-Yi Lee, NTHU