ROS 2 AI Integration Working Group 1: ALMA, SustainML & ROS 2 use case (eProsima)
The new ROS 2 AI Integration Working Group is focused on enabling Machine Learning technologies for ROS 2.
In this presentation you'll find:
- ALMA: the Human Centric Algebraic Machine Learning project
- SustainML
- Enabling ML technologies for ROS 2 robots with Vulcanexus
Advanced Streaming Analytics with Apache Flink and Apache Kafka, Stephan Ewen (Confluent)
Flink and Kafka are popular components to build an open source stream processing infrastructure. We present how Flink integrates with Kafka to provide a platform with a unique feature set that matches the challenging requirements of advanced stream processing applications. In particular, we will dive into the following points:
Flink’s support for event-time processing, how it handles out-of-order streams, and how it can perform analytics on historical and real-time streams served from Kafka’s persistent log using the same code. We present Flink’s windowing mechanism, which supports time-, count-, and session-based windows, and intermixing event-time and processing-time semantics in one program.
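The event-time windowing behavior described above can be sketched in plain Python. This is a conceptual illustration of tumbling event-time windows with a watermark tolerating out-of-order arrival, not Flink's actual API; the function name and parameters are our own:

```python
from collections import defaultdict

def tumbling_event_time_windows(events, size_ms, watermark_lag_ms):
    """Assign (timestamp_ms, value) events to fixed-size event-time windows,
    tolerating out-of-order arrival up to watermark_lag_ms."""
    windows = defaultdict(list)   # window start -> buffered values
    watermark = float("-inf")     # highest timestamp seen, minus allowed lag
    results = {}
    for ts, value in events:
        watermark = max(watermark, ts - watermark_lag_ms)
        start = (ts // size_ms) * size_ms
        windows[start].append(value)
        # fire every window whose end has fallen behind the watermark
        for w_start in sorted(windows):
            if w_start + size_ms <= watermark:
                results[w_start] = sum(windows.pop(w_start))
    # end of stream: fire whatever remains
    for w_start, values in windows.items():
        results[w_start] = sum(values)
    return results
```

Note that the event at timestamp 3 below arrives after the event at timestamp 5, yet still lands in the correct window, which is the essence of event-time (as opposed to processing-time) semantics:

```python
tumbling_event_time_windows(
    [(1, 1), (5, 2), (3, 3), (12, 4), (11, 5), (25, 6)],
    size_ms=10, watermark_lag_ms=5)
# -> {0: 6, 10: 9, 20: 6}
```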
How Flink’s checkpointing mechanism integrates with Kafka for fault-tolerance, for consistent stateful applications with exactly-once semantics.
We will discuss “Savepoints”, which allow users to save the state of the streaming program at any point in time. Together with a durable event log like Kafka, savepoints allow users to pause/resume streaming programs, go back to prior states, or switch to different versions of the program, while preserving exactly-once semantics.
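The savepoint idea can be reduced to a small sketch: a savepoint is the pair (operator state, log offset), and because the log is durable and replayable, resuming from that pair neither loses nor double-counts records. This is a conceptual illustration, not Flink's actual checkpointing mechanism, and the names are ours:

```python
def run_pipeline(log, state=None, offset=0):
    """Consume a durable, replayable log from `offset`, maintaining a running
    count per key. The returned (state, offset) pair is the 'savepoint'."""
    state = dict(state or {})
    for record in log[offset:]:
        state[record] = state.get(record, 0) + 1
        offset += 1
    return state, offset

log = ["a", "b", "a", "c", "a"]
# take a savepoint after three records...
state, offset = run_pipeline(log[:3])
# ...later resume (possibly with a new program version) from the savepoint;
# state is exactly as if the stream had been processed once, uninterrupted
state, offset = run_pipeline(log, state, offset)
```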
We explain the techniques behind the combination of low-latency and high-throughput streaming, and how the latency/throughput trade-off can be configured.
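The core of that trade-off is output buffering: larger batches amortize per-record overhead (throughput), while a flush timeout bounds how long a record can sit in a half-full buffer (latency). A minimal sketch of such a buffer, with a hypothetical class name and an explicit clock passed in for testability:

```python
class RecordBuffer:
    """Batch records for throughput, but flush after `timeout_ms` even if the
    buffer is not full, bounding the latency added by batching."""

    def __init__(self, capacity, timeout_ms, now_ms):
        self.capacity, self.timeout_ms = capacity, timeout_ms
        self.buf, self.opened_at = [], now_ms

    def add(self, record, now_ms):
        """Append a record; return a flushed batch when either limit is hit."""
        self.buf.append(record)
        if (len(self.buf) >= self.capacity
                or now_ms - self.opened_at >= self.timeout_ms):
            batch, self.buf, self.opened_at = self.buf, [], now_ms
            return batch
        return None
```

Tuning `capacity` and `timeout_ms` moves the system along the latency/throughput curve: a timeout of zero degenerates to record-at-a-time sending, an infinite timeout to pure size-based batching.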
We will give an outlook on current developments for streaming analytics, such as streaming SQL and complex event processing.
Spark SQL Catalyst Code Optimization using Function Outlining, Kavana Bha... (Databricks)
The Spark SQL Catalyst optimizer, after query plan optimization, compiles the SQL query to Java code. Without code generation, such query expressions would have to be interpreted for each row of data by walking down a tree of nodes. This introduces a large number of branches and virtual function calls that slow down execution. With code generation, a query is collapsed into a single optimized function that eliminates the extra function calls and leverages CPU registers for intermediate data.
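The interpreted-tree-versus-generated-function contrast can be shown in miniature. This is a Python sketch of the general technique, not Catalyst's actual implementation (Catalyst emits Java source); the tree encoding and helper names are ours:

```python
import operator

# Expression tree for (a + b) * 2, evaluated by walking nodes:
tree = ("mul", ("add", ("col", "a"), ("col", "b")), ("lit", 2))

def interpret(node, row):
    """Walk the tree for every row: one dynamic dispatch per node, per row."""
    kind = node[0]
    if kind == "col":
        return row[node[1]]
    if kind == "lit":
        return node[1]
    op = {"add": operator.add, "mul": operator.mul}[kind]
    return op(interpret(node[1], row), interpret(node[2], row))

def compile_expr(node):
    """Collapse the tree into one source expression, compiled once."""
    kind = node[0]
    if kind == "col":
        return f"row[{node[1]!r}]"
    if kind == "lit":
        return repr(node[1])
    sym = {"add": "+", "mul": "*"}[kind]
    return f"({compile_expr(node[1])} {sym} {compile_expr(node[2])})"

# One straight-line function replaces the per-row tree walk:
fast = eval(f"lambda row: {compile_expr(tree)}")
```

Both paths compute the same value; the generated function simply pays the tree-walking cost once, at compile time, instead of once per row.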
This code is then compiled at runtime to Java bytecode using the Janino compiler. This presentation focuses on further Catalyst code generation optimizations that are possible using function outlining. Automatic code generation tools tend to generate huge optimized functions. Large functions that are executed frequently can degrade runtime performance by preventing JVM optimizations such as function inlining. To avoid this, code generation tools should put independent logic into separate functions.
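Function outlining itself can be sketched as a code generator that emits one small helper per independent sub-expression instead of a single monolithic body, keeping each unit small enough for a JIT to inline. A Python sketch of the idea, not Catalyst's implementation; the input is a list of already-generated source fragments:

```python
def generate_outlined(exprs):
    """Emit one small helper function per independent expression, plus a
    driver that calls them, rather than one huge generated body."""
    helpers = [
        f"def _expr{i}(row):\n    return {src}\n"
        for i, src in enumerate(exprs)
    ]
    driver = ("def evaluate(row):\n    return ["
              + ", ".join(f"_expr{i}(row)" for i in range(len(exprs)))
              + "]\n")
    module = "\n".join(helpers) + "\n" + driver
    namespace = {}
    exec(module, namespace)   # compile the generated module once
    return namespace["evaluate"]

evaluate = generate_outlined(["row['a'] + row['b']", "row['a'] * 2"])
```

In the JVM setting the payoff comes from staying under the JIT's method-size thresholds; the structure of the generated code is what this sketch illustrates.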
This presentation will take the audience through Spark Catalyst code generation, how the automatic splitting of large functions into smaller ones was achieved, and the performance benefits associated with it.
How to use Exachk effectively to manage Exadata environments, Sandesh Rao (OGB EMEA)
Exachk is a tool that helps apply best practices to an Exadata machine. This presentation will go through setup, usage, and options, and show how to use Exachk more effectively to be more proactive in fixing issues in an Exadata environment. It covers features such as baselines, the scheduler for ongoing automation, and Collection Manager, an APEX-based interface used to identify common problems, and shows how to set up this dashboard, all for free and in under 30 minutes, to become a rockstar Exadata DBA.
A White Paper on Getting 100% out of Google Drive for Work for Enterprises, Charly Choi
This white paper explains the advantages of Google Drive for Work, the impact it can have on a company's IT infrastructure, and how Google Drive for Work, also known as Google Apps Unlimited, not only provides unlimited storage capacity at a low cost but also meets the overall IT infrastructure requirements that enterprises need, including very important features that are not generally well known.
This white paper is also intended to serve as a guide that makes decision-making easier for companies considering adopting Google Drive for Work.