1. The document describes an SDK called Atom that uses Redis Streams to enable communication between microservices.
2. Atom allows developers to break applications into reusable microservices that interact by publishing to and subscribing to Redis Streams.
3. Redis Streams provides an efficient way to handle pub/sub messaging between microservices as well as asynchronous request/response capabilities through its ability to add and query entries in streams.
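The Atom SDK itself is not shown in this document, so here is a minimal in-memory sketch of the two Redis Streams patterns described above: publish/subscribe and request/response correlated by entry ID. The `Stream` class and all element and field names are illustrative stand-ins; a real deployment would use redis-py's `xadd`/`xread` against a Redis server.

```python
# Minimal in-memory sketch of the Redis Streams patterns the Atom SDK builds on.
# A real system would use redis-py's xadd/xread against a Redis server; the
# Stream class and field names here are illustrative stand-ins.
import itertools

class Stream:
    """Append-only log with auto-generated entry IDs, like a Redis Stream."""
    def __init__(self):
        self.entries = []               # list of (entry_id, fields)
        self._seq = itertools.count(1)

    def xadd(self, fields):
        entry_id = f"0-{next(self._seq)}"
        self.entries.append((entry_id, dict(fields)))
        return entry_id

    def xread(self, last_id="0-0"):
        """Return entries added after last_id (blocking reads omitted)."""
        return [(eid, f) for eid, f in self.entries if eid > last_id]

# One element publishes, another consumes: the pub/sub half.
temps = Stream()
temps.xadd({"sensor": "cpu", "value": "41.5"})
temps.xadd({"sensor": "cpu", "value": "42.0"})
seen = temps.xread("0-0")
assert len(seen) == 2

# Request/response: a command stream in, a response stream out,
# correlated by the request's entry id.
requests, responses = Stream(), Stream()
req_id = requests.xadd({"command": "add", "args": "2,3"})
for eid, fields in requests.xread():
    if fields["command"] == "add":
        a, b = map(int, fields["args"].split(","))
        responses.xadd({"request_id": eid, "result": str(a + b)})

result = [f for _, f in responses.xread() if f["request_id"] == req_id]
print(result[0]["result"])  # prints "5"
```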
Terraform in production - experiences, best practices and deep dive - Piotr Ki... (PROIDEA)
In my presentation I would like to share my experiences of working with Terraform in various infra projects (ECS/Kops/core-infra types). I'm going to share what's "common sense" in deploying projects with Terraform, with several different approaches (Should I use a module? Should I write my own? How do I structure a repo with code? Terraform in Terraform (the kops example)?)
The document provides an overview of the Aerospike architecture, including the client, cluster, storage, indexes, RAM, flash storage, and cross datacenter replication (XDR). It describes Aerospike's goals of handling high transaction volumes at low latency while scaling linearly. The key aspects of the architecture are the smart client that routes to data in one hop, shared-nothing nodes, single row transactions, smart cluster management, and XDR for data replication across datacenters.
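The "routes to data in one hop" claim can be illustrated with a small sketch: the client hashes each key to a partition and resolves the owning node from a locally cached partition map, so no proxy hop is needed. Aerospike itself uses a RIPEMD-160 digest and 4096 partitions; the hash function, partition count, and node names below are simplified assumptions.

```python
# Sketch of the "smart client" idea: hash a key to a partition, look the
# owning node up in a locally cached partition map, reach it in one hop.
# Aerospike uses RIPEMD-160 and 4096 partitions; these values are simplified.
import hashlib

NUM_PARTITIONS = 16
nodes = ["node-a", "node-b", "node-c"]

# Partition map distributed to clients by the cluster (round-robin here).
partition_map = {p: nodes[p % len(nodes)] for p in range(NUM_PARTITIONS)}

def owning_node(key: str) -> str:
    digest = hashlib.sha1(key.encode()).digest()
    partition = int.from_bytes(digest[:4], "big") % NUM_PARTITIONS
    return partition_map[partition]

# The same key always routes to the same node.
assert owning_node("user:42") == owning_node("user:42")
print(owning_node("user:42"))
```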
ECMAScript is the name of the international standard that defines JavaScript. ES6 → ECMAScript 2015. The latest ECMAScript version is ES7, which is ECMAScript 2016.
Basically, ES6 is a superset of ES5.
This document summarizes a presentation on developing with Apache NiFi. It discusses NiFi's REST API for programmatic access, the NiFi developer guide for building custom processors, and tips for contributing to the NiFi project through the GitHub pull request process. Key aspects of the NiFi architecture like its repositories and FlowFile lifecycle are also overviewed.
The document discusses the internals and architecture of the Nginx web server. It covers Nginx's event-driven and non-blocking architecture, its use of memory pools and data structures like radix trees, how it processes HTTP requests through different phases, and how modules and extensions can be developed for Nginx. The document also provides an overview of Nginx's configuration, caching, and load balancing capabilities.
Oak, the Architecture of the new Repository (Michael Dürig)
Apache Jackrabbit Oak is a new JCR implementation with a completely new architecture. Based on concepts like eventual consistency and multi-version concurrency control, and borrowing ideas from distributed version control systems and cloud-scale databases, the Oak architecture is a major leap ahead for Jackrabbit. This presentation describes the Oak architecture and shows what it means for the scalability and performance of modern content applications. Changes to existing Jackrabbit functionality are described and the migration process is explained.
TypeScript is a typed superset of JavaScript that compiles to plain JavaScript. It allows for large scale JavaScript application development with features like classes, inheritance, modules, generators, async/await, and decorators. TypeScript introduces strong typing, interfaces, generics, and other design-time features not available in JavaScript to reduce complexity in large codebases. It supports the future of JavaScript by allowing the use of ES6 and ES7 features today through compiler options. Getting started is easy using the TypeScript playground or boilerplate projects on GitHub.
The Happy Marriage of Redis and Protobuf by Scott Haines of Twilio - Redis Da... (Redis Labs)
The document summarizes a presentation about using Protocol Buffers and Redis together. It discusses how Protocol Buffers provide strict data types, versioning, and serialization/deserialization benefits. It then outlines Redis key patterns using namespaces, versions, data categories and identifiers. Examples are provided to show how Protocol Buffers messages can be stored in Redis using these key patterns, including storing connections data in sets, sorted sets and individual messages. Benefits discussed include structure, readability, testing and abstraction.
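As a rough sketch of the key pattern the summary mentions, a key can be assembled from namespace, version, data category, and identifier segments. The segment names, separator, and example values are assumptions for illustration, not Twilio's actual scheme.

```python
# Sketch of the key pattern described above: namespace, version, data
# category, and identifier joined into one Redis key. The segment names,
# separator, and values are illustrative assumptions.
def make_key(namespace: str, version: str, category: str, identifier: str) -> str:
    parts = [namespace, version, category, identifier]
    assert all(":" not in p for p in parts), "separator must not appear in parts"
    return ":".join(parts)

key = make_key("voice", "v1", "connections", "user-123")
print(key)  # prints "voice:v1:connections:user-123"

# A serialized Protocol Buffers message would be stored under that key,
# e.g. r.set(key, message.SerializeToString()) with redis-py; bumping the
# version segment lets old and new message formats coexist.
```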
In the Cloud Native community, eBPF is gaining popularity and can often be the best solution for challenges that require deep observability of a system. eBPF is currently being embraced by major players.
Mydbops co-founder Kabilesh P.R (MySQL and Mongo consultant) illustrates debugging Linux issues with eBPF: a brief introduction to BPF and eBPF, BPF internals, and the tools in action for faster resolution.
The document describes Uber's use of Kafka for reliable messaging. Kafka is used for inter-service messaging, stream processing, database changelog transport, data ingestion, and logging. It provides ordered, partitioned streaming and unordered queueing semantics. The summary describes Uber's consumer wrapper that implements features like acknowledgments, redelivery, delays between retries, dead letter queues, competing consumers, and multi-datacenter failover to support reliable messaging on top of Kafka.
A common request sent from your web browser to a web server goes quite a long way, and it can take a great deal of time until the data your browser can display is fetched back. I will talk about making this great deal of time significantly less great by caching things at different levels: starting with client-side caching for faster display and minimized data transfer, continuing with storing the results of already performed operations and computations, and finishing with lowering the load on database servers by caching result sets. Cache expiration and invalidation is the hardest part, so I will cover that too. The presentation will be focused mainly on PHP, but most of the principles are quite general and work elsewhere too.
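A minimal sketch of one of these levels, caching computed results with expiration and explicit invalidation, might look like the following. The talk is PHP-focused; Python is used here purely for illustration, and all names are made up.

```python
# Minimal sketch of result caching with a TTL (expiration) and explicit
# invalidation, the two hardest parts mentioned above.
import time

class TTLCache:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.store = {}                # key -> (expires_at, value)

    def get(self, key, compute):
        now = time.monotonic()
        hit = self.store.get(key)
        if hit and hit[0] > now:       # fresh entry: skip the expensive call
            return hit[1]
        value = compute()              # miss or stale: recompute and store
        self.store[key] = (now + self.ttl, value)
        return value

    def invalidate(self, key):
        """Drop an entry when the underlying data changes."""
        self.store.pop(key, None)

calls = 0
def expensive_query():
    global calls
    calls += 1
    return "result-set"

cache = TTLCache(ttl_seconds=60)
cache.get("q1", expensive_query)
cache.get("q1", expensive_query)       # served from cache
print(calls)  # prints 1
cache.invalidate("q1")
cache.get("q1", expensive_query)       # recomputed after invalidation
print(calls)  # prints 2
```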
The document discusses techniques for storing time series data at scale in a time series database (TSDB). It describes storing 16 bytes of data per sample by compressing timestamps and values. It proposes organizing data into blocks, chunks, and files to handle high churn rates. An index structure uses unique IDs and sorted label mappings to enable efficient queries over millions of time series and billions of samples. Benchmarks show the TSDB can handle over 100,000 samples/second while keeping memory, CPU and disk usage low.
IPFS is a distribution protocol that enables the creation of completely distributed applications through content addressing. A very ambitious open source project in Go, IPFS adopts a peer-to-peer hypermedia protocol to protect against a single point of failure. This presentation aims to highlight the design and ideas of IPFS and also touches upon a real world use case.
BPF, the Berkeley Packet Filter mechanism, was first introduced into Linux in 1997 in version 2.1.75. It has seen a number of extensions over the years. Recently, in versions 3.15 - 3.19, it received a major overhaul which drastically expanded its applicability. This talk will cover how the instruction set looks today and why: its architecture, capabilities, interface, and just-in-time compilers. We will also talk about how it is being used in different areas of the kernel, like tracing and networking, and about future plans.
The document discusses security issues with AngularJS and summarizes four general attack vectors:
A1: Attacking the AngularJS sandbox by bypassing restrictions on dangerous objects and methods. Early versions had trivial bypasses but later versions required more creative techniques.
A2: Attacking the AngularJS sanitizer, which aims to sanitize HTML strings and remove XSS attacks. There were issues with both an older sanitizer version and the current version.
A3: Attacking the Content Security Policy (CSP) mode in AngularJS.
A4: Attacking vulnerabilities directly in the AngularJS codebase through techniques like sandbox bypasses.
The document describes a memory management system using memory folios to address problems with legacy page caching and compound pages. Memory folios provide a unified interface for accessing pages and simplify operations on high-order and compound pages. Folios also improve page cache performance by maintaining a shorter LRU list with one entry per folio rather than per page.
In this presentation, Yasunori Goto and Qi Fuli will talk about the basics of NVDIMM, the RAS issues of Non-Volatile DIMM (NVDIMM), and what features have been made and are being developed for it.
NVDIMM is expected to be a device of the new age. Though a CPU can read/write NVDIMM directly like RAM, the data on NVDIMM remains after power-down or reboot, so an in-memory database will be one good example of an NVDIMM use case.
Since many people have made a great effort for Linux, NVDIMM drivers, filesystems, management commands, and many libraries have been well developed over the past few years.
However, Yasunori Goto found some issues with the RAS (Reliability, Availability, and Serviceability) features of NVDIMM, because the characteristics of NVDIMM are like a mixture of storage and RAM. For example, NVDIMM does not have a hotplug feature, because it is inserted into a DIMM slot like RAM, but its data must be backed up and restored like storage.
Kernel Recipes 2017 - EBPF and XDP - Eric Leblond (Anne Nicolas)
Berkeley Packet Filter is an old friend for most people who deal with networking under Linux. But its extended version, eBPF, is completely redefining the scope of usage and interaction with the kernel. It can indeed be used to instrument most parts of the kernel, from network tracing to process or I/O monitoring.
This talk will provide an overview of eBPF, from the concept to tools like BCC. It will then focus on XDP, the eXpress Data Path, and the possible networking applications provided by this new framework.
Eric Leblond, Stamus Network
Security is one of the fundamental features for enterprise adoption. Specifically, for SQL users, row/column-level access control is important. However, when a cluster is used as a data warehouse accessed by various user groups in different ways, it is difficult to guarantee data governance in a consistent way. In this talk, we focus on SQL users and discuss how to provide row/column-level access controls with common access control rules throughout the whole cluster with various SQL engines, e.g., Apache Spark 2.1, Apache Spark 1.6, and Apache Hive 2.1. If some of the rules are changed, all engines are controlled consistently in near real time. Technically, we enable the Spark Thrift Server to work with an identity given by the JDBC connection and take advantage of the Hive LLAP daemon as a shared and secured processing engine. We demonstrate row-level filtering, column-level filtering, and various column maskings in Apache Spark with Apache Ranger. We use Apache Ranger as a single point of security control.
From cache to in-memory data grid. Introduction to Hazelcast. (Taras Matyashovsky)
This presentation:
* covers basics of caching and popular cache types
* explains evolution from simple cache to distributed, and from distributed to IMDG
* does not describe the use of NoSQL solutions for caching
* is not intended as a product comparison or as promotion of Hazelcast as the best solution
The columnar roadmap: Apache Parquet and Apache Arrow (DataWorks Summit)
The Hadoop ecosystem has standardized on columnar formats—Apache Parquet for on-disk storage and Apache Arrow for in-memory. With this trend, deep integration with columnar formats is a key differentiator for big data technologies. Vertical integration from storage to execution greatly improves the latency of accessing data by pushing projections and filters to the storage layer, reducing time spent in IO reading from disk, as well as CPU time spent decompressing and decoding. Standards like Arrow and Parquet make this integration even more valuable as data can now cross system boundaries without incurring costly translation. Cross-system programming using languages such as Spark, Python, or SQL can become as fast as native internal performance.
In this talk we’ll explain how Parquet is improving at the storage level, with metadata and statistics that will facilitate more optimizations in query engines in the future. We’ll detail how the new vectorized reader from Parquet to Arrow enables much faster reads by removing abstractions as well as several future improvements. We will also discuss how standard Arrow-based APIs pave the way to breaking the silos of big data. One example is Arrow-based universal function libraries that can be written in any language (Java, Scala, C++, Python, R, ...) and will be usable in any big data system (Spark, Impala, Presto, Drill). Another is a standard data access API with projection and predicate push downs, which will greatly simplify data access optimizations across the board.
Speaker
Julien Le Dem, Principal Engineer, WeWork
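The IO argument in the abstract above can be illustrated with a toy columnar scan: a query that projects two of five columns and pushes a filter down never touches the other columns. This is only a sketch of the principle with made-up data; Parquet and Arrow add encodings, statistics, and zero-copy memory layouts on top.

```python
# Toy illustration of columnar projection and predicate pushdown: with
# one list per column, a scan reads only the projected columns and can
# filter before rows are materialized.
rows = [
    {"id": i, "country": c, "amount": a, "notes": "x" * 50, "ts": i * 10}
    for i, (c, a) in enumerate([("DE", 10), ("US", 25), ("DE", 7), ("FR", 30)])
]

# Columnar layout: one list per column.
columns = {name: [r[name] for r in rows] for name in rows[0]}

def scan(projection, predicate_col, predicate):
    """Projection + predicate pushdown: touch only the needed columns."""
    keep = [i for i, v in enumerate(columns[predicate_col]) if predicate(v)]
    return {name: [columns[name][i] for i in keep] for name in projection}

result = scan(["id", "amount"], "country", lambda c: c == "DE")
print(result)  # prints {'id': [0, 2], 'amount': [10, 7]}
# The wide "notes" column was never read; that is the IO saving the talk
# describes when filters are pushed to the storage layer.
```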
eBPF is an exciting new technology that is poised to transform Linux performance engineering. eBPF enables users to dynamically and programmatically trace any kernel or user-space code path, safely and efficiently. However, understanding eBPF is not so simple. The goal of this talk is to give audiences a fundamental understanding of eBPF, how it interconnects existing Linux tracing technologies, and how it provides a powerful platform to solve any Linux performance problem.
The page cache mechanism in the Linux kernel.
Note: When you view the slide deck via a web browser, the screenshots may be blurred. You can download and view them offline (the screenshots are clear).
This document provides an overview of Apache Flink internals. It begins with an introduction and recap of Flink programming concepts. It then discusses how Flink programs are compiled into execution plans and executed in a pipelined fashion, as opposed to being executed eagerly like regular code. The document outlines Flink's architecture including the optimizer, runtime environment, and data storage integrations. It also covers iterative processing and how Flink handles iterations both by unrolling loops and with native iterative datasets.
Apache Pulsar Development 101 with Python (Timothy Spann)
Apache Pulsar Development 101 with Python PS2022_Ecosystem_v0.0
There is always the fear that a speaker cannot make it, so since I was the MC for the ecosystem track, I put together a talk just in case.
Here it is. Never seen or presented.
FIWARE Wednesday Webinars - The Use of DDS Middleware in Robotics (Part 2) (FIWARE)
The Use of DDS Middleware in Robotics - 17 June 202
Corresponding webinar recording: https://youtu.be/pTkZk4VF0gY
This webinar, in cooperation with FIWARE Foundation Gold Member eProsima, will provide an introduction to core real-time technologies: FAST DDS, the most complete Open Source DDS for ROS 2, and Micro XRCE-DDS, the middleware for microcontrollers and micro-ROS.
Chapter: Robotics
Difficulty: 3
Audience: Technical Domain Specific
Speakers: Jaime Martin Losa (CEO, eProsima) and Francesca Finocchiaro (Team Manager micro-ROS, eProsima)
eProsima RPC over DDS - OMG June 2013 Berlin Meeting (Jaime Martin Losa)
DDS is being increasingly selected as the foundation of many mission- and business-critical systems. Some of these systems are designed to be completely data-centric and asynchronous, while others prefer to maintain some interactions (such as placing an order, performing a computation, etc.) as traditional client/server, request/reply, interactions. As such, many DDS users would like to define Services as a collection of operations/methods, and invoke methods using DDS as the transport for requests, replies and exceptions.
This talk will introduce eProsima RPC for DDS, a high performance Remote Procedure Call framework based on DDS, 100% standards-based and open source.
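The request/reply-over-pub/sub idea can be sketched as follows: a request sample carries a correlation ID, the reply topic echoes it, and the client matches replies to outstanding calls. The topic names and in-memory bus are illustrative assumptions; real DDS provides discovery, QoS, and transport underneath.

```python
# Sketch of request/reply layered on a data-centric bus, as RPC-over-DDS
# does: correlation ids tie each reply to its request. The topic names
# and in-memory bus are illustrative, not the actual eProsima API.
import itertools
from collections import defaultdict

bus = defaultdict(list)            # topic -> list of published samples
_ids = itertools.count(1)

def publish(topic, sample):
    bus[topic].append(sample)

def serve_pending():
    """Server side: consume requests, publish correlated replies."""
    while bus["Service_Request"]:
        req = bus["Service_Request"].pop(0)
        if req["op"] == "add":
            publish("Service_Reply", {"id": req["id"], "result": sum(req["args"])})

def call(operation, args):
    """Client side: publish a request, then poll the reply topic."""
    corr_id = next(_ids)
    publish("Service_Request", {"id": corr_id, "op": operation, "args": args})
    serve_pending()                # stand-in for the remote service running
    for reply in bus["Service_Reply"]:
        if reply["id"] == corr_id:
            return reply["result"]
    raise TimeoutError("no reply")

print(call("add", [2, 3]))  # prints 5
```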
Find in this presentation an overview of the micro-ROS project, with its latest developments and new features.
This presentation contains the workshop "Deeply embedded software", delivered at the European Robotics Forum on April 13th, 2021.
Micro XRCE-DDS: Bringing DDS into microcontrollers (eProsima)
This presentation is an introduction to eProsima Micro XRCE-DDS, an open source wire protocol that implements the OMG DDS for eXtremely Resource Constrained Environments standard (DDS-XRCE). The aim of the DDS-XRCE protocol is to provide access to the DDS Global Data Space from resource-constrained devices.
Real-time processing at Streamroot - Golang Paris June 2016 (Simon Caplette)
This document summarizes Simon Caplette's work as a Backend Scalability Engineer at streamroot.io, which provides peer-to-peer functionality that saves large video broadcasters bandwidth. Key aspects of Streamroot's realtime systems built with Go include high-concurrency and scalability components like trackers, signaling servers, and autoscalers. Go was well suited due to its built-in concurrency and its ability to handle many concurrent processes. The document also discusses Streamroot's realtime data pipeline using Kafka and InfluxDB for low-latency analytics.
Fast Streaming into ClickHouse with Apache Pulsar (Timothy Spann)
https://github.com/tspannhw/SpeakerProfile/tree/main/2022/talks
Fast Streaming into Clickhouse with Apache Pulsar
https://github.com/tspannhw/FLiPC-FastStreamingIntoClickhouseWithApachePulsar
https://www.meetup.com/San-Francisco-Bay-Area-ClickHouse-Meetup/events/285271332/
Fast Streaming into Clickhouse with Apache Pulsar - Meetup 2022
StreamNative - Apache Pulsar - Stream to Altinity Cloud - Clickhouse
May the 4th Be With You!
04-May-2022 Clickhouse Meetup
CREATE TABLE iotjetsonjson_local
(
uuid String,
camera String,
ipaddress String,
networktime String,
top1pct String,
top1 String,
cputemp String,
gputemp String,
gputempf String,
cputempf String,
runtime String,
host String,
filename String,
host_name String,
macaddress String,
te String,
systemtime String,
cpu String,
diskusage String,
memory String,
imageinput String
)
ENGINE = MergeTree()
PARTITION BY uuid
ORDER BY (uuid);
CREATE TABLE iotjetsonjson ON CLUSTER '{cluster}' AS iotjetsonjson_local
ENGINE = Distributed('{cluster}', default, iotjetsonjson_local, rand());
select uuid, top1pct, top1, gputempf, cputempf
from iotjetsonjson
where toFloat32OrZero(top1pct) > 40
order by toFloat32OrZero(top1pct) desc, systemtime desc;
select uuid, systemtime, networktime, te, top1pct, top1, cputempf, gputempf, cpu, diskusage, memory,filename
from iotjetsonjson
order by systemtime desc;
select top1, max(toFloat32OrZero(top1pct)), max(gputempf), max(cputempf)
from iotjetsonjson
group by top1;
select top1, max(toFloat32OrZero(top1pct)) as maxTop1, max(gputempf), max(cputempf)
from iotjetsonjson
group by top1
order by maxTop1;
Tim Spann
Developer Advocate
StreamNative
An introduction to KrakenD, the ultra-high-performance API Gateway with middlewares. An open-source tool built using Go that is currently serving traffic on major European sites.
One Billion Black Friday Shoppers on a Distributed Data Store (Fahd Siddiqui, ...) - DataStax
EmoDB is an open source RESTful data store built on top of Cassandra that stores JSON documents and, most notably, offers a databus that allows subscribers to watch for changes to those documents in real time. It features massive non-blocking global writes, asynchronous cross-data-center communication, and schema-less JSON content.
For non-blocking global writes, we created a "JSON delta" specification that defines incremental updates to any JSON document. Each row in Cassandra is thus a sequence of deltas that serves as a Conflict-free Replicated Data Type (CRDT) for EmoDB's system of record. We introduce the concept of "distributed compactions" to frequently compact these deltas for efficient reads.
Finally, the databus forms a crucial piece of our data infrastructure and offers a change queue to real time streaming applications.
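The delta-resolution idea described above can be sketched in a few lines. This is a toy model, not EmoDB's actual JSON-delta grammar: each delta here is simply a dict of top-level field updates, applied in timestamp order, and "compaction" collapses the history into a single delta.

```python
# Toy model of the "sequence of deltas" idea: a document is the fold of
# its ordered deltas, and compaction collapses history into one delta.

def resolve(deltas):
    """Fold an ordered list of (timestamp, delta) pairs into a document."""
    doc = {}
    for _ts, delta in sorted(deltas, key=lambda p: p[0]):
        doc.update(delta)
    return doc

def compact(deltas):
    """'Distributed compaction' in miniature: replace history with one delta."""
    resolved = resolve(deltas)
    last_ts = max(ts for ts, _ in deltas)
    return [(last_ts, resolved)]

history = [
    (1, {"rating": 4, "text": "good"}),
    (3, {"rating": 5}),          # the later writer wins per field
    (2, {"verified": True}),
]
assert resolve(history) == {"rating": 5, "text": "good", "verified": True}
assert resolve(compact(history)) == resolve(history)
```

Because deltas commute under timestamp ordering, replicas that receive them in different orders still converge, which is the CRDT property the summary refers to.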
About the Speaker
Fahd Siddiqui Lead Software Engineer, Bazaarvoice
Fahd Siddiqui is a Lead Software Engineer at Bazaarvoice on the data infrastructure team. His interests include highly scalable and distributed data systems. He holds a Master's degree in Computer Engineering from the University of Texas at Austin, and frequently talks at the Austin C* User Group. About Bazaarvoice: Bazaarvoice is a network that connects brands and retailers to the authentic voices of people where they shop. More at www.bazaarvoice.com
Realizing the promise of portable data processing with Apache Beam (DataWorks Summit)
The world of big data involves an ever changing field of players. Much as SQL stands as a lingua franca for declarative data analysis, Apache Beam aims to provide a portable standard for expressing robust, out-of-order data processing pipelines in a variety of languages across a variety of platforms. In a way, Apache Beam is a glue that can connect the Big Data ecosystem together; it enables users to "run-anything-anywhere".
This talk will briefly cover the capabilities of the Beam model for data processing, as well as the current state of the Beam ecosystem. We'll discuss Beam architecture and dive into the portability layer. We'll offer a technical analysis of Beam's powerful primitive operations that enable true and reliable portability across diverse environments. Finally, we'll demonstrate a complex pipeline running on multiple runners in multiple deployment scenarios (e.g. Apache Spark on Amazon Web Services, Apache Flink on Google Cloud, Apache Apex on-premise), and give a glimpse at some of the challenges Beam aims to address in the future.
Porting a Streaming Pipeline from Scala to Rust (Evan Chan)
How we at Conviva ported a streaming data pipeline in months from Scala to Rust. What are the important human and technical factors in our port, and what did we learn?
CocoaConf: The Language of Mobile Software is APIs (Tim Burks)
We’re all excited about using the same language to write our mobile apps and cloud services, but as we do, we’ll still need to work with a few things that aren’t written with Swift. Fortunately, there are some great patterns that we can use for doing that. In this session we’ll talk about two technologies that you can use to make your app speak with APIs written in any language: OpenAPI and Protocol Buffers, and then we’ll see how to use them from clients and servers that are written in Swift.
Presented Friday November 4, 2016 in San Jose.
BKK16-409: VOSYSwitch Port to ARMv8 Platforms and ODP Integration (Linaro)
Virtual Open Systems has developed VOSYSwitch, a high-performance user space networking virtual switch solution enabling NFV, based on the open source packet processing framework SnabbSwitch. In this talk, the experience of porting VOSYSwitch from x86 to ARMv8 will be shared, along with the integration of ODP as a driver layer for the available hardware resources. In addition to this presentation, a live demonstration will showcase chained VNFs connected through VOSYSwitch, where an OpenFastPath web server is implemented behind an ODP enabled packet filtering firewall. The targeted platforms are Freescale (NXP) LS2085A and Cavium's ThunderX.
NANOG 75: Network Device Property as Code (Damien Garros)
Device configuration templates have simplified a lot of things for the network industry, but many networks still manage their device properties (aka variables) manually, which is tedious and error-prone. This talk presents a new approach to generating and managing network device properties easily using infrastructure-as-code principles.
Learn more about the tremendous value Open Data Plane brings to NFV
Bob Monkman, Networking Segment Marketing Manager, ARM
Bill Fischofer, Senior Software Engineer, Linaro Networking Group
Moderator:
Brandon Lewis, OpenSystems Media
The document discusses several interoperability standards for IoT - CoAP, OMA LWM2M, and IPSO Smart Objects. It describes how they build upon each other with CoAP providing REST APIs for constrained devices, OMA LWM2M building on CoAP to define device management objects and models, and IPSO Smart Objects further defining application objects based on the LWM2M model. The standards provide a layered approach to connectivity, services, data models and applications to enable interoperability for IoT devices and services.
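The layering described above can be made concrete with the addressing scheme LWM2M and IPSO share: resources live at `/<object>/<instance>/<resource>` paths. The sketch below uses a tiny excerpt of the public registry IDs (3303 = Temperature, 5700 = Sensor Value); the `describe` helper is illustrative, not part of any standard library.

```python
# Minimal sketch of IPSO/LwM2M resource addressing. The ID tables are a
# small excerpt of the public OMA/IPSO registry, not the full registry.

IPSO_OBJECTS = {3303: "Temperature", 3304: "Humidity"}
IPSO_RESOURCES = {5700: "Sensor Value", 5701: "Sensor Units"}

def describe(path):
    """Turn an LwM2M path like '/3303/0/5700' into a human-readable label."""
    obj_id, instance, res_id = (int(p) for p in path.strip("/").split("/"))
    obj = IPSO_OBJECTS.get(obj_id, f"Object {obj_id}")
    res = IPSO_RESOURCES.get(res_id, f"Resource {res_id}")
    return f"{obj} #{instance}: {res}"

assert describe("/3303/0/5700") == "Temperature #0: Sensor Value"
```

Because every layer agrees on these numeric IDs, a CoAP GET on `/3303/0/5700` means "read the temperature sensor value" to any compliant device or server.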
micro-ROS: Developing ROS 2 professional applications based on MCUs (eProsima)
micro-ROS bridges the gap between ROS 2 and embedded devices like microcontrollers (MCUs). It allows ROS 2 nodes to run on MCUs and larger processors, accelerating application development by combining CPUs and MCUs. micro-ROS addresses legacy MCU challenges like memory limitations and lack of software development tools. It provides a middleware architecture using Micro XRCE-DDS that is memory efficient and portable across operating systems and hardware. The latest enhancements expand hardware and RTOS support and bring ROS 2 features to MCUs. micro-ROS aims to accelerate new applications in areas like robotics, IoT, and autonomous systems.
Tungsten Fabric provides a network fabric connecting all environments and clouds. It aims to be the most ubiquitous, easy-to-use, scalable, secure, and cloud-grade SDN stack. It has over 300 contributors and 100 active developers. Recent improvements include better support for microservices, containers, ingress/egress policies, and load balancing. It can provide consistent security and networking across VMs, containers, and bare metal.
What you need to know about .NET Core 3.0 and beyond (Jon Galloway)
The document provides an overview of .NET Core 3.0 including its top features, upcoming release schedule, and what is coming next. It discusses the key features in .NET Core 3.0 such as Windows desktop apps, microservices, gRPC, and machine learning. It also outlines the future of .NET with .NET 5 which will unify the different .NET implementations into a single platform.
Similar to Atom: The Redis Streams-Powered Microservices SDK (Dan Pipemazo)
Redis Day Bangalore 2020 - Session state caching with Redis (Redis Labs)
This document discusses using Redis caching to improve performance for the DBS Paylah mobile wallet application. Paylah aims to significantly increase its user base which will increase load on its backend systems. Caching application data and session state in Redis can reduce latency, improve responsiveness for users, and reduce costs by lowering load on legacy backend databases and mainframes. The document outlines some key Paylah use cases where caching transaction histories and account details in Redis would accelerate retrieval and improve the mobile experience by avoiding the need to access slower backend systems on each request.
Protecting Your API with Redis by Jane Paek - Redis Day Seattle 2020 (Redis Labs)
The document discusses rate limiting and metering using Redis. It begins by introducing rate limiting and metering and why Redis is well-suited for these tasks. It then covers different Redis data structures that can be used, such as lists, hashes, sorted sets and strings. Common Redis commands for counting, setting keys and checking time to live are also presented. Different rate limiting design patterns and anti-patterns are described, including fixed window, sliding window and token bucket approaches. Finally, resources for further information are provided.
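The fixed-window pattern summarized above needs only two Redis commands, INCR and EXPIRE. To keep the example runnable without a server, a tiny in-memory stand-in replaces the Redis client; with redis-py the same calls would be `r.incr(key)` and `r.expire(key, window)`.

```python
import time

class FakeRedis:
    """In-memory stand-in for the two commands the pattern needs.
    TTLs are recorded but not enforced in this toy."""
    def __init__(self):
        self.counts = {}
        self.ttls = {}
    def incr(self, key):
        self.counts[key] = self.counts.get(key, 0) + 1
        return self.counts[key]
    def expire(self, key, ttl):
        self.ttls[key] = ttl

def allow(r, user, limit, window=60, now=None):
    """Fixed-window rate limiting: one counter per user per time window."""
    now = time.time() if now is None else now
    key = f"rate:{user}:{int(now // window)}"
    count = r.incr(key)
    if count == 1:
        r.expire(key, window)   # the window's counter cleans itself up
    return count <= limit

r = FakeRedis()
results = [allow(r, "alice", limit=3, now=100) for _ in range(5)]
assert results == [True, True, True, False, False]
```

The sliding-window and token-bucket variants mentioned in the talk trade this simplicity for smoother behavior at window boundaries.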
SQL, Redis and Kubernetes by Paul Stanton of Windocks - Redis Day Seattle 2020 (Redis Labs)
The document discusses common use cases for combining SQL, Redis, and Kubernetes including caching, session management, rate limiting, and data ingestion. It outlines how Kubernetes can be used for scaling microservices while Redis is used for data service scaling. The presentation proposes combining Redis, SQL Server, and Kubernetes with a proxy service, and describes using Redis for caching, session storage, and rate limiting of SQL data. It also discusses running Redis and front-end apps on Kubernetes and deploying SQL as a Kubernetes service through a proxy.
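The caching use case above is the classic cache-aside pattern. In this sketch, plain dicts stand in for Redis and SQL Server, and the function names are illustrative.

```python
# Cache-aside: check the cache first, fall back to the database on a miss,
# and populate the cache on the way out. Dicts stand in for Redis and SQL.

sql_rows = {42: {"name": "widget", "price": 9.99}}   # pretend database
cache = {}                                           # pretend Redis
db_hits = 0

def query_sql(product_id):
    global db_hits
    db_hits += 1
    return sql_rows.get(product_id)

def get_product(product_id):
    key = f"product:{product_id}"
    if key in cache:                 # cache hit: skip the database
        return cache[key]
    row = query_sql(product_id)      # cache miss: read through and populate
    if row is not None:
        cache[key] = row
    return row

assert get_product(42) == {"name": "widget", "price": 9.99}
assert get_product(42) == {"name": "widget", "price": 9.99}
assert db_hits == 1   # the second read was served from the cache
```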
Rust and Redis - Solving Problems for Kubernetes by Ravi Jagannathan of VMwar... (Redis Labs)
This document discusses using Rust and Redis to build cloud native platforms. It first provides context about DevOps and the need to do more with less. It then discusses how platforms are becoming more distributed and how Kubernetes upends distribution paradigms. The document dives into how Rust addresses issues like concurrency and systems programming. It also discusses how Redis can be used for caching, queues, streams, and more. Finally, it mentions that Rust and Redis will be demonstrated.
Redis for Data Science and Engineering by Dmitry Polyakovsky of Oracle (Redis Labs)
This document contains a presentation about using Redis for data science and engineering. It introduces the presenter and provides an agenda that covers using Redis for data science and data engineering. The presentation notes that Redis can be used as both a data store and a job queue, and that it has flexible data structures and is fast, though it uses RAM and cannot query by value. It also mentions Python Pandas and includes a demo and links for further information.
Practical Use Cases for ACLs in Redis 6 by Jamie Scott - Redis Day Seattle 2020 (Redis Labs)
Jamie Scott from RedisLabs presented on practical use cases for access control lists (ACLs) in Redis 6. The presentation covered new security features in Redis 6 including encryption in transit, key space and command restrictions, and multiple access control list users. It demonstrated how ACLs allow users to define access based on key labels and restrictions. ACLs can facilitate discretionary and mandatory access controls. The presentation showed examples of using ACLs to restrict user access by key labels and commands to enhance operational security.
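As a flavor of what such restrictions look like, Redis 6 ACL rules combine key patterns and command categories. The user names, passwords, and key pattern below are illustrative, not from the talk.

```
ACL SETUSER reporting-app on >s3cret ~reports:* +@read
ACL SETUSER admin-user on >t0psecret ~* +@all
```

The first rule creates an enabled user who can only run read-category commands against keys matching `reports:*`; the second grants full access to everything.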
Moving Beyond Cache by Yiftach Shoolman, Redis Labs - Redis Day Seattle 2020 (Redis Labs)
This document summarizes a presentation about Redis version 6 and beyond. Some key points include:
- Redis version 6 includes new features like ACL for security, client-side caching, diskless replication, and multi-threaded I/O.
- Redis is positioned as both a cache and a database due to its speed, data structures, and ability to handle complex data models through modules.
- Redis Enterprise provides additional capabilities like durability, high availability, geo-distribution, security and multi-tenancy.
- Modern data models in Redis modules include Streams, RediSearch, RedisGraph, RedisTimeSeries, RedisAI, RedisJSON and RedisBloom.
- RedisInsight is
Leveraging Redis for System Monitoring by Adam McCormick of SBG - Redis Day S... (Redis Labs)
The document discusses how Sinclair Broadcast Group leverages Redis for system monitoring of its content delivery network. It operates 193 news stations with 10,000 active pages daily and millions in archive. New stories are posted every 15 seconds and must be visible across its 1,000+ targets within 1 minute. Redis is used to track performance across the multi-level CDN and ensure service level agreements are met with real-time resolution and alerting. It provides a black box view of the audience experience and can scale monitoring to all relevant pages within 30 seconds. Redis acts as a distributed data store to parallelize the monitoring task across the large scale of the network.
JSON in Redis - When to use RedisJSON by Jay Won of Coupang - Redis Day Seatt... (Redis Labs)
The document summarizes a presentation about when to use the RedisJSON data type. It discusses how Coupang uses Redis extensively for their ad platform. It then compares the performance and memory usage of storing JSON data as strings, hashes, or using the RedisJSON data type. Benchmark results show RedisJSON can provide better performance for retrieving and updating JSON fields compared to strings and hashes, though it uses more memory. The document recommends using RedisJSON for smaller JSON payloads after benchmarking and memory monitoring.
Highly Available Persistent Session Management Service by Mohamed Elmergawi o... (Redis Labs)
The document discusses the challenges of building a highly available persistent session management service. It describes Zulily's legacy architecture which lacked high availability and required manual intervention. A new architecture is proposed using Redis for persistent storage, Dynomite for real-time replication across data centers, and a connection pooling proxy to improve efficiency and distribute load. The architecture provides high availability through replication, reduces overhead through connection pooling, and handles failures through consistent hashing and health checks. It was tested through simulations and showed a failure rate of only 0.42% during outages.
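The consistent hashing mentioned above can be sketched with a simple hash ring. This is a minimal illustration of the technique, not Dynomite's actual implementation; node names are made up.

```python
import bisect
import hashlib

def h(value):
    """Stable 32-bit hash for placing nodes and keys on the ring."""
    return int(hashlib.md5(value.encode()).hexdigest(), 16) % (2**32)

class Ring:
    """Minimal consistent-hash ring: a key belongs to the next point clockwise."""
    def __init__(self, nodes, vnodes=64):
        self.points = sorted(
            (h(f"{node}#{i}"), node) for node in nodes for i in range(vnodes)
        )
        self.keys = [p for p, _ in self.points]
    def node_for(self, key):
        idx = bisect.bisect(self.keys, h(key)) % len(self.points)
        return self.points[idx][1]

ring = Ring(["redis-a", "redis-b", "redis-c"])
assert ring.node_for("session:1234") in {"redis-a", "redis-b", "redis-c"}

# Removing a node only remaps the keys that node owned; everyone else stays put.
smaller = Ring(["redis-a", "redis-b"])
moved = sum(
    1
    for i in range(1000)
    if ring.node_for(f"s:{i}") != "redis-c"
    and ring.node_for(f"s:{i}") != smaller.node_for(f"s:{i}")
)
assert moved == 0
```

That last property is exactly why the architecture above can lose or add a node without rehashing every session.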
Anatomy of a Redis Command by Madelyn Olson of Amazon Web Services - Redis Da... (Redis Labs)
The document describes the process that a Redis command follows from the client side to the server side. On the client side, the command is sent over the network to the Redis server. On the server side, the command is read from the kernel buffers, validated, executed by calling the relevant command handler, and the response is written back to the client over the network. The core functions involved on the server side are ReadQueryFromClient(), ProcessInputBuffer(), ProcessCommand(), Call(), and handleClientsWithPendingWrites(). Redis 6.0 introduced I/O threads to handle reads and writes in parallel for improved performance while still maintaining Redis' single-threaded processing model.
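Before any of those server-side functions run, the client frames the command in RESP (the Redis serialization protocol) as an array of bulk strings. This small encoder shows the exact bytes that travel over the network; the server-side parsing the summary describes is the inverse of this framing.

```python
def encode_resp(*args):
    """Encode a Redis command as a RESP array of bulk strings."""
    out = [f"*{len(args)}\r\n".encode()]
    for arg in args:
        data = arg if isinstance(arg, bytes) else str(arg).encode()
        out.append(b"$%d\r\n%s\r\n" % (len(data), data))
    return b"".join(out)

# The byte sequence a client writes to the socket for SET key value:
assert encode_resp("SET", "key", "value") == \
    b"*3\r\n$3\r\nSET\r\n$3\r\nkey\r\n$5\r\nvalue\r\n"
```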
Building a Multi-dimensional Analytics Engine with RedisGraph by Matthew Goos... (Redis Labs)
This document discusses MDmetrix, a healthcare data intelligence company that uses RedisGraph to provide flexible analysis of hospital data. RedisGraph is a graph database that represents data as nodes and relationships and uses an adjacency matrix and linear algebra to query the graph. MDmetrix models its healthcare data as a property graph in RedisGraph to allow for complex queries across different data dimensions like patients, facilities, procedures and drugs. RedisGraph allows MDmetrix to query the data more easily than traditional OLAP cubes or relational databases due to the semi-structured and flexible nature of the graph model.
RediSearch 1.6 by Pieter Cailliau - Redis Day Bangalore 2020 (Redis Labs)
RediSearch 1.6 includes a new low-level API that allows other Redis modules to embed RediSearch indexing capabilities. It also introduces index aliasing and several performance improvements such as forked thread garbage collection. Based on benchmarks, RediSearch 1.6 shows 48-73% better performance than version 1.4, particularly during high update rates where it maintains more stable read latencies.
RedisGraph 2.0 by Pieter Cailliau - Redis Day Bangalore 2020 (Redis Labs)
RedisGraph 2.0 provides significant improvements including:
- Full text search support through embedded RediSearch 1.6 enabling graph-aided search.
- Support for returning full graph responses to enable better visualization.
- Broad support for Cypher including triadic closure and new graph-aided search capabilities.
- Performance improvements of up to 3.7x faster operations per second and 3.9x faster query times compared to RedisGraph v1.2.
- Support for benchmarking including the LDBC benchmark.
RedisTimeSeries 1.2 by Pieter Cailliau - Redis Day Bangalore 2020 (Redis Labs)
RedisTimeSeries is a time-series database that provides compression to reduce memory usage by up to 98% and improve performance. The RedisTimeSeries 1.2 release includes compression algorithms based on a Facebook paper that provide stable ingestion times independent of the number of data points. It also includes a reviewed API with performance improvements and clearer functionality. Performance testing showed ingestion throughput improved by 2-3% and query performance increased from 15-70% with the new release compared to the previous version.
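The Facebook paper referenced above is commonly known as the Gorilla paper. Its timestamp trick is delta-of-delta encoding: samples usually arrive at a near-constant interval, so the second difference is almost always zero and compresses to almost nothing. A sketch of the idea using plain integer lists rather than the real bit-packed encoding:

```python
def delta_of_delta(timestamps):
    """Gorilla-style timestamp encoding: store t0, the first delta, then
    only the change in delta (zero for a perfectly regular series)."""
    if len(timestamps) < 2:
        return list(timestamps)
    first_delta = timestamps[1] - timestamps[0]
    encoded = [timestamps[0], first_delta]
    prev_delta = first_delta
    for prev, cur in zip(timestamps[1:], timestamps[2:]):
        delta = cur - prev
        encoded.append(delta - prev_delta)
        prev_delta = delta
    return encoded

def decode(encoded):
    if len(encoded) < 2:
        return list(encoded)
    ts = [encoded[0], encoded[0] + encoded[1]]
    delta = encoded[1]
    for dod in encoded[2:]:
        delta += dod
        ts.append(ts[-1] + delta)
    return ts

regular = [1000, 1010, 1020, 1030, 1041]
enc = delta_of_delta(regular)
assert enc == [1000, 10, 0, 0, 1]    # mostly zeros -> highly compressible
assert decode(enc) == regular
```

The runs of zeros are what a bit-level encoder squeezes down to the ~98% memory reduction quoted above.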
RedisAI 0.9 by Sherin Thomas of Tensorwerk - Redis Day Bangalore 2020 (Redis Labs)
This document summarizes RedisAI 0.9 and its capabilities for model deployment and benchmarking. It introduces RedisAI's new tensor data type and ability to deploy models to CPU and GPU. It then discusses AIBench, a tool developed to benchmark AI serving solutions like RedisAI, TensorFlow Serving, and REST APIs. The benchmarks show RedisAI providing 5.5x and 2.5x more inferences than REST APIs and TensorFlow Serving respectively, due to its data locality. The document concludes by mentioning RedisAI's integration with MLFlow for model deployment with a single command.
Rate-Limiting 30 Million requests by Vijay Lakshminarayanan and Girish Koundi... (Redis Labs)
The document discusses how Freshworks uses Redis Labs to rate limit 30 million API requests per day through their API gateway called Fluffy. Fluffy stores rate limit policies and maintains counters to track requests. Redis Labs allows Fluffy to easily scale to handle the high volume of requests by providing a fast, in-memory data store for managing rate limiting counters. The system was able to successfully rate limit 30 million requests per day with Redis Labs.
Solving Complex Scaling Problems by Prashant Kumar and Abhishek Jain of Myntr... (Redis Labs)
Redis was used by Myntra to solve several complex scaling problems. It was used to build a scalable user segment service to support high read throughput of up to 5 million requests per minute with low latency. Redis allowed the service to scale beyond a single instance and included features like automatic backups and memory management. Redis also helped build a scalable mobile verification platform to reliably handle 100,000 requests per minute and scale to support higher future volumes. It was used as both a transient store and persistent backend. Finally, Redis locks helped build a scalable A/B testing platform by allowing experiments to be created and updated in an orderly concurrent fashion.
Redis as a High Scale Swiss Army Knife by Rahul Dagar and Abhishek Gupta of G... (Redis Labs)
This document discusses how Redis is used as a high-performance data store and messaging broker to power various services and personalization features at Goibibo, a leading online travel agency in India. Some key ways Redis is used include caching website content to improve performance, powering probabilistic models for personalization, acting as a broker for asynchronous background tasks, storing real-time user behavior signals to power adaptive features, and powering location-based services. Redis provides high throughput, reliability and various data structures to meet Goibibo's needs.
Essentials of Automations: The Art of Triggers and Actions in FME (Safe Software)
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack (shyamraj55)
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
TrustArc Webinar - 2024 Global Privacy Survey (TrustArc)
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Infrastructure Challenges in Scaling RAG with Custom AI Models (Zilliz)
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many features provide convenience and capability at the expense of security. This best-practices guide outlines steps users can take to better protect personal devices and information.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
AI 101: An Introduction to the Basics and Impact of Artificial Intelligence (IndexBug)
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
Climate Impact of Software Testing at Nordic Testing Days (Kari Kakkonen)
My slides from Nordic Testing Days, 6 June 2024.
The talk discusses the climate impact and sustainability of software testing. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize our carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Sustainability can be added to the quality characteristics and then measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Observability Concepts EVERY Developer Should Know - DeveloperWeek Europe (Paige Cruz)
Monitoring and observability aren't traditionally found in software curricula, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company's observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring and observability to the purview of ops, infra, and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on.
HCL Notes and Domino License Cost Reduction in the World of DLAU (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and licensing under the CCB and CCX models have been a hot topic in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new kind of licensing works and what benefits it brings you. Above all, you surely want to stay within your budget and save costs wherever possible. We understand that, and we want to help!
We explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some approaches that can lead to unnecessary expense, e.g. when a person document is used instead of a mail-in database for shared mailboxes. We show you such cases and their solutions. And of course we explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It will give you the tools and know-how to keep an overview. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
These topics will be covered:
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Real-world examples and best practices to put into action immediately
UiPath Test Automation using UiPath Test Suite series, part 6 (DianaGray10)
Welcome to part 6 of the UiPath Test Automation using UiPath Test Suite series. In this session, we will cover test automation with generative AI and OpenAI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and OpenAI
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
How to Get CNIC Information System with Paksim Ga (danishmna97)
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
2. PRESENTED BY
Agenda:
1. Starting with a demo!
   We'll see Atom and Redis in action using a depth-sensing camera.
2. SDK Architecture (featuring Redis Streams!)
   We'll dive into the SDK in the context of the demo and take a look at what's going on under the hood.
3. Open Source + Signup Give-Away
   We'll discuss open-sourcing the core of Atom and the hardware we're giving away to our early developers.
Atom OS Overview
An in-depth dive into how we built the core of Atom
Atom OS: High Level
1. Atom is a specification and a set of client libraries that allow users to create reusable microservices that interact with each other through Redis.
2. Through Docker and docker-compose we can link these microservices together to launch applications.
3. By abstracting applications into microservices we can:
● Allow each piece of code to be developed in the optimal language.
● Easily reuse and share code.
elementary-robotics/atom | atomdocs.io
6.
1. Use microservices to allow for reusable code elements with zero install or dependencies and scalable message passing.
2. Industries still rely on copying and pasting code into new projects far too often; we need better reusability to scale.
3. With full-stack hardware and software products, things like machine learning and computer vision (Python) should be implemented in a different language from your embedded code (C).
The Goal of Atom OS
7.
[Diagram: the Nucleus — a Redis 5.0+ server reachable over Unix socket and/or TCP — alongside the Atom Command Line Interface, an English-like interface to interact with Atom]
Architecture and Terminology
9.
[Diagram: an Element — your code, named $MY_ELEMENT, in a container built FROM elementaryrobotics/atom. It can be written in any language with an Atom language client, publish data, and expose commands and send responses]
Architecture and Terminology
12.
Our goal:
● Abstract out complex engineering problems into reusable, sharable elements.
● Don’t sacrifice performance or increase complexity in doing so.
Architecture Question: Why microservices?
Questions we asked ourselves before building Atom:
● Do we need another microservice framework? (gRPC, Thrift, REST, ROS, ZeroMQ, DDS)
● How would we do it if we weren’t going to build this in a reusable, abstracted fashion?
13.
Data Publication and Subscription
● Publishing should be stateless and fire-and-forget
● Consumption should be able to be regulated by the consumer
○ Solves the “slow subscriber” problem, i.e. how to handle a subscriber who only wants 1 Hz updates on a 1 kHz stream
● Low latency
● Support many parallel clients without any extra burden on the publisher or performance hit
Command and Response
● Should be able to call commands and receive responses across as many languages as possible
● Easy to make commands either synchronous or asynchronous
● Easy load balancing without complicated multi-threading
Serialization
● Optional and not overly burdensome on either the CPU or the user’s sanity
● If serialized, messages can ideally be read without knowing the schema
Architecture Question: What are we looking for?
14.
Install and OS Requirements
● Write code once; it should work on any OS (including graphics!)
● Setup should be as minimal as possible. OS and/or system-related install bugs are the worst.
Language Support
● Support as many languages as possible.
● Allows each problem to be solved in its (or its programmer’s) ideal language.
● Atwood’s law: any application that can be written in JavaScript will eventually be written in JavaScript.
Architecture Question: What are we looking for? ...Contd
Service Discovery
● Should be able to discover other microservices in the system.
● For each microservice, should be able to identify its health, available commands, and streams.
Logging
● Everything should be able to be logged.
● Failures should be easily traceable to an outside observer.
15.
It turns out that’s a pretty big list of things we’re asking for, but there’s a solution!
Redis + Docker + MessagePack
● Redis: >25 languages; streams allow for novel data flows; production-tested and well-supported
● Docker (with docker-compose): requirements installed in the container; multi-OS
● MessagePack: >25 languages; easy, fast, pretty much JSON; completely optional
Architecture Solution
18.–22.
Redis Streams: Overview
[Diagram: two streams, s1 and s2. Each holds entries 0 through X, and each entry is a set of key-value pairs — (k1 v1, k2 v2, …) in s1 and (kA vA, kB vB, …) in s2]
XADD s1 MAXLEN ~ X k1 v1 k2 v2 …
XADD s2 MAXLEN ~ X kA vA kB vB …
Append an entry of key-value pairs to a stream, trimming it to approximately X entries
XREAD BLOCK N STREAMS s1 $
XREAD BLOCK N STREAMS s1 ID
Subscribe to all entries from one stream
XREAD BLOCK N STREAMS s1 s2 ID1 ID2
Subscribe to all entries from multiple streams
XREVRANGE s1 + - COUNT N
Get the latest N entries from a stream
XRANGE s1 - + COUNT N
Get the oldest N entries from a stream
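To make the semantics of these commands concrete, here is a minimal in-memory sketch of a stream, with no live Redis server required. `MiniStream` and its method names are our own illustrative inventions; against a real server these operations map to the `xadd`, `xrange`, and `xrevrange` calls of a Redis client such as redis-py.

```python
class MiniStream:
    """Toy model of one Redis Stream: an ordered list of (ID, fields) entries."""

    def __init__(self):
        self.entries = []   # (entry_id, fields) pairs, oldest first
        self.next_seq = 0

    def xadd(self, fields, maxlen=None):
        """XADD <s> MAXLEN ~ maxlen k1 v1 ... : append an entry and trim."""
        entry_id = f"{self.next_seq}-0"
        self.next_seq += 1
        self.entries.append((entry_id, dict(fields)))
        if maxlen is not None and len(self.entries) > maxlen:
            # MAXLEN ~ lets Redis trim lazily when efficient; we trim eagerly
            self.entries = self.entries[-maxlen:]
        return entry_id

    def xrange(self, count=None):
        """XRANGE <s> - + COUNT n : oldest entries first."""
        return self.entries[:count]

    def xrevrange(self, count=None):
        """XREVRANGE <s> + - COUNT n : newest entries first."""
        return list(reversed(self.entries))[:count]

s1 = MiniStream()
for i in range(5):
    s1.xadd({"k1": i}, maxlen=3)       # stream stays capped at ~3 entries
latest = s1.xrevrange(count=1)[0]      # last-value-cache style read: ("4-0", {"k1": 4})
```

Note how the publisher-set MAXLEN cap and the consumer-chosen read direction are independent, which is what enables the interaction patterns on the following slides.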
27.
Improvements over Pub/Sub
● Redis acts as an N-value last-value cache
○ Can query for the most recent piece of data without having to monitor the stream
● Can traverse the stream as convenient, asking for all data since the last read
● N is set by the publisher, and Redis auto-prunes when efficient (when using XADD MAXLEN ~)
Consumer Groups
● Single consumer: Redis keeps track of where you were in the stream
● Multiple consumers: Redis auto-routes messages and provides introspection and failover handling (XACK)
Multiple Interaction Paradigms
● Can replicate pub/sub if desired, else can use the last-value cache
● The producer doesn’t care how clients are interacting with the data
Benefits of Data Publication and Subscription
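The consumer-regulated reading described above is the key difference from classic pub/sub. The sketch below illustrates it with a plain list standing in for the stream: a fast publisher writes every sample, a catch-up consumer asks for everything since its last-read ID (XREAD semantics), and a slow consumer just grabs the newest entry (XREVRANGE ... COUNT 1). All function names here are illustrative, not part of the Atom API.

```python
stream = []  # (entry_id, fields) pairs, as XADD would append them

def publish(i):
    stream.append((f"{i}-0", {"sample": i}))

def read_since(last_id):
    """XREAD-style read: all entries strictly after last_id."""
    ids = [e[0] for e in stream]
    start = ids.index(last_id) + 1 if last_id in ids else 0
    return stream[start:]

def read_latest():
    """XREVRANGE <s> + - COUNT 1: just the most recent entry."""
    return stream[-1] if stream else None

for i in range(1000):               # high-rate publisher, fire-and-forget
    publish(i)

backlog = read_since("989-0")       # catch-up consumer: the 10 entries it missed
newest = read_latest()              # slow consumer: skips the backlog entirely
```

The publisher does the same work regardless of how many consumers exist or how each one chooses to read, which is the "no extra burden on the publisher" property the slides ask for.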
28.–35.
Redis Streams: Async or Sync Command and Response
[Diagram: requestor element “foo” and responder element “bar” communicate through two streams, elem:bar:req and elem:foo:rep]
1. The responder blocks on its request stream: XREAD BLOCK 0 STREAMS elem:bar:req lastEntryID
2. The requestor sends a command: XADD elem:bar:req MAXLEN ~ X elem foo cmd Y data Z
3. The responder receives (entryID, {“elem”: foo, “cmd”: Y, “data”: Z})
4. The requestor blocks for an acknowledgement: XREAD BLOCK 1000 STREAMS elem:foo:rep lastEntryID
5. The responder acknowledges with a timeout T: XADD elem:foo:rep MAXLEN ~ X elem bar id ID time T
6. The requestor receives (entryID, {“elem”: bar, “id”: ID, “time”: T}) and blocks for the response: XREAD BLOCK T STREAMS elem:foo:rep lastEntryID
7. The responder publishes the response: XADD elem:foo:rep MAXLEN ~ X elem bar id ID resp R
8. The requestor receives (entryID, {“elem”: bar, “id”: ID, “resp”: R})
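The request/response handshake on these slides can be traced end to end with two in-memory lists standing in for the elem:bar:req and elem:foo:rep streams. The field names (elem, cmd, data, id, time, resp) follow the slides; the values and the `handled ...` result string are illustrative, and the blocking XREAD calls are elided since plain lists are always readable.

```python
req_stream = []   # stands in for elem:bar:req
rep_stream = []   # stands in for elem:foo:rep

# Requestor "foo" sends a command (XADD elem:bar:req MAXLEN ~ X elem foo cmd Y data Z)
req_stream.append(("1-0", {"elem": "foo", "cmd": "Y", "data": "Z"}))

# Responder "bar", blocked on XREAD of elem:bar:req, picks it up
entry_id, request = req_stream[-1]

# Responder acknowledges with the request's entry ID and a timeout T
rep_stream.append(("2-0", {"elem": "bar", "id": entry_id, "time": 1000}))

# Responder does the work, then publishes the response under the same ID
result = f"handled {request['cmd']}({request['data']})"
rep_stream.append(("3-0", {"elem": "bar", "id": entry_id, "resp": result}))

# Requestor, reading elem:foo:rep, matches the ACK and response by "id"
ack, response = [e for _, e in rep_stream if e["id"] == entry_id]
```

Because the requestor correlates replies by entry ID on its own response stream, it is free to block until the response arrives (sync) or to check the stream later (async), exactly as the slide title suggests.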
36.
Consumer Groups
● Single consumer: all commands go to a single copy of the element; Redis keeps track of where you are in processing your command queue
● Multiple, idempotent consumers: Redis auto-routes commands to instances of the same element and load-balances for you
Sync vs. Async
● Completely up to the caller, who can choose to wait for the response or not
Logging and Introspection
● Each command in the system can be uniquely identified by the tuple (element, stream ID)
Benefits of Command and Response
[Diagram: Requestor ↔ Redis (elem:bar:req, elem:foo:rep) ↔ Responder]
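The consumer-group load balancing described above can be pictured with a toy dispatcher: Redis (via XREADGROUP) delivers each pending command to exactly one consumer in the group, so two idempotent copies of the same element split the work without any threading in the element code. The element names and the round-robin policy below are illustrative; Redis actually routes to whichever consumer asks next.

```python
from itertools import cycle

# Six pending commands on the element's request stream
commands = [{"cmd": "segment", "id": i} for i in range(6)]

# Two idempotent instances of the same element, registered in one consumer group
consumers = {"bar-1": [], "bar-2": []}
route = cycle(consumers)            # stand-in for Redis picking a consumer

for command in commands:
    # Each command is delivered to exactly one group member, never both
    consumers[next(route)].append(command)
```

Each instance then processes its share sequentially, and XACK (per the earlier slide) tells Redis the command is done so unacknowledged work can be failed over.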
37.
Obligatory: None
● No serialization required; Redis supports binary data
● Serialization agreement is left to the two sides of the message
Supported in the spec: MessagePack
● Allows for type-strict messaging if desired, using explicit casts of received messages
● Allows for optional parameters if using JSON-like objects
● Can sniff the wire and decode a packet, since the types are encoded into the schema
Camera Data
● Typically left in native/raw format. Don’t waste time/CPU.
Redis Streams Serialization
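A sketch of this optional-serialization contract: each entry carries its payload as raw bytes plus a tag saying how (or whether) it was encoded, so camera frames pass through untouched while structured messages opt in. The `pack`/`unpack` helpers and the "ser" field are illustrative, not the Atom spec; `json` stands in for MessagePack here (msgpack's `packb`/`unpackb` have the same shape) so the sketch stays dependency-free.

```python
import json

def pack(payload, ser=None):
    """Build the fields for an XADD: raw bytes plus a serialization tag."""
    if ser == "json":
        return {"ser": "json", "data": json.dumps(payload).encode()}
    return {"ser": "none", "data": payload}   # raw binary passthrough

def unpack(fields):
    """Decode a received entry according to its tag."""
    if fields["ser"] == "json":
        return json.loads(fields["data"].decode())
    return fields["data"]

# Camera frames stay raw -- no CPU wasted on encode/decode
frame = unpack(pack(b"\x00\x01\x02"))

# Structured messages opt in to serialization
msg = unpack(pack({"depth_mm": 1250, "units": "mm"}, ser="json"))
```

Because the tag travels with the entry, a wire sniffer can decode tagged packets without out-of-band schema knowledge, which is the property the MessagePack bullet above calls out.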
41.
Atom OS: What is it?
● A specification around using Redis Streams to implement an RPC and messaging protocol with the functionality described in this talk
● A set of language clients that implement said specification in as many languages as possible
● A pre-compiled Docker container that has all of the requirements and language clients you need to get up and running quickly
○ elementaryrobotics/atom
● A suite of reusable elements, deployed by us and the community on Docker Hub, that expose functionality using the Atom SDK
○ Realsense
○ Stream-viewer
○ Segmentation
○ Voice
○ Record
○ More coming soon!
42.
Atom OS: Open Source
● Source code for Atom and elements can be found on GitHub
○ https://github.com/elementary-robotics
● Docker containers are built and shipped to Docker Hub with CI/CD using CircleCI
○ https://circleci.com/gh/elementary-robotics/atom
○ https://hub.docker.com/u/elementaryrobotics/
● Documentation with walkthroughs and examples
○ https://atomdocs.io
● Language support is currently implemented for C, C++, and Python
○ Please help us add languages!
● Specification improvements for v2.0
○ We want to add a heartbeat, better consumer groups, a parameter server, and any other ideas you have!