This document summarizes a presentation about Apache Twill, which provides abstractions for building large-scale applications on Apache Hadoop YARN. It covers why Twill was created to simplify development on YARN, Twill's architecture and components, key features such as real-time logging and elastic scaling, real-world use in CDAP, and the Twill roadmap.
Building Large Scale Applications in YARN with Apache Twill
1. Build Large Scale Applications in YARN with Apache Twill
Henry Saputra (@Kingwulf) - hsaputra@apache.org
Terence Yim (@chtyim) - chtyim@apache.org
Apache Big Data Conference - North America - 2016
2. Disclaimer
Apache Twill is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the Apache Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF.
3. Agenda
● Why Apache Twill?
● Architecture and Components
● Features
● Real World Enterprise Use Cases - CDAP
● Roadmap
● Q & A
4. Apache Hadoop® YARN
● MapReduce NextGen, aka MRv2
● A new ResourceManager manages the global assignment of compute resources to applications
● Introduces the concept of a per-application ApplicationMaster that communicates with the ResourceManager for compute resource management
● http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/index.html
6. Developing Applications in YARN
● http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html
● It is actually not as “simple” as it sounds
● Lots of boilerplate code and a very steep learning curve
● Given the power and generic nature of YARN, developing applications directly on top of it can be very difficult
● Standard components:
○ Application Client
○ Application Master
○ Application
9. Apache Twill
● Adds simplicity to the power of YARN
○ Java thread-like programming model
○ Instead of running in multiple threads, the application runs in many containers on YARN
● Incubating at the Apache Software Foundation since November 2013
○ Has successfully produced seven releases
○ http://twill.incubator.apache.org/index.html
10. Hello World in Twill
import java.io.PrintWriter;

import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.twill.api.AbstractTwillRunnable;
import org.apache.twill.api.TwillController;
import org.apache.twill.api.TwillRunnerService;
import org.apache.twill.api.logging.PrinterLogHandler;
import org.apache.twill.yarn.YarnTwillRunnerService;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class HelloWorld {
  private static final Logger LOG = LoggerFactory.getLogger(HelloWorld.class);

  public static class HelloWorldRunnable extends AbstractTwillRunnable {
    @Override
    public void run() {
      LOG.info("Hello World. My first distributed application.");
    }
  }

  public static void main(String[] args) throws Exception {
    // Connect to YARN, using ZooKeeper at localhost:2181 for coordination.
    TwillRunnerService twillRunner = new YarnTwillRunnerService(new YarnConfiguration(), "localhost:2181");
    twillRunner.startAndWait();

    // Launch the runnable in a YARN container, echoing its logs to stdout.
    TwillController controller = twillRunner.prepare(new HelloWorldRunnable())
      .addLogHandler(new PrinterLogHandler(new PrintWriter(System.out, true)))
      .start();
    try {
      controller.awaitTermination();
    } catch (Exception ex) {
      LOG.error("Error while waiting for application termination", ex);
    }
  }
}
11. Why Apache Twill
● Apache Twill provides abstraction and virtualization on top of YARN, reducing the complexity of developing complex, distributed, large-scale applications
● Apache Twill lets developers leverage the power of YARN through a familiar programming paradigm
● Apache Twill covers the common needs of distributed large-scale application development:
○ Lifecycle management
○ Service discovery
○ Distributed coordination and resiliency to failures
○ Real-time logging
13. Main Apache Projects Used
● Apache Hadoop YARN
● Apache Hadoop HDFS
● Apache ZooKeeper
● Apache Kafka
14. Features - 1
Main Features
1. Real-time logging
2. Resource Report
3. State Recovery
4. Elastic Scaling
5. Command Messages
6. Service Discovery
15. Features - 2
Notable New Features
1. 0.5.0-incubating
a. Placement Policy APIs
b. Submitting to non default YARN Queue
c. Distributed Lock
2. 0.6.0-incubating
a. Restart instances of runnables for Twill applications
b. MapR Extension
c. Remove Guava Dependencies from client APIs
3. 0.7.0-incubating
a. Allow setting environment variable on Twill containers
b. Support for Azure Blob Storage
17. Resource Report - 1
/**
* This interface provides a snapshot of the resources an application is using
* broken down by each runnable.
*/
public interface ResourceReport {
// Get all the run resources being used by all instances of the specified runnable.
Collection<TwillRunResources> getRunnableResources(String runnableName);
// Get all the run resources being used across all runnables.
Map<String, Collection<TwillRunResources>> getResources();
// Get the resources application master is using.
TwillRunResources getAppMasterResources();
// Get the id of the application master.
String getApplicationId();
// Get the list of services of the application master.
List<String> getServices();
}
18. Resource Report - 2
/**
* Information about the container the {@link TwillRunnable}
* is running in.
*/
public interface TwillRunResources {
int getInstanceId();
int getVirtualCores();
int getMemoryMB();
String getHost();
String getContainerId();
Integer getDebugPort();
LogEntry.Level getLogLevel();
}
19. Resource Report - 3
● Clients obtain the resource report from Twill through the TwillController.getResourceReport API
public interface TwillController extends ServiceController {
...
/**
* Get a snapshot of the resources used by the application, broken down by each runnable.
*
* @return A {@link ResourceReport} containing information about resources used by the application or
* null in case the user calls this before the application completely starts.
*/
@Nullable
ResourceReport getResourceReport();
...
}
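As a hedged usage sketch (assuming a started TwillController named controller, as in the Hello World example, plus the usual java.util imports), the report can be walked to print per-instance resource usage:
ResourceReport report = controller.getResourceReport();
if (report != null) { // null until the application has fully started
  for (Map.Entry<String, Collection<TwillRunResources>> entry : report.getResources().entrySet()) {
    for (TwillRunResources resources : entry.getValue()) {
      // Print where each instance runs and what it consumes.
      System.out.printf("%s[%d] on %s: %d vcores, %d MB%n",
          entry.getKey(), resources.getInstanceId(), resources.getHost(),
          resources.getVirtualCores(), resources.getMemoryMB());
    }
  }
}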
22. Elastic Scaling
● Ability to add or reduce the number of YARN containers running the application
● The TwillController.changeInstances API is used to accomplish this (see the usage sketch below)
/**
* Changes the number of running instances of a given runnable.
*
* @param runnable The name of the runnable.
* @param newCount Number of instances for the given runnable.
* @return A {@link Future} that will be completed when the number running instances has been
* successfully changed. The future will carry the new count as the result. If there is any error
* while changing instances, it'll be reflected in the future.
*/
Future<Integer> changeInstances(String runnable, int newCount);
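A minimal sketch of the call (the runnable name is hypothetical; controller is a started TwillController as before):
// Scale the runnable to 5 instances; get() blocks until the resize completes,
// and any error during resizing surfaces as an ExecutionException.
Future<Integer> resize = controller.changeInstances("HelloWorldRunnable", 5);
int newCount = resize.get();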
24. Placement Policy API - 1 (New)
● Exposes the container placement policy of YARN
● Allows Twill to allocate containers on specific racks and hosts, based on the DISTRIBUTED placement type
25. Placement Policy API - 2 (New)
/**
* Defines a container placement policy.
*/
interface PlacementPolicy {
enum Type {
DISTRIBUTED, DEFAULT
}
Set<String> getNames();
Type getType();
Set<String> getHosts();
Set<String> getRacks();
}
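To make the contract concrete, here is an illustrative hand-rolled implementation of the interface above, pinning two hypothetical runnables to two hosts with the DISTRIBUTED type (in practice such policies are typically built through Twill's specification builder rather than by hand):
PlacementPolicy distributedPolicy = new PlacementPolicy() {
  @Override
  public Set<String> getNames() { // runnables this policy applies to
    return new HashSet<>(Arrays.asList("streamHandler", "txServer"));
  }
  @Override
  public Type getType() { // DISTRIBUTED: instances land on different hosts
    return Type.DISTRIBUTED;
  }
  @Override
  public Set<String> getHosts() {
    return new HashSet<>(Arrays.asList("host-1.example.com", "host-2.example.com"));
  }
  @Override
  public Set<String> getRacks() {
    return Collections.emptySet(); // no rack constraint
  }
};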
26. Restart Instances for Twill Runnables - 1 (New)
● Instances of a TwillRunnable run in YARN containers
● Each Twill application can have one or more instances of a TwillRunnable
● Twill provides the ability to restart particular runnable instances without affecting the other runnables
● This is useful when certain instances are misbehaving and need to be restarted by their instance identifiers
27. Restart Instances for Twill Runnables - 2 (New)
/**
* For controlling a running application.
*/
public interface TwillController extends ServiceController {
...
// Restart all instances of the given runnable.
Future<String> restartAllInstances(String runnable);
// Restart the given instance IDs of multiple runnables in one call.
Future<Set<String>> restartInstances(Map<String, ? extends Set<Integer>> runnableToInstanceIds);
// Restart specific instances of a single runnable.
Future<String> restartInstances(String runnable, int instanceId, int... moreInstanceIds);
...
}
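A brief usage sketch against the API above (runnable name hypothetical):
// Restart instances 0 and 2 of the "worker" runnable, leaving the rest untouched.
controller.restartInstances("worker", 0, 2).get();
// Or restart every instance of that runnable.
controller.restartAllInstances("worker").get();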
28. Setting Environment Variables on Containers (New)
● Provides the ability to set environment variables on the YARN containers where TwillRunnable instances are running
/**
* This interface exposes methods to set up the Twill runtime environment and start a Twill application.
*/
public interface TwillPreparer {
...
// Adds the set of environment variables that will be set as container environment variables for all runnables.
TwillPreparer withEnv(Map<String, String> env);
/**
* Adds the set of environment variables that will be set as container environment variables for the given runnable.
* Environment variables set through this method have higher precedence than the ones set through {@link #withEnv(Map)}
* if there is a key clash.
*/
TwillPreparer withEnv(String runnableName, Map<String, String> env);
...
}
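A short sketch exercising both overloads (variable names hypothetical); per the javadoc above, the per-runnable call wins on key clashes:
Map<String, String> env = new HashMap<>();
env.put("APP_MODE", "production"); // applied to all runnables
TwillController controller = twillRunner.prepare(new HelloWorldRunnable())
  .withEnv(env)
  // Override APP_MODE for this one runnable only.
  .withEnv("HelloWorldRunnable", Collections.singletonMap("APP_MODE", "debug"))
  .start();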
29. Real World Enterprise Usage - CDAP
● Cask Data Application Platform (CDAP) - http://cdap.io
○ Open source data application framework
○ Simplifies and enhances data application development and management
■ APIs for simplification, portability and standardization
● Works across a wide range of Hadoop versions and all common distros
■ Built-in system services, such as metrics and log aggregation, dataset management, and a distributed transaction service, for common big data application needs
○ Extensions to enhance the user experience
■ Hydrator - interactive data pipeline construction
■ Tracker - metadata discovery and data lineage
31. Apache Twill in CDAP
● CDAP runs different types of work on YARN
○ Long-running daemons
○ Real-time transactional streaming framework
○ REST services
○ Workflow execution
● CDAP only interacts with Twill
○ Greatly simplifies the CDAP code base
○ Just a matter of minutes to add support for a new type of work to run on YARN
○ Native support for common needs
■ Service discovery
■ Leader election and distributed locking
■ Elastic scaling
■ Security
33. Service Discovery
● CDAP exposes all functionality through REST
● Almost all CDAP HTTP services run in YARN
○ No fixed host and port
○ Bind to an ephemeral port
○ Announce the host and port through Twill (see the sketch below)
■ Unique service name for a given service type
● The router inspects the request URI to derive a service name
○ Uses the Twill discovery client to locate the actual host and port
○ Proxies the request and response
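A hedged sketch of the Twill primitives underneath this flow (announce on TwillContext and discoverService on TwillController, per the org.apache.twill API; exact signatures may vary by version):
// Inside a TwillRunnable: announce this instance under a service name.
Cancellable cancellable = getContext().announce("hello-http", boundPort);

// From a client: resolve the live instances of that service.
ServiceDiscovered discovered = controller.discoverService("hello-http");
for (Discoverable d : discovered) {
  System.out.println("Instance at " + d.getSocketAddress());
}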
34. Long Running Applications
● All CDAP services on YARN are long running
○ Transaction server, metrics and log processing, real-time data ingestion, …
● Many user applications are long running too
○ Real-time streaming, HTTP services, application daemons
● Not too big of a problem on a non-secure cluster
○ Logs not collected, log files may get too big, …
■ Twill's built-in log collection can help
● Secure clusters, specifically Kerberos-enabled clusters, are harder
○ All Hadoop services use delegation tokens
■ NN, RM, HBase Master, Hive, KMS, ...
○ YARN containers don't have the keytab (and should not), hence can't renew the tokens
35. Long Running Applications in Twill
● Twill provides native support for updating delegation tokens
○ TwillRunner.scheduleSecureStoreUpdate (see the sketch below)
● Delegation tokens are updated from the launcher process (kinit process)
○ Acquires new delegation tokens periodically
○ Serializes the tokens to HDFS
○ Notifies all running applications about the update
■ Through command messages
○ Each runnable refreshes its delegation tokens by reading from HDFS
■ Requires a non-expired HDFS delegation token
● A new launcher process will discover all Twill apps from ZK
○ Can run HA launcher processes using Twill's leader election support
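A heavily hedged sketch of that update hook (SecureStoreUpdater and YarnSecureStore are in the Twill YARN API, though signatures may differ by version; fetchNewCredentials() is a hypothetical stand-in for your own token acquisition):
twillRunner.scheduleSecureStoreUpdate(new SecureStoreUpdater() {
  @Override
  public SecureStore update(String application, RunId runId) {
    // Acquire fresh delegation tokens and hand them to Twill for distribution.
    return YarnSecureStore.create(fetchNewCredentials());
  }
}, 30L, 30L, TimeUnit.MINUTES); // initial delay, repeat period, time unit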
36. Scalability
● Many components in CDAP are linearly scalable, such as
○ Streaming data ingestion (through a REST endpoint)
○ Log processing
■ Reads from Kafka, writes to HDFS
○ Metrics processing
■ Reads from Kafka, writes to a timeseries table
○ User real-time streaming DAGs
○ User HTTP services
● Twill supports adding/reducing YARN containers for a given TwillRunnable
○ No need to restart the application
○ Guarantees a unique instance ID is assigned
■ Applications can use it for partitioning, as sketched below
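For example, a runnable can derive its partition from the instance ID and instance count exposed on TwillContext (a sketch; partitionOf and process are hypothetical helpers):
// Inside a TwillRunnable: consume only the keys that hash to this instance.
int instanceId = getContext().getInstanceId();
int instanceCount = getContext().getInstanceCount();
if (partitionOf(key) % instanceCount == instanceId) {
  process(key);
}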
37. High Availability
● In a production environment, it is important to have high availability
● Twill provides a couple of means to achieve that
○ Running multiple instances of the same TwillRunnable
○ Using dynamic service discovery to route requests
○ Automatic restart of a TwillRunnable container if it gets killed or exits abnormally
■ A killed container is removed from service discovery
■ A restarted container is added back to service discovery
○ Built-in leader election support for active-passive redundancy (sketched below)
■ The Tephra service uses this, as it requires having only one active server
○ Built-in distributed locks to help with synchronization
■ Synchronize when there are configuration changes among TwillRunnable instances
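A hedged sketch of the active-passive pattern via leader election (TwillContext.electLeader with an ElectionHandler, per the org.apache.twill.api javadocs; startServing/stopServing are hypothetical):
getContext().electLeader("tephra-leader", new ElectionHandler() {
  @Override
  public void leader() {
    startServing(); // this instance just became the active one
  }
  @Override
  public void follower() {
    stopServing(); // this instance is now passive
  }
});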
38. Placement Policy
● CDAP can run multiple instances of a given service type
○ Scalability
○ Redundancy for availability
● YARN doesn't expect applications to care where containers run
○ Can provide location hints, but they are not guaranteed
○ Depends on the YARN scheduler
● CDAP uses Twill to control container placement
○ Different instances of the same TwillRunnable run on different hosts
○ Certain TwillRunnables cannot run on the same host
■ Stream handler, Tephra transaction server
● Both are heavily CPU and I/O bound
39. Performance and Load Testing
● We perform load testing of CDAP components
○ Real-time stream ingestion handler
○ Tephra transaction server
● A scalable load testing framework written using Twill
○ Multiple REST clients in each TwillRunnable
■ One client per thread
○ Can gradually increase the number of threads as well as the number of containers
■ Use command messages to increase threads
■ Use the elastic scaling API to increase the number of containers
○ Collects metrics through log messages
■ Uses the built-in log collection support
40. Apache Twill in the Enterprise
● CDAP, which uses Twill, is being used by large enterprises in production
● Apache Twill is a proven framework
○ Has been running on many different cluster configurations
■ AWS, Azure, bare metal, VMs
● Compatible with a wide range of Hadoop versions
○ Vanilla Hadoop 2.0 - 2.7
○ HDP 2.1 - 2.3
○ CDH 5
○ MapR 4.1 - 5.1
41. Roadmap
● Expose newly added YARN features
● Smarter container management
○ Run simple runnables in the AM
○ Multiple runnables in one container
● Speed up application launch time
● Fine-grained control of the container lifecycle
○ When to start, stop, and restart on failure
● Simpler application launching with better classloader isolation
● Smaller footprint
○ Optional Kafka, optional ZooKeeper
● Generalize to run on more frameworks
○ Apache Mesos, Kubernetes
42. Thank you!
● Twill is Open Source and needs your contributions
○ http://twill.incubator.apache.org
○ dev@twill.incubator.apache.org
○ @ApacheTwill
● Contributions are welcome!