CloudStream is a fully managed service on Huawei Cloud. It supports features such as on-demand billing, an easy-to-use online editor for Stream SQL, real-time testing of Stream SQL, multi-tenancy, and security isolation. We chose Apache Flink as the stream compute platform; inside a CloudStream cluster, Flink jobs can run on YARN, Mesos, or Kubernetes. We have also extended Apache Flink to meet the needs of IoT scenarios, and we run dedicated Flink reliability tests in cooperation with universities. Finally, we continuously improve the infrastructure around CloudStream, including related open source projects and cloud services. CloudStream differs from other real-time analytics cloud services, and this talk also shares its architecture, principles, and development process.
In this presentation from CA World 2017 you will learn how Asurion manages their large and growing portfolio of APIs that support clients, partners and millions of customers. The overhead of managing and communicating these APIs to various groups has become cumbersome and slow as the number of APIs has increased. To eliminate this overhead, Asurion uses CA API Management to enable API developers to self-publish their APIs out to the rest of the company. This new self-service portal will also allow application developers to learn about and gain access to the APIs without having to request access through an administrative team.
To learn more visit: http://ow.ly/VdNI50fzJyt
APIs are the lynchpin to the success of your digital business. Explore how you can effectively design, secure, monitor and manage APIs across the enterprise.
Infrastructure-as-Code with Pulumi - Better than all the others (like Ansible)? – Jonas Hecht
There's a new Infrastructure-as-Code (IaC) kid on the block: Pulumi is here to frighten the established tools: Chef, Puppet, Terraform, CloudFormation, Ansible... But is it really the "better" tool, and how can they be compared? Is it only hype-driven? We'll find out, including lots of example code. (ContainerConf / Continuous Lifecycle 2019 talk in Mannheim)
Example GitHub code: https://github.com/jonashackt/pulumi-python-aws-ansible
https://github.com/jonashackt/pulumi-typescript-aws-fargate
Zero downtime deployment of micro-services with Kubernetes – Wojciech Barczyński
A talk on deployment strategies with Kubernetes, covering the Kubernetes configuration files and the actual implementation of your service in Golang and .NET Core.
You will find demos for recreate, rolling updates, blue-green, and canary deployments.
Source and demos can be found on GitHub: https://github.com/wojciech12/talk_zero_downtime_deployment_with_kubernetes
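The zero-downtime strategies above all hinge on one decision: when to promote a canary and when to roll back. A minimal, stdlib-only Python sketch of that decision loop (the function name, weights, and error budget here are hypothetical illustrations, not from the talk or its demos):

```python
import random

def canary_rollout(total_requests, canary_weight, canary_error_rate,
                   error_budget=0.01, rng=None):
    """Route a fraction of traffic to the canary and decide whether to
    promote or roll back based on the observed error rate (illustrative)."""
    rng = rng or random.Random(42)        # fixed seed for reproducibility
    canary_hits, canary_errors = 0, 0
    for _ in range(total_requests):
        if rng.random() < canary_weight:          # weighted traffic routing
            canary_hits += 1
            if rng.random() < canary_error_rate:  # simulated canary failures
                canary_errors += 1
    observed = canary_errors / canary_hits if canary_hits else 0.0
    return "promote" if observed <= error_budget else "rollback"
```

In a real Kubernetes setup the routing step would be done by the service mesh or ingress weights rather than in application code; the sketch only shows the promote/rollback logic.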
MLOps (a compound of “machine learning” and “operations”) is a practice for collaboration and communication between data scientists and operations professionals to help manage the production machine learning lifecycle. Similar to the DevOps term in the software development world, MLOps looks to increase automation and improve the quality of production ML while also focusing on business and regulatory requirements. MLOps applies to the entire ML lifecycle - from integrating with model generation (software development lifecycle, continuous integration/continuous delivery), orchestration, and deployment, to health, diagnostics, governance, and business metrics.
To watch the full presentation click here: https://info.cnvrg.io/mlopsformachinelearning
In this webinar, we’ll discuss core practices in MLOps that will help data science teams scale to the enterprise level. You’ll learn the primary functions of MLOps, and which tasks are suggested to accelerate your team's machine learning pipeline. Join us in a discussion with cnvrg.io Solutions Architect, Aaron Schneider, and learn how teams use MLOps for more productive machine learning workflows.
- Reduce friction between science and engineering
- Deploy your models to production faster
- Health, diagnostics and governance of ML models
- Kubernetes as a core platform for MLOps
- Support advanced use-cases like continual learning with MLOps
Automated API testing is becoming a crucial part of most projects. This whitepaper provides insight into how API automation with REST Assured is the way forward in API testing.
IBM API Connect is a comprehensive API solution: an integrated creation, runtime, management, and security foundation for enterprise-grade APIs and microservices that power modern digital applications.
In this webinar:
- API Management concepts
- IBM API Connect overview and features
- Kellton Tech’s API strategy with IBM API Connect
Technology: IBM API Connect 5.0
GitHub Copilot vs Amazon CodeWhisperer for Java developers at JCON 2023 – Vadym Kazulkin
In this talk I compare two services, GitHub Copilot (including Copilot X) and Amazon CodeWhisperer, from the perspective of Java developers: the quality of their recommendations for simple tasks, complex algorithms, Spring Boot and AWS development, IDE integration, and pricing.
Both are machine-learning-powered services that improve developer productivity by generating code recommendations based on developers’ natural-language comments and their code. From natural-language comments, these services can also recommend unit test code that matches your implementation.
Hear from the product team about Apigee's key products and technology. Learn how customers use Apigee to grow reach with mobile apps, accelerate development and create new products through APIs, and gain end-to-end visibility into business and operations by analyzing 360 degrees of information.
From Functions-as-a-Service to Backend-as-a-Service, even Big Data-as-a-Service, Serverless is taking many different shapes. Learn what these mean and how Google Cloud Platform is building technology to make sure there's nothing standing between you and running your code. You'll see live demos of integration between Firebase, Cloud Functions, Cloud Pub/Sub (and even machine learning) to build autoscaling apps in record time - all without managing servers or application runtimes.
Bret is on the Google Cloud Platform team at Google, focusing on serverless products like Google Cloud Functions, App Engine, Firebase, machine learning APIs, and more. He's often on the running trail, volleyball court or kickball field.
YouTube Link - https://youtu.be/CwLrdjgsJjU
** Selenium Certification Training
https://www.edureka.co/testing-with-selenium-webdriver **
This Edureka PPT on "Test Automation using Python" provides detailed and comprehensive knowledge of Selenium fundamentals. It also guides you through Python concepts and how to locate elements in Selenium using Python. This PPT covers the following topics:
Introduction to Selenium
Why Python for Automation Testing?
Selenium and Python Binding
PyCharm for Python
Locators in Selenium
Demo - Automating Hotstar website
Selenium playlist: https://goo.gl/NmuzXE
Selenium Blog playlist: http://bit.ly/2B7C3QR
Software Testing Blog playlist: http://bit.ly/2UXwdJm
Are your APIs becoming too complicated and ad hoc? Feeling the need to set up policies for your API? This presentation will give you strategy options for designing and developing your APIs.
An overview of Confluence and its features and how it is useful for enterprises. Updated with the new social features in Confluence 3.0 and SharePoint integration.
Video: https://data-artisans.com/flink-forward-berlin/resources/monitoring-flink-with-prometheus
Live Demo Code: https://github.com/mbode/flink-prometheus-example
Prometheus is a cloud-native monitoring system prioritizing reliability and simplicity – and Flink works really well with it! This session will show you how to leverage the Flink metrics system together with Prometheus to improve the observability of your jobs. There will be a live demo establishing how everything ties in together. The talk is aimed at people already building and running Flink jobs who would like to gain more insight into them. It is fine if you are not familiar with Prometheus yet, as the basic concepts will be introduced. If you have ever wondered how you could use modern monitoring tools to be alerted in the middle of the night in case your Flink job‘s 99th percentile end-to-end latency degraded for some reason, this might just be the talk you are looking for.
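To make the latency-alerting scenario concrete, here is a small stdlib-only Python sketch of the two pieces involved: computing a 99th-percentile latency from samples, and rendering it in the Prometheus text exposition format (in a real job, Flink's metrics reporter would do the exposing; the metric name and labels below are invented for illustration):

```python
def quantile(samples, q):
    """Nearest-rank quantile (0 < q <= 1) over a list of latency samples."""
    s = sorted(samples)
    idx = max(0, int(round(q * len(s))) - 1)
    return s[idx]

def to_prometheus(metric, value, labels=None):
    """Render one sample in the Prometheus text exposition format."""
    label_str = "" if not labels else "{" + ",".join(
        f'{k}="{v}"' for k, v in sorted(labels.items())) + "}"
    return f"{metric}{label_str} {value}"

latencies_ms = [12, 15, 14, 13, 500, 16, 18, 11, 17, 19]
p99 = quantile(latencies_ms, 0.99)                      # outlier dominates p99
line = to_prometheus("flink_job_latency_p99_ms", p99, {"job": "demo"})
```

An alerting rule in Prometheus would then fire whenever this series exceeds a chosen threshold.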
Near real-time anomaly detection at Lyft – Mark Grover
Near real-time anomaly detection at Lyft, by Mark Grover and Thomas Weise at Strata NY 2018.
https://conferences.oreilly.com/strata/strata-ny/public/schedule/detail/69155
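The near-real-time anomaly detection idea can be illustrated with the simplest possible detector, a trailing-window z-score; this sketch is an illustration of the concept, not Lyft's actual model:

```python
from statistics import mean, stdev

def zscore_anomalies(series, window=5, threshold=3.0):
    """Flag indices whose z-score against the trailing window exceeds
    threshold; the window sees only past points, so it works on a stream."""
    anomalies = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies
```

A streaming version would maintain the window incrementally per key instead of slicing a list, but the scoring logic is the same.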
This session takes an in-depth look at:
- Trends in stream processing
- How streaming SQL has become a standard
- The advantages of Streaming SQL
- Ease of development with streaming SQL: Graphical and Streaming SQL query editors
- Business value of streaming SQL and its related tools: Domain-specific UIs
- Scalable deployment of streaming SQL: Distributed processing
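What a streaming-SQL windowed aggregation computes can be mimicked in a few lines of plain Python; this tumbling-window count is a conceptual stand-in for a `GROUP BY TUMBLE(...)` query (function and field names are illustrative, not from the session):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_ms):
    """Group (timestamp_ms, key) events into fixed-size windows and count
    per key, mirroring a streaming-SQL tumbling-window aggregation."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_ms) * window_ms  # align to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)
```

A real streaming engine additionally handles out-of-order events and emits results as windows close; the batch-style sketch only shows the grouping semantics.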
Advanced Stream Processing with Flink and Pulsar – Pulsar Summit NA 2021 Keynote (StreamNative)
In this talk, Till Rohrmann and Addison Higham discuss how Flink allows for ambitious stream processing workflows and how Pulsar and Flink enable new capabilities that push forward the state-of-the-art in streaming. They will also share upcoming features and new capabilities in the integrations between Flink and Pulsar and how these two communities are working together to truly advance the power of stream processing.
Hybrid workshop: Stream Processing with Flink – Confluent
Stream processing is a prerequisite of the data streaming stack, powering real-time applications and pipelines.
It enables greater data portability, optimized resource utilization, and a better customer experience by processing data streams in real time.
In our hands-on hybrid workshop, you will learn how to easily filter, join, and enrich real-time data within Confluent Cloud using our serverless Flink service.
Devoxx 2018 - Pivotal and AxonIQ - Quickstart your event driven architecture – Ben Wilcock
Devoxx Belgium 2018 - Let Pivotal and AxonIQ take you on a journey into Axon Trader. Axon Trader is a new open-source reference architecture that demonstrates how to use Spring, Axon and Pivotal Cloud Foundry to deliver evolutionary microservice applications to production in minutes.
The accompanying YouTube video can be found here: https://www.youtube.com/watch?v=15hzHUI4WNA
Delivering the power of data using Spring Cloud DataFlow and DataStax Enterpr... – VMware Tanzu
SpringOne Platform 2017
Gilbert Lau, DataStax; Wayne Lund, Pivotal
"Spring Cloud Data Flow satisfies all of the demands of modern streaming and task workloads. A growing number of customers are viewing Pivotal Cloud Foundry as an ideal runtime for these types of workloads to take advantage of all of the microservice architecture features of Spring Boot apps leveraging Spring Cloud Services. This is only half of the equation. Once the streaming data is persisted in their database, our customers want to generate actionable insights to provide the best customer experience and stay on top of the competitive marketplace. DataStax Enterprise (DSE) is a single, unified big data platform with the Apache Cassandra NoSQL database at its core. Integrated within each node of DSE are powerful indexing, search through Apache Solr, analytics through Apache Spark, and enterprise-ready graph functionality. It is by far the only operational data platform which can scale linearly in excess of 1,000 nodes, with no single point of failure, and is capable of providing real-time active-everywhere replication across many datacenters and cloud providers.
In this presentation and demo we will take a common social data set and show SCDF advantages on PCF for microservice scaling and pipelining data into a DataStax Enterprise Cassandra NoSQL database. Then followed by extracting meaningful information through DataStax Enterprise Search, DataStax Enterprise Analytics, and DataStax Cassandra Service Broker Tile for PCF using a Spring Boot Dashboard application."
Cloud Experience: Data-driven Applications Made Simple and Fast – Databricks
A complex real-time data workflow implementation is very challenging. This session describes the architecture of a data platform that provides a single, secure, high-performance system that can be deployed in hybrid cloud architectures. We will present how to support simultaneous, consistent, high-performance access through multiple industry open source and cloud-compatible standards of streaming, table, TSDB, object, and file APIs. A new serverless technology is also used in the architecture to support dynamic and flexible implementations. The presenter will also outline how the platform was integrated with the Spark ecosystem, including AI and ML tools, to simplify the development process.
Building a fully managed stream processing platform on Flink at scale for Lin... – Flink Forward
Apache Flink is a distributed stream processing framework that allows users to process and analyze data in real-time. At LinkedIn, we developed a fully managed stream processing platform on Flink running on K8s to power hundreds of stream processing pipelines in production. This platform is the backbone for other infra systems like Search, Espresso (internal document store) and feature management etc. We provide a rich authoring and testing environment which allows users to create, test, and deploy their streaming jobs in a self-serve fashion within minutes. Users can focus on their business logic, leaving the Flink platform to take care of management aspects such as split deployment, resource provisioning, auto-scaling, job monitoring, alerting, failure recovery and much more. In this talk, we will introduce the overall platform architecture, highlight the unique value propositions that it brings to stream processing at LinkedIn and share the experiences and lessons we have learned.
Evening out the uneven: dealing with skew in Flink – Flink Forward
Flink Forward San Francisco 2022.
When running Flink jobs, skew is a common problem that results in wasted resources and limited scalability. In the past years, we have helped our customers and users solve various skew-related issues in their Flink jobs or clusters. In this talk, we will present the different types of skew that users often run into: data skew, key skew, event time skew, state skew, and scheduling skew, and discuss solutions for each of them. We hope this will serve as a guideline to help you reduce skew in your Flink environment.
by
Jun Qin & Karl Friedrich
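Of the skew types listed, key skew is the easiest to spot ahead of time: a quick frequency check over a sample of your keys flags hot keys before they hit a Flink `keyBy()`. This stdlib-only sketch (the threshold and function name are arbitrary illustrations, not from the talk) does exactly that:

```python
from collections import Counter

def hot_keys(keys, skew_factor=3.0):
    """Return keys whose frequency exceeds skew_factor times the mean
    key frequency, a cheap heuristic for spotting key skew in a sample."""
    counts = Counter(keys)
    mean_freq = sum(counts.values()) / len(counts)
    return sorted(k for k, c in counts.items() if c > skew_factor * mean_freq)
```

Once hot keys are known, common mitigations include salting the key or pre-aggregating before the keyed shuffle.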
“Alexa, be quiet!”: End-to-end near-real time model building and evaluation i... – Flink Forward
Flink Forward San Francisco 2022.
To improve Amazon Alexa experiences and support machine learning inference at scale, we built an automated end-to-end solution for incremental model building or fine-tuning machine learning models through continuous learning, continual learning, and/or semi-supervised active learning. Customer privacy is our top concern at Alexa, and as we build solutions, we face unique challenges when operating at scale such as supporting multiple applications with tens of thousands of transactions per second with several dependencies including near-real time inference endpoints at low latencies. Apache Flink helps us transform and discover metrics in near-real time in our solution. In this talk, we will cover the challenges that we faced, how we scale the infrastructure to meet the needs of ML teams across Alexa, and go into how we enable specific use cases that use Apache Flink on Amazon Kinesis Data Analytics to improve Alexa experiences to delight our customers while preserving their privacy.
by
Aansh Shah
Introducing BinarySortedMultiMap - A new Flink state primitive to boost your ... – Flink Forward
Flink Forward San Francisco 2022.
Probably everyone who has written stateful Apache Flink applications has used one of the fault-tolerant keyed state primitives ValueState, ListState, and MapState. With RocksDB, however, retrieving and updating items comes at an increased cost that you should be aware of. Sometimes, these costs may not be avoidable with the current API, e.g., for efficient event-time stream-sorting or streaming joins where you need to iterate one or two buffered streams in the right order. With FLIP-220, we are introducing a new state primitive: BinarySortedMultiMapState. This new form of state allows you to (a) efficiently store lists of values for a user-provided key, and (b) iterate keyed state in a well-defined sort order. Both features can be backed efficiently by RocksDB with a 2x performance improvement over the current workarounds. This talk will go into the details of the new API and its implementation, present how to use it in your application, and talk about the process of getting it into Flink.
by
Nico Kruber
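The core idea behind the proposed primitive, per-key value lists that can be iterated in key order, can be sketched in plain Python. This is an in-memory toy to show the contract, not the RocksDB-backed Flink implementation:

```python
import bisect

class SortedMultiMap:
    """Minimal in-memory analogue of the proposed BinarySortedMultiMapState:
    lists of values per key, iterable in ascending key order."""
    def __init__(self):
        self._keys = []     # kept sorted at all times
        self._values = {}   # key -> list of values

    def add(self, key, value):
        if key not in self._values:
            bisect.insort(self._keys, key)   # O(n) insert keeps keys sorted
            self._values[key] = []
        self._values[key].append(value)

    def items(self):
        """Yield (key, values) pairs in well-defined ascending key order."""
        for key in self._keys:
            yield key, self._values[key]
```

RocksDB gets the sorted iteration essentially for free from its sorted key layout, which is why the state backend can support this pattern efficiently.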
Introducing the Apache Flink Kubernetes Operator – Flink Forward
Flink Forward San Francisco 2022.
The Apache Flink Kubernetes Operator provides a consistent approach to manage Flink applications automatically, without any human interaction, by extending the Kubernetes API. Given the increasing adoption of Kubernetes based Flink deployments the community has been working on a Kubernetes native solution as part of Flink that can benefit from the rich experience of community members and ultimately make Flink easier to adopt. In this talk we give a technical introduction to the Flink Kubernetes Operator and demonstrate the core features and use-cases through in-depth examples.
by
Thomas Weise
Flink Forward San Francisco 2022.
Resource Elasticity is a frequently requested feature in Apache Flink: Users want to be able to easily adjust their clusters to changing workloads for resource efficiency and cost saving reasons. In Flink 1.13, the initial implementation of Reactive Mode was introduced, later releases added more improvements to make the feature production ready. In this talk, we’ll explain scenarios to deploy Reactive Mode to various environments to achieve autoscaling and resource elasticity. We’ll discuss the constraints to consider when planning to use this feature, and also potential improvements from the Flink roadmap. For those interested in the internals of Flink, we’ll also briefly explain how the feature is implemented, and if time permits, conclude with a short demo.
by
Robert Metzger
Dynamically Scaling Data Streams across Multiple Kafka Clusters with Zero Fli... – Flink Forward
Flink Forward San Francisco 2022.
Flink consumers read from Kafka as a scalable, high throughput, and low latency data source. However, there are challenges in scaling out data streams where migration and multiple Kafka clusters are required. Thus, we introduced a new Kafka source to read sharded data across multiple Kafka clusters in a way that conforms well with elastic, dynamic, and reliable infrastructure. In this presentation, we will present the source design and how the solution increases application availability while reducing maintenance toil. Furthermore, we will describe how we extended the existing KafkaSource to provide mechanisms to read logical streams located on multiple clusters, to dynamically adapt to infrastructure changes, and to perform transparent cluster migrations and failover.
by
Mason Chen
One sink to rule them all: Introducing the new Async Sink – Flink Forward
Flink Forward San Francisco 2022.
Next time you want to integrate with a new destination for a demo, concept or production application, the Async Sink framework will bootstrap development, allowing you to move quickly without compromise. In Flink 1.15 we introduced the Async Sink base (FLIP-171), with the goal to encapsulate common logic and allow developers to focus on the key integration code. The new framework handles things like request batching, buffering records, applying backpressure, retry strategies, and at least once semantics. It allows you to focus on your business logic, rather than spending time integrating with your downstream consumers. During the session we will dive deep into the internals to uncover how it works, why it was designed this way, and how to use it. We will code up a new sink from scratch and demonstrate how to quickly push data to a destination. At the end of this talk you will be ready to start implementing your own Flink sink using the new Async Sink framework.
by
Steffen Hausmann & Danny Cranmer
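The buffering, batching, and retry behavior the Async Sink base encapsulates can be sketched in a few lines of Python. This toy is not Flink's actual FLIP-171 code, just the pattern; class and parameter names are invented:

```python
import time

class BatchingSink:
    """Toy version of the Async Sink pattern: buffer records, flush in
    batches, and retry failed batches with a simple backoff."""
    def __init__(self, send, batch_size=3, max_retries=2, backoff_s=0.0):
        self.send = send              # user-supplied call to the destination
        self.batch_size = batch_size
        self.max_retries = max_retries
        self.backoff_s = backoff_s
        self.buffer = []

    def write(self, record):
        self.buffer.append(record)
        if len(self.buffer) >= self.batch_size:  # full buffer applies backpressure
            self.flush()

    def flush(self):
        batch, self.buffer = self.buffer, []
        for attempt in range(self.max_retries + 1):
            try:
                self.send(batch)
                return
            except IOError:
                if attempt == self.max_retries:
                    raise               # at-least-once: surface the failure
                time.sleep(self.backoff_s)
```

In the real framework the destination call is asynchronous and the retry/backpressure policies are pluggable; the sketch shows why a sink author only needs to supply the `send` step.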
Tuning Apache Kafka Connectors for Flink – Flink Forward
Flink Forward San Francisco 2022.
In normal situations, the default Kafka consumer and producer configuration options work well. But we all know life is not all roses and rainbows, and in this session we’ll explore a few knobs that can save the day in atypical scenarios. First, we'll take a detailed look at the parameters available when reading from Kafka. We’ll inspect the parameters that help us quickly spot an application lock or crash, the ones that can significantly improve performance, and the ones to touch with gloves since they could cause more harm than benefit. Moreover, we’ll explore the partitioning options and discuss when diverging from the default strategy is needed. Next, we’ll discuss the Kafka sink. After browsing the available options, we'll dive deep into how to approach use cases like sinking enormous records, managing spikes, and handling small but frequent updates. If you want to understand how to make your application survive when the sky is dark, this session is for you!
by
Olena Babenko
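One of the partitioning knobs discussed is how record keys map to partitions: Kafka's default partitioner hashes the key (murmur2) so that equal keys always land on the same partition, which is what preserves per-key ordering. A stand-in sketch using md5 (deliberately not Kafka's actual hash) shows the contract:

```python
import hashlib

def partition_for(key, num_partitions):
    """Stable key -> partition mapping (md5-based stand-in for Kafka's
    murmur2 default partitioner): equal keys always map to one partition."""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions
```

This also illustrates why increasing the partition count later reshuffles keys: the modulus changes, so the same key can land on a different partition.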
Flink powered stream processing platform at Pinterest – Flink Forward
Flink Forward San Francisco 2022.
Pinterest is a visual discovery engine that serves over 433MM users. Stream processing allows us to unlock value from realtime data for pinners. At Pinterest, we adopted Flink as the unified stream processing engine. In this talk, we will share our journey in building a stream processing platform with Flink and how we onboarded critical use cases onto the platform. Pinterest now supports 90+ near-realtime streaming applications. We will cover the problem statement, how we evaluated potential solutions, and our decision to build the framework.
by
Rainie Li & Kanchi Masalia
Flink Forward San Francisco 2022.
This talk will take you on the long journey of Apache Flink into the cloud-native era. It started all the way from where Hadoop and YARN were the standard way of deploying and operating data applications.
We're going to deep dive into the cloud-native set of principles and how they map to the Apache Flink internals and recent improvements. We'll cover fast checkpointing, fault tolerance, resource elasticity, minimal infrastructure dependencies, industry-standard tooling, ease of deployment and declarative APIs.
After this talk you'll get a broader understanding of the operational requirements for a modern streaming application and where the current limits are.
by
David Moravek
Where is my bottleneck? Performance troubleshooting in Flink – Flink Forward
Flink Forward San Francisco 2022.
In this talk, we will cover various topics around performance issues that can arise when running a Flink job and how to troubleshoot them. We’ll start with the basics, like understanding what the job is doing and what backpressure is. Next, we will see how to identify bottlenecks and which tools or metrics can be helpful in the process. Finally, we will also discuss potential performance issues during the checkpointing or recovery process, as well as some tips and Flink features that can speed up checkpointing and recovery times.
by
Piotr Nowojski
Using the New Apache Flink Kubernetes Operator in a Production Deployment – Flink Forward
Flink Forward San Francisco 2022.
Running natively on Kubernetes, using the new Apache Flink Kubernetes Operator is a great way to deploy and manage Flink application and session deployments. In this presentation, we provide:
- A brief overview of Kubernetes operators and their benefits
- An introduction to the five levels of the operator maturity model
- An introduction to the newly released Apache Flink Kubernetes Operator and FlinkDeployment CRs
- Dockerfile modifications you can make to swap out UBI images and Java of the underlying Flink Operator container
- Enhancements we're making in versioning/upgradeability/stability and security
- A demo of the Apache Flink Operator in action, with a technical preview of an upcoming product using the Flink Kubernetes Operator
- Lessons learned
- Q&A
by
James Busche & Ted Chang
Flink Forward San Francisco 2022.
The Table API is one of the most actively developed components of Flink in recent times. Inspired by databases and SQL, it encapsulates concepts many developers are familiar with. It can be used with both bounded and unbounded streams in a unified way. But from afar it can be difficult to keep track of what this API is capable of and how it relates to Flink's other APIs. In this talk, we will explore the current state of Table API. We will show how it can be used as a batch processor, a changelog processor, or a streaming ETL tool with many built-in functions and operators for deduplicating, joining, and aggregating data. By comparing it to the DataStream API we will highlight differences and elaborate on when to use which API. We will demonstrate hybrid pipelines in which both APIs interact with one another and contribute their unique strengths. Finally, we will take a look at some of the most recent additions as a first step to stateful upgrades.
by
David Anderson
Flink Forward San Francisco 2022.
Based on the new Flink-Pulsar connector, we implemented Flink's TableAPI and Catalog to help users to interact with the Pulsar cluster via Flink SQL easily. We would like to go through the design and implementation of the SQL connector in the following aspects:
1. Two different modes of using Pulsar as a metadata store
2. Data format transformation and management
3. SQL semantics support within Pulsar context
by
Sijie Guo & Neng Lu
Dynamic Rule-based Real-time Market Data Alerts (Flink Forward)
Flink Forward San Francisco 2022.
At Bloomberg, we deal with high volumes of real-time market data. Our clients expect to be notified of any anomalies in this market data, which may indicate volatile movements in the markets, notable trades, forthcoming events, or system failures. The parameters for these alerts are always evolving and our clients can update them dynamically. In this talk, we'll cover how we utilized the open source Apache Flink and Siddhi SQL projects to build a distributed, scalable, low-latency and dynamic rule-based, real-time alerting system to solve our clients' needs. We'll also cover the lessons we learned along our journey.
by
Ajay Vyasapeetam & Madhuri Jain
Exactly-Once Financial Data Processing at Scale with Flink and Pinot (Flink Forward)
Flink Forward San Francisco 2022.
At Stripe we have created a complete end to end exactly-once processing pipeline to process financial data at scale, by combining the exactly-once power from Flink, Kafka, and Pinot together. The pipeline provides exactly-once guarantee, end-to-end latency within a minute, deduplication against hundreds of billions of keys, and sub-second query latency against the whole dataset with trillion level rows. In this session we will discuss the technical challenges of designing, optimizing, and operating the whole pipeline, including Flink, Kafka, and Pinot. We will also share our lessons learned and the benefits gained from exactly-once processing.
by
Xiang Zhang & Pratyush Sharma & Xiaoman Dong
Processing Semantically-Ordered Streams in Financial Services (Flink Forward)
Flink Forward San Francisco 2022.
What if my data is already in order? Stream Processing has given us an elegant and powerful solution for running analytic queries and logic over high volumes of continuously arriving data. However, in both Apache Flink and Apache Beam, the notion of time-ordering is baked in at a very low level, making it difficult to express computations that are interested in a semantic-, rather than time-ordering of the data. In financial services, what often matters the most about the data moving between systems is not when the data was created, but in what order, to the extent that many institutions engineer a global sequencing over all data entering and produced by their systems to achieve complete determinism. How, then, can financial institutions and others best employ Stream Processing on streams of data that are already ordered? I will cover various techniques that can make this work, as well as seek input from the community on how Flink might be improved to better support these use-cases.
by
Patrick Lucas
Tame the small files problem and optimize data layout for streaming ingestion... (Flink Forward)
Flink Forward San Francisco 2022.
In modern data platform architectures, stream processing engines such as Apache Flink are used to ingest continuous streams of data into data lakes such as Apache Iceberg. Streaming ingestion to Iceberg tables can suffer from two problems: (1) the small files problem, which can hurt read performance, and (2) poor data clustering, which can make file pruning less effective. To address those two problems, we propose adding a shuffling stage to the Flink Iceberg streaming writer. The shuffling stage can intelligently group data via bin packing or range partitioning. This can reduce the number of concurrent files that every task writes. It can also improve data clustering. In this talk, we will explain the motivations in detail and dive into the design of the shuffling stage. We will also share the evaluation results that demonstrate the effectiveness of smart shuffling.
by
Gang Ye & Steven Wu
Batch Processing at Scale with Flink & Iceberg (Flink Forward)
Flink Forward San Francisco 2022.
Goldman Sachs's Data Lake platform serves as the firm's centralized data platform, ingesting 140K (and growing!) batches per day of Datasets of varying shape and size. Powered by Flink and using metadata configured by platform users, ingestion applications are generated dynamically at runtime to extract, transform, and load data into centralized storage where it is then exported to warehousing solutions such as Sybase IQ, Snowflake, and Amazon Redshift. Data Latency is one of many key considerations as producers and consumers have their own commitments to satisfy. Consumers range from people/systems issuing queries, to applications using engines like Spark, Hive, and Presto to transform data into refined Datasets. Apache Iceberg allows our applications to not only benefit from consistency guarantees important when running on eventually consistent storage like S3, but also allows us the opportunity to improve our batch processing patterns with its scalability-focused features.
by
Andreas Hailu
State of ICS and IoT Cyber Threat Landscape Report 2024 preview (Prayukth K V)
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence gathering facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work, along with a knack for helping others understand how things work. He brings around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... (Ramesh Iyer)
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes real work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Search and Society: Reimagining Information Access for Radical Futures (Bhaskar Mitra)
The field of information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build, inspired by diverse, explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies need to be explicitly articulated, and we need to develop theories of change in the context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
Accelerate your Kubernetes clusters with Varnish Caching (Thijs Feryn)
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Key Trends Shaping the Future of Infrastructure (Cheryl Hung)
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides from my and Rik Marselis's talk at the DASA Connect conference on 30.5.2024. We discuss what testing is, what agile testing is, and finally what testing in DevOps looks like. We also ran a lovely workshop in which participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
3. Background about Huawei Cloud
❖ Cloud BU
❖ Founded in June 2017
❖ Huawei Cloud
❖ HUAWEI CLOUD services let enterprises use ICT services in the same way as they use water and electric utilities.
4. Why choose Flink
❖ Graceful runtime framework
❖ Rich Stream SQL functionality
❖ Lightweight async checkpointing
❖ Really low latency and high throughput
❖ Extensibility: ML, Graph, Edge
5. Cloud Stream Service
❖ Cloud Stream Service (CS):
Real-time big data stream analysis service on Huawei Cloud.
Compatible with Apache Flink and Spark APIs, CS also fully
manages the computing clusters. Users just focus on Stream SQL or
UDFs and run jobs in real time.
❖ CS is the first public cloud native service in the world to
choose Flink as its runtime computing engine.
https://www.huaweicloud.com/en-us/product/cs.html
6. CS Overview
Use cases and industries:
- Industrial IoT
- Internet of Vehicles
- Exchanges (Bitcoin/stock)
- Banking/insurance industry
- E-commerce …
Make computing easier:
- Unified batch and stream
- SQL and job visualization
- Streaming monitoring
Connect everything:
- Open source sources/sinks
- Cloud service sources/sinks
8. Cost Comparison (Reference)

Item                               Offline Environment Buildup    CS                                          Saved Cost
Hardware cost                      80,000 x 3 = 240,000 CNY       0.5 x 20 x 24 x 30 x 12 x 3 ≈ 259,000 CNY
O&M manpower cost                  200,000 CNY/man-year           0
Water/Electricity/DC maintenance   76,300 CNY/year                0
Total                              516,300 CNY                    259,000 CNY                                 42.9%

Notes: the hardware cost of a single physical machine is 80 thousand CNY (the cost is for reference only). Users are charged 0.5 CNY per hour for a single SPU; 20 SPUs are purchased.

To achieve the same computing capability, CS saves 42.9% of the costs.
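The arithmetic behind the comparison can be reproduced in a few lines. A hedged sketch (all figures taken from the slide; note that the three-machine hardware line works out to 80,000 x 3 = 240,000 CNY, which is what makes the stated 516,300 CNY total add up):

```python
# Back-of-envelope reproduction of the slide's 3-year cost comparison.
# All figures come from the slide; this is a sketch, not real billing logic.

SPU_PRICE_CNY_PER_HOUR = 0.5  # one SPU = 1 CPU core + 4 GB memory
SPUS = 20
YEARS = 3

# CS pay-as-you-go: 20 SPUs running 24 h/day, 30 days/month, 12 months/year
cs_total = SPU_PRICE_CNY_PER_HOUR * SPUS * 24 * 30 * 12 * YEARS

# Offline buildup, per the slide's line items
hardware = 80_000 * 3   # three physical machines at 80,000 CNY each
manpower = 200_000      # O&M manpower cost
facilities = 76_300     # water/electricity/DC maintenance
offline_total = hardware + manpower + facilities

print(cs_total, offline_total)  # 259200.0 516300
```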
9. Job types
❖ Flink SQL: first-class citizen for ease of use
❖ Flink Jar job: FlinkML, Gelly, CEP, SQL
❖ Spark Streaming and Structured Streaming Jar job
❖ PySpark Jar job
❖ Edge computing job: in beta now
10. Connect to Ecosystem
❖ Open source connectors (Flink connectors and Bahir Flink)
❖ Connect to cloud native services in Huawei Cloud
Problems with connection API adapters:
1. Define a unified connector API between Flink and Spark for connectors such as Kafka and JDBC.
2. Define a general connector API for cloud services, such as object bucket storage.
Apache Bahir needs more contributions.
13. Flink Benchmark - chicken ribs
❖ Standard benchmark problems:
❖ just focus on performance and a supposed use case
❖ can't cover all the APIs and features
❖ performance only shows your best case, never the worst case
❖ Enterprises care more about reliability and best practices
14. Flink Reliability benchmark
❖ Test metric dimensions for every API:
❖ overall source generating rate:
❖ fixed rate, rapid rate, exponential rate
❖ data skew and backpressure
❖ Job.ratio = max{Vertex.ratio | Vertex ∈ Job}
❖ Vertex.ratio = max{SubTask.ratio | SubTask ∈ Vertex}
❖ latency
❖ job latency: source generating rate vs. job processing rate
❖ event latency: the time cost between source and sink
❖ throughput and GC …
Auto-run large-scale tests to find cases where Flink may encounter runtime memory overflow, incorrect calculation results, or runtime reliability problems, and collect metrics on backpressure, latency, throughput, memory, CPU, and rate to analyze the causes of the reliability problems.
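The two max-rules above roll per-subtask backpressure up to the job level: a vertex is as backpressured as its worst subtask, and a job is as backpressured as its worst vertex. A minimal sketch (function and variable names are hypothetical, not Flink APIs):

```python
# Roll-up of backpressure ratios as defined on the slide:
# Vertex.ratio = max over its subtasks, Job.ratio = max over its vertices.

def vertex_ratio(subtask_ratios):
    # subtask_ratios: per-subtask backpressure ratios in [0, 1]
    return max(subtask_ratios)

def job_ratio(vertices):
    # vertices: one list of subtask ratios per job vertex
    return max(vertex_ratio(s) for s in vertices)

# Example: the middle vertex has one badly backpressured subtask (0.62),
# so the whole job reports 0.62.
job = [[0.0, 0.05], [0.62, 0.10, 0.33], [0.2]]
assert job_ratio(job) == 0.62
```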
15. Flink ReliabilityBench project
❖ The generated report includes all APIs
❖ In the next half year, we'll publish the Flink reliability bench and standard benchmark to Cloud Stream Service
❖ Users just set the needed resources; the bench then runs automatically and generates a final report for tuning and a best-practice guide
We welcome everyone in the Flink community to try it.
16. Some problems
❖ In SQL, how to express JSON, OpenTSDB, and other data formats?
❖ SQL WITH clause:
❖ how to make a general and extensible rule to support all connectors?
❖ how to support a general and extensible cloud standard, like object bucket storage?
❖ API server?
❖ manage job lifetime and metrics
❖ For a job, input the source data, …, output sink data with the Streaming API
❖ Sink reliability support for an external write-ahead log framework:
❖ source1 - processing - sink1 - source2 - processing - sink2 - …
data may be lost between stages
17. Intelligent Streaming Computing
❖ Open source frameworks
❖ Streaming + ML: Spark MLlib, PySpark, FlinkML
❖ Streaming + Graph: Spark GraphX, Flink Gelly
❖ SQL: bonding the above together via UDFs
Stream analysis alone is not enough; an intelligent framework is needed.
If we make less effort, we may be surpassed by others quickly.
Stay hungry.
18. Scenario 1: streaming trading analysis

Bitcoin trading pain points:
1. Out-of-order stream data for K-line charts of 5 min, 15 min, 30 min, 60 min
2. Aggregating streaming data over windows
3. Low latency

Huawei Cloud solution: DIS/Kafka → Cloud Stream (Flink, Spark) → Cloud Table (OpenTSDB, HBase) and DCS (Redis)

(Just an example diagram for illustration, from the Sohu site.)
19. Scenario 3: Stream Analysis and ETL

CS uses jobs of the Flink SQL, Flink, and Spark Streaming types to conduct exception detection, real-time alarm reporting, and CEP-based processing on stream data.

Feedback/decision-making/monitoring: Based on the positive feedback during service running and on monitoring information, CS provides guidance for positive product optimization, loss stopping, quantization, and visualization.
20. Enhanced Statistics and ML Features Extraction

Design principles:
• Incremental computation
• Fixed-size memory
• Constant to sub-linear time complexity
21. Enhanced Statistics and ML Features Extraction

Online Linear Regression Learner

In general, least squares minimizes

  S² = Σᵢ (yᵢ − f(xᵢ; β₁, β₂, β₃, …, βₙ))²

For the linear fit:

  S² = Σᵢ (yᵢ − (β₁ + β₂ xᵢ))²

Regression parameters:

  β₂ = s_{xy,t} / m_{2,t}
  β₁ = ȳ − β₂ x̄

Incremental mean:

  x̄ₜ = x̄ₜ₋₁ + (1/t)(xₜ − x̄ₜ₋₁)

Incremental variance (2nd central moment):

  m_{2,t} = m_{2,t−1} + (xₜ − x̄ₜ₋₁)(xₜ − x̄ₜ)

Incremental covariance:

  s_{xy,t} = ((t−2)/(t−1)) s_{xy,t−1} + (1/t)(xₜ − x̄ₜ₋₁)(yₜ − ȳₜ₋₁)

[Charts on the slide: latency analysis and throughput analysis; axes include execution time (s), throughput (events), time range (ms), and events]
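The incremental formulas above can be sketched as a small constant-memory learner. This sketch uses the standard Welford-style co-moment update, a close cousin of the slide's (t−2)/(t−1) covariance recurrence; class and attribute names are illustrative only:

```python
# Online linear regression in the spirit of the slide: incremental mean,
# second central moment (m2) and co-moment (s_xy) in O(1) memory per event,
# with slope beta2 = s_xy / m2 and intercept beta1 = mean_y - beta2 * mean_x.

class OnlineLinearRegression:
    def __init__(self):
        self.t = 0
        self.mean_x = 0.0
        self.mean_y = 0.0
        self.m2 = 0.0    # running sum of squared deviations of x
        self.s_xy = 0.0  # running sum of co-deviations of x and y

    def update(self, x, y):
        self.t += 1
        dx = x - self.mean_x                  # deviation from the OLD mean
        self.mean_x += dx / self.t            # incremental mean update
        self.mean_y += (y - self.mean_y) / self.t
        self.m2 += dx * (x - self.mean_x)     # Welford second-moment update
        self.s_xy += dx * (y - self.mean_y)   # Welford co-moment update

    @property
    def beta2(self):  # slope
        return self.s_xy / self.m2

    @property
    def beta1(self):  # intercept
        return self.mean_y - self.beta2 * self.mean_x

# Feed points on the exact line y = 3x + 2; the learner recovers the line.
reg = OnlineLinearRegression()
for x in range(1, 11):
    reg.update(x, 3.0 * x + 2.0)
assert abs(reg.beta2 - 3.0) < 1e-9 and abs(reg.beta1 - 2.0) < 1e-9
```

Because each update touches only a handful of floats, this satisfies the design principles on the previous slide: incremental computation, fixed-size memory, constant time per event.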
22. GeoSpatial
• DDL for time geospatial
  • ST_Point
  • ST_Line
  • ST_Polygon
• SQL geospatial scalar functions
  • ST_DISTANCE, ST_PERIMETER, ST_AREA (polygon)
  • ST_OVERLAPS, ST_INTERSECTS, ST_WITHIN
  • ST_CONTAINS, ST_COVERS, ST_DISJOINT
  • ST_BUFFER, ST_INTERSECTION, ST_ENVELOPE
• SQL time geospatial
  • AGG_DISTANCE
  • AVG_SPEED
  • … on HOP/TUMBLE/OVER/SESSION windows
  • … on count/time windows
  • … on rowtime/proctime windows
• Huawei offers complete coverage of the geospatial standard plus extra time-based functions
Realtime IoT Analytics - Flink IoT Stream Engine

[Architecture diagram: Stream SQL time-geospatial analytics are submitted, translated and optimized into a stream topology, then deployed and executed over continuous data; the runtime combines an IoT operator library (SQL IoT functions), a geometry engine with geospatial functions, and user-defined functions.]

SQL IoT functions:
• ST_DISTANCE, ST_PERIMETER, ST_AREA (polygon)
• ST_OVERLAPS, ST_INTERSECTS, ST_WITHIN
• ST_CONTAINS, ST_COVERS, ST_DISJOINT
• ST_BUFFER, ST_INTERSECTION, ST_ENVELOPE
• …

Stream IoT operators:
• Window Tumble (count/time)
• Window Hop (count/time)
• Window Session (count/time)
• Process Function
• Map
• FlatMap
23. GeoSpatial Examples

Select if cars deviate from the road:

SELECT carId FROM CarStream
WHERE ST_WITHIN(
  ST_POINT(car.lat, car.lon),
  ST_BUFFER(ST_ROAD_FROM_FILE(file), 2.0))

Compute time aggregates over spatial data:

SELECT timestampa, lat, lon,
  AGG_DISTANCE(ST_POINT(lat, lon)) OVER (
    PARTITION BY carid ORDER BY proctime RANGE BETWEEN
    INTERVAL '1' HOUR PRECEDING AND CURRENT ROW),
  AVG_SPEED(ST_POINT(lat, lon)) OVER (
    PARTITION BY carid ORDER BY proctime RANGE BETWEEN
    INTERVAL '1' HOUR PRECEDING AND CURRENT ROW)
FROM CarStream

Filter by region:

SELECT timestampr, lat, lon, speed
FROM CarStream
WHERE ST_WITHIN(ST_POINT(lat, lon), ST_POLYGON(ARRAY[
  ST_POINT(53.454326, 7.334517),
  ST_POINT(53.682480, 13.906822),
  ST_POINT(47.761194, 12.607594),
  ST_POINT(47.722358, 7.601213),
  ST_POINT(53.454326, 7.334517)]))
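The region filter above boils down to a point-in-polygon decision. This is not CS's implementation, just a tiny ray-casting sketch of what ST_WITHIN(ST_POINT(...), ST_POLYGON(...)) decides, treating coordinates as plain 2-D values and ignoring geodesy:

```python
# Ray casting: count how many polygon edges a horizontal ray from the point
# crosses; an odd count means the point is inside.

def point_in_polygon(point, polygon):
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]  # wrap around to close the polygon
        # does the edge straddle the ray's y-coordinate?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that horizontal line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

square = [(0, 0), (0, 10), (10, 10), (10, 0)]
assert point_in_polygon((5, 5), square)
assert not point_in_polygon((15, 5), square)
```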
24. Flink CEP on SQL enhancements

SQL CEP syntax (define the pattern matching computation):

SELECT * FROM stream ...
MATCH_RECOGNIZE (
  [row_pattern_partition_by]
  [row_pattern_order_by]
  [row_pattern_measures]
  [row_pattern_rows_per_match]
  [row_pattern_skip_to]
  PATTERN (row_pattern) [WITHIN clause]
  [duration clause]
  [row_pattern_subset_clause]
  DEFINE row_pattern_definition_list )

We offer complete syntax coverage for real-time CEP analytics:

SELECT * FROM Ticker
MATCH_RECOGNIZE (
  PARTITION BY symbol
  MEASURES
    FINAL FIRST(A.price) AS firstAPrice,
    FINAL FIRST(B.price) AS firstBPrice,
    FINAL FIRST(C.price) AS firstCPrice,
    FINAL LAST(A.price) AS lastAPrice,
    FINAL LAST(B.price) AS lastBPrice,
    FINAL LAST(C.price) AS lastCPrice
  ONE ROW PER MATCH
  AFTER MATCH SKIP PAST LAST ROW
  PATTERN ((A B C){2})
  DEFINE A AS A.price < 50, B AS B.price < 30,
         C AS C.price < 70 )

Benchmark: ~2.5M events, 7 stocks, ~100K matched events, average latency ~27.13 ms.
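To make the semantics of the query above concrete, here is a hypothetical plain-Python sketch of what PATTERN ((A B C){2}) with those DEFINE predicates matches over a price sequence, assuming contiguous events; it also mimics AFTER MATCH SKIP PAST LAST ROW by resuming the scan after a match (real MATCH_RECOGNIZE is far more general):

```python
# Match six consecutive prices satisfying A<50, B<30, C<70, A<50, B<30, C<70.

def match_abc_twice(prices):
    # (A B C){2} as a flat list of per-position predicates
    conds = [lambda p: p < 50, lambda p: p < 30, lambda p: p < 70] * 2
    matches, i = [], 0
    while i + len(conds) <= len(prices):
        window = prices[i:i + len(conds)]
        if all(cond(p) for cond, p in zip(conds, window)):
            matches.append(window)
            i += len(conds)  # AFTER MATCH SKIP PAST LAST ROW
        else:
            i += 1
    return matches

prices = [40, 20, 60, 45, 25, 65, 100]
assert match_abc_twice(prices) == [[40, 20, 60, 45, 25, 65]]
```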
I'm Jinkui Shi, from Huawei's Hangzhou office. I now work on the CloudStream Service of Huawei Cloud. I previously worked at Sohu and Alibaba, where I became interested in Spark and microservices. For the past two years, I have focused on developing products with Flink and Spark Streaming.
Huawei Cloud BU was founded in June 2017 and is a top-level business division. In the past half year, hundreds of new services have been created on Huawei Cloud.
I work in Huawei Cloud EI (Enterprise Intelligence), which includes big data services, machine learning, and AI services.
Before I joined Huawei, there was a streaming product called StreamSmart, written in C++, which supported CQL.
The first question is why we chose Flink. There are lots of streaming frameworks, such as Storm, JStorm, Heron, Kafka Streams, Apex, Samza, NiFi, Akka Streams, Beam, and so on. In the end we chose Apache Flink as our runtime execution engine because Flink has a graceful dataflow runtime framework, rich Stream SQL functionality, a lightweight async checkpoint mechanism, really low latency, and high throughput. Beyond these basic abilities, Flink also supports machine learning and graph processing, and can run on edge devices. Indeed, we developed a Huawei Flink release with some advanced features added, such as GeoSpatial, which Dr. Radu will introduce.
CS is the first cloud native service in the world to choose Flink as its runtime computing engine.
Cloud Stream is a new service on Huawei Cloud. We designed it in May 2017, and after less than three months of development we released it in beta on Huawei Cloud. On March 7th 2017 we released it officially.
CloudStream's basic capability is streaming analysis, such as ETL and anomaly detection. CloudStream chose Flink as the main runtime engine, and also supports Spark Streaming and the other parts of Spark. We provide the SQL editor first.
CloudStream has three parts. The first is use cases and industries: we provide templates for different use cases. Second, to make computing easier, the runtime execution engine supports both Flink and Spark, so users can run Flink SQL, machine learning algorithms (including Spark MLlib, FlinkML, and the DL4J framework), graph frameworks (including Spark GraphX and Flink Gelly), Flink IoT enhancement features, and CEP enhancement features.
Third, at runtime, having rich connectors is very important. CloudStream now supports Flink open source connectors via VPC clusters, as well as Huawei Cloud service connectors.
Apache Bahir is a good connector toolkit, but it needs more improvement and should keep API compatibility with Spark.
These are the main features of Cloud Stream Service.
Easy to use: we provide a SQL editor so users can build their business logic online and either submit the job directly or just test the SQL. Every job has a runtime monitor for the execution graph and data stream statistics visualization.
Pay-as-you-go: users just pay for what the running job costs. The payment unit is the SPU (Stream Processing Unit), which includes 1 CPU core and 4 gigabytes (GB) of memory; each SPU costs just half a Chinese Yuan per hour.
Secure and reliable: CloudStream provides two kinds of fully-managed clusters: a shared cluster for Flink SQL without UDFs, and an exclusive cluster for Flink Jar and Spark Jar jobs. The exclusive cluster requires an extra six SPUs for exclusive management, and it runs only a single user's or tenant's jobs, so it is the safest option.
We compared the costs between Cloud Stream Service and an offline deployment. For the same CPU and memory resources, CS saves 42.9% of the costs, which is very exciting.
CloudStream now supports five kinds of jobs: Flink SQL; Flink Jar jobs, including any UDF; Spark Jar jobs, including any UDF; PySpark Jar jobs; and edge jobs, which are in beta.
The connectors of CloudStream cover open source connectors and Huawei Cloud native services.
We also found some problems to improve. The first is defining a unified connector API for the same connector between Flink and Spark, such as Kafka and JDBC.
The other is defining a general cloud service standard, such as object bucket storage.
This will be useful for users moving between Flink, Spark, and other frameworks. I think the Apache Bahir framework needs more effort.
Users just create a job from a predefined template and modify the parameters and business logic. After that, they choose the SPU amount, set the checkpoint storage to an object storage bucket, and click the submit button. The job is then submitted to the cluster.
In the next half of this year, we'll publish a resource cost estimation feature, with which CloudStream can automatically estimate how many SPUs the current job needs.
Visualization includes the SQL editor and job monitoring. SQL visualization, notebooks, job pipelines, and a DSL are ongoing.
Visualization covers job development, job runtime metric monitoring, streaming data sampling, and sink data display.
"Chicken ribs" is a Chinese idiom: it describes something that does not have enough value to keep, yet is a bit of a pity to give up. The benchmark result is just such a reference.
When Flink is asked to compute backpressure, the JobManager sends a TriggerStackTraceSample message to each TaskManager via Akka. By default, the TaskManager triggers 100 stack trace samples at 50 ms intervals (so one backpressure check takes at least 5 seconds). The results of these 100 samples are returned to the JobManager, which computes the backpressure ratio (number of samples showing backpressure / number of samples).
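The ratio the JobManager derives from those stack-trace samples is simple to state in code. A sketch only; the sampling itself is Flink-internal:

```python
# Backpressure ratio = samples whose stack trace was blocked requesting an
# output buffer (i.e. backpressured) / total samples taken.

def backpressure_ratio(samples):
    # samples: booleans, one per stack-trace sample, True if backpressured
    return sum(samples) / len(samples)

# One check: 100 samples taken 50 ms apart, 57 of them blocked.
samples = [True] * 57 + [False] * 43
ratio = backpressure_ratio(samples)
assert ratio == 0.57
# Flink's classic web UI would label a ratio above 0.5 as HIGH.
```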
We created a private project called FlinkReliabilityBench. It tests every Flink API along four measures: data skew and backpressure, latency, throughput, and GC.
It simulates realistic source streaming data, automatically sets different parameter combinations, and then uses metric statistics to find the best and worst parameter combinations, and even crash cases.