Organizations that can make sense of the massive amounts of data produced by their systems, customers, and partners will have a competitive edge. Ballerina Stream Processing brings real-time event stream processing capabilities to microservices, with intuitive SQL queries that let users filter, aggregate, and correlate data to make sense of it, make decisions, and act in real time in a distributed manner.
In this talk, we will discuss the following:
* Ballerina’s Stream Processing capability.
* How it can be used for real-time decision making.
* Building highly scalable data pipelines with data processing at the edge.
* Building event-driven architecture with stream processing.
* The roadmap.
* Characteristics of cloud-native apps.
* Problems in implementing event-driven stateful applications.
* Siddhi: Cloud-Native Stream Processor.
* Patterns of implementing event-driven applications.
* Deploying event-driven applications on Kubernetes with Siddhi and NATS.
What is stream processing? The evolution of streaming SQL, its advantages and challenges, and how we can overcome them. Presented at WSO2 Con 2018 USA.
Organizational success depends on our ability to sense the environment, grab opportunities, and eliminate threats in real time. Such real-time processing is now available to all organizations (with or without a big data background) through the new WSO2 Stream Processor.
These slides present WSO2 Stream Processor’s new features and improvements and explain how they can help an organization excel in today's competitive marketplace. Some key features we will consider are:
* WSO2 Stream Processor’s highly productive developer environment, with graphical drag-and-drop, and the Streaming SQL query editor
* The ability to process real-time queries that span from seconds to years
* Its interactive visualization and dashboarding features with improved widget generation
* Its ability to process at scale via distributed deployments with full observability
* Default support for HTTP analytics, distributed message trace analytics, and Twitter analytics
Webinar: MongoDB Use Cases within the Oil, Gas, and Energy Industries (MongoDB)
In this session we will dive into some of the use cases companies are currently deploying MongoDB for in the energy space. It is becoming more important for companies to make data-driven decisions, and MongoDB can often be the right tool for analyzing the massive amounts of data coming in. Whether tracking oil well site statistics, power meter data, or feeds from sensors, MongoDB can be a great fit for tracking and analyzing that data, using it to make smart, informed business decisions.
Cloud Spanner is the first and only relational database service that is both strongly consistent and horizontally scalable. With Cloud Spanner you enjoy all the traditional benefits of a relational database: ACID transactions, relational schemas (and schema changes without downtime), SQL queries, high performance, and high availability. But unlike any other relational database service, Cloud Spanner scales horizontally, to hundreds or thousands of servers, so it can handle the highest of transactional workloads.
Cassandra as event sourced journal for big data analytics (Anirvan Chakraborty)
Avoiding destructive updates and keeping a history of data using event sourcing approaches has large advantages for data analytics. This talk describes how Cassandra can be used as an event journal as part of a CQRS/Lambda architecture using event sourcing, and further used for data mining and machine learning purposes in a big data pipeline.
All the principles are demonstrated on an application called Muvr that we built. It uses data from wearable devices, such as the accelerometer in a watch or a heartbeat monitor, to classify users' exercises in near real time. It uses mobile devices and the clustered Akka actor framework to distribute computation, then stores events as immutable facts in a journal backed by Cassandra. The data is then read by Apache Spark and used for more expensive analytics and machine learning tasks, such as suggesting improvements to a user's exercise routine or improving the machine learning models so that better real-time exercise classification can be applied immediately. The talk mentions some of the internals of Spark when working with Cassandra and focuses on its machine learning capabilities enabled by Cassandra. A lot of the analytics are done for each user individually, so the whole pipeline must handle a potentially large number of concurrent users and a lot of raw data, which means we need to ensure attributes such as responsiveness, elasticity, and resilience.
Streaming Operational Data with MariaDB MaxScale (MariaDB plc)
MariaDB experts explain how to stream data using MariaDB MaxScale, a database proxy that can vastly improve your server's transactional data processing without sacrificing scalability, security or speed. In this webinar, learn how to use MaxScale to convert data to JSON documents or AVRO objects, and watch as MariaDB's senior software engineers do a live demo of how to use the Kafka producer.
Watch the webinar here: https://mariadb.com/resources/webinars/streaming-operational-data-mariadb-maxscale
codecentric AG: CQRS and Event Sourcing Applications with Cassandra (DataStax Academy)
CQRS (Command Query Responsibility Segregation) is a pattern that separates the processes of querying and updating data. While a query only returns data without any side effects, a command is designed to change data. CQRS is often combined with Event Sourcing, an architecture in which all changes to an application's state are stored as a sequence of events.
Because of its great capability to store time series data, Cassandra is a perfect fit for implementing the event store. But there are still a lot of open questions: What about the data modeling? What techniques will be used to process and store data in the Cassandra database? How do you access the current state of the application without replaying every event? And what about failure handling?
In this talk, I will give a brief introduction to CQRS and the Event Sourcing pattern and will then answer the questions above using a real life example of a data store for customer data.
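The command/query split and event replay described above can be sketched in a few lines of Java. This is an illustrative in-memory model only; the class and method names are our own, not from the talk, and a real implementation would persist the journal to Cassandra and keep snapshots so the current state can be recovered without replaying every event.

```java
import java.util.ArrayList;
import java.util.List;

// A minimal event-sourced account: state is never updated destructively.
// Every change is appended as an immutable event, and the current balance
// is derived by replaying the event journal.
public class EventSourcedAccount {
    // Immutable event: positive amounts are deposits, negative are withdrawals.
    record BalanceChanged(long amountCents) {}

    private final List<BalanceChanged> journal = new ArrayList<>();

    // Command side: validates input and appends an event (no balance field is mutated).
    public void deposit(long amountCents) {
        if (amountCents <= 0) throw new IllegalArgumentException("deposit must be positive");
        journal.add(new BalanceChanged(amountCents));
    }

    public void withdraw(long amountCents) {
        if (amountCents <= 0) throw new IllegalArgumentException("withdrawal must be positive");
        journal.add(new BalanceChanged(-amountCents));
    }

    // Query side: replays the journal to compute the current state.
    public long balance() {
        return journal.stream().mapToLong(BalanceChanged::amountCents).sum();
    }

    public static void main(String[] args) {
        EventSourcedAccount acct = new EventSourcedAccount();
        acct.deposit(10_00);
        acct.withdraw(2_50);
        acct.deposit(1_00);
        System.out.println(acct.balance() + " cents");
    }
}
```

Replaying on every query is exactly the cost the talk's "snapshot" question addresses: a production event store would periodically materialize the folded state and replay only events newer than the snapshot.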
These slides were designed for Apache Hadoop + Apache Apex workshop (University program).
The audience was mainly third-year engineering students from Computer, IT, Electronics, and Telecom disciplines.
I tried to keep it simple for beginners to understand. Some of the examples use context from India, but in general this should be a good starting point for beginners.
Advanced users/experts may not find this relevant.
We’ll present details about Argus, a time-series monitoring and alerting platform developed at Salesforce to provide insight into the health of infrastructure as an alternative to systems such as Graphite and Seyren.
Symantec: Cassandra Data Modelling techniques in action (DataStax Academy)
Our product presents an aggregated view of metadata collected for billions of objects (files, emails, SharePoint objects, etc.). We used Cassandra to store those billions of objects along with an aggregated view of that metadata. Customers can analyse the corpus of data in real time by searching in a completely flexible way, i.e. they can get summary aggregates for many billions of objects and then drill down to items by filtering on various facets of the metadata. We achieve this using a combination of Cassandra and Elasticsearch. This presentation will cover the various data modelling techniques we use to aggregate and further summarise all that metadata and be able to search the summary in real time.
Improve your SQL workload with observability (OVHcloud)
Most of OVH's information system runs on relational databases (PostgreSQL, MySQL, MariaDB). In terms of volume, that represents 400 databases weighing more than 20 TB of data, spread across 60 clusters in two geographic regions and powering 3,000 applications.
How can we see everything in our fleet? Better yet, how can we let everyone follow the activity of their own database? That is the challenge we set ourselves, and one year on we can share our experience.
What if observability were not just a buzzword, but had a real impact on production?
To view the recording of this webinar, please use the URL below:
http://wso2.com/library/webinars/2015/11/wso2-product-release-webinar-wso2-complex-event-processor-4.0/
In this webinar, Lasantha and Suho will discuss the following key features and improvements in detail:
* Integrating WSO2 CEP with Apache Storm to achieve distributed real-time stream processing
* Key features of the latest version of Siddhi
* New transports that enhance the integration capabilities of WSO2 CEP
* Creating query templates using the execution manager
* Using the analytics dashboard to visualize results in real time
Streamsheets and Apache Kafka – Interactively build real-time Dashboards and ... (confluent)
A powerful stream processing platform combined with an end-user-friendly spreadsheet interface: if this combination rings a bell, you should definitely attend our „Streamsheets and Apache Kafka“ webinar. While development is interactive through a web user interface, Streamsheets applications can run as mission-critical applications that directly consume and produce event streams in Apache Kafka. One popular option is to run everything in the cloud, leveraging the fully managed Confluent Cloud service on AWS, GCP, or Azure. Without any coding or scripting, end users leverage their existing spreadsheet skills to build customized streaming apps for analysis, dashboarding, condition monitoring, or any kind of real-time pre- and post-processing of Kafka or ksqlDB streams and tables.
Hear Kai Waehner of Confluent and Kristian Raue of Cedalo on these topics:
• Where Apache Kafka and Streamsheets fit in the data ecosystem (Industrial IoT, Smart Energy, Clinical Applications, Finance Applications)
• Customer Story: How the Freiburg University Hospital uses Kafka and Streamsheets for dashboarding the utilization of clinical assets
• 15-Minutes Live Demonstration: Building a financial fraud detection dashboard based on Confluent Cloud, ksqlDB and Cedalo Cloud Streamsheets just using spreadsheet formulas.
Speaker:
Kai Waehner, Technology Evangelist, Confluent
Kristian Raue, Founder & Chief Technologist, cedalo
In this session we will analyze and discuss the problems involved in publishing data from devices in a typical IoT scenario. We will see how the Microsoft Azure Event Hubs service handles ingestion via publish and subscribe, offering flexible scalability that adapts to variable load profiles and to the peaks caused by intermittent connectivity.
Building Modern Data Pipelines for Time Series Data on GCP with InfluxData by... (InfluxData)
In this InfluxDays NYC 2019 talk, you will get an overview of Google data pipelines and some use cases for infrastructure monitoring and IoT (Google). In addition, we will share some common solutions that can be deployed on GCP, including using the InfluxDB time series database for Kubernetes monitoring and IoT.
Sergiy Grytsenko, Senior Software Engineer
“Reactive Extensions: classic Observer in .NET”
• Why should we use Rx when we have events?
• Key types & methods
• Lifetime management, flow control
• Combining several streams
• Tests, I need unit tests!
Inflight to Insights: Real-time Insights with Event Hubs, Stream Analytics an... (Todd Whitehead)
See how Azure can be used to provide real-time insights at scale using Event Hubs, Stream Analytics and, unexpectedly, an A-10 Close Air Support attack aircraft! The session will demonstrate how to build an end-to-end solution to ingest, analyse and visualise insights quickly and affordably using the rich Azure platform. We will demonstrate the complete cockpit-to-insight solution, explaining the role and features of the various components as well as taking you step by step through how it was implemented. Finally we will explore other real-world workloads that would benefit from the power of real-time insights.
The program will read the file like this, java homework6Bank sma.pdf (ivylinvaydak64229)
The program will read the file like this,
> java homework6/Bank small.txt 4
acct:0 bal:999 trans:1
acct:1 bal:1001 trans:1
acct:2 bal:999 trans:1
acct:3 bal:1001 trans:1
acct:4 bal:999 trans:1
acct:5 bal:1001 trans:1
acct:6 bal:999 trans:1
acct:7 bal:1001 trans:1
acct:8 bal:999 trans:1
acct:9 bal:1001 trans:1
acct:10 bal:999 trans:1
acct:11 bal:1001 trans:1
acct:12 bal:999 trans:1
acct:13 bal:1001 trans:1
acct:14 bal:999 trans:1
acct:15 bal:1001 trans:1
acct:16 bal:999 trans:1
acct:17 bal:1001 trans:1
acct:18 bal:999 trans:1
acct:19 bal:1001 trans:1
Each text file looks something like:
1 2 1
3 4 1
5 6 1
7 8 1
9 10 1
11 12 1
File Format: Each line in the external file represents a single transaction, and contains three
numbers: the id of the account from which the money is being transferred, the id of the account
to which the money is going, and the amount of money. For example the line:
17 6 104
indicates that $104 is being transferred from Account #17 to Account #6.
The test data provided includes transfers with the same from and to account numbers, so make
sure your program will work correctly for these transfers. For example:
5 5 40
Count these as two transactions for the account (one transaction taking money from the account
and one putting money into the account).
My goal is to pass each transaction into the queue; the queue will hold the transaction, a
worker will take the transaction, complete the deposit/withdrawal, and update the balance of the
account accordingly. I am required to use BlockingQueue. My problem is that the program is not
running correctly. I need to fix the Bank class and how I start up the Bank in the main thread, and
also work on the Worker class.
More info:
Details
I recommend a design with four classes—Bank, Account, Transaction, and Worker. Both the
Account and Transactions classes are quite simple.
Account needs to store an id number, the current balance for the account, and the number of
transactions that have occurred on the account. Remember that multiple worker threads may be
accessing an account simultaneously and you must ensure that they cannot corrupt its data. You
may also want to override the toString method to handle printing of account information.
Transaction is a simple class that stores information on each transaction (see below for more
information about each transaction). If you’re careful you can treat the Transaction as
immutable. This means that you do not have to worry about multiple threads accessing it.
Remember an immutable object’s values never change, therefore its values are not subject to
corruption in a concurrent environment.
The Bank class maintains a list of accounts and the BlockingQueue used to communicate
between the main thread and the worker threads. The Bank is also responsible for starting up the
worker threads, reading transactions from the file, and printing out all the account values when
everything is done. Note: make sure you start up all the worker threads before reading the
transactions.
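The four-class design described above can be sketched as follows. This is a minimal, self-contained illustration, not the assignment's reference solution: the file reading is replaced by an in-memory list, the initial balance and method names are our own choices, and a poison-pill sentinel is used so workers know when to stop.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class Bank {
    // Immutable transaction: safe to share between threads without locking.
    record Transaction(int from, int to, int amount) {}

    // Poison pill: a sentinel telling a worker to shut down.
    static final Transaction NULL_TRANS = new Transaction(-1, -1, 0);

    static class Account {
        private final int id;
        private int balance;
        private int numTrans;
        Account(int id, int initialBalance) { this.id = id; this.balance = initialBalance; }
        // synchronized: several workers may hit the same account concurrently.
        synchronized void adjust(int amount) { balance += amount; numTrans++; }
        synchronized int balance() { return balance; }
        synchronized int transactions() { return numTrans; }
        @Override public synchronized String toString() {
            return "acct:" + id + " bal:" + balance + " trans:" + numTrans;
        }
    }

    static class Worker extends Thread {
        private final BlockingQueue<Transaction> queue;
        private final List<Account> accounts;
        Worker(BlockingQueue<Transaction> queue, List<Account> accounts) {
            this.queue = queue; this.accounts = accounts;
        }
        @Override public void run() {
            try {
                while (true) {
                    Transaction t = queue.take();        // blocks until work arrives
                    if (t == NULL_TRANS) break;          // poison pill: stop this worker
                    // A same-account transfer still triggers both calls,
                    // so it counts as two transactions, as the assignment requires.
                    accounts.get(t.from()).adjust(-t.amount()); // withdrawal
                    accounts.get(t.to()).adjust(t.amount());    // deposit
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    static List<Account> process(List<Transaction> input, int numAccounts,
                                 int numWorkers) throws InterruptedException {
        List<Account> accounts = new ArrayList<>();
        for (int i = 0; i < numAccounts; i++) accounts.add(new Account(i, 1000));
        BlockingQueue<Transaction> queue = new ArrayBlockingQueue<>(100);
        // Start all workers BEFORE feeding the queue, as the assignment notes.
        List<Worker> workers = new ArrayList<>();
        for (int i = 0; i < numWorkers; i++) {
            Worker w = new Worker(queue, accounts);
            w.start();
            workers.add(w);
        }
        for (Transaction t : input) queue.put(t);
        // One poison pill per worker so every thread terminates.
        for (int i = 0; i < numWorkers; i++) queue.put(NULL_TRANS);
        for (Worker w : workers) w.join();
        return accounts;
    }

    public static void main(String[] args) throws InterruptedException {
        // Mirrors the sample file format: each line is "from to amount".
        List<Transaction> sample = List.of(
            new Transaction(1, 2, 1), new Transaction(3, 4, 1),
            new Transaction(5, 6, 1));
        for (Account a : process(sample, 8, 4)) System.out.println(a);
    }
}
```

The likely bugs in the original attempt are the usual ones in this design: feeding the queue before the workers are running, giving workers no termination signal (so `join` hangs), or leaving `Account` unsynchronized so concurrent `adjust` calls corrupt the balance.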
Timely Year Two: Lessons Learned Building a Scalable Metrics Analytic System (Accumulo Summit)
Timely was born to visualize and analyze metric data at a scale untenable for existing solutions. We're returning to talk about what we've achieved over the past year, provide a detailed look into production architecture and discuss additional features added within the past year including alerting and support for external analytics.
– Speakers –
Drew Farris
Chief Technologist, Booz Allen Hamilton
Drew Farris is a software developer and technology consultant at Booz Allen Hamilton where he helps his client solve problems related to large scale analytics, distributed computing and machine learning. He is a member of the Apache Software Foundation and a contributing author to Manning Publications’ “Taming Text” and the Booz Allen Hamilton “Field Guide to Data Science”.
Bill Oley
Senior Lead Engineer, Booz Allen Hamilton
Bill Oley is a senior lead software engineer at Booz Allen Hamilton where he helps his clients analyze and solve problems related to large scale data ingest, storage, retrieval, and analysis. He is particularly interested in improving visibility into large scale systems by making actionable metrics scalable and usable. He has 16 years of experience designing and developing fault-tolerant distributed systems that operate on continuous streams of data. He holds a bachelor's degree in computer science from the United States Naval Academy and a master's degree in computer science from The Johns Hopkins University.
— More Information —
For more information see http://www.accumulosummit.com/
As more and more organizations and individual users turn to Apache Flink for their streaming workloads, there is growing demand for additional out-of-the-box functionality. On one hand, there is demand for more low-level APIs that allow for more control; on the other, users ask for more high-level additions that make the common cases easier to express. This talk will present the new concepts added to the DataStream API in Flink 1.2 and the upcoming Flink 1.3 release that try to reconcile these goals. We will talk, among others, about the ProcessFunction, a new low-level stream processing primitive that gives the user full control over how each event is processed and can register and react to timers; changes in the windowing logic that allow for more flexible windowing strategies; side outputs; and new features concerning the Flink connectors.
http://flink-forward.org/kb_sessions/declarative-stream-processing-with-streamsql-and-cep/
Complex event processing (CEP) and stream analytics are commonly treated as distinct classes of stream processing applications. While CEP workloads identify patterns from event streams in near real-time, stream analytics queries ingest and aggregate high-volume streams. Both types of use cases have very different requirements which resulted in diverging system designs. CEP systems excel at low-latency processing whereas engines for stream analytics achieve high throughput. Recent advances in open source stream processing yielded systems that can process several millions of events per second at sub-second latency. Systems like Apache Flink enable applications that include typical CEP features as well as heavy aggregations. In this talk we will show how Apache Flink unifies CEP and stream analytics workloads. Guided by examples, we introduce Flink’s CEP-enriched StreamSQL interface and discuss how queries are compiled, optimized, and executed on Flink.
Key Trends Shaping the Future of Infrastructure.pdf (Cheryl Hung)
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud, and open source: exploring how these areas are likely to mature and develop over the short and long term, and considering how organisations can position themselves to adapt and thrive.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... (UiPathCommunity)
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Generating a custom Ruby SDK for your web service or Rails API using Smithyg2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
2. The Problem
○ Integration is not always about request-response.
○ Highly scalable systems use Event-Driven Architecture to communicate asynchronously between multiple processing units.
○ Processing events from Webhooks, CDC, real-time ETLs, and notification systems falls into the category of asynchronous event-driven systems.
3. What is a Stream?
An unbounded continuous flow of records (having the same format)
E.g., sensor events, triggers from Webhooks, messages from MQ
4. Why Stream Processing?
Doing continuous processing on the data, forever!
Such as:
○ Monitor and detect anomalies
○ Real-time ETL
○ Streaming aggregations (e.g., average service response time in the last 5 minutes)
○ Join/correlate multiple data streams
○ Detecting complex event patterns or trends
5. Stream Processing Constructs
○ Projection
  ○ Modifying the structure of the stream
○ Filter
○ Windows & Aggregations
  ○ Collection of streaming events over a time or length duration (last 5 min or last 50 events)
  ○ Viewed in a sliding or tumbling manner
  ○ Aggregated over the window (e.g., sum, count, min, max, avg)
○ Joins
  ○ Joining multiple streams
○ Detecting Patterns
  ○ Trends, non-occurrence of events
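To make the window construct concrete, here is a minimal Python sketch (illustrative only; this is not Ballerina and not its runtime) of a sliding time window holding the last 60 seconds of events, with a sum aggregation. A tumbling window would instead flush its whole buffer once per interval rather than expiring events continuously.

```python
from collections import deque

class SlidingTimeWindow:
    """Keep only events whose timestamp falls within the last `span`
    seconds, and aggregate over whatever is currently in the window."""
    def __init__(self, span):
        self.span = span
        self.events = deque()  # (timestamp, value) pairs, oldest first

    def add(self, ts, value):
        self.events.append((ts, value))
        # Expire events that have slid out of the window
        while self.events and self.events[0][0] <= ts - self.span:
            self.events.popleft()

    def sum(self):
        return sum(v for _, v in self.events)

w = SlidingTimeWindow(span=60)
w.add(0, 40)
w.add(30, 50)
w.add(70, 30)       # the event at t=0 has expired by now
print(w.sum())      # 80 (only the events at t=30 and t=70 remain)
```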
6. How to write Stream Processing Logic?
Use language libraries:
○ Have different functions for each stream processing construct.
○ Pros: You can use the same language for implementation.
○ Cons: Quickly becomes very complex and messy.
Use a SQL dialect:
○ Use easy-to-use SQL to script the logic.
○ Pros: Compact and easy to write the logic.
○ Cons: Custom logic needs UDFs, which standard SQL does not support.
7. Solution for Programming Streaming Efficiently
Merging SQL and native programming
1. Consuming events into Ballerina using standard language constructs
○ Via HTTP, HTTP/2, WebSocket, JMS, and more.
2. Generate streams out of the consumed data
○ Map JSON/XML/text messages into a record.
3. Define SQL to manipulate and process data in real time
○ If needed, use Ballerina functions within SQL
4. Generate output streams
5. Use standard language constructs to handle the output or send it to an endpoint
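As an illustration of step 2, here is a small Python sketch (the function name and error handling are assumptions for illustration, not part of Ballerina) of mapping a raw JSON message into a record with the same shape as the SensorData record used in this talk:

```python
import json

def to_sensor_record(raw):
    """Map a raw JSON message into a typed record (step 2 above).
    Field names mirror the SensorData record; the function is made up."""
    msg = json.loads(raw)
    return {"name": str(msg["name"]), "reading": int(msg["reading"])}

rec = to_sensor_record('{"name": "s1", "reading": 42}')
print(rec)  # {'name': 's1', 'reading': 42}
```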
8. A Use Case
“Having lots of sensors, among all valid sensors, detect the sensors that have sent sensor readings greater than 100 in total within the last minute.”
11. Ballerina Stream Processing
type Alert record {
    string name;
    int total;
};
type SensorData record {
    string name;
    int reading;
};
Define input and output record types
12. Ballerina Stream Processing
type Alert record {
    string name;
    int total;
};
type SensorData record {
    string name;
    int reading;
};
function alertQuery(
        stream<SensorData> sensorDataStream,
        stream<Alert> alertStream) {
}
Define input and output record types
Function with input/output streams
13. Ballerina Stream Processing
type Alert record {
    string name;
    int total;
};
type SensorData record {
    string name;
    int reading;
};
function alertQuery(
        stream<SensorData> sensorDataStream,
        stream<Alert> alertStream) {
    forever {
    }
}
Define input and output record types
Function with input/output streams
Forever block
14. Ballerina Stream Processing
type Alert record {
    string name;
    int total;
};
type SensorData record {
    string name;
    int reading;
};
function alertQuery(
        stream<SensorData> sensorDataStream,
        stream<Alert> alertStream) {
    forever {
        from sensorDataStream
        where reading > 0
        window time(60000)
        select name, sum(reading) as total
        group by name
        having total > 100
    }
}
Define input and output record types
Function with input/output streams
Forever block
Among all valid sensors, select the ones with a total reading greater than 100 within the last minute
15. Ballerina Stream Processing
type Alert record {
    string name;
    int total;
};
type SensorData record {
    string name;
    int reading;
};
function alertQuery(
        stream<SensorData> sensorDataStream,
        stream<Alert> alertStream) {
    forever {
        from sensorDataStream
        where reading > 0
        window time(60000)
        select name, sum(reading) as total
        group by name
        having total > 100
        => (Alert[] alerts) {
            alertStream.publish(alerts);
        }
    }
}
Define input and output record types
Function with input/output streams
Forever block
Among all valid sensors, select the ones with a total reading greater than 100 within the last minute
Send Alert
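For readers unfamiliar with Ballerina's streaming syntax, the semantics of this query can be sketched in plain Python (names and structure are illustrative; this is not how the runtime is implemented):

```python
from collections import deque, defaultdict

WINDOW = 60       # seconds, mirroring window time(60000)
events = deque()  # (timestamp, name, reading), oldest first

def on_sensor_data(ts, name, reading):
    """Roughly what the forever block does for each arriving event."""
    if reading <= 0:                                  # where reading > 0
        return []
    events.append((ts, name, reading))
    while events and events[0][0] <= ts - WINDOW:     # window time(60000)
        events.popleft()
    totals = defaultdict(int)                         # group by name
    for _, n, r in events:
        totals[n] += r                                # sum(reading) as total
    # having total > 100 -> these would be published to alertStream
    return [{"name": n, "total": t} for n, t in totals.items() if t > 100]

on_sensor_data(0, "s1", 60)            # total is only 60, no alert yet
alerts = on_sensor_data(20, "s1", 70)  # now 130 within the last minute
print(alerts)  # [{'name': 's1', 'total': 130}]
```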
17. Joining Two Streams Over Time
// Detect when raw material input exceeds production consumption by less than 5%
forever {
from productionInputStream window time(10000) as p
join rawMaterialStream window time(10000) as r
on r.name == p.name
select r.name, sum(r.amount) as totalRawMaterial, sum(p.amount) as totalConsumed
group by r.name
having ((totalRawMaterial - totalConsumed) * 100.0 / totalRawMaterial) < 5
=> (MaterialUsage[] materialUsages) {
materialUsageStream.publish(materialUsages);
}
}
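A rough Python sketch of what this windowed join computes (function and variable names are made up for illustration; not the Ballerina runtime):

```python
from collections import deque

WINDOW = 10             # seconds, mirroring window time(10000)
production = deque()    # (ts, name, amount) from productionInputStream
raw_material = deque()  # (ts, name, amount) from rawMaterialStream

def low_material_margin(ts, name):
    """Join both 10-second windows on name and evaluate the having clause."""
    for q in (production, raw_material):
        while q and q[0][0] <= ts - WINDOW:  # expire events outside the window
            q.popleft()
    total_raw = sum(a for _, n, a in raw_material if n == name)
    total_consumed = sum(a for _, n, a in production if n == name)
    if total_raw == 0:
        return False
    return (total_raw - total_consumed) * 100.0 / total_raw < 5

raw_material.append((0, "steel", 100))
production.append((1, "steel", 98))
print(low_material_margin(2, "steel"))  # True: the margin is only 2%
```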
18. Detecting Patterns Within Streams
// Detect small purchase transaction followed by a huge purchase transaction
// from the same card within a day
forever {
from every PurchaseStream where price < 20 as e1
followed by PurchaseStream where price > 200 && e1.id == id as e2
within 1 day
select e1.id as cardId, e1.price as initialPayment, e2.price as finalPayment
=> (Alert[] alerts) {
alertStream.publish(alerts);
}
}
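The pattern above can also be approximated in a few lines of Python (a simplified sketch with invented names; the real pattern matcher handles state far more generally):

```python
DAY = 86400   # seconds ("within 1 day")
pending = {}  # card id -> (timestamp, price) of the latest small purchase (e1)

def on_purchase(ts, card_id, price):
    """Sketch of 'small purchase followed by a huge purchase from the
    same card within a day'; state handling is deliberately simplified."""
    alerts = []
    if price < 20:
        pending[card_id] = (ts, price)            # candidate e1
    elif price > 200 and card_id in pending:      # e2 with e1.id == id
        t0, p0 = pending.pop(card_id)
        if ts - t0 <= DAY:                        # within 1 day
            alerts.append({"cardId": card_id,
                           "initialPayment": p0,
                           "finalPayment": price})
    return alerts

on_purchase(0, "c1", 10)               # small purchase, no alert
alerts = on_purchase(3600, "c1", 500)  # huge purchase an hour later
print(alerts)  # [{'cardId': 'c1', 'initialPayment': 10, 'finalPayment': 500}]
```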
19. Building Autonomous Services
○ Process incoming messages or locally produced events
○ Process events at the receiving node without sending them to a centralised system
○ Services can monitor themselves through built-in metric streams that produce events locally
○ Do local optimizations and take actions autonomously
20. Stream Processing at the Edge
○ Support microservices architecture
○ Summarize data at the edge.
○ When possible, take localized decisions.
○ Reduce the amount of data transferred to the central node.
○ Ability to run independently
○ Highly scalable
21. The Roadmap
○ Support the use of custom Ballerina functions within stream processing queries.
○ Build Ballerina Stream Processing itself using Ballerina.
○ Support joining streams with tables.
○ Improve the query language.
○ Support state recovery.
○ Support high availability.