It is one thing to architect, design, and code the “happy flow” of your automated business processes and services. It is another thing to deal with situations you do not want or expect to occur in your processes and services. This session dives into fault handling in Oracle Service Bus 11g and Oracle SOA Suite 11g, based on an order-to-cash business process. Faults can be divided into business faults, technical faults, programming errors, and faulty user input. Each type needs a different approach to prevent it from occurring or to deal with it. For example, apply User Experience (UX) techniques to improve the quality of your application so that faulty user input is prevented as much as possible. The session shows and demonstrates which patterns and techniques can be used in Oracle Service Bus and Oracle SOA Suite to prevent and handle technical faults as well as business faults.
It is one thing to design and code the "happy flow" of your automated business processes and services. It is another thing to deal with situations you do not want or expect to occur in your processes and services. This session will dive into fault handling in Oracle SOA Suite 11g using a case study based on automated invoice handling. First, the session investigates what can go wrong in automated processes and services. Then it categorizes these situations and dives into the mechanisms Oracle SOA Suite 11g offers to handle these different scenarios. These mechanisms include BPEL activities such as Throw and Catch activities, the SOA Suite Enterprise Manager, and SOA Suite's fault handling framework. The session will wrap up by introducing a generic fault handling framework for technical faults, used in a real-life project and realized using a Java fault handler and SOA Suite's fault handling framework.
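As a taste of that last combination, here is a minimal sketch of a custom Java fault handler for the SOA Suite 11g fault management framework. It assumes the 11g fault-policy Java action contract (interface and package names are quoted from memory and should be verified); the class name, logging, and the returned value are illustrative.

```java
import oracle.integration.platform.faultpolicy.IFaultRecoveryContext;
import oracle.integration.platform.faultpolicy.IFaultRecoveryJavaClass;

// Registered as a <javaAction> in fault-policies.xml; the framework calls
// handleFault whenever the bound fault policy fires.
public class LoggingFaultHandler implements IFaultRecoveryJavaClass {

    @Override
    public void handleRetrySuccess(IFaultRecoveryContext ctx) {
        // Invoked when a preceding retry action eventually succeeds.
        System.out.println("Retry succeeded, context: " + ctx);
    }

    @Override
    public String handleFault(IFaultRecoveryContext ctx) {
        // Central place to log or alert on the technical fault ...
        System.err.println("Technical fault intercepted, context: " + ctx);
        // ... then hand control back: the returned string is mapped to a
        // follow-up action (retry, abort, human intervention) through the
        // <returnValue> elements of the javaAction in fault-policies.xml.
        return "MANUAL";
    }
}
```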
Advanced Logging and Analysis for SOA, Social, Cloud and Big Data - Perficient, Inc.
In this presentation, Perficient experts examined techniques, frameworks and products to log and analyze complex systems and events. Discussions included:
• Logging and exception management as a service within your SOA
• Frameworks for event and transaction monitoring
• Event sense-and-response techniques
• Highly scalable logging products
• Logging in the context of business transactions and events
Non-functional performance requirements v2.2 - Ian McDonald
How to write and structure non-functional requirements, focusing on performance requirements. This is a quick get-you-going guide on how to avoid writing untestable requirements and make sure that what you want is delivered.
This presentation was created as part of the module Enterprise Systems Development. In it, we compare packaged software development with custom software development (based on the paper by Sawyer).
Five Cool Use Cases for the Spring Component of the SOA Suite 11g - Guido Schmutz
Both Oracle SOA Suite and Oracle Unified Business Process Management Suite make it possible to embed Java code as a Service Component Architecture (SCA) first-class citizen through the Spring component implementation type. In this way, the coarse-grained components of Oracle SOA Suite are extended with much finer-grained Spring beans wrapped inside the Spring component. This session presents five cool use cases for the Spring component. It shows how and why you would want to use the Spring component and will hopefully inspire attendees to use it for their own projects.
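To make the idea concrete, below is a minimal sketch of the kind of fine-grained bean such a Spring component wraps. All names are illustrative assumptions; in a real composite the bean would be declared in the component's Spring context and published to SCA via an <sca:service> element.

```java
// A plain Java interface that an SCA wire can point at.
public interface CreditCheckService {
    boolean isCreditWorthy(String customerId, double amount);
}

// The fine-grained Spring bean (a separate file in practice): plain Java,
// unit-testable outside the composite, yet callable from BPEL or Mediator
// once the Spring context exposes it with <sca:service>.
public class SimpleCreditCheck implements CreditCheckService {
    @Override
    public boolean isCreditWorthy(String customerId, double amount) {
        // Illustrative rule only: auto-approve small amounts.
        return amount < 10_000.0;
    }
}
```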
Wednesday 14-Dec-2016 Future of Data - Princeton Meetup
@TigerLabs in Princeton, NJ. A talk on Apache NiFi for processing drone data. Apache NiFi reads the images from a directory or MQTT, extracts metadata including geolocation, runs TensorFlow for image recognition and stores all the metadata in Phoenix as well as raw JSON in HDFS. Images are also stored in HDFS.
Big Data Day LA 2015 - Always-on Ingestion for Data at Scale by Arvind Prabha... - Data Con LA
While the last few years have seen great advancements in computing paradigms for big data stores, there remains one critical bottleneck in this architecture - the ingestion process. Instead of immediate insights into the data, a poor ingestion process can cause headaches and problems to no end. On the other hand, a well-designed ingestion infrastructure should give you real-time visibility into how your systems are functioning at any given time. This can significantly increase the overall effectiveness of your ad-campaigns, fraud-detection systems, preventive-maintenance systems, or other critical applications underpinning your business.
In this session we will explore various modes of ingest, including pipelining, pub-sub, and micro-batching, and identify the use cases where these can be applied. We will present this in the context of open source frameworks such as Apache Flume and Kafka, among others, that can be used to build related solutions. We will also present when and how to use multiple modes and frameworks together to form hybrid solutions that can address non-trivial ingest requirements with little or no extra overhead. Through this discussion we will drill down into details of configuration and sizing for these frameworks to ensure optimal operations and utilization for long-running deployments.
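As a flavour of the pub-sub mode, here is a minimal ingest sketch using the standard Apache Kafka Java producer; the broker address, topic name and payload are placeholder assumptions.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class EventIngest {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Publish one event; any number of consumer groups can subscribe
            // to the topic independently -- the essence of the pub-sub mode.
            producer.send(new ProducerRecord<>("ingest-events",
                    "device-42", "{\"temp\": 21.5}"));
        }
    }
}
```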
Comparison of various streaming technologies
This meetup will take us through various streaming technologies such as Storm, Flink, InfoSphere Streams and Spark Streaming.
Agenda
• Characteristics of streaming technologies
• Introduction to Apache Storm, Trident and Flink
• Examples of Code and API
• Deep-dive of Spark Streaming
• Comparison of Spark Streaming with other streaming technologies
• Benchmark of Spark Streaming (with code walkthrough)
We will supplement the theory with sufficient examples.
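For instance, a minimal Java sketch of Spark Streaming's micro-batching model, which the deep-dive contrasts with record-at-a-time engines such as Storm or Flink; the host, port and one-second batch interval are arbitrary assumptions.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class MicroBatchDemo {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf()
                .setMaster("local[2]").setAppName("MicroBatchDemo");
        // Spark Streaming discretizes the stream into 1-second micro-batches,
        // the key architectural difference from record-at-a-time engines.
        JavaStreamingContext ssc =
                new JavaStreamingContext(conf, Durations.seconds(1));

        JavaDStream<String> lines = ssc.socketTextStream("localhost", 9999);
        lines.count().print(); // number of events in each micro-batch

        ssc.start();
        ssc.awaitTermination();
    }
}
```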
Real-Time Event & Stream Processing on MS Azure - Khalid Salama
These slides discuss the main concepts of event & stream processing, as well as the related technologies on Microsoft Azure. We start by giving an overview of what event & stream processing is. Then we describe the canonical architecture of a stream processing solution and delve into the message queuing part of the solution. After that, we introduce Apache Storm on HDInsight, as well as Azure Stream Analytics. We compare Apache Storm to Azure Stream Analytics, and finally conclude with useful resources.
Developing Connected Applications with AWS IoT - Technical 301 - Amazon Web Services
AWS IoT is a managed cloud platform that can support billions of devices and trillions of messages, and can process and route those messages to AWS endpoints and to other devices reliably and securely.
In this session we look at patterns and architectures for developing connected applications using AWS IoT. We dive into demo applications that tie together physical IoT devices, web browsers, identity providers, and mobile devices to create smart, connected applications using Amazon Web Services.
Speaker: Adam Larter, Solutions Architect, Amazon Web Services
Featured Customer - Tekt Industries
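Since AWS IoT speaks MQTT, a device-side publisher can be sketched with the generic Eclipse Paho MQTT client rather than the AWS Device SDK; the endpoint, client id, topic and payload below are placeholders, and a real device would configure TLS mutual authentication with its device certificate.

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class DevicePublisher {
    public static void main(String[] args) throws Exception {
        MqttClient client = new MqttClient(
                "ssl://example-ats.iot.us-east-1.amazonaws.com:8883",
                "device-42");
        MqttConnectOptions opts = new MqttConnectOptions();
        // A TLS socket factory with the device certificate would be set here.
        client.connect(opts);

        MqttMessage msg = new MqttMessage("{\"temp\": 21.5}".getBytes());
        msg.setQos(1); // AWS IoT supports MQTT QoS 0 and 1
        client.publish("sensors/device-42/telemetry", msg);
        client.disconnect();
    }
}
```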
Dean Wampler, O’Reilly author and Big Data Strategist in the office of the CTO at Lightbend, discusses practical tips for architecting stream-processing applications and explains how you can tame some of the complexity in moving from data at rest to data in motion.
How to Build Continuous Ingestion for the Internet of Things - Cloudera, Inc.
The Internet of Things is moving into the mainstream and this new world of data-driven products is transforming a vast number of industry sectors and technologies.
However, IoT creates a new challenge: how to build and operationalize continual data ingestion from such a wide and ever-changing array of endpoints so that the data arrives consumption-ready and can drive analysis and action within the business.
In this webinar, Sean Anderson from Cloudera and Kirit Busu, Director of Product Management at StreamSets, will discuss Hadoop's ecosystem and IoT capabilities and provide advice about common patterns and best practices. Using specific examples, they will demonstrate how to build and run end-to-end IoT data flows using StreamSets and Cloudera infrastructure.
Study: The Future of VR, AR and Self-Driving Cars - LinkedIn
We asked LinkedIn members worldwide about their levels of interest in the latest wave of technology: whether they’re using wearables, and whether they intend to buy self-driving cars and VR headsets as they become available. We also asked them about their attitudes to technology and to the growing role of Artificial Intelligence (AI) in the devices that they use. The answers were fascinating – and in many cases, surprising.
This SlideShare explores the full results of this study, including detailed market-by-market breakdowns of intention levels for each technology – and how attitudes change with age, location and seniority level. If you’re marketing a tech brand – or planning to use VR and wearables to reach a professional audience – then these are insights you won’t want to miss.
UX, ethnography and possibilities: for Libraries, Museums and Archives - Ned Potter
These slides are adapted from a talk I gave at the Welsh Government's Marketing Awards for the LAM sector, in 2017.
It offers a primer on UX - User Experience - and how ethnography and design might be used in the library, archive and museum worlds to better understand our users. All good marketing starts with audience insight.
The presentation covers the following:
1) An introduction to UX
2) Ethnography, with definitions and examples of 7 ethnographic techniques
3) User-centred design and Design Thinking
4) Examples of UX-led changes made at institutions in the UK and Scandinavia
5) Next Steps - if you'd like to try out UX at your own organisation
The technologies and people we are designing experiences for are constantly changing; in most cases they are changing at a rate that is difficult to keep up with. When we think about how our teams are structured and the design processes we use in light of this challenge, a new design problem (or problem space) emerges, one that requires us to focus inward. How do we structure our teams and processes to be resilient? What would happen if we looked at our teams and design process as IAs, designers, and researchers? What strategies would we put in place to help them be successful? This talk will look at challenges we face leading, supporting, or simply being a part of design teams creating experiences for user groups with changing technological needs.
An immersive workshop at General Assembly, SF. I typically teach this workshop at General Assembly, San Francisco. To see a list of my upcoming classes, visit https://generalassemb.ly/instructors/seth-familian/4813
I also teach this workshop as a private lunch-and-learn or half-day immersive session for corporate clients. To learn more about pricing and availability, please contact me at http://familian1.com
3 Things Every Sales Team Needs to Be Thinking About in 2017 - Drift
Thinking about your sales team's goals for 2017? Drift's VP of Sales shares 3 things you can do to improve conversion rates and drive more revenue.
Read the full story on the Drift blog here: http://blog.drift.com/sales-team-tips
How to Become a Thought Leader in Your Niche - Leslie Samuel
Are bloggers thought leaders? Here are some tips on how you can become one. Provide great value, put awesome content out there on a regular basis, and help others.
Not Less, Not More: Exactly Once, Large-Scale Stream Processing in Action - Paris Carbone
Large-scale data stream processing has come a long way to where it is today. It combines all the essential requirements of modern data analytics: subsecond latency, high throughput and, impressively, strong consistency. Apache Flink is a system that serves as a proof of concept of these characteristics, and it is mainly well known for its lightweight fault tolerance. Data engineers and analysts can now let the system handle terabytes of computational state without worrying about failures that can potentially occur.
This presentation describes all the fundamental challenges behind exactly-once processing guarantees in large-scale streaming in a simple and intuitive way. Furthermore, it demonstrates the basic and extended versions of Flink's state-of-the-art snapshotting algorithm, tailored to the needs of a dataflow graph.
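A minimal sketch of how a Flink job opts into those snapshot-based exactly-once guarantees; the checkpoint interval and the trivial pipeline are arbitrary assumptions.

```java
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ExactlyOnceSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        // Inject checkpoint barriers into the dataflow every 10 seconds;
        // aligned snapshots give exactly-once guarantees for operator state.
        env.enableCheckpointing(10_000, CheckpointingMode.EXACTLY_ONCE);

        // Trivial pipeline; any stateful operators in it would now be
        // covered by the snapshotting algorithm.
        env.fromElements(1, 2, 3).print();

        env.execute("exactly-once-sketch");
    }
}
```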
Building the Ideal Stack for Machine Learning - SingleStore
Machine Learning is not new, but its application across memory-optimized distributed systems has led to an explosion in both the number and capability of its uses. Pandora develops personalized content recommendations with machine learning algorithms, Tesla has produced the first widely distributed autonomous vehicle, and Amazon uses autonomous robots to move packages within its warehouses and even deliver packages. When coupled with real-time data, advanced analytics approaches like machine learning and deep learning create immediate business opportunities.
Machine learning has never been more accessible—if your data pipelines support real-time analysis. Attendees will learn tools and techniques for integrating machine learning models across industries and organizations. Steven Camiña, MemSQL Product Manager, will walk through critical technologies needed in your technology ecosystem, including Python, Apache Kafka, Apache Spark, and a real-time database.
This presentation discusses the benefits of merging NPM & APM together to better assist problem response teams in troubleshooting network and application problems.
The presentation highlights a new product offering called NetPod which is a joint solution developed between Emulex and Dynatrace.
The presentation provides an introduction to enterprise application capacity planning using queuing models. Oracle Consulting uses the presented methodology to estimate the hardware architecture and capacity of enterprise applications planned for deployment for Oracle customers.
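For intuition, the kind of relationship such queuing models rest on, sketched here for the textbook M/M/1 case with arrival rate λ and service rate μ:

```latex
% M/M/1: utilization, mean response time, mean number in system
\rho = \frac{\lambda}{\mu}, \qquad
R = \frac{1}{\mu - \lambda} \quad (\lambda < \mu), \qquad
L = \frac{\rho}{1 - \rho}
```

Even this simplest model shows why response time explodes as utilization approaches 1 - exactly the regime capacity planning has to stay clear of.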
Lessons from Large-Scale Cloud Software at Databricks - Matei Zaharia
Keynote by Matei Zaharia at SOCC 2019
Abstract:
The cloud has become one of the most attractive ways for enterprises to purchase software, but it requires building products in a very different way from traditional software, which has not been heavily studied in research. I will explain some of these challenges based on my experience at Databricks, a startup that provides a data analytics platform as a service on AWS and Azure. Databricks manages millions of VMs per day to run data engineering and machine learning workloads using Apache Spark, TensorFlow, Python and other software for thousands of customers. Two main challenges arise in this context: (1) building a reliable, scalable control plane that can manage thousands of customers at once and (2) adapting the data processing software itself (e.g. Apache Spark) for an elastic cloud environment (for instance, autoscaling instead of assuming static clusters). These challenges are especially significant for data analytics workloads whose users constantly push boundaries in terms of scale (e.g. number of VMs used, data size, metadata size, number of concurrent users, etc). I’ll describe some of the common challenges that our new services face and some of the main ways that Databricks has extended and modified open source analytics software for the cloud environment (e.g., designing an autoscaling engine for Apache Spark and creating a transactional storage layer on top of S3 in the Delta Lake open source project).
Bio:
Matei Zaharia is an Assistant Professor of Computer Science at Stanford University and Chief Technologist at Databricks. He started the Apache Spark project during his PhD at UC Berkeley in 2009, and has worked broadly on datacenter systems, co-starting the Apache Mesos project and contributing as a committer on Apache Hadoop. Today, Matei tech-leads the MLflow open source machine learning platform at Databricks and is a PI in the DAWN Lab focusing on systems for ML at Stanford. Matei’s research was recognized through the 2014 ACM Doctoral Dissertation Award for the best PhD dissertation in computer science, an NSF CAREER Award, and the US Presidential Early Career Award for Scientists and Engineers (PECASE).
Early Draft: Service Mesh allows developers to focus on business logic while the cross-cutting network data-layer code is handled by the Service Mesh. This is a boon because this code can be tricky to implement and its edge cases are hard to test. Service Mesh takes this a few steps further than AOP, Servlet Filters or custom language-specific frameworks because it works regardless of the underlying programming language, which is great for polyglot development shops. It thus standardizes how these layers work while allowing teams to pick the best tools or languages for the job at hand. Kubernetes and Istio Service Mesh automate best practices for DevSecOps needs like failover, scale-out, scalability, health checks, circuit breakers, rate limiters, metrics, observability, avoiding cascading failure, disaster recovery, and traffic routing, supporting CI/CD and microservices architecture.
Istio’s ability to automate and maintain zero-trust networks is its most important feature. In the age of high-profile data breaches, security is paramount. Companies want to avoid major brand issues that impact the bottom line and shrink market capitalization in an instant. Istio provides a standard way to do mTLS and automatic certificate rotation, which helps prevent a breach and limits the blast radius if a breach occurs. Istio also removes the concern of mTLS from microservices deployments and makes it easy to use, taking the burden off application developers.
Why does application modernization in the form of decomposing monoliths result in so many microservices? Why have microservices become the default deployment model for applications? In this talk we will add some sanity to the process of constructing microservices and provide guidelines and design heuristics on structuring microservices. We will look at life after running microservices architectures in production and learn from the mistakes committed over the past five years. We will analyze real-life systems on the criteria for consolidating microservices into monoliths or moduliths, based on technical and business heuristics as illustrated in [4]. The techniques - a combination of mapping microservices to core technical attributes [2], reduced by affinity mapping and business domain context distillation [3] - have emerged from working with a number of customers where the value of microservices has not been realized despite leveraging Domain-Driven Design.
References:
[1] https://hackmd.io/10j-7DfqSIu1C8GQjHa1Bw?view
[2] https://content.pivotal.io/blog/should-that-be-a-microservice-keep-these-six-factors-in-mind
[3] https://bit.ly/2VFwDr1
[4] https://twitter.com/RKela/status/1227188151887843329/photo/1
Service Ownership with PagerDuty and Rundeck: Help others help you - TraciMyers5
Many engineering and operations teams would like to move to a Service Ownership: "You build it, you own it" operating model. However, as with many ancillary objectives driving DevOps across an organization, this is easier said than done. Often this is because teams lack the human-to-technology mechanisms that allow for a culture of service ownership.
Within the context of incident response, teams need to be able to clearly define who is responsible for tending to issues, how they're notified, and who to lean on for help. This is true for non-incident-response scenarios too. How can teams operate at a fast pace and at a large scale, while still maintaining valid and safe service ownership? One of the keys to allowing for service ownership outside of incident response is imbuing an organization with a culture of self-service operations. This is where a service owner builds and delegates self-service mechanisms that let end users (non-service-owners) make use of a given service safely while also reducing the number of interruptions to the service creator/owner.
In this webinar, you'll learn:
• How self-service helps organizations adopt a ‘You Build it, You Own it’ model
• Necessary mechanisms for service owners to create self-service interfaces to address the needs of their service users
• How to apply self-service while continuing to maintain security and compliance standards
• How to allow developers and SREs to safely delegate automation as self-service requests to other teams and IT users
• How to help developers regain productivity and quality of life by doing what they do best: coding
Grails has great performance characteristics, but as with all full-stack frameworks, attention must be paid to optimizing performance. In this talk Lari will discuss common missteps that can easily be avoided and share tips and tricks that help profile and tune Grails applications.
With larger teams, multiple UI developers building UIs that connect to the same singleton store creates issues. How we at Amdocs overcame this is presented here. A more detailed presentation covering it in depth will follow.
Slow things down to make them go faster [FOSDEM 2022] - Jimmy Angelakos
Talk from FOSDEM 2022
It's easy to get misled into overconfidence based on the performance of powerful servers, given today's monster core counts and RAM sizes. However, the reality of high concurrency usage is often disappointing, with less throughput than one would expect. Because of its internals and its multi-process architecture, PostgreSQL is very particular about how it likes to deal with high concurrency, and in some cases it can slow down to the point where it looks like it's not performing as it should. In this talk we'll take a look at potential pitfalls when you throw a lot of work at your database. Specifically, very high concurrency and resource contention can cause problems with lock waits in Postgres. Very high transaction rates can also cause problems of a different nature. Finally, we will be looking at ways to mitigate these by examining our queries and connection parameters, leveraging connection pooling and replication, or adapting the workload; a connection-pooling sketch follows the topic list below.
Topics:
1. Understand what we mean by high concurrency.
2. Understand ACID & MVCC in Postgres.
3. Understand how high concurrency affects Postgres performance.
4. Understand how locks/latches affect Postgres performance.
5. Understand how high transaction rates can affect Postgres.
6. Mitigation strategies for high concurrency scenarios.
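As one concrete mitigation (topic 6), a connection-pooling sketch with HikariCP that caps concurrent database connections instead of opening one per request; the JDBC URL, credentials and pool size are placeholder assumptions.

```java
import java.sql.Connection;
import java.sql.Statement;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PooledAccess {
    public static void main(String[] args) throws Exception {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/appdb");
        config.setUsername("app");
        config.setPassword("secret");
        // Slowing things down to go faster: cap concurrency near the
        // server's sweet spot instead of a connection per request.
        config.setMaximumPoolSize(10);

        try (HikariDataSource ds = new HikariDataSource(config);
             Connection conn = ds.getConnection();
             Statement stmt = conn.createStatement()) {
            stmt.execute("SELECT 1");
        }
    }
}
```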
30 Minutes to the Analytics Platform with Infrastructure as Code - Guido Schmutz
Analytical platforms for PoCs and evaluation can be built in the cloud in an hour - with ready-made setup scripts. But if you put the services together freely, it gets more difficult. The open-source platform-in-a-box "Platys" (https://github.com/TrivadisPF/platys) shows that it is easier for test and PoC environments. In addition to possible uses and examples, we explain services and "just briefly" set up a data lake with a database, event broker, stream processing, blob store, SQL access and data science notebook.
Event Broker (Kafka) in a Modern Data Architecture - Guido Schmutz
Today's modern data architectures and their implementations contain an Event Broker. What are the benefits of placing an Event Broker in a Modern Data (Analytics) Architecture? What exactly is an Event Broker and what capabilities should it provide? Why is Apache Kafka the most popular realisation of an Event Broker?
These and many other questions will be answered in this session. The talk will start with a vendor-neutral definition of the capabilities of an Event Broker.
Then the session will highlight the different architecture styles which can be supported using an Event Broker (Kafka), such as Streaming Data Integration, Stream Analytics and Decoupled Event-Driven Applications, and how these can be combined into a unified architecture, making the Event Broker the central nervous system of an enterprise architecture. We will end with an overview of the Kafka ecosystem and a placement of the various components onto the Modern Data (Analytics) Architecture.
Big Data, Data Lake, Fast Data - Data Serialization Formats - Guido Schmutz
The concept of "Data Lake" is in everyone's mind today. The idea of storing all the data that accumulates in a company in a central location and making it available sounds very interesting at first. But Data Lake can quickly turn from a clear, beautiful mountain lake into a huge pond, especially if it is inexpertly entrusted with all the source data formats that are common in today's enterprises, such as XML, JSON, CSV or unstructured text data. Who, after some time, still has an overview of which data, which format and how they have developed over different versions? Anyone who wants to help themselves from the Data Lake must ask themselves the same questions over and over again: what information is provided, what data types do they have and how has the content changed over time?
Data serialization frameworks such as Apache Avro and Google Protocol Buffers (Protobuf), which enable platform-independent data modeling and data storage, can help. This talk will discuss the possibilities of Avro and Protobuf and show how they can be used in the context of a data lake and what advantages can be achieved. The support for Avro and Protobuf in Big Data and Fast Data platforms is also a topic.
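A minimal Avro sketch of the core idea: each record carries an explicit, versionable schema, so a Data Lake consumer can always discover field names and types later. The schema and values are illustrative.

```java
import java.io.ByteArrayOutputStream;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;

public class AvroDemo {
    public static void main(String[] args) throws Exception {
        // The schema travels with (or alongside) the data, so consumers
        // never have to guess field names, types or versions.
        Schema schema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"Order\",\"fields\":["
                + "{\"name\":\"id\",\"type\":\"string\"},"
                + "{\"name\":\"amount\",\"type\":\"double\"}]}");

        GenericRecord order = new GenericData.Record(schema);
        order.put("id", "o-1001");
        order.put("amount", 99.95);

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(schema).write(order, encoder);
        encoder.flush();
        System.out.println("Serialized " + out.size() + " bytes");
    }
}
```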
ksqlDB is a stream processing SQL engine which allows stream processing on top of Apache Kafka. ksqlDB is based on Kafka Streams and provides capabilities for consuming messages from Kafka, analysing these messages in near real time with a SQL-like language and producing results back to a Kafka topic. That way, not a single line of Java code has to be written and you can reuse your SQL know-how. This lowers the bar for starting with stream processing significantly.
ksqlDB offers powerful stream processing capabilities, such as joins, aggregations, time windows and support for event time. In this talk I will present how ksqlDB integrates with the Kafka ecosystem and demonstrate how easy it is to implement a solution using ksqlDB for the most part. This will be done in a live demo on a fictitious IoT sample.
Kafka as your Data Lake - is it Feasible? - Guido Schmutz
For a long time we have discussed how much data we can keep in Kafka. Can we store data forever, or do we remove data after a while and perhaps keep the history in a data lake on Object Storage or HDFS? With the advent of Tiered Storage in the Confluent Enterprise Platform, storing data much longer in Kafka is very feasible. So can we replace a traditional data lake with just Kafka? Maybe at least for the raw data? But what about accessing the data, for example using SQL?
KSQL allows for processing data in a streaming fashion using a SQL-like dialect. But what about reading all the data of a topic? You can reset the offset and still use KSQL. But there is another family of products, so-called query engines for Big Data. They originate from the idea of reading Big Data sources such as HDFS, object storage or HBase using the SQL language. Presto, Apache Drill and Dremio are the most popular solutions in that space. Lately these query engines have also added support for Kafka topics as a source of data. With that you can read a topic as a table and join it with information available in other data sources. The idea of course is not real-time streaming analytics but batch analytics directly on the Kafka topic, without having to store it in a big data storage.
This talk answers how well these tools support Kafka as a data source. What serialization formats do they support? Is some form of predicate push-down supported, or do we always have to read the complete topic? How performant is a query against a topic compared to a query against the same data sitting in HDFS or an object store? And finally, will this allow us to replace our data lake, or at least part of it, with Apache Kafka?
Event Hub (i.e. Kafka) in Modern Data Architecture - Guido Schmutz
Today's modern data architectures and their implementations contain an Event Hub. What are the benefits of placing an Event Hub in a Modern Data (Analytics) Architecture? What exactly is an Event Hub and what capabilities should it provide? Why is Apache Kafka the most popular realization of an Event Hub?
These and many other questions will be answered in this session. The talk will start with a vendor-neutral definition of the capabilities of an Event Hub.
Then the session will highlight the different architecture styles which can be supported using an Event Hub (Kafka), such as Streaming Data Integration, Stream Analytics and Decoupled Event-Driven Applications, and how these can be combined into a unified architecture, making the Event Hub the central nervous system of an enterprise architecture. We will end with an overview of the Kafka ecosystem and a placement of the various components onto the Modern Data (Analytics) Architecture.
Solutions for bi-directional integration between Oracle RDBMS & Apache Kafka - Guido Schmutz
Apache Kafka is a popular distributed streaming data platform and, more and more, the architectural backbone for integrating streaming data with a Data Lake, Microservices and Stream Processing. A lot of data necessary in stream processing is stored in traditional systems backed by relational databases. This session will present different approaches for integrating relational databases with Kafka, such as Kafka Connect, Oracle GoldenGate, ORDS APIs and bridging Kafka with Oracle AQ.
Event Hub (i.e. Kafka) in Modern Data (Analytics) Architecture - Guido Schmutz
Today's modern data architectures and their implementations contain an Event Hub. What are the benefits of placing an Event Hub in a Modern Data (Analytics) Architecture? What exactly is an Event Hub and what capabilities should it provide? Why is Apache Kafka the most popular realization of an Event Hub? These and many other questions will be answered in this session. The talk will start with a vendor-neutral definition of the capabilities of an Event Hub. Then the session will highlight the different architecture styles which can be supported using an Event Hub (Kafka), such as Streaming Data Integration, Stream Analytics and Decoupled Event-Driven Applications, and how these can be combined into a unified architecture, making the Event Hub the central nervous system of an enterprise architecture. We will end with an overview of the Kafka ecosystem and a placement of the various components onto the Modern Data (Analytics) Architecture.
Building Event Driven (Micro)services with Apache Kafka - Guido Schmutz
What is a Microservices architecture and how does it differ from a Service-Oriented Architecture? Should you use traditional REST APIs to bind services together? Or is it better to use a richer, more loosely-coupled protocol? This talk will start with a quick recap of how we created systems over the past 20 years and how different architectures evolved from that. The talk will show how we piece services together in event driven systems, how we use a distributed log (event hub) to create a central, persistent history of events and what benefits we achieve from doing so.
Apache Kafka is a perfect match for building such an asynchronous, loosely-coupled event-driven backbone. Events trigger processing logic, which can be implemented in a more traditional as well as in a stream processing fashion. The talk will show the difference between request-driven and event-driven communication and when to use which. It highlights how modern stream processing systems can hold state both internally as well as in a database, and how this state can be used to further increase the independence of other services, the primary goal of a Microservices architecture.
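A minimal sketch of such event-driven processing logic, written with Kafka Streams; the topic names, JSON matching and derived event are illustrative assumptions.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class OrderEventProcessor {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "order-processor");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG,
                Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG,
                Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> orders = builder.stream("orders");
        // React to incoming events and emit a derived event; no service
        // calls another service directly -- coupling is only to the log.
        orders.filter((orderId, json) -> json.contains("\"status\":\"PLACED\""))
              .mapValues(json -> json.replace("PLACED", "VALIDATED"))
              .to("orders-validated");

        new KafkaStreams(builder.build(), props).start();
    }
}
```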
Location Analytics - Real-Time Geofencing using Apache Kafka - Guido Schmutz
An important underlying concept behind location-based applications is called geofencing. Geofencing is a process that allows acting on users and/or devices who enter/exit a specific geographical area, known as a geo-fence. A geo-fence can be dynamically generated—as in a radius around a point location, or a geo-fence can be a predefined set of boundaries (such as secured areas, buildings, borders of counties, states or countries).
Geofencing lays the foundation for realizing use cases around fleet monitoring, asset tracking, phone tracking across cell sites, connected manufacturing, ride-sharing solutions and many others.
GPS tracking constantly tells, in real time, where a device is located and forms the stream of events which needs to be analyzed against the much more static set of geo-fences. Many of the use cases mentioned above require low-latency actions to take place if a device enters or leaves a geo-fence, or when it is approaching such a geo-fence. That’s where streaming data ingestion and streaming analytics, and therefore the Kafka ecosystem, come into play.
This session will present how location analytics applications can be implemented using Kafka and KSQL & Kafka Streams. It highlights the exciting features available out of the box and then shows how easy it is to extend them with user-defined functions (UDFs). The design of such a solution, so that it can scale with both an increasing number of position events as well as geo-fences, will be discussed as well.
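To illustrate the UDF extension point, here is a sketch of a circular geo-fence check against ksqlDB's annotation-based Java UDF API; the function name and the haversine point-in-radius logic are our own illustrative choices, not code from the session.

```java
import io.confluent.ksql.function.udf.Udf;
import io.confluent.ksql.function.udf.UdfDescription;

@UdfDescription(name = "geofence_contains",
        description = "True if a position lies inside a circular geo-fence")
public class GeofenceContainsUdf {

    private static final double EARTH_RADIUS_M = 6_371_000.0;

    @Udf(description = "Point-in-radius check using the haversine distance")
    public boolean geofenceContains(final double lat, final double lon,
            final double fenceLat, final double fenceLon,
            final double radiusM) {
        double dLat = Math.toRadians(fenceLat - lat);
        double dLon = Math.toRadians(fenceLon - lon);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                + Math.cos(Math.toRadians(lat))
                * Math.cos(Math.toRadians(fenceLat))
                * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        // Great-circle distance from the device to the fence centre.
        double dist = 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
        return dist <= radiusM;
    }
}
```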
Solutions for bi-directional integration between Oracle RDBMS and Apache Kafka - Guido Schmutz
Apache Kafka is a popular distributed streaming data platform. A Kafka cluster stores streams of records (messages) in categories called topics. It is the architectural backbone for integrating streaming data with a Data Lake, Microservices and Stream Processing. Data sources flowing into Kafka are often native data streams such as social media streams, telemetry data, financial transactions and many others. But these data streams only contain part of the information. A lot of data necessary in stream processing is stored in traditional systems backed by relational databases. To implement new and modern, real-time solutions, an up-to-date view of that information is needed. So how do we make sure that information can flow between the RDBMS and Kafka, so that changes are available in Kafka as soon as possible, in near real time? This session will present different approaches for integrating relational databases with Kafka, such as Kafka Connect, Oracle GoldenGate and bridging Kafka with Oracle Advanced Queuing (AQ).
Most data visualisation solutions today still work on data sources which are stored persistently in a data store, using so-called "data at rest" paradigms. More and more data sources today provide a constant stream of data, from IoT devices to Social Media streams. These data streams publish with high velocity and messages often have to be processed as quickly as possible. For processing and analytics on the data, so-called stream processing solutions are available. But these provide only minimal or no visualisation capabilities. One option is to first persist the data into a data store and then use a traditional data visualisation solution to present the data. If latency is not an issue, such a solution might be good enough. Another question is which data store solution is necessary to keep up with the high load on write and read. If it is not an RDBMS but a NoSQL database, then not all traditional visualisation tools might already integrate with the specific data store. Another option is to use a Streaming Visualisation solution. These are specially built for streaming data and often do not support batch data. A much better solution would be to have one tool capable of handling both batch and streaming data. This talk presents different architecture blueprints for integrating data visualisation into a fast data solution and then shows how the different blueprints can be implemented by mapping products onto the blueprints.
Kafka as an event store - is it good enough? - Guido Schmutz
Event Sourcing and CQRS are two popular patterns for implementing a Microservices architecture. With Event Sourcing we do not store the state of an object, but instead store all the events impacting its state. Then to retrieve an object's state, we have to read the different events related to a certain object and apply them one by one. CQRS (Command Query Responsibility Segregation), on the other hand, is a way to dissociate writes (Command) and reads (Query). Event Sourcing and CQRS are frequently grouped and used together to form something bigger. While it is possible to implement CQRS without Event Sourcing, the opposite is not necessarily correct. In order to implement Event Sourcing, an efficient Event Store is needed. But is that also true when combining Event Sourcing and CQRS? And what is an event store in the first place, and what features should it implement?
This presentation will first discuss what functionalities an event store should offer and then present how Apache Kafka can be used to implement an event store. But is Kafka good enough or do specific event store solutions such as AxonDB or Event Store provide a better solution?
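A minimal sketch of the reading side when Kafka plays the event store: state is rebuilt by seeking to the beginning of a topic and folding the events. The topic, key and numeric event payloads are illustrative assumptions.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class AccountReplayer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("account-events", 0);
            consumer.assign(List.of(tp));
            consumer.seekToBeginning(List.of(tp)); // replay from the first event
            long balance = 0;
            // (A real replayer would poll in a loop until the end offset.)
            for (ConsumerRecord<String, String> rec :
                    consumer.poll(Duration.ofSeconds(5))) {
                // Fold each event into the current state of one aggregate.
                if (rec.key().equals("account-42")) {
                    balance += Long.parseLong(rec.value());
                }
            }
            System.out.println("account-42 balance = " + balance);
        }
    }
}
```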
Solutions for bi-directional Integration between Oracle RDBMS & Apache Kafka - Guido Schmutz
A Kafka cluster stores streams of records (messages) in categories called topics. It is the architectural backbone for integrating streaming data with a Data Lake, Microservices and Stream Processing. Today’s enterprises often have their core systems implemented on top of relational databases, such as the Oracle RDBMS. Implementing a new solution supporting the digital strategy using Kafka and its ecosystem cannot always be done completely separately from the traditional legacy solutions. Often streaming data has to be enriched with state data which is held in an RDBMS of a legacy application. It’s important to cache this data in the stream processing solution, so that it can be efficiently joined to the data stream. But how do we make sure that the cache is kept up to date if the source data changes? We can either poll for changes from Kafka using Kafka Connect or let the RDBMS push the data changes to Kafka. But what about writing data back to the legacy application, i.e. when an anomaly detected inside the stream processing solution should trigger an action inside the legacy application? Using Kafka Connect we can write to a database table or view, which could trigger the action. But this is not always the best option. If you have an Oracle RDBMS, there are many other ways to integrate the database with Kafka, such as Advanced Queuing (the message broker in the database), CDC through GoldenGate or Debezium, Oracle REST Data Services (ORDS) and more. In this session, we present various blueprints for integrating an Oracle RDBMS with Apache Kafka in both directions and discuss how these blueprints can be implemented using the products mentioned before.
Fundamentals Big Data and AI Architecture - Guido Schmutz
The right architecture is key for any IT project. This is especially the case for big data projects, where there are no standard architectures which have proven their suitability over years. This session discusses the different Big Data Architectures which have evolved over time, including traditional Big Data Architecture, Streaming Analytics architecture as well as Lambda and Kappa architecture and presents the mapping of components from both Open Source as well as the Oracle stack onto these architectures.
The right architecture is key for any IT project. This is valid for big data projects as well, but there are not yet many standard architectures that have proven their suitability over the years.
This session discusses different Big Data Architectures which have evolved over time, including traditional Big Data Architecture, Event Driven architecture as well as Lambda and Kappa architecture.
Each architecture is presented in a vendor- and technology-independent way using a standard architecture blueprint. In a second step, these architecture blueprints are used to show how a given architecture can support certain use cases and which popular open source technologies can help to implement a solution based on a given architecture.
Location Analytics - Real-Time Geofencing using Kafka - Guido Schmutz
An important underlying concept behind location-based applications is called geofencing. Geofencing is a process that allows acting on users and/or devices who enter/exit a specific geographical area, known as a geo-fence. A geo-fence can be dynamically generated—as in a radius around a point location, or a geo-fence can be a predefined set of boundaries (such as secured areas, buildings, borders of counties, states or countries). Geofencing lays the foundation for realising use cases around fleet monitoring, asset tracking, phone tracking across cell sites, connected manufacturing, ride-sharing solutions and many others. Many of the use cases mentioned above require low-latency actions to take place if a device enters or leaves a geo-fence, or when it is approaching such a geo-fence. That’s where streaming data ingestion and streaming analytics, and therefore the Kafka ecosystem, come into play. This session will present how location analytics applications can be implemented using Kafka and KSQL & Kafka Streams. It highlights the exciting features available out of the box and then shows how easy it is to extend them with user-defined functions (UDFs).
Most data visualization solutions today still work on data sources that are stored persistently in a data store, using so-called "data at rest" paradigms. More and more data sources today provide a constant stream of data, from IoT devices to social media streams. These data streams publish with high velocity, and messages often have to be processed as quickly as possible. For processing and analytics on such data, so-called stream processing solutions are available, but these provide minimal or no visualization capabilities. One option is to first persist the data into a data store and then use a traditional data visualization solution to present it. If latency is not an issue, such a solution might be good enough. Another question is which data store can keep up with the high read and write load. If it is not an RDBMS but a NoSQL database, not all traditional visualization tools may integrate with that specific data store. Another option is to use a streaming visualization solution. This talk presents different architecture blueprints for integrating data visualization into a fast data solution.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality - Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure OpenAI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, along with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerabilities and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of existing vulnerabilities, preventing the introduction of security issues into the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for secure software delivery. Gopi also has a strong connection with customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and a well-known leader in continuous delivery and in integrating security into software delivery.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview - Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We wrapped up with a lovely workshop in which participants tried to find different ways to think about quality and testing in the different parts of the DevOps infinity loop.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... - BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Generating a custom Ruby SDK for your web service or Rails API using Smithy - g2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
UiPath Test Automation using UiPath Test Suite series, part 4 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series, part 4. In this session, we will cover a Test Manager overview along with the SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
JMeter webinar - integration with InfluxDB and Grafana - RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Effective fault handling in SOA Suite 11g
1. Effective Fault Handling
in Oracle SOA Suite 11g
Ronald van Luttikhuizen [Vennster]
Guido Schmutz [Trivadis]
1-October-2012 | Oracle OpenWorld
2. Guido Schmutz
• Working for Trivadis for more than 15 years
• Oracle ACE Director for Fusion Middleware and SOA
• Co-Author of different books
• Consultant, Trainer, Software Architect for Java, Oracle, SOA and EDA
• Member of Trivadis Architecture Board
• Technology Manager @ Trivadis
• More than 20 years of software development experience
• Contact: guido.schmutz@trivadis.com
• Blog: http://guidoschmutz.wordpress.com
• Twitter: gschmutz
3. Ronald van Luttikhuizen
• Managing Partner at Vennster
• Oracle ACE Director for Fusion Middleware and SOA
• Author of different articles, co-author of an Oracle SOA 11g book
• Upcoming book: SOA Made Simple
• Architect, consultant, trainer for Oracle, SOA, EDA, Java
• More than 10 years of software development and architecture experience
• Contact: ronald.van.luttikhuizen@vennster.nl
• Blog: blog.vennster.nl
• Twitter: rluttikhuizen
4. Agenda
1. What is Fault Handling ?
2. Fault Handling in SOA vs. traditional systems
3. Scenario and Patterns
4. Implementation of Scenario
5. Summary and Best Practices
5. What is a Fault ?
● Something happened outside normal operational activity or “happy flow”
• Technical error
• Programming error
• Faulty operation by user
• Exceptional business behavior
● Prevention and handling
6. Two Types of Faults
Business faults
● Faults that service clients can expect and recover from
● Failure to meet a particular business requirement
● Often: expected, business value, contractual and recoverable
Technical faults
● Faults that service clients do not expect and cannot (easily) recover from
● Results of unexpected errors during runtime, e.g. null pointer errors, resources not available, and so on
● Often: unexpected, technical, implementation and non-recoverable
8. Business Fault (II)
3. Actual service response:
<soap:Envelope>
  <soap:Header/>
  <soap:Body>
    <soap:Fault>
      <faultcode>CST-1234</faultcode>
      <faultstring>Customer not found</faultstring>
      <detail>
        <CustomerNotFoundFault>
          <CustName>John Doe</CustName>
          <City>Long Beach</City>
        </CustomerNotFoundFault>
      </detail>
    </soap:Fault>
  </soap:Body>
</soap:Envelope>
9. Technical Fault
1. Service contract including fault:
<wsdl:operation name="orderProduct">
  <wsdl:input message="order:OrderProductMessage"/>
  <wsdl:output message="order:OrderProductResponseMessage"/>
  <wsdl:fault message="order:ProductNotInStockFaultMessage"
              name="ProductNotInStockFault"/>
  <wsdl:fault message="order:CustomerNotFoundFaultMessage"
              name="CustomerNotFoundFault"/>
</wsdl:operation>
2. Actual service response:
<soap:Body>
  <soap:Fault>
    <faultcode>S:Server</faultcode>
    <faultstring>Could not connect to URL 127.0.0.1 on port 8001</faultstring>
  </soap:Fault>
</soap:Body>
10. Agenda
1. What is Fault Handling ?
2. Fault Handling in SOA vs. traditional systems
3. Scenario and Patterns
4. Implementation of Scenario
5. Summary and Best Practices
11. Fault Handling SOA vs. traditional systems
[Diagram: external interface, BPM, and user as service consumers; an ESB in front of multiple implementations]
● Multiple service consumers
● Services part of larger unit
● Heterogeneous & external components
● Long running processes
● Asynchronous
● Timed events
● Often enterprise-wide
● Transactions
12. Agenda
1. What is Fault Handling ?
2. Fault Handling in SOA vs. traditional systems
3. Scenario and Patterns
4. Implementation of Scenario
5. Summary and Best Practices
13. [Scenario diagram with failure points:]
● Old system with limited scalability
● Short network interruptions
● No 7*24 availability for a single instance of the credit card service
● Responses sometimes get lost
● Fault if product is no longer available
● Not always available
14. Patterns for Fault Tolerant Software
Compensation
Exception shielding
(Limit) retry
Share the load
Alternative
Exception handler
Heartbeat
Throttling
15. Agenda
1. What is Fault Handling ?
2. Fault Handling in SOA vs. traditional systems
3. Scenario and Patterns
4. Implementation of Scenario
5. Summary and Best Practices
17. Product Management
Result Caching
Problem
● How not to overload the old, non-scalable product system with the new demand
Solution
● Use Result Caching to cache the product information (read-only operation)
● Use Service Throttling to limit the number of concurrent requests
18. Product Management
Result Caching
Results are returned from cache rather than always invoking the external service
● Product data is rather static, so an ideal candidate for caching
[Diagram: Proxy Service → Business Service → Product DB in OSB, with responses served from the Result Cache]
19. Product Management
Service Throttling
Restrict the number of messages on the message flow to a Business Service
[Diagram: Proxy Service → message buffer → Business Service → Product DB in OSB]
● Set from Operational Settings on the OSB console
20. Credit Card Booking
Retry Configuration
Problem
● Unstable network between us and the external services
Solution
● Use Retry mechanism of OSB to try multiple times
● No Fault Management necessary for service consumer if network interruption is only for a short time
21. Credit Card Booking
Retry Configuration
Configured on the business service in OSB
[Diagram: Proxy Service → Business Service in OSB → Visa Service; up to 5 retries, each after 2s]
22. Credit Card Booking
Service Pooling
Problem
● Credit Card Service does not guarantee 7*24 availability for one single instance
Solution
● Use the multiple instances (endpoints) that the company provides and use the service pooling feature of OSB
● No Fault Management for the service consumer if at least one endpoint is available
23. Credit Card Booking
Service Pooling
[Diagram: Proxy Service → Business Service in OSB → Credit Card Service instances 1, 2 and 3]
24. Order History
Fault Management Framework
Use Fault Policy Management in the Mediator to configure retry
Problem
● Unavailability of the Order History System should have no impact on the Business Process
Solution
● Use Mediator with the Fault Management Framework to configure retry, independent of the availability of the Order History Web Service
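The fault management framework is configured declaratively in a fault-policies.xml file that is bound to a composite or component in fault-bindings.xml. A minimal sketch of what such a retry policy could look like follows; the policy id, the component name OrderHistoryMediator, and the retry intervals are made up for illustration and are not taken from the demo project:

<!-- fault-policies.xml: retry remote faults a few times, then rethrow -->
<faultPolicies xmlns="http://schemas.oracle.com/bpel/faultpolicy">
  <faultPolicy version="2.0.1" id="OrderHistoryRetryPolicy">
    <Conditions>
      <faultName xmlns:bpelx="http://schemas.oracle.com/bpel/extension"
                 name="bpelx:remoteFault">
        <condition>
          <action ref="ora-retry"/>
        </condition>
      </faultName>
    </Conditions>
    <Actions>
      <Action id="ora-retry">
        <retry>
          <retryCount>3</retryCount>
          <retryInterval>2</retryInterval>
          <exponentialBackoff/>
          <retryFailureAction ref="ora-rethrow"/>
        </retry>
      </Action>
      <Action id="ora-rethrow">
        <rethrowFault/>
      </Action>
    </Actions>
  </faultPolicy>
</faultPolicies>

<!-- fault-bindings.xml: attach the policy to the Mediator component
     (the component name here is hypothetical) -->
<faultPolicyBindings version="2.0.1"
    xmlns="http://schemas.oracle.com/bpel/faultpolicy">
  <component faultPolicy="OrderHistoryRetryPolicy">
    <name>OrderHistoryMediator</name>
  </component>
</faultPolicyBindings>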
26. Order Handling Process
Return errors as synchronous response
Problem
● Both Product Management and Credit Card Booking can return Business Faults
Solution
● Handle errors and map them to errors returned to the service consumer (i.e. the caller of the process)
27. Order Handling Process
Return errors as synchronous response
Handle Business Faults in a BPEL error handler and reply with an error
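A minimal BPEL sketch of this pattern; the ord: namespace, partner link, and operation names are illustrative, not taken from the actual demo project:

<!-- Catch a declared business fault and reply it to the consumer -->
<scope name="HandleOrder">
  <faultHandlers>
    <catch faultName="ord:ProductNotInStockFault" faultVariable="ProductFault">
      <!-- optionally enrich the fault data, then reply with the
           fault declared in the WSDL contract -->
      <reply name="ReplyWithFault" partnerLink="OrderHandlingPL"
             portType="ord:OrderHandling" operation="handleOrder"
             faultName="ord:ProductNotInStockFault" variable="ProductFault"/>
    </catch>
  </faultHandlers>
  <!-- happy flow: invoke Product Management and Credit Card Booking -->
</scope>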
28. Order Handling Process (II)
Handle missing callback with timeout
Problem
● Order Processing response message can get lost in the Order Processing system, i.e. the callback message will never arrive in the process
Solution
● Timeout on the Wait For Answer with a BPEL pick activity with a timeout branch
● Undo the process by doing compensation
● Use the BPEL compensate activity together with a compensation handler to undo the Booking of the Credit Card
29. Order Handling Process (II)
Handle missing callback with timeout
Pick activity for handling the callback message, with a timeout branch
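A sketch of such a pick activity (BPEL 2.0 syntax; the partner link, operation, and the 30-minute deadline are illustrative assumptions):

<!-- Wait for the asynchronous callback, but give up after a deadline -->
<pick name="WaitForAnswer">
  <onMessage partnerLink="OrderProcessingPL"
             portType="op:OrderProcessingCallback"
             operation="orderProcessed" variable="CallbackMessage">
    <!-- normal flow continues after the callback arrives -->
    <empty name="ProcessCallback"/>
  </onMessage>
  <onAlarm>
    <!-- no callback within 30 minutes: raise a fault so the
         fault handler can trigger compensation -->
    <for>'PT30M'</for>
    <throw name="TimeoutFault" faultName="ord:CallbackTimedOut"/>
  </onAlarm>
</pick>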
30. Order Handling Process (III)
Compensation Handling
Problem
● Order Processing callback message can be a Product No Longer Available Business Fault
Solution
● Undo the process by doing compensation
● Use the BPEL compensate activity together with a compensation handler to undo the Booking of the Credit Card
31. Order Handling Process (III)
Compensation Handling
Compensate activity invokes compensation handling on the inner scope
• Can only be invoked from within a fault handler or another compensation handler
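A compact sketch of this structure (BPEL 2.0; scope, partner link, and operation names are made up for illustration; BPEL 1.1 uses <compensate scope="..."/> instead of <compensateScope>):

<scope name="OrderScope">
  <faultHandlers>
    <catch faultName="ord:ProductNoLongerAvailableFault">
      <!-- undo the already completed booking -->
      <compensateScope target="BookCreditCardScope"/>
    </catch>
  </faultHandlers>
  <sequence>
    <scope name="BookCreditCardScope">
      <compensationHandler>
        <!-- how to undo this scope once it has completed -->
        <invoke name="CancelBooking" partnerLink="CreditCardPL"
                portType="cc:CreditCardBooking" operation="cancelBooking"
                inputVariable="CancelRequest"/>
      </compensationHandler>
      <invoke name="BookCreditCard" partnerLink="CreditCardPL"
              portType="cc:CreditCardBooking" operation="bookCreditCard"
              inputVariable="BookingRequest"/>
    </scope>
    <!-- ... wait for the callback, which may raise the business fault ... -->
  </sequence>
</scope>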
32. Order Handling Process (V)
Generic Error Handler w. Fault Policy Framework
Problem
● Unexpected (technical) fault
● Multiple processes that deal with unexpected faults in their own way
Solution
● Use fault handler mechanism to enqueue on error queue without adding process logic
● Create one process to listen to error queue and handle faults
● Retrieve process information by using (composite) sensors
33. Order Handling Process (V)
Generic Error Handler w. Fault Policy Framework
Explanation of generic fault handler
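One way to wire such a generic handler is a fault policy whose condition matches all faults and whose action is a custom Java class. This is only a sketch: the class name is hypothetical, and the class would implement the fault management framework's IFaultRecoveryJavaClass interface to enqueue the fault (plus sensor data) on the error queue:

<faultPolicy version="2.0.1" id="GenericFaultPolicy"
    xmlns="http://schemas.oracle.com/bpel/faultpolicy">
  <Conditions>
    <!-- a faultName element without a name attribute matches any fault -->
    <faultName>
      <condition>
        <action ref="enqueue-on-error-queue"/>
      </condition>
    </faultName>
  </Conditions>
  <Actions>
    <Action id="enqueue-on-error-queue">
      <!-- hypothetical handler class that puts the fault on the error queue -->
      <javaAction className="com.example.fault.ErrorQueueHandler"
                  defaultAction="ora-human-intervention">
        <returnValue value="OK" ref="ora-human-intervention"/>
      </javaAction>
    </Action>
    <Action id="ora-human-intervention">
      <humanIntervention/>
    </Action>
  </Actions>
</faultPolicy>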
35. Agenda
1. What is Fault Handling ?
2. Fault Handling in SOA vs. traditional systems
3. Scenario and Patterns
4. Implementation of Scenario
5. Summary and Best Practices
36. Summary
Issue | Solution | Product
Overloading product management system | Throttling, result cache | OSB
Credit Card Service does not guarantee 7*24 uptime due to e.g. network problems | Multiple endpoints, service pooling | OSB
Guarantee message delivery to order management system | Availability of queues, enqueue and dequeue in service consumer transaction | OSB (and SOA Suite for XA propagation to OSB)
Returning business fault over async MEP from order management system | Separate operation and fault message contract | OSB and SOA Suite (callback between the two)
Order history service not available | Retry in Mediator using fault policy framework | SOA Suite
Business fault handling from service to process to consumer | Catch faults in process and reply fault to consumer | OSB and SOA Suite (correct contracts)
Detect missing response message | Timeout in pick activity | SOA Suite
Handle product no longer available | Compensation | SOA Suite
Avoid calling credit card booking twice | Set non-idempotent property | SOA Suite
Processes needing to deal with unexpected technical faults; all processes solving it in their own way using process logic | Fault policy framework, error queue, generic error handler, SOA Suite APIs & composite sensors | SOA Suite
37. Summary & Best Practices
● Differentiate between business and technical faults
● Design service contracts with faults in mind: formally describe business faults in service contracts
● Don't use exceptions as goto's
● Design with criticality, likeliness to fail, and cost in mind
● Differentiate fault patterns in OSB and BPM/BPEL
• OSB: Retry, throttling, transaction boundaries
• BPM/BPEL: Compensation, business fault handling, generic fault handler, timeout
● Handle unexpected errors generically
● Make services autonomous
● Fault-handling on scope of services and in wider perspective
38. Thank You!
Vennster
Ronald van Luttikhuizen
ronald.van.luttikhuizen@vennster.nl
Trivadis AG
Guido Schmutz
guido.schmutz@trivadis.com
BASEL BERN LAUSANNE ZÜRICH DÜSSELDORF FRANKFURT A.M. FREIBURG I.BR. HAMBURG MÜNCHEN STUTTGART WIEN