This presentation shows Spring Web Services, Spring Integration and Spring Batch applied to a typical scenario. It walks through the advantages of the technologies and their sweet spots.
Data Quality With or Without Apache Spark and Its Ecosystem - Databricks
A few solutions exist in the open-source community, either as libraries or as complete stand-alone platforms, that can be used to assure a certain level of data quality, especially when continuous imports happen. Organisations may consider picking one of the available options: Apache Griffin, Deequ, DDQ and Great Expectations. In this presentation we’ll compare these open-source products across dimensions such as maturity, documentation, extensibility, and features like data profiling and anomaly detection.
A framework is a large body of prewritten code to which you add your own code in order to solve a problem in a specific domain.
You make use of a framework by calling its methods, through inheritance, and by supplying callback listeners.
Spring is the most popular application development framework for enterprise Java™.
Millions of developers use Spring to create high performing, easily testable, reusable code without any lock-in.
High-Performance Advanced Analytics with Spark-Alchemy - Databricks
Pre-aggregation is a powerful analytics technique as long as the measures being computed are reaggregable. Counts reaggregate with SUM, minimums with MIN, maximums with MAX, etc. The odd one out is distinct counts, which are not reaggregable.
Traditionally, the non-reaggregability of distinct counts leads to an implicit restriction: whichever system computes distinct counts has to have access to the most granular data and touch every row at query time. Because of this, in typical analytics architectures, where fast query response times are required, raw data has to be duplicated between Spark and another system such as an RDBMS. This talk is for everyone who computes or consumes distinct counts and for everyone who doesn’t understand the magical power of HyperLogLog (HLL) sketches.
We will break through the limits of traditional analytics architectures using the advanced HLL functionality and cross-system interoperability of the spark-alchemy open-source library, whose capabilities go beyond what is possible with OSS Spark, Redshift or even BigQuery. We will uncover patterns for 1000x gains in analytic query performance without data duplication and with significantly less capacity.
We will explore real-world use cases from Swoop’s petabyte-scale systems, improve data privacy when running analytics over sensitive data, and even see how a real-time analytics frontend running in a browser can be provisioned with data directly from Spark.
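The reaggregability argument above can be made concrete in a few lines of Spark-free Python. An exact set stands in here for an HLL sketch, which approximates the same union-merge in a small, fixed amount of memory:

```python
# Why counts reaggregate but distinct counts don't. Each "partition"
# pre-aggregates locally; merging the pre-aggregates should equal
# aggregating the raw data.

partitions = [
    ["alice", "bob", "alice"],  # user IDs seen by partition 1
    ["bob", "carol"],           # user IDs seen by partition 2
]

# Counts reaggregate with SUM:
per_part_counts = [len(p) for p in partitions]
assert sum(per_part_counts) == len([u for p in partitions for u in p])

# Distinct counts do NOT reaggregate: summing per-partition distincts
# double-counts users that appear in more than one partition.
per_part_distinct = [len(set(p)) for p in partitions]
true_distinct = len(set(u for p in partitions for u in p))
print(sum(per_part_distinct), true_distinct)  # 4 3

# A sketch restores reaggregability, because sketches merge via union
# (an exact set is used here; HLL approximates it in fixed memory):
merged = set()
for p in partitions:
    merged |= set(p)
assert len(merged) == true_distinct
```

This is exactly why pre-aggregated distinct counts normally force a system to keep the raw rows around, and why mergeable sketches remove that restriction.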
Debugging PySpark: Spark Summit East talk by Holden Karau - Spark Summit
Apache Spark is one of the most popular big data projects, offering greatly improved performance over traditional MapReduce models. Much of Apache Spark’s power comes from lazy evaluation along with intelligent pipelining, which can make debugging more challenging. This talk will examine how to debug Apache Spark applications, the different options for logging in Spark’s variety of supported languages, as well as some common errors and how to detect them.
Spark’s own internal logging can often be quite verbose, and this talk will examine how to effectively search logs from Apache Spark to spot common problems. In addition to the internal logging, this talk will look at options for logging from within our program itself.
Spark’s accumulators have gotten a bad rap because of how they behave in the event of cache misses or partial recomputes, but this talk will look at how to use Spark’s current accumulators effectively for debugging, as well as a look to the future at data property accumulators, which may come to Spark in a future version.
In addition to reading logs, and instrumenting our program with accumulators, Spark’s UI can be of great help for quickly detecting certain types of problems.
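The accumulator-for-debugging pattern can be sketched without Spark: count malformed records on the side while transforming data, instead of failing the job or grepping per-record logs. A plain module-level counter stands in for `sc.accumulator(0)`:

```python
# Count bad records on the side while parsing, rather than failing.
bad_records = 0  # in Spark this would be sc.accumulator(0)

def parse(line):
    """Parse a line as an int; count (but skip) malformed input."""
    global bad_records
    try:
        return int(line)
    except ValueError:
        bad_records += 1
        return None

data = ["1", "2", "oops", "4", "???"]
parsed = [v for v in map(parse, data) if v is not None]
print(parsed, bad_records)  # [1, 2, 4] 2
```

Note the caveat from the abstract: in Spark, accumulator updates made inside transformations may be applied more than once if a stage is recomputed, while updates made inside actions are counted exactly once.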
Spark (Structured) Streaming vs. Kafka Streams - Guido Schmutz
Regardless of the source of the data, the integration and analysis of event streams is becoming ever more important in a world of sensors, social media streams and the Internet of Things. Events have to be accepted quickly and reliably, and they have to be distributed and analyzed, often with many consumers or systems interested in all or part of the events. In this session we compare two popular streaming analytics solutions: Spark Streaming and Kafka Streams.
Spark is a fast and general engine for large-scale data processing, designed as a more efficient alternative to Hadoop MapReduce. Spark Streaming brings Spark's language-integrated API to stream processing, letting you write streaming applications the same way you write batch jobs. It supports both Java and Scala.
Kafka Streams is the stream processing solution that is part of Kafka. It is provided as a Java library and can therefore be easily integrated into any Java application.
This presentation shows how you can implement stream processing solutions with each of the two frameworks, discusses how they compare and highlights the differences and similarities.
JVM Support for Multitenant Applications - Steve Poole (IBM) - jaxLondonConference
Presented at JAX London 2013
Per-tenant resource management can help ensure that collocated tenants peacefully share computational resources based on individual quotas. This session begins with a comparison of deployment models (shared: hardware, OS, middleware, everything) to motivate the multitenant approach. The main topic is an exploration of experimental data isolation and resource management primitives in IBM’s JDK that combine to help make multitenant applications smaller and more predictable.
Optimizing Delta/Parquet Data Lakes for Apache Spark - Databricks
This talk outlines data lake design patterns that can yield massive performance gains for all downstream consumers. We will talk about how to optimize Parquet data lakes and the awesome additional features provided by Databricks Delta:
- Optimal file sizes in a data lake
- File compaction to fix the small file problem
- Why Spark hates globbing S3 files
- Partitioning data lakes with partitionBy
- Parquet predicate pushdown filtering
- Limitations of Parquet data lakes (files aren't mutable!)
- Mutating Delta lakes
- Data skipping with Delta ZORDER indexes
Speaker: Matthew Powers
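The payoff of `partitionBy` can be illustrated with plain paths: data written as `.../date=YYYY-MM-DD/...` (the layout `partitionBy("date")` produces) lets an engine skip whole directories when a query filters on the partition column. A minimal, file-system-free sketch:

```python
# Partition pruning: a filter on the partition column eliminates whole
# directories without opening a single file.

paths = [
    "lake/events/date=2024-01-01/part-0.parquet",
    "lake/events/date=2024-01-02/part-0.parquet",
    "lake/events/date=2024-01-03/part-0.parquet",
]

def prune(paths, date):
    """Keep only paths whose partition value matches; nothing is read."""
    return [p for p in paths if f"date={date}" in p]

print(prune(paths, "2024-01-02"))
# ['lake/events/date=2024-01-02/part-0.parquet']
```

Parquet predicate pushdown then works the same trick one level down, skipping row groups inside the surviving files using their min/max statistics.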
This presentation explains the following topics in Spring Boot:
- Spring Boot vs. Spring vs. Spring MVC
- Advantages
- Where to start and how Spring Boot works
- Dependency Management
- Logging
- Exception Handling
- Database Handling
Microservices with Java, Spring Boot and Spring Cloud - Eberhard Wolff
Spring Boot makes creating small Java applications easy, and it also facilitates operations and deployment. But microservices need more: because microservices form a distributed system, issues like Service Discovery and Load Balancing must be solved. Spring Cloud adds those capabilities to Spring Boot using, for example, the Netflix stack. This talk covers Spring Boot and Spring Cloud and shows how these technologies can be used to create a complete microservices environment.
Large Scale Lakehouse Implementation Using Structured Streaming - Databricks
Business leads, executives, analysts, and data scientists rely on up-to-date information to make business decisions, adjust to the market, meet the needs of their customers, and run effective supply chain operations.
Come hear how Asurion used Delta, Structured Streaming, Auto Loader and SQL Analytics to improve production data latency from day-minus-one to near real time. Asurion’s technical team will share battle-tested tips and tricks you only get at a certain scale. Asurion’s data lake executes 4,000+ streaming jobs and hosts over 4,000 tables in its production data lake on AWS.
Javascript Prototypal Inheritance - Big Picture - Manish Jangir
Are you still confused about what prototypes really are in JavaScript? Read the slides from the very beginning and you will have a clear picture of inheritance in JavaScript. If you have any questions, please leave a comment and I will try to clarify.
Structured and centralized logging with Serilog - Denis Missias
Make your logs work for you: go beyond unstructured textual logs and create modern log information that is rich, structured and centralized.
https://serilog.net/
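Serilog is a .NET library, but the structured-logging idea it champions is language-agnostic. As a rough, hypothetical stand-in in Python (not Serilog's API), log events can be emitted as JSON with named properties instead of interpolating values into a message string, so a log store can later query on those properties:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render a log record as one JSON object per line."""
    def format(self, record):
        event = {
            "level": record.levelname,
            "message": record.getMessage(),
            **getattr(record, "props", {}),  # structured properties, if any
        }
        return json.dumps(event)

logger = logging.getLogger("demo")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)

# Properties travel as data, not as text baked into the message:
logger.warning("order rejected", extra={"props": {"order_id": 42, "reason": "stock"}})
# emits: {"level": "WARNING", "message": "order rejected", "order_id": 42, "reason": "stock"}
```

A centralized sink (Seq, Elasticsearch, etc.) can then filter on `order_id` or `reason` directly, which is the point of the talk above.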
Atlanta JUG - Integrating Spring Batch and Spring Integration - Gunnar Hillert
This talk is for everyone who wants to use Spring Batch and Spring Integration together efficiently. Users of Spring Batch often have requirements to interact with other systems, to schedule the periodic execution of Batch jobs and to monitor the execution of Batch jobs. Conversely, Spring Integration users periodically have Big Data processing requirements, be it, for example, the handling of large traditional batch files or the execution of Apache Hadoop jobs. For these scenarios, Spring Batch is the ideal solution. This session will introduce Spring Batch Integration, a project that provides support for easily tying Spring Batch and Spring Integration together. We will cover the following scenarios:
Launch Batch Jobs through Spring Integration Messages
Generate Informational Messages
Externalize Batch Process Execution using Spring Integration
Create Big Data Pipelines with Spring Batch and Spring Integration
Enterprise Integration Patterns with Spring Integration! - hegdekiranr
Spring Integration (distinct from the core Spring Framework) is an implementation of the Enterprise Integration Patterns.
It provides lightweight messaging within Spring-based applications and supports integration with external systems via declarative adapters.
Those adapters provide a higher level of abstraction over Spring's support for remoting, messaging, and scheduling.
S2GX 2012 - Introduction to Spring Integration and Spring Batch - Gunnar Hillert
In this session you will learn what Spring Integration and Spring Batch are all about, how they differ, their commonalities, and how you can use Spring Batch and Spring Integration together.
We will provide a short overview of the Enterprise Integration Patterns (EIP) as described in the highly influential book of the same name. Based on these patterns, we will then see how Spring Integration enables the development of Message-driven applications. This allows you to not only modularize new or existing applications but also makes it easy to integrate with external systems.
This session will also introduce Spring Batch. Spring Batch addresses the needs of any batch process, be it complex calculations in large financial institutions or simple data migration tasks as they exist in many software development projects. We will cover what Spring Batch is, how Spring approaches the concepts of batch processing, and how Spring scales batch processes to handle any volume of data.
You will also see how Spring Integration and Spring Batch maximize the reuse of the integration support provided by the core Spring Framework. In addition to providing a robust, proven foundation, this also flattens the learning curve considerably to all developers already familiar with Spring.
SOAP Web Services have a well established role in the enterprise, but aside from the many benefits of the WS-* standards, SOAP and XML also carry additional baggage for developers. Consequently, REST Web Services are gaining tremendous popularity within the developer community. This session will begin by comparing and contrasting the basic concepts of both SOAP and REST Web Services. Building on that foundation, Sam Brannen will show attendees how to implement SOAP-based applications using Spring-WS 2.0. He will then demonstrate how to build a similar REST-ful application using Spring MVC 3.0. The session will conclude with an in-depth look at both server-side and client-side development as well as efficient integration testing of Web Services using the Spring Framework.
To implement any non-trivial business process, you may have to connect systems exposed as web services, fire off events over message queues, notify users via email or social networks, and much more.
Apache Camel is a lightweight integration framework that helps you connect systems in a consistent and reliable way. Focus on the business reasons behind what's being integrated, not the underlying details of how.
Messaging is the backbone of many top enterprises. It affords reliable, asynchronous data passing to achieve loosely coupled, highly scalable distributed systems. As enterprises large and small become more interconnected, demand for remote and limited devices to be integrated with enterprise systems is surging. Come see how the most widely used, open-source messaging broker, Apache ActiveMQ, fits nicely and how it supports polyglot messaging.
Get the Most out of Testing with Spring 4.2 - Sam Brannen
Join Sam Brannen and Nicolas Fränkel to discover what's new in Spring Framework 4.2's testing support and learn tips and best practices for testing modern, Spring-based applications.
Sam Brannen is the Spring Test component lead and author of the Spring TestContext Framework, and Nicolas Fränkel is the author of the book "Integration Testing from the Trenches".
In this session, Sam and Nicolas will cover the latest testing features in Core Spring, Spring Boot, and Spring Security. In addition to new features, they will also present insider tips and best practices on integration testing with suites in TestNG, database transactions, SQL script execution, granularity of context configuration files, optimum use of the context cache, a discussion on TestNG vs. JUnit, and much more.
Presentation of SAPS at the 1st International Workshop on the Information-Centric Web (IC-Web 2011) at the 11th IEEE/IPSJ International Symposium on Applications and the Internet (SAINT 2011) in Munich, Germany
OData: Universal Data Solvent or Clunky Enterprise Goo? (GlueCon 2015) - Pat Patterson
Why would anyone but the most pedestrian enterprise developer be interested in a data access protocol originally designed by Microsoft, implemented in XML and handed to OASIS for standardization? The Open Data Protocol, or OData for short, has evolved into a clean, RESTful interface for CRUD operations against data services. Alongside the usual enterprise suspects such as Microsoft, Salesforce and IBM, OData has been adopted by government and non-profit agencies to open up their data and make it accessible to the public. For developers wanting to consume data, or create their own OData services, there's no shortage of open source options, from Apache Olingo in Java to node-odata and ODataCpp. Whether you're accessing customer orders in SAP or the Whitehouse visitor book, you're going to need some OData smarts.
Limiting software architecture to the traditional ideas is not enough for today's challenges. This presentation shows additional tools and how problems like maintainability, reliability and usability can be solved.
Continuous Delivery solves many current challenges, but its adoption is still limited. This talk shows the reasons for this and how to overcome these problems.
Four Times Microservices - REST, Kubernetes, UI Integration, Async - Eberhard Wolff
How you can build microservices:
- REST with the Netflix stack (Eureka for Service Discovery, Ribbon for Load Balancing, Hystrix for Resilience, Zuul for Routing)
- REST with Consul for Services Discovery
- REST with Kubernetes
- UI integration with ESI (Edge Side Includes)
- UI integration on the client with JavaScript
- Async with Apache Kafka
- Async with HTTP + Atom
This presentation shows several options for implementing microservices: the Netflix stack, Consul, and Kubernetes. Integration options such as REST and UI integration are also covered.
There are many different deployment options: package managers, tools like Chef or Puppet, PaaS, and orchestration tools. This presentation gives an overview of these tools and of approaches such as idempotent installation and immutable servers.
Held at Continuous Lifecycle 2016
How to Split Your System into Microservices - Eberhard Wolff
Splitting a system into microservices is a challenging task. This talk shows how ideas like Bounded Context, migration scenarios and technical constraints can be used to build a microservice architecture. Held at WJAX 2016.
Microservices and Self-contained Systems to Scale Agile - Eberhard Wolff
Architectures like Microservices and Self-contained Systems provide a way to support agile processes and scale them. Held at JUG Saxony Day 2016 in Dresden.
Data Architectures - Not Just for Microservices - Eberhard Wolff
Microservices change the way data is handled and stored. This presentation shows how Bounded Context, Events, Event Sourcing and CQRS provide new approaches to handle data.
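The event-sourcing idea mentioned above fits in a few lines: state is not stored directly but recovered as a left-fold over an append-only event log. The account domain below is purely illustrative:

```python
from functools import reduce

# The append-only log is the source of truth; state is derived from it.
events = [
    {"type": "Deposited", "amount": 100},
    {"type": "Withdrawn", "amount": 30},
    {"type": "Deposited", "amount": 5},
]

def apply(balance, event):
    """Fold one event into the current state."""
    if event["type"] == "Deposited":
        return balance + event["amount"]
    if event["type"] == "Withdrawn":
        return balance - event["amount"]
    return balance  # unknown events are ignored

balance = reduce(apply, events, 0)
print(balance)  # 75
```

CQRS then splits this in two: writes append events, while read models (like `balance`) are folded from the log independently, and can be rebuilt from scratch at any time.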
We assume software should contain no redundancies and that a clean architecture is the way to a maintainable system. Microservices challenge these assumptions. Keynote from Entwicklertage 2016 in Karlsruhe.
Nanoservices are smaller than Microservices. This presentation shows how technologies like Amazon Lambda, OSGi and Java EE can be used to enable such small services.
Microservices: Architecture to Scale Agile - Eberhard Wolff
Microservices allow for scaling agile processes. This presentation shows what Microservices are, what agility is and introduces Self-contained Systems (SCS). Finally, it shows how SCS can help to scale agile processes.
Microservices, DevOps, Continuous Delivery – More Than Three Buzzwords - Eberhard Wolff
Microservices, DevOps and Continuous Delivery are three of the biggest hypes at the moment. This talk looks into the relationships between the three approaches and gives an idea of how they help to solve concrete problems. Held at Continuous Lifecycle 2015.
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 - Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We closed with a lovely workshop in which the participants tried out different ways to think about quality and testing in the different parts of the DevOps infinity loop.
The Art of the Pitch: WordPress Relationships and Sales - Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers, without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
GraphRAG Is All You Need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Transcript: Selling digital books in 2024: Insights from industry leaders - T... - BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
JMeter webinar - integration with InfluxDB and Grafana - RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
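As a concrete flavour of what such an integration passes around: InfluxDB ingests points in its plain-text line protocol (`measurement,tags fields timestamp`). The sketch below formats a hypothetical JMeter-style response-time sample as a simplified line-protocol point; the measurement, tag and field names are illustrative, not JMeter's actual output schema, and real line protocol additionally marks integer fields with an `i` suffix and quotes string field values:

```python
def to_line(measurement, tags, fields, ts_ns):
    """Format one (simplified) InfluxDB line-protocol point.

    tags and fields are dicts; the timestamp is in nanoseconds.
    """
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

line = to_line(
    "jmeter",                                  # hypothetical measurement name
    {"label": "login", "status": "ok"},        # dimensions to group by in Grafana
    {"response_ms": 142, "threads": 25},       # the measured values
    1716900000000000000,
)
print(line)
# jmeter,label=login,status=ok response_ms=142,threads=25 1716900000000000000
```

Grafana then queries these series by measurement and tags, which is what makes per-label response-time dashboards possible during a running load test.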
Neuro-symbolic is not enough, we need neuro-*semantic* - Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply applying machine learning to just any symbolic structure is not sufficient to really harvest the gains of NeSy. These gains will only materialize when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology pushes into IT, I asked myself, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our beloved cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and guide you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply them to our own infrastructure from an enterprise perspective. I want to give an overview of the infrastructure requirements and technologies that could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working for real.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Spring Web Service, Spring Integration and Spring Batch
1. Spring Web Service, Spring
Integration and Spring Batch
Eberhard Wolff
Principal & Regional Director
SpringSource Germany
Copyright 2007 SpringSource. Copying, publishing or distributing without express written permission is prohibited.
2. About the presentation...
• Take a concrete project...
• How can you solve your problems (using the Spring
Portfolio)?
• You will see new Spring technologies in action...
• You will get a different impression about Spring...
• You will see how to do Web Services, EAI...
• ...and even batches
3. The Case
4. Build an Application
• Should process orders
• Orders may come in as a file
• Or with a web service
• Express orders are processed immediately
• Other orders in a batch at night for the
next day
5. Architecture
• I don't do Model Driven Development
– Sorry, Markus
• I don’t do PowerPoint architectures
• I have something far better...
6. Architecture
7. Web Services
8. How you could have done it...
• Generate a service POJO using Java
• Export it via XFire
• Clients use WSDL generated from the interface
• Drawbacks:
– XFire is deprecated and you depend on how it generates the
WSDL
– Exposes an interface that might be used internally
– ...and makes it unchangeable because external clients depend
on it
– Types like java.util.Map cannot be expressed in WSDL, so
workarounds must be used
9. Contract First
• Contract First: Define the interface before the
implementation
• Contract First is the key to interoperability
• Web Services are used for interoperability
• Not using Contract First with Web Services is therefore
unwise (almost a contradiction)
• Also good to organize projects
– Decide about the interface
– Start coding the implementation
– ...and the client
• That used to work well for CORBA in the last century...
10. So let’s use WSDL
• Send an order with some order items
• 80 lines of WSDL in Eclipse formatting
• Show in 5 point font here
• Just too much code

<?xml version="1.0" encoding="UTF-8"?>
<wsdl:definitions xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
    xmlns:sch="http://www.springsource.com/order"
    xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
    xmlns:tns="http://www.springsource.com/order"
    targetNamespace="http://www.springsource.com/order">
  <wsdl:types>
    <schema xmlns="http://www.w3.org/2001/XMLSchema"
        elementFormDefault="qualified"
        targetNamespace="http://www.springsource.com/order">
      <complexType name="Order">
        <sequence>
          <element maxOccurs="1" minOccurs="0" name="express"
              type="boolean" />
          <element maxOccurs="unbounded" minOccurs="1"
              name="order-item" type="tns:OrderElement" />
        </sequence>
      </complexType>
      <complexType name="OrderElement">
        <all>
          <element maxOccurs="1" minOccurs="1" name="count"
              nillable="false" type="positiveInteger" />
          <element maxOccurs="1" minOccurs="1" name="item"
              nillable="false" type="string" />
        </all>
      </complexType>
      <element name="orderRequest">
        <complexType>
          <sequence>
            <element name="order" type="tns:Order" />
          </sequence>
        </complexType>
      </element>
      <element name="orderResponse">
        <complexType>
          <sequence>
            <element name="result" type="string" />
          </sequence>
        </complexType>
      </element>
    </schema>
  </wsdl:types>
  <wsdl:message name="orderResponse">
    <wsdl:part element="tns:orderResponse" name="orderResponse" />
  </wsdl:message>
  <wsdl:message name="orderRequest">
    <wsdl:part element="tns:orderRequest" name="orderRequest" />
  </wsdl:message>
  <wsdl:portType name="Order">
    <wsdl:operation name="order">
      <wsdl:input message="tns:orderRequest" name="orderRequest" />
      <wsdl:output message="tns:orderResponse" name="orderResponse" />
    </wsdl:operation>
  </wsdl:portType>
  <wsdl:binding name="OrderSoap11" type="tns:Order">
    <soap:binding style="document"
        transport="http://schemas.xmlsoap.org/soap/http" />
    <wsdl:operation name="order">
      <soap:operation soapAction="" />
      <wsdl:input name="orderRequest">
        <soap:body use="literal" />
      </wsdl:input>
      <wsdl:output name="orderResponse">
        <soap:body use="literal" />
      </wsdl:output>
    </wsdl:operation>
  </wsdl:binding>
  <wsdl:service name="OrderService">
    <wsdl:port binding="tns:OrderSoap11" name="OrderSoap11">
      <soap:address
          location="http://localhost:8080/order-handling/services" />
    </wsdl:port>
  </wsdl:service>
</wsdl:definitions>
11. So?
• We want Contract First...
• ...but not with WSDL
• We mostly care about the data format
• ...which is defined with XML Schema
• Spring Web Services lets you focus on the
XML Schema
12. WSDL vs. XSD
• 34 lines are XSD
• These actually define the data on the wire
• Easy to deduce from XML sample messages
– Tool support: XML Spy, Trang, Microsoft XML to Schema …
• ...and can be used for POX (Plain Old XML) without SOAP as well
• The rest is SOAP binding/ports/operations
• Can the WSDL be generated?

(The slide shows the same WSDL as on the previous slide, with the XSD portion highlighted.)
13. XSD: Domain Types
<schema ...>
  <complexType name="Order">
    <sequence>
      <element name="customer-number" type="integer" />
      <element name="express" type="boolean" minOccurs="0" />
      <element name="order-item" type="tns:OrderElement"
          maxOccurs="unbounded" />
    </sequence>
  </complexType>
  <complexType name="OrderElement">
    <all>
      <element name="count" type="positiveInteger" />
      <element name="item" type="string" />
    </all>
  </complexType>

Note: Optional elements and positive integers cannot be expressed in Java, i.e. this is more expressive.
14. XSD: Request and Response
<element name="orderRequest">
<complexType>
<sequence>
<element name="order" type="tns:Order" />
</sequence>
</complexType>
</element>
<element name="orderResponse">
<complexType>
<sequence>
<element name="result" type="string" />
</sequence>
</complexType>
</element>
</schema>

WSDL can now be easily generated:
the operation is essentially defined.
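For illustration, a request message conforming to this schema might look like the following (the element values are made up):

```xml
<orderRequest xmlns="http://www.springsource.com/order">
  <order>
    <customer-number>42</customer-number>
    <express>true</express>
    <order-item>
      <count>2</count>
      <item>Spring in Action</item>
    </order-item>
  </order>
</orderRequest>
```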
15. Benefits
• Contract First becomes an option
• You can define the data format on the wire
– Interoperability
– Full power of XML Schema
• You are not tied to SOAP but can also use POX
• ...potentially with different transport like JMS
16. How do you handle the
message?
• Use an Object/XML mapper (OXM)
– e.g. JAXB, XStream, ...
• ...and make the request / response a Java Object
• Then handle it (much like a controller in MVC)
• Easy
• Adapter between external representation and internal
• Benefit: Decoupling business logic from changes of interface
17. How do you handle the
message?
• Class for Endpoint needs only some annotations
• Will be instantiated as a Spring Bean
• (Almost) no Spring configuration needed (Spring!=XML)
• ...but can be done in XML as well
• Spring==Freedom of Choice
@Endpoint
public class OrderEndpoint {
@PayloadRoot(localPart = "orderRequest",
namespace = "http://www.springsource.com/order")
public OrderResponse handleOrder(OrderRequest req) {
...
}
}
18. Problems...
• Robustness Principle (aka Postel’s Law):
– Be conservative in what you do; be liberal in what
you accept from others.
– I.e. try to accept every message sent to you
– I.e. only require the data filled that you really need
– ...but only send complete and totally valid responses
• Some Object/XML mappers support this
– but for some, the XML must be deserializable into
objects
19. XPath to the Rescue
• Only needed XML parts are read
@Endpoint
public class OrderEndpoint {
@PayloadRoot(localPart = "orderRequest",
namespace = "http://www.springsource.com/order")
public Source handleOrder(
@XPathParam("/tns:orderRequest/tns:order/tns:order-item")
NodeList orderItemNodeList,
@XPathParam("/tns:orderRequest/tns:order/tns:express/text()")
Boolean expressAsBoolean,
@XPathParam("/tns:orderRequest/tns:order/tns:customer/text()")
double customer) {
...
}
}
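Outside Spring-WS, the same robustness idea can be sketched with the JDK's built-in javax.xml.xpath API: only the parts of the message the endpoint actually needs are read, so extra or unknown elements do not break it. This is an illustrative sketch under that assumption, not the Spring-WS machinery; the class name and sample message are made up.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;

import javax.xml.XMLConstants;
import javax.xml.namespace.NamespaceContext;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;

import org.w3c.dom.Document;

public class XPathOrderDemo {

    // Maps the tns prefix to the order namespace used in the WSDL/XSD
    static final NamespaceContext CTX = new NamespaceContext() {
        public String getNamespaceURI(String prefix) {
            return "tns".equals(prefix)
                ? "http://www.springsource.com/order" : XMLConstants.NULL_NS_URI;
        }
        public String getPrefix(String uri) { return null; }
        public Iterator<String> getPrefixes(String uri) { return null; }
    };

    // Reads only the express flag; everything else in the message is ignored
    public static boolean isExpress(String xml) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder()
            .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        XPath xp = XPathFactory.newInstance().newXPath();
        xp.setNamespaceContext(CTX);
        String value = (String) xp.evaluate(
            "/tns:orderRequest/tns:order/tns:express/text()",
            doc, XPathConstants.STRING);
        return Boolean.parseBoolean(value);
    }

    public static void main(String[] args) throws Exception {
        String xml = "<orderRequest xmlns=\"http://www.springsource.com/order\">"
            + "<order><express>true</express>"
            + "<order-item><count>2</count><item>book</item></order-item>"
            + "</order></orderRequest>";
        System.out.println(isExpress(xml)); // prints "true"
    }
}
```

Because the XPath only touches the elements it names, a sender can add fields the endpoint has never heard of without breaking anything, which is exactly the liberal-acceptance half of Postel's Law.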
20. Web Services
• Using Spring Web Services you can...
– ...use Contract First without the WSDL overhead
– ...decouple the business logic from interface and clients
– ...implement robust Web Services using XPath easily
– ...or with an Object/XML mapper (less robust but easier)
• Currently in 1.5.6
– 1.5 introduced jms-namespace, email transport, JMS
transport...
21. Architecture
22. The core
23. The core
• Essentially an integration issue
• A batch or JMS (online) output
• A Web Service or file input
• Internal routing and handling
• Best practice: Pipes and Filters
– Pipes transfer messages
– ...and store / buffer them
– Filters handle them (routing, etc.)
• Spring Integration supports this
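Stripped of Spring Integration, the Pipes-and-Filters idea can be sketched with plain java.util.concurrent queues: the queues play the pipes (transferring and buffering messages) and a routing method plays a filter. The channel names mirror the ones used in the deck; everything else is made up for illustration.

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class PipesAndFiltersDemo {

    // Filter logic: decide the output channel for a message
    public static String route(String message) {
        return message.startsWith("EXPRESS")
            ? "expressfulfillment" : "normalfulfillment";
    }

    public static void main(String[] args) throws InterruptedException {
        // Pipes: queues transfer messages and buffer them
        BlockingQueue<String> fulfillment = new LinkedBlockingQueue<>();
        BlockingQueue<String> express = new LinkedBlockingQueue<>();
        BlockingQueue<String> normal = new LinkedBlockingQueue<>();
        Map<String, BlockingQueue<String>> channels =
            Map.of("expressfulfillment", express, "normalfulfillment", normal);

        fulfillment.put("EXPRESS 2x book");
        fulfillment.put("NORMAL 1x pen");

        // Filter: take each message from the input pipe, route it onward
        while (!fulfillment.isEmpty()) {
            String msg = fulfillment.take();
            channels.get(route(msg)).put(msg);
        }
        System.out.println(express.size() + " express, " + normal.size() + " normal");
        // prints "1 express, 1 normal"
    }
}
```

Because the queues buffer, a filter on another thread could consume from its channel asynchronously, which is the decoupling the pattern is after.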
24. Pipes and Filters: Frontend
[Diagram: Web Service (just covered) → fulfillment; File → incomingfile → FileParser → fulfillment]
25. Spring Integration
Configuration for Frontend
<file:inbound-channel-adapter id="incomingfilename"
directory="file:/tmp/files">
<poller>
<interval-trigger interval="1000" />
</poller>
</file:inbound-channel-adapter>
<file:file-to-string-transformer
delete-files="true" input-channel="incomingfilename"
output-channel="incomingfile" />
The directory is polled and the file name is read.
Then the content is put as a string into the incomingfile channel.
[Diagram: Web Service → fulfillment; File → incomingfile → FileParser → fulfillment]
26. Parsing the file
@MessageEndpoint // class automatically instantiated as a Spring Bean
public class FileParser {
    @Resource // MessageChannel with matching name is injected
    private MessageChannel fulfillment;

    @ServiceActivator(inputChannel = "incomingfile")
    // method called each time a message is available on the channel incomingfile
    public void handleFile(String content) {
        Order order = ...;
        GenericMessage<Order> orderMessage =
            new GenericMessage<Order>(order);
        fulfillment.send(orderMessage);
    }
}

[Diagram: Web Service → fulfillment; File → incomingfile → FileParser → fulfillment]
27. Pipes and Filters: Backend
[Diagram: the fulfillment channel (backed by a queue, so further processing is asynchronous) feeds the FulFillmentRouter; express orders go via the expressfulfillment channel to JMS, normal orders via the normalfulfillment channel to NormalFulFillment.]
28. FulFillmentRouter:
express or normal?
Class automatically instantiated as a Spring Bean
@MessageEndpoint
public class FulFillmentRouter {
@Router(inputChannel="fulfillment")
public String routeOrder(Order order) {
if (order.isExpress()) {
return "expressfulfillment";
} else {
return "normalfulfillment";
}
}
}
29. JMS Adapter
for express orders
<beans ...>
<bean id="connectionFactory"
class="org.apache.activemq.ActiveMQConnectionFactory">
...
</bean>
<jms:outbound-gateway id="jmsout"
request-channel="expressfulfillment"
request-destination="fulfillment-queue" />
</beans>
JMS adapter handles JMS replies transparently i.e. they are
sent to correct Spring Integration response channel
Adapters allow the integration of external systems
with FTP, RMI, HttpInvoker, File, Web Services etc.
30. Database for normal orders
@MessageEndpoint
public class NormalFulFillment {
private OrderDao orderDao;
@Autowired
public void setOrderDao(OrderDao orderDao) {
this.orderDao = orderDao;
}
@ServiceActivator(inputChannel = "normalfulfillment")
public Order execute(Order order) {
...
}
}

RendezvousChannel is used to feed back the success,
passed in as the reply-to header in the message.
31. Other remarks
• Sending a message can be asynchronous using a queue
• I.e. a different thread will take it and actually handle it
• Spring Integration can also be used with:
– Plain XML Spring configuration
– By setting up the environment with Java
• Spring Integration is currently in 1.0.2
32. Architecture
33. The Order Processing Batch
34. Batches
• Typically consists of steps
• Steps read and write data
• In this case only one step: Read the orders,
process them, write them back
• Typical issues:
– Restarts
– Optimizations (i.e. commits)
– Large volumes of data cannot be loaded at once
35. How to read the data...
• Load data using a cursor
• Alternative: Only read the primary keys
• Alternative: Load chunks of data
• No option: Load all data (just too much
data)
36. Spring Batch Configuration:
Processing and Invoices
<job id="fulfillmentjob">
  <step id="process" next="invoice">
    <tasklet>
      <chunk reader="processOrderReader"
             writer="processOrderWriter"
             commit-interval="10" />
    </tasklet>
  </step>
  <step id="invoice">
    ...
  </step>
</job>

• Commit every 10 items
• Commit optimization is transparent
• Thanks to Spring transaction support

Store for restarts etc.:
<job-repository id="jobRepository" data-source="dataSource"
    transaction-manager="transactionManager" />
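The commit-interval mechanics can be sketched in plain Java: the step collects items into chunks of commit-interval size and commits one transaction per chunk rather than per item. This is a simplified illustration of the idea, not Spring Batch's actual implementation; the class and method names are made up.

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkDemo {

    // Processes items in chunks of `commitInterval` size and returns the
    // number of commits, i.e. transactions, that would be needed.
    public static int process(List<String> items, int commitInterval) {
        int commits = 0;
        List<String> chunk = new ArrayList<>();
        for (String item : items) {
            chunk.add(item);                 // the reader hands items over one by one
            if (chunk.size() == commitInterval) {
                // writer.write(chunk) would run here; the transaction commits
                commits++;
                chunk.clear();
            }
        }
        if (!chunk.isEmpty()) commits++;     // final partial chunk still commits
        return commits;
    }

    public static void main(String[] args) {
        List<String> orders = new ArrayList<>();
        for (int i = 0; i < 25; i++) orders.add("order-" + i);
        System.out.println(process(orders, 10)); // prints "3": 10 + 10 + 5
    }
}
```

With commit-interval="10", 25 orders mean 3 commits instead of 25, which is the transparent commit optimization the configuration above buys.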
37. Order Reader
<bean id="processOrderReader"
class="....JdbcCursorItemReader">
<property name="dataSource" ref="dataSource"/>
<property name="fetchSize" value="10" />
<property name="sql"
value="SELECT * FROM T_ORDER WHERE C_PROCESSED=0" />
<property name="rowMapper">
<bean class="....OrderParameterizedRowMapper"/>
</property>
</bean>
Read the data using a database cursor
38. Order Writer
public class OrderBatchProcess
extends AbstractItemStreamItemWriter<Order> {
private OrderDao orderDao;
@Autowired
public void setOrderDao(OrderDao orderDao) {
this.orderDao = orderDao;
}
    public void write(List<? extends Order> items) throws Exception {
        for (Order item : items) {
            // do the processing
            item.setProcessed(true);
            orderDao.update(item);
        }
    }
}

• Just a POJO service
• Works on a chunk
39. Other features of
Spring Batch
• More complex batches with dependent steps, validation, etc.
• Other data sources (e.g. XML files, other files)
• ...and other targets for the data
• Can be controlled using JMX
• Status of job / step instance is automatically persisted
• Therefore: Easy restart
• The presented job is restartable anyway
• Scalability by Remote Chunking and Partitioning
• Spring Batch is in 2.0.1
40. Σ
Sum up
41. Sum up
Σ
• Spring is far more than the Spring Framework
• Spring Web Services:
– Contract first using XML Schema
– Robust implementations using XPath
– Easy annotation based programming model
• Spring Integration:
– Infrastructure for asynchronous Pipes and Filters including
Routing etc.
– Integration with JMS, Files, XML, databases... available
– Annotations, plain Spring XML or plain Java code
42. Sum up
Σ
• Spring Batch
– Easy infrastructure for Batches
– Restart etc. included
– Monitoring possible
43. Build Run
Manage