The document discusses systems integration in the cloud era. It introduces Apache Camel as a tool that can help with cloud integration. Apache Camel supports integration across the cloud computing models IaaS, PaaS, and SaaS; it implements common integration patterns and connects to many cloud platforms and services through custom components. The key messages are that the cloud has arrived and must be integrated, that cloud integration is already possible today, and that Apache Camel in particular provides strong support for it through its components.
WSO2 is a global open source software company that provides a middleware platform for enterprise integration. This document discusses integration patterns that can be implemented using WSO2's middleware products, including the Enterprise Service Bus (ESB), which supports all enterprise integration patterns and can integrate disparate systems. Specific patterns covered include service orchestration, RESTful integration, SAP integration, guaranteed delivery, API facades, cloud-to-cloud and cloud-to-on-premise integration, high availability, and security patterns. Real-world use cases demonstrate how to achieve integration for connected businesses.
WaveMaker - Spring Roo - SpringSource Tool Suite - Choosing the right tool fo... (martinlippert)
This document compares and contrasts three tools for developing Spring applications: WaveMaker, Spring Roo, and SpringSource Tool Suite. WaveMaker is a visual tool for quickly creating standard web apps without coding; it generates a Spring-based backend and uses JavaScript for the frontend. Spring Roo scaffolds Java and Spring code to reduce boilerplate. SpringSource Tool Suite is an Eclipse-based IDE that enhances the Java experience for Spring projects. The document recommends combining WaveMaker for frontend development with Spring Roo/SpringSource Tool Suite for backend development, and notes that the tools can easily be used together on the same project.
Integrating Apache Wookie with AEM by Rima Mittal and Ankit Gubrani (AEM HUB)
This document discusses integrating the Apache Wookie widget container with Adobe Experience Manager (AEM). It introduces Apache Wookie and how it works, then covers installing and using the AEM-Wookie Connector Tool to connect an AEM instance to a Wookie server to reuse Wookie's widget pool in AEM. The document demonstrates the connector tool in action.
Azure DevOps integrations with Jenkins (Damien Caro)
The document discusses various ways to integrate Jenkins with Azure services. It describes the Azure Storage, Slave, and Container Service plugins that help with continuous integration and deployment workflows. It also mentions that the Azure Slave v2 plugin, Azure Container Service plugin, and Azure DevOps integrations portal will soon be available in public preview, while an Azure Jenkins image is already in the Azure Marketplace. NASA's JPL uses Jenkins with Azure services like storage and virtual machines for continuous integration of their Mars terrain pipeline code.
Azure Functions enable the creation of event-driven, compute-on-demand systems that can be triggered by various external events. In this session, you will learn
1. How to leverage functions to execute server-side logic
2. Build serverless architectures
3. Key Vault integration
4. Leveraging durable features
5. Hosting web sites
6. Applying dependency injection
7. Monitoring functions
8. Script-based deployment
The document discusses digital transformation with Red Hat hybrid cloud. It begins by outlining some common business pain points and challenges around technical debt, digitalization, time to market, and return on investment. It then covers key technology trends like cloud-native applications, AI/ML, IoT, blockchain, and more. The rest of the document focuses on how Red Hat's portfolio, including OpenShift and middleware solutions, can help customers address these trends and challenges as part of their digital transformation journey by enabling new application development approaches, modernizing infrastructure, and optimizing processes.
MS Insights Brazil 2015: containers and DevOps (Damien Caro)
A talk about DevOps and containers at MS Insights São Paulo 2015.
Are containers the solution to implementing DevOps practices? This talk includes a demonstration that shows the integration between Visual Studio Online, Docker Hub, and GitHub for continuous integration and automated deployment.
Pivotal microservices spring_pcf_skillsmatter.pptx (Sufyaan Kazi)
This document summarizes a presentation on microservices and how to implement them using Spring Cloud and Netflix OSS. It defines microservices as loosely coupled services with bounded contexts. It discusses challenges of microservices like configuration management, service discovery, routing, and fault tolerance. It presents Spring Cloud and Netflix OSS as tools that can help implement microservices and address these challenges. These include services for configuration, service registration and discovery, circuit breakers, and dashboards. It argues that a platform like Cloud Foundry and Spring Cloud services can help develop and operate microservices at scale.
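The circuit breaker mentioned above can be sketched in a few lines of plain Java. This is a hypothetical miniature of the pattern, not the actual Hystrix or Spring Cloud API; the class and method names are invented:

```java
import java.util.function.Supplier;

// Minimal circuit breaker: after `threshold` consecutive failures the circuit
// "opens" and subsequent calls return the fallback immediately instead of
// hitting the (presumably still failing) remote service.
class CircuitBreaker {
    private final int threshold;
    private int failures = 0;
    private boolean open = false;

    CircuitBreaker(int threshold) { this.threshold = threshold; }

    <T> T call(Supplier<T> remote, Supplier<T> fallback) {
        if (open) return fallback.get();   // fail fast, protect the caller
        try {
            T result = remote.get();
            failures = 0;                  // a success resets the count
            return result;
        } catch (RuntimeException e) {
            if (++failures >= threshold) open = true;
            return fallback.get();
        }
    }
}

class CircuitBreakerDemo {
    public static void main(String[] args) {
        CircuitBreaker breaker = new CircuitBreaker(2);
        Supplier<String> failing = () -> { throw new RuntimeException("service down"); };
        System.out.println(breaker.call(failing, () -> "fallback")); // failure 1
        System.out.println(breaker.call(failing, () -> "fallback")); // failure 2, circuit opens
        System.out.println(breaker.call(() -> "ok", () -> "fallback")); // open: fallback, remote never called
    }
}
```

A production breaker would also re-close after a timeout (the "half-open" state); that is omitted here for brevity.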
Transaction Control – a Functional Approach to Modular Transaction Management... (mfrancis)
OSGi Community Event 2016 Presentation by Tim Ward (Paremus)
Transactions are a critical part of almost all Enterprise applications, but correctly managing those transactions isn’t always easy. This is particularly true in a dynamic, modular world where you need to be certain that everything is ready before you begin.
With the advent of lambda expressions and functional interfaces we now have new, better tools for defining transactional work. The OSGi Transaction Control service uses these functional programming techniques to scope transactions and resource access, providing control and flexibility while leaving business logic uncluttered. The resulting solution is decoupled, modular and requires no container magic at all, making testing and portability a breeze.
Background
Software-controlled transactions have existed for a long time; commercial products that are still available today can trace their origins back to the 1960s. Since that time a lot has changed: first came the rise of C, then of object-oriented programming, then of the Web, and now of microservices.
Over the same period there have been significant changes in the way transactions are managed: either transaction boundaries are explicitly declared, or the management role is delegated to a container technology. Given the complexity of correctly managing the transaction lifecycle, container-managed solutions are regarded as the gold standard; however, they introduce their own problems.
The rise of the Spring framework was a reaction to the complexity and heavy-touch management of the original Java EE specifications. Instead, Spring focused on “pure POJO” programming, designed to make your code easily portable, runnable and testable inside or outside the container.
While Spring did a much better job of hiding complexity than those early Java EE servers, the fundamental problem with any pure declarative approach is that there must be a container somewhere. Without a container there is no code to start or end the transaction. Even now with Spring, EJB 3.2, CDI etc, the promise of simpler, container independent components is an illusion.
The big problem with declarative transaction management is that it tries to take away too much from the application code, replacing it with “container magic”. The problem with relying on magic is that the resulting system ends up being more complex, not less. We therefore should be aiming to simplify and minimise transaction management code, not eliminate it entirely. Java’s support for functional techniques opens a whole new set of API possibilities for transaction management, and the Apache Aries project has been exploring the possibilities of providing generic resource and transaction management in a concise, type-safe way. Examples from this project demonstrate how transaction management can be made both simple and explicit at the same time.
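The functional approach described above can be illustrated in plain Java. The TxControl class below is an invented, simplified stand-in for the OSGi Transaction Control service's required(...) scope, not the real API: the lambda delimits the transaction, committing on normal return and rolling back on exception.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;

// Toy functional transaction scope: the business logic passed in as a lambda
// stays free of begin/commit/rollback calls, yet the boundary is explicit.
class TxControl {
    final List<String> log = new ArrayList<>();

    <T> T required(Callable<T> work) {
        log.add("begin");
        try {
            T result = work.call();
            log.add("commit");
            return result;
        } catch (Exception e) {
            log.add("rollback");
            throw new RuntimeException("work failed, transaction rolled back", e);
        }
    }
}

class TxDemo {
    public static void main(String[] args) {
        TxControl tx = new TxControl();
        String r = tx.required(() -> "order stored");
        System.out.println(r + " " + tx.log); // order stored [begin, commit]
    }
}
```

With the real service, a resource such as a JDBC connection would be enlisted in the scope; the point here is only that no container is needed to make the transaction boundary explicit.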
This document provides an overview of Adobe Experience Manager (AEM) and related technologies. It notes that AEM originated at Day Software, founded in 1993 and acquired by Adobe in 2010; it is now an integral part of the Adobe Marketing Cloud and a leader in web content management. The document outlines key AEM features such as a touch-optimized UI, mobile app creation, commerce integration, and the use of Apache Sling and the Java Content Repository in its architecture.
This document discusses conducting user research for an API management product. It outlines the research process of empathizing with users through interviews and personas, defining problems by synthesizing findings, ideating solutions through design studios, and prototyping and testing solutions. The research uncovered that API consumers want easy discovery and secure access to APIs without involving operations. It also found that API managers want observability into API usage. Outcomes of the research included API discovery and documentation pages and secure access pages. The value of user research is building empathy, validating assumptions, and providing structured feedback.
This document discusses Spring Boot and Spring Cloud. It provides an overview of how Pivotal enables digital transformation through agile development practices and cloud native platforms. It describes capabilities of Spring Boot like quick project generation and auto configuration. It also discusses how Spring Cloud provides services for microservices like configuration, service registration and discovery, and fault tolerance with circuit breakers. The document includes code samples and demos the creation of a simple Spring Boot application and adding Spring MVC functionality with annotations. It promotes attending hands-on labs to learn how to use Spring Boot and Spring Cloud.
Serverless with Spring Cloud Function, Knative and riff #SpringOneTour #s1t (Toshiaki Maki)
This document summarizes a presentation about serverless computing using Spring Cloud Function, Knative, and riff. It discusses what serverless computing is, an overview of Spring Cloud Function for developing serverless applications, and how Knative and riff can be used as platforms to deploy serverless workloads on Kubernetes. Code examples are provided to demonstrate invoking functions via HTTP and messaging with Spring Cloud Function and deploying functions to Knative and riff.
AWS DevDay Cologne - CI/CD for modern applications (Cobus Bernard)
The document discusses approaches for modern application development including continuous integration, continuous deployment, infrastructure as code, microservices, and serverless technologies. It provides examples of using AWS services like CodePipeline, CodeBuild, CodeDeploy, SAM, and CDK to implement infrastructure as code, continuous integration, and continuous deployment. The document contains diagrams and code samples to illustrate these concepts and services.
Cloud-Native Streaming and Event-Driven Microservices (VMware Tanzu)
Marius Bogoevici, Spring Cloud Stream Lead
Join us for an introduction to Spring Cloud Stream, a framework for creating event-driven microservices that builds on the ease of development and execution of Spring Boot, the cloud-native capabilities of Spring Cloud, and the message-driven programming model of Spring Integration. See how Spring Cloud Stream’s abstractions and opinionated primitives allow you to easily build applications that can interchangeably use RabbitMQ, Kafka or Google PubSub without changing the application logic. Finally, we will show how these applications can be orchestrated and deployed on different modern runtimes such as Cloud Foundry, Kubernetes or Mesos using Spring Cloud Data Flow.
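The interchangeability claim can be sketched without the framework itself: the application's logic is a plain java.util.function.Function, and a "binder" wires it to a transport. The InMemoryBinder below is an invented stand-in for the RabbitMQ/Kafka/PubSub binders, not Spring Cloud Stream's actual API:

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.Function;

// The application's entire job: a transport-agnostic message processor.
class UppercaseProcessor implements Function<String, String> {
    public String apply(String payload) { return payload.toUpperCase(); }
}

// A binder stands in for RabbitMQ, Kafka or PubSub; swapping brokers means
// swapping binders, while the processor above is never touched.
class InMemoryBinder {
    private final Queue<String> in = new ArrayDeque<>();
    private final Queue<String> out = new ArrayDeque<>();

    void publish(String message) { in.add(message); }

    void bind(Function<String, String> processor) {
        while (!in.isEmpty()) out.add(processor.apply(in.poll()));
    }

    String receive() { return out.poll(); }
}

class BinderDemo {
    public static void main(String[] args) {
        InMemoryBinder binder = new InMemoryBinder();
        binder.publish("hello stream");
        binder.bind(new UppercaseProcessor());
        System.out.println(binder.receive()); // HELLO STREAM
    }
}
```

In the real framework the binder is chosen by a dependency on the classpath, which is exactly why the application logic never changes when the broker does.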
How Bitbucket Pipelines Loads Connect UI Assets Super-fast (Atlassian)
Connect add-ons deliver better user experience when they load fast. Between CDN, server-side rendering, service workers, and code splitting, there are loads of techniques you can use to achieve this. In this session, Atlassian Developer Peter Plewa will reveal Bitbucket Pipelines' secret for fast loads, and what they can do in the future to make Pipelines even faster.
Peter Plewa, Development Principal, Atlassian
Adopting Java for the Serverless world at Serverless Meetup New York and Boston (Vadym Kazulkin)
Java has been one of the most popular programming languages for many years, but it long had a hard time in the serverless community: Java is known for high cold-start times and a high memory footprint, and you pay your cloud provider for both. That is why most developers avoided Java for such use cases. But times change: the community and the cloud providers are steadily improving things for Java developers. In this talk we look at the features and possibilities AWS offers Java developers, and at the most popular Java frameworks, such as Micronaut, Quarkus and Spring (Boot), examining how they address serverless challenges (AOT compilation and GraalVM native images play a huge role) and enable Java for broad usage in the serverless world.
Become a hackathon champion with this useful collection of tutorials and examples, with links to source code and videos.
Get ready for the next hackathon with IBM Bluemix.
Microservices Architecture for MEAN Applications using Serverless AWS (Mitoc Group)
Digital platforms are by nature resource-intensive, expensive to build, and difficult to manage at scale. What if we could change this perception and help MEAN developers architect a digital platform that is low cost and low maintenance? This session describes the underlying architecture behind www.deep.mg, the microservices marketplace built by Mitoc Group and powered by AWS abstracted services like AWS Lambda, Amazon CloudFront, and Amazon DynamoDB. Eugene Istrati, the CTO of Mitoc Group, will dive deep into their approach to microservices architecture in serverless environments and demonstrate how anyone can architect AWS abstracted services to achieve high scalability, high availability, and high performance without huge effort or expensive resource allocation.
Dynamically assembled REST Microservices using JAX-RS and... Microservices? -... (mfrancis)
OSGi Community Event 2016 Presentation by Neil Bartlett (Paremus)
REST microservices are a powerful tool for composing large-scale systems, and the standalone nature of a microservice helps to avoid it becoming part of a “big ball of mud” application. Given the power and success of microservices as inter-process modules, why stop there? OSGi has offered in-process microservices for nearly two decades, and uses them to great effect in modular applications.
The new OSGi JAX-RS whiteboard service allows dynamic OSGi services to be automatically exported as JAX-RS Resources, Filters or Applications. These “Microservice modules” can be easily shared or moved between frameworks, allowing you to benefit from a microservice structure that goes all the way down.
Background
Over the last decade there has been a significant shift in the way that many computer programs are written. The focus has changed from building larger, more monolithic applications that provide a single high-level function, to composing these high-level behaviours from groups of smaller, distributed services. This is generally known as a “microservice” architecture, indicating that the services are smaller and lighter weight than typical web services.
The standard for REST microservices in Java is JAX-RS. JAX-RS provides a simple annotation-based model in which POJOs have their methods mapped to RESTful service invocations. HTTP parameters and the HTTP response are mapped automatically, based on the annotations and the incoming HTTP headers. JAX-RS also supports grouping these POJOs into a single Application artifact, which allows the POJOs to interact with one another and to share configuration and runtime state. In JAX-RS these POJOs are known as JAX-RS resources.
Ideal JAX-RS resources are stateless, and are usually instantiated by the container. JAX-RS resources share many features with OSGi services, in that they provide a way for machines (or processes within a machine) to interact with one another through a defined contract. This synergy between JAX-RS resources and OSGi services is the driver for the OSGi JAX-RS whiteboard service, allowing OSGi services to be transparently exposed using JAX-RS.
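The annotation-to-invocation mapping described above can be shown with a toy dispatcher. The @Path annotation and Dispatcher here are invented miniatures of what JAX-RS and a whiteboard implementation do for real:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

// Invented stand-in for JAX-RS's @Path, kept visible at runtime for reflection.
@Retention(RetentionPolicy.RUNTIME)
@interface Path { String value(); }

// A plain, stateless POJO resource: methods are mapped to paths by annotation alone.
class GreetingResource {
    @Path("/hello")
    public String hello() { return "Hello, world"; }

    @Path("/bye")
    public String bye() { return "Goodbye"; }
}

// Toy dispatcher: invoke the resource method whose annotation matches the path.
class Dispatcher {
    static String dispatch(Object resource, String path) {
        try {
            for (Method m : resource.getClass().getMethods()) {
                Path p = m.getAnnotation(Path.class);
                if (p != null && p.value().equals(path)) {
                    return (String) m.invoke(resource);
                }
            }
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
        return "404";
    }

    public static void main(String[] args) {
        System.out.println(Dispatcher.dispatch(new GreetingResource(), "/hello")); // Hello, world
        System.out.println(Dispatcher.dispatch(new GreetingResource(), "/nope"));  // 404
    }
}
```

A real JAX-RS runtime additionally converts method parameters and return values to and from HTTP, which is omitted here.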
Smart Enterprise Application Integration with Apache Camel (Kai Wähner)
This document discusses a workshop on smart enterprise application integration using Apache Camel. The key messages are to understand enterprise integration patterns, understand the idea behind Apache Camel, and learn to use Apache Camel through hands-on examples. The agenda includes discussing enterprise integration patterns, Apache Camel, a toy shop use case demonstration, followed by a live coding session using Apache Camel. The goal is to explain the concepts through practical application rather than just theory.
Ten Battle-Tested Tips for Atlassian Connect Add-ons (Atlassian)
The document provides 10 tips for building battle-tested Atlassian Connect add-ons:
1. Automate deployments so they are a single button press.
2. Create rules for deploying to production to make it easy and safe.
3. Understand dependencies and implications of what is built and used.
4. Use other services where it makes sense to avoid reinventing the wheel.
5. Monitor components, servers, applications, users to know where failures happen.
6. Have recovery plans tested regularly to prepare for failures.
7. Handle failures by focusing on fixing issues with notifications and status updates.
8. Plan for traffic patterns to ensure scaling is possible when needed.
David Bosschaert & Carsten Ziegeler - Adobe
"The OSGi platform powering AEM provides a dynamic module system and enables component oriented development. Besides serving the as foundation for AEM, there are benefits for application developers.
This talk outlines the ease of use of OSGi in application code and shows how to master development tasks by using the right APIs and tools. Learn about the latest in component development, asynchronous processing, configuration management and deploying your application code in larger modules, so-called subsystems. A subsystem allows to package a set of bundles and configurations. The subsystem can run isolated from other bundles or other applications.
Learn how to leverage the latest OSGi tech for your own projects. All of the functionality discussed works directly with in AEM 6.1, GA now.
Make the most of the power of OSGi.
Whizlabs webinar - Deploying Portfolio Site with AWS Serverless (Dhaval Nagar)
In this session, we go through the AWS Serverless eco-system and demo of how to deploy a static site using the following services.
Serverless Framework
Route53
AWS Certificate Manager
S3
CloudFront
API Gateway
DynamoDB
SNS
1. The document discusses a single mail client solution for integrating Lotus Domino and Microsoft Exchange with Liferay that was developed by PRODYNA AG.
2. It describes NABUCCO Groupware, an integration layer developed by PRODYNA to provide Liferay with enterprise-level integration of groupware applications like mail, calendar, contacts and tasks from systems like Lotus Domino and Exchange.
3. It explains the DEILA framework, which was used to build the user interface for NABUCCO Groupware and integrate it with Liferay.
This document summarizes new features in Visual Studio 2010, .NET 4.0, and C# 4.0. Key updates include improved tooling for cloud, parallel, and TDD development in Visual Studio 2010. .NET 4.0 features enhancements to the base class library like code contracts and parallel extensions. New C# 4.0 features are dynamic lookup, named and optional arguments, improvements for COM interop, and variance support through out and in keywords. The presenter encourages attendees to try the Visual Studio 2010 CTP and familiarize themselves with these new technologies.
Jazoon 2012 - Systems Integration in the Cloud Era with Apache Camel (Kai Wähner)
The document discusses systems integration in the cloud era using Apache Camel. It introduces Apache Camel as an open source framework that implements enterprise integration patterns and supports integration with various cloud platforms and services. It provides examples of using Apache Camel to integrate with IaaS platforms like Amazon Web Services, PaaS platforms like Google App Engine, and SaaS services like Salesforce. The key messages are that cloud integration is already possible, the cloud needs to be integrated, and Apache Camel helps significantly with cloud integration.
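One of the enterprise integration patterns Camel implements, the content-based router, can be sketched in plain Java. This toy version only mimics the shape of Camel's choice()/when() DSL; the endpoint URIs are invented examples, and the real Camel API is not used:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Predicate;

// Content-based router: each message goes to the first endpoint whose
// predicate matches its content, or to a default endpoint otherwise.
class ContentBasedRouter {
    private final Map<Predicate<String>, String> routes = new LinkedHashMap<>();
    private String otherwise = "dead-letter";

    ContentBasedRouter when(Predicate<String> condition, String endpoint) {
        routes.put(condition, endpoint);
        return this;
    }

    ContentBasedRouter otherwise(String endpoint) {
        this.otherwise = endpoint;
        return this;
    }

    String route(String message) {
        for (Map.Entry<Predicate<String>, String> entry : routes.entrySet()) {
            if (entry.getKey().test(message)) return entry.getValue();
        }
        return otherwise;
    }
}

class RouterDemo {
    public static void main(String[] args) {
        ContentBasedRouter router = new ContentBasedRouter()
            .when(m -> m.contains("order"), "aws-sqs:orders")   // invented endpoint names
            .when(m -> m.contains("invoice"), "smtp:billing")
            .otherwise("file:archive");
        System.out.println(router.route("new order #42")); // aws-sqs:orders
        System.out.println(router.route("hello"));         // file:archive
    }
}
```

In Camel the endpoints would be real component URIs (for an IaaS queue, a PaaS mail service, and so on), which is how one route definition spans cloud and on-premise systems.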
This document provides an overview of microservices architecture and how to implement it using Amazon ECS and Docker containers. It discusses what microservices are, characteristics of the architecture, and how ECS provides a fully managed platform for deploying and scheduling containers. It also covers task placement strategies, running services on ECS, and reference architectures like continuous deployment, secrets management, and service discovery that align with the twelve-factor app methodology. Finally, it introduces Blox, an open source project that aims to simplify deploying and managing microservices on ECS.
Transaction Control – a Functional Approach to Modular Transaction Management...mfrancis
OSGi Community Event 2016 Presentation by Tim Ward (Paremus)
Transactions are a critical part of almost all Enterprise applications, but correctly managing those transactions isn’t always easy. This is particularly true in a dynamic, modular world where you need to be certain that everything is ready before you begin.
With the advent of lambda expressions and functional interfaces we now have new, better tools for defining transactional work. The OSGi Transaction Control service uses these functional programming techniques to scope transactions and resource access, providing control and flexibility while leaving business logic uncluttered. The resulting solution is decoupled, modular and requires no container magic at all, making testing and portability a breeze.
Background
Software controlled transactions have existed for a long time — commercial products that are still available now can trace their origins back to the 1960s. Since that time a lot has changed, first we saw the rise of C, then of Object Oriented programming, then of the Web, and now of Microservices.
Over the same time period there have been significant changes to the way that transactions are managed – either transaction boundaries have to be explicitly declared, or the management role is delegated to a container technology. Given the complexity of correctly managing the transaction lifecycle, container managed solutions are regarded as the gold standard, however container managed solutions introduce their own problems.
The rise of the Spring framework was a reaction to the complexity, and heavy-touch management of the original Java EE specifications. Instead Spring focussed on “pure POJO” programming, designed to make your code easily portable, runnable and testable inside or outside the container.
While Spring did a much better job of hiding complexity than those early Java EE servers, the fundamental problem with any pure declarative approach is that there must be a container somewhere. Without a container there is no code to start or end the transaction. Even now with Spring, EJB 3.2, CDI etc, the promise of simpler, container independent components is an illusion.
The big problem with declarative transaction management is that it tries to take away too much from the application code, replacing it with “container magic”. The problem with relying on magic is that the resulting system ends up being more complex, not less. We therefore should be aiming to simplify and minimise transaction management code, not eliminate it entirely. Java’s support for functional techniques opens a whole new set of API possibilities for transaction management, and the Apache Aries project has been exploring the possibilities of providing generic resource and transaction management in a concise, type-safe way. Examples from this project demonstrate how transaction management can be made both simple and explicit at the same time.
This document provides an overview of Adobe Experience Manager (AEM) and related technologies. It discusses that AEM was founded in 1993 as Day Software and acquired by Adobe in 2010. It is now an integral part of the Adobe Marketing Cloud and is a leader in web content management. The document outlines key AEM features such as a touch-optimized UI, mobile app creation, commerce integration, and use of Apache Sling and Java Content Repository for its architecture.
This document discusses conducting user research for an API management product. It outlines the research process of empathizing with users through interviews and personas, defining problems by synthesizing findings, ideating solutions through design studios, and prototyping and testing solutions. The research uncovered that API consumers want easy discovery and secure access to APIs without involving operations. It also found that API managers want observability into API usage. Outcomes of the research included API discovery and documentation pages and secure access pages. The value of user research is building empathy, validating assumptions, and providing structured feedback.
This document discusses Spring Boot and Spring Cloud. It provides an overview of how Pivotal enables digital transformation through agile development practices and cloud native platforms. It describes capabilities of Spring Boot like quick project generation and auto configuration. It also discusses how Spring Cloud provides services for microservices like configuration, service registration and discovery, and fault tolerance with circuit breakers. The document includes code samples and demos the creation of a simple Spring Boot application and adding Spring MVC functionality with annotations. It promotes attending hands-on labs to learn how to use Spring Boot and Spring Cloud.
Serverless with Spring Cloud Function, Knative and riff #SpringOneTour #s1tToshiaki Maki
This document summarizes a presentation about serverless computing using Spring Cloud Function, Knative, and riff. It discusses what serverless computing is, an overview of Spring Cloud Function for developing serverless applications, and how Knative and riff can be used as platforms to deploy serverless workloads on Kubernetes. Code examples are provided to demonstrate invoking functions via HTTP and messaging with Spring Cloud Function and deploying functions to Knative and riff.
AWS DevDay Cologne - CI/CD for modern applicationsCobus Bernard
The document discusses approaches for modern application development including continuous integration, continuous deployment, infrastructure as code, microservices, and serverless technologies. It provides examples of using AWS services like CodePipeline, CodeBuild, CodeDeploy, SAM, and CDK to implement infrastructure as code, continuous integration, and continuous deployment. The document contains diagrams and code samples to illustrate these concepts and services.
Cloud-Native Streaming and Event-Driven MicroservicesVMware Tanzu
MARIUS BOGOEVICI SPRING CLOUD STREAM LEAD
Join us for an introduction to Spring Cloud Stream, a framework for creating event-driven microservices that builds on the ease of development and execution of Spring Boot, the cloud-native capabilities of Spring Cloud, and the message-driven programming model of Spring Integration. See how Spring Cloud Stream’s abstractions and opinionated primitives allow you to easily build applications that can interchangeably use RabbitMQ, Kafka or Google Pub/Sub without changing the application logic. Finally, we will show how these applications can be orchestrated and deployed on different modern runtimes such as Cloud Foundry, Kubernetes or Mesos using Spring Cloud Data Flow.
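The broker interchangeability described above rests on a binder abstraction: the application talks to named channels, and a binder maps those channels onto RabbitMQ, Kafka, or Google Pub/Sub. A minimal, framework-free Python sketch of the idea (the names below are illustrative, not Spring Cloud Stream's actual API):

```python
class InMemoryBinder:
    """Stand-in for a broker binder (RabbitMQ/Kafka/PubSub in the real framework)."""

    def __init__(self):
        self.subscribers = {}

    def subscribe(self, destination, handler):
        self.subscribers.setdefault(destination, []).append(handler)

    def publish(self, destination, message):
        for handler in self.subscribers.get(destination, []):
            handler(message)

def uppercase_processor(binder, inbound, outbound):
    """Application logic binds to channel names, never to a concrete broker."""
    binder.subscribe(inbound, lambda msg: binder.publish(outbound, msg.upper()))
```

Swapping `InMemoryBinder` for a Rabbit- or Kafka-backed implementation changes the transport without touching `uppercase_processor`, which is the point of the abstraction.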
How Bitbucket Pipelines Loads Connect UI Assets Super-fastAtlassian
Connect add-ons deliver better user experience when they load fast. Between CDN, server-side rendering, service workers, and code splitting, there are loads of techniques you can use to achieve this. In this session, Atlassian Developer Peter Plewa will reveal Bitbucket Pipelines' secret for fast loads, and what they can do in the future to make Pipelines even faster.
Peter Plewa, Development Principal, Atlassian
Adopting Java for the Serverless world at Serverless Meetup New York and BostonVadym Kazulkin
Java has been one of the most popular programming languages for many years, but it long had a hard time in the serverless community. Java is known for high cold-start times and a high memory footprint, and you pay your cloud provider for both. That's why most developers avoided Java for such use cases. But times change: the community and the cloud providers steadily improve things for Java developers. In this talk we look at the features and possibilities the AWS cloud offers Java developers, examine the most popular Java frameworks, like Micronaut, Quarkus and Spring (Boot), and see how they address serverless challenges and enable Java for broad use in the serverless world (AOT compilation and GraalVM native images play a huge role here).
Become a hackathon champion with this useful collection of tutorials and examples, with links to source code and videos.
Get ready for the next hackathon with IBM Bluemix.
Microservices Architecture for MEAN Applications using Serverless AWSMitoc Group
Digital platforms are by nature resource intensive, expensive to build, and difficult to manage at scale. What if we can change this perception and help MEAN developers architect a digital platform that is low cost and low maintenance? This session describes the underlying architecture behind www.deep.mg, the microservices marketplace built by Mitoc Group and powered by AWS abstracted services like AWS Lambda, Amazon CloudFront, and Amazon DynamoDB. Eugene Istrati, the CTO of Mitoc Group, will dive deep into their approach to microservices architecture on serverless environments and demonstrate how anyone can architect AWS abstracted services to achieve high scalability, high availability, and high performance without huge effort or expensive resource allocation.
Dynamically assembled REST Microservices using JAX-RS and... Microservices? -...mfrancis
OSGi Community Event 2016 Presentation by Neil Bartlett (Paremus)
REST microservices are a powerful tool for composing large-scale systems, and the standalone nature of a microservice helps to avoid it becoming part of a “big ball of mud” application. Given the power and success of microservices as inter-process modules, why stop there? OSGi has offered in-process microservices for nearly two decades, and uses them to great effect in modular applications.
The new OSGi JAX-RS whiteboard service allows dynamic OSGi services to be automatically exported as JAX-RS Resources, Filters or Applications. These “Microservice modules” can be easily shared or moved between frameworks, allowing you to benefit from a microservice structure that goes all the way down.
Background
Over the last decade there has been a significant shift in the way that many computer programs are written. The focus has changed from building larger, more monolithic applications that provide a single high-level function, to composing these high-level behaviours from groups of smaller, distributed services. This is generally known as a “microservice” architecture, indicating that the services are smaller and lighter weight than typical web services.
The standard for REST microservices in Java is known as JAX-RS. JAX-RS provides a simple annotation-based model in which POJOs can have their methods mapped to RESTful service invocations. HTTP parameters and the HTTP response are mapped automatically, based on the annotations and the incoming HTTP headers. JAX-RS also includes support for grouping these POJOs into a single Application artifact, which allows the POJOs to interact with one another as well as share configuration and runtime state. When used in JAX-RS, these POJOs are known as JAX-RS resources.
Ideal JAX-RS resources are stateless, and are usually instantiated by the container. JAX-RS resources share many features with OSGi services, in that they provide a way for machines (or processes within a machine) to interact with one another through a defined contract. This synergy between JAX-RS resources and OSGi services is the driver for the OSGi JAX-RS whiteboard service, allowing OSGi services to be transparently exposed using JAX-RS.
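The annotation-driven mapping described above — plain methods exposed as REST endpoints, with parameters bound automatically — can be illustrated outside Java. A toy Python decorator registry in the same spirit (it mirrors the idea only, not the actual JAX-RS API; the route and handler names are made up):

```python
routes = {}

def get(path):
    """Toy analogue of JAX-RS @GET + @Path: register a handler for a route."""
    def decorate(func):
        routes[("GET", path)] = func
        return func
    return decorate

@get("/greeting/{name}")
def greeting(name):
    # In JAX-RS the return value would be marshalled to JSON automatically
    return {"message": f"Hello, {name}"}

def dispatch(method, path_template, **params):
    # A real container would parse the URL and extract the parameters;
    # here they are passed in directly to keep the sketch short.
    return routes[(method, path_template)](**params)
```

The container owns the HTTP plumbing; the resource stays a plain, stateless function — which is exactly the property that makes JAX-RS resources map so naturally onto OSGi services.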
Smart Enterprise Application Integration with Apache Camel Kai Wähner
This document discusses a workshop on smart enterprise application integration using Apache Camel. The key messages are to understand enterprise integration patterns, understand the idea behind Apache Camel, and learn to use Apache Camel through hands-on examples. The agenda includes discussing enterprise integration patterns, Apache Camel, a toy shop use case demonstration, followed by a live coding session using Apache Camel. The goal is to explain the concepts through practical application rather than just theory.
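One of the patterns such a workshop typically walks through, the Content-Based Router, is small enough to sketch directly. A minimal Python rendering of the pattern (Camel would express the same routing in its route DSL; the destinations below are invented for illustration):

```python
def content_based_router(message, routes, default=None):
    """EIP Content-Based Router: send a message to the first
    destination whose predicate matches its content."""
    for predicate, destination in routes:
        if predicate(message):
            return destination
    return default  # e.g. a dead-letter channel

# Hypothetical toy-shop routing rules
order_routes = [
    (lambda m: m["category"] == "toy", "toy-warehouse"),
    (lambda m: m["amount"] > 100, "bulk-queue"),
]
```

In Camel the equivalent route would use `choice()/when()/otherwise()`; the logic above is the whole pattern stripped of framework machinery.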
Ten Battle-Tested Tips for Atlassian Connect Add-onsAtlassian
The document provides 10 tips for building battle-tested Atlassian Connect add-ons:
1. Automate deployments so they are a single button press.
2. Create rules for deploying to production to make it easy and safe.
3. Understand dependencies and implications of what is built and used.
4. Use other services where it makes sense to avoid reinventing the wheel.
5. Monitor components, servers, applications, users to know where failures happen.
6. Have recovery plans tested regularly to prepare for failures.
7. Handle failures by focusing on fixing issues with notifications and status updates.
8. Plan for traffic patterns to ensure scaling is possible when needed.
David Bosschaert & Carsten Ziegelar - Adobe
The OSGi platform powering AEM provides a dynamic module system and enables component-oriented development. Besides serving as the foundation for AEM, it offers benefits for application developers.
This talk outlines the ease of use of OSGi in application code and shows how to master development tasks by using the right APIs and tools. Learn about the latest in component development, asynchronous processing, configuration management, and deploying your application code in larger modules, so-called subsystems. A subsystem packages a set of bundles and configurations and can run isolated from other bundles or other applications.
Learn how to leverage the latest OSGi tech for your own projects. All of the functionality discussed works directly in AEM 6.1, GA now.
Make the most of the power of OSGi.
Whizlabs webinar - Deploying Portfolio Site with AWS ServerlessDhaval Nagar
In this session, we go through the AWS serverless ecosystem and demo how to deploy a static site using the following services:
Serverless Framework
Route53
AWS Certificate Manager
S3
CloudFront
API Gateway
DynamoDB
SNS
1. The document discusses a single mail client solution for integrating Lotus Domino and Microsoft Exchange with Liferay that was developed by PRODYNA AG.
2. It describes NABUCCO Groupware, an integration layer developed by PRODYNA to provide Liferay with enterprise-level integration of groupware applications like mail, calendar, contacts and tasks from systems like Lotus Domino and Exchange.
3. The DEILA framework is explained which was used to build the user interface for NABUCCO Groupware and integrate it with Liferay.
This document summarizes new features in Visual Studio 2010, .NET 4.0, and C# 4.0. Key updates include improved tooling for cloud, parallel, and TDD development in Visual Studio 2010. .NET 4.0 features enhancements to the base class library like code contracts and parallel extensions. New C# 4.0 features are dynamic lookup, named and optional arguments, improvements for COM interop, and variance support through out and in keywords. The presenter encourages attendees to try the Visual Studio 2010 CTP and familiarize themselves with these new technologies.
Jazoon 2012 - Systems Integration in the Cloud Era with Apache CamelKai Wähner
The document discusses systems integration in the cloud era using Apache Camel. It introduces Apache Camel as an open source framework that implements enterprise integration patterns and supports integration with various cloud platforms and services. It provides examples of using Apache Camel to integrate with IaaS platforms like Amazon Web Services, PaaS platforms like Google App Engine, and SaaS services like Salesforce. The key messages are that cloud integration is already possible, the cloud needs to be integrated, and Apache Camel helps significantly with cloud integration.
This document provides an overview of microservices architecture and how to implement it using Amazon ECS and Docker containers. It discusses what microservices are, characteristics of the architecture, and how ECS provides a fully managed platform for deploying and scheduling containers. It also covers task placement strategies, running services on ECS, and reference architectures like continuous deployment, secrets management, and service discovery that align with the twelve-factor app methodology. Finally, it introduces Blox, an open source project that aims to simplify deploying and managing microservices on ECS.
Cloud computing is an emerging model where data and services are hosted in remote "clouds" accessed through browsers or apps. Google CEO Eric Schmidt discussed this model in 2006, noting its potential is not fully understood. Major companies benefiting include Google, Yahoo, eBay and Amazon. Amazon Web Services are a leading public cloud platform, offering services like EC2, S3, CloudFront and SimpleDB. Challenges include availability, data lock-in, confidentiality, performance unpredictability and software licensing issues.
DevOpsDaysRiga 2018: Anton Babenko - What you see is what you get… for AWS in...DevOpsDays Riga
Get your AWS infrastructure implemented as code automatically from a visual diagram (cloudcraft.co)! Want to know how? Anton Babenko, a long-time developer, CTO, and tech lead, will show you in just 5 minutes during his Ignite Talk at the DevOpsDays Riga event.
In an increasingly competitive marketplace, speed and business agility are paramount. And integration between customer-facing systems and back-end applications is more crucial than ever.
At this event, you'll learn how open source software built by communities, like Apache Camel, Docker, Kubernetes, OpenShift Origin, and Fabric8, can help organizations integrate services and establish effective continuous integration and delivery (CI/CD) pipelines.
You have heard how containers are great for running microservices, but running and managing large scale applications with microservices architectures is hard and often requires operating complex container management infrastructure. So what exactly is needed to get microservices to run in production at scale?
In this session, we will explore the reasoning and concepts behind microservices and how containers simplify building microservices based applications, and we will walk through a number of patterns used by our customers to run their microservices platforms. We will also dive deep into some of the challenges of running microservices, such as load balancing, service discovery, and secrets management, and we’ll see how Amazon EC2 Container Service (ECS) can help address them. We will also demo how you can easily deploy complex microservices applications using Amazon ECS.
This document summarizes different cloud PaaS options for running Java applications. It discusses Google App Engine, Amazon Elastic Beanstalk, and VMware Cloud Foundry. For each option, it provides a brief overview of features like supported programming languages, scaling capabilities, and available services. It also notes that while PaaS platforms make development easier than IaaS, there are still limitations in flexibility and portability between platforms. The document concludes that there are tradeoffs to consider and it is worth exploring the different options.
Four Scenarios for Using an Integration Service Environment (ISE)Daniel Toomey
The document discusses four scenarios for using an Azure Integration Service Environment (ISE): 1) private static outbound IP addresses, 2) predictable performance by controlling scaling, 3) support for additional hybrid connections via on-prem connectors, and 4) segregated network security by containing integration solutions within a private network. An ISE provides a way to run Logic Apps within an isolated and securable environment with access to on-premises resources. The document outlines the steps to create an ISE and provides examples of using ISE for outbound IPs, scaling, and hybrid connectivity.
Enterprise Integration Patterns Revisited (again) for the Era of Big Data, In...Kai Wähner
The document discusses enterprise integration patterns (EIPs) and how they are relevant for integrating applications and systems in an era of big data and the internet of things. It provides an agenda covering application integration, EIPs, modeling, frameworks and tools. It then discusses how EIPs apply to big data, IoT, microservices, and the future of integration. Real-time integration is highlighted as important for a world with more connected devices and data sources.
IBM BP Session - Multiple CLoud Paks and Cloud Paks Foundational Services.pptxGeorg Ember
This presentation contains experiences, recommendations, and planning considerations to keep in mind when installing / deploying multiple IBM Cloud Paks on the OpenShift container platform. It explains the basics of "common services", also called "foundational services" — the base services that enable these Cloud Paks to run on OpenShift — and shows how Cloud Paks can also be logically separated across OpenShift worker nodes using taints and node selectors.
The document discusses service mesh and compares different service mesh solutions. It defines service mesh as providing a language-neutral standard attachment to application containers, enabling configuration of policies without redeploying applications, and separating application and attachment concerns. It compares key service mesh players like Envoy, Istio, Linkerd, Consul Connect, AWS App Mesh, Kong Kuma, and the Service Mesh Interface project. The document provides overviews of each solution's origins, architecture, and value propositions.
Next Generation – Systems Integration in the Cloud Era with Apache Camel - Ja...Kai Wähner
The document discusses systems integration in the cloud era. It introduces Apache Camel as a tool for cloud integration. Apache Camel helps enable integration across various cloud computing models including Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). It provides components to integrate with popular cloud platforms and services like Amazon Web Services, Google App Engine, and Salesforce.
The document provides an overview of cloud computing including its essential characteristics, service models, and deployment models. It discusses the cloud computing model as composed of on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. The service models are described as software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS). The deployment models covered are public, private, hybrid, and community clouds.
This document discusses hybrid integration with SAP using Azure services. It demonstrates preparing the SAP and BizTalk environments for integration, including configuring the RFC client and property schema. The document shows connecting to SAP traditionally using RFC and IDOC and executing BAPI calls. It then discusses using Azure services like Service Bus Relay and BizTalk Services for integration. Finally, it proposes developing microservices using BizTalk features to enable integration between SAP and other systems in a modern way.
Serverless Computing with Azure Functions and XamarinMark Arteaga
[Presentation Given to Developer Usergroups - Source code available here https://github.com/redbitdev/RedBit.XamServerless]
Are you spending your day in Visual Studio? Are you curious about developing mobile apps for iOS and Android using Xamarin but not sure where to get started? Or interested in how you can leverage serverless computing? Then this session is for you!
During this session we will cover high level what is ‘serverless’ computing, how to build native iOS and Android apps using C#, how to share that code across the two platforms and how to connect that Xamarin App to a serverless computing environment.
We will work through a sample application built using Xamarin Forms and how to integrate this app with Azure Functions.
This document discusses Google App Engine (GAE) and provides an overview of cloud services, Platform as a Service (PaaS) features of GAE, and Infrastructure as a Service (IaaS) using Amazon EC2 as an example. It then describes implementing and deploying a sample Java web application project to GAE, including setting up the GAE development environment in Eclipse or using Maven, creating and testing the project locally, and deploying it to the GAE server.
The document summarizes the agenda for the Razorfish Technology Summit VI conference on leveraging targeting, platforms and APIs to accelerate businesses. Ray Velez will welcome attendees in the morning. The agenda then includes keynotes, panels and workshops on topics like omnichannel commerce, emerging platforms and experiences, responsive design, big data, and social technologies. There will also be a cocktail party in the evening.
Similar to 2012 05 confess_camel_cloud_integration (20)
Apache Kafka as Data Hub for Crypto, NFT, Metaverse (Beyond the Buzz!)Kai Wähner
Decentralized finance with crypto and NFTs is a huge topic these days, and combined with the coming metaverse platforms across industries it becomes even more powerful. This session explores the relationship between crypto technologies and modern enterprise architecture.
I discuss how data streaming and Apache Kafka help build innovation and scalable real-time applications of a future metaverse. Let's skip the buzz (and NFT bubble) and instead review existing real-world deployments in the crypto and blockchain world powered by Kafka and its ecosystem.
Apache Kafka is the de facto standard for data streaming to process data in motion. With its significant adoption growth across all industries, I get a very valid question every week: When should you NOT use Apache Kafka? What limitations does the event streaming platform have? When does Kafka simply not provide the needed capabilities? How do you qualify Kafka out when it is not the right tool for the job?
This session explores the DOs and DON'Ts. Separate sections explain when to use Kafka, when NOT to use Kafka, and when to MAYBE use Kafka.
No matter if you think about open source Apache Kafka, a cloud service like Confluent Cloud, or another technology using the Kafka protocol like Redpanda or Pulsar, check out this slide deck.
A detailed article about this topic:
https://www.kai-waehner.de/blog/2022/01/04/when-not-to-use-apache-kafka/
Kafka for Live Commerce to Transform the Retail and Shopping MetaverseKai Wähner
Live commerce combines instant purchasing of a featured product and audience participation.
This talk explores the need for real-time data streaming with Apache Kafka between applications to enable live commerce across online stores and brick & mortar stores across regions, countries, and continents in any retail business.
The discussion covers several building blocks of a live commerce enterprise architecture, including transactional data processing, omnichannel, natural language processing, augmented reality, edge computing, and more.
The Heart of the Data Mesh Beats in Real-Time with Apache KafkaKai Wähner
If there were a buzzword of the hour, it would certainly be "data mesh"! This new architectural paradigm unlocks analytic data at scale and enables rapid access to an ever-growing number of distributed domain datasets for various usage scenarios.
As such, the data mesh addresses the most common weaknesses of the traditional centralized data lake or data platform architecture. And the heart of a data mesh infrastructure must be real-time, decoupled, reliable, and scalable.
This presentation explores how Apache Kafka, as an open and scalable decentralized real-time platform, can be the basis of a data mesh infrastructure and - complemented by many other data platforms like a data warehouse, data lake, and lakehouse - solve real business problems.
There is no silver bullet or single technology/product/cloud service for implementing a data mesh. The key outcome of a data mesh architecture is the ability to build data products; with the right tool for the job.
A good data mesh combines data streaming technology like Apache Kafka or Confluent Cloud with cloud-native data warehouse and data lake architectures from Snowflake, Databricks, Google BigQuery, et al.
Apache Kafka vs. Cloud-native iPaaS Integration Platform MiddlewareKai Wähner
Enterprise integration is more challenging than ever before. The IT evolution requires the integration of more and more technologies. Applications are deployed across the edge, hybrid, and multi-cloud architectures. Traditional middleware such as MQ, ETL, ESB does not scale well enough or only processes data in batch instead of real-time.
This presentation explores why Apache Kafka is the new black for integration projects, how Kafka fits into the discussion around cloud-native iPaaS (Integration Platform as a Service) solutions, and why event streaming is a new software category.
A concrete real-world example shows the difference between event streaming and traditional integration platforms or cloud-native iPaaS.
Video Recording of this presentation:
https://www.youtube.com/watch?v=I8yZwKg_IJc&t=2842s
Blog post about this topic:
https://www.kai-waehner.de/blog/2021/11/03/apache-kafka-cloud-native-ipaas-versus-mq-etl-esb-middleware/
Data Warehouse vs. Data Lake vs. Data Streaming – Friends, Enemies, Frenemies?Kai Wähner
The concepts and architectures of a data warehouse, a data lake, and data streaming are complementary to solving business problems.
Unfortunately, the underlying technologies are often misunderstood, overused for monolithic and inflexible architectures, and pitched for wrong use cases by vendors. Let’s explore this dilemma in a presentation.
The slides cover technologies such as Apache Kafka, Apache Spark, Confluent, Databricks, Snowflake, Elasticsearch, AWS Redshift, GCP with Google Bigquery, and Azure Synapse.
Serverless Kafka and Spark in a Multi-Cloud Lakehouse ArchitectureKai Wähner
Apache Kafka in conjunction with Apache Spark became the de facto standard for processing and analyzing data. Both frameworks are open, flexible, and scalable.
Unfortunately, the latter makes operations a challenge for many teams. Ideally, teams can use serverless SaaS offerings to focus on business logic. However, hybrid and multi-cloud scenarios require a cloud-native platform that provides automated and elastic tooling to reduce the operations burden.
This session explores different architectures to build serverless Apache Kafka and Apache Spark multi-cloud architectures across regions and continents.
We start from the analytics perspective of a data lake and explore its relation to a fully integrated data streaming layer with Kafka to build a modern Data Lakehouse.
Real-world use cases show the joint value and explore the benefit of the "delta lake" integration.
Resilient Real-time Data Streaming across the Edge and Hybrid Cloud with Apac...Kai Wähner
Hybrid cloud architectures are the new black for most companies. A cloud-first strategy is evident for many new enterprise architectures, but some use cases require resiliency across edge sites and multiple cloud regions. Data streaming with the Apache Kafka ecosystem is a perfect technology for building resilient and hybrid real-time applications at any scale. This talk explores different architectures and their trade-offs for transactional and analytical workloads. Real-world examples include financial services, retail, and the automotive industry.
Video recording:
https://qconlondon.com/london2022/presentation/resilient-real-time-data-streaming-across-the-edge-and-hybrid-cloud
Data Streaming with Apache Kafka in the Defence and Cybersecurity IndustryKai Wähner
Agenda:
1) Defence, Modern Warfare, and Cybersecurity in 202X
2) Data in Motion with Apache Kafka as Defence Backbone
3) Situational Awareness
4) Threat Intelligence
5) Forensics and AI / Machine Learning
6) Air-Gapped and Zero Trust Environments
7) SIEM / SOAR Modernization
Technologies discussed in the presentation include Apache Kafka, Kafka Streams, kqlDB, Kafka Connect, Elasticsearch, Splunk, IBM QRadar, Zeek, Netflow, PCAP, TensorFlow, AWS, Azure, GCP, Sigma, Confluent Cloud,
Real-World Deployments of Data Streaming with Apache Kafka across the Healthcare Value Chain using open source and cloud-native technologies and serverless SaaS:
1) Legacy Modernization and Hybrid Cloud: Optum (UnitedHealth Group, Centene, Bayer)
2) Streaming ETL (Bayer, Babylon Health)
3) Real-time Analytics (Cerner, Celmatix, CDC/Centers for Disease Control and Prevention)
4) Machine Learning and Data Science (Recursion, Humana)
5) Open API and Omnichannel (Care.com, Invitae)
The Rise of Data in Motion in the Healthcare Industry - Use Cases, Architectures and Examples powered by Apache Kafka.
Use Cases for Data in Motion in the Healthcare Industry:
- Know Your Patient (= “Customer 360”)
- Operations (Healthcare 4.0 including Drug R&D, Patient Care, etc.)
- IT Perspective (Cybersecurity, Mainframe Offload, Hybrid Cloud, Streaming ETL, etc)
Real-world examples include Covid-19 Electronic Lab Reporting, Cerner, Optum, Centene, Humana, Invitae, Bayer, Celmatix, Care.com.
Apache Kafka for Real-time Supply Chainin the Food and Retail IndustryKai Wähner
Use Cases, Architectures, and Real-World Examples for data in motion and real-time event streaming powered by Apache Kafka across the supply chain and logistics. Case studies and deployments include Baader, Walmart, Migros, Albertsons, Domino's Pizza, Instacart, Grab, Royal Caribbean, and more.
Kafka for Real-Time Replication between Edge and Hybrid CloudKai Wähner
Not all workloads allow cloud computing. Low latency, cybersecurity, and cost-efficiency require a suitable combination of edge computing and cloud integration.
This session explores architectures and design patterns for software and hardware considerations to deploy hybrid data streaming with Apache Kafka anywhere. A live demo shows data synchronization from the edge to the public cloud across continents with Kafka on Hivecell and Confluent Cloud.
Apache Kafka for Predictive Maintenance in Industrial IoT / Industry 4.0Kai Wähner
The manufacturing industry is moving away from just selling machinery, devices, and other hardware. Software and services increase revenue and margins. Equipment-as-a-Service (EaaS) even outsources the maintenance to the vendor.
This paradigm shift is only possible with reliable and scalable real-time data processing leveraging an event streaming platform such as Apache Kafka. This talk explores how Kafka-native Condition Monitoring and Predictive Maintenance help with this innovation.
More details:
https://www.kai-waehner.de/blog/2021/10/25/apache-kafka-condition-monitoring-predictive-maintenance-industrial-iot-digital-twin/
Video recording:
https://youtu.be/tfOuN5KeI9w
Apache Kafka Landscape for Automotive and ManufacturingKai Wähner
Today, in 2022, Apache Kafka is the central nervous system of many applications in various areas related to the automotive and manufacturing industry for processing analytical and transactional data in motion across edge, hybrid, and multi-cloud deployments.
This presentation explores the automotive event streaming landscape, including connected vehicles, smart manufacturing, supply chain optimization, aftersales, mobility services, and innovative new business models.
Afterwards, many real-world examples are shown from companies such as Audi, BMW, Porsche, Tesla, Uber, Grab, and FREENOW.
More detail in the blog post:
https://www.kai-waehner.de/blog/2022/01/12/apache-kafka-landscape-for-automotive-and-manufacturing/
Kappa vs Lambda Architectures and Technology ComparisonKai Wähner
Real-time data beats slow data. That’s true for almost every use case. Nevertheless, enterprise architects build new infrastructures with the Lambda architecture that includes separate batch and real-time layers.
This video explores why a single real-time pipeline, called Kappa architecture, is the better fit for many enterprise architectures. Real-world examples from companies such as Disney, Shopify, Uber, and Twitter explore the benefits of Kappa but also show how batch processing fits into this discussion positively without the need for a Lambda architecture.
The main focus of the discussion is on Apache Kafka (and its ecosystem) as the de facto standard for event streaming to process data in motion (the key concept of Kappa), but the video also compares various technologies and vendors such as Confluent, Cloudera, IBM Red Hat, Apache Flink, Apache Pulsar, AWS Kinesis, Amazon MSK, Azure Event Hubs, Google Pub Sub, and more.
Video recording of this presentation:
https://youtu.be/j7D29eyysDw
Further reading:
https://www.kai-waehner.de/blog/2021/09/23/real-time-kappa-architecture-mainstream-replacing-batch-lambda/
https://www.kai-waehner.de/blog/2021/04/20/comparison-open-source-apache-kafka-vs-confluent-cloudera-red-hat-amazon-msk-cloud/
https://www.kai-waehner.de/blog/2021/05/09/kafka-api-de-facto-standard-event-streaming-like-amazon-s3-object-storage/
The Top 5 Apache Kafka Use Cases and Architectures in 2022Kai Wähner
This document discusses the top 5 use cases and architectures for data in motion in 2022. It describes:
1) The Kappa architecture as an alternative to the Lambda architecture that uses a single stream to handle both real-time and batch data.
2) Hyper-personalized omnichannel experiences that integrate customer data from multiple sources in real-time to provide personalized experiences across channels.
3) Multi-cloud deployments using Apache Kafka and data mesh architectures to share data across different cloud platforms.
4) Edge analytics that deploy stream processing and Kafka brokers at the edge to enable low-latency use cases and offline functionality.
5) Real-time cybersecurity applications that use streaming data
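The Kappa idea in point 1 — one code path serving both replayed history and live events — can be reduced to a few lines. An illustrative Python sketch (a real deployment would replay a retained Kafka topic through a stream processor; the data here is invented):

```python
def process(event, state):
    """Single processing function: identical for replayed and live events."""
    key = event["user"]
    state[key] = state.get(key, 0) + event["amount"]
    return state

def run_pipeline(events, state=None):
    # Replaying the retained log ("batch") and consuming live events
    # go through exactly the same code path -- that is the Kappa point:
    # no separate batch layer to build and keep in sync.
    state = {} if state is None else state
    for event in events:
        process(event, state)
    return state

historical = [{"user": "a", "amount": 5}, {"user": "b", "amount": 2}]
live = [{"user": "a", "amount": 1}]
state = run_pipeline(historical)   # replay the log from offset 0
state = run_pipeline(live, state)  # then keep consuming new events
```

Contrast this with Lambda, where the same aggregation logic would exist twice: once in a batch job and once in a streaming job, with all the drift that implies.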
Event Streaming CTO Roundtable for Cloud-native Kafka ArchitecturesKai Wähner
Technical thought leadership presentation to discuss how leading organizations move to real-time architecture to support business growth and enhance customer experience. This is a forum to discuss use cases with your peers to understand how other digital-native companies are utilizing data in motion to drive competitive advantage.
Agenda:
- Data in Motion with Event Streaming and Apache Kafka
- Streaming ETL Pipelines
- IT Modernisation and Hybrid Multi-Cloud
- Customer Experience and Customer 360
- IoT and Big Data Processing
- Machine Learning and Analytics
Apache Kafka in the Public Sector (Government, National Security, Citizen Ser...Kai Wähner
The Rise of Data in Motion in the Public Sector powered by event streaming with Apache Kafka.
Citizen Services:
- Health services, e.g. hospital modernization, track & trace - Covid distance control
- Public administration - reduce bureaucracy, data democratization across government departments
- eGovernment - Efficient and digital citizen engagement, e.g. personal ID application process
Smart City
- Smart driving, parking, buildings, environment
Waste management
- Open exchange – e.g. mobility services (1st and 3rd party)
Energy
- Smart grid and utilities infrastructure (energy distribution, smart home, smart meters, smart water, etc.)
- National Security
Law enforcement, surveillance, police/interior security data exchange
- Defense and military (border control, intelligent solider)
Cybersecurity for situational awareness and threat intelligence
Telco 4.0 - Payment and FinServ Integration for Data in Motion with 5G and Ap...Kai Wähner
The Era of Telco 4.0: Embracing Digital Transformation with Data in Motion. Learn about Payment and FinServ Integration for Data in Motion with 5G and Apache Kafka.
1) The rise of Telco 4.0 and the future forward
2) Data in Motion in the Telco industry
3) Real-world Fintech and Payment examples powered by Data in Motion
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Digital Marketing Trends in 2024 | Guide for Staying AheadWask
https://www.wask.co/ebooks/digital-marketing-trends-in-2024
Feeling lost in the digital marketing whirlwind of 2024? Technology is changing, consumer habits are evolving, and staying ahead of the curve feels like a never-ending pursuit. This e-book is your compass. Dive into actionable insights to handle the complexities of modern marketing. From hyper-personalization to the power of user-generated content, learn how to build long-term relationships with your audience and unlock the secrets to success in the ever-shifting digital landscape.
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Mircosoft Teams session or in person at TU/e located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid -Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Webinar: Designing a schema for a Data WarehouseFederico Razzoli
Are you new to data warehouses (DWH)? Do you need to check whether your data warehouse follows the best practices for a good design? In both cases, this webinar is for you.
A data warehouse is a central relational database that contains all measurements about a business or an organisation. This data comes from a variety of heterogeneous data sources, which includes databases of any type that back the applications used by the company, data files exported by some applications, or APIs provided by internal or external services.
But designing a data warehouse correctly is a hard task, which requires gathering information about the business processes that need to be analysed in the first place. These processes must be translated into so-called star schemas, which means, denormalised databases where each table represents a dimension or facts.
We will discuss these topics:
- How to gather information about a business;
- Understanding dictionaries and how to identify business entities;
- Dimensions and facts;
- Setting a table granularity;
- Types of facts;
- Types of dimensions;
- Snowflakes and how to avoid them;
- Expanding existing dimensions and facts.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
2. Kai Wähner (MaibornWolff et al GmbH, Munich, Germany)
Main Tasks
• Evaluation of Technologies and Products
• Requirements Engineering
• Enterprise Architecture Management
• Business Process Management
• Architecture and Development of Applications
• Planning and Introduction of SOA
• Integration of Legacy Applications
• Cloud Computing
Activities: Consulting, Developing, Speaking, Coaching, Writing
Contact
• Email: kai.waehner@mwea.de
• Blog: www.kai-waehner.de/blog
• Twitter: @KaiWaehner
• Social Networks: Xing, LinkedIn
3. What is the Problem?
Growth
• Applications
• Interfaces
• Technologies
• Products
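The growth problem is easy to quantify: with point-to-point integration, every pair of applications may need its own interface, so the number of connections grows quadratically, while a central integration layer (such as an ESB or Apache Camel) needs only one adapter per application. A minimal Java sketch of the arithmetic (illustrative numbers only):

```java
// Point-to-point integration: every pair of applications needs its own
// interface, so the number of connections grows quadratically.
// A central integration layer needs only one adapter per application.
public class IntegrationGrowth {

    // n applications, each connected to every other one: n * (n - 1) / 2
    static int pointToPoint(int n) {
        return n * (n - 1) / 2;
    }

    // n applications, each with a single adapter to the integration layer
    static int hubAndSpoke(int n) {
        return n;
    }

    public static void main(String[] args) {
        for (int n : new int[] {5, 10, 20}) {
            System.out.println(n + " apps: " + pointToPoint(n)
                    + " point-to-point interfaces vs. "
                    + hubAndSpoke(n) + " adapters");
        }
    }
}
```

At 20 applications the difference is already 190 interfaces versus 20 adapters, which is why the growth of applications, interfaces, technologies, and products forces an integration strategy.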
39. PaaS Concepts (Google App Engine)
Application Deployment
• Easy Deployment
• Automatic Scaling
Development Restrictions
• JRE Class White List
• Workarounds for Frameworks
• No "naked" Domains
• No "write once, run everywhere"
• Quotas and Limits
Services
• Push Queue
• Pull Queue
• URL Fetch
• Accounts
• Mail
• Memcache
• XMPP
• Images
• Datastore
• Cloud Storage
• Cloud SQL
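The push and pull queue concepts above differ in who drives delivery: a push queue invokes a handler for each task it receives, while a pull queue lets a worker lease tasks at its own pace. A plain-Java simulation of the two semantics (this is deliberately not the App Engine API, just a sketch of the models):

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.Consumer;

// Simulation of the two task-queue delivery models in plain Java.
public class QueueModels {

    // Push queue: the queue itself delivers each task to a handler.
    static class PushQueue {
        private final Consumer<String> handler;
        PushQueue(Consumer<String> handler) { this.handler = handler; }
        void add(String task) { handler.accept(task); } // delivered immediately
    }

    // Pull queue: tasks wait until a worker leases them.
    static class PullQueue {
        private final Queue<String> tasks = new ArrayDeque<>();
        void add(String task) { tasks.add(task); }
        String lease() { return tasks.poll(); } // worker decides when; null if empty
    }

    public static void main(String[] args) {
        StringBuilder processed = new StringBuilder();

        PushQueue push = new PushQueue(t -> processed.append("push:").append(t).append(' '));
        push.add("resize-image"); // handler runs right away

        PullQueue pull = new PullQueue();
        pull.add("send-mail");    // nothing happens yet
        processed.append("pull:").append(pull.lease()); // worker pulls when ready

        System.out.println(processed);
    }
}
```

The distinction matters for integration code: a push queue dictates the processing rate to the consumer, whereas a pull queue lets the consumer throttle itself.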
40. Hint
Google App Engine is a complex scenario for Apache Camel due to its many restrictions!
Other "more open" PaaS solutions such as OpenShift or Heroku are easier to use ...
48. SaaS Concepts (Salesforce)
Software (CRM)
• Sales
• Service
• Social
• Data.com
• AppExchange
• ... more ...
Development
• Online development (even the compiler is in the cloud!)
• Own add-ons via Force.com (PaaS)
• Apex
• Visualforce
Integration of Interfaces
• REST
• SOAP
• Client APIs (Java, etc.)
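The REST interface listed above is typically consumed by sending an SOQL query to the instance's query endpoint. A minimal sketch of building such a request URL; the instance host and API version here are placeholder assumptions, and a real call would also need an OAuth access token in the Authorization header:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Sketch: building a Salesforce REST API query URL for an SOQL statement.
// The host and API version are hypothetical placeholders.
public class SalesforceQueryUrl {

    static String queryUrl(String instance, String apiVersion, String soql) {
        return "https://" + instance + "/services/data/v" + apiVersion
                + "/query?q=" + URLEncoder.encode(soql, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String url = queryUrl("example.my.salesforce.com", "58.0",
                "SELECT Id, Name FROM Account");
        System.out.println(url);
    }
}
```

In practice a Camel route would hide this plumbing behind a component endpoint, but the sketch shows what the REST interface boils down to: an HTTPS call with an URL-encoded query.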
63. Thank you for your Attention. Any Questions?
Kai Wähner
MaibornWolff et al: www.mwea.de
Email: kai.waehner@mwea.de
Twitter: @KaiWaehner
Blog: www.kai-waehner.de/blog
Social: LinkedIn / Xing