Data exchange between companies is increasing rapidly, and so is the number of applications that must be integrated. The interfaces use different technologies, protocols and data formats. Nevertheless, the integration of these applications should be modeled in a standardized way, realized efficiently and supported by automated tests.
Three integration frameworks are available in the JVM environment that fulfil these requirements: Apache Camel, Spring Integration and Mule. They implement the well-known Enterprise Integration Patterns (EIP) and therefore offer a standardized, domain-specific language to integrate applications.
These integration frameworks can be used in almost every integration project within the JVM environment - no matter which technologies, transport protocols or data formats are used. All integration projects can be realized in a consistent way without redundant boilerplate code.
This session shows and compares the three alternatives and discusses their pros and cons. It also gives a recommendation on when to use a more powerful Enterprise Service Bus (ESB) instead of one of these frameworks.
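To give a feel for such an EIP-based DSL, here is a minimal sketch of an Apache Camel route in Java that applies the Content-Based Router pattern. The endpoint URIs, XPath expression and queue names are illustrative assumptions, and the route presumes Camel's file and JMS components are available on the classpath.

```java
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class OrderRoutes extends RouteBuilder {

    @Override
    public void configure() {
        // Content-Based Router EIP: route each incoming order to a target
        // system depending on its content (all endpoints are placeholders).
        from("file:orders/in")
            .choice()
                .when(xpath("/order[@type='gold']"))
                    .to("jms:queue:priorityOrders")   // requires a configured JMS component
                .otherwise()
                    .to("file:orders/standard");
    }

    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new OrderRoutes());
        context.start();
        Thread.sleep(10_000);   // keep the JVM alive briefly so the route can run
        context.stop();
    }
}
```

Spring Integration and Mule express the same patterns with their own DSLs, which is what makes a side-by-side comparison of the three frameworks worthwhile.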
How to choose the right Integration Framework - Apache Camel (JBoss, Talend),...Kai Wähner
CamelOne 2012 - Spoilt for Choice: Which Integration Framework to use?Kai Wähner
Spoilt for Choice - Which Integration Framework to use on the Java (JVM) Platform? Apache Camel, Spring Integration, Mule ESB? Or when to use an Enterprise Service Bus (ESB) instead?
Spoilt for Choice: How to Choose the Right Enterprise Service Bus (ESB)?Kai Wähner
Data exchange in and between companies is increasing rapidly, and so is the number of applications that must be integrated. As a solution, an Enterprise Service Bus (ESB) can be used in almost every integration project - no matter which technologies, transport protocols, data formats, or environments such as Java or .NET are used. All integration projects can be realized in a consistent way without redundant boilerplate code. However, an ESB offers many further features, such as business process management (BPM), master data management, business activity monitoring, or big data. Plenty of ESB products are on the market, and they differ a lot regarding concepts, programming models, tooling, and open source vs. proprietary. One really is spoilt for choice.
Dell Technology World - CloudOps - Leveraging DevOps Principles and Practice...Don Demcsak
The push to embed agile practices everywhere in enterprise IT has led some to rethink the way cloud automation and migrations get done. The natural evolution is to operate Cloud as Code. Come see how using DevOps principles such as sprints, Kanban boards, source control, test harnesses, pipelines and leading metrics helps manage scale and risk in a multi-cloud world.
CloudExpo NY 2014: Moving Mission Critical Applications to the CloudKacy Clarke
A panel discussion on pitfalls and best practices for migrating mission-critical applications to the cloud. I presented on June 12, 2014 with Shane Shelton from McGraw-Hill Education and Nathan Anderson from GE Capital.
Systems Integration in the Cloud Era with Apache Camel @ ApacheCon Europe 2012Kai Wähner
Shows the elegance of Apache Camel to integrate different cloud providers such as Amazon Web Services (IaaS), Google App Engine (PaaS), or Salesforce (SaaS).
The AWS Private Equity organization utilizes the Recognized Cloud Transformation Leader (RCTL) program and Transformation Advisor role to enable portfolio company executives to successfully execute a cloud or digital transformation - accelerate migrations/modernization, remove transformation impediments and mitigate risk.
AWS PE Transformation Advisor program overview
Assigns a dedicated PE Transformation Advisor to the executive cloud sponsor (CxO or similar) for an 8-to-12-week engagement that can be further extended as needed. The PE Transformation Advisor aids the executive in value creation by removing transformation blockers, securing buy-in from the executive team, influencing the board, adapting business processes in support of cloud, and preparing the broader organization for the digital transformation.
During the engagement, the PE Transformation Advisor provides prescriptive guidance to define the transformation tenets and guiding principles, assist in developing the business case, produce the cloud journey map, establish the Cloud Center of Excellence (CCoE), document KPIs, identify partners, and define the Cloud Operating Model (COM).
Creating an Operating Model to enable a high frequency organizationTom Laszewski
Establishing an appropriate cloud operating model is critical to your organization’s successful adoption of cloud: it delivers greater business agility, increases the return on investment of your cloud migration, and results in a more secure, performant, reliable, and cost-effective cloud computing environment. The impact of the cloud will be felt across your entire organization, including processes and people - not just information technology. It will significantly affect, and be affected by, your organizational culture and IT delivery structures. This session provides prescriptive guidance on the best approaches to evolving an operating model: from projects to products, from manual, process-intensive governance to a ‘trust but verify’ model, from long development cycles to continuous integration and deployment, and from silos between business and IT to a collaborative organizational structure with self-service processes and continuous improvement. The recommendations in the presentation are based on lessons learned, best practices, and anti-patterns from thousands of customers’ cloud transformation journeys.
Look at Oracle Integration Cloud – its relationship to ICS. Customer use Case...Phil Wilkins
This is a presentation about Oracle Integration Cloud (OIC) and Oracle Integration Cloud Service (ICS) and the relationship between the two products. We also look at customer use cases, what led to an ICS-based recommendation, and what we would recommend now that OIC is available.
Accelerate your Cloud Migration Journey (Level 100)
Many enterprises have realized the benefits of migrating services to the cloud and are considering the next stage in their journey. In this webinar we will take you through the principles of cloud migration including the business benefits, cloud economics, methodologies and tools to accelerate your journey. We will also discuss the key enablers for cloud adoption based on real examples.
This is a Level 100 webinar
Speakers: Manav Prabhakar, Practice Manager, AWS Professional Services
Jon Austin, Principal Solutions Architect, Amazon Web Services
Parijat Mishra, Solutions Architect Manager, Amazon Web Services
In this presentation from the AWS Lab at Cloud Expo Europe 2014 you will find details of the six patterns that enterprise organisations typically follow when adopting Amazon Web Services, as well as a summary of the licensing options available for running enterprise applications on Amazon Web Services.
Many large enterprises have begun using AWS to host development and test environments while also building greenfield applications in AWS. After realizing the benefits that AWS has to offer, many enterprises look for ways to accelerate their migration to the cloud. In beginning this journey they are often faced with a number of challenges, such as determining which applications should move, how they should move, and how they can be effectively managed in the cloud. Accenture, working with AWS Solution Architects and AWS Professional Services, has developed a framework, based on our experiences, to quickly, efficiently, and successfully move enterprise applications to AWS at scale. This session will review our approach, tools, and methods that can help enterprises evolve their cloud transformation programs.
When customers think about moving to the cloud, one of their first considerations is cost. AWS helps lower customer costs through its “pay only for what you use” pricing model. In this session, we explain our TCO analysis methodology and explore the financial considerations of owning and operating a traditional data center or managed hosting provider compared with using AWS services. Customers also share their processes for developing the right cost savings and optimization model.
OpenStack and Cloud Foundry - Pair the leading open source IaaS and PaaSDaniel Krook
OpenStack is the leading open source Infrastructure-as-a-Service, and Cloud Foundry has become the leading open source Platform-as-a-Service. Deploying them together is a natural fit for your next generation systems of engagement.
This special joint meetup of the OpenStack NY and NYC Cloud Foundry communities will give both audiences an introduction to these popular open source IaaS and PaaS projects.
The presentation will describe the compelling advantages of each technology, and then explain how they can be integrated, optimized, and scaled to provide a complete cloud application hosting solution.
Although the Cloud is now mainstream for many large Enterprises, organisations who truly aim to maximise the value of their cloud platform also focus on refactoring the model by which they operate their IT. In this session you will learn about the latest developments in operations within the cloud, including operational processes, architecture, governance, organisation structure, and monitoring tools to maximise agility. This session will highlight the features of a Cloud-enabled IT operating model and how AWS customers benefit from delivering a new approach to IT.
Speaker: Andrew Mitchell, Solutions Architect, Amazon Web Services
Featured Customer - NAB
This is the original eBook I created with Tony Curcio and Nick Glowacki, uploaded here for posterity since it is now somewhat superseded by the smart paper at http://ibm.biz/agile-integration and covered in considerably more detail in the first few chapters of the agile integration IBM Redbook http://ibm.biz/agile-integration-redbook
Showdown: Integration Framework (Spring Integration, Apache Camel) vs. Enterp...Kai Wähner
I had a talk at Java User Group Frankfurt (JUGF): "Showdown: Integration Framework (Spring Integration, Apache Camel) vs. Enterprise Service Bus (ESB)". The room was fully packed, interest in integration frameworks, ESBs, and corresponding tooling is increasing every year!
Enterprise Integration Patterns with Spring integration!hegdekiranr
Spring Integration (different from the core Spring Framework) is an Enterprise Integration Patterns implementation.
It provides lightweight messaging within Spring-based applications and supports integration with external systems via declarative adapters.
Those adapters provide a higher level of abstraction over Spring's support for remoting, messaging, and scheduling.
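As a rough illustration (not from the original deck), the following sketch uses the Spring Integration Java DSL (5.x style) to poll a directory, convert each file to a String and hand it to a simple handler. The directory path, polling interval and logging handler are assumptions made for the example.

```java
import java.io.File;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.config.EnableIntegration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.Pollers;
import org.springframework.integration.file.dsl.Files;

@Configuration
@EnableIntegration
public class FileInboundFlowConfig {

    @Bean
    public IntegrationFlow inboxFlow() {
        // Poll a directory every 5 seconds, turn each file into a String message
        // and hand it to a simple handler (a stand-in for a real downstream system).
        return IntegrationFlows
                .from(Files.inboundAdapter(new File("/tmp/inbox")),
                        e -> e.poller(Pollers.fixedDelay(5000)))
                .transform(Files.toStringTransformer())
                .handle(message -> System.out.println("Received: " + message.getPayload()))
                .get();
    }
}
```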
Enterprise Integration Patterns Revisited (again) for the Era of Big Data, In...Kai Wähner
In 2015, I had two talks about Enterprise Integration Patterns at OOP 2015 in Munich, Germany and at JavaDay 2015 in Kiev, Ukraine. I reused a talk from 2013 and updated it with current trends to show how important Enterprise Integration Patterns (EIP) are everywhere today and in the upcoming years.
Applications have to be integrated – no matter which programming languages, databases or infrastructures are used. However, the realization of integration scenarios is a complex and time-consuming task. Over 10 years ago, Enterprise Integration Patterns (EIP) became the worldwide de facto standard for splitting huge, complex integration scenarios into smaller recurring problems. These patterns appear in almost every integration project.
This session revisits EIPs and shows the status quo. After a short introduction with several examples, the audience will learn which EIPs still have a "right to exist" and which new EIPs have emerged in the meantime. The end of the session shows different frameworks and tools that already implement EIPs and therefore help the architect reduce effort significantly.
Integration Patterns and Anti-Patterns for Microservices ArchitecturesApcera
Integration Patterns and Anti-Patterns for Microservices Architectures
David Williams
Co-Founder and Partner, Williams Garcia
You can learn more about NATS at http://www.nats.io
You are not Facebook or Google? Why you should still care about Big Data and ...Kai Wähner
Big data represents a significant paradigm shift in enterprise technology. Big data radically changes the nature of the data management profession as it introduces new concerns about the volume, velocity and variety of corporate data.
This session goes beyond the well-known examples of huge companies such as Facebook or Google with millions of users. Instead, this session explains the "big" paradigm and technology shift for your company. See several use cases showing how big data enables small and medium-sized companies to gain insight into new business opportunities (and threats), and how big data stands to transform much of what the modern enterprise is today.
Learn about solving the unique challenges of big data without your own research lab or a team of big data experts. Learn how to implement the relevant use cases for your company with low cost and effort by using open source frameworks that greatly simplify working with big data.
Steve Sams (VP IBM Global Site & Facilities Services) presentation at Gartner Data Center Conference (Dec 2011). Learn more about IBM Smarter Data Center Services: ibm.co/smarterdc
Systems Integration in the NoSQL Era with Apache Camel (Neo4j, CouchDB, AWS S...Kai Wähner
SQL cannot solve several problems emerging with big data. In the future you will have to integrate NoSQL databases, too. The open source integration framework Apache Camel is already prepared for this challenging task. Several examples are shown for integrating NoSQL databases from CouchDB (Document Store), HBase (Column-oriented), Neo4j (Graph), Amazon Web Services (Key Value Store), and others.
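As a hedged sketch of what such routes can look like in Java, the snippet below writes messages to CouchDB and archives files to Amazon S3. The component URIs follow older Camel releases, and the queue, database, bucket and credential placeholders are purely illustrative; exact component names and options may differ in current Camel versions.

```java
import org.apache.camel.builder.RouteBuilder;

public class NoSqlRoutes extends RouteBuilder {

    @Override
    public void configure() {
        // Store incoming JSON readings as documents in CouchDB
        // (hypothetical queue name, local CouchDB instance assumed).
        from("jms:queue:sensorReadings")
            .to("couchdb:http://localhost:5984/readings");

        // Archive raw payloads to an S3 bucket; bucket name and credential
        // property placeholders are illustrative only.
        from("file:archive/outbox")
            .to("aws-s3://my-archive-bucket?accessKey={{s3.accessKey}}&secretKey={{s3.secretKey}}");
    }
}
```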
Slides for October 15 webinar with ESG Analyst Scott Sinclair and Avere Systems Engineer Bernie Behn reviewing ESG lab results that tested the Avere vFXT Edge filer on Google Cloud Platform.
The Eight Building Blocks of Enterprise Application ArchitectureTechAhead
Enterprises use multiple applications, and sometimes the result is a mess. Enterprise application architecture is a tool to bring a semblance of order to this chaos.
Cloud Computing models: Cloud computing consists of all types of outsourced IT services - application, platform, infrastructure, security... as a service (XaaS). Two typical deployments: SaaS, where applications may be outsourced to different providers in the cloud, using their own technology; and IaaS/PaaS, where applications are housed by an infrastructure / data center provider and are deployed as virtual machines.
Cloud Computing impacts on IT: IT becomes a separate entity from the firm - the technology in the enterprise cloud may run in another time zone or country. Cloud computing may reduce the IT branch to the architecture, strategy and planning functions. Technology purchases, upgrades, licensing and management are no longer the firm's concern; replaced by contracts and utility-like charging, the bitter relation between business and IT vanishes.
Cloud Computing characteristics: web interface for self-provisioning and reporting; charging mechanisms for actual consumption; multi-tenant data centers with frequently used platforms (PaaS); typically virtualized technology; blade technology for scalability, low cost, reduced space and reduced power consumption; development and deployment tools included.
The Virtual Enterprise & Business Utilities: The Virtual Enterprise business concept, also known as the Networked Enterprise, consists of distributed business functions and utilities, outsourced to partners that work together with the firm to deliver the product to end customers. "Business Process Utilities are an emerging form of business process outsourcing. BPU is useful when a more standardized solution is sought that can be paid for on a transactional basis", Gartner http://www.gartner.com/DisplayDocument?id=527120
The evolution to the Cloud Enterprise: [diagram: from the Monolithic Enterprise through the Virtual Enterprise to the Cloud Enterprise]
The Cloud Enterprise and EA: The Business Architecture layer rests on top of the computing cloud, which consists of the IT Application and Technology layers. The Cloud Enterprise Architecture (EA) consists mostly of business architecture rather than technology detail. The data center and its virtualization become the concern of the cloud service providers.
The EA layers and current outsourcing types: [diagram: outsourcing types mapped to EA layers - BPUtility outsourcing, managed services (apps, infrastructure), data centre outsourcing, application outsourcing (SaaS), call centre (people) outsourcing]
The increasing EA layers virtualization: Virtualization increasingly occurs at the interfaces between the EA layers (business, applications...), progressively abstracting and decoupling them and thereby enabling business and IT outsourcing.
From the Amazon Web Services Singapore & Malaysia Summits 2015, Track 1 Breakout, 'The Journey to Digital Enterprise', presented by Daniel Angelucci, CTO, CSC AMEA.
In a few years, the business applications in enterprises will look very different. This quick deck will tell you how COTS solutions will change, how enterprise platforms will change, and how enterprise application development will change. Let us know what you think!
The modern IT stack has become diverse and distributed, and it’s increasingly challenging to manage heterogeneous platforms and multi-vendor devices. Customers are looking to the cloud and APM to help address these hurdles, as well as accelerate IT transformation.
But migrating to the cloud will take time, it won’t make infrastructure ‘just disappear’, and legacy workloads are going to remain part of the enterprise reality for many. In addition, while APM will continue to be increasingly important, all applications are not the same and an application is still not equal to a digital business service.
Watch this webinar as John Worthington, a service management expert and Director of Product Marketing for eG Innovations, continues our Shift-Left series. You can learn:
• Why domain expertise is important when defining monitoring requirements
• What analytics are useful from a monitoring and observability context
• How end-to-end monitoring with converged application and infrastructure performance can drive ITSM and DevOps integration
A Study on the Application of Web-Scale IT in Enterprises in IoT EraHassan Keshavarz
The concept of Web-Scale IT has become a pattern of global-class computing that delivers the capabilities of large cloud service providers to the enterprise IT industry and business sector. Based on a Gartner report, Web-Scale IT is one of the technology trends likely to have a significant effect on companies over the next three years, by 2017. Web-Scale IT is defined as all the things occurring at large-scale cloud service firms such as Google, Amazon, Netflix, Facebook and so on, that enable them to achieve high levels of agility and scalability by using new processes and architectures, according to the report. This paper scrutinizes how the technology can change the style of business for future IoT use. It is expected that the use of Web-Scale IT will be critical at this turning point of changing business methods for the IoT era. To achieve that aim, the first step toward Web-Scale IT for many organizations should be bringing development and operations together - the movement known as "DevOps".
Similar to Scandev / SDC2013 - Spoilt for Choice: Which Integration Framework to use – Apache Camel, Spring Integration or Mule? (20)
Apache Kafka as Data Hub for Crypto, NFT, Metaverse (Beyond the Buzz!)Kai Wähner
Decentralized finance with crypto and NFTs is a huge topic these days. It becomes a powerful combination with the coming metaverse platforms across industries. This session explores the relationship between crypto technologies and modern enterprise architecture.
I discuss how data streaming and Apache Kafka help build innovation and scalable real-time applications of a future metaverse. Let's skip the buzz (and NFT bubble) and instead review existing real-world deployments in the crypto and blockchain world powered by Kafka and its ecosystem.
Apache Kafka is the de facto standard for data streaming to process data in motion. With its significant adoption growth across all industries, I get a very valid question every week: When NOT to use Apache Kafka? What limitations does the event streaming platform have? When does Kafka simply not provide the needed capabilities? How to qualify Kafka out as it is not the right tool for the job?
This session explores the DOs and DONTs. Separate sections explain when to use Kafka, when NOT to use Kafka, and when to MAYBE use Kafka.
No matter if you think about open source Apache Kafka, a cloud service like Confluent Cloud, or another technology using the Kafka protocol like Redpanda or Pulsar, check out this slide deck.
A detailed article about this topic:
https://www.kai-waehner.de/blog/2022/01/04/when-not-to-use-apache-kafka/
Kafka for Live Commerce to Transform the Retail and Shopping MetaverseKai Wähner
Live commerce combines instant purchasing of a featured product and audience participation.
This talk explores the need for real-time data streaming with Apache Kafka between applications to enable live commerce across online stores and brick & mortar stores across regions, countries, and continents in any retail business.
The discussion covers several building blocks of a live commerce enterprise architecture, including transactional data processing, omnichannel, natural language processing, augmented reality, edge computing, and more.
The Heart of the Data Mesh Beats in Real-Time with Apache KafkaKai Wähner
If there were a buzzword of the hour, it would certainly be "data mesh"! This new architectural paradigm unlocks analytic data at scale and enables rapid access to an ever-growing number of distributed domain datasets for various usage scenarios.
As such, the data mesh addresses the most common weaknesses of the traditional centralized data lake or data platform architecture. And the heart of a data mesh infrastructure must be real-time, decoupled, reliable, and scalable.
This presentation explores how Apache Kafka, as an open and scalable decentralized real-time platform, can be the basis of a data mesh infrastructure and - complemented by many other data platforms like a data warehouse, data lake, and lakehouse - solve real business problems.
There is no silver bullet or single technology/product/cloud service for implementing a data mesh. The key outcome of a data mesh architecture is the ability to build data products, with the right tool for the job.
A good data mesh combines data streaming technology like Apache Kafka or Confluent Cloud with cloud-native data warehouse and data lake architectures from Snowflake, Databricks, Google BigQuery, et al.
Apache Kafka vs. Cloud-native iPaaS Integration Platform MiddlewareKai Wähner
Enterprise integration is more challenging than ever before. The IT evolution requires the integration of more and more technologies. Applications are deployed across the edge, hybrid, and multi-cloud architectures. Traditional middleware such as MQ, ETL, ESB does not scale well enough or only processes data in batch instead of real-time.
This presentation explores why Apache Kafka is the new black for integration projects, how Kafka fits into the discussion around cloud-native iPaaS (Integration Platform as a Service) solutions, and why event streaming is a new software category.
A concrete real-world example shows the difference between event streaming and traditional integration platforms or cloud-native iPaaS.
Video Recording of this presentation:
https://www.youtube.com/watch?v=I8yZwKg_IJc&t=2842s
Blog post about this topic:
https://www.kai-waehner.de/blog/2021/11/03/apache-kafka-cloud-native-ipaas-versus-mq-etl-esb-middleware/
Data Warehouse vs. Data Lake vs. Data Streaming – Friends, Enemies, Frenemies?Kai Wähner
The concepts and architectures of a data warehouse, a data lake, and data streaming are complementary to solving business problems.
Unfortunately, the underlying technologies are often misunderstood, overused for monolithic and inflexible architectures, and pitched for wrong use cases by vendors. Let’s explore this dilemma in a presentation.
The slides cover technologies such as Apache Kafka, Apache Spark, Confluent, Databricks, Snowflake, Elasticsearch, AWS Redshift, GCP with Google BigQuery, and Azure Synapse.
Serverless Kafka and Spark in a Multi-Cloud Lakehouse ArchitectureKai Wähner
Apache Kafka in conjunction with Apache Spark became the de facto standard for processing and analyzing data. Both frameworks are open, flexible, and scalable.
Unfortunately, the latter makes operations a challenge for many teams. Ideally, teams can use serverless SaaS offerings to focus on business logic. However, hybrid and multi-cloud scenarios require a cloud-native platform that provides automated and elastic tooling to reduce the operations burden.
This session explores different architectures to build serverless Apache Kafka and Apache Spark multi-cloud architectures across regions and continents.
We start from the analytics perspective of a data lake and explore its relation to a fully integrated data streaming layer with Kafka to build a modern Data Lakehouse.
Real-world use cases show the joint value and explore the benefit of the "delta lake" integration.
Resilient Real-time Data Streaming across the Edge and Hybrid Cloud with Apac...Kai Wähner
Hybrid cloud architectures are the new black for most companies. A cloud-first strategy is evident for many new enterprise architectures, but some use cases require resiliency across edge sites and multiple cloud regions. Data streaming with the Apache Kafka ecosystem is a perfect technology for building resilient and hybrid real-time applications at any scale. This talk explores different architectures and their trade-offs for transactional and analytical workloads. Real-world examples include financial services, retail, and the automotive industry.
Video recording:
https://qconlondon.com/london2022/presentation/resilient-real-time-data-streaming-across-the-edge-and-hybrid-cloud
Data Streaming with Apache Kafka in the Defence and Cybersecurity IndustryKai Wähner
Agenda:
1) Defence, Modern Warfare, and Cybersecurity in 202X
2) Data in Motion with Apache Kafka as Defence Backbone
3) Situational Awareness
4) Threat Intelligence
5) Forensics and AI / Machine Learning
6) Air-Gapped and Zero Trust Environments
7) SIEM / SOAR Modernization
Technologies discussed in the presentation include Apache Kafka, Kafka Streams, ksqlDB, Kafka Connect, Elasticsearch, Splunk, IBM QRadar, Zeek, Netflow, PCAP, TensorFlow, AWS, Azure, GCP, Sigma, and Confluent Cloud.
Real-World Deployments of Data Streaming with Apache Kafka across the Healthcare Value Chain using open source and cloud-native technologies and serverless SaaS:
1) Legacy Modernization and Hybrid Cloud: Optum (UnitedHealth Group, Centene, Bayer)
2) Streaming ETL (Bayer, Babylon Health)
3) Real-time Analytics (Cerner, Celmatix, CDC/Centers for Disease Control and Prevention)
4) Machine Learning and Data Science (Recursion, Humana)
5) Open API and Omnichannel (Care.com, Invitae)
The Rise of Data in Motion in the Healthcare Industry - Use Cases, Architectures and Examples powered by Apache Kafka.
Use Cases for Data in Motion in the Healthcare Industry:
- Know Your Patient (= “Customer 360”)
- Operations (Healthcare 4.0 including Drug R&D, Patient Care, etc.)
- IT Perspective (Cybersecurity, Mainframe Offload, Hybrid Cloud, Streaming ETL, etc)
Real-world examples include Covid-19 Electronic Lab Reporting, Cerner, Optum, Centene, Humana, Invitae, Bayer, Celmatix, Care.com.
Apache Kafka for Real-time Supply Chain in the Food and Retail IndustryKai Wähner
Use Cases, Architectures, and Real-World Examples for data in motion and real-time event streaming powered by Apache Kafka across the supply chain and logistics. Case studies and deployments include Baader, Walmart, Migros, Albertsons, Domino's Pizza, Instacart, Grab, Royal Caribbean, and more.
Kafka for Real-Time Replication between Edge and Hybrid CloudKai Wähner
Not all workloads allow cloud computing. Low latency, cybersecurity, and cost-efficiency require a suitable combination of edge computing and cloud integration.
This session explores architectures and design patterns for software and hardware considerations to deploy hybrid data streaming with Apache Kafka anywhere. A live demo shows data synchronization from the edge to the public cloud across continents with Kafka on Hivecell and Confluent Cloud.
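Production setups typically rely on dedicated replication tooling such as MirrorMaker 2 or Confluent Cluster Linking, but the idea can be sketched with plain Kafka clients in Java: consume from the edge cluster and re-produce into the cloud cluster. Broker addresses, the topic name and the consumer group below are placeholders, and error handling and offset management are omitted for brevity.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class EdgeToCloudBridge {
    public static void main(String[] args) {
        Properties edge = new Properties();
        edge.put("bootstrap.servers", "edge-broker:9092");      // edge cluster (placeholder address)
        edge.put("group.id", "edge-replicator");
        edge.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        edge.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        Properties cloud = new Properties();
        cloud.put("bootstrap.servers", "cloud-broker:9092");    // cloud cluster (placeholder address)
        cloud.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        cloud.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(edge);
             KafkaProducer<String, String> producer = new KafkaProducer<>(cloud)) {
            consumer.subscribe(List.of("machine-telemetry"));   // hypothetical topic name
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> r : records) {
                    // forward each edge record to the same topic in the cloud cluster
                    producer.send(new ProducerRecord<>("machine-telemetry", r.key(), r.value()));
                }
            }
        }
    }
}
```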
Apache Kafka for Predictive Maintenance in Industrial IoT / Industry 4.0Kai Wähner
The manufacturing industry is moving away from just selling machinery, devices, and other hardware. Software and services increase revenue and margins. Equipment-as-a-Service (EaaS) even outsources the maintenance to the vendor.
This paradigm shift is only possible with reliable and scalable real-time data processing leveraging an event streaming platform such as Apache Kafka. This talk explores how Kafka-native Condition Monitoring and Predictive Maintenance help with this innovation.
More details:
https://www.kai-waehner.de/blog/2021/10/25/apache-kafka-condition-monitoring-predictive-maintenance-industrial-iot-digital-twin/
Video recording:
https://youtu.be/tfOuN5KeI9w
Apache Kafka Landscape for Automotive and ManufacturingKai Wähner
Today, in 2022, Apache Kafka is the central nervous system of many applications in various areas related to the automotive and manufacturing industry for processing analytical and transactional data in motion across edge, hybrid, and multi-cloud deployments.
This presentation explores the automotive event streaming landscape, including connected vehicles, smart manufacturing, supply chain optimization, aftersales, mobility services, and innovative new business models.
Afterwards, many real-world examples are shown from companies such as Audi, BMW, Porsche, Tesla, Uber, Grab, and FREENOW.
More detail in the blog post:
https://www.kai-waehner.de/blog/2022/01/12/apache-kafka-landscape-for-automotive-and-manufacturing/
Kappa vs Lambda Architectures and Technology ComparisonKai Wähner
Real-time data beats slow data. That’s true for almost every use case. Nevertheless, enterprise architects build new infrastructures with the Lambda architecture that includes separate batch and real-time layers.
This video explores why a single real-time pipeline, called Kappa architecture, is the better fit for many enterprise architectures. Real-world examples from companies such as Disney, Shopify, Uber, and Twitter explore the benefits of Kappa but also show how batch processing fits into this discussion positively without the need for a Lambda architecture.
The main focus of the discussion is on Apache Kafka (and its ecosystem) as the de facto standard for event streaming to process data in motion (the key concept of Kappa), but the video also compares various technologies and vendors such as Confluent, Cloudera, IBM Red Hat, Apache Flink, Apache Pulsar, AWS Kinesis, Amazon MSK, Azure Event Hubs, Google Pub Sub, and more.
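To make the Kappa idea concrete, here is a minimal Kafka Streams topology in Java in which a single streaming pipeline serves both real-time consumers and later reprocessing (by replaying the log) instead of maintaining a separate batch layer. The topic names, application id and filter predicate are illustrative assumptions.

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class SingleRealtimePipeline {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "kappa-demo");         // placeholder app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> events = builder.stream("page-views");        // hypothetical input topic
        events
            .filter((key, value) -> value != null && value.contains("purchase")) // keep purchase events only
            .to("purchases");                                                     // hypothetical output topic

        // Both real-time consumers and later "batch" jobs read the same topics;
        // reprocessing is done by replaying the log, not by a separate batch layer.
        new KafkaStreams(builder.build(), props).start();
    }
}
```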
Video recording of this presentation:
https://youtu.be/j7D29eyysDw
Further reading:
https://www.kai-waehner.de/blog/2021/09/23/real-time-kappa-architecture-mainstream-replacing-batch-lambda/
https://www.kai-waehner.de/blog/2021/04/20/comparison-open-source-apache-kafka-vs-confluent-cloudera-red-hat-amazon-msk-cloud/
https://www.kai-waehner.de/blog/2021/05/09/kafka-api-de-facto-standard-event-streaming-like-amazon-s3-object-storage/
The Top 5 Apache Kafka Use Cases and Architectures in 2022Kai Wähner
I see the following topics coming up more regularly in conversations with customers, prospects, and the broader Kafka community across the globe:
Kappa Architecture: Kappa goes mainstream to replace Lambda and Batch pipelines (that does not mean that there is no batch processing anymore). Examples: Kafka-powered Kappa architectures from Uber, Disney, Shopify, and Twitter.
Hyper-personalized Omnichannel: Retail and customer communication across online and offline channels becomes the new black, including context-specific upselling, recommendations, and location-based services. Examples: Omnichannel Retail and Customer 360 in Real-Time with Apache Kafka.
Multi-Cloud Deployments: Business units and IT infrastructures span across regions, continents, and cloud providers. Linking clusters for bi-directional replication of data in real-time becomes crucial for many business models. Examples: Global Kafka deployments.
Edge Analytics: Low latency requirements, cost efficiency, or security requirements enforce the deployment of (some) event streaming use cases at the far edge (i.e., outside a data center), for instance, for predictive maintenance and quality assurance on the shop floor level in smart factories. Examples: Edge analytics with Kafka.
Real-time Cybersecurity: Situational awareness and threat intelligence need to process massive data in real-time to defend against cyberattacks successfully. The many successful ransomware attacks across the globe in 2021 were a warning for most CIOs. Examples: Cybersecurity for situational awareness and threat intelligence in real-time.
Event Streaming CTO Roundtable for Cloud-native Kafka ArchitecturesKai Wähner
Technical thought leadership presentation to discuss how leading organizations move to real-time architecture to support business growth and enhance customer experience. This is a forum to discuss use cases with your peers to understand how other digital-native companies are utilizing data in motion to drive competitive advantage.
Agenda:
- Data in Motion with Event Streaming and Apache Kafka
- Streaming ETL Pipelines
- IT Modernisation and Hybrid Multi-Cloud
- Customer Experience and Customer 360
- IoT and Big Data Processing
- Machine Learning and Analytics
Apache Kafka in the Public Sector (Government, National Security, Citizen Ser...Kai Wähner
The Rise of Data in Motion in the Public Sector powered by event streaming with Apache Kafka.
Citizen Services:
- Health services, e.g. hospital modernization, track & trace - Covid distance control
- Public administration - reduce bureaucracy, data democratization across government departments
- eGovernment - Efficient and digital citizen engagement, e.g. personal ID application process
Smart City:
- Smart driving, parking, buildings, environment
- Waste management
- Open exchange – e.g. mobility services (1st and 3rd party)
Energy:
- Smart grid and utilities infrastructure (energy distribution, smart home, smart meters, smart water, etc.)
National Security:
- Law enforcement, surveillance, police/interior security data exchange
- Defense and military (border control, intelligent soldier)
- Cybersecurity for situational awareness and threat intelligence
Telco 4.0 - Payment and FinServ Integration for Data in Motion with 5G and Ap...Kai Wähner
The Era of Telco 4.0: Embracing Digital Transformation with Data in Motion. Learn about Payment and FinServ Integration for Data in Motion with 5G and Apache Kafka.
1) The rise of Telco 4.0 and the future forward
2) Data in Motion in the Telco industry
3) Real-world Fintech and Payment examples powered by Data in Motion
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work, along with a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
DevOps and Testing slides at DASA ConnectKari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We closed with a lovely workshop in which participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring and observability to the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images