With Serverless/FaaS the unit of work is a fine-grained, ephemeral function triggered by a variety of events. How can we design a system composed of countless functions without losing sight of each function's purpose, and without accidentally introducing a big ball of mud of highly coupled functions? One approach is to introduce Domain-Driven Design (DDD). DDD is a methodology for capturing a business domain as closely as possible in software, and it comes with strategic and tactical design patterns. DDD helps decompose a system into modular components (Bounded Contexts) and map the integration patterns between them (Context Mapping).
In this talk, I am going to highlight how Domain-Driven Design and Serverless/FaaS can go together: by splitting a system into Bounded Contexts and implementing these Bounded Contexts using Serverless technologies.
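As a hedged illustration of that idea (a sketch, not the talk's implementation): a single function inside a hypothetical "Ordering" Bounded Context could look like the following on AWS Lambda, keeping to its own model and publishing a domain event that other contexts (Shipping, Billing) consume. The event name, bus name, and payload fields are assumptions for illustration.

# Hypothetical sketch: one Lambda function inside an "Ordering" Bounded Context.
# Event name, bus name, and payload fields are illustrative assumptions.
import json
import boto3

events = boto3.client("events")  # EventBridge client used to publish domain events

def place_order(event, context):
    """API Gateway-triggered handler; works only with the Ordering context's model."""
    order = json.loads(event["body"])   # request payload from API Gateway
    order_id = order["orderId"]         # term from this context's ubiquitous language

    # ... validate and persist the Order aggregate here (e.g., in DynamoDB) ...

    # Publish a domain event so other Bounded Contexts can react without being
    # coupled to this function's internals.
    events.put_events(Entries=[{
        "Source": "ordering",                    # context name as the event source
        "DetailType": "OrderPlaced",
        "Detail": json.dumps({"orderId": order_id}),
        "EventBusName": "domain-events",         # hypothetical bus name
    }])
    return {"statusCode": 201, "body": json.dumps({"orderId": order_id})}

Each Bounded Context would own its own set of such functions and its own data store, integrating with the others only through events or published APIs.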
Building Adaptive Systems with Wardley Mapping, Domain-Driven Design, and Team Topologies – Susanne Kaiser
In a world of rapid changes and increasing uncertainties, organizations have to continuously adapt and evolve to remain competitive and excel in the market.
In such a dynamic business landscape organizations need to design for adaptability. Designing for adaptability requires understanding the landscape organizations are operating in, identifying patterns of change, applying principles for organizational fitness, and making mindful strategic decisions to adapt to change.
Organizations need to aim for building systems and team organizations aligned to the business needs and business strategy and evolving them for adaptability to new changes and unknown environments.
This talk brings different perspectives and techniques together from business strategy (Wardley Mapping), software architecture and design (Domain-Driven Design), and team organization (Team Topologies) as a powerful toolset to design, build and evolve adaptive systems and team structures for a fast flow of change.
Overview of the IT4IT tooling market in 2022.
Key trends in the IT4IT / DevOps tooling market are:
- Strategic portfolio management / portfolio backlog management (scaling agile on the enterprise level integrating with Enterprise architecture and Application / Product Portfolio Management)
- Online collaboration & communication tools supporting team-of-teams planning, problem solving, etc.
- Value stream management (an emerging tooling category) providing visibility across the end-to-end IT value streams
- Multi-cloud discovery & visibility on usage, costs and compliance
- Integrating the DevOps tool chain (e.g., CI/CD pipeline) with the ITSM platform and CMDB
- Integrating security, risk and compliance management into the DevOps tool chain
- AIOps and observability management, consolidating metrics, logs, and events mapped to a real-time service model
- Security operations, integrating security monitoring, vulnerability scanning, etc. into end-to-end detect to correct value streams
- Enterprise Service Management (ITSM vendors providing omni-channel services across IT, HR, Facilities, Finance, etc.)
- Leveraging AI/ML in various capabilities such as test management, security operations, incident management, etc.
- Sustainability management integrated in IRM/GRC platforms
And last but not least:
- Service / Product portfolio management (managing the portfolio of service/applications, supporting product centric operating models, linked to business capabilities, product owners and teams)
ML Workflows with Amazon SageMaker and AWS Step Functions (API325) - AWS re:Invent – Amazon Web Services
Learn how you can build, train, and deploy machine learning workflows for Amazon SageMaker on AWS Step Functions. Learn how to stitch together services, such as AWS Glue, with your Amazon SageMaker model training to build feature-rich machine learning applications, and how to build serverless ML workflows with less code. Cox Automotive also shares how it combined Amazon SageMaker and Step Functions to improve collaboration between data scientists and software engineers. We also share some new features to build and manage ML workflows even faster.
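As a hedged sketch of this pattern (all ARNs, names, bucket, and image URI below are placeholders, not the session's actual workflow), a minimal Step Functions state machine can call SageMaker's training API directly via the built-in service integration and wait for the job to finish:

# Hypothetical sketch of a one-step Step Functions workflow that runs a SageMaker
# training job. All ARNs, names, and the image URI are placeholders.
import json
import boto3

definition = {
    "StartAt": "TrainModel",
    "States": {
        "TrainModel": {
            "Type": "Task",
            # Built-in integration: Step Functions starts the training job and the
            # ".sync" suffix makes it wait until the job completes.
            "Resource": "arn:aws:states:::sagemaker:createTrainingJob.sync",
            "Parameters": {
                "TrainingJobName.$": "$.jobName",
                "AlgorithmSpecification": {
                    "TrainingImage": "<training-image-uri>",
                    "TrainingInputMode": "File",
                },
                "RoleArn": "<sagemaker-execution-role-arn>",
                "OutputDataConfig": {"S3OutputPath": "s3://<bucket>/models/"},
                "ResourceConfig": {
                    "InstanceCount": 1,
                    "InstanceType": "ml.m5.xlarge",
                    "VolumeSizeInGB": 10,
                },
                "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
            },
            "End": True,
        }
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="ml-training-workflow",                    # hypothetical name
    definition=json.dumps(definition),
    roleArn="<step-functions-execution-role-arn>",  # placeholder
)

Additional states (for example an AWS Glue job before training, or a model and endpoint deployment after it) would be chained in the same definition.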
Securing your Amazon SageMaker model development in a highly regulated environment – Amazon Web Services
Amazon SageMaker is a fully managed platform that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. In this session, we dive deep into the security configurations of Amazon SageMaker components, including notebooks, distributed and batch training, and hosting endpoints. We also review Vanguard’s implementation of key controls in a highly regulated environment. These include fine-grained access control, end-to-end encryption in transit, encryption at rest with AWS KMS customer-managed customer master keys (CMKs), private connectivity to all Amazon SageMaker APIs, and comprehensive audit trails for resource and data access.
In this slide deck, I first describe what resilience is, what it is about, why it is important and how it is different from traditional stability approaches.
After that introductory part the main part is a "small" pattern language which is organized around isolation, the typical starting point of resilient software design. I used quotation marks for "small" as even this subset of a complete resilience pattern language still consists of around 20 patterns.
All the patterns are briefly described and for some of the patterns I added a bit of detail, but as this is a slide deck, the voice track - as usual - is missing. Also, this pattern language is still sort of a work in progress, i.e., it has not yet settled and some details are still missing. Yet I think (or at least hope) that the slides might contain a few useful insights for you.
Managed Feature Store for Machine Learning – Logical Clocks
All hyperscale AI companies build their machine learning platforms around a Feature Store.
A feature is a measurable property of a data sample. It could be, for example, an image pixel, a word from a piece of text, the age of a person, a coordinate emitted from a sensor, or an aggregate value like the average number of purchases within the last hour. A Feature Store is a central place to store curated features within an organization.
Feature Stores are fuel for AI systems: we use them to train machine learning models so that we can make predictions for feature values that we have never seen before.
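As a hedged, minimal illustration of the aggregate example above (assumed column names, computed offline with pandas rather than any particular feature store API), the "purchases within the last hour" feature per user could be derived like this:

# Sketch only: compute, for each purchase event, how many purchases the same user
# made in the trailing hour. Column names are assumptions for the example.
import pandas as pd

purchases = pd.DataFrame({
    "user_id": ["u1", "u1", "u1", "u2"],
    "ts": pd.to_datetime([
        "2021-05-01 10:05", "2021-05-01 10:20",
        "2021-05-01 11:40", "2021-05-01 10:30",
    ]),
    "amount": [12.0, 7.5, 3.0, 40.0],
})

feature = (
    purchases.sort_values("ts")
    .set_index("ts")                       # rolling time windows need a datetime index
    .groupby("user_id")["amount"]
    .rolling("1h")                         # trailing one-hour window per user
    .count()
    .rename("purchases_last_hour")
)
print(feature)

In a feature store, a value like this would be materialised to an offline store for training and kept fresh in an online store for serving.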
During this presentation you learn:
- About the concept of a Feature Store and how it can help manage feature data for enterprises and ease the path of data from backend systems and data lakes to data scientists
- Our take on Feature Stores, including best practices and use cases
- How to ensure consistent features in both training and serving
- Governance, access control, and versioning
- How to create training data in the file format of your choice
- How to eliminate inconsistency between features in training and inferencing
Watch the webinar with a demo: https://www.logicalclocks.com/webinars
Ingesting and Processing IoT Data Using MQTT, Kafka Connect and Kafka Streams... – confluent
(Guido Schmutz, Trivadis) Kafka Summit SF 2018
Internet of Things use cases are a perfect match for processing with a streaming platform such as Kafka and the Confluent Platform. Some of the questions to be answered are: How do we feed the data from our devices into Kafka? Do we directly send data to Kafka? Is Kafka accessible from outside the organization over the internet? What if we want to use a more specific IoT protocol such as MQTT or CoAP in between? How would we integrate it with Kafka? How can we enrich IoT streaming data with static data sitting in a traditional system?
This session will provide answers to these and other questions using a fictitious use case of a trucking company. Trucks are constantly sending data about position and driving habits, which can be used to derive real-time information and actions. A large part of the presentation will be a live demo. The demo will show the implementation of the pipeline incrementally: starting with sending the truck movement events directly to Kafka, then adding MQTT to the sensor data ingestion, followed by using Kafka Streams and KSQL to apply stream processing on the information received. The final pipeline will demonstrate the application of Kafka Connect with MQTT and JDBC source connectors for data ingestion and event stream enrichment, and Kafka Streams and KSQL for stream processing. The key takeaway is the live demonstration of a working end-to-end IoT streaming data ingestion pipeline using Kafka technologies.
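For the ingestion step described above, a hedged configuration sketch: registering an MQTT source connector with the Kafka Connect REST API so truck telemetry flows from an MQTT broker into a Kafka topic. The connector class and property names follow my understanding of Confluent's MQTT source connector and should be verified against its documentation; hosts and topic names are illustrative.

# Sketch: register an MQTT source connector via the Kafka Connect REST API.
import json
import requests

connector = {
    "name": "mqtt-truck-position-source",
    "config": {
        "connector.class": "io.confluent.connect.mqtt.MqttSourceConnector",  # assumed class name
        "mqtt.server.uri": "tcp://mosquitto:1883",   # MQTT broker address
        "mqtt.topics": "truck/+/position",           # wildcard over truck IDs
        "kafka.topic": "truck_position",             # target Kafka topic
        "tasks.max": "1",
    },
}

resp = requests.post(
    "http://connect:8083/connectors",                # Kafka Connect REST endpoint
    headers={"Content-Type": "application/json"},
    data=json.dumps(connector),
)
resp.raise_for_status()

Stream processing of the resulting topic (enrichment, aggregation) would then be done in Kafka Streams or KSQL, as the session describes.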
Preparing for a future Microservices journey using DDD & Wardley Maps – Susanne Kaiser
The journey to Microservices can be very challenging. Identifying proper boundaries, integrating services, and handling infrastructure and operational complexities that Microservices come with can be very overwhelming.
How can you avoid losing sight of user and business value while coping with those challenges? One approach could be to focus on the part of your business that gives you the most competitive advantage - your core domain - and to outsource undifferentiating commodities to utility suppliers.
Domain Driven Design combined with Wardley Maps can help us to understand the problem domain and to focus on the core domain.
In this talk Susanne will show how Domain-Driven Design and Wardley Maps can be used together to visualise how a value chain can evolve during a Microservices journey while keeping focus on your core domain.
In this presentation, we will tackle the 'Operational Excellence Pillar' of the AWS Well-Architected Framework. This pillar focuses on running and monitoring systems that deliver business value, and continually improving processes and procedures.
Amazon Web Services (AWS) has spent years working with thousands of companies across all industries to create the most comprehensive collection of best practices and guidance known as the Well-Architected Framework. This resource is available for organizations undergoing a cloud transformation who want to ensure their success on AWS.
Topics Include:
- How operational excellence is a consequence of culture.
- The six design principles for operational excellence in the cloud.
- The focus areas of cloud operational excellence.
- What operational excellence looks like in practice.
Using Amazon Neptune to power identity resolution at scale - ADB303 - Atlanta... – Amazon Web Services
IgnitionOne, a global marketing technology and services leader, announced a strategic partnership with Amazon Neptune, a fast, reliable, and fully managed graph database service offered by AWS. In this session, we discuss how this partnership further enhances the IgnitionOne Customer Intelligence Platform (CIP) with amplified identity resolution capabilities, allowing for greater cross-device performance and cross-browser audience activations. Learn how the IgnitionOne CIP with Amazon Neptune goes beyond traditional Customer Data Platform capabilities, giving brands deeper insights for better omnichannel engagement.
Data Migration Steps PowerPoint Presentation Slides – SlideTeam
Presenting this set of slides with the name Data Migration Steps PowerPoint Presentation Slides. This PPT deck displays twenty-six slides with in-depth research. We provide a ready-to-use deck covering all relevant topics and subtopics, with templates, charts and graphs, overviews, and analysis templates. When you download this deck by clicking the download button below, you get the presentation in both standard and widescreen format. All slides are fully editable: change the colors or font size, and add or delete text as needed. The presentation is fully compatible with Google Slides and can be easily converted into JPG or PDF format.
Data Marketplace and the Role of Data Virtualization – Denodo
Watch full webinar here: https://bit.ly/3IS9sQS
A data marketplace is like an online shopping interface specializing in data. Ideally, it should work just like an online store, with minimal latency and maximum responsiveness. However, this does not mean that all of the data in the data marketplace needs to be stored in the same central repository.
In this session, Shadab Hussain, Americas Sales Head, Data Analytics at Wipro, a partner company with Denodo and a co-sponsor of DataFest 2021, talks about the role of data virtualization in enabling full-featured data marketplaces. Such data marketplaces provide real-time, curated access to data, even when the data is stored across many different sources throughout the organization.
You will learn:
- The main features of a data marketplace
- Why organizations need data marketplaces
- Why data marketplaces sometimes fail
- How data virtualization enables the most effective data marketplaces
- How one of Europe’s premier public healthcare system organizations leveraged a data marketplace to improve data consumption and ease of access
How HSBC Uses Serverless to Process Millions of Transactions in Real Time (FS... – Amazon Web Services
For large financial institutions, it can be extremely hard to predict when your architecture may need to scale to process millions of financial transactions per day. HSBC addressed this challenge by integrating its on-premises mainframe with AWS services such as AWS Lambda, Amazon Kinesis, and Amazon DynamoDB. This integration enables the bank to engage in real time with millions of retail banking customers in a more personal, dynamic, and useful way. The bank applies business logic to its transaction data, and it harnesses the information it gleans to communicate directly with customers through a messaging platform that runs on AWS. In this session, we share an architecture pattern that demonstrates how retail banks can add value by investing in their legacy system when integrating streaming data from on-premises systems to an event-driven, serverless architecture at scale.
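A hedged sketch of the integration pattern described above (table, stream, and field names are illustrative, not HSBC's): a Lambda function consuming transaction records from a Kinesis stream, applying simple business logic, and persisting the result to DynamoDB.

# Sketch only: Kinesis-triggered Lambda writing derived items to DynamoDB.
import base64
import json
import boto3

table = boto3.resource("dynamodb").Table("customer-transactions")  # hypothetical table

def handler(event, context):
    for record in event["Records"]:
        # Kinesis delivers each payload base64-encoded inside the event envelope.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))

        # Apply business logic (here: a toy categorisation) before storing.
        item = {
            "customerId": payload["customerId"],
            "transactionId": payload["transactionId"],
            "amount": str(payload["amount"]),  # store as string to avoid float issues
            "category": "groceries" if payload.get("merchantType") == "GRO" else "other",
        }
        table.put_item(Item=item)
    return {"processed": len(event["Records"])}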
Microservices Pattern Language
Microservices Software Architecture Governance, Best Practices and Design Patterns
Decomposition Patterns
Decompose by Business Capability
Decompose by Subdomain
Domain Driven Design - Strategic Patterns and Microservices – Radosław Maziarka
This presentation describes Domain-Driven Design, an approach to creating applications driven by the business domain. I show how to split your monolith based on DDD strategic patterns.
A general overview of what "Chaos Engineering" is, the perturbation models currently available, and the benefits of Chaos Engineering to customers, business, and tech.
It is a fascinating, explosive time for enterprise analytics.
It is from the position of analytics leadership that the mission will be executed and company leadership will emerge. The data professional is absolutely sitting on the performance of the company in this information economy and has an obligation to demonstrate the possibilities and originate the architecture, data, and projects that will deliver analytics. After all, no matter what business you’re in, you’re in the business of analytics.
The coming years will be full of big changes in enterprise analytics and Data Architecture. William will kick off the fourth year of the Advanced Analytics series with a discussion of the trends winning organizations should build into their plans, expectations, vision, and awareness now.
(ENT305) Develop an Enterprise-wide Cloud Adoption Strategy | AWS re:Invent 2014 – Amazon Web Services
Taking a "cloud first" approach requires different thinking than you probably applied to your initial few workloads in the cloud. You'll be diving into the deep end of hybrid environments, and that means taking a broad view of your IT strategy, architecture, and organizational design.
Through our experience in helping enterprises navigate this change, AWS has developed the Cloud Adoption Framework (CAF) to assist with planning, creating, managing, and supporting the shift. In this session, we cover how the CAF offers practical guidance and comprehensive guidelines to enterprise organizations, particularly around roles, governance, and efficiency.
Data & Analytics ReInvent Recap [AWS Basel Meetup - Jan 2023].pdf – Chris Bingham
After recapping the key data & analytics announcements from AWS re:Invent 2022, we look a little deeper at three key new services:
• AWS DataZone
• AWS Omics
• AWS Clean Rooms
We follow up with a demo of using AWS IoT ExpressLink hardware in conjunction with AWS IoT Core, Lambda, and Amplify to build a Gatsby web app that interacts with the AWS IoT ExpressLink demo badge via a device shadow.
Scaling and Modernizing Data Platform with Databricks – Databricks
Today a Data Platform is expected to process and analyze a multitude of sources spanning batch files, streaming sources, backend databases, REST APIs, and more. There is clearly a need for a standardized platform that scales and stays flexible, letting data engineers and data scientists focus on business problems rather than managing infrastructure and backend services. Another key aspect of the platform is multi-tenancy, to isolate workloads and track cost usage per tenant.
In this talk, Richa Singhal and Esha Shah will cover how to build a scalable Data Platform using Databricks and deploy your data pipelines effectively while managing the costs. The following topics will be covered:
Key tenets of a Data Platform
Setup multistage environment on Databricks
Build data pipelines locally and test on Databricks cluster
CI/CD for data pipelines with Databricks
Orchestrating pipelines using Apache Airflow – Change Data Capture using Databricks Delta (see the sketch after this list)
Leveraging Databricks Notebooks for Analytics and Data Science teams
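As a hedged sketch of the Change Data Capture item above (paths, table, and column names are assumptions; this is a generic Delta Lake merge, not the speakers' pipeline):

# Sketch: apply a batch of upserts to a Delta table with MERGE (delta-spark API).
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("cdc-merge").getOrCreate()

updates = spark.read.json("s3://<bucket>/cdc/orders/latest/")     # incoming change set
target = DeltaTable.forPath(spark, "s3://<bucket>/delta/orders")  # curated Delta table

(target.alias("t")
    .merge(updates.alias("s"), "t.order_id = s.order_id")  # match on the business key
    .whenMatchedUpdateAll()        # apply updates to existing rows
    .whenNotMatchedInsertAll()     # insert new rows
    .execute())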
The Heart of the Data Mesh Beats in Real-Time with Apache Kafka – Kai Wähner
If there were a buzzword of the hour, it would certainly be "data mesh"! This new architectural paradigm unlocks analytic data at scale and enables rapid access to an ever-growing number of distributed domain datasets for various usage scenarios.
As such, the data mesh addresses the most common weaknesses of the traditional centralized data lake or data platform architecture. And the heart of a data mesh infrastructure must be real-time, decoupled, reliable, and scalable.
This presentation explores how Apache Kafka, as an open and scalable decentralized real-time platform, can be the basis of a data mesh infrastructure and - complemented by many other data platforms like a data warehouse, data lake, and lakehouse - solve real business problems.
There is no silver bullet or single technology/product/cloud service for implementing a data mesh. The key outcome of a data mesh architecture is the ability to build data products with the right tool for the job.
A good data mesh combines data streaming technology like Apache Kafka or Confluent Cloud with cloud-native data warehouse and data lake architectures from Snowflake, Databricks, Google BigQuery, et al.
Dr. Karthik Ramasamy of Streamlio draws on his experience building data products at companies including Pivotal, Twitter, and Streamlio to discuss technology and best practices for designing and implementing data-driven microservices:
* The key principles of microservices and microservice architecture
* The implications of microservices for data
* The role of messaging and processing technology in connecting microservices
Modeling data and best practices for Azure Cosmos DB – Mohammad Asif
Azure Cosmos DB is Microsoft's globally distributed, multi-model database service. In this session we covered modeling data with the NoSQL Cosmos database and how it helps distributed applications maintain high availability, scale across multiple regions, and manage throughput.
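As a hedged modeling sketch (account endpoint, key, and names are placeholders; azure-cosmos Python SDK v4 assumed): a common Cosmos DB choice is to embed data that is read together into one document and to pick a partition key that spreads load while keeping related items in the same logical partition.

# Sketch: container with /customerId as partition key and a denormalised order document.
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
db = client.create_database_if_not_exists("shop")
orders = db.create_container_if_not_exists(
    id="orders",
    partition_key=PartitionKey(path="/customerId"),  # drives scale-out and routing
    offer_throughput=400,                            # provisioned RU/s
)

orders.upsert_item({
    "id": "order-1001",
    "customerId": "cust-42",           # partition key value
    "status": "shipped",
    "items": [                         # embedded line items: read together, stored together
        {"sku": "A-100", "qty": 2, "price": 9.99},
        {"sku": "B-200", "qty": 1, "price": 24.50},
    ],
})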
Have you ever wondered what the relative differences are between two of the more popular open source, in-memory data stores and caches? In this session, we will describe those differences and, more importantly, provide live demonstrations of the key capabilities that could have a major impact on your architectural Java application designs.
The ability to grow (and shrink) according to the needs and the available resources is an essential part of designing applications. In this talk we'll cover the fundamental elements of scalability, including aspects involving people, processes and technology. With sound and proven principles and some advice on how to shape your organisation, set the right processes and design your application, this session is a must-see for developers and technical leads alike.
Summary of fast development and cloud native architecture along with cost optimization techniques. Presented as opening keynote at the Utility and Cloud Computing 2014 as part of the Cloud Control Workshop.
How can even a small team handle infrastructure complexities that come with Microservices and still deliver business and user value?
The short answer to that could be to build your core domain - that differentiates you from your competitors - in-house and outsource undifferentiating commodities to utility suppliers.
In this keynote Susanne will explore this in more detail and use Wardley Maps to visualize how the value chain can evolve.
Best Practices for Building a Warehouse Quickly – WhereScape
Key factors that influence a successful data warehouse project are:
+ Implementing the True Development Approach
+ Choosing a Rapid Development Product
+ Ensuring Data Availability
+ Involving Key Users throughout the whole project
+ Relying on a Pragmatic Governance Framework
+ Utilizing experienced Team Members
+ Selecting the right Hardware, Infrastructure Technology
David Eads, Atlassian, presents how to clean and tune your Jira and Confluence instances and Himanshu Chhetri, Addteq, discusses how to implement DevSecOps within your software organization's delivery pipeline.
Slides of an internal talk given at Twitter, similar to a previous webinar for Red Hat with the same title.
Speeding up development is a key concern, cloud and technology improvements like Docker speed up key steps that make continuous delivery possible. Breaking up the work into many separate microservices and datastores with stable APIs allows teams to make progress independently so that the organization scales. Monolithic apps are preferred for small projects, built by small teams and when very low latency and high efficiency is the primary requirement. Monitoring microservices is currently a challenge with solutions starting to emerge.
This introduction to Strategy with Wardley Maps covers:
* What is Wardley Mapping?
* The Problem & Value of Mapping
* Elements of a Map
* Overview of the Strategy Cycle
* A couple of Climatic Patterns
* Several examples
First run @ Wardley Maps London September 2020 as a talk + workshop. https://www.meetup.com/Wardley-Maps-London
Recording will be posted soon.
It is released CC-by-SA, and is based on Simon Wardley's work available on https://medium.com/wardleymaps
Preparing for a future microservices journey (with Wardley Maps) – Susanne Kaiser
How can a small team handle infrastructure complexities that come with Microservices and still deliver business and user value?
The short answer to that could be to build your core domain - that differentiates you from your competitors - in-house and outsource undifferentiating commodities to utility suppliers.
In this talk I have used Wardley Maps to visualise how the value chain can evolve when infrastructure components are handled by different options: going from open-source software, to Kubernetes container orchestration, to Istio's service mesh, and to Serverless technologies such as AWS Lambda.
Technology and Digital Platform | 2019 partner summit – Andrew Kumar
Technology: Andrew Kumar will share a refresher of our technology standards and documentation, while highlighting what is changing in 2019 in the reference architecture and starter kits.
Digital Platform: Andrew Kumar will follow tech and design updates with a refresher on why the digital platform matters, what exists in the digital platform, what is being worked on, and what is coming next as we co-create value, save team member effort, and improve speed to market with investments in the digital platform.
Introducing domain driven design - dogfood con 2018 – Steven Smith
DDD provides a set of patterns and practices for tackling complex business problems with software models. Learn the basics of DDD in this session, including several principles and patterns you can start using immediately even if your project hasn't otherwise embraced DDD. Examples will primarily use C#/.NET.
(English slides - except for the title page)
Slides from my presentation delivered in Kraków at the SFI 2017 conference.
My attempt to analyse why software development in Central Europe (including Poland) concentrates on outsourcing services, what it means in practice, and what we can do as a profession of software engineers to become partners for "the business", similar to how the IT industry is evolving in the US or some other advanced western economies.
Enter Product Engineering!
Optimizing Spark Deployments for Containers: Isolation, Safety, and Performance – Spark Summit
Developers love Linux containers, which neatly package up an application and its dependencies and are easy to create and share. However, this unbeatable developer experience hides some deployment challenges for real applications: how do you wire together pieces of a multi-container application? Where do you store your persistent data if your containers are ephemeral? Do containers really contain and isolate your application, or are they merely hiding potential security vulnerabilities? Are your containers scheduled across your compute resources efficiently, or are they trampling on one another?
Container application platforms like Kubernetes provide the answers to some of these questions. We’ll draw on expertise in Linux security, distributed scheduling, and the Java Virtual Machine to dig deep on the performance and security implications of running in containers. This talk will provide a deep dive into tuning and orchestrating containerized Spark applications. You’ll leave this talk with an understanding of the relevant issues, best practices for containerizing data-processing workloads, and tips for taking advantage of the latest features and fixes in Linux Containers, the JDK, and Kubernetes. You’ll leave inspired and enabled to deploy high-performance Spark applications without giving up the security you need or the developer-friendly workflow you want.
Habitat is amazing technology - but a new technology alone will not deliver business value. A technology is good for your business when it allows you to deliver stronger value in higher quantities at a faster velocity. For a business, much of the value comes in the software applications it produces - the application itself is what makes it money. Come hear how Habitat’s focus on the application as the unit of automation allows you to focus on the application itself and not worry about where it will run. Habitat also allows you to easily change where and what your application runs on. Your application and business needs will change over time, which means you need to be able to change your application at a very high velocity without being locked into one type of infrastructure or one vendor. Come witness how Habitat allows your applications to be infrastructure and platform agnostic - you focus on the application, Habitat takes care of packaging your software, exporting it, and running it wherever you need. Learn how you can deliver stronger value in higher quantities at a faster velocity without sacrificing stability.
Keynote at Dockercon Europe Amsterdam Dec 4th, 2014.
Speeding up development with Docker.
Summary of some interesting web scale microservice architectures.
Please send me updates and corrections to the architecture summaries @adrianco
Thanks Adrian
Introducing Domain Driven Design - CodeMash – Steven Smith
DDD provides a set of patterns and practices for tackling complex business problems with software models. Learn the basics of DDD in this session, including several principles and patterns you can start using immediately even if your project hasn't otherwise embraced DDD. Examples will primarily use C#/.NET.
Similar to Designing a Serverless Application with Domain Driven Design (20)
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features
available on those devices, but many of those features provide convenience and capability at the expense of security. This best practices guide outlines steps users can take to better protect personal devices and information.
PHP Frameworks: I want to break free (IPC Berlin 2024) – Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
GraphRAG is All You Need? LLM & Knowledge Graph – Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Communications Mining Series - Zero to Hero - Session 1 – DianaGray10
This session provides an introduction to UiPath Communications Mining, why it matters, and a platform overview. You will acquire a good understanding of the phases in Communications Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
A tale of scale & speed: How the US Navy is enabling software delivery from l... – sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Essentials of Automations: The Art of Triggers and Actions in FME – Safe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Transcript: Selling digital books in 2024: Insights from industry leaders - T... – BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Securing your Kubernetes cluster: a step-by-step guide to success! – KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
GridMate - End to end testing is a critical piece to ensure quality and avoid... – ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
Climate Impact of Software Testing at Nordic Testing Days – Kari Kakkonen
My slides at Nordic Testing Days, 6 June 2024.
The talk discusses the climate impact and sustainability of software testing. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability and then measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
UiPath Test Automation using UiPath Test Suite series, part 5 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series, part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
3. Areas of Cost Relating To Poor Software Quality
Source: https://www.it-cisq.org/the-cost-of-poor-quality-software-in-the-us-a-2018-report @suksr
4. Some Indicators for Poor Software Quality (extracted from CISQ report)
- Increasing defect trend
- Low test coverage
- Cyclomatic complexity
- Large inheritance depth
- High degree of class coupling
- High amount of effort to understand a piece of code
- Badly engineered software
- Lack of domain knowledge
- Communication and coordination breakdowns in (large) teams
Based on: https://www.it-cisq.org/the-cost-of-poor-quality-software-in-the-us-a-2018-report
@suksr
6. Domain Driven Design (DDD) – Terminology
Strategic Design: Bounded Context, Ubiquitous Language, Core Subdomain, Supporting Subdomain, Generic Subdomain, Problem Space, Solution Space, Context Maps, Anti-Corruption Layer, Shared Kernel, Open Host Service, Separate Ways, Partnership, Customer-Supplier, Conformist
Tactical Design: Domain Model, Entity, Value Object, Aggregate, Repository, Factory, Application Service, Domain Service, Domain Event
7. DDD & Wardley Maps
[Wardley Map: the value chain (visible → invisible) on the y-axis and evolution (Genesis → Custom-Built → Product (+rental) → Commodity (+utility), from uncharted to industrialised) on the x-axis; each component has a position on the map and a movement along the evolution axis]
9.–11. Wardley Maps – VALUE CHAIN
- Who are your users?
- What are your users’ needs?
- What are the components/activities to fulfill your users’ needs, incl. dependencies?
[Components are placed along the value chain, from visible to invisible to the user]
12. Wardley Maps – LANDSCAPE
Components are placed along the evolution axis (Genesis → Custom-Built → Product (+rental) → Commodity (+utility)), giving each component a position and a movement.
13. Wardley Maps – PATTERNS
Everything evolves (past → current → future), driven by supply and demand competition, from the uncharted to the industrialised end of the map.
14. Wardley Maps – PATTERNS: characteristics change along evolution (past → current → future)
- Genesis (uncharted): undefined market, uncertain, unpredictable, rare, poorly understood
- Custom-Built: forming market, learning on use, increasing understanding, slowly increasing consumption, rapid increases in learning
- Product (+rental): growing market, learning on operation, increasing education, rapidly increasing consumption, rapid increase in use
- Commodity (+utility) (industrialised): mature market, known/accepted, stable, widespread and stabilising, commonly understood (in terms of use)
15.–18. Wardley Maps – PRINCIPLES
- Know your users & focus on user needs
- Use appropriate methods per evolution stage, moving from Genesis towards Commodity: build in-house / Agile, use/buy off-the-shelf product / Lean, outsource to utility suppliers / Six Sigma
19. DDD & Wardley Maps – understanding the problem domain first
Domain experts and development teams collaborate on the problem domain, building up domain knowledge and a ubiquitous language.
21.–22. DDD & Wardley Maps – DDD Patterns & Practices for the problem domain
Strategic Design – architecting a solution that fits the problem domain as closely as possible:
- Problem space: analysing the business domain, discovering subdomains
- Solution space: decomposing into modular components (Bounded Contexts, BC), mapping interaction patterns between BCs (Context Maps)
Tactical Design – providing building blocks to implement the domain model
23.–26. DDD & Wardley Maps – STRATEGIC DESIGN (PROBLEM SPACE)
Distilling the problem domain & discovering the core subdomain:
- Core subdomain: competitive advantage, complex, changes often → build in-house
- Supporting subdomain: no competitive advantage, quite simple, does not change often → prefer to buy/use off-the-shelf
- Generic subdomain: no competitive advantage, generally complex, does not change often → buy/use off-the-shelf or outsource
27. DDD & Wardley Maps – STRATEGIC DESIGN (SOLUTION SPACE): Model Driven Design
Development teams and domain experts share a ubiquitous language; the analysis model and the code model together form the domain model, by which the problem domain (core, supporting and generic subdomains) is abstracted.
28. DDD & Wardley Maps – STRATEGIC DESIGN (SOLUTION SPACE): Bounded Contexts
A bounded context is a linguistic/semantic boundary, an ownership boundary, a model boundary and a physical boundary; different architectural patterns are possible per context.
39. Hexagonal Architecture – BOUNDED CONTEXT: EVENT MANAGEMENT
REST API with AWS API Gateway and AWS Lambda: the business logic sits in the inner core, wrapped by the application layer, and ports and adapters connect it to the outside (inner → outer → outside). AWS API Gateway and the EventController form the REST adapter, exposing the Lambda functions newEvent, deleteEvent and activateEvent via POST /events, DELETE /events/{id} and POST /events/{id}/activate.
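Not shown on the slide is how these pieces are wired together. A minimal sketch (my assumption, not from the deck) of a composition root that connects the driven adapter (DynamoDBEventRepository), the application service and the driving adapter (EventsController), and exports the Lambda handlers that the API Gateway routes point to; the import paths are hypothetical:

// handler.ts: hypothetical composition root for the Event Management bounded context
import { DynamoDBEventRepository } from "./infrastructure/DynamoDBEventRepository";
import EventApplicationService from "./application/EventApplicationService";
import { EventsController } from "./adapter/EventsController";

const eventRepository = new DynamoDBEventRepository();              // driven adapter (implements the EventRepository port)
const eventsService = new EventApplicationService(eventRepository);
const controller = new EventsController(eventsService);            // driving adapter (REST)

// Lambda handlers referenced by the API Gateway routes above
export const newEvent = controller.newEvent;
export const deleteEvent = controller.deleteEvent;                  // assuming a deleteEvent handler analogous to newEvent
export const activateEvent = controller.activateEvent;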
40. Hexagonal Architecture – BOUNDED CONTEXT: EVENT MANAGEMENT (REST-API adapter, port, Lambda functions)
// Driving adapter: translates API Gateway events into calls on the application service.
// EventApplicationService, EventId, UserId and the success/failure response helpers
// are defined elsewhere in this bounded context.
import { APIGatewayEvent, Callback, Context, Handler } from "aws-lambda";

export class EventsController {
  private readonly eventsService: EventApplicationService;

  public constructor(eventsService: EventApplicationService) {
    this.eventsService = eventsService;
  }

  public activateEvent: Handler = async (event: APIGatewayEvent, context: Context, callback: Callback) => {
    if (!event.pathParameters) {
      return callback(undefined, failure({ status: "error", error: "no event id specified" }));
    }
    if (!event.requestContext.authorizer) {
      return callback(undefined, failure({ status: "error", error: "no authorized user specified" }));
    }
    try {
      const eventId = new EventId(event.pathParameters.id);
      const userId = new UserId(event.requestContext.authorizer.claims['cognito:username']);
      await this.eventsService.activateEvent(eventId, userId);
      callback(undefined, success({ status: "ok" }));
    } catch (e) {
      return callback(undefined, failure({ status: "error", error: e }));
    }
  };

  public newEvent: Handler = async (event: APIGatewayEvent, context: Context, callback: Callback) => {
    // ... //
  };
}
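The success and failure helpers used by the controller are not shown in the deck; a minimal sketch of what they might look like (an assumption on my part), returning API Gateway proxy responses:

// responses.ts: hypothetical HTTP response helpers for the Lambda proxy integration
export function success(body: object) {
  return { statusCode: 200, headers: { "Access-Control-Allow-Origin": "*" }, body: JSON.stringify(body) };
}

export function failure(body: object) {
  return { statusCode: 500, headers: { "Access-Control-Allow-Origin": "*" }, body: JSON.stringify(body) };
}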
46. Domain Model – BC: EVENT MANAGEMENT
Aggregate (aggregate root: the Event entity):
- Event: create: Event, activate, deactivate, reschedule, rename
Value objects:
- EventId (id: string)
- Name (create: Name, name: string)
- Description (create: Description, desc: string)
- Period (create: Period, start: Date, end: Date)
- EventStatus (CREATED, ACTIVATED, DEACTIVATED)
Application layer: the EventController (AWS API Gateway) and the DynamoDBEventRepository connect to the domain model through ports of the application layer; the repository persists the Event aggregate.
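The value objects appear only in the diagram; a minimal sketch of how one of them could look in TypeScript, assuming the create factory and the name field listed above (the validation rule is my assumption):

// Name: value object of the Event Management domain model
export default class Name {
  readonly name: string;

  private constructor(name: string) {
    this.name = name;
  }

  // Factory enforcing the value object's invariant at creation time (assumed: non-empty name)
  public static create(name: string): Name {
    if (!name || name.trim().length === 0) {
      throw new Error("Name must not be empty");
    }
    return new Name(name.trim());
  }
}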
47. Aggregate – BC: EVENT MANAGEMENT
// Event is the aggregate root; EventId, Name, Description and Period are the value objects
// and EventStatus the enum from the domain model on the previous slide.
export default class Event {
  readonly id: EventId;
  name: Name;
  description?: Description;
  status: EventStatus;
  period: Period;

  private constructor(id: EventId, name: Name, status: EventStatus, period: Period, description?: Description) {
    this.id = id;
    this.name = name;
    this.description = description;
    this.status = status;
    this.period = period;
  }

  public activate() {
    if (this.status === EventStatus.CLOSED) {
      throw new Error("You cannot activate a closed event");
    }
    if (this.status === EventStatus.ACTIVATED) {
      throw new Error("This event has already been activated");
    }
    this.status = EventStatus.ACTIVATED;
  }

  public rename(name: Name) {
    if (!name) {
      throw new Error("You cannot rename the event to an empty name");
    }
    this.name = name;
  }
  // ... //
}
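The domain model on slide 46 lists a create factory on Event, presumably hidden behind the // ... // above; a possible sketch of that fragment, assuming a newly created event starts in status CREATED:

// Inside the Event aggregate (hypothetical factory, matching "create: Event" from the diagram)
public static create(id: EventId, name: Name, period: Period, description?: Description): Event {
  return new Event(id, name, EventStatus.CREATED, period, description);
}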
48. Domain Model – BC: EVENT MANAGEMENT (application layer)
The EventController (behind AWS API Gateway) calls the EventApplicationService, which depends on the EventRepository port; the DynamoDBEventRepository is the adapter implementing that port to persist the Event aggregate.
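The EventRepository port itself is only named in the diagram; a minimal sketch, assuming the two operations the application service uses on the next slide (eventOfId, saveEvent), together with a hypothetical in-memory adapter that is handy for unit-testing the application service without AWS (Event and EventId are the aggregate and value object from the previous slides):

// EventRepository: port of the Event Management bounded context (inferred from its usage)
export interface EventRepository {
  eventOfId(id: EventId): Promise<Event | undefined>;
  saveEvent(event: Event): Promise<void>;
}

// Hypothetical in-memory adapter, e.g. as a test double for the application service;
// DynamoDBEventRepository would implement the same interface against DynamoDB.
export class InMemoryEventRepository implements EventRepository {
  private readonly events = new Map<string, Event>();

  public async eventOfId(id: EventId): Promise<Event | undefined> {
    return this.events.get(id.id);
  }

  public async saveEvent(event: Event): Promise<void> {
    this.events.set(event.id.id, event);
  }
}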
49. Application Service – Domain Model: EVENT MANAGEMENT
// Application service: orchestrates the activate-event use case against the EventRepository port.
export default class EventApplicationService {
  private readonly eventRepository: EventRepository;

  constructor(eventRepository: EventRepository) {
    this.eventRepository = eventRepository;
  }

  public async activateEvent(id: EventId) {
    const event = await this.eventRepository.eventOfId(id);
    if (!event) {
      throw new Error("Could not activate event with id " + id + ", since event does not exist.");
    }
    event.activate();
    await this.eventRepository.saveEvent(event);
  }
  // ... //
}
52. DDD helps with ...
- Gaining domain knowledge: domain experts and development teams collaborate, building up a ubiquitous language around the business domain, its needs and strategy
- Aligning software design to the business domain, leading to better software design
- Discovering the core subdomain: do not apply DDD everywhere, focus on your core!
But ...