KEYNOTE TALK
We overload our terms a lot in this industry. “Coupling” is one such. That word covers situations ranging from essential to accidental to comical to cosmic. Coupling seems to be the root of all ills. It is the molasses that slows our every move. And yet, in the industry from which we borrowed the term, “coupling” was not a dirty word. It meant something ingenious. Let us contemplate coupling for a time and see what we can do about it.
Business Intelligence (BI) and Data Management Basics (amorshed)
This document provides an overview of business intelligence (BI) and data management basics. It discusses topics such as digital transformation requirements, data strategy, data governance, data literacy, and becoming a data-driven organization. The document emphasizes that in the digital age, data is a key asset and organizations need to focus on data management in order to make informed decisions. It also stresses the importance of data culture and competency for successful BI and data initiatives.
1. Enterprise Data Management (EDM) is the ability of an organization to precisely define, easily integrate and effectively retrieve data for both internal applications and external communication. It involves managing various types of data across the enterprise.
2. EDM includes areas like master data management, reference data management, metadata management, data governance, data quality, data analytics, data privacy, data integration, and data architecture.
3. The document discusses definitions and concepts for each of these areas, including roles, processes, and technologies involved. It provides overviews of fundamental concepts, principles, dimensions and processes for data quality, data governance, data privacy and other areas.
Marc embraces database virtualization and containers to help Dave's development team overcome data issues slowing their work. Virtualizing the database and creating "data pods" allows self-service access and the ability to quickly provision testing environments. This enables the team to work more efficiently and meet sprint goals. DataOps is introduced to fully integrate data into DevOps practices, removing it as a bottleneck through tools that provide versioning, automation and developer-friendly interfaces.
How to Use a Semantic Layer to Deliver Actionable Insights at Scale (DATAVERSITY)
Learn about using a semantic layer to enable actionable insights for everyone and streamline data and analytics access throughout your organization. This session will offer practical advice based on a decade of experience making semantic layers work for Enterprise customers.
Attend this session to learn about:
- Delivering critical business data to users faster than ever at scale using a semantic layer
- Enabling data teams to model and deliver a semantic layer on data in the cloud
- Maintaining a single source of governed metrics and business data
- Achieving speed-of-thought query performance and consistent KPIs across any BI/AI tool, including Excel, Power BI, Tableau, Looker, DataRobot, Databricks and more
- Providing dimensional analysis capability that accelerates performance with no need to extract data from the cloud data warehouse
Who should attend this session?
Data & Analytics leaders and practitioners (e.g., Chief Data Officers, data scientists, data literacy, business intelligence, and analytics professionals).
Data mesh is a decentralized approach to managing and accessing analytical data at scale. It distributes responsibility for data pipelines and quality to domain experts. The key principles are domain-centric ownership, treating data as a product, and using a common self-service infrastructure platform. Snowflake is well-suited for implementing a data mesh with its capabilities for sharing data and functions securely across accounts and clouds, with built-in governance and a data marketplace for discovery. A data mesh implemented on Snowflake's data cloud can support truly global and multi-cloud data sharing and management according to data mesh principles.
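As a rough, assumption-heavy sketch of the cross-account sharing capability mentioned above, the following uses the snowflake-connector-python package to publish a domain-owned view as a secure share; every database, view, and account name is a hypothetical placeholder, not anything from the talk.

```python
# Sketch: publishing a domain-owned "data product" as a Snowflake share.
# All object and account names (sales DB, orders_by_product view, consumer
# account) are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_org-sales_domain",   # hypothetical account identifier
    user="DATA_PRODUCT_OWNER",
    password="...",                  # prefer key-pair auth in practice
)
cur = conn.cursor()

# Standard Snowflake secure-sharing DDL, executed statement by statement.
for stmt in [
    "CREATE SHARE IF NOT EXISTS sales_orders_share",
    "GRANT USAGE ON DATABASE sales TO SHARE sales_orders_share",
    "GRANT USAGE ON SCHEMA sales.public TO SHARE sales_orders_share",
    "GRANT SELECT ON VIEW sales.public.orders_by_product TO SHARE sales_orders_share",
    # Consumers in other accounts (even on other clouds) can now mount the share.
    "ALTER SHARE sales_orders_share ADD ACCOUNTS = my_org.marketing_domain",
]:
    cur.execute(stmt)
```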
Michelle Ufford of Netflix presented on their approach to data quality. They developed Quinto, a data quality service that implements a Write-Audit-Publish pattern for ETL jobs. It audits metrics after data is written to check for issues like row counts being too high/low. Configurable rules determine if issues warrant failing or warning on a job. Future work includes expanding metadata tracking and anomaly detection. The presentation emphasized building modular components over monolithic frameworks and only implementing quality checks where needed.
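The Write-Audit-Publish flow is easy to sketch. The toy Python below is illustrative only (it is not Quinto, whose implementation the summary does not detail): data lands in a staging table, a configurable row-count rule audits it, and publication happens only if the audit passes.

```python
# Minimal Write-Audit-Publish sketch (illustrative only, not Netflix's Quinto).
class Table:
    """Toy stand-in for a warehouse table."""
    def __init__(self):
        self.rows = []

def audit_row_count(rows, min_rows, max_rows):
    """Configurable rule: raise (fail the job) if the row count looks wrong."""
    n = len(rows)
    if not (min_rows <= n <= max_rows):
        raise ValueError(f"audit failed: {n} rows outside [{min_rows}, {max_rows}]")

def write_audit_publish(rows, stage, target, min_rows=1, max_rows=1_000_000):
    stage.rows = list(rows)                          # 1. write to staging
    audit_row_count(stage.rows, min_rows, max_rows)  # 2. audit after writing
    target.rows = stage.rows                         # 3. publish only on success

stage, prod = Table(), Table()
write_audit_publish([{"id": 1}, {"id": 2}], stage, prod)
assert prod.rows  # published because the audit passed
```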
Big Data and Data Warehousing Together with Azure Synapse Analytics, SQLBits 2020 (Michael Rys)
SQLBits 2020 presentation on how you can build solutions based on the modern data warehouse pattern with Azure Synapse Spark and SQL, including demos of Azure Synapse.
Basics of BI and Data Management (Summary).pdf (amorshed)
Basics of Business Intelligence and Data Management
BI Architecture
How BI Works
DMBOK Framework
What Is Data Literacy
Data Quality
Data Governance
What Is Self-Service or Modern BI
Power BI Architecture
How Power BI Works
BI Implementation Steps
Neo4j – The Fastest Path to Scalable Real-Time Analytics (Neo4j)
The document discusses how graph databases like Neo4j can enable real-time analytics at massive scale by leveraging relationships in data. It notes that data is growing exponentially, but traditional databases can’t efficiently analyze relationships. Neo4j natively stores and queries relationships, which it claims allows analytics up to 1,000x faster. The document argues that graphs will form the foundation of modern data and analytics by enhancing machine learning models and enabling outcomes like building intelligent applications faster, gaining deeper insights, and scaling without compromising data.
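To make "natively storing and querying relationships" concrete, here is a hedged sketch using the official neo4j Python driver; the connection details and the (:User)-[:FOLLOWS]->(:User) schema are assumptions for illustration, not from the deck.

```python
# Sketch: a relationship-centric query with the official neo4j Python driver.
# URI, credentials, and the (:User)-[:FOLLOWS]->(:User) schema are assumptions.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Friends-of-friends: a join-heavy query in SQL, a short path pattern in Cypher.
query = """
MATCH (u:User {name: $name})-[:FOLLOWS]->()-[:FOLLOWS]->(fof:User)
WHERE fof <> u
RETURN DISTINCT fof.name AS suggestion
LIMIT 10
"""

with driver.session() as session:
    for record in session.run(query, name="alice"):
        print(record["suggestion"])
```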
“TODAY, COMPANIES ACROSS ALL INDUSTRIES ARE BECOMING SOFTWARE COMPANIES.”
The familiar refrain is certainly true of the new-school, born-in-the-cloud set. But it can also apply to traditional enterprises that are reinventing themselves by coupling DevOps excellence with intelligent DataOps.
Adobe Behance Scales to Millions of Users at Lower TCO with Neo4j (Neo4j)
1) Behance is an online platform for showcasing creative work with 25 million members and millions of monthly visitors. It was previously powered by a Cassandra database which had scaling issues.
2) Behance transitioned to using Neo4j, a graph database, which improved performance, flexibility, and reduced costs. It enabled real-time activity feeds and recommendations.
3) This success led to using the graph across Adobe products through the Creative Social Graph initiative. It powered new community features in Lightroom and Photoshop Express at scale.
The document provides an introduction to knowledge graphs. It discusses how knowledge graphs are being used by large enterprises and intelligent agents to capture concepts, entities, and relationships within domains to drive business, generate insights, and enhance relationships. The presentation will cover an overview of what knowledge graphs are, who uses them, why they are used, and how to use them. It then provides some examples of how knowledge graphs are applied, including in intelligent agents, semantic web, search engines, social networks, biology, enterprise knowledge management, and more.
Data Migration Steps PowerPoint Presentation Slides (SlideTeam)
Presenting this set of slides with the name Data Migration Steps PowerPoint Presentation Slides. This PPT deck displays twenty-six slides with in-depth research. We provide a ready-to-use deck with all sorts of relevant topics, subtopics, templates, charts and graphs, overviews, and analysis templates. When you download this deck by clicking the download button below, you get the presentation in both standard and widescreen formats. All slides are fully editable: change the colors and font size, or add and delete text as needed. The presentation is fully supported by Google Slides and can easily be converted to JPG or PDF format.
Databricks Secure Deployments and Security Baselines, Doug, March 2022 (Henrik Brattlie)
Databricks resources deployed to a pre-provisioned VNET
Databricks traffic isolated from regular network traffic
Prevent data exfiltration
Traffic between cluster nodes kept internal and encrypted
Access to Databricks control plane limited and controlled
Platform Strategy to Deliver Digital Experiences on Azure (WSO2)
This slide deck introduces Choreo, a cloud-native internal developer platform by WSO2, a Microsoft independent software vendor (ISV) partner. It enables your developers to create, deploy, and run new digital components like APIs, microservices, and integrations in serverless mode on any Kubernetes cluster, with built-in DevSecOps.
Recording: https://wso2.com/choreo/resources/webinar/platform-strategy-to-deliver-digital-experiences-on-azure/
To use analytics effectively and unlock the data assets in your company, you need a modern, scalable data platform that can react flexibly to events and was designed with a DataOps mindset from the very beginning.
Cognitive computing aims to mimic human reasoning and behavior to solve complex problems. It works by simulating human thought processes through adaptive, interactive, iterative, and contextual means. Cognitive computing supplements human decision making in sectors like customer service and healthcare, while artificial intelligence focuses more on autonomous decision making, with applications in finance, security, and more. One use case of cognitive AI is helping people assess their skills, find relevant jobs, negotiate pay, explore career paths, and compare salaries and job openings.
Speaking to your data is like speaking any other language: it starts with understanding the basic terminology and describing key concepts. This presentation focuses on the key steps that are critical to learning the foundations of speaking data.
Demystifying Data Warehousing as a Service - DFW (Kent Graziano)
This document provides an overview and introduction to Snowflake's cloud data warehousing capabilities. It begins with the speaker's background and credentials. It then discusses common data challenges organizations face today around data silos, inflexibility, and complexity. The document defines what a cloud data warehouse as a service (DWaaS) is and explains how it can help address these challenges. It provides an agenda for the topics to be covered, including features of Snowflake's cloud DWaaS and how it enables use cases like data mart consolidation and integrated data analytics. The document highlights key aspects of Snowflake's architecture and technology.
Graphs in Retail: Know Your Customers and Make Your Recommendations Engine Learn (Neo4j)
This document provides an overview and agenda for a presentation on using graph databases like Neo4j for retail applications. The presentation covers introducing graph databases and Neo4j, discussing retail data types, and demonstrating use cases for customer 360 views, recommendations, supply chain management, and other areas. Case studies are presented on using Neo4j for real-time recommendations at a large retailer and real-time promotions at a top US retailer. The document concludes with an invitation for questions.
This document discusses Project Amaterasu, a tool for simplifying the deployment of big data applications. Amaterasu uses Mesos to deploy Spark jobs and other frameworks across clusters. It defines workflows, actions, and environments in YAML and JSON files. Workflows contain a series of actions like Spark jobs. Actions are written in Scala and interface with Amaterasu's context. Environments configure settings for different clusters. Amaterasu aims to improve collaboration and testing for big data teams through continuous integration and deployment of data pipelines.
Data Catalog for Better Data Discovery and Governance (Denodo)
Watch full webinar here: https://buff.ly/2Vq9FR0
Data catalogs are in vogue, answering critical data governance questions like “Where does my data reside?”, “What other entities are associated with my data?”, “What are the definitions of the data fields?”, and “Who accesses the data?” Data catalogs maintain the necessary business metadata to answer these questions and many more. But that’s not enough: to be useful, data catalogs need to deliver these answers to business users right within the applications they use. (A minimal sketch of such a catalog entry follows the list below.)
In this session, you will learn:
*How data catalogs enable enterprise-wide data governance regimes
*What key capability requirements should you expect in data catalogs
*How data virtualization combines dynamic data catalogs with delivery
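To ground the capabilities listed above, here is a small, assumption-laden sketch of the business metadata a catalog entry might carry; the field names are illustrative, not any vendor's actual schema.

```python
# Toy catalog entry; field names are illustrative, not any vendor's schema.
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    name: str
    location: str                  # "Where does my data reside?"
    definition: str                # "What are the definitions of the fields?"
    related_entities: list = field(default_factory=list)
    readers: list = field(default_factory=list)   # "Who accesses the data?"

catalog = {
    "customer.email": CatalogEntry(
        name="customer.email",
        location="warehouse.crm.customers",
        definition="Primary contact email, verified at signup",
        related_entities=["customer.id", "marketing.opt_in"],
        readers=["crm-app", "campaign-service"],
    )
}

# An application answers governance questions in place, without leaving its UI.
print(catalog["customer.email"].definition)
```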
Business Value Metrics for Data Governance (DATAVERSITY)
This document discusses how to quantify and communicate the business value of data governance initiatives. It begins with background on information capability and data maturity levels. It then discusses frameworks for understanding business value, such as key performance indicators and how initiatives can generate revenue, cost savings or avoidance. The document provides examples of how to calculate return on investment, net present value and payback period to quantify benefits. It also discusses how to effectively communicate a business case by aligning it with organizational objectives and knowing your audience.
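The three measures the document mentions are simple enough to show directly; the cash-flow figures below are invented for illustration.

```python
# ROI, NPV, and payback period for a hypothetical governance initiative.
def roi(gain, cost):
    return (gain - cost) / cost

def npv(rate, cashflows):
    # cashflows[0] is the upfront (negative) investment at t=0
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def payback_period(cashflows):
    cumulative = 0.0
    for t, cf in enumerate(cashflows):
        cumulative += cf
        if cumulative >= 0:
            return t
    return None  # never pays back

flows = [-100_000, 40_000, 40_000, 40_000, 40_000]    # illustrative numbers
print(f"ROI: {roi(sum(flows[1:]), -flows[0]):.0%}")   # 60%
print(f"NPV @ 10%: {npv(0.10, flows):,.0f}")          # ~26,795: worth doing
print(f"Payback: year {payback_period(flows)}")       # year 3
```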
Understanding DataOps and Its Impact on Application Quality (DevOps.com)
Modern-day applications are data-driven and data-rich. The infrastructure your backends run on is a critical aspect of your environment and requires unique monitoring tools and techniques. In this webinar, learn what DataOps is and how critical good DataOps is to the integrity of your application. Intelligent APM for your data is critical to the success of modern applications. In this webinar you will learn:
The power of APM tailored for Data Operations
The importance of visibility into your data infrastructure
How AIOps makes data ops actionable
In this webinar, we’ll show you how Cloudera SDX reduces the complexity in your data management environment and lets you deliver diverse analytics with consistent security, governance, and lifecycle management against a shared data catalog.
Graphs for Finance - AML with Neo4j Graph Data Science (Neo4j)
This document discusses using graph data science and graph algorithms to detect fraud. It explains that graph data science uses relationships in data to power predictions. It provides examples of how graph algorithms like Louvain clustering, PageRank, connected components, and Jaccard similarity can be used to identify communities that frequently interact, measure influence, identify accounts sharing identifiers, and measure account similarity to detect fraud in applications like banking and financial services. The document also discusses using graph embeddings and feature engineering with graph networks to improve machine learning models for fraud detection by basing predictions on influential entities and their relationships.
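As a hedged sketch of the algorithms named above, here is how they might look on a toy account graph using the networkx library (Louvain communities require a recent networkx, 3.x); the edges, standing for shared identifiers or interactions, are fabricated.

```python
# Toy fraud-detection features with networkx; the account graph is fabricated.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("acct1", "acct2"), ("acct2", "acct3"), ("acct1", "acct3"),  # tight cluster
    ("acct4", "acct5"),
])

# Connected components: accounts linked through shared identifiers.
rings = list(nx.connected_components(G))

# PageRank: influence of each account in the interaction graph.
influence = nx.pagerank(G)

# Jaccard similarity of neighborhoods: how alike two accounts look.
similarity = list(nx.jaccard_coefficient(G, [("acct1", "acct2")]))

# Louvain communities: groups that interact frequently (networkx 3.x).
communities = nx.community.louvain_communities(G, seed=42)

print(rings, influence["acct1"], similarity, communities)
```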
Monitoring as an entry point for collaboration (Julien Pivotto)
This document summarizes a talk on using monitoring as an entry point for collaboration. It discusses using the Prometheus monitoring system to collect metrics and expose them using exporters. Grafana is then used to visualize the metrics and create dashboards focused on business metrics like requests, errors, and durations. These metrics provide observability across teams and enable alerting when business services are impacted.
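A rough sketch of the instrumentation side using the prometheus_client library: the service exposes request, error, and duration metrics on an endpoint that Prometheus scrapes and Grafana charts; the metric names here are invented.

```python
# Exposing request/error/duration metrics with prometheus_client.
# Prometheus scrapes :8000/metrics; Grafana dashboards query Prometheus.
import random
import time
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests", ["endpoint"])
ERRORS = Counter("app_errors_total", "Total errors", ["endpoint"])
DURATION = Histogram("app_request_duration_seconds", "Request latency")

@DURATION.time()                           # observe duration of each call
def handle_request():
    REQUESTS.labels(endpoint="/checkout").inc()
    if random.random() < 0.05:             # simulate an occasional failure
        ERRORS.labels(endpoint="/checkout").inc()

if __name__ == "__main__":
    start_http_server(8000)                # exporter endpoint for Prometheus
    while True:
        handle_request()
        time.sleep(0.1)
```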
Cilium: Application-Aware Microservices via BPF (Cynthia Thomas)
Intro to Cilium Microservices Security with Kubernetes Integration
Open Source Cilium website: cilium.io
GH: github.com/cilium/cilium
Join our Slack! cilium.herokuapp.com
Follow us on Twitter!
@ciliumproject
@_techcet_
Redis Streams for Event-Driven Microservices (Redis Labs)
- Bobby Calderwood is a software engineer and founder who will discuss using event sourcing with Redis and Clojure(Script) to build distributed systems.
- He worked with a major automotive manufacturer on a project using this architecture, including event sourcing on Redis Streams, implemented in Clojure(Script), to address their needs for asynchronous processing, data replay/redo, and sharing data between teams.
- The presentation will include a demo of building a sample coffee shop ordering system using this architecture.
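As a minimal illustration of the Redis Streams primitives behind such an event-sourced design (not Bobby Calderwood's actual code), the sketch below appends order events with XADD and replays them with XREAD; the stream key and event fields are invented for the coffee-shop example.

```python
# Event sourcing on Redis Streams: append with XADD, replay with XREAD.
# Stream key and event fields are invented for the coffee-shop example.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Producer: append immutable order events to the stream.
r.xadd("orders", {"event": "OrderPlaced", "drink": "latte", "size": "tall"})
r.xadd("orders", {"event": "OrderPaid", "amount": "4.50"})

# Consumer: replay the full history from the beginning ("0"), which is
# what makes data replay/redo and cross-team sharing straightforward.
for stream, events in r.xread({"orders": "0"}):
    for event_id, fields in events:
        print(event_id, fields)
```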
Building A Distributed Build System at Google Scale, StrangeLoop 2016 (Aysylu Greenberg)
It's hard to imagine a modern developer workflow without a sufficiently advanced build system: Make, Gradle, Maven, Rake, and many others. In this talk, we'll discuss the evolution of build systems that leads to distributed build systems. Then, we'll dive into how we can build a scalable system that is fast and resilient, with examples from Google. We'll conclude with the discussion of general challenges of migrating systems from one architecture to another.
Watch the video: https://content.pivotal.io/webinars/using-data-science-for-cybersecurity
Enterprise networks are under constant threat. While perimeter security can help keep some bad actors out, we know from experience that there is no 100%, foolproof way to prevent unwanted intrusions. In many cases, bad actors come from within the enterprise, meaning perimeter security methods are ineffective.
Enterprises, therefore, must enhance their cybersecurity efforts to include data science-driven methods for identifying anomalous and potentially nefarious user behavior taking place inside their networks and IT infrastructure.
Join Pivotal’s Anirudh Kondaveeti and Jeff Kelly in this live webinar on data science for cybersecurity. You’ll learn how to detect anomalous user behavior using a two-stage, data science-driven framework, including using principal components analysis to develop user-specific behavioral models. Anirudh and Jeff will also share examples of successful real-world cybersecurity efforts and tips for getting started.
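As a hedged sketch of the PCA idea (not Pivotal's actual two-stage framework), the snippet below learns a low-dimensional model of "normal" behavior and scores new activity by reconstruction error; the feature data is synthetic.

```python
# PCA-based anomaly scoring sketch; data is synthetic, approach illustrative.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
normal = rng.normal(size=(500, 10))      # historical user-behavior features
pca = PCA(n_components=3).fit(normal)    # stage 1: learn "normal" structure

def anomaly_score(x):
    """Reconstruction error: distance from the learned 'normal' subspace."""
    reconstructed = pca.inverse_transform(pca.transform(x))
    return np.linalg.norm(x - reconstructed, axis=1)

# Stage 2: score new activity; unusually high scores warrant investigation.
new_activity = np.vstack([
    rng.normal(size=(1, 10)),            # looks like normal behavior
    rng.normal(loc=8.0, size=(1, 10)),   # the needle in the haystack
])
print(anomaly_score(new_activity))       # the outlier scores much higher
```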
About the Speakers:
Anirudh Kondaveeti is a Principal Data Scientist at Pivotal with a focus on cybersecurity and spatio-temporal data mining. He has developed statistical models and machine learning algorithms to detect insider and external threats and “needle-in-the-haystack” anomalies in machine-generated network data for leading industries.
Jeff Kelly is a Principal Product Marketing Manager at Pivotal.
This document discusses challenges with deploying certain types of application components and strategies for addressing them. It begins by noting that database schema updates, mainframe code changes, and application server configuration changes can be difficult to deploy due to issues like lack of source control and inconsistent processes. Automating the deployment of these components is important to avoid errors and ensure consistency. The document then provides examples of how tools can help with tasks like database change management, modeling application server configurations, and managing mainframe code deployments in an incremental fashion. Overall, it advocates for representing complex deployment components as code that can be versioned and deployed in a reliable, automated manner.
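To make "database changes as versioned, deployable code" concrete, here is a minimal, assumption-heavy migration runner in the spirit of tools like Flyway or Liquibase: ordered scripts are applied once each and recorded in a version table, so every environment converges on the same schema.

```python
# Minimal versioned-migration runner (sketch; real tools: Flyway, Liquibase).
import sqlite3

MIGRATIONS = [  # ordered, source-controlled schema changes
    (1, "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)"),
    (2, "ALTER TABLE customer ADD COLUMN email TEXT"),
]

def migrate(conn):
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    current = row[0] or 0
    for version, ddl in MIGRATIONS:
        if version > current:              # apply only what this DB is missing
            conn.execute(ddl)
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    conn.commit()

migrate(sqlite3.connect(":memory:"))
```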
My Past 3 Years Developer Journey at LinkedIn, by Ian Tsai (Kim Kao)
Ian Tsai shared the past three years of his developer journey at LinkedIn. It was about migrating a monolith into microservices, starting three years ago; he faced difficult challenges and needed effective tools to support the change.
Brian Greig gave a presentation on visualizing data in real time using WebSockets and D3. He discussed collecting and consuming data from various sources, performing data analytics and visualization using the DADA loop, using WebSockets for bidirectional data transmission, and manipulating the DOM with D3 for data visualization, and he presented a case study on building a simulation.
Building Event-Driven (Micro)Services with Apache Kafka (Guido Schmutz)
Should we use traditional REST APIs to bind services together? Or is it better to use a more loosely-coupled protocol? This talk will dive into how we piece services together in event driven systems, how we use a distributed log (event hub) to create a central, persistent history of events and what benefits we achieve from doing so. Apache Kafka is a perfect match for building an asynchronous, loosely-coupled event-driven backbone. Events trigger processing logic, which can be implemented in a traditional as well as in a stream processing fashion. The talk will show the difference between a request-driven and event-driven communication and show when to use which.
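A rough sketch of the event-driven side using the kafka-python package: the producer appends to a durable topic (the central, persistent history of events) without knowing its consumers, and consumers can replay that history independently; the broker address and topic name are placeholders.

```python
# Loosely-coupled, event-driven messaging with kafka-python.
# Broker address and topic name are placeholders.
import json
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode(),
)
# The producer does not know (or care) who consumes this event.
producer.send("orders", {"order_id": 42, "status": "PLACED"})
producer.flush()

consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",      # replay the persistent event history
    value_deserializer=lambda v: json.loads(v.decode()),
)
for message in consumer:               # blocks, consuming events as they arrive
    print(message.value)               # each event triggers processing logic
```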
The process of streaming real-time data from a wide variety of machine data sources and entities can be very complex and unwieldy. Using an agent-based approach, Informatica has invented a new technique and open access product that makes this process much more user friendly and efficient, even when dealing with multiple environments such as Hadoop, Cassandra, Storm, Amazon Kinesis and Complex Event Processing.
The document describes a hybrid honeypot framework for collecting and analyzing malware. The framework uses both client honeypots and server honeypots controlled by a central honeypot controller. Client honeypots actively visit URLs to detect client-side attacks, while server honeypots passively detect server-side attacks. Collected malware is stored in a central database and analyzed on an analysis server to detect known and unknown malware types through dynamic execution and static analysis. The integrated framework was able to collect thousands of malware samples, including some not detected by antivirus software.
R2DBC started as an experiment to enable integration of SQL databases into systems that use reactive programming models. It has since grown into a robust specification that can be implemented to manage data in a fully reactive and completely non-blocking fashion.
The document provides an overview of Agile, DevOps and Cloud Management from a security, risk management and audit compliance perspective. It discusses how the IT industry paradigm is shifting towards microservices, containers, continuous delivery and cloud platforms. DevOps is described as development and operations engineers participating together in the entire service lifecycle. Key differences in DevOps include changes to configuration management, release and change management, and event monitoring. Factors for DevOps success include culture, collaboration, eliminating waste, unified processes, tooling and automation.
Cloud to Hybrid Edge Cloud Evolution, Jun 11 2020 (Michel Burger)
Michel Burger discusses extending cloud computing to the edge by deploying microservices and other cloud-native technologies closer to endpoints and data sources. He outlines how software and computing models have evolved over time from mainframes to client-server architectures to modern cloud-native approaches. Burger also discusses principles for building cloud applications including designing for failure, scaling, and managing state.
Microsoft Sync Framework (part 1), ABTO Software Lecture, Garntsarik (ABTO Software)
The document discusses Microsoft Sync Framework, which is a comprehensive synchronization platform that enables collaboration and offline access for applications. It allows synchronization of any type of data stored in any format using any protocol across any network configuration. Key capabilities include support for offline scenarios, synchronization of changes between different endpoints like devices and servers, and handling conflicts that may arise during synchronization. The document provides examples of how to implement synchronization between a local database cache and remote data sources using Sync Framework along with Windows Communication Foundation (WCF) services.
Planning your Next-Gen Change Data Capture (CDC) Architecture in 2019 - Strea... (Impetus Technologies)
Traditional databases and batch ETL operations have not been able to serve the growing data volumes and the need for fast and continuous data processing.
How can modern enterprises provide their business users real-time access to the most up-to-date and complete data?
In our upcoming webinar, our experts will talk about how real-time CDC improves data availability and fast data processing through incremental updates in the big data lake, without modifying or slowing down source systems. Join this session to learn:
What is CDC and how it impacts business
The various methods for CDC in the enterprise data warehouse
The key factors to consider while building a next-gen CDC architecture:
Batch vs. real-time approaches
Moving from just capturing and storing, to capturing, enriching, transforming, and storing
Avoiding stopgap silos in favor of straight-through processing
Implementation of CDC through a live demo and use-case
You can view the webinar here - https://www.streamanalytix.com/webinar/planning-your-next-gen-change-data-capture-cdc-architecture-in-2019/
For more information visit - https://www.streamanalytix.com
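To contrast the approaches listed above, here is a deliberately naive polling-based CDC sketch; log-based CDC, which reads the database's transaction log instead, avoids this query load on the source system. The table and column names are invented.

```python
# Naive polling CDC: capture rows changed since the last watermark.
# (Log-based CDC reads the DB transaction log instead, avoiding this
# query load on the source.) Table/column names are invented.
import sqlite3
import time

def poll_changes(conn, last_seen):
    rows = conn.execute(
        "SELECT id, name, updated_at FROM customer WHERE updated_at > ?",
        (last_seen,),
    ).fetchall()
    new_watermark = max((r[2] for r in rows), default=last_seen)
    return rows, new_watermark

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER, name TEXT, updated_at REAL)")
conn.execute("INSERT INTO customer VALUES (1, 'Ada', ?)", (time.time(),))

watermark = 0.0
changes, watermark = poll_changes(conn, watermark)
print(changes)   # ship these incremental updates to the data lake
```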
Michael Coté - The Eternal Recurrence of DevOps (DevOpsDays DFW)
We are in the middle of rebuilding everything all over again, mostly thanks to Kubernetes resetting the infrastructure layer. This means it’s time to build a new set of management tools and appdev stacks. After all, Kubernetes is “a platform for building platforms,” so now we get to build the platforms. This is frustrating if you’ve lived through two decades (or more!) of platform building, but as this talk will cover, this never-ending platform recurrence is the natural state and something we should embrace. What lessons can we bring forward to make it better this time?
Nigel Thurlow - DevOps is Enterprise Wide.pdf (DevOpsDays DFW)
This document discusses various lean and agile concepts including value streams, kanban, scrum, complexity, bottlenecks, and genchi genbutsu. It provides definitions and comparisons of key terms. Several diagrams and formulas are also presented to illustrate lean principles such as queuing theory, Little's Law, and the impact of team size on communication lines. The overall document focuses on clarifying lean thinking and debunking misconceptions.
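Two of the formulas such decks typically present are compact enough to show here: Little's Law (work in progress = arrival rate × cycle time) and the quadratic growth of communication lines with team size; the sample numbers are invented.

```python
# Little's Law and team-communication lines; sample numbers are invented.

def littles_law_wip(arrival_rate, cycle_time):
    """L = lambda * W: average work-in-progress in a stable system."""
    return arrival_rate * cycle_time

def communication_lines(team_size):
    """Pairwise communication paths grow quadratically: n(n-1)/2."""
    return team_size * (team_size - 1) // 2

print(littles_law_wip(arrival_rate=5, cycle_time=4))    # 5/day * 4 days = 20 items
print(communication_lines(5), communication_lines(10))  # 10 vs 45 lines
```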
Dan Barker - Understanding Risk Can Fund Transformation (DevOpsDays DFW)
Risk quantification can be a valuable tool for selling transformation to executives. It’s also important to understand how your company looks at risk. Most CEOs will have a certain amount of risk they’re willing to take, called their risk tolerance. This will help you understand whether your project is worth pursuing: if your project won’t offset more risk than the risk tolerance, it’s unlikely to be funded unless it’s a new feature or product.
We’ll explore how to quantify risk in real-world situations I’ve faced when presenting transformation initiatives. This information will help you understand how a business looks at risk and the value of mitigating that risk. Want to start a new CI/CD initiative? Bake in the risk averted by putting SAST and DAST in your pipeline: this would have helped Equifax avoid its breach, and the risk of such a breach is very high and carries a very large cost.
We’ll also take a look at some additional resources to help you assess risk such as data on breaches, attacks, the FAIR tool, and other resources you can use once you leave the session.
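One common way to put numbers on risk, and the approach behind the FAIR model mentioned above, is annualized loss expectancy: expected events per year times expected loss per event. The breach figures below are invented for illustration.

```python
# Annualized Loss Expectancy (ALE) sketch; all figures are invented.

def ale(events_per_year, loss_per_event):
    return events_per_year * loss_per_event

# Risk of a breach without SAST/DAST in the pipeline...
before = ale(events_per_year=0.10, loss_per_event=50_000_000)   # $5.0M/yr
# ...versus with them (assume the controls cut likelihood in half).
after = ale(events_per_year=0.05, loss_per_event=50_000_000)    # $2.5M/yr

risk_averted = before - after
initiative_cost = 400_000
print(f"Risk averted: ${risk_averted:,.0f}/yr vs cost ${initiative_cost:,}")
```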
What will the audience get?
The audience will understand how businesses are handling and thinking about risk today. They’ll understand the part they play in that system. And they’ll be able to start assessing the risk mitigation value of their projects in order to get them funded.
Vijay Challa - SSO on Cloud - Gateway Approach (DevOpsDays DFW)
On cloud, traditional policy-agent-based provisioning of SSO becomes a huge bottleneck for scaling at will and diversity on demand. By moving policy enforcement into a centralized pass-through layer, we decoupled app teams to realize the maximum benefits offered by the cloud.
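As a very small sketch of the pass-through idea (not the speaker's actual implementation), a centralized gateway validates every request and forwards it to the backend, so applications never embed their own policy agents; the token check and backend URL are placeholders.

```python
# Pass-through SSO gateway sketch (illustrative only): policy is enforced
# centrally, so backend apps need no per-app policy agents.
import requests
from flask import Flask, abort, request

app = Flask(__name__)
BACKEND = "http://internal-app.local"      # placeholder backend

def token_is_valid(token):
    return token == "demo-token"           # stand-in for real SSO validation

@app.route("/<path:path>", methods=["GET"])
def gateway(path):
    token = request.headers.get("Authorization", "")
    if not token_is_valid(token):          # centralized policy enforcement
        abort(401)
    resp = requests.get(f"{BACKEND}/{path}", headers={"Authorization": token})
    return resp.content, resp.status_code  # transparent pass-through

if __name__ == "__main__":
    app.run(port=8080)
```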
Here is a high-level set of our accomplishments that I would like to share as part of the talk:
Seamless/transparent scaling and updating of the authentication/security layer, with zero touchpoints on application infrastructure.
Flexible policy enforcement point to support current platform, while migrating towards standards-based system.
Continuous Authentication by policy enforcement in various categories like IP monitoring, Device fingerprinting, session hijacking, real time URL blocking.
Continuous Enforcement and real time configuration of levels of assurance to online assets/ endpoints.
Uniform protection across the organization, with self service capabilities.
Preventing fraud, by intercepting and evaluating traffic towards backend systems/ app to app communication.
Aaron Mell - The Continuous Improvement Toolbox: Post-Mortems (DevOpsDays DFW)
Post-mortems are a valuable tool in any organization’s continuous improvement toolbox. During this short presentation, I’ll discuss the post-mortem process and how any organization can effectively leverage the post-mortem as a tool to help drive continuous improvement.
Steve Shangguan - The Unreasonable Effectiveness of Combining and Correlating... (DevOpsDays DFW)
The three pillars of observability, namely logs, metrics, and traces, have raised our industry’s awareness of the three categories of data sources that are indispensable for implementing true observability. The need for observability becomes even more obvious and pressing as an organization’s infrastructure and workloads evolve to be more hybrid, distributed, and, inevitably, complex. In reality, however, DevOps professionals still often face the challenges of diversified tool chains and disparate data silos. It is no secret that every single signal from applications or infrastructure is, in itself, limited in scope and usefulness. The true power only comes when logs, metrics, and traces are properly combined and correlated.
Everyone is talking about it, but only a handful really know the devilish details. In this talk, we will show various scenarios where the combination of logs, metrics, and traces truly makes a difference. We will also review some of the challenges and best practices of integrating the three pillars together using OSS tools (and their ecosystems) such as the Elastic Stack.
Farrah Campbell - Open Mind, Open Doors. Change your narrative and achieve wh... (DevOpsDays DFW)
Life is not easy, but it is up to us to build the resilience that enables us to move forward in a graceful way. By attending to difficult feelings without becoming subsumed by them, we can develop an awareness of our bigger-picture goals and accomplish tasks we never thought were possible.
We’ve been conditioned over the years to always question whether we can succeed. We tend to focus on all the things that could go wrong instead of the possibilities. The attitude of being ready to work even in the face of challenges and despite odds is what will make all the difference in your life. In this session, you will learn how changing your narrative and focusing on what is possible can have a dramatic effect on both your success at work and your happiness in life.
Bjorn Edwin - Start Your Own DevOps Dojo in 8 Simple Steps (DevOpsDays DFW)
Target, Verizon, Capital One, Walmart and other giant enterprises have been creating Dojos (immersive learning environments) to facilitate their DevOps adoption. Today, DevOps Dojos may be the best way to help your organization in its journey towards doing DevOps the right way. The DevOps Dojos we have created for our enterprise clients have enabled them to accelerate their software delivery. Based on these experiences, we would like to share how you can start a successful Dojo in 8 simple steps. These steps are industry, domain and technology agnostic.
If you are a leader, manager or engineer who is passionate about bringing the Dojo concept to your organization, this talk will give you the 8 steps you need to follow to design, build and grow your own DevOps Dojo.
The 8 steps are as follows:
Assess the current state of DevOps tools and pipelines.
Identify and build the first and most common delivery pipeline and create a plan for the remaining pipelines.
Determine types of masters, types of coaches and physical space needs.
Adjust teams, players and technical practices as needed.
Design schedules, timelines, curriculum topics, labs and activities.
Make Dojo delivery and execution fun so learning becomes both rewarding and inevitable.
Share ongoing support and updates.
Grow in-house expertise, community and events.
Bonus: Typical challenges you may face and how to avoid them
No matter what profession you’re in, your job can contain a variety of stress factors that are unknown to individuals looking in from the outside.
“TECH-LIVES MATTER: HANDS UP, DON’T REBOOT” is a lecture based on research recommending that corporations invest in IT-specific employee assistance programs. Working as a Developer, Specialist, Designer, Engineer, Expert, Manager, or Technician demands a high level of precision over extended periods, and any minute lapse on the job could be disastrous.
MY METHOD OF HUMOR AND AFFECTION TITLED THE “DR. DRE METHOD.” ATTENDEES WILL LEARN TO:
D. DOWNLOAD the cause of your Stress.
R. Use a mental ROUTER to direct the stress to a secure site in your mind.
D. Learn to DELETE future Stress elements.
R. REBOOT yourself and focus on positive aspects.
E. ENCRYPT your mind to secure the positive parts.
The pressure of working in the field of computer technology can be a dream for the observers and the nightmare for the workers.
Working in the world of technology can be a great experience, which DEVELOPERS, SPECIALISTS, DESIGNERS, ENGINEERS, EXPERTS, MANAGERS AND TECHNICIANS create and dive into each time they start working.
“TECH-LIVES MATTER, HANDS UP, DON’T REBOOT” offers solutions to support individuals afflicted by stress within the IT community: employee input, better task content, amplified job control, equal production values, career expansion, enhanced peer socialization, and better workplace ergonomics.
Key Takeaways:
Overall alertness regarding the onset of stress
Stress at one’s place of employment
Mentally Supporting Yourself
Recognize the best method in a tense setting
DevOps requires a drastic shift from a traditional long-running project mindset. Best intentions can easily be thwarted simply because the new ways seem to go against the old way of thinking that teams and their managers hold so dearly.
The Chicken or the Egg? With so many changes in culture, mindset, and process involved in DevOps transformation, it can be difficult to see how outcomes will change without some hands-on context. But without changing the culture, mindset and process, there cannot be hands-on context.
In this talk, I’m going to temporarily set aside metrics and industry reports. I’ll use basic math to address some of the common arguments and anti-patterns I’ve seen.
Detangling complex systems with compassion & production excellence (DevOpsDays DFW)
Taming the complex distributed systems we're responsible for requires changing not just the tools and technical approaches we use; it also requires changing who is involved in production, how they collaborate, and how we measure success.
In this talk, you'll learn about several practices core to production excellence: giving everyone a stake in production, collaborating to ensure observability, measuring with Service Level Objectives, and prioritizing improvements using risk analysis.
KEYNOTE TALK
What does it take to innovate quickly? I’ll address how blockers to innovation – including culture, skills, antiquated processes, and board level concerns – can stand in the way of business agility. We’ll map out a pathway to digital transformation including new metrics for success, integrating real-world best practices from enterprises, and the most effective organizational patterns, as we integrate the business with development and operations.
DevOps Theory vs. Practice: A Song of Ice and Tire-Fire (DevOpsDays DFW)
In many DevOps talks, you see a speaker from a renowned tech company stand up and describe a perfect utopia of an environment. You look at the perfect environment and dedicated hordes of senior engineers they describe, and you despair of ever getting to that point. Your environment looks nothing like that.
Surprise: their environment doesn’t really look like that either! In this talk, a speaker from an unnamed tech unicorn describes their amazing environment, and then what they just said gets translated from “thought leader” into plain English for you by an official translator. Stop feeling sad: everything is secretly terrible!
Hidden Costs of Chasing the Mythical 'Five Nines' (DevOpsDays DFW)
“Five Nines” refers to the five nines in 99.999% availability, often treated as synonymous with “highly available.” Does every highly available service require five nines? Not by a long shot. Yet the general state of the practice is to chase this typically unrealistic goal almost blindly, often leading to unnecessarily high costs in both operational and development resources. Even less aggressive availability goals are often over-specified relative to true business drivers.
This talk will cover:
* The history of “five nines”
* Common reasons why many organizations often inadvertently over-specify availability requirements
* The costs of such over-specification
* How service agility is negatively affected
* Examples of highly available systems with reasonable availability requirements
* Techniques on how to avoid over-specification based on Site Reliability Engineering principles
* Ways to spend your Error Budget (once you have one) most effectively
Applying these techniques should result in a more cost-effective service that keeps end users and management happy, and fewer alerts to the on-call DevOps engineer.
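The arithmetic behind the nines makes the cost argument vivid: each additional nine cuts the allowed downtime (the error budget) by a factor of ten. A quick calculation:

```python
# Allowed downtime per year (the "error budget") for each nines level.
MINUTES_PER_YEAR = 365 * 24 * 60

for nines, availability in [(2, 0.99), (3, 0.999), (4, 0.9999), (5, 0.99999)]:
    budget = (1 - availability) * MINUTES_PER_YEAR
    print(f"{nines} nines ({availability:.3%}): {budget:,.1f} min/year")

# Five nines leaves roughly 5.3 minutes per year: barely enough time to page
# a human, which is why chasing it blindly gets expensive.
```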
Stepping Up Your DevOps With Step Functions (DevOpsDays DFW)
Everyone loves serverless, but what if I told you there’s a tool that can take it to the next level? AWS Step Functions combine logic and Lambdas into complex state machines that push automation even further. In this talk we will cover the basics of Step Functions, a Lambda design pattern that emphasizes code re-use, and some examples of what you can do with Step Functions. The code will be provided ahead of the conference, and attendees are encouraged to follow along during the technical portion of the presentation.
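As a hedged sketch of the pieces involved, here is a minimal Amazon States Language definition chaining two Lambda tasks, created and started with boto3; every ARN and name below is a placeholder.

```python
# Minimal Step Functions sketch with boto3; every ARN/name is a placeholder.
import json
import boto3

# Amazon States Language: two Lambda tasks chained into a state machine.
definition = {
    "StartAt": "Validate",
    "States": {
        "Validate": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate",
            "Next": "Process",
        },
        "Process": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:process",
            "End": True,
        },
    },
}

sfn = boto3.client("stepfunctions")
machine = sfn.create_state_machine(
    name="demo-pipeline",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/sfn-role",  # placeholder role
)
sfn.start_execution(
    stateMachineArn=machine["stateMachineArn"],
    input=json.dumps({"order_id": 42}),
)
```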
DevSecOps Through Blunt Force Trauma, I'm the Trauma (DevOpsDays DFW)
The document discusses a presentation about achieving DevSecOps transformation through automation and simplification. It begins with some biographical information about the presenter and their experience leading IT transformations at various companies. The presentation then covers topics like managing people through change, leveraging technology to drive outcomes like cost savings and competitive advantage, and examples of transformations that delivered millions in savings through approaches like cloud migration, data center consolidation, and reducing headcount while increasing productivity.
In the past few years, large enterprises started adopting the microservices architecture; however, most ended up with a “distributed monolith” and could not realize the true benefits of microservices. In this talk, I will discuss the challenges of how we built microservices in the past and what the future strategy should look like: leveraging the “platform” (container orchestration plus service mesh) to provide the outer-architecture capabilities in order to build business-focused polyglot services that are decoupled from the outer architecture.
Over the past year or so I’ve been focusing a good amount on trying to help clients move toward a Continuous Delivery/Continuous Deployment mindset. To that end, I’ve spent a lot of time building pipelines in various CI tools. Something that’s become a common tool in my kit is using Docker containers to build the various projects in our pipelines. This has a lot of very interesting benefits, but also comes with a few challenges.
In my talk I’ll tell you why you should care about building your software with Docker. I will also outline the pluses and minuses of this approach and relay as many of the tips and tricks I’ve found in my use of it over the past year or so as I can. After my talk is over, you should have a clear understanding of this approach and whether or not you should trial it.
This talk describes the transformation process GE Digital has been going through in maintaining cloud infrastructure: from manual point-and-click, to source-controlled, to peer-reviewed infrastructure as code.
Describe some of the challenges faced during this transformation:
Culture
Git good with Git.
Keep it simple stupid. (Start Small).
We started by standardizing our IAM resources across our 100+ cloud provider accounts, leveraging Terraform modules that get deployed to all 100+ accounts.
We will describe the use of our CI Jenkins + Packer.io pipeline for creating custom cloud AMI images, the benefits of custom AMI images plus Terraform provisioning, and why this is a game changer at Predix.IO for scaling out instances on any cloud provider.
Config [III]: would you like carnitas or steak? Come hungry and learn about Concurrency [VIII] and how you can really eat two burritos at the same time. What are we talking about here?
People have heard about the 12 Factor App. How does this map to a burrito? Are all these steps necessary when you go to your favorite burrito place? Come hungry!
Burritos are amazing!
HCL Notes and Domino License Cost Reduction in the World of DLAU (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered:
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
Unlock the Future of Search with MongoDB Atlas: Vector Search Unleashed (Malak Abu Hammad)
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Introduction of Cybersecurity with OSS at Code Europe 2024 (Hiroshi SHIBATA)
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Digital Marketing Trends in 2024 | Guide for Staying Ahead (Wask)
https://www.wask.co/ebooks/digital-marketing-trends-in-2024
Feeling lost in the digital marketing whirlwind of 2024? Technology is changing, consumer habits are evolving, and staying ahead of the curve feels like a never-ending pursuit. This e-book is your compass. Dive into actionable insights to handle the complexities of modern marketing. From hyper-personalization to the power of user-generated content, learn how to build long-term relationships with your audience and unlock the secrets to success in the ever-shifting digital landscape.
Fueling AI with Great Data with Airbyte Webinar (Zilliz)
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
GraphRAG for Life Science to increase LLM accuracy (Tomaz Bratanic)
GraphRAG for the life science domain, where you retrieve information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers.
Ocean Lotus Threat Actors project by John Sitima 2024 (SitimaJohn)
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
Taking AI to the Next Level in Manufacturing (ssuserfac0301)
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
5. Ideas and approaches to help build your organization's AI strategy.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack (shyamraj55)
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Generating privacy-protected synthetic data using Secludy and Milvus (Zilliz)
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Best 20 SEO Techniques To Improve Website Visibility In SERP (Pixlogix Infotech)
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
What do a Lego brick and the XZ backdoor have in common? (Speck&Tech)
ABSTRACT: At first glance, a Lego brick and the XZ backdoor might seem to have in common only that both are building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the XZ backdoor case share much more than that.
Join the presentation to dive into a story of interoperability, standards, and open formats, and then discuss the important role that contributors play in a sustainable open source community.
BIO: An advocate for free software and for standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several LibreOffice-related events, migrations, and training efforts. She previously worked on LibreOffice migrations and training courses for several public administrations and private companies. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when she is not pursuing her passion for computers and for Geeko, she cultivates her curiosity about astronomy (the source of her nickname, deneb_alpha).
20. Everything is coupled to everything
• Ambient temperature
• Ambient atmosphere
• Acoustic vibrations
• Electromagnetic field
• Gravity
• Higgs field
22. Kinds of Coupling
Operational: Consumer cannot run without the provider.
Development: Changes in producer and consumer must be coordinated.
Semantic: Change together because of shared concepts.
Functional: Change together because of shared responsibility.
Incidental: Change together for no good reason.
23-27. Analyzing Coupling I
E-mail system
[Software System]
Microsoft Exchange
Sends a notification that a report is ready to
Email Component
[Component: C#]
Sends emails
Operational: Strong. SMTP is synchronous, connection-oriented, conversational.
Development: Weak. SMTP is a well-defined standard with a history of interoperability.
Semantic: Very strong. SMTP defines entities, attributes, and allowed values.
Functional: Very weak. Sender and MTA both use network connections.
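To make the operational coupling concrete, here is a minimal C# sketch (hypothetical host and addresses) of what a send from the Email Component looks like. The SMTP exchange is synchronous and connection-oriented, so the caller blocks until the provider answers, and fails outright if it doesn't.

// Minimal sketch; host and addresses are assumptions, not from the talk.
using System.Net.Mail;

class EmailComponent
{
    static void Main()
    {
        // Synchronous, connection-oriented: Send() holds an open connection
        // and blocks for the whole SMTP conversation. If Exchange is down,
        // it throws and the caller cannot proceed.
        using var client = new SmtpClient("exchange.example.com", 25);
        var message = new MailMessage(
            "reports@example.com",
            "trader@example.com",
            "Report ready",
            "The risk report is ready.");
        client.Send(message);
    }
}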
29-33. Analyzing Coupling II-A
Reference Data System
[Software System]
Manages reference data for all counterparties the bank interacts with
Gets counterparty data from
Reference Data Importer
[Component: C#]
Imports data from the reference data system
SQL connection to RDBMS
Operational: Very strong. Dependent on availability of the server. Must be aware of topology and failover strategy.
Development: Very strong. Dependent on schema, server version, protocol version.
Semantic: Very strong. Tables, columns, and joins must be known to both parties.
Functional: Weak. Functions of data maintenance don’t overlap with retrieval into objects.
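A minimal C# sketch of the direct-connection case (connection string, table, and column names are all assumed for illustration). Notice how much provider detail lives in the consumer: server location, schema, and join logic are all baked into the importer's code, which is the very strong development and semantic coupling called out above.

// Minimal sketch; Microsoft.Data.SqlClient is an assumed NuGet dependency,
// and the connection string and schema are invented for illustration.
using System;
using Microsoft.Data.SqlClient;

class ReferenceDataImporter
{
    static void Main()
    {
        using var conn = new SqlConnection(
            "Server=refdata;Database=RefData;Integrated Security=true");
        conn.Open(); // operational coupling: fails if the server is unavailable

        // Semantic coupling: tables, columns, and joins known to both parties.
        using var cmd = new SqlCommand(
            "SELECT c.Id, c.LegalName FROM Counterparty c " +
            "JOIN Address a ON a.CounterpartyId = c.Id",
            conn);
        using var reader = cmd.ExecuteReader();
        while (reader.Read())
            Console.WriteLine($"{reader.GetInt32(0)}: {reader.GetString(1)}");
    }
}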
35-39. Analyzing Coupling II-B
Reference Data System
[Software System]
Manages reference data for all counterparties the bank interacts with
Gets counterparty data from
Reference Data Importer
[Component: C#]
Imports data from the reference data system
HTTPS request to REST API
Operational: Strong, but less than before. Dependent on availability of the server.
Development: Strong, but less. Insulated from data format changes. An open encoding can further reduce coupling.
Semantic: Still very strong. REST resources and C# entities must align. Concepts will still map 1:1.
Functional: Still weak. Different languages, techniques, and design patterns apply.
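A minimal C# sketch of the REST variant (endpoint URL and resource shape are assumptions). The importer is now insulated from the provider's storage schema and server version, but its Counterparty record must still mirror the REST resource field-for-field, which is exactly the lingering semantic coupling noted above.

// Minimal sketch; the endpoint and the Counterparty shape are invented.
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

// Semantic coupling: this record must align 1:1 with the REST resource.
record Counterparty(int Id, string LegalName);

class ReferenceDataImporter
{
    static async Task Main()
    {
        using var http = new HttpClient();
        // No tables, no joins, no server version: development coupling is weaker.
        var items = await http.GetFromJsonAsync<Counterparty[]>(
            "https://refdata.example.com/api/counterparties");
        foreach (var c in items!)
            Console.WriteLine($"{c.Id}: {c.LegalName}");
    }
}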
41-45. Analyzing Coupling II-C
Reference Data System
[Software System]
Manages reference data for all counterparties the bank interacts with
Broadcasts
Message Broker
[Software System]
Pub/sub hub, bub
Broadcasts
Reference Data Receiver
[Component: C#]
Accepts and caches data from the reference data system
Operational: Very weak. Receiver can run with stale data when either the broker or the upstream system is broken.
Development: Weak. Insulated from schema changes.
Semantic: Strong, but not as strong. The broker allows for remapping concepts.
Functional: Moderate. All components must share the same messaging tech.
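A minimal C# sketch of the receiver side (the broker callback is stubbed in-memory, since the deck doesn't name a messaging product). Because reads are served from a local cache, the receiver keeps answering, with possibly stale data, while the broker or the upstream system is down.

// Minimal sketch with an in-memory stand-in for the broker delivery.
using System;
using System.Collections.Concurrent;

class ReferenceDataReceiver
{
    private readonly ConcurrentDictionary<int, string> _cache = new();

    // Called by the messaging layer whenever a broadcast arrives.
    public void OnMessage(int id, string legalName) => _cache[id] = legalName;

    // Local reads never touch the network: very weak operational coupling.
    public string? Lookup(int id) =>
        _cache.TryGetValue(id, out var name) ? name : null;

    static void Main()
    {
        var receiver = new ReferenceDataReceiver();
        receiver.OnMessage(42, "Acme Bank");    // broadcast received earlier
        // ... broker goes down here ...
        Console.WriteLine(receiver.Lookup(42)); // still answers: Acme Bank
    }
}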
49-52. Chains of Coupling – Semantic Polymer
[Diagram: a retail integration chain linking Retek, IBM PIM, and Informatica to the Dotcom Catalog, Dotcom Promotions, the Online Store, Faceted Search, 3rd Party Data Vendors (Reviews, Ratings, Imagery), and Everybody Else. The "SKU" concept appears at every node; successive builds show a "Price Point" change rippling through every system in the chain.]
54. Each “interface” was really a chain
1. Extract tables to files
2. Push files across network
3. Load tables into “LZ”
4. Process into “cold” DB
5. Swap hot & cold DBs (hours later)
1. Send message to queue
2. Take message from queue, unwrap, inspect, and dispatch to 1-of-N other queues
3. Drain queue to file
4. Batch job wakes up 2 times a day, does FTP to remote end
5. Another batch job pulls a reconciliation file, drops file into file system
6. Parser reads the file, shreds it into messages, puts them on another queue
55. Operational Characteristics in Long Chains
• Latency strictly worse than the slowest link in the chain.
• Availability strictly worse than the least available link.
• Throughput strictly worse than the throughput of the worst bottleneck.
• Security strictly worse than the security of the weakest link.
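A quick arithmetic sketch of why chains compose so badly (illustrative numbers, not from the talk): availability multiplies link by link, so even a chain of good links is mediocre overall.

// Minimal sketch: five independent links, each 99.9% available.
using System;

class ChainAvailability
{
    static void Main()
    {
        double[] linkAvailability = { 0.999, 0.999, 0.999, 0.999, 0.999 };
        double chain = 1.0;
        foreach (var a in linkAvailability)
            chain *= a; // each link can only make the chain worse
        Console.WriteLine($"Chain availability: {chain:P2}"); // ~99.50%
    }
}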
57. Information Hiding
“On the Criteria To Be Used in Decomposing Systems into Modules”, David Parnas, CACM, 1972
58. A KWIC Example
• Input
Software comprises an endless supply of structures.
• Output
an endless supply of structures. Software comprises
comprises an endless supply of structures. Software
endless supply of structures. Software comprises an
of structures. Software comprises an endless supply
Software comprises an endless supply of structures.
structures. Software comprises an endless supply of
supply of structures. Software comprises an endless
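A minimal C# sketch of the KWIC transformation itself: generate every circular shift of the input line, then list the shifts alphabetically. Running it reproduces the output shown above.

// Minimal sketch of KWIC: circular shifts, then a case-insensitive sort.
using System;
using System.Linq;

class Kwic
{
    static void Main()
    {
        var words = "Software comprises an endless supply of structures.".Split(' ');
        // One circular shift per word position in the line.
        var shifts = Enumerable.Range(0, words.Length)
            .Select(i => string.Join(" ", words.Skip(i).Concat(words.Take(i))));
        foreach (var line in shifts.OrderBy(s => s, StringComparer.OrdinalIgnoreCase))
            Console.WriteLine(line);
    }
}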
59. Modularization 1
1. Input
Read EBCDIC characters, store them in core. 6-bit characters packed 4 per word. EOL is a special character.
2. Circular shifter
Prepare an index: pairs of (address of first character of the shift, original index of the line in the input array).
3. Alphabetizer
Take the arrays from 1 & 2, produce a new array of pairs like in 2, but in alphabetical order.
4. Output
Using the arrays from 1 & 3, format the output.
5. Control
Allocate memory, call the operations in 1-4, report errors.
60. Consider the Effect of Changes
For each change case listed here, how many modules have to be changed?
• Read and print ASCII instead of EBCDIC.
• Stop using packed characters, store one character per word.
• Write the index for circular shifts to offline storage instead of core, to support larger input documents.
61. Modularization 2
1. Line Storage
Offers a functional interface: SETCH, GETCH, GETW, DELW, DELLINE.
2. Input
Reads EBCDIC chars, calls line storage to put them into lines.
3. Circular Shifter
Offers the same interface as line storage. Makes it appear to have all shifts of all lines.
4. Alphabetizer
Offers a sort function INIT, and an access function ITH that gets a line.
5. Output
Repeatedly calls ITH on the alphabetizer, printing each line.
6. Control
Similar to the first approach; call each module in sequence.
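To show what "offers a functional interface" buys you, here is a minimal C# sketch of the Line Storage module. The method shapes and the internal representation are assumptions (the slide only names SETCH, GETCH, and friends): callers never see how characters are stored, so the packing decision can change without touching them.

// Minimal sketch of information hiding: the representation is private.
using System;
using System.Collections.Generic;

class LineStorage
{
    // Hidden decision: one char per slot today; could become packed words
    // tomorrow without any caller changing.
    private readonly List<List<char>> _lines = new();

    public void SetCh(int line, int pos, char c) // SETCH analogue
    {
        while (_lines.Count <= line) _lines.Add(new List<char>());
        var chars = _lines[line];
        while (chars.Count <= pos) chars.Add(' ');
        chars[pos] = c;
    }

    public char GetCh(int line, int pos) => _lines[line][pos]; // GETCH analogue
    public int LineLength(int line) => _lines[line].Count;
}

class Program
{
    static void Main()
    {
        var storage = new LineStorage();
        var text = "KWIC";
        for (int i = 0; i < text.Length; i++) storage.SetCh(0, i, text[i]);
        for (int i = 0; i < storage.LineLength(0); i++)
            Console.Write(storage.GetCh(0, i)); // prints: KWIC
    }
}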
62. Consider the Effect of Changes
For each change case listed here, how many modules have to be changed?
• Read and print ASCII instead of EBCDIC.
• Stop using packed characters, store one character per word.
• Write the index for circular shifts to offline storage instead of core, to support larger input documents.
63. Why is the second one better?
• It hides decisions inside modules.
• Functional interfaces provide an abstract representation of the underlying data.
• Information hiding
69. Orthogonality in Software
• Separation of concerns
• High cohesion within a module or component
• Low coupling between modules or components
• Little overlap in functionality between modules
• Information hiding / decision hiding
70-75. Batch Process
File System
[Container: Network File Share]
Stores risk reports
Report Distributor
[Component: C#]
Publishes the report for the web application
Publishes risk reports to
Reference Data System
[Software System]
Manages reference data for all counterparties the bank interacts with
Central Monitoring Service
[Software System]
The bank-wide monitoring and alerting dashboard
Trade Data System
[Software System]
The system of record for trades of type X
E-mail system
[Software System]
Microsoft Exchange
Sends a notification that a report is ready to
Gets trade data from
Sends critical failure alerts to [SNMP]
Gets counterparty data from
Email Component
[Component: C#]
Sends emails
Trade Data Importer
[Component: C#]
Imports data from the trade data system
Reference Data Importer
[Component: C#]
Imports data from the reference data system
Report Checker
[Component: C#]
Checks that the report has been generated by 9 a.m. Singapore time
Alerter
[Component: C# with SNMP library]
Sends SNMP alerts
Sends alerts using
Orchestrator
[Component: C#]
Orchestrates the risk calculation process
Sends email using
Imports data using
Imports data using
Risk Calculator
[Component: C#]
Does math
Report Generator
[Component: C# and Microsoft.Office.Interop.Excel]
Generates an Excel-compatible risk report
Generates the risk report using
Calculates risk using
Scheduler
[Component: Quartz.net]
Starts the risk calculation process at 5 p.m. New York time
Starts
Starts
Publishes the risk report using
Observations across the builds:
Risk calculator produces a data structure that the report generator must consume.
Data importers probably have similar implementation needs.
Report checker doesn’t appear to connect with the file system that holds the reports. The FS location is latent coupling that will be a nasty surprise later.
Orchestrator might end up needing to do lots of data transformation to bridge interfaces.
Example from c4model.com
77. Problem: Risk calculator produces a data structure that the report generator must consume.
Solutions depend on architectural style. Here we’re in a Windows service, so we might use a shared library to define the interface.
Orchestrator
[Component: C#]
Orchestrates the risk calculation process
Risk Calculator
[Component: C#]
Does math
Report Generator
[Component: C# and Microsoft.Office.Interop.Excel]
Generates an Excel-compatible risk report
Generates the risk report using
Calculates risk using
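A minimal C# sketch of the shared-library idea (all type and member names are invented for illustration): a small contract assembly defines the data structure and the two interfaces, and both components reference it instead of each other's internals.

// Minimal sketch of a shared contract assembly; all names are assumptions.
using System;
using System.Collections.Generic;

// The data structure the calculator produces and the generator consumes.
public record RiskResult(string CounterpartyId, decimal Exposure);

public interface IRiskCalculator
{
    IReadOnlyList<RiskResult> Calculate();
}

public interface IReportGenerator
{
    void Generate(IReadOnlyList<RiskResult> results);
}

// Toy implementations to show the seam; the real ones live in their own projects.
class Calculator : IRiskCalculator
{
    public IReadOnlyList<RiskResult> Calculate() =>
        new[] { new RiskResult("ACME", 1_000_000m) };
}

class ConsoleReport : IReportGenerator
{
    public void Generate(IReadOnlyList<RiskResult> results)
    {
        foreach (var r in results)
            Console.WriteLine($"{r.CounterpartyId}: {r.Exposure}");
    }
}

class Program
{
    static void Main()
    {
        IRiskCalculator calc = new Calculator();
        IReportGenerator report = new ConsoleReport();
        report.Generate(calc.Calculate()); // both sides depend only on the contract
    }
}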
78. Problem: Redundant implementation details
This would be a good place to use a shared library for common implementation.
Trade Data Importer
[Component: C#]
Imports data from the trade data system
Reference Data Importer
[Component: C#]
Imports data from the reference data system
Orchestrator
[Component: C#]
Orchestrates the risk calculation process
Imports data using
Imports data using
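A minimal C# sketch of one way a shared library could factor out the common implementation. The retry-and-log plumbing here is an invented example of a "similar implementation need"; the slide doesn't say what the importers actually share.

// Minimal sketch: shared plumbing in a base class, per-source logic in subclasses.
using System;

abstract class ImporterBase
{
    public void Import()
    {
        for (int attempt = 1; attempt <= 3; attempt++) // shared retry policy
        {
            try { DoImport(); return; }
            catch (Exception e)
            {
                Console.Error.WriteLine($"Attempt {attempt} failed: {e.Message}");
            }
        }
        throw new InvalidOperationException("Import failed after 3 attempts");
    }

    protected abstract void DoImport(); // the part that differs per source
}

class TradeDataImporter : ImporterBase
{
    protected override void DoImport() { /* fetch from the trade data system */ }
}

class ReferenceDataImporter : ImporterBase
{
    protected override void DoImport() { /* fetch from the reference data system */ }
}

class Program
{
    static void Main()
    {
        new TradeDataImporter().Import();
        new ReferenceDataImporter().Import();
    }
}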
79. Problem: Latent coupling about filesystem layout.
Solution: A module to hide the decision about filesystem layout from both the Report Distributor and the Report Checker.
File System
[Container: Network File Share]
Stores risk reports
Report Distributor
[Component: C#]
Publishes the report for the web application
Publishes risk reports to
Report Checker
[Component: C#]
Checks that the report has been generated by 9 a.m. Singapore time
Alerter
[Component: C# with SNMP library]
Sends SNMP alerts
Sends alerts using
Scheduler
[Component: Quartz.net]
Starts
Publishes the risk report using
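A minimal C# sketch of such a module (the path scheme is invented for illustration): only ReportLocation knows the layout of the share, and both the Report Distributor and the Report Checker ask it instead of hard-coding paths.

// Minimal sketch: one module owns the filesystem-layout decision.
using System;
using System.IO;

static class ReportLocation
{
    private const string Root = @"\\fileshare\risk-reports"; // hidden decision

    public static string PathFor(DateOnly reportDate) =>
        Path.Combine(Root, reportDate.ToString("yyyy-MM-dd"), "risk-report.xlsx");
}

class ReportChecker
{
    static void Main()
    {
        var today = DateOnly.FromDateTime(DateTime.UtcNow);
        // The checker no longer knows the layout; it only knows to ask.
        Console.WriteLine(File.Exists(ReportLocation.PathFor(today))
            ? "Report generated"
            : "Report missing: alert!");
    }
}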
80. Find solutions by rotating your perspective
• When looking at components, think about modules
• When looking at modules, think about components
• When looking at data, think about code
• When looking at code, think about data
81. Use all your tools
1. Module structure – layout of your code and libraries
2. Component structure – interactions between runtime components
3. Abstraction – Emphasize similar interfaces & data formats
83. Kinds of Coupling
• Operational
• Development
• Semantic
• Functional
• Incidental
84. Summary
• Hide decisions
• Choose degrees of freedom that matter
• Avoid semantic polymers
• Use static and dynamic structures
• Find more instances of fewer, more general interfaces
• Prefer explicit to tacit
You’ve been working in your system for a year. Things are going well.
You think it’s made of small pieces arranged beautifully. Until the day everything changes. You get the one new requirement that just doesn’t fit. You move one part, but that forces you to change something on the other side of the world. You change that and find that a different part breaks for no apparent reason.
You begin to realize that your system is not a lovely arrangement of isolated pieces. It is more like a nest of opposing forces that wants to fly apart or collapse at any moment, held back from chaos only by a network of interwoven dependencies. You pull one part and it tugs on a dozen others. You push a piece and it pushes back.
You fight. The system fights back. Eventually, you are consumed by the chaos. A rewrite starts to sound better and better. After all, it should only take two weeks. You have fallen prey to coupling. This is the “choose your own adventure” page that says “You have died, go to page 1.”
As we dissolve large systems into pieces, coupling becomes ever more important. We move from static equilibrium to dynamic equilibrium.
Coupling is a dirty word in our industry. “Coupled” basically means “bad.” Coupling keeps us from making changes. It keeps our business from evolving.
Coupling is not a dirty word in other fields. A coupling is a connector. It enables structure.
Coupling as safety. Keeping two parts together.
Coupling allows matter to exist.
Knee bends in one plane. Doesn’t bend in other planes. (Not without damage anyway!)
Note: Sometimes called the "Death Star" moon. Two-tone coloring observed by Giovanni Cassini. In 2009, it was discovered that the likely cause is Phoebe.
This is Phoebe. 213 km mean diameter.
Note: Phoebe has a ring, Saturn's largest and most diffuse, 40x bigger than the entirety of the other rings. It is ablated from Phoebe by micrometeorites. Solar pressure causes ring material to spiral toward Saturn, where it is swept up by Iapetus (which is tidally locked to Saturn).
Tell story of screaming at disks.
What could we do about this to reduce the degree of coupling.
Assume this is a direct database connection
But not as difficult to manage in every case.
SKU = “stock keeping unit”.
Easy to see how this works: Module 1 creates an array which modules 2, 3, and 4 use. Module 2 creates an array which 3 uses. Module 3 creates an array that 4 uses.
Operational coupling is high in all cases, since everything ran in a single process. Development coupling is weakened thanks to the functional interfaces. Semantic coupling is reduced because of the interfaces instead of direct access to arrays. Functional coupling is also reduced by not having array manipulation and character packing/unpacking in every module.
Explain what this view is.
Architecture by tesseract