What is Apache Kafka and What is an Event Streaming Platform? (confluent)
Speaker: Gabriel Schenker, Lead Curriculum Developer, Confluent
Streaming platforms have emerged as a popular, new trend, but what exactly is a streaming platform? Part messaging system, part Hadoop made fast, part fast ETL and scalable data integration. With Apache Kafka® at the core, event streaming platforms offer an entirely new perspective on managing the flow of data. This talk will explain what an event streaming platform such as Apache Kafka is and some of the use cases and design patterns around its use—including several examples of where it is solving real business problems. New developments in this area such as KSQL will also be discussed.
This document provides an overview and agenda for a presentation on Apache Kafka. The presentation will cover Kafka concepts and architecture, how it compares to traditional messaging systems, using Kafka with Cloudera, and a demo of installing and configuring Kafka on a Cloudera cluster. It will also discuss Kafka's role in ingestion pipelines and data integration use cases.
High Concurrency Architecture and Laravel Performance Tuning (Albert Chen)
This document summarizes techniques for improving performance and concurrency in Laravel applications. It discusses caching routes and configuration files, using caching beyond just the database, implementing asynchronous event handling with message queues, separating database reads and writes, enabling OPcache and preloading in PHP 7.4, and analyzing use cases like a news site, ticketing system, and chat service. The document provides benchmarks showing performance improvements from these techniques.
Building a CI/CD Pipeline For Container Deployment to Amazon ECS (Amazon Web Services)
This document discusses building a continuous integration and continuous deployment (CI/CD) pipeline for containerized applications using Amazon ECS. It covers using Docker images, ECS, and ECR for CI/CD, deployment strategies like blue/green deployments with ECS and ALB, building Docker images with CodeBuild, and orchestrating pipelines with CodePipeline. The presentation then demonstrates these concepts.
Air traffic controller - Streams Processing meetup (Ed Yakabosky)
ATC is a system built using Samza to manage communications with LinkedIn members. It aims to improve the member experience by applying common functionality across different communication types and use cases. It handles thousands of communications per second while maintaining a good understanding of members' states in near-real-time. ATC focuses on sending the right message to the right member through the right channel at the right time using techniques like filtering, aggregation, channel selection and delivery optimization. It was built to be highly scalable using streaming technologies like Kafka, RocksDB and host affinity to replicate state across datacenters for redundancy. Personalization is achieved through relevance scores computed offline and stored in RocksDB.
Replay video link: https://youtu.be/hknvd5JucKU
As data stores continue to grow, managing data at scale is becoming increasingly difficult, while the importance of that data keeps rising. To store and make use of large amounts of data, and to choose the right storage medium, this session walks through the various storage services AWS offers and looks at the strengths and example use cases of each.
Integrating Apache Kafka and Elastic Using the Connect Framework (confluent)
As a streaming platform, Apache Kafka provides low-latency, high-throughput, fault-tolerant publish and subscribe pipelines and excels at processing streams of real-time events. Kafka provides reliable, millisecond delivery for connecting downstream systems with real-time data.
In this talk, we will show how easy it is to leverage Kafka and the Elasticsearch connector to keep your indices populated with the latest data from the rest of your enterprise, as it changes.
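For illustration, a minimal sketch of what such a sink connector configuration can look like in standalone properties form is shown below. The connector class and option names follow Confluent's Elasticsearch sink connector as commonly documented; the connector name, topic, and URLs are placeholders, and the exact options depend on the connector version.

```properties
# Hypothetical standalone worker config for an Elasticsearch sink (illustrative only)
name=elasticsearch-sink-demo
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max=1
# Source topic(s) to index; by default each topic is written to an index of the same name
topics=orders
connection.url=http://localhost:9200
# Ignore record keys and schemas for simple JSON documents
key.ignore=true
schema.ignore=true
```

With a worker running a configuration like this, every new record on the source topic is indexed into Elasticsearch shortly after it is produced.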
Consistency and Completeness: Rethinking Distributed Stream Processing in Apache Kafka (Guozhang Wang)
We present Apache Kafka’s core design for stream processing, which relies on its persistent log architecture as the storage and inter-processor communication layers to achieve correctness guarantees. Kafka Streams, a scalable stream processing client library in Apache Kafka, defines the processing logic as read-process-write cycles in which all processing state updates and result outputs are captured as log appends. Idempotent and transactional write protocols are used to guarantee exactly-once semantics. Furthermore, revision-based speculative processing is employed to emit results as soon as possible while handling out-of-order data. We also demonstrate how Kafka Streams behaves in practice with large-scale deployments and performance insights exhibiting its flexible and low-overhead trade-offs.
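As a rough illustration of the read-process-write cycle described in the abstract, the following is a minimal Kafka Streams word-count sketch in Java with the exactly-once processing guarantee enabled. The application id, topic names, and broker address are assumptions made for the example, not details from the talk.

```java
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

public class WordCountSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-sketch");      // hypothetical id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");     // hypothetical broker
        // State updates and result outputs are committed transactionally (exactly-once).
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> lines = builder.stream("text-input");            // hypothetical topic
        lines.flatMapValues(v -> Arrays.asList(v.toLowerCase().split("\\W+")))
             .groupBy((key, word) -> word)
             .count()
             .toStream()
             .to("word-counts", Produced.with(Serdes.String(), Serdes.Long()));  // hypothetical topic

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```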
Kafka Tutorial - basics of the Kafka streaming platform (Jean-Paul Azar)
An introduction to the Kafka streaming platform. It covers Kafka architecture with some small examples from the command line, then expands on this with a multi-server example. Finally, it adds simple Java client examples for a Kafka producer and a Kafka consumer. The Java examples have been expanded to correlate with the discussion of Kafka's design, and the Kafka design section has been expanded with added references.
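In the spirit of the tutorial's Java client examples, here is a minimal, hedged producer sketch; the topic name, broker address, and keys are placeholders rather than values from the tutorial itself.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");                 // hypothetical broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        props.put("acks", "all");                                         // wait for full replication

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 10; i++) {
                // Records with the same key land in the same partition and keep their order.
                producer.send(new ProducerRecord<>("demo-topic", "key-" + i, "value-" + i));
            }
            producer.flush();
        }
    }
}
```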
Near Real Time Indexing: Presented by Umesh Prasad & Thejus V M, Flipkart (Lucidworks)
This document summarizes a presentation given by Umesh Prasad and Thejus V M of Flipkart on building a real-time search index for e-commerce. It discusses the need for real-time indexing to support high update rates and microservices architecture at Flipkart. It evaluates using SolrCloud but finds that update-by-delete-and-add hinders performance. The presentation then describes Flipkart's approach using a near real-time Lucene store with optimized data structures and filtering to enable low-latency search across updated documents.
Service Mesh with Apache Kafka, Kubernetes, Envoy, Istio and Linkerd (Kai Wähner)
Microservice architectures are not a free lunch! Microservices need to be decoupled, flexible, operationally transparent, data aware, and elastic. Most material from recent years only discusses point-to-point architectures with inflexible and non-scalable technologies like REST / HTTP. This video takes a look at cutting-edge technologies like Apache Kafka, Kubernetes, Envoy, Linkerd and Istio to implement a cloud-native service mesh, solve these challenges, and bring microservices to the next level of scale, speed and efficiency.
Key takeaways:
- Apache Kafka decouples services, including event streams and request-response
- Kubernetes provides a cloud-native infrastructure for the Kafka ecosystem
- Service Mesh helps with security and observability at ecosystem / organization scale
- Envoy and Istio sit in the layer above Kafka and are orthogonal to the goals Kafka addresses
Blog post: http://www.kai-waehner.de/blog/2019/09/24/cloud-native-apache-kafka-kubernetes-envoy-istio-linkerd-service-mesh
Video recording of this slide deck: https://youtu.be/Us_C4RFOUrA
This session looks at the background and significance of Kafka, which today connects everything from big data analytics and processing to every development platform, and, drawing on hands-on production experience, walks through appropriate Kafka use cases. It also introduces how Kafka works internally. Finally, it shares operational experience from running Kafka in production, covering configuration, operations, and monitoring. (by 고승범, Kakao)
* This session is suitable for attendees at beginner, novice, and intermediate levels alike.
Apache Kafka is a distributed streaming platform used for building real-time data pipelines and streaming apps. It provides a unified, scalable, and durable platform for handling real-time data feeds. Kafka works by accepting streams of records from one or more producers and organizing them into topics. It allows both storing and forwarding of these streams to consumers. Producers write data to topics which are replicated across clusters for fault tolerance. Consumers can then read the data from the topics in the order it was produced. Major companies like LinkedIn, Yahoo, Twitter, and Netflix use Kafka for applications like metrics, logging, stream processing and more.
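To make the producer/consumer flow concrete, the following is a minimal Java consumer sketch that reads records back in partition order; the group id, topic name, and broker address are illustrative assumptions, not details from the summary above.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SimpleConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");                  // hypothetical broker
        props.put("group.id", "demo-group");                               // hypothetical consumer group
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        props.put("auto.offset.reset", "earliest");                        // start from the beginning

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic"));
            while (true) {
                // Within a partition, records arrive in the order they were produced.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
}
```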
Slides used in the following Udemy training: https://www.udemy.com/course/terraform-on-azure/?referralCode=B11C0C9542992626FC4E
Terraform allows you to write your cloud setup in code. If you have used Azure before, you'll know that setting up your infrastructure using the Azure Portal (the Web UI) is far from ideal. Terraform lets you use Infrastructure as Code rather than executing the steps manually by clicking through the Azure Portal.
This course will teach you how to write HCL, the HashiCorp Configuration Language, to bring up your infrastructure on Azure. Terraform is cloud agnostic, so the Terraform skills learned in this course are easily transferable to other cloud providers. After teaching you the Terraform basics, the course continues with simple architectural patterns, like VMs, to get you used to how Terraform works. Once you have a good feel for how you can use Terraform, we dive a bit deeper into the Azure services you can spin up, like Autoscaling, Load Balancing, MSSQL & MySQL, CosmosDB, Storage Accounts, Azure AD, and others. Also covered is advanced Terraform usage, like using remote state, for/foreach loops, and conditionals/functions.
Our mission is to ensure you can start using terraform with Azure in your organisation to automate the provisioning of cloud infrastructure. After taking this course, you'll have a solid basis of Terraform and Azure!
ElasticSearch is an open source, distributed, RESTful search and analytics engine. It allows storage and search of documents in near real-time. Documents are indexed and stored across multiple nodes in a cluster. The documents can be queried using a RESTful API or client libraries. ElasticSearch is built on top of Lucene and provides scalability, reliability and availability.
Kafka is a distributed messaging system that allows for publishing and subscribing to streams of records, known as topics. Producers write data to topics and consumers read from topics. The data is partitioned and replicated across clusters of machines called brokers for reliability and scalability. A common data format like Avro can be used to serialize the data.
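As one hedged illustration of using a common data format like Avro, the sketch below sends Avro records with Confluent's Avro serializer and Schema Registry (it requires the io.confluent:kafka-avro-serializer dependency). The schema, topic name, and registry URL are assumptions made for the example.

```java
import java.util.Properties;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AvroProducerSketch {
    // Hypothetical record schema for the example
    private static final String SCHEMA_JSON =
        "{\"type\":\"record\",\"name\":\"Payment\",\"fields\":["
      + "{\"name\":\"id\",\"type\":\"string\"},{\"name\":\"amount\",\"type\":\"double\"}]}";

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");                            // hypothetical broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "http://localhost:8081");                   // hypothetical registry

        Schema schema = new Schema.Parser().parse(SCHEMA_JSON);
        GenericRecord payment = new GenericData.Record(schema);
        payment.put("id", "p-1001");
        payment.put("amount", 42.0);

        try (Producer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            // The serializer registers/looks up the schema and writes a compact binary payload.
            producer.send(new ProducerRecord<>("payments", payment.get("id").toString(), payment));
            producer.flush();
        }
    }
}
```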
Can and should Apache Kafka replace a database? How long can and should I store data in Kafka? How can I query and process data in Kafka? These are common questions that come up more and more. This session explains the idea behind databases and different features like storage, queries, transactions, and processing to evaluate when Kafka is a good fit and when it is not.
The discussion includes different Kafka-native add-ons like Tiered Storage for long-term, cost-efficient storage and ksqlDB as an event streaming database. The relationship and trade-offs between Kafka and other databases are explored to show how they complement each other, instead of thinking in terms of a replacement. This includes different options for pull- and push-based bi-directional integration.
Key takeaways:
- Kafka can store data forever in a durable and highly available manner
- Kafka has different options to query historical data
- Kafka-native add-ons like ksqlDB or Tiered Storage make Kafka more powerful than ever before to store and process data
- Kafka does not provide database-style ACID transactions, but it does provide exactly-once semantics
- Kafka is not a replacement for existing databases like MySQL, MongoDB or Elasticsearch
- Kafka and other databases complement each other; the right solution has to be selected for a problem
- Different options are available for bi-directional pull and push-based integration between Kafka and databases to complement each other
Video Recording:
https://youtu.be/7KEkWbwefqQ
Blog post:
https://www.kai-waehner.de/blog/2020/03/12/can-apache-kafka-replace-database-acid-storage-transactions-sql-nosql-data-lake/
Deploy and Serve Model from Azure Databricks onto Azure Machine Learning (Databricks)
The document discusses deploying a model trained in Azure Databricks onto Azure Machine Learning. It covers model training in Databricks, packaging the model and storing it in Azure Blob Storage, registering the model with Azure ML, deploying it to an Azure Kubernetes Service cluster, and serving it as a web service. Demo sections show training a model for semantic type detection in Databricks and deploying it using Azure ML. The goal is to make model deployment and consumption seamless across Azure services.
Event-Driven Stream Processing and Model Deployment with Apache Kafka, Kafka ... (Kai Wähner)
Talk from Kafka Summit San Francisco 2019 (https://kafka-summit.org/sessions/event-driven-model-serving-stream-processing-vs-rpc-kafka-tensorflow/). Video recording will be available for free on the Summit website.
Event-based stream processing is a modern paradigm to continuously process incoming data feeds, e.g. for IoT sensor analytics, payment and fraud detection, or logistics. Machine Learning / Deep Learning models can be leveraged in different ways to make predictions and improve business processes. Either analytic models are deployed natively in the application, or they are hosted in a remote model server. In the latter case you combine stream processing with an RPC / Request-Response paradigm instead of doing direct inference within the application. This talk discusses the pros and cons of both approaches and shows examples of stream processing vs. RPC model serving using Kubernetes, Apache Kafka, Kafka Streams, gRPC and TensorFlow Serving. The trade-offs of using a public cloud service like AWS or GCP for model deployment are also discussed and compared to local hosting for offline predictions directly “at the edge”.
Key takeaways
• Machine Learning / Deep Learning models can be used in different ways to do predictions. Scalability and loose coupling are important success factors
• Stream processing vs. RPC / Request-Response for model serving has many trade-offs – learn about alternatives and best practices for your different scenarios
• Understand the alternatives and trade-offs of model deployment in modern infrastructures like Kubernetes or Cloud Services like AWS or GCP
• See live demos with Java, gRPC, Apache Kafka, KSQL and TensorFlow Serving to understand the trade-offs
Walking Through Spring Cloud Data Flow (VMware Tanzu)
This document provides an overview and safe harbor statement for the "Walking Through Spring Cloud Data Flow" presentation at SpringOne 2020. It notes that any information provided is intended for informational purposes only and is subject to change. The presentation covers topics like Spring Cloud Stream for event-driven applications, Spring Cloud Task for batch applications, and application development, deployment, and monitoring using Spring Cloud Data Flow. It also provides details about the presenters and includes sample demo data.
Performance Comparison of HBase and Cassandra (YashIyengar)
The document compares the performance of HBase and Cassandra databases using YCSB (Yahoo! Cloud Serving Benchmark). It summarizes the key characteristics of each database, including that HBase is master-based while Cassandra is masterless. The document then describes testing each database with YCSB Workloads A, B, and C at record counts of 100,000; 250,000; and 500,000 to compare their performance under different conditions.
In the workshop with GCP, Home Depot & Cloud Foundry (Christopher Grant)
Christopher Grant and Eric Johnson talk about Home Depot's experience in piloting Spring apps running in Pivotal Cloud Foundry on top of Google Cloud Platform. They discuss Home Depot's journey using this cutting edge technology stack, including some...
In the Workshop with Google Cloud Platform, HomeDepot.com & Cloud Foundry (VMware Tanzu)
SpringOne Platform 2016
Speakers: Christopher Grant, Sr. Architect, Home Depot; Eric Johnson, Technical Program Manager, Google.
Come listen to the Home Depot's experience in piloting Spring apps running in Pivotal Cloud Foundry on top of Google Cloud Platform. This session will discuss Home Depot's journey using this cutting edge technology stack, including some of the challenges along the way. And in true DIY fashion, many of the demos provided will be available for attendees to try themselves.
The document discusses the motivation for establishing the Certified IT Architect (CITA) certification. It notes that many large IT projects fail due to a lack of proper architecture. Additionally, the role of IT architect is often overlooked with no defined career path or resources. There is also a lack of education programs focused on IT architecture. The certification aims to address these issues by defining the IT architect role and skills, developing an IT architecture body of knowledge, and providing training and certification programs to establish IT architecture as a recognized profession. This is intended to help increase IT project success rates and deliver more business value from IT.
This document provides an overview of Source Code Management with Subversion. It discusses Subversion's history and features, including its support for directories, files, atomic commits, branching and tagging. It also covers enterprise considerations, the Subversion architecture with servers and clients, development processes using branches and merges, and tools for reporting. The presentation concludes with resources for support, training and learning more about Subversion.
This document discusses using Action Message Format (AMF) for integrating Flex applications via remoting and messaging. It provides an overview of AMF capabilities including remoting services that allow invoking Java methods and messaging services for real-time data push. Benefits of AMF are explained such as faster binary transfer and direct object parsing. Server-side AMF options and terminology are covered along with how to configure AMF in Flex applications using code, MXML, or configuration files. Simple examples of remoting and messaging are also presented.
Building a Production Grade PostgreSQL Cloud Foundry Service | anynines (anynines GmbH)
This document discusses building a production-grade PostgreSQL service on Cloud Foundry. Key points include:
- Dedicated PostgreSQL instances per service are recommended over shared instances to avoid single points of failure.
- On-demand provisioning of instances is essential for scalability and ease of deployment. Bosh is well-suited for automating infrastructure management.
- Any necessary PostgreSQL replication and clustering must be automated to support scalability and high availability of the service.
- The architecture involves a service broker implementing the Cloud Foundry API, with PostgreSQL-specific logic encapsulated separately for configuration, credentials, and catalog data. Deployments are managed by a Bosh deployer.
This document discusses scenarios for evaluating business and architectural quality attributes in software systems. For business qualities like time-to-market and cost/benefits, it provides example scenarios that describe the stimulus, artifacts, environment, system response, and how to measure the response. For architectural qualities like conceptual integrity, correctness/completeness, and buildability, it notes that these attributes are more difficult to capture in scenarios since they depend on multiple phases of development and involve complex relationships between components, teams, and available tools.
Chip Childers discusses the future of Cloud Foundry, which will become the industry standard across many sectors like finance, automotive, IoT, healthcare, government, and more. Cloud Foundry aims to support multi-cloud environments through portable and interoperable applications. It also underlies a growing ecosystem of applications and services. The Cloud Foundry community strives to be pragmatic, diverse, respectful and focused on sharing practical experiences.
Everyday life with Cloud Foundry in a big organization (Cloud Foundry Days To...) (CAFxX)
Rakuten has been running the open-source version of Cloud Foundry internally for over 5 years. In this talk we will discuss our experience on three important topics: how we integrated Cloud Foundry with our internal systems, what are the most common issues users face when migrating their apps to Cloud Foundry and how to work with your users to make them advocates for the platform.
This document provides an overview of lean software development methodology. It discusses lean principles like value, value stream, flow, pull and perfection. It demonstrates these principles through examples like stuffing envelopes. The document outlines how to define value, map the value stream, eliminate waste, create flow, implement pull and continuous improvement. It provides real-world examples and discusses how to apply lean thinking in practice. Resources for further learning about scaled agile framework and focus areas are also included.
This document describes a journey from the largest to the smallest things in the universe, starting 10 million light-years away from the Milky Way and ending at a subatomic scale of 100 attometers. As we get closer, we go from seeing galaxies to stars, planets, continents, cities, leaves, cells, and finally subatomic particles such as protons and quarks.
Technology is an integral part of business. Development or adoption of new technology can lead to competitive advantage. From a strategic perspective, technology should be seen as the enabler of the business model to create and deliver value to customers. It is the means rather than the end goal. With this in mind, companies should develop a formal Technology Strategy to support their business objectives. This presentation puts forward a simple framework called the Technology Strategy Canvas.
IASA is a non-profit professional association run by architects for all IT architects. It is centrally governed but locally run, technology and vendor agnostic. The use, disclosure, reproduction, modification, transfer, or transmittal of this work without the written permission of IASA is strictly prohibited.
The Future for Smart Technology Architects (Paul Preiss)
The future of software and even hardware is based in ever more complex abilities to adapt to highly dynamic change and input. The Internet of Things brings with it input from billions of sources locally and around the globe and for intelligent architects this represents an opportunity to create deep competitive advantage and customer loyalty.
The Japanese have used intelligent systems for years from cars to trains to vacuum cleaners and there will continue to be smarter and smarter systems. Architects around the world must include this thinking into their designs and strategies. Adaptive social networks, individually designed health care, just in time 3d printing are only some of the components of this coming era.
- How to include smart system thinking into designs
- How to get started with smart tools like inferencing, fuzzy, neural and other technologies
- When to think smart and when to avoid
- Possible outcomes to strive for today in preparing your architecture for the age of smart systems
The document summarizes HomeDepot.com's transition from a monolithic architecture to microservices. It discusses how in 2011, the monolith had grown large and difficult to manage, with long release cycles. The objective was to increase the rate of change and number of developers. Key steps included breaking the monolith into domain-driven services, adopting continuous integration, using APIs for communication, and implementing feature flags and traffic routing. This allowed independent development of 30 apps by 2015 compared to just 1 previously, with weekly instead of monthly deployments. The presentation provides guidance on patterns for microservice communication, deployment, data management, and avoiding common pitfalls.
Evolving toward Microservices - O’Reilly SACON Keynote (Christopher Grant)
O’Reilly Software Architecture Conference Keynote 4/2016
Evolving toward Microservices
How HomeDepot.com made the switch
Video published at: https://www.oreilly.com/ideas/evolving-toward-microservices-how-home-depot-made-the-transition
JRR & Associates Consulting Services February 2013 Update (jrulseh)
JRR & Associates is a global consulting firm that provides services to support small and medium sized manufacturers. Their services include operational improvement, leadership development, global market support, and manufacturing engineering. They have expertise in areas like heat transfer, mergers and acquisitions, safety, and environmental compliance. JRR works with clients to help them achieve excellence and reach their full potential.
Jrr & associates services template february 2013 update (jrulseh)
JRR & Associates is a global consulting firm that provides a wide range of services to support industrial manufacturing businesses, including operational improvement, leadership development, market strategy, engineering, safety, and human resources support. They work with small to medium sized manufacturers to help them achieve excellence and growth. JRR's services are provided through their core team as well as associate companies that specialize in areas like manufacturing engineering, safety, IT, and support in key regions including China, Brazil, and India.
The document outlines a consultation process with 4 phases - Discuss, Discover, Design, and Deploy. In the Discuss phase, the consultant meets with the client to understand their goals, requirements, systems, and budget. Next, in Discover, the consultant examines outputs, documents, compatibility, and threats. Then in Design, solutions are created to meet needs. Finally, in Deploy, the solution is tested, installed, trained on, and supported. The overall process aims to target client requirements.
Iasa Spain Chapter - A review of the CITA-P certification process (iasaglobal)
A brief review of the certification program offered by Iasa, followed by a focus on the CITA-P certification process (Certified Information Technology Architect - Professional, Level 4-4).
This document provides an overview of key topics in operations management. It defines operations management as designing, operating, and improving systems that transform inputs into outputs to deliver products and services. It distinguishes between service operations and goods production. Current challenges in operations management include global focus, just-in-time processes, supply chain partnering, and mass customization. Quality is discussed for both goods and services. Total quality management tools and supply chain management concepts are also introduced.
Jrr & associates services template march 2013v2 update (jrulseh)
JRR & Associates provides a variety of consulting services to support small and medium sized global manufacturers, including expertise in heat transfer/heat exchangers, M&A assistance, leadership development, operational improvement, and interim senior leadership. Services are offered directly through JRR or through associate companies specializing in areas like manufacturing engineering, safety, IT, and international support in China, Brazil, and India. The guiding principle of JRR is to help clients reach their full potential.
Jrr & associates services template march 2013v2 update (jrulseh)
JRR & Associates provides a variety of consulting services to support small and medium sized global manufacturers, including expertise in heat transfer/heat exchangers, leadership development, operational improvement, and support for international markets like China, Brazil, and India. They work to help clients achieve excellence and reach their full potential. Their direct services cover areas like manufacturing, business performance, safety, engineering, IT, and more.
The document discusses various techniques for analyzing an organization's internal environment, including value chain analysis, cost efficiency analysis, effectiveness analysis, and comparative analysis. It then describes the key aspects of each technique. Value chain analysis examines the internal activities an organization engages in to transform inputs into outputs. Cost efficiency analysis aims to minimize costs without compromising quality. Effectiveness analysis evaluates how well products match customer requirements. Comparative analysis compares an organization's capabilities to competitors and past performance.
Studies show that the corporate world is shaped by a wide range of internal and external factors that determine the success of a particular organization or company. With that in mind, if the organization or company is able to win via the six diamonds that help build robust internal control systems within the company, then it has met the standards that customers and shareholders want to see as the company grows.
To exceed those expectations, the company or organization needs to put in the extra effort required, which not only solves the problems that customers/clients and shareholders are facing but also helps drive change through automation. This is a huge leap the company or organization wants to take, using innovation and other exponential factors to exceed and excel.
CFW Supply Chain Event - Contract Law - Berry Smith LLP (Rae Davies)
This document summarizes a workshop on supply chain development. The workshop covered topics such as defining supply chains, developing and managing current supply chains, contractual issues, performance measurement, and customer value. It discussed sourcing requirements, partnership with Welsh Government to support economic growth, how Construction Futures Wales can help with consultancy, courses and expert services. The workshop addressed agenda items like supply chain requirements and definitions, collaborative working, and how to access Construction Futures Wales services.
The document discusses a company's product mix and strategies. It defines a product and how marketers classify products into different categories like durability and tangibility. It also discusses how companies can differentiate products through various attributes and design products. Additionally, it covers how companies manage their product mix through approaches like line stretching and product-bundling pricing. Finally, it discusses how packaging, labeling, and guarantees are important marketing tools.
The document provides an overview of The Rock, a professional services firm in Bangalore, India. It summarizes The Rock's services which include HR consulting, training, assessments, event management, and sports coaching. The Rock aims to offer customized solutions through strategic partnerships and a focus on developing talent. It prides itself on its integrated approach, hands-on support, and deep subject matter expertise across various industries.
The document discusses principles of total quality management including Stephen Covey's 7 habits of effective people, strategic planning principles, quality goals and objectives, quality planning steps, the roles and duties of a quality council, developing a quality policy, types of customers, customer/supplier chains, empowerment, continuous process improvement models like Juran's trilogy and the PDSA cycle, quality tools from Japan like the 3K method and Kaizen, motivation theories from Maslow and Herzberg, and the benefits of using teams.
This document discusses several key principles of total quality management including the 7 habits of highly effective people, strategic planning, quality goals and objectives, quality planning, quality councils, quality policies, customer types, customer/supplier chains, and continuous process improvement. It emphasizes that TQM requires a focus on both internal and external customers. The document also covers performance measurement, employee involvement, teams, decision making methods, and supplier partnerships in TQM.
Introduction to the what, when, why, where, and who of conducting website content inventories and audits, with tips on auditing for content quality, performance, and competitive advantage.
The document summarizes key topics from the 2013 STC Summit conference, including professional development, social media, single-sourcing, content strategy, and business metrics. Over 800 people attended the conference, which featured 137 sessions across various communication modes. Popular sessions focused on cultivating online presence, leveraging social media for feedback, conditional text in Flare, and building business cases through metrics that demonstrate revenue impact. The document provides resources for further exploring each topic.
The impact of a company's culture on Lean initiatives - SME (Kirk Hazen, P.E.)
Lincoln Industries is a large independent metal finishing company that has implemented Lean initiatives to drive growth and improvement. A key lesson is that building a culture of trust where the right people are selected and developed is crucial for Lean success. Lincoln focuses on talent management, engaging employees at all levels, collaborative teamwork and removing constraints. Through kaizen events, visual controls and accountability, Lincoln has realized over $3 million in savings since 2006 while growing sales and improving quality, delivery and productivity. Sustaining a Lean culture requires commitment, clear expectations, accountability and celebrating both successes and lessons learned.
Positioning the BA practice in agile: what is the role of the BA, how does she/he fit into the agile ceremonies, what basic role does she/he play, what common tools does the BA use, and can she/he replace any of the known Agile roles?
Five Best Supply Management Practices in use Today (fmbabs)
The document discusses five best supply management practices used today. It begins by introducing the author and their experience. It then covers strategic goal alignment, describing how best-in-class companies ensure supply management goals are aligned with overall company strategy. It also discusses procurement strategy development, explaining the use of a procurement decision tool to determine procurement strategies based on marketplace complexity and business impact. The five practices are then listed as strategic goal alignment, procurement strategy development, supplier integration, teaming strategies, and performance measurement.
Similar to IASA Architecture Pillars - Quality Attributes
Generating privacy-protected synthetic data using Secludy and Milvus (Zilliz)
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Full-RAG: A modern architecture for hyper-personalization (Zilliz)
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
How to Get CNIC Information System with Paksim Ga.pptx (danishmna97)
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
GraphRAG for Life Science to increase LLM accuracy (Tomaz Bratanic)
GraphRAG for the life science domain, where you retrieve information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers.
Things to Consider When Choosing a Website Developer for your Website | FODUU (FODUU)
Choosing the right website developer is crucial for your business. This article covers essential factors to consider, including experience, portfolio, technical skills, communication, pricing, reputation & reviews, cost and budget considerations and post-launch support. Make an informed decision to ensure your website meets your business goals.
Essentials of Automations: The Art of Triggers and Actions in FME (Safe Software)
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
HCL Notes and Domino License Cost Reduction in the World of DLAU (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and the licenses under the CCB and CCX model have been a hot topic for many in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new kind of licensing works and what benefits it brings you. Above all, you certainly want to stay within your budget and save costs wherever possible. We understand that, and we want to help!
We explain how to resolve common configuration problems that can lead to more users being counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some approaches that can lead to unnecessary expenses, for example when a person document is used instead of a mail-in for shared mailboxes. We show you such cases and their solutions. And of course we explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder bring you up to speed on this new world. It gives you the tools and the know-how to keep an overview. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future as well.
These topics are covered:
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Practical examples and best practices to put into action right away
What Do a Lego Brick and the XZ Backdoor Have in Common? (Speck&Tech)
ABSTRACT: At first glance, a Lego brick and the XZ backdoor might seem to have in common only the fact that they are both building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the XZ backdoor case have much more in common than that.
Join the presentation to dive into a story of interoperability, standards, and open formats, and then discuss the important role contributors play in a sustainable open source community.
BIO: An advocate of free software and of standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she has been involved in several events, migrations, and training activities related to LibreOffice. Previously she worked on LibreOffice migrations and training for several public administrations and private companies. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when she is not following her passion for computers and for Geeko, she cultivates her curiosity about astronomy (which is where her nickname, deneb_alpha, comes from).
Ocean lotus Threat actors project by John Sitima 2024 (1).pptx (SitimaJohn)
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
Programming Foundation Models with DSPy - Meetup Slides (Zilliz)
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
UiPath Test Automation using UiPath Test Suite series, part 6 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
5. What is quality
The standard of something as measured against other things of a similar kind; the degree of excellence of something.
- Google
6. Quality attributes across industries
• All industries measure the quality of their products
• Whole organizations are devoted to measuring quality
• Attributes are defined, measured and monitored
• Consider potential quality attributes for the following four industries
– Clothing Manufacturing
– Food Manufacturing
– Shipping & Delivery
– Furniture & Bedding
13. Quality Attributes in IT Architecture
A quality attribute is a non-functional characteristic of a component or a system. It represents a cross-cutting architectural concern for a system or system of systems.
- IASA
22. Packaging & Deployment Discussion
The expectations, process, and management of IT products following the completion of development and prior to “normal” day-to-day operating conditions
• Ensures project requirements are successfully delivered to prod
• Ensures delivery of Quality Attributes to prod
• Not just the features but also how well they are delivered
23. Monitoring & Management Discussion
Monitoring & Managing quality attributes in a standard and objective way
• Problem analysis
• Capacity planning
• Service level agreement (SLA)
• Issue response techniques
• Integrate metrics with processes (a minimal metrics sketch follows below)
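To show one way the "integrate metrics with processes" bullet can be made standard and objective, here is a minimal, hedged sketch using the Micrometer library for Java. The metric names, SLA threshold, and registry choice are illustrative assumptions, not part of the original slides.

```java
import java.time.Duration;
import java.util.concurrent.TimeUnit;
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

public class SlaMetricsSketch {
    public static void main(String[] args) {
        MeterRegistry registry = new SimpleMeterRegistry();              // swap for a Prometheus registry, etc.
        Timer latency = Timer.builder("checkout.request.latency")       // hypothetical metric name
                .register(registry);
        Counter slaBreaches = registry.counter("checkout.sla.breach");  // hypothetical metric name
        Duration slaThreshold = Duration.ofMillis(300);                 // hypothetical SLA target

        // Simulate handling one request and recording how long it took.
        Duration observed = Duration.ofMillis(120);
        latency.record(observed);
        if (observed.compareTo(slaThreshold) > 0) {
            slaBreaches.increment();                                    // feeds issue-response processes
        }

        System.out.printf("mean latency: %.1f ms, SLA breaches: %.0f%n",
                latency.mean(TimeUnit.MILLISECONDS), slaBreaches.count());
    }
}
```

In practice the timer and counter would be exported to whatever monitoring system backs capacity planning and SLA reporting, so the quality attributes can be tracked objectively over time.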
25. Balancing Quality Attributes
• Achieving high levels of quality attributes may be costly or prohibitive
• Improving one attribute may impact another
• It’s important to understand requirements upfront
30. Identifying and prioritizing requirements
• Arrange attributes by group
• Prioritize by importance and complexity
• Review trade-off points
• Balance requirements against trade-offs, cost and time
31. Review
• Quality Attributes are critical for the success of your architecture
• Iasa groups attributes into four groups
– Usage
– Development
– Operation
– Security
• Consideration of packaging/deployment and monitoring/management helps ensure attributes are effective while the system is in use
• Requiring excellence from all attributes may be costly or prohibitive. Review attribute impacts and trade-offs to balance the needs of the project