Strata 2016 - Architecting for Change: LinkedIn's new data ecosystem - Shirshanka Das
Shirshanka Das and Yael Garten describe how LinkedIn redesigned its data analytics ecosystem in the face of a significant product rewrite, covering the infrastructure changes that enable LinkedIn to roll out future product innovations with minimal downstream impact. Shirshanka and Yael explore the motivations and the building blocks for this reimagined data analytics ecosystem, the technical details of LinkedIn’s new client-side tracking infrastructure, its unified reporting platform, and its data virtualization layer on top of Hadoop and share lessons learned from data producers and consumers that are participating in this governance model. Along the way, they offer some anecdotal evidence during the rollout that validated some of their decisions and are also shaping the future roadmap of these efforts.
- Discuss the role of observability (logging, tracing, and metrics) in modern architecture.
- How to implement observability in Go using OpenCensus.
- The four golden signals to consider when designing metrics.
- How to apply observability to the development process.
IRJET - A Survey on Real Time Object Detection using Voice Activated Smart IoT - IRJET Journal
This document proposes a system that combines voice-activated virtual assistants with real-time object detection using machine learning. The system would allow users, with or without disabilities, to monitor their home or environment using a camera and voice commands to an assistant like Alexa. When the camera detects objects using machine learning techniques, the virtual assistant verbally notifies the user. The document discusses how Amazon Web Services could power the system using serverless computing on image data from the camera. The proposed system aims to provide an affordable home security solution using emerging technologies like the Internet of Things, machine learning, and cloud computing.
Best practices with Microsoft Graph: Making your applications more performant... - Microsoft Tech Community
Learn how to take advantage of APIs, platform capabilities, and intelligence from Microsoft Graph to make your app more performant, more resilient, and more reliable.
Vertex AI: Pipelines for your MLOps workflows - Márton Kodok
The document discusses Vertex AI pipelines for MLOps workflows. It begins with an introduction of the speaker and their background. It then discusses what MLOps is, defining three levels of automation maturity. Vertex AI is introduced as Google Cloud's managed ML platform. Pipelines are described as orchestrating the entire ML workflow through components. Custom components and conditionals allow flexibility. Pipelines improve reproducibility and sharing. Changes can trigger pipelines through services like Cloud Build, Eventarc, and Cloud Scheduler to continuously adapt models to new data.
Keynote: Harnessing the power of Elasticsearch for simplified search - Elasticsearch
Get an overview of the innovation Elastic is bringing to the Enterprise Search landscape, and learn how you can harness these capabilities across your technology landscape to make the power of search work for you.
What is going on - Application diagnostics on Azure - TechDays Finland - Maarten Balliauw
We all like building and deploying cloud applications. But what happens once that’s done? How do we know if our application behaves like we expect it to behave? Of course, logging! But how do we get that data off of our machines? How do we sift through a bunch of seemingly meaningless diagnostics? In this session, we’ll look at how we can keep track of our Azure application using structured logging, AppInsights and AppInsights analytics to make all that data more meaningful.
How to build scalable IoT cloud applications with microservices - Dave Chen
How to build scalable IoT cloud applications with microservices.
Build around business capabilities: Microservices are organized around business capabilities rather than the technical capabilities of a particular product, since the end goal is user experience and customer satisfaction. Teams leveraging microservices are normally not divided into UI teams, database teams, and so on; instead, cross-functional teams work toward the fulfilment of a single piece of functionality. This improves team agility, encourages service reuse, and reduces costs.
Running in its own process: Provides better isolation. If service A consumes too much memory or CPU, or crashes, it will not affect the other services.
Communicating over lightweight protocols: Microservices shift away from SOAP-based web services and ESB layers toward lighter-weight communication such as REST APIs and event-driven messaging over MQTT, RabbitMQ, or Apache Kafka.
Decentralize all the things: In a microservices architecture, the goal is to decentralize decision authority. For instance, let each team decide which programming language or database to use for its service. This allows individual teams to move fast and at scale.
Independently deployable: Every microservice should be independently deployable at any time. This greatly improves a team's agility and efficiency.
Automation with continuous delivery: In classical monolith-based environments, the release cycle is normally three months or even longer, so it's relatively easy to maintain the build jobs with some manual steps. In a microservices architecture, the number of deployment units increases: an organization usually starts with a few microservices and grows to hundreds or thousands over time, so an automated pipeline is much needed. This can be achieved by automating the build and deploy process with tools like Jenkins or Chef and by having a DevOps team.
Design principles:
1. Isolate All the Things
2. Embrace Asynchronous processing
3. Design for failures
Use HTTP timeouts
Automatically retry failed requests
Use circuit breaker pattern
Netflix Hystrix
Akka circuit breaker
Manage concurrent updates
4. Security improvement
5. Embrace DevOps Practices
6. Monitoring & logging
Monitoring tools
Splunk
Zipkin (distributed tracing system)
Logstash
Kibana
Elasticsearch
New Relic
The Internet of Things: Patterns for building real world applications - Iron.io
The rapid growth of connected devices is poised to revolutionize the Internet as we know it, covering everything from our bodies to the planet. The wide range of Internet of Things solutions will fuel the growing API economy, providing developers endless opportunities to innovate by providing insight into the world around us.
Despite the futuristic image of seamless connectivity all around us, there is a lot happening behind the scenes at a massive level of scale, putting intense pressure on the data centers providing the infrastructure, the telecom companies providing the network, and the surrounding software ecosystem providing complementary services.
Can we build an Azure IoT controlled device in less than 40 minutes that cost... - Codemotion Tel Aviv
This document summarizes how to build an Azure IoT controlled device in 40 minutes for less than $10. It describes using an ESP8266 development board connected to sensors and actuators to control a device. The cloud portion uses Azure IoT Hub to connect the device and send/receive commands. A Xamarin mobile app is also created to control the device. Overall it shows how inexpensive and quick it is to build an IoT prototype using affordable hardware and Azure cloud services.
The Future of Energy - Decentral energy distribution in a digital world - Eficode
Alexander Alten-Lorenz
Chief Architect Global Platform / Technology – E.ON SE
Alexander, an experienced and technically proficient Hadoop engineer and IT architect, specializes in application development, use-case discovery, and Hadoop cluster architecture design.
Intro to Machine Learning with H2O and Python - Denver - Sri Ambati
This document provides an overview of H2O.ai, an open-source in-memory predictive analytics platform. It was founded in 2011 and has 50+ core developers. H2O supports many machine learning algorithms like generalized linear models, random forest, gradient boosting, and deep learning. It can handle large datasets across various environments and programming interfaces like R, Python, and REST APIs. H2O provides scalable supervised and unsupervised learning algorithms for tasks like classification, regression, clustering, and dimensionality reduction.
Predictive models with Azure Machine Learning - Koray Kocabas
This document provides information about big data, Internet of Things, and machine learning. It discusses how big data is growing exponentially due to social media, mobile devices, and the Internet of Things. It also discusses challenges of storing and analyzing big data. It then summarizes tools for big data analysis like Hadoop and Azure HDInsight. It discusses machine learning approaches like supervised and unsupervised learning. Finally, it provides an overview of Azure Machine Learning services and how to get started with machine learning.
This document introduces Dato and its machine learning platform. Dato provides intuitive APIs and toolkits that allow developers to easily create intelligent applications for tasks like recommendation, sentiment analysis, churn prediction, and more. It offers scalable data structures, high performance algorithms, and the ability to quickly develop and deploy machine learning models and services. Customers across various industries have been able to build and operationalize intelligent solutions faster using Dato to solve problems in fraud detection, data matching, recommendations, and other domains.
Transforming data into actionable insights - Elasticsearch
Learn about the strategic feature areas of the Elastic Stack—Elasticsearch, a data engine like no other, and Kibana, the window into the Elastic Stack.
The session will cover:
Bringing data into the Elastic Stack
Storing data
Analyzing data
Acting on data
Automated machine learning (AutoML) can automate time-consuming tasks in the machine learning lifecycle like data preprocessing, model training, and tuning. This allows data scientists to focus on higher-level work. The presentation demonstrated AutoML on the Titanic dataset in Microsoft Azure Machine Learning service. It showed how AutoML can iterate through various algorithms and hyperparameters, measure model performance, enable model interpretability, facilitate model hosting and drift detection, and support code-based MLOps workflows. AutoML aims to make machine learning more accessible and productive.
How to transform data into analysis you can base decisions on - Elasticsearch
Discover the strategic feature areas of the Elastic Stack: Elasticsearch, a data engine like no other, and Kibana, the window into the Elastic Stack.
The session will cover:
Bringing data into the Elastic Stack
Storing data
Analyzing data
Acting on data
This document discusses cloud computing and different cloud service models such as SaaS, PaaS, and IaaS. It outlines the key characteristics of cloud computing including fast setup, access from anywhere, resilience, efficient growth, and replacing capital expenditures with operational expenditures. Examples are provided of how different types of applications and workload patterns are better suited for certain cloud service models. Concerns about security, control, and data in the cloud are addressed.
Operationalizing Machine Learning (Rajeev Dutt, CEO, Co-Founder, DimensionalM...) - Amazon Web Services Korea
NeoPulse is a platform that aims to make AI ubiquitous by automating the creation, deployment, and management of AI models. It reduces the barriers to developing AI by requiring less code, having lower costs, and shorter project timelines compared to other platforms. The platform includes components like NeoPulse AI Studio, which can automate the creation of AI models, and NeoPulse Query Runtime, which allows applications to access models via an API. It supports a variety of data types and machine learning techniques. The document describes the end-to-end workflow on NeoPulse and provides examples of companies using it successfully.
The document summarizes the 2015 Amazon Web Services re:Invent conference. It highlights the growth in attendance from 9,000 to 19,000. It outlines new computing and database services announced as well as analytics, security, and management tools. Examples are given of how Netflix and a content management system benefited from migrating to AWS. Lessons learned focused on not all features transferring directly and the learning curve involved. The document encourages hands-on learning with AWS free services and attending next year's conference.
Docker/DevOps Meetup: Metrics-Driven Continuous Performance and Scalability - Andreas Grabner
This is the presentation given for the Docker Meetup in Cordoba, Argentina. Recording should soon be up on http://www.meetup.com/Docker-Cordoba-ARG/events/226995018/
Key Takeaways: Pick your Metrics! Automate It! Fail Bad Builds Faster! Deliver Faster with Better Quality!
To the Docker audience, my main point was that just adding Docker doesn't give you free performance and scalability for your app. I walk through many examples of failing apps, show which metrics highlight each problem, and explain how to automatically detect bad builds by watching those metrics along your pipeline.
The document discusses concepts related to game day and chaos engineering on AWS. It provides examples of chaos experiments that can be conducted such as resource exhaustion, network unreliability, and datastore saturation. It also discusses tools for chaos engineering like Chaos Toolkit and Simian Army. The goal of game days and chaos engineering is to test systems resilience by simulating failures and disasters to gain insights on how to improve systems reliability.
A DIY Guide to Runbooks, Security Incident Reports, & Incident Response (SEC3... - Amazon Web Services
In this session, we discuss how you should be building your runbooks and security incident report system (SIRS) using your company's real-world configuration and processes. Our goal is to give you an easier way to start your runbooks and create a SIRS. Now you can be the hero for your company by building a strategy and finding out how secure you are. You also learn more about why you should be running a DevSecOps pipeline and how it will help your team find threats in your production environment. Finally, learn how things are different in each level of environment and where your developers should be working.
Is your Automation Infrastructure ‘Well Architected’? - Adam Goucher
The document discusses how to evaluate if an automation infrastructure is "well architected" based on the five pillars from Amazon's Well Architected Framework: operational excellence, security, reliability, performance efficiency, and cost optimization. It provides examples of best practices for each pillar, such as implementing infrastructure as code, automating security practices, testing recovery procedures, optimizing for efficiency, and reducing costs by adopting consumption-based models and using managed services. The overall message is that automation infrastructures should follow architectural best practices to ensure they are secure, reliable, efficient and cost-effective.
Deep Dive: AWS X-Ray - London Summit 2017 - Randall Hunt
Instrument production applications (both in AWS and on premises) with X-Ray to collect live telemetry and latency metrics on your applications. You can also use it to debug live!
An introduction to Workload Modelling for Cloud Applications - Ravi Yogesh
A high-level overview of workload modelling as part of the performance-testing life cycle, with a focus on the challenges faced in cloud environments relative to traditional IT infrastructure.
ConFoo 2017: Introduction to performance optimization of .NET web apps - Pierre-Luc Maheu
This document discusses performance optimization of .NET web apps. It defines performance as response time and resource utilization. It emphasizes measuring performance before and during optimization to identify the right things to optimize. A variety of tools are presented for different levels of performance monitoring and profiling, including Application Performance Monitoring, lightweight code profilers, and code profilers for highest detail. Best practices like leveraging load balancers and avoiding implicit transactions are also recommended.
This document provides an agenda and overview of a presentation on COM+ and Microsoft's vision for web and appliance computing. The presentation covers the evolution of COM+ from earlier technologies like OLE and MTS, new features in COM+ 1.0 like attribute-based programming and services for load balancing and queuing. It discusses Microsoft's vision of simplifying and making web applications and appliances more reliable using technologies like Windows DNA and how COM+ fits into n-tier web application models and appliance computing.
The document discusses Amazon Web Services (AWS) machine learning capabilities. It provides an overview of the AWS ML stack, which offers the broadest and most complete set of machine learning capabilities across vision, speech, text, search, chatbots, personalization, forecasting, fraud detection, and more. It also discusses several specific AWS machine learning services, including Amazon Rekognition (image and video analysis), Amazon Fraud Detector (online fraud detection), Amazon Kendra (enterprise search), Amazon CodeGuru (automated code reviews and profiling), and Contact Lens for Amazon Connect (contact center analytics).
The document discusses Amazon Web Services (AWS) machine learning capabilities. It provides an overview of the AWS ML stack, which offers the broadest and most complete set of machine learning capabilities across vision, speech, text, search, chatbots, personalization, forecasting, fraud detection, and more. It also discusses several specific AWS machine learning services, including Amazon Rekognition (image and video analysis), Amazon Fraud Detector (online fraud detection), Amazon Kendra (enterprise search), Amazon CodeGuru (automated code reviews and profiling), and Contact Lens for Amazon Connect (contact center analytics).
You are already the Duke of DevOps: you have a master in CI/CD, some feature teams including ops skills, your TTM rocks ! But you have some difficulties to scale it. You have some quality issues, Qos at risk. You are quick to adopt practices that: increase flexibility of development and velocity of deployment. An urgent question follows on the heels of these benefits: how much confidence we can have in the complex systems that we put into production? Let’s talk about the next hype of DevOps: SRE, error budget, continuous quality, observability, Chaos Engineering.
Developing a Continuous Automated Approach to Cloud SecurityAmazon Web Services
Many organizations struggle daily with the question - "Where do we stand with our AWS security practices?" With the recent release of the Center for Internet Security's CIS AWS Foundations Benchmark, organizations now have an industry-accepted set of security configuration best practices. These benchmarks, in combination with 3rd party security solutions that support them, can form the foundation for security operations at organizations of all sizes through continuous monitoring and auditing.
2016 - 10 questions you should answer before building a new microservicedevopsdaysaustin
Session Presentation by Brian Kelly
Microservices appear simple to build on the surface, but there's more to creating them than just launching some code running in a container. This talk outlines 10 important questions that should be answered about any new microservice before development begins on it - - and certainly before it gets deployed into production.
Scaling Databricks to Run Data and ML Workloads on Millions of VMsMatei Zaharia
Keynote at Scale By The Bay 2020.
Cloud service developers need to handle massive scale workloads from thousands of customers with no downtime or regressions. In this talk, I’ll present our experience building a very large-scale cloud service at Databricks, which provides a data and ML platform service used by many of the largest enterprises in the world. Databricks manages millions of cloud VMs that process exabytes of data per day for interactive, streaming and batch production applications. This means that our control plane has to handle a wide range of workload patterns and cloud issues such as outages. We will describe how we built our control plane for Databricks using Scala services and open source infrastructure such as Kubernetes, Envoy, and Prometheus, and various design patterns and engineering processes that we learned along the way. In addition, I’ll describe how we have adapted data analytics systems themselves to improve reliability and manageability in the cloud, such as creating an ACID storage system that is as reliable as the underlying cloud object store (Delta Lake) and adding autoscaling and auto-shutdown features for Apache Spark.
This document discusses software security and outlines a 4 step plan to improve it. It begins by recommending studying successful security initiatives at other companies. The second step is to inventory your own applications to understand what data and services they involve. The third step is to incorporate security practices into agile development processes and use tools to help scale this. The final step is to drive a security-focused culture change and have plans for incident response.
This document discusses software security and outlines a 4 step plan to improve it. It begins by recommending studying successful security initiatives at other companies. The second step is to inventory your own applications to understand what needs protecting. The third step is to incorporate security practices into agile development processes. The final step is to drive a security-focused culture change across the organization.
Chaos Engineering - The Art of Breaking Things in ProductionKeet Sugathadasa
This is an introduction to Chaos Engineering - the Art of Breaking things in Production. This is conducted by two Site Reliability Engineers which explains the concepts, history, principles along with a demonstration of Chaos Engineering
The technical talk is given in this video: https://youtu.be/GMwtQYFlojU
AWS Public Sector Symposium 2014 Canberra | Putting the "Crowd" to work in th...Amazon Web Services
"Cloud" computing provides significant advantages and enormous cost savings by allowing IT infrastructure to be provisioned as a ubiquitous, metered, unit priced and on demand service. However, the other major resourcing issue faced by CIO’s is the provision of skilled labour to develop, support and maintain a increasing wide range of IT applications.
This session will show attendees how the worldwide pool of freelance developers, the "Crowd", can be utilised as a ubiquitous, metered, unit priced and on demand resource pool to work in the "Cloud" to improve responsiveness to customer demands, reduce development timeframes and achieve significant cost savings.
Although the crowd can bring enormous benefits in terms of cost and agility, there are some technical and business barriers to adoption in large organisations. This presentation will discuss the barriers and, using some real examples, will explain how GoSource overcomes them.
(DVO205) Monitoring Evolution: Flying Blind to Flying by InstrumentAmazon Web Services
Today, AdRoll runs its infrastructure by instrumentation: constantly asking empirical questions, analyzing data for answers, and designing new features with instrumentation in mind to understand how functionality will work upon release. AdRoll’s development methodology did not start out this way, however. It took a cultural shift and many new tools and processes to adopt this approach. In this session, AdRoll and Datadog will discuss how to evolve your organization from a state of “flying blind” to a culture focused on monitoring and data-based decisions. Session sponsored by Datadog.
Building a data warehouse with Amazon Redshift … and a quick look at Amazon ...Julien SIMON
This document provides a summary of a presentation about building data warehouses with Amazon Redshift and using Amazon Machine Learning. The presentation discusses how Amazon Redshift can be used to build a petabyte-scale data warehouse with SQL and no system administration. Case studies are presented showing companies saving on total cost of ownership by migrating to Amazon Redshift. It also briefly introduces Amazon Machine Learning for building predictive models with managed services. Demo examples are shown of loading data into Redshift and using ML to train a regression model and create a real-time prediction API.
1. Use of Formal Methods at Amazon Web Services
(Chris Newcombe, Tim Rath, Fan Zhang, Bogdan Munteanu, Marc Brooker, Michael Deardeuff)
ASAD RIAZ (021)
MALIK FARHAN (028)
HASSNAIN SHAH (086)
2. What is AWS?
- Cloud services
- Database storage
- Networking
- Pay-as-you-go pricing
3. AWS Services
- S3
- Launch a virtual machine
- Build a web app
- Machine learning (Rekognition)
- Databases (DynamoDB)
- Analytics
- AR & VR
4. AWS Business Growth & Cost-efficient Infrastructure
- S3 grew to store 1 trillion objects. Less than a year later it had grown to 2 trillion objects, and was regularly handling 1.1 million requests per second.
- Fault tolerance
- Replication
- Consistency
- Concurrency
- Load balancing
5. Complexity
High complexity increases the probability of human error in design, code, and operations.
What have we tried?
- Deep design reviews
- Standard verification techniques
- Code reviews
- Fault-injection testing
Still, subtle bugs slipped through. The underlying reason? Complexity.
6. Solution?
- TLA+ (Temporal Logic of Actions), a formal specification language.
- TLA+ is based on simple discrete math, i.e. basic set theory and predicates, with which all engineers are familiar.
- A TLA+ specification describes the set of all possible legal behaviors of a system.
- TLA+ describes both the correctness properties (the 'what') and the design of the system (the 'how').
- Use conventional mathematical reasoning & the TLC model checker.
What is TLC?
A tool which takes a TLA+ specification and exhaustively checks the desired correctness properties.
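To make "the set of all possible legal behaviors" concrete, here is a toy TLA+ specification (a made-up illustration, not taken from the deck): the Init predicate describes the starting states, the Next action describes the allowed transitions, and Spec is every behavior obtainable from them.

```tla
---- MODULE Counter ----
EXTENDS Naturals

VARIABLE count

Init == count = 0                \* the 'how': the initial state
Next == count' = count + 1       \* the 'how': the allowed state transitions
Spec == Init /\ [][Next]_count   \* all legal behaviors of the system

TypeOK == count \in Nat          \* the 'what': a correctness property TLC can check
====
```

TLC would check TypeOK by exploring the states Spec allows and reporting any reachable state that violates it.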
7. TLA+ (Temporal Logic of Actions)
PlusCal (similar to a C-style programming language)
PlusCal is automatically translated to TLA+ with a single key press.
System | Components | Line count (excl. comments) | Benefit
S3 | Fault-tolerant low-level network algorithm | 804 PlusCal | Found 2 bugs. Found further bugs in proposed optimizations.
S3 | Background redistribution of data | 645 PlusCal | Found 1 bug, and found a bug in the first proposed fix.
DynamoDB | Replication & group-membership system | 939 TLA+ | Found 3 bugs, some requiring traces of 35 steps.
EBS | Volume management | 102 PlusCal | Found 3 bugs.
Internal distributed lock manager | Lock-free data structure | 223 PlusCal | Improved confidence. Failed to find a liveness bug as we did not check liveness.
Internal distributed lock manager | Fault-tolerant replication and reconfiguration algorithm | 318 TLA+ | Found 1 bug. Verified an aggressive optimization.
8. Starting steps of Formal Specifications
1. Safety properties: "what the system is allowed to do"
Example: at all times, all committed data is present and correct.
2. Liveness properties: "what the system must eventually do"
Example: whenever the system receives a request, it must eventually respond to that request.
3. Next step: "what must go right"?
4. Conforming to the design: with the goal of confirming that the design correctly handles all of the dynamic events in the environment.
9. What to confirm?
- Network errors & repairs
- Disk errors
- Crashes & restarts
- Data center failures and repairs
- Actions by human operators
5. Using the model checker to verify that the specification of the system in its environment implements the chosen correctness properties.
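The two kinds of properties from step 1 and step 2 can be written directly in TLA+. A hypothetical sketch (Committed, Store, Requests, Received, and Responded are assumed names, not from the deck):

```tla
\* Safety (checked as an invariant): all committed data is present in the store
SafetyInv == \A d \in Committed : d \in Store

\* Liveness (a temporal "leads-to" property): every received request
\* is eventually responded to
Liveness == \A r \in Requests : Received(r) ~> Responded(r)
```

TLC checks the safety property in every reachable state, while the liveness property constrains entire behaviors; the table above notes that skipping liveness checking can let liveness bugs through.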
10. TLA+ & PlusCal Example
The problem
You're writing software for a bank. You have Alice and Bob as clients, each with a certain amount of money in their accounts. Alice wants to send some money to Bob. How do you model this? Assume all you care about is their bank accounts.
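A minimal PlusCal sketch of this problem (in the style of introductory TLA+ tutorials; the variable names, initial balances, and transfer range are illustrative assumptions, not the deck's exact model):

```tla
---- MODULE transfer ----
EXTENDS Naturals, TLC

(* --algorithm transfer
variables alice_account = 10, bob_account = 10, money \in 1..20;

begin
Check:
  if alice_account >= money then
    \* Each label is a separate atomic step, so TLC explores the
    \* intermediate state where the money has left Alice but not reached Bob.
    A: alice_account := alice_account - money;
    B: bob_account := bob_account + money;
  end if;
end algorithm *)
====
```

After the one-key translation to TLA+, TLC explores every value of money and every interleaving. An invariant such as alice_account + bob_account = 20 would fail in the state between steps A and B, which is exactly the kind of subtle intermediate-state bug the model checker is meant to surface.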
15. Conclusion
At AWS, formal methods have been a big success. They have helped us prevent subtle, serious bugs from reaching production, bugs that we would not have found via any other technique.
Simply put, AWS would not be where it is today without formal methods.