Updated slides from the AWS Melbourne Cost Management and Optimisation Meetup (CloudWatch). Agenda:
2:00pm - Setup
2:10pm - Kick off, welcome, and intro
2:20pm - Jason Gorringe: How to get the most out of your AWS usage via pillars of Allocation, Avoidance, Accountability and Transparency
2:50pm - Discussion and Q&A
3:00pm - Peter Shi: Developing a Cost Management Dashboard that provides high speed to insight
4:30pm - Discussion, Q&A, and networking over drinks and snacks
5:30pm - Event Concludes
This document outlines an agenda for an AWS Cost Management workshop. The agenda includes introductions and sessions on AWS Cost Explorer, AWS Budgets, AWS Reservations, and AWS Cost & Usage Reports. It provides overviews of AWS cost management products and highlights recent features including budget redesigns, forecasting enhancements, and reserved instance management updates.
AWS Cost Management Workshop at the San Francisco Loft
AWS offers a number of products that allow you to access, organize, understand, optimize, and control your AWS costs and usage. This workshop will help you get started using AWS Cost Explorer to visualize your usage patterns and identify your underlying cost drivers. From there, you can take action on your insights by learning how to set custom cost and usage budgets and receive alerts via email or Amazon SNS topic using AWS Budgets.
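As a sketch of the AWS Budgets alerting the workshop covers, the payloads below define a monthly cost budget with a notification at 80% of actual spend, delivered to an SNS topic. The account ID, budget name, limit, and topic ARN are illustrative placeholders, not values from the workshop.

```python
# Sketch: a monthly cost budget with an SNS alert at 80% of actual spend.
# All identifiers below (account ID, budget name, topic ARN) are placeholders.
budget = {
    "BudgetName": "monthly-cost-budget",
    "BudgetLimit": {"Amount": "1000", "Unit": "USD"},
    "TimeUnit": "MONTHLY",
    "BudgetType": "COST",
}
notification = {
    "Notification": {
        "NotificationType": "ACTUAL",          # alert on actual, not forecasted, spend
        "ComparisonOperator": "GREATER_THAN",
        "Threshold": 80.0,                     # percent of the budget limit
        "ThresholdType": "PERCENTAGE",
    },
    "Subscribers": [
        {"SubscriptionType": "SNS",
         "Address": "arn:aws:sns:us-east-1:123456789012:budget-alerts"},
    ],
}

# With boto3 and valid credentials, the actual call would be:
# import boto3
# boto3.client("budgets").create_budget(
#     AccountId="123456789012", Budget=budget,
#     NotificationsWithSubscribers=[notification])
```

Email alerts work the same way with `"SubscriptionType": "EMAIL"` and an email address instead of a topic ARN.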
Deep dive session on Cloud Financial Management Fundamentals and Cost Optimization in AWS.
Presented by Spencer Marley, APAC BD at the November 2018 AWSUGBLR Meetup
This document outlines an AWS Cost Management workshop. The workshop includes introductions and three parts: 1) Using AWS Cost Explorer and AWS Budgets to identify cost drivers and set budgets. 2) Building a cost management solution with Amazon Athena and Amazon QuickSight. 3) Best practices for cost management including tagging, visibility tools, and automated cost controls.
Your spend on AWS should always be optimized. Whether you are seeing usage increase because your customers are relying more on your services, or you just want to dial-in your spending for the road ahead, there are things you can and should do to optimize your cloud costs. In this session we will highlight six quick cost optimizations every startup should consider depending on workloads and the patterns you are seeing. We will give you the tools and approaches that can have a significant impact on your startup right now and moving forward. Some of which you can implement right after this session.
Learn the best practices and considerations for cost optimising your AWS environment. We will cover best practices for right sizing, scheduling instances to reduce costs, and finally, how you can save up to 75% on On-Demand costs using Reserved Instances.
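The scheduling and Reserved Instance savings mentioned above are easy to put into numbers. The sketch below compares a 24/7 On-Demand instance against the same instance run only on a business-hours schedule, and against an illustrative 75% Reserved Instance discount; the $0.10/hr rate is a made-up example, not a real AWS price.

```python
# Back-of-envelope comparison: 24/7 On-Demand vs. business-hours scheduling
# vs. an illustrative ~75% Reserved Instance discount.
# The $0.10/hr On-Demand rate is a made-up example, not a real AWS price.
HOURS_PER_MONTH = 730  # conventional month length used in AWS pricing examples

def monthly_cost(hourly_rate, hours=HOURS_PER_MONTH):
    return hourly_rate * hours

on_demand = monthly_cost(0.10)                       # runs 24/7
scheduled = monthly_cost(0.10, hours=12 * 5 * 4.33)  # ~12h/day, weekdays only
reserved  = monthly_cost(0.10 * 0.25)                # 24/7 at a 75% discount

print(f"24/7 On-Demand: ${on_demand:.2f}")
print(f"Scheduled:      ${scheduled:.2f}")
print(f"Reserved:       ${reserved:.2f}")
```

Scheduling suits dev/test workloads that only need business hours; Reserved Instances suit steady 24/7 workloads.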
Reducing the Total Cost of IT Infrastructure with AWS Cloud Economics - Amazon Web Services
AWS offers you a pay-as-you-go approach for pricing for over 70 cloud services. With AWS you pay only for the individual services you need, for as long as you use them, and without requiring long-term contracts or complex licensing.
This webinar will cover a deep-dive into the above stated AWS Pricing Principles and how you can estimate your AWS bill by using the AWS Simple Monthly Calculator. Furthermore, it will highlight the best practices that are at your disposal to help you lower your AWS costs.
We will cover:
Understand how the TCO calculator matches your current infrastructure to the most cost-effective AWS offering.
Learn how volume-based discounts work and realize important savings as your usage increases.
Discover how, for services such as S3 and data transfer OUT from EC2, pricing is tiered: the more you use, the less you pay per GB. In addition, data transfer IN is always free of charge. As a result, as your AWS usage grows, you benefit from economies of scale that allow you to increase adoption while keeping costs under control.
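Tiered pricing of this kind is simple to model: each GB is charged at the rate of the tier it falls into, so the effective per-GB price drops as usage grows. The tier boundaries and rates below are illustrative only, not real S3 prices.

```python
# Sketch of tiered (volume) pricing: the more you use, the less you pay per GB.
# Tier boundaries and rates are illustrative, not real AWS prices.
TIERS = [  # (upper bound in GB, price per GB within the tier)
    (50_000, 0.023),
    (450_000, 0.022),
    (float("inf"), 0.021),
]

def tiered_cost(gb):
    """Charge each GB at the rate of the tier it falls into."""
    cost, prev_bound = 0.0, 0
    for bound, rate in TIERS:
        in_tier = min(gb, bound) - prev_bound
        if in_tier <= 0:
            break
        cost += in_tier * rate
        prev_bound = bound
    return cost

print(tiered_cost(10_000))   # entirely within the first tier
print(tiered_cost(100_000))  # spans the first two tiers
```

Dividing cost by usage shows the effect: 100,000 GB comes out cheaper per GB than 10,000 GB, because the second 50,000 GB is billed at the lower second-tier rate.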
The document discusses cloud cost optimization strategies. It identifies key pillars for cost optimization including right-sizing resources, leveraging different pricing models, using appropriate storage classes, measuring usage, and designing architectures for cost efficiency. The optimization process involves monitoring usage and costs, identifying unnecessary resources, and establishing a tagging strategy. Key recommendations include turning off idle instances, deleting unused volumes, stopping paid services when not in use, using consolidated billing for discounts, and automating processes. Latest trends discussed are 1ms billing granularity for Lambda and independent provisioning of performance and capacity for EBS volumes.
Cloud Economics: Transform Businesses at Lower Costs - AWS Summit Bahrain 2017 - Amazon Web Services
Most likely, your organization is not in the business of running data centers, yet a significant amount of time and money is spent doing just that. Amazon Web Services provides a way to acquire and use infrastructure on-demand, so that you pay only for what you consume. This puts more money back into the business, so that you can innovate more, expand faster, and be better positioned to take advantage of new opportunities. Learn from the CEO of DevFactory on how they saved money and redirected their resources towards boosting innovation after taking advantage of the cloud.
This document summarizes an AWSome Day event for AWS partners. The agenda included:
1. Discussing the partnership vision and AWS value proposition for partners.
2. Highlighting business opportunities partners can pursue by working with AWS, such as managed services, migration services, and software solutions.
3. Providing guidance on how partners can successfully work with AWS, including training staff, focusing on automation, and leveraging AWS pricing models.
4. Explaining how partners can effectively go to market with AWS, such as aligning with AWS best practices, highlighting the partnership, and leveraging AWS marketing and support resources.
Sandeep Cashyap discusses cost optimization when using AWS. He emphasizes that AWS allows customers to pay only for what they use. There are many areas where customers can optimize costs, such as rightsizing instances, using reserved instances and spot instances, stopping unused resources, and using different storage classes. Customers should focus on five pillars of cost optimization: right-sizing instances, using the right pricing models, increasing elasticity, monitoring usage, and matching usage to appropriate storage classes.
Intended for customers who have (or will have) thousands of instances on AWS, this session is about reducing the complexity of managing costs for these large fleets so they run efficiently. Attendees will learn about common roadblocks that prevent large customers from cost optimizing, tools they can use to efficiently remove those roadblocks, and techniques to monitor their rate of cost optimization. The session will include a case study that will talk in detail about the millions of dollars saved using these techniques. Customers will learn about a range of templates they can use to quickly implement these techniques, and also partners who can help them implement these templates.
Moving from an on-premises environment into AWS is just the start of the journey towards cost optimization. In this session we’ll look at a range of ways in which our customers can understand their costs and increase their return-on-investment: building the business case; selecting the right models for the right workloads; benefiting from tiered pricing aggregation; using data to drive the choice of AWS services; implementation of intelligent auto-scaling; and, where appropriate, re-platforming to make use of new architectural patterns such as Serverless.
Learn about how with AWS, you can easily right size your services, leverage Reserved Instances, and use powerful cost management tools to monitor your costs, so you can always stay on top of how much you're spending.
Learn the best practices for right sizing and scheduling instances to reduce costs.
Who is this for:
IT managers, IT executives, business leaders, business decision makers, IT decision makers, system engineers, system administrators, developers and architects.
Commercial Management and Cost Optimization on AWS - AWS Online Tech Talks - Amazon Web Services
Learning Objectives:
- Understand the primary levers available to optimize your AWS environment
- Be aware of tools that will help you optimize your AWS environment
- Understand organizational mechanisms used by customers to promote optimisation
This document provides an overview of cost optimization strategies when using AWS. It discusses building cloud architectures with cost in mind by following best practices like right-sizing instances, using the appropriate pricing model, and matching usage to the proper storage class. It also covers implementing and maintaining cost optimization at scale through automation, measurement, and monitoring. Key recommendations include tagging resources, using tools like AWS Trusted Advisor, and potentially working with partners to help manage costs across accounts and metrics.
Cloud Economics; How to Quantify the Benefits of Moving to the Cloud - Transf... - Amazon Web Services
Most likely, your organization is not in the business of running data centers, yet a significant amount of time and money is spent doing just that. Amazon Web Services provides a way to acquire and use infrastructure on-demand, so that you pay only for what you consume. This puts more money back into the business, so that you can innovate more, expand faster, and be better positioned to take advantage of new opportunities.
Speaker:
Matt Johnson, Solutions Architect, Amazon Web Services
The document discusses how AWS pricing works across various services. It outlines key principles such as understanding the fundamentals of pricing like compute, storage and data transfer costs. It recommends starting early with cost optimization strategies and maximizing flexibility by only paying for resources when needed. AWS offers on-demand and reservation models, with the latter providing discounts for long-term commitments. Pricing details are provided for compute, storage and database services.
1. The document discusses cloud computing and Amazon Web Services (AWS). It describes the benefits of cloud computing, such as paying only for what you use, lower costs, and the ability to scale easily.
2. It then explains AWS products like compute, storage, database and analytics services that can be used to build applications. It provides examples of how companies use AWS.
3. The document concludes by suggesting strategies for using AWS, from using it for development to fully migrating to the cloud, and encourages the reader to try AWS free tier and contact support.
Architecture Best Practices: Practical Design Steps to Save Costs - Level 200 - Amazon Web Services
Did you know that AWS enables builders to architect solutions for price? Beyond the typical challenges of function, performance, and scale, you can make your application cost effective. Using different architectural patterns and AWS services in concert can dramatically reduce the cost of systems operation and per-transaction costs. Attendees will walk away with a new perspective on how they can build systems on AWS economically and effectively.
Speaker: Simon Elisha - Solution Architect Manager, Amazon Web Services
Getting Started with EC2 Spot - November 2016 Webinar Series - Amazon Web Services
This document discusses how to save up to 90% on EC2 costs by using Spot Instances. It provides an overview of AWS EC2 pricing models including On-Demand, Reserved, and Spot Instances. It then focuses on best practices for using Spot Instances, such as using the Spot Bid Advisor, diversifying Spot Fleets across instance types and Availability Zones, and leveraging the two minute warning for Spot termination. Examples are given of customers saving 75-87% on their EC2 costs by using Spot Instances for batch processing, continuous integration, and real-time ad delivery workloads.
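The Spot Fleet diversification the session recommends can be sketched as a request config that spreads capacity across several instance types and Availability Zones, so losing one Spot pool does not take out all capacity at once. The AMI ID, subnet IDs, and IAM role ARN below are placeholders, not real resources.

```python
# Sketch of a diversified Spot Fleet request: capacity is spread across three
# instance types in two subnets (two AZs), giving six distinct Spot pools.
# AMI ID, subnet IDs, and role ARN are placeholders.
spot_fleet_config = {
    "TargetCapacity": 8,
    "AllocationStrategy": "diversified",  # spread capacity evenly across pools
    "IamFleetRole": "arn:aws:iam::123456789012:role/spot-fleet-role",
    "LaunchSpecifications": [
        {"ImageId": "ami-0abc1234", "InstanceType": itype, "SubnetId": subnet}
        for itype in ("m5.large", "c5.large", "r5.large")
        for subnet in ("subnet-aaaa1111", "subnet-bbbb2222")
    ],
}

# With boto3 and valid credentials, the request would be submitted as:
# import boto3
# boto3.client("ec2").request_spot_fleet(SpotFleetRequestConfig=spot_fleet_config)
print(len(spot_fleet_config["LaunchSpecifications"]))  # 6
```

Workloads behind this config should still handle the two-minute interruption warning, since any individual pool can be reclaimed.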
This document discusses using intelligent serverless and scalable real-time data pipelines with AWS Kinesis, Fargate, and CloudFormation (CFN). It begins with an introduction of the speaker and agenda. It then provides an overview of serverless computing and demonstrates how to use Kinesis for real-time streaming. Next, it explains what AWS Fargate is and its benefits. It demonstrates integrating Kinesis and Fargate using CFN. Finally, it briefly discusses some new trends before thanking the audience.
FinOps - AWS Cost and Operational Efficiency - Pop-up Loft Tel Aviv - Amazon Web Services
Saving thousands on AWS by implementing 4 simple steps: identify and terminate unused resources, leverage the cloud to reduce costs, design for cost optimization and implement governance policies and rules.
This document discusses how to optimize costs when using AWS. It recommends: 1) Architecting for cost efficiency by "paying for what you think you need"; 2) Optimizing usage costs by "paying for what you use"; and 3) Taking advantage of benefits over time by "paying for what you really need". It provides examples of using the right instance types, reserved instances, spot instances, and services to reduce costs. It also recommends monitoring billing closely and using tools like Trusted Advisor and the TCO calculator to find additional savings.
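The "identify and terminate unused resources" step above usually starts with a simple classifier: flag instances whose average CPU over a lookback window sits below a threshold. In practice the metrics would come from CloudWatch; here they are sample data, and the 5% threshold is an assumption to tune per workload.

```python
# Sketch of idle-resource identification: flag instances whose average CPU
# is below a threshold. Real deployments would pull these numbers from
# CloudWatch; the sample data and 5% threshold below are assumptions.
IDLE_CPU_THRESHOLD = 5.0  # percent; tune per workload

def find_idle(instances, threshold=IDLE_CPU_THRESHOLD):
    """Return the IDs of instances whose average CPU is below the threshold."""
    return [i["id"] for i in instances if i["avg_cpu"] < threshold]

sample = [
    {"id": "i-aaa", "avg_cpu": 1.2},   # likely idle
    {"id": "i-bbb", "avg_cpu": 43.0},  # busy
    {"id": "i-ccc", "avg_cpu": 4.9},   # borderline idle
]
print(find_idle(sample))  # ['i-aaa', 'i-ccc']
```

Flagged instances are candidates for stopping or termination, which is exactly the "pay for what you use" lever the session describes.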
AWS Partner Webcast - Advanced Strategies for AWS Cost Allocation with Tags a... - Amazon Web Services
AWS provides two powerful tools for segmenting and allocating your AWS costs: tags, and linked accounts. But getting the most out of them requires planning, consistency and buy-in from your team.
In this webinar, you'll learn proven strategies for separating your resources into multiple linked accounts and assigning tags to your resources. Then you'll see how to use Cloudability to precisely track where your AWS spending is going and provide detailed reporting for the decision-makers who need it.
What you’ll learn:
• When to use tags vs. linked accounts for cost allocation.
• How to create a successful tagging strategy.
• How to report and share your costs with the right decision-makers in your organization.
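A successful tagging strategy needs enforcement, and the check itself is small. The sketch below validates that each resource carries a set of required cost-allocation tags; the required keys (`CostCenter`, `Environment`, `Owner`) are an example policy, not a prescription from the webinar.

```python
# Sketch of tag-compliance checking against a required cost-allocation tag set.
# The required keys are an example policy; adapt to your organization.
REQUIRED_TAGS = {"CostCenter", "Environment", "Owner"}

def missing_tags(resource_tags, required=REQUIRED_TAGS):
    """Return the required tag keys a resource is missing, sorted."""
    return sorted(required - set(resource_tags))

print(missing_tags({"CostCenter": "123", "Environment": "prod"}))
print(missing_tags({"CostCenter": "123", "Environment": "prod",
                    "Owner": "data-team"}))
```

Run against the tag dictionaries returned by each service's describe/list APIs, this gives a compliance report that can gate deployments or feed the decision-maker reporting described above.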
This document discusses Auto Scaling in Amazon DynamoDB. It provides an overview of DynamoDB Auto Scaling, including how it removes the need to manually provision and adjust throughput capacity. It describes the console experience and API actions for Auto Scaling. It also explains how Auto Scaling works under the hood, powered by Application Auto Scaling and CloudWatch. Best practices for Auto Scaling such as optimizing for daily scale down limits are also covered.
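Since DynamoDB Auto Scaling is powered by Application Auto Scaling, the underlying setup can be sketched as two API payloads: register the table's read capacity as a scalable target, then attach a target-tracking policy. The table name and capacity limits below are illustrative.

```python
# Sketch of the Application Auto Scaling calls behind DynamoDB Auto Scaling.
# Table name and capacity bounds are illustrative placeholders.
scalable_target = {
    "ServiceNamespace": "dynamodb",
    "ResourceId": "table/ExampleTable",
    "ScalableDimension": "dynamodb:table:ReadCapacityUnits",
    "MinCapacity": 5,
    "MaxCapacity": 500,
}
scaling_policy = {
    "PolicyName": "read-utilization-tracking",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 70.0,  # keep consumed/provisioned reads near 70%
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"},
    },
}

# With boto3 and valid credentials:
# import boto3
# aas = boto3.client("application-autoscaling")
# aas.register_scalable_target(**scalable_target)
# aas.put_scaling_policy(
#     **scaling_policy,
#     **{k: scalable_target[k]
#        for k in ("ServiceNamespace", "ResourceId", "ScalableDimension")})
```

Write capacity is handled the same way with the `WriteCapacityUnits` dimension and write-utilization metric; CloudWatch alarms created by the policy drive the actual scaling actions.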
This document provides an overview of using AWS Glue and EMR for big data engineering. It discusses the services and components of EMR and Glue for building data lakes and data warehouses. It includes demos of building ETL pipelines to transform and load CSV data into a Redshift data warehouse using both EMR and Glue. The document compares EMR and Glue, highlighting that EMR is a managed Hadoop framework while Glue is a fully managed service.
The document describes a presentation on Amazon Athena, a serverless interactive query service that allows users to analyze data directly from Amazon S3 using standard SQL. The presentation will introduce Athena and demonstrate how it can be used to query data in S3 without having to load it into a database first. It will also discuss how Athena uses Presto and the Glue Data Catalog under the hood and show some customer use cases for log analysis, ETL workflows, and analytics reporting using Athena with other AWS services.
Getting to 1.5M Ads/sec: How DataXu manages Big Data - Qubole
DataXu sits at the heart of the all-digital world, providing a data platform that manages tens of millions of dollars of digital advertising investments from Global 500 brands. The DataXu data platform evaluates 1.5 million online ad opportunities every second for our customers, allowing them to manage and optimize their marketing investments across all digital channels. DataXu employs a wide range of AWS services: CloudFront, CloudTrail, CloudWatch, Data Pipeline, Direct Connect, DynamoDB, EC2, EMR, Glacier, IAM, Kinesis, RDS, Redshift, Route 53, S3, SNS, SQS, and VPC to run various workloads at scale for its data platform.
In addition, DataXu uses Qubole Data Service (QDS) to offer a Unified Analytics Interface tool to DataXu customers. Qubole, a member of the APN, provides self-managing big data infrastructure in the Cloud that leverages spot pricing for cost-efficiency, delivers fast performance, and, most importantly, offers a streamlined user interface for ease of use.
Attendees will learn how Qubole's self-managing Hadoop clusters in the AWS Cloud accelerated DataXu's batch-oriented analysis jobs, and how Qubole's integration with Amazon Redshift enabled DataXu to perform low-latency, interactive analysis. We'll also look at how DataXu opened up QDS access to its customers via the QDS user interface, giving them a single tool for both batch-oriented and interactive analysis. Using the QDS user interface, buyers of the DataXu data service could perform all manner of analysis against the data stored in their Amazon S3 buckets.
Speakers:
Scott Ward
Solutions Architect at Amazon Web Services
Ashish Dubey
Solutions Architect at Qubole
Yekesa Kosuru
VP Engineering at DataXu
The document provides information about querying and analyzing data in Amazon S3 using various AWS services. It discusses:
1. Using Amazon EMR to process raw web logs delivered to S3 by Kinesis Firehose using Apache Spark.
2. Loading the processed data into Amazon Redshift for interactive querying using SQL.
3. Performing ad-hoc analysis on the data in S3 using serverless Athena without having to set up any infrastructure.
BDA308 Serverless Analytics with Amazon Athena and Amazon QuickSight, featuri... (Amazon Web Services)
Amazon QuickSight is a fast, cloud-powered business intelligence (BI) service that makes it easy to build visualizations, perform ad-hoc analysis, and quickly get business insights from your data. In this session, we demonstrate how you can point Amazon QuickSight at AWS data stores, flat files, or other third-party data sources and begin visualizing your data in minutes. We also introduce SPICE, QuickSight's Super-fast, Parallel, In-memory Calculation Engine, which performs advanced calculations and renders visualizations rapidly without requiring any additional infrastructure, SQL programming, or dimensional modeling, so you can seamlessly scale to hundreds of thousands of users and petabytes of data. Lastly, you will see how Amazon QuickSight provides smart visualizations and graphs that are optimized for your different data types, ensuring the most suitable visualization for your analysis, and how to share these visualization stories using the built-in collaboration tools.
This document discusses AWS and cloud adoption journeys. It describes typical stages of adoption including project, foundation, migration, and reinvention stages. It recommends initial steps for a cloud journey such as creating a minimum viable product, cloud center of excellence, and discovery workshop. The document provides examples of customer cloud journeys over multiple years and discusses concepts like landing zones, account structure, network setup, identity and access management, and service catalog.
(ISM315) How to Quantify TCO & Increase Business Value Gains Using AWS (Amazon Web Services)
Do you need to develop a business case for moving to the cloud, or communicate the business value of your investment in AWS? This session introduces you to methods and tools to help you calculate total cost of ownership (TCO) and evaluate your business value gains from AWS.
In this session, you learn how to measure TCO and business value and communicate a business case to organizations such as finance and procurement. You compare the costs of running your own IT infrastructure on-premises vs. on AWS and quantify intangible benefits. You also learn about resources available from AWS to help you engage in business value conversations with your organization's leaders, and what contacts are available to you for further evaluation.
Join us for a series of introductory and technical sessions on AWS Big Data solutions. Gain a thorough understanding of what Amazon Web Services offers across the big data lifecycle and learn architectural best practices for applying those solutions to your projects.
We will kick off this technical seminar in the morning with an introduction to the AWS Big Data platform, including a discussion of popular use cases and reference architectures. In the afternoon, we will deep dive into Machine Learning and Streaming Analytics. We will then walk everyone through building your first Big Data application with AWS.
This document provides an overview of Amazon Athena, an interactive query service that allows users to analyze data directly from Amazon S3 using standard SQL. Key points include:
- Athena allows users to query data stored in S3 without having to load it into a separate data warehouse or Hadoop cluster. It uses standard SQL and is serverless, requiring no infrastructure management.
- Customers can analyze large amounts of data stored in S3 for analytics without having to move or preprocess the data first. Athena supports a variety of file formats and is easy to use via the AWS console or JDBC/ODBC drivers.
- The demonstration shows how to use Athena to analyze Amazon ELB access logs.
This document provides tips for optimizing costs when using AWS. It discusses how AWS pricing models allow saving money compared to on-premises infrastructure as usage grows. Specific tips include choosing optimal instance types, using auto-scaling, stopping unused instances, reserving instances to save up to 75%, using spot instances for up to 92% discounts, using appropriate storage classes, offloading tasks from your architecture, leveraging AWS services rather than rebuilding capabilities, and using tools like Trusted Advisor to analyze spending. The goal is to continuously lower costs through economies of scale and passing savings to customers.
AWS 201 Webinar Series - Rightsizing and Cost Optimizing your Deployment (Amazon Web Services)
Leveraging the AWS Cloud can help you lower your overall IT costs and avoid fixed, upfront IT investments. Learning how to right-size your environments helps you go from guessing at capacity to meeting QoE targets for your customers. The session also covers best practices for architecting for cost, drawn from real-world customer use cases, and how the AWS Cloud can help you increase revenue by focusing on innovation and return on agility.
Key takeaways
- Replace up-front capital expenses with low variable costs
- Outsource undifferentiated IT tasks to useful services
- Evaluate the total Cost of (Non) Ownership
- Build Cost-aware architectures
- AWS features that help you reduce your spend
- Different purchasing options available with AWS
Who should attend
- Technical Users: Developers, engineers, system administrators and architects
- Decision Makers: IT Managers, directors and business leaders
NEW LAUNCH! Intro to Amazon Athena. Analyze data in S3, using SQL (Amazon Web Services)
This document provides an overview of Amazon Athena, an interactive query service that makes it easy to analyze data stored in Amazon S3 using standard SQL. Key points include:
- Athena allows users to analyze large datasets in S3 without having to load the data into a separate data warehouse or Hadoop cluster. It is serverless, requiring no infrastructure to manage.
- Users can write SQL queries against data stored in S3 in a variety of formats like CSV, JSON, and columnar formats like Parquet. Athena supports complex queries, joins, and functions.
- Athena is cost-effective: customers pay only for the amount of data scanned by their queries. Using compression and columnar formats reduces the data scanned, and therefore the cost.
AWS Partner Webcast - Improving Your AWS Cost Efficiency with Cloudability (Amazon Web Services)
Reducing your Amazon Web Services (AWS) costs can be as easy as turning off unused resources and buying Reserved Instances. But as your AWS infrastructure grows, finding and acting on those opportunities to save becomes more challenging as the number and complexity of projects grow.
Review this webinar to learn how REA Group uses Cloudability AWS cost management tools to manage their infrastructure and reduce their own TCO, while taking advantage of a large and complex set of global deployments on AWS.
What you'll learn:
- How to find and shut down resources that aren’t being used
- How to make Reserved Instance purchase decisions that are easier, faster and more likely to save you money
- How to communicate those savings to stakeholders in finance and management
Data warehousing in the era of Big Data: Deep Dive into Amazon Redshift (Amazon Web Services)
Analyzing big data quickly and efficiently requires a data warehouse optimized to handle and scale for large datasets. Amazon Redshift is a fast, petabyte-scale data warehouse that makes it simple and cost-effective to analyze all of your data for a fraction of the cost of traditional data warehouses. In this session, we take an in-depth look at data warehousing with Amazon Redshift for big data analytics. We cover best practices to take advantage of Amazon Redshift's columnar technology and parallel processing capabilities to deliver high throughput and query performance. We also discuss how to design optimal schemas, load data efficiently, and use workload management.
FSI201 FINRA's Managed Data Lake – Next Gen Analytics in the Cloud (Amazon Web Services)
FINRA's Data Lake unlocks the value in its data to accelerate analytics and machine learning at scale. FINRA's Technology group has changed its customers' relationship with data by creating a Managed Data Lake that enables discovery on petabytes of capital markets data, while saving time and money over traditional analytics solutions. FINRA's Managed Data Lake includes a centralized data catalog and separates storage from compute, allowing users to query petabytes of data in seconds. Learn how FINRA uses Spot instances and services such as Amazon S3, Amazon EMR, Amazon Redshift, and AWS Lambda to provide the right tool for the right job at each step in the data processing pipeline. All of this is done while meeting FINRA's security and compliance responsibilities as a financial regulator.
ENT316 Keeping Pace With The Cloud: Managing and Optimizing as You Scale (Amazon Web Services)
With cloud maturity comes operational efficiencies and endless potential for innovation and business growth. However, the complexities of governing cloud infrastructure can impede progress without the right strategy. Visibility, accountability, and actionable insights are among the most invaluable considerations. The AWS cloud clearly enables convenience and cost savings for organizations that know how to leverage its full potential. Amazon EC2 Reserved Instances (RIs), in particular, present a tremendous opportunity to save significantly on capacity when scaling, but there are many considerations to fully reaping the benefits of RIs. In this session, CloudCheckr CTO Patrick Gartlan will present issues that every organization runs into when scaling, provide best practices for combating them, and help you show your boss how RIs save money and let you move faster.
This session is brought to you by AWS Summit Chicago sponsor, CloudCheckr.
Day 3 - Maintaining Performance & Availability While Lowering Costs with AWS (Amazon Web Services)
AWS provides you several pricing options that can help you significantly reduce your overall IT cost, including On-Demand Instances, Spot Instances, and Reserved Instances. This session covers high-level architectures and when to use and not to use each of the pricing models for components of those architectures. We walk through several customer examples to illustrate when to use each pricing option. Additionally, we walk through tools that may be useful to determine when to use each pricing model. This session is aimed at technically savvy managers and engineers who need to reduce their cloud spending.
Reasons to attend:
- Learn about Reserved Instances, On-Demand Instances and Spot Instances.
- Discover ways of running more for less in Amazon EC2.
- If you are already running a workload in AWS, attend this webinar to learn how to run the same workload at reduced costs.
ENT316 Keeping Pace With The Cloud: Managing and Optimizing as You Scale (Amazon Web Services)
With cloud maturity comes operational efficiencies and endless potential for innovation and business growth. However, the complexities of governing cloud infrastructure can impede progress without the right strategy. Visibility, accountability, and actionable insights are among the most invaluable considerations. The AWS cloud clearly enables convenience and cost savings for organizations that know how to leverage its full potential. Amazon EC2 Reserved Instances (RIs), in particular, present a tremendous opportunity to save significantly on capacity when scaling, but there are many considerations to fully reaping the benefits of RIs. In this session, CloudCheckr CTO Patrick Gartlan will present issues that every organization runs into when scaling, provide best practices for combating them, and help you show your boss how RIs save money and let you move faster.
This session is brought to you by AWS Summit New York City sponsor, CloudCheckr.
2. Agenda
• 2:00pm - Setup
• 2:10pm - Kick off, welcome, and intro
• 2:20pm - Jason Gorringe, Australia Post: How to get the most out of your AWS usage
• 2:50pm - Discussion and Q&A
• 3:00pm - Peter Shi, AWS: Developing a Cost Management Dashboard that provides high speed to insight
• 4:30pm - Discussion, Q&A, and networking over drinks and snacks
• 5:30pm - Event Concludes
4. Why should we care about Cost Optimisation?
Example non-prod workload checklist (cost as each optimisation is applied):
• Starting non-prod workload: $1000
• Can this run on CentOS/Linux? $787
• Turn off outside of work hours? $236
• Right size down by 1 size? $118
• Can this run on EC2 Spot? $30
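Reading the slide's dollar figures as a cumulative cost waterfall (each optimisation applied on top of the previous one; this mapping of figures to steps is an interpretation of the original diagram, not stated explicitly), the per-step savings can be checked with a few lines of Python:

```python
# Costs from the slide, read as a waterfall: each optimisation step
# is applied on top of the previous one. The step-to-figure mapping
# is our interpretation of the original diagram.
steps = [
    ("Starting non-prod workload", 1000),
    ("Can this run on CentOS/Linux?", 787),
    ("Turn off outside of work hours?", 236),
    ("Right size down by 1 size?", 118),
    ("Can this run on EC2 Spot?", 30),
]

# Percentage saved by each step relative to the step before it.
savings = {
    name: round(100 * (1 - cost / prev_cost))
    for (_, prev_cost), (name, cost) in zip(steps, steps[1:])
}

# Overall reduction from $1000 down to $30.
total_reduction = round(100 * (1 - steps[-1][1] / steps[0][1]))
```

The figures are self-consistent under this reading: right-sizing down one size halves the cost (50%), and Spot gives roughly 75%, inside the discount range AWS typically quotes, for a 97% reduction overall.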
5. Agenda
• 2:00pm - Setup
• 2:10pm - Kick off, welcome, and intro
• 2:20pm - Jason Gorringe, Australia Post: How to get the most out of your AWS usage
• 2:50pm - Discussion and Q&A
• 3:00pm - Peter Shi, AWS: Developing a Cost Management Dashboard that provides high speed to insight
• 4:30pm - Discussion, Q&A, and networking over drinks and snacks
• 5:30pm - Event Concludes
7. History
• About Post
• About Me
• My Journey with Cloud Services
• Cost Optimisation principles
• Where to from here?
8. Beginning the journey
Communicate
• Constant two-way communication between business, IT and the billing team is vital
Educate
• An understanding of the intricacies of AWS billing
Empower
• Give control to users of the platform to manage their own costs
16. • Showback and Chargeback
• Monitoring and reporting
• Analysis and trending
• Collaboration and communication
17. Where to from here?
• Automated tagging
• Product/team account based strategy
• Review of services consumed
• Tagging (integration and central management)
19. Agenda
• 2:00pm - Setup
• 2:10pm - Kick off, welcome, and intro
• 2:20pm - Jason Gorringe, Australia Post: How to get the most out of your AWS usage
• 2:50pm - Discussion and Q&A
• 3:00pm - Peter Shi, AWS: Developing a Cost Management Dashboard that provides high speed to insight
• 4:30pm - Discussion, Q&A, and networking over drinks and snacks
• 5:30pm - Event Concludes
21. Contents
• Why should I build my own dashboard?
• AWS Data Sources
• Data Pipelines into Athena
• Gaining Speed to Insight in QuickSight (incl. visualization tips and KPIs)
24. Pick the tool that provides the cost visibility and speed to insight that you need
From a simple, static, small environment to a complex, dynamic, large environment:
1. Monthly AWS Invoice
2. AWS Billing console
3. AWS Cost Explorer and AWS Budgets
4. AWS Billing File Analysis, DIY dashboards, and 3rd party tools
26. Contents
• Why should I build my own dashboard?
• AWS Data Sources
• Data Pipelines into Athena
• Gaining Speed to Insight in QuickSight (incl. visualization tips and KPIs)
27. AWS Data Sources
• AWS Cost and Usage Report (CUR)
  • Hourly billing data for each service, plus more info such as RI usage
  • Can have a very large number of rows and columns
• AWS CloudWatch data
  • Resource utilization data
• DIY budget and revenue data
  • Flat .csv file of how much you’ve budgeted to spend, plus revenue generated associated with AWS spend
• AWS CloudTrail
28. Step 1.1: Generate the AWS Cost and Usage Report (CUR) for your account @ Payer acct. level
• CUR is the data source of Cost Explorer
• Enable the CUR (5 minute exercise)
• https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/billing-reports-gettingstarted-turnonreports.html
29. Step 1.2: Create some DIY budget and revenue data
• Create a CSV file with 4 columns:
  • Account id
  • Month
  • Budget (example business constraint)
  • Revenue (example business metric)
• Other example metrics:
  • minutes on website
  • number of devs
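As a minimal sketch of the file described above (the account IDs and dollar figures below are made-up placeholders), the CSV can be produced with Python's standard library:

```python
import csv

# Hypothetical per-account, per-month budget and revenue figures.
rows = [
    # accountid,      month,     budget, revenue
    ("111111111111", "2018-10", "5000", "12000"),
    ("222222222222", "2018-10", "3000", "7500"),
]

with open("budget_and_revenue.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["accountid", "month", "budget", "revenue"])  # header row
    writer.writerows(rows)
```

Keeping the header row matters later: the Athena table definition in Step 2.2 skips it via skip.header.line.count.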
30. Step 1.3: Save CloudWatch data across relevant accounts to S3
• Amazon CloudWatch is a monitoring and management service that collects and reports resource metrics
• Metrics that indicate effective EC2 use (for many workloads) include:
  • CPU % utilisation
  • Memory % utilisation
  • Network IO (to internet and to EBS)
• For the purposes of today’s exercise we’ll use only CPU %
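In practice you would pull the CPUUtilization metric with boto3's cloudwatch.get_metric_statistics() and write the result to S3; the reshaping step in the middle is plain Python and can be sketched as follows (the instance ID and datapoints are invented sample data):

```python
from datetime import datetime, timezone

def datapoints_to_rows(instance_id, datapoints):
    """Flatten CloudWatch-style CPU datapoints into CSV-ready rows.

    `datapoints` mimics the 'Datapoints' list that boto3's
    cloudwatch.get_metric_statistics() returns; only the Timestamp
    and Average fields are used here.
    """
    return [
        (instance_id, dp["Timestamp"].isoformat(), dp["Average"])
        for dp in sorted(datapoints, key=lambda d: d["Timestamp"])
    ]

# Invented sample payload standing in for a real API response
# (CloudWatch returns datapoints in no guaranteed order).
sample = [
    {"Timestamp": datetime(2018, 11, 9, 10, tzinfo=timezone.utc), "Average": 7.2},
    {"Timestamp": datetime(2018, 11, 9, 9, tzinfo=timezone.utc), "Average": 3.1},
]
rows = datapoints_to_rows("i-0abc123", sample)
```

Writing these rows out as CSV objects under a per-account S3 prefix keeps the later Athena table definition simple.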
31. Step 1.3: Save CloudWatch data across relevant
accounts to S3
• Open source multi-account example of Cost Optimization:
EC2 Right Sizing solution which collects CloudWatch data
across multiple accounts
https://github.com/saltysoup/cost-optimization-multi
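As a minimal sketch of what the collection step asks CloudWatch for, the snippet below builds the parameters for CloudWatch's GetMetricStatistics API (callable via boto3 as `cloudwatch.get_metric_statistics(**params)`). The instance id is a placeholder, and the live API call itself is omitted so the snippet runs offline.

```python
# Sketch: parameters for pulling an instance's peak CPU from CloudWatch.
# Pass the returned dict to boto3's cloudwatch.get_metric_statistics(**params).
from datetime import datetime, timedelta

def cpu_metric_params(instance_id, days=14):
    """Request parameters for peak CPU over the last `days` days."""
    end = datetime.utcnow()
    return {
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "StartTime": end - timedelta(days=days),
        "EndTime": end,
        "Period": 3600,            # one datapoint per hour
        "Statistics": ["Maximum"],  # peak, as used in the sizing exercise later
    }

params = cpu_metric_params("i-0123456789abcdef0")  # placeholder instance id
```

The multi-account solution linked above automates this collection and drops the results into S3 as CSV, which is the shape the Athena DDL in Step 2.3 expects.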
32. Contents
• Why should I build my own dashboard?
• AWS Data Sources
• Data Pipelines into Athena
• Gaining Speed to Insight in Quicksight
(incl. visualization tips and KPIs)
33. What’s a data pipeline and what’s Athena?
• Data pipelines get your data into the format and location
where you want it to be, typically in an automated way. Also
known as ETL (Extract, Transform and Load)
• The “Load” portion of this will get our data into Amazon
Athena, our serverless interactive query service that can
query data directly from S3 (Simple Storage Service)
• Athena costs $5 per TB of data scanned. For a typical $100k
p.m. biller, analysing billing data via Athena should cost
approx. $5 per month. However, it’s always smart to set an
AWS Budgets alert to catch rogue scripts.
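The $5/month estimate is simple arithmetic on the $5/TB price. A quick back-of-envelope check, under the assumption of a roughly 2 GB CUR scanned in full by about 500 queries a month:

```python
# Back-of-envelope check of the "~$5/month" claim above.
# Assumed workload: ~2 GB scanned per query, ~500 queries per month.
PRICE_PER_TB = 5.00  # Athena price per TB of data scanned

def monthly_athena_cost(gb_scanned_per_query, queries_per_month):
    tb_scanned = gb_scanned_per_query * queries_per_month / 1024
    return tb_scanned * PRICE_PER_TB

cost = monthly_athena_cost(gb_scanned_per_query=2, queries_per_month=500)
# 1000 GB scanned ≈ 0.98 TB -> about $4.88 for the month
```

Partitioning the CUR by month and selecting only needed columns keeps the scanned bytes (and therefore the bill) low; a rogue script re-scanning the full table in a loop is exactly what the AWS Budgets alert is for.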
34. Step 2.1: Create an automated way to get CUR
billing data into Athena
• Option 1: Use an open source tool
https://bitbucket.org/atlassian/squeegee/wiki/Home
• Option 2: To be updated in next version of slides
35. Step 2.2: Get your DIY Budget and Revenue data
into Athena
• Option 1: Manual via Athena SQL script
• Option 2: Automated method 1:
S3 Event -> call Lambda -> run Athena SQL
https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html
• Option 3: Automated method 2: AWS Glue
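Option 2 can be sketched as a small Lambda handler: the S3 event shape follows the linked AWS docs, while the table, bucket, and query are placeholders. The Athena call (boto3's `athena.start_query_execution`) is left commented out so the sketch stays runnable offline.

```python
# Sketch of the S3 Event -> Lambda -> Athena SQL pattern (Option 2).
# Table, bucket, and query names are placeholders for illustration.
def handler(event, context=None):
    # S3 put events carry the bucket and key of the object that landed
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    query = (
        "SELECT count(*) FROM dbname.budget_and_rev_data "
        f"-- triggered by s3://{bucket}/{key}"
    )
    # import boto3
    # boto3.client("athena").start_query_execution(
    #     QueryString=query,
    #     ResultConfiguration={"OutputLocation": "s3://bucketname/athena-results/"},
    # )
    return query

sample_event = {"Records": [{"s3": {"bucket": {"name": "bucketname"},
                                    "object": {"key": "budgetandrevenue/budget.csv"}}}]}
```

In production the query would typically refresh or validate the staging table each time a new budget file lands in the prefix.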
36. Step 2.2: Get your DIY Budget and Revenue data
into Athena – Example SQL
CREATE EXTERNAL TABLE IF NOT EXISTS dbname.budget_and_rev_data (
`accountid` string,
`month_` string,
`budget` string,
`revenue` string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
'separatorChar' = ',',
'quoteChar' = '"',
'escapeChar' = ''
)
LOCATION 's3://bucketname/budgetandrevenue/'
TBLPROPERTIES ('has_encrypted_data'='false', "skip.header.line.count"="1")
37. Step 2.3: Get your CloudWatch data into Athena
• Option 1: Manual via Athena SQL script
• Option 2: Automated method 1:
S3 Event -> call Lambda -> run Athena SQL
https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html
• Option 3: Automated method 2: AWS Glue
38. Step 2.3: Get your CloudWatch data into Athena –
Example SQL
CREATE EXTERNAL TABLE IF NOT EXISTS
dbname.cw_data (
`humanReadableTimestamp` string,
`timestamp` string,
`accountId` string,
`az` string,
`instanceId` string,
`instanceType` string,
`instanceTags` string,
`ebsBacked` string,
`volumeIds` string,
`instanceLaunchTime` string,
`humanReadableInstanceLaunchTime` string,
`CPUUtilization` string,
`NetworkIn` string,
`NetworkOut` string,
`DiskReadOps` string,
`DiskWriteOps` string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
'separatorChar' = ',',
'quoteChar' = '"',
'escapeChar' = ''
)
LOCATION 's3://bucketname/cw/'
TBLPROPERTIES ('has_encrypted_data'='false', "skip.header.line.count"="1")
39. Contents
• Why should I build my own dashboard?
• AWS Data Sources
• Data Pipelines into Athena
• Gaining Speed to Insight in Quicksight
(incl. visualization tips and KPIs)
40. What’s QuickSight?
• Amazon QuickSight is a fast, cloud-powered BI service that
makes it easy to build visualizations, perform ad-hoc
analysis, and quickly get business insights from your data.
• Accessed from any browser or mobile device.
• First BI service to offer pay-per-session pricing so no upfront
costs, no annual commitments, and no charges for inactive
users!
42. Part of speed to insight is getting data to the places you are
already used to consuming it (e.g. email).
QuickSight can send dashboards to you via email.
https://aws.amazon.com/blogs/big-data/amazon-quicksight-now-supports-email-reports-and-data-labels/
43. Step 3.1: Set up a new Athena data source in
QuickSight
• S3 permissions
• Data types
• Date format
• Decimal format
44. Step 3.1: Set up a new Athena data source in
QuickSight – Example SQL for date formatting
SELECT
substring(cast(from_iso8601_timestamp(bill_billingperiodstartdate) AS varchar), 1, 19)
AS billingperiodstartdate,
substring(cast(from_iso8601_timestamp(bill_billingperiodenddate) AS varchar), 1, 19)
AS billingperiodenddate,
substring(cast(from_iso8601_timestamp(lineitem_usagestartdate) AS varchar), 1, 19)
AS usagestartdate,
substring(cast(from_iso8601_timestamp(lineitem_usageenddate) AS varchar), 1, 19)
AS usageenddate,
*
FROM dbname.cost_usage_report
45. Step 3.1: Set up a new Athena data source in
QuickSight – Example Date format syntax for
QuickSight
yyyy-MM-dd HH:mm:ss
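The SQL on the previous slide and this format string do the same job: trim an ISO-8601 CUR timestamp down to the form QuickSight parses as a date. The same transformation in Python, for checking a sample value locally:

```python
# Reformat an ISO-8601 CUR timestamp (e.g. 2018-11-01T02:30:00Z) into
# the yyyy-MM-dd HH:mm:ss form QuickSight expects.
from datetime import datetime

def quicksight_ts(iso_ts):
    dt = datetime.strptime(iso_ts, "%Y-%m-%dT%H:%M:%SZ")
    return dt.strftime("%Y-%m-%d %H:%M:%S")

quicksight_ts("2018-11-01T02:30:00Z")  # -> "2018-11-01 02:30:00"
```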
46. Step 3.2: Visualize spend by account and month
• This helps us see
• Largest spends
• Changes in spend
• But what do we mean by spend?
47. Step 3.2: Visualize spend by account and month
• What do we mean by spend?
Cost View | CUR Column | Description
Blended | lineItem/BlendedCost | Cost based on a common (blended) rate across all accounts in the consolidated billing family
Unblended | lineItem/UnblendedCost | Cost based on the rate actually applied to that account’s usage
Amortized | reservation/AmortizedUpfrontFeeForBillingPeriod + reservation/UnusedAmortizedUpfrontFeeForBillingPeriod | Amortised value of upfront RI spend
Public On Demand Cost | pricing/publicOnDemandCost | True reflection of usage: the price that would have been paid if run on-demand with no free tier
48. Step 3.2: Visualize spend by account and month
• What would give better insight?
• Spend by account by week
• Having account names instead of IDs
• Having a granular account structure
• Tagging for visibility into apps, teams, cost centers
50. Step 3.3: Now let’s visualize:
“what is my % spend against budget?”
• Create a view in Athena that joins budget and billing data
• Visualise the view in QuickSight
A “view” in database-world is a saved query: it stores no data,
but re-runs its query each time you ask, so it always retrieves
the latest data. The query can source data from one or more
data sources.
52. Step 3.4: Why is account X over budget?
Let’s see spend by service/product for that account
• Create spend by product for all accounts
• Add a parameterised filter for linked account
• Which service/product significantly increased during April?
54. Step 3.5: Great to see which service, but which
team and app drove this change?
• Which team significantly increased spend during April?
• Which app significantly increased spend during April?
57. Step 3.6: Search for optimization opportunity in
resource sizing via EC2 instance utilization
• Join CloudWatch data with billing data in Athena
(showing peak CPU over 14 days by instance and tag)
and visualize in QuickSight
• Create a join view in Athena
• Visualise the view in QuickSight
58. Step 3.6: Search for optimization opportunity in
resource sizing via EC2 instance utilization - SQL
SELECT
lineitem_resourceid
, month
, lineitem_usageaccountid
, instancetags
, max_cpu
, sum(CAST(unblendedcost_withoutri AS DOUBLE)) AS unblendedcost_withoutri
, sum(CAST(lineitem_unblendedcost AS DOUBLE)) AS lineitem_unblendedcost
FROM "dbname"."cost_usage_report"
INNER JOIN
(SELECT instanceId, instancetags, max(cpuutilization) AS max_cpu FROM "dbname"."cw_data" GROUP BY instanceId,
instancetags) cw
ON lineitem_resourceid = cw.instanceId
GROUP BY
lineitem_resourceid
, month
, lineitem_usageaccountid
, instancetags
, max_cpu
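The join logic in the SQL above, mirrored in plain Python with made-up data, shows what the right-sizing view produces: each instance's peak CPU from the CloudWatch extract attached to its total unblended spend.

```python
# Mirror of the Athena join above, on illustrative made-up data:
# attach each instance's peak CPU to its billing lines, then total cost.
billing = [
    {"resourceid": "i-aaa", "month": "2019-04", "cost": 10.0},
    {"resourceid": "i-aaa", "month": "2019-04", "cost": 12.0},
    {"resourceid": "i-bbb", "month": "2019-04", "cost": 40.0},
]
cloudwatch = [
    {"instanceId": "i-aaa", "cpuutilization": 7.5},
    {"instanceId": "i-aaa", "cpuutilization": 11.0},
    {"instanceId": "i-bbb", "cpuutilization": 88.0},
]

# max(cpuutilization) per instance -- the inner SELECT in the SQL
max_cpu = {}
for row in cloudwatch:
    iid = row["instanceId"]
    max_cpu[iid] = max(max_cpu.get(iid, 0.0), row["cpuutilization"])

# join on instance id and sum cost -- the outer query's GROUP BY
totals = {}
for li in billing:
    key = (li["resourceid"], li["month"], max_cpu[li["resourceid"]])
    totals[key] = totals.get(key, 0.0) + li["cost"]

# Low peak CPU alongside real spend flags a right-sizing candidate:
# here i-aaa peaks at 11% CPU while costing $22 for the month.
```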
60. Step 3.7: Establish our first KPI
• What is the cost per revenue change month on month
for the account with a business value metric?
• Use the budget and revenue view created earlier
• Visualise the view in QuickSight
via the KPI visual type
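Once the view exists, the KPI itself is a simple ratio and its month-on-month change. A sketch with illustrative figures (monthly cost totals joined to the DIY revenue data):

```python
# Sketch of the KPI: cost per unit of revenue, and its month-on-month
# change. Cost and revenue figures are illustrative placeholders.
months = {
    "2019-03": {"cost": 9000.0, "revenue": 54000.0},
    "2019-04": {"cost": 12000.0, "revenue": 58000.0},
}

def cost_per_revenue(month):
    m = months[month]
    return m["cost"] / m["revenue"]

prev = cost_per_revenue("2019-03")   # ~0.167 dollars of cost per dollar of revenue
cur = cost_per_revenue("2019-04")    # ~0.207
mom_change = (cur - prev) / prev     # ~+24%: cost grew faster than revenue
```

A rising ratio means spend is growing faster than the business metric it supports, which is the signal this KPI exists to surface.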
61. Step 3.7: Establish our first KPI – Example SQL
-- Sketch: joins monthly billing totals to the DIY budget/revenue table
SELECT
b.accountid
, b.month_
, sum(CAST(c.lineitem_unblendedcost AS DOUBLE)) AS total_cost
, CAST(b.revenue AS DOUBLE) AS revenue
, sum(CAST(c.lineitem_unblendedcost AS DOUBLE)) / CAST(b.revenue AS DOUBLE) AS cost_per_revenue
FROM "dbname"."cost_usage_report" c
INNER JOIN "dbname"."budget_and_rev_data" b
ON c.lineitem_usageaccountid = b.accountid
AND c.month = b.month_
GROUP BY b.accountid, b.month_, b.revenue
63. Contents
• Why should I build my own dashboard?
• AWS Data Sources
• Data Pipelines into Athena
• Gaining Speed to Insight in Quicksight
(incl. visualization tips and KPIs)
64. REA Group has driven cost governance and good cost
behaviour through Finance working with Engineering
A talk about their story is here:
http://bit.ly/FinOpsAtREA
66. Next step options (if useful)
• Have a chat with your TAM about this
• If you need help, AWS ProServe can assist: let your account
manager know and ask to CC Peter
• Try this yourself
68. Agenda
• 2:00pm - Setup
• 2:10pm - Kick off, welcome, and intro
• 2:20pm - Jason Gorringe, Australia Post: How to get the most out of your
AWS usage
• 2:50pm - Discussion and Q&A
• 3:00pm - Peter Shi, AWS: Developing a Cost Management Dashboard that
provides high speed to insight
• 4:30pm - Survey, Discussion, Q&A, and networking over drinks and snacks
• 5:30pm - Event Concludes