This document provides an overview of Amazon Web Services. It begins with an agenda that lists Amazon Cloud Platform, Amazon Compute Services, and Amazon Services. It then covers various AWS services like EC2, S3, RDS, SQS, SNS, CloudWatch, ELB, and Auto Scaling in detail, describing their key features, pricing models, and providing demos. The document aims to educate users on how to get started with AWS and leverage its various cloud-based services.
Amazon DynamoDB is a fully managed NoSQL database service for applications that need consistent, single-digit millisecond latency at any scale. This talk explores DynamoDB capabilities and benefits in detail and discusses how to get the most out of your DynamoDB database. We go over schema design best practices with DynamoDB across multiple use cases, including gaming, AdTech, IoT, and others. We also explore designing efficient indexes, scanning, and querying, and go into detail on a number of recently released features, including JSON document support, Streams, Time-to-Live (TTL), and more.
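The schema-design ideas above can be sketched as the parameters you would hand to boto3's `dynamodb.create_table`: a partition/sort key pair for the main access pattern and a global secondary index for a second one. This is an illustrative sketch only; the table, attribute, and index names (an AdTech-style impressions table) are hypothetical, not from the talk.

```python
# Hypothetical AdTech table: impressions keyed by campaign and timestamp,
# with a GSI that supports querying the same items by user instead.
table_params = {
    "TableName": "Impressions",  # hypothetical name
    "KeySchema": [
        {"AttributeName": "campaign_id", "KeyType": "HASH"},  # partition key
        {"AttributeName": "shown_at", "KeyType": "RANGE"},    # sort key
    ],
    "AttributeDefinitions": [
        {"AttributeName": "campaign_id", "AttributeType": "S"},
        {"AttributeName": "shown_at", "AttributeType": "N"},
        {"AttributeName": "user_id", "AttributeType": "S"},
    ],
    "GlobalSecondaryIndexes": [
        {
            "IndexName": "by-user",
            "KeySchema": [
                {"AttributeName": "user_id", "KeyType": "HASH"},
                {"AttributeName": "shown_at", "KeyType": "RANGE"},
            ],
            # Project only keys to keep the index small and cheap.
            "Projection": {"ProjectionType": "KEYS_ONLY"},
        }
    ],
    "BillingMode": "PAY_PER_REQUEST",
}
```

TTL, mentioned in the abstract, is enabled separately (via `update_time_to_live`) on an attribute holding an epoch-seconds expiry timestamp.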
An MPI-IO Cloud Cluster Bioinformatics Summer Project (BDT205) | AWS re:Inven... - Amazon Web Services
Researchers at Clemson University assigned a student summer intern to explore bioinformatics cloud solutions that leverage MPI, the OrangeFS parallel file system, AWS CloudFormation templates, and a Cluster Scheduler. The result was an AWS cluster that runs bioinformatics code optimized using MPI-IO. We give an overview of the process and show how easy it is to create clusters in AWS.
AWS re:Invent 2016: T2: From Startups to Enterprise, Performance for a Low Co... - Amazon Web Services
In this session, customers learn more about the T2 instance type and the performance and cost savings it can bring to startups, SMBs, and enterprises. Customers will share best practices and tips for how they use T2 instances across workloads including development and test, production web servers, continuous integration and more.
Customer Case Study: Land Registry as a Service in the Cloud - AWS PS Summit ... - Amazon Web Services
Landgate undertook to transform this platform into an as-a-service offering for other land jurisdictions. How was this done? What is the security posture? What is the availability? What was the business impact? And why did inspecting Land Title certificates not result in people accidentally being shown pictures of Beyoncé? Come find out.
Speaker: James Bromberger, Associate Director & National Cloud Lead - Ajilon.
Level: 300
In this session, we walk through the Amazon VPC network presentation and describe the problems we were trying to solve when we created it. Next, we walk through how these problems are traditionally solved, and why those solutions are not scalable, inexpensive, or secure enough for AWS. Finally, we provide an overview of the solution that we've implemented and discuss some of the unique mechanisms that we use to ensure customer isolation, get packets into and out of the network, and support new features like VPC endpoints.
Digital Media Workflows on AWS provides an overview of how media companies can leverage AWS services for their digital media workflows. It describes how AWS offers scalable storage, compute and media processing services that media companies can use across the entire value chain from content ingestion, management and processing to delivery and analytics. Specific services highlighted include Amazon S3, AWS Storage Gateway, Amazon EBS, Amazon Glacier, Amazon EC2, Amazon EMR, Amazon Elastic Transcoder, AWS Lambda and AWS Elemental MediaLive for various stages of the media workflow. Reference architectures are also presented showing how the different AWS services can be integrated to build automated and serverless media pipelines for applications like digital asset management, OTT delivery and live streaming.
SRV206 Getting Started with Amazon CloudFront Content Delivery Network - Amazon Web Services
Whether you are building an e-commerce site or a business application, security is a key consideration when architecting your website or application. In this session, you will learn more about some of the things CloudFront does behind the scenes to protect the delivery of your content, such as OCSP Stapling and Perfect Forward Secrecy. We will also share best practices on how you can use CloudFront to securely deliver content end-to-end, control who accesses your content, shield your origins from the Internet, and get an A+ on SSL Labs.
Amazon EC2 forms the backbone of the compute platform for hundreds of thousands of AWS customers, but understanding how to fully utilize EC2 and related services can be challenging.
In this webinar, we will take you on a journey, starting with the basics of key management and security groups and ending with an explanation of Auto Scaling and how you can use it to match capacity and costs to demand using dynamic policies.
Learning Objectives:
Understand how to use Amazon EC2 beyond a simple single instance use case
Learn about instance bootstrapping, AMIs and Elastic IPs
Discover how to create an Elastic Load Balancer and integrate it with Auto Scaling
Learn how to create Auto Scaling configurations and the tools you need to drive Auto Scaling policies
Find out how to create an Amazon RDS database and how to test failover between Availability Zones
Who Should Attend:
Existing Amazon EC2 users, Developers, Engineers and Solutions Architects
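The "match capacity and costs to demand" idea behind dynamic Auto Scaling policies can be illustrated with the arithmetic of a target-tracking-style policy: scale the group proportionally so average utilization moves toward a target, clamped to the group's bounds. This is a simplified, hypothetical helper, not an AWS API; real policies also apply cooldowns and instance warm-up.

```python
import math

def desired_capacity(current_capacity: int, cpu_utilization: float,
                     target: float = 50.0, min_size: int = 2,
                     max_size: int = 10) -> int:
    """Simplified target-tracking arithmetic: if average CPU is 1.5x the
    target, ask for 1.5x the instances, clamped to the group's min/max."""
    if cpu_utilization <= 0:
        return min_size
    raw = current_capacity * (cpu_utilization / target)
    return max(min_size, min(max_size, math.ceil(raw)))
```

For example, a group of 4 instances averaging 75% CPU against a 50% target scales to 6; the same group averaging 25% shrinks toward the 2-instance floor.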
Utah Code Camp is a computer technology conference hosted annually by Utah Geek Events in Salt Lake City, UT. This presentation is an introduction to cloud computing and the Amazon AWS Cloud platform.
Explore Amazon DynamoDB capabilities and benefits in detail and learn how to get the most out of your DynamoDB database. We go over best practices for schema design with DynamoDB across multiple use cases, including gaming, AdTech, IoT, and others. We explore designing efficient indexes, scanning, and querying, and go into detail on a number of recently released features, including JSON document support, DynamoDB Streams, and more. We also provide lessons learned from operating DynamoDB at scale, including provisioning DynamoDB for IoT.
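DynamoDB Streams, mentioned above, delivers an ordered record of item-level changes; a Lambda consumer receives records shaped like the one below, with `eventName` and the typed keys under `dynamodb`. The helper and the IoT-flavored key name are illustrative assumptions, but the record fields follow the Streams record format.

```python
def summarize_stream_record(record: dict) -> str:
    """Summarize one DynamoDB Streams record as 'EVENT(key=value)'.
    eventName is INSERT, MODIFY, or REMOVE; Keys holds typed values
    like {"S": "..."} for strings."""
    event = record["eventName"]
    keys = record["dynamodb"]["Keys"]
    key_repr = ",".join(
        f"{k}={list(v.values())[0]}" for k, v in sorted(keys.items())
    )
    return f"{event}({key_repr})"

# Minimal sample record in the Streams shape (hypothetical IoT key).
sample = {
    "eventName": "MODIFY",
    "dynamodb": {"Keys": {"device_id": {"S": "sensor-7"}}},
}
```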
Creating Your Virtual Data Center: VPC Fundamentals and Connectivity Options - Amazon Web Services
In this session, we will walk through the fundamentals of Amazon Virtual Private Cloud (VPC). First, we will cover build-out and design fundamentals for VPC, including picking your IP space, subnetting, routing, security, NAT, and much more. We will then transition into different approaches and use cases for optionally connecting your VPC to your physical data center with VPN or AWS Direct Connect. This mid-level architecture discussion is aimed at architects, network administrators, and technology decision-makers interested in understanding the building blocks AWS makes available with VPC and how you can connect this with your offices and current data center footprint.
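The "picking your IP space, subnetting" step above can be sketched with Python's standard `ipaddress` module: carve a /16 VPC CIDR into equal /20 subnets and assign a public and a private one per Availability Zone. The layout and names are a hypothetical convention, not an AWS requirement.

```python
import ipaddress

# Carve a /16 VPC CIDR into 16 equal /20 blocks (4096 addresses each).
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=20))

# One public and one private subnet per AZ; keep the private ranges in
# the upper half of the address space so the halves never collide.
plan = {}
for i, az in enumerate(["a", "b", "c"]):
    plan[f"public-{az}"] = str(subnets[i])
    plan[f"private-{az}"] = str(subnets[i + 8])
```

Each resulting CIDR would then become one subnet in the VPC, with public route tables pointing at an internet gateway and private ones at a NAT device.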
AWS re:Invent 2016: Lessons Learned from a Year of Using Spot Fleet (CMP205) - Amazon Web Services
Over the last year, Yelp has transitioned its scalable and reliable parallel task execution system, Seagull, from On-Demand and Reserved Instances entirely to Spot Fleet. Seagull runs over 28 million tests per day, launches more than 2.5 million Docker containers per day, and uses over 10,000 vCPUs in Spot Fleet at peak capacity. To deal with rising infrastructure costs for Seagull, we have extended our in-house Auto Scaling Engine called FleetMiser to scale the Spot Fleet in response to demand. FleetMiser has reduced Seagull’s cluster costs by 60% in the past year and saved Yelp thousands of dollars every month.
In this session, we describe how Yelp uses Spot Fleet for Seagull and lessons we’ve learned over the past year, along with our recommendations on how to use it reliably (pro tip: don’t get outbid for your whole Spot Fleet). We conclude by looking at our future plans for extending Spot Fleet usage at Yelp.
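A Spot Fleet diversifies across instance pools using weighted capacity, and its lowestPrice allocation strategy picks the pool that is cheapest per unit of capacity rather than per instance. The sketch below shows that comparison in isolation; the prices and vCPU weights are hypothetical examples, not live market data.

```python
def cheapest_per_unit(offers: dict, weights: dict) -> str:
    """Pick the instance pool with the lowest price per unit of weighted
    capacity - the comparison behind a lowestPrice allocation strategy."""
    return min(offers, key=lambda t: offers[t] / weights[t])

# Hypothetical spot prices ($/hr) and vCPU-based capacity weights.
spot_prices = {"c4.xlarge": 0.06, "c4.2xlarge": 0.10, "m4.xlarge": 0.07}
vcpu_weights = {"c4.xlarge": 4, "c4.2xlarge": 8, "m4.xlarge": 4}
```

Here the larger instance wins despite its higher sticker price, because it delivers twice the capacity for less than twice the cost.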
AWS 201 - A Walk through the AWS Cloud: Introduction to Amazon CloudFront - Amazon Web Services
How can you accelerate your end users' online experience using Amazon CloudFront?
Today, end users expect to be able to view media content anytime, anywhere, and on any device. Amazon CloudFront is a web service for content delivery used to distribute content to end users around the globe with low latency and high data transfer speeds in a cost-effective manner. Amazon CloudFront can be used to deliver your entire website, including dynamic, static, streaming, and interactive content, using a global network of edge locations. Requests for your content are automatically routed to the nearest edge location, so content is delivered with the best possible performance.
Join this webinar to learn about Amazon CloudFront's unique Content Delivery Network (CDN), how it works, and the benefits it provides. We will walk you through common real-life challenges our customers face and how AWS builds a solution that combines performance, pricing, and a really simple setup.
Attend this session to find out about:
• Common business challenges and how Amazon CloudFront can resolve them
• Workloads that can benefit from Amazon CloudFront such as software downloads (large files, gaming), video streaming (live and VOD) and whole site delivery (web acceleration)
• Enhancing brand value, monetizing content, and implementing security options, e.g., DRM and DDoS protection
• Other AWS services (transcode, storage, compute, DNS) to architect with Amazon CloudFront to effectively drive costs down and simplify workflows
• Leveraging the AWS Partner Network to architect additional elements to your workflow like DRM and Reporting
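Setting up a distribution like the ones described above comes down to a `DistributionConfig` passed to CloudFront's `create_distribution` API (e.g. via boto3). The sketch below is deliberately abridged, a real config requires additional fields such as an origin config and cache forwarding settings, and the origin domain shown is hypothetical.

```python
# Abridged core of a CloudFront DistributionConfig (illustrative only).
distribution_config = {
    "CallerReference": "my-site-2015-10",  # unique idempotency token
    "Comment": "whole-site delivery",
    "Enabled": True,
    "Origins": {"Quantity": 1, "Items": [{
        "Id": "s3-origin",
        # Hypothetical S3 bucket serving as the origin.
        "DomainName": "example-bucket.s3.amazonaws.com",
    }]},
    "DefaultCacheBehavior": {
        "TargetOriginId": "s3-origin",
        # Force TLS at the edge: HTTP viewers get redirected to HTTPS.
        "ViewerProtocolPolicy": "redirect-to-https",
    },
}
```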
AWS re:Invent 2016: Introduction to Amazon CloudFront (CTD205) - Amazon Web Services
End users expect to be able to view static, dynamic, and streaming content anytime, anywhere, and on any device. Amazon CloudFront is a web service that accelerates delivery of your websites, APIs, video content, or other web assets to end users around the globe with low latency, high data transfer speeds, and no commitments. In this session, learn what a content delivery network (CDN) such as Amazon CloudFront is and how it works, the benefits it provides, common challenges and needs, performance, recently released features like HTTP/2 and IPv6 support, pricing, and examples of how customers are using CloudFront.
AWS 201 - A Walk through the AWS Cloud: Delivering Static and Dynamic Content... - Amazon Web Services
This document provides an overview and agenda for a presentation on delivering static and dynamic content using Amazon CloudFront. The presentation covers why a content delivery network (CDN) is needed, an introduction to Amazon CloudFront, how to architect with CloudFront, features of CloudFront, a demo, benefits, and case studies. It discusses how CloudFront provides a global network to improve performance and reduce costs of content delivery.
Amazon EC2 forms the backbone compute platform for hundreds of thousands of AWS customers, but how do you go beyond starting an instance and manually configuring it? This presentation will take you on a journey starting with the basics of key management and security groups and ending with an explanation of Auto Scaling and how you can use it to match capacity and costs to demand using dynamic policies.
Access a recorded version of the webinar based on this presentation on YouTube here: http://youtu.be/jLVPqoV4YjU
You can find the rest of the Masterclass webinar series for 2015 here: http://aws.amazon.com/campaigns/emea/masterclass/
If you are interested in learning how to apply a variety of AWS services to specific challenges, please check out the Journey Through the Cloud series, which you can find here: http://aws.amazon.com/campaigns/emea/journey/
In this session, we will walk through the Amazon VPC network presentation and describe the problems we were trying to solve when we created it. Next, we will discuss how these problems are traditionally solved, and why those solutions are not scalable, inexpensive, or secure enough for AWS. Finally, we will provide an overview of the solution that we've implemented and discuss some of the unique mechanisms that we use to ensure customer isolation, get packets into and out of the network, and support new features like VPC endpoints.
AWS October Webinar Series - Using Spot Instances to Save up to 90% off Your ... - Amazon Web Services
Amazon EC2 allows you to bid for and run spare EC2 capacity, known as Spot Instances, in a dynamically priced market. On average, customers save 80% to 90% compared to On-Demand prices by using Spot Instances. Achieving these savings has historically required time and effort to find the best deals while managing compute capacity as supply and demand fluctuate.
In this webinar, we dive into best practices and new features that will help you realize immediate cost savings, maximize compute capacity within your budget, and maintain application availability and performance with less up-front or ongoing development effort. Attendees leave with practical knowledge of Spot bidding strategies, market trends, instance selection and benchmarking, and fault-tolerant architecture with examples taken from common Spot use cases such as web services, big data/analytics, media processing, and continuous integration workloads.
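The headline savings figure is simple arithmetic over the two prices; the helper below makes it concrete. The prices used in the example are hypothetical, not quoted market prices.

```python
def spot_savings_pct(on_demand: float, spot: float) -> float:
    """Percentage saved by running at the spot price instead of the
    On-Demand price, rounded to one decimal place."""
    return round(100.0 * (on_demand - spot) / on_demand, 1)
```

For instance, a hypothetical $0.50/hr On-Demand instance running at a $0.07/hr spot price saves 86%, squarely in the 80-90% range the webinar cites.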
AWS re:Invent 2016: Deep Learning, 3D Content Rendering, and Massively Parall... - Amazon Web Services
Accelerated computing is on the rise because of massively parallel, compute-intensive workloads such as deep learning, 3D content rendering, financial computing, and engineering simulations. In this session, we provide an overview of our accelerated computing instances, including how to choose instances based on your application needs, best practices and tips to optimize performance, and specific examples of accelerated computing in real-world applications.
AWS re:Invent 2016: How Thermo Fisher Is Reducing Mass Spectrometry Experimen... - Amazon Web Services
Mass spectrometry is the gold standard for determining chemical compositions, with spectrometers often measuring the mass of a compound down to a single electron. This level of granularity produces an enormous amount of hierarchical data that doesn't fit well into rows and columns. In this talk, learn how Thermo Fisher is using MongoDB Atlas on AWS to allow their users to get near real-time insights from mass spectrometry experiments—a process that used to take days. We also share how the underlying database service used by Thermo Fisher was built on AWS.
Cloud Front & Serving Media From the Edge - AWS India Summit 2012 - Amazon Web Services
The document discusses using AWS CloudFront to serve media content from the edge globally. Some key benefits of CloudFront include low latency, high bandwidth, redundancy, scalability, and cost-effectiveness. CloudFront provides a content delivery network that can deliver both static and dynamic content using its global edge locations. It supports various media delivery protocols and formats. Customers have reported strong performance, ease of use, flexibility, and significant cost savings from using CloudFront.
AWS re:Invent 2016: How to Launch a 100K-User Corporate Back Office with Micr... - Amazon Web Services
Learn how to build a scalable, compliance-ready, and automated deployment of the Microsoft “backoffice” servers for 100K users running on AWS. In this session, we show a reference architecture deployment of Exchange, SharePoint, Skype for Business, SQL Server and Active Directory in a single VPC. We discuss the following: (1) how the solution is automated for 100K users, (2) how the solution is enabled for compliance (e.g., FedRAMP, HIPAA, PCI), and (3) how the solution is built from modular 10K user blocks. Attendees should have knowledge of AWS CloudFormation, PowerShell, instance bootstrapping, VPCs, and Amazon Route 53, as well as the relevant Microsoft technologies.
AWS re:Invent 2016: announcements, technical demos and feedback - Emmanuel Quentin
Slides from our talk with Mathieu Mailhos about re:Invent 2016:
- Announcements
- Technical demonstration of Athena, monitoring via Lambda and Step Functions
- Feedback
Scripts available here: https://gist.github.com/manuquentin/adee523b60a4723e9e4819ea69713ab6
AWS re:Invent 2016: How Citus Enables Scalable PostgreSQL on AWS (DAT207) - Amazon Web Services
Join the principal engineer of Citus Cloud for a brief overview of Citus, best use cases for it, and a drill down into how it's run and managed as a hosted service on top of AWS. The orchestration of Citus is homegrown, but comes from years of experience running millions of PostgreSQL databases on top of AWS. Even if you aren't looking to leverage Citus to help you scale out, in this session you'll gain insights applicable to running and managing your stateful services on top of AWS. Citus is a PostgreSQL extension that transforms the database into a distributed, horizontally scalable database. Companies like Cloudflare use Citus to process 40 TB per day. With Citus MX, applications can take advantage of every node in the cluster for writes, yielding near-linear write scaling. Citus MX provides up to 500,000 durable writes per second.
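The core idea of a distributed PostgreSQL like Citus is hash routing: each row's distribution-column value is hashed to pick one of N shards, so rows with the same key co-locate on the same node. The sketch below shows only that idea; Citus itself uses PostgreSQL's internal hash functions, not MD5, and the shard count and key are hypothetical.

```python
import hashlib

def shard_for(distribution_key: str, shard_count: int = 32) -> int:
    """Map a distribution-column value to one of N shards. MD5 is used
    here only because it is deterministic across runs; this illustrates
    the routing idea, not Citus's actual hash."""
    digest = hashlib.md5(distribution_key.encode()).hexdigest()
    return int(digest, 16) % shard_count
```

Because the mapping is deterministic, every query for a given tenant or key lands on the same shard, which is what makes single-key queries and co-located joins fast.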
The document provides an overview of using Amazon Web Services (AWS) for high-performance computing (HPC) clusters. It discusses how AWS enables scientists to build HPC clusters on demand that can scale up and down based on workload needs. Specific solutions and services mentioned include Alces Flight for launching ready-to-compute HPC clusters on AWS in minutes, the AWS Spot Market for accessing spare computing capacity at low costs, and examples of using AWS for scientific workloads like satellite image analysis and computational fluid design simulations.
The document discusses content delivery networks (CDNs) and Amazon CloudFront. A CDN improves performance and reliability by caching content across globally distributed edge servers close to users. CloudFront is AWS's CDN that provides low latency, high bandwidth, redundancy, scalability and cost-effectiveness. It supports dynamic and static content delivery via HTTP, RTMP and more. Customers can use CloudFront to improve website performance and user experience.
Batch Processing with Containers on AWS - June 2017 AWS Online Tech Talks - Amazon Web Services
Learning Objectives:
- Learn about the options for running batch workloads on AWS
- Learn how to architect a containerized batch processing service on Amazon ECS
- Learn best practices for optimizing and scaling complex batch workload requirements
Batch processing is useful when you need to periodically analyze large amounts of data, but configuring and scaling a cluster of virtual machines to process complex batch jobs can be difficult. Containers provide a great solution for running batch jobs by providing easily managed, scalable, and portable code environments.
In this tech talk, we’ll show you how to use containers on AWS for batch processing jobs that can scale quickly and cost-effectively. We’ll discuss AWS Batch, our fully managed batch-processing service, and show you how to architect your own batch processing service using the Amazon EC2 Container Service. We’ll also discuss best practices for ensuring efficient and opportunistic scheduling, fine-grained monitoring, compute resource auto-scaling, and security for your batch jobs.
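Architecting a containerized batch service on Amazon ECS starts from a task definition describing the worker container and its resource limits, as accepted by `ecs.register_task_definition` (e.g. via boto3). The fragment below is abridged and the image URI, family name, and command are hypothetical.

```python
# Abridged sketch of an ECS task definition for a batch worker container.
task_definition = {
    "family": "nightly-batch",  # hypothetical task family name
    "containerDefinitions": [{
        "name": "worker",
        # Hypothetical ECR image holding the batch job code.
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/batch-worker:latest",
        "cpu": 1024,      # CPU units; 1024 = 1 vCPU
        "memory": 2048,   # MiB; hard memory limit for the container
        "essential": True,
        "command": ["python", "process_chunk.py"],
    }],
}
```

The scheduler then launches as many copies of this task as there are queued jobs, which is the fine-grained scaling property that makes containers a good fit for batch work.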
This document provides an overview of Microsoft's Azure cloud services platform. It discusses key Azure capabilities and services including compute, storage, SQL Azure database, service bus, and access control. Azure provides scalable infrastructure and platform services that allow developers to build and host applications in the cloud using familiar .NET tools. The document also demonstrates a sample grid computing application built on Azure and highlights reasons to consider cloud computing such as reducing costs, improving scalability, and reducing IT overhead.
Leveraging Amazon Web Services for Scalable Media Distribution and Analytics ... - Amazon Web Services
This document discusses how Amazon Web Services (AWS) can be leveraged for scalable media distribution and analytics. It provides an overview of AWS' global infrastructure including regions, availability zones, and edge locations around the world. It then discusses how AWS services like EC2, ELB, auto-scaling, Route53, CloudFront, ElastiCache, DynamoDB, and EMR can be used to build scalable architectures that satisfy six rules: service all web requests, serve requests as fast as possible, handle requests at any scale, simplify architecture with services, automate operational management, and leverage unique cloud properties like instance types and analytics with EMR.
Amazon EC2 forms the backbone of the compute platform for hundreds of thousands of AWS customers, but understanding how to fully utilize EC2 and related services can be challenging.
In this webinar, we will take you on a journey, starting with the basics of key management and security groups and ending with an explanation of Auto Scaling and how you can use it to match capacity and costs to demand using dynamic policies.
Learning Objectives:
Understand how to use Amazon EC2 beyond a simple single instance use case
Learn about instance bootstrapping, AMIs and Elastic IPs
Discover how to create an Elastic Load Balancer and integrate it with Auto Scaling
Learn how to create Auto Scaling configurations and the tools you need to drive Auto Scaling policies
Find out how to create an Amazon RDS database and how to test failover between Availability Zones
Who Should Attend:
Existing Amazon EC2 users, Developers, Engineers and Solutions Architects
Utah Code Camp is a computer technology conference hosted annually by Utah Geek Events in Salt Lake City, UT. This presentation is an introduction to cloud computing and the Amazon AWS Cloud platform.
Explore Amazon DynamoDB capabilities and benefits in detail and learn how to get the most out of your DynamoDB database. We go over best practices for schema design with DynamoDB across multiple use cases, including gaming, AdTech, IoT, and others. We explore designing efficient indexes, scanning, and querying, and go into detail on a number of recently released features, including JSON document support, DynamoDB Streams, and more. We also provide lessons learned from operating DynamoDB at scale, including provisioning DynamoDB for IoT.
Creating Your Virtual Data Center: VPC Fundamentals and Connectivity OptionsAmazon Web Services
In this session, we will walk through the fundamentals of Amazon Virtual Private Cloud (VPC). First, we will cover build-out and design fundamentals for VPC, including picking your IP space, subnetting, routing, security, NAT, and much more. We will then transition into different approaches and use cases for optionally connecting your VPC to your physical data center with VPN or AWS Direct Connect. This mid-level architecture discussion is aimed at architects, network administrators, and technology decision-makers interested in understanding the building blocks AWS makes available with VPC and how you can connect this with your offices and current data center footprint.
AWS re:Invent 2016: Lessons Learned from a Year of Using Spot Fleet (CMP205)Amazon Web Services
Over the last year, Yelp has transitioned its scalable and reliable parallel task execution system, Seagull, from On-Demand and Reserved Instances entirely to Spot Fleet. Seagull runs over 28 million tests per day, launches more than 2.5 million Docker containers per day, and uses over 10,000 vCPUs in Spot Fleet at peak capacity. To deal with rising infrastructure costs for Seagull, we have extended our in-house Auto Scaling Engine called FleetMiser to scale the Spot Fleet in response to demand. FleetMiser has reduced Seagull’s cluster costs by 60% in the past year and saved Yelp thousands of dollars every month.
In this session, we describe how Yelp uses Spot Fleet for Seagull and lessons we’ve learned over the past year, along with our recommendations on how to use it reliably (pro tip: don’t get outbid for your whole Spot Fleet). We conclude by looking at our future plans for extending Spot Fleet usage at Yelp.
AWS 201 - A Walk through the AWS Cloud: Introduction to Amazon CloudFrontAmazon Web Services
How to accelerate your online end user experience using Amazon CloudFront?
Today end users expect to be able to view media content anytime, anywhere and on any device. Amazon CloudFront is a web service for content delivery used to distribute content to end users around the globe with low latency, high data transfer speeds in a cost effective manner. Amazon CloudFront can be used to deliver your entire website, including dynamic, static, streaming, and interactive content using a global network of edge locations. Requests for your content are automatically routed to the nearest edge location, so content is delivered with the best possible performance.
Join this webinar to learn about Amazon CloudFront’s unique Content Delivery Network (CDN), how it works and the benefits it provides. We will walk you through common real life challenges our customers face and how AWS builds a solution that combines performance, pricing and a really simple set up.
Attend this session to find out about:
• Common business challenges and how Amazon CloudFront can resolve them
• Workloads that can benefit from Amazon CloudFront such as software downloads (large files, gaming), video streaming (live and VOD) and whole site delivery (web acceleration)
• Enhancing brand value, monetizing content and implementing security options e.g. DRM and DDOS
• Other AWS services (transcode, storage, compute, DNS) to architect with Amazon CloudFront to effectively drive costs down and simplify workflows
• Leveraging the AWS Partner Network to architect additional elements to your workflow like DRM and Reporting
AWS re:Invent 2016: Introduction to Amazon CloudFront (CTD205)Amazon Web Services
End users expect to be able to view static, dynamic, and streaming content anytime, anywhere, and on any device. Amazon CloudFront is a web service that accelerates delivery of your websites, APIs, video content, or other web assets to end users around the globe with low latency, high data transfer speeds, and no commitments. In this session, learn what a content delivery network (CDN) such as Amazon CloudFront is and how it works, the benefits it provides, common challenges and needs, performance, recently released features like HTTP/2 and IPV6 support, pricing, and examples of how customers are using CloudFront.
AWS 201 - A Walk through the AWS Cloud: Delivering Static and Dynamic Content...Amazon Web Services
This document provides an overview and agenda for a presentation on delivering static and dynamic content using Amazon CloudFront. The presentation covers why a content delivery network (CDN) is needed, an introduction to Amazon CloudFront, how to architect with CloudFront, features of CloudFront, a demo, benefits, and case studies. It discusses how CloudFront provides a global network to improve performance and reduce costs of content delivery.
Amazon EC2 forms the backbone compute platform for hundreds of thousands of AWS customers, but how do you go beyond starting an instance and manually configuring it? This presentation will take you on a journey starting with the basics of key management and security groups and ending with an explanation of Auto Scaling and how you can use it to match capacity and costs to demand using dynamic policies.
Access a recorded version of the webinar based on this presentation on YouTube here: http://youtu.be/jLVPqoV4YjU
You can find the rest of the Masterclass webinar series for 2015 here: http://aws.amazon.com/campaigns/emea/masterclass/
If you are interested in learning about how you apply variety of different AWS services to specific challenges, please check out the Journey Through the Cloud series, which you can find here: http://aws.amazon.com/campaigns/emea/journey/
In this session, we will walk through the Amazon VPC network presentation and describe the problems we were trying to solve when we created it. Next, we will discuss how these problems are traditionally solved, and why those solutions are not scalable, inexpensive, or secure enough for AWS. Finally, we will provide an overview of the solution that we've implemented and discuss some of the unique mechanisms that we use to ensure customer isolation, get packets into and out of the network, and support new features like VPC endpoints.
AWS October Webinar Series - Using Spot Instances to Save up to 90% off Your ...Amazon Web Services
Amazon EC2 allows you to bid for and run spare EC2 capacity, known as Spot instances, in a dynamically priced market. On average, customers save 80% to 90% compared to On Demand prices by using Spot instances. Achieving these savings has historically required time and effort to find the best deals while managing compute capacity as supply and demand fluctuate.
In this webinar, we dive into best practices and new features that will help you realize immediate cost savings, maximize compute capacity within your budget, and maintain application availability and performance with less up-front or ongoing development effort. Attendees leave with practical knowledge of Spot bidding strategies, market trends, instance selection and benchmarking, and fault-tolerant architecture with examples taken from common Spot use cases such as web services, big data/analytics, media processing, and continuous integration workloads.
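The 80–90% savings figure quoted above is simple arithmetic on the hourly price gap; a minimal sketch with hypothetical prices (real Spot prices vary by instance type, Availability Zone, and time):

```python
def monthly_cost(hourly_price, hours=730):
    """Approximate monthly cost at full utilization (~730 hours/month)."""
    return hourly_price * hours

def spot_savings(on_demand_hourly, spot_hourly):
    """Percentage saved by running on Spot instead of On-Demand."""
    return 100 * (1 - spot_hourly / on_demand_hourly)

# Hypothetical prices: $0.40/hr On-Demand vs $0.06/hr Spot.
od, spot = 0.40, 0.06
print(f"On-Demand: ${monthly_cost(od):.2f}/month")    # $292.00/month
print(f"Spot:      ${monthly_cost(spot):.2f}/month")  # $43.80/month
print(f"Savings:   {spot_savings(od, spot):.0f}%")    # 85%
```

The catch, as the webinar discusses, is that Spot capacity can be reclaimed, so the savings only materialize with fault-tolerant architecture.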
AWS re:Invent 2016: Deep Learning, 3D Content Rendering, and Massively Parall...Amazon Web Services
Accelerated computing is on the rise because of massively parallel, compute-intensive workloads such as deep learning, 3D content rendering, financial computing, and engineering simulations. In this session, we provide an overview of our accelerated computing instances, including how to choose instances based on your application needs, best practices and tips to optimize performance, and specific examples of accelerated computing in real-world applications.
AWS re:Invent 2016: How Thermo Fisher Is Reducing Mass Spectrometry Experimen...Amazon Web Services
Mass spectrometry is the gold standard for determining chemical compositions, with spectrometers often measuring the mass of a compound down to a single electron. This level of granularity produces an enormous amount of hierarchical data that doesn't fit well into rows and columns. In this talk, learn how Thermo Fisher is using MongoDB Atlas on AWS to allow their users to get near real-time insights from mass spectrometry experiments—a process that used to take days. We also share how the underlying database service used by Thermo Fisher was built on AWS.
Cloud Front & Serving Media From the Edge - AWS India Summit 2012Amazon Web Services
The document discusses using AWS CloudFront to serve media content from the edge globally. Some key benefits of CloudFront include low latency, high bandwidth, redundancy, scalability, and cost-effectiveness. CloudFront provides a content delivery network that can deliver both static and dynamic content using its global edge locations. It supports various media delivery protocols and formats. Customers have reported strong performance, ease of use, flexibility, and significant cost savings from using CloudFront.
AWS re:Invent 2016: How to Launch a 100K-User Corporate Back Office with Micr...Amazon Web Services
Learn how to build a scalable, compliance-ready, and automated deployment of the Microsoft “backoffice” servers for 100K users running on AWS. In this session, we show a reference architecture deployment of Exchange, SharePoint, Skype for Business, SQL Server and Active Directory in a single VPC. We discuss the following: (1) how the solution is automated for 100K users, (2) how the solution is enabled for compliance (e.g., FedRAMP, HIPAA, PCI), and (3) how the solution is built from modular 10K user blocks. Attendees should have knowledge of AWS CloudFormation, PowerShell, instance bootstrapping, VPCs, and Amazon Route 53, as well as the relevant Microsoft technologies.
AWS re:Invent 2016 : announcement, technical demos and feedbacksEmmanuel Quentin
Slides from our talk with Mathieu Mailhos about re:Invent 2016:
- Announcements
- Technical demonstrations of Athena, and monitoring via Lambda and Step Functions
- Feedback
Scripts available here : https://gist.github.com/manuquentin/adee523b60a4723e9e4819ea69713ab6
AWS re:Invent 2016: How Citus Enables Scalable PostgreSQL on AWS (DAT207)Amazon Web Services
Join the principal engineer of Citus Cloud for a brief overview of Citus, best use cases for it, and a drill down into how it's run and managed as a hosted service on top of AWS. The orchestration of Citus is homegrown, but comes from years of experience of running millions of PostgreSQL databases on top of AWS. Even if you aren't looking to leverage Citus to help you scale out, in this session you'll gain insights applicable to running and managing your stateful services on top of AWS. Citus is a PostgreSQL extension that transforms the database into a distributed, horizontally scalable database. Companies like Cloudflare use Citus to process 40 TB per day. With Citus MX, applications can take advantage of every node in the cluster for writes, yielding near-linear write scaling. Citus MX provides up to 500,000 durable writes per second.
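The horizontal scaling described above rests on hash-distributing rows across shards by a distribution column. A minimal sketch of that routing idea — MD5-modulo here is a stand-in, as Citus actually uses PostgreSQL's internal hash functions mapped onto per-shard hash ranges:

```python
import hashlib

SHARD_COUNT = 32  # Citus creates 32 shards per distributed table by default

def shard_for(distribution_key: str, shard_count: int = SHARD_COUNT) -> int:
    """Route a row to a shard by hashing its distribution column."""
    digest = hashlib.md5(distribution_key.encode()).hexdigest()
    return int(digest, 16) % shard_count

# Rows with the same tenant id always land on the same shard,
# so single-tenant queries can be answered by a single node.
assert shard_for("tenant-42") == shard_for("tenant-42")
print(shard_for("tenant-42"), shard_for("tenant-7"))
```

Choosing a distribution column that co-locates related rows (a tenant or customer id, say) is what lets joins and writes scale across nodes.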
The document provides an overview of using Amazon Web Services (AWS) for high-performance computing (HPC) clusters. It discusses how AWS enables scientists to build HPC clusters on demand that can scale up and down based on workload needs. Specific solutions and services mentioned include Alces Flight for launching ready-to-compute HPC clusters on AWS in minutes, the AWS Spot Market for accessing spare computing capacity at low costs, and examples of using AWS for scientific workloads like satellite image analysis and computational fluid dynamics simulations.
The document discusses content delivery networks (CDNs) and Amazon CloudFront. A CDN improves performance and reliability by caching content across globally distributed edge servers close to users. CloudFront is AWS's CDN that provides low latency, high bandwidth, redundancy, scalability and cost-effectiveness. It supports dynamic and static content delivery via HTTP, RTMP and more. Customers can use CloudFront to improve website performance and user experience.
Batch Processing with Containers on AWS - June 2017 AWS Online Tech TalksAmazon Web Services
Learning Objectives:
- Learn about the options for running batch workloads on AWS
- Learn how to architect a containerized batch processing service on Amazon ECS
- Learn best practices for optimizing and scaling complex batch workload requirements
Batch processing is useful when you need to periodically analyze large amounts of data, but configuring and scaling a cluster of virtual machines to process complex batch jobs can be difficult. Containers provide a great solution for running batch jobs by providing easily managed, scalable, and portable code environments.
In this tech talk, we’ll show you how to use containers on AWS for batch processing jobs that can scale quickly and cost-effectively. We’ll discuss AWS Batch, our fully managed batch-processing service, and show you how to architect your own batch processing service using the Amazon EC2 Container Service. We’ll also discuss best practices for ensuring efficient and opportunistic scheduling, fine-grained monitoring, compute resource auto-scaling, and security for your batch jobs.
This document provides an overview of Microsoft's Azure cloud services platform. It discusses key Azure capabilities and services including compute, storage, SQL Azure database, service bus, and access control. Azure provides scalable infrastructure and platform services that allow developers to build and host applications in the cloud using familiar .NET tools. The document also demonstrates a sample grid computing application built on Azure and highlights reasons to consider cloud computing such as reducing costs, improving scalability, and reducing IT overhead.
Leveraging Amazon Web Services for Scalable Media Distribution and Analytics ...Amazon Web Services
This document discusses how Amazon Web Services (AWS) can be leveraged for scalable media distribution and analytics. It provides an overview of AWS' global infrastructure including regions, availability zones, and edge locations around the world. It then discusses how AWS services like EC2, ELB, auto-scaling, Route53, CloudFront, ElastiCache, DynamoDB, and EMR can be used to build scalable architectures that satisfy six rules: service all web requests, serve requests as fast as possible, handle requests at any scale, simplify architecture with services, automate operational management, and leverage unique cloud properties like instance types and analytics with EMR.
The document summarizes the 2015 Amazon Web Services re:Invent conference. It highlights the growth in attendance from 9,000 to 19,000. It outlines new computing and database services announced as well as analytics, security, and management tools. Examples are given of how Netflix and a content management system benefited from migrating to AWS. Lessons learned included that not all features transfer directly and that a learning curve is involved. The document encourages hands-on learning with AWS free services and attending next year's conference.
Cloud computing infrastructure provides on-demand computing resources and platforms through utility computing models. The document defines public and private clouds and compares two major cloud platforms, Amazon EC2 and Google AppEngine. Both platforms provide scalable, reliable computing resources on demand but differ in their abstraction levels and programming models. While cloud computing offers benefits like reduced costs and maintenance, adoption challenges include availability, data security, and software licensing issues.
The Cloud as a Platform - Cloud Connections 2011 Keynote - Jinesh VariaAmazon Web Services
The Cloud as a Platform Keynote Presentation delivered at Cloud Connections Conference (DevConnections) April 19, 2011 by Jinesh Varia, Technology Evangelist, Amazon
Amazon Web Services for the .NET DeveloperRob Gillen
This document provides an overview and introduction to Amazon Web Services (AWS) for .NET developers. It discusses various AWS computing and storage services including Elastic Compute Cloud (EC2), Simple Storage Service (S3), Simple Queue Service (SQS), SimpleDB, and Elastic Block Storage (EBS). The document outlines key concepts for these services and provides a walkthrough of setting up a Windows machine on EC2 and interacting with AWS services through code examples. It also covers tips and tools for using AWS and addresses questions from attendees.
The document discusses the benefits of cloud computing using Amazon Web Services (AWS) for government agencies. It outlines how AWS provides elastic, pay-per-use infrastructure that avoids large capital expenditures and allows agencies to scale up or down as needed. The document also provides examples of how government agencies like NASA, USDA, and the US Treasury have used AWS for applications hosting, geo-location services, and mission data processing. It discusses AWS's security features, certifications, and shared responsibility model.
Scalability strategies for cloud based system architectureSangJin Kang
- Scalability & Availability for the Global Markets
- Globally scaled Scalability, Availability and Security
- Architecture for 100, 1K, 100K, 500K, 1M and 10M global users
- Auto-Scaling
- Understand Cloud Services
- Cloud Demo (AWS, GCP, Azure and Cloudflare)
- Wrap-Up
The document provides an overview of the Azure platform and its components. It discusses how Azure is designed for massive scale and how its services like compute, storage, SQL Azure and AppFabric help applications scale. It provides examples of how these services can be used and highlights key aspects like Azure's pay-as-you-go model, global reach, and tools for development, deployment and management.
The document discusses Microsoft Azure, a cloud computing platform. It provides an overview of key Azure concepts like scalability, flexible pricing models, and global datacenter infrastructure. It also describes Azure services like compute, storage, SQL databases, and AppFabric that help developers build and scale applications in the cloud. Commercial pricing information is included to show how Azure offers flexible consumption-based pricing based on actual usage.
This document discusses how to reduce spending on AWS through various techniques:
1. Paying for cloud resources only when they are used through the pay-as-you-go model avoids upfront costs and allows turning off unused capacity.
2. Using reserved instances when capacity needs are predictable provides significant discounts compared to on-demand pricing.
3. Architecting applications in a "cost aware" manner, such as leveraging caching, auto-scaling, managed services, and right-sizing instances can optimize costs.
4. Taking advantage of AWS's economies of scale through consolidated billing and free services helps lower overall spend. Planning workload usage of spot instances can achieve up to 85% savings.
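The reserved-instance trade-off in point 2 comes down to a break-even calculation: the upfront payment is recovered by the hourly discount after some number of months of steady usage. A sketch with hypothetical prices (real RI pricing varies by term, payment option, instance type, and region):

```python
def break_even_months(on_demand_hourly, upfront, reserved_hourly,
                      hours_per_month=730):
    """Months of steady 24/7 usage after which a Reserved Instance
    becomes cheaper than paying On-Demand rates."""
    monthly_saving = (on_demand_hourly - reserved_hourly) * hours_per_month
    return upfront / monthly_saving

# $0.10/hr On-Demand vs a hypothetical $300-upfront RI at $0.04/hr.
months = break_even_months(0.10, upfront=300, reserved_hourly=0.04)
print(f"Break-even after {months:.1f} months")  # ~6.8 months
```

If the workload will run well past the break-even point, the reservation wins; if usage is unpredictable, On-Demand or Spot is the safer default.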
Dr. Werner Vogels discusses the power of infrastructure as a service provided by Amazon Web Services (AWS). AWS provides on-demand access to computing resources, databases, storage, and other services on a pay-as-you-go basis. This allows customers to avoid upfront costs and scale resources up or down as needed. AWS sees billions of requests per day to services like Amazon S3 storage and continues innovating with new services and lower prices to benefit customers. When choosing a cloud provider, customers should consider requirements around security, performance, cost, flexibility, speed of innovation, and the partner's ability to deliver a reliable cloud platform.
This document provides an overview and introduction to Amazon Web Services (AWS). It discusses AWS's global infrastructure including regions and availability zones. It also describes various AWS computing, storage, database, deployment/administration and application services like EC2, S3, RDS, Elastic Beanstalk, and CloudFront. The document emphasizes AWS's scalability, elasticity, pay-as-you-go model and how customers can build fault tolerant and dynamic applications on AWS.
AWS has different pricing models to match your needs. One example is the different instance types available such as On-Demand, Reserved and Spot Instances. Customers can develop cost-saving strategies based upon their usage patterns, models and growth expectations. In some cases, a set of larger instances can be cheaper than multiple small instances. Learn how to size your AWS applications to maximize your use and minimize your spend. Companies such as Pinterest take very active roles to constantly reduce their spend; learn how they do it and develop your own cost-saving approaches.
Jeff Barr gave a presentation on Amazon Web Services and how developers can use them to build scalable web applications. He discussed Amazon's EC2, S3, and SQS services. EC2 provides virtual servers, S3 provides storage, and SQS provides message queuing. He gave examples of how companies like GigaVox Media have used these services to reduce costs and improve scalability compared to managing their own infrastructure. Barr concluded by taking questions and providing resources for learning more about AWS.
Talk on "Building Highly Scalable Web Applications" by Jeff Barr at IWMW 2007.
See http://www.ukoln.ac.uk/web-focus/events/workshops/webmaster-2007/talks/barr/
Solving enterprise challenges through scale out storage & big compute finalAvere Systems
Google Cloud Platform, Avere Systems, and Cycle Computing experts will share best practices for advancing solutions to big challenges faced by enterprises with growing compute and storage needs. In this “best practices” webinar, you’ll hear how these companies are working to improve results that drive businesses forward through scalability, performance, and ease of management.
The slides were from a webinar presented January 24, 2017. The audience learned:
- How enterprises are using Google Cloud Platform to gain compute and storage capacity on-demand
- Best practices for efficient use of cloud compute and storage resources
- Overcoming the need for file systems within a hybrid cloud environment
- Understand how to eliminate latency between cloud and data center architectures
- Learn how to best manage simulation, analytics, and big data workloads in dynamic environments
- Look at market dynamics drawing companies to new storage models over the next several years
Presenters laid out a foundation for building infrastructure that supports ongoing demand growth.
Advantages of Cassandra's masterless architectureDuy Lâm
This document summarizes the advantages and cautions of Cassandra's masterless architecture. The advantages include its peer-to-peer design, automatic data distribution without a single point of failure, and built-in replication. However, cautions are noted since Cassandra is a NoSQL database, which means it has different data modeling than SQL and lacks features like transactions and complex queries. Real-world examples are provided to illustrate Cassandra's data distribution and limited query capabilities.
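The masterless distribution described above can be sketched as a toy token ring: every node owns a range of the hash space, and any node can locate a key's replicas without asking a coordinator. This is a simplification — real Cassandra uses Murmur3 partitioning, virtual nodes, and pluggable replication strategies.

```python
import bisect
import hashlib

class HashRing:
    """Toy token ring in the spirit of Cassandra's peer-to-peer design."""

    def __init__(self, nodes, replication_factor=3):
        self.rf = replication_factor
        # Each node is placed on the ring at the position of its token.
        self.ring = sorted((self._token(n), n) for n in nodes)

    @staticmethod
    def _token(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def replicas(self, key):
        """Walk clockwise from the key's token, collecting RF distinct nodes."""
        tokens = [t for t, _ in self.ring]
        start = bisect.bisect_right(tokens, self._token(key)) % len(self.ring)
        return [self.ring[(start + i) % len(self.ring)][1]
                for i in range(min(self.rf, len(self.ring)))]

ring = HashRing(["node-a", "node-b", "node-c", "node-d"])
print(ring.replicas("user:1234"))  # three distinct replica nodes
```

Because any node can compute this mapping, there is no single point of failure — which is exactly the advantage (and the data-modeling discipline) the summary describes.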
KMS TechCon 2014 - Interesting in JavaScriptDuy Lâm
This document discusses JavaScript concepts like functions, closures, the this keyword, and module patterns. It provides code examples and references for functions, how functions can access outer variables, using closures, issues with global variables, and how the module pattern addresses these issues. The document is intended for a 2014 technical conference on interesting things in JavaScript by Duy Lam of KMS Technology.
Building Single-page Web Applications with AngularJS @ TechCamp Sai Gon 2014Duy Lâm
The document discusses building single-page web applications with AngularJS. It provides an overview of single-page applications and highlights some key features of AngularJS, including its MVC architecture, directives for extending HTML, and filters for formatting expressions. The presentation includes a demo of an AngularJS application and explores some example code to illustrate directives and filters.
Mocha is a JavaScript test framework for node.js that allows for testing of both synchronous and asynchronous code. It supports behavior-driven development and test-driven development interfaces, and allows for exclusive or inclusive tests. Mocha outputs test results to the terminal window using different reporter formats. The presentation provided an overview and demo of Mocha's capabilities.
The document discusses refactoring code. It covers bad smells in code, building tests, and composing methods. It then provides details on self-testing code and catalogs common refactoring techniques like extracting methods, replacing temporary variables, and simplifying conditional expressions.
This document provides an overview of character encoding and introduces Unicode. It defines character encoding as a system that represents characters with codes and specifies how to store characters as byte sequences. The document explains that Unicode maps every known character to a number and can be encoded in different formats, such as UTF-8, UTF-16, and UTF-32. It also notes that a lack of understanding of encodings can cause problems for applications.
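The mapping described above — one code point, several byte encodings — can be seen directly in Python, along with the kind of garbling that a misunderstanding of encodings produces:

```python
# One character, three Unicode encoding forms.
ch = "€"  # U+20AC EURO SIGN
print(f"code point: U+{ord(ch):04X}")
print("UTF-8: ", ch.encode("utf-8").hex(" "))     # e2 82 ac     (3 bytes)
print("UTF-16:", ch.encode("utf-16-be").hex(" ")) # 20 ac        (2 bytes)
print("UTF-32:", ch.encode("utf-32-be").hex(" ")) # 00 00 20 ac  (4 bytes)

# Decoding with the wrong encoding is where "mojibake" comes from:
print("€".encode("utf-8").decode("latin-1"))  # garbled output
```

The code point is fixed by Unicode; only the byte representation changes between UTF-8, UTF-16, and UTF-32 — which is why declaring the encoding of stored or transmitted text matters.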
The document discusses Microsoft Windows Azure platform and two of its components: AppFabric Access Control and AppFabric Service Bus. AppFabric Access Control uses a claim-based identity model and protocols like SAML, SWT, and WRAP for authentication. AppFabric Service Bus follows an enterprise service bus pattern and acts as a service bus to connect applications and services.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Infrastructure Challenges in Scaling RAG with Custom AI modelsZilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
The UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI test automation with Open AI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer's life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The climate impact and sustainability of software testing were discussed in the talk. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Sustainability can be added to the quality characteristics and then measured continuously. Test environments can be used less, at a smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and the licenses under the CCB and CCX model have been a hot topic for many in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new kind of licensing works and what benefit it brings you. Above all, you surely want to stay within your budget and save costs wherever possible. We understand that, and we would like to help!
We explain how to resolve common configuration problems that can lead to more users being counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some approaches that can lead to unnecessary expenses, for example when a person document is used instead of a mail-in for shared mailboxes. We show you such cases and their solutions. And of course we explain the new licensing model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It gives you the tools and the know-how to keep track of everything. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future as well.
These topics are covered:
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement immediately
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
2. What you can do after this
Public 2
[Diagram: your application uses the Amazon services covered here — EC2, Relational Database Service, Simple Queue Service, Notification, Load Balancing, AutoScaling, CloudWatch, …]
7. Usage of Regions and Availability Zones
[Diagram: Regions — placed closer to specific customers, to meet legal requirements, etc.]
8. Data Transfer Cost
[Diagram: two EC2 machines in an Availability Zone of EU (Ireland), and the US-West (Northern California) Region; data transfer within an Availability Zone is free, while transfer across Availability Zones or Regions is charged]
9. Access Interfaces
- AWS Management Console (*)
- Java-based command line (*)
- AWS SDKs (**)
- Web Service (REST or SOAP API) (**)
(*) : not all AWS services
(**) : all AWS services
12. Amazon EC2 Web Service
- Basic Storage
- Customized AMIs
13. What is EC2?
“Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable computing capacity that you use to build and host your software systems”
[Diagram: EC2 instances running in Availability Zones]
14. Amazon Machine Image & Instances
A sample AMI (template) — OS: Ubuntu, Platform: x86, Storage devices: null
Launch an instance from the AMI into one of the instance types:
- Large Instance Type: 7.5 GB memory, 4 EC2 Compute Units (*), 850 GB instance storage
- High-CPU Medium Instance Type: 1.7 GB memory, 5 EC2 Compute Units (*), 350 GB instance storage
(*) 1 EC2 Compute Unit = 1.0 - 1.2 GHz 2007 Opteron or 2007 Xeon processor
16. Elastic Block Store & Instance Store
[Diagram: an instance with its ephemeral instance store; EBS volumes attached to and detached from instances; a snapshot of an EBS volume stored in Amazon S3 (backup), and a new EBS volume created from that snapshot and attached to another instance — illustrating backup, detaching, and persistence across instances A and B]
17. Root device storage
[Diagram: an AMI backed by instance store vs. an AMI backed by EBS; instances A and B launched from each, with additional volumes attached beyond the root device storage]
18. Elastic IP Addresses
[Diagram: instances with private addresses 10.0.0.170, 10.0.0.180, 10.0.0.190 and public DNS names ec2-122-248-202-170...com, ec2-122-248-202-180...com; an Internet user reaches them through Elastic IP addresses 1.1.1.1 and 1.1.1.2]
19. Pricing Model
- Usage hour per EC2 instance
- Data transfer per EC2 instance (both “in” and “out”) in a different Availability Zone or Region
- Data transfer per Elastic IP address (both “in” and “out”)
Other impact factors: Region, OS, Instance Type, Long-term Contract, Bidding
21. Amazon EC2 Web Service
- Basic Storage
- Customized AMIs
22. Storage Types
                         Elastic Block Store volume   Instance store
Persistent               V                            -
Cross-instance access    V                            -
Back up                  V                            -
Size limits              up to 1 TiB per volume       up to 3.3 TiB per instance
Free                     -                            V
24. Block device mapping
AMI template — OS: Amazon Linux, Kernel: aki-13d5aa41, …
Block device mapping entries have the form <device name>=<value>:
/dev/sdb=none
/dev/sdc=ephemeral0 (ephemeral devices count up based on the instance type)
/dev/sdd=snap-a08912c9:15:true
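As a sketch of how these `<device name>=<value>` entries decompose, the following parser handles the three value shapes shown on the slide. It is an illustration only, not an official AWS parser, and the dictionary layout is invented for the example.

```python
# Hypothetical sketch of parsing EC2-style block device mapping entries.
# Three value shapes from the slide:
#   "none"                                   -> suppress the device at launch
#   "ephemeralN"                             -> an instance store volume
#   "snap-id[:size[:delete-on-termination]]" -> an EBS volume from a snapshot

def parse_mapping(entry):
    device, value = entry.split("=", 1)
    if value == "none":
        return {"device": device, "type": "suppressed"}
    if value.startswith("ephemeral"):
        return {"device": device, "type": "instance-store", "virtual": value}
    parts = value.split(":")
    ebs = {"device": device, "type": "ebs", "snapshot": parts[0] or None}
    if len(parts) > 1 and parts[1]:
        ebs["size_gib"] = int(parts[1])          # volume size in GiB
    if len(parts) > 2:
        ebs["delete_on_termination"] = parts[2] == "true"
    return ebs

mappings = [parse_mapping(e) for e in
            ["/dev/sdb=none", "/dev/sdc=ephemeral0",
             "/dev/sdd=snap-a08912c9:15:true"]]
```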
25. EBS Pricing Model
- Storage volume (in GB) per month
- I/O requests (in millions)
Other impact factors: Region
26. Amazon EC2 Web Service
- Basic Storage
- Customized AMIs
27. Needs for customized AMIs
- To meet your own needs
- To share
- To sell
28. AMI Creation Process
[Diagram: two creation paths, chosen by OS (Windows vs. Linux/UNIX) and root storage device — (1) an EBS-backed AMI or (2) an instance store-backed AMI; either path can start from an existing AMI or from a fresh installation]
29. (1) Creating EBS-Backed AMIs
Launch the instance → customize the instance → create an image from the customized instance:
PROMPT> ec2-create-image instance-id
OR create a snapshot of the root device and register an image from the snapshot:
PROMPT> ec2-register --root-device-name /dev/sda1 -b /dev/sda1=snap-12345678
30. (2) Creating a Windows instance store-backed AMI
Launch the instance → customize the instance → bundle the customized instance to S3 → register the bundled image:
PROMPT> ec2-bundle-instance instance-id
PROMPT> ec2-register <s3-bucket>/image.manifest.xml -n image_name
32. Amazon CloudWatch Web Service
[Diagram: the CloudWatch service collects metrics — CPU utilization, network traffic, I/O, latency — from EC2 instances, EBS volumes, RDS instances, and Load Balancers]
33. CloudWatch modes
- Basic: at 5-minute frequency, free of charge
- Detailed: for EC2 instances, at 1-minute frequency, $3.50+ per instance per month
35. CloudWatch Alarms
“Watches a single metric over a time period and invokes actions when the value of the metric exceeds a given threshold over a number of time periods”
[Diagram: an alarm moves between the OK, ALARM, and INSUFFICIENT_DATA states and can invoke an Amazon SNS topic or an Auto Scaling policy]
36. Pricing Model
- Per EC2 instance / month
- Per custom metric / month
- Per alarm / month
- API requests (per 1,000 Get, List, or Put requests)
Other impact factors: Region
40. Overview for Developers
- Sticky Sessions
- "X-Forwarded-Port", "X-Forwarded-For" and "X-Forwarded-Proto" support
- Known issue: HTTP 60-second timeout for requests
41. Pricing Model
- Usage hour per Load Balancer instance
- Data processed (in GB) per Load Balancer instance
Other impact factors: Region
50. Amazon Relational Database Web Service
- Resizable capacity for databases
- Amazon firewall
- Flexible backup methods
- Replication (MySQL only)
- Monitoring
51. Create a new DB Instance
Update the DB Security Group before connecting to the instance
53. Pricing Model
- Per DB Instance Class / month
- Storage (in GB) / month
- I/O (in millions) / month
- Backup storage / month
- Bandwidth (in GB, both “in” and “out”) / month
Other impact factors: Region, Multi-AZ Deployment, Reserved Instances
57. Pricing Model
- Storage (in GB) / month
- Requests (per 1,000) / month
- Bandwidth (in GB, both “in” and “out”) / month
Other impact factors: Region, Reduced Redundancy Storage option
60. Amazon Simple Queue Web Service
[Diagram: in your application, Machine A sends a message (text) to an Amazon Queue via an HTTP GET or POST request; Machine B retrieves messages (Message 1 … Message 4) via an HTTP GET or POST response]
61. Key Features
- 64 KB of text in a message
- Not first-in, first-out delivery of messages
- Locking of messages: Visibility Timeout
- Control access to a queue
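The visibility timeout above can be sketched as a toy in-memory queue — purely illustrative, not the SQS API. When a consumer receives a message it becomes invisible to other consumers until the timeout elapses or the message is deleted.

```python
# Toy model of the SQS visibility timeout -- names and layout are invented.
class ToyQueue:
    def __init__(self, visibility_timeout):
        self.visibility_timeout = visibility_timeout
        self.messages = {}          # id -> body
        self.invisible_until = {}   # id -> timestamp it becomes visible again

    def send(self, msg_id, body):
        self.messages[msg_id] = body

    def receive(self, now):
        for msg_id, body in self.messages.items():
            if self.invisible_until.get(msg_id, 0) <= now:
                # lock the message for the duration of the visibility timeout
                self.invisible_until[msg_id] = now + self.visibility_timeout
                return msg_id, body
        return None  # nothing visible right now

    def delete(self, msg_id):
        self.messages.pop(msg_id, None)

q = ToyQueue(visibility_timeout=30)
q.send("m1", "hello")
first = q.receive(now=0)    # message delivered, locked until t=30
second = q.receive(now=10)  # None: still invisible to other consumers
third = q.receive(now=31)   # redelivered: nobody deleted it in time
```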
62. Pricing Model
- $0.01 per 10,000 requests
- “out” bandwidth (in GB)
Other impact factors: Region
65. Amazon Simple Notification Web Service
[Diagram: Program A publishes a message (text) to a notification topic via an HTTP POST request; SNS fans the message out to subscribers — an email address (receiver@email.com), an HTTP endpoint (http://receiver.com/message), a Simple Queue Service queue, and an SMS subscriber (in the US, 800-201-7575)]
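The fan-out shown in the diagram can be modeled in a few lines — illustrative only, not the SNS API. Publishing to a topic delivers a copy of the message to every subscribed endpoint (email, HTTP, queue, SMS, …).

```python
# Toy fan-out model of an SNS topic; all names are invented for illustration.
class ToyTopic:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, deliver):
        # 'deliver' is any callable taking the message text
        self.subscribers.append(deliver)

    def publish(self, message):
        # every subscriber receives its own copy of the message
        for deliver in self.subscribers:
            deliver(message)

inboxes = {"email": [], "http": [], "queue": []}
topic = ToyTopic()
for box in inboxes.values():
    topic.subscribe(box.append)
topic.publish("server down!")
```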
67. Pricing Model
- $0.06 per 100,000 API requests (first 100,000 requests free) / month
- Number of notifications (first part free) / month
- “out” bandwidth (in GB)
Other impact factors: Region
Objectives:
- Understand how to integrate Amazon services into your application
- Deploy / manage your application on EC2
Region
Regions are dispersed (spread in wide area) and located in separate geographic areas (US, EU, etc.). Each EC2 Region is designed to be completely isolated from the other Amazon EC2 Regions.
This achieves the greatest possible failure independence and stability, and it makes the locality of each EC2 resource unambiguous
Region list:
US East (Northern Virginia): us-east-1
US West (Oregon) : us-west-2
US West (Northern California) : us-west-1
EU (Ireland) : eu-west-1
Asia Pacific (Singapore) : ap-southeast-1
Asia Pacific (Tokyo) : ap-northeast-1
South America (Sao Paulo) : sa-east-1
Availability Zone
Availability Zones are distinct locations within a Region that are engineered to be isolated from failures in other Availability Zones and provide inexpensive, low latency network connectivity to other Availability Zones in the same Region
However, failures can occur that affect the availability of instance resources that are in the same location. Although this is rare, if you host all your Amazon EC2 instances in a single location that is affected by such a failure, your instances will be unavailable.
By launching instances in separate Regions, you can design your application to be closer to specific customers or to meet legal or other requirements.
By launching instances in separate Availability Zones, you can protect your applications from the failure of a single location
Standard Amazon EC2 Regional Data Transfer charges of $0.01 per GB in/out apply when transferring data between an Amazon EC2 instance and an Amazon RDS DB Instance in different Availability Zones of the same Region
There is no additional charge for data transferred between Amazon SimpleDB and other Amazon Web Services within the same Region
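The pricing rule above reduces to a tiny function — a hedged sketch using only the rates stated in the text (same-AZ transfer free, cross-AZ transfer in the same Region at $0.01 per GB); real AWS pricing varies by Region and has changed over time.

```python
# Sketch of the cross-AZ data transfer rule described above.
CROSS_AZ_RATE_PER_GB = 0.01  # rate quoted in the text; an era-specific figure

def transfer_cost(gb, same_az):
    """Cost of moving `gb` gigabytes between EC2 and RDS in one Region."""
    if same_az:
        return 0.0                       # same-AZ transfer is free
    return gb * CROSS_AZ_RATE_PER_GB     # charged per GB across AZs

cost = transfer_cost(500, same_az=False)  # 500 GB across AZs
```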
The command-line client needs to be installed, along with some configuration
Each service has its own command-line package, so each must be downloaded separately
Quote from AWS EC2 Documentation
“resizable”: With EC2, you use and pay for only the capacity that you need. This eliminates the need to make large and expensive hardware purchases, reduces the need to forecast traffic, and enables you to automatically scale your IT resources to deal with changes in requirements or spikes in popularity related to your application or service
Diagram: a simple visualization of EC2 in the Amazon cloud platform — “instance” ~ “virtual machine”
An Amazon Machine Image (AMI): is a template that contains a software configuration: operating system, application server, applications. If an instance fails, you can launch a new one from the AMI. Amazon publishes many AMIs that contain common software configurations for public use. In addition, members of the AWS developer community have published their own custom AMIs
Instance Type: a specification that defines the memory, CPU, storage capacity, and hourly cost for an instance. Some instance types are designed for standard applications, whereas others are designed for CPU-intensive applications, or memory-intensive applications, etc.
EC2 instance: a virtual machine
You launch AMIs at your own risk. Amazon cannot vouch for the integrity or security of AMIs shared by other EC2 users. Therefore, you should treat shared AMIs as you would any foreign code that you might consider deploying in your own data center and perform the appropriate due diligence. Public AMIs are available from Amazon. Ideally, you should get a public AMI ID from a trusted source (a web site, another EC2 user, etc.) — and still use it at your own risk. If you do not know the source of an AMI, we recommend that you search the forums for comments on the AMI before launching it. Conversely, if you have questions or observations about a shared AMI, feel free to use the AWS forums to ask or comment
Amazon Elastic Block Store (Amazon EBS) provides block level storage volumes for use with Amazon EC2 instances. Amazon EBS volumes are highly available and reliable storage volumes that can be attached to any running instance. The attached Amazon EBS volumes are exposed as storage volumes that persist independently from the life of the instance
Persistence: an instance store-backed AMI doesn’t support the Stopped state. Rebooting doesn’t lose the data on the instance store
The amount of instance store disk depends on the instance type
The private address is reachable only from within the Amazon EC2 network
The public address is directly mapped to the private address through Network Address Translation (NAT) and is reachable from the Internet
If you use dynamic DNS to map an existing DNS name to a new instance's public IP address, it might take up to 24 hours for the IP address to propagate through the Internet. As a result, new instances might not receive traffic while terminated instances continue to receive requests
You can associate one Elastic IP address with only one instance at a time. When you associate an Elastic IP address with an instance, its current public IP address is released to the Amazon EC2 public IP address pool. If you disassociate an Elastic IP address from the instance, the instance is automatically assigned a new public IP address within a few minutes
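The association rule above — one Elastic IP per instance at a time, with the previous public IP released back to Amazon's pool — can be sketched as a toy model; the class and pool contents are invented for illustration, the real behavior lives in the EC2 API.

```python
# Toy model of Elastic IP association; addresses and names are made up.
class AddressPool:
    def __init__(self, pool):
        self.pool = list(pool)           # Amazon's public IP address pool
        self.public_ip = {}              # instance -> current public IP

    def launch(self, instance):
        # a new instance is assigned a public IP from the pool
        self.public_ip[instance] = self.pool.pop(0)

    def associate_eip(self, eip, instance):
        released = self.public_ip.get(instance)
        if released:
            self.pool.append(released)   # old public IP returns to the pool
        self.public_ip[instance] = eip   # the Elastic IP takes its place

pool = AddressPool(["54.0.0.1", "54.0.0.2"])
pool.launch("i-123")                     # gets 54.0.0.1 from the pool
pool.associate_eip("1.1.1.1", "i-123")   # 54.0.0.1 released back to the pool
```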
All accounts are limited to 5 Elastic IP addresses because public (IPv4) Internet addresses are a scarce public resource
To ensure our customers are efficiently using Elastic IP addresses, we impose a small hourly charge when these IP addresses are not mapped to an instance. When these IP addresses are mapped to an instance, they are free of charge
The storage limits apply to whichever you reach first
The volume need not be attached to a running instance in order to take a snapshot. The snapshots can also be shared with specific AWS accounts or made public
launch instance from snapshot: demo later
Amazon EBS snapshots are incremental backups, meaning that only the blocks on the device that have changed since your last snapshot will be saved. If you have a device with 100GiB of data, but only 5GiB of data have changed since your last snapshot, only the 5GiB of modified data will be stored back to Amazon S3. Even though snapshots are saved incrementally, the snapshot deletion process is designed so that you need to retain only the most recent snapshot in order to restore the volume
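The incremental idea described above can be sketched at the block level: only blocks changed since the previous snapshot are stored, yet any snapshot alone still describes the full volume. The block-level bookkeeping here is invented purely for illustration.

```python
# Sketch of incremental snapshots: store only changed blocks, keep a full view.
def snapshot(volume_blocks, previous):
    # blocks that differ from the previous snapshot (the incremental delta)
    changed = {addr: data for addr, data in volume_blocks.items()
               if previous.get(addr) != data}
    # the logical full view: previous state overlaid with current blocks
    full_view = dict(previous)
    full_view.update(volume_blocks)
    return changed, full_view

vol = {0: "aaaa", 1: "bbbb", 2: "cccc"}
delta1, view1 = snapshot(vol, previous={})     # first snapshot: all 3 blocks
vol[1] = "BBBB"                                # only 1 of 3 blocks changes
delta2, view2 = snapshot(vol, previous=view1)  # second snapshot: 1 block stored
```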
No device: to use this option only when you want to suppress a block device from attaching at launch time
If you customize your instance with ephemeral storage devices or additional EBS volumes besides the root device, the new AMI contains block device mapping information for those storage devices and volumes. When you then launch an instance from your new AMI, the instance automatically launches with the additional devices and volumes
$3.50 per instance per month (the per metric price below x 7 pre-defined metrics per instance). Custom metrics: memory usage, transaction volumes, or error rates …
In the following figure, the alarm threshold is set to 3 and the minimum breach is 3 periods. That is, the alarm invokes its action only when the threshold is breached for 3 consecutive periods. In the figure, this happens with the third through fifth time periods, and the alarm's state is set to ALARM. At period six, the value dips below the threshold, and the state reverts to OK. Later, during the ninth time period, the threshold is breached again, but not for the necessary three consecutive periods. Consequently, the alarm's state remains OK
An alarm has three possible states:
OK—The metric is within the defined threshold
ALARM—The metric is outside of the defined threshold
INSUFFICIENT_DATA—The alarm has just started, the metric is not available, or not enough data is available for the metric to determine the alarm state
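The evaluation rule in the figure — ALARM only after the threshold is breached for N consecutive periods — can be sketched as a small state function. This simplification ignores the INSUFFICIENT_DATA state and is an illustration, not CloudWatch's actual implementation.

```python
# Sketch of consecutive-period alarm evaluation (threshold 3, 3 periods,
# matching the figure described above).
def alarm_states(metric, threshold, periods):
    states, streak = [], 0
    for value in metric:
        streak = streak + 1 if value > threshold else 0  # consecutive breaches
        states.append("ALARM" if streak >= periods else "OK")
    return states

# Periods 3-5 breach -> ALARM at the 5th period; period 9 breaches alone -> OK
metric = [1, 2, 5, 6, 5, 2, 1, 1, 5, 1]
states = alarm_states(metric, threshold=3, periods=3)
```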
Elastic Load Balancing can detect the health of Amazon EC2 instances. When it detects unhealthy load-balanced Amazon EC2 instances, it no longer routes traffic to those Amazon EC2 instances and spreads the load across the remaining healthy Amazon EC2 instances
Elastic Load Balancing supports the ability to stick user sessions to specific EC2 instances
Elastic Load Balancing supports use of both the Internet Protocol version 4 and 6 (IPv4 and IPv6)
Sticky Sessions
Enables the load balancer to bind a user's session to a specific application instance. This ensures that all requests coming from the user during the session will be sent to the same application instance
Load-balancer-generated HTTP cookies, which allow browser-based session lifetimes
Application-generated HTTP cookies, which allow application-specific session lifetimes
"X-Forwarded-Port" , "X-Forwarded-For" and "X-Forwarded-Proto" Support
Because load balancers intercept traffic between clients and servers, your server access logs contain only the IP address of the load balancer. To see the original IP address/port/protocol of the client, use the X-Forwarded-* request header. Elastic Load Balancing stores the IP address of the client in the X-Forwarded-For request header and passes the header along to your server
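The X-Forwarded-For convention above implies a small server-side lookup: the original client is the first entry in the comma-separated header value. The header values below are made up for illustration.

```python
# Sketch of recovering the original client IP behind a load balancer.
def client_ip(headers):
    xff = headers.get("X-Forwarded-For")
    if xff:
        # the load balancer appends addresses; the client comes first
        return xff.split(",")[0].strip()
    return headers.get("Remote-Addr")  # no proxy involved

headers = {"Remote-Addr": "10.0.0.5",  # the load balancer's address
           "X-Forwarded-For": "203.0.113.9, 10.0.0.5"}
ip = client_ip(headers)
```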
HTTP 60-second timeout for requests
When the load balancer forwards an HTTP request to an instance and the instance sends its response back after more than 60 seconds, the load balancer will automatically kill that HTTP connection and the client will receive an empty response (no HTTP headers)
$ curl -i http://host/balancing
Auto Scaling allows you to scale your Amazon EC2 capacity up or down automatically according to conditions you define. Auto Scaling monitors the health of each EC2 instance that it launches. If any instance terminates unexpectedly, Auto Scaling detects the termination and launches a replacement instance. This capability helps you maintain a fixed, desired number of EC2 instances automatically.
Auto Scaling is particularly well suited for applications that experience hourly, daily, or weekly variability in usage.
Auto Scaling is enabled by Amazon CloudWatch and available at no additional charge beyond Amazon CloudWatch fees.
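The replacement behavior described above — detect terminated instances and launch replacements to hold a fixed desired count — amounts to a reconciliation loop. The function and the counter-based "launcher" below are invented for illustration, not the Auto Scaling API.

```python
import itertools

# Toy reconciliation loop: keep the fleet at the desired instance count.
def reconcile(running, desired, launcher):
    while len(running) < desired:
        running.add(next(launcher))  # launch a replacement instance
    return running

new_ids = (f"i-new{n}" for n in itertools.count(1))
fleet = {"i-a", "i-b", "i-c"}
fleet.discard("i-b")                      # one instance dies unexpectedly
fleet = reconcile(fleet, desired=3, launcher=new_ids)
```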
Scheduled time in Unix cron syntax format
Resizable capacity for databases: modify CPU, memory, and storage at any time, even on a running server instance
Amazon firewall : control access to your DB Instances. A DB Security Group acts like a firewall controlling network access to your DB Instance
Flexible back up methods : automated backups and DB Snapshots. Automated backups automatically back up your DB Instance during a specific, user-definable backup window, and keeps the backups for a limited, user-specified period of time (called the backup retention period); you can later recover your database to any point in time during that retention period. DB Snapshots are user-created snapshots that enable you to back up your DB Instance to a known state, and restore to that specific state at any time. Amazon RDS keeps all DB Snapshots until you delete them.
Flexible scaling : Currently, replication is only supported for the MySQL engine. We plan to support replication options for Oracle in the future.
Amazon RDS for MySQL provides two distinct replication options to serve different purposes.
Monitoring: monitor metrics with Amazon CloudWatch
Auto Minor Version Upgrade option enables your DB Instance to receive minor engine version upgrades automatically when they become available
DB Instance Class ~ EC2 Instance Type: indicates the CPU + RAM amount of a DB Instance
DB Instance Identifier is a customer-supplied identifier for a DB Instance. This identifier specifies a particular DB Instance when interacting with the Amazon RDS API and commands. The DB Instance identifier must be unique for that customer in an AWS region
Database Name depends on the database engine in use:
For the MySQL database engine, the Database Name is the name of a database hosted in your Amazon DB Instance. An Amazon DB Instance can host multiple databases. Databases hosted by the same DB Instance must have a unique name within that instance
For the Oracle database engine, Database Name is used to set the value of ORACLE_SID, which must be supplied when connecting to the Oracle RDS instance.
If you are looking to use replication to increase database availability while protecting your latest database updates against unplanned outages, consider running your DB Instance as a Multi-AZ deployment. When you create or modify your DB Instance to run as a Multi-AZ deployment, Amazon RDS will automatically provision and manage a “standby” replica in a different Availability Zone. In the event of planned database maintenance, DB Instance failure, or an Availability Zone failure, Amazon RDS will automatically failover to the standby so that database operations can resume quickly without administrative intervention. Multi-AZ deployments utilize synchronous replication, making database writes concurrently on both the primary and standby so that the standby will be up-to-date in the event a failover occurs
If you are looking to take advantage of MySQL’s built-in replication to scale beyond the capacity constraints of a single DB Instance for read-heavy database workloads, Amazon RDS makes it easier with Read Replicas. You can create a Read Replica of a given “source” DB Instance using the AWS Management Console or CreateDBInstanceReadReplica API. Once the Read Replica is created, database updates on the source DB Instance will be propagated to the Read Replica. You can create multiple Read Replicas for a given source DB Instance and distribute your application’s read traffic amongst them. In particular, updates are applied to your Read Replica(s) after they occur on the source DB Instance (“asynchronous” replication), and replication lag can vary significantly. This means recent database updates made to a standard (non Multi-AZ) source DB Instance may not be present on associated Read Replicas in the event of an unplanned outage on the source DB Instance. As such, Read Replicas do not offer the same data durability benefits as Multi-AZ deployments. While Read Replicas can provide some read availability benefits, they are not designed to improve write availability.
Data transferred between Amazon RDS and Amazon EC2 Instances in the same Availability Zone is free.
Data transferred between Availability Zones for replication of Multi-AZ deployments is free.
It allows customers to store and retrieve any amount of data on the web
Control access to buckets and objects:
- Allow anonymous users to download only
- Don’t allow specific users to get the list of objects in a bucket
- Restrict access to a bucket / object from specific IP addresses
Versioning objects in a bucket : Versioning is a means of keeping multiple variants of an object in the same bucket. In one bucket, for example, you can have two objects with the same key, but different version IDs, such as photo.gif (version 111111) and photo.gif (version 121212). You might enable versioning to prevent objects from being deleted or overwritten by mistake, or to archive objects so that you can retrieve previous versions of them
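The photo.gif example above can be modeled with a toy versioned bucket: two objects share one key but carry different version IDs, and a plain GET returns the latest version. The class and version-ID generation are invented for illustration, not the S3 API.

```python
import itertools

# Toy model of S3 object versioning within one bucket.
class ToyVersionedBucket:
    def __init__(self):
        self.versions = {}                   # key -> [(version_id, body), ...]
        self._ids = itertools.count(111111)  # fake version-ID generator

    def put(self, key, body):
        vid = str(next(self._ids))
        self.versions.setdefault(key, []).append((vid, body))
        return vid

    def get(self, key, version_id=None):
        history = self.versions[key]
        if version_id is None:
            return history[-1][1]            # a plain GET returns the latest
        return dict(history)[version_id]     # or fetch a specific version

b = ToyVersionedBucket()
v1 = b.put("photo.gif", b"old bytes")
v2 = b.put("photo.gif", b"new bytes")        # same key, new version ID
latest = b.get("photo.gif")
old = b.get("photo.gif", version_id=v1)      # earlier version still retrievable
```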
Run the Ruby scripts for setup first, and remember to clean up
Access control : to grant another AWS account a particular type of access to your queue (e.g., SendMessage) or for a specific period of time
A topic is a communication channel to send messages and subscribe to notifications. It provides an access point for publishers and subscribers to communicate with each other
Currently Amazon SNS will only accept US phone numbers as valid subscription end-points.
Control access: to grant another AWS account a particular type of topic action (e.g., Publish) or to limit subscriptions to your topic to only the HTTPS protocol (avoid spam in email)