A recap of some of the most interesting things learned from the AWS re:Invent 2013 Conference. Easily the most intense and educational conference I've ever attended.
In addition to running databases in Amazon EC2, AWS customers can choose among a variety of managed database services. These services save effort, save time, and unlock new capabilities and economies. In this session, we make it easy to understand how they differ, what they have in common, and how to choose one or more. We explain the fundamentals of Amazon DynamoDB, a fully managed NoSQL database service; Amazon RDS, a relational database service in the cloud; Amazon ElastiCache, a fast, in-memory caching service in the cloud; and Amazon Redshift, a fully managed, petabyte-scale data-warehouse solution that can be surprisingly economical. We’ll cover how each service might help support your application, how much each service costs, and how to get started.
Speaker:
Shaun Pearce, AWS Solutions Architect
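ElastiCache-style caching, one of the services covered above, typically follows the cache-aside pattern: check the cache, fall back to the database on a miss, then populate the cache. A minimal sketch, with a plain dict standing in for the cache client and a stub dict for the database (both are illustrative stand-ins, not real service clients):

```python
# Cache-aside pattern sketch. A dict stands in for an ElastiCache
# (Redis/Memcached) client; "database" is a stub lookup table.
cache = {}
database = {"user:1": {"name": "Ada"}, "user:2": {"name": "Grace"}}

def get_user(key):
    # 1. Try the cache first.
    if key in cache:
        return cache[key]
    # 2. Cache miss: read from the primary data store.
    value = database.get(key)
    # 3. Populate the cache so later reads are served from memory.
    if value is not None:
        cache[key] = value
    return value

first = get_user("user:1")   # miss: read through from the database
second = get_user("user:1")  # hit: served from the cache
```

The same flow applies unchanged when the dict is replaced with a real Redis or Memcached client; the only additions in practice are a TTL on each cached entry and invalidation on writes.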
Technical 101: AWS Innovation at Scale
This session gives an insider view of some of the innovations that help make the AWS Cloud unique. Rodney will show examples of AWS networking innovations from the interregional network backbone, through custom routers and the networking protocol stack, all the way down to individual servers. He will show examples from AWS server hardware, storage, and power distribution and then, up the stack, in high-scale streaming data processing. Rodney will also dive into fundamental database work AWS is delivering to open up scaling and performance limits, reduce costs, and eliminate much of the administrative burden of managing databases. Join this session and walk away with a deeper understanding of the underlying innovations powering the cloud.
Speaker: Rodney Haywood, Manager, Solutions Architecture, Amazon Web Services
This session, led by James Hamilton, VP and Distinguished Engineer, gives an insider view of some of the innovations that help make the AWS cloud unique. He will show examples of AWS networking innovations from the interregional network backbone, through custom routers and networking protocol stack, all the way down to individual servers. He will show examples from AWS server hardware, storage, and power distribution and then, up the stack, in high scale streaming data processing. James will also dive into fundamental database work AWS is delivering to open up scaling and performance limits, reduce costs, and eliminate much of the administrative burden of managing databases. Join this session and walk away with a deeper understanding of the underlying innovations powering the cloud.
AWS Database Services - Philadelphia AWS User Group - 4/17/2018 - Bert Zahniser
The document summarizes a presentation on Amazon Web Services (AWS) database services. It provides an overview of AWS Relational Database Service (RDS) and other database offerings, including benefits of RDS like scalability and availability features. Specific RDS configurations, security options, monitoring, and pricing are also discussed. Non-relational database services and migration tools are briefly mentioned.
In addition to running databases in Amazon EC2, AWS customers can choose among a variety of managed database services. These services save effort, save time, and unlock new capabilities and economies. In this session, we make it easy to understand how they differ, what they have in common, and how to choose one or more. We explain the fundamentals of Amazon DynamoDB, a fully managed NoSQL database service; Amazon RDS, a relational database service in the cloud; Amazon ElastiCache, a fast, in-memory caching service in the cloud; and Amazon Redshift, a fully managed, petabyte-scale data-warehouse solution that can be surprisingly economical. We’ll cover how each service might help support your application, how much each service costs, and how to get started.
Are you challenged today with getting non-digital information into a digital format? Are you trying to find the most cost-effective storage solutions for your digital content? Do you want to share your library's rich information with a global audience? Attend this webinar to learn how to digitize, store, and share your information quickly, efficiently, and at the lowest cost possible.
SRV401 Deep Dive on Amazon Elastic File System (Amazon EFS) - Amazon Web Services
In this session we will review Amazon EFS and how it delivers fully managed, petabyte-scale file storage for Amazon EC2 instances. Large scale and consistent performance make Amazon EFS ideal for web and content serving, enterprise applications, media processing, container storage, and Big Data analytics use cases. Session attendees will learn how to identify appropriate applications for use with Amazon EFS, understand performance details and security models, and hear how established customers are using it in production. The target audience is file system administrators, application developers, and application owners that operate or build file-based applications that require consistent latencies at cloud scale.
JustGiving – Serverless Data Pipelines, API, Messaging and Stream Processing - Luis Gonzalez
What to Expect from the Session
• Recap of some AWS services
• Event-driven data platform at JustGiving
• Serverless computing
• Six serverless patterns
• Serverless recommendations and best practices
Join AWS at this session to understand how to architect an infrastructure to handle going from zero to millions of users. From leveraging highly scalable AWS services to making smart decisions on building out your application, you'll learn a number of best practices for scaling your infrastructure in the cloud.
Speakers:
Andreas Chatzakis, AWS Solutions Architect
Pete Mounce, Senior Developer, JustEat
This document provides an overview of a presentation on cloud architecture and anti-architecture patterns. The presentation discusses moving a company's primary data store from a centralized SQL database to a distributed Cassandra database in the cloud. An initial prototype backup solution was overengineered, becoming complex and taking too long to implement fully. This highlighted the importance of defining anti-architecture constraints upfront to guide development in a simpler direction. The presentation concludes with a discussion of differences between the company's existing datacenter architecture and goals for a cloud architecture, focusing on replacing centralized components with distributed and decoupled alternatives.
Building a Just-in-Time Application Stack for Analysts - Avere Systems
Slide presentation from Webinar on February 17, 2016.
People in analytical roles are demanding more and more compute and storage to get their jobs done. Instead of building out infrastructure for a few employees or a department, systems engineers and IT managers can find value in creating a compute stack in the cloud to meet the fluctuating demand of their clients.
In this 45-minute webinar, you’ll learn:
- How to identify the right analytical workloads
- How to create a scalable compute environment using the cloud for analysts in under 10 minutes
- How to best manage costs associated with the cloud compute stack
- How to create dedicated client stacks with their own scratch space as well as general access to reference data
Health systems departments, research & development departments, and business analyst groups all face silos of these challenging, compute-intensive use cases. By learning how to quickly build this flexible workflow that can be scaled up and down (or off) instantly, you can support business objectives while efficiently managing costs.
Day 2 - Amazon RDS - Letting AWS run your Low Admin, High Performance Database - Amazon Web Services
Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and re-sizable capacity while managing time-consuming database administration tasks, freeing you up to focus on your applications and business. In this webinar we review the different types of Amazon RDS available and how to move your existing databases to Amazon RDS with minimum disruption.
Reasons to attend:
- Learn how Amazon RDS can reduce the overhead of running high performance mission critical databases.
- Learn how to migrate your existing database workloads into Amazon RDS running on the AWS Cloud.
- Learn how to scale up and scale down your Amazon RDS instance and save money with reserved instances.
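The reserved-instance savings mentioned in the last bullet come down to simple breakeven arithmetic: a reserved instance trades an upfront fee for a lower hourly rate. A sketch of the comparison, where all prices are illustrative placeholders rather than real AWS rates:

```python
# Rough on-demand vs. reserved-instance cost comparison for an RDS
# instance. The prices below are made-up illustrative numbers, not
# real AWS rates -- check current pricing before deciding.
HOURS_PER_YEAR = 24 * 365

on_demand_hourly = 0.40     # assumed on-demand $/hour
reserved_upfront = 1200.00  # assumed 1-year all-upfront fee
reserved_hourly = 0.15      # assumed effective $/hour under the RI

on_demand_yearly = on_demand_hourly * HOURS_PER_YEAR
reserved_yearly = reserved_upfront + reserved_hourly * HOURS_PER_YEAR
savings = on_demand_yearly - reserved_yearly

print(f"on-demand: ${on_demand_yearly:,.2f}/year")
print(f"reserved:  ${reserved_yearly:,.2f}/year")
print(f"savings:   ${savings:,.2f}/year")
```

The key caveat is utilization: a reserved instance only pays off if the database runs most of the year, which is exactly the case for the always-on production workloads this webinar targets.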
Netflix on Cloud - combined slides for Dev and Ops - Adrian Cockcroft
This document contains slides from a presentation given by Adrian Cockcroft on Netflix's use of cloud computing on Amazon Web Services (AWS). The summary includes:
1) Netflix moved most of its infrastructure to AWS to leverage AWS's scale and features rather than building its own datacenters, as capacity growth was unpredictable and datacenters were inflexible.
2) Netflix uses many AWS services including EC2, S3, EBS, EMR and more. It deployed a large movie encoding farm on EC2, stores content on S3, uses EMR/Hadoop for log analysis, and a CDN for content delivery.
3) Netflix has learned that cloud tools don't always scale for large
Deliver Best-in-Class HPC Cloud Solutions Without Losing Your Mind - Avere Systems
While cloud computing offers virtually unlimited capacity, harnessing that capacity in an efficient, cost-effective fashion can be cumbersome and difficult at the workload level. At the organizational level, it can quickly become chaos.
You must make choices around cloud deployment, and these choices could have a long-lasting impact on your organization. It is important to understand your options and avoid incomplete, complicated, locked-in scenarios. Data management and placement challenges make having the ability to automate workflows and processes across multiple clouds a requirement.
In this webinar, you will:
• Learn how to leverage cloud services as part of an overall computation approach
• Understand data management in a cloud-based world
• Hear what options you have to orchestrate HPC in the cloud
• Learn how cloud orchestration works to automate and align computing with specific goals and objectives
• See an example of an orchestrated HPC workload using on-premises data
From computational research to financial back testing, and research simulations to IoT processing frameworks, decisions made now will not only impact future manageability, but also your sanity.
Amazon RDS for Microsoft SQL: Performance, Security, Best Practices (DAT303) - Amazon Web Services
Come learn about architecting high-performance applications and production workloads using Amazon RDS for SQL Server. Understand how to migrate your data to an Amazon RDS instance, apply security best practices, and optimize your database instance and applications for high availability.
AWS Webcast - How to Migrate On-premise NAS Storage to Cloud NAS Storage - Amazon Web Services
In this webinar, Amazon Web Services Solutions Architect Kyle Lichtenberg and SoftNAS Solutions Architect Mark Bichlmeier will discuss moving SaaS applications from on-premises to the AWS cloud using NAS storage. This webinar will also feature an in-depth case study on Recommind. Ranked among the fastest-growing companies on Deloitte’s 2014 Technology Fast 500™, Recommind was faced with driving greater scale, agility, and cost savings out of its hosting operations for its SaaS-based business. Should Recommind maximize operational efficiencies and costs for its brick-and-mortar data centers or go all-in and provide its SaaS applications to thousands of customers from the cloud? In this webinar, you will learn:
• Alternatives considered in moving SaaS applications from on-premises to the cloud
• How to migrate on-premises applications to the AWS cloud and use cloud NAS storage
• How to build high-availability cloud NAS storage on AWS for multi-tenant environments
• How to configure cloud NAS storage on AWS for IOPS requirements
• How to configure iSCSI for use through AWS VPCs
• How to archive to S3 cloud disks
AWS Summit London 2014 | Maximising EC2 and EBS Performance (400) - Amazon Web Services
This advanced technical session is ideal for customers that are looking to maximise the performance of AWS Elastic Block Store (EBS) storage to support workloads with demanding IO performance requirements. If you need to run high IO workloads on EBS, such as NoSQL or RDBMS systems, then attend this session to find out how to optimise your EBS configuration to enable this.
Amazon EBS provides persistent block-level storage volumes for use with Amazon EC2 instances. In this technical session, you will discover how Amazon EBS can take your application deployments on EC2 to the next level. Session attendees will learn about the Amazon EBS features and benefits, how to identify applications that are appropriate for use with Amazon EBS, best practices, and details about its performance and volume types. We discuss how to maximize Amazon EBS performance, with a special emphasis on low-latency, high-throughput applications like transactional and NoSQL databases, and big data analysis frameworks like Hadoop and Kafka. We will also dive deep and discuss Elastic Volumes, our latest EBS feature that allows you to dynamically increase capacity, tune performance, and change the type of EBS volumes on the fly. Throughout, we share tips for success.
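For the gp2 volume type discussed in these sessions, baseline performance scales with volume size. A small helper capturing the widely documented gp2 formula (3 IOPS per GiB, with a 100-IOPS floor and a per-volume cap); the cap has changed over the years, so treat the default below as an assumption and verify current limits:

```python
def gp2_baseline_iops(size_gib, iops_cap=16000):
    # General-purpose SSD (gp2) baseline: 3 IOPS per GiB of volume
    # size, floored at 100 IOPS and capped per volume. The 16,000 cap
    # reflects later documentation; earlier limits were lower.
    return min(max(100, 3 * size_gib), iops_cap)

for size in (20, 100, 1000, 8000):
    print(size, "GiB ->", gp2_baseline_iops(size), "baseline IOPS")
```

One practical consequence: over-provisioning a gp2 volume's size is a common way to buy more baseline IOPS, which is part of why these sessions stress matching volume type and size to the workload.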
The document provides best practices for cloud architecture. It discusses when to cloudify an application based on factors like unpredictable capacity needs, elasticity requirements, and agility in development. It also discusses when not to cloudify, such as if network latency is a concern or vendor lock-in is important. The document then discusses database normalization practices and design considerations for scaling out applications in a stateless manner using services. It emphasizes automation, loose coupling between services, and service discovery mechanisms.
Overview and Best Practices for Amazon Elastic Block Store - September 2016 W... - Amazon Web Services
Amazon Elastic Block Store (Amazon EBS) provides persistent block level storage volumes for use with Amazon EC2 instances. In this technical session, we present the differences between the types of Amazon EBS block storage so that you can best understand which storage type to use for your different application deployments. We discuss how to maximize Amazon EBS performance with a special eye towards low-latency and high-throughput applications. We discuss Amazon EBS encryption and share best practices for Amazon EBS snapshot management. Throughout, we share tips for success.
Learning Objectives:
• Learn about the latest updates to EBS
• Learn about best practices for using EBS.
Who Should Attend:
• Application admins, DBAs, database and big data architects
The correct answer is B. To enable encryption for future RDS database backups, we need to modify the backup section of the database configuration in RDS and toggle the "Enable encryption" checkbox. This will encrypt all new backups taken after this change. The other options are incorrect:
A) Enabling default encryption on the S3 bucket won't encrypt existing backups or future RDS backups taken by RDS.
C) Creating an encrypted snapshot from an unencrypted one doesn't help meet the requirements - we need future automated backups from RDS to be encrypted.
So the best option is B - modifying the database configuration directly in RDS to enable encryption for all new automated backups.
- WOW Air moved their booking engine and content management system to AWS to handle scaling for successful sales campaigns, taking advantage of Amazon RDS and EC2 auto-scaling.
- They used RDS for MySQL and PostgreSQL to avoid managing databases themselves and easily scale their instances vertically and horizontally. Cross-region replication on RDS helped serve users from multiple regions.
- The document discusses high availability features of RDS like Multi-AZ deployment and Amazon Aurora, as well as tools for migrating databases to RDS from on-premises or other database engines.
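The read-scaling approach described above usually needs a routing rule in the application: writes go to the primary, reads rotate across replica endpoints. A minimal round-robin sketch (the endpoint names are hypothetical; in practice RDS assigns each replica its own DNS endpoint):

```python
import itertools

# Spreading read traffic across RDS read replicas. Endpoint names
# below are hypothetical placeholders.
WRITER = "mydb.cluster-xyz.us-east-1.rds.amazonaws.com"
READ_REPLICAS = [
    "mydb-replica-1.xyz.us-east-1.rds.amazonaws.com",
    "mydb-replica-2.xyz.us-east-1.rds.amazonaws.com",
]

_replica_cycle = itertools.cycle(READ_REPLICAS)

def endpoint_for(query_is_write):
    # Writes must hit the primary; reads rotate across replicas.
    return WRITER if query_is_write else next(_replica_cycle)

print(endpoint_for(True))   # primary endpoint, for writes
print(endpoint_for(False))  # a replica endpoint, for reads
print(endpoint_for(False))  # the next replica in the rotation
```

A caveat worth noting: replicas are asynchronously replicated, so read-after-write consistency is not guaranteed and latency-sensitive reads may still need to go to the writer.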
(DAT204) NoSQL? No Worries: Build Scalable Apps on AWS NoSQL Services - Amazon Web Services
In this session, we discuss the benefits of NoSQL databases and take a tour of the main NoSQL services offered by AWS—Amazon DynamoDB and Amazon ElastiCache. Then, we hear from two leading customers, Expedia and Mapbox, about their use cases and architectural challenges, and how they addressed them using AWS NoSQL services, including design patterns and best practices. You will walk out of this session having a better understanding of NoSQL and its powerful capabilities, ready to tackle your database challenges with confidence.
Amazon Web Services - Relational Database Service Meetup - cyrilkhairallah
The document discusses Amazon Relational Database Service (RDS), a managed database service. It provides an overview of RDS and how it can be used to deploy, operate, and scale databases in the cloud more easily without manual administration. Key topics covered include how to scale databases with RDS, optimize costs using reserved instances, monitor databases with CloudWatch, take automated backups, and perform other administrative tasks without managing the underlying infrastructure.
Advanced data migration techniques for Amazon RDS - Tom Laszewski
Migrating on premise data from Oracle and MySQL Databases to AWS Oracle and MySQL RDS. These techniques will work for AWS EC2 as well. Scripts included in the slides.
Spark 101 – First Steps To Distributed Computing - Demi Ben-Ari @ Ofek Alumni
The world has changed, and having one huge server won’t do the job anymore. When you’re talking about vast amounts of data that keep growing, the ability to scale out will be your saviour.
This lecture will be about the basics of Apache Spark and distributed computing and the development tools needed to have a functional environment.
Bio:
Demi Ben-Ari, Sr. Data Engineer @Windward, Ofek Alumni
Has over 9 years of experience building systems ranging from near-real-time applications to Big Data distributed systems.
Co-Founder of the “Big Things” Big Data community: http://somebigthings.com/big-things-i...
AWS re:Invent 2016: Building HPC Clusters as Code in the (Almost) Infinite Cl... - Amazon Web Services
The document discusses building HPC clusters on AWS in an automated way using infrastructure as code. It provides an overview of why customers use AWS for HPC/HTC workloads due to benefits like time to research, innovation, scalability, cost savings using spot instances, and data services. The document outlines challenges in automating cluster deployment, integrating storage, networking, and services, and discusses how Fermilab is using AWS for various HEP workloads like NOvA data processing and CMS Monte Carlo simulation through their HEP Cloud Facility project.
This document provides an overview of scalable architecture strategies on AWS. It discusses:
1. Scaling the infrastructure seamlessly by adding more resources as needed to support growth in users and traffic, without performance drops or practical limits.
2. How Sanlih E-Television used AWS to support its online strategy and estimated 30% savings over other cloud providers due to AWS's stability, competitive pricing, and ability to integrate internet and mobile services.
3. Different strategies for scaling architectures on AWS including separating databases from application servers, using caching, offloading static content to S3, and implementing auto-scaling and load balancing.
The document discusses how to build cloud-enabled apps that can scale on AWS. It covers scaling vertically by increasing instance sizes, scaling horizontally by adding more instances, using auto-scaling to dynamically scale based on demand, distributing load with an ELB, scaling databases using read replicas and sharding, and taking advantage of managed database services like RDS and DynamoDB for easier administration. It also discusses decomposing applications into small, stateless components and using infrastructure as code for continuous deployment and agility.
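Sharding, mentioned above as a database scaling strategy, boils down to a deterministic key-to-shard mapping that every stateless app server computes the same way. A minimal hash-based sketch (shard names are hypothetical):

```python
import hashlib

# Hash-based sharding: route each key to one of N database shards.
# The mapping must be deterministic so that every app server agrees
# on where a given key lives.
SHARDS = ["shard-0", "shard-1", "shard-2", "shard-3"]

def shard_for(key):
    # Use a stable hash (md5) rather than Python's built-in hash(),
    # which is randomized per process and would break cross-server
    # agreement on key placement.
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

print(shard_for("user:42"))  # always maps to the same shard
```

The modulo scheme is the simplest option; its known weakness is that changing the shard count remaps most keys, which is why production systems often move to consistent hashing when they expect to reshard.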
Amazon EBS provides persistent block-level storage volumes for use with Amazon EC2 instances. In this technical session, you will discover how Amazon EBS can take your application deployments on EC2 to the next level. Session attendees will learn about the Amazon EBS features and benefits, how to identify applications that are appropriate for use with Amazon EBS, best practices, and details about its performance and volume types. We discuss how to maximize Amazon EBS performance, with a special emphasis on low-latency, high-throughput applications like transactional and NoSQL databases, and big data analysis frameworks like Hadoop and Kafka. We will also dive deep and discuss Elastic Volumes, our latest EBS feature that allows you to dynamically increase capacity, tune performance, and change the type of EBS volumes on the fly. Throughout, we share tips for success.
The document provides best practices for cloud architecture. It discusses when to cloudify an application based on factors like unpredictable capacity needs, elasticity requirements, and agility in development. It also discusses when not to cloudify, such as if network latency is a concern or vendor lock-in is important. The document then discusses database normalization practices and design considerations for scaling out applications in a stateless manner using services. It emphasizes automation, loose coupling between services, and service discovery mechanisms.
Overview and Best Practices for Amazon Elastic Block Store - September 2016 W...Amazon Web Services
Amazon Elastic Block Store (Amazon EBS) provides persistent block level storage volumes for use with Amazon EC2 instances. In this technical session, we present the differences between the types of Amazon EBS block storage so that you can best understand which storage type to use for your different application deployments. We discuss how to maximize Amazon EBS performance with a special eye towards low-latency and high-throughput applications. We discuss Amazon EBS encryption and share best practices for Amazon EBS snapshot management. Throughout, we share tips for success.
Learning Objectives:
• Learn about the latest updates to EBS
• Learn about best practices for using EBS.
Who Should Attend:
• Application admins, DBAs, database and big data architects
The correct answer is B. To enable encryption for future RDS database backups, we need to modify the backup section of the database configuration in RDS and toggle the "Enable encryption" checkbox. This will encrypt all new backups taken after this change. The other options are incorrect:
A) Enabling default encryption on the S3 bucket won't encrypt existing backups or future RDS backups taken by RDS.
C) Creating an encrypted snapshot from an unencrypted one doesn't help meet the requirements - we need future automated backups from RDS to be encrypted.
So the best option is B - modifying the database configuration directly in RDS to enable encryption for all new automated backups.
The answer is B.
- WOW Air moved their booking engine and content management system to AWS to handle scaling for successful sales campaigns, taking advantage of Amazon RDS and EC2 auto-scaling.
- They used RDS for MySQL and PostgreSQL to avoid managing databases themselves and easily scale their instances vertically and horizontally. Cross-region replication on RDS helped serve users from multiple regions.
- The document discusses high availability features of RDS like Multi-AZ deployment and Amazon Aurora, as well as tools for migrating databases to RDS from on-premises or other database engines.
(DAT204) NoSQL? No Worries: Build Scalable Apps on AWS NoSQL ServicesAmazon Web Services
In this session, we discuss the benefits of NoSQL databases and take a tour of the main NoSQL services offered by AWS—Amazon DynamoDB and Amazon ElastiCache. Then, we hear from two leading customers, Expedia and Mapbox, about their use cases and architectural challenges, and how they addressed them using AWS NoSQL services, including design patterns and best practices. You will walk out of this session having a better understanding of NoSQL and its powerful capabilities, ready to tackle your database challenges with confidence.
Amazon Web Services - Relational Database Service Meetupcyrilkhairallah
The document discusses Amazon Relational Database Service (RDS), a managed database service. It provides an overview of RDS and how it can be used to deploy, operate, and scale databases in the cloud more easily without manual administration. Key topics covered include how to scale databases with RDS, optimize costs using reserved instances, monitor databases with CloudWatch, take automated backups, and perform other administrative tasks without managing the underlying infrastructure.
Advanced data migration techniques for Amazon RDSTom Laszewski
Migrating on premise data from Oracle and MySQL Databases to AWS Oracle and MySQL RDS. These techniques will work for AWS EC2 as well. Scripts included in the slides.
Spark 101 – First Steps To Distributed Computing - Demi Ben-Ari @ Ofek AlumniDemi Ben-Ari
The world has changed and having one huge server won’t do the job anymore, when you’re talking about vast amounts of data, growing all the time the ability to Scale Out would be your saviour.
This lecture will be about the basics of Apache Spark and distributed computing and the development tools needed to have a functional environment.
Bio:
Demi Ben-Ari, Sr. Data Engineer @Windward, Ofek Alumni
Has over 9 years of experience in building various systems both from the field of near real time applications and Big Data distributed systems.
Co-Founder of the “Big Things” Big Data community: http://somebigthings.com/big-things-i...
AWS re:Invent 2016: Building HPC Clusters as Code in the (Almost) Infinite Cl...Amazon Web Services
The document discusses building HPC clusters on AWS in an automated way using infrastructure as code. It provides an overview of why customers use AWS for HPC/HTC workloads due to benefits like time to research, innovation, scalability, cost savings using spot instances, and data services. The document outlines challenges in automating cluster deployment, integrating storage, networking, and services, and discusses how Fermilab is using AWS for various HEP workloads like NOvA data processing and CMS Monte Carlo simulation through their HEP Cloud Facility project.
This document provides an overview of scalable architecture strategies on AWS. It discusses:
1. Scaling the infrastructure seamlessly by adding more resources as needed to support growth in users and traffic, without performance drops or practical limits.
2. How Sanlih E-Television used AWS to support its online strategy and estimated 30% savings over other cloud providers due to AWS's stability, competitive pricing, and ability to integrate internet and mobile services.
3. Different strategies for scaling architectures on AWS including separating databases from application servers, using caching, offloading static content to S3, and implementing auto-scaling and load balancing.
The document discusses how to build cloud-enabled apps that can scale on AWS. It covers scaling vertically by increasing instance sizes, scaling horizontally by adding more instances, using auto-scaling to dynamically scale based on demand, distributing load with an ELB, scaling databases using read replicas and sharding, and taking advantage of managed database services like RDS and DynamoDB for easier administration. It also discusses decomposing applications into small, stateless components and using infrastructure as code for continuous deployment and agility.
For people who start to create a cloud service, it’s really important to know how to create a scalable cloud service to fit the growth of the future workloads. In this session, we will introduce how to design a scalable cloud service including AWS services introduction and best practices.
This document provides an overview of migrating applications and workloads to AWS. It discusses key considerations for different migration approaches including "forklift", "embrace", and "optimize". It also covers important AWS services and best practices for architecture design, high availability, disaster recovery, security, storage, databases, auto-scaling, and cost optimization. Real-world customer examples of migration lessons and benefits are also presented.
Learn about the patterns and techniques a business should be using in building their infrastructure on Amazon Web Services to be able to handle rapid growth and success in the early days. From leveraging highly scalable AWS services, to architecting best patterns, there are a number of smart choices you can make early on to help you overcome some typical infrastructure issues.
Presenter: Chris Munns,Solutions Architect, Amazon Web Services
AWS Webcast - Managing Big Data in the AWS Cloud_20140924Amazon Web Services
This presentation deck will cover specific services such as Amazon S3, Kinesis, Redshift, Elastic MapReduce, and DynamoDB, including their features and performance characteristics. It will also cover architectural designs for the optimal use of these services based on dimensions of your data source (structured or unstructured data, volume, item size and transfer rates) and application considerations - for latency, cost and durability. It will also share customer success stories and resources to help you get started.
AWS Summit London 2014 | Scaling on AWS for the First 10 Million Users (200)Amazon Web Services
This mid-level technical session will provide an overview of the techniques that you can use to build high-scalabilty applications on AWS. Take a journey from 1 user to 10 million users and understand how your application's architecture can evolve and which AWS services can help as you increase the number of users that you serve.
This document provides an overview of AWS (Amazon Web Services) for Java developers. It introduces the speaker and covers various AWS core services including S3, EC2, databases, Elastic Beanstalk, EC2 Container Service, tooling, billing, and monitoring. Serverless architectures using AWS Lambda are also discussed. The document concludes with demos of building serverless projects in Eclipse using AWS services like API Gateway and DynamoDB.
JustGiving | Serverless Data Pipelines, API, Messaging and Stream ProcessingBEEVA_es
PPT de la presentación de Richard T. Freeman en el Meetup de BEEVA. Marzo 2017.
https://www.meetup.com/es-ES/Innovative-technology-BEEVA/events/238027581/
Cloud computing provides on-demand access to computing resources like storage, networking, and servers that can be rapidly provisioned without long wait times. There are public clouds run by third parties and private clouds within a company's own data center. Public clouds offer elastic resources without large upfront costs but less control, while private clouds offer more control within existing infrastructure limitations. Major cloud providers like Amazon Web Services offer infrastructure as a service (IaaS) like computing and storage, and platform as a service (PaaS) that automates services like databases.
In this presentation, you will get a look under the covers of Amazon Redshift, a fast, fully-managed, petabyte-scale data warehouse service for less than $1,000 per TB per year. Learn how Amazon Redshift uses columnar technology, optimized hardware, and massively parallel processing to deliver fast query performance on data sets ranging in size from hundreds of gigabytes to a petabyte or more. We'll also walk through techniques for optimizing performance and, you’ll hear from a specific customer and their use case to take advantage of fast performance on enormous datasets leveraging economies of scale on the AWS platform.
ENT305 Migrating Your Databases to AWS: Deep Dive on Amazon Relational Databa...Amazon Web Services
Amazon RDS allows you to launch an optimally configured, secure and highly available database with just a few clicks. It provides cost-efficient and resizable capacity, automates time-consuming database administration tasks, and provides you with six familiar database engines to choose from: Amazon Aurora, Oracle, Microsoft SQL Server, PostgreSQL, MySQL and MariaDB. In this session, we will take a close look at the capabilities of Amazon RDS and explain how it works. We’ll also discuss the AWS Database Migration Service and AWS Schema Conversion Tool, which help you migrate databases and data warehouses with minimal downtime from on-premises and cloud environments to Amazon RDS and other Amazon services. Gain your freedom from expensive, proprietary databases while providing your applications with the fast performance, scalability, high availability, and compatibility they need.
AWS as a Data Platform for Cloud and On-Premises Workloads | AWS Public Secto...Amazon Web Services
This session discusses the set of data services that AWS offers for managing all types of data, including files, objects, databases, and data warehouses. We will discuss use cases for each AWS data service, including unique capabilities that the cloud enables and hybrid scenarios for integrating and migrating on-premises data to AWS. This session discusses Amazon S3, AWS Storage Gateway, Amazon EBS, Amazon RDS, Amazon Redshift, and native databases running on AWS. It also covers some of the key data and storage capabilities provided by AWS partners, and considerations for integrating with and migrating enterprise data to the cloud.
a session in AWS Riyadh User Group to discuss AWS RDS >> which is fully managed service to handle all Database management and administrations tasks with multiple engines support
This document provides an overview and agenda for an AWS workshop. It introduces the presenter and covers various AWS services including compute (EC2, Lambda), storage (S3, EBS), databases (RDS), and serverless architecture. It also discusses AWS tooling, billing, security, and monitoring. The document concludes by pointing attendees to example labs they can complete to get hands-on experience with AWS.
Accenture Cloud Platform helps customers manage public and private enterprise cloud resources effectively and securely. In this session, learn how we designed and built new core platform capabilities using a serverless, microservices-based architecture that is based on AWS services such as AWS Lambda and Amazon API Gateway. During our journey, we discovered a number of key benefits, including a dramatic increase in developer velocity, a reduction (to almost zero) of reliance on other teams, reduced costs, greater resilience, and scalability. We describe the (wild) successes we’ve had and the challenges we’ve overcome to create an AWS serverless architecture at scale. Session sponsored by Accenture.
AWS Competency Partner
AWS re:Invent 2016: AWS Database State of the Union (DAT320)Amazon Web Services
Raju Gulabani, vice president of AWS Database Services (AWS), discusses the evolution of database services on AWS and the new database services and features we launched this year, and shares our vision for continued innovation in this space. We are witnessing an unprecedented growth in the amount of data collected, in many different shapes and forms. Storage, management, and analysis of this data requires database services that scale and perform in ways not possible before. AWS offers a collection of such database and other data services like Amazon Aurora, Amazon DynamoDB, Amazon RDS, Amazon Redshift, Amazon ElastiCache, Amazon Kinesis, and Amazon EMR to process, store, manage, and analyze data. In this session, we provide an overview of AWS database services and discuss how our customers are using these services today.
Learn how AWS customers save money, time and effort by using AWS's backup and archive services. Organizations of all sizes rely on AWS services to durably safeguard their data off-premises at a surprisingly low cost. This session will illustrate backup and archive architectures that AWS customers are benefitting from today.
Scaling the Platform for Your Startup - Startup Talks June 2015Amazon Web Services
Join AWS at this session to understand how to architect an infrastructure to handle going from zero to millions of users. From leveraging highly scalable AWS services to making smart decisions on building out your application, you'll learn a number of best practices for scaling your infrastructure in the cloud.
Migrating Your Databases to AWS Deep Dive on Amazon RDS and AWSKristana Kane
This document provides an overview of migrating databases to AWS using Amazon RDS and AWS Database Migration Service (DMS). It discusses how AWS RDS offers scalable, managed relational databases, the different database engines supported by RDS, and key features like security, monitoring, high availability and scaling. It then covers how AWS DMS can be used to migrate databases to AWS with no downtime by continuously replicating and migrating data. Finally, it shares examples of how customers have used RDS and DMS for heterogeneous, homogeneous, large-scale and split migrations.
Repeating History...On Purpose...with ElixirBarry Jones
A dive into the highlights of Elixir that make it the ideal platform for the web...and how all these questions were answered figured out 30 years ago. Presented to Upstate Elixir in Greenville, SC on Nov 16.
jRuby fixes some issues with the Ruby programming language like memory leaks and lack of kernel level threading by running Ruby code on the Java Virtual Machine which has features like a sophisticated garbage collector, just-in-time compilation for improved performance, and native threading; benchmarks show jRuby provides much higher concurrency and better performance than Ruby for background processing and web applications; deploying a Ruby application using jRuby and a Java application server like Torquebox allows it to take advantage of the reliability, scalability and deployment features of the Java platform.
This document introduces PostGIS, an extension to PostgreSQL that adds support for geographic objects allowing location queries to be run in SQL. It discusses geospatial data types and functions in PostGIS for working with spatial features like points, lines, polygons, and rasters. PostGIS allows importing and exporting geospatial data, integration with GIS software, and access to open mapping data sources. It also covers spatial queries and analysis in PostGIS using functions for distance, containment, intersections and more. Additional topics mentioned include pgRouting for routing/navigation, generating maps/images from PostGIS data, and real-world use cases.
This document provides an overview and review of relational database concepts and ActiveRecord functionality in Rails. It discusses the ACID principles of atomicity, consistency, isolation, and durability and how they are achieved. It also covers topics like transactions, locking, callbacks, associations, queries, and using the database console. The document aims to explain why following database rules ensures data integrity and discusses when it may be better to handle things in the database rather than just in Rails code.
Barry Jones introduces himself as the instructor for the Ruby on Rails and PostgreSQL course. He has experience developing applications using various languages and databases. He wishes a course like this had been available when he took over a large Perl to Rails conversion project without knowing Rails or PostgreSQL, which led to issues he later had to fix. The goal of the course is to help students gain proficiency with Rails and PostgreSQL faster to avoid similar mistakes.
My experiences combatting phishing and fraud using DMARC and assorted other techniques in a large eBay-like platform for a niche market...when the site previously did everything over direct user email...for over a decade.
An overview of Ruby, jRuby, Rails, Torquebox, and PostgreSQL that was presented as a 3 hour class to other programmers at The Ironyard (http://theironyard.com) in Greenville, SC in July of 2013. The Rails specific sections are mostly code samples that were explained during the session so the real focus of the slides is Ruby, "the rails way" / workflow / differentiators and PostgreSQL.
PostgreSQL - It's kind've a nifty databaseBarry Jones
This presentation was given to a company that makes software for churches that is considering a migration from SQL Server to PostgreSQL. It was designed to give a broad overview of features in PostgreSQL with an emphasis on full-text search, various datatypes like hstore, array, xml, json as well as custom datatypes, TOAST compression and a taste of other interesting features worth following up on.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
1. AWS re:Invent 2013 Recap
A harrowing tale of heroic geeks
embarking on a never-ending quest
for knowledge and a steak dinner
2. Putting things in perspective
• Basics of application constraints
• Address them with software architecture
• THEN re:Invent
– how these tools solve those problems
3. Server Constraints
• Performance
– Processor (super fast)
– RAM (super fast)
– Disk I/O
• Standard Hard Disk (super slow)
• SSD (moderately slow, but no seek time)
– Bandwidth / Network (fast)
• Disk Space
• Disk Reliability
4. Better Living through Architecture
• Efficient Code and Queries
– Lower Processor Usage
– Reduce disk I/O
– Minimize RAM footprint
• Caching
– Reduce disk I/O
– Avoid reprocessing the same code/query
– Avoid calling and waiting for responses from the same external services
– Increase RAM usage
• Background Processes and Queues
– Reduce disk I/O
– This is a low priority and can wait until other resources aren’t occupied
– Predictable processing (no such thing as an overloaded queue)
• Throughput Optimization
– Reduce disk I/O
– Reduce bandwidth usage
– Optimized images: use less disk space, bandwidth
– Query only what you need: minimize bandwidth between database and application server
GOAL:
Minimize Disk I/O
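The caching bullet above (trade RAM for disk I/O, avoid re-running the same query) can be sketched in a few lines of Python. `load_user_from_db` is a hypothetical stand-in for a disk-bound query, not a real API:

```python
import functools

# Memoize an "expensive" lookup: the first call pretends to hit disk,
# repeat calls are served from RAM via the LRU cache.
@functools.lru_cache(maxsize=1024)
def load_user_from_db(user_id):
    # imagine a real database query (disk I/O) happening here
    return {"id": user_id, "name": f"user-{user_id}"}

first = load_user_from_db(7)    # cache miss: "touches disk"
second = load_user_from_db(7)   # cache hit: no disk I/O
print(load_user_from_db.cache_info().hits)  # → 1
```

Memcached or ElastiCache plays the same role across servers; `lru_cache` is just the single-process version of the idea.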
5. Response Time Limits
• 0.1 second
– Limit for having the user feel that the system is reacting instantaneously
– No special feedback is necessary except to display the result
• 1.0 second
– About the limit for the user's flow of thought to stay uninterrupted
– User will notice the delay
– No special feedback is necessary during delays > 0.1s but < 1.0s
– User does lose the feeling of operating directly on the data.
• 10 seconds
– About the limit for keeping the user's attention focused on the dialogue
– Users will want to perform other tasks while waiting
– Should be given feedback indicating expected completion
– Feedback during the delay is more important for variable response times
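The three thresholds above can be encoded as a small helper that decides what feedback a UI owes the user for a given delay (the message strings are illustrative, not from the talk):

```python
def feedback_for(delay_seconds):
    """Map a response delay to UI feedback, per the 0.1s / 1s / 10s limits."""
    if delay_seconds <= 0.1:
        return "instantaneous: just display the result"
    if delay_seconds <= 1.0:
        return "noticeable delay, but no special feedback needed"
    if delay_seconds <= 10.0:
        return "attention is slipping: show an activity indicator"
    return "user will task-switch: show expected completion time"

print(feedback_for(0.05))
print(feedback_for(30))
```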
9. Remove Unnecessary Requests
CDN
• Combine & Minify JS and CSS files to limit requests
• Use public CDNs for common libraries to leverage browser cache (ex - jQuery via Google)
• Removes library from your rolled JS which shrinks the download on redeploy
• Use Image Sprites to minimize requests for multiple images
10. Remove Server Stress
Dynamic Server Capacity
Dynamic Load Balancing
Unlimited Storage
Redundant Storage
Scalable Disk I/O
Distributed Caching
The initial server request is the bottleneck that blocks all other page loads from starting
12. AWS Overview
Regions: US East (Virginia), US West (Oregon), US West (N. California), EU (Ireland), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), South America (Sao Paulo), AWS GovCloud (US)
13. AWS Overview
Availability Zones within each region: US East (Virginia), US West (Oregon), US West (N. California), EU (Ireland), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), South America (Sao Paulo), AWS GovCloud (US)
21. Hands on Labs
• Chances to experiment
• Variable complexity
• Lots to choose from
22. Vendors…so many vendors
• Demoing and answering questions about their products
• AWS Marketplace
– License software on your own instances
• Hosted services using AWS cloud
– Use their APIs
• Services for managing AWS via AWS APIs
25. Quick Hits
• New at Amazon
– RDS for PostgreSQL
*huge applause*
– DynamoDB
• NoSQL as a service
• Across availability zones
– Virtual Desktops
– AppStream
• Stream graphics-intensive apps to mobile devices
– Kinesis
• Drink from the real-time data firehose
• Notables already there
– Redshift
• Data warehouse
• Interface mirrors Postgres
– Glacier
• Dirt-cheap long-term storage
• In-house appliance to mimic/replace tape
• Auto-backup from S3
– MySQL RDS adds streaming read replicas across availability zones
26. Scaling for the 1st 10 Million
No Users – start small
• What does your app do?
• Do everything on one server
> 100 Users
• Start infrastructure separation
• Web and Database
> 1K Users
• Start setting up redundancy
• Multiple webs in different zones
• Elastic load balancing across webs
• Multi AZ database
> 10K Users
• Use more redundancy
• Lots of caching
– DB and session caching (Memcached/Elasticache)
– Static assets to CDN
– Use online storage for most things (S3)
> 500K Users
• Service Oriented Architecture
– “Move services into their own tiers of modules. Treat each of these as a 100% separate piece of infrastructure and scale independently.”
– Loose coupling, using messaging as a buffer (SQS)
> 5M Users
• Typical to run into issues on DB write master
• Consider sharding and/or other DB technologies
– PostgreSQL XC
– MongoDB
– DynamoDB
Musts at ALL levels
• Good monitoring, metrics and logging tools
Things that are helpful on most levels
• Auto scaling – increase/decrease your resources as load requires
27. Dynamic CDN?
• CloudFront CDN speeds up dynamic content
– Geographic SSL endpoint
– Uses a single keep-alive connection
– Your server holds fewer connections due to consistent user bandwidth
• Connections to your server are always Tier 1
• CloudFront handles the longer download times
28. SOA w/ SQS + SNS
• SOA = Service Oriented Architecture
• SNS = Simple Notification Service
• SQS = Simple Queue Service
• Each service has a queue
• Each service has a notification service
• The queues subscribe to whatever notifications that particular service cares about
• A job processes the messages in the queue
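The fan-out pattern on this slide can be sketched without AWS at all. Below, a stand-in `Topic` class plays the role of an SNS topic and plain deques play the SQS queues; all names are illustrative, not part of any AWS SDK:

```python
from collections import deque

class Topic:
    """Toy SNS topic: fans each published message out to subscribed queues."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, queue):
        self.subscribers.append(queue)

    def publish(self, message):
        for queue in self.subscribers:
            queue.append(message)

# Each service owns a queue and subscribes it to the topics it cares about.
order_events = Topic()
billing_queue, shipping_queue = deque(), deque()
order_events.subscribe(billing_queue)
order_events.subscribe(shipping_queue)

order_events.publish({"order_id": 42, "total": 19.99})

def process(queue):
    """Background job: drain a service's queue and handle each message."""
    handled = []
    while queue:
        handled.append(queue.popleft())
    return handled

print(process(billing_queue))   # each subscriber got its own copy
print(process(shipping_queue))
```

The key design property is the same as with real SNS+SQS: the publisher never knows who is listening, so services stay loosely coupled.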
29. Automating Media Flows
Ingest – Fast file transfers
• Companies:
• Aspera (used by Netflix)
• Attunity Cloudbeam
• Signiant
• Open Source: Tsunami UDP
Process
• Elastic Transcoder
• Reduced Redundancy Storage (S3)
• Backup with Glacier
• Select a Thumbnail
• Extract several samples
• Ask Mechanical Turk to choose
• Turks are people too
Deliver
• Save in Database
• Add to CloudSearch index
• Serve with Application
• Streamed via Cloudfront CDN
Amazon Services used…
• S3
• Glacier
• RDS/DynamoDB
• Elastic Transcoder
• Simple Workflow (SWF)
• Mechanical Turk
• EC2
• Cloudfront
• CloudSearch
30. Big Data Analytics
• Amazon Redshift
– Analytic, Indexless Database
– Huge Queries, Fast
– Load data, process data, kill instance
– Query interface mirrors PostgreSQL
• Amazon Elastic Map Reduce (EMR) / Hadoop
– Mortar framework, Pig, and Lipstick (a visualizer for Pig)
• Track timing on each piece of each job
• Visually break down how the job is working
• Identify time constraints / bottlenecks
• Schedule cluster lifetime
• Keeps historical operations data after the cluster is destroyed
• Store results in Dynamo/Mongo instead of S3
• JasperSoft
– Open source Business Intelligence (BI)
– Available in AWS Marketplace
– Works with almost any backend
• Hadoop, EMR, Redshift, PostgreSQL
31. Amazon Kinesis
Overview
• Streaming map/reduce
• Routes incoming data by type to ensure appropriate processing
• Auto-provisions instances to handle streaming load
• Allows failover within a few seconds for streaming data
• Integrates with DynamoDB to store incoming results
• Uses Java to implement the processing logic
Use Case: Twitter Firehose
…from the movie UHF…funny movie
And yes that guy is Kramer from Seinfeld
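The "routes incoming data by type" bullet above is the heart of the Kinesis model. A minimal sketch of that routing idea, with an in-memory list standing in for the stream and hypothetical record types (`click`, `purchase`) standing in for partition keys:

```python
from collections import defaultdict

def route(records, processors):
    """Dispatch each record to the processor registered for its type,
    folding records of the same type into a running result."""
    results = defaultdict(int)
    for record in records:
        key = record["type"]
        if key in processors:
            results[key] = processors[key](results[key], record["value"])
    return dict(results)

processors = {
    "click": lambda acc, v: acc + 1,     # count click events
    "purchase": lambda acc, v: acc + v,  # sum purchase totals
}

stream = [
    {"type": "click", "value": None},
    {"type": "purchase", "value": 20},
    {"type": "click", "value": None},
]
print(route(stream, processors))  # → {'click': 2, 'purchase': 20}
```

In real Kinesis the processing logic runs in Java workers and the running results would land in DynamoDB, as the slide notes.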
32. MONyog
• Monitors MySQL
• Makes recommendations based on
– Current configuration
– Best practices
• Sort of like an auto-tune for MySQL
33. Hybrid Cloud
Eucalyptus
• Open-source implementation of the Amazon infrastructure APIs
• Develop and deploy against Amazon APIs in your own datacenter
• Portable automation code
VPN
• Set up a Virtual Private Cloud (VPC) within the Amazon network
• Set up a dedicated VPN connection between your datacenter and Amazon datacenters
34. Controlling the Flood
• DynamoDB
– Fast-write NoSQL database
– Provisioned by preselecting a throughput level (reads/writes per second)
– When you go over…tough
• Simple Queue Service
– Auto-scaling queuing system
– Handles high-load fast writes
– When writes fail due to the throughput threshold, stick them in the queue
– Have a background worker keep trying to write to DynamoDB until throughput is available
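The queue-as-pressure-valve pattern above can be modeled in a few lines. `ProvisionedTable` and `ThroughputExceeded` below are toy stand-ins for a DynamoDB table and its throttling error, not real SDK classes; `tick()` simulates a new second of provisioned capacity:

```python
from collections import deque

class ThroughputExceeded(Exception):
    """Stand-in for DynamoDB's throughput-exceeded error."""

class ProvisionedTable:
    """Toy DynamoDB table that allows a fixed number of writes per tick."""
    def __init__(self, writes_per_tick):
        self.writes_per_tick = writes_per_tick
        self.used = 0
        self.items = []

    def tick(self):
        """A new second of provisioned capacity becomes available."""
        self.used = 0

    def put_item(self, item):
        if self.used >= self.writes_per_tick:
            raise ThroughputExceeded()
        self.used += 1
        self.items.append(item)

def write_with_buffer(table, queue, item):
    """Try the table first; on throttle, park the write in the queue."""
    try:
        table.put_item(item)
    except ThroughputExceeded:
        queue.append(item)

def drain(table, queue):
    """Background worker: retry queued writes while capacity lasts."""
    while queue:
        try:
            table.put_item(queue[0])
            queue.popleft()
        except ThroughputExceeded:
            break  # out of capacity, wait for the next tick

table, queue = ProvisionedTable(writes_per_tick=2), deque()
for i in range(5):
    write_with_buffer(table, queue, i)   # 2 writes land, 3 are queued
table.tick()
drain(table, queue)                      # 2 more land on the next tick
print(len(table.items), len(queue))      # → 4 1
```

The point the slide makes survives the simplification: no write is lost, it is only delayed until throughput is available.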
35. Key Lesson: Automate Everything
• 1 server…configure manually
• 2 servers…configure each one…maybe
• > 2 servers…automate or pray
– Puppet: Cross platform provisioning automation
• Yes, even Windows
– Vagrant: Excellent tool for using production virtual machines in development environments
• Also great for experimenting with Puppet
36. Key Lesson: AWS does not mean best
• Amazon provides a lot of tools…not all of them are perfect
• Amazon tools are usually better integrated
• Elastic Beanstalk was heavily made fun of by PRESENTERS for its weaknesses
• Systems that have standard APIs are more portable…for example
– Elasticache is Memcached and Redis
– Elastic Map Reduce is Hadoop
– EC2 instances are just virtual servers running an OS
• Many infrastructure vendors within AWS datacenters
• A lot of services are extremely easy to configure compared to paying somebody else a markup to do it for you
– Offload the complexity
37. Key Lesson: Load Test & Monitor
• You cannot know your bottlenecks until your application is under stress
• You cannot know your bottlenecks unless you are monitoring your infrastructure
38. Research it Yourself
Find MOST presentations
http://reinvent.awsevents.com/recap2013.html
Some are missing. There was an amazing session from Loggly that is not present.
Update: Here it is… http://slidesha.re/1cTl8v7
39. Recommended Presentations
• Coding Tips you should know before distributing your HTML5 Web App on Mobile
– http://bit.ly/1amFtID
• Automated Media Workflows in the Cloud
– http://bit.ly/1iWQglf
• Dynamic Content Acceleration using Cloudfront and Route53
– http://bit.ly/IEklGU
• Controlling the Flood: Massive Message Processing with SQS and DynamoDB
– http://bit.ly/19mzn0o
• Building Scalable Windows and .NET Apps
– http://bit.ly/1eitluH
• 7 use cases in 7 minutes: The Power of Workflow Automation
– http://bit.ly/1bwvdwT
• Professional Grade Cloud for your Hybrid IT needs
– http://bit.ly/1gKI2Jt
• Scaling for your first 10 million users
– http://bit.ly/1dA4SiW
• Drinking Our Own Champagne: How w00t, an Amazon subsidiary, uses AWS
– http://slidesha.re/1kstOSu
• Building a Scalable Digital Asset Management Platform (PBS)
– http://bit.ly/19DiGbd
• Instrumenting Your App Stack in a Dynamically Scaling Environment
– http://bit.ly/1gEchEv
• Intrusion Detection in the Cloud
– http://bit.ly/18C6nf4
40. And about that steak dinner
• 7 hungry men wandered the entire Las Vegas strip for 3 hours (give or take 2 hours) trying to make a decision on where to eat a steak…
• The quest finally ended at…
Wolfgang Puck at the MGM Grand