This session discusses the set of data services that AWS offers for managing all types of data, including files, objects, databases, and data warehouses. We will discuss use cases for each AWS data service, including unique capabilities that the cloud enables and hybrid scenarios for integrating and migrating on-premises data to AWS. This session discusses Amazon S3, AWS Storage Gateway, Amazon EBS, Amazon RDS, Amazon Redshift, and native databases running on AWS. It also covers some of the key data and storage capabilities provided by AWS partners, and considerations for integrating with and migrating enterprise data to the cloud.
AWS re:Invent 2016: What’s New with Amazon Redshift (BDA304) | Amazon Web Services
In this session, you learn about the latest and hottest features of Amazon Redshift. Join Vidhya Srinivasan, General Manager of Amazon Redshift, to take a deep dive into the architecture and inner workings of Amazon Redshift. You discover how the recent availability, performance, and manageability improvements we’ve made can significantly enhance your end user experience. You also get a glimpse of what we are working on and our plans for the future.
AWS re:Invent 2016: Workshop: Using the Database Migration Service (DMS) for ... | Amazon Web Services
AWS Database Migration Service (DMS) can help you do much more than one-time migrations. You can use DMS to consolidate multiple databases into a single database or split a single database into multiple databases. You can also use DMS for data distribution to multiple systems. For both of these use cases, your source database can be outside of AWS (on premises) or in AWS (EC2 or RDS). DMS can also be used for near real-time replication of data. Replication can be done to one or more targets within AWS, in the same region or across regions. You can also replicate data from databases within AWS to databases outside of AWS. In this session, we discuss all of these usage patterns and help you try them out yourself.
Prerequisites:
You should have good database knowledge and at least some experience with Amazon RDS or Amazon Aurora.
Participants should have an AWS account established and available for use during the workshop.
Please bring your own laptop.
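The replication patterns described above all reduce to configuring a DMS replication task between a source and a target endpoint. The sketch below builds the parameters that boto3's `dms.create_replication_task` call expects; the ARNs and task names are hypothetical placeholders, and the actual API call is omitted so the example stays self-contained.

```python
import json

def build_dms_task_params(task_id, source_arn, target_arn, instance_arn,
                          migration_type="full-load-and-cdc", schema="%"):
    """Build the parameter dict for a DMS replication task.

    migration_type mirrors the patterns in the session:
      "full-load"         - one-time migration
      "cdc"               - ongoing (near real-time) replication only
      "full-load-and-cdc" - migrate, then keep replicating
    """
    # Table mappings select which schemas/tables the task replicates.
    table_mappings = {
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": schema, "table-name": "%"},
            "rule-action": "include",
        }]
    }
    return {
        "ReplicationTaskIdentifier": task_id,
        "SourceEndpointArn": source_arn,
        "TargetEndpointArn": target_arn,
        "ReplicationInstanceArn": instance_arn,
        "MigrationType": migration_type,
        "TableMappings": json.dumps(table_mappings),
    }

# Consolidation pattern: two sources replicating into one target
# (all ARNs below are hypothetical).
params = [
    build_dms_task_params(f"consolidate-src{i}", f"arn:src{i}",
                          "arn:tgt", "arn:inst")
    for i in (1, 2)
]
```

The splitting pattern is the inverse: one source, several tasks whose table mappings each select a different subset of schemas.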
Best Practices Scaling Web Application Up to Your First 10 Million Users | Amazon Web Services
If you have a new web application and want to use cloud computing, you might be asking yourself, "Where do I start?" Cloud computing gives you a number of advantages, such as the ability to scale your web application on demand. Join us in this webinar to understand best practices for scaling your resources from zero to millions of users. We show you how to best combine different AWS services, how to make smarter decisions for architecting your application, and how to scale your infrastructure in the cloud.
Monitoring Performance of Enterprise Applications on AWS: Understanding the D... | Amazon Web Services
Applications running in a typical data center are static entities. But applications aren't static in the cloud. Dynamic scaling and resource allocation is the norm in AWS. Technologies such as Amazon EC2, AWS Lambda, and Auto Scaling provide flexibility but can add complexity to the enterprise application environment. New Relic helps manage that complexity to give the benefits of the cloud without sacrificing simplicity. In this session, we discuss some of the best practices we’ve learned working with New Relic customers on how to manage applications running in this environment and take advantage of the dynamic nature of the cloud to give you additional insights into your application performance.
Deploy, scale, and manage your Microsoft workloads on AWS. We start our session by discussing why customers want to deploy Microsoft Windows applications on AWS as a cloud platform. We talk about reference architectures and best practices for implementing Microsoft products and technologies including Active Directory, Remote Desktop Gateway, Exchange, SharePoint, and Lync in the AWS cloud. We conclude with best practices for managing and monitoring Microsoft technologies in the AWS cloud.
Speaker: Andy Reay, Solutions Architect, Amazon Web Services
Hybrid IT Approach and Technologies with the AWS Cloud | AWS Public Sector Su... | Amazon Web Services
This session is recommended for anyone considering using the AWS cloud to augment their current capabilities. Adoption of cloud computing provides access to the benefits of new deployment models with significant cost and agility benefits. But how can the cloud benefit existing government organizations that have invested large amounts of resources in existing on-premises technologies? This session outlines several key factors to consider from the point of view of the large-scale IT shop stakeholder. Because each organization has its unique set of challenges in cloud adoption, this session compares some of the opportunities and risks of several hybrid cloud use-case models and then helps customers understand the cloud-native and third-party vendor options available that bridge the gap to the cloud for large-scale government environments.
Organizations where cloud adoption has matured into broader enterprise deployment face the need to better manage and control their costs and expenditures. Cost optimization at scale is a process that involves a number of changes across the business, including technical, organizational, and cultural transformation. In this session, you will learn the fundamentals of cost optimization and how it can help your organization drive costs down while still meeting capacity, demand, and organizational requirements. Key topics include right-sizing services, optimizing purchase models, and implementing a culture of cost management.
AWS re:Invent 2016: Cloud Monitoring - Understanding, Preparing, and Troubles... | Amazon Web Services
Applications running in a typical data center are static entities. Dynamic scaling and resource allocation are the norm in AWS. Technologies such as Amazon EC2, Docker, AWS Lambda, and Auto Scaling make tracking resources and resource utilization a challenge. The days of static server monitoring are over.
In this session, we examine trends we’ve observed across thousands of customers using dynamic resource allocation and discuss why dynamic infrastructure fundamentally changes your monitoring strategy. We discuss some of the best practices we’ve learned by working with New Relic customers to build, manage, and troubleshoot applications and dynamic cloud services. Session sponsored by New Relic.
AWS Competency Partner
Lou Osborne takes us on a journey through Microsoft Windows Server, SQL Server, and SharePoint, and how these different solutions can be easily implemented on the AWS Cloud.
AWS re:Invent 2016: Partner-Led Migrations to AWS Starting with the Enterpris... | Amazon Web Services
AWS is investing in enterprise migration program initiatives. In this session, learn how you can take advantage of the latest partner programs, tools, and methodologies supporting enterprise migrations. Many enterprises are starting with migrating desktop computing as a first step; we dive into specific partner opportunities and approaches to drive enterprise migration projects in this area.
Data center migrations can involve thousands of workloads and tens of thousands of servers, which are often deeply interdependent. Application discovery and dependency mapping are important first steps in the migration process, but they are difficult to perform at scale due to the lack of automated tools. AWS Application Discovery Service is a new service (coming soon) that automatically identifies data center applications and dependencies, and baselines application health and performance to help you plan your application migration to AWS quickly and reliably. This talk introduces the new Application Discovery Service capabilities for simplifying the planning process for data center and large-scale migrations to AWS. We will discuss how you can use AWS Application Discovery Service to examine the applications running in your data center, their attributes, and their dependencies, and then use this information to help reduce the time, cost, and risk of migrating applications to AWS.
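The reason dependency mapping matters for planning is that dependencies constrain migration order: an application should not move before the systems it depends on. A minimal sketch of that planning step, using entirely hypothetical discovery data (not the service's actual output format), groups applications into migration waves:

```python
# Hypothetical dependency data: app -> list of apps it depends on.
deps = {
    "web-frontend": ["order-api", "auth"],
    "order-api": ["orders-db"],
    "auth": ["users-db"],
    "orders-db": [],
    "users-db": [],
}

def migration_waves(deps):
    """Group applications into waves so dependencies migrate before dependents."""
    remaining = set(deps)
    waves = []
    while remaining:
        # An app is ready once none of its dependencies are still unmigrated.
        wave = sorted(a for a in remaining
                      if all(d not in remaining for d in deps[a]))
        if not wave:
            raise ValueError("circular dependency; migrate the cycle together")
        waves.append(wave)
        remaining -= set(wave)
    return waves

waves = migration_waves(deps)
# Databases form the first wave, then the services built on top of them.
```

This is only the ordering logic; in practice the discovered dependency graph also drives decisions like which applications must move together in a single cutover.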
Get Started Today with Cloud-Ready Contracts | AWS Public Sector Summit 2017 | Amazon Web Services
In this session, we provide an overview of existing cloud-ready contracts, such as cooperative, federal, and state directed contracts, and walk through steps on how to choose the right one for your procurement. We compare various cloud-ready contracts by identifying scope, end-user eligibility, and primary service offerings to help you make the right choice for your mission needs. Learn More: https://aws.amazon.com/government-education/
AWS offers you the ability to add additional layers of security to your data at rest in the cloud, providing access control as well as scalable and efficient encryption features. Flexible key management options allow you to choose whether to have AWS manage the encryption keys or to keep complete control over the keys yourself. In this session, you will learn how to secure data when using AWS services. We will discuss data encryption using AWS Key Management Service, S3 access controls, edge and host access security, and database platform security features.
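Two of the controls mentioned above can be sketched concretely: encrypting an S3 object with a KMS key, and an access-control policy that enforces encryption. The bucket name and key alias below are hypothetical; the `put_object` parameters and the `s3:x-amz-server-side-encryption` policy condition are the standard S3 mechanisms, shown here as constructed parameters rather than live API calls.

```python
import json

BUCKET = "example-secure-bucket"       # hypothetical bucket name
KMS_KEY_ID = "alias/example-data-key"  # hypothetical KMS key alias

# Parameters for boto3's s3.put_object: encrypt the object at rest
# with a customer-managed KMS key (SSE-KMS).
put_params = {
    "Bucket": BUCKET,
    "Key": "reports/2016/q4.csv",
    "Body": b"sensitive,data\n",
    "ServerSideEncryption": "aws:kms",
    "SSEKMSKeyId": KMS_KEY_ID,
}

# A bucket policy that denies any upload not requesting SSE-KMS,
# so unencrypted objects can never land in the bucket.
deny_unencrypted = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
        "Condition": {
            "StringNotEquals": {"s3:x-amz-server-side-encryption": "aws:kms"}
        },
    }],
}
policy_json = json.dumps(deny_unencrypted)
```

The same split applies generally: the put-time parameters choose who manages the keys, while the policy provides the access-control layer that makes encryption mandatory.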
AWS re:Invent 2016: Develop Your Migration Toolkit (ENT312) | Amazon Web Services
Learn about some of the most useful and popular tools that you can leverage at various stages of a migration project. These tools will allow your teams to focus on coordinating the migration and automating as many migration activities as possible.
The Well-Architected workshop is a free, advanced-level workshop that describes the benefits of the AWS Well-Architected Framework. This enables customers to review and improve their cloud architectures and better understand the business impact of their design decisions. It addresses general design principles, best practices, and guidance in five pillars of the Well-Architected Framework. We recommend that attendees of this course have the following prerequisites: a strong working knowledge of AWS core services and features, as well as previous architectural experience.
AWS GovCloud (US) and the Enterprise | AWS Public Sector Summit 2016 | Amazon Web Services
A Discussion on Best Practices for Enterprise Adoption and Migration. Join us to learn about enterprise best practices for architecting, migrating, deploying, and operating workloads while meeting compliance requirements in the AWS GovCloud (US) Region.
Matt Nowina, AWS Toronto Enterprise Solutions Architect, takes us on an overview of Cloud Storage solutions, data migration solutions and strategies, and practical examples of how to move data to the Cloud.
Amazon Web Services provides a number of database management alternatives for all types of customers. You can run managed relational databases, managed NoSQL databases, or a petabyte-scale data warehouse, or you can operate your own database in the cloud on Amazon EC2. Discover our database offerings and find out which service fits your existing needs or your next big project. Learn about data migration services, tools, and best practices for security, availability, and scalability, and hear some of the great database success stories from AWS customers.
Speaker: Ari Newman, Account Manager & Rob Carr, Solutions Architect, Amazon Web Services
Featured Customer - Atlassian
Develop a Custom Data Solution Architecture with NorthBay | Amazon Web Services
Organizations that have vast amounts of data in legacy applications often experience difficulties delivering that data to business unit end users. Register to learn how Eliza Corporation and Scholastic overcame this challenge by leveraging a Data Lake solution from NorthBay on AWS to optimize data analytics and provide greater visibility. AWS and NorthBay will give you an in-depth overview of how you can use a Data Lake in conjunction with your existing on-premises or cloud-based Data Warehouse. NorthBay helps organizations scale their ETL and data warehousing workloads using Amazon EMR and Amazon Redshift. Join us to learn:
• Best practices for using a Data Lake in conjunction with your existing data warehouse
• The key aspects of introducing agile and scrum methodologies into an enterprise
• The most impactful cost-savings levers that are addressed via a cloud data warehouse migration
Who should attend: Heads of Analytics, Heads of BI, Analytics Managers, BI Teams, Senior Analysts
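The data lake plus warehouse pattern above usually hinges on one mechanism: loading curated S3 data into Redshift with the `COPY` command. A minimal sketch of building that statement is below; the bucket, schema, and IAM role ARN are hypothetical, and the `IAM_ROLE` / `CSV GZIP` options shown are standard COPY syntax for compressed CSV files.

```python
def redshift_copy_stmt(table, s3_prefix, iam_role_arn, region="us-east-1"):
    """Build a Redshift COPY statement that loads data-lake files from S3.

    COPY loads in parallel across the cluster's slices, which is why
    it is the preferred bulk-load path from a data lake.
    """
    return (
        f"COPY {table}\n"
        f"FROM '{s3_prefix}'\n"
        f"IAM_ROLE '{iam_role_arn}'\n"
        f"REGION '{region}'\n"
        f"CSV GZIP;"
    )

stmt = redshift_copy_stmt(
    "analytics.page_views",
    "s3://example-data-lake/page_views/2016/",            # hypothetical bucket
    "arn:aws:iam::123456789012:role/RedshiftCopyRole",    # hypothetical role
)
```

In this pattern the data lake remains the system of record on S3, and only the subsets needed for BI queries are copied into the warehouse.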
Establishing a Scalable, Resilient Web Architecture | AWS Public Sector Summi... | Amazon Web Services
Amazon Web Services (AWS) provides an ideal platform for running web architectures. This session describes the foundational services required for deploying an example web architecture. It covers Amazon EC2, Amazon EBS, Elastic Load Balancing, Auto Scaling, Amazon S3, Amazon RDS, and Amazon Machine Images (AMIs) and relates overviews of the services back to the example web architecture. After the initial architecture discussion, we will describe the usage of Amazon S3 for scalable content, Elastic Load Balancing, and Auto Scaling to provide high availability.
Automatisierte Kontrolle und Transparenz in der AWS Cloud – Autopilot für Com... | AWS Germany
Talk "Automatisierte Kontrolle und Transparenz in der AWS Cloud – Autopilot für Compliance Ihrer Cloud Ressourcen" (Automated Control and Transparency in the AWS Cloud – Autopilot for Compliance of Your Cloud Resources) by Philipp Behre at the AWS Cloud Web Day for mid-sized and large enterprises. All videos and presentations are available here: http://amzn.to/1VUJZsT
AWS re:Invent 2016: Future-Proofing the WAN and Simplifying Security On Your ... | Amazon Web Services
You can leverage the agility and scale of cloud services and consolidate your data centers, but performance and security are only as good as your WAN. In this session, learn best practices for connecting cloud, data center, and branch sites through public and private networking to maximize performance, minimize costs, and simplify security. Session sponsored by Level 3.
The pathway to the cloud has many different options and levers that customers can pull. This webinar walks customers through the steps from creating a cloud adoption vision to building a migration roadmap with actionable guidance. We go through proven migration patterns, methods, and tooling that AWS has used successfully with hundreds of enterprise customers around the globe. Learn what challenges customers face when planning migrations to the cloud, and how they overcome them to minimize risk and accelerate adoption.
Simplify Your Database Migration to AWS | AWS Public Sector Summit 2016 | Amazon Web Services
Migrating a database from one platform to another has long been a pain point for many organizations. Often, it involves weeks of careful planning and a migration strategy to minimize impact to the business. Many organizations stay locked into a database platform even when better options are available because they don’t want to take on the migration challenge. AWS Database Migration Service helps with live migration of databases across homogeneous or heterogeneous database platforms. The service supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to Amazon Aurora or Microsoft SQL Server to MySQL. The AWS Schema Conversion Tool is a desktop application that makes heterogeneous database migrations easier by automatically converting the source database schema to a format compatible with the target database. The tool helps convert a database schema from an Oracle or Microsoft SQL Server database to an Amazon RDS MySQL DB instance or an Amazon Aurora DB cluster. Join us in this session to explore how these capabilities can simplify your database migration challenge.
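The heart of a heterogeneous migration is schema conversion: each source data type must be mapped onto a compatible target type. The sketch below illustrates the idea with a small, assumed Oracle-to-MySQL mapping; it is not the Schema Conversion Tool's actual rule set, which also handles procedures, indexes, constraints, and many more types.

```python
# Illustrative subset of Oracle -> MySQL type mappings (assumed for the
# example; the real tool's mappings are far more complete and nuanced).
ORACLE_TO_MYSQL = {
    "NUMBER": "DECIMAL(38,10)",
    "VARCHAR2": "VARCHAR",
    "DATE": "DATETIME",
    "CLOB": "LONGTEXT",
    "RAW": "VARBINARY",
}

def convert_column(name, oracle_type, length=None):
    """Convert one Oracle column definition to a MySQL equivalent."""
    mysql_type = ORACLE_TO_MYSQL.get(oracle_type.upper(), oracle_type.upper())
    # Length-parameterized types carry their length across.
    if length and mysql_type in ("VARCHAR", "VARBINARY"):
        mysql_type = f"{mysql_type}({length})"
    return f"{name} {mysql_type}"

cols = [convert_column("id", "NUMBER"),
        convert_column("email", "VARCHAR2", 255),
        convert_column("created", "DATE")]
```

Mappings like these are why heterogeneous migrations need a conversion pass before DMS moves the data: the target schema must exist in the target engine's own types first.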
Partner Recruitment Webinar: "Join the Most Productive Ecosystem in Big Data ... | MongoDB
We are looking for more partners in your region to meet the increasing demand for MongoDB. This is the slide deck of the webinar, broadcast on 21st May 2014, dedicated to exploring whether a MongoDB partnership could benefit your company as well.
In this presentation you can find out more about:
- Why MongoDB is growing so fast and how you can benefit from this fast changing market
- How existing partners succeed with MongoDB and how they benefit
- Potential business opportunities
To give you some idea of the momentum in EMEA:
- Tens of thousands of active leads visiting our website
- Tens of thousands of registrations for MongoDB Online Education
- 30,000+ members on LinkedIn with MongoDB on their profile
Visit the Partner Program http://www.mongodb.com/partners/partner-program for more general information.
About the speaker: Luca Olivari
Luca Olivari is the Director of Business Development at MongoDB, where he's responsible for building the ecosystem in Europe, The Middle East and Africa.
Prior to MongoDB, Luca worked at Oracle, where he led the MySQL Sales Consulting team in EMEA. Before MySQL, he ran the Database and Business Intelligence practice and then coordinated the Business Development and Strategy team for a systems integrator. Luca has a BA in Business and Marketing.
AWS re:Invent 2016: Cloud Monitoring - Understanding, Preparing, and Troubles...Amazon Web Services
Applications running in a typical data center are static entities. Dynamic scaling and resource allocation are the norm in AWS. Technologies such as Amazon EC2, Docker, AWS Lambda, and Auto Scaling make tracking resources and resource utilization a challenge. The days of static server monitoring are over.
In this session, we examine trends we’ve observed across thousands of customers using dynamic resource allocation and discuss why dynamic infrastructure fundamentally changes your monitoring strategy. We discuss some of the best practices we’ve learned by working with New Relic customers to build, manage, and troubleshoot applications and dynamic cloud services. Session sponsored by New Relic.
AWS Competency Partner
Lou Osborne takes us on a journey trough Microsoft Windows, Server, SQL, Sharepoint, and how these different solutions can be easily implemented on the AWS Cloud.
AWS re:Invent 2016: Partner-Led Migrations to AWS Starting with the Enterpris...Amazon Web Services
AWS is investing in enterprise migration program initiatives. In this session, learn how you can take advantage of the latest partner programs, tools, and methodologies supporting enterprise migrations. Many enterprises are starting with migrating desktop computing as a first step; we dive into specific partner opportunities and approaches to drive enterprise migration projects in this area.
Planning datacenter migrations can involve thousands of workloads and tens of thousands of servers and are often deeply interdependent. Application discovery and dependency mapping are important early first steps in the migration process, but difficult to perform at scale due to the lack of automated tools. AWS Application Discovery Service is a new service (coming soon) that automatically identifies data center applications and dependencies, and baselines application health and performance to help plan your application migration to AWS quickly and reliably. This talk introduces the new Application Discovery Service capabilities for simplifying the planning process for data center and large scale migrations to AWS. We will discuss how you can use the AWS Application Discovery Service data service to examine the applications running your data center, their attributes, and their dependencies and then use this information to help reduce the time, cost, and risk of migrating applications to AWS.
Get Started Today with Cloud-Ready Contracts | AWS Public Sector Summit 2017Amazon Web Services
In this session, we provide an overview of existing cloud-ready contracts, such as cooperative, federal, and state directed contracts, and walk through steps on how to choose the right one for your procurement. We compare various cloud-ready contracts by identifying scope, end-user eligibility, and primary service offerings to help you make the right choice for your mission needs. Learn More: https://aws.amazon.com/government-education/
AWS offers you the ability to add additional layers of security to your data at rest in the cloud, providing access control as well scalable and efficient encryption features. Flexible key management options allow you to choose whether to have AWS manage the encryption keys or to keep complete control over the keys yourself. In this session, you will learn how to secure data when using AWS services. We will discuss data encryption using Key Management Service, S3 access controls, edge and host access security, and database platform security features.
AWS re:Invent 2016: Develop Your Migration Toolkit (ENT312)Amazon Web Services
Learn about some of the most useful and popular tools that you can leverage at various stages of a migration project. These tools will allow your teams to focus on coordinating the migration and automating as many migration activities as possible.
The Well-Architected workshop is a free, advanced-level workshop that describes the benefits of the AWS Well-Architected Framework. This enables customers to review and improve their cloud architectures and better understand the business impact of their design decisions. It addresses general design principles, best practices, and guidance in five pillars of the Well-Architected Framework. We recommend that attendees of this course have the following pre-requisites: Strong working knowledge of AWS core services and features, as well as previous architectural experience.
AWS GovCloud (US) and the Enterprise | AWS Public Sector Summit 2016Amazon Web Services
A Discussion on Best Practices for Enterprise Adoption and Migration Join us to learn about enterprise best practices to follow to architect, migrate, deploy, and operate workloads while meeting compliance requirements in the AWS GovCloud (US) Region.
Matt Nowina, AWS Toronto Enterprise Solutions Architect, takes us on an overview of Cloud Storage solutions, data migration solutions and strategies, and practical examples of how to move data to the Cloud.
Amazon Web Services provides a number of database management alternatives for all type of customers. You can run managed relational databases, managed NoSQL databases, a petabyte-scale data warehouse, or you can even operate your own online database in the cloud on Amazon EC2. Discover our database offerings and find what service to use according to your existing needs or how to deliver your next big project. Find out about data migration services, tools and best practices for security, availability and scalability, and hear some of the great database success stories from AWS customers.
Speaker: Ari Newman, Account Manager & Rob Carr, Solutions Architect, Amazon Web Services
Featured Customer - Atlassian
Develop a Custom Data Solution Architecture with NorthBayAmazon Web Services
Organizations that have vast amounts of data in legacy applications often experience difficulties delivering that data to business unit end-users. Register to learn how Eliza Corporation and Scholastic overcame this challenge by leveraging a Data Lake solution from NorthBay on AWS to optimize data analytics and provide greater visibility. AWS and NorthBay will give you an in-depth overview of how you can use a Data Lake in conjunction with your existing on-premises or cloud-based Data Warehouse. NorthBay helps organizations scale their ETL and data warehousing workloads using Amazon EMR and Amazon Redshift. Join us to learn: • Best practices for using a Data Lake in conjunction with your existing data warehouse • The key aspects of introducing agile and scrum methodologies into an enterprise • The most impactful cost-savings levers that are addressed via a cloud data warehouse migration
Who should attend: Heads of Analytics, Heads of BI, Analytics Managers, BI Teams, Senior Analysts
Establishing a Scalable, Resilient Web Architecture | AWS Public Sector Summi...Amazon Web Services
Amazon Web Services (AWS) provides an ideal platform for running web architectures. This session describes the foundational services required for deploying an example web architecture. It covers Amazon EC2, Amazon EBS, Elastic Load Balancing, Auto Scaling, Amazon S3, Amazon RDS, and Amazon Machine Images (AMIs) and relates overviews of the services back to the example web architecture. After the initial architecture discussion, we will describe the usage of Amazon S3 for scalable content, Elastic Load Balancing, and Auto Scaling to provide high availability.
Automatisierte Kontrolle und Transparenz in der AWS Cloud – Autopilot für Com...AWS Germany
Vortrag "Automatisierte Kontrolle und Transparenz in der AWS Cloud – Autopilot für Compliance Ihrer Cloud Ressourcen" von Philipp Behre beim AWS Cloud Web Day für Mittelstand und Großunternehmen. Alle Videos und Präsentationen finden Sie hier: http://amzn.to/1VUJZsT
AWS re:Invent 2016: Future-Proofing the WAN and Simplifying Security On Your ...Amazon Web Services
You can leverage the agility and scale of cloud services and consolidate your data centers, but performance and security are only as good as your WAN. Is this session, learn best practices for connecting cloud, data center, and branch sites through public and private networking to maximize performance, minimize costs, and simplify security. Session sponsored by Level 3.
The pathway to the cloud has many different options and levers that customers can pull. This webinar walks customers through actual steps from creating a cloud adoption vision to actually building a migration roadmap with actionable guidance. We’ll go through proven migration patterns, methods and tooling that AWS has leveraged successfully with hundreds of Enterprise customers around the globe. Learn what challenges customers face when planning the migrations to cloud, and how they overcome them to minimize risk and accelerate the adoption.
Simplify Your Database Migration to AWS | AWS Public Sector Summit 2016Amazon Web Services
Migrating a database from one platform to another has been a pain point for many organizations for a long time. Often times, it involves weeks of careful planning and a migration strategy to minimize impact to the business. Many organizations are locked into a database platform even when there are better options available because they don’t want to take up the migration challenge. AWS Data Migration Service helps with live migration of databases across homogenous or heterogeneous database platforms. The service supports homogenous migrations such as Oracle to Oracle, and also heterogeneous migrations between different database platforms, such as Oracle to Amazon Aurora or Microsoft SQL Server to MySQL. The AWS Schema Conversion Tool is a desktop application that makes heterogeneous database migrations easy by automatically converting the source database schema to a format compatible with the target database. The tool helps with conversion of a database schema from an Oracle or Microsoft SQL Server database to an Amazon RDS MySQL DB instance or an Amazon Aurora DB cluster. Join us in this session to explore how these capabilities can simplify your database migration challenge.
Partner Recruitment Webinar: "Join the Most Productive Ecosystem in Big Data ...MongoDB
We are looking for more partners in your region to deal with the increasing demand for MongoDB. This is the slide deck of the webinar, broadcast on 21 May 2014, exploring whether a MongoDB partnership could benefit your company as well.
In this presentation you can find out more about:
- Why MongoDB is growing so fast and how you can benefit from this fast changing market
- How existing partners succeed with MongoDB and how they benefit
- Potential business opportunities
To give you some idea of the momentum in EMEA:
- Tens of thousands of active leads visiting our website
- Tens of thousands of registrations for MongoDB Online Education
- 30,000+ members on LinkedIn with MongoDB on their profile
Visit the Partner Program http://www.mongodb.com/partners/partner-program for more general information.
About the speaker: Luca Olivari
Luca Olivari is the Director of Business Development at MongoDB, where he's responsible for building the ecosystem in Europe, the Middle East, and Africa.
Prior to MongoDB, Luca worked at Oracle, where he led the MySQL Sales Consulting team in EMEA. Before MySQL, he ran the Database and Business Intelligence practice and then coordinated the Business Development and Strategy team for a systems integrator. Luca has a BA in Business and Marketing.
When you are starting up a high growth business or a product line, it is always tempting to try to boost sales by building a large channel partner network, on the premise that the bigger the channel, the more zillions of your products and services they can sell. In our experience, however, many businesses attempting this model struggle to provide a sufficiently strong business proposition for multiple channel partners to carry their products and services. Furthermore, they very often end up creating channel conflict by building direct sales motions in their efforts to improve results.
In this session, we walk through the Amazon VPC network presentation and describe the problems we were trying to solve when we created it. Next, we walk through how these problems are traditionally solved, and why those solutions are not scalable, inexpensive, or secure enough for AWS. Finally, we provide an overview of the solution that we've implemented and discuss some of the unique mechanisms that we use to ensure customer isolation, get packets into and out of the network, and support new features like VPC endpoints.
AWS Keynote II - New Services Showcase: Connecting the DotsAmazon Web Services
Let’s enter the new world of Internet of Things (IoT) and event-driven compute, which allows companies to foster innovation and reduce complexity. New services like Amazon API Gateway, AWS Lambda, AWS IoT and Alexa Skills Kit all help to build completely serverless, voice-enabled architectures within minutes without managing any servers. In addition, experience Amazon Lumberyard, a free, cross-platform, 3D game engine to create the highest-quality games, connect your virtual worlds to the vast compute and storage of the AWS Cloud, and engage fans on Twitch. All topped off with a live demo of an engaging 3D game world that uses AWS powered micro-services to connect the dots between the virtual and the real world!
DevOps on AWS: Deep Dive on Continuous Delivery and the AWS Developer ToolsAmazon Web Services
Today’s cutting-edge companies have software release cycles measured in days instead of months. This agility is enabled by the DevOps practice of continuous delivery, which automates building, testing, and deploying all code changes. This automation helps you catch bugs sooner and accelerates developer productivity. In this session, we’ll share the processes that Amazon’s engineers use to practice DevOps and discuss how you can bring these processes to your company by using a new set of AWS tools (AWS CodePipeline and AWS CodeDeploy). These services were inspired by Amazon's own internal developer tools and DevOps culture.
Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances for fault tolerance and load distribution. In this session, we go into detail about Elastic Load Balancing's configuration and day-to-day management, as well as its use in conjunction with Auto Scaling. We explain how to make decisions about the service and share best practices and useful tips for success.
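The traffic-spreading idea at the heart of Elastic Load Balancing can be illustrated with a toy round-robin scheduler. This is a deliberately simplified sketch, not ELB's actual algorithm (which also involves health checks, connection counts, and cross-zone balancing); the instance IDs are invented:

```python
from itertools import cycle

class ToyLoadBalancer:
    """Toy round-robin balancer illustrating even traffic spread
    across a pool of instances."""

    def __init__(self, instances):
        # cycle() yields instances in order, forever.
        self._ring = cycle(instances)

    def route(self, request):
        """Return (chosen_instance, request) for one incoming request."""
        return next(self._ring), request

lb = ToyLoadBalancer(["i-aaa", "i-bbb", "i-ccc"])
targets = [lb.route(f"req-{n}")[0] for n in range(6)]
```

Six requests land evenly, two per instance, which is the fault-tolerance payoff: losing one instance only removes one slot from the rotation.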
Join us to learn how the APN can accelerate and support your cloud business strategy. The session will highlight the various routes to market, programs and resources available to AWS Customers and Partners looking to grow and develop their business on AWS.
Rewics (4 May 2011) - Workshop: "Cloud computing: true revolution or damp squib?" With Bruno Schroder, National Technology Officer, Microsoft, and Olivier Loncin, Google Apps specialist.
Mobile App Testing with AWS Device Farm - AWS July 2016 Webinar SeriesAmazon Web Services
AWS Device farm lets you improve the quality of your app by testing and interacting with real Android and iOS devices in the AWS Cloud. In this webinar, we will explain how to use Device Farm to run automated tests on 100s of real devices, and get logs, screenshots, and performance data in minutes. We will also show a demo of the Remote Access feature which lets you to interact with physical devices in real time through your web browser.
Learning Objectives:
• Learn how to use device farm to test your app against real devices
AWS Data Transfer Services: Data Ingest Strategies Into the AWS CloudAmazon Web Services
Different types and sizes of data require different strategies. In this session, learn about the various features and services available for migrating data, be it small ongoing transactional data or large multi-petabyte volumes. Come learn how customers are using the latest network, streaming and large scale ingest features for their cloud data migrations to AWS storage services.
xPad - Building Simple Tablet OS with Gtk/WebKitPing-Hsun Chen
The web is becoming the new graphics library. Building on the success of the xPUD project, we take it further by adding the following components: 1) a WebKit-based browser with finger scrolling and double-tap zooming, 2) a BPMF-friendly virtual keyboard integrated with the SCIM input method, and 3) a touch-enhanced user interface based on xPUD's plate framework. With this simple software stack, we're confident that xPad could be an alternative to MeeGo or Android as a lightweight, easily customizable tablet OS.
AWS User Group July 2014 - Getting Started with cloud computing and AWS
Getting Started with cloud computing and AWS
Slides for the following AWS User Group Talks:
"Public Cloud and AWS Overview" - Ryan Koop, Director of Products and Marketing at Cohesive @ryankoop
"Getting Started in AWS" - Jonny Sywulak, Continuous Delivery Engineer at Stelligent Systems LLC @jonathansywulak
July Sponsors:
Hosts: Cohesive
Beers and drinks: Cohesive
Pizza: el el see
Organizers: Cohesive
Interested in getting involved next time? Have an idea for a talk? email margaret.walkerATcohesive.net
#AWSChicago
AWS re:Invent 2016: Wild Rydes Takes Off – The Dawn of a New Unicorn (SVR309)Amazon Web Services
Wild Rydes (www.wildrydes.com) needs your help! With fresh funding from its seed investors, Wild Rydes is seeking to build the world’s greatest mobile/VR/AR unicorn transportation system. The scrappy startup needs a first-class webpage to begin marketing to new users and to begin its plans for global domination. Join us to help Wild Rydes build a website using a serverless architecture. You’ll build a scalable website using services like AWS Lambda, Amazon API Gateway, Amazon DynamoDB, and Amazon S3. Join this workshop to hop on the rocket ship!
To complete this workshop, you'll need:
Your laptop
AWS Account
AWS Command Line Interface
Google Chrome
git
Text Editor
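The serverless stack the workshop assembles boils down to a Lambda handler behind API Gateway that writes a ride request to DynamoDB. The sketch below is hypothetical (the `FakeTable` and field names are made up for illustration, not Wild Rydes' actual code); the table is injected so the handler can be exercised without an AWS account:

```python
import json
import uuid

def make_handler(table):
    """Return an API Gateway proxy-style Lambda handler.

    `table` only needs a put_item(Item=...) method, so a real boto3
    DynamoDB Table and the in-memory fake below both work.
    """
    def handler(event, context=None):
        body = json.loads(event["body"])
        ride = {"RideId": str(uuid.uuid4()), "User": body["user"]}
        table.put_item(Item=ride)
        # API Gateway proxy integration expects statusCode + string body.
        return {"statusCode": 201, "body": json.dumps(ride)}
    return handler

class FakeTable:
    """Stand-in for a DynamoDB table, for local testing."""
    def __init__(self):
        self.items = []
    def put_item(self, Item):
        self.items.append(Item)

table = FakeTable()
handler = make_handler(table)
response = handler({"body": json.dumps({"user": "unicorn-rider"})})
```

Swapping `FakeTable` for `boto3.resource("dynamodb").Table("Rides")` (a hypothetical table name) is all that changes when deploying for real.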
Learn about the patterns and techniques a business should be using in building their infrastructure on Amazon Web Services to be able to handle rapid growth and success in the early days. From leveraging highly scalable AWS services, to architecting best patterns, there are a number of smart choices you can make early on to help you overcome some typical infrastructure issues.
Presenter: Chris Munns, Solutions Architect, Amazon Web Services
Building and Managing Scalable Applications on AWS: 1 to 500K usersAmazon Web Services
This presentation session from the Cloud Management, Services and Applications Theatre at Cloud Expo Europe 2014 explores the techniques and AWS services that you can use in order to build high-scalability web applications on AWS. It also features a great overview of a high-scalability mobile application built by Myriad Group, an AWS customer, that serves over 41 million users.
Scaling on AWS for the First 10 Million Users (ARC206) | AWS re:Invent 2013Amazon Web Services
Cloud computing gives you a number of advantages in being able to scale on demand, easily replace whole parts of your infrastructure, and much more. As a new business looking to use the cloud, you inevitably ask yourself, Where do I start? Join us at this session to understand some of the common patterns and recommended areas of focus you can expect to work through while scaling an infrastructure to handle going from zero to millions of users. From leveraging highly scalable AWS services to making smart decisions on building out your application, you'll learn a number of best practices for scaling your infrastructure in the cloud. The patterns and practices reviewed in this session will get you there.
Storage is the clearest requirement for digital media. The AWS Cloud has customized solutions that cater to digital media storage and presents an array of options to ingest, store, and move digital media, using the cloud as a transport and storage mechanism.
Erik Durand, the Principal Business Development Manager for AWS Storage, takes us on this analysis of the options, benefits and characteristics of each one.
Presented during the AWS Media and Entertainment Symposium in Toronto
AWS Summit London 2014 | Scaling on AWS for the First 10 Million Users (200)Amazon Web Services
This mid-level technical session will provide an overview of the techniques that you can use to build high-scalability applications on AWS. Take a journey from 1 user to 10 million users and understand how your application's architecture can evolve and which AWS services can help as you increase the number of users that you serve.
Scaling on AWS for the First 10 Million Users at Websummit DublinAmazon Web Services
In this talk from the Dublin Websummit 2014 AWS Technical Evangelist Ian Massingham discusses the techniques that AWS customers can use to create highly scalable infrastructure to support the operation of large scale applications on the AWS cloud.
Includes a walk-through of how you can evolve your architecture as your application becomes more popular and you need to scale up your infrastructure to support increased demand.
Are you challenged today with getting non-digital information into a digital format? Are you trying to find the most cost-effective storage solutions for your digital content? Do you want to share your library's rich information with a global audience? Attend this webinar to learn how to digitize, store, and share your information quickly, efficiently, and at the lowest cost possible.
AWS Summit Stockholm 2014 – T1 – Architecting highly available applications o...Amazon Web Services
This session teaches you how to architect scalable, highly available, and secure applications on AWS. In this session, we cover the differences between traditional and cloud-based availability, how to apply AWS availability options to workloads, architectural design patterns for automating fault tolerance, and examples of highly available architectures.
Scaling the Platform for Your Startup - Startup Talks June 2015Amazon Web Services
Join AWS at this session to understand how to architect an infrastructure to handle going from zero to millions of users. From leveraging highly scalable AWS services to making smart decisions on building out your application, you'll learn a number of best practices for scaling your infrastructure in the cloud.
As part of the Introduction to AWS Workshop Series, see how to scale your website from your first user, right up to a complex architecture to support 10 million users.
ENT305 Migrating Your Databases to AWS: Deep Dive on Amazon Relational Databa...Amazon Web Services
Amazon RDS allows you to launch an optimally configured, secure and highly available database with just a few clicks. It provides cost-efficient and resizable capacity, automates time-consuming database administration tasks, and provides you with six familiar database engines to choose from: Amazon Aurora, Oracle, Microsoft SQL Server, PostgreSQL, MySQL and MariaDB. In this session, we will take a close look at the capabilities of Amazon RDS and explain how it works. We’ll also discuss the AWS Database Migration Service and AWS Schema Conversion Tool, which help you migrate databases and data warehouses with minimal downtime from on-premises and cloud environments to Amazon RDS and other Amazon services. Gain your freedom from expensive, proprietary databases while providing your applications with the fast performance, scalability, high availability, and compatibility they need.
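The six engine choices above map directly onto the `Engine` parameter of the RDS CreateDBInstance API. A hedged sketch of assembling that request follows; the instance identifier, class, and storage size are placeholders, and the engine strings in the set pick one edition per family (e.g. `oracle-ee`), which may differ from what your licensing requires:

```python
# One representative engine string per family Amazon RDS offers.
SUPPORTED_ENGINES = {"aurora-mysql", "mysql", "mariadb", "postgres",
                     "oracle-ee", "sqlserver-se"}

def build_rds_request(db_id, engine, multi_az=True):
    """Build kwargs for boto3's rds.create_db_instance().

    Multi-AZ provides the high availability the session describes:
    RDS maintains a synchronous standby in a second Availability Zone.
    """
    if engine not in SUPPORTED_ENGINES:
        raise ValueError(f"unsupported engine: {engine}")
    return {
        "DBInstanceIdentifier": db_id,
        "Engine": engine,
        "DBInstanceClass": "db.t3.medium",   # placeholder size
        "AllocatedStorage": 20,              # GiB, placeholder
        "MultiAZ": multi_az,
    }

req = build_rds_request("app-db", "postgres")
```

Validating the engine up front keeps typos from surfacing only as an API error after the call is made.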
SRV403 Deep Dive on Object Storage: Amazon S3 and Amazon GlacierAmazon Web Services
In this session, storage experts will walk you through Amazon S3 and Amazon Glacier, bulk data repositories that can deliver 99.999999999% durability and scale past trillions of objects worldwide – with cost points competitive against tape archives. Learn about the different ways you can accelerate data transfer into S3 and get a close look at new tools to secure and manage your data more efficiently. See how Amazon Athena runs serverless analytics on your data and hear about expedited and bulk retrievals from Amazon Glacier. Learn how AWS customers have built solutions that turn their data from a cost into a strategic asset, and bring your toughest questions straight to our experts.
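The S3-for-hot-data, Glacier-for-archives tiering described above is usually wired up with a bucket lifecycle rule. Below is a sketch of the rule document that boto3's `s3.put_bucket_lifecycle_configuration()` accepts; the prefix and day counts are placeholder values, not recommendations:

```python
def archive_rule(prefix, glacier_after_days=90, expire_after_days=3650):
    """Lifecycle rule: transition objects under `prefix` to Glacier
    after a quiet period, then expire them years later."""
    return {
        "Rules": [{
            "ID": f"archive-{prefix}",
            "Filter": {"Prefix": prefix},
            "Status": "Enabled",
            "Transitions": [
                # Objects move to the Glacier storage class in place;
                # no application code changes are needed to keep them.
                {"Days": glacier_after_days, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": expire_after_days},
        }]
    }

config = archive_rule("logs/")
```

Applied with `put_bucket_lifecycle_configuration(Bucket=..., LifecycleConfiguration=config)`, this turns the cost-tiering story into a one-time bucket setting rather than an ongoing batch job.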
When evaluating and planning the migration of your data from on premises to the cloud, you might encounter physical limitations. Amazon offers a suite of tools to help you surmount these limitations by moving data using networks, roads, and technology partners. In this session, we discuss how to move large amounts of data into and out of the cloud in batches, increments, and streams.
How to Build Forecasting Services Using ML and Deep Learning AlgorithmsAmazon Web Services
Forecasting is an important process for a great many companies and is used in many areas to try to accurately predict the growth and distribution of a product, the resources needed on production lines, financial projections, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session we will show how to pre-process data that contains a time component and then use an algorithm that, starting from the type of data analyzed, produces an accurate forecast.
Big Data for Startups: How to Build Big Data Applications in Serverless ModeAmazon Web Services
The variety and volume of data created every day keeps accelerating and represents a unique opportunity to innovate and create new startups.
However, managing large amounts of data can seem complex: building large-scale Big Data clusters looks like an investment accessible only to established companies. But the elasticity of the cloud and, in particular, serverless services allow us to break through these limits.
Let's look at how to develop Big Data applications quickly, without worrying about infrastructure, dedicating all our resources to developing our ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS Cloud. In this session we will present the service's main features and show how to deploy your application in just a few steps.
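Scheduling pods onto Fargate hinges on a Fargate profile whose selectors match pod namespaces and labels. As a sketch under stated assumptions (the cluster name, namespace, and IAM role ARN below are invented), this is the shape of the request boto3's `eks.create_fargate_profile()` takes:

```python
def fargate_profile(cluster, namespace, labels=None):
    """Build the request body for eks.create_fargate_profile().

    Any pod whose namespace (and labels, if given) match a selector
    runs on Fargate instead of on EC2 worker nodes.
    """
    return {
        "fargateProfileName": f"{namespace}-profile",
        "clusterName": cluster,
        # Placeholder ARN: the role pods assume to pull images and log.
        "podExecutionRoleArn": "arn:aws:iam::123456789012:role/pod-exec",
        "selectors": [
            {"namespace": namespace, "labels": labels or {}},
        ],
    }

profile = fargate_profile("demo-cluster", "web", {"app": "frontend"})
```

With this profile in place, deploying a `web`-namespace pod labelled `app: frontend` requires no node management at all, which is the serverless point of the session.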
Twenty years ago, Amazon went through a radical transformation aimed at increasing its pace of innovation. Over that period we learned how changing our approach to application development dramatically increased our agility and release velocity, and ultimately allowed us to create more reliable and scalable applications. In this session we will describe how we define modern applications and how building modern apps affects not only application architecture but also organizational structure, development release pipelines, and even the operating model. We will also describe common approaches to modernization, including the approach used by Amazon.com itself.
How to Spend Up to 90% Less with Containers and Spot InstancesAmazon Web Services
Container usage keeps growing.
When properly designed, container-based applications are very often stateless and flexible.
AWS ECS, EKS, and Kubernetes on EC2 can take advantage of Spot Instances, delivering average savings of 70% compared with On-Demand Instances. In this session we will explore the characteristics of Spot Instances and how they can easily be used on AWS. We will also learn how Spreaker uses Spot Instances to run applications of various kinds, in production, at a fraction of the on-demand cost!
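The quoted savings are straightforward arithmetic over hourly prices. A toy calculation follows; the prices are invented for illustration, not real AWS quotes:

```python
def spot_savings(on_demand_price, spot_price):
    """Percentage saved per instance-hour by running on Spot
    instead of On-Demand."""
    return round(100 * (1 - spot_price / on_demand_price), 1)

# Hypothetical prices: $0.10/h On-Demand vs $0.03/h Spot -> 70% saved,
# matching the average savings figure cited above.
savings = spot_savings(0.10, 0.03)
```

The flip side, which the session's stateless-design point addresses, is that Spot capacity can be reclaimed by AWS, so the workload must tolerate interruption.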
In recent months, many customers have been asking us how to monetise open APIs, simplify fintech integrations, and accelerate adoption of various Open Banking business models. AWS and FinConecta would therefore like to invite you to the Open Finance marketplace presentation on October 20th.
Event Agenda :
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Make Your Startup's Offering Unique in the Market with Machine Learning ServicesAmazon Web Services
To create value and build a differentiated, recognizable offering, successful startups know how to combine established technologies with innovative, purpose-built components.
AWS provides ready-to-use services and, at the same time, lets you customize and create the differentiating elements of your own offering.
Focusing on Machine Learning technologies, we will see how to select the artificial intelligence services offered by AWS and, with the help of a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: Automate the Management and Deployment of...Amazon Web Services
With the traditional approach to IT, implementing DevOps techniques was difficult for many years: they often involved manual activities that occasionally caused application downtime and interrupted users' work. With the advent of the cloud, DevOps techniques are within everyone's reach, at low cost, for any kind of workload, ensuring greater system reliability and significant improvements in business continuity.
AWS offers AWS OpsWorks as a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances using Chef and Puppet.
Learn how to use AWS OpsWorks to guarantee the reliability of your application running on EC2 instances.
Microsoft Active Directory on AWS to Support Your Windows WorkloadsAmazon Web Services
Want to know your options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support group policy management, authentication, and authorization. In this session we will discuss options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and running Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis based on artificial intelligence techniques is evolving and being refined at a rapid pace. In this webinar we will explore what AWS services make possible when applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are hosting a free virtual event next Wednesday, October 14, from 12:00 to 13:00, dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in VMware vSphere®-based cloud environments and access a broad range of AWS services, taking full advantage of the AWS Cloud while protecting your existing VMware investments.
Build Your First Serverless Ledger-Based App with QLDB and NodeJSAmazon Web Services
Many companies today build applications with ledger-style functionality, for example to verify the history of credits and debits in banking transactions, or to track the flow of their products through the supply chain.
At the core of these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable transaction log, but which are complex and costly tools to manage.
Amazon QLDB removes the need to build complex custom systems by providing a fully managed, serverless ledger database.
In this session we will see how to build a complete serverless application that uses QLDB's capabilities.
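The "cryptographically verifiable" property of a ledger rests on chaining entry digests, so tampering with any past entry changes every digest after it. The toy hash chain below illustrates the idea only; it is a simplification, not QLDB's actual Merkle-tree digest API:

```python
import hashlib

def chain(entries):
    """Fold entries into a running SHA-256 digest, like a mini ledger.

    Each step hashes the previous digest together with the new entry,
    so the final digest commits to the entire history in order.
    """
    digest = b"\x00" * 32  # genesis digest
    for entry in entries:
        digest = hashlib.sha256(digest + entry.encode()).digest()
    return digest.hex()

ledger = ["credit:100", "debit:40", "credit:5"]
tampered = ["credit:100", "debit:400", "credit:5"]  # one entry altered
```

Comparing `chain(ledger)` against a previously published digest is exactly the kind of verification a ledger database automates: the altered history produces a different digest.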
With the rise of microservice architectures and rich mobile and web applications, APIs are more important than ever for delivering an excellent experience to end users. In this session we will learn how to tackle modern API design challenges with GraphQL, an open-source API query language used by Facebook, Amazon, and others, and how to use AWS AppSync, a managed serverless GraphQL service on AWS. We will dive into several scenarios, understanding how AppSync can help solve these use cases by building modern APIs with real-time and offline data update capabilities.
We will also learn how Sky Italia uses AWS AppSync to deliver real-time sports updates to users of its web portal.
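AppSync's core mechanic, mapping GraphQL fields to resolver functions, can be sketched without any AWS dependency. The field and resolver names below are invented for illustration; a real AppSync API would attach resolvers to data sources like DynamoDB instead of an in-memory dict:

```python
def make_api(resolvers):
    """Tiny field -> resolver dispatcher in the spirit of GraphQL.

    `resolvers` maps a field name to a callable that produces its value;
    executing a field calls its resolver with the query arguments.
    """
    def execute(field, **args):
        if field not in resolvers:
            raise KeyError(f"no resolver for {field}")
        return resolvers[field](**args)
    return execute

# Hypothetical data source: latest scores keyed by match id.
scores = {"inter-milan": "2-1"}

api = make_api({
    "latestScore": lambda match: scores.get(match, "0-0"),
})
result = api("latestScore", match="inter-milan")
```

Real-time updates in AppSync extend this same model with subscriptions: clients register interest in a field and are pushed new resolver results as the data source changes.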
Oracle Databases and VMware Cloud™ on AWS: Debunking the MythsAmazon Web Services
Many organizations reap the benefits of the cloud by migrating their Oracle workloads, securing significant gains in agility and cost efficiency.
Migrating these workloads can create complexity during application modernization and refactoring, compounded by performance risks that can be introduced when moving applications out of on-premises data centers.
In these slides, AWS and VMware experts present simple, practical tips to ease and streamline the migration of Oracle workloads while accelerating the transformation to the cloud; they dive into the architecture and show how to take full advantage of VMware Cloud™ on AWS.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable container management service that simplifies managing Docker containers through an orchestration layer controlling deployment and lifecycle. In this session we will present the service's main features, reference architectures for different workloads, and the simple steps needed to quickly migrate one or more of your containers.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
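JMeter ships samples to InfluxDB as line protocol (`measurement,tags fields timestamp`), which is what Grafana then queries. The formatter below is an illustrative sketch; the measurement, tag, and field names are made up and do not reproduce the JMeter Backend Listener's exact schema:

```python
def to_line_protocol(measurement, tags, fields, ts_ns):
    """Serialize one sample as InfluxDB line protocol:
    measurement,tag=val field=val timestamp(ns)."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

line = to_line_protocol(
    "jmeter",
    {"transaction": "login"},
    {"error": "false", "latency_ms": 87},
    1717000000000000000,
)
```

Each load-test sample becomes one such line POSTed to InfluxDB's write endpoint, and Grafana panels simply chart the stored series in real time.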
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across more than 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
2. Common constraints
Too much, too fast, too many types of data
- Velocity, volume & variety of data
Too constrained
- No capacity
Too expensive
- Costs
Too insecure
- “Unique” compliance requirements
Too complicated
- Skillset gaps
Not always as difficult as originally perceived
3. The AWS platform (diagram)
ENTERPRISE APPS: Virtual Desktops; Sharing & Collaboration; Corporate Email; Backup
DEVELOPMENT & OPERATIONS: One-click App Deployment; DevOps Resource Management; Application Lifecycle Management; Containers; Triggers; Resource Templates
MOBILE SERVICES: Identity; Sync; Single Integrated Console; Push Notifications
APP SERVICES: Queuing & Notifications; Workflow; Search; Email; Transcoding; API Gateway
ANALYTICS: Data Warehousing; Hadoop/Spark; Streaming Data Collection; Machine Learning; Elastic Search; Streaming Data Analysis; Business Intelligence; Mobile Analytics
IoT: Rules Engine; Device Shadows; Device SDKs; Registry; Device Gateway
HYBRID ARCHITECTURE: Data Backups; Integrated App Deployments; Direct Connect; Identity Federation; Integrated Resource Management; Integrated Networking
SECURITY & COMPLIANCE: Access Control; Identity Management; Key Management & Storage; Monitoring & Logs; Assessment and Reporting; Resource & Usage Auditing; Configuration Compliance; Web Application Firewall
CORE SERVICES: Compute (VMs, Auto-scaling & Load Balancing); Storage (Object, Blocks, Archival, Import/Export); Databases (Relational, NoSQL, Caching, Migration); Networking (VPC, DX, DNS); CDN
INFRASTRUCTURE: Regions; Availability Zones; Points of Presence
TECHNICAL & BUSINESS SUPPORT: Account Management; Support; Professional Services; Training & Certification; Security & Pricing Reports; Partner Ecosystem; Solutions Architects
MARKETPLACE: Business Apps; Business Intelligence; Databases; DevOps Tools; Networking; Security; Storage
8. What is Snowball? Petabyte-scale data transport
• E-ink shipping label
• Ruggedized case ("8.5G impact")
• All data encrypted end-to-end
• 50 TB & 80 TB capacities
• 10Gb network
• Rain & dust resistant
• Tamper-resistant case & electronics
9. New ways to transfer data into the cloud: AWS Import/Export Snowball
• Now holds 60% more
– New 80 TB model, $250/job
– 50 TB still available in US West and US East for $200/job
• New regional availability
– Currently in US West (Oregon) and US East (N. Virginia)
– US West (N. California), GovCloud (US), Asia Pacific (Sydney), and EU (Ireland) regions expected by the end of 2016
10. How fast is Snowball?
• Less than 1 day to transfer 50 TB via a 10Gb connection with Snowball, less than 1 week including shipping
• Number of days to transfer 50 TB via the Internet at typical utilizations:

Utilization   1Gbps   500Mbps   300Mbps   150Mbps
25%              19        38        63       126
50%               9        19        32        63
75%               6        13        21        42
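The figures in this table follow directly from link arithmetic. Here is a rough model of that calculation (my own sketch; the slide's exact rounding differs by a day or two in some cells):

```python
def transfer_days(terabytes, link_gbps, utilization):
    """Approximate days to move `terabytes` (decimal TB) over a link
    running at `link_gbps` with a given average utilization."""
    bits = terabytes * 1e12 * 8                  # decimal TB -> bits
    effective_bps = link_gbps * 1e9 * utilization
    return bits / effective_bps / 86400          # seconds -> days

# 50 TB over 1 Gbps at 25% utilization: about 19 days,
# matching the first cell of the table above.
print(round(transfer_days(50, 1.0, 0.25)))  # 19
```

The same function reproduces the 250 TB table on the next slide by substituting the larger volume.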
11. How fast is Snowball?
• Less than 1 day to transfer 250 TB via 5x10Gb connections with 5 Snowballs, less than 1 week including shipping
• Number of days to transfer 250 TB via the Internet at typical utilizations:

Utilization   1Gbps   500Mbps   300Mbps   150Mbps
25%              95       190       316       632
50%              47        95       158       316
75%              32        63       105       211
12. How is my data transported securely?
• All data is encrypted with 256-bit
encryption by the Snowball client
• Keys are managed by AWS Key
Management Service (AWS KMS) and are
never sent to the Snowball appliance
• Strong chain of custody
• Tamper-resistant case
• Tamper-resistant electronics (TPM)
• Each Snowball appliance is erased
according to NIST 800-88 media
sanitization guidelines between every job
13. Pricing
Dimension                                    Price
Usage charge per job                         $200.00 (50 TB); $250.00 (80 TB)
Extra day charge (first 10 days* are free)   $15.00
Data transfer in                             $0.00/GB
Data transfer out                            $0.03/GB
Shipping**                                   Varies
Amazon S3 charges                            Standard storage and request fees apply
* Starts one day after the appliance is delivered to you. The first day the appliance is received at your site and the last day the appliance is shipped out are also free and not included in the 10-day free usage time.
** Shipping charges are based on your shipment destination and the shipping option (e.g., overnight, 2-day) you choose.
16. Introducing Amazon S3 Transfer Acceleration
Uploader → AWS Edge Location → S3 bucket: optimized throughput!
• Typically 50% to 400% faster
• Change your endpoint, not your code
• 56 global Edge Locations
• No firewall exceptions
• No client software required
18. How fast is S3 Transfer Acceleration?
[Chart: time in hours for a 500 GB upload to a bucket in Singapore over the public Internet vs. S3 Transfer Acceleration, from Edge Locations in Rio de Janeiro, Warsaw, New York, Atlanta, Madrid, Virginia, Melbourne, Paris, Los Angeles, Seattle, Tokyo, and Singapore]
19. Getting started
1. Enable S3 transfer acceleration on your S3 bucket.
2. Update your application or destination URL to
<bucket-name>.s3-accelerate.amazonaws.com.
3. Done!
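The two steps above can be sketched in code. The endpoint format is fixed; the boto3 calls below are shown commented as an illustration of enabling the feature (the bucket name is a placeholder, and the calls require AWS credentials, so they are not run here):

```python
def accelerate_endpoint(bucket_name):
    """Return the S3 Transfer Acceleration endpoint for a bucket."""
    return f"https://{bucket_name}.s3-accelerate.amazonaws.com"

print(accelerate_endpoint("my-bucket"))
# https://my-bucket.s3-accelerate.amazonaws.com

# Step 1 with boto3 (not run here):
#   import boto3
#   from botocore.config import Config
#   boto3.client("s3").put_bucket_accelerate_configuration(
#       Bucket="my-bucket",
#       AccelerateConfiguration={"Status": "Enabled"})
# Step 2: point the client at the accelerate endpoint:
#   s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
#   s3.upload_file("data.bin", "my-bucket", "data.bin")
```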
20. How much will it help me?
https://s3.amazonaws.com/s3-accelerate-speedtest/cheetah-speedtest.html
21. Pricing*
Dimension                                  Price/GB
Data transfer in from Internet**           $0.04 (Edge Location in US, EU, JP); $0.08 (Edge Location in rest of the world)
Data transfer out to Internet              $0.04
Data transfer out to another AWS Region    $0.04
Amazon S3 charges                          Standard data transfer charges apply
* Plus standard Amazon S3 data transfer charges apply
** If the transfer is not accelerated, there is no bandwidth charge
23. Use cases
• Connect on-premises resources to resources in VPC
• Faster connectivity
• Dedicated connection
• Less operational overhead versus VPN
• Multiple VPCs supported
• Dedicated connectivity to your public AWS services
• S3 for data ingestion
• EC2 public interfaces
• Owned by the customer
• Other customers’ services
• Cost
• Data transfer out is $0.02 or $0.03 per GB in NA versus $0.09 to the Internet
25. North America Direct Connect locations
Las Vegas, Seattle, NYC, Ashburn, San Jose, LA, Dallas, Santa Clara, Portland, GovCloud (US)
DX location choice provides:
• Private (VPC) access to 1 designated region
• Public (S3) access to all US regions
29. Selecting the right object storage for your needs
Storage classes, connected by lifecycle policies: S3 → S3-IA → Glacier
• "Hot" data: active and/or temporary data (S3)
• "Warm" data: infrequently accessed data (S3-IA)
• "Cold" data: archive and compliance data (Glacier)
Common capabilities:
• Durable: 99.999999999%
• Available: S3 99.99%, S3-IA 99.9%
• Performant: low latency, high throughput
• Scalable: elastic capacity, no preset limits
• Secure: SSE, client encryption, IAM integration
• Event notifications: SQS, SNS, and Lambda
• Versioning: keep multiple copies automatically
• Cross-region replication
• Common namespace: define storage class per object
30. Selecting the right object storage for your needs
Same durability (99.999999999%), availability (S3 99.99%, S3-IA 99.9%), performance, and hot/warm/cold roles as slide 29, with lifecycle transitions subject to these minimums and prices:

Class     Storage price          Minimum duration   Minimum size   Retrieval
S3        $0.03/GB per month     ≥ 0 days           > 0K           Immediate
S3-IA     $0.0125/GB per month   ≥ 30 days          ≥ 128K         $0.01/GB retrieval
Glacier   $0.007/GB per month    ≥ 90 days          > 0K           3–5 hrs; $0.01/GB retrieval < 5%
32. ST1/SC1 performance
Burst bucket based on MB/sec (vs. IOPS)
• Scales with size of volume
• Max throughput 500 MB/sec (ST1), 250 MB/sec (SC1)
Use cases
• Workloads with majority sequential I/O
• EMR, Kafka, Hadoop, Splunk/log processing, media
33. Throttling
• I/O requests of 1 MB or less count as 1 MB I/O credit
• Sequential I/Os are merged into 1 MB I/O credits
• Throttle designed to reward streaming and big data workloads with large data
sets, large I/O block sizes, and sequential I/O patterns.
• Small, random I/Os are inefficient and quickly drain the burst bucket
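The credit accounting above can be illustrated with a toy model (my own sketch of the rules on this slide, not AWS's implementation): any request of 1 MB or less costs one 1 MB credit, and back-to-back sequential requests are merged and billed per 1 MB of merged data.

```python
MB = 1024 * 1024

def credits_consumed(ios):
    """ios: list of (offset, size) requests in bytes, in issue order.
    Returns 1 MB I/O credits consumed under the slide's rules:
    - a non-sequential request <= 1 MB costs one full credit
    - sequential requests merge and are billed per 1 MB of merged data
    """
    credits = 0.0
    run_start = run_len = None
    for offset, size in ios:
        if run_start is not None and offset == run_start + run_len:
            run_len += size                       # extends the sequential run
        else:
            if run_start is not None:
                credits += max(1.0, run_len / MB)
            run_start, run_len = offset, size
    if run_start is not None:
        credits += max(1.0, run_len / MB)
    return credits

# 256 sequential 4 KB reads merge into a single 1 MB credit...
seq = [(i * 4096, 4096) for i in range(256)]
# ...while 256 scattered 4 KB reads drain 256 credits.
rand = [(i * 10 * MB, 4096) for i in range(256)]
print(credits_consumed(seq), credits_consumed(rand))   # 1.0 256.0
```

This is why small random I/O drains the burst bucket roughly 256x faster than the same bytes read sequentially.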
40. Trade-offs with a managed service
Fully managed host and OS
• No access to the database host operating system
• Limited ability to modify configuration that is managed on the host operating system
• No functions that rely on configuration from the host OS
Fully managed storage
• Max storage limits
• SQL Server—4 TB
• MySQL, MariaDB, PostgreSQL, Oracle—6 TB
• Aurora—64 TB
• Growing your database is a process
43. Bring your on-premises databases into AWS
Move data to the same or a different database engine
Start your first migration in 10 minutes or less
Keep your apps running during the migration
Replicate from on premises, EC2 or RDS to EC2 or RDS
One-time migration or ongoing replication
AWS DMS
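DMS tasks are driven by a JSON table-mapping document that selects which schemas and tables to migrate. A minimal selection rule, sketched in Python so it can be validated before use (the schema name and all ARNs/identifiers below are hypothetical placeholders):

```python
import json

# Include every table in a hypothetical "hr" schema.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-hr",
            "object-locator": {"schema-name": "hr", "table-name": "%"},
            "rule-action": "include",
        }
    ]
}

payload = json.dumps(table_mappings)
print(payload)

# With boto3 (requires endpoints and a replication instance; not run here):
#   import boto3
#   boto3.client("dms").create_replication_task(
#       ReplicationTaskIdentifier="hr-migration",
#       SourceEndpointArn=src_arn, TargetEndpointArn=tgt_arn,
#       ReplicationInstanceArn=inst_arn,
#       MigrationType="full-load-and-cdc",   # one-time load + ongoing replication
#       TableMappings=payload)
```

The `MigrationType` value is what selects between a one-time migration (`full-load`) and ongoing replication (`cdc` or `full-load-and-cdc`), matching the last bullet above.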
45. Amazon Kinesis: streaming data done the AWS way
Makes it easy to capture, deliver, and process real-time data streams
Pay as you go, no up-front costs
Elastically scalable
Right services for your specific use cases
Real-time latencies
Easy to provision, deploy, and manage
46. Amazon Kinesis: streaming data made easy
Services make it easy to capture, deliver, and process streams on AWS
Amazon Kinesis Streams
• For technical developers
• Build your own custom applications that process or analyze streaming data
• GA at re:Invent 2013
Amazon Kinesis Firehose
• For all developers, data scientists
• Easily load massive volumes of streaming data into Amazon S3, Amazon Redshift, and Amazon Elasticsearch Service
• GA at re:Invent 2015
Amazon Kinesis Analytics
• For all developers, data scientists
• Easily analyze data streams using standard SQL queries
• Preview
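Kinesis Streams distributes records across shards by taking the MD5 hash of each record's partition key and mapping it into a shard's hash-key range. Assuming shards evenly split the 128-bit key space (the layout a freshly created stream has; resharding changes the ranges), you can predict which shard a key lands on:

```python
import hashlib

def shard_index(partition_key, num_shards):
    """Map a partition key to a shard index, assuming `num_shards`
    shards that evenly split the 128-bit MD5 hash-key space."""
    h = int(hashlib.md5(partition_key.encode("utf-8")).hexdigest(), 16)
    return h * num_shards // 2**128

# "user-42" is an illustrative key; the same key always maps
# to the same shard, which is what keeps per-key ordering.
print(shard_index("user-42", 4))
```

This is also why a skewed partition-key distribution produces hot shards: throughput limits apply per shard, not per stream.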
48. Amazon Kinesis Firehose
Load massive volumes of streaming data into Amazon S3, Amazon Redshift, and Amazon ES
• Zero administration: capture and deliver streaming data into S3, Amazon Redshift, and Amazon ES without writing an application or managing infrastructure.
• Direct-to-data-store integration: batch, compress, and encrypt streaming data for delivery into data destinations in as little as 60 secs using simple configurations.
• Seamless elasticity: seamlessly scales to match data throughput without intervention.
Capture and submit streaming data to Firehose → Firehose loads streaming data continuously into S3, Amazon Redshift, and Amazon ES → analyze streaming data using your favorite BI tools
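On the producer side, Firehose's PutRecordBatch API accepts at most 500 records per call, and records destined for S3 are commonly newline-delimited so the delivered objects are easy to split. A small helper sketching that framing and batching (function and stream names are illustrative, not from the slides):

```python
import json

MAX_BATCH = 500  # PutRecordBatch limit on records per call

def to_batches(events, batch_size=MAX_BATCH):
    """Encode each event as newline-terminated JSON and group the
    resulting Firehose records into batches of at most `batch_size`."""
    records = [{"Data": (json.dumps(e) + "\n").encode("utf-8")} for e in events]
    return [records[i:i + batch_size] for i in range(0, len(records), batch_size)]

batches = to_batches([{"n": i} for i in range(1200)])
print([len(b) for b in batches])   # [500, 500, 200]

# With boto3 (requires a delivery stream; not run here):
#   import boto3
#   firehose = boto3.client("firehose")
#   for batch in batches:
#       firehose.put_record_batch(DeliveryStreamName="my-stream", Records=batch)
```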
52. Big data challenges for our customers
Lots of data. Lots and lots of questions. Few insights.
• Who are my top customers and what are they buying?
• Which devices are showing time for maintenance?
• What is my product profitability by region?
• Why is my most profitable region not growing?
• How much inventory do I have?
• Has my fraud account expense increased?
• How is my marketing campaign performing?
• How is my employee satisfaction trending?
53. Traditional business intelligence
• Costs too much: pay $ millions before seeing first analysis; 3-year TCO of $150 to $250 per user per month
• Takes too long: 6 to 12 months of consulting and SW implementation time
54. [Architecture diagram: business users reach QuickSight through the QuickSight UI (mobile devices, web browsers) or the QuickSight API and partner BI products. QuickSight comprises connectors, data prep, metadata, suggestions, and SPICE. Data sources include Amazon S3, Amazon Kinesis, Amazon DynamoDB, Amazon EMR, Amazon Redshift, Amazon RDS, files, and apps, plus on-premises data via Direct Connect and JDBC/ODBC.]
55. Easy exploration of AWS data
• Securely discover and connect to AWS data
• Quickly explore AWS data sources: relational databases, NoSQL databases, Amazon EMR, Amazon S3, files, streaming data sources
• Easily import data from any table or file
• Automatic detection of data types
Supported sources: Amazon EMR, Amazon Kinesis, Amazon DynamoDB, Amazon Redshift, Amazon RDS, Amazon S3, file upload, third party
56. Fast insights with SPICE
• Super-fast, Parallel, In-memory Calculation Engine
• 2x to 4x compression of columnar data
• Compiled queries with machine code generation
• Rich calculations
• SQL-like syntax
• Very fast response time to queries
• Fully managed – no hardware or software to license
58. Key benefits (Amazon Elasticsearch Service)
• Easy cluster creation and configuration management
• Support for ELK
• Security with AWS IAM
• Monitoring with Amazon CloudWatch
• Auditing with AWS CloudTrail
• Integration options with other AWS services (CloudWatch Logs, Amazon DynamoDB, Amazon S3, Amazon Kinesis)
61. An extensive global network
Europe (16): Amsterdam (2), Dublin, Frankfurt (3), London (3), Madrid, Marseille, Milan, Paris (2), Stockholm, Warsaw
South America (2): Rio de Janeiro, Sao Paulo
North America (21): Ashburn, VA (3); Atlanta, GA; Chicago, IL; Dallas, TX (2); Hayward, CA; Jacksonville, FL; Los Angeles, CA (2); Miami, FL; Newark, NJ; New York, NY (3); Palo Alto, CA; Seattle, WA; San Jose, CA; South Bend, IN; St. Louis, MO
Asia (14): Chennai; Hong Kong (2); Manila; Melbourne; Mumbai; Osaka; Singapore (2); Seoul (2); Sydney; Taipei; Tokyo (2)
64. CloudFront with AWS WAF
Malicious traffic (hackers, bad bots, site scraping, SQL injection, XSS, other attacks) is blocked by WAF rules at Edge Locations, while legitimate user traffic passes through CloudFront to the origin.
• Origin can be EC2/ELB and/or S3, or a custom origin server and origin storage in the customer's on-premises environment
• Can be static and dynamic content
Welcome. Thank you for attending this session.
My name is Joe Healy and I am a Consultant within the WWPS Professional Services team at AWS, based in Herndon.
In this session, I am going to provide an overview of some of the available AWS services which can help enable you to leverage AWS as a Data Platform.
Before pursuing this potentially lofty goal, there may be constraints in place that are preventing the start of your journey.
These constraints can come in many forms:
- Volume, variety, velocity of Data
- No available capacity (bandwidth, people..)
- Too Expensive – I have paid for my equipment, there isn’t anything else to pay for
- Not secure enough for your requirements
- Too complex – "We operate fine the way we do things. We can't adapt to this cloud model. It isn't applicable to us."
Hopefully during this session I am able to address some of these concerns and show you how you can overcome the perceived barrier(s) to adoption.
Looking at this chart, you can see that there are many capabilities available from AWS which address many of your constraints and requirements.
Areas such as:
- Security and Compliance - The visibility into as well as the governance of the security controls that you have within AWS is staggering. We have validated these capabilities by having our processes and procedures measured against many of the industry compliance standards. You can read about these on our Security/Compliance website. FedRAMP, PCI, HIPAA are just a few examples
- Infrastructure – Our infrastructure is available so that you can build and deploy your applications in the most cost effective, highly available and secure manner. We currently have 12 Public regions throughout the world (4 in the US), each comprised of at least 2 Availability Zones as well as 56 Edge Locations which enable you to satisfy your user experience requirements by bringing content closer to them among other things.
Support – Through our Support team, Solutions Architects, Professional Services, Account Managers you have a tremendous set of human resources to help you with your specific journey.
Partners – Our Partners are the force multiplier providing in depth assistance through Consulting or Managed Services or through the individual solutions provided by the ISV’s from SaaS offerings or our Marketplace.
Hybrid – Picking up and dropping your entire infrastructure into AWS isn't a realistic short-term goal for most companies. In reality, it may never be a long-term goal either. So you will have to integrate between one or more locations. We provide some excellent capabilities to make your AWS infrastructure as much of a logical extension of your existing environments as possible.
Services – Whether you plan to do a simple lift and shift to EC2 from your existing environment, or if you are planning to move higher in the Stack to lessen the administrative/operational burden, AWS is consistently evolving the portfolio of services available to meet your requirements.
Given the breadth of services that are available and the limited time that we have, I will narrow down the examples of services in this session.
There are different goals for the different types of data that you have. There is a tremendous level of Value and Insight in your data. There are also a series of steps that data needs to pass through to reach its specific goal. In this session, we will categorize those Steps into the following Phases:
Move – You have to get your data from Point A to Point B. As the quantity of data increases and the transfer timeframe requirements shrink, this introduces a complex issue.
Store – Once at AWS, you need to choose a data platform to meet your data requirements from an availability, performance, and price perspective.
Process – To extract and correlate the information within the data, you need to perform some type of processing.
Deliver – The Insight discovered in your data is worthless if you aren’t able to make it accessible by your customer or community.
This will not be an exhaustive session. I will try to focus on some of the newer capabilities/features that are available. Working with your AWS Account team or referencing our documentation will enable you to dive deep on all of the services we have.
So lets start with the Move Phase of our Journey.
Snowball
The Import/Export service introduced this new capability last year. Prior to Snowball, if you wanted to use the Import/Export service you had to purchase your own external hard drives. The maximum device capacity supported was 16 TB. Depending on the interface type available on your device (eSATA, USB 3/2), the transfer speed to/from the device could vary wildly.
To help streamline and standardize this process, the Snowball appliance was developed. It has been very well received.
So what is it?
What is AWS Import/Export Snowball?
Snowball is a new AWS Import/Export offering that provides a petabyte-scale data transfer service that uses Amazon-provided storage devices for transport.
With the launch of Snowball, customers are now able to use highly secure, rugged Amazon-owned network-attached storage (NAS) devices, called Snowballs, to ship their data.
Once received and set up, customers are able to copy up to 80 TB of data from their on-premises file system to the Snowball via the Snowball client software over a 10Gbps network interface.
Prior to transfer to the Snowball, all data is encrypted with 256-bit encryption by the client.
When customers finish transferring data to the device, they simply ship it back to an AWS facility where the data is ingested at high speed into Amazon S3.
Compare and contrast Internet vs 1x Snowball.
Compare and contrast Internet vs 5x Snowball.
From a security perspective the Snowball device itself is always treated as untrusted as it passes through multiple parties – AWS, the customer, and the shipper. For this reason all data is encrypted before it is ever written to the device, and the keys for encryption are only stored on the host performing the encryption, never the Snowball itself.
Additionally, AWS supports a strong chain of custody through the entire process, providing notifications of each step in the process so you always know where your Snowball, and your data, are at all times.
The device itself has been custom designed to be tamper resistant, leveraging custom hardware to make the device difficult to physically compromise, as well as tamper evident seals which are verified upon receipt.
The device also leverages an industry standard trusted platform module, providing independent verification of the devices firmware which will not allow the Snowball to boot if it detects that the device has been compromised.
Perhaps the Snowball device doesn’t meet your specific requirements.
Maybe you can’t facilitate bringing an external device into your network.
Or, the timeframes or cycle of time you need to conduct your transfers may not meet your deadlines. There are logistics steps that are out of your control. It is a streamlined process, but you are talking about a physical device that needs to be received, configured, loaded with data, shipped back to AWS, and have the data copied off. Many of those steps you don't have control over.
This next service, may help you address some of those restrictions, if they exist.
Amazon S3 Transfer Acceleration is a new capability that was added this year which simplifies, and potentially increases, the speed at which data is transferred directly to/from an S3 bucket without any third-party utilities or software.
If you currently use any WAN optimization products to make your point-to-point transfers to AWS more efficient, you will be interested in learning more about this service.
Leveraging the Internet for file transfers can be a frustrating task as much of the path to your destination is out of your control.
You may have extremely high Internet bandwidth, but as your data travels through the public internet, you are susceptible to any weak link along the way.
Solutions that exist to try to mitigate this problem are extremely complex. They require custom proprietary software installed on EVERY client initiating the transfer and, in many cases, special software installed where the data is ingested as well.
Finally these typically require a large up front fee, a minimum payment amount and are prohibitively expensive.
This is why we’re happy to introduce S3 Transfer Acceleration, a way to move data faster over long geographic distances. “Long distances” means across or between continents, not across town. It ensures that your data moves as fast as your first mile, and removes the vagaries of intermediate networks.
S3-XA has shown typical performance benefits of up to 400% (5x) in optimal conditions that we’ve seen from internal testing and our beta customer results.
S3-XA is extremely simple to use. As it is a feature of S3, you simply need to enable your bucket with a checkbox, and change your endpoint.
To mitigate the problem we described earlier about the long paths a file transfer takes. S3-XA leverages our 56 POP locations to insure your transfers travel a shorter distance on the public internet and then travel the remaining portion over an optimized route via the Amazon backbone.
Since S3-XA is an extension of S3, it uses standard TCP and HTTP and thus does not require any firewall exceptions or custom software installation.
This is how the flow of a request transferred through S3 XA looks like:
The client’s request hits Route 53 which resolves the acceleration endpoint to the best POP latency wise.
From there, S3 Transfer Acceleration selects the fastest path to send data over persistent connections to EC2 proxy fleet over HTTPS in the same AWS Region as the S3 bucket. We maximize the send and receive windows here to maximize customer’s utilization of the available bandwidth.
From here, the request is finally sent to S3.
The service achieves acceleration thanks to:
- Routing optimized to maximize routing on AMZN network
- TCP optimizations along the path to maximize data transfer
- Persistent connections to minimize connection setup and maximize connection reuse
See how much geography hurts?
In general, the farther your bucket, the more benefit from moving over the AWS network.
Just 2 small steps. The setup is that simple.
Behind the scenes, a CloudFront distribution and R53 Alias record is created for every bucket endpoint and the request is routed through an accelerated path
To determine if S3-XA is something that will benefit you and your customers, we developed an S3-XA Speed Checker to compare the likely transfer speed for a given endpoint.
The tool compares the upload speed of S3 and S3-XA from the location where the tool is running to other S3 regions.
Depending on where your S3 bucket lives, you can determine if S3-XA will give you the performance benefits you desire before turning the feature on.
Data Transfer In from Internet depends on the location from where the request originated.
No request fees.
Simple per GB pricing.
Legal approved language on fast or free: For uploads only, Each time you use Amazon S3 Transfer Acceleration to transfer an object, we will check whether Amazon S3 Transfer Acceleration likely will be faster than a regular Amazon S3 transfer. To do this, we will use the origin location of the object transferred and the location of the Edge Location processing the accelerated transfer relative to the destination AWS Region. If we determine, in our sole discretion, that Amazon S3 Transfer Acceleration likely was not faster than a regular Amazon S3 transfer of the same object to the same destination AWS Region, we will not charge for that use of Amazon S3 Transfer Acceleration.
Available directly in 1Gb or 10Gb port speeds.
Through a partner, you can go down as low as 50Mbps. Some examples being any of the major Telcos which you may be working with already.
A partner can remove much of the administrative burden for managing your connectivity.
Worldwide locations
North America Direct Connect Locations
4 AWS Regions
10 Direct Connect Locations to leverage
Each Direct Connect Location has a 1 to 1 relationship with a specific region for Private Connectivity (Private VIF to a VPC VGW)
Each Direct Connect Location has a 1 to all relationship to the AWS Regions when using a Public Interface and Public Services (S3)
The list is growing to provide more options and to bring the “last mile” distance down between your infrastructure and the chosen Region.
Now that we understand how to move data to AWS, lets discuss some options how to optimally store the data
S3 and EBS are the two most common services to leverage for storage: S3 for object storage, and EBS for block-based storage via an EC2-mounted file system.
I wanted to showcase some abilities and new(er) features in both of these services which may be of interest.
With each object that you store in S3, there are 4 available storage classes.
1. Standard - 11 9's durability...
2. RRS - 4 9's of durability
3. SIA - 11 9's of durability, less available and duration/access taxes
4. Glacier - 11 9's of durability, 3-5 hour SLA for object access retrieval (very cold)
We will primarily focus on Standard, SIA, and Glacier.
In looking at these three storage classes (Standard, SIA and Glacier) you can see their purpose with respect to the expected Hotness of the data stored within their respective class.
S3 Standard is for your "Hottest" data that you need to have the protection Standard provides but also the direct and immediate accessibility of it.
S3-IA – Keeps your data warm, just in case you need direct and immediate access to it. But you are going to pay a request fee per object.
Glacier – This is your archive data which isn’t meant to be directly accessible. There is a 3-5 hour SLA for each object to be retrieved. This is Cold, Archive data
Lifecycle policies can manage the change of storage class for your objects to meet your business rules for the type of data stored in a bucket.
As you move from Standard (hot) object classes down to Glacier (cold), your storage price at the object level decreases; however, the accessibility decreases and the price for object retrieval increases.
Prices vary between regions. These prices are representing our US-East-1 region
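The tiering rules described above can be expressed as an S3 lifecycle configuration. A sketch of the dict that boto3's put_bucket_lifecycle_configuration expects (the bucket name, rule ID, and prefix are placeholders; the 30/90-day thresholds mirror the class minimums on slide 30):

```python
# Transition objects to Standard-IA after 30 days and to Glacier
# after 90 days, matching the class minimum durations discussed above.
lifecycle = {
    "Rules": [
        {
            "ID": "tier-down-logs",            # illustrative rule name
            "Filter": {"Prefix": "logs/"},     # illustrative key prefix
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
        }
    ]
}

print(lifecycle["Rules"][0]["Transitions"])

# With boto3 (not run here):
#   import boto3
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="my-bucket", LifecycleConfiguration=lifecycle)
```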
Recently, a new volume type was added to the Elastic Block Store service.
These are the ST1 and SC1 volume types which are classified as being Throughput Optimized versus measuring in terms of IOPS like the other volumes.
The volumes work based on a burst credit model, similar to the T2 EC2 instance family, which has a CPU bursting model.
Depending on the Volume type (SC1, ST1) you will have a baseline level of throughput and also based on the volume type and size, you will receive additional credit and burst ceilings for each TB of volume size.
These volumes are optimized for sequential workloads.
Historically for these use cases, you would launch EC2 instances which have Ephemeral/Instance storage available to the EC2 instance. The quantity and type of storage varies from instance type to instance type (availability as well).
If you needed a lot of local, ephemeral storage for your application, you were forced to choose a very large EC2 instance, even though you may not have needed all of the CPU/memory resources that came with it. You also needed to build in protection for the data, since these are temporary, or ephemeral, disks: as soon as you shut down the server, everything is erased, and you can't perform EBS snapshots against these volumes either. So you were potentially doing some replication to a standby system for resiliency.
Now that you can allocate and attach EBS volumes, you can leverage the EBS snapshot capability for your RPO/RTO objectives, and you can right-size the EC2 instance type to meet your CPU/Memory and performance requirements.
As I mentioned before, these are optimized for Sequential I/O. The available credits are depleted based on the size of the I/O requests.
So if you do a bunch of very small random I/O requests, each request will deplete a full 1MB I/O credit.
Sequential I/O requests are merged and depleted against the same 1MB I/O credits, but you will deplete them much more slowly.
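The random-versus-sequential difference is easy to quantify with a simplified model of the 1 MiB credit accounting described above (this is a sketch of the idea, not the exact EBS algorithm).

```python
# Simplified model: each random I/O burns at least one full 1 MiB
# credit, while contiguous sequential I/Os merge into 1 MiB units first.
MIB = 1024 * 1024

def credits_consumed(request_sizes_bytes, sequential):
    """Estimate 1 MiB credits consumed by a batch of I/O requests."""
    if sequential:
        # contiguous requests merge; total bytes rounded up to 1 MiB units
        total = sum(request_sizes_bytes)
        return -(-total // MIB)  # ceiling division
    # random: every request burns at least one full credit
    return sum(max(1, -(-size // MIB)) for size in request_sizes_bytes)

# 1000 random 16 KiB reads burn 1000 credits...
random_cost = credits_consumed([16 * 1024] * 1000, sequential=False)
# ...but the same bytes read sequentially merge into just 16 credits.
sequential_cost = credits_consumed([16 * 1024] * 1000, sequential=True)
```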
You can still provision up to a 16TB volume, but you have to start at 500GB.
The price benefits can be substantial, depending on the I/O requirements.
Now we move into the Process Phase of the Journey.
There are many more services available that would meet the Process Phase of our Data.
However, I wanted to spend a little time on a few.
RDS is our Relational Database Service, where you outsource the administration and operational tasks for your RDBMS to us and focus on your schema, data, and security.
A newer capability we have is the Database Migration Service. I wanted to give some details about how this service can be used.
Kinesis is a great service for handling the ingestion and processing of streaming data. There are some new features that you may not be aware of.
Quick overview of what RDS is.
This is a Deep Dive so there are some assumptions that some of the basics with RDS and the benefits are already understood.
We are going to touch on many of these in more depth throughout the presentation.
RDS is a managed database service. This service allows you more time to focus on your application: You focus on Schema Design, query construction, query optimization, and building your application.
Infra Mgmt
AWS does patching
AWS Handles backup and replication
AWS manages the Infrastructure and making sure that it is healthy
You focus on your application
HA and automated failover management
High-end features that you could build on your own but that you get automatically.
Instant Provisioning
Simple and Fast to deploy
When you need to launch a new database or change your existing one you can at any point in time with no need to wait for infrastructure to be ordered or configured.
Scale up/Down
Simple and fast to scale. You can change your configuration to meet your needs when you want to.
Cost-effective
No Cost to get started
Pay only for what you consume
Application Compatibility
* Six different engines to choose from
* There are many popular applications, or even your own custom code, that you may be running on your own infrastructure, and they can still work on RDS. If you are using one of the engines that are currently supported, there is a good chance you can get it working on RDS.
When you think about all that it takes to get new database infrastructure and an actual database up and running, there are a lot of things that an expert DBA and infrastructure person would have to do. With RDS you get this with just a few clicks and are up and running in a matter of minutes.
There are lots of different choices for your database engine on RDS. Each of these engines operates differently, offers different functionality, and has different licensing requirements.
Everyone has their favorite engine and they use them for specific purposes.
On the commercial side we have Oracle and Microsoft SQL Server
On the open source side we have MySQL, PostgreSQL, and MariaDB
And in its own category we have Amazon Aurora, which is a MySQL-compatible relational database built to take advantage of many of the properties that exist with modern cloud computing.
--------
MariaDB - https://en.wikipedia.org/wiki/MariaDB
MariaDB – A fork of the MySQL database, led by the original developers of MySQL, who created it over concerns that the project might become closed after it was acquired by Oracle. It works to maintain high compatibility with MySQL and also has features to support non-blocking operations and progress reporting.
RDS is a managed service so in some cases you cannot do everything like you might do with a database running on EC2 or in your own data center. AWS is doing some of the administration so there are some tradeoffs.
It is important for you to understand some of the limitations that exist within RDS as you look to use it.
The RDS service fully manages the host, operating system, and database version that you are running on. This takes a lot of burden off your hands, but you also get no access to the database host operating system, limited ability to modify configuration that is normally managed on the host operating system, and generally no access to functions that rely on host operating system configuration. If one of the reasons you primarily access the host operating system is for metrics, we have made some improvements in that space to help you along, and we will talk about those later on.
All of your storage on RDS is also managed. Once again, this takes a lot of burden off of you from an administrative standpoint, but it also means there are some limits. You can’t just order more or larger disks and have them swapped in, and you cannot connect your database to a different backend SAN. There are storage limits of 4TB with SQL Server; 6TB with MySQL, MariaDB, PostgreSQL, and Oracle; and 64TB with Aurora. If you choose to grow the size of your database, you have to actually tell the RDS service that you want more storage so that it can provision it. If you have hit the max, then that is all you can do, and you will have to figure out whether you need to shard across multiple RDS instances, purge some of your current data, or look at archiving old data to another environment.
There are gaps between what you can do with a self-managed database and what you can do with RDS, but the gap is narrowing as we roll out new functionality.
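The storage ceilings quoted above can be encoded as a quick capacity-planning check. These are the limits as stated in this talk; AWS has raised them since, so verify against the current RDS documentation.

```python
# RDS storage ceilings as quoted in this session (check current docs
# before relying on these; limits have grown over time).
MAX_STORAGE_GB = {
    "sqlserver": 4 * 1024,
    "mysql": 6 * 1024,
    "mariadb": 6 * 1024,
    "postgres": 6 * 1024,
    "oracle": 6 * 1024,
    "aurora": 64 * 1024,
}

def fits_in_rds(engine, required_gb):
    """True if the dataset fits under the engine's RDS storage cap."""
    return required_gb <= MAX_STORAGE_GB[engine]
```

A 5TB dataset would fit on Aurora or MySQL under these limits but not on SQL Server, which is the kind of check that tells you when sharding or archiving comes into play.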
For a more robust database architecture you are going to want to look at having a Multi Availability Zone configuration.
With a Multi Availability Zone configuration, you choose which Availability Zone you want your primary database instance to be in. The RDS service will then place a standby instance and storage in another Availability Zone of the AWS Region that you are operating in. The instance will be of the same type as your primary, and the storage will be the same configuration and size as your primary.
The RDS service will then take responsibility for ensuring that your primary is healthy and that your standby is in a state that you can recover to. Data on the primary database is regularly replicated to the storage in the standby configuration. This standby is only there to handle failover from your primary; it is not something that you can log in to or access while the primary is up and working.
The failover conditions that this configuration handles are:
Loss of availability in primary AZ
Loss of network connectivity to primary
Compute unit failure on primary
Storage failure on primary
-----------------------
SQL Server uses mirroring to support this functionality
Move data to the same or different database engine
~ Supports Oracle, Microsoft SQL Server, MySQL, PostgreSQL, MariaDB, Amazon Aurora, Amazon Redshift (soon)
Keep your apps running during the migration
~ DMS minimizes impact to users by capturing and applying data changes
Start your first migration in 10 minutes or less
~ The AWS Database Migration Service takes care of infrastructure provisioning and allows you to set up your first database migration task in less than 10 minutes
Replicate within, to or from AWS EC2 or RDS
~ After migrating your database, use the AWS Database Migration Service to replicate data into your Redshift data warehouses, cross-region to other RDS instances, or back to on-premises
The Schema Conversion Tool is available for your more complicated heterogeneous platform migrations – Oracle DB -> MySQL/Aurora/PostgreSQL for example
- It will analyze objects such as database views, stored procedures and functions and convert that logic over to the target database.
- Anything not converted is clearly marked for review
Amazon Kinesis – Service for Data streaming
Easy to use: Focus on quickly launching data streaming applications instead of managing infrastructure.
Real-Time: Collect real-time data streams and promptly respond to key business events and operational triggers.
Flexible: Choose the service, or combination of services, for your specific data streaming use cases.
Three different capabilities available (or will be soon) within Kinesis
Kinesis Streams – For your near real time data streaming and associated processing application. Highly customizable and scalable.
Kinesis Firehose – Simplified data ingestion endpoint for consolidating and loading of data into other AWS Services such as S3.
Kinesis Analytics – Perform inline analysis on your data streams using standard SQL queries. In preview at the moment.
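For Kinesis Streams, records are routed to shards by the MD5 hash of their partition key over a 128-bit hash key space. A small sketch of that routing, assuming shards that evenly split the space (shard count and keys are illustrative):

```python
import hashlib

# Sketch of Kinesis Streams partition-key routing: MD5(partition key)
# selects a shard from the 128-bit hash key space. Assumes evenly
# split shards; shard count and keys below are examples.
NUM_SHARDS = 4
SPACE = 2 ** 128

def shard_for_key(partition_key, num_shards=NUM_SHARDS):
    """Map a partition key to a shard index."""
    h = int(hashlib.md5(partition_key.encode()).hexdigest(), 16)
    return h * num_shards // SPACE
```

Because the mapping is deterministic, all records with the same partition key land on the same shard, which is what gives you per-shard ordering in your processing application.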
Lots of ways to process streaming data, just like there are lots of ways to process batch data with Hadoop
Open source from AWS
Ingest data from many data sources
Load data into one or more data targets from a single Stream for specific analytic or retention requirements
A Kinesis Firehose stream can load data directly to S3, Redshift and Amazon Elasticsearch. One or all three can be leveraged by the same stream to meet your specific requirements.
The service manages the scalability of the underlying resources to meet your streaming data requirements.
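A Firehose-to-S3 delivery configuration is essentially a declaration of where to deliver and how to buffer. This sketch follows the field names of the Firehose S3 destination configuration; the role ARN, bucket, and thresholds are hypothetical.

```python
# Hedged sketch of a Firehose S3 destination configuration. Firehose
# buffers incoming records and flushes to S3 when either the size or
# the interval threshold is reached, whichever comes first.
s3_destination = {
    "RoleARN": "arn:aws:iam::123456789012:role/firehose-role",  # hypothetical
    "BucketARN": "arn:aws:s3:::my-ingest-bucket",               # hypothetical
    "Prefix": "events/",
    "BufferingHints": {
        "SizeInMBs": 5,           # flush after 5 MB buffered...
        "IntervalInSeconds": 300  # ...or after 5 minutes
    },
    "CompressionFormat": "GZIP",  # compress objects written to S3
}
```

Producers then just put records to the stream; the buffering, batching, and delivery to S3 (or Redshift, or Amazon Elasticsearch) are handled by the service.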
Now to the final Phase which is Deliver.
These services will help in the delivery of Insights and analytics gathered from the data.
Amazon QuickSight
Questions that organizations have for their data
Traditional Business Intelligence tools are very expensive and complicated to implement.
QuickSight makes it easy for all employees to build visualizations, perform ad-hoc analysis, and quickly get business insights from their data.
QuickSight integrates automatically with other AWS data services such as RDS, Redshift, S3, or even flat files.
Super-fast, Parallel, In-memory Calculation Engine (SPICE) – Enabling users to run interactive queries against complex datasets and get rapid responses
If you are using an ELK stack presently (Elasticsearch, Logstash, Kibana) then you know the complexity involved with managing this platform.
The new Amazon Elasticsearch Service removes the administrative/operational burden allowing you to focus on the indices and visualizations.
Amazon ES offers several features which we will go through in more detail shortly but some quick highlights.
There are several options with the console, SDK, or CLI to easily set up the cluster with an optimized configuration to match your application needs.
The service exposes the underlying Elasticsearch API so you can easily migrate existing workloads. It comes with built-in Kibana, and we have released a Logstash output plugin that makes it easy for you to connect your Logstash instances to your domains running in Amazon ES.
You have several options to secure your cluster using AWS IAM. We will walk through this in more detail
The service also comes with several integrations with other AWS services, like CloudWatch Logs and DynamoDB, to make the experience of connecting all these services a lot easier for you.
Here is a sample dashboard using Kibana 4 running on an Amazon ES domain. This shows VPC flow logs data being visualized in a Kibana dashboard.
This list has changed. We now have 56 Edge locations located around the world.
This is on top of the 12 Regions.
If you are able to bring content closer to your users, the experience will be better.
So no matter if you are using a custom origin or AWS, and no matter the content type, CloudFront will work with you to improve your users’ experience.
User to CloudFront
Routing based on lowest latency
SSL termination close to viewers
CloudFront to Origin
TCP optimizations
Keep-alive connections
Network paths monitoring
HTTP verb optimization (GET, PUT, etc.)
Let’s talk about why we built the WAF, based on customer feedback.
Initially, the WAF will be a CDN (CloudFront) offering, but it will be extended shortly after launch to include ELB.