Sony Interactive Entertainment engineers presented on their journey moving mission-critical applications from a single AWS region to an active-active multi-region architecture. They modeled their application dependencies as a graph using Neo4j to identify services ready for multi-region and plan the migration order. Key lessons included validating data replication technologies through testing, redesigning some services to be multi-region native, and implementing centralized configuration to isolate applications within a region.
AWS re:Invent 2016: How News UK Centralized Cloud Governance Through Policy M...Amazon Web Services
When you run a complex AWS environment with thousands of Amazon EC2 instances, more than half a petabyte of object storage, and support the largest daily newspapers in the UK, you need a world-class cloud management strategy. For companies like News Corp, implementing policies that automate infrastructure schedules, right-size workloads, and manage and modify reservations is critical. As you scale your cloud infrastructure, defining centralized governance rules while enabling decentralized management is key to running an optimized cloud.
This session is designed for advanced operations, infrastructure, and engineering teams looking to improve or deploy optimization strategies. It covers five cloud management best practices, including automating Reserved Instance modifications, setting policies to ensure proper tagging, and scheduling lights-on/lights-off policies. Session sponsored by CloudHealth Technologies.
AWS re:Invent 2016: Zero to Google Chrome in 60 Minutes: Lightweight and Inex...Amazon Web Services
You’ve bet big with Amazon WorkSpaces to remove the challenges of managing your physical fleet of Macs and PCs. Now what? In this session, we’ll demonstrate how you can deploy a rich cloud-based Windows experience on lightweight hardware to rein in management issues, improve TCO, and be at parity with your traditional environment. We’ll take you through the client device ecosystem – from zero clients to thin clients to Google Chrome and Chromium OS clients – and strengthen your ability to determine the right client device strategy moving forward. Live product demonstrations will be provided as we recount how customers are moving to lightweight devices, and what best practices we’ve learned along the way.
Getting Started with AWS Internet of Things - AWS Summit Cape Town 2017Amazon Web Services
AWS IoT is a managed cloud platform that lets connected devices easily and securely interact with cloud applications and other devices. In this tech talk, we will discuss how constrained devices can leverage AWS IoT to send data to the cloud and receive commands back from the cloud using the protocol of their choice. We will use the AWS IoT Starter Kit to demonstrate building a real connected product, show how to connect securely to AWS IoT using the MQTT, WebSockets, and HTTP protocols, and show how developers and businesses can leverage features of AWS IoT such as Device Shadows and the Rules Engine, which provides message processing and integration with other AWS services.
AWS Speaker: Boaz Ziniman, Technical Evangelist - Amazon Web Services
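The Device Shadow feature mentioned above can be illustrated with a small sketch. This is a simplified model of the shadow semantics, not the AWS implementation: a shadow document keeps `desired` and `reported` state, and the service publishes a `delta` containing the desired fields that differ from what the device last reported.

```python
def shadow_delta(desired, reported):
    """Return the delta document: desired fields that differ from reported.

    Simplified model of AWS IoT Device Shadow delta semantics; nested
    objects are compared recursively.
    """
    delta = {}
    for key, want in desired.items():
        have = reported.get(key)
        if isinstance(want, dict) and isinstance(have, dict):
            nested = shadow_delta(want, have)
            if nested:
                delta[key] = nested
        elif want != have:
            delta[key] = want
    return delta

# A device reports its current state; an application sets the desired state.
reported = {"led": "off", "telemetry": {"interval_s": 60}}
desired = {"led": "on", "telemetry": {"interval_s": 60}}
print(shadow_delta(desired, reported))  # {'led': 'on'}
```

Only the `led` field appears in the delta, so a device subscribed to the delta topic would receive just the change it needs to act on.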
AWS re:Invent 2016: What’s New with Amazon Redshift (BDA304)Amazon Web Services
In this session, you learn about the latest and hottest features of Amazon Redshift. Join Vidhya Srinivasan, General Manager of Amazon Redshift, to take a deep dive into the architecture and inner workings of Amazon Redshift. You discover how the recent availability, performance, and manageability improvements we’ve made can significantly enhance your end user experience. You also get a glimpse of what we are working on and our plans for the future.
AWS re:Invent 2016 | GAM302 | Sony PlayStation: Breaking the Bandwidth Barrier...Amazon Web Services
As systems and user bases grow, a once-abundant resource can become scarce. As PlayStation services scaled out to millions of users at over 100,000 requests/second, network throughput became a precious resource to optimize for. Alex and Dustin talk about how the microservices that power PlayStation achieved low-latency interactions while conserving precious network bandwidth. These services, powered by Elastic Load Balancing and Amazon DynamoDB, benefited from soft-state optimizations, a pattern used in complex interactions such as searching a user’s social graph in under 100 ms, or a user’s game library in 7 ms. As a developer using Amazon Web Services, you will discover new patterns and implementations that make better use of your network, instances, and load balancers to deliver personalized experiences to millions of users while saving costs.
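The abstract names soft-state optimization without showing code. One minimal form of the pattern is a cache whose entries simply expire after a TTL, so stale data is refetched rather than explicitly invalidated over the network. The sketch below is an assumption about the pattern's shape, not PlayStation's actual implementation:

```python
import time

class SoftStateCache:
    """Tiny soft-state cache: entries expire after a TTL, trading a
    little staleness for far fewer network round trips. Illustrative
    sketch of the pattern the talk describes."""

    def __init__(self, ttl_seconds, fetch, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.fetch = fetch          # fallback loader, e.g. a DynamoDB read
        self.clock = clock
        self._store = {}            # key -> (expires_at, value)

    def get(self, key):
        now = self.clock()
        entry = self._store.get(key)
        if entry and entry[0] > now:
            return entry[1]         # still fresh: no backend call
        value = self.fetch(key)
        self._store[key] = (now + self.ttl, value)
        return value

# Usage: count how often the backing store is actually hit.
calls = []
cache = SoftStateCache(ttl_seconds=5,
                       fetch=lambda k: calls.append(k) or k.upper())
print(cache.get("games"), cache.get("games"), len(calls))  # GAMES GAMES 1
```

Two reads within the TTL cost one backend fetch; bandwidth savings scale with the read rate.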
AWS re:Invent 2016: Accelerating the Transition to Broadcast and OTT Infrastr...Amazon Web Services
In this session, we show how to seamlessly transition VOD, live, and other advanced media workflows from on-premises deployments to the cloud. Cinépolis provides an overview of their transcoding solution on AWS and how they have seamlessly expanded it to increase their customer reach. We show real-world examples of the API calls used to configure and control all elements of the workflow, including compression and origination, and how standard AWS services can be media-optimized with Elemental Technologies to form a robust live solution.
AWS re:Invent 2016: AWS Database State of the Union (DAT320)Amazon Web Services
Raju Gulabani, vice president of AWS Database Services, discusses the evolution of database services on AWS and the new database services and features launched this year, and shares the vision for continued innovation in this space. We are witnessing unprecedented growth in the amount of data collected, in many different shapes and forms. Storage, management, and analysis of this data require database services that scale and perform in ways not possible before. AWS offers a collection of database and data services, including Amazon Aurora, Amazon DynamoDB, Amazon RDS, Amazon Redshift, Amazon ElastiCache, Amazon Kinesis, and Amazon EMR, to process, store, manage, and analyze data. In this session, we provide an overview of AWS database services and discuss how our customers are using these services today.
AWS re:Invent 2016: How Pitney Bowes is transforming their business in the cl...Amazon Web Services
Pitney Bowes is reinventing its business around a SaaS, cloud-based model to deliver services for clients globally, centered on the Pitney Bowes Commerce Cloud. The Commerce Cloud is a commerce enabler, providing access to solutions, analytics, and APIs across the full commerce continuum with the speed and agility to help clients identify customers, locate opportunities, enable communications, power shipping from anywhere to everywhere, and manage payments. During this session, the Pitney Bowes team discusses the strategy behind the Commerce Cloud and how they accelerated innovation and the creation of new product solutions by taking full advantage of the AWS platform, from cloud infrastructure services (compute, big data, storage and content delivery, databases, and networking) to analytics and Internet of Things services. Pitney Bowes’ applications are deployed in Docker containers using AWS Elastic Beanstalk, and utilize Amazon S3, Amazon RDS, SQS, SNS, and ElastiCache. CloudWatch and CloudFormation are used to manage the solutions that Pitney Bowes brings to market. Additionally, the Pitney Bowes data and analytics platform is powered by Amazon EMR, Spark, Aurora, DynamoDB, and Postgres.
The session will also discuss Pitney Bowes’ partnership with AWS to deploy Pitney Bowes’ APIs for Location Intelligence via the AWS Marketplace, allowing developers, customers and partners to access innovative location intelligence data services in an easy to consume and highly reliable manner.
AWS re:Invent 2016: Turner's cloud native media supply chain for TNT, TBS, Ad...Amazon Web Services
As Turner continues its transition from a traditional broadcast organization to a consumer-centric, data-driven media company, we are being challenged to rethink our approach to content supply. We need to achieve new levels of agility, flexibility, and scalability to meet the rapidly evolving demands of our top media brands, including TBS, TNT, Cartoon Network, Adult Swim, and CNN. To that end, we are transitioning the infrastructure that acquires, processes, and distributes media for consumer-facing systems to the cloud. At the core of this environment is our Supply Chain Management (SCM) application. The SCM app provides business and technical process management via an HTML-based UI framework, a state machine, a rules engine, a cost model, and a forms service. We took advantage of several AWS services, including Lambda, S3, DynamoDB, SNS, Elastic, CloudFormation, and CodeCommit. The entire system is instance-less, with all application code running either in the browser or within Lambda. To ease development and debugging, we created a method to run all JS libraries in the browser, switching to Lambda when we deploy with CodeCommit. Cloud media processing infrastructure is created on demand via an integration with SDVI. The SDVI and SCM apps exchange events and data via SNS and S3.
AWS re:Invent 2016: IoT Blueprints: Optimizing Supply for Smart Agriculture f...Amazon Web Services
30% of global food produce is wasted in the supply chain: storage, movement, and delivery. By using AWS IoT to enable sensors to manage the supply chain, and big data to understand patterns, industrial companies can gain efficiencies in electricity and transportation.
Migration Recipes for Success - AWS Summit Cape Town 2017 Amazon Web Services
Now that you have earmarked workloads for migration, it's time to look at the various tools and methodologies that are available to help customers shift applications to AWS. This session highlights some of the key AWS tools, services and approaches that organisations are using to successfully migrate to the cloud.
AWS Speaker: Sven Hansen, Solution Architect - Amazon Web Services
Customer Speaker: Pieter Breed – Core Platform Engineer Zoona
This session will cover common customer implementations and patterns for building connected/smart home solutions with AWS IoT, including the end-user experience of onboarding a smart home appliance and then integrating it with the AWS ecosystem (for targeted push notifications, predictive maintenance, and so on). iRobot will join us to discuss their smart home integrations with the Roomba 980 and AWS IoT.
Amazon Kinesis provides services for you to work with streaming data on AWS. Learn how to load streaming data continuously and cost-effectively to Amazon S3 and Amazon Redshift using Amazon Kinesis Firehose without writing custom stream processing code. Get an introduction to building custom stream processing applications with Amazon Kinesis Streams for specialised needs.
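Firehose's batch-delivery API has per-call limits, which is what makes it cost-effective to load without custom stream processing code. The helper below sketches client-side batching under the `PutRecordBatch` limits as they stood at the time of writing (up to 500 records and about 4 MiB per call; check the current service quotas). It is an illustrative pure-Python sketch, not an AWS SDK call:

```python
def batch_for_firehose(records, max_records=500, max_bytes=4 * 1024 * 1024):
    """Split byte-string records into batches that respect Kinesis
    Firehose PutRecordBatch limits (assumed here: 500 records / 4 MiB
    per call). Each batch would then be sent in one API call."""
    batches, current, current_bytes = [], [], 0
    for rec in records:
        if current and (len(current) >= max_records
                        or current_bytes + len(rec) > max_bytes):
            batches.append(current)
            current, current_bytes = [], 0
        current.append(rec)
        current_bytes += len(rec)
    if current:
        batches.append(current)
    return batches

# 1,200 small records fit in 3 API calls instead of 1,200.
batches = batch_for_firehose([b"x" * 100] * 1200)
print([len(b) for b in batches])  # [500, 500, 200]
```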
As you begin to move out of your data center and develop a cloud-first strategy, you'll need support for large-scale migrations to AWS. In this session, CSC shares details about the journey to AWS by some of our largest enterprise customers. We provide best practices for planning your large-scale migrations and focus on business processes in addition to technology. We show how CSC used this approach to migrate to AWS as part of our separation last year into two publicly traded companies: CSC and CSRA. In less than six months, CSC took our 56-year-old company and broke it into two companies, one of which was brand new and without any infrastructure or enterprise applications. We explain how we leveraged the AWS Partner ecosystem to achieve this incredible IT challenge. Session sponsored by CSC.
AWS Competency Partner
Come along to this session to learn how large scale systems like SAP, Oracle, Microsoft and others are being used by enterprise customers of all shapes and sizes. In this session you will discover some of the challenges and approaches that will make you successful in deploying and operating these systems on AWS. This is a must session for enterprise customers that are looking at moving material workloads into the cloud.
AWS re:Invent 2016: Deliver Engaging Experiences with Custom Apps Built on Sa...Amazon Web Services
Your developers are the most important part of transforming your customer interactions into engaging experiences. Salesforce App Cloud, which brings together Heroku, Force.com and Lightning, abstracts away infrastructure and devops complexity, so you can focus on what matters most: building differentiated experiences through apps. Reducing time to market and letting you iterate fast helps you rise above the competition and build lasting customer relationships. In this session, you hear from Zayo, a leading global communications infrastructure services provider, and how they are leveraging the power of integrating the Salesforce and AWS platforms to deliver highly engaging customer experiences, enhancing developer productivity and driving faster innovation cycles. We spotlight Heroku Connect, which makes it easy to extend and synchronize your customer data between Salesforce and AWS and enhance it in ways that empower your developers to do what they do best: innovate. Session sponsored by Salesforce.
Accelerating Software Delivery with AWS Developer Tools & AWS Mobile services...Amazon Web Services
Software release cycles are now measured in days instead of months. Cutting-edge companies are continuously delivering high-quality software at a fast pace. In this session, we will cover how you can begin your DevOps journey by sharing best practices and tools used by the "two pizza" engineering teams at Amazon. We will showcase how you can accelerate developer productivity by implementing continuous integration and delivery workflows. We will also give an introduction to AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, and AWS CodeDeploy, the services inspired by Amazon's internal developer tools and DevOps practice.
AWS Speaker : Ian Massingham, Sr Mgr, Technical Evangelist - Amazon Web Services
AWS re:Invent 2016: Event Handling at Scale: Designing an Auditable Ingestion...Amazon Web Services
How does McGraw-Hill Education use the AWS platform to scale and reliably receive 10,000 learning events per second? How do we provide near-real-time reporting and event-driven analytics for hundreds of thousands of concurrent learners in a reliable, secure, and auditable manner that is cost effective? MHE designed and implemented a robust solution that integrates Amazon API Gateway, AWS Lambda, Amazon Kinesis, Amazon S3, Amazon Elasticsearch Service, Amazon DynamoDB, HDFS, Amazon EMR, Amazon EC2, and other technologies to deliver this cloud-native platform across the US and soon the world. This session describes the challenges we faced, architecture considerations, how we gained confidence for a successful production roll-out, and the behind-the-scenes lessons we learned.
A serverless architecture for real-time data processing eliminates the need to provision and manage the servers required to process files or streaming data in real time. In this session, we will cover the fundamentals of using AWS Lambda to process data in real time from push sources such as AWS IoT and pull sources such as Amazon DynamoDB Streams or Amazon Kinesis. We'll also discuss best practices and do a deep dive into AWS Lambda real-time stream processing.
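The Lambda-plus-Kinesis pattern described above has a well-known mechanical detail: Kinesis delivers record payloads to Lambda base64-encoded. A minimal handler sketch (error handling and downstream writes omitted) looks like this:

```python
import base64
import json

def handler(event, context):
    """Minimal AWS Lambda handler for a Kinesis stream event.
    Each record's payload arrives base64-encoded under
    event["Records"][i]["kinesis"]["data"], so it is decoded
    before being parsed. Sketch only; a production handler would
    add error handling and write results downstream."""
    processed = []
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])
        processed.append(json.loads(payload))
    return {"count": len(processed), "items": processed}

# A hand-built event in the shape Lambda receives from Kinesis:
event = {"Records": [
    {"kinesis": {"partitionKey": "user-1",
                 "data": base64.b64encode(b'{"clicks": 3}').decode()}}
]}
print(handler(event, None))  # {'count': 1, 'items': [{'clicks': 3}]}
```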
Introduction to Cloud Computing with Amazon Web Services-ASEAN Workshop Serie...Amazon Web Services
The AWS Workshop Series Online is a series of live webinars designed for IT professionals who are looking to leverage the AWS Cloud to build and transform their business, are new to the AWS Cloud, or are looking to further expand their skills and expertise. In the first of this series, we will cover 'Introduction to Cloud Computing with Amazon Web Services'.
Modern data architectures for real time analytics and engagementAmazon Web Services
The AWS Workshop Series Online is a series of live webinars designed for IT professionals who are looking to leverage the AWS Cloud to build and transform their business, are new to the AWS Cloud, or are looking to further expand their skills and expertise. In this series, we will cover 'Modern Data Architectures for Real-time Analytics and Engagement'.
NEW LAUNCH! Delivering Powerful Graphics-Intensive Applications from the AWS ...Amazon Web Services
AWS provides unprecedented computational power for graphics-intensive applications in areas such as design, engineering simulations, and 3D content rendering. Together, Amazon EC2 Elastic GPUs and Amazon AppStream provide the capabilities necessary for end users to access and run these applications. In this session, you learn more about Elastic GPUs and Amazon AppStream, and how you can run graphics-intensive applications on AWS. You also hear from ANSYS, a leader in engineering simulation software, and why they are moving the ANSYS Enterprise Cloud to Elastic GPUs and Amazon AppStream to deliver a better experience for customers.
AWS re:Invent 2016 was AWS’ largest event yet, with over 32,000 attendees, 400 breakout sessions, and two keynotes of new product announcements. In this talk, we’ll explore the core themes of AWS re:Invent 2016, such as serverless and artificial intelligence. We will also drill down into several of the services and features unveiled, including AWS Batch, AWS Shield, Aurora for PostgreSQL, X-Ray, Polly, Lex, Rekognition, and AWS Step Functions. Light appetizers and refreshments will be provided.
AWS re:Invent 2016: Building Big Data Applications with the AWS Big Data Plat...Amazon Web Services
Building big data applications often requires integrating a broad set of technologies to store, process, and analyze the increasing variety, velocity, and volume of data being collected by many organizations. In this session, we show how you can build entire big data applications using a core set of managed services including Amazon S3, Amazon Kinesis, Amazon EMR, Amazon Elasticsearch Service, Amazon Redshift, and Amazon QuickSight.
We walk you through the steps of building and securing a big data application using the AWS Big Data Platform. We also share best practices and common use cases for AWS big data services, including tips to help you choose the best services for your specific application.
The AWS platform has developed rapidly over the past few years through continuous iteration and innovation. In this session we provide a high-level overview of the AWS platform and how customers leverage it to create highly available and scalable infrastructure. This session provides the knowledge required to get started with AWS.
High Availability Application Architectures in Amazon VPC (ARC202) | AWS re:I...Amazon Web Services
Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the Amazon Web Services (AWS) cloud where you can launch AWS resources in a virtual data center that you define. In this session you learn how to leverage the VPC networking constructs to configure a highly available and secure virtual data center on AWS for your application. We cover best practices around choosing an IP range for your VPC, creating subnets, configuring routing, securing your VPC, establishing VPN connectivity, and much more. The session culminates in creating a highly available web application stack inside of VPC and testing its availability with Chaos Monkey.
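The subnet-planning step the session covers can be sketched with the Python standard library alone. The CIDR, prefix lengths, and Availability Zone names below are example values, not recommendations (note also that AWS reserves the first four and last IP address in every subnet):

```python
import ipaddress

# Carve a VPC CIDR into per-AZ public and private /24 subnets,
# mirroring the "choose an IP range, create subnets" planning step.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))  # 256 /24 blocks to allocate

azs = ["us-east-1a", "us-east-1b", "us-east-1c"]  # example AZs
plan = {}
for i, az in enumerate(azs):
    # First block of /24s for public subnets, next block for private.
    plan[az] = {"public": subnets[i], "private": subnets[len(azs) + i]}

for az, nets in plan.items():
    print(az, "public:", nets["public"], "private:", nets["private"])
```

Laying the plan out this way before touching the console makes it easy to verify that subnets never overlap and that each AZ gets a symmetric slice of the address space.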
AWS re:Invent 2016: AWS Database State of the Union (DAT320)Amazon Web Services
Raju Gulabani, vice president of AWS Database Services (AWS), discusses the evolution of database services on AWS and the new database services and features we launched this year, and shares our vision for continued innovation in this space. We are witnessing an unprecedented growth in the amount of data collected, in many different shapes and forms. Storage, management, and analysis of this data requires database services that scale and perform in ways not possible before. AWS offers a collection of such database and other data services like Amazon Aurora, Amazon DynamoDB, Amazon RDS, Amazon Redshift, Amazon ElastiCache, Amazon Kinesis, and Amazon EMR to process, store, manage, and analyze data. In this session, we provide an overview of AWS database services and discuss how our customers are using these services today.
AWS re:Invent 2016: How Pitney Bowes is transforming their business in the cl...Amazon Web Services
Pitney Bowes is reinventing its business based on a SaaS and a cloud-based model to deliver services for their clients globally centered on the Pitney Bowes Commerce Cloud. The Pitney Bowes Commerce Cloud is a commerce enabler, providing access to solutions, analytics and APIs across the full commerce continuum with speed and agility to help clients identify customers, locate opportunities, enable communications, power shipping from anywhere to everywhere, and manage payments. During this session, the Pitney Bowes team will discuss the strategy behind the Commerce Cloud, how we accelerated the speed of innovation and creation of new product solutions by taking full advantage of the AWS platform, from Cloud Infrastructure Services (Compute, Big Data, Storage and Content Delivery, Databases and Networking) , to Analytics and Internet of Things Services. Pitney Bowes’ applications are deployed in Docker containers using AWS Elastic Beanstalk, and are utilizing S3, Amazon RDS, SQS, SNS, and ElastiCache. CloudWatch and CloudFormation are being used to manage the solutions that Pitney Bowes’ brings to the market. Additionally, the Pitney Bowes Data and Analytics platform is powered by Elastic Map Reduce (EMR), Spark, Aurora, DynamoDB and Postgres.
The session will also discuss Pitney Bowes’ partnership with AWS to deploy Pitney Bowes’ APIs for Location Intelligence via the AWS Marketplace, allowing developers, customers and partners to access innovative location intelligence data services in an easy to consume and highly reliable manner.
AWS re:Invent 2016: Turner's cloud native media supply chain for TNT, TBS, Ad...Amazon Web Services
As Turner continues to make the transition from a traditional broadcast organization to a consumer-centric, data-driven media company, we are being challenged to re-think our approach to content supply. There is a need to achieve new levels of agility, flexibility and scalability to meet the rapidly evolving demands of our top media brands - including TBS, TNT, Cartoon Network, Adult Swim and CNN. To that end, we are transitioning the infrastructure that acquires, processes and distributes media for consumer-facing systems to the cloud. At the core of this environment is our Supply Chain Management application. The SCM app provides business and technical process management via an HTML based UI framework, State Machine, Rules Engine, Cost Model, Forms Service. We took advantage of several AWS specific services, including Lambda, S3, Dynamo DB, SNS, Elastic, Cloud Formation and Code Commit. The entire system is instance-less with all application code running in either the browser or within Lambda's. To ease development and debugging we created a method to run all JS libraries in the browser, switching to Lambda when we deploy with Code Commit. Cloud media processing infrastructure is BEING created on demand via an integration with SDVI. The SDVI and SCM apps exchange events and data via SNS and S3.
AWS re:Invent 2016: IoT Blueprints: Optimizing Supply for Smart Agriculture f...Amazon Web Services
30% of global food produce is wasted in the supply chain: storage, movement, and delivery. By using AWS IOT to enable sensors to manage the supply chain and big data to understand patterns, industrial companies can gain efficiencies in electricity and transportation.
Migration Recipes for Success - AWS Summit Cape Town 2017 Amazon Web Services
Now that you have earmarked workloads for migration, it's time to look at the various tools and methodologies that are available to help customers shift applications to AWS. This session highlights some of the key AWS tools, services and approaches that organisations are using to successfully migrate to the cloud.
AWS Speaker: Sven Hansen, Solution Architect - Amazon Web Services
Customer Speaker: Pieter Breed – Core Platform Engineer Zoona
This session will cover common customer implementations and patterns for building connected/smart home implementations with AWS IoT. This includes the end-user experience for onboarding a smart home appliance and then integrating it with the AWS ecosystem (for targeted push notifications, predictive maintenance, and so on). iRobot will join us to discuss their smart home integrations with the Roomba 980 and AWS IoT.
Amazon Kinesis provides services for you to work with streaming data on AWS. Learn how to load streaming data continuously and cost-effectively to Amazon S3 and Amazon Redshift using Amazon Kinesis Firehose without writing custom stream processing code. Get an introduction to building custom stream processing applications with Amazon Kinesis Streams for specialised needs.
As you begin to move out of your data center and develop a cloud-first strategy, you'll need support for large-scale migrations to AWS. In this session, CSC shares details about the journey to AWS by some of our largest enterprise customers. We provide best practices for planning your large-scale migrations and focus on business processes in addition to technology. We show how CSC used this approach to migrate to AWS as part of our separation last year into two publicly traded companies: CSC and CSRA. In less than six months, CSC took our 56-year-old company and broke it into two companies, one of which was brand new and without any infrastructure or enterprise applications. We explain how we leveraged the AWS Partner ecosystem to achieve this incredible IT challenge. Session sponsored by CSC.
AWS Competency Partner
Come along to this session to learn how large scale systems like SAP, Oracle, Microsoft and others are being used by enterprise customers of all shapes and sizes. In this session you will discover some of the challenges and approaches that will make you successful in deploying and operating these systems on AWS. This is a must session for enterprise customers that are looking at moving material workloads into the cloud.
AWS re:Invent 2016: Deliver Engaging Experiences with Custom Apps Built on Sa...Amazon Web Services
Your developers are the most important part of transforming your customer interactions into engaging experiences. Salesforce App Cloud, which brings together Heroku, Force.com and Lightning, abstracts away infrastructure and devops complexity, so you can focus on what matters most: building differentiated experiences through apps. Reducing time to market and letting you iterate fast helps you rise above the competition and build lasting customer relationships. In this session, you hear from Zayo, a leading global communications infrastructure services provider, and how they are leveraging the power of integrating the Salesforce and AWS platforms to deliver highly engaging customer experiences, enhancing developer productivity and driving faster innovation cycles. We spotlight Heroku Connect, which makes it easy to extend and synchronize your customer data between Salesforce and AWS and enhance it in ways that empower your developers to do what they do best: innovate. Session sponsored by Salesforce.
Accelerating Software Delivery with AWS Developer Tools & AWS Mobile services...Amazon Web Services
Software release cycles are now measured in days instead of months. Cutting-edge companies are continuously delivering high-quality software at a fast pace. In this session, we cover how you can begin your DevOps journey, sharing best practices and tools used by the "two-pizza" engineering teams at Amazon. We showcase how you can accelerate developer productivity by implementing continuous integration and delivery workflows. We also introduce AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, and AWS CodeDeploy, services inspired by Amazon's internal developer tools and DevOps practices.
AWS Speaker: Ian Massingham, Sr. Manager, Technical Evangelist - Amazon Web Services
AWS re:Invent 2016: Event Handling at Scale: Designing an Auditable Ingestion...Amazon Web Services
How does McGraw-Hill Education use the AWS platform to scale and reliably receive 10,000 learning events per second? How do we provide near-real-time reporting and event-driven analytics for hundreds of thousands of concurrent learners in a reliable, secure, and auditable manner that is cost effective? MHE designed and implemented a robust solution that integrates Amazon API Gateway, AWS Lambda, Amazon Kinesis, Amazon S3, Amazon Elasticsearch Service, Amazon DynamoDB, HDFS, Amazon EMR, Amazon EC2, and other technologies to deliver this cloud-native platform across the US and soon the world. This session describes the challenges we faced, architecture considerations, how we gained confidence for a successful production roll-out, and the behind-the-scenes lessons we learned.
A serverless architecture for real-time data processing eliminates the need to provision and manage the servers required to process files or streaming data in real time. In this session, we cover the fundamentals of using AWS Lambda to process data in real time from push sources such as AWS IoT and pull sources such as Amazon DynamoDB Streams or Amazon Kinesis. We also discuss best practices and take a deep dive into AWS Lambda real-time stream processing.
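As a rough illustration of the pull model described above, here is a minimal Python sketch of a Lambda handler consuming a batch of Kinesis records. The event shape mirrors what Lambda passes for a Kinesis event source (record payloads arrive base64-encoded); the record contents and the counting logic are invented for the example:

```python
import base64
import json

def handler(event, context=None):
    """Minimal sketch of a Lambda handler for a Kinesis event source.
    Decodes each base64 payload, parses the JSON body, and counts records."""
    processed = 0
    for record in event.get("Records", []):
        payload = base64.b64decode(record["kinesis"]["data"])
        data = json.loads(payload)  # a real handler would act on `data` here
        processed += 1
    return {"processed": processed}

# A synthetic event in the shape Lambda passes for a Kinesis stream:
event = {
    "Records": [
        {"kinesis": {"data": base64.b64encode(json.dumps({"id": i}).encode()).decode()}}
        for i in range(3)
    ]
}
print(handler(event))  # -> {'processed': 3}
```

In a real deployment the batch size, starting position, and error handling are configured on the event source mapping rather than in the handler itself.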
Introduction to Cloud Computing with Amazon Web Services-ASEAN Workshop Serie...Amazon Web Services
The AWS Workshop Series Online is a series of live webinars designed for IT professionals who are looking to leverage the AWS Cloud to build and transform their business, are new to the AWS Cloud or looking to further expand their skills and expertise. In the first of this series, we will cover 'Introduction to Cloud Computing with Amazon Web Services'.
Modern data architectures for real time analytics and engagementAmazon Web Services
The AWS Workshop Series Online is a series of live webinars designed for IT professionals who are looking to leverage the AWS Cloud to build and transform their business, are new to the AWS Cloud, or are looking to further expand their skills and expertise. In this series, we will cover 'Modern Data Architectures for Real-time Analytics and Engagement'.
NEW LAUNCH! Delivering Powerful Graphics-Intensive Applications from the AWS ...Amazon Web Services
AWS provides unprecedented computational power for graphics-intensive applications in areas such as design, engineering simulations, and 3D content rendering. Together, Amazon EC2 Elastic GPUs and Amazon AppStream provide the capabilities necessary for end users to access and run these applications. In this session, you learn more about Elastic GPUs and Amazon AppStream, and how you can run graphics-intensive applications on AWS. You also hear from ANSYS, a leader in engineering simulation software, and why they are moving the ANSYS Enterprise Cloud to Elastic GPUs and Amazon AppStream to deliver a better experience for customers.
AWS re:Invent 2016 was AWS’ largest event yet, with over 32,000 attendees, 400 breakout sessions, and two keynotes full of new product announcements. In this talk, we’ll explore the core themes of AWS re:Invent 2016, such as serverless and artificial intelligence. We will also drill down into several of the services and features unveiled, including AWS Batch, AWS Shield, Amazon Aurora with PostgreSQL compatibility, AWS X-Ray, Amazon Polly, Amazon Lex, Amazon Rekognition, and AWS Step Functions. Light appetizers and refreshments will be provided.
AWS re:Invent 2016: Building Big Data Applications with the AWS Big Data Plat...Amazon Web Services
Building big data applications often requires integrating a broad set of technologies to store, process, and analyze the increasing variety, velocity, and volume of data being collected by many organizations. In this session, we show how you can build entire big data applications using a core set of managed services including Amazon S3, Amazon Kinesis, Amazon EMR, Amazon Elasticsearch Service, Amazon Redshift, and Amazon QuickSight.
We walk you through the steps of building and securing a big data application using the AWS Big Data Platform. We also share best practices and common use cases for AWS big data services, including tips to help you choose the best services for your specific application.
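One detail worth understanding when building on Kinesis is how a record's partition key determines its shard. The sketch below is a simplified illustration of that idea; Kinesis actually maps an MD5 hash of the key into per-shard hash-key ranges, so the modulo here is only an approximation of the same principle:

```python
import hashlib

def shard_for(partition_key, num_shards):
    """Simplified sketch of Kinesis-style sharding: hash the partition key
    and map it to one of `num_shards` shards. Records with the same key
    always land on the same shard, preserving per-key ordering."""
    h = int(hashlib.md5(partition_key.encode()).hexdigest(), 16)
    return h % num_shards

# Same key -> same shard; different keys spread across shards.
batch = ["user-1", "user-2", "user-1"]
print([shard_for(k, 4) for k in batch])
```

This is why choosing a high-cardinality partition key matters: a skewed key distribution concentrates traffic on a few shards and caps throughput.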
The AWS platform has developed rapidly over the past few years through continuous iteration and innovation. In this session we provide a high-level overview of the AWS platform and how customers leverage it to create highly available and scalable infrastructure. This session provides the knowledge required to get started with AWS.
High Availability Application Architectures in Amazon VPC (ARC202) | AWS re:I...Amazon Web Services
Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the Amazon Web Services (AWS) cloud where you can launch AWS resources in a virtual data center that you define. In this session you learn how to leverage the VPC networking constructs to configure a highly available and secure virtual data center on AWS for your application. We cover best practices around choosing an IP range for your VPC, creating subnets, configuring routing, securing your VPC, establishing VPN connectivity, and much more. The session culminates in creating a highly available web application stack inside of VPC and testing its availability with Chaos Monkey.
AWS re:Invent 2016: From Resilience to Ubiquity - #NetflixEverywhere Global A...Amazon Web Services
Building and evolving a pervasive, global service requires a multi-disciplined approach that balances requirements with service availability, latency, data replication, compute capacity, and efficiency. In this session, we’ll follow the Netflix journey of failure, innovation, and ubiquity. We'll review the many facets of globalization and then delve deep into the architectural patterns that enable seamless, multi-region traffic management; reliable, fast data propagation; and efficient service infrastructure. The patterns presented will be broadly applicable to internet services with global aspirations.
AWS re:Invent 2016: Deep Dive on AWS Cloud Data Migration Services (ENT210)Amazon Web Services
When evaluating and planning the migration of your data from on premises to the Cloud, you might encounter physical limitations. Amazon offers a suite of tools to help you surmount these limitations by moving data using networks, roads, and technology partners. In this session, we discuss how to move large amounts of data into and out of the Cloud in batches, increments, and streams.
AWS re:Invent 2016: Reduce Your Blast Radius by Using Multiple AWS Accounts P...Amazon Web Services
This session shows you how to reduce your blast radius by using multiple AWS accounts per region and service, which helps limit the impact of a critical event such as a security breach. Using multiple accounts helps you define boundaries and provides blast-radius isolation. Though managing multiple accounts can be difficult, we will present an upcoming AWS solution that will help automate the process of controlling cross-account access by managing roles across multiple accounts.
AWS re:Invent 2016: Design Patterns for High Availability: Lessons from Amazo...Amazon Web Services
At AWS, the availability of our services is non-negotiable. While building our own services, such as Amazon CloudFront, we learn from and develop our own design patterns for high availability. In this session, we review several of these design patterns, and we show how you can implement the patterns in your own services or applications built on top of AWS using services such as Amazon Kinesis, AWS Elastic Beanstalk, or AWS Lambda.
AWS re:Invent 2016: Become an AWS IAM Policy Ninja in 60 Minutes or Less (SAC...Amazon Web Services
Are you interested in learning how to control access to your AWS resources? Have you ever wondered how to best scope down permissions to achieve least privilege permissions access control? If your answer to these questions is "yes," this session is for you. We take an in-depth look at the AWS Identity and Access Management (IAM) policy language. We start with the basics of the policy language and how to create and attach policies to IAM users, groups, and roles. As we dive deeper, we explore policy variables, conditions, and other tools to help you author least privilege policies. Throughout the session, we cover some common use cases, such as granting a user secure access to an Amazon S3 bucket or to launch an Amazon EC2 instance of a specific type.
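As a concrete illustration of the policy-variable technique the session covers, the sketch below assembles an IAM policy document that grants a user read-only access to their own prefix in an S3 bucket. The bucket name is hypothetical; the policy elements used (Version, Statement, Condition, and the `${aws:username}` variable) are standard IAM constructs:

```python
import json

def s3_read_policy(bucket, prefix="${aws:username}/*"):
    """Sketch of a least-privilege IAM policy: list and read access
    scoped to the calling user's own prefix via a policy variable."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ListOwnPrefix",
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": f"arn:aws:s3:::{bucket}",
                # Restrict listing to the user's own prefix only.
                "Condition": {"StringLike": {"s3:prefix": [prefix]}},
            },
            {
                "Sid": "ReadOwnObjects",
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": f"arn:aws:s3:::{bucket}/{prefix}",
            },
        ],
    }

# "example-bucket" is a placeholder name for illustration.
policy = s3_read_policy("example-bucket")
print(json.dumps(policy, indent=2))
```

Attached to an IAM group, a single policy like this scopes every member to their own folder without per-user policies, which is exactly the least-privilege pattern the session describes.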
High Performance No+SQL for Mission-critical ApplicationsFairCom
This webinar was given by Randal Hoff, VP of Engineering at FairCom in February 2016.
In this webinar, we focus on maximizing high-performance database throughput using NoSQL technology, while still being able to use SQL over the same single instance of NoSQL data. We start with a quick overview of FairCom’s unique approach to NoSQL technology, and how this approach provides not only high performance and high availability, but also allows a nice blending of NoSQL and SQL over the same data. We then finish with some real-world use cases of how this technology has been successfully deployed in performance-sensitive environments.
Analytics & Reporting for Amazon Cloud LogsCloudlytics
A deep dive into the Cloudlytics Reports section, with the following reports in detail & how they can help you with your business use case:
- Geo Tracker Report
- IP Tracker Report
- Timeline Report
- ELB Tracker
- CloudFront Cost Analyzer
- Custom Function
World's best AWS Cloud Log Analytics & Management ToolCloudlytics
Overview of Cloudlytics features, pricing details, offers for AWS Activate customers, AWS Marketplace info & a sneak preview of all the analytics. (The Reports section will be covered in detail in our next presentation.)
DPACC Acceleration Progress and DemonstrationOPNFV
The session provides an update on the DPACC project within OPNFV, with a brief discussion of APIs and implementation progress. It reviews the API definition progress and follows up with a demo highlighting a common application as the vNF running on top of the DPACC-defined layers. The demo highlights the use of both hardware and software acceleration utilizing the DPACC-defined acceleration layers, and shows the progress in optimizing the performance and latency characteristics of a platform to realize the vision of NFV while meeting the stringent requirements imposed by carriers, particularly for certain workloads.
(MBL303) Get Deeper Insights Using Amazon Mobile Analytics | AWS re:Invent 2014Amazon Web Services
Choosing the right mobile analytics solution can help you understand user behavior, engage users, and maximize user lifetime value. After this session, you will understand how you can learn more about your users and their behavior quickly across platforms with just one line of code using Amazon Mobile Analytics.
ARC202 Architecting for High Availability - AWS re: Invent 2012Amazon Web Services
Amazon Web Services (AWS) provides a platform that is ideally suited for building highly available systems, enabling you to build reliable, affordable, fault-tolerant systems that operate with a minimal amount of human interaction. This session covers many of the high-availability and fault-tolerance concepts and features of the various services that you can use to build highly reliable and highly available applications in the AWS Cloud: architectures involving multiple Availability Zones, including EC2 best practices and RDS Multi-AZ deployments; loosely coupled and self-healing systems involving SQS and Auto Scaling; networking best practices for high availability, including Elastic IP addresses, load balancing, and DNS; leveraging services that inherently are built with high-availability and fault tolerance in mind, including S3, Elastic Beanstalk and more.
Cohesive Networks Support Docs: VNS3 Configuration for Amazon VPC Cohesive Networks
Use this VNS3 set up guide to get started in the Amazon Cloud (AWS) VPC public cloud environments.
About VNS3:
VNS3 delivers cloud networking and NFV functionality for virtual and cloud environments. The VNS3 virtual network security appliance includes a router, switch, stateful firewall, VPN support (IPsec and SSL), a protocol redistributor, and extensible NFV optimized for all major cloud providers. VNS3 cloud networks are configured and managed through the VNS3 Manager web-based UI or RESTful API.
VNS3 is available in: Amazon Web Services EC2, Amazon Web Services VPC, Microsoft Azure, CenturyLink Cloud, Google Compute Engine (GCE), Rackspace, IBM SoftLayer, ElasticHosts, Verizon Terremark vCloud Express, InterRoute, Abiquo, OpenStack, Flexiant, Eucalyptus, HPE Helion, VMware (all formats), Citrix, Xen, KVM, and more.
VNS3 supports most IPsec data center solutions, including most models from Cisco Systems*, Juniper, WatchGuard, Dell SONICWALL, Netgear, Fortinet, Barracuda Networks, Check Point*, Zyxel USA, McAfee Retail, Citrix Systems, Hewlett Packard, D-Link, Palo Alto Networks, OpenSwan, pfSense, Vyatta, and any IPsec device that supports IKEv1 or IKEv2; AES256, AES128, or 3DES; SHA1 or MD5; and, most importantly, NAT-Traversal standards.
Building a Multi-Region Cluster at Target (Aaron Ploetz, Target) | Cassandra ...DataStax
Lessons learned from a year spent building a Cassandra cluster over multiple regions, data centers, and providers. We will discuss our successes and learnings on replication, operations, and application development.
About the Speaker
Aaron Ploetz Lead Technical Architect, Target
Aaron is a Lead Technical Architect for Target, where he coaches development teams on modeling and building applications for Cassandra. He is active in the Cassandra tags on StackOverflow, and has also contributed patches to cqlsh. Aaron holds a B.S. in Management/Computer Systems from the University of Wisconsin-Whitewater, a M.S. in Software Engineering and Database Technologies from Regis University, and is a 2x DataStax MVP for Apache Cassandra.
Increase Your Mission Critical Application Performance without Breaking the B...DataCore Software
In virtualized environments, mission critical applications get bogged down, leading to user complaints. Root cause analysis has shown that inadequate storage performance is the culprit. But fixing these performance issues can cost 5 to 7 times as much as your current storage.
In this presentation, learn about a revolutionary solution that combines Skyera’s advanced All Flash Arrays (AFA) with DataCore’s innovative Software-defined Storage platform. This solution will easily accelerate your SQL Servers at a price that fits your budget.
In this session, our automation engineers will talk about automation internals and demonstrate some new features like importing existing clusters into MMS, converting between storage engines, managing authentication, and interacting with our automation capabilities using the MMS API.
Triangle DevOps Meetup covering Netflix open source, cloud architecture, and what Andrew did in his first year working as a senior software engineer in the cloud platform group.
AWS Serverless Community Day Keynote and Vendia Launch 6-26-2020Tim Wagner
Hear Tim Wagner, CEO and co-founder of Vendia and "Father of Serverless" talk about the evolution of Serverless over the years and how Vendia is taking it into a cross-cloud future.
Presentation of the talk given by Carmine Spagnuolo (Postdoctoral Research Fellow, Università degli Studi di Salerno / ACT OR), titled "Technology insights: Decision Science Platform", at the Decision Science Forum 2019, the most important Italian event on Decision Science.
This is a short introduction to microservices. You will find the differences between microservices and monolithic applications, the pros and cons of microservices, and the business and technical challenges you may face while implementing them.
For the Computer Measurement Group workshop in San Diego November 2013. Also presented to a student class at UC Santa Barbara. What is Cloud Native. Capacity and Performance benchmarks. Cost Optimization Techniques - content co-developed with Jinesh Varia of AWS.
RightScale Roadtrip - Accelerate to CloudRightScale
The Accelerate to Cloud keynote will help you understand the current state of cloud adoption, identify the business value for your organization, and provide you a framework to plot your course to cloud adoption.
Following simple patterns of good application design can allow you to scale your application for your customers easily. This presentation dives into the 12-factor application design and demonstrates how it applies to containers and deployments on Amazon ECS and Fargate. We'll take a look at tooling that can simplify your workflow and help you adopt the principles of the 12-factor application.
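One of the twelve factors, storing config in the environment, maps directly to ECS and Fargate, where the task definition injects environment variables into the container so the same image runs unchanged in every stage. A minimal sketch (the variable names are illustrative, not prescribed by the methodology):

```python
import os

def load_config(env=os.environ):
    """12-factor principle III sketch: all deploy-specific config comes
    from the environment, with safe local-development defaults."""
    return {
        "database_url": env.get("DATABASE_URL", "sqlite:///dev.db"),
        "log_level": env.get("LOG_LEVEL", "INFO"),
        "port": int(env.get("PORT", "8080")),
    }

# In ECS/Fargate these values would come from the task definition's
# `environment` or `secrets` sections rather than a baked-in config file.
print(load_config({"PORT": "9000"}))
```

Keeping config out of the image is what makes the "deploy the same artifact to dev, staging, and prod" workflow possible.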
.Net Microservices with Event Sourcing, CQRS, Docker and... Windows Server 20...Javier García Magna
Good technical practices you can follow with (micro)services but that can be applied to almost anything: discovery (microphone/consul), resilience (Polly), composition, security (JWT/OAuth2)... Then an example with a CQRS application, and how Docker can be used in Windows Server 2016. Lastly, a brief summary of what Service Fabric is and its programming models.
Leapfrog into Serverless - a Deloitte-Amtrak Case Study | Serverless Confere...Gary Arora
This talk was delivered at the Serverless Conference in New York City in 2017. Deloitte and Amtrak built a serverless cloud-native solution on AWS for a real-time operational datastore and a near-real-time reporting data mart that modernized Amtrak's legacy systems and applications. With serverless solutions, we were able to leapfrog over several rungs of computing evolution.
Gary Arora is a Cloud Solutions Architect at Deloitte Consulting, specializing in Azure & AWS.
Cloud-Native Data: What data questions to ask when building cloud-native appsVMware Tanzu
While a number of patterns and architectural guidelines exist for cloud-native applications, a discussion about data often leads to more questions than answers. For example, what are some of the typical data problems encountered, why are they different, and how can they be overcome?
Join Prasad Radhakrishnan from Pivotal and Dave Nielsen from Redis Labs as they discuss:
- Expectations and requirements of cloud-native data
- Common faux pas and strategies on how you can avoid them
Presenters:
Prasad Radhakrishnan, Platform Architecture for Data at Pivotal
Dave Nielsen, Head of Ecosystem Programs at Redis Labs
Come costruire servizi di Forecasting sfruttando algoritmi di ML e deep learn...Amazon Web Services
Forecasting is an important process for many companies and is used in many areas to try to accurately predict the growth and distribution of a product, the resources needed on production lines, financial projections, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session we will show how to pre-process data that contains a temporal component and then use an algorithm that, based on the type of data analyzed, produces an accurate forecast.
Big Data for Startups: creating Big Data applications in Server... modeAmazon Web Services
The variety and quantity of data created every day is accelerating ever faster and represents a unique opportunity to innovate and create new startups.
However, managing large amounts of data can seem complex: creating large-scale Big Data clusters looks like an investment accessible only to established companies. But the elasticity of the cloud and, in particular, serverless services allow us to break through these limits.
Let's see, then, how to develop Big Data applications rapidly, without worrying about infrastructure, dedicating all our resources to developing our ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session we will present the main features of the service and how to deploy your application in a few steps.
Twenty years ago, Amazon went through a radical transformation aimed at increasing the pace of innovation. During this period we learned how changing our approach to application development allowed us to greatly increase agility and release velocity and, ultimately, enabled us to create more reliable and scalable applications. In this session we will explain how we define modern applications and how building modern apps affects not only application architecture, but also organizational structure, development release pipelines, and even the operating model. We will also describe common approaches to modernization, including the approach used by Amazon.com itself.
How to spend up to 90% less with containers and Spot Instances Amazon Web Services
The use of containers is growing continuously.
When properly designed, container-based applications are very often stateless and flexible.
The AWS services ECS, EKS, and Kubernetes on EC2 can take advantage of Spot Instances, leading to an average saving of 70% compared to On-Demand Instances. In this session we will discover the characteristics of Spot Instances and how they can easily be used on AWS. We will also learn how Spreaker uses Spot Instances to run applications of various types, in production, at a fraction of the on-demand cost!
In recent months, many customers have been asking us how to monetise Open APIs, simplify Fintech integrations, and accelerate adoption of various Open Banking business models. Therefore, AWS and FinConecta would like to invite you to an Open Finance marketplace presentation on October 20th.
Event Agenda:
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Make your startup's offering unique in the market with Machine Lea...Amazon Web Services
To create value and build a differentiated, recognizable offering, successful startups know how to combine established technologies with innovative components created ad hoc.
AWS provides ready-to-use services and, at the same time, lets you customize and create the differentiating elements of your own offering.
Focusing on Machine Learning technologies, we will see how to select the artificial intelligence services offered by AWS and, also through a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: automate the management and deployments of...Amazon Web Services
With the traditional approach to IT, implementing DevOps techniques was difficult for many years; they often involved manual activities that occasionally led to application downtime, interrupting user operations. With the advent of the cloud, DevOps techniques are now within everyone's reach at low cost for any kind of workload, guaranteeing greater system reliability and delivering significant improvements in business continuity.
AWS provides AWS OpsWorks as a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances using Chef and Puppet.
Discover how to take advantage of AWS OpsWorks to guarantee the reliability of your application installed on EC2 instances.
Microsoft Active Directory on AWS to support your Windows WorkloadsAmazon Web Services
Want to learn about the options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support group policy management, authentication, and authorization. In this session, we will discuss options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and deploying Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis using artificial intelligence techniques is evolving and being refined at a rapid pace. In this webinar we will explore the possibilities offered by AWS services for applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are organizing a free virtual event next Wednesday, October 14th, from 12:00 to 13:00, dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in cloud environments based on VMware vSphere® and access a broad range of AWS services, taking full advantage of the AWS cloud while protecting your existing VMware investments.
Build your first serverless ledger-based app with QLDB and NodeJSAmazon Web Services
Many companies today build applications with ledger-type functionality, for example to verify the history of credits and debits in banking transactions, or to track the flow of their products through the supply chain.
At the base of these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable transaction log, but they are complex and costly tools to manage.
Amazon QLDB eliminates the need to build custom, complex systems by providing a fully managed serverless ledger database.
In this session we will discover how to build a complete serverless application that uses the features of QLDB.
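The core idea behind a ledger database, an append-only log in which each entry commits to the hash of the previous one so any tampering breaks the chain, can be sketched in a few lines of Python. This toy example is purely conceptual and does not reflect QLDB's actual data model or API:

```python
import hashlib
import json

class MiniLedger:
    """Toy hash-chained, append-only log illustrating verifiability."""

    def __init__(self):
        self.entries = []

    def append(self, record):
        # Each entry's hash covers the previous entry's hash plus its body.
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = json.dumps(record, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        self.entries.append({"record": record, "prev": prev_hash, "hash": entry_hash})

    def verify(self):
        # Recompute every hash; any edit to any past record breaks the chain.
        prev = "0" * 64
        for e in self.entries:
            body = json.dumps(e["record"], sort_keys=True)
            expected = hashlib.sha256((prev + body).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

ledger = MiniLedger()
ledger.append({"account": "A", "debit": 100})
ledger.append({"account": "B", "credit": 100})
print(ledger.verify())  # -> True
```

QLDB manages this chaining (and the cryptographic digests used to prove it) for you, which is what removes the burden of building such a system by hand.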
With the rise of microservice architectures and rich mobile and web applications, APIs are more important than ever for offering end users a great user experience. In this session we will learn how to address modern API design challenges with GraphQL, an open-source API query language used by Facebook, Amazon, and others, and how to use AWS AppSync, a managed serverless GraphQL service on AWS. We will dive into several scenarios, understanding how AppSync can help solve these use cases by creating modern APIs with real-time and offline data-update capabilities.
We will also learn how Sky Italia uses AWS AppSync to deliver real-time sports updates to users of its web portal.
Oracle Database and VMware Cloud™ on AWS: debunking the mythsAmazon Web Services
Many organizations reap the benefits of the cloud by migrating their Oracle workloads, securing significant gains in agility and cost efficiency.
Migrating these workloads can create complexity during application modernization and refactoring, on top of which performance risks can be introduced when moving applications out of on-premises data centers.
In these slides, AWS and VMware experts present simple, practical tips to facilitate and simplify the migration of Oracle workloads while accelerating the transformation to the cloud; they dive into the architecture and demonstrate how to take full advantage of VMware Cloud™ on AWS.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable container management service that simplifies the management of Docker containers through an orchestration layer controlling deployment and lifecycle. In this session we will present the main features of the service, reference architectures for different workloads, and the simple steps needed to quickly migrate one or more of your containers.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply applying machine learning to just any symbolic structure is not sufficient to really harvest the gains of NeSy. Those gains come only when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. The constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has left gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chains and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerabilities and security breaches. This needs to be achieved with existing toolchains and without extensive rework of the delivery processes. This talk presents strategies and techniques for providing visibility into the true risk of existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work, along with a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Search and Society: Reimagining Information Access for Radical FuturesBhaskar Mitra
The field of Information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies needs to be explicitly articulated, and we need to develop theories of change in context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
3. What to expect from the session
• Architecture Background
• AWS global infrastructure
• Single vs Multi-Region?
• Multi-Region AWS Services
• Case Study: Sony’s Multi-Region Active/Active Journey
• Design approach
• Lessons learned
• Migrating without downtime
11. Single region high-availability approach
• Leverage multiple Availability Zones (AZs)
(Diagram: the us-east-1 region spanning Availability Zones A, B, and C)
12. Reminder: Region-wide AWS services
• Amazon Simple Storage Service (Amazon S3)
• Amazon Elastic File System (Amazon EFS)
• Amazon Relational Database Services (RDS)
• Amazon DynamoDB
• And many more…
14. Good Reasons for Multi-Region
• Lower latency to a subset of customers
• Legal and regulatory compliance (i.e. data
sovereignty)
• Satisfy disaster recovery requirements
16. Multi-Region services
• Amazon Route 53 (Managed DNS)
• S3 with cross-region replication
• RDS multi-region database replication
• And many more…
• EBS snapshots
• AMI
17. Amazon Route 53
• Health checks
• Send traffic to healthy infrastructure
• Latency-based routing
• Geo DNS
• Weighted Round Robin
• Global footprint via 60+ POPs
• Supports AWS and non-AWS resources
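To make weighted round robin concrete, the sketch below builds the kind of change batch Route 53's change-resource-record-sets API accepts. The domain, ELB DNS names, and set identifiers are hypothetical; this is an illustration of the payload shape, not the deck's actual setup.

```python
def weighted_record(name, elb_dns, set_id, weight, ttl=60):
    """Build one weighted-routing record set for a Route 53 change batch."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": name,
            "Type": "CNAME",
            "SetIdentifier": set_id,  # distinguishes records sharing one name
            "Weight": weight,         # relative share of traffic
            "TTL": ttl,
            "ResourceRecords": [{"Value": elb_dns}],
        },
    }

# Route ~90% of traffic to us-east-1 and ~10% to us-west-2 (names are made up).
change_batch = {
    "Comment": "Percent-based routing across two regions",
    "Changes": [
        weighted_record("store.example.com", "elb-use1.example.com", "use1", 90),
        weighted_record("store.example.com", "elb-usw2.example.com", "usw2", 10),
    ],
}
```

Adjusting the two weights (say, to 50/50) is how percent-based traffic shifts between regions can be staged.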
19. S3 – cross-region replication
Automated, fast, and reliable asynchronous replication of data across AWS regions
• Only replicates new PUTs. Once S3 is configured, all new uploads into a source bucket will be replicated
• Entire bucket or prefix based
• 1:1 replication between any 2 regions / storage classes
• Transition S3 ownership from primary account to sub-account
Use cases:
• Compliance—store data hundreds of miles apart
• Lower latency—distribute data to regional customers
• Security—create remote replicas managed by separate AWS accounts
(Diagram: replication from a source bucket in Virginia to a destination bucket in Oregon)
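Cross-region replication is enabled per bucket with a replication configuration. A minimal sketch of that payload, shaped like what S3's put-bucket-replication API accepts, is below; the role ARN and bucket names are invented for illustration.

```python
# Hypothetical replication configuration; role ARN and bucket names are made up.
replication_config = {
    "Role": "arn:aws:iam::123456789012:role/s3-crr-role",
    "Rules": [{
        "ID": "replicate-new-puts",
        "Prefix": "",          # empty prefix = replicate the entire bucket
        "Status": "Enabled",   # only objects PUT after this applies are replicated
        "Destination": {
            "Bucket": "arn:aws:s3:::store-backup-oregon",
            "StorageClass": "STANDARD_IA",  # storage class may differ per region
        },
    }],
}
```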
20. RDS cross-region replication
• Move data closer to customers
• Satisfy disaster recovery requirements
• Relieve pressure on database master
• Promote read-replica to master
• AWS managed service
24. What to expect from the session
• Architecture Background
• AWS global infrastructure
• Single vs Multi-Region?
• Enabling AWS services
• Case Study: Sony Multi-Region Active/Active
• Design approach
• Lessons learned
• Migrating without downtime
26. Who is talking?
Alexander Filipchik (PSN: LaserToy)
Principal Software Engineer
at Sony Interactive Entertainment
Dustin Pham
Principal Software Engineer
at Sony Interactive Entertainment
28. Small team, large responsibility
• Service team ran like a startup
• Fewer than 10 core people working on new PS3 store services
• PSN’s user base was already in the hundreds of millions of users
• Relied on quick iterations of architecture on AWS
34. Delivered new store
• Great job, now onto the PS4
• PS4 launch – 1 million users at once on Day 1, Hour 1
• Designing for many different use cases at scale
36. Next step: make highly available
• Highly available for us: multiregion active/active
• Raising key questions:
• How does one move a large set of critical apps with hundreds of terabytes of live data?
• How did we architect every aspect to allow for multiregional, active-active?
• How do we turn on active-active without user impact?
• User impact includes hardware (PS3/PS4/etc.) and game partners!
• Where do we even begin?
38. Applications
• First question to answer: What does it mean to be multiregional?
• Different people had different answers:
• Active/stand-by vs. active/active
• Full data replication vs. partial
• Automatic failover vs. manual
• Etc.
40. Agreement
• “You should be able to lose 1 of anything” approach.
• Which means we should be able to survive the loss of any one of the following without visible impact:
• 1 server
• 1 Availability Zone
• 1 region
41. Starting with uncertainty
• Multiple macro and micro services
• Stateless and stateful services
• They depend on multiple technologies
• Some are multiregional and some are not
• Documentation was, as always, out of date
44. Stages of grief
• Denial – can’t be true, let’s check again
• Anger – we told everyone to be active/active ready!!!
• Bargaining – active/stand-by?
• Depression – we can’t do it
• Acceptance – let’s work to fix it, we have 6 months…
45. What it tells us
• We can’t just put things in two regions and expect them to work
• We will need to do some work to:
• Migrate services to technology which is multiregional by design
• Somehow make underlying technology multiregional
46. Scheduling/optimization problem
• There is work that should be done on both the apps and infrastructure side
• We need to schedule it so we can get results faster and minimize waits
• And we wanted a machine to help us
47. The world’s leading graph database
That can store a graph of 30B nodes
Here to help us deal with our problem
48. Why Neo4j
• It’s a graph engine, and we are dealing with a graph
• Query language that is very powerful
• Can be populated programmatically
• Can show us something we didn’t expect
49. How to use it?
• Model
• Identify nodes and relations
• Tracing
• Code analyzer
• Talking to people
• Generate the graph
• Run queries
50. Model example
• Nodes
• Users
• Technology (Cassandra, Redis)
• multiregional: true/false
• Service (applications)
• stateless: true/false
• Edges
• Usage patterns (read, write)
56. What to do next
• Validate multiregional technologies do actually work
• Figure out what to do with non-multiregional technologies
• Move services in the following order:
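The dependency-graph ordering above can be sketched with a stdlib-only pass that repeatedly peels off services whose dependencies are already multiregional. The service and technology names below are hypothetical, not PSN's actual graph.

```python
# Hypothetical dependency graph: each service lists what it depends on.
deps = {
    "store-api":   ["cassandra", "session-svc"],
    "session-svc": ["redis"],
    "batch-jobs":  ["mysql"],
}
multiregional = {"cassandra": True, "redis": True, "mysql": False}

def migration_order(deps, multiregional):
    """Peel off services whose dependencies are all multiregional (or already
    migrated); whatever remains is blocked on non-multiregional technology."""
    ready, migrated, remaining = [], set(), dict(deps)
    while remaining:
        progressed = False
        for svc, ds in list(remaining.items()):
            if all(multiregional.get(d, False) or d in migrated for d in ds):
                ready.append(svc)
                migrated.add(svc)
                del remaining[svc]
                progressed = True
        if not progressed:
            break  # everything left is blocked
    return ready, sorted(remaining)

order, blocked = migration_order(deps, multiregional)
# order   -> ['session-svc', 'store-api']
# blocked -> ['batch-jobs'] (needs redesign or another replication path)
```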
57. Validating our main DB (Cassandra)
A lot of unknowns:
• Will it work?
• Will performance degrade?
• How eventual is multiregional eventual consistency?
• Will we hit any roadblocks?
• Well, how many roadblocks will we hit?
58. What did we know?
Netflix is doing it on AWS, and they actually tested it:
They wrote 1M records in one region of a multiregion cluster
500 ms later, a read was initiated in the other regions
All records were successfully read
59. Well…
Some questions to answer:
• Should we just trust Netflix’s results, replicate the data, and see what happens?
• Is their experiment applicable to our situation?
• Can we do better?
(Meme: “How to get an engineer’s attention”: break something, free coffee, say “there’s gotta be a better way to do this”)
60. Cassandra validation strategy
• Use production load/data
• Simulate disruptions
• Track replication latencies
• Track lost mutations
• Cassandra modifications were required
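The latency and lost-mutation tracking can be sketched as below, with in-memory dicts standing in for what each region's cluster observed; the mutation ids and timestamps are illustrative, not real measurements.

```python
def check_replication(source_writes, dest_reads):
    """source_writes/dest_reads map mutation id -> timestamp observed in each
    region; returns per-mutation replication latency and ids never replicated."""
    latencies, lost = {}, []
    for mid, t_written in source_writes.items():
        t_read = dest_reads.get(mid)
        if t_read is None:
            lost.append(mid)                   # candidate lost mutation
        else:
            latencies[mid] = t_read - t_written
    return latencies, sorted(lost)

# Illustrative observations from the two regions (seconds):
writes = {"m1": 0.0, "m2": 0.1, "m3": 0.2}
reads  = {"m1": 0.4, "m2": 0.9}                # m3 never arrived
lat, lost = check_replication(writes, reads)
# lost -> ['m3']
```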
65. Things that are not multiregional by design
We gave teams 2 options:
• Redesign if it is critical to the user’s experience
• If not in the critical path (batch jobs):
• active/passive
• master/slave
• Use Kafka as a replication backbone (recommended)
73. Phase 1: Infrastructure key points
• Building infrastructure in a new region must be fully automated (Infrastructure as Code)
• Regional communication decisions
• VPNs?
• Over Internet?
• Do infrastructures have to match exactly?
• 1st region evolved organically
• 2nd region should be blueprint for all new region DCs
75. Phase 2: Data option 1 replication over VPN
(Diagram: Region 2 with a public subnet, ELBs, inbound tier, outbound tier, and data tier, connected to Region 1 over a VPN)
76. Phase 2: Data option 1 replication over VPN
• Pros
• Setting up a VPN with the current network architecture would be easier on the data tier
• Secure
• Managing data node intercommunication is straightforward and has lower operational overhead
• Cons
• Limit on throughput
• Data set is large and can quickly saturate the VPN
• Scaling more applications in the future will be complicated!
77. Phase 2: Data option 2 replication over ENIs with public IPs
(Diagram: Region 2 with private and public subnets, ELBs, inbound tier, outbound tier, and data tier, replicating to Region 1 over SSL)
78. Phase 2: Data option 2 replication over ENIs with public IPs
• Pros
• Not network constrained
• Able to add more applications + data without needing to build new infrastructure to support them
• Cons
• Operationally, more orchestration (Cassandra, for example, needs to know other nodes’ Elastic IPs)
• Internode data transfer security is a must
80. Phase 3: App tier + cache strategy
• Applications communicate within a region only
• Applications do not call another region’s databases, caches, or applications
• Isolation creates predictable failure cases and clearly defines failure domains
• Monitoring and alerting are greatly simplified in this model
82. Phase 4: Client routing
• Predictable “sticky” routing via georouting to avoid user bounce
• Data replication manages cross-region state
• Allows for routing to stateless services
• Ability to do %-based routing to manage different failure scenarios
84. Software design for multiregion deployments
• Typical software architecture
(Diagram: layered architecture with APIs, Business Logic, and Data Access layers, plus cross-cutting Config)
85. Software design for multiregion deployments
(Diagram: identical application stacks deployed in Region 1 and Region 2)
Remember when we mentioned having application-tier call patterns isolated in a region? How do we achieve this simply?
86. Software configuration approaches
• An application config to connect to a database could look like:
cassandra.seeds=10.0.1.16,10.0.1.17
• A naïve approach would be for an application to have multiple configs per deployable depending on its region:
cassandra.seeds.region1=10.0.1.16,10.0.1.17
cassandra.seeds.region2=10.0.2.16,10.0.2.17
• This, of course, results in an app config management nightmare, especially now with 2 regions
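One way to picture the fix: ship a single config with short names and let a region-aware resolver map them to local IPs. This is a minimal sketch; the registry contents, region names, and function name are all made up for illustration.

```python
# Hypothetical region-local registry; in practice this would be a small DB or
# service populated by the infrastructure, not a hard-coded dict.
REGISTRY = {
    ("region1", "cass-seed1"): "10.0.1.16",
    ("region1", "cass-seed2"): "10.0.1.17",
    ("region2", "cass-seed1"): "10.0.2.16",
    ("region2", "cass-seed2"): "10.0.2.17",
}

def resolve_seeds(region, config_value):
    """Turn a single config line like 'cass-seed1,cass-seed2' into the IPs
    that are correct for whichever region the app is running in."""
    return [REGISTRY[(region, name.strip())] for name in config_value.split(",")]

# The same deployable, with the same config line, gets region-local seeds:
seeds = resolve_seeds("region2", "cass-seed1,cass-seed2")
# seeds -> ['10.0.2.16', '10.0.2.17']
```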
87. Software configuration approaches
• What if we implemented a basic “central” way of configuration?
(Diagram: in each region, the app config says cassandra.seeds=cass-seed1,cass-seed2; the app asks a region-local DB “Where are my C* seeds?”, and cass-seed1 resolves to that region’s IPs x.x.x.x)
88. Simplified software configuration (context)
• Context is made available to the application, and contains:
• Data center/region
• Endpoint short-name resolution
• Environment (Dev, QA, Prod, A/B)
• Database connection details
• Context is the responsibility of the infrastructure itself and is provided through build automation, AWS tagging, etc.
• The app is responsible for behaving correctly off of its context
89. Infrastructure as code
• New regions must be built through automation
• Specification of services to Terraform
• An internal tool and DSL was built to manage domain-specific needs
• Example:
• Specify that an app requires Cassandra and SNS
• Generates Terraform to create security groups for ports 9160 and 7199-7999, build SNS, build the ELB for the app, etc.
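The generation step might look roughly like this sketch, which emits a Terraform security-group snippet for the Cassandra ports named above. The HCL shape and the function name are illustrative, not the internal tool's actual DSL.

```python
def cassandra_sg_hcl(app):
    """Emit a Terraform security-group snippet opening the Cassandra ports
    mentioned on the slide (9160 and 7199-7999)."""
    rules = [(9160, 9160), (7199, 7999)]
    blocks = "\n".join(
        f'  ingress {{\n    from_port = {low}\n    to_port   = {high}\n'
        f'    protocol  = "tcp"\n  }}'
        for low, high in rules
    )
    return f'resource "aws_security_group" "{app}_cassandra" {{\n{blocks}\n}}'

hcl = cassandra_sg_hcl("storeapp")
```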
90. Database automation
• Ansible runs assist in building Cassandra in the public subnet and associating EIPs to every new node
• Manages network rules (whitelisting)
• Manages certificates and SSL
(Diagram: Region 2 with private and public subnets, ELBs, and an outbound tier, replicating over SSL)
92. Monitoring through proper tagging
• Part of the “context” applications are aware of is the region
• Adds “region” to any app logs
• Region tags are then added to metrics and can be surfaced in Grafana or any monitoring of your choice
• Cross-regional monitoring of key metrics and alerting:
• Data replication (hints in Cassandra, seconds behind master in MySQL, etc.)
• Data in/out
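Injecting the region from context into every log line can be done with a stock logging.LoggerAdapter. A minimal, self-contained sketch follows; the logger name, region, and message are illustrative.

```python
import io
import logging

# Capture output in a StringIO so the example is self-contained.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("[%(region)s] %(message)s"))

log = logging.getLogger("store")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.propagate = False

# LoggerAdapter injects the region without changing individual call sites.
regional_log = logging.LoggerAdapter(log, extra={"region": "us-west-2"})
regional_log.info("cassandra hints pending: %d", 0)

line = stream.getvalue().strip()
# line -> '[us-west-2] cassandra hints pending: 0'
```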
93. Putting it all together
(Diagram: Region 1 and Region 2: create infrastructure, replicate data, then shift DNS)
95. Lessons learned
• Data synchronization is super critical, so build the dependency map based on the data technologies first.
• Always run your own benchmarking.
• Do not allow legacy to control the other region’s design. Find a healthy transition and balance between old and new.
• Applications must be context-driven.
• Depending on your data load, cross-regional VPNs may not make sense.