What is this serverless development thing all about? Which languages can I use? How do I do things like authentication, saving to a database, or sending notifications? Does it scale? In this session I will try to answer all of these questions, and a few more, with a short demo where I build a very simple app and deploy it to the cloud without worrying about managing infrastructure. Talk first given at AlcarriaConf 2021.
In this session, we will introduce Amazon Redshift, a new petabyte-scale data warehouse service. We'll walk through the basics of the Redshift architecture, launch a new cluster, and run SQL queries across a large-scale, public dataset. After demonstrating how easy it is to get started with Redshift, we will show how to visualize and query large-scale datasets, running queries, reports, and analytics against millions of rows of records in just a few seconds.
Real-time data analytics with Apache Flink and Apache Beam - javier ramirez
Working in real time with fast-moving data is not trivial, especially at high data volumes. Apache Flink and Apache Beam are designed specifically for that use case. In this talk I will cover the challenges of real-time analytics, the architecture of Apache Flink, what Apache Beam is, and how companies use these tools for everything from trivial processes to handling billions of events per day with millisecond latencies. And of course, there will be a demo :)
Streaming analytics on Google Cloud Platform, by Javier Ramirez, teowaki - javier ramirez
Do you think you can write a system to get data from sensors across the world, do real time analytics, and display the data on a dashboard in under 100 lines of code? Would you like to add some monitoring and autoscaling too? And what about serverless? In this talk I'll show you all the technologies GCP offers to build such a system reliably and at scale.
Amazon EC2 provides you with the flexibility to cost optimize your computing portfolio through purchasing models that fit your business needs. With the flexibility of mix-and-match purchasing models, you can grow your compute capacity and throughput and enable new types of cloud computing applications with the lowest TCO. In this session, we will explore combining pay-as-you-go (On-Demand), reserve ahead of time for discounts (Reserved), and high-discount spare capacity (Spot) purchasing models to optimize costs while maintaining high performance and availability for your applications. Common application examples will be used to demonstrate how to best combine EC2’s purchasing models. You will leave the session with best practices you can immediately apply to your application portfolio.
Amazon Kinesis provides services for you to work with streaming data on AWS. Learn how to load streaming data continuously and cost-effectively to Amazon S3 and Amazon Redshift using Amazon Kinesis Firehose without writing custom stream processing code. Get an introduction to building custom stream processing applications with Amazon Kinesis Streams for specialised needs.
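As a rough illustration of the loading path described above (not code from the session), here is a minimal boto3 sketch that pushes a record into a Kinesis Firehose delivery stream; the stream name and payload are placeholder assumptions, and the delivery stream itself determines whether records land in Amazon S3 or Amazon Redshift.

```python
import json
import boto3

firehose = boto3.client("firehose")

# "my-delivery-stream" is a placeholder; the delivery stream (and its S3/Redshift
# destination) is assumed to have been created beforehand.
record = {"user_id": "42", "action": "click", "ts": "2016-01-01T00:00:00Z"}

firehose.put_record(
    DeliveryStreamName="my-delivery-stream",
    Record={"Data": (json.dumps(record) + "\n").encode("utf-8")},
)
```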
In this session, we'll review the features and architecture of the new AWS Data Pipeline service and explain how you can use it to better manage your data-driven workloads. We'll then go over a few examples of setting up and provisioning a pipeline in the system.
BDA403 How Netflix Monitors Applications in Real-time with Amazon Kinesis - Amazon Web Services
Thousands of services work in concert to deliver millions of hours of video streams to Netflix customers every day. These applications vary in size, function, and technology, but they all make use of the Netflix network to communicate. Understanding the interactions between these services is a daunting challenge both because of the sheer volume of traffic and the dynamic nature of deployments. In this talk, we’ll first discuss why Netflix chose Amazon Kinesis Streams over other streaming data solutions like Kafka to address these challenges at scale. We’ll then dive deep into how Netflix uses Amazon Kinesis Streams to enrich network traffic logs and identify usage patterns in real time. Lastly, we will cover how Netflix uses this system to build comprehensive dependency maps, increase network efficiency, and improve failure resiliency. From this talk, you’ll take away techniques and processes that you can apply to your large-scale networks and derive real-time, actionable insights.
AWS re:Invent 2016: Event Handling at Scale: Designing an Auditable Ingestion... - Amazon Web Services
How does McGraw-Hill Education use the AWS platform to scale and reliably receive 10,000 learning events per second? How do we provide near-real-time reporting and event-driven analytics for hundreds of thousands of concurrent learners in a reliable, secure, and auditable manner that is cost effective? MHE designed and implemented a robust solution that integrates Amazon API Gateway, AWS Lambda, Amazon Kinesis, Amazon S3, Amazon Elasticsearch Service, Amazon DynamoDB, HDFS, Amazon EMR, Amazon EC2, and other technologies to deliver this cloud-native platform across the US and soon the world. This session describes the challenges we faced, architecture considerations, how we gained confidence for a successful production roll-out, and the behind-the-scenes lessons we learned.
AWS March 2016 Webinar Series - Building Big Data Solutions with Amazon EMR a... - Amazon Web Services
Building big data applications often requires integrating a broad set of technologies to store, process, and analyze the increasing variety, velocity, and volume of data being collected by many organizations.
Using a combination of Amazon EMR, a managed Hadoop framework, and Amazon Redshift, a managed petabyte-scale data warehouse, organizations can effectively address many of these requirements.
In this webinar, we will show how organizations are using Amazon EMR and Amazon Redshift to build more agile and scalable architectures for big data. We will look into how you can leverage Spark and Presto running on EMR, to address multiple data processing requirements. We will also share best practices and common use cases to integrate EMR and Redshift.
Learning Objectives:
• Best practices for building a big data architecture that includes Amazon EMR and Amazon Redshift
• Understand how to use technologies such as Amazon EMR, Presto and Spark to complement your data warehousing environment
• Learn key use cases for Amazon EMR and Amazon Redshift
Who Should Attend:
• Data architects, Data management professionals, Data warehousing professionals, BI professionals
AWS Lambda Supports Parallelization Factor for Kinesis and DynamoDB Event Sou... - Swapnil Pawar
AWS Lambda now supports Parallelization Factor, a feature that allows you to process one shard of a Kinesis or DynamoDB data stream with more than one Lambda invocation simultaneously. This new feature allows you to build more agile stream processing applications on volatile data traffic.
By default, Lambda invokes a function with one batch of data records from one shard at a time. For a single event source mapping, the maximum number of concurrent Lambda invocations is equal to the number of Kinesis or DynamoDB shards.
Now you can specify the number of concurrent batches that Lambda polls from a shard via a Parallelization Factor from 1 (default) to 10. For example, when Parallelization Factor is set to 2, you can have 200 concurrent Lambda invocations at maximum to process 100 Kinesis data shards. This helps scale up the processing throughput when the data volume is volatile and the IteratorAge is high.
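The concurrency arithmetic above can be made concrete with a small boto3 sketch; the event source mapping UUID is a placeholder for an existing Kinesis or DynamoDB mapping.

```python
import boto3

lambda_client = boto3.client("lambda")

# Placeholder UUID of an existing Kinesis/DynamoDB event source mapping.
MAPPING_UUID = "00000000-0000-0000-0000-000000000000"

# With 100 shards and ParallelizationFactor=2, up to 200 batches can be
# processed concurrently (shards x factor) instead of the default 100.
lambda_client.update_event_source_mapping(
    UUID=MAPPING_UUID,
    ParallelizationFactor=2,  # allowed range: 1 (default) to 10
)
```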
Amazon RDS allows you to launch an optimally configured, secure and highly available database with just a few clicks. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing you to focus on your applications and business.
Serverless Big Data Analytics with Amazon Athena and QuickSight - Amazon Web Services
Check out how you can easily query raw data in various formats in Amazon S3, transform it into a canonical form, analyze it, and build dashboards to get more insights from your data.
Building analytics applications requires more than just one good service. It requires the ability to capture a vast amount of data and react to data changes in real time. It requires flexible tools which enable end users to work in the way they can be most productive, and which address the needs of both data consumers and data scientists. This analysis won't just be about data exploration and reports, but must be able to support the largest scale, complex machine and deep learning models imaginable. Across it all, strong governance, security, and cataloguing is essential. In this session, come hear how to build a full-stack analytics application using AWS services. We'll see how to capture static and dynamic data in real time and react to data changes. We'll see AWS services that perform analytics from drag-and-drop, through simple query-on-files, and into exascale data science. At the end, we'll have a data lake architecture that will meet the demands of the most sophisticated analytics customers for many years to come.
AWS Speaker: Ian Robinson, Specialist Solution Architect, Big Data and Analytics, EMEA - Amazon Web Services
AWS re:Invent 2016: Building Big Data Applications with the AWS Big Data Plat... - Amazon Web Services
Building big data applications often requires integrating a broad set of technologies to store, process, and analyze the increasing variety, velocity, and volume of data being collected by many organizations. In this session, we show how you can build entire big data applications using a core set of managed services including Amazon S3, Amazon Kinesis, Amazon EMR, Amazon Elasticsearch Service, Amazon Redshift, and Amazon QuickSight.
We walk you through the steps of building and securing a big data application using the AWS Big Data Platform. We also share best practices and common use cases for AWS big data services, including tips to help you choose the best services for your specific application.
AWS re:Invent 2016: What’s New with Amazon Redshift (BDA304) - Amazon Web Services
In this session, you learn about the latest and hottest features of Amazon Redshift. Join Vidhya Srinivasan, General Manager of Amazon Redshift, to take a deep dive into the architecture and inner workings of Amazon Redshift. You discover how the recent availability, performance, and manageability improvements we’ve made can significantly enhance your end user experience. You also get a glimpse of what we are working on and our plans for the future.
AWS re:Invent 2016: Big Data Architectural Patterns and Best Practices on AWS... - Amazon Web Services
The world is producing an ever increasing volume, velocity, and variety of big data. Consumers and businesses are demanding up-to-the-second (or even millisecond) analytics on their fast-moving data, in addition to classic batch processing. AWS delivers many technologies for solving big data problems. But what services should you use, why, when, and how? In this session, we simplify big data processing as a data bus comprising various stages: ingest, store, process, and visualize. Next, we discuss how to choose the right technology in each stage based on criteria such as data structure, query latency, cost, request rate, item size, data volume, durability, and so on. Finally, we provide reference architecture, design patterns, and best practices for assembling these technologies to solve your big data problems at the right cost.
AWS re:Invent 2016 was AWS’ largest event yet with over 32,000 attendees, 400 breakout sessions, and two keynotes of new product announcements. In this talk, we’ll explore the core themes of AWS re:Invent 2016 such as serverless and artificial intelligence. We will also drill down into several of the services and features unveiled including AWS Batch, AWS Shield, Aurora for Postgres, X-Ray, Polly, Lex, Rekognition, AWS Step Functions. Light appetizers and refreshments will be provided.
Big Data Architectural Patterns and Best Practices on AWS - Amazon Web Services
The world is producing an ever increasing volume, velocity, and variety of big data. Consumers and businesses are demanding up-to-the-second (or even millisecond) analytics on their fast-moving data, in addition to classic batch processing. AWS delivers many technologies for solving big data problems. But what services should you use, why, when, and how? In this session, we simplify big data processing as a data bus comprising various stages: ingest, store, process, and visualize. Next, we discuss how to choose the right technology in each stage based on criteria such as data structure, query latency, cost, request rate, item size, data volume, durability, and so on. Finally, we provide reference architecture, design patterns, and best practices for assembling these technologies to solve your big data problems at the right cost.
AWS Services Overview and Quarterly Update - April 2017 AWS Online Tech Talks - Amazon Web Services
• Overview of AWS New & Existing Services
• Advice for Getting Started
Join the “AWS Services Overview and Quarterly Update” webinar to take a fast-paced 45-minute tour through our broad range of new and existing services. We will also provide an update so you can review and catch up on the biggest updates from the past quarter. During the webinar, you will have the opportunity to propose questions for the live Q&A session following the presentation.
"Amgen discovers, develops, manufactures, and delivers innovative human therapeutics, helping millions of people in the fight against serious illnesses. In 2014, Amgen implemented a solution to offload ETL data across a diverse data set (U.S. pharmaceutical prescriptions and claims) using Amazon EMR. The solution has transformed the way Amgen delivers insights and reports to its sales force. To support Amgen’s entry into a much larger market, the ETL process had to scale to eight times its existing data volume. We used Amazon EC2, Amazon S3, Amazon EMR, and Amazon Redshift to generate weekly sales reporting metrics.
This session discusses highlights in Amgen's journey to leverage big data technologies and lay the foundation for future growth: benefits of ETL offloading in Amazon EMR as an entry point for big data technologies; benefits and challenges of using Amazon EMR vs. expanding on-premises ETL and reporting technologies; and how to architect an ETL offload solution using Amazon S3, Amazon EMR, and Impala."
Managing Data with Amazon ElastiCache for Redis - August 2016 Monthly Webinar... - Amazon Web Services
Many data sets, such as time-series collections or Internet of Things (IoT) deployments, can include huge numbers of sensor reports and other data points, which can be a challenge to manage and aggregate. Amazon ElastiCache for Redis provides an on-demand managed service with the performance and scalability to turn big data into useful information. Join us to learn how to use Amazon ElastiCache to create serverless solutions that let you rapidly make use of large and multisource data sets.
Learning Objectives:
• Learn how to ingest and analyze sensor data using Amazon ElastiCache for Redis and the AWS IoT Service
• Learn how to use ElastiCache Redis for Time-Series data
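As a minimal sketch of the time-series idea in the objectives above (not code from the webinar), one common pattern is to store readings in a Redis sorted set scored by timestamp; the endpoint, key names, and payload format are assumptions.

```python
import time
import redis

# Placeholder endpoint for an ElastiCache Redis node (or a local Redis for testing).
r = redis.Redis(host="my-cluster.xxxxxx.cache.amazonaws.com", port=6379)

now = time.time()

# One sorted set per sensor; the score is the epoch timestamp, so time-range
# queries become simple ZRANGEBYSCORE calls.
r.zadd("sensor:1:temperature", {f"{now}:21.5": now})

# Fetch the last hour of readings for aggregation.
last_hour = r.zrangebyscore("sensor:1:temperature", now - 3600, now)
print(last_hour)
```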
In this session, you'll learn what’s new and hot with AWS Lambda. Come learn about what we’ve been working on and what we are planning for the future. You'll get a hands-on demonstration of some of our newest features.
AWS re:Invent 2016: Simplified Data Center Migration—Lessons Learned by Live ... - Amazon Web Services
As the global leader of live entertainment, Live Nation promotes and produces over 22,000 events annually, operates out of 37 countries, and cultivates over 530 million fans globally. To focus on the growth of the business and shed increasing infrastructure costs, the company made the strategic decision to get out of the data center business and go all in with the cloud. Using instrumental services like AWS Import/Export Snowball, VM Import/Export, AWS CloudFormation and AWS Identity and Access Management, VP Cloud Services Jake Burns quickly and efficiently migrated priority business and operational applications, allowing for immediate cost efficiencies. Learn how AWS offerings like Snowball played a decisive role in Live Nation's ability to easily migrate data and enable end users to quickly access applications to minimize operational impact.
Convert and Migrate Your NoSQL Database or Data Warehouse to AWS - July 2017 - Amazon Web Services
Learning Objectives:
- Understand the use cases for migrating or replicating databases to the cloud
- Learn about the benefits of cloud-native databases for performance and cost reduction
- See how AWS Database Migration Service helps with your migration and how AWS Schema Conversion Tool makes conversions simple and quick
Moving or replicating your databases to the cloud should be simple and inexpensive. AWS has recently enhanced the AWS Database Migration Service and the AWS Schema Conversion Tool with new data sources to increase your migration options. You can now export from MongoDB databases and Greenplum, IBM Netezza, HPE Vertica, Teradata, Oracle DW and Microsoft SQL Server data warehouses to AWS. Learn how to export and migrate your data and procedural code with minimal downtime to the cloud database of your choice, including cloud-native offerings such as Amazon Aurora, Amazon DynamoDB and Amazon Redshift.
SRV301 Getting the most Bang for your buck with #EC2 #Winning - Amazon Web Services
Amazon EC2 provides you with the flexibility to cost optimize your computing portfolio through purchasing models that fit your business needs. With the flexibility of mix-and-match purchasing models, you can grow your compute capacity and throughput and enable new types of cloud computing applications with the lowest TCO. In this session, we will explore combining pay-as-you-go (On-Demand), reserve ahead of time for discounts (Reserved), and high-discount spare capacity (Spot) purchasing models to optimize costs while maintaining high performance and availability for your applications. Common application examples will be used to demonstrate how to best combine EC2’s purchasing models. You will leave the session with best practices you can immediately apply to your application portfolio.
SRV203 Getting Started with AWS Lambda and the Serverless Cloud - Amazon Web Services
Serverless computing allows you to build and run applications without the need for provisioning or managing servers. With serverless computing, you can build web, mobile, and IoT backends; run stream processing or big data workloads; run chatbots, and more. In this session, you'll learn how to get started with serverless computing with AWS Lambda, which lets you run code without provisioning or managing servers. We'll introduce you to the basics of building with Lambda and how you can benefit from features such as continuous scaling, built-in high availability, integrations with AWS and third-party apps, and subsecond metering pricing. We'll also introduce you to the broader portfolio of AWS services that help you build serverless applications with Lambda, including Amazon API Gateway, Amazon DynamoDB, AWS Step Functions, and more.
Getting Started with AWS Lambda and the Serverless Cloud - AWS Summit Cape T... - Amazon Web Services
Serverless computing allows you to build and run applications without the need for provisioning or managing servers. With serverless computing, you can build web, mobile, and IoT backends; run stream processing or big data workloads; run chatbots, and more. In this session, you’ll learn how to get started with serverless computing with AWS Lambda, which lets you run code without provisioning or managing servers. We’ll introduce you to the basics of building with Lambda and how you can benefit from features such as continuous scaling, built-in high availability, integrations with AWS and third-party apps, and subsecond metering pricing. We’ll also introduce you to the broader portfolio of AWS services that help you build serverless applications with Lambda, including Amazon API Gateway, Amazon DynamoDB, AWS Step Functions, and more.
AWS Speaker : Danilo Poccia, Technical Evangelist - Amazon Web Services
Join us to learn about the state of serverless computing from Dr. Tim Wagner, General Manager of AWS Lambda. Dr. Wagner discusses the latest developments from AWS Lambda and the serverless computing ecosystem. He talks about how serverless computing is becoming a core component in how companies build and run their applications and services, and he also discusses how serverless computing will continue to evolve.
by Rahul Sareen, Sr. IoT Consultant, AWS Professional Services
Serverless computing allows you to build and run applications without the need for provisioning or managing servers. With serverless computing, you can build web, mobile, and IoT backends; run stream processing or big data workloads; run chatbots, and more. In this session, you’ll learn how to get started with serverless computing with AWS Lambda, which lets you run code without provisioning or managing servers. We’ll introduce you to the basics of building with Lambda and how you can benefit from features such as continuous scaling, built-in high availability, integrations with AWS and third-party apps, and subsecond metering pricing. We’ll also introduce you to the broader portfolio of AWS services that help you build serverless applications with Lambda, including Amazon API Gateway, Amazon DynamoDB, AWS Step Functions, and more.
How to build and deploy serverless apps - AWS Summit Cape Town 2018 - Amazon Web Services
Speaker: Alex Casalboni, AWS
Customer Speaker: Impression Signatures
Serverless computing allows you to build and run applications without the need for provisioning or managing servers. It means that you can build web, mobile, and IoT backends, run stream processing or big data workloads, build chatbots, run code at the edge, and more. In this session, learn how to get started with serverless computing with AWS Lambda and managed services such as Amazon API Gateway, Amazon Kinesis, and Amazon DynamoDB. We introduce you to the basics of building with AWS Lambda, as well as how to properly perform CI/CD for your serverless application. We will discuss a method for automating the deployment of serverless applications using services such as AWS CodePipeline and AWS CodeBuild, and techniques such as canary deployments and automatic rollbacks.
With AWS Lambda, you can easily build scalable microservices for mobile, web, and IoT applications or respond to events from other AWS services without managing infrastructure. In this session, you’ll see demonstrations and hear more about newly launched features. We’ll show you how to use Lambda to build web, mobile, or IoT backends and voice-enabled apps, and we'll show you how to extend both AWS and third party services by triggering Lambda functions. We’ll also provide productivity and performance tips for getting the most out of your Lambda functions and show how cloud native architectures use Lambda to eliminate “cold servers” and excess capacity without sacrificing scalability or responsiveness.
2016-06 - Design your api management strategy - AWS - Microservices on AWS - SmartWave
The morning session started with a presentation on working with a microservices API gateway in hybrid architectures, by Jean-Pierre LeGoaller, Architect at AWS. We learned how to greatly reduce coding effort, make applications far more efficient, and decrease errors all at the same time, using small and flexible microservices with an API Gateway. Jean-Pierre then illustrated the benefits of AWS Lambda functions for running code seamlessly as a service on AWS's high-availability compute infrastructure.
With AWS Lambda, you can easily build scalable microservices for mobile, web, and IoT applications or respond to events from other AWS services without managing infrastructure. In this session, you’ll see demonstrations and hear more about newly launched features. We’ll show you how to use Lambda to build web, mobile, or IoT backends and voice-enabled apps, and we'll show you how to extend both AWS and third party services by triggering Lambda functions. We’ll also provide productivity and performance tips for getting the most out of your Lambda functions and show how cloud native architectures use Lambda to eliminate “cold servers” and excess capacity without sacrificing scalability or responsiveness.
This session covers AWS' philosophy and recommended best practices for building microservices applications, how AWS services like Lambda and API Gateway benefit developers building microservices apps, and how customers are using these and other AWS services to deliver their microservices apps.
The State of Serverless Computing | AWS Public Sector Summit 2017 - Amazon Web Services
Join us to learn about the state of serverless computing from Dougal Ballantyne, Principal Product Manager, Serverless. Dougal Ballantyne discusses the latest developments from AWS Lambda and the serverless computing ecosystem. He talks about how serverless computing is becoming a core component in how companies build and run their applications and services, and he also discusses how serverless computing will continue to evolve. Learn More: https://aws.amazon.com/government-education/
Getting Started with Serverless Architectures - August 2016 Monthly Webinar S... - Amazon Web Services
Serverless architectures allow you to build and run applications and services without having to manage infrastructure. With serverless architectures, your application still runs on servers, but all the server management is done by AWS.
In this webinar, you will learn how to build applications and services using a serverless architecture. We will discuss how you can use AWS Lambda to run code for any type of application or backend service; use Amazon DynamoDB to store application data with high scalability and redundancy; and use Amazon API Gateway to create and manage secure API endpoints. We will run through a demo setting up a web application using this architecture, and we will discuss best practices and patterns used by our customers to run serverless applications.
Learning Objectives:
• Understand the basics of serverless architectures
• Learn how to use Lambda, API Gateway, and DynamoDB to run web applications
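To make the Lambda + API Gateway + DynamoDB pattern above concrete, here is a minimal handler sketch (not the webinar's demo code); the table name, item shape, and the assumption that API Gateway uses the Lambda proxy integration are illustrative.

```python
import json
import boto3

# Placeholder table; assumed to exist with a string partition key named "id".
table = boto3.resource("dynamodb").Table("items")

def handler(event, context):
    # With the Lambda proxy integration, API Gateway passes the HTTP body as a string.
    body = json.loads(event.get("body") or "{}")

    # Persist the incoming item in DynamoDB.
    table.put_item(Item={"id": body["id"], "payload": json.dumps(body)})

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"stored": body["id"]}),
    }
```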
With AWS Lambda, you can easily build scalable microservices for mobile, web, and IoT applications or respond to events from other AWS services without managing infrastructure. In this session, you’ll see demonstrations and hear more about newly launched features. We’ll show you how to use Lambda to build web, mobile, or IoT backends and voice-enabled apps, and we’ll show you how to extend both AWS and third party services by triggering Lambda functions. We’ll also provide productivity and performance tips for getting the most out of your Lambda functions and show how cloud native architectures use Lambda to eliminate “cold servers” and excess capacity without sacrificing scalability or responsiveness.
Learn how to use Lambda to build web, mobile, or IoT backends and voice-enabled apps, and how to extend both AWS and third-party services by triggering Lambda functions.
Migrating your .NET Applications to the AWS Serverless Platform - Amazon Web Services
Windows and .NET-based workloads are first-class citizens on AWS. In this session, we show how you can easily move an existing .NET application to the AWS cloud and take advantage of its serverless capabilities. We will cover migration and architectural considerations for porting your C# application to AWS Lambda, and show how to use API Gateway to create a façade for your application so you can safely make changes as you migrate.
Speakers:
Stephen Liedig, Public Sector Solutions Architect, Amazon Web Services
Shane Baldacchino, Solutions Architect, Amazon Web Services
With AWS Lambda, you can easily build scalable microservices for mobile, web, and IoT applications or respond to events from other AWS services without managing infrastructure. In this session, you’ll see demonstrations and hear more about newly launched features. We’ll show you how to use Lambda to build web, mobile, or IoT backends and voice-enabled apps, and we'll show you how to extend both AWS and third party services by triggering Lambda functions. We’ll also provide productivity and performance tips for getting the most out of your Lambda functions and show how cloud native architectures use Lambda to eliminate “cold servers” and excess capacity without sacrificing scalability or responsiveness.
There was a time when almost every software component required paying for a license. Fortunately, thanks to free and open source software, today you can build practically any application using free components.
But if the software is free, who develops it? Does the free software community work purely out of altruism? Can you develop open source software professionally? In fact, some people say that open source as we knew it no longer exists, and that what we have today is something else.
In this talk I will discuss how open source code can be monetized, and some of the conflicts you may run into along the way.
I will also explain how we do it at QuestDB, developing an open source database while keeping a stable team that makes a living from it. I will also cover some problematic situations that very prominent projects have faced, or are facing today.
QuestDB: The building blocks of a fast open-source time-series database - javier ramirez
(talk delivered at OSA CON 23)
Traditionally, databases have treated timestamps just as another data type. However, when performing real-time analytics, timestamps should be first class citizens and we need rich time semantics to get the most out of our data. We also need to deal with ever growing datasets while keeping performant, which is as fun as it sounds.
It is no wonder time-series databases are now more popular than ever before. Join me in this session to learn about the internal architecture and building blocks of QuestDB, an open source time-series database designed for speed.
We will learn how it deals with data ingestion, and which SQL extensions it implements for working with time-series efficiently.
We will also review a history of some of the changes we have gone through over the past two years to deal with late and unordered data, non-blocking writes, read replicas, or data deduplication.
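As an illustration of the kind of time-series SQL extension mentioned above, the sketch below queries QuestDB over its Postgres wire protocol and downsamples with SAMPLE BY; the host, credentials, and table are assumptions based on QuestDB's local defaults.

```python
import psycopg2  # QuestDB speaks the Postgres wire protocol (port 8812 by default)

# Connection details are assumptions (QuestDB's default local credentials).
conn = psycopg2.connect(
    host="localhost", port=8812, user="admin", password="quest", dbname="qdb"
)

with conn.cursor() as cur:
    # SAMPLE BY is a QuestDB SQL extension: downsample raw readings into hourly averages.
    cur.execute(
        """
        SELECT timestamp, avg(temperature)
        FROM sensors
        WHERE timestamp > dateadd('d', -1, now())
        SAMPLE BY 1h
        """
    )
    for ts, avg_temp in cur.fetchall():
        print(ts, avg_temp)

conn.close()
```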
How we built QuestDB Cloud, a Kubernetes-based SaaS around QuestDB... - javier ramirez
QuestDB is a high-performance open source database. Many people told us they would like to use it as a service, without having to manage the machines. So we got to work on a solution that would let us launch QuestDB instances with fully managed provisioning, monitoring, security, and upgrades.
Quite a few Kubernetes clusters later, we managed to launch our QuestDB Cloud offering. This talk is the story of how we got there. I will talk about tools such as Calico, Karpenter, CoreDNS, Telegraf, Prometheus, Loki, and Grafana, but also about challenges such as authentication, billing, and multi-cloud, and about what you have to say no to in order to survive in the cloud.
Ingesting Over Four Million Rows Per Second With QuestDB Timeseries Database ... - javier ramirez
How would you build a database to support sustained ingestion of several hundreds of thousands rows per second while running near real-time queries on top?
In this session I will go over some of the technical decisions and trade-offs we applied when building QuestDB, an open source time-series database developed mainly in Java, and how we can achieve over four million row writes per second on a single instance without blocking or slowing down the reads. There will be code and demos, of course.
We will also review a history of some of the changes we have gone through over the past two years to deal with late and unordered data, non-blocking writes, read replicas, or faster batch ingestion.
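For context on what those row writes look like from the client side, here is a small ingestion sketch using the QuestDB Python client over the InfluxDB Line Protocol; the endpoint, table, and column names are assumptions, and the exact client API may differ between client versions.

```python
from questdb.ingress import Sender, TimestampNanos

# Assumes a QuestDB instance accepting ILP over HTTP on localhost:9000.
with Sender.from_conf("http::addr=localhost:9000;") as sender:
    for i in range(1_000):
        sender.row(
            "sensors",                         # target table (created on first write)
            symbols={"id": f"sensor-{i % 10}"},
            columns={"temperature": 20.0 + i % 5},
            at=TimestampNanos.now(),
        )
    sender.flush()
```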
Deduplicating and analysing time-series data with Apache Beam and QuestDB - javier ramirez
Time series data pipelines tend to prioritise speed and freshness over completeness and integrity. In such scenarios, it is very common to ingest duplicate data, which may be fine for many analytical use cases, but is very inconvenient for others.
There are many open source databases built specifically for the speed and query semantics of time series, and most of them lack automatic deduplication of events in near real-time. One such database is QuestDB, which requires a manual batch process to deduplicate ingested data.
In this talk, we will see how we can successfully use Apache Beam to deduplicate streaming time series, which can then be analysed by a time series database.
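A minimal Beam sketch of the idea described above (exact-duplicate removal on a small bounded example rather than the talk's streaming pipeline); the record shape is an assumption, and a real pipeline would read from an unbounded source and write the deduplicated stream to a time-series database such as QuestDB.

```python
import apache_beam as beam

# Toy input: the second reading is an exact duplicate of the first.
events = [
    ("sensor-1", "2023-01-01T00:00:00Z", 21.5),
    ("sensor-1", "2023-01-01T00:00:00Z", 21.5),
    ("sensor-2", "2023-01-01T00:00:01Z", 19.0),
]

with beam.Pipeline() as p:
    (
        p
        | "Read" >> beam.Create(events)
        | "Deduplicate" >> beam.Distinct()  # drops records that are exactly equal
        | "Output" >> beam.Map(print)
    )
```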
Relational databases were created a long time ago for a simpler world. Even if they are still awesome tools for generic workloads, there are some things they cannot do well.
In this session I will speak about purpose-built databases that you can use for specific business scenarios. We will see the type of queries you can run on a Graph database, a Document Database, and a Time-Series database. We will then see how a relational database could also be used for the same use cases, just in a much more complex way.
Your Timestamps Deserve Better than a Generic Database - javier ramirez
If you are storing records with a timestamp in your database, it is very likely a time series database can make your life easier.
However, time series databases are still the great unknown for a large part of the tech community.
In this talk, I will show you what use cases they are good for, what they give you that you cannot get from a traditional database, and when it is a good idea (and when it is not) to use them.
For the demos, we will be using QuestDB, the fastest open-source time series database.
How to design a database that can ingest more than four million ... - javier ramirez
In this session I will cover the technical decisions we made when developing QuestDB, an open source, Postgres-compatible time-series database, and how we managed to write more than four million rows per second without blocking or slowing down queries.
I will talk about things like (zero) garbage collection, instruction vectorization using SIMD, rewriting instead of reusing to shave off microseconds, taking advantage of advances in processors, hard drives, and operating systems (such as io_uring support), and the balance between user experience and performance when new features are proposed.
Processing and analysing streaming data with Python. Pycon Italy 2022 - javier ramirez
Data used to be a batch thing, but more and more we get unbounded streams of data, fast or slow, that we need to process and analyse in near real time.
In this talk I’ll show you how you can use Apache Flink and QuestDB to build reliable streaming data pipelines that can grow as much as you need.
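A tiny PyFlink sketch in the spirit of the talk (not the talk's actual code); the bounded collection stands in for a real unbounded source such as Kafka, and writing results to QuestDB (for example over ILP) would be a separate sink step not shown here.

```python
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()

# Bounded stand-in for an unbounded source (e.g. a Kafka topic of sensor readings).
readings = env.from_collection(
    [("sensor-1", 21.5), ("sensor-2", 19.0), ("sensor-1", 22.1)]
)

# Per-event transformation: convert Celsius readings to Fahrenheit.
readings.map(lambda r: (r[0], round(r[1] * 1.8 + 32, 1))).print()

env.execute("sensor-readings-demo")
```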
QuestDB: ingesting a million time series per second on a single instance. Big... - javier ramirez
In this session I will show you the technical decisions we made when building QuestDB, the open source, Postgres compatible, time-series database, and how we can achieve a million row writes per second without blocking or slowing down the reads.
AWS services and infrastructure and the upcoming region in Aragón - javier ramirez
AWS is building an infrastructure region in Aragón. OK, but what does that mean? Is it really so different from a conventional data center or from other cloud providers? (Spoiler: yes.) In this session I explain why. Video available at https://catedrasamcadt.unizar.es/noticias/el-momento-tecnologico-actual-contado-por-trabajadores-de-amazon-web-services/
AWS launched publicly in March 2006 with just one service, starting the age of the public cloud. You might think that after 15 years everything in the cloud has already been invented, but that's simply not the case.
In this session I want to show you how AWS is reinventing the cloud in areas like computing, machine learning, databases and analytics, or cloud infrastructure.
Getting started with streaming analytics: Setting up a pipelinejavier ramirez
In this session I will show you how to create a simple streaming analytics pipeline, first using open source tools and developing locally, then moving to a VM, then moving to fully managed AWS services. The session will serve as an introduction to some details of Apache Kafka, Apache Flink, Elasticsearch, Amazon Managed Streaming for Kafka, Kinesis Data Analytics, and Amazon Elasticsearch Service. It will be an almost slideless presentation, as I will spend most of the time at the command line and in the IDE.
Getting started with streaming analytics: Deep Divejavier ramirez
Now that we know how to create simple streaming analytics pipelines, it is time to learn something more interesting. In this session I will show you how to add Complex Event Processing to your Apache Flink (or Kinesis Data Analytics) application using Java. For those of you who prefer SQL, I will show you how to run streaming analytics using only SQL.
Getting started with streaming analytics: streaming basics (1 of 3)javier ramirez
In this webinar we explain which are some of the problems of streaming analytics, and why they are different to batch/big data analytics. Then we go into introducing some basic streaming concepts, like event queues, event processors, event vs processing time, and delivery guarantees. We end this first part of the series presenting a few of the most common open source components for streaming (Kafka, Spark, Flink, Cassandra, or ElasticSearch) and we mention the different options you have to run them on AWS.
Monitorización de seguridad y detección de amenazas con AWSjavier ramirez
Security is our number one priority. When you deploy your infrastructure and applications in the cloud, keep in mind that many security practices are the same as the ones traditionally used on-premises, but there are other mechanisms that are specific to AWS and help you operate securely.
In this webinar we explain the basics of security monitoring and threat detection, and we will see how services such as Amazon GuardDuty and AWS Security Hub help you get a complete picture, meet your compliance requirements, and detect threats in your workloads.
Consulta cualquier fuente de datos usando SQL con Amazon Athena y sus consult...javier ramirez
As developers, we now use different databases depending on the needs of our applications. For example, if we are building a social network we might use a graph database such as Amazon Neptune; if our requirement is very flexible schemas, perhaps Amazon DocumentDB; and if we need ultra-low latencies, perhaps Amazon DynamoDB, or even Amazon ElastiCache with Redis.
It is increasingly common to find complex applications composed of different services and different data stores. This is great from the point of view of choosing the ideal tool for each use case, but it makes data analytics harder because not all the information lives in a single relational database.
In this webinar we introduce Athena federated queries, which let you run SQL queries against any of your databases, both on AWS and on-premises. Moreover, in a single SELECT you can query different data sources and join across them. To make everything clearer, we will show it with a demo querying, via SQL, a database that does not support SQL natively.
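As an illustrative sketch of launching such a federated query from Python: the catalog, database, table, and bucket names below are placeholders, and the federated data source is assumed to be already registered with an Athena data source connector.

import boto3

athena = boto3.client("athena")

# Placeholder names: "dynamo_catalog" stands for an Athena data source backed by
# a federated query connector, joined here with a table from the Glue data catalog.
response = athena.start_query_execution(
    QueryString="""
        SELECT o.order_id, o.total, c.segment
        FROM "dynamo_catalog"."default"."orders" o
        JOIN "awsdatacatalog"."sales"."customers" c
          ON o.customer_id = c.customer_id
        LIMIT 10
    """,
    ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/"},
)
print(response["QueryExecutionId"])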
Recomendaciones, predicciones y detección de fraude usando servicios de intel...javier ramirez
Implementing machine learning models to solve complex business challenges, such as fraud detection, recommendations, or time-series forecasting, is hard if you start from scratch. However, with AWS tools, implementing these models is within reach of any company that can upload a file to the cloud and call an API to get results. Built on machine learning technology refined over years of use at Amazon.com, Amazon Forecast, Amazon Personalize, and Amazon Fraud Detector let anyone with no prior machine learning experience integrate these technologies into their applications. In this video you will learn the challenges of building prediction models for the use cases above, see how AWS speeds up the hard work needed to design, train, and deploy a model customised for your data, and we will cover everything you need to start integrating these models into your application. And of course, we will show demos of how it all works.
Paketo Buildpacks : la meilleure façon de construire des images OCI? DevopsDa...Anthony Dahanne
Buildpacks have been around for more than 10 years! They were first used to detect and build an application before deploying it to certain PaaS platforms. Then, with their latest generation, Cloud Native Buildpacks (a CNCF incubating project), we could build Docker (OCI) images. Are they a good alternative to a Dockerfile? What are the Paketo buildpacks? Which communities support them, and how?
Come find out in this ignite session.
Into the Box Keynote Day 2: Unveiling amazing updates and announcements for modern CFML developers! Get ready for exciting releases and updates on Ortus tools and products. Stay tuned for cutting-edge innovations designed to boost your productivity.
Experience our free, in-depth three-part Tendenci Platform Corporate Membership Management workshop series! In Session 1 on May 14th, 2024, we began with an Introduction and Setup, mastering the configuration of your Corporate Membership Module settings to establish membership types, applications, and more. Then, on May 16th, 2024, in Session 2, we focused on binding individual members to a Corporate Membership and Corporate Reps, teaching you how to add individual members and assign Corporate Representatives to manage dues, renewals, and associated members. Finally, on May 28th, 2024, in Session 3, we covered questions and concerns, addressing any queries or issues you may have.
For more Tendenci AMS events, check out www.tendenci.com/events
Globus Compute with IRI Workflows - GlobusWorld 2024Globus
As part of the DOE Integrated Research Infrastructure (IRI) program, NERSC at Lawrence Berkeley National Lab and ALCF at Argonne National Lab are working closely with General Atomics on accelerating the computing requirements of the DIII-D experiment. As part of this work, the team is investigating ways to speed up the time to solution for many different parts of the DIII-D workflow, including how they run jobs on HPC systems. One of these routes is looking at Globus Compute as a way to replace the current method for managing tasks, and we describe a brief proof of concept showing how Globus Compute could help to schedule jobs and be a tool to connect compute at different facilities.
Listen to the keynote address and hear about the latest developments from Rachana Ananthakrishnan and Ian Foster who review the updates to the Globus Platform and Service, and the relevance of Globus to the scientific community as an automation platform to accelerate scientific discovery.
Climate Science Flows: Enabling Petabyte-Scale Climate Analysis with the Eart...Globus
The Earth System Grid Federation (ESGF) is a global network of data servers that archives and distributes the planet’s largest collection of Earth system model output for thousands of climate and environmental scientists worldwide. Many of these petabyte-scale data archives are located in proximity to large high-performance computing (HPC) or cloud computing resources, but the primary workflow for data users consists of transferring data and applying computations on a different system. As part of the ESGF 2.0 US project (funded by the United States Department of Energy Office of Science), we developed pre-defined, on-demand data workflows capable of applying many data reduction and data analysis operations to the large ESGF data archives, transferring only the resulting analysis (e.g., visualizations, smaller data files). In this talk, we will showcase a few of these workflows, highlighting how Globus Flows can be used for petabyte-scale climate analysis.
SOCRadar Research Team: Latest Activities of IntelBrokerSOCRadar
The European Union Agency for Law Enforcement Cooperation (Europol) has suffered an alleged data breach after a notorious threat actor claimed to have exfiltrated data from its systems. Infamous data leaker IntelBroker posted on the even more infamous BreachForums hacking forum, saying that Europol suffered a data breach this month.
The alleged breach affected Europol agencies CCSE, EC3, Europol Platform for Experts, Law Enforcement Forum, and SIRIUS. Infiltration of these entities can disrupt ongoing investigations and compromise sensitive intelligence shared among international law enforcement agencies.
However, this is neither the first nor the last activity of IntelBroker. We have compiled for you what happened in the last few days. To track such hacker activities on dark web sources like hacker forums, private Telegram channels, and other hidden platforms where cyber threats often originate, you can check SOCRadar’s Dark Web News.
Stay Informed on Threat Actors’ Activity on the Dark Web with SOCRadar!
Providing Globus Services to Users of JASMIN for Environmental Data AnalysisGlobus
JASMIN is the UK’s high-performance data analysis platform for environmental science, operated by STFC on behalf of the UK Natural Environment Research Council (NERC). In addition to its role in hosting the CEDA Archive (NERC’s long-term repository for climate, atmospheric science & Earth observation data in the UK), JASMIN provides a collaborative platform to a community of around 2,000 scientists in the UK and beyond, providing nearly 400 environmental science projects with working space, compute resources and tools to facilitate their work. High-performance data transfer into and out of JASMIN has always been a key feature, with many scientists bringing model outputs from supercomputers elsewhere in the UK, to analyse against observational or other model data in the CEDA Archive. A growing number of JASMIN users are now realising the benefits of using the Globus service to provide reliable and efficient data movement and other tasks in this and other contexts. Further use cases involve long-distance (intercontinental) transfers to and from JASMIN, and collecting results from a mobile atmospheric radar system, pushing data to JASMIN via a lightweight Globus deployment. We provide details of how Globus fits into our current infrastructure, our experience of the recent migration to GCSv5.4, and of our interest in developing use of the wider ecosystem of Globus services for the benefit of our user community.
Gamify Your Mind; The Secret Sauce to Delivering Success, Continuously Improv...Shahin Sheidaei
Games are powerful teaching tools, fostering hands-on engagement and fun. But they require careful consideration to succeed. Join me to explore factors in running and selecting games, ensuring they serve as effective teaching tools. Learn to maintain focus on learning objectives while playing, and how to measure the ROI of gaming in education. Discover strategies for pitching gaming to leadership. This session offers insights, tips, and examples for coaches, team leads, and enterprise leaders seeking to teach from simple to complex concepts.
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus...Globus
Large Language Models (LLMs) are currently the center of attention in the tech world, particularly for their potential to advance research. In this presentation, we'll explore a straightforward and effective method for quickly initiating inference runs on supercomputers using the vLLM tool with Globus Compute, specifically on the Polaris system at ALCF. We'll begin by briefly discussing the popularity and applications of LLMs in various fields. Following this, we will introduce the vLLM tool, and explain how it integrates with Globus Compute to efficiently manage LLM operations on Polaris. Attendees will learn the practical aspects of setting up and remotely triggering LLMs from local machines, focusing on ease of use and efficiency. This talk is ideal for researchers and practitioners looking to leverage the power of LLMs in their work, offering a clear guide to harnessing supercomputing resources for quick and effective LLM inference.
Designing for Privacy in Amazon Web ServicesKrzysztofKkol1
Data privacy is one of the most critical issues that businesses face. This presentation shares insights on the principles and best practices for ensuring the resilience and security of your workload.
Drawing on a real-life project from the HR industry, the various challenges will be demonstrated: data protection, self-healing, business continuity, security, and transparency of data processing. This systematized approach allowed us to create a secure AWS cloud infrastructure that not only met strict compliance rules but also exceeded the client's expectations.
First Steps with Globus Compute Multi-User EndpointsGlobus
In this presentation we will share our experiences around getting started with the Globus Compute multi-user endpoint. Working with the Pharmacology group at the University of Auckland, we have previously written an application using Globus Compute that can offload computationally expensive steps in the researcher's workflows, which they wish to manage from their familiar Windows environments, onto the NeSI (New Zealand eScience Infrastructure) cluster. Some of the challenges we have encountered were that each researcher had to set up and manage their own single-user globus compute endpoint and that the workloads had varying resource requirements (CPUs, memory and wall time) between different runs. We hope that the multi-user endpoint will help to address these challenges and share an update on our progress here.
How Does XfilesPro Ensure Security While Sharing Documents in Salesforce?XfilesPro
Worried about document security while sharing them in Salesforce? Fret no more! Here are the top-notch security standards XfilesPro upholds to ensure strong security for your Salesforce documents while sharing with internal or external people.
To learn more, read the blog: https://www.xfilespro.com/how-does-xfilespro-make-document-sharing-secure-and-seamless-in-salesforce/
Why React Native as a Strategic Advantage for Startup Innovation.pdfayushiqss
Do you know that React Native is being increasingly adopted by startups as well as big companies in the mobile app development industry? Big names like Facebook, Instagram, and Pinterest have already integrated this robust open-source framework.
In fact, according to a report by Statista, the number of React Native developers has been steadily increasing over the years, reaching an estimated 1.9 million by the end of 2024. This means that demand for the framework in the job market has been growing, making it a valuable skill.
But what makes React Native so popular for mobile application development? It offers excellent cross-platform capabilities, among other benefits. With React Native, developers can write code once and run it on both iOS and Android devices, saving time and resources, which leads to shorter development cycles and faster time-to-market for your app.
Let’s take the example of a startup, which wanted to release their app on both iOS and Android at once. Through the use of React Native they managed to create an app and bring it into the market within a very short period. This helped them gain an advantage over their competitors because they had access to a large user base who were able to generate revenue quickly for them.
Globus Connect Server Deep Dive - GlobusWorld 2024Globus
We explore the Globus Connect Server (GCS) architecture and experiment with advanced configuration options and use cases. This content is targeted at system administrators who are familiar with GCS and currently operate—or are planning to operate—broader deployments at their institution.
Exploring Innovations in Data Repository Solutions - Insights from the U.S. G...Globus
The U.S. Geological Survey (USGS) has made substantial investments in meeting evolving scientific, technical, and policy driven demands on storing, managing, and delivering data. As these demands continue to grow in complexity and scale, the USGS must continue to explore innovative solutions to improve its management, curation, sharing, delivering, and preservation approaches for large-scale research data. Supporting these needs, the USGS has partnered with the University of Chicago-Globus to research and develop advanced repository components and workflows leveraging its current investment in Globus. The primary outcome of this partnership includes the development of a prototype enterprise repository, driven by USGS Data Release requirements, through exploration and implementation of the entire suite of the Globus platform offerings, including Globus Flow, Globus Auth, Globus Transfer, and Globus Search. This presentation will provide insights into this research partnership, introduce the unique requirements and challenges being addressed and provide relevant project progress.
Check out the webinar slides to learn more about how XfilesPro transforms Salesforce document management by leveraging its world-class applications. For more details, please connect with sales@xfilespro.com
If you want to watch the on-demand webinar, please click here: https://www.xfilespro.com/webinars/salesforce-document-management-2-0-smarter-faster-better/
1. Primeros pasos en desarrollo serverless
Javier Ramírez
Developer Advocate - Amazon Web Services
@supercoco9
2. We are witnessing a paradigm shift
Experiment, innovate more often
Release features faster
Build better products
Focus on business logic
Decouple software systems
Win customers
75% of organizations use or plan to use serverless technologies within the next two years.*
* 451 Research, The Journey to Serverless-First: Enterprise Stories
https://aws.amazon.com/resources/analyst-reports/451-serverless-first-enterprises-on-aws/
3. Sumo Logic’s analysis of its AWS customers’ activities found a modest increase in those running Lambdas, going from 36% in 2019 to 39% in 2020. In early 2020, Datadog reported that more than 40% of its AWS customers had adopted Lambda, with that figure almost doubling among AWS customers that also used containers.
https://thenewstack.io/adoption-of-aws-lambda-serverless-stalls/
https://www.sumologic.com/brief/continuous-intelligence-report/
4. Development transformation at Amazon (and everywhere else)
1994-2001: Monolithic architecture + hierarchical organization
2002+: Decoupled services + two-pizza teams
6. Amazon S3 @ re:Invent 2018:
In 2020 S3 stored over 100 trillion (10^14, or 100,000,000,000,000) objects, and regularly peaks at tens of millions of requests per second.
8. Computing evolution – A paradigm shift (level of abstraction vs. focus on business logic)
PHYSICAL MACHINES
• Requires “guess” planning
• Lives for years on-premises
• Heavy investments (capex)
• Low innovation factor
• Deploy in months
9. Computing evolution – A paradigm shift
VIRTUAL MACHINES
• Hardware independence
• Faster provisioning speed (minutes/hours)
• Trade capex for opex
• More scale
• Elastic resources
• Faster speed and agility
• Reduced maintenance
10. Computing evolution – A paradigm shift
CONTAINERIZATION
• Platform independence
• Consistent runtime environment
• Higher resource utilization
• Easier and faster deployments
• Isolation and sandboxing
• Start speed (deploy in seconds)
11. “No server is easier to manage than no server.”
Werner Vogels, Amazon CTO
12. Computing evolution – A paradigm shift
SERVERLESS (AWS Lambda, AWS Fargate)
• Continuous scaling
• Fault tolerance built in
• Pay for value
• Zero maintenance
13. AWS operational responsibility models (from on-premises, less managed, to cloud, more managed)
Compute: Virtual Machine → EC2 → Elastic Beanstalk → AWS Lambda / Fargate
Databases: MySQL → MySQL on EC2 → RDS MySQL → RDS Aurora → Aurora Serverless → DynamoDB
Storage: Storage → S3
Messaging: ESBs → Amazon MQ → Kinesis → SQS / SNS
Analytics: Hadoop → Hadoop on EC2 → EMR → Elasticsearch Service → Athena
14. Serverless means …
• No servers to provision or manage
• Scales with usage
• Never pay for idle
• Availability and fault tolerance built in
18. Introduction to AWS Lambda
• Function-as-a-Service
• Run code without provisioning or managing servers
• Pay only for the compute time you consume
• Automatically runs your code with high availability
• Scale with usage
19. Lambda handles
• Load balancing
• Auto scaling
• Handling failures
• Security isolation
• OS management
• Managing utilization
(and many other things) for you
20. Comparison of operational responsibility
AWS Lambda (serverless functions) - more opinionated
• AWS manages: data source integrations; physical hardware, software, networking, and facilities; provisioning
• Customer manages: application code
AWS Fargate (serverless containers)
• AWS manages: container orchestration, provisioning; cluster scaling; physical hardware, host OS/kernel, networking, and facilities
• Customer manages: application code; data source integrations; security config and updates, network config, management tasks
ECS/EKS (container management as a service)
• AWS manages: container orchestration control plane; physical hardware, software, networking, and facilities
• Customer manages: application code; data source integrations; work clusters; security config and updates, network config, firewall, management tasks
EC2 (Infrastructure-as-a-Service) - less opinionated
• AWS manages: physical hardware, software, networking, and facilities
• Customer manages: application code; data source integrations; scaling; security config and updates, network config, management tasks; provisioning, managing scaling and patching of servers
21. Customer use cases: Lambda at scale
• Processes 4,000 requests per second
• Processes half a trillion validations of stock trades daily
• Triggers 1.2 billion Lambda requests each month
• Scaled up to 20,000 concurrent Lambda executions in testing
25. Serverless Applications
Event source → Function
Event sources: changes in data state, requests to endpoints, changes in resource state
Function runtimes: Node.js, Python, Java, C#, Go, Ruby, PowerShell, Runtime API
27. Serverless Applications
Event source → Function → Services
Event sources: changes in data state, requests to endpoints, changes in resource state
Function runtimes: Node.js, Python, Java, C#, Go, Ruby, PowerShell, Runtime API
29. Dead-Letter Queue
• Asynchronous Lambda invocations are retried two more times (3 times total)
• Lambda can forward payloads that were not processed to a dead-letter queue (IF configured!)
• A mechanism to handle exceptions and failures gracefully
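As a hedged illustration of that configuration step, here is a minimal sketch that attaches a dead-letter queue to an existing function with boto3; the function name and queue ARN are placeholders, not from the talk.

import boto3

# Hypothetical names: adjust to your own function and queue.
FUNCTION_NAME = "my-function"
DLQ_ARN = "arn:aws:sqs:eu-west-1:123456789012:my-function-dlq"

lambda_client = boto3.client("lambda")

# Attach the SQS queue as the dead-letter target for asynchronous invocations.
lambda_client.update_function_configuration(
    FunctionName=FUNCTION_NAME,
    DeadLetterConfig={"TargetArn": DLQ_ARN},
)

In a real setup the function's execution role also needs permission to send messages to that queue.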
30. Monitoring and debugging Lambda functions
• AWS Lambda console includes a dashboard for functions
• Lists all Lambda functions
• Easy editing of resources, event sources and other settings
• At-a-glance metrics
• Metrics automatically reported to Amazon CloudWatch for each Lambda function: requests, errors, latency, throttles
31. AWS X-Ray
Profile and troubleshoot serverless applications:
• Lambda instruments incoming requests for all supported languages and can capture calls made in code
• API Gateway inserts a tracing header into HTTP calls as well as reports data back to X-Ray itself

var AWSXRay = require('aws-xray-sdk-core');
var AWS = AWSXRay.captureAWS(require('aws-sdk'));
var s3Client = new AWS.S3();
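For a Python Lambda, a roughly equivalent sketch (assuming the aws-xray-sdk package, which the slides do not cover, is bundled with the function or provided via a layer) patches boto3 so that downstream AWS calls show up as X-Ray subsegments:

# Assumes the aws-xray-sdk package is deployed with the function.
import boto3
from aws_xray_sdk.core import patch_all

patch_all()  # instrument supported libraries, including boto3/botocore

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # This S3 call is traced as a subsegment when active tracing is enabled.
    return s3.list_buckets()["Owner"]["DisplayName"]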
32. Anatomy of a Lambda Function
• Handler() function: the function to be executed upon invocation
• Event object: data sent during Lambda function invocation
• Context object: methods available to interact with runtime information (request ID, log group, more)

import json

def lambda_handler(event, context):
    # TODO implement
    return {
        'statusCode': 200,
        'body': json.dumps('Hello World!')
    }
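To connect those three pieces, a minimal local smoke test can call the handler above directly; the event payload here is a made-up example, not a specific AWS event format, and no real context object is needed for this handler.

# Local smoke test for the handler above.
if __name__ == "__main__":
    sample_event = {"name": "Alcarria"}  # illustrative payload only
    result = lambda_handler(sample_event, None)
    print(result)  # {'statusCode': 200, 'body': '"Hello World!"'}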
33. Lambda Layers
• Lets functions easily share code: upload a layer once, reference it within any function
• Promotes separation of responsibilities, lets developers iterate faster on writing business logic
• Built-in support for secure sharing by ecosystem
34. Lambda Layers: Use cases
• Custom code that is used by more than one function
• Libraries, modules, frameworks to simplify the implementation of your business logic
• Security/monitoring services
• Shared code that does not change frequently
• Bring your own runtime: C++, Rust, PHP
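As a hedged sketch of how a layer gets published and attached with boto3 (the zip file, layer name, and function name are placeholders invented for the example):

import boto3

lambda_client = boto3.client("lambda")

# Publish a zip of shared code as a new layer version.
# "shared_layer.zip" and "my-function" are placeholder names.
with open("shared_layer.zip", "rb") as f:
    layer = lambda_client.publish_layer_version(
        LayerName="shared-utils",
        Content={"ZipFile": f.read()},
        CompatibleRuntimes=["python3.12"],
    )

# Reference the layer from a function; its contents are available under /opt at runtime.
lambda_client.update_function_configuration(
    FunctionName="my-function",
    Layers=[layer["LayerVersionArn"]],
)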
35. Fine-grained Pricing
• Buy compute time in 1 ms increments
• Low request charge
• No hourly, daily, or monthly minimums
• No per-device fees
• Never pay for idle
Free Tier: 1M requests and 400,000 GB-seconds of compute. Every month, every customer.
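To make "pay for value" concrete, here is a small sketch of the arithmetic. The per-request and per-GB-second prices are illustrative assumptions (roughly the published standard prices at the time of writing), not figures from the talk; always check the current AWS pricing page.

# Illustrative Lambda cost estimate; unit prices below are assumptions for the sketch.
PRICE_PER_REQUEST = 0.20 / 1_000_000   # USD per request (assumed)
PRICE_PER_GB_SECOND = 0.0000166667     # USD per GB-second (assumed)

FREE_REQUESTS = 1_000_000              # monthly free tier
FREE_GB_SECONDS = 400_000

def monthly_cost(requests, avg_duration_ms, memory_mb):
    gb_seconds = requests * (avg_duration_ms / 1000) * (memory_mb / 1024)
    billable_requests = max(0, requests - FREE_REQUESTS)
    billable_gb_seconds = max(0, gb_seconds - FREE_GB_SECONDS)
    return (billable_requests * PRICE_PER_REQUEST
            + billable_gb_seconds * PRICE_PER_GB_SECOND)

# Example: 3 million invocations, 120 ms average duration, 512 MB of memory.
print(f"{monthly_cost(3_000_000, 120, 512):.2f} USD")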
37. AWS Compute Offerings
How do I choose?
• I want to configure servers, storage, networking, and my OS → Amazon EC2
• I want to run servers, configure applications, and control scaling → Amazon ECS
• I want to run my containers → AWS Fargate
• Run my code when it’s needed → AWS Lambda
38. The two serverless compute options
AWS Fargate: serverless compute engine for containers; long-running; bring existing code; fully-managed orchestration
AWS Lambda: serverless event-driven code execution; short-lived; all language runtimes; data source integrations
41. Orchestration for serverless apps
“I want to sequence functions”
“I want to select functions based on data”
“I want to run functions in parallel”
“I want to retry functions”
“I want to try/catch/finally”
“I want to run code for hours”
AWS Step Functions
42. AWS Step Functions
Easily coordinate multiple Lambda functions using visual workflows
• Define in JSON
• Visualize in the Console
• Monitor Executions
43. Processing a new bank account application requires some service coordination
Data checking: verify identity documents, check address
Human review: list flagged applications, handle human decisions
Account applications: accept new application, consolidate data checks, human review?, approve or reject
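A hedged sketch of what that coordination could look like as a Step Functions state machine, created here with boto3; the state names, Lambda ARNs, and role ARN are invented for illustration, and the error handling is reduced to a single retry.

import json
import boto3

# All ARNs and names below are placeholders for this sketch.
definition = {
    "Comment": "New bank account application (illustrative)",
    "StartAt": "VerifyIdentityDocuments",
    "States": {
        "VerifyIdentityDocuments": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:verify-identity",
            "Retry": [{"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 2}],
            "Next": "CheckAddress"
        },
        "CheckAddress": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:check-address",
            "Next": "ApproveOrReject"
        },
        "ApproveOrReject": {
            "Type": "Choice",
            "Choices": [
                {"Variable": "$.flagged", "BooleanEquals": True, "Next": "HumanReview"}
            ],
            "Default": "Approved"
        },
        "HumanReview": {"Type": "Succeed"},
        "Approved": {"Type": "Succeed"}
    }
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="account-application",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/stepfunctions-execution-role",
)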
45. There are still a lot of things to think about
• Deployment and Infrastructure as code
• Secrets/configuration management
• Simplifying code management
• Debugging/troubleshooting
• Performance controls
• Security
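On that first bullet, a minimal sketch of defining the Lambda function from slide 32 as infrastructure as code with the AWS CDK (v2) in Python; the stack name, memory, timeout, and the assumption that "src/" contains an app.py with the earlier lambda_handler are all placeholders for the example.

# Minimal AWS CDK v2 sketch; names and values are illustrative assumptions.
from aws_cdk import App, Stack, Duration, aws_lambda as _lambda
from constructs import Construct

class HelloServerlessStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Packages the code in ./src and deploys it as a Python 3.12 function.
        _lambda.Function(
            self, "HelloFunction",
            runtime=_lambda.Runtime.PYTHON_3_12,
            handler="app.lambda_handler",
            code=_lambda.Code.from_asset("src"),
            memory_size=256,
            timeout=Duration.seconds(10),
        )

app = App()
HelloServerlessStack(app, "HelloServerlessStack")
app.synth()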