Algorithmia is a startup with a mission to make state-of-the-art machine learning discoverable by everyone: they offer the largest algorithm marketplace in the world, with over 2,500 algorithms supporting tens of thousands of application developers. Algorithmia is the first company to make deep learning, one of the most conceptually difficult areas of computing, accessible to any company via microservices. In this session, you learn how this startup has selected and optimized Amazon EC2 instances for various algorithms (including the latest generation of GPU-optimized instances) to create a flexible and scalable platform. They also share their architecture and best practices for getting any computationally intensive application started quickly.
Strategic Uses for Cost Efficient Long-Term Cloud Storage | Amazon Web Services
Compared to storing long-term datasets on-premises, archiving in the cloud is a smart alternative whether you’re looking for an active archive solution, tape replacement, or to fulfill a compliance requirement. Learn how AWS customers are simplifying their archiving strategy and meeting compliance needs using Amazon Glacier. Hear how customers have evolved their backup and disaster recovery architectures and replaced tape solutions by turning to AWS for a more cost-efficient, durable, and agile solution. We will showcase Sony DADC's active archive deployment on Glacier and demonstrate how some of our financial services customers have set up compliant archives to meet their regulatory objectives.
AWS re:Invent 2016: Building HPC Clusters as Code in the (Almost) Infinite Cl... | Amazon Web Services
Every day, the computing power of high-performance computing (HPC) clusters helps scientists make breakthroughs, such as proving the existence of gravitational waves and screening new compounds for new drugs. Yet building HPC clusters is out of reach for most organizations, due to the upfront hardware costs and ongoing operational expenses. Now the speed of innovation is only bound by your imagination, not your budget. Researchers can run one cluster for 10,000 hours or 10,000 clusters for one hour anytime, from anywhere, and both cost the same in the cloud. And with the availability of Public Data Sets in Amazon S3, petabyte scale data is instantly accessible in the cloud. Attend and learn how to build HPC clusters on the fly, leverage Amazon’s Spot market pricing to minimize the cost of HPC jobs, and scale HPC jobs on a small budget, using all the same tools you use today, and a few new ones too.
Getting Started with Big Data and HPC in the Cloud - August 2015 | Amazon Web Services
How can you use Big Data to grow your business and discover new opportunities? When organizations effectively capture, analyze, visualize, and apply big data insights to their business goals, they differentiate themselves from their competitors and outperform them in terms of operational efficiency and the bottom line. With Amazon Web Services, businesses and researchers can easily fulfill their high performance computing (HPC) requirements with the added benefit of ad-hoc provisioning, pay-as-you-go pricing, and faster time-to-results. Join this session to understand how to run HPC applications in the AWS cloud and learn about different AWS Big Data and Analytics services such as Amazon Elastic MapReduce (Hadoop), Amazon Redshift (Data Warehouse), and Amazon Kinesis (Streaming), when to use them, and how they work together.
AWS re:Invent 2016: Getting the most Bang for your buck with #EC2 #Winning (C... | Amazon Web Services
Amazon EC2 provides you with the flexibility to cost optimize your computing portfolio through purchasing models that fit your business needs. With the flexibility of mix-and-match purchasing models, you can grow your compute capacity and throughput and enable new types of cloud computing applications with the lowest TCO.
In this session, we will explore combining pay-as-you-go (On-Demand), reserve ahead of time for discounts (Reserved), and high-discount spare capacity (Spot) purchasing models to optimize costs while maintaining high performance and availability for your applications. Common application examples will be used to demonstrate how to best combine EC2’s purchasing models. You will leave the session with best practices you can immediately apply to your application portfolio.
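As a rough illustration of how these purchasing models map to API calls, the following Python sketch (not from the session; the AMI ID and instance type are placeholders) launches baseline On-Demand capacity and interruptible Spot capacity with boto3. Reserved Instances are a billing construct, so they require no change to the launch call itself:

```python
# Minimal sketch: launching the same workload On-Demand and as a Spot request
# with boto3. The AMI ID and instance type are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# On-Demand capacity for the baseline, always-on part of the fleet.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="c5.large",
    MinCount=1,
    MaxCount=1,
)

# Spot capacity for the interruptible, scale-out part of the fleet.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5.large",
    MinCount=1,
    MaxCount=4,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {"SpotInstanceType": "one-time"},
    },
)
```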
Building and scaling your containerized microservices on Amazon ECS | Amazon Web Services
Microservices is an architectural approach in which you decompose complex applications into smaller, independent services. Containers are great for running small, decoupled services, but how do you coordinate running microservices in production at scale, and which AWS services should you use?
Accelerate your Business with SAP on AWS - AWS Summit Cape Town 2017 | Amazon Web Services
From dev and test to large-scale HANA production deployments, enterprise usage of SAP on AWS is rapidly growing across all verticals and geographies. Gain insights into the AWS SAP offering and partnership, and why a cloud-first approach makes business, technical, and financial sense for the numerous SAP solutions that are certified and ready to be deployed today.
AWS Speaker: Michael Needham, Sr Mgr, Solutions Architecture - Amazon Web Services
Deep Dive on Elastic File System - February 2017 AWS Online Tech Talks | Amazon Web Services
Organizations face significant challenges moving their applications to the cloud when they require a standard file system interface for accessing their cloud data. In this technical session, we will explore the world’s first cloud-scale file system and its targeted use cases. Attendees will learn about the Amazon Elastic File System (EFS) features and benefits, how to identify applications that are appropriate for use with Amazon EFS, and details about its performance and security models. We will highlight and demonstrate how to deploy Amazon EFS in one of our most common use cases and will share tips for success throughout.
Learning Objectives:
• Recognize why and when to use Amazon EFS
• Understand key technical/security concepts
• Learn how to leverage EFS’s performance
• See a demo of EFS in action
• Review EFS’s economics
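As a rough sketch of what "deploying Amazon EFS" looks like programmatically, the following boto3 example (not from the talk; the subnet and security-group IDs are placeholders) creates a file system and one mount target:

```python
# Hypothetical boto3 sketch: create an EFS file system and a mount target in
# one subnet. The subnet and security group IDs are placeholders.
import boto3

efs = boto3.client("efs", region_name="eu-west-1")

fs = efs.create_file_system(
    CreationToken="demo-efs-1",          # idempotency token
    PerformanceMode="generalPurpose",
)

efs.create_mount_target(
    FileSystemId=fs["FileSystemId"],
    SubnetId="subnet-0123456789abcdef0",       # placeholder
    SecurityGroups=["sg-0123456789abcdef0"],   # must allow NFS (TCP 2049)
)
```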
Architectures for HPC and HTC Workloads on AWS | AWS Public Sector Summit 2017 | Amazon Web Services
Researchers and IT professionals using High Performance Computing (HPC) and High Throughput Computing (HTC) need large scale infrastructure in order to move their research forward. Neuroimaging employs a variety of computationally demanding techniques with which to interrogate the structure and function of the living brain. Tara Madhyastha with the University of Washington, Department of Radiology, is demonstrating these methods at scale. This session will provide reference architectures for running your workloads on AWS, enabling you to achieve scale on demand, and reduce your time to science. We will also debunk myths about HPC in the cloud and show techniques for running common on-premises workloads in the cloud. Learn More: https://aws.amazon.com/government-education/
Deep Dive: Amazon EC2 Elastic GPUs - May 2017 AWS Online Tech Talks | Amazon Web Services
Learning Objectives:
- Get an overview of Elastic GPUs
- Dive deep on the technical capabilities of Elastic GPUs
- Learn best practices when using Elastic GPUs
Amazon EC2 Elastic GPUs allow you to easily attach low-cost graphics acceleration to current generation EC2 instances. With Elastic GPUs, you choose the GPU resources that are sized for your workload, so you can accelerate the graphics performance of your applications for a fraction of the cost of stand-alone graphics instances. In this tech talk, we will provide a deep dive on the capabilities of Elastic GPUs and their use cases.
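A hedged sketch of what attaching an Elastic GPU at launch can look like with boto3; the AMI and the eg1.medium size are placeholder assumptions, not the talk's configuration:

```python
# Hedged sketch: attaching an Elastic GPU at launch with boto3. The AMI and
# Elastic GPU type (eg1.medium) are placeholders; check the service docs for
# currently supported sizes and instance families.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",     # placeholder Windows AMI
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    ElasticGpuSpecification=[{"Type": "eg1.medium"}],  # GPU sized to the workload
)
```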
Introduction to Storage on AWS - AWS Summit Cape Town 2017 | Amazon Web Services
With AWS, you can choose the right storage service for the right use case. This session shows the range of AWS choices that are available to you: Amazon S3, Amazon EBS, Amazon EFS, Amazon Glacier and Cloud Data Migration solutions.
AWS re:Invent 2016: Bring Microsoft Applications to AWS to Save Money and Sta... | Amazon Web Services
Running Microsoft workloads on AWS is easy and can save you money. This session will cover how to bring your own Microsoft licenses to AWS, and then demonstrate using PowerShell to import your Windows Server image from VMware or Hyper-V, configure Windows KMS with your license key, and launch an EC2 Dedicated Host. We will discuss ways you can use AWS Config rules to manage license compliance.
ENT313 Deploying a Disaster Recovery Site on AWS: Minimal Cost with Maximum E... | Amazon Web Services
In the event of a disaster, you need to be able to recover lost data quickly to ensure business continuity. For critical applications, keeping your time to recover and data loss to a minimum as well as optimizing your overall capital expense can be challenging. This session presents AWS features and services along with Disaster Recovery architectures that you can leverage when building highly available and disaster resilient applications. We will provide recommendations on how to improve your Disaster Recovery plan and discuss example scenarios showing how to recover from a disaster.
Day 4 - Big Data on AWS - RedShift, EMR & the Internet of Things | Amazon Web Services
Big Data is everywhere these days. But what is it and how can you use it to fuel your business? Data is as important to organizations as labour and capital, and if organizations can effectively capture, analyze, visualize and apply big data insights to their business goals, they can differentiate themselves from their competitors and outperform them in terms of operational efficiency and the bottom line.
Join this session to understand the different AWS Big Data and Analytics services such as Amazon Elastic MapReduce (Hadoop), Amazon Redshift (Data Warehouse) and Amazon Kinesis (Streaming), when to use them and how they work together.
Reasons to attend:
- Learn how AWS can help you process and make better use of your data with meaningful insights.
- Learn about Amazon Elastic MapReduce and Amazon Redshift, fully managed petabyte-scale data warehouse solutions.
- Learn about real time data processing with Amazon Kinesis.
In addition to running databases in Amazon EC2, AWS customers can choose among a variety of managed database services. These services save effort, save time, and unlock new capabilities and economies. In this session, we make it easy to understand how they differ, what they have in common, and how to choose one or more. We explain the fundamentals of Amazon DynamoDB, a fully managed NoSQL database service; Amazon RDS, a relational database service in the cloud; Amazon ElastiCache, a fast, in-memory caching service in the cloud; and Amazon Redshift, a fully managed, petabyte-scale data-warehouse solution that can be surprisingly economical. We will cover how each service might help support your application, how much each service costs, and how to get started.
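To make the DynamoDB part concrete, here is a minimal boto3 write/read sketch; the table name and key schema are assumptions, not taken from the session:

```python
# Illustrative only: the basic DynamoDB write/read pattern with boto3.
# Table name and key schema are assumptions.
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("Sessions")   # assumes a table keyed on 'session_id'

table.put_item(Item={"session_id": "DAT307", "title": "Managed Databases on AWS"})
item = table.get_item(Key={"session_id": "DAT307"})["Item"]
print(item["title"])
```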
AWS re:Invent 2016: [REPEAT] How EA Leveraged Amazon Redshift and AWS Partner... | Amazon Web Services
In November 2015, Capital Games launched a mobile game accompanying a major feature film release. The back end of the game is hosted in AWS and uses big data services like Amazon Kinesis, Amazon EC2, Amazon S3, Amazon Redshift, and AWS Data Pipeline. Capital Games will describe some of their challenges on their initial setup and usage of Amazon Redshift and Amazon EMR. They will then go over their engagement with AWS Partner 47lining and talk about specific best practices regarding solution architecture, data transformation pipelines, and system maintenance using AWS big data services. Attendees of this session should expect a candid view of the process of implementing a big data solution, from problem statement identification to visualizing data, with an in-depth look at the technical challenges and hurdles along the way.
Learn how Amazon Redshift, our fully managed, petabyte-scale data warehouse, can help you quickly and cost-effectively analyze all of your data using your existing business intelligence tools. Get an introduction to how Amazon Redshift uses massively parallel processing, scale-out architecture, and columnar direct-attached storage to minimize I/O time and maximize performance. Learn how you can gain deeper business insights and save money and time by migrating to Amazon Redshift. Take away strategies for migrating from on-premises data warehousing solutions, tuning schema and queries, and utilizing third party solutions.
AWS re:Invent 2016: Deep Learning, 3D Content Rendering, and Massively Parall... | Amazon Web Services
Accelerated computing is on the rise because of massively parallel, compute-intensive workloads such as deep learning, 3D content rendering, financial computing, and engineering simulations. In this session, we provide an overview of our accelerated computing instances, including how to choose instances based on your application needs, best practices and tips to optimize performance, and specific examples of accelerated computing in real-world applications.
Amazon Web Services provides startups with the low cost, easy to use infrastructure needed to scale and grow any size business. Attend this session and learn how to migrate your startup to AWS and make the most out of the platform.
AWS re:Invent 2016: Introduction to Managed Database Services on AWS (DAT307) | Amazon Web Services
Which database is best suited for your use case? Should you choose a relational database or NoSQL or a data warehouse for your workload? Would a managed service like Amazon RDS, Amazon DynamoDB, or Amazon Redshift work better for you, or would it be better to run your own database on Amazon EC2? FanDuel has been running its fantasy sports service on Amazon Web Services (AWS) since 2012. You will learn best practices and insights from FanDuel’s successful migrations from self-managed databases on EC2 to fully-managed database services.
AWS re:Invent 2016: JustGiving: Serverless Data Pipelines, Event-Driven ETL, ... | Amazon Web Services
Organizations need to gain insight and knowledge from a growing number of Internet of Things (IoT), application programming interfaces (API), clickstreams, unstructured and log data sources. However, organizations are also often limited by legacy data warehouses and ETL processes that were designed for transactional data. Building scalable big data pipelines with automated extract-transform-load (ETL) and machine learning processes can address these limitations. JustGiving is the world’s largest social platform for online giving. In this session, we describe how we created several scalable and loosely coupled event-driven ETL and ML pipelines as part of our in-house data science platform called RAVEN. You learn how to leverage AWS Lambda, Amazon S3, Amazon EMR, Amazon Kinesis, and other services to build serverless, event-driven, data and stream processing pipelines in your organization. We review common design patterns, lessons learned, and best practices, with a focus on serverless big data architectures with AWS Lambda.
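A minimal, hypothetical Lambda handler in the spirit of the event-driven ETL pipelines described above; the output bucket name and the transformation step are placeholders:

```python
# A minimal, hypothetical Lambda handler for a Kinesis-triggered ETL step.
# The output bucket name is a placeholder.
import base64
import json

import boto3

s3 = boto3.client("s3")
OUTPUT_BUCKET = "my-etl-output-bucket"   # placeholder

def handler(event, context):
    transformed = []
    for record in event["Records"]:
        # Kinesis payloads arrive base64-encoded.
        payload = base64.b64decode(record["kinesis"]["data"])
        doc = json.loads(payload)
        doc["processed"] = True          # stand-in for a real transformation
        transformed.append(doc)

    # Write one object per batch; a real pipeline would partition by time/key.
    s3.put_object(
        Bucket=OUTPUT_BUCKET,
        Key=f"batches/{context.aws_request_id}.json",
        Body=json.dumps(transformed).encode("utf-8"),
    )
    return {"records_processed": len(transformed)}
```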
BDA302 Deep Dive on Migrating Big Data Workloads to Amazon EMR | Amazon Web Services
Customers are migrating their analytics, data processing (ETL), and data science workloads running on Apache Hadoop, Spark, and data warehouse appliances from on-premises deployments to Amazon EMR in order to save costs, increase availability, and improve performance. Amazon EMR is a managed service that lets you process and analyze extremely large data sets using the latest versions of over 15 open-source frameworks in the Apache Hadoop and Spark ecosystems. This session will focus on identifying the components and workflows in your current environment and providing the best practices to migrate these workloads to Amazon EMR. We will explain how to move from HDFS to Amazon S3 as a durable storage layer, and how to lower costs with Amazon EC2 Spot instances and Auto Scaling. Additionally, we will go over common security recommendations and tuning tips to accelerate the time to production.
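As an illustrative sketch of a transient EMR cluster that reads and writes S3 and uses Spot capacity, the following boto3 call uses placeholder names, roles, and sizes rather than the session's actual configuration:

```python
# Hedged sketch: launching a transient EMR cluster with Spot core capacity,
# reading/writing S3 instead of long-lived HDFS. Names, ARNs and sizes are
# placeholders.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

emr.run_job_flow(
    Name="migration-poc",
    ReleaseLabel="emr-5.29.0",
    Applications=[{"Name": "Spark"}, {"Name": "Hadoop"}],
    LogUri="s3://my-emr-logs/",                 # placeholder bucket
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
    Instances={
        "KeepJobFlowAliveWhenNoSteps": False,   # transient cluster
        "InstanceGroups": [
            {"Name": "master", "InstanceRole": "MASTER",
             "InstanceType": "m5.xlarge", "InstanceCount": 1,
             "Market": "ON_DEMAND"},
            {"Name": "core", "InstanceRole": "CORE",
             "InstanceType": "m5.xlarge", "InstanceCount": 2,
             "Market": "SPOT"},                 # Spot to cut cost
        ],
    },
)
```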
Migrating Your Databases to AWS: Deep Dive on Amazon RDS and AWS Database Mig... | Amazon Web Services
Amazon RDS allows you to launch an optimally configured, secure and highly available database with just a few clicks. It provides cost-efficient and resizable capacity, automates time-consuming database administration tasks, and provides you with six familiar database engines to choose from: Amazon Aurora, Oracle, Microsoft SQL Server, PostgreSQL, MySQL and MariaDB. In this session, we will take a close look at the capabilities of Amazon RDS and explain how it works. We’ll also discuss the AWS Database Migration Service and AWS Schema Conversion Tool, which help you migrate databases and data warehouses with minimal downtime from on-premises and cloud environments to Amazon RDS and other Amazon services. Gain your freedom from expensive, proprietary databases while providing your applications with the fast performance, scalability, high availability, and compatibility they need.
AWS Speaker: Andrew Kane, Solutions Architect - Amazon Web Services
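To show those "few clicks" in API form, here is an assumed boto3 sketch of provisioning a managed database with Amazon RDS; the identifier, credentials, and sizing are placeholders:

```python
# A minimal, assumed example of the kind of call Amazon RDS wraps: one API
# request provisions a managed database. Identifier, credentials and sizing
# are placeholders.
import boto3

rds = boto3.client("rds", region_name="eu-west-1")

rds.create_db_instance(
    DBInstanceIdentifier="app-db-1",
    Engine="postgres",                       # one of the six supported engines
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=100,                    # GiB
    MasterUsername="appadmin",
    MasterUserPassword="change-me-please",   # use Secrets Manager in practice
    MultiAZ=True,                            # managed high availability
    BackupRetentionPeriod=7,                 # automated backups
)
```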
Want to get ramped up on how to use Amazon's big data web services and launch your first big data application on AWS? Join us on our journey as we discuss reference architecture, design patterns, and best practices for assembling technologies to meet your big data challenges. We will also build a big data application in real-time using Amazon EMR, Amazon Redshift, Amazon Kinesis, Amazon DynamoDB, and Amazon S3.
SRV403 Deep Dive on Object Storage: Amazon S3 and Amazon Glacier | Amazon Web Services
In this session, storage experts will walk you through Amazon S3 and Amazon Glacier, bulk data repositories that can deliver 99.999999999% durability and scale past trillions of objects worldwide – with cost points competitive against tape archives. Learn about the different ways you can accelerate data transfer into S3 and get a close look at new tools to secure and manage your data more efficiently. Hear about Amazon Glacier and new capabilities to get access to your data faster with expedited retrievals. Learn how AWS customers have built solutions that turn their data from a cost into a strategic asset, and bring your toughest questions straight to our experts.
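A hedged example of the expedited retrievals mentioned above: restoring an object archived to Glacier through the S3 API, with placeholder bucket and key names:

```python
# Hedged sketch: restoring an object archived to the Glacier storage class
# with an expedited retrieval. Bucket and key are placeholders.
import boto3

s3 = boto3.client("s3")

s3.restore_object(
    Bucket="my-archive-bucket",          # placeholder
    Key="backups/2016/ledger.tar.gz",    # placeholder
    RestoreRequest={
        "Days": 2,                       # how long the restored copy stays available
        "GlacierJobParameters": {"Tier": "Expedited"},
    },
)
```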
AWS re:Invent 2016: Best Practices for Data Warehousing with Amazon Redshift ... | Amazon Web Services
Analyzing big data quickly and efficiently requires a data warehouse optimized to handle and scale for large datasets. Amazon Redshift is a fast, petabyte-scale data warehouse that makes it simple and cost-effective to analyze all of your data for a fraction of the cost of traditional data warehouses. In this session, we take an in-depth look at data warehousing with Amazon Redshift for big data analytics. We cover best practices to take advantage of Amazon Redshift's columnar technology and parallel processing capabilities to deliver high throughput and query performance. We also discuss how to design optimal schemas, load data efficiently, and use work load management.
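As an illustrative sketch of these schema and loading ideas (not the session's schema), the following Python example assumes the psycopg2 driver and uses placeholder connection details, IAM role ARN, and S3 path:

```python
# Illustrative sketch: a distribution key and sort key on a fact table, plus a
# parallel COPY from S3. Connection details, the IAM role ARN and the S3 path
# are placeholders; assumes the psycopg2 driver.
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
    port=5439, dbname="analytics", user="admin", password="...")
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS page_views (
        user_id   BIGINT,
        url       VARCHAR(2048),
        viewed_at TIMESTAMP
    )
    DISTKEY (user_id)          -- co-locate rows joined on user_id
    SORTKEY (viewed_at);       -- prune blocks on time-range filters
""")

cur.execute("""
    COPY page_views
    FROM 's3://my-data-lake/page_views/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
    FORMAT AS PARQUET;
""")
conn.commit()
```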
Building HPC Clusters as Code in the (Almost) Infinite Cloud | AWS Public Sec... | Amazon Web Services
Every day, the computing power of high-performance computing (HPC) clusters helps scientists make breakthroughs, such as proving the existence of gravitational waves and screening new compounds for new drugs. Yet building HPC clusters is out of reach for most organizations, due to the upfront hardware costs and ongoing operational expenses. Now the speed of innovation is only bound by your imagination, not your budget. Researchers can run one cluster for 10,000 hours or 10,000 clusters for one hour anytime, from anywhere, and both cost the same in the cloud. And with the availability of Public Data Sets in Amazon S3, petabyte scale data is instantly accessible in the cloud. Attend and learn how to build HPC clusters on the fly, leverage Amazon’s Spot market pricing to minimize the cost of HPC jobs, and scale HPC jobs on a small budget, using all the same tools you use today, and a few new ones too.
AWS re:Invent 2016: FINRA in the Cloud: the Big Data Enterprise (ENT313) | Amazon Web Services
Large-scale enterprise migration can be a complex undertaking, especially for organizations that re-architect solutions to leverage the benefits of the Cloud. FINRA, which regulates US equities and options markets, recently completed a 2.5-year migration and re-architecture of its Big Data platform. Their platform consumes billions of market events every day. FINRA has developed scalable platforms and services on AWS that enable migrating enterprise applications and business functions to the Cloud quickly. Their data management platform takes advantage of AWS storage and compute products. In this session, IT influencers and decision makers will learn lessons from FINRA’s migration, including how to create an enterprise-class Cloud architecture and which technology skills are required for transitioning to the Cloud. We also share examples of the business value FINRA has realized.
This is a 1-hour presentation on Neural Networks, Deep Learning, Computer Vision, Recurrent Neural Networks, and Reinforcement Learning. The later talks include links on how to run Neural Networks on
Deep Learning Frameworks 2019 | Which Deep Learning Framework To Use | Deep L... | Simplilearn
This Deep Learning presentation covers all the essential Deep Learning frameworks needed to build AI models. In this presentation, you will learn about the development of essential frameworks such as TensorFlow, Keras, PyTorch, Theano, etc. You will also understand the programming languages used to build the frameworks, the different companies that use these frameworks, the characteristics of these Deep Learning frameworks, and the types of models that were built using these frameworks. Now, let us get started with understanding the different popular Deep Learning frameworks being used in industry.
Below are the different Deep Learning frameworks we'll be discussing in this presentation:
1. TensorFlow
2. Keras
3. PyTorch
4. Theano
5. Deep Learning 4 Java
6. Caffe
7. Chainer
8. Microsoft CNTK
Why Deep Learning?
TensorFlow is one of the most popular software platforms used for deep learning and contains powerful tools to help you build and implement artificial neural networks.
Advancements in deep learning are being seen in smartphone applications, creating efficiencies in the power grid, driving advancements in healthcare, improving agricultural yields, and helping us find solutions to climate change. With this Tensorflow course, you’ll build expertise in deep learning models, learn to operate TensorFlow to manage neural networks and interpret the results.
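For context, here is a minimal, self-contained Keras example of the kind of network such a course builds; the toy data, architecture, and hyperparameters are arbitrary choices, not course material:

```python
# A minimal, self-contained Keras example; architecture and hyperparameters
# here are arbitrary choices for illustration only.
import numpy as np
import tensorflow as tf

# Toy data: 1,000 samples of 20 features, binary labels.
x = np.random.rand(1000, 20).astype("float32")
y = (x.sum(axis=1) > 10).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(x, y, verbose=0))   # [loss, accuracy]
```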
And according to payscale.com, the median salary for engineers with deep learning skills tops $120,000 per year.
You can gain in-depth knowledge of Deep Learning by taking our Deep Learning certification training course. With Simplilearn’s Deep Learning course, you will prepare for a career as a Deep Learning engineer as you master concepts and techniques including supervised and unsupervised learning, mathematical and heuristic aspects, and hands-on modeling to develop algorithms. Those who complete the course will be able to:
1. Understand the concepts of TensorFlow, its main functions, operations, and the execution pipeline
2. Implement deep learning algorithms, understand neural networks and traverse the layers of data abstraction which will empower you to understand data like never before
3. Master and comprehend advanced topics such as convolutional neural networks, recurrent neural networks, training deep networks and high-level interfaces
4. Build deep learning models in TensorFlow and interpret the results
5. Understand the language and fundamental concepts of artificial neural networks
6. Troubleshoot and improve deep learning models
7. Build your own deep learning project
8. Differentiate between machine learning, deep learning, and artificial intelligence
Learn more at https://www.simplilearn.com/deep-learning-course-with-tensorflow-training
Sanger, upcoming Openstack for Bio-informaticians | Peter Clapham
Delivery of a new Bio-informatics infrastructure at the Wellcome Trust Sanger Center. We cover how to programmatically create, manage, and provide provenance for images used both at Sanger and elsewhere, using open-source tools and continuous integration.
Big Data Analytics (ML, DL, AI) hands-on | Dony Riyanto
These are supplementary slides to the Big Data Analytics introduction material (in the next file), which invite us to get hands-on with several topics related to Machine/Deep Learning, Big Data (batch/streaming), and AI using TensorFlow.
Deep Learning Frameworks Using Spark on YARN by Vartika Singh | Data Con LA
Abstract: Traditional machine learning and feature engineering algorithms are not efficient enough to extract the complex, nonlinear patterns that are hallmarks of big data. Deep learning, on the other hand, helps translate the scale and complexity of the data into solutions like molecular interaction in drug design, the search for subatomic particles and automatic parsing of microscopic images. Co-locating a data processing pipeline with a deep learning framework makes data exploration/algorithm and model evolution much simpler, while streamlining data governance and lineage tracking into a more focused effort. In this talk, we will discuss and compare the different deep learning frameworks on Spark in a distributed mode, ease of integration with the Hadoop ecosystem, and relative comparisons in terms of feature parity.
Cloud Computing Was Built for Web Developers—What Does v2 Look Like for Deep... | Databricks
What we call the public cloud was developed primarily to manage and deploy web servers. The target audience for these products is DevOps. While this is a massive and exciting market, the world of Data Science and Deep Learning is very different, and possibly even bigger. Unfortunately, the tools available today are not designed for this new audience, and the cloud needs to evolve. This talk covers what the next 10 years of cloud computing will look like.
The PPT contains the following content:
1. What is Google Cloud Study Jam
2. What is Cloud Computing
3. Fundamentals of cloud computing
4. What is Generative AI
5. Fundamentals of Generative AI
6. Brief overview of Google Cloud Study Jam.
7. Networking Session.
Talk given at the first OmniSci user conference, where I discuss cooperating with open-source communities to ensure you get useful answers quickly from your data. I also get a chance to introduce OpenTeams in this talk and discuss how it can help companies cooperate with communities.
Big Data in the Cloud: How the RISElab Enables Computers to Make Intelligent ... | Amazon Web Services
Scientists, developers, and other technologists from many different industries are taking advantage of Amazon Web Services to perform big data workloads from analytics to using data lakes for better decision making to meet the challenges of the increasing volume, variety, and velocity of digital information. This session will feature UCB's RISELab (Real time Intelligent Secure Execution), a new lab recently created at UCB to enable computers to make intelligent, real-time decisions. You will hear how they are building on their earlier success with AMPLab to enable applications to interact intelligently and securely with their environment in real time, wherever computing decisions need to interact with the world. From cybersecurity to coordinating fleets of self-driving cars and drones to earthquake warning systems, you will come away with insight on how they are using AWS to develop and experiment with the systems for important research. Learn More: https://aws.amazon.com/government-education/
Using Crowdsourced Images to Create Image Recognition Models with Analytics Z... | Maurice Nsabimana
Volunteers around the world increasingly act as human sensors to collect millions of data points. A team from the World Bank trained deep learning models, using Apache Spark and BigDL, to confirm that photos gathered through a crowdsourced data collection pilot matched the goods for which observations were submitted.
In this talk, Maurice Nsabimana, a statistician at the World Bank, and Jiao Wang, a software engineer on the Big Data Technology team at Intel, demonstrate a collaborative project to design and train large-scale deep learning models using crowdsourced images from around the world. BigDL is a distributed deep learning library designed from the ground up to run natively on Apache Spark. It enables data engineers and scientists to write deep learning applications in Scala or Python as standard Spark programs, without having to explicitly manage distributed computations. Attendees of this session will learn how to get started with BigDL, which runs in any Apache Spark environment, whether on-premises or in the cloud.
HPC and cloud distributed computing, as a journey | Peter Clapham
Introducing an internal cloud brings new paradigms, tools, and infrastructure management. When placed alongside traditional HPC, the new opportunities are significant. But getting to the new world with microservices, autoscaling, and autodialing is a journey that cannot be achieved in a single step.
Similar to AWS re:Invent 2016: Bringing Deep Learning to the Cloud with Amazon EC2 (CMP314)
How to build Forecasting services leveraging ML and deep learn... algorithms | Amazon Web Services
Forecasting is an important process for a great many companies and is used in many areas to accurately predict product growth and distribution, the resources needed on production lines, financial reporting, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session, we will show how to pre-process data that contains a temporal component and then use an algorithm that, based on the type of data analyzed, produces an accurate forecast.
Big Data for Startups: how to create Big Data applications in Server... mode | Amazon Web Services
The variety and volume of data created every day is growing faster and faster and represents a unique opportunity to innovate and create new startups.
However, managing large amounts of data can seem complex: building large-scale Big Data clusters appears to be an investment accessible only to established companies. But the elasticity of the Cloud and, in particular, Serverless services allow us to break through these limits.
Let's see, then, how you can develop Big Data applications quickly, without worrying about infrastructure, dedicating all your resources to developing your ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session, we will present the main features of the service and how to deploy your application in a few steps.
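A hedged boto3 sketch of the Fargate piece: creating a Fargate profile so that pods in one namespace run on AWS Fargate. The cluster name, role ARN, and subnet IDs are placeholders, and the cluster itself is assumed to already exist:

```python
# Hedged boto3 sketch: create a Fargate profile so pods in a given namespace
# run on AWS Fargate. Cluster name, role ARN and subnet IDs are placeholders.
import boto3

eks = boto3.client("eks", region_name="eu-west-1")

eks.create_fargate_profile(
    fargateProfileName="default-namespace",
    clusterName="demo-cluster",                                # placeholder
    podExecutionRoleArn="arn:aws:iam::123456789012:role/EKSFargatePodRole",
    subnets=["subnet-0123456789abcdef0"],                      # private subnets
    selectors=[{"namespace": "default"}],   # pods matched here run on Fargate
)
```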
Twenty years ago, Amazon went through a radical transformation with the goal of increasing the pace of innovation. In that time, we learned how changing our approach to application development allowed us to dramatically increase agility and release velocity and, ultimately, to build more reliable and scalable applications. In this session, we will explain how we define modern applications and how building modern apps affects not only the application architecture but also the organizational structure, development release pipelines, and even the operating model. We will also describe common approaches to modernization, including the approach used by Amazon.com itself.
How to spend up to 90% less with containers and Spot instances | Amazon Web Services
Container adoption keeps growing.
When properly designed, container-based applications are very often stateless and flexible.
AWS ECS, EKS, and Kubernetes on EC2 can take advantage of Spot instances, delivering average savings of 70% compared to On-Demand instances. In this session, we will look at the characteristics of Spot instances and how they can easily be used on AWS. We will also learn how Spreaker uses Spot instances to run different kinds of applications in production at a fraction of the on-demand cost!
In recent months, many customers have been asking us how to monetise Open APIs, simplify Fintech integrations, and accelerate adoption of various Open Banking business models. Therefore, AWS and FinConecta would like to invite you to the Open Finance marketplace presentation on October 20th.
Event Agenda:
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Make your startup's market offering unique with Machine Lea... services | Amazon Web Services
To create value and build a differentiated, recognizable offering, successful startups know how to combine established technologies with innovative, purpose-built components.
AWS provides ready-to-use services and, at the same time, lets you customize and build the differentiating elements of your own offering.
Focusing on Machine Learning technologies, we will see how to select the artificial intelligence services offered by AWS and, also through a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: automate the management and deployment of... | Amazon Web Services
With the traditional approach to IT, implementing DevOps techniques was difficult for many years; they often involved manual activities that occasionally led to application downtime and interrupted users' work. With the advent of the cloud, DevOps techniques are now within everyone's reach, at low cost and for any kind of workload, ensuring greater system reliability and resulting in significant improvements to business continuity.
AWS provides AWS OpsWorks as a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances by means of Chef and Puppet workloads.
Learn how to use AWS OpsWorks to ensure the reliability of your application running on EC2 instances.
Microsoft Active Directory on AWS to support your Windows Workloads | Amazon Web Services
Want to know your options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support Group Policy management, authentication, and authorization. In this session, we will discuss the options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and deploying Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis based on artificial intelligence techniques is evolving and being refined at a rapid pace. In this webinar, we will explore what AWS services make possible for applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are hosting a free virtual event on Wednesday, October 14 from 12:00 to 13:00 dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in cloud environments based on VMware vSphere® and access a broad range of AWS services, taking full advantage of the AWS cloud while protecting your existing VMware investments.
Build your first serverless ledger-based app with QLDB and NodeJSAmazon Web Services
Many companies today build applications with ledger-like functionality, for example to verify the history of credits and debits in banking transactions or to track the supply-chain flow of their products.
At the core of these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable transaction log, but they are complex and costly tools to operate.
Amazon QLDB removes the need to build complex custom systems by providing a fully managed, serverless ledger database.
In this session we will see how to build a complete serverless application that uses QLDB's features.
With the rise of microservice architectures and rich mobile and web applications, APIs matter more than ever for delivering an outstanding user experience. In this session we will learn how to tackle modern API design challenges with GraphQL, an open-source API query language used by Facebook, Amazon, and others, and how to use AWS AppSync, a managed serverless GraphQL service on AWS. We will dive into several scenarios and see how AppSync can address these use cases by building modern APIs with real-time and offline data-update capabilities.
We will also learn how Sky Italia uses AWS AppSync to deliver real-time sports updates to users of its web portal.
Oracle Database and VMware Cloud™ on AWS: debunking the mythsAmazon Web Services
Many organizations take advantage of the cloud by migrating their Oracle workloads, securing significant gains in agility and cost efficiency.
Migrating these workloads can introduce complexity while modernizing and refactoring applications, along with performance risks that may arise when moving applications out of on-premises data centers.
In these slides, AWS and VMware experts present simple, practical tips to ease and streamline the migration of Oracle workloads and accelerate the transformation to the cloud; they dive into the architecture and show how to take full advantage of VMware Cloud™ on AWS.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable container management service that simplifies running Docker containers through an orchestration layer that controls deployment and lifecycle. In this session we will present the main features of the service, reference architectures for different workloads, and the simple steps needed to quickly migrate one or more of your containers.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology pushes into IT, I found myself wondering, as an "infrastructure container kubernetes guy", how does this fancy AI technology get managed from an infrastructure operations view? Is it possible to apply our lovely cloud native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and take you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure and make it work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and what could be beneficial or limiting for your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I already have working in practice.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
2. AWS Pace of Innovation
[Chart: number of new AWS features and services launched per year – 48 in 2009, 82 in 2011, 280 in 2013, and 722 in 2015.]
AWS has been continually expanding its services to support virtually any cloud workload and now has more than 70 services that range from compute, storage, networking, database, analytics, application services, deployment, management and mobile. AWS has launched a total of 706 new features and/or services year to date* - for a total of 2,601 new features and/or services since inception in 2006.
* As of 1 October 2016
4. Machine learning
Machine learning is the technology that automatically finds patterns in your data and uses them to make predictions for new data points as they become available.
Your data + machine learning = smart applications
5. New P2 GPU Instance Types
• New EC2 GPU instance type for accelerated computing
• Offers up to 16 NVIDIA K80 GPUs (8 K80 cards) in a single instance
• The 16xlarge size provides:
• A combined 192 GB of GPU memory, 40 thousand CUDA cores
• 70 teraflops of single precision floating point performance
• Over 23 teraflops of double precision floating point performance
• Example workloads include: deep learning, computational fluid dynamics, computational finance, seismic analysis, molecular modeling, genomics, VR content rendering, accelerated databases
6. P2 Instance Types
Three P2 instance sizes:
Instance Size | GPUs | GPU Peer-to-Peer | vCPUs | Memory (GiB) | Network Bandwidth*
p2.xlarge     | 1    | -                | 4     | 61           | 1.25 Gbps
p2.8xlarge    | 8    | Y                | 32    | 488          | 10 Gbps
p2.16xlarge   | 16   | Y                | 64    | 732          | 20 Gbps
*In a placement group
7. P2 Instance Summary
High-performance GPU instance types, with many innovations
• Based on NVIDIA K80 GPUs, with up to 16 GPUs in a single instance
• With dedicated peer-to-peer connections supporting GPUDirect
• Intel Broadwell processors with up to 64 vCPUs and up to 732GiB RAM
• 20Gbps network on EC2, using Elastic Network Adaptor (ENA)
• Supporting a wide variety of ISV applications and open-source frameworks
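For readers who want to try these instances, here is a minimal sketch of launching a p2.xlarge with boto3; the AMI ID, key pair, and region below are placeholders, not values from the talk.
import boto3

# Launch a single p2.xlarge instance; ImageId and KeyName are placeholders
# (e.g. an AWS Deep Learning AMI and your own key pair).
ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",
    InstanceType="p2.xlarge",
    KeyName="my-key-pair",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])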
9. Diego Oppenheimer – CEO, Algorithmia
Product developer, entrepreneur, extensive background in all things data.
Microsoft: PowerPivot, PowerBI, Excel, and SQL Server
Founder of algorithmic trading startup
BS/MS Carnegie Mellon University
11. A marketplace for algorithms...
We host algorithms
Anyone can turn their algorithms into scalable web services
Typical users: scientists, academics, domain experts
We make them discoverable
Anyone can use and integrate these algorithms into their solutions
Typical users: businesses, data scientists, app developers, IoT makers
We make them monetizable
Users of algorithms pay for algorithms they use
Typical scenarios: heavy-load use cases with large user base
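To make the "algorithms as scalable web services" model above concrete, here is a minimal sketch of calling a hosted algorithm with the Algorithmia Python client; the API key and algorithm path are placeholders, not a specific marketplace listing.
import Algorithmia

# API key and algorithm path are placeholders.
client = Algorithmia.client("YOUR_API_KEY")
algo = client.algo("nlp/Summarizer/0.1.3")

# Pipe input to the hosted algorithm and read the result.
response = algo.pipe("Long article text to summarize ...")
print(response.result)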
12. Sample algorithms (2600+ and growing daily)
● Text analysis: summarizer, sentence tagger, profanity detection
● Machine learning: digit recognizer, recommendation engines
● Web: crawler, scraper, pagerank, emailer, html to text
● Computer vision: image similarity, face detection, smile detection
● Audio & video: speech recognition, sound filters, file conversions
● Computation: linear regression, spike detection, Fourier filter
● Graph: traveling salesman, maze generator, theta star
● Utilities: parallel for-each, geographic distance, email validator
14. Scale?
• Support the workloads of 32,000 developers
• Mixed use CPU/GPU
• Spiky traffic
• Heterogeneous hardware
15. Hosting Deep Learning
Cloud hosting of deep learning models can be especially challenging due to complex hardware and software dependencies.
At Algorithmia we had to:
• Learn how to host and scale the 5 most common deep learning frameworks (more coming)
• Deal with scaling and spiky traffic on GPUs
• Deal with multitenancy on GPUs
• Build an extensible and dynamic architecture to support deep learning
16. First: What is Deep Learning?
• Deep learning uses artificial neurons similar to the brain to represent high-dimensional data
• The mammal brain is organized in a deep architecture; e.g., the visual system has 5 to 10 levels. [1]
• Deep learning excels in tasks where the basic unit (a single pixel, frequency or word) has very little meaning in and of itself, but contains high-level structure. Deep nets have been effective at learning this structure without human intervention.
[1] Serre, Kreiman, Kouh, Cadieu, Knoblich, & Poggio,
17. What is Deep Learning Being Applied To?
Primarily: huge growth in unstructured data
• Pictures
• Videos
• Audio
• Speech
• Websites
• Emails
• Reviews
• Log files
• Social media
18. Today’s Use Cases
• Computer vision
• Image classification
• Object detection
• Face recognition
• Natural language
• Speech to text
• Chatbots
• Q&A systems (Siri, Alexa, Google Now)
• Machine translation
• Optimization
• Anomaly detection
• Recommender systems
19. Why Now?
…and why is deep learning suddenly everywhere?
Advances in research
• LeCun, Gradient-Based Learning Applied to Document Recognition, 1998
• Hinton, A Fast Learning Algorithm for Deep Belief Nets, 2006
• Bengio, Learning Deep Architectures for AI, 2009
Advances in hardware
• GPUs: 10x performance, 5x energy efficiency
http://www.nvidia.com/content/events/geoInt2015/LBrown_DL_Image_ClassificationGEOINT.pdf
20. Deep Learning Hardware (2016)
GPUs: NVIDIA is dominating
One of the first GPU neural nets ran on an NVIDIA GTX 280, with up to 9 layers (Ciresan and Schmidhuber, 2010)
• NVIDIA chips tend to outperform AMD
• More importantly, all the major frameworks use CUDA as a first-class citizen. Poor support for AMD's OpenCL.
21. Deep Learning Hardware
GPU:
• Becoming more tailored for deep learning (e.g., Pascal chipset)
Custom hardware:
• FPGA (AWS F1, MSFT Project Catapult)
ASIC:
• Google TPU
• IBM TrueNorth
• Nervana Engine
• Graphcore IPUs
22. GPU Deep Learning Dependencies
Meta deep learning framework
Deep learning framework
cuDNN
CUDA
NVIDIA driver
GPU
24. Theano
Created by Université de Montréal
Theano pioneered the trend of using a symbolic graph for programming a network.
Very mature framework, good support for many kinds of networks.
Pros:
• Use Python + Numpy
• Declarative computational graph
• Good support for RNNs
• Wrapper frameworks make it more accessible (Keras, Lasagne, Blocks)
• BSD License
Cons:
• Low level framework
• Error messages can be unhelpful
• Large models can have long compile times
• Weak support for pre-trained models
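For context, a minimal sketch of Theano's declarative style: declare symbolic variables, build a computational graph, and compile it into a callable function.
import numpy as np
import theano
import theano.tensor as T

# Declare symbolic matrices and build a symbolic graph.
x = T.dmatrix("x")
y = T.dmatrix("y")
z = T.dot(x, y) + 1.0

# Compile the graph into a callable function, then evaluate it.
f = theano.function([x, y], z)
print(f(np.eye(2), np.ones((2, 2))))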
25. Torch
Created by a collaboration of various researchers. Used by DeepMind (prior to Google). Torch is a general scientific computing framework for Lua.
Torch is more flexible than TensorFlow and Theano in that it's imperative while TF/Theano are declarative. That makes some operations (e.g., beam search) much easier to do.
Pros:
• Very flexible multidimensional array engine
• Multiple back ends (CUDA and OpenMP)
• Lots of pre-trained models available
Cons:
• Lua
• Not good for recurrent networks
• Lack of commercial support
26. Caffe
Created by the Berkeley Vision and Learning Center and community contributors.
Probably the most used framework today, certainly for CV.
Pros:
• Optimized for feedforward networks, convolutional nets and image processing.
• Simple Python API
• BSD License
Cons:
• C++/CUDA for new GPU layers
• Limited support for recurrent networks (recently added)
• Cumbersome for big networks (GoogLeNet, ResNet)
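A minimal sketch of Caffe's Python API for running a pretrained feedforward network; the file names and blob names below are placeholders and depend on the model definition.
import numpy as np
import caffe

# Load a pretrained network in inference mode; paths are placeholders.
net = caffe.Net("deploy.prototxt", "weights.caffemodel", caffe.TEST)

# Feed a dummy image batch (shape must match the model's data layer).
net.blobs["data"].data[...] = np.random.rand(1, 3, 227, 227)
out = net.forward()
print(out["prob"].argmax())  # index of the most likely class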
27. TensorFlow
Created by Google.
TensorFlow is written with a Python API over a C/C++ engine. TensorFlow generates a computational graph (e.g., series of matrix operations) and performs automatic differentiation.
Pros:
• Uses Python + Numpy
• Lots of interest from community
• Highly parallel, and designed to use various back ends (software, gpu, asic)
• Apache License
Cons:
• Slower than other frameworks [1]
• More features, more abstractions than Torch
• Not many pre-trained models yet
[1] https://arxiv.org/pdf/1511.06435v3.pdf
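A minimal sketch of the TensorFlow 1.x-era graph API described above: build a graph of matrix operations, let TensorFlow differentiate it, then run it in a session.
import tensorflow as tf  # TensorFlow 1.x-era graph API

# Build a symbolic computational graph.
a = tf.placeholder(tf.float32, shape=(None, 3))
w = tf.Variable(tf.random_normal([3, 1]))
loss = tf.reduce_mean(tf.square(tf.matmul(a, w)))
grads = tf.gradients(loss, [w])  # automatic differentiation

# Execute the graph in a session.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run([loss, grads], feed_dict={a: [[1.0, 2.0, 3.0]]}))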
28. Networks for Training
Where to get networks:
• If you’re just interested in using deep learning to classify images, you can usually find off-the-shelf networks.
• VGG, GoogLeNet, AlexNet, SqueezeNet
• Caffe Model Zoo
29. Training vs. Running
Deep learning generally consists of two phases: training and running.
Training deep learning models is challenging, with many solutions available today.
Running deep learning models (at scale) is the next step, and has its own challenges.
30. Hosting Deep Learning
Making deep learning models available as an API represents a unique set of challenges that are rarely, if ever, addressed in tutorials.
31. Why ML in the Cloud?
• Need to react to live user data
• Don’t want to manage own servers
• Need enough servers to sustain max load; you can save money using cloud services
• Limited compute capacity on mobile
33. Capacity required at peak
[Chart: capacity required at peak vs. potential for ghost/wasted capacity over time]
Without elastic compute on GPUs our cost would be ~75% more.
36. Why P2s?
• More video memory
• 12 GB per GPU
• Modern CUDA support
• More CUDA cores to run in parallel
• New messages
• In particular, we had that problem with CUDA 3.0 not allowing us to share memory as efficiently
• Price per flop
37. Customer Showcase:
• CSDisco offers cutting-edge eDiscovery technology for attorneys: "DISCO ML".
• DISCO ML is a deep learning based exploration tool that asynchronously re-learns as attorneys move through their normal discovery workflows.
• "The proprietary, multi-layer artificial network uses deep learning and arrays of GPUs to unpack and process learned information quickly. Combining Google’s advanced Word2Vec embeddings with powerful convolutional and recurrent neural networks for text..."
38. Customer Showcase:
Why did they choose to host their ML on Algorithmia?
• Scalability: Required a scalable GPU-based compute fabric for their neural net based ML approach to onboard hundreds of new customers without taxing their engineering department.
• Flexibility: Expected high peaks of usage during certain hours.
• Reduced ghost compute: excess capacity = unnecessary cost.
• Ability to chain algorithms: The process is a series of operations (scoring, training, validation, querying); each is its own model hosted on Algorithmia. Algorithmia provides an easy way to pipe algorithms into each other.
40. Challenge #1: New Hardware, Latest CUDA
AWS: G2 (GRID K520, 2013) and P2 instances
Azure: N-Series
Google: Just announced preview
SoftLayer: various cards including Tesla K80 and M60.
Small providers: Nimbix, Cirrascale, Penguin
41. Challenge #2: Language Bindings
You probably already have an existing stack in some programming language. How does it talk to your deep learning framework?
Hope you like Python (or Lua).
Solution: Services!
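A minimal sketch of the "services" idea: wrap the model behind an HTTP endpoint so any language stack can call it. The Flask app and toy predict function below are illustrative, not Algorithmia's actual implementation.
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict(text):
    # Stand-in for a real deep learning model loaded once at startup.
    return {"label": "positive" if "good" in text else "negative"}

@app.route("/classify", methods=["POST"])
def classify():
    payload = request.get_json(force=True)
    return jsonify(predict(payload["text"]))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
Any client, written in any language, can then POST JSON to /classify over HTTP without linking against the deep learning framework itself.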
42. Challenge #3: Large Models
Deep learning models are getting larger
• State of the art networks are easily multi-gigabyte
• Need to be loaded and scaled
Solutions:
• More hardware
• Smaller models
44. SqueezeNet
Hypothesis: the networks we’re using today are much larger and more complicated than they need to be.
Enter SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and < 0.5 MB model size.
Not quite state of the art, but close, and MUCH easier to host.
[1] Iandola, Han, Moskewicz, Ashraf, Dally, Keutz
45. Model Compression
Recent efforts aimed at pruning the size of networks:
“Reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy.” (Han, Mao, Dally, 2015)
46. Challenge #4: GPU Sharing
GPUs were not designed to be shared like CPUs
• Limited amount of video memory
• Even with multi-context management, memory overflows and unrestricted pointer logic are very dangerous for other applications
• Developers need a way to share GPU resources safely, isolated from potentially malicious applications.
47. Challenge #5: GPU Sharing - Containers
Docker – new standard in deploying applications, but adds an additional layer of challenges to GPU computing.
• NVIDIA drivers must match inside and outside containers.
• CUDA drivers must match inside and outside containers.
• Some algorithms require X Windows, which must be started outside the container and mounted inside
• Nvidia-docker container is helpful but not a complete solution.
• New AWS Deep Learning AMI -> Huge step in right direction.
48. Lessons Learned
• Deep learning in the cloud is still in its infancy
• Hosting deep learning models is the next logical step after training them, but the difficulty is underappreciated.
• Tooling and frameworks are making things easier, but there is a lot of opportunity for improvement
Big picture: the challenges involved in creating DL models are only half the problem. Deploying them is an entirely different skillset.