Increasingly, mobile and other connected devices are leveraging the scalability and capabilities of the cloud to deliver services to end users. However, connecting these devices to the cloud presents unique challenges. Resource constraints make it impossible to use many common frameworks, and transport restrictions make it difficult to use dynamic cloud resources. In this session, learn how you can develop and deploy highly scalable global solutions using Amazon Web Services (Amazon Virtual Private Cloud, Elastic IP addresses, Amazon Route 53, Auto Scaling) and tools like Puppet. Hear how Panasonic and Banjo architect their cloud infrastructure from both a start-up and an enterprise perspective.
Erlang Factory SF 2011, "Erlang and the big switch in social games" - Paolo Negri
Talk given at Erlang Factory 2011 about using Erlang to build social-game backends.
Watch the video of this presentation http://vimeo.com/22144057#at=0
Slides from the Erlang Factory London 2011 talk "Designing for performance with Erlang".
Video of this presentation available at http://vimeo.com/26715793#at=0
These are the slides from my talk about the AppScale project at the SBonRails meetup. The talk covers AppScale as well as Google App Engine and the research projects that have come out of it, including Neptune, a Ruby DSL focused on computation-heavy workloads.
(ISM301) Engineering Netflix Global Operations In The Cloud - Amazon Web Services
Operating a massively scalable, constantly changing, distributed global service is a daunting task. We innovate at breakneck speed to attract new customers and stay ahead of the competition. This means more features, more experiments, more deployments, more engineers making changes in production environments, and ever-increasing complexity. Simultaneously improving service availability and accelerating rate of change seems impossible on the surface. At Netflix, operations engineering is both a technical and organizational construct designed to accomplish just that by integrating disciplines like continuous delivery, fault injection, regional traffic management, crisis response, best practice automation, and real-time analytics. In this talk, designed for technical leaders seeking a path to operational excellence, we'll explore these disciplines in depth and how they integrate and create competitive advantages.
At Yahoo!, over the past year we have helped migrate hundreds of our grids' users to YARN. Our YARN clusters have in aggregate run over 18 million jobs with more than 3 billion tasks, consuming over 10 thousand years of compute time, with a single cluster running 90 thousand jobs a day. From this experience we would like to share what we have learned about running YARN well, how this is different from running a 1.0-based cluster, and what it takes to migrate your jobs from 1.0 to YARN.
Evolution of Netflix's cloud security strategy. Includes cloud-based key management and hybrid security controls that span traditional datacenter and public cloud.
More Nines for Your Dimes: Improving Availability and Lowering Costs using Au... - Amazon Web Services
Running your Amazon EC2 instances in Auto Scaling groups allows you to improve your application's availability right out of the box. Auto Scaling replaces impaired or unhealthy instances automatically to maintain your desired number of instances (even if that number is one). You can also use Auto Scaling to automate the provisioning of new instances and software configurations, as well as to track usage and costs by app, project, or cost center. Of course, you can also use Auto Scaling to adjust capacity as needed - on demand, on a schedule, or dynamically based on demand. In this session, we show you a few of the tools you can use to enable Auto Scaling for the applications you run on Amazon EC2.
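As a hedged illustration of the dynamic-scaling case described above, the sketch below mimics the arithmetic a target-tracking-style policy performs: grow or shrink the group so that per-instance load approaches a target. This is a simplified model, not the actual Auto Scaling implementation; the 50% CPU target and group bounds are made-up values for the example.

```python
import math

def desired_capacity(current, metric_value, target_value, min_size=1, max_size=10):
    """Proportional scaling rule: new_capacity = ceil(current * metric / target),
    clamped to the group's min/max bounds."""
    if current == 0:
        return min_size
    desired = math.ceil(current * metric_value / target_value)
    return max(min_size, min(max_size, desired))

# A fleet of 4 instances averaging 80% CPU against a 50% target scales out to 7.
print(desired_capacity(4, 80.0, 50.0))  # 7
# The same fleet at 20% CPU scales in to 2.
print(desired_capacity(4, 20.0, 50.0))  # 2
```

The clamp is what keeps the "even if that number is one" guarantee: the group never drops below its configured minimum, regardless of how idle the metric looks.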
The latest version of the Netflix Cloud Architecture story was given at Gluecon, May 23rd 2012. Gluecon rocks, and lots of Van Halen references were added for the occasion. The tradeoff between a developer-driven, high-functionality AWS-based PaaS and an operations-driven, low-cost, portable PaaS is discussed. The three sections cover the developer view, the operator view, and the builder view.
(ENT209) Netflix Cloud Migration, DevOps and Distributed Systems | AWS re:Inv... - Amazon Web Services
Netflix's migration to the cloud as our primary streaming control plane was paralleled by our move from traditional IT and centralized operations to a more decentralized DevOps organizational model. In this session, we explore the relationship between technical infrastructure and organization and how to find the right balance of centralized and decentralized operations. We also cover the rationale, goals, strategies, and technologies applied to accomplish this daunting task. We reflect on where we stand today and how we've realized many of our goals.
What an Enterprise Can Learn from Netflix, a Cloud-native Company (ENT203) | ... - Amazon Web Services
In moving its streaming product to the cloud, Netflix has been able to realize tremendous benefits in scalability, performance, and availability. The biggest benefit came from moving to a service-based architecture, which allowed engineering teams to accelerate their development cycle and innovate more quickly. However, cloud migration was a substantial effort. We mobilized resources across the company over several years, reorganized our engineering and operations teams, developed new security policies, migrated to the DevOps operations model, and even embraced a new product architecture. In this talk, we trace the evolution of the Netflix cloud model, both the successes and the challenges, and present them in a way that’s maximally useful to enterprises considering making the move to the cloud.
Netflix Development Patterns for Scale, Performance & Availability (DMG206) | ... - Amazon Web Services
This session explains how Netflix is using the capabilities of AWS to balance the rate of change against the risk of introducing a fault. Netflix uses a modular architecture with fault isolation and fallback logic for dependencies to maximize availability. This approach allows for rapid independent evolution of individual components to maximize the pace of innovation and A/B testing, and offers nearly unlimited scalability as the business grows. Learn how we balance managing change to (or subtraction from) the customer experience, while aggressively scraping barnacle features that add complexity for little value.
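The "fallback logic for dependencies" mentioned above is a general resilience pattern: wrap each dependency call so that a failure yields a degraded default instead of an error page. The sketch below shows the pattern in its most minimal form; it is an illustration of the general idea, not Netflix's actual implementation, and the function names are hypothetical.

```python
def with_fallback(call, fallback):
    """Invoke a dependency; on any failure, serve the degraded default
    so the overall response can still be assembled."""
    def wrapped(*args, **kwargs):
        try:
            return call(*args, **kwargs)
        except Exception:
            return fallback
    return wrapped

def fetch_recommendations(user_id):
    # Stand-in for a remote dependency that is currently failing.
    raise TimeoutError("dependency is down")

safe_fetch = with_fallback(fetch_recommendations, fallback=["popular-title-1"])
print(safe_fetch("user-42"))  # ['popular-title-1']
```

Isolating each dependency behind a wrapper like this is what lets individual components fail, or be evolved and A/B-tested independently, without taking down the whole experience.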
Slide deck for a presentation at OSCON 2011 about why Netflix uses web technology for TV user interfaces and how we maximize performance for a broad range of devices.
Moonbot Studios Shoots for the Cloud to Meet Deadlines and Manage Costs
Threatened by deadlines for Academy Award submissions, Moonbot Studios faced a shortage of rendering capacity while working on Taking Flight, its newest animated short film, and other important projects. As a small studio with a matching budget, the team did what it does best: it got creative and solved the problem with what they first called “magic.”
In this webinar, the Moonbot team will tell its tale of sending its rendering capacity to Google Compute Engine and how they defied networking odds by caching data close to the animators with an Avere vFXT. Hear Moonbot’s pipeline supervisor tell how they turned cloud data center distance into a non-issue, met deadlines, and gained quantitative benefits that sparked energy in this small team of creative aviators.
In this session, you will learn:
•What drove Moonbot Studios to move to the cloud
•How they moved complex renders to Google Compute Engine, overcoming data access roadblocks
•Measurable results including speed, economics, flexibility, and creative freedom
The Moonbot Studios flight to the cloud will be supported by Google Cloud Platform and Avere Systems for a complete overview of how the technologies help bring new ideas to life.
Embracing Failure - Fault Injection and Service Resilience at Netflix - Josh Evans
A presentation given at AWS re:Invent on how Netflix induces failure to validate and harden production systems. Technologies discussed include the Simian Army (Chaos Monkey, Gorilla, Kong) and our next gen Failure Injection Test framework (FIT).
A collection of information taken from previous presentations, used as a drill-down to support discussion of specific topics during the tutorial.
Engineering Netflix Global Operations in the Cloud - Josh Evans
Delivered at re:Invent 2015.
Cloud Services Powered by IBM SoftLayer and NetflixOSS - aspyker
This presentation covers our work, starting with Acme Air web scale and transitioning to operational lessons learned in HA, automatic recovery, continuous delivery, and operational visibility. It shows the port of the Netflix OSS cloud platform to IBM's cloud, SoftLayer, and the use of RightScale.
Sanger, upcoming OpenStack for Bio-informaticians - Peter Clapham
Delivery of a new bio-informatics infrastructure at the Wellcome Trust Sanger Center. We include how to programmatically create, manage, and provide provenance for images used both at Sanger and elsewhere, using open source tools and continuous integration.
FOSS4G In The Cloud: Using Open Source to build Cloud based Spatial Infrastru... - Mohamed Sayed
Using Open Source and Cloud Computing principles, these slides walk through the architectural patterns for building scalable cloud services. The second part of the presentation focuses on profiling common geolocation tasks like importing large datasets and rendering map tiles.
Oredev: Performance Testing in New Contexts - Eric Proegler
Virtualization, cloud deployments, and cloud-based tools have challenged and changed performance testing practices. Today’s performance tester can summon tens of thousands of virtual users from the cloud in a few minutes, at a cost far lower than the expensive on-premise installations of yesteryear.
Meanwhile, systems under test have changed more. Updated software stacks have increased the complexity of scripting and performance measurement, but the biggest changes are in the nature and quantities of resources powering the systems. Interpreting resource usage when resources are shared on a private virtualization platform is exceedingly difficult. Understanding resources when they live in a large public cloud is impossible.
Bare Metal Provisioning for Big Data - OpenStack Update Seminar (December 2016) - VirtualTech Japan Inc.
Speaker: 崔 祐碩 (Rakuten)
Agenda:
- Virtualization VS Bare Metal
- About Bare Metal management system at Rakuten
- Ready to Provisioning
- What is Next?
Understanding Elastic Block Store Availability and Performance - Amazon Web Services
Depending on your application needs, Elastic Block Store volumes can be configured for optimal performance and higher availability. In this session, we will present the different design characteristics of EBS Standard and Provisioned IOPS volumes, provide technical insights on how to think about EBS performance and availability, and share best practices for achieving higher availability and performance.
Scalable Media Processing in the Cloud (MED302) | AWS re:Invent 2013 - Amazon Web Services
The cloud empowers you to process media at scale in ways that were previously not possible, enabling you to make business decisions that are no longer constrained by infrastructure availability. Hear about best practices to architect scalable, highly available, high-performance workflows for digital media processing. In addition, this session covers AWS and partner solutions for transcoding, content encryption (watermarking and DRM), QC, and other processing topics.
AWS re:Invent 2013: Scalable Media Processing in the Cloud - David Sayed
Presentation from AWS re:Invent 2013. See session video here: http://www.youtube.com/watch?v=MjZdiDotRU8
The presentation is in two parts: (1) an introduction to moving workloads to the cloud, and (2) a deep dive on how the BBC moved their playout to the cloud.
Scalable and Reliable Logging at Pinterest - Krishna Gade
At Pinterest, hundreds of services and third-party tools, implemented in various programming languages, generate billions of events every day. To achieve scalable and reliable low-latency logging, there are several challenges: (1) uploading logs that are generated in various formats from tens of thousands of hosts to Kafka in a timely manner; (2) running Kafka reliably on Amazon Web Services, where the virtual instances are less reliable than on-premises hardware; (3) moving tens of terabytes of data per day from Kafka to cloud storage reliably and efficiently, while guaranteeing exactly-once persistence per message.
In this talk, we will present Pinterest’s logging pipeline, and share our experience addressing these challenges. We will dive deep into the three components we developed: data uploading from service hosts to Kafka, data transportation from Kafka to S3, and data sanitization. We will also share our experience in operating Kafka at scale in the cloud.
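One common way to approach the exactly-once guarantee in a Kafka-to-S3 mover is idempotent writes: derive each output object's key deterministically from the topic, partition, and offset range it covers, so a retried upload overwrites the same object instead of duplicating data. The sketch below shows that idea only; the key layout and helper names are hypothetical and not Pinterest's actual scheme.

```python
def s3_key(topic, partition, first_offset, last_offset, date):
    """Deterministic object key: a retry of the same batch rewrites the
    same key, so duplicates cannot accumulate in the object store."""
    return (f"logs/{topic}/dt={date}/"
            f"part-{partition:05d}_{first_offset:012d}-{last_offset:012d}.gz")

def upload_batch(store, topic, partition, messages, date):
    """store is any dict-like object store stand-in; messages are
    (offset, payload) pairs already read from one Kafka partition."""
    if not messages:
        return None
    key = s3_key(topic, partition, messages[0][0], messages[-1][0], date)
    store[key] = b"".join(payload for _, payload in messages)  # idempotent PUT
    return key

store = {}
batch = [(100, b"a"), (101, b"b")]
k1 = upload_batch(store, "events", 3, batch, "2015-06-01")
k2 = upload_batch(store, "events", 3, batch, "2015-06-01")  # crash-and-retry case
# Both uploads target the same key, so the store still holds exactly one object.
```

Because S3 PUTs of a whole object are atomic, combining deterministic keys with per-batch offset ranges turns "at-least-once delivery plus idempotent writes" into effectively exactly-once persistence.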
Similar to Cloud Connected Devices on a Global Scale (CPN303) | AWS re:Invent 2013
How to build forecasting services using ML and deep learning algorithms - Amazon Web Services
Forecasting is an important process for a great many companies and is used in many areas to try to accurately predict the growth and distribution of a product, the resources needed on production lines, financial projections, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session we will show how to pre-process data that contains a temporal component and then use an algorithm that, based on the type of data analyzed, produces an accurate forecast.
Big Data for Startups: how to create Big Data applications in Serverless mode - Amazon Web Services
The variety and quantity of data created every day is accelerating ever faster and represents a unique opportunity to innovate and create new startups.
However, managing large amounts of data can seem complex: building large-scale Big Data clusters looks like an investment accessible only to established companies. But the elasticity of the cloud, and Serverless services in particular, allow us to break through these limits.
Let's see, then, how it is possible to develop Big Data applications quickly, without worrying about infrastructure, dedicating all our resources to developing our ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session we will present the main features of the service and how to deploy your application in just a few steps.
Twenty years ago Amazon went through a radical transformation aimed at increasing the pace of innovation. Over this period we learned how changing our approach to application development allowed us to greatly increase agility and release velocity and, ultimately, to build more reliable and scalable applications. In this session we will explain how we define modern applications and how building modern apps affects not only application architecture but also organizational structure, development release pipelines, and even the operating model. We will also describe common approaches to modernization, including the approach used by Amazon.com itself.
How to spend up to 90% less with containers and Spot Instances - Amazon Web Services
Container usage keeps growing.
When properly designed, container-based applications are very often stateless and flexible.
The AWS services ECS, EKS, and Kubernetes on EC2 can take advantage of Spot Instances, leading to average savings of 70% compared to On-Demand instances. In this session we will explore the characteristics of Spot Instances and how they can easily be used on AWS. We will also learn how Spreaker uses Spot Instances to run applications of various kinds, in production, at a fraction of the on-demand cost!
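To make the kind of savings cited above concrete, here is a back-of-the-envelope sketch of blended fleet cost when part of the capacity runs on Spot. The prices (in cents per hour) are made-up placeholders; real Spot prices vary by instance type, Availability Zone, and time.

```python
def blended_cost(hours, on_demand_price, spot_price, spot_fraction):
    """Cost of a fleet that runs `spot_fraction` of its capacity on Spot
    and the rest On-Demand, for the given number of instance-hours."""
    return hours * (spot_fraction * spot_price
                    + (1 - spot_fraction) * on_demand_price)

od_only = blended_cost(720, 10, 3, 0.0)   # one month, all On-Demand
all_spot = blended_cost(720, 10, 3, 1.0)  # same month, all Spot
savings = 1 - all_spot / od_only          # ~0.70, i.e. the ~70% average cited
```

A hypothetical Spot price of 3 cents against an On-Demand price of 10 cents is exactly the 70%-cheaper scenario the session describes; stateless container workloads are the natural fit because losing a Spot instance just reschedules its tasks.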
In recent months, many customers have been asking us how to monetise Open APIs, simplify Fintech integrations, and accelerate adoption of various Open Banking business models. Therefore, AWS and FinConecta would like to invite you to the Open Finance marketplace presentation on October 20th.
Event Agenda:
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Make your startup's offering unique in the market with Machine Learning services - Amazon Web Services
To create value and build their own differentiating, recognizable offering, successful startups know how to combine established technologies with innovative components created ad hoc.
AWS provides services that are ready to use and, at the same time, makes it possible to customize and create the differentiating elements of your offering.
Focusing on Machine Learning technologies, we will see how to select the artificial intelligence services offered by AWS and, with the help of a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: automate the management and deployment of... - Amazon Web Services
With the traditional approach to IT, implementing DevOps techniques was difficult for many years; until now they have often involved manual activities, leading from time to time to application downtime that interrupted user operations. With the advent of the cloud, DevOps techniques are now within everyone's reach, at low cost, for any kind of workload, guaranteeing greater system reliability and resulting in significant improvements to business continuity.
AWS provides AWS OpsWorks as a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances by means of Chef and Puppet workloads.
Learn how to take advantage of AWS OpsWorks to guarantee the reliability of your application installed on EC2 instances.
Microsoft Active Directory on AWS to support your Windows workloads - Amazon Web Services
Want to know the options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support group policy management, authentication, and authorization. In this session, we discuss options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and running Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis leveraging artificial intelligence techniques is evolving and being refined at a rapid pace. In this webinar we will explore the possibilities offered by AWS services for applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are organizing a free virtual event next Wednesday, October 14th, from 12:00 to 13:00, dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in cloud environments based on VMware vSphere® and access a wide range of AWS services, exploiting the full potential of the AWS cloud while protecting existing VMware investments.
Create your first serverless ledger-based app with QLDB and NodeJS - Amazon Web Services
Many companies today build applications with ledger-style functionality, for example to verify the history of credits and debits in banking transactions, or to track the supply-chain flow of their products.
At the heart of these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable log of transactions, but they are complex and costly tools to manage.
Amazon QLDB removes the need to build custom, complex systems by providing a fully managed serverless ledger database.
In this session we will see how to build a complete serverless application that uses QLDB's capabilities.
With the rise of microservice architectures and rich mobile and web applications, APIs matter more than ever for delivering an exceptional user experience. In this session we will learn how to tackle modern API design challenges with GraphQL, an open-source API query language used by Facebook, Amazon, and others, and how to use AWS AppSync, a managed serverless GraphQL service on AWS. We will dive into several scenarios, seeing how AppSync can help address these use cases by building modern APIs with real-time and offline data-update capabilities.
We will also learn how Sky Italia uses AWS AppSync to deliver real-time sports updates to the users of its web portal.
Oracle databases and VMware Cloud™ on AWS: myths debunked - Amazon Web Services
Many organizations reap the benefits of the cloud by migrating their Oracle workloads, gaining significant agility and cost efficiency.
Migrating these workloads, however, can create complexity during application modernization and refactoring, along with performance risks that can be introduced when moving applications out of on-premises data centers.
In these slides, AWS and VMware experts present simple, practical tips to ease and streamline the migration of Oracle workloads and accelerate cloud transformation; they dive into the architecture and show how to take full advantage of VMware Cloud™ on AWS.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable container management service that simplifies running Docker containers through an orchestration layer controlling deployment and lifecycle. In this session we present the service's main features, reference architectures for different workloads, and the simple steps needed to quickly migrate one or more of your containers.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... - BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
UiPath Test Automation using UiPath Test Suite series, part 3 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
- UI automation introduction
- UI automation sample
- Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Accelerate your Kubernetes clusters with Varnish Caching - Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... - Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes real work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Elevating Tactical DDD Patterns Through Object Calisthenics - Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We finished with a lovely workshop in which participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
Key Trends Shaping the Future of Infrastructure.pdf - Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The keynote covers the key trends across hardware, cloud, and open source, exploring how these areas are likely to mature and develop over the short and long term, and considering how organisations can position themselves to adapt and thrive.
Generating a custom Ruby SDK for your web service or Rails API using Smithy - g2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Securing your Kubernetes cluster: a step-by-step guide to success! - KatiaHIMEUR1
Today, after several years of existence, with an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been easier to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
2. Banjo
• Real-time location meets social data
• An engineering-focused company
• Events recommendation, alert, & discovery
• Top Developer and Editor’s Choice in Google Play
• Named Top 10 World Innovator in Local - Fast Company
3. Growth Factors
• Grew from 0 to 5+ Million in 2 years
• Indexed over 700 Million profiles
• Processing Billions of location-based social posts
• Geospatial indexing for 500K+ posts per hour
• Categorized 1000’s of event types
• Over 50 Million background jobs processed daily
5. First 9 Months, from 0 to 1 Million
• Amazon EC2 deployment with Rubber
• No background jobs, frontend instances only
• Hosted MongoDB clusters
• 0 DevOp
6. Challenges @ 1M Users
• Limited engineering resources
• Not too agile with Rubber
• Outgrew hosted MongoDB limit
• No DevOp
7. Growing to 2M+ Users
• Migrated from EC2 instances to Heroku
• Delayed jobs: GirlFriday -> Qu -> Sidekiq
• In-house MongoDB clusters on EC2
• Social graph increased to 300M+ profiles
• 1 DBA / DevOp
8. Challenges @ 2M Users
• Explosion of social graph size
• Cost to process background jobs
• Latency to poll external social feeds
9. Banjo @ 4M+ Users
• 100x Heroku workers
• Social graph increased to 400M+ profiles
• Indexed one month of global location-based posts
• 10 million background jobs processed daily
• Still 1 DBA / DevOp
10. Challenges @ 4M Users
• Heroku Dynos limited to 512MB of memory, slow CPU
• Heroku routing latency becomes obvious
• Bloated codebase, limited forking for concurrency
• Power users with large social graphs churn data
11. Now, 6 Million Users
• Social graph increased to 700M profiles
• Heroku -> Elastic Beanstalk
• Service-oriented architecture
• Unicorn -> Elastic Beanstalk with Nginx + Passenger
• 50 Million background jobs processed daily
• Hundreds of EC2 instances
• And... still 1 DBA/Dev-Op
12. Heroku PROS / CONS
• The Pros:
  – Brainless deploy / rollback flow
  – Instant availability of dynos and workers
  – Zero setup & maintenance cost
  – No Dev-Op need
• The Cons:
  – Limited memory & CPU make it hard for concurrency
  – Routing layer latency
  – No built-in auto-scaling, limited available zones (US/EU)
  – Not enough control, limited access when there are platform issues
13. Elastic Beanstalk - PROS / CONS
• The Pros:
  – Choice of instance types, Availability Zones
  – Increased concurrency with Passenger / Nginx, support for auto-scaling
  – Low latency with Amazon Route 53 & Elastic Load Balancing
  – Cost efficient
• The Cons:
  – Initial setup cost for Beanstalk containers and environments
  – Slow container updates - currently Ruby 1.9 / Passenger 3.0.17 + Nginx 1.2.3
  – Time to spin up new instances for seamless deploys
  – There’s some learning curve to Elastic Beanstalk scripts
14. Managed EC2 Instances
• MongoDB instances (DBA)
• Elastic Beanstalk managed environments (Eng)
• Heroku managed services (Eng)
• Elastic Beanstalk + Heroku can easily be managed by a small, agile engineering team
15. Recommendation for startups:
• Start prototyping on small scale PaaS services
• Add-ons are really helpful: Papertrail, NewRelic, hosted MongoDB/Redis/Memcached/Metrics
• Pager alerts with ScoutApp, Pingdom, PagerDuty
• Make use of health & metrics dashboards
• Deploy frequently & scale up along the way
19. Two roads diverged in a wood, and I—
I took the one less traveled by …
Robert Frost
20. Understanding “Small” And “Cloud”
• What is “small”?
– Production scale that required minimal cost
– Devices that Moore forgot
– Speed in MHz, memory in KB, solution designed around resources
• What is “cloud”?
– HTTP/XML for transport
– SSL for security
– Solutions typically don’t consider resource-constrained devices
21. Implications Of Small Devices
• Support for whitelisting
– Yes, it is still done
– No, “open up all outbound traffic” is not acceptable
• One-stop connectivity
– Not every protocol uses TCP
– UDP is still great for some things, and required for others (NTP)
22. Cloud System Requirements
• Support whitelisting (fixed IP address set)
• Support UDP as well as TCP
• Support Auto Scaling and Elastic Load Balancing
• Off-instance logging and monitoring
• AWS gets us 90% there – the last 10% is our focus today
24. Meeting The Requirements
(diagram: IP1 .. IPn)
• Reuse when possible, invent when necessary
• Reuse = Amazon Web Services:
  – Amazon VPC
  – EIP addresses
  – Amazon Route 53
25. Application-managed IP Addresses
• “Standard” EIP are not enough
  – All addresses must be active all the time
  – Addresses must move to adapt to scale changes
  – Support multiple addresses per instance for low-scale periods
• Application-managed IP addresses fill the gaps
  – All addresses can be active (assigned to an instance)
  – API control of EIP assignment provides migration during scaling, and this can be done “cleanly” by the application
• However, only VPC instances allow multiple EIP assignments
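The multi-EIP mechanics described on this slide can be sketched with boto3. This is an illustrative assumption, not the talk's actual code: the region, network interface ID, and allocation ID are hypothetical placeholders, and boto3 is imported lazily so the pure helper works without the SDK installed.

```python
def pick_unassociated_secondary(private_ips):
    """Choose a secondary private IP that has no EIP bound to it yet."""
    return next(
        p["PrivateIpAddress"]
        for p in private_ips
        if not p["Primary"] and "Association" not in p
    )

def attach_extra_eip(region, interface_id, allocation_id):
    """Give a VPC instance an additional EIP: add a secondary private IP
    to its network interface, then associate the EIP with that IP."""
    import boto3  # lazy import: only needed when actually talking to EC2
    ec2 = boto3.client("ec2", region_name=region)
    # Only VPC instances support multiple EIPs, one per private IP.
    ec2.assign_private_ip_addresses(
        NetworkInterfaceId=interface_id,
        SecondaryPrivateIpAddressCount=1,
    )
    nic = ec2.describe_network_interfaces(
        NetworkInterfaceIds=[interface_id]
    )["NetworkInterfaces"][0]
    secondary = pick_unassociated_secondary(nic["PrivateIpAddresses"])
    # AllowReassociation lets the address migrate "cleanly" during scaling.
    ec2.associate_address(
        AllocationId=allocation_id,
        NetworkInterfaceId=interface_id,
        PrivateIpAddress=secondary,
        AllowReassociation=True,
    )
    return secondary
```

The same `associate_address` call, driven by the application, is what lets an address move between instances as the fleet scales.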
26. Chickens And Eggs
• Managed IP addresses require multiple EIPs and configuration
• VPC is required to allow multiple EIP management
• Configuration requires Puppet and AWS access
• Puppet and AWS require access to the network (from the VPC)
• Network access requires instance configuration and a VPC bridge
• Instance configuration is part of application configuration (managed IP address information)
• Rinse, lather, repeat…
27. Breaking The Cycle
• Each VPC requires a bridge for network access
• Putting a Puppet Master on this bridge breaks the cycle of network access/configuration
  – Allows the use of VPC security groups to control access
  – Use a lightweight instance
  – Assign any EIP to allow external access
  – Configure to support VPC/Internet bridging
• All VPC instances are configured to use the Puppet Master as their gateway (initially)
29. What About All Those Masters?
• World-wide support requires many VPCs
– Multiple Availability Zones
– Multiple regions
• Each VPC requires network access
– Each VPC requires a bridge
• We solved one problem, and introduced another
– Puppet Master configuration
30. Mastering The Puppet Masters
• Amazon S3 is an excellent choice for Puppet Master configuration
  – Global, highly available
  – Excellent security (access control) and logging
  – Sharable between accounts
• One-way synchronization from Amazon S3 to distributed Puppet Masters solves the configuration problem
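A minimal sketch of that one-way synchronization, assuming the `aws` CLI on each Puppet Master and a hypothetical bucket name; run periodically (for example from cron) so every master converges on the S3 copy:

```shell
# Hypothetical crontab entry on each Puppet Master: pull configuration
# from S3 every 5 minutes, deleting anything removed from the bucket.
*/5 * * * * aws s3 sync s3://example-puppet-config/ /etc/puppet/ --delete
```

Because the sync is strictly S3-to-master, a bad edit on one master is overwritten on the next pull rather than propagating.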
31. What About Performance?
• We cannot stop here – we don’t want all traffic to always go through a bridge
• So we do not stop here, we only configure here
  – Access to the Puppet Master and the network allows access to our configuration
  – Our configuration includes information about our EIP pool, as well as whether we need to acquire additional EIPs
  – If we don’t need an EIP, then we continue to use the bridge/Puppet Master
32. Tools For Instance Configuration
• Instance metadata
– Instance ID, user data (always available)
– AWS (requires Internet access)
• Remember, instance data uses AWS API calls
• Puppet
– Configuration rules
– Unsecured files
• Amazon S3
– Secured files (use role-based API authentication)
33. EIP Management
• DNS (Amazon Route 53) for address configuration
  – Configure a master name that contains all EIPs (for configuration)
  – Configure host names for regional EIPs (latency-based)
• Each instance knows the master name
• Use EIP APIs to intersect the master list with the EIP list of the instance’s VPC
• Instances find their neighbors and share the EIPs
• Each instance periodically checks itself
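The DNS-based discovery step can be sketched as follows (an illustrative assumption, not the talk's actual code: the master name is whatever Route 53 record holds the full pool, and the VPC's EIP list would come from the EC2 API):

```python
import socket

def resolve_pool(master_name):
    """All A records behind the master name, i.e. the configured EIP pool."""
    infos = socket.getaddrinfo(master_name, None, socket.AF_INET)
    return sorted({info[4][0] for info in infos})

def regional_pool(master_pool, vpc_eips):
    """EIPs from the global pool that belong to this instance's VPC/region:
    the intersection of the master list with the locally visible EIPs."""
    return sorted(set(master_pool) & set(vpc_eips))
```

An instance would call `resolve_pool` on the master name, fetch its VPC's addresses via the EIP APIs, and take `regional_pool` of the two lists to learn which addresses it and its neighbors should share.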
34. EIP Pseudo-Code – Startup
Get a Primary Public IP – repeat until successful
Allocate Network Interfaces and Private IPs (based on instance type)
Notify application of all Public IPs acquired with a Network Interface
35. EIP Pseudo-Code – Periodic
Use DNS, EIP APIs to determine current pool for my region, intersect
Validate my Primary Public IP – get one if required
Validate configured Public IPs – release if no longer configured
Check Scale Group, determine address count per instance (ROUND UP)
Determine Public IP changes, and allocate/release with application’s help
  Release a Public IP if I have too many (application determines which)
  Allocate all required Public IPs if I have too few
  If there are nodes without an address, give one up
    Instances are ordered, and know who will give up an address
    Application picks the least used address
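The arithmetic behind this periodic step can be sketched as pure logic. This is a hedged reconstruction of the pseudo-code above, not the talk's implementation; the EIP API calls and instance ordering are deliberately left out, only the per-instance decision is shown.

```python
import math

def addresses_per_instance(pool_size, instance_count):
    """ROUND UP so every address in the pool stays assigned to an instance."""
    return math.ceil(pool_size / instance_count)

def rebalance(my_eip_count, pool_size, instance_count, bare_instances):
    """Positive result: allocate that many EIPs; negative: release that many.
    Mirrors the slide: release if over target, give one up if some node
    has no address, otherwise allocate up to the target."""
    target = addresses_per_instance(pool_size, instance_count)
    if my_eip_count > target:
        return target - my_eip_count      # release the excess
    if bare_instances and my_eip_count > 1:
        return -1                         # give one up for an address-less node
    return target - my_eip_count          # allocate up to the target
```

For example, with a pool of 5 addresses across 2 instances the target is 3 per instance, so an instance holding 4 releases one and an instance holding 1 acquires two.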
36. EIP Pseudo-Code – Shutdown
Release all additional Network Interfaces
Release all Public IPs except my Primary Public IP (for logging)
The instance then terminates, freeing the Primary Public IP
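The shutdown step can be sketched with boto3 as well (again an illustrative assumption with hypothetical IDs; releasing the additional network interfaces is omitted for brevity, and boto3 is imported lazily so the pure helper works without the SDK):

```python
def extra_allocations(all_allocation_ids, primary_allocation_id):
    """Everything except the primary public IP, which is kept for final
    log lines and freed only when the instance terminates."""
    return [a for a in all_allocation_ids if a != primary_allocation_id]

def release_on_shutdown(all_allocation_ids, primary_allocation_id):
    import boto3  # lazy import: only needed when actually talking to EC2
    ec2 = boto3.client("ec2")
    for alloc in extra_allocations(all_allocation_ids, primary_allocation_id):
        addr = ec2.describe_addresses(AllocationIds=[alloc])["Addresses"][0]
        # A VPC EIP must be disassociated before it can be released.
        if "AssociationId" in addr:
            ec2.disassociate_address(AssociationId=addr["AssociationId"])
        ec2.release_address(AllocationId=alloc)
```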
37. (Architecture diagram: IP1 .. IPn, VPC per AZ, User Data, Instance Data, Route 53, Bridge/Puppet Master, Amazon S3, EIP Management)
39. Adding Back The 90%
• Our configured instances play nice with AWS
  – Bootstrapping through AMI or Cloud Init
  – Auto Scaling groups set user and instance data
  – Load balancers managed with Auto Scaling groups
  – Latency-based Route 53 address for TCP/HTTP
  – Latency-based Route 53 address for UDP and whitelists
• Internet access for remote logging
• Amazon CloudWatch for monitoring