Minimizing customer impact is key to successfully rolling out frequent code updates. Learn how to leverage the AWS cloud so you can minimize the impact of bugs, test your services in isolation with canary data, and easily roll back changes. Learn to love deployments, not fear them, with a blue/green architecture model. This talk walks you through the reasons it works for us and how we set up our AWS infrastructure, including package repositories, Elastic Load Balancing load balancers, Auto Scaling groups, internal tools, and more to help orchestrate the process. Learn to view thousands of servers as resources at your command to help improve your engineering environment, take bigger risks, and not spend weekends firefighting bad deployments.
Netflix recently changed its data pipeline architecture to use Kafka as the gateway for data collection for all applications, a pipeline that processes hundreds of billions of messages daily. This session will discuss the motivation for moving to Kafka, the architecture, and the improvements we have added to make Kafka work in AWS. We will also share lessons learned and future plans.
Common issues with Apache Kafka® Producer (Confluent)
Badai Aqrandista, Confluent, Senior Technical Support Engineer
This session is about a common issue with the Kafka producer: producer batch expiry. We will discuss the Kafka producer's internals, the common causes of batch expiry, such as a slow network or small batches, and how to overcome them. We will also share some examples along the way!
https://www.meetup.com/apache-kafka-sydney/events/279651982/
Source: http://www.opennaru.com/cloud/msa/
Microservices are an architectural approach to building applications. What distinguishes microservices from the traditional monolithic approach is the way an application is broken down into its core functions. Each function is called a service, and each can be built and deployed independently. This means individual services can operate (or fail) without negatively affecting the others.
Apache Hadoop and Spark on AWS: Getting started with Amazon EMR - Pop-up Loft... (Amazon Web Services)
Amazon EMR is a managed service that makes it easy for customers to use big data frameworks and applications like Apache Hadoop, Spark, and Presto to analyze data stored in HDFS or on Amazon S3, Amazon’s highly scalable object storage service. In this session, we will introduce Amazon EMR and the greater Apache Hadoop ecosystem, and show how customers use them to implement and scale common big data use cases such as batch analytics, real-time data processing, interactive data science, and more. Then, we will walk through a demo to show how you can start processing your data at scale within minutes.
This session covers AWS' philosophy and recommended best practices for building microservices applications, how AWS services like Lambda and API Gateway benefit developers building microservices apps, and how customers are using these and other AWS services to deliver their microservices apps.
Introducing Apache Kafka - a visual overview. Presented at the Canberra Big Data Meetup 7 February 2019. We build a Kafka "postal service" to explain the main Kafka concepts, and explain how consumers receive different messages depending on whether there's a key or not.
In this session we’ll take a high-level overview of AWS Lambda, a serverless compute platform that has changed the way that developers around the world build applications. We’ll explore how Lambda works under the hood, the capabilities it has, and how it is used. By the end of this talk you’ll know how to create Lambda based applications and deploy and manage them easily.
Speaker: Chris Munns - Principal Developer Advocate, AWS Serverless Applications, AWS
A brief introduction to Apache Kafka and a description of its usage as a platform for streaming data. It introduces some of the newer components of Kafka that help make this possible, including Kafka Connect, a framework for capturing continuous data streams, and Kafka Streams, a lightweight stream processing library.
AWS S3 | Tutorial For Beginners | AWS S3 Bucket Tutorial | AWS Tutorial For B... (Simplilearn)
This AWS S3 presentation will help you understand what cloud storage is, the types of storage, life before Amazon S3, what S3 (Amazon Simple Storage Service) is, the benefits of S3, objects and buckets, and how Amazon S3 works, along with an explanation of AWS S3's features. Amazon S3 is a storage service for the Internet. It is a simple storage service that offers software developers highly scalable, reliable, low-latency data storage infrastructure at a relatively low cost. Amazon S3 provides a simple web service interface that can be used to store and retrieve any amount of data. Using this, developers can easily build applications that make use of Internet storage. Amazon S3 is designed to be highly flexible and scalable. Now, let's dive into this presentation and understand what Amazon S3 actually is.
Below topics are explained in this AWS S3 presentation:
1. What is Cloud storage?
2. Types of storage
3. Before Amazon S3
4. What is S3
5. Benefits of S3
6. Objects and buckets
7. How does Amazon S3 work
8. Features of S3
This AWS certification training is designed to help you gain an in-depth understanding of Amazon Web Services (AWS) architectural principles and services. You will learn how cloud computing is redefining the rules of IT architecture and how to design, plan, and scale AWS Cloud implementations with best practices recommended by Amazon. The AWS Cloud platform powers hundreds of thousands of businesses in 190 countries, and AWS certified solutions architects take home about $126,000 per year.
This AWS certification course will help you learn the key concepts, latest trends, and best practices for working with the AWS architecture, and become an industry-ready AWS Certified Solutions Architect qualified for a position as a high-quality AWS professional.
The course begins with an overview of the AWS platform before diving into its individual elements: IAM, VPC, EC2, EBS, ELB, CDN, S3, EIP, KMS, Route 53, RDS, Glacier, Snowball, CloudFront, DynamoDB, Redshift, Auto Scaling, CloudWatch, ElastiCache, CloudTrail, and Security. Those who complete the course will be able to:
1. Formulate solution plans and provide guidance on AWS architectural best practices
2. Design and deploy scalable, highly available, and fault tolerant systems on AWS
3. Identify the lift and shift of an existing on-premises application to AWS
4. Decipher the ingress and egress of data to and from AWS
5. Select the appropriate AWS service based on data, compute, database, or security requirements
6. Estimate AWS costs and identify cost control mechanisms
This AWS course is recommended for professionals who want to pursue a career in Cloud computing or develop Cloud applications with AWS. You’ll become an asset to any organization, helping leverage best practices around advanced cloud-based solutions and migrate existing workloads to the cloud.
Learn more at: https://www.simplilearn.com/
Do you want to run your code without the cost and effort of provisioning and managing servers? Find out how in this deep dive session on AWS Lambda, which allows you to run code for virtually any type of application or back end service – all with zero administration. During the session, we’ll look at a number of key AWS Lambda features and benefits, including automated application scaling with high availability; pay-as-you-consume billing; and the ability to automatically trigger your code from other AWS services or from any web or mobile app.
Kafka Tutorial - Introduction to Apache Kafka (Part 1) (Jean-Paul Azar)
Why is Kafka so fast? Why is Kafka so popular? Why Kafka? This slide deck is a tutorial for the Kafka streaming platform. It covers Kafka architecture with some small examples from the command line, then expands on this with a multi-server example to demonstrate failover of brokers as well as consumers. It then walks through some simple Java client examples for a Kafka producer and a Kafka consumer. We have also expanded the Kafka design section and added references. The tutorial covers Avro and the Schema Registry as well as advanced Kafka producers.
In this session we'll discuss and demonstrate key concepts and design patterns for continuous deployment and integration using technologies like AWS OpsWorks and Chef to enable better control of applications and infrastructures.
Microservices in action at the Dutch National Police - Bert Jan Schrijver - C... (Codemotion)
At the Cloud, Big Data and Internet division of the Dutch National Police, 4 DevOps teams use the latest open source technology to build high tech, cloud native web applications using Spring Boot, Angular 5, Spark, Kafka and Jenkins 2. I'll share our experiences and real-world use cases for microservices. I’ll show how 4 teams work together on one product and I’ll talk about how we apply the principles of DevOps and Continuous Delivery. I’ll show how we handle security, build pipelines, test automation, performance tests, service discovery, automated deployments, monitoring and more!
DevOps, Continuous Integration and Deployment on AWS: Putting Money Back into... (Amazon Web Services)
Organizations around the globe are leveraging the cloud to accomplish world-changing missions. This session will address how AWS can help organizations put more money toward their mission and scale outreach and operations to achieve more with less. Hear some of AWS’s most advanced customers on how their organizations handle DevOps, continuous integration and deployment. Learn how these practices allow them to rapidly develop, iterate, test and deploy highly-scalable web applications and core operational systems on AWS. The discussion will focus on best practices, lessons learned, and the specific technologies and services they use.
Using AWS to Build a Scalable Big Data Management & Processing Service (BDT40... (Amazon Web Services)
By turning the data center into an API, AWS has enabled Sumo Logic to build a very large scale IT operational analytics platform as a service at unprecedented scale and velocity. Based around Amazon EC2 and Amazon S3, the Sumo Logic system is ingesting many terabytes of unstructured log data a day while at the same time delivering real-time dashboards and supporting hundreds of thousands of queries against the collected data. When co-founder and CTO Christian Beedgen started Sumo Logic, it was obvious that the service would have to scale quickly and elastically, and AWS has been providing the perfect infrastructure for this endeavor from the start.
In this talk, Christian dives into the core Sumo Logic architecture and explains which AWS services are making Sumo Logic possible. Based around an in-house developed automation and continuous deployment system, Sumo Logic is leveraging Amazon S3 in particular for large-scale data management and Amazon DynamoDB for cluster configuration management. By relying on automation, Sumo Logic is also able to perform sophisticated staging of new code for rapid deployment. Using the log-based instrumentation of the Sumo Logic codebase, Christian will dive into the performance characteristics achieved by the system today and share war stories about lessons learned along the way.
I'm talking about how Ansible helps Backbase establish a testing pipeline to ensure the quality of Customer Experience Platform, the leading horizontal portal software. This is done by utilizing the concept of immutable infrastructure: provision on-demand infrastructure, use it, and then dispose of it.
How to build forecasting services leveraging ML and deep learn... (Amazon Web Services)
Forecasting is an important process for many companies and is used in many fields to try to accurately predict the growth and distribution of a product, the resources needed on production lines, financial projections, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session we will show how to pre-process data containing a temporal component and then use an algorithm that, based on the type of data analyzed, produces an accurate forecast.
Big Data for startups: how to create Big Data applications in server... (Amazon Web Services)
The variety and quantity of data created every day is accelerating ever faster and represents a unique opportunity to innovate and create new startups.
However, managing large amounts of data can seem complex: building large-scale Big Data clusters looks like an investment accessible only to established companies. But the elasticity of the cloud, and serverless services in particular, let us break through these limits.
Let's see, then, how it is possible to develop Big Data applications quickly, without worrying about the infrastructure, dedicating all our resources to developing our ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session we will present the main features of the service and how to deploy your application in a few steps.
Twenty years ago Amazon went through a radical transformation with the goal of increasing the pace of innovation. Over this period we learned how changing our approach to application development allowed us to greatly increase agility and release velocity and, ultimately, enabled us to build more reliable and scalable applications. In this session we will explain how we define modern applications and how building modern apps affects not only the application architecture but also the organizational structure, the development release pipelines, and even the operating model. We will also describe common approaches to modernization, including the approach used by Amazon.com itself.
How to spend up to 90% less with containers and Spot Instances (Amazon Web Services)
The use of containers keeps growing.
When properly designed, container-based applications are very often stateless and flexible.
AWS ECS, EKS, and Kubernetes on EC2 can take advantage of Spot Instances, leading to average savings of 70% compared to On-Demand Instances. In this session we will look at the characteristics of Spot Instances and how they can easily be used on AWS. We will also learn how Spreaker uses Spot Instances to run applications of various kinds, in production, at a fraction of the on-demand cost!
In recent months, many customers have been asking us how to monetise Open APIs, simplify Fintech integrations, and accelerate adoption of various Open Banking business models. Therefore, AWS and FinConecta would like to invite you to the Open Finance marketplace presentation on October 20th.
Event Agenda :
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Make your startup's offering unique in the market with Machine Lea... services (Amazon Web Services)
To create value and build a differentiated, recognizable offering, successful startups know how to combine established technologies with innovative, purpose-built components.
AWS provides ready-to-use services and, at the same time, lets you customize and create the differentiating elements of your own offering.
Focusing on Machine Learning technologies, we will see how to select the artificial intelligence services offered by AWS and, with the help of a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: automate the management and deployment of... (Amazon Web Services)
With the traditional approach to IT, implementing DevOps techniques was difficult for many years: they often involved manual activities, occasionally leading to application downtime that interrupted users' work. With the advent of the cloud, DevOps techniques are now within everyone's reach at low cost for any kind of workload, guaranteeing greater system reliability and bringing significant improvements to business continuity.
AWS offers AWS OpsWorks as a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances by means of Chef and Puppet workloads.
Learn how to leverage AWS OpsWorks to guarantee the reliability of your application running on EC2 instances.
Microsoft Active Directory on AWS to support your Windows workloads (Amazon Web Services)
Do you want to know the options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support group policy management, authentication, and authorization. In this session, we will discuss the options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and running Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis leveraging artificial intelligence techniques is evolving and being refined at a rapid pace. In this webinar we will explore the possibilities offered by AWS services for applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are hosting a free virtual event next Wednesday, October 14th, from 12:00 to 13:00, dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in VMware vSphere®-based cloud environments and access a broad range of AWS services, taking full advantage of the AWS cloud while protecting your existing VMware investments.
Create your first serverless ledger-based app with QLDB and NodeJS (Amazon Web Services)
Many companies today build applications with ledger-style functionality, for example to verify the history of credits and debits in banking transactions, or to track their products' supply chain flow.
At the heart of these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable transaction log; however, they are complex and costly tools to manage.
Amazon QLDB eliminates the need to build complex custom systems by providing a fully managed, serverless ledger database.
In this session we will see how to build a complete serverless application that uses QLDB's capabilities.
Con l’ascesa delle architetture di microservizi e delle ricche applicazioni mobili e Web, le API sono più importanti che mai per offrire agli utenti finali una user experience eccezionale. In questa sessione impareremo come affrontare le moderne sfide di progettazione delle API con GraphQL, un linguaggio di query API open source utilizzato da Facebook, Amazon e altro e come utilizzare AWS AppSync, un servizio GraphQL serverless gestito su AWS. Approfondiremo diversi scenari, comprendendo come AppSync può aiutare a risolvere questi casi d’uso creando API moderne con funzionalità di aggiornamento dati in tempo reale e offline.
Inoltre, impareremo come Sky Italia utilizza AWS AppSync per fornire aggiornamenti sportivi in tempo reale agli utenti del proprio portale web.
Oracle databases and VMware Cloud™ on AWS: myths to debunk (Amazon Web Services)
Many organizations reap the benefits of the cloud by migrating their Oracle workloads, securing significant gains in agility and cost efficiency.
Migrating these workloads can create complexity during application modernization and refactoring, and performance risks can be introduced when moving applications out of on-premises data centers.
In these slides, AWS and VMware experts present simple, practical tips to ease and simplify the migration of Oracle workloads while accelerating the transformation to the cloud; they dig into the architecture and demonstrate how to take full advantage of VMware Cloud™ on AWS.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable container management service that simplifies managing Docker containers through an orchestration layer controlling deployment and lifecycle. In this session we will present the service's main features, reference architectures for different workloads, and the simple steps needed to quickly migrate one or more of your containers.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
Connector Corner: Automate dynamic content and events by pushing a button (DianaGray10)
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview (Prayukth K V)
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 (Tobias Schneck)
As AI technology pushes into IT, I found myself wondering, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our lovely cloud-native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and guide you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premises strategy we may need to apply it to our own infrastructure and make it work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working for real.
GraphRAG is All You need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
8. Data ingestion
[Architecture diagram: sensors, a UI, and an API feed events through an external-service Elastic Load Balancing load balancer into a fleet of termination servers; content routers then route events to processors (Processor 1, Processor 2, each running service instances); the data plane consists of Kafka, DynamoDB, Redis, Amazon RDS, Amazon Redshift, Amazon Glacier, and Amazon S3.]
9. High scale, big data
•Fortune 500, think tanks, non-profits
•100K+ events per second
–Expected to hit 500K EPS by end of 2015
•Each enterprise customer can generate 2-4 TB of data per day
•Microservice architecture
•Polyglot environment
13. Solving for the problems
•OMG, all servers need to be patched??
•I’m afraid to restart that service; it’s been running for 2 years
•Large rolling restarts
•Deployment fear
–Friday night deploys
•B/G for event processing?
14. Our primary objectives for deployments
•Minimize customer impact
–Customers should have no indication that anything has changed
•Maximize engineers' weekends
–Avoid burnout
•Reduce dependencies of rollouts
–Everything goes out together, 50+ services, 1000+ VMs
15. Leveraging AWS
•Programmable data centers
•Nodes are ephemeral
•It should be easier to re-create an environment than to fix it
–Think like the cloud
16. What is blue-green?
[Diagram: a router sends traffic to Application v1 (web server + app server) or Application v2 (web server + app server); both share one database; the links to v1 are cut (x) when v2 goes live.]
17. What is blue-green?
•Full cluster BG
–Everything goes out together
–Indiana Jones: “idol switch”
•App-based BG
–Each app or team controls its own blue-green deployments
18. Data plane
The data plane: you can't blue-green all the things
[Diagram: the blue cluster and the green cluster both point at one shared data plane: Kafka, DynamoDB, Redis, Amazon RDS (PostgreSQL), Amazon Redshift, Amazon Glacier, and Amazon S3.]
19. When do we deploy?
•Teams deploy end-of-sprint releases together
•Hot fixes and upgrades are frequently performed via rolling-restart deployments
•Early on, deployments took an entire day
–Lack of automation
•Deploys today generally take 45 minutes
–Everyone has run a deployment
20. Sustaining engineer
•Every team member including QA has run deployments
•Builds confidence, understanding, and redundancy
•Ensures documentation is up to date and that everything that can be automated is
Sustaining engineers earn a badge-of-honor shirt after their tour of duty
21. Deployment day
•Apt repo synchronized and locked down
•Data plane migrations applied
•“Green” cluster is launched (1000s of machines)
•IT tests run
•Canary customers
•Logging and error checks
•Active-active
•“Blue” marked as inactive, decommissioned
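End to end, the day runs as a strictly ordered, gated sequence through the checklist above. Purely as an illustration of that gating (every step body below is a stub, not our real tooling):
import sys

def run_step(name):
    # Placeholder so the outline executes; real steps call internal tooling
    print("running: %s" % name)
    return True

def deployment_day():
    steps = [
        "sync and lock apt repo",
        "apply data plane migrations",
        "launch green cluster",
        "run IT tests",
        "canary customers",
        "logging and error checks",
        "go active-active",
        "mark blue inactive and decommission",
    ]
    for name in steps:
        if not run_step(name):  # each step gates the next
            sys.exit("deploy halted at: %s" % name)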
22. Keys to success
Pro tip: It’s not just flipping load balancers
23. Keys to success
Automate all the things
•Junior devs should be able to run your deploy system
24. Keys to success
Instrumentation & Metrics
https://github.com/codahale/metrics
https://github.com/rcrowley/go-metrics
25. Keys to success
Use a provisioning system
•Chef
•Puppet
•Salt
•Baked AMIs
26. Keys to success
Live integration / regression test suites
[Diagram: a test system sends deterministic input values into the running pipeline, then verifies the processed state.]
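The pattern is simple to sketch: push a known payload in the front and poll the processed store for the expected state. An illustrative example only; `send_event` and `fetch_processed` are stand-ins for whatever client your pipeline actually exposes:
import time
import uuid

def test_pipeline_roundtrip(send_event, fetch_processed, timeout=60):
    # Send a deterministic, uniquely tagged event into the live pipeline
    marker = str(uuid.uuid4())
    send_event({"customer": "test-customer", "marker": marker, "value": 42})
    # Poll downstream state until the processed record appears
    deadline = time.time() + timeout
    while time.time() < deadline:
        record = fetch_processed(marker)
        if record is not None:
            assert record["value"] == 42
            return
        time.sleep(1)
    raise AssertionError("event %s never reached processed state" % marker)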
34. Elevator pitch on Kafka
•Distributed commit log
•Similar to a message queue
•Allows for replaying messages from earlier in the stream in case of failure
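That replay property is what makes Kafka such a good fit for blue-green cutover. A small sketch with the kafka-python client (not code from the deck; the broker address, topic, and handler are placeholders):
from kafka import KafkaConsumer, TopicPartition

def handle(payload):
    # Stand-in for real message processing
    print(payload)

def replay_from(offset, topic="events", broker="localhost:9092"):
    # Rewind one partition to an earlier offset and reprocess from there
    consumer = KafkaConsumer(bootstrap_servers=broker, enable_auto_commit=False)
    tp = TopicPartition(topic, 0)
    consumer.assign([tp])
    consumer.seek(tp, offset)   # jump back in the commit log
    for message in consumer:    # consumption resumes from that offset
        handle(message.value)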
35. It all starts with a running cluster
[Diagram: sensors connect through the external-service ELB load balancer to blue's termination servers; content routers write to the "active" topics in Kafka; blue's processors 1-4 read from the "active" topics; the data plane holds Kafka, DynamoDB, Redis, Amazon RDS, Amazon Redshift, Amazon Glacier, and Amazon S3.]
•Blue is running; normal operation
•Content Routers are writing to the "active" topics in Kafka
•Blue processors read from the "active" topics
37. Launching new cluster
[Diagram: green's termination servers, content routers, and processors 1-4 launch alongside blue's; both clusters share the data plane (Kafka, DynamoDB, Redis, Amazon RDS, Amazon Redshift, Amazon Glacier, Amazon S3).]
•Green cluster is launched
•Termination servers are kept out of the ELB load balancer by failing health checks (see the sketch below)
•Content Routers write to the "active" topics
•Processors in green read from the "inactive" topics
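Keeping an entire fleet out of the load balancer this way just means the health endpoint reports failure until the cluster is activated. A minimal sketch, assuming a Flask health endpoint and a process-local flag (the real system would read the active/inactive state from ZooKeeper):
from flask import Flask

app = Flask(__name__)
CLUSTER_ACTIVE = False  # flipped to True when this cluster is activated

@app.route("/health")
def health():
    # ELB health check: stay "unhealthy" until the cluster is activated,
    # so green termination servers receive no sensor traffic yet
    return ("OK", 200) if CLUSTER_ACTIVE else ("inactive", 503)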
39. Getting the size right
•Sizing of our autoscale groups is determined programmatically
–Admin page allows for setting min / max
–Script determines appropriate desired-capacity based on running cluster
•Launching is then as simple as updating the autoscale groups to the new sizes
from subprocess import PIPE, Popen

def current_counts(region='us-east-1'):
    # Shell out to the legacy Auto Scaling CLI and parse current group sizes
    proc = Popen(
        "as-describe-auto-scaling-groups "
        "--region {} "
        "--max-records=600".format(region).split(),
        stdout=PIPE, stderr=PIPE)
    out, err = proc.communicate()
    if err:
        raise Exception(err)
    counts = {}
    for line in out.splitlines():
        if "AUTO-SCALING-GROUP" not in line:
            continue
        parts = line.split()
        group = parts[1]
        current = parts[-2]
        counts[group] = int(current)
    return counts
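Launching is then a matter of pushing the computed sizes back out. A minimal companion sketch in the same style, assuming the legacy `as-set-desired-capacity` command from the same CLI era and a `desired_sizes` mapping derived from `current_counts()` (both assumptions, not the deck's actual code):
from subprocess import PIPE, Popen

def resize_groups(desired_sizes, region='us-east-1'):
    # desired_sizes: {asg_name: desired_capacity}, e.g. from current_counts()
    for group, size in desired_sizes.items():
        proc = Popen(
            "as-set-desired-capacity {} "
            "--desired-capacity {} "
            "--region {}".format(group, size, region).split(),
            stdout=PIPE, stderr=PIPE)
        _, err = proc.communicate()
        if err:
            raise Exception(err)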
42. User data and Chef get things rolling
•Inside out Chef bootstrapping
–Didn’t feel comfortable running `wget … | bash`
•Custom version of Chef installer
–Version of Chef
–Where to find the Chef servers
–Which role to run
–Which environment (dev, integ, blue, green)
43. Testing the new stuff
[Diagram: real sensor traffic still flows through the external-service ELB load balancer to blue's termination servers; the integration test suite connects directly to a green termination server; blue processors read the "active" topics while green processors read the "inactive" topics; the data plane is shared.]
•Test customer(s) are canaried
•Integration test suite is run by connecting to a termination server directly
•Tests pass; then we canary real customers
45. Canary customers
•Canary information is stored in ZooKeeper
•Fortunately we dogfood our own tech
•This affords us the ability to use ourselves as canaries for new code
•The inactive processing cluster is set to read from the .inactive topics
•These are the standard Kafka topics with .inactive appended
•The ingestion layer has a watcher on that znode and routes any canaried customer to the .inactive topics (sketched below)
•Ex. regular traffic goes to foo.bar, canary traffic goes to foo.bar.inactive
•When we are ready to test real traffic, we mark several customers as canaries and start the monitoring process to determine any issues
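A sketch of what that znode watcher could look like with the kazoo client; the ZooKeeper hosts, znode path, and comma-separated storage format are illustrative assumptions, not our production layout:
from kazoo.client import KazooClient

CANARIES = set()
zk = KazooClient(hosts="zk1:2181")
zk.start()

@zk.DataWatch("/deploy/canary_customers")
def update_canaries(data, stat):
    # Re-read the canary set whenever the znode changes
    global CANARIES
    CANARIES = set(data.decode().split(",")) if data else set()

def topic_for(customer, base_topic):
    # Canaried customers route to the ".inactive" variant of the topic,
    # e.g. foo.bar -> foo.bar.inactive
    return base_topic + ".inactive" if customer in CANARIES else base_topic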
46. Canary customers
[Diagram: sensors connect through the external-service ELB load balancer to the event ingestor; regular traffic (customer 123) is written to the active topics and consumed by the blue processors, while canary traffic (customer 456) is written to the inactive topics and consumed by the green processors.]
50. IT tests run
•Integration tests are run
–~3000 tests in total
–Test customer must be “canaried”
•If any tests fail, we triage and determine if it is still possible to move forward
•Testing is only considered done when we are passing 100%, no exceptions!
53. Trust, but verify!
[Diagram: sensors flow through the external-service ELB load balancer to termination servers and content routers writing the "active" topics; one cluster's processors read the "active" topics while the other's read the "inactive" topics; the data plane is shared.]
•Monitor green services
•Verify health of the cluster by inspecting graphical data and log outputs
•Rerun tests with load
55. Logging and error checking
•Every server forwards its relevant logs to Splunk
•Several dashboards have been set up with common things to watch for
•Raw logs are streamed in near real-time and we watch specifically for log-level ERROR
•This is one of our most important steps, as it gives us the most insight into the health of the system as a whole
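The Splunk dashboards do the real work, but the core of the check is easy to illustrate: follow the forwarded logs and surface anything at ERROR level. A toy sketch only (the file path and match pattern are invented):
import time

def watch_for_errors(path="/var/log/forwarded/app.log", poll_seconds=2):
    # Follow a forwarded log in near real-time, like `tail -f`, and
    # flag ERROR-level lines during the deploy window
    with open(path) as f:
        f.seek(0, 2)  # start at the end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(poll_seconds)
                continue
            if " ERROR " in line:
                print("deploy check: %s" % line.strip())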
57. Moving customers over
[Diagram: termination servers in both blue and green now accept sensor traffic through the external-service ELB load balancer; content routers in both clusters write the "active" topics, and processors in both clusters consume them; the data plane is shared.]
•Flip all customers back away from canary
•Activate green cluster
•Event processors and consuming services in blue and green now write to and consume the "active" topics
•We are in a state of active-active for a few minutes
58. Each node in the data processing layer has a watcher on a particular znode which tells the environment whether it is active (use the standard Kafka topics) or inactive (append .inactive to the topics)
[Diagram: during active-active, ingestion feeds Kafka and both the blue and green processor fleets consume, one side on the active topics and the other on the inactive topics.]
59. When we are ready to make the switch, we start by making the new cluster active ("Green, switch to active!") and enter an active-active state where both processing clusters are doing work.
This is where it is paramount that code is forward compatible, since two different code bases will be doing work simultaneously.
[Diagram: ingestion feeds Kafka; the blue and green processors both consume the active topics while the inactive topic drains.]
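In practice, forward compatibility mostly means additive, tolerant serialization: the newer writer may add fields, and the older reader must ignore what it does not recognize and default what is missing. A toy JSON example (field names invented for illustration):
import json

def parse_event(raw):
    # Old (blue) reader: ignore unknown fields from newer (green) writers
    # and default fields the other side may not send yet
    event = json.loads(raw)
    return {
        "customer": event["customer"],
        "kind": event.get("kind", "unknown"),  # newer writers add this
    }

# A green-written event with an extra "severity" field still parses under blue code
print(parse_event('{"customer": "123", "kind": "alert", "severity": 5}'))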
60. However, blue and green are fully partitioned and there is no intercommunication between the clusters. This allows for things like changes in serialization for inter-service communication.
[Diagram: the blue and green processor fleets each consume the active topics in Kafka but are completely isolated from each other.]
61. Flipping the switch
[Diagram: sensor traffic now flows only to green's termination servers and content routers, which write the "active" topics; blue's termination servers fail their health checks, and blue's processors switch to the "inactive" topic; the data plane is shared.]
•We deactivate blue, which forces termination servers in blue to fail health checks, and all blue sensors disconnect
•Blue processors switch to read from the "inactive" topic
•Once all consumers of the "inactive" topic have caught up to the head of the stream, blue can be decommissioned (a sketch of this check follows)
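"Caught up to the head of the stream" is a measurable condition: each consumer's position equals the partition's end offset. A sketch using the kafka-python client (broker, topic, and group names are placeholders):
from kafka import KafkaConsumer, TopicPartition

def caught_up(broker="localhost:9092", topic="events.inactive"):
    # Blue can be decommissioned once position == end offset everywhere
    consumer = KafkaConsumer(bootstrap_servers=broker, group_id="blue-processors")
    partitions = [TopicPartition(topic, p)
                  for p in consumer.partitions_for_topic(topic)]
    consumer.assign(partitions)
    end_offsets = consumer.end_offsets(partitions)
    return all(consumer.position(tp) >= end_offsets[tp] for tp in partitions)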
62. Out with the old…
[Diagram: only the green cluster remains: sensors connect through the external-service ELB load balancer to green's termination servers, content routers, and processors, backed by the shared data plane.]
•Green is now the active cluster
•If we need to roll back code, we have a snapshot of the repository in Amazon S3
•We haven't had to roll back code… yet
65. Half-baked AMIs
We use a process to create “half-baked” AMIs, which speed up deployments
•JVM (for our Scala code base)
•Common tools and configurations
•Latest updates to make sure patches are up to date
•Build plan is run twice daily
[Diagram: the half-baked AMI seeds an Auto Scaling group of green servers, launched alongside the existing blue servers; build inputs are pulled from Amazon S3.]
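The bake step itself can be as small as snapshotting a provisioned instance. A hedged sketch with today's boto3 (the deck predates boto3; the instance ID and naming are placeholders), shown only to make the flow concrete:
import boto3

def bake_half_baked_ami(instance_id, name):
    # Snapshot an instance that already has the JVM, common tools, and
    # latest patches applied into a reusable "half-baked" AMI
    ec2 = boto3.client('ec2')
    resp = ec2.create_image(InstanceId=instance_id, Name=name, NoReboot=False)
    return resp['ImageId']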
67. How code graduates - Development
[Flow: a commit on main publishes to the development apt repo, and changed roles are auto-deployed to the development cluster.]
68. How code graduates - Production
[Flow: release-X.X.X or hotfix-X.X.X branches build into the integration apt repo and deploy to the integration cluster; the exact same binaries are then synced from the integration apt repo to the production apt repo and rolled onto the new production cluster.]
75. Data plane migrations
•Migrations applied to the database are forward only
•We have past experience with two-way migrations, but the costs outweigh the benefits
•Code must be forward compatible in case rollbacks are necessary
•Database schemas are only modified via migrations, even in development and integration environments
•We use an in-house migration service (based on Flyway) to parallelize the process; a stripped-down sketch follows
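A forward-only runner in the Flyway style; the table name, file layout, and use of psycopg2 are illustrative assumptions (the real in-house service also parallelizes across databases):
import os
import psycopg2

def migrate(conn, migrations_dir="migrations"):
    # Apply versioned .sql files in order, recording each one; never undo
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS schema_version (version text PRIMARY KEY)")
    cur.execute("SELECT version FROM schema_version")
    applied = {row[0] for row in cur.fetchall()}
    for name in sorted(os.listdir(migrations_dir)):  # e.g. V001__add_column.sql
        if not name.endswith(".sql") or name in applied:
            continue
        with open(os.path.join(migrations_dir, name)) as f:
            cur.execute(f.read())
        cur.execute("INSERT INTO schema_version VALUES (%s)", (name,))
        conn.commit()  # each migration lands exactly once, forward only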
76. Final Thoughts
•Blue-green deployments can be done in many ways
•Our requirement of never losing customer data made this the best solution for us
•The automation and tooling around our deployment system were built over many months and took a lot of work (built by 2 people, hi Dennis!)
•But it is completely worth it, knowing we have a very reliable, fault-tolerant system