AWS Lambda is a powerful and flexible tool for solving diverse business problems, from traditional grid computing to scheduled batch processing workflows. Cloud native solutions using AWS Lambda enable architectures that depart from traditional enterprise application design. These new design patterns can provide substantially increased performance and reduced costs. In this session, learn how Fannie Mae re-architected one of their mission-critical traditional grid computing applications to a modern serverless solution using AWS Lambda. Learn More: https://aws.amazon.com/government-education/
2. Agenda
- What is Serverless?
- What is AWS Lambda?
- How is it used?
- Why do I care?
- Look what Fannie Mae did!
- Total enlightenment
3. About Smartronix
• Premier Partner for all 5 years
• Inaugural Managed Services Partner
• Inaugural Migration Delivery Partner
• Inaugural Big Data Competency
• Inaugural DevOps Competency Partner
• 1st to bring the Federal government into AWS
• 1st to implement FISMA Moderate/FedRAMP solutions (NIST 800-53 Rev. 4)
• One of the largest channel resellers
• Successfully completed FedRAMP 3PAO Assessment for Managed Services
• Named Leader in Gartner MQ for Public Cloud MSP, Worldwide – March 2017
4. AWS Compute Services Overview
Service | Unit            | Layer
EC2     | Virtual Machine | Hardware
ECS     | App             | OS
Lambda  | Function        | Runtime
6. Serverless?
- Serverless : adjective - "1. I don't have to manage a virtual machine, operating system, patch management, scaling service, load balancing, availability, fault tolerance, provisioning, antivirus, anti-malware, vulnerability scanning, continuous monitoring, access control, rightsizing, server tuning, intrusion detection, hardware affinity, OS dependencies, …ad nauseam"
AND
- I only pay for what I use!
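To make the "no servers to manage" point concrete, here is a minimal sketch of a Python Lambda handler; the function and field names are illustrative, not from the Fannie Mae system. You write only this function, and Lambda supplies the event payload and context at invocation time.

```python
# Minimal sketch of an AWS Lambda handler in Python.
# Lambda calls this function per invocation -- there is no server, OS,
# or scaling logic for you to write or manage.

def handler(event, context):
    # 'event' carries the input payload, e.g. {"loan_id": 123, "scenario": "base"}
    loan_id = event.get("loan_id")
    scenario = event.get("scenario", "base")
    # ... the unit of work would run here ...
    return {"loan_id": loan_id, "scenario": scenario, "status": "ok"}


if __name__ == "__main__":
    # Local smoke test; in AWS, the Lambda runtime supplies event/context.
    print(handler({"loan_id": 123}, None))
```

Billing then tracks invocations and execution time of exactly this function, which is the "pay only for what I use" half of the definition.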
7. Too good to be true…
- OK, with some limitations:
- Limited function/code size (250 MB code package)
- Asynchronous and stateless *
- 512 MB temp directory
- 300-second runtime
- 128 MB to 1.5 GB memory limitations
- 3,000 concurrent function executions *
* Note: Many of these limitations are easily addressable!
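One common way the runtime limit is "easily addressable" is to split a large batch into chunks small enough that each invocation finishes well under the cap. A sketch, with an assumed per-item cost and budget (the numbers are illustrative, not measured):

```python
# Sketch: splitting a big batch so each Lambda invocation stays under
# the 300-second runtime limit. Chunk size below assumes ~0.5 s per item
# and a 250 s budget per invocation -- both are illustrative figures.

def chunk(items, size):
    """Yield successive fixed-size slices of a work list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

ITEMS_PER_INVOCATION = 500  # ~250 s of work at 0.5 s/item, with headroom

loans = list(range(1237))  # stand-in for a real list of loan IDs
batches = list(chunk(loans, ITEMS_PER_INVOCATION))
# Each batch becomes the event payload of one short-lived invocation.
print(len(batches))
```

The same fan-out idea also sidesteps the per-function memory cap, since each invocation holds only its own chunk in memory.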
10. Old School Problem Solving
- Problem Statement:
  - I need to run quadrillions of cash flow simulations on tens of millions of loans every month under various economic models to determine risk.
- Old School Approach:
  - Build a massive compute and shared storage infrastructure that at capacity meets the PEAK business requirement
  - License an expensive GRID control platform to orchestrate the job scheduling and data pipelines
- Old School Result:
  - Very expensive server and storage infrastructure with high management burden and inconvenient utilization
11. …Slightly Less Old School Problem Solving
- Circa 2016 "Catch All" Approach:
  - "MOVE IT TO THE CLOUD," says every IT talking head
  - "LIFT AND SHIFT," says every new cloud engineer
  - License an expensive GRID control platform to orchestrate the job scheduling and data pipelines
- Circa 2016 Result:
  - Somewhat less expensive server and storage infrastructure with high management burden and slightly more convenient utilization patterns. Better but not great.
Or… you can re-think your approach and do what Fannie Mae did!
13. Fannie Mae Business
Fannie Mae is a leading source of
financing for mortgage lenders:
• Providing access to affordable mortgage
financing in all market conditions.
• Effectively managing and reducing risk
to our business, taxpayers, and the
housing finance system.
In 2016, Fannie Mae provided $637B in
liquidity to the mortgage market, enabling
• 1.1M home purchases,
• 1.4M refinancings,
• 724K rental housing units.
14. Fannie Mae Financial Modeling
Financial Modeling is a Monte-Carlo simulation process to project future cash flows
which is used for managing the mortgage risk on a daily basis:
• Underwriting and valuation
• Risk management
• Financial reporting
• Loss mitigation and loan removal
~10 quadrillion (10×10^15) cash flow
projections each month in hundreds
of economic scenarios.
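The Monte-Carlo idea on this slide can be sketched in a few lines: project a loan's cash flows under many randomized scenarios and average the results. All rates and parameters below are invented for illustration; they are not Fannie Mae's actual models.

```python
import random

def project_cash_flow(balance, months, default_prob, rng):
    """Cash flow for one loan in one scenario (toy model).

    The borrower pays balance/months each month unless a default,
    drawn with probability default_prob per month, ends the loan.
    """
    total = 0.0
    payment = balance / months
    for _ in range(months):
        if rng.random() < default_prob:  # default this month
            break
        total += payment
    return total

def expected_cash_flow(balance, months, default_prob,
                       scenarios=10_000, seed=0):
    """Average the projection over many random scenarios."""
    rng = random.Random(seed)
    runs = [project_cash_flow(balance, months, default_prob, rng)
            for _ in range(scenarios)]
    return sum(runs) / len(runs)
```

Each scenario is independent, which is exactly what makes this workload embarrassingly parallel and a natural fit for the Lambda architecture described later.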
15. Fannie Mae Financial Modeling Infrastructure
High Performance Computing grids are the key infrastructure components for
financial modeling at Fannie Mae.
Current Environment Issues
- No longer meets growing business needs
- 7 years old with limited non-elastic compute, storage, and IO capacity
- Costly server and storage refresh
- Complex API
- Adding incremental compute capacity or developing a new application
takes more than half a year.
16. Ideal New Solution Requirements
New secure capability that helps us react to the rapidly
changing market
- Near infinite compute and unlimited storage with high availability
- Simple distributed computing API
- Efficient cost model
- Maximizes re-use of existing code base
- Short time to deploy solution
- Reduce operational burden – reliable and easy to manage
- Enable use of innovative services “adjacent” to our data
17. Fannie Mae’s Journey
In 2016, Fannie Mae began to work with AWS and Smartronix to build the first
serverless HPC computing platform in the industry using AWS Lambda. This is also
the first pilot program for Fannie Mae to develop an AWS cloud native application.
Minimal code refactoring was required and within a month we were able to run a
successful proof of concept.
By March 2017, Fannie Mae had successfully deployed the first financial
modeling application to preproduction and ran it at 15,000 concurrent executions.
By June 2017, the first workload was migrated to production!
18. Serverless HPC Reference Architecture
A map-reduce framework is used for the simple parallel workload:
• The input file in the S3 input bucket is split by EC2 into n trigger objects, which are saved in the S3 event bucket.
• Lambda automatically ramps up to n concurrent executions and writes outputs to the S3 mapper bucket.
• EC2 aggregates the mapper outputs and writes the final result to the S3 reducer bucket.
[Diagram: Amazon S3 input bucket → Amazon EC2 splitter → Amazon S3 event bucket → AWS Lambda mappers → Amazon S3 mapper-result bucket → Amazon EC2 reducer → Amazon S3 reducer-result bucket]
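A mapper in this architecture might look roughly like the sketch below: a Lambda function triggered by an S3 object-created event reads its trigger, runs its slice of the simulation, and writes the result to a mapper bucket. The bucket names, trigger format, and `run_simulation` body are assumptions for illustration; the S3 event shape and the boto3 calls are the standard ones.

```python
import json
import urllib.parse

def parse_s3_event(event):
    """Extract (bucket, key) from the S3 trigger record Lambda receives."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
    return bucket, key

def run_simulation(trigger):
    # Placeholder for the cash-flow projection over one slice of loans.
    return {"loans": trigger["loans"], "projections": len(trigger["loans"])}

def handler(event, context):
    # boto3 is imported lazily so the pure logic above is testable offline.
    import boto3
    s3 = boto3.client("s3")
    bucket, key = parse_s3_event(event)
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    result = run_simulation(json.loads(body))
    # "mapper-results" is a hypothetical bucket name, not from the talk.
    s3.put_object(Bucket="mapper-results", Key=key,
                  Body=json.dumps(result).encode())
```

Because the splitter writes n trigger objects, S3 fires n events and Lambda fans out to n concurrent mappers with no scheduler to manage.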
19. Results!
Lambda service configuration:
• Initial burst rate = 3,000, incremental rate > 240
per minute, throttle limit = 15,000.
• Lambda ramps up automatically from 3,000 to
15,000 concurrent executions.
Application result:
• One simulation run of ~ 20 million mortgages takes
1.5 hours, >4 times faster than the existing process.
• Performance doesn’t degrade during ramp up period.
• Lambdas’ CPU efficiency is close to 100%. Actual elapsed time is consistent with
the estimated elapsed time based on Lambda billing time.
[Chart: number of new Lambda invocations every 5 minutes, ramping to the maximum of 15,000 concurrent Lambdas]
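The ramp-up figures on this slide imply a simple worst-case estimate: starting from the 3,000-execution burst and growing at the stated floor of 240 per minute, reaching the 15,000 throttle limit takes about 50 minutes. The arithmetic:

```python
# Back-of-the-envelope ramp-up estimate from the slide's figures.
initial_burst = 3_000    # concurrent executions available immediately
throttle_limit = 15_000  # account-level concurrency ceiling
ramp_per_minute = 240    # conservative floor; the slide says "> 240/min"

minutes_to_ceiling = (throttle_limit - initial_burst) / ramp_per_minute
print(minutes_to_ceiling)  # 50.0 minutes at the conservative rate
```

In practice the observed ramp was faster than this floor, and the 1.5-hour simulation run comfortably absorbed the ramp-up period without degraded performance.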
20. Service Comparison
HPC Grid – On Premises           | Serverless HPC with Lambda
---------------------------------|----------------------------------
Idle or constrained capacity     | Scales to meet demand
High CapEx costs                 | Pay per use (actual vCPU usage)
High maintenance burden          | Fully managed service
Performance constrained          | Horizontal scale
Long time to add capacity        | Near infinite capacity on-demand
License fees                     | No added license fees
Long time to deliver new service | Rapid CI/CD – low complexity
Single environment availability  | High business resiliency
21. Summary
• Cloud Native thinking has potential for enormous value
• Traditional approaches can hamper your cloud adoption
• Don’t be afraid to refactor
• Establish architectural patterns with distributed systems
thinking from the start
• Serverless = Enterprise grade
• STOP DOING UNDIFFERENTIATED HEAVY LIFTING!
Focus your efforts on your code not your infrastructure.
22. Thank You!
Bin Lu, Robert Groat
rgroat@smartronix.com, @groatr
cloudassured@smartronix.com