Presentation at IBM InterConnect on March 21, 2017.
Santander is one of the largest companies in the world, yet size is no guarantee of future survival given several challenges in the retail banking industry, primarily from disruptive new startups and a changing regulatory landscape. Success requires cutting-edge cloud computing solutions that achieve better resource utilization through automatic application scaling to match demand, along with an associated, finer-grained cost model that helps distribute compute load at a lower cost. Learn how IBM and Santander partnered to create next-generation solutions for retail banking with the OpenWhisk open source project hosted on IBM Bluemix, which enables serverless architectures for event-driven programming.
Automated Apache Kafka Mocking and Testing with AsyncAPI | Hugo Guerrero, Red... (HostedbyConfluent)
Apache Kafka is being adopted as an event backbone in new organizations every day. We would love to send every byte of data through the event bus. As with traditional REST APIs, a contract-first approach is very useful when designing event-driven architectures. For asynchronous APIs, we have the AsyncAPI specification to document the endpoints, where the schema of the records becomes the main part of the contract payload. Microcks allows us to deploy a testing and mocking platform to get a unified view of the endpoints and speed up application delivery.
In this session we will:
- Go over the evolution of API specifications
- Review the approach for contract-first design with Apache Kafka
- Introduce the AsyncAPI specification
- Walk through an implementation example for automated mocking and testing
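As a rough sketch of what such a contract-first artifact can look like, here is a minimal AsyncAPI 2.0 document modeled as a Python dict; the channel name and payload schema are illustrative, not taken from the talk:

```python
# Minimal AsyncAPI 2.0 document sketched as a Python dict.
# Channel and field names below are illustrative examples only.
asyncapi_doc = {
    "asyncapi": "2.0.0",
    "info": {"title": "Order Events API", "version": "1.0.0"},
    "channels": {
        "orders.created": {
            "subscribe": {
                "message": {
                    "payload": {
                        "type": "object",
                        "properties": {
                            "orderId": {"type": "string"},
                            "amount": {"type": "number"},
                        },
                        "required": ["orderId"],
                    }
                }
            }
        }
    },
}

def list_channels(doc):
    """Return the channel names declared in an AsyncAPI document."""
    return sorted(doc.get("channels", {}))

print(list_channels(asyncapi_doc))
```

A tool such as Microcks can consume a document like this to generate mock endpoints and contract tests against the declared payload schema.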
Streaming ETL for Data Lakes using Amazon Kinesis Firehose - May 2017 AWS Onl... (Amazon Web Services)
Learning Objectives:
- Understand key requirements for collecting, preparing, and loading streaming data into data lakes
- Get an overview of transmitting data using Amazon Kinesis Firehose
- Learn how to perform data transformations with Amazon Kinesis Firehose
Data lakes enable your employees across the organization to access and analyze massive amounts of unstructured and structured data from disparate data sources, many of which generate data continuously and rapidly. Making this data available in a timely fashion for analysis requires a streaming solution that can durably and cost-effectively ingest this data into your data lake. Amazon Kinesis Firehose is a fully managed service that makes it easy to prepare and load streaming data into AWS. In this tech talk, we will provide an overview of Amazon Kinesis Firehose and dive deep into how you can use the service to collect, transform, batch, compress, and load real-time streaming data into your Amazon S3 data lakes.
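As a sketch of the transformation step, a Firehose data-transformation Lambda receives base64-encoded records and returns each one with its record ID, a result status, and the re-encoded data; the enrichment below is an illustrative example, not code from the talk:

```python
import base64
import json

def handler(event, context):
    """Sketch of an Amazon Kinesis Firehose transformation Lambda.
    The event carries base64-encoded records; each must be returned
    with its recordId, a result status, and re-encoded data."""
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        payload["processed"] = True  # illustrative enrichment step
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": base64.b64encode(json.dumps(payload).encode()).decode(),
        })
    return {"records": output}
```

Records marked "Ok" continue through the delivery stream; a real transform could also return "Dropped" or "ProcessingFailed" per record.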
Analyzing Streaming Data in Real-time with Amazon Kinesis (Amazon Web Services)
As more and more organizations strive to gain real-time insights into their business, streaming data has become ubiquitous. Typical streaming data analytics solutions require specific skills and complex infrastructure. However, with Amazon Kinesis Analytics, you can analyze streaming data in real-time with standard SQL—there is no need to learn new programming languages or processing frameworks.
In this session, we dive deep into the capabilities of Amazon Kinesis Analytics using real-world examples. We’ll present an end-to-end streaming data solution using Amazon Kinesis Streams for data ingestion, Amazon Kinesis Analytics for real-time processing, and Amazon Kinesis Firehose for persistence. We review in detail how to write SQL queries using streaming data and discuss best practices to optimize and monitor your Amazon Kinesis Analytics applications. Lastly, we discuss how to estimate the cost of the entire system.
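On the ingestion side of such a pipeline, records are typically written to Amazon Kinesis Streams in batches. The helper below is an illustrative sketch (not code from the talk) that chunks records into entries shaped for the PutRecords API, which accepts at most 500 records per call; the actual boto3 call is shown only in a comment:

```python
import json

def to_put_records_batches(items, key_fn, batch_size=500):
    """Chunk items into batches of Kinesis PutRecords entries.
    500 is the PutRecords per-call record limit; key_fn picks the
    partition key that routes each record to a shard."""
    entries = [
        {"Data": json.dumps(item).encode(), "PartitionKey": key_fn(item)}
        for item in items
    ]
    return [entries[i:i + batch_size] for i in range(0, len(entries), batch_size)]

# With boto3, each batch would then be sent with something like:
#   kinesis = boto3.client("kinesis")
#   kinesis.put_records(StreamName="my-stream", Records=batch)
```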
Best practices in IBM Operational Decision Manager Standard 8.7.0 topologies (Pierre Feillet)
This deck was presented at the IBM InterConnect conference in 2016. It describes the ODM 8.7.x architecture, integration touchpoints, and recommended topologies for DevOps.
Microservice Orchestration at any Scale - Zalando Tech Meetup 09/2017 (Zeebe)
Presentation given by Thorben Lindhauer and Daniel Meyer at Camunda Night at Zalando Inno Lab https://www.meetup.com/Zalando-Tech-Events-Berlin/events/242890035/
How event-driven architecture revolutionizes communication in the S... (Vincent Lepot)
Slides from our talk at the "Les Frenchies du web #1" meetup, presenting the event-driven architecture built at Meetic around Apache Kafka.
La Duck Conf : "Event driven : est-ce que je suis prêt ?" (OCTO Technology)
Presentation by Wassel Alazhar, OCTO Technology.
Event-driven architectures, or how to capitalize on the key moments of a business. The promise is enticing, but why are these architectures so complex to master?
How do you correctly identify the business events around which to structure your services? Which decomposition makes sense to exploit them better?
AWS Summit 2014 Brisbane - Breakout 6
A technical deep dive into 10 AWS Cloud best practices, with an in-depth look at the tips and tricks of architecting on the AWS platform.
Presenter: Dean Samuels, Solutions Architect, Amazon Web Services
This slide deck provides the basics of how to build an Azure Logic App. It was presented by Kuppurasu Nagaraj, a Microsoft MVP, during the TechMeet360 event organized by BizTalk360, held on December 17, 2016 in Coimbatore.
Following Well Architected Frameworks - Lunch and Learn.pdf (Amazon Web Services)
The AWS Well-Architected Framework enables customers to understand best practices around security, reliability, performance, cost optimization, and operational excellence when building systems on AWS. This approach helps customers make informed decisions and weigh the pros and cons of application design patterns for the cloud. In this session, you'll learn how to use the Well-Architected Framework to apply AWS guidelines and best practices to your architecture on AWS.
Amazon CI/CD Practices for Software Development Teams - SRV320 - Anaheim AWS ... (Amazon Web Services)
At Amazon, continuous integration and continuous delivery (CI/CD) techniques enable collaboration, increase agility, and deliver a high-quality product faster. In this talk, we walk you through the practices we use for both the CI and the CD of software delivery. For CI, we showcase how we incorporate pull requests to increase team collaboration. We also demonstrate how to optimize CI workflows for speed with caching, code analysis, and integration testing. For CD, we share example safety mechanisms, including canary testing, rollbacks, and Availability Zone redundancy. We use the AWS developer tools that were designed based on the internal Amazon tooling: AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, AWS CodeDeploy, and AWS X-Ray.
API Platform and Symfony: a Framework for API-driven Projects (Les-Tilleuls.coop)
Install API Platform. Design the API data model as a set of tiny plain old PHP classes. Instantly get:
- Fully featured dev environment with Symfony Flex and React containers, HTTP/2 and HTTPS support and a cache proxy
- Pagination, data validation, access control, relation embedding, filters and error handling
- Support for modern REST API formats: JSON-LD/Hydra, OpenAPI/Swagger, JSONAPI, HAL, JSON...
- GraphQL support
- An API responding in just a few milliseconds thanks to the built-in invalidation-based cache mechanism
- A dynamically created Material Design admin interface (à la Sonata / EasyAdmin, but 100% client-side) built with React
- Client apps skeletons: React/Redux, React Native, Vue.js, Angular...
Finally, deploy in one command to Google Container Engine, or to any cloud with a Kubernetes cluster, using the provided Helm chart.
Yes, all you need to do is describe a data model, just a few lines of code, to get all of that!
How can you accelerate the delivery of new, high-quality services? How can you be able to experiment and get feedback quickly from your customers? To get the most out of the agility afforded by serverless and containers, it is essential to build CI/CD pipelines that help teams iterate on code and quickly release features. In this talk, we demonstrate how developers can build effective CI/CD release workflows to manage their serverless or containerized deployments on AWS. We cover infrastructure-as-code (IaC) application models, such as AWS Serverless Application Model (AWS SAM) and new imperative IaC tools. We also demonstrate how to set up CI/CD release pipelines with AWS CodePipeline and AWS CodeBuild, and we show you how to automate safer deployments with AWS CodeDeploy.
Introducing Amazon Connect-Keynote-Enterprise Connect 2017 (Amazon Web Services)
Introducing Amazon Connect at Enterprise Connect 2017. Amazon Connect is a simple-to-use, cloud-based contact center service. This new service from Amazon Web Services is based on the same contact center technology used by Amazon customer service associates around the world to power millions of customer conversations.
Gentle introduction to Azure ARM templates and other deployment options, both imperative and declarative, such as Terraform, Ansible, or even azcli or PowerShell.
Apache Kafka vs RabbitMQ: Fit For Purpose / Decision Tree (Slim Baltagi)
Kafka as a streaming data platform is becoming the successor to traditional messaging systems such as RabbitMQ. Nevertheless, there are still some use cases where RabbitMQ can be a good fit. This single slide tries to answer, in a concise and unbiased way, where to use Apache Kafka and where to use RabbitMQ. Your comments and feedback are much appreciated.
Azure has a complete offering in the serverless space with Functions and Logic Apps. Logic Apps is a PaaS orchestration engine for microservices. We will see how to use it, for example by applying it to the IoT world.
AWS re:Invent 2016: Building Complex Serverless Applications (GPST404) (Amazon Web Services)
Provisioning, scaling, and managing physical or virtual servers—and the applications that run on them—has long been a core activity for developers and system administrators. The expanding array of managed AWS cloud services, including AWS Lambda, Amazon DynamoDB, Amazon API Gateway and more, increasingly allows organizations to focus on delivering business value without worrying about managing the underlying infrastructure or paying for idle servers and other fixed costs of cloud services. In this session, we discuss the design, development, and operation of these next-generation solutions on AWS. Whether you're developing end-user web applications or back-end data processing systems, join us in this session to learn more about building your applications without servers.
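To make the "no servers to manage" model concrete, here is a minimal sketch of an AWS Lambda function fronted by Amazon API Gateway using the proxy-integration response shape (status code, headers, and a JSON body); the greeting logic is purely illustrative:

```python
import json

def handler(event, context):
    """Sketch of an AWS Lambda handler behind Amazon API Gateway.
    With proxy integration, the event carries the HTTP request and the
    return value must provide statusCode, headers, and a string body."""
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```

The platform scales such a function automatically per request; there is no instance to provision, patch, or pay for while idle.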
In this webinar, you'll learn about the foundational security building blocks and how to start using them effectively to create robust and secure architectures. Discover how Identity and Access Management (IAM) is done and how it integrates with other AWS services. In addition, learn how to improve governance by using AWS Security Hub, AWS Config, and AWS CloudTrail to gain unprecedented visibility into activity in the account. Subsequently, use AWS Config rules to rectify configuration issues quickly and effectively.
Building a Real-Time Security Application Using Log Data and Machine Learning... (Sri Ambati)
Building a Real-Time Security Application Using Log Data and Machine Learning - Karthik Aaravabhoomi
- Powered by the open source machine learning software H2O.ai. Contributors welcome at: https://github.com/h2oai
- To view videos on H2O open source machine learning software, go to: https://www.youtube.com/user/0xdata
IBM Bluemix OpenWhisk: Serverless Conference 2016, London, UK: The Future of ... (OpenWhisk)
Learn more about IBM Bluemix OpenWhisk, a serverless, event-driven compute platform that quickly executes application logic in response to events or direct invocations from web/mobile apps or other endpoints.
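As a minimal sketch of the programming model, an OpenWhisk action in Python is just a `main` function that receives a dict of parameters and returns a JSON-serializable dict; the greeting below is an illustrative example:

```python
def main(params):
    """Sketch of an Apache OpenWhisk action in Python: the platform
    invokes main with the action's parameters merged into one dict
    and expects a JSON-serializable dict as the result."""
    name = params.get("name", "stranger")
    return {"greeting": f"Hello, {name}!"}
```

Such a file can be deployed with `wsk action create hello hello.py` and invoked with `wsk action invoke hello --param name World`, or bound to a trigger so it runs in response to events.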
Serverless Design Patterns for Rethinking Traditional Enterprise Application ... (Amazon Web Services)
AWS Lambda is a powerful and flexible tool for solving diverse business problems, from traditional grid computing to scheduled batch processing workflows. Cloud native solutions using AWS Lambda enable architectures that depart from traditional enterprise application design. These new design patterns can provide substantially increased performance and reduced costs. In this session, learn how Fannie Mae re-architected one of their mission-critical traditional grid computing applications to a modern serverless solution using AWS Lambda. Learn More: https://aws.amazon.com/government-education/
Cloud-native Data: Every Microservice Needs a Cache (Cornelia Davis)
Presented at the Pivotal Toronto Users Group, March 2017
Cloud-native applications form the foundation for modern, cloud-scale digital solutions, and the patterns and practices for cloud-native at the app tier are becoming widely understood – statelessness, service discovery, circuit breakers and more. But little has changed in the data tier. Our modern apps are often connected to monolithic shared databases that have monolithic practices wrapped around them. As a result, the autonomy promised by moving to a microservices application architecture is compromised.
With lessons from the application tier to guide us, the industry is now figuring out what the cloud-native architectural patterns are at the data tier. Join us to explore some of these with Cornelia Davis, a five-year Cloud Foundry veteran who is now focused on cloud-native data. As it happens, every microservice needs a cache, and this session will drill deep into that topic. She'll cover a variety of caching patterns and use cases, and demonstrate how their use helps preserve the autonomy that is driving agile software delivery practices today.
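One of the simplest of those caching patterns is a read-through cache owned by each microservice. The sketch below is illustrative (the class name and TTL handling are assumptions, not from the talk): on a miss, the cache reads through to the backing source of truth, so the service stays autonomous from the shared database on the hot path:

```python
import time

class ReadThroughCache:
    """Minimal read-through cache sketch: each microservice keeps its
    own cache in front of a backing store, so repeated reads avoid the
    shared database. TTL-based expiry keeps entries from going stale."""

    def __init__(self, loader, ttl_seconds=60):
        self._loader = loader      # fetches from the source of truth on a miss
        self._ttl = ttl_seconds
        self._store = {}           # key -> (value, expires_at)

    def get(self, key):
        value, expires_at = self._store.get(key, (None, 0.0))
        if time.monotonic() < expires_at:
            return value           # cache hit: no call to the backing service
        value = self._loader(key)  # cache miss: read through and remember
        self._store[key] = (value, time.monotonic() + self._ttl)
        return value
```

In production this role is usually played by a dedicated cache service (e.g. an in-memory data grid) rather than an in-process dict, but the read-through contract is the same.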
Cloud Native Architectures with an Open Source, Event Driven, Serverless Plat... (Daniel Krook)
IBM keynote at CloudNativeCon / KubeCon in Seattle, Washington on November 8, 2016.
https://cnkc16.sched.org/event/8K4c
New cloud programming models enabled by serverless architectures are emerging, allowing developers to focus more sharply on creating their applications and less on managing their infrastructure. The OpenWhisk project, started by IBM, provides an open source platform to enable these cloud-native, event-driven applications.
Daniel Krook, Senior Software Engineer, IBM
Slides from the talk given by Carmine Spagnuolo (Postdoctoral Research Fellow, Università degli Studi di Salerno / ACT OR), titled "Technology insights: Decision Science Platform", at the Decision Science Forum 2019, the most important Italian event on decision science.
Intel IT Open Cloud - What's under the Hood and How do we Drive it? (Odinot Stanislas)
Intel's IT organization is reinventing itself and setting out to act as a "Cloud Service Provider". The transformation is underway, with a federated, interoperable, and open cloud on the agenda, as well as a maturity framework, DevOps, and risk-taking. In short, really interesting.
Today, the large public clouds, Azure and AWS, deploy a diversity of services and features at high speed. Between Azure Functions, Lambda, Event Grid, Simple Workflow Service, and Logic Apps, what should you choose? Should you go with microservices? Event-driven? Lambda architecture? Deploy on serverless? Containers? Modern compute? Let's put a bit of order into all that. Enter modern architecture, the foundation of the whole new wave of cloud services, and not only those. A session focused on application and infrastructure architecture, with live cloud-based examples and the perspectives and roadmap of the corresponding Microsoft services.
The Ideal Approach to Application Modernization; Which Way to the Cloud? (Codit)
Determine your best way to modernize your organization’s applications with Microsoft Azure.
Want to know more? Don't hesitate to download our White Paper 'Making the Move to Application Modernization; Your Compass to Cloud Native': http://bit.ly/39XylZp
Containers as Infrastructure for New Gen Apps (Khalid Ahmed)
Khalid will share on emerging container technologies and their role in supporting an agile cloud-native application development model. He will discuss the basics of containers compared to traditional virtualization, review use cases, and explore the open-source container management ecosystem.
DeFi, short for Decentralized Finance, is a movement that aims to offer financial services and products that are open to everyone, without the need for intermediaries.
Commit to the Cause, Push for Change: Contributing to Call for Code Open Sour... (Daniel Krook)
Materials for the OPEN TALK: Commit to the Cause, Push for Change: Contributing to Call for Code Open Source Projects session at DeveloperWeek Virtual on February 18, 2020
https://www.developerweek.com/conference/
Daniel Krook - IBM, Chief Technology Officer for the Call for Code Global Initiative
Andres Meira - Grillo, Founder & CEO
Lakshyana K.C. - Build Change, Technology Consultant
Call for Code is a multi-year program that calls on developers to create practical, effective, and high-quality applications based on one or more IBM Cloud services (for example, web, mobile, data, analytics, AI, IoT, or weather) or Red Hat platforms (including OpenShift) to build solutions that can have an immediate and lasting impact on humanitarian issues as open source projects. In this session you'll learn more about the solutions built to tackle natural hazards, climate change, and the pandemic. What sets Call for Code apart from other technology-for-good competitions is the commitment to deploy the winning solutions with the IBM Service Corps and to help teams build sustainable open source communities through The Linux Foundation. Join us at this talk to hear about the most recent winning projects, get an update on previous years' progress, and learn about how to contribute to two projects directly from the developers.
Engaging Open Source Developers to Develop Tech for Good through Code and Res... (Daniel Krook)
Materials for the Engaging Open Source Developers to Develop Tech for Good through Code and Response™ with The Linux Foundation session at Open Source Summit on July 1, 2020
https://sched.co/c3YP
The Call for Code Global Initiative is a five-year program that calls on developers to create practical, effective, and high-quality applications based on one or more IBM Cloud services (for example, web, mobile, data, analytics, AI, IoT, or weather) or Red Hat platforms (including OpenShift) to build solutions that can have an immediate and lasting impact on humanitarian issues as open source projects. Building on the success of the 2018 and 2019 competitions, the Call for Code 2020 Global Challenge asks teams of developers, data scientists, designers, business analysts, subject matter experts and more to build solutions that significantly address climate change through solutions for energy and water sustainability and resilience to natural disasters. Learn about this year's Call for Code Challenge (which has a top prize of $200K USD), be inspired by the 2018 and 2019 winners (Project OWL and Prometeo), and discover the new Code and Response™ with The Linux Foundation initiative.
COVID-19 and Climate Change Action Through Open Source Technology (Daniel Krook)
Materials for the COVID-19 and Climate Change Action Through Open Source Technology keynote at DeveloperWeek on June 16, 2020
https://www.developerweek.com/global/
Call for Code is a five-year program that inspires developers to create practical, effective, and high-quality applications that can have an immediate and lasting impact on humanitarian issues as sustainable open source projects. Building on the success of the 2018 and 2019 competitions, the Call for Code 2020 Global Challenge asks teams of programmers, data scientists, designers, business analysts, subject matter experts, and more to build solutions that significantly address climate change through solutions for energy and water sustainability and disaster resiliency. A second track was added for solutions to the social and business aspects of COVID-19, which include crisis communications, remote education, and community cooperation. Learn about this year's Call for Code Challenge (which has a top prize of $200K USD), be inspired by the 2018 and 2019 winners (Project OWL and Prometeo), and discover the new Code and Response™ with The Linux Foundation initiative, which supports the most promising solutions.
Materials for the Serverless APIs with Apache OpenWhisk session at OSCON on July 19, 2018
https://conferences.oreilly.com/oscon/oscon-or/public/schedule/detail/67393
Ever been frustrated with a conference schedule app that freezes up when everyone opens it right after the first day’s keynotes? Ever played a mobile game that was so popular that its backend couldn’t keep up with real-time multiplayer interaction? If you’re an app developer, chances are that you’re looking for a better mobile backend architecture that can effectively match user demand at the exact moment it’s needed while taking advantage of new per-request cost models promised by serverless technologies.
The Apache OpenWhisk project (supported by IBM, Adobe, Red Hat, and others) provides a polyglot, autoscaling environment for deploying cloud-native applications driven by data, message, and REST API call events. Daniel Krook explains why serverless architectures are great for cloud workloads and when to consider OpenWhisk in particular for your next web, mobile, IoT, bot, or analytics project.
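The programming model described above can be sketched concretely: an OpenWhisk action is a plain function that receives event parameters as a dictionary and returns a dictionary result. A minimal Python action might look like this (the `name` parameter is illustrative, not part of any real event schema):

```python
# Minimal Apache OpenWhisk action in Python: the platform invokes `main`
# with a dict of event parameters and expects a JSON-serializable dict back.
def main(params):
    # `name` is an illustrative parameter; real invocations carry payloads
    # from triggers such as message feeds, database changes, or REST calls.
    name = params.get("name", "world")
    return {"greeting": "Hello, {}!".format(name)}
```

Deployed to OpenWhisk, a function like this scales with demand and is billed only for the time it actually runs.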
Workshop: Develop Serverless Applications with IBM Cloud FunctionsDaniel Krook
Materials for the IBM Cloud Functions workshop at Index on February 20, 2018
https://developer.ibm.com/indexconf/
http://bit.ly/index-serverless
Learn the basics and strengths of IBM Cloud Functions (powered by Apache OpenWhisk). In this workshop, you will learn how to develop serverless applications composed of loosely coupled microservice-like functions. You'll play with the CLI and development tools becoming an IBM Cloud Functions star by implementing a weather bot using IBM's Weather Company Data service and Slack. You will also investigate how to use other components like our API Gateway integration. Finally, you will get a preview of new technologies we are developing for IBM Cloud Functions.
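As a hedged sketch of the kind of weather bot built in this workshop, the action below turns a forecast payload into a Slack-style message. The field names are assumptions for illustration, not the actual Weather Company Data or Slack service schemas:

```python
# Hypothetical weather-bot action: formats a forecast payload into a
# Slack-compatible message object. All field names are illustrative only.
def main(params):
    city = params.get("city", "your area")
    narrative = params.get("narrative", "no forecast available")
    temperature = params.get("temperature")
    if temperature is not None:
        text = "Forecast for {}: {}, around {} C".format(city, narrative, temperature)
    else:
        text = "Forecast for {}: {}".format(city, narrative)
    # A Slack incoming webhook accepts a JSON body with a "text" field.
    return {"text": text}
```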
Event specifications, state of the serverless landscape, and other news from ...Daniel Krook
Presentation at Serverlessconf Paris on February 15, 2018.
https://paris.serverlessconf.io/
This is an update to the earlier talk at Serverlessconf NYC:
https://www.slideshare.net/DanielKrook/the-cncf-on-serverless
The Cloud Native Computing Foundation (CNCF) Serverless Working Group - with participation from IBM, AWS, Microsoft, Red Hat, VMware, Nuclio, Serverless Inc., Huawei and many others - has been working on an open eventing specification and mapping the state of the serverless landscape, including the features of public cloud serverless platforms and the capabilities of on premises and open source Functions-as-a-Service projects. In this lightning talk you'll hear about those efforts, see the newly published whitepaper on serverless use cases, and learn how you can help steer serverless adoption through participation in the CNCF.
The CNCF point of view on Serverless
Presentation at Serverlessconf NYC on October 11, 2017.
https://nyc.serverlessconf.io/
The CNCF Serverless Working Group - with participation from IBM, AWS, Google, Huawei, Red Hat, VMware and many others - has been working on guidance to help end developers understand serverless computing relative to other cloud-native deployment options such as container orchestration (for example, Kubernetes) and Platform-as-a-Service (for example, Cloud Foundry and OpenShift). A soon-to-be-published whitepaper aims to educate users about the right workloads for serverless, help them make sense of the landscape of service providers, and recommend open source projects for inclusion in the CNCF. In this lightning talk you'll hear about our work and learn how you can help steer serverless adoption and project support from the CNCF.
Serverless architectures are rapidly gaining interest from developers but it can be hard to understand when a serverless platform makes the most sense for their next application and how long a given provider might be around to support their apps. The CNCF aims to help users learn about serverless and support emerging open source projects that can run, debug, and monitor the next generation of cloud-native applications.
Building serverless applications with Apache OpenWhisk and IBM Cloud FunctionsDaniel Krook
Presentation at Functions17 in Toronto, Canada on August 25, 2017.
https://functions.world
Video, code, links: https://github.com/krook/functions17
Apache OpenWhisk on IBM Bluemix provides a powerful and flexible environment for deploying cloud-native applications driven by data, message, and API call events. Daniel Krook explains why serverless architectures are attractive for many emerging cloud workloads and when you should consider OpenWhisk for your next project. Daniel then shows you how to get started with OpenWhisk on IBM Cloud Functions right away, using several samples on GitHub.
Daniel Krook, Software Architect & Developer Advocate, IBM
Building serverless applications with Apache OpenWhiskDaniel Krook
IBM presentation at the O'Reilly Open Source Convention in Austin, Texas on May 10, 2017.
https://conferences.oreilly.com/oscon/oscon-tx/public/schedule/detail/61295
Apache OpenWhisk on IBM Bluemix provides a powerful and flexible environment for deploying cloud-native applications driven by data, message, and API call events. Daniel Krook explains why serverless architectures are attractive for many emerging cloud workloads and when you should consider OpenWhisk for your next project. Daniel then shows you how to get started with OpenWhisk on Bluemix right away, using several samples on GitHub.
Daniel Krook, Software Architect, IBM
Containers vs serverless - Navigating application deployment optionsDaniel Krook
IBM presentation at the O'Reilly Open Source Convention Container Day in Austin, Texas on May 9, 2017.
https://conferences.oreilly.com/oscon/oscon-tx/public/schedule/detail/61403
New technologies seem to arrive fast and furious these days. We were just getting used to our new container world when serverless arrived. But is it better, faster, and cheaper, as the hype suggests?
Daniel Krook explores a real application packaged using popular open source container technology and walks you through a migration to an event-oriented serverless paradigm, discussing the trade-offs and pros and cons of each approach to application deployment and examining when serverless benefits applications and when it doesn't.
You’ll learn considerations for using serverless API frameworks and how to reuse some of your containerization strategy as you move from more traditional application models to an event-driven world.
Daniel Krook, Software Architect, IBM
Serverless architectures built on an open source platformDaniel Krook
IBM keynote at the O'Reilly Software Architecture Conference in New York City on April 5, 2017.
https://conferences.oreilly.com/software-architecture/sa-ny/public/schedule/detail/60432
Daniel Krook explores Apache OpenWhisk on IBM Bluemix, which provides a powerful and flexible environment for deploying cloud-native applications driven by data, message, and API call events.
Daniel Krook, Software Architect, IBM
Build a cloud native app with OpenWhiskDaniel Krook
IBM OpenWhisk presentation and demo for developerWorks TV on December 14, 2016.
https://developer.ibm.com/tv/build-a-cloud-native-app-with-apache-openwhisk/
New cloud programming models enabled by serverless architectures are emerging, allowing developers to focus more sharply on creating their applications and less on managing their infrastructure. The OpenWhisk project started by IBM provides an open source platform to enable these cloud native, event driven applications.
At this live coding event, Daniel Krook provides an overview of serverless architectures, introduces the OpenWhisk programming model, and then deploys an OpenWhisk application on IBM Bluemix, step by step, while you watch.
Daniel Krook, Senior Software Engineer, IBM
Open Container Technologies and OpenStack - Sorting Through Kubernetes, the O...Daniel Krook
Presentation at the OpenStack Summit in Barcelona, Spain on October 25, 2016.
http://bit.ly/os-kub-oci-cncf
Containers along with next generation topics such as orchestration and serverless computing continue to draw interest across the application developer and data center operator communities because of the enormous potential of the technology and the rapid pace of change.
As the potential of Docker continues to evolve, Kubernetes emerges as the leading orchestration technology, and the OpenStack Magnum project has matured, many want to see shared governance over the baseline container specification and associated runtime and format/image to protect investments and enable confident adoption of this emerging technology.
Join this session to learn the latest about the Open Container Initiative (www.opencontainers.org) and the Cloud Native Computing Foundation (cncf.io) - both collaborative projects of the Linux Foundation - that drive the latest cloud native technologies and projects and see how they relate to Magnum and Kuryr.
Daniel Krook, Senior Software Engineer, IBM
Jeffrey Borek, Program Director, Open Tech, IBM
Sarah Novotny, Senior Kubernetes Community Manager, Google
Serverless architectures are one of the hottest trends in cloud computing this year, and for good reason. There are several technical capabilities and business factors coming together to make this approach compelling from both an application development and deployment cost perspective. The new OpenWhisk project provides an open source platform to enable these cloud-native, event-driven applications.
This talk will lay out the technical and business drivers behind the rise of serverless architectures, provide an introduction to the OpenWhisk open source project (and describe how it differs from other services like AWS Lambda), and give a demonstration showing how to start developing with this new cloud computing model using the OpenWhisk implementation available on IBM Bluemix.
Presented on October 12, 2016 at the NYC Bluemix meetup
OpenWhisk - A platform for cloud native, serverless, event driven appsDaniel Krook
Cloud computing has recently evolved to enable developers to write cloud native applications better, faster, and cheaper using serverless technology.
OpenWhisk provides an open source platform to enable cloud native, serverless, event driven applications.
This presentation lays out the technical and business drivers behind the rise of serverless architectures, and provides an intro to the OpenWhisk open source project.
Presented at Cloud Native Day in Toronto, Canada on August 25, 2016.
Containers, OCI, CNCF, Magnum, Kuryr, and You!Daniel Krook
Presentation at the OpenStack Summit in Austin, Texas on April 28, 2016.
http://bit.ly/os-oci-cncf-ses
The technology industry has been abuzz about cloud workload containerization since the open source Docker project became a phenomenon in early 2014.
Meanwhile, an OpenStack Containers Team was formed and the Magnum project launched to provide users with a convenient Containers-as-a-Service solution for OpenStack environments.
As the potential of both technologies emerged, many wanted to see shared governance over the baseline container specification and runtime technology to ensure an open cloud ecosystem.
This past December, two new groups were launched with a goal of creating open, industry standards: the first is called the Open Container Initiative (http://www.opencontainers.org), and the second is called the Cloud Native Computing Foundation (http://cncf.io).
Jeffrey Borek - Program Director, Open Tech, IBM - @JeffBorek
Daniel Krook - Senior Software Engineer, IBM - @DanielKrook
Val Bercovici - Global Cloud CTO, NetApp/SolidFire - @valb00
Taking the Next Hot Mobile Game Live with Docker and IBM SoftLayerDaniel Krook
Presentation at the IBM InterConnect Conference in Las Vegas, Nevada on February 24, 2016.
Mobile games are the fastest-growing sector of the $70 billion video game industry, far outpacing traditional consoles. But companies that aspire to create the next hot title have to account for more than just the app downloaded to a user device. They must prepare for huge spikes in game play with scalable backends to handle massive data and transactions behind socially linked user profiles and global leaderboards. This talk looks at how IBM successfully partnered with Firemonkeys, a major studio that had hit their vertical scaling limit, to design and deploy a new Docker-based architecture on SoftLayer. This scale-out architecture is able to handle an order of magnitude more customers for their next major release.
CAPS: What's best for deploying and managing OpenStack? Chef vs. Ansible vs. ...Daniel Krook
Presentation at the OpenStack Summit in Tokyo, Japan on October 29, 2015.
http://sched.co/49vI
This talk will cover the pros and cons of four different OpenStack deployment mechanisms: Puppet, Chef, Ansible, and Salt. Each claims to make it much easier to configure and maintain hundreds of OpenStack deployment resources. With the advent of large-scale, highly available OpenStack deployments spread across multiple global regions, the choice of deployment methodology has become more and more relevant.
Beyond the initial day-one deployment, when it comes to the day-two and beyond questions of updating and upgrading existing OpenStack deployments, it becomes all the more important to choose the right tool.
Come join the Blue Box and IBM team to discuss the pros and cons of these approaches. We look at each of these four tools in depth, explore their design and function, and determine which best addresses your particular deployment needs.
Daniel Krook - Senior Software Engineer, Cloud and Open Source Technologies, IBM
Paul Czarkowski - Cloud Engineer at Blue Box, an IBM company
The Containers Ecosystem, the OpenStack Magnum Project, the Open Container In...Daniel Krook
Presentation at the OpenStack Summit in Tokyo, Japan on October 27, 2015.
http://sched.co/49x0
The technology industry has been abuzz about cloud workload containerization since the open source Docker project became a phenomenon in early 2014.
Meanwhile, an OpenStack Containers Team was formed and the Magnum project launched to provide users with a convenient Containers-as-a-Service solution for OpenStack environments.
As the potential of both technologies emerged, many wanted to see shared governance over the baseline container specification and runtime technology to ensure an open cloud ecosystem.
This past June, a new group was formed with a goal of creating open, industry standards around container formats and runtimes, called the Open Container Initiative (http://www.opencontainers.org).
So how will OpenStack Magnum influence - and be influenced by - the new OCI group? Why is the OCI under the stewardship of the Linux Foundation? What is the scope of the OCI effort? What project goals and/or principles will guide their work?
Attend this session to learn the following:
* A brief history of the open container ecosystem and the major benefits that containerization provides
* An overview of the Magnum CaaS plugin architecture and design goals
* Insider details on the progress of the Linux Foundation Open Container Initiative (and the related Cloud Native Computing Foundation)
* What it all means for deploying container orchestration engines on your cloud with OpenStack Magnum
Megan Kostick - Software Engineer, Cloud and Open Source Technologies, IBM
Daniel Krook - Senior Software Engineer, Cloud and Open Source Technologies, IBM
Jeffrey Borek - WW Program Director, Open Technologies and Partnerships, Cloud Computing
Quickly build and deploy a scalable OpenStack Swift application using IBM Blu...Daniel Krook
Slides from the 2015 OpenStack Summit on May 18.
http://sched.co/35rZ
Sample code here: http://bit.ly/ibm-bos
Object Storage services are a powerful tool when used as a backing store for your application and OpenStack Swift is now easy to integrate with your application. In this interactive session, IBM developers will demonstrate how you can use Bluemix (IBM's Cloud Foundry offering) and IBM DevOps Services to create a scalable Node.js application backed by Swift. The session will show how - using only a browser - a developer can employ Bluemix tools to clone, develop, deploy, and manage an application in minutes. The team will then describe how developers can then extend the application by using another one of the available services or by incorporating Bluemix into their existing developer workflows.
Paketo Buildpacks : la meilleure façon de construire des images OCI? DevopsDa...Anthony Dahanne
Buildpacks have existed for more than 10 years! At first, they were used to detect and build an application before deploying it to certain PaaS platforms. Then, with their latest generation, the Cloud Native Buildpacks (a CNCF incubating project), we gained the ability to create Docker (OCI) images. Are they a good alternative to a Dockerfile? What are the Paketo buildpacks? Which communities support them, and how?
Come find out in this ignite session.
SOCRadar Research Team: Latest Activities of IntelBrokerSOCRadar
The European Union Agency for Law Enforcement Cooperation (Europol) has suffered an alleged data breach after a notorious threat actor claimed to have exfiltrated data from its systems. Infamous data leaker IntelBroker posted on the even more infamous BreachForums hacking forum, saying that Europol suffered a data breach this month.
The alleged breach affected Europol agencies CCSE, EC3, Europol Platform for Experts, Law Enforcement Forum, and SIRIUS. Infiltration of these entities can disrupt ongoing investigations and compromise sensitive intelligence shared among international law enforcement agencies.
However, this is neither the first nor the last activity of IntelBroker. We have compiled for you what happened in the last few days. To track such hacker activities on dark web sources like hacker forums, private Telegram channels, and other hidden platforms where cyber threats often originate, you can check SOCRadar's Dark Web News.
Stay Informed on Threat Actors’ Activity on the Dark Web with SOCRadar!
Climate Science Flows: Enabling Petabyte-Scale Climate Analysis with the Eart...Globus
The Earth System Grid Federation (ESGF) is a global network of data servers that archives and distributes the planet's largest collection of Earth system model output for thousands of climate and environmental scientists worldwide. Many of these petabyte-scale data archives are located in proximity to large high-performance computing (HPC) or cloud computing resources, but the primary workflow for data users consists of transferring data and applying computations on a different system. As part of the ESGF 2.0 US project (funded by the United States Department of Energy Office of Science), we developed pre-defined, on-demand data workflows capable of applying many data reduction and data analysis operations to the large ESGF data archives and transferring only the resultant analysis (e.g., visualizations, smaller data files). In this talk, we will showcase a few of these workflows, highlighting how Globus Flows can be used for petabyte-scale climate analysis.
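The pattern described here, running the reduction next to the data and moving only the small result, can be illustrated with a toy summary step (a sketch only; real ESGF workflows are defined as Globus Flows over petabyte-scale archives):

```python
# Toy data-reduction step: instead of transferring a large time series,
# a workflow stage returns only a compact summary of it.
def summarize(series):
    n = len(series)
    return {
        "count": n,
        "min": min(series),
        "max": max(series),
        "mean": sum(series) / n,
    }
```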
Enhancing Project Management Efficiency_ Leveraging AI Tools like ChatGPT.pdfJay Das
With the advent of artificial intelligence (AI) tools, project management processes are undergoing a transformative shift. By using tools like ChatGPT and Bard, organizations can empower their leaders and managers to plan, execute, and monitor projects more effectively.
First Steps with Globus Compute Multi-User EndpointsGlobus
In this presentation we will share our experiences around getting started with the Globus Compute multi-user endpoint. Working with the Pharmacology group at the University of Auckland, we have previously written an application using Globus Compute that can offload computationally expensive steps in the researcher's workflows, which they wish to manage from their familiar Windows environments, onto the NeSI (New Zealand eScience Infrastructure) cluster. Some of the challenges we have encountered were that each researcher had to set up and manage their own single-user globus compute endpoint and that the workloads had varying resource requirements (CPUs, memory and wall time) between different runs. We hope that the multi-user endpoint will help to address these challenges and share an update on our progress here.
Exploring Innovations in Data Repository Solutions - Insights from the U.S. G...Globus
The U.S. Geological Survey (USGS) has made substantial investments in meeting evolving scientific, technical, and policy driven demands on storing, managing, and delivering data. As these demands continue to grow in complexity and scale, the USGS must continue to explore innovative solutions to improve its management, curation, sharing, delivering, and preservation approaches for large-scale research data. Supporting these needs, the USGS has partnered with the University of Chicago-Globus to research and develop advanced repository components and workflows leveraging its current investment in Globus. The primary outcome of this partnership includes the development of a prototype enterprise repository, driven by USGS Data Release requirements, through exploration and implementation of the entire suite of the Globus platform offerings, including Globus Flow, Globus Auth, Globus Transfer, and Globus Search. This presentation will provide insights into this research partnership, introduce the unique requirements and challenges being addressed and provide relevant project progress.
Providing Globus Services to Users of JASMIN for Environmental Data AnalysisGlobus
JASMIN is the UK’s high-performance data analysis platform for environmental science, operated by STFC on behalf of the UK Natural Environment Research Council (NERC). In addition to its role in hosting the CEDA Archive (NERC’s long-term repository for climate, atmospheric science & Earth observation data in the UK), JASMIN provides a collaborative platform to a community of around 2,000 scientists in the UK and beyond, providing nearly 400 environmental science projects with working space, compute resources and tools to facilitate their work. High-performance data transfer into and out of JASMIN has always been a key feature, with many scientists bringing model outputs from supercomputers elsewhere in the UK, to analyse against observational or other model data in the CEDA Archive. A growing number of JASMIN users are now realising the benefits of using the Globus service to provide reliable and efficient data movement and other tasks in this and other contexts. Further use cases involve long-distance (intercontinental) transfers to and from JASMIN, and collecting results from a mobile atmospheric radar system, pushing data to JASMIN via a lightweight Globus deployment. We provide details of how Globus fits into our current infrastructure, our experience of the recent migration to GCSv5.4, and of our interest in developing use of the wider ecosystem of Globus services for the benefit of our user community.
Cyaniclab : Software Development Agency Portfolio.pdfCyanic lab
CyanicLab, an offshore custom software development company based in Sweden, India, and Finland, is your go-to partner for startup development and innovative web design solutions. Our expert team specializes in crafting cutting-edge software tailored to meet the unique needs of startups and established enterprises alike. From conceptualization to execution, we offer comprehensive services including web and mobile app development, UI/UX design, and ongoing software maintenance. Ready to elevate your business? Contact CyanicLab today and let us propel your vision to success with our top-notch IT solutions.
Quarkus Hidden and Forbidden ExtensionsMax Andersen
Quarkus has a vast extension ecosystem and is known for its subsonic and subatomic feature set. Some of these features are not as well known, and some extensions are less talked about, but that does not make them less interesting - quite the opposite.
Come join this talk to see some tips and tricks for using Quarkus and some of the lesser known features, extensions and development techniques.
TROUBLESHOOTING 9 TYPES OF OUTOFMEMORYERRORTier1 app
Even though 'java.lang.OutOfMemoryError' appears on the surface to be one single error, there are underneath it 9 distinct types of OutOfMemoryError. Each type has different causes, diagnosis approaches, and solutions. This session equips you with the knowledge, tools, and techniques needed to troubleshoot and conquer OutOfMemoryError in all its forms, ensuring smoother, more efficient Java applications.
Into the Box Keynote Day 2: Unveiling amazing updates and announcements for modern CFML developers! Get ready for exciting releases and updates on Ortus tools and products. Stay tuned for cutting-edge innovations designed to boost your productivity.
OpenFOAM solver for Helmholtz equation, helmholtzFoam / helmholtzBubbleFoamtakuyayamamoto1800
In this slide, we show the simulation example and the way to compile this solver.
In this solver, the Helmholtz equation can be solved by helmholtzFoam. Also, the Helmholtz equation with uniformly dispersed bubbles can be simulated by helmholtzBubbleFoam.
How to Position Your Globus Data Portal for Success Ten Good PracticesGlobus
Science gateways allow science and engineering communities to access shared data, software, computing services, and instruments. Science gateways have gained a lot of traction in the last twenty years, as evidenced by projects such as the Science Gateways Community Institute (SGCI) and the Center of Excellence on Science Gateways (SGX3) in the US, The Australian Research Data Commons (ARDC) and its platforms in Australia, and the projects around Virtual Research Environments in Europe. A few mature frameworks have evolved with their different strengths and foci and have been taken up by a larger community such as the Globus Data Portal, Hubzero, Tapis, and Galaxy. However, even when gateways are built on successful frameworks, they continue to face the challenges of ongoing maintenance costs and how to meet the ever-expanding needs of the community they serve with enhanced features. It is not uncommon that gateways with compelling use cases are nonetheless unable to get past the prototype phase and become a full production service, or if they do, they don't survive more than a couple of years. While there is no guaranteed pathway to success, it seems likely that for any gateway there is a need for a strong community and/or solid funding streams to create and sustain its success. With over twenty years of examples to draw from, this presentation goes into detail for ten factors common to successful and enduring gateways that effectively serve as best practices for any new or developing gateway.
Enhancing Research Orchestration Capabilities at ORNL.pdfGlobus
Cross-facility research orchestration comes with ever-changing constraints regarding the availability and suitability of various compute and data resources. In short, a flexible data and processing fabric is needed to enable the dynamic redirection of data and compute tasks throughout the lifecycle of an experiment. In this talk, we illustrate how we easily leveraged Globus services to instrument the ACE research testbed at the Oak Ridge Leadership Computing Facility with flexible data and task orchestration capabilities.
In software engineering, the right architecture is essential for robust, scalable platforms. Wix has undergone a pivotal shift from event sourcing to a CRUD-based model for its microservices. This talk will chart the course of this pivotal journey.
Event sourcing, which records state changes as immutable events, provided robust auditing and "time travel" debugging for Wix Stores' microservices. Despite its benefits, the complexity it introduced in state management slowed development. Wix responded by adopting a simpler, unified CRUD model. This talk will explore the challenges of event sourcing and the advantages of Wix's new "CRUD on steroids" approach, which streamlines API integration and domain event management while preserving data integrity and system resilience.
Participants will gain valuable insights into Wix's strategies for ensuring atomicity in database updates and event production, as well as caching, materialization, and performance optimization techniques within a distributed system.
Join us to discover how Wix has mastered the art of balancing simplicity and extensibility, and learn how the re-adoption of the modest CRUD has turbocharged their development velocity, resilience, and scalability in a high-growth environment.
Navigating the Metaverse: A Journey into Virtual Evolution"Donna Lenk
Join us for an exploration of the Metaverse's evolution, where innovation meets imagination. Discover new dimensions of virtual events, engage with thought-provoking discussions, and witness the transformative power of digital realms."
Serverless Architectures in Banking: OpenWhisk on IBM Bluemix at Santander
1. Serverless Architectures in Banking:
Apache OpenWhisk on IBM Bluemix at Santander
IBM InterConnect 2017 – March 21, 2017
2. About the speakers
Daniel Krook
Software Architect/Engineer
& Developer Advocate at IBM
krook@us.ibm.com
Luis Enriquez
Head of Platform Engineering &
Architecture at Santander Group
luis.enriquez@gruposantander.com
3. Agenda
1) Serverless architectures
2) Apache OpenWhisk on IBM Bluemix
3) Check processing overview and solution
4) Results, conclusions, future directions
5. Goals and results of the OpenWhisk Proof of Concept
Goals & Principles
• Hybrid solution
• Greater deployment choices
• Avoid vendor lock-in
• Scalability and elasticity
• Respond to workload peaks
• Asynchronous and event-driven
• Developer-friendly solution
• Efficiency
Results
• Automated process, reducing time
and error avoidance
• Elasticity, bursting into the cloud
• Simple and easy to maintain
technical solution
• Significant cost saving potential
6. Agenda
1) Serverless architectures
2) Apache OpenWhisk on IBM Bluemix
3) Check processing overview and solution
4) Results, conclusions, future directions
7. With a serverless platform, developers focus more on code, less on infrastructure
Decreasing concern (and control) over stack implementation: bare metal → virtual machines → containers → functions
Increasing focus on business logic
8. Serverless platforms address 12 Factors for developers
I Codebase Handled by developer (Manage versioning of functions themselves)
II Dependencies Handled by developer, facilitated by serverless platform (Runtimes and packages)
III Config Handled by platform (Environment variables or injected event parameters)
IV Backing services Handled by platform (Connection information injected as event parameters)
V Build, release, run Handled by platform (Deployed resources immutable and internally versioned)
VI Processes Handled by platform (Single stateless containers used)
VII Port binding Handled by platform (Actions or functions automatically discovered)
VIII Concurrency Handled by platform (Process model hidden and scales in response to demand)
IX Disposability Handled by platform (Lifecycle hidden from user, fast startup and elastic scale prioritized)
X Dev/prod parity Handled by developer (Developer is deployer. Scope of what differs narrower)
XI Logs Handled by platform (Developer writes to console.log, platform streams logs)
XII Admin processes Handled by developer (No distinction between one off processes and long running)
9. 8
Emerging workloads are a good fit for event driven programming
Execute app logic in response to database change
Perform edge analytics in response to sensor input
Provide cognitive computing via a conversational bot
Schedule tasks according to a specific timetable
Invoke autoscaled mobile backend services
10. 9
New cost models more accurately charge for compute time
While many applications must still be deployed in an always-on model,
serverless architectures provide an alternative that can result in substantial
cost savings for a variety of event driven workloads.
Applications billed by
compute time (millisecond)
rather than reserved memory
(GB/hour).
Means a greater linkage
between cloud resources
used and business
operations executed.
11. 10
Technological and business factors make serverless compelling
Serverless architectures are gaining traction
Cost models getting more granular and efficient
Growth of event driven workloads that need automated scale
Platforms to facilitate cloud native design for developers
13. 12
OpenWhisk enables these serverless, event-driven workloads
Serverless deployment and operations model
Optimized utilization, fine grained metering at any scale
Flexible, extensible, polyglot programming model
Open source and open ecosystem (Apache Incubator)
Ability to run in public, private, and hybrid models
Apache
OpenWhisk
a cloud platform
that executes code
in response to events
14. 13
Developers work with triggers, actions, rules, and packages
Data sources define
events they emit as
Triggers.
Developers map
Actions to Triggers
via Rules.
Packages provide
integration with
external services.
T
A
P
R
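The programming model above can be sketched with a minimal Node.js action. The platform invokes `main()` with the trigger's event payload merged into `params`; the `name` parameter here is hypothetical, for illustration only:

```javascript
// Minimal OpenWhisk action: the platform calls main() with the
// trigger's event payload as params, and the returned object is
// the action's result (available to the next action in a chain).
function main(params) {
  // 'name' is a hypothetical event parameter for illustration.
  const name = params.name || 'stranger';
  return { greeting: `Hello, ${name}!` };
}

// Node.js runtimes also accept an exported entry point.
exports.main = main;
```

A Rule would then map a Trigger's events to this Action, so the function runs automatically each time the data source fires.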
15. 14
OpenWhisk
Comparison to traditional PaaS or IaaS models
Traditional Model Serverless Model
• Continuous polling often used
• Charged even when idling
• No auto-scaling in response to load
• Introduces event-driven programming model
• Charges only for what is used
• Auto-scales in response to current load
Request Polling
Application
CF Container VM
Trigger
OpenWhisk
Engine
Running
Action
Running
Action
Running
Action
Idle compute
resources
Deploy action within
milliseconds, run it, free up
resources
Pool of Actions
JS Swift Docker
17. 16
Business Drivers at Santander for a Serverless Architecture – 1/2
What value do microservices and serverless architectures provide?
Compared to a PaaS offering, FaaS charges the customer
based on the actual time used by the service itself. Server
uptime is not billed (serverless).
Independent scalability, integration and delivery pipelines,
testability and development flows make it more streamlined and
automated, resulting in less maintenance effort and
savings on operations and development costs.
Provides a great way to quickly and reliably connect or relay
private/public/hybrid SOA or Cloud APIs at low cost
$ ¥
€ £
Billing Model
Low Complexity
Integration
Capability
18. 17
• However, the outcome depends on each scenario
• Not everything can or should rely on FaaS, e.g. very active back-ends, complex
front-end applications etc. would simply underperform
• OpenWhisk in particular is excellent for designing a web of microservices whose
purpose is to relay or orchestrate other services (e.g. IoT, reactive post-
processes applied to other Cloud feeds etc.)
• Microservices are another tool for architects to support the general IT Cloud
transition, and as such should be used in conjunction with other solutions.
Business Drivers at Santander for a Serverless Architecture – 2/2
What value do microservices and serverless architectures provide?
19. 18
Scenario:
This PoC intends to present how OpenWhisk could improve the following business process:
bank clerks' manual entry of routing and account numbers when cashing Santander Bank
customers' checks.
The purpose of this proof of concept is to show how OpenWhisk can be used for an event-driven,
serverless architecture that processes the deposit of checks to a bank account using optical
character recognition (OCR), replacing manual input and avoiding the correlated human errors.
Proof of Concept: “OpenChecks” check processing
OpenWhisk by example: service enablement and orchestration
20. 19
Check data parsing with OCR overview
OCR will be used to parse the
data at the bottom of the
check representing:
• The routing number
• The account number
If this information is not
readable or does not follow
the presented format, the
check will be considered
invalid.
Routing number Deposit from account number
The handwritten amount is not currently parsable, nor is the deposit-to account information
provided on a check itself. This data needs to be passed as metadata (that is, encoded in the file
name as supplied by the bank clerk).
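The slides do not show the validation logic itself; as a sketch, the "presented format" rule for the routing number could be backed by the standard ABA checksum (nine digits, weighted 3-7-1, sum divisible by 10):

```javascript
// Sketch of a validity test for an OCR-extracted US routing number,
// using the published ABA checksum rule (weights 3, 7, 1 repeating;
// the weighted digit sum must be divisible by 10). A failing value
// would mark the check invalid, as described above.
function isValidRoutingNumber(digits) {
  if (!/^\d{9}$/.test(digits)) return false; // must be exactly nine digits
  const weights = [3, 7, 1, 3, 7, 1, 3, 7, 1];
  const sum = [...digits].reduce(
    (acc, d, i) => acc + Number(d) * weights[i], 0);
  return sum % 10 === 0;
}
```

For example, `isValidRoutingNumber('011000015')` passes (a published Federal Reserve routing number), while a nine-digit string with a bad check digit fails.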
21. 20
Deployment model approaches evaluated
This proof of concept had three different deployment models, each one with its advantages and
disadvantages.
Deployment of the
computing engine on
Cloud
• Serverless
computing
Deployment of the
computing engine on
premises
• Sensitive data
• Avoid vendor lock-in
Deployment of the
computing model on
both Cloud and on-
premises
• Total cost of
ownership
Cloud Local Cloud Bursting
24. 23
Workload split between public and private OpenWhisk
and Cloudant instances (with hybrid scheduling)
25. 24
• Checks are scanned and uploaded by the front-office clerks
• OpenWhisk on Bluemix resizes the scans to smaller sizes and stores them along with
the originals in remote databases
• Databases are replicated over to on-prem servers
• On-prem OpenWhisk kicks off the OCR, parses the checks and stores the result
into a local database
• Statistics such as the total amount processed, total checks that could not be parsed
successfully etc. are calculated by either the local or remote OpenWhisk system,
chosen by an arbitrary dispatching method. These stats are stored in a
remote database, which is continuously replicated to a local instance.
• Clerks connect a local front-end to consult these statistics from the local
database.
“OpenChecks” OCR in a hybrid environment
Hybrid Deployment with Cloud Bursting: Workflow Highlights
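The "arbitrary dispatching method" that splits statistics work between the local and remote OpenWhisk instances is not specified on the slides; one hypothetical sketch is a stable hash of the check document's id, so both sides reach the same decision without coordinating:

```javascript
// Hypothetical dispatcher for the hybrid workflow above: route a
// statistics task to the local (on-prem) or remote (cloud) OpenWhisk
// instance based on a stable hash of the document id, so either side
// can compute the same answer independently.
function dispatchTarget(docId) {
  // Simple djb2-style string hash; any stable hash would do.
  let h = 5381;
  for (const ch of docId) {
    h = ((h * 33) + ch.charCodeAt(0)) >>> 0; // keep as unsigned 32-bit
  }
  return h % 2 === 0 ? 'local' : 'remote';
}
```

Because the hash is deterministic, retries of the same check always land on the same instance, which keeps the statistics computation idempotent.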
26. 25
Proof of Concept: OpenChecks OCR
Hybrid Deployment with Cloud Bursting: Demo Front-end Statistics Screenshot
27. 26
• No data resides only in the cloud, there’s always a local replica. If necessary
(regulatory concerns), there’s a way to use only on-prem storage.
• Tasks are split in a hybrid way: part of the flow is done on-prem, the rest on the
cloud.
• This simulation stresses the versatile nature of OpenWhisk: it both
orchestrates (e.g. database change feed handlers, statistics computation
dispatcher) and processes (e.g. image resize, statistics computation). It is both
the foreman and the laborer.
• General deployment as well as DevOps integration is quick and should not
disrupt other services
• Big data document-based CouchDB (or Cloudant) is used in anticipation of large
data volumes
• Communication relies entirely on HTTPS REST APIs
Proof of Concept: OpenChecks OCR
Hybrid Deployment with Cloud Bursting: Architectural Highlights
29. 28
Cost savings estimation from a check processing use case
1 https://www.federalreserve.gov/paymentsystems/check_govcheckprocannual.htm
Estimating that
• Number of USA check transactions in 2016: 60 million1
• Average time of execution in seconds: 7 seconds
• Allocated memory per execution in GB: 0.256 GB
• Cost per GB-second of execution: 0.000017 USD
With these estimations we can predict that the
total yearly cost to process every paper check in 2016
would be approximately $1,830 USD if based on OpenWhisk.
Yearly Cost = # of Executions
x Average Time (in seconds)
x Allocated Memory per Execution
x $ per GB-second
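The formula above can be evaluated directly with the slide's estimates:

```javascript
// The slide's cost estimate, computed from the formula above.
const executions = 60e6;        // US check transactions in 2016 (slide estimate)
const avgSeconds = 7;           // average execution time per check
const memoryGB = 0.256;         // allocated memory per execution
const pricePerGBs = 0.000017;   // USD per GB-second of execution

const yearlyCost = executions * avgSeconds * memoryGB * pricePerGBs;
console.log(yearlyCost.toFixed(2)); // ≈ 1827.84, i.e. roughly the $1,830/year on the slide
```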
30. 29
• As of today, the OCR service can only efficiently cover account and routing numbers
for US bank checks. In the future, other technologies should be surveyed in order to
handle the amounts on the checks.
• The front-end currently shows the check statistics, for demonstration purposes.
It should be enhanced to allow for data correction and validation by the clerks.
• Beyond use by the clerks at the banks, the same logic could be used to support
mobile check deposit (with deposit-to account information inferred from the user, and
amount data input manually).
• Another OpenWhisk function could be created in a similar fashion to integrate with
Santander Bank internal systems of record.
Proof of Concept: OpenChecks OCR
Hybrid Deployment with Cloud Bursting: Challenges and Potential Improvements
31. 30
• The full cost of an on premises cluster of virtual machines or containers to run
OpenWhisk (and CouchDB) in a highly available configuration should be weighed
against the lower cost of using it on a hosted instance in Bluemix. There is a cost-
versus-risk trade-off, but the key point is that OpenWhisk gives you the flexibility to decide.
• Use the new release of the Watson text analysis service rather than packaging
Tesseract with MICR training data in a Docker container (or using Tesseract.js).
• With serverless, cost is tightly bound to value gained, so code optimizations are
very important at scale.
• There has been a lot of work done to make OpenWhisk actions runnable and
testable locally outside the OpenWhisk environment. This is key to an end-to-
end workflow that requires versioning of functions.
• OpenWhisk native sequences and triggers/feeds should be preferred over manual
programmatic action chaining in order to support composability.
Proof of Concept: OpenChecks OCR
Hybrid Deployment with Cloud Bursting: Challenges and Potential Improvements
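The preference for native sequences over programmatic chaining can be illustrated with two composable actions: each returns a plain object that becomes the next action's params, so they can be wired together with a sequence rather than one action invoking the other. The action names and parameters here are hypothetical:

```javascript
// Two composable actions written so that the first's result object
// is the second's params, ready to be chained as a native OpenWhisk
// sequence (rather than resize() calling ocr() programmatically).
function resize(params) {
  // Hypothetical param: 'name' of the uploaded check scan.
  return { name: params.name, resized: true };
}

function ocr(params) {
  // When run in a sequence, receives resize()'s result as params.
  return { name: params.name, processed: params.resized === true };
}
```

With composable actions like these, the pipeline stays declarative: each step can be tested, versioned, and reordered independently, which is exactly what manual action-to-action invocation loses.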
32. 31
Why use OpenWhisk on IBM Bluemix?
Provides a rich ecosystem of building blocks from various domains.
Supports an open ecosystem that allows sharing microservices via packages.
Takes care of low-level details such as scaling, load balancing, logging and fault tolerance.
Hides infrastructural complexity allowing developers to focus on business logic.
Allows developers to compose solutions using modern abstractions and chaining.
Charges only for code that runs.
Is open and designed to support an open community.
Supports multiple runtimes and arbitrary binary programs encapsulated in Docker containers.