Moving to the cloud isn’t easy, so transforming your engineering team to adapt to the cloud and services lifestyle is crucial. It all starts with creating a common understanding of the engineering and development principles that matter in the cloud, which are different from those for building regular applications. This session will take you on a road trip based on the presenter’s experience developing and, more importantly, operating Azure Active Directory, SQL Azure, and most recently the Xbox Live Services that support Xbox One.
Webinar Slides: MySQL HA/DR/Geo-Scale - High Noon #2: Galera Cluster – Continuent
Galera Cluster vs. Continuent Tungsten Clusters
Building a Geo-Scale, Multi-Region and Highly Available MySQL Cloud Back-End
This second installment of our High Noon series of on-demand webinars is focused on Galera Cluster (including MariaDB Cluster & Percona XtraDB Cluster). It looks at some of the key characteristics of Galera Cluster and how it fares as a MySQL HA / DR / Geo-Scale solution, especially when compared to Continuent Tungsten Clustering.
Watch this webinar to learn how to do better MySQL HA / DR / Geo-Scale.
AGENDA
- Goals for the High Noon Webinar Series
- High Noon Series: Tungsten Clustering vs Others
- Galera Cluster (aka MariaDB Cluster & Percona XtraDB Cluster)
- Key Characteristics
- Certification-based Replication
- Galera Multi-Site Requirements
- Limitations Using Galera Cluster
- How to do better MySQL HA / DR / Geo-Scale?
- Galera Cluster vs Tungsten Clustering
- About Continuent & Its Solutions
PRESENTER
Matthew Lang - Customer Success Director – Americas, Continuent - has over 25 years of experience in database administration, database programming, and system architecture, including the creation of a database replication product that is still in use today. He has designed highly available, scalable systems that have allowed startups to quickly become enterprise organizations, using a variety of technologies including open source projects, virtualization, and the cloud.
A general introduction to Git and its feature set, plus Subversion migration strategies using git-svn, SubGit, or GitHub Enterprise. Suitable for different audience types: managers, developers, etc.
PuppetConf 2016: Changing the Engine While in Flight – Neil Armitage, VMware – Puppet
Here are the slides from Neil Armitage's PuppetConf 2016 presentation called Changing the Engine While in Flight. Watch the videos at https://www.youtube.com/playlist?list=PLV86BgbREluVjwwt-9UL8u2Uy8xnzpIqa
Infrastructure as Code to Maintain your Sanity – Dewey Sasser
Infrastructure as Code is all the rage, but suffers the same problems as any other code: it can easily become an unmanageable plate of spaghetti. Organizing your IaC is critical but the methods are different than traditional program code. We present an organizational pattern for IaC that has proven itself across multiple technologies in multiple cloud systems to allow isolation of concerns, stability, and controlled rollouts and maintain your sanity while doing so.
Presentation given at the OpenStack summit in Paris (Kilo) on Tue Nov 4th.
Last summit I had the pleasure of presenting a talk that met with some success, "Are enterprises ready for the OpenStack transformation?" (also published on SlideShare). This talk is a follow-up on the best practices that have proven successful in operating the transformation. We will first focus on identifying the right use cases for a generic enterprise, then define a roadmap with an organisational and a technical track, and finish by defining the success criteria for our group. This will take the form of a workshop summary based on the multiple engagements eNovance has delivered over the past two years.
Continuous Deployment Applied at MyHeritage – Ran Levy
Learn how continuous deployment was applied at MyHeritage. Check out how we automated the process using MCollective, RPM, Jenkins, unit and integration tests using JUnit, PHPUnit, Cucumber and more.
Accelerating DevOps via Data Virtualization | Delphix – DelphixCorp
“Accelerating DevOps Using Data Virtualization” at the Collaborate 2016 conference in Las Vegas. It discusses the inevitability of data virtualization and its many use cases.
JELASTIC IS THE PIONEER AND VISIONARY IN THE CLOUD INDUSTRY – Ruslan Synytsky
Jelastic’s Platform-as-Infrastructure is rapidly becoming the standard for hosting service providers worldwide and is penetrating the enterprise market by delivering a superior turnkey cloud environment at a fraction of the cost of existing virtualization solutions.
Webinar: What’s Breaking Your VMware Backups? And How You Can Fix Them Quickly – Storage Switzerland
Backing up VMware successfully has always been a challenge. The introduction of the cloud and the ever-increasing scale of VMware infrastructure give backups fits and make the job even harder.
Please join George Crump, Lead Analyst at Storage Switzerland and our guest speaker W. Curtis Preston, Chief Technical Architect at Druva, for a discussion on the new challenges with VMware backups, and how to address them successfully.
How to make sure a website can survive go-live and cope with ever-increasing traffic and amounts of data: knowing what to measure and log, during both development and production phases; load testing; identifying bottlenecks; preventing disasters.
The purpose of this paper is to demonstrate that it is possible to run an Odoo deployment that costs less than $100/month for 50 concurrent users. Moreover, thanks to its cloud architecture, the system is highly available, fault-tolerant, and very scalable.
Big data nowadays is a challenge to be managed, not a barrier to growing a business. Data storage is relatively inexpensive, and with more transactions generated by social media, machines, and sensors, data has grown piece by piece into petabytes.
These slides explain the challenges of Big Data (Volume, Velocity, and Variety) and offer solutions for managing them.
Many tools can help solve these problems, but the main tool covered in these slides is Apache Hadoop.
Focus on Your Analysis, Not Your SQL Code – DATAVERSITY
Analysts in the line of business deal with a myriad of time-consuming data preparation and analytic challenges that often require IT or DBA intervention to deliver a requested dataset. Others have taught themselves “enough SQL to be dangerous”, learning the necessary code to extract the data needed to answer their business question. Self-service data analytics empowers these business analysts to take control of the entire analytics process, delivering the necessary results for better business decisions.
Join us to learn how self-service data analytics allows analysts to:
- Utilize a drag-and-drop workflow for data and analytic processes without writing code
- Minimize data movement and ensure data integrity through in-database capabilities
- Easily work across relational and non-relational databases to deliver faster business results
Self-service data analytics delivers a repeatable process that is transparent to not only business analysts, but also SQL coders and decision makers across the organization.
Presentation on Ethics and Deontology, Wolfgang – salazar89wolfgang
Man has always devoted a great deal of work to developing devices and structures that make natural resources more useful. Those men were the forerunners of the modern-era engineer. The most significant difference between those ancient engineers and those of our day is the knowledge on which their works are based.
Webinar: Fighting Fraud with Graph Databases – DataStax
Modern fraud detection poses significant engineering challenges, from managing ingestion at scale to analyzing fraud patterns in real time. We'll first take a look at how DataStax Enterprise Graph, powered by the industry’s best version of Apache Cassandra™, can meet those requirements and help you save the day.
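The graph angle is the key idea here: fraudulent accounts tend to cluster around shared attributes such as devices or payment cards. As a hedged illustration of the concept (not DataStax Enterprise Graph's actual API), here is a minimal Python sketch that links accounts through shared device IDs and flags suspiciously large clusters; all data and thresholds are hypothetical:

```python
from collections import defaultdict, deque

# Hypothetical data: which device IDs each account has logged in from.
logins = {
    "alice": {"dev1"},
    "bob": {"dev1", "dev2"},
    "carol": {"dev2"},
    "dave": {"dev9"},
}

def fraud_rings(logins, min_size=3):
    """Group accounts connected through shared devices; large groups are suspicious."""
    by_device = defaultdict(set)
    for account, devices in logins.items():
        for dev in devices:
            by_device[dev].add(account)
    # Build account-to-account adjacency via shared devices.
    adj = defaultdict(set)
    for accounts in by_device.values():
        for a in accounts:
            adj[a] |= accounts - {a}
    # Breadth-first search to collect connected components.
    seen, rings = set(), []
    for start in logins:
        if start in seen:
            continue
        component, queue = set(), deque([start])
        while queue:
            node = queue.popleft()
            if node in component:
                continue
            component.add(node)
            queue.extend(adj[node] - component)
        seen |= component
        if len(component) >= min_size:
            rings.append(component)
    return rings

print([sorted(r) for r in fraud_rings(logins)])  # [['alice', 'bob', 'carol']]
```

A graph database like DSE Graph expresses this traversal declaratively (for example in Gremlin) and at far larger scale; the sketch only shows why shared-attribute connectivity is the signal.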
Why we love pgpool-II and why we hate it! – PGConf APAC
This talk was presented at pgDay Asia 2017. It details some of the great features of pgpool and some practical challenges the speaker has faced, and concludes with tips for using pgpool and advice on when not to use it.
Presentation for the seminar “Big Data: A Tool for the Future in the European Framework” (“Big Data: Herramienta de Futuro en el Marco Europeo”), organized by the Cartagena Chamber of Commerce and delivered by Daniel Jadraque of Datary.
These days you can't go far without encountering XML or JSON and in the world of the web these data types are ubiquitous. Since version 8.3 XML has been supported as a data type and JSON support was introduced in 9.2. We'll be looking at what advantages there are in storing your data with these data types and how we can query and manipulate our data once it's stored.
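To make the idea of querying stored JSON concrete without a running PostgreSQL server, here is a hedged sketch using Python's built-in sqlite3, whose json_extract() plays roughly the role of PostgreSQL's -> / ->> operators (the PostgreSQL syntax differs, and jsonb adds indexing that this example doesn't show):

```python
import sqlite3

# A table holding JSON documents as text, standing in for a PostgreSQL
# json/jsonb column (table and data are made up for the illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO docs (body) VALUES (?)",
             ('{"user": "mike", "tags": ["pg", "json"]}',))

# json_extract is SQLite's analogue of PostgreSQL's -> / ->> operators;
# in PostgreSQL you would write: SELECT body->>'user' FROM docs;
row = conn.execute("SELECT json_extract(body, '$.user') FROM docs").fetchone()
print(row[0])  # mike
```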
Recently we have started to find that MySQL is holding us back when it comes to our vision of growing a sustainable and scalable database. This talk will discuss the process Rant & Rave followed in order to migrate our core database. By discussing some of the challenges we overcame from mapping datatypes to differences in syntax it is hoped that other MySQL users will be better equipped to make the move to PostgreSQL.
With 9.4 came logical decoding but what is it and how can it be used? Besides being a precursor to bi-directional replication there are plenty of use cases for this and many don't even require you to implement a plugin. We'll look at trigger-less auditing, partial replication and full statement replication.
Migration to cloud is no easy task. Start small and learn the core technologies before leveraging the advanced features of the cloud. The cultural change will affect the whole organization from development to business management and sales.
Cloud native applications are the future of software. Modern software is stateless, provided from cloud to heterogeneous clients on demand and designed to be scalable and resilient.
Best Practices for Application Migration to Public Clouds – Interop presentation – esebeus
Best Practices for Application Migration to Public Clouds
Talk given at Interop May, 2013.
Whether you are thinking of migrating 1 application or 8000 applications to the cloud, the odds of success increase if best practices are followed. Do you know what those best practices are?
As hustler Mike McDermott said in the 1998 poker movie Rounders, “If you can't spot the sucker in the first half hour at the table, then you ARE the sucker.”
Anyone with a credit card can sit at the table of trying to move applications to public clouds. Those who want to succeed, study and learn from consistent winners. There are some hands to fold, some to play cautiously, and some to play aggressively.
This session covered best practices from helping 15 Fortune 1000 companies successfully migrate to cloud solutions.
Who should attend?
Anyone who wants to improve their odds of successfully migrating applications to public clouds.
Key Takeaways
• What are the key business considerations to address prior to migration?
• Which application workloads are suitable for public clouds?
• Which applications to replatform? Which to refactor?
• What are key considerations for replatforming and refactoring?
• What are key cloud application design concepts?
Solving k8s persistent workloads using k8s DevOps style – MayaData
Solving k8s persistent workloads using k8s DevOps style. Presented at Container_stack-Zurich-2019.
-How Hardware trends enforce a change in the way we do things
-Storage limitations bubble up
-Infrastructure as code
OSDC 2018 | Migrating to the cloud by Devdas Bhagat – NETWAYS
This is an experience report of a migration from self-hosted services to running in the cloud. While there have been plenty of business case studies showing the benefits of a cloud migration, there are very few reports on the IT side of the migration. This talk covers the migration of Spilgames (a small Dutch games publisher) from a self-hosted Openstack and hardware based infrastructure to Google cloud, challenges, tooling (and lack thereof). This migration is still work in progress, and the talk will cover as much detail as possible.
Taking Splunk to the Next Level - Architecture – Splunk
This session, led by Michael Donnelly, will teach you how to take your Splunk deployment to the next level. Learn about Splunk high availability architectures with Splunk Search Head Clustering and Index Replication. Additionally, learn how to use Splunk’s operational and management controls to manage capacity and the end-user experience.
Future of Apache CloudStack: modular, distributed, and hackable – Darren Shepherd
CloudStack has been often critiqued because of its architecture. This session is to talk about the efforts that have been done and are in progress to move CloudStack's architecture forward. How we can achieve the simplicity of a monolithic system, but scalability of a distributed architecture. How we can keep the speed and reliability of Java, but leverage scripting languages for easier integrations.
How can we get CloudStack to be the best and easiest system to deploy and maintain for clouds from 10s to 100s of thousands of hypervisors?
Presentation of the talk given by Carmine Spagnuolo (Postdoctoral Research Fellow, Università degli Studi di Salerno / ACT OR), entitled "Technology insights: Decision Science Platform", at the Decision Science Forum 2019, the most important Italian event on Decision Science.
[DPE Summit] How Improving the Testing Experience Goes Beyond Quality: A Deve... – Roberto Pérez Alcolea
It is well known that organizations equate software testing with software quality: making sure that the code does what it is supposed to do.
Unfortunately, many organizations believe that testing is a slow process that causes the project to stagnate. Organizations say that a slow testing process keeps them from meeting their milestones, but it doesn’t have to be this way.
The testing stage is also part of the developer experience, and making it one in which engineers are productive and continue delivering software not only fast but with confidence is crucial.
In this talk, we will explore a few approaches that we are taking to deliver a more consistent and delightful testing experience for JVM engineers at Netflix. The end goal: speed up engineers’ feedback loop by making it possible to run tests locally, constantly.
Continuous delivery has been a hot topic for the past two years, but what are the implications if you choose to implement it in your company? Continuous delivery not only impacts the way you work together in an agile fashion; you might also need to reconsider the way you have architected your systems. Enabling your team to deliver features at high speed and high frequency means you need to carefully architect your system so that you can easily change parts of it without downtime. In this session I will dive into some important architectural concepts you might want to consider when building systems that support continuous delivery. Topics include micro architectures, leveraging cloud solutions to slowly roll out changes across scale units, designing for failure with patterns such as the circuit breaker, and providing real-time information so you can see how the rollout of your change affects the product in production.
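As a concrete illustration of the circuit breaker pattern mentioned in that abstract, here is a minimal Python sketch (thresholds and behaviour are simplified; production implementations add half-open probes, metrics, and per-dependency state):

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: open after `max_failures` consecutive
    failures, fail fast while open, allow a trial call after
    `reset_after` seconds."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # monotonic timestamp when the circuit opened

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: let one trial call through
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success closes the circuit again
        return result
```

Wrapping calls to a flaky downstream service in `cb.call(...)` means that once the service is clearly down, callers fail fast instead of piling up waiting on timeouts.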
The Effectiveness, Efficiency and Legitimacy of Outsourcing Your Data – DataCentred
Presentation given by our CEO Mike Kelly at this year's Excellence in Policing conference talking about the benefits of cloud computing and the Effectiveness, Efficiency and Legitimacy of outsourcing data. The presentation looks at the long term trends supporting the adoption of cloud technologies and dispels some of the myths and reasons why not to adopt cloud.
The presentation concludes with an examination of the benefits of utilising cloud technology and examines how best to adopt a cloud approach.
Software Architecture for Cloud Infrastructure – Tapio Rautonen
Distributed systems are hard to build. Software architecture must be carefully crafted to suit cloud infrastructure.
Design for failure. Learn from failure. Adopt new cloud compatible design patterns and follow the guidelines during the journey of building cloud native applications.
As engineers we spend much of our time getting stuff to production and making sure our infrastructure doesn’t burn down outright. Does our platform degrade gracefully, and what does a high CPU load really mean? What can we learn from level 1 outages to run our platforms more reliably?
From Infrastructure as Code, Service Discovery, and Config Management to replicated databases, caching strategies, and geospatial considerations for the replicas: we have tried, failed, and tried again until we got to a solution that works for us.
This allows teams to quickly put infrastructure in place while separating the deployment and release phases of their work, without having to switch over big-bang style.
This talk will guide us through the moving parts of our highly reliable and available Drupal setup. The audience will see an analysis of the good, the bad, and the ugly sides of our setup, and will learn ways to validate their own.
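Separating deployment from release is typically done with feature flags: code ships dark and a flag releases it, optionally to a percentage of users first. A minimal sketch, with the flag store and field names entirely hypothetical:

```python
import hashlib

# Hypothetical flag store; in practice this would live in config
# management or a database rather than a module-level dict.
FLAGS = {"new_checkout": {"enabled": True, "percent": 25}}

def is_enabled(flag, user_id, flags=FLAGS):
    """Return True if `flag` is released to this user.

    Deployed code paths stay dark until the flag enables them; `percent`
    rolls the feature out gradually using a stable per-user bucket."""
    cfg = flags.get(flag)
    if not cfg or not cfg["enabled"]:
        return False
    # Stable hash so each user lands in the same 0-99 bucket every time.
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < cfg["percent"]
```

Flipping `enabled` or raising `percent` releases already-deployed code without another deployment, which is exactly the separation of phases described above.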
MTC learnings from ISV & enterprise interaction – Govind Kanshi
This is one of the dated presentations that I keep getting requests for; please do reach out to me for the status of various things, as Azure keeps fixing and innovating a whole host of things every day.
There are a bunch of other things I can help you with to ensure you can take advantage of the Azure platform for OSS, .NET frameworks, and databases.
From Warehouses to Lakes: The Value of Streams – Mike Fowler
You have a beautifully modelled warehouse and a lake with all your business data but you’re still looking at the past and making decisions from what happened yesterday. In this talk we’ll look at how you can really get value from your data and make decisions as events happen and even before they do.
From Warehouses to Lakes: The Value of Streams – Mike Fowler
Every business has a wealth of data, but getting value from data is hard. We've tried Data Warehouses and Data Lakes, and while both give us the insights we are after, they present their own challenges. Perhaps most challenging of all is making decisions based on yesterday's data. In this talk we'll look at how you can start using your data to make decisions as events happen in your business, and how we can even make predictions too. Best of all, we can populate our Data Lakes and Data Warehouses at the same time, keeping all the historic analytics in place.
Getting Started with Machine Learning on AWS – Mike Fowler
Machine Learning (ML) is an exciting field that Cloud Computing has helped to accelerate. AWS has played a big part in this with its continually expanding range of services, from the simply named Machine Learning through to SageMaker. But how do you get started? Thankfully you don’t need to become an expert in linear algebra or statistics; all you need to begin is a good idea of the life-cycle of a ML project and a passing familiarity with these AWS services. In this talk we’ll outline a typical ML project and review services such as SageMaker and Rekognition so that you can begin to make use of them in your own projects.
This talk looks at converting an existing GCP serverless application into one built using Firebase. Firebase helps to simplify deployment, particularly around simple web hosting. The talk also looks at how easy it is to use GCP services integrated with Firebase, such as authentication and Cloud Firestore.
Reducing Pager Fatigue Using a Serverless ML Bot – Mike Fowler
Being woken up at 3 am by the pager is never fun but seeing an incident resolve before you’ve even left the bed is maddening. Sleepily the next day you tune the alert for a better night’s sleep yet more untuned alerts sing to you in your sleep. After a few rounds of alert-tuning whack-a-mole you wonder: Could I predict if an incident will resolve itself?
This is the story of how a weary engineer used a Cloud ML model with Cloud Functions to reduce pager noise. Recounting some of the challenges faced, we’ll explore training a model with a limited data set & continual training in a serverless environment. We’ll also explore the implications of using a bot as a first responder to a pager.
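The actual talk uses a Cloud ML model; as a much simpler stand-in for the idea, here is a hedged sketch of a first responder that holds a page when an alert has historically resolved itself. The class, thresholds, and field names are all hypothetical:

```python
from collections import defaultdict

class PagerBot:
    """Toy first responder: if an alert has historically self-resolved
    most of the time, hold the page briefly instead of waking someone."""

    def __init__(self, threshold=0.8, min_samples=10):
        self.threshold = threshold      # self-resolution rate needed to hold a page
        self.min_samples = min_samples  # below this, always page
        self.history = defaultdict(lambda: [0, 0])  # alert -> [self_resolved, total]

    def record(self, alert, self_resolved):
        """Record the outcome of a past incident for this alert."""
        stats = self.history[alert]
        stats[0] += int(self_resolved)
        stats[1] += 1

    def should_page_now(self, alert):
        """Page immediately unless the alert usually resolves itself."""
        self_resolved, total = self.history[alert]
        if total < self.min_samples:
            return True  # not enough data: err on the side of paging
        return (self_resolved / total) < self.threshold
```

A real model would also weigh features such as time of day and alert severity, and "holding" a page would mean re-checking after a delay rather than dropping it.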
Have you got data in AWS but don’t know how to get started with Machine Learning? My talk will help you make sense of AWS’ offerings and show you how to use them without having to become a mathematician first. See the full talk on YouTube: https://youtu.be/3phjk1CxhXM
Debezium is a Kafka Connect plugin that performs Change Data Capture from your database into Kafka. This talk demonstrates how this can be leveraged to move your data from one database platform such as MySQL to PostgreSQL. A working example is available on GitHub (github.com/gh-mlfowler/debezium-demo).
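Debezium's change events carry `op`, `before`, and `after` fields (those names follow Debezium's documented envelope; the SQL generation below is my own illustration and skips quoting, escaping, type mapping, and conflict handling):

```python
import json

def event_to_sql(raw, table):
    """Translate a Debezium-style change event into a SQL statement for
    the target database (illustrative only; the primary key is assumed
    to be `id`)."""
    payload = json.loads(raw)["payload"]
    op, before, after = payload["op"], payload["before"], payload["after"]
    if op == "c":  # create -> INSERT
        cols = ", ".join(after)
        vals = ", ".join(repr(v) for v in after.values())
        return f"INSERT INTO {table} ({cols}) VALUES ({vals})"
    if op == "u":  # update -> UPDATE ... WHERE id = ...
        sets = ", ".join(f"{k} = {v!r}" for k, v in after.items() if k != "id")
        return f"UPDATE {table} SET {sets} WHERE id = {after['id']}"
    if op == "d":  # delete -> DELETE ... WHERE id = ...
        return f"DELETE FROM {table} WHERE id = {before['id']}"
    raise ValueError(f"unhandled op {op!r}")

# A made-up event in the Debezium envelope shape.
event = json.dumps({"payload": {
    "op": "c", "before": None,
    "after": {"id": 1, "name": "mike"},
    "source": {"db": "inventory"},
}})
print(event_to_sql(event, "customers"))  # INSERT INTO customers (id, name) VALUES (1, 'mike')
```

In a real pipeline a Kafka consumer would apply these statements to PostgreSQL in order, which is essentially what the demo linked above automates.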
Leveraging Automation for a Disposable Infrastructure – Mike Fowler
Moving from the Iron Age to the Cloud Age in computing is supposed to save us money, but many migrations seem to cost more in the long run and result in an infrastructure that is as complex to manage as the one we had before. This is often due to the so called “lift & shift” approach many take – it’s a short term win that doesn’t address why you wanted to move to the cloud in the first place.
The Cloud Age affords us the opportunity to not treat our infrastructure as something special, but as something disposable. By applying the practices of Continuous Integration and delivery to our infrastructure and configuration management, we can build truly scalable infrastructures to host our application’s wildest dreams.
In this talk, we will look at the tools and processes that can be adopted to truly make use of the possibilities of the Cloud.
Managing your own PostgreSQL servers is sometimes a burden your business does not want. In this talk we will provide an overview of some of the public cloud offerings available for hosted PostgreSQL and discuss a number of strategies for migrating your databases with a minimum of downtime.
Terraform is an open source tool that helps you control your infrastructure configuration through code. This talk will serve as a primer showing how to build a basic infrastructure in the Google Cloud and how we can re-use our code to construct multiple, identical environments.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... – DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Accelerate your Kubernetes clusters with Varnish Caching – Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 – Tobias Schneck
As AI technology pushes into IT, I was wondering, as an “infrastructure container Kubernetes guy”, how does this fancy AI technology get managed from an infrastructure operations point of view? Is it possible to apply our lovely cloud-native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and provide a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure and get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already got working for real.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... – James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerabilities and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for technology and making things work, along with a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
"Impact of front-end architecture on development cost", Viktor Turskyi – Fwdays
I have heard many times that architecture is not important for the front-end. I have also seen, many times, how developers implement features on the front-end just by following the standard rules of a framework, thinking that this is enough to successfully launch the project, and then the project fails. How can this be prevented, and which approach should you choose? I have launched dozens of complex projects, and during the talk we will analyse which approaches have worked for me and which have not.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
Search and Society: Reimagining Information Access for Radical Futures – Bhaskar Mitra
The field of information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build, inspired by diverse, explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies need to be explicitly articulated, and we need to develop theories of change in the context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... – Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. Fostering a culture of innovation takes a great deal of work: it takes vision, leadership and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
UiPath Test Automation using UiPath Test Suite series, part 4 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality – Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Connector Corner: Automate dynamic content and events by pushing a button – DianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
2. About Me
● Currently a Senior Site Reliability Engineer for Claranet|Bashton
● Background in Software Engineering, Systems Engineering & System Administration
● Contributed to several open source projects (YAWL, PostgreSQL & Terraform)
● Been using Linux & Open Source software since 1999
3. Overview
● Typical cloud migration strategies
● The drawbacks of typical cloud migration strategies
● The five theses of a disposable infrastructure
● How we build a disposable infrastructure
4. Approaching Cloud Migration
● Duplicate existing estate in the cloud
● Use cloud as spare/batch capacity
● Brave New World
  – Greenfield development
  – “Version 2.0”
5. The Appeal of a Lift & Shift
● Direct mapping of existing infrastructure to the cloud
  – Load balancers become Elastic Load Balancers
  – SANs become Buckets or Elastic File Systems
● Minimal operational change required
  – Everything is the same, just in a new location
● Perceived as a “quick win” for cloud adoption
  – Little AWS/GCP/Azure-specific knowledge required
6. The Appeal of a Brave New World
● No “legacy” baggage
● Free rein for experimentation
● Perceived as a “low risk” path to cloud adoption
  – If it doesn’t work, switch it off
  – “No risk” to existing production environment
7. Are we really “doing cloud”?
● Are we just building a traditional but virtual data centre?
  – Lift & Shift is operationally the same
  – Brave New World isn’t part of the Real World
● How are we leveraging the power of a dynamic infrastructure?
  – Our infrastructure is scalable, but is the application?
● Are we using IaaS where we could be using PaaS?
  – e.g. Running a message broker instead of SQS
8. The Penalty of a Lift & Shift
● We’re changing only where our hardware is
  – Instance size based on current hardware size
  – No change to deployment process
● Under-utilisation of resources
  – Still paying for excess capacity
● Stunted scalability
  – We can only throw more virtual hardware at it
  – Add additional nodes behind load balancers
9. The Penalty of a Brave New World
● Organisationally isolated
  – Limited impact on existing practices
  – Leads to an “Us vs. Them” mentality
● Focus is usually on application functionality, with infrastructure seen as a necessity
● Project has a high risk of failure
  – Carefree scoping leads to an unfocused project
  – Significant time can be lost to integrating with the old world
10. Breaking the Mould
● Conway’s Law: “Organisations which design systems … are constrained to produce designs which are copies of the communication structures of these organisations”
● Kief Morris: “In many cases, applying existing patterns will, at best, miss out on opportunities to leverage newer technology to simplify and improve the architecture. At worst, replicating existing patterns with the newer platforms will involve adding even more complexity.”
● Fred Brooks: “A scaling-up of a software entity is not merely a repetition of the same elements in larger size; it is necessarily an increase in the number of different elements. In most cases, the elements interact with each other in some nonlinear fashion, and the complexity of the whole increases much more than linearly.”
11. The Five Theses
1) The attitude you have to your environment will determine the limits of your scalability
2) Continuous integration (CI) and delivery (CD) is a must
3) Your applications need to be (re)built to fit a dynamic infrastructure
4) Dynamic infrastructure must be treated as a first-class citizen in any cloud project
5) Planning to fail will lead to success
12. Attitude
Thesis 1: The attitude you have to your environment will determine the limits of your scalability
● The more you care about individual things, the more they will hold your attention
● In a truly scalable environment you should only care about the combination of many individual things
13. Attitude: Living in the Iron Age
● You treat your servers like pets
  – You give them names (igloo, husky, snowshoe)
  – You give them homes (racks on site or co-located)
  – If they fail, you do everything you can to save them
● Every server is an investment
  – Often the best hardware that can be afforded
  – Amortised over years
  – Excess capacity to allow for growth
● Provisioning new servers takes weeks
14. Attitude: Living in the Cloud Age
● You treat your servers like cattle
  – They have identifiers
  – You care only where they are geographically
  – If they fail, you put them down and get a new one
● Your architecture is your investment
  – Configuration is chosen for your current load
  – Pay for what you use
  – Capacity can be added when required
● Provisioning new servers takes seconds
15. Attitude: Is Pets vs. Cattle enough?
● Are we simply herding our pets?
  – In a Lift & Shift this is almost certainly so
  – Scaling groups are a start, but they are not the end
● How are we managing our virtual servers?
  – Complex cloud-init scripts?
  – Traditional configuration management?
16. Attitude: The Disposable Infrastructure
● Everything is a package and can be discarded
● You treat your servers like single-use products
  – They’re pre-packaged for a particular purpose
  – You still care only where they are geographically
  – If they fail, you toss them away and grab another
● You automate everything
  – Servers should be immutable
  – Never make a manual change
17. Be Continuous
Thesis 2: Continuous integration (CI) and delivery (CD) is a must
● Repeatability brings reliability and predictability
● Defining a build pipeline:
  – Ensures the same process is followed for every change
  – Provides an audit trail for every change
  – Gives visibility of your value stream
18. Be Continuous
● Your developers probably already practise CI
  – It is the standard for code development
  – The output of CI can be the start of CD
● Continuous delivery doesn’t have to mean continuous deployment
  – Build pipelines can have approval stages
  – Every change should be deployable
19. Refactoring to the Cloud
Thesis 3: Your applications need to be (re)built to fit a dynamic infrastructure
● Many applications expect a static infrastructure
  – Hard-coded assumptions that an IP address won’t change once an application is started
● Many applications are cluster unaware
  – Sticky sessions on load balancers can help
  – Some protocols don’t load balance well
20. Refactoring to the Cloud
● Refactor to contemporary architectural approaches
  – Service Oriented Architectures & Microservices
  – Transition from stateful services to stateless
● Package everything using distribution packagers
  – The output of your build pipeline is an RPM/DEB
  – Your $CM_TOOL already supports this
● Choose a deployment strategy
  – Machine images vs. containers
21. Refactoring to the Cloud
● Fear not vendor lock-in; there are savings to be reaped by leveraging commodity services
  – Use SQS instead of automating the installation and configuration of a message broker and accepting the operational burden of maintaining it
  – Careful abstraction of the API will allow porting to a different platform if absolutely necessary
22. Infrastructure is Code
Thesis 4: Dynamic infrastructure must be treated as a first-class citizen in any cloud project
● Design the infrastructure in parallel to the cloud-aware application changes
● Mandate that every instance is part of a scaling group to enforce cluster awareness
● Use the same principles for infrastructure development as you use for applications
23. Infrastructure is Code
● Script/encode everything unless there is no API/tooling support
● Deploy the same infrastructure in development, test and production environments
  – Sizing can be parameterised
● Your deployment pipeline becomes the assembly of application packages and infrastructure configuration
● High cohesion and loose coupling apply to infrastructure as much as they do to applications
24. Planning to fail
Thesis 5: Planning to fail will lead to success
● If it can go wrong, it will go wrong, so think in terms of when and not if
● Treating our infrastructure and its hosted applications as disposable, in conjunction with CD, eliminates a number of failure scenarios
25. Planning to fail
● Regularly test your disposability
  – Terminate instances at random to ensure resiliency
  – Block all network access to an instance
  – Chaos Monkey & the Simian Army
  – Trigger failovers for less disposable services
● Constantly churning disposable instances helps prevent configuration drift of immutable servers
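The "terminate instances at random" idea from the slide above can be sketched in a few lines of shell. This is a toy illustration, not the Simian Army itself: the instance IDs are fake and the real termination call is left commented out.

```shell
# Hypothetical fleet of instance IDs (in reality this would come from
# a query against the scaling group):
INSTANCES="i-0aaa1111 i-0bbb2222 i-0ccc3333"

# Pick a random victim from the fleet:
VICTIM=$(echo "$INSTANCES" | tr ' ' '\n' | shuf -n 1)
echo "Terminating ${VICTIM}"

# In a real chaos-testing job this would be something like:
# aws ec2 terminate-instances --instance-ids "${VICTIM}"
# ...after which the scaling group should replace the instance
# automatically; if it doesn't, you have found a resiliency gap.
```

The point of running this on a schedule is exactly the last bullet above: instances never live long enough to drift from their image.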
26. Planning to fail
● Availability and durability cost
  – Favour numerous small instances over a handful of large instances
● Identify points of failure and assess:
  – How often will this failure occur?
  – How do I mitigate this failure?
  – How do I test this failure to ensure mitigation?
  – Is the cost of mitigation worth the customer impact during failure?
27. Planning to fail
● Be honest in assessing the worth of your business
  – Do you really need to double your costs to run in multiple regions?
  – Trello, Slack & many other high-profile companies – including Amazon – were affected by the S3 outage
28. Data is not Disposable
● Data is not disposable and is probably more important than your availability
● Ship log files to CloudWatch or Stackdriver
● Make back-ups and regularly test that they restore
  – GitLab had 5 separate backup processes; they all failed
  – Consider storing backups in both S3 & Google Cloud Storage
  – Store backups in multiple regions
29. Data is not Disposable
● If you must use persistent disks:
  – Use multiple disks in RAID-1
  – Snapshot the disks regularly
● Test the durability of your data
  – User error is your biggest risk
    ● “I forgot the WHERE clause”
    ● “I thought I was in the test environment”
  – Regularly exercise data loss & recovery scenarios in development and test environments
30. Under Construction
Let us assume we have a front-end web application which places orders in a queue for subsequent asynchronous fulfilment by a separate application backed by a database. We’ve already refactored our applications for the cloud.
● We will have two identical CI pipelines for the applications, the output of each being AMIs
● A separate CD pipeline executes infrastructure code and rolls out the new AMIs
● The goal is to promote infrastructure and AMIs between environments
31. Application Pipeline
● Source our code from a repo, build and test
● Package our application as a DEB or RPM
● Place our artifact into an S3 repository
● Run Packer to generate a new AMI
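The final step of a pipeline like this might be driven by a Packer template along these lines. This is a hedged sketch only: the region, source AMI, SSH user and the `frontend` package name are illustrative, not taken from the talk.

```json
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "eu-west-1",
    "source_ami": "ami-0123456789abcdef0",
    "instance_type": "t2.micro",
    "ssh_username": "admin",
    "ami_name": "frontend-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "sudo apt-get update",
      "sudo apt-get install -y frontend"
    ]
  }]
}
```

The `{{timestamp}}` in the AMI name gives each pipeline run a unique, traceable image; the provisioner simply installs the DEB built earlier in the pipeline from the package repository.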
32. Packer
● https://packer.io
● Can create many different machine images
● Consider creating a base image to control OS updates
● Use normal configuration management tools
  – Support for Ansible, Chef & Puppet
  – Can just write shell scripts if you must
● Use placeholders for configuration to be filled in by launch scripts
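The placeholder pattern from the last bullet can be shown in a few lines of shell. This is a minimal sketch: the file names, placeholder names and values are all illustrative, not from the talk.

```shell
# Template baked into the AMI by Packer:
cat > app.conf.tpl <<'EOF'
queue_url = {{QUEUE_URL}}
db_host = {{DB_HOST}}
EOF

# At launch, real values would come from user-data, instance tags or
# metadata; here they are hard-coded for demonstration:
QUEUE_URL="https://sqs.eu-west-1.amazonaws.com/123456789012/orders"
DB_HOST="orders-db.internal"

# The launch script substitutes the placeholders to produce the live
# configuration before starting the application:
sed -e "s|{{QUEUE_URL}}|${QUEUE_URL}|" \
    -e "s|{{DB_HOST}}|${DB_HOST}|" \
    app.conf.tpl > app.conf

cat app.conf
```

Keeping the image environment-agnostic this way is what lets the same AMI be promoted unchanged from development through test to production.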
33. Infrastructure Pipeline
● Triggered by new AMIs or Terraform code changes
● Apply Terraform to update the infrastructure
● Run integration tests to verify the application build
● Wait for approval before promotion to the next environment
34. Terraform
● https://terraform.io
● Declarative language for the construction of infrastructure
● Supports all major vendors
● State can be stored in buckets to facilitate sharing
● Separate out infrastructure layers
  – Minimises the blast radius of changes
  – Keep persistent resources apart from disposable ones
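A fragment of the disposable layer for the example front end might look like the following. This is a hedged sketch under assumed names: the bucket, variables (`frontend_ami`, `private_subnet_ids`) and sizes are illustrative, and the new AMI ID is assumed to be passed in by the infrastructure pipeline.

```hcl
# Shared state in a bucket, as mentioned above:
terraform {
  backend "s3" {
    bucket = "example-terraform-state"
    key    = "frontend/terraform.tfstate"
    region = "eu-west-1"
  }
}

resource "aws_launch_configuration" "frontend" {
  image_id      = var.frontend_ami # new AMI from the application pipeline
  instance_type = "t2.micro"

  lifecycle {
    create_before_destroy = true # roll out new AMIs without downtime
  }
}

# Every instance lives in a scaling group (Thesis 4):
resource "aws_autoscaling_group" "frontend" {
  launch_configuration = aws_launch_configuration.frontend.name
  min_size             = 2
  max_size             = 6
  vpc_zone_identifier  = var.private_subnet_ids # spread across zones
}
```

Because the AMI ID is just a variable, promoting a build between environments is a matter of re-applying the same code with a different value, which is exactly what the infrastructure pipeline automates.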
35. Deployed Infrastructure
● Any instance can be terminated
● Resilient to zone failure
● Cross-region read replica allows DR for region failure
  – Just need to run Terraform to add the instances when required and update Route 53
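The cross-region read replica in the last bullet could be expressed in Terraform roughly as follows. This is a speculative sketch: the DR region, resource names and instance class are assumptions, and `aws_db_instance.orders` stands in for a primary database that is not shown.

```hcl
# Second provider configuration for the DR region:
provider "aws" {
  alias  = "dr"
  region = "eu-central-1"
}

# Read replica of the (hypothetical) primary orders database; a
# cross-region replica references the source by ARN:
resource "aws_db_instance" "orders_replica" {
  provider            = aws.dr
  replicate_source_db = aws_db_instance.orders.arn
  instance_class      = "db.t3.medium"
  skip_final_snapshot = true
}
```

In a region failure the replica is promoted, the compute layer is recreated by running Terraform in the DR region, and Route 53 is updated to point at it.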