How do you operate over 1,200 deployments on a single BOSH Director? Many past talks have covered Cloud Foundry at scale, but what about the underlying automation layer? BOSH has its own set of challenges and limits when running VMs and deployments at scale. Learn which obstacles and limits came up and how we solved them with the help of the BOSH core development team. Learn how we monitor the directors, whether via logging, metrics, or performance indicators. We'll also show you how we automate BOSH itself to ensure the best experience for end users, and to keep them blissfully unaware of the complexity of the processes working on their behalf. After this talk you will also be able to run at least 1,200 deployments on your directors.
How to Meet Your P99 Goal While Overcommitting Another Workload - ScyllaDB
Meeting a tight P99 latency goal is hard; it's harder still when running multiple workloads that mix latency-sensitive and analytical work.
In this presentation, I will cover the Scylla schedulers and controllers and demonstrate how they guarantee a good level of resource isolation.
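The shares-based isolation idea behind such schedulers can be sketched in a few lines. This is a toy model, not Scylla's implementation (class and field names are invented): each scheduling group gets a `shares` weight, and each CPU quantum goes to the group with the lowest share-weighted virtual runtime, so a group with 800 shares ends up with roughly four times the CPU of a group with 200.

```python
# Toy model of shares-based CPU scheduling: each group accumulates
# "virtual runtime" inversely proportional to its shares, and the
# scheduler always runs the group with the lowest virtual runtime.
# Class and field names are illustrative, not Scylla's actual API.

class SchedulingGroup:
    def __init__(self, name, shares):
        self.name = name
        self.shares = shares
        self.vruntime = 0.0  # accumulated cost, weighted by shares

def run_quantum(groups, cost=1.0):
    """Pick the group with the smallest virtual runtime and charge it."""
    g = min(groups, key=lambda g: g.vruntime)
    g.vruntime += cost / g.shares   # high shares => slower vruntime growth
    return g.name

latency_sensitive = SchedulingGroup("queries", shares=800)
analytics = SchedulingGroup("compaction", shares=200)
groups = [latency_sensitive, analytics]

ran = [run_quantum(groups) for _ in range(1000)]
print(ran.count("queries"), ran.count("compaction"))  # roughly 800 vs 200
```

The key property: the analytics workload is never starved, yet it can never take more than its proportional slice while the latency-sensitive group has work pending.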
RADOS improvements and roadmap - Greg Farnum, Josh Durgin, Kefu Chai - Ceph Community
Cephalocon APAC 2018
March 22-23, 2018 - Beijing, China
Greg Farnum, Red Hat RADOS Core Developer
Josh Durgin, Red Hat RADOS Lead
Kefu Chai, Red Hat Senior Software Engineer
HBase 2.0.0 has been a couple of years in the making. It is chock-full of new features and fixes. In this session, the 2.0.0 release manager will attempt the impossible: describing the release content within the session's time bounds.
P99CONF — What We Need to Unlearn About Persistent Storage - ScyllaDB
System software engineers have long been taught that disks are slow and sequential I/O is key to performance. With SSD drives, I/O got much faster, but not simpler. In this brave new world of rocket-speed throughput, an engineer has to distinguish sustained workloads from bursts, (still) take care with I/O buffer sizes, account for disks' internal parallelism, and study mixed I/O characteristics in advance. In this talk we will share some key performance measurements of modern hardware that we're taking at ScyllaDB, and our opinion about the implications for database and system software design.
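The sustained-versus-burst distinction can be illustrated with a toy token-bucket disk model (all numbers invented for illustration): a device happily absorbs a short burst above its sustained rate, but a sustained overload is inevitably throttled back down to that rate.

```python
# Toy token-bucket model of a disk that sustains 100 IOPS and has a
# queue that can absorb a burst up to 150 requests deep. Numbers are
# illustrative only.

def simulate(rate, burst, demand_per_sec, seconds):
    """Return how many requests complete each second."""
    tokens = burst
    completed = []
    for _ in range(seconds):
        tokens = min(burst, tokens + rate)      # refill, capped at burst depth
        served = min(demand_per_sec, tokens)
        tokens -= served
        completed.append(served)
    return completed

# A 120-IOPS demand against a 100-IOPS sustained / 150-deep burst device:
out = simulate(rate=100, burst=150, demand_per_sec=120, seconds=5)
print(out)  # the burst hides the overload at first, then throughput caps out
```

A benchmark that only runs for the first two "seconds" of this model would report 120 IOPS and badly overestimate the device, which is exactly why sustained workloads must be measured separately from bursts.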
Accelerating HBase with NVMe and Bucket Cache - Nicolas Poggi
The Non-Volatile Memory Express (NVMe) standard promises an order of magnitude faster storage than regular SSDs, while at the same time being more economical than regular RAM in TB/$. This talk evaluates the use cases and benefits of NVMe drives for use in Big Data clusters with HBase and Hadoop HDFS.
First, we benchmark the different drives using system-level tools (fio) to get the maximum expected values for each device type and set expectations. Second, we explore the different options and use cases of HBase storage and benchmark the different setups. Finally, we evaluate the speedups obtained by NVMe technology for the different Big Data use cases from the YCSB benchmark.
In summary, while NVMe drives show up to an 8x speedup in best-case scenarios, testing the cost-efficiency of new device technologies is not straightforward in Big Data, where we need to overcome system-level caching to measure the maximum benefit.
Scylla Summit 2018: Rebuilding the Ceph Distributed Storage Solution with Seastar - ScyllaDB
Red Hat built a distributed object storage solution named Ceph, which first debuted ten years ago. Now we are seeing rapid developments in the industry and we want to take advantage of them. In this talk, we will briefly introduce Ceph, revisit the problems we are seeing when profiling its I/O performance with flash devices, and explain why we want to embrace the future by switching to Seastar. We'll share with the audience our experiences of how and when we are porting our software to this framework.
Unikraft: Fast, Specialized Unikernels the Easy Way - ScyllaDB
P99 CONF
Unikernels are famous for providing excellent performance in terms of boot times, throughput and memory consumption, to name a few metrics. However, they are infamous for making it hard and extremely time consuming to extract such performance, and for needing significant engineering effort in order to port applications to them. We introduce Unikraft, a novel micro-library OS that (1) fully modularizes OS primitives so that it is easy to customize the unikernel and include only relevant components and (2) exposes a set of composable, performance-oriented APIs in order to make it easy for developers to obtain high performance.
Our evaluation using off-the-shelf applications such as nginx, SQLite, and Redis shows that running them on Unikraft results in a 1.7x-2.7x performance improvement compared to Linux guests. In addition, Unikraft images for these apps are around 1MB, require less than 10MB of RAM to run, and boot in around 1ms on top of the VMM time (total boot time 3ms-40ms). Unikraft is a Linux Foundation open source project and can be found at www.unikraft.org.
Presentation given at the GoSF meetup on July 20, 2016. It was also recorded on BigMarker here: https://www.bigmarker.com/remote-meetup-go/GoSF-EVCache-Peripheral-I-O-Building-Origin-Cache-for-Images
Extreme HTTP Performance Tuning: 1.2M API req/s on a 4 vCPU EC2 Instance - ScyllaDB
In this talk I will walk you through the performance tuning steps that I took to serve 1.2M JSON requests per second from a 4 vCPU c5 instance, using a simple API server written in C.
At the start of the journey the server is capable of a very respectable 224k req/s with the default configuration. Along the way I made extensive use of tools like FlameGraph and bpftrace to measure, analyze, and optimize the entire stack, from the application framework, to the network driver, all the way down to the kernel.
I began this wild adventure without any prior low-level performance optimization experience; but once I started going down the performance tuning rabbit-hole, there was no turning back. Fueled by my curiosity, willingness to learn, and relentless persistence, I was able to boost performance by over 400% and reduce p99 latency by almost 80%.
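As a side note, the headline numbers quoted above are straightforward to reproduce from raw measurements. The sketch below uses made-up latency samples and a simple nearest-rank percentile, not the talk's actual data:

```python
# How the headline numbers are computed: a percentile over latency
# samples, plus relative throughput change. Latency samples here are
# invented for illustration.

def percentile(samples, p):
    """Nearest-rank percentile (p in [0, 100]) over a list of samples."""
    s = sorted(samples)
    k = max(0, min(len(s) - 1, int(round(p / 100 * len(s))) - 1))
    return s[k]

before_rps, after_rps = 224_000, 1_200_000
boost = (after_rps - before_rps) / before_rps * 100
print(f"throughput boost: {boost:.0f}%")   # ~436%, i.e. "over 400%"

# 1000 fake latency samples (ms): mostly fast, with a slow tail
lat_before = [1.0] * 985 + [50.0] * 15
lat_after  = [0.8] * 985 + [10.5] * 15
p99_b, p99_a = percentile(lat_before, 99), percentile(lat_after, 99)
print(f"p99: {p99_b} -> {p99_a} ms")       # the tail dominates p99
```

Note how p99 is driven entirely by the slow tail: shaving the common case barely moves it, which is why tail-focused tools like FlameGraph and bpftrace matter.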
Seastore: Next Generation Backing Store for Ceph - ScyllaDB
Ceph is an open source distributed storage system addressing file, block, and object storage use cases. Next-generation storage devices require a change in strategy, so the community has been developing crimson-osd, an eventual replacement for ceph-osd intended to minimize CPU overhead and improve throughput and latency. Seastore is a new backing store for crimson-osd targeted at emerging storage technologies, including persistent memory and ZNS devices.
hbaseconasia2017: HBase Practice At XiaoMi - HBaseCon
Zheng Hu
We'll share some HBase experience at XiaoMi:
1. How we tuned G1GC for our HBase clusters.
2. The development and performance of the async HBase client.
G1 has been around for quite some time now and since JDK 9 it is the default garbage collector in OpenJDK. The community working on G1 is big and the contributions over the last few years have made a significant impact on the overall performance. This talk will focus on some of these features and how they have improved G1 in various ways, including smaller memory footprint and shorter P99 pause times. We will also take a brief look at what features we have lined up for the future.
Percona XtraBackup - New Features and Improvements - Marcelo Altmann
Percona XtraBackup is an open-source hot backup utility for MySQL-based servers that doesn't lock your database during the backup. In this talk, we will cover the latest developments and new features introduced in XtraBackup and its auxiliary tools: page tracking, Azure Blob Storage support, exponential backoff, keyring components, and more.
Logging at OVHcloud: Logs Data Platform
Logs Data Platform is OVHcloud's centralized platform for collecting, analyzing, and managing logs. It was built to meet the challenge of indexing more than 4 trillion logs for a company like OVHcloud. This presentation describes the overall architecture of Logs Data Platform around its central components, Elasticsearch and Graylog, and covers the scalability, availability, performance, and evolvability problems that make up the daily work of the Observability team at OVHcloud.
The Dark Side Of Go -- Go runtime related problems in TiDB in production - PingCAP
Ed Huang, CTO of PingCAP, talked at Go System Conference about dealing with the typical and profound issues related to Go’s runtime as your systems become more complex. Taking TiDB as an example, he demonstrated how these problems can be reproduced, located, and analyzed in production.
Faster, better, stronger: The new InnoDB - MariaDB plc
For MariaDB Enterprise Server 10.5, the default transactional storage engine, InnoDB, has been significantly rewritten to improve the performance of writes and backups. Next, we removed a number of parameters to reduce unnecessary complexity, not only in the configuration but in the code itself. And finally, we improved crash recovery thanks to better consistency checks, and we reduced memory consumption and file I/O thanks to an all-new log record format.
In this session, we’ll walk through all of the improvements to InnoDB, and dive deep into the implementation to explain how these improvements help everything from configuration and performance to reliability and recovery.
Kafka on ZFS: Better Living Through Filesystems - Confluent
(Hugh O'Brien, Jet.com) Kafka Summit SF 2018
You’re doing disk IO wrong, let ZFS show you the way. ZFS on Linux is now stable. Say goodbye to JBOD, to directories in your reassignment plans, to unevenly used disks. Instead, have 8K Cloud IOPS for $25, SSD speed reads on spinning disks, in-kernel LZ4 compression and the smartest page cache on the planet. (Fear compactions no more!)
Learn how Jet’s Kafka clusters squeeze every drop of disk performance out of Azure, all completely transparent to Kafka.
-Striping cheap disks to maximize instance IOPS
-Block compression to reduce disk usage by ~80% (JSON data)
-Instance SSD as the secondary read cache (storing compressed data), eliminating >99% of disk reads and safe across host redeployments
-Upcoming features: Compressed blocks in memory, potentially quadrupling your page cache (RAM) for free
We’ll cover:
-Basic Principles
-Adapting ZFS for cloud instances (gotchas)
-Performance tuning for Kafka
-Benchmarks
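The ~80% reduction claimed for JSON data is easy to sanity-check before enabling compression. The sketch below uses Python's zlib as a stand-in for ZFS's in-kernel LZ4 (which trades some ratio for much higher speed), with invented sample records:

```python
import json
import zlib

# Estimate on-disk savings for repetitive JSON records. zlib stands in
# for ZFS's in-kernel LZ4; the sample records are invented.
records = [{"user_id": i, "event": "page_view", "ts": 1700000000 + i,
            "agent": "Mozilla/5.0 (X11; Linux x86_64)"} for i in range(1000)]
raw = "\n".join(json.dumps(r) for r in records).encode()
packed = zlib.compress(raw, level=6)
saving = 1 - len(packed) / len(raw)
print(f"{saving:.0%} smaller")  # repetitive JSON typically compresses very well
```

Repeated keys and shared string values are exactly what block compression exploits, which is why JSON-heavy Kafka topics see such dramatic savings.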
Redis Developers Day 2014 - Redis Labs Talks - Redis Labs
These are the slides the Redis Labs team used to accompany the session we gave during the first-ever Redis Developers Day on October 2nd, 2014, in London. They include some of the ideas we've come up with to tackle operational challenges in the hyper-dense, multi-tenant Redis deployments that our service, Redis Cloud, consists of.
OpenNebulaConf 2013 - How Can OpenNebula Fit Your Needs: A European Project Feedback - OpenNebula Project
BonFIRE is a European project which aims at providing a "multi-site cloud facility for applications, services and systems research and experimentation". Grouping different research cloud providers behind a common set of tools, APIs, and services, it enables users to run their experiments against a heterogeneous set of infrastructures, hypervisors, networks, and more.
BonFIRE, and thus the (OpenNebula) testbeds, provide a relatively small set of images used to boot VMs. However, the experimental nature of BonFIRE projects results in a big "turnover" of running VMs. Many VMs are used for a period between a few hours and a few days, and an experiment startup can trigger the deployment of many VMs at the same time on a small set of OpenNebula workers, which does not correspond to the usual cloud workflow.
The default OpenNebula is not optimized for such a use case (a small number of worker nodes, high VM turnover). However, thanks to its ability to be easily modified at each level of a cloud deployment workflow, OpenNebula has been tuned to fit the BonFIRE deployment process better. This presentation will explain how to change the OpenNebula TM and VMM drivers to improve the parallel deployment of many VMs in a short amount of time, reducing the time needed to deploy an experiment to its minimum without a lot of expensive hardware.
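The parallel-deployment idea can be sketched generically; `deploy_vm` below is a hypothetical stand-in, not OpenNebula's actual driver API. The point is simply that dispatching many boots concurrently bounds the rollout time by the slowest boot plus queueing, rather than by the sum of all boots.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Generic sketch of the idea behind the tuned TM/VMM drivers: start many
# VM deployments concurrently instead of one after another. deploy_vm is
# a hypothetical stand-in for the real driver call.

def deploy_vm(vm_id, boot_seconds=0.05):
    time.sleep(boot_seconds)          # pretend to copy the image and boot
    return (vm_id, "RUNNING")

vm_ids = range(20)

t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(deploy_vm, vm_ids))
parallel = time.perf_counter() - t0

serial = sum(0.05 for _ in vm_ids)    # what a one-by-one rollout would cost
print(f"parallel: {parallel:.2f}s vs serial: {serial:.2f}s")
```

In practice the worker-pool size has to respect the image datastore's bandwidth, which is exactly the kind of tuning the presentation discusses.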
Bio:
I'm a French systems engineer who has worked at Inria, the French research laboratory, for 2 years, and I'm involved in free software development and support (the French Ubuntu community, home automation software, etc.). Inside the Myriads team at Inria, I work on a few European research projects including BonFIRE (http://www.bonfire-project.eu), as well as on the free Grid5000 project.
Boosting I/O Performance with KVM io_uringShapeBlue
Storage performance is becoming much more important. KVM io_uring attempts to bring the I/O performance of a virtual machine to almost the same level as bare metal. Apache CloudStack has supported io_uring since version 4.16. Wido will show the performance difference that io_uring brings to the table.
Wido den Hollander is the CTO of CLouDinfra, an infrastructure company offering complete webhosting solutions. CLDIN provides datacenter, IP and virtualization services for the companies within TWS. Wido den Hollander is a PMC member of the Apache CloudStack project and a Ceph expert. He started with CloudStack 9 years ago. What attracted his attention is the simplicity of CloudStack and the fact that it is an open-source solution. Over the years Wido became a contributor and a PMC member, and he was VP of the project for a year. He is one of our most active members, putting a lot of effort into keeping the project active and transforming it into a turnkey solution for cloud builders.
-----------------------------------------
The CloudStack European User Group 2022 took place on 7th April. The day saw a virtual get-together for the European CloudStack community, hosting 265 attendees from 25 countries. The event hosted 10 sessions from leading CloudStack experts, users and skilful engineers from the open-source world, including technical talks, user stories, and presentations of new features and integrations.
------------------------------------------
About CloudStack: https://cloudstack.apache.org/
Would your users like their Lotus Notes client to perform faster? Do some applications and clients seem to load slowly? Join Francie Tanner to learn where to look to find out what's wrong and how to resolve it. Find out how to debug your client, deal with outdated ODS versions, network latency, and application performance issues, and, more importantly, understand why you should care. Gather best practices on how to streamline location and connection documents, and learn why the catalog.nsf is so important. Improve your IBM Lotus Notes client installations to provide a better experience for a happier you and happier end users!
This session will cover performance-related developments in Red Hat Gluster Storage 3 and share best practices for testing, sizing, configuration, and tuning.
Join us to learn about:
Current features in Red Hat Gluster Storage, including 3-way replication, JBOD support, and thin-provisioning.
Features that are in development, including network file system (NFS) support with Ganesha, erasure coding, and cache tiering.
New performance enhancements related to remote direct memory access (RDMA), small-file performance, FUSE caching, and solid-state drive (SSD) readiness.
PGConf APAC 2018 - High Performance JSON: PostgreSQL vs. MongoDB - PGConf APAC
Speakers: Dominic Dwyer & Wei Shan Ang
This talk was presented at Percona Live Europe 2017. However, we did not have enough time to test more scenarios. We will be giving an updated talk with more comprehensive tests and numbers. We hope to run it against CitusDB and MongoRocks as well to provide a comprehensive comparison.
https://www.percona.com/live/e17/sessions/high-performance-json-postgresql-vs-mongodb
In file systems, large sequential writes are more beneficial than small random writes, and hence many storage systems implement a log-structured file system. In the same way, the cloud favors large objects over small objects. Cloud providers place throttling limits on PUTs and GETs, so it takes significantly longer to upload a bunch of small objects than a single large object of the aggregate size. Moreover, there are per-PUT costs associated with uploading smaller objects.
At Netflix, a lot of media assets and their associated metadata are generated and pushed to the cloud.
We would like to propose a strategy to compact these small objects into larger blobs before uploading them to the cloud. We will discuss how to select relevant smaller objects and how to manage the indexing of these objects within the blob, along with the modifications to reads, overwrites, and deletes.
Finally, we will showcase the potential impact of such a strategy on Netflix assets in terms of cost and performance.
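The compaction strategy described above can be sketched in a few lines; the blob format and names below are illustrative assumptions, not Netflix's actual design. Small objects are concatenated into one blob, and an (offset, length) index turns each read into a ranged fetch:

```python
# Minimal sketch of small-object compaction: concatenate objects into
# one blob and keep an (offset, length) index so reads become ranged
# GETs into the blob. Format and names are illustrative only.

def compact(objects):
    """objects: dict of name -> bytes. Returns (blob, index)."""
    blob, index, offset = bytearray(), {}, 0
    for name, data in objects.items():
        index[name] = (offset, len(data))
        blob += data
        offset += len(data)
    return bytes(blob), index

def read(blob, index, name):
    off, length = index[name]
    return blob[off:off + length]   # in the cloud: a ranged GET

assets = {"poster.jpg": b"\xff\xd8jpegdata", "subs.vtt": b"WEBVTT...",
          "meta.json": b'{"title": "example"}'}
blob, index = compact(assets)
print(len(blob), read(blob, index, "meta.json"))
```

Overwrites and deletes in such a scheme typically append a new copy or a tombstone to a fresh blob and rely on periodic re-compaction, which is one of the modifications the talk discusses.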
Automating the Entire PostgreSQL Lifecycle - anynines GmbH
Striving for full automation of the PostgreSQL lifecycle is a solvable challenge. Learn about strategies to automate this RDBMS, see an exemplary architecture, and find out which automation technology is the right tool for the job: BOSH or Kubernetes.
Kill Your Productivity - As Efficient as Possible - anynines GmbH
This is the slide deck anynines Lead Engineer for PaaS - Sven Schmidt - used for his talk at the Cloud Foundry Summit EU 2018 Unconference. Learn about obstacles for productivity and how to avoid them.
This video is part of our talk about BOSH held by the CEO of anynines - Julian Fischer (Twitter: @fischerjulian) - at the SUSECON 2016 in Washington, D.C..
Digital Transformation Case Study | anynines - anynines GmbH
The slides are part of our talk about the "Digital Transformation Case Study" held by CEO of anynines - Julian Fischer (Twitter: @fischerjulian) - at the Pivotal Digital Transformation Forum 2016 in Istanbul.
Experience Report: Cloud Foundry Open Source Operations | anynines - anynines GmbH
Cloud Foundry and OpenStack are the biggest open source projects in their domains. As IaaS and PaaS walk hand in hand, the idea of combining both worlds is a natural one. anynines has been running its public Cloud Foundry offering on top of OpenStack for more than three years, with two years on a self-hosted OpenStack setup. As head of public PaaS operations, Julian Weber has gained a lot of knowledge to share about setting up and operating Cloud Foundry installations. This presentation leads the audience through the journey of adopting the Cloud Foundry open source version and growing it into a highly available, production-ready Cloud Foundry setup. The listener is guided from the analysis of potential single points of failure in standard CF open source setups through to the changes required in the Cloud Foundry OS release to reach our goal. As this talk is about Cloud Foundry operations, we also discuss experiences with BOSH as a general-purpose tool for software lifecycle management of big distributed systems, and possible improvements to the BOSH tool set and workflows. The talk will enable advanced DevOps engineers to dive deeper into the technical details of setting up production-ready Cloud Foundry installations based on Cloud Foundry open source.
Delivering a production Cloud Foundry Environment with Bosh | anynines - anynines GmbH
anynines CEO Julian Fischer walks through how to build a failure-proof Cloud Foundry environment with BOSH using infrastructure availability zones, including a SPOF-free Cloud Foundry runtime and on-demand provisioning of data services.
Building a Production Grade PostgreSQL Cloud Foundry Service | anynines - anynines GmbH
Slides from the talk held at the Cloud Foundry Summit in Santa Clara in 2016 about building an on-demand-provisioning PostgreSQL Cloud Foundry service that can deploy dedicated PostgreSQL servers and 3-node async-replicating clusters using BOSH.
The slides cover important design decisions such as single PostgreSQL servers vs. PostgreSQL clusters, shared vs. dedicated PostgreSQL servers, pre-provisioning vs. on-demand provisioning of VMs, and the right choice of automation technology, as well as a draft of a resulting architecture.
Cloud Infrastructures Slide Set 8 - More Cloud Technologies - Mesos, Spark | anynines - anynines GmbH
Besides IaaS and PaaS, there is a growing number of cluster managers for running specialized compute frameworks. In this set of slides you will find a short introduction to the cluster manager Apache Mesos and the compute framework Apache Spark.
Cloud infrastructures - Slide Set 6 - BOSH | anynines - anynines GmbH
The basic training on Cloud Foundry BOSH describes the features and architecture of BOSH and ends with a practical example in the form of a demonstration of a BOSH release. It covers the BOSH components, such as the BOSH Director, BOSH Health Monitor, BOSH Worker, BOSH Agent, and the BOSH Stemcell, and distinguishes the concepts of a BOSH release, a BOSH job, and a BOSH deployment.
To put the significance of modern cloud technologies into perspective, we first cover the fundamentals of conventional cluster architectures, including concepts such as vertical and horizontal scaling, load balancing, storage types, and so on.
An introduction to the Cloud Infrastructures lecture, covering Cloud Foundry, OpenStack, Lean Startup, Kanban, IaaS, and PaaS. It introduces cloud terminology and gives an overview of the market interests behind the cloud concepts.
Running Cloud Foundry for 12 months - An experience report | anynines - anynines GmbH
anynines ran a public PaaS located in a German datacenter based on Cloud Foundry. In more than 12 months of running a Cloud Foundry PaaS, many lessons about security, high availability, OpenStack, and other exciting topics have been learned. See how BOSH can be used and how it shouldn't be used. Learn how to perform Cloud Foundry upgrades and how to harden Cloud Foundry by adding more fault tolerance with Pacemaker.
NSA - No thanks - Build your own cloud with OpenStack and Cloud Foundry | anynines - anynines GmbH
Nowadays no week goes by without new revelations about privacy breaches. How can we escape the NSA's all-seeing eye? While avoiding US cloud providers would be the obvious answer, you don't want to sacrifice the productivity benefits you get from the cloud. Luckily, there's no need for that! Learn how to build your own equivalents of Amazon EC2, S3, and Heroku with open source software, and get familiar with the basics of the free infrastructure software OpenStack and the Cloud Foundry platform framework.
This talk explains why there should be a European cloud and how to build it. Sharing, the foundation of every cloud, leads to the question: why not share IaaS and PaaS globally? Looking at the latest security news, alongside the Safe Harbour agreement and the Patriot Act, raises the question of where to draw the line between security and freedom. Building a European cloud allows European customers to draw their own line. OpenStack and Cloud Foundry are suitable open source technologies for building such a cloud.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes much work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
7. BOSH Setup
● Overbosh: deploys the runtime and Underbosh
● Underbosh: deploys service brokers and services (this is where the 1,200 deployments live); uses the CredHub colocated on Overbosh
● Utilsbosh: deploys utilities and Prometheus monitoring
8. Lessons Learned
● Deploy less with create-env
● Only create-env Utilsbosh
● Deploy the other directors with Utilsbosh
● Using Overbosh as the CredHub provider for Underbosh can be suboptimal
○ A recreate of Overbosh means people cannot create services
○ A dedicated CredHub/UAA deployment is better
● Using an external RDS does not solve all problems
○ More about that later
10. IO Credits are Fun
● AWS limits IOPS for SSD disks at a rate of 3 IOPS/GB on gp2
● There is no such limit on magnetic volumes (st1, sc1, standard)
● The AWS-Stage BOSH DB runs on gp2
● The AWS-Prod BOSH DB runs on standard
● You can see a disk's IOPS budget in CloudWatch
● Unless it's an RDS instance
● You have to create an alert for each single volume
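The gp2 budget above follows from AWS's published credit model: baseline throughput is 3 IOPS per GB (with a floor of 100 IOPS), and a credit bucket of 5.4 million I/O credits funds bursts of up to 3,000 IOPS. A quick sketch of that arithmetic (the numbers are AWS's documented gp2 parameters, not anything measured here):

```python
# Back-of-the-envelope gp2 IOPS budget, using AWS's documented gp2 model:
# baseline = 3 IOPS/GB (floor 100), bursts up to 3,000 IOPS funded by an
# I/O credit bucket that starts at 5.4 million credits.

def gp2_baseline_iops(size_gb: int) -> int:
    """Baseline IOPS a gp2 volume can sustain indefinitely."""
    return max(100, 3 * size_gb)

def gp2_burst_seconds(size_gb: int, burst_iops: int = 3000,
                      bucket: int = 5_400_000) -> float:
    """Seconds a full credit bucket lasts at a given burst rate."""
    drain = burst_iops - gp2_baseline_iops(size_gb)
    if drain <= 0:
        return float("inf")  # the baseline already covers the burst rate
    return bucket / drain

# A 50 GB gp2 volume: 150 baseline IOPS, roughly half an hour of full burst.
print(gp2_baseline_iops(50))              # 150
print(round(gp2_burst_seconds(50) / 60))  # 32 (minutes)
```

This is why a small, quiet database volume can look fast for weeks and then fall off a cliff: once the bucket drains, the volume drops to its baseline.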
11. Effects
● The database in AWS-Prod is consistently slower, but with no variation in response times
● The database in AWS-Stage went unresponsive at some points
○ BOSH sometimes sends a few thousand requests which do large joins
● The EU-Prod BOSH has 50 GB of standard disk ($3/mo)
● The EU-Stage BOSH has 1 TB of gp2 disk ($119/mo)
12. Things That Drain Your IOPS
● The daily snapshot task, even if snapshots are disabled
○ Has since been made less severe
● bosh vms / bosh deployments
○ More on that later
● If the IOPS on your director disk get depleted repeatedly, switch to magnetic storage like sc1 or st1
○ Slower than gp2 at max speed
○ Costs half as much
○ Consistent, and fast enough for BOSH
14. September 2018
● 670 deployments
● The BOSH director is very slow
● Some queries take 2-3 minutes to complete
● Scaling BOSH and the DB brings only minor reductions
● An m4.2xlarge RDS is a bit faster, but does not solve it
● More disk IOPS does not help
15. Solution
● Updating the director
● The reason was that every bosh vms also made the director select the deployment configs for each deployment separately
○ Even though they were not part of the output
● SAP stumbled over the issue first and fixed it
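The per-deployment config selects described above are a classic N+1 query pattern. The director's actual code is Ruby/Sequel, so this is only an illustration of the shape of the problem, using SQLite and made-up table names:

```python
# Illustration (not the director's actual code) of the N+1 pattern behind
# the slow `bosh vms`: one SELECT per deployment vs. a single batched query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE configs (deployment TEXT, body TEXT)")
conn.executemany("INSERT INTO configs VALUES (?, ?)",
                 [(f"d{i}", "---") for i in range(670)])

# N+1 style: 670 separate round trips, one per deployment.
n_plus_one = [
    conn.execute("SELECT body FROM configs WHERE deployment = ?",
                 (f"d{i}",)).fetchone()
    for i in range(670)
]

# Batched style: one query fetches everything the caller needs.
batched = conn.execute("SELECT deployment, body FROM configs").fetchall()

assert len(n_plus_one) == len(batched) == 670
```

Both return the same data, but the first issues 670 queries where one would do; at 670 deployments per `bosh vms` call, that overhead dominated.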
16. November 2018
● BOSH unresponsive or very slow
● No uploads/deploys possible
● The persistent disk is 50% free
● "df -i" showed all inodes exhausted
● BOSH stores task logs on disk
○ And deletes them regularly
○ If you have 900 deployments and the Prometheus BOSH exporter does a bosh vms every 5 minutes, you create tasks faster than BOSH cleans them up
○ 1.8m task log folders on disk
○ Each one contained 0-3 log files
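The arithmetic behind that exhaustion, plus a minimal inode probe of the kind you could wire into an alert (the mount path is a placeholder; the check is POSIX-only):

```python
# 900 deployments polled by `bosh vms` every 5 minutes means one task-log
# folder per deployment per poll -- far faster than cleanup removed them.
tasks_per_day = 900 * (24 * 60 // 5)
print(tasks_per_day)  # 259200 task-log folders per day

# Minimal inode-usage probe for the alert mentioned on the next slide.
import os

def inode_usage(path: str) -> float:
    """Fraction of inodes in use on the filesystem containing `path`."""
    st = os.statvfs(path)
    return (1.0 - st.f_ffree / st.f_files) if st.f_files else 0.0

# Replace "/" with the director's persistent-disk mount in real use.
if inode_usage("/") > 0.9:
    print("ALERT: inode usage above 90%")
```

At a quarter million new folders a day, even a large disk runs out of inodes long before it runs out of bytes, which is exactly what "df -i" revealed.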
17. Solution
● Removing some older log files (1.79m of them)
● Scaling the disk
● Notifying BOSH core
● Setting up an alert for inode usage on all persistent disks
● Switching from the BOSH exporter to the Graphite HM plugin
● BOSH core made the director more aggressive at purging old task logs
○ Went from 1.6m task logs on disk to just 18,000
18. December 2018
● BOSH is very slow
● Sometimes it locks up for minutes
● The database works on some queries longer than BOSH is willing to wait
● Happens whenever a service is deployed or updated
19. Investigation
● It turns out that when you use `bosh tasks -r`, it queries the last 30 tasks
● We had 3.5m tasks in the DB
● Query: SELECT * FROM "tasks" WHERE ("deployment_name" = 'd27eda6') ORDER BY "timestamp" DESC LIMIT 30
○ There is no index on deployment_name
○ So if only 29 matching tasks exist, it crawls through all 3.5m rows looking for task number 30
○ Most deployments have fewer than 30 tasks in the DB
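The effect of the missing index is easy to reproduce. The director uses a different database, so this is an illustration with SQLite and a toy schema, not the actual BOSH tables:

```python
# Reproducing the missing-index problem: without an index on
# deployment_name, the LIMIT 30 query scans the whole tasks table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, "
             "deployment_name TEXT, timestamp REAL)")
conn.executemany(
    "INSERT INTO tasks (deployment_name, timestamp) VALUES (?, ?)",
    [(f"d{i % 1000}", float(i)) for i in range(100_000)])

query = ("SELECT * FROM tasks WHERE deployment_name = ? "
         "ORDER BY timestamp DESC LIMIT 30")

# Without an index the planner falls back to a full table scan.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query, ("d42",)).fetchall()

conn.execute("CREATE INDEX idx_tasks_deployment ON tasks (deployment_name)")

# With the index, only the matching rows are touched.
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query, ("d42",)).fetchall()

print(plan_before[0][-1])  # a SCAN over the whole table
print(plan_after[0][-1])   # a SEARCH using idx_tasks_deployment
```

The same shape holds in PostgreSQL and MySQL: an equality predicate on an unindexed column forces a scan of every row, and at 3.5m tasks that scan happened on every `bosh tasks -r` call.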
20. Solution
● Change to -r=1
● Run a deploy task for each deployment to make sure there is at least one task
● Filed an issue with BOSH core (No. 2105)
● BOSH core fix:
○ BOSH deletes old tasks faster, so you have fewer (10 instead of 2 in each run)
○ Put an index on task types
○ 3.5m tasks down to 1,100 tasks in the DB
22. Things You Should Monitor
● Network IP exhaustion
○ IaaS-dependent, but running out of IPs during deploys is suboptimal
○ Especially when a customer notices first
● Disk IOPS (depending on IaaS)
● Quota limitations
○ The record holder is Azure, where a limit increase took 9 days
● CPU credits on important instances
● Disk inode usage, not just how full the disk is in terms of data
● Certificate expiration
● Check whether metrics are missing
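For the certificate-expiration item, a small sketch of a check you could run from a monitoring job. The host and port are placeholders (25555 is the director API's conventional port), and the date parsing is split into its own helper so it can be tested without a network connection:

```python
# Sketch of a certificate-expiration check; host/port below are placeholders.
import socket
import ssl
from datetime import datetime, timezone

def days_left(not_after, now=None):
    """Days until an OpenSSL-style notAfter string, e.g. 'Jun  1 12:00:00 2030 GMT'."""
    expiry = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expiry = expiry.replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expiry - now).total_seconds() / 86400

def cert_days_left(host, port=25555):
    """Connect over TLS and report days until the server cert expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return days_left(tls.getpeercert()["notAfter"])

# In a monitoring job: alert when cert_days_left("bosh.example.com") < 30
```

Running this daily against the director, UAA, and CredHub endpoints catches expirations weeks before they turn into an outage.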
23. What 1,200 Deployments Taught Us
● The BOSH team is usually rather fast at fixing issues that block the director
● BOSH itself is pretty stable
● Change from the Prometheus BOSH exporter to the Graphite HM plugin
● For most small to medium environments, a t2.large (2 CPUs, 8 GB RAM with burst CPU) or equivalent is plenty
● For large environments, an m5.xlarge or m5.2xlarge is enough
○ Disk IO/network speed will most likely be the bottleneck
24. Advice
● Don't overdo it on the worker count
○ Our biggest director still has only 9 workers for tasks
○ The others usually have 3-4 workers
● Otherwise you run the risk of CPU-starving yourself when all workers are in use simultaneously