In this presentation, we present the performance measurement metrics of the leading cloud providers: AWS, Google Cloud, Microsoft Azure, and Digital Ocean. We'll give you useful tools to measure your own cloud performance and a handy guide on how to calculate cloud TCO (total cost of ownership). In addition, you'll learn how to correctly estimate your market positioning and perform better than the cloud giants.
Boyan Krosnov is a Co-Founder and Chief Product Officer of StorPool Storage. He has been part of the technical teams building five service providers from scratch in four countries. In most of these projects, he designed the architecture, led the technical teams, and managed the implementation of projects worth millions.
Propelling IoT Innovation with Predictive Analytics (SingleStore)
In this session, Nikita Shamgunov, CTO and co-founder of MemSQL, will conduct a live demonstration based on real-time data from 2 million sensors on 197,000 wind turbines installed on wind farms around the world. This Internet of Things (IoT) simulation explores the ways utility companies can integrate new data pipelines into established infrastructure. Attendees will learn how to deploy this breakthrough technology composed of Apache Kafka, a real-time message queue; Streamliner, an integrated Apache Spark solution; MemSQL Ops, a cluster management and monitoring interface; and a set of simulated data producers written in Python. By applying machine learning to analyze millions of data points in real time, the data pipeline predicts and visualizes health of wind farms at global scale. This architecture propels innovation in the energy industry and is replicable across other IoT applications including smart cities, connected cars, and digital healthcare.
In this session, Boyan Krosnov, CPO of StorPool will discuss a private cloud setup with KVM achieving 1M IOPS per hyper-converged (storage+compute) node. We will answer the question: What is the optimum architecture and configuration for performance and efficiency?
CEPH DAY BERLIN - DISK HEALTH PREDICTION AND RESOURCE ALLOCATION FOR CEPH BY ... (Ceph Community)
Ceph is intelligent. However, users usually make resource requests with no guarantees, because they have no visibility into underlying disk health, no idea of resource availability, and no prediction of future demands. Machine learning can now make this possible. We'll present how machine learning technologies help predict Ceph OSD health, the predicted impact on clusters, and possible resolutions. We'll use Kubernetes working with Ceph as an example.
RADOS improvements and roadmap - Greg Farnum, Josh Durgin, Kefu Chai (Ceph Community)
Cephalocon APAC 2018
March 22-23, 2018 - Beijing, China
Greg Farnum, Red Hat RADOS Core Developer
Josh Durgin, Red Hat RADOS Lead
Kefu Chai, Red Hat Senior Software Engineer
CEPH DAY BERLIN - 5 REASONS TO USE ARM-BASED MICRO-SERVER ARCHITECTURE FOR CE... (Ceph Community)
Arm-based micro-server architecture intro and why you would want to use it.
*Smallest failure domain
*Linear scale out on capacity & performance (data sharing)
*Easy to use Ceph Management GUI (short intro on UVS manager)
*Power saving on hyper-scale DC
*Lower TCO
*Use case sharing
CEPH DAY BERLIN - CEPH MANAGEMENT THE EASY AND RELIABLE WAY (Ceph Community)
Deploying Ceph can be a hassle. And once it's up and running, it can be time-consuming to add servers: install an operating system, configure the network, configure Ceph, ... But it doesn't have to be that hard. We've built a full Ceph management suite to help you with these tasks. With croit, you just need to plug in your server and our zero-touch provisioning takes care of the rest. croit boots a customized Linux image that is ready to be used directly from memory: no installation necessary! We'll show a live demo, deploying a cluster from scratch and performing a few typical admin tasks, all without using the command line at all. In addition, we'll show you how croit can help you save a lot of money.
Scylla Summit 2018: Rebuilding the Ceph Distributed Storage Solution with Sea... (ScyllaDB)
Red Hat built a distributed object storage solution named Ceph, which first debuted ten years ago. Now we are seeing rapid developments in the industry and we want to take advantage of them. In this talk, we will briefly introduce Ceph, revisit the problems we are seeing when profiling its I/O performance with flash devices, and explain why we want to embrace the future by switching to Seastar. We'll share with the audience our experience of how and when we are porting our software to this framework.
StorPool presents at Cloud Field Day - the leading technology event focused on the impact of cloud technologies on enterprise IT. During the event, the high-performance block storage specialist will showcase how its storage technology allows cloud builders to easily outperform cloud titans like AWS, Microsoft Azure and GCP.
Performance is of major importance for modern applications and workloads. No matter whether you run a private cloud or deliver public cloud services to customers, you need to ensure excellent performance for the workloads running on the cloud. Often misunderstood, storage has a direct impact not only on the reliability of cloud services, but also on the performance of the entire cloud.
https://storpool.com/news/storpool-presents-at-cloud-field-day-9
Azure Native Qumulo scales elastically for common High Performance Compute (HPC) workloads based on application requirements for: Financial Services, Automotive, Genomics / Life Sciences, Media and Entertainment, Energy, Oil and Gas, and more. Performance can be increased (and elastically decreased) much higher than the examples shown here. These slides offer a glimpse into ANQ's HPC capabilities, although at a smaller scale. We invite YOU to do your own testing (with a free ANQ trial) and work with us to test your HPC workloads in Azure.
Get Your Head in the Cloud - Lessons in GPU Computing with Schlumberger (inside-BigData.com)
In this presentation from the GPU Technology Conference, Wyatt Gorman from Google and Abhishek Gupta from Schlumberger present: Get Your Head in the Cloud - Lessons in GPU Computing with Schlumberger.
"Demand for GPUs in High Performance Computing is only growing, and it is costly and difficult to keep pace in an entirely on-premise environment. We will hear from Schlumberger on why and how they are utilizing cloud-based GPU-enabled computing resources from Google Cloud to supply their users with the computing power they need, from exploration and modeling to visualization."
Watch the video: https://wp.me/p3RLHQ-kcl
Learn more: https://www.blog.google/products/google-cloud/schlumberger-chooses-gcp-to-deliver-new-oil-and-gas-technology-platform/
and
https://www.nvidia.com/en-us/gtc/
Implementing data and databases on K8s within the Dutch government (DoKC)
A small walkthrough of projects within the Dutch government running data(bases) on OpenShift. This talk shares success stories, provides a proven recipe to `get it done`, and debunks some of the FUD.
About Sebastiaan:
I have always been a bit of an unusual DBA, trying to combine databases with out-of-the-box thinking and a DevOps mindset. Around 2016 I fell in love with both Postgres and Kubernetes, and I have since committed myself to enabling Dutch organisations to run their database workloads cloud-natively.
Over the last few years I have worked as a private contractor for two large government agencies doing exactly that, and I want to share my own and others' success stories, hoping to enable and inspire Data on Kubernetes adoption.
Quantifying the Noisy Neighbor Problem in Openstack (Nodir Kodirov)
Two of the desirable features for private clouds are better control and predictable performance. Although public clouds have been extensively researched to characterize their unpredictable performance, private clouds have received less scrutiny.
In this talk, we will present how production workloads interfere with each other in an OpenStack-based cloud. We draw lessons from a several-month-long study of running workloads in different configurations on a highly available implementation of OpenStack. We study the impact of noisy neighbors on the network and storage I/O performance of applications. We also look at the performance metrics of the OpenStack control plane and how API calls are affected as the number of entities such as networks, routers, VMs, and volumes grows. Our study relies on a tool that we developed to create clean and noisy workload deployments, using micro-benchmarks as well as enterprise workloads such as Hadoop, Jenkins, and Redis.
The state of Hive and Spark in the Cloud (July 2017) (Nicolas Poggi)
Originally presented at the BDOOP and Spark Barcelona meetup groups: http://meetu.ps/3bwCTM
Cloud providers currently offer convenient on-demand managed big data clusters (PaaS) with a pay-as-you-go model. In PaaS, analytical engines such as Spark and Hive come ready to use, with a general-purpose configuration and upgrade management. Over the last year, the Spark framework and APIs have been evolving very rapidly, with major improvements on performance and the release of v2, making it challenging to keep up-to-date production services both on-premises and in the cloud for compatibility and stability. The talk compares:
• The performance of both v1 and v2 for Spark and Hive
• PaaS cloud services: Azure HDinsight, Amazon Web Services EMR, Google Cloud Dataproc
• Out-of-the-box support for Spark and Hive versions from providers
• PaaS reliability, scalability, and price-performance of the solutions
Using BigBench, the new Big Data benchmark standard. BigBench combines SQL queries, MapReduce, user code (UDF), and machine learning, which makes it ideal to stress Spark libraries (SparkSQL, DataFrames, MLlib, etc.).
OSDC 2018 | Three years running containers with Kubernetes in Production by T... (NETWAYS)
The talk gives a state-of-the-art update on experiences with deploying applications on Kubernetes at scale. Whether in the cloud or on premises, Kubernetes has taken over the leading role as a container operating system. The central paradigm of stateless containers connected to storage and services is the core of Kubernetes. However, it can be extended to distributed databases, machine learning, and Windows VMs in Kubernetes. All of these applications were considered edge cases a few years ago, yet they are going more and more mainstream today.
Webinar: High Performance MongoDB Applications with IBM POWER8 (MongoDB)
Innovative companies are building Internet of Things, mobile, content management, single view, and big data apps on top of MongoDB. In this session, we'll explore how the IBM POWER8 platform brings new levels of performance and ease of configuration to these solutions which already benefit from easier and faster design and development using MongoDB.
The OpenEBS Hangout #4 was held on 22nd December 2017 at 11:00 AM (IST and PST), where a live demo of cMotion was shown. Storage policies of OpenEBS 0.5 were also explained.
OpenNebulaConf 2016 - Measuring and tuning VM performance by Boyan Krosnov, S... (OpenNebula Project)
In this session we'll explore measuring VM performance and evaluating changes to settings or infrastructure which can affect performance positively. We'll also share the best current practice for architecture for high performance clouds from our experience.
Running Projects in Application Containers, System Containers & VMs - Jelasti... (Jelastic Multi-Cloud PaaS)
The benefits of virtualization and cloud technologies have already become clear from countless published articles and talks. However, the abundance of available options creates a "problem of choice". One question comes up periodically: which virtualization technology to choose for a specific use case. In this session, we'll analyze the differences between running projects inside application containers, system containers, and VMs. We will cover the peculiarities of deployment, resource usage efficiency, cloud interoperability, and security for each type, as well as discuss which options are more appropriate for different cases. In addition, we'll review the possibilities of running your application inside a Kubernetes cluster, what configurations should be taken into account, and how to overcome the barriers on the way to more efficient Kubernetes hosting.
Webinar recording https://www.youtube.com/watch?v=8m_8PL8mXsU
Learn more about efficient Kubernetes hosting https://jelastic.com/kubernetes-hosting/
Container Types https://jelastic.com/blog/container-types/
Containers and VMs on same Host https://jelastic.com/blog/container-virtual-machines-hosted-together/
Next Generation Cloud Computing With Google - RightScale Compute 2013 (RightScale)
Speaker: Martin Gannholm - Lead Engineer, Google
Google Cloud Platform provides everything you need to build, run, and scale social, mobile, and online applications. Already, tens of thousands of popular applications like Khan Academy, Angry Birds, SnapChat, and Pulse are benefiting from the power of running on top of Google infrastructure. Come join Google as we go deep on how to best leverage our technology with RightScale to build your next masterpiece.
Architecting Analytic Pipelines on GCP - Chicago Cloud Conference 2020 (Mariano Gonzalez)
Modernizing analytics data pipelines to get the most out of your data while optimizing costs can be challenging. However, cloud providers today offer a good set of services that can help with this endeavor. During this hands-on session we will take a tour of several GCP services, using Dataflow (Apache Beam) as the backbone to architect a modern analytics pipeline and wire them all together.
Many companies build new-age KVM clouds, only to find out that their applications and workloads do not perform well. In this talk we'll show you how to get the most out of your KVM cloud and how to optimize it for performance: you'll understand why performance matters and how to measure it properly. We'll teach you how to optimize CPU and memory for ultimate performance and how to tune the storage layer. You'll find out what the main components of an efficient new-age cloud are and which network components work best. In addition, you'll learn how to select the right hardware to achieve unmatched performance for your new-age cloud and applications.
Venko Moyankov is an experienced system administrator and solutions architect at StorPool Storage. He has experience managing large virtualization environments, working in telcos, and designing and supporting the infrastructure of large enterprises. In the last year, his focus has been on helping companies globally to build the best storage solution according to their needs and projects.
As we all know by now, the threat posed by ransomware is here to stay. Current variants specifically target mid-sized companies and attack online backups. The only insurance against weeks of downtime and high costs are offline-capable media that can be stored outside the online system (air gap). Besides tape, which still has its place above all for petabyte-scale archives, modern removable media can also be designed to offer random access, high built-in security and, at the same time, offline capability.
Christian Peschke has been working as COO at FAST LTA in the development and production of secondary storage technologies for more than 10 years. Shaped by his interdisciplinary responsibilities, as well as by his fondness for restoring rare classic cars, he has a strong eye for detail and a knack for clever solutions.
Tape-based object storage as an S3 storage class and cloud safeguard (data://disrupted®)
Cloud- and object-based storage is becoming increasingly important because it is easy to integrate and avoids many drawbacks of traditional file systems. Cloud providers and vendors of object storage products today predominantly use hard disks as storage media. The rapidly growing volumes of data stored on these systems lead to requirements for tiering (e.g. of inactive data) and protection (e.g. against ransomware) into an S3 storage class that is economical and secure. A tape-based object storage is particularly well suited for this. The talk discusses the technical design and the advantages of this approach.
Thomas Thalmann is Managing Director of PoINT Software & Systems GmbH. He has more than 20 years of experience in the storage market from various positions in development, product and project management, pre-sales and consultancy. He has been with PoINT Software & Systems since 1994 and Managing Director since 2015.
Rook: Storage for Containers in Containers – data://disrupted® 2020 (data://disrupted®)
In this talk Kim-Norman Sahm and Alexander Trost dive into the challenges of storage for containerized applications on Kubernetes. We'll look at the current state and at how Rook can help with it. We are going to focus especially on Ceph run through Rook, while trying not to lose sight of the whole picture. There is a lot to keep in mind with storage as it is, but everything gets more complex with storage for containers. From what type of storage to how much and how "safe" it should be: these are all questions that should be asked, and most of them should be answered as well. Rook's project site: https://rook.io/
Kim-Norman Sahm is CTO of Cloudical. He also works as Executive Cloud Architect at Cloudical. Previously, he was OpenStack Cloud Architect at T-Systems (operational services GmbH) and noris network AG. He is an expert in OpenStack, Ceph and Kubernetes (CKA).
Alexander Trost works as a DevOps Engineer at Cloudical Deutschland GmbH and is a Certified Kubernetes Administrator (CKA). He is one of four maintainers of the Rook.io project and is involved in several other open source projects, for example a Prometheus exporter for Dell hardware (Dell OMSA Metrics), k8s-vagrant-multi-node (an easy local multi-node Kubernetes environment), and others. Besides containers and Kubernetes, he is an expert in software-defined storage, Golang and continuous integration (with GitLab CI). He passionately enjoys working on open source projects such as Rook, Ancientt and others.
Storage Benchmarks - Voodoo or Science? – data://disrupted® 2020 (data://disrupted®)
The new storage array has been delivered and is ready for operation, and the first thing the storage admin does is run dd. More or less satisfied, he then looks at the throughput and is pleased that procurement bought something decent this time. Or not. The talk explains why it is so difficult to run meaningful storage benchmarks and briefly covers various benchmarking tools. In the second part of the talk I go into more detail on the internals of the Storage Performance Council's SPC-1 benchmark and show how sensible, or not, it is to rely on supposedly objective performance measurements when making purchasing decisions.
Wolfgang Stief has been working in the IT industry as a Dipl.-Ing. since the mid-1990s. After many years in support and pre-sales at a Sun partner, he went freelance in 2011. As a technology consultant and explainer he does freelance technical marketing with a focus on enterprise storage and writes for storage-forum.de. He also serves on the management board of sys4 AG and studies the history of long-gone but not forgotten IT companies.
Data storage 2020 to 2030 – still on hard disks? – data://disrupt... (data://disrupted®)
Toshiba reports on current trends and future developments in data storage components: mobile, at home, in data centers and in the cloud. Scenarios based on storage with tape, hard disks and SSDs are compared against the requirements for capacity, performance, power consumption, cost and production capacity. With a glimpse into new technologies from the company's hard disk development laboratories, and taking the above requirements into account, the talk gives an outlook on the data storage landscape of the next 10 years. A particular focus in the hard disk area this time is the evaluation of the currently controversial SMR ("shingled magnetic recording") technology.
Rainer Kaese has been with Toshiba for more than 25 years. He initially specialized in application-specific ICs, led the ASIC Design Center and later the business development team for ASIC and foundry products. He is currently responsible for introducing Toshiba's enterprise HDD products into data centers, cloud computing and enterprise applications.
Tape as a storage medium – why there is no alternative – data://disrupted® 2020 (data://disrupted®)
Performance improvements across the board, cost reductions and outstanding data protection: advancing digitalization as well as new laws and regulations on data archiving are the origin of data archiving and of its spectacular growth. With a market share of 60%, tape technology plays a dominant role in data archiving, and annual growth of 24% shows the technology's unbroken upward trend. In the talk you will learn which hurdles tape manufacturers had to overcome to make the technology future-proof. We show how the storage market has developed in favor of tape in recent years and what we can forecast for the coming years. We also introduce the newest tape generation, LTO9, which is already in the starting blocks. It demonstrates the potential of the technology and pushes the parameters further to give users the best possible performance for data storage: 18 TB capacity and a 400 MB/s transfer rate are only two of its convincing key figures. We have also addressed the needs of object storage users: in the talk you will learn how Fujifilm now makes it possible to write your data to tape in object-based form, so that you can benefit from large cost savings and unprecedented security in the object storage space.
Florian Brendel joined Fujifilm Recording Media in 2016 in business development for the German market. Since 2017 he has been responsible for the DACH region, advising and supporting companies on storage solutions with a focus on large customers that maintain storage capacities in the petabyte range.
Ransomware: without an air gap & tape you are lost! – data://disrupted® 2020 (data://disrupted®)
The threat situation is intensifying: Allianz rates cyber incidents as the biggest business risk of all, and the BSI speaks of the "mass spread of sophisticated attack methods by organized crime." IT security systems are, of course, the first line of defense. But as recent attacks, including ransomware, have shown, they do not always work, and then very many or even all data, including the backup data, are destroyed. At that point the only remaining option is to restore the data from a physical air-gap medium. See why tape is the most secure medium, learn why almost all large cloud service providers use tape, and see the future roadmap, product development and further areas of use for tape.
Josef Weingand is Business Development Manager for Tape Storage at IBM, responsible for the DACH region. He has over 23 years of experience in tape storage and works in both technical support and sales support for data protection, data retention and tape solutions. He has contributed to several IBM Redbooks and holds several patents in the storage area.
HCI simply simple! IT infrastructure like a smartphone! – data://disrupted®... (data://disrupted®)
HCI simply simple. Let us show you how Scale HC3 simplifies your IT infrastructure. Scale HC3 is the smartphone of infrastructure: scalability, efficiency, performance and costs under control!
Thorsten Schäfer is a very experienced sales manager with more than 15 years in the computer software industry, skilled in hyper-converged infrastructure (HCI), software-defined storage (SDS), storage area networks (SAN), VMware, storage and VDI. A strong sales professional with an equally strong technical background (the first German DCME), he is also experienced in business development, consulting, engineering, and channel development & management.
William van Collenburg has almost 20 years of experience in IT, working on both the service and the technical side. His main focus has always been on transferring knowledge in an understandable way. Because his experience is broad and spans many areas of IT and technology, he can not only help customers but also lend a hand when problems arise. William has extensive knowledge of and experience with many storage, networking and (desktop) virtualization technologies. Among other things, he has worked at EMC, was a trainer at Dell, a pre-sales consultant for virtualization at VMware and Citrix, and a pre-sales engineer at Springpath (now Cisco HyperFlex).
Reliable IT operation requires redundant data storage. For the last 30 years RAID systems have been used for this; the technology is mature and has been perfectly adequate so far. With growing disk sizes and the arrival of all-flash arrays and distributed file systems, however, the "simple" redundancy that RAID mechanisms offer is no longer sufficient. Erasure coding fills this gap. In the talk, Wolfgang Stief explains how erasure coding works in principle. In the first part he covers a few of the basic ideas and terms behind erasure coding. In the second part our expert describes, by example, erasure coding implementations that are common today, trying to get by with as little mathematics as possible. The talk also highlights the advantages as well as the problems, limits and pitfalls of using erasure coding, and closes with a few examples of applications in which erasure coding is already common today.
Nextcloud as an on-premises solution for highly secure data exchange (Frank Karli... (data://disrupted®)
This talk is a field report on how Nextcloud is used in the public sector as a secure and GDPR-compliant data exchange platform. By now the German federal government, as well as other European and non-European governments, rely on Nextcloud. German state administrations and school and research institutions also run their own Nextcloud instances. The talk gives a practical overview of why on-premises cloud file sync and share is becoming more and more important, and what the success factors are.
The different architectures in use, with their respective advantages and disadvantages, are also examined. In addition, the talk presents existing and upcoming features that are important for the secure and scalable operation of on-premises cloud services.
Operation Unthinkable – Software Defined Storage @ Booking.com (Peter Buschman) (data://disrupted®)
The story of the plan that was just crazy enough to work! Learn how Booking.com failed its way to success on a multi-year journey away from single-purpose storage-appliances, predatory-licensing, and over-complicated networking to create a unique storage solution for their hyper-scale private-cloud environment.
The IBM 3592 storage solution: a taste of the future (Anne Ingenhaag) (data://disrupted®)
Learn what the manufacturers of tape technologies have to consider from a technical point of view so that tapes with a capacity of up to 20 TB and a transfer rate of up to 400 MB/s, with excellent performance, can be brought to market. Learn what the future of the technology holds and what we are already researching today. The focus will be on the technological differences between conventional LTO technology and IBM's enterprise (Jaguar) technology. The 3592 technology not only offers 67% more storage capacity, 10x higher data integrity, a longer lifetime and fewer migration cycles than LTO8 technology, it also includes several technical features that speed up access to the data by up to 50%.
CANDIDATE EXPERIENCE – what applicants actually expect. (data://disrupted®)
To be successful in recruiting, companies must learn to think and act from the applicant's perspective. The candidate experience aims to turn the application process for your potential employees into a positive, motivating experience. But how and where to start? Do you know what your applicants expect?
get-a-MINT shows you how to apply the candidate experience concept to your recruiting processes in practice and how to optimize your applicants' candidate journey.
Cloud/object-based data storage with HSM/ILM in S3 storage classes (Tho... (data://disrupted®)
An essential and increasingly pressing problem is the growth of inactive unstructured data. The talk shows how this challenge is solved with the help of HSM/ILM in combination with cloud/object-based technologies. In particular, the integration of different S3 storage classes together with erasure-coding-protected tape storage systems offers many advantages in terms of cost, data security and scalability.
In a brisk ride through the land of current storage buzzwords, I briefly explain some technical background behind the terms and try to separate the sense from the nonsense. At the end of the half hour, fellow travelers will have a rough understanding of things like storage class memory, persistent memory or computational storage, and can take home a few terms that will help with further research on their own.
High-performance storage systems for data analysis at TU Dresden (Michael Kluge) (data://disrupted®)
To support big data and machine learning scenarios, the Center for Information Services and High Performance Computing (ZIH) at TU Dresden has built a new storage landscape with "NVMe storage" (2 PB capacity and 2 TB/s bandwidth, <100us latency) and a "warm archive" based on the S3 protocol (10 PB capacity and 50 GB/s bandwidth). Dr. Michael Kluge of the ZIH (head of the System and Service Design department) explains the special requirements of this project and reports on building and operating the environment.
While most storage vendors still see the next big storage hype in IIoT/IoT, the analytics world is showing us other pioneering trends in processing the data coming from sensors and sensor chains. Google, for example, is already bringing neural networks as embedded AI into sensor chipsets with Coral, at a cost of a few cents. As we learned in mathematics, the trend here is also moving toward simplification and reduction already in the first steps of the task: redundancies and other useless baggage need neither be transmitted nor even stored ...
Search and Society: Reimagining Information Access for Radical Futures (Bhaskar Mitra)
The field of Information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build, inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies need to be explicitly articulated, and we need to develop theories of change in the context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
Key Trends Shaping the Future of Infrastructure.pdf (Cheryl Hung)
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
UiPath Test Automation using UiPath Test Suite series, part 4 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring of JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
"Impact of front-end architecture on development cost", Viktor TurskyiFwdays
I have heard many times that architecture is not important for the front-end. I have also seen, many times, how developers implement features on the front-end just by following the standard rules of a framework, thinking that this is enough to successfully launch the project, and then the project fails. How can you prevent this, and which approach should you choose? I have launched dozens of complex projects, and during the talk we will analyze which approaches have worked for me and which have not.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... (Ramesh Iyer)
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. Fostering a culture of innovation takes much work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Benchmarking your cloud performance with top 4 global public clouds
1. Benchmarking your cloud performance with top 4
global public clouds
Boyan Krosnov
data://disrupted
2020
2. ● Chief of Product & co-founder at StorPool
● 20+ years in ISPs, SDN, SDS
● IT Infrastructure with a focus on invention,
performance & efficiency
About me
https://www.linkedin.com/in/krosnov/
bk@storpool.com
3. About StorPool
● NVMe software-defined storage for VMs and containers
● Scale-out, HA, API-controlled
● Since 2011, in commercial production use since 2013
● Based in Sofia, Bulgaria
● Mostly virtual disks for KVM
● … and bare metal Linux hosts
● Also used with VMware, Hyper-V, XenServer
● Integrations into OpenStack/Cinder, Kubernetes Persistent
Volumes, CloudStack, OpenNebula, OnApp
4. Why performance
● Better application performance -- e.g. time to load a page, time to
rebuild, time to execute specific query
● Happier customers in cloud and multi-tenant environments
● ROI, TCO - Lower cost per delivered resource (per VM) through
higher density
● Public cloud - win customers over from your competitors
● Private cloud - do more with less; win applications / workloads /
teams over from public cloud
5. 1. Understanding performance
2. Benchmarks of public clouds
3. How to measure and optimize your own cloud
4. What's in a TCO
5. Conclusion
Agenda
14. 1. Understanding performance
2. Benchmarks of public clouds
3. How to measure and optimize your own cloud
4. What's in a TCO
5. Conclusion
Agenda
15. VMs and block storage
* - ramdisk used to reduce usable RAM to 16 GB

Provider | Instance name | Region | Monthly cost (with 12-month commitment) | vCPUs | RAM | free -m
AWS | Compute optimized: c5.2xlarge | us-east-2 | $245 | 8 | 16GB | 15,437
Google Cloud | General purpose: n2-8vcpu-16gb | us-central1 | $197 | 8 | 32GB | 32,116*
Microsoft Azure | Compute optimized: Standard_F8s_v2 (8 vCPUs, 16 GiB memory) | East US 2 | $235 | 8 | 16GB | 15,962
Digital Ocean | CPU Optimized Droplet: 16GB | sfo2 | $160 | 8 | 16GB | 16,039
Katapult | ROCK-24 | London | $120 | 8 | 24GB | 23,458*

Storage volume | Size of volume [GiB] | IOPS limit | Monthly cost
AWS - EBS gp2 | 1024 | 3,072 | $102
Google Cloud - SSD persistent disk 1T | 1024 | 15,000 | $174
Microsoft Azure - Premium SSD 1T | 1024 | 3,500 | $123
DigitalOcean - Block Storage 1T | 1024 | 10,000 | $102
Katapult - Shared disk NVMe (StorPool-based) | 1024 | unlimited | $154
16. ● Storage heavy, a little CPU
○ FIO, rsync (see the fio sketch after this slide)
● Storage & CPU
○ pgbench, sysbench
● CPU, RAM*
○ coremark
● Network*
* - future additions to our suite
Tools used
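As a rough illustration only, here is a minimal sketch of the kind of QD1 / QD64 random read/write storage test behind the FIO numbers on the next slide, written in Python around fio. The target device, block size and runtime are placeholder assumptions, not the exact job definitions from our suite; point it only at a disposable test volume.

```python
# Minimal sketch: run fio at queue depth 1 and 64 against a test volume and
# report total random r/w IOPS and mean read latency (fio 3.x JSON output).
# /dev/vdb, 4k block size and 60s runtime are placeholder assumptions.
import json
import subprocess

def run_fio(iodepth, target="/dev/vdb", runtime=60):
    cmd = [
        "fio", "--name=randrw", "--filename=" + target,
        "--rw=randrw", "--rwmixread=50", "--bs=4k",
        "--ioengine=libaio", "--direct=1",
        "--iodepth=" + str(iodepth),
        "--runtime=" + str(runtime), "--time_based",
        "--output-format=json",
    ]
    data = json.loads(subprocess.run(cmd, capture_output=True, check=True).stdout)
    job = data["jobs"][0]
    iops = job["read"]["iops"] + job["write"]["iops"]
    lat_ms = job["read"]["clat_ns"]["mean"] / 1e6  # mean completion latency in ms
    return iops, lat_ms

for qd in (1, 64):
    iops, lat_ms = run_fio(qd)
    print(f"QD{qd}: {iops:,.0f} IOPS, {lat_ms:.2f} ms mean read latency")
```

QD1 exposes latency (how fast a single outstanding request completes), while QD64 exposes throughput under concurrency, which is why both columns appear in the results.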
17. Results - FIO
Storage type | FIO rand r/w QD1 latency [ms] | FIO QD1 random r/w IOPS | FIO QD64 random r/w IOPS
Katapult 1T ($153), StorPool-based | 0.10 ms | 10,101 IOPS | 113,447 IOPS
AWS EBS gp2 1T ($102) | 0.36 ms | 2,762 IOPS | 3,123 IOPS
Google Cloud SSD Persistent Disk 1T ($174) | 0.72 ms | 1,386 IOPS | 15,436 IOPS
Azure Premium SSD 1T ($124) | 8.18 ms | 122 IOPS | 5,100 IOPS
DO Block Storage 1T ($102) | 3.34 ms | 299 IOPS | 1,044 IOPS
18. Results - rsync
Storage type | seconds to re-sync
Katapult 1T ($153), StorPool-based | 85
AWS EBS gp2 1T ($102) | 176
Google Cloud SSD Persistent Disk 1T ($174) | 281
Azure Premium SSD 1T ($124) | 431
DO Block Storage 1T ($102) | 1,303
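For context, the re-sync figure above is essentially the wall-clock time of an rsync pass over an already-synced file tree after part of it has changed. A minimal sketch of how such a timing could be taken is below; the source and destination paths are hypothetical placeholders, not our actual dataset.

```python
# Sketch: time an rsync re-sync of a source tree onto the volume under test.
# /data/src/ and /mnt/testvol/dst/ are placeholder paths; rsync must be installed.
import subprocess
import time

def resync_seconds(src="/data/src/", dst="/mnt/testvol/dst/"):
    start = time.monotonic()
    subprocess.run(["rsync", "-a", "--delete", src, dst], check=True)
    return time.monotonic() - start

print(f"re-sync took {resync_seconds():.0f} seconds")
```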
24. 1. Understanding performance
2. Benchmarks of public clouds
3. How to measure and optimize your own cloud
4. What's in a TCO
5. Conclusion
Agenda
25. ● Design benchmarks which reflect your use-case and application
● Measure what matters. Examples:
○ developer productivity - simple SQL database for up to X users, so no
need to pay for complexity of clusters; runs CI/tests in half the time
○ Efficiency - $ per user, $ per feature
● If you can't measure what matters directly, find good proxies. Example:
○ "I can't run my entire stack as a benchmark, but I know it consists of a load balancer and a transaction-heavy database, so I'll use a load balancer and a DB benchmark" (see the pgbench sketch after this slide)
Benchmarks
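One way to script the transaction-heavy database half of that proxy is pgbench, PostgreSQL's bundled benchmark. The sketch below is only an assumed illustration; the database name, scale factor, client count and duration are placeholders you would tune to resemble your own workload, and a reachable PostgreSQL server is assumed.

```python
# Sketch: drive pgbench as a proxy for a transaction-heavy database workload.
# Assumes PostgreSQL and pgbench are installed and the "bench" database exists;
# scale factor, clients, threads and duration are illustrative placeholders.
import subprocess

DB = "bench"
subprocess.run(["pgbench", "-i", "-s", "100", DB], check=True)   # initialize at scale factor 100
result = subprocess.run(
    ["pgbench", "-c", "16", "-j", "4", "-T", "300", DB],         # 16 clients, 4 threads, 300 s
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # pgbench prints latency and transactions per second (tps)
```

Recording the resulting tps and latency per cloud, and dividing by the monthly price, gives a "measure what matters" number much closer to a real application than raw IOPS.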
26. Storage benchmarks
Beware: lots of snake oil out there!
● performance numbers from hardware configurations totally
unlike what you’d use in production
● synthetic tests with high iodepth - 10 nodes, 10 workloads *
iodepth 256 each. (because why not)
● testing with ramdisk backend
● synthetic workloads don't approximate real world
27. ● Previous version of our tools and methodology:
○ https://storpool.com/storage-performance-and-resilience-testing
● We'll be releasing updated tools and method with the write-up
in the next month
○ coremark, fio, rsync, pgbench, sysbench
● Until then drop us an email at info@storpool.com
Benchmarks
28. 1. Your existing hardware can give you more
a. See Venko's talk on KVM optimization (tomorrow 11am)
b. fast networking (OVS-DPDK), fast storage (StorPool)
2. If you are building a new cloud - optimize for your use-case
a. per-rack power limit
b. per-core performance, per-core memory, per-core storage
c. per-core cost
Hardware
30. ● Hardware
● Host OS and hypervisor (KVM)
● Virtual networking, service mesh
● Storage
Optimization areas
31. 1. Understanding performance
2. Benchmarks of public clouds
3. How to measure and optimize your own cloud
4. What's in a TCO
5. Conclusion
Agenda
32. ● Define minimum service level
● When comparing options, use a TCO tool (a large spreadsheet) to find the lowest cost per delivered unit of infrastructure (a fixed-size VM/container with associated storage and networking)
● 100s of parameters
● Usable for both public and private scenarios
TCO approach
34. 1. Datacenter
- power, cooling, max power per rack, remote hands
2. Compute
- servers, CPUs, RAM, minimum core performance, cloud
orchestration, management cost
3. Storage
- storage servers, drives, software, management cost
4. Network
- virtual network, CPU/RAM allocation, software, management
cost
- public/wide area network, IP transit cost
What to include
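To make "lowest cost per delivered unit" concrete, here is a heavily simplified sketch of the arithmetic such a spreadsheet performs across the four groups above. Every number is an illustrative placeholder; the real model tracks hundreds of parameters per line item.

```python
# Heavily simplified TCO sketch: monthly cost per delivered standard VM.
# All inputs are illustrative placeholders, not real quotes or StorPool figures.

# 1. Datacenter: power, cooling, rack space, remote hands (per server, per month)
dc_per_server = 120.0

# 2. Compute: server hardware amortized over 48 months + orchestration/management
server_capex, amortization_months = 9000.0, 48
compute_per_server = server_capex / amortization_months + 60.0

# 3. Storage: allocated share of storage servers, drives, software, management
storage_per_server = 150.0

# 4. Network: virtual network CPU/RAM share, software, IP transit
network_per_server = 80.0

cost_per_server = dc_per_server + compute_per_server + storage_per_server + network_per_server

# Delivered units: fixed-size VMs per server, discounted by the average fill rate
vms_per_server = 40
utilization = 0.7

cost_per_vm = cost_per_server / (vms_per_server * utilization)
print(f"cost per delivered VM: ${cost_per_vm:.2f}/month")
```

Running the same calculation for each candidate design, and alongside a public-cloud quote, makes the per-VM comparison direct.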
35. 1. Understanding performance
2. Benchmarks of public clouds
3. How to measure and optimize your own cloud
4. What's in a TCO
5. Conclusion
Agenda
36. 1. You can't judge a VM by its vCPUs and vRAM
2. Measure what matters to you
3. If you run a public or private cloud, 2x-3x higher application performance (per $!) than the hyperscalers is within reach. Half the price for the same workload!
4. On your next project, work with partners who understand performance. You can gain a lot!
Conclusions