This document provides an overview and planning guidelines for a first Ceph cluster. It discusses Ceph's object, block, and file storage capabilities and how it integrates with OpenStack. Hardware sizing examples are given for a 1 petabyte storage cluster with 500 VMs requiring 100 IOPS each. Specific lessons learned are also outlined, such as realistic IOPS expectations from HDD and SSD backends, recommended CPU and RAM per OSD, and best practices around networking and deployment.
Ceph Object Storage Performance Secrets and Ceph Data Lake Solution - Karan Singh
In this presentation, I explain how Ceph object storage performance can be improved drastically, along with object storage best practices, recommendations, and tips. I also cover the Ceph shared data lake, which is becoming very popular.
BlueStore, A New Storage Backend for Ceph, One Year In - Sage Weil
BlueStore is a new storage backend for Ceph OSDs that consumes block devices directly, bypassing the local XFS file system that is currently used today. Its design is motivated by everything we've learned about OSD workloads and interface requirements over the last decade, and everything that has worked well and not so well when storing objects as files in local file systems like XFS, btrfs, or ext4. BlueStore has been under development for a bit more than a year now, and has reached a state where it is becoming usable in production. This talk will cover the BlueStore design, how it has evolved over the last year, and what challenges remain before it can become the new default storage backend.
Ceph is an open-source, software-defined storage platform and an excellent (I would say the only) storage backend for cloud storage. Ceph is the future of storage. In this presentation I explain Ceph and OpenStack briefly; you will definitely enjoy it.
In this session, you'll learn how RBD works (a brief CLI sketch follows the list), including how it:
Uses RADOS classes to make access easier from user space and within the Linux kernel.
Implements thin provisioning.
Builds on RADOS self-managed snapshots for cloning and differential backups.
Increases performance with caching of various kinds.
Uses watch/notify RADOS primitives to handle online management operations.
Integrates with QEMU, libvirt, and OpenStack.
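To make the list above concrete, here is a minimal CLI sketch of thin provisioning, snapshots, and copy-on-write cloning with RBD; the pool, image, and snapshot names (rbd/test, snap1, snap2) are illustrative assumptions, not examples from the talk:

  # create a 10 GB image; space is allocated lazily (thin provisioning)
  rbd create --size 10240 rbd/test
  # snapshot, protect, and clone it (copy-on-write)
  rbd snap create rbd/test@snap1
  rbd snap protect rbd/test@snap1
  rbd clone rbd/test@snap1 rbd/test-clone
  # differential backup: export only the changes between two snapshots
  rbd snap create rbd/test@snap2
  rbd export-diff --from-snap snap1 rbd/test@snap2 test.diff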
Storage tiering and erasure coding in Ceph (SCaLE13x) - Sage Weil
Ceph is designed around the assumption that all components of the system (disks, hosts, networks) can fail, and has traditionally leveraged replication to provide data durability and reliability. The CRUSH placement algorithm is used to allow failure domains to be defined across hosts, racks, rows, or datacenters, depending on the deployment scale and requirements.
Recent releases have added support for erasure coding, which can provide much higher data durability and lower storage overheads. However, in practice erasure codes have different performance characteristics than traditional replication and, under some workloads, come at some expense. At the same time, we have introduced a storage tiering infrastructure and cache pools that allow alternate hardware backends (like high-end flash) to be leveraged for active data sets while cold data are transparently migrated to slower backends. The combination of these two features enables a surprisingly broad range of new applications and deployment configurations.
This talk will cover a few Ceph fundamentals, discuss the new tiering and erasure coding features, and then discuss a variety of ways that the new capabilities can be leveraged.
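As a hedged illustration of the two features this talk covers, the commands below create an erasure-coded pool and put a replicated cache tier in front of it. The profile, pool names, and PG counts are made-up examples, and option names vary across Ceph releases (Firefly-era releases use ruleset-failure-domain; newer ones use crush-failure-domain):

  # define an erasure-code profile: 4 data + 2 coding chunks, host failure domain
  ceph osd erasure-code-profile set ec42 k=4 m=2 ruleset-failure-domain=host
  ceph osd pool create ecpool 128 128 erasure ec42
  # put a small replicated pool in front as a writeback cache tier
  ceph osd pool create cachepool 128 128
  ceph osd tier add ecpool cachepool
  ceph osd tier cache-mode cachepool writeback
  ceph osd tier set-overlay ecpool cachepool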
Ceph scale testing with 10 Billion Objects - Karan Singh
In this performance test, we ingested 10 billion objects into the Ceph object storage system and measured its performance. We observed deterministic performance; check out this presentation for the details.
Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C... - Odinot Stanislas
After a short introduction to distributed storage and a description of Ceph, Jian Zhang presents some interesting benchmarks in this deck: sequential tests, random tests, and above all a comparison of results before and after optimizations. The configuration parameters touched and the optimizations applied (large page numbers, omap data on a separate disk, ...) yield at least a 2x performance gain.
[Open Infrastructure & Cloud Native Days Korea 2019]
We share case studies of building customer-facing services with community versions of OpenStack and Ceph: a flexible enterprise cloud service, and an exchange service with strict security requirements that we built and operated. We also cover the technology stack used in these projects, troubleshooting cases, and optimization approaches. For OpenStack, Open Source Consulting is the answer, as always.
#openstack #ceph #openinfraday #cloudnative #opensourceconsulting
VMware ESXi - Intel and Qlogic NIC throughput difference v0.6 - David Pasek
We are observing different network throughput on Intel X710 and QLogic FastLinQ QL41xxx NICs. ESXi supports NIC hardware offloading and queueing on 10Gb, 25Gb, 40Gb and 100Gb adapters. The use of multiple hardware queues per NIC interface (vmnic) and multiple software threads in the ESXi VMkernel is depicted and documented in this paper, and may or may not be the root cause of the observed problem. The key objective of this document is to clearly document and collect NIC information on two specific network adapters and compare them, to find the difference or at least a root-cause hypothesis for further troubleshooting.
This presentation provides an overview of the Dell PowerEdge R730xd server performance results with Red Hat Ceph Storage. It covers the advantages of using Red Hat Ceph Storage on Dell servers with their proven hardware components that provide high scalability, enhanced ROI cost benefits, and support of unstructured data.
[오픈소스컨설팅] OpenStack Ceph, Neutron, HA, Multi-Region - Ji-Woong Choi
This deck explains OpenStack, Ceph, and Neutron.
1. OpenStack
2. How to create instance
3. Ceph
- Ceph
- OpenStack with Ceph
4. Neutron
- Neutron
- How neutron works
5. OpenStack HA - controller - L3 agent
6. OpenStack multi-region
DPDK (Data Plane Development Kit) Overview by Rami Rosen
* Background and short history
* Advantages and disadvantages
- Very high-speed networking acceleration at L2
- How this acceleration is achieved (hugepages, optimizations)
- rte_kni (and KCP)
- VPP (and the FD.io project), providing routing and switching
- TLDK (Transport Layer Development Kit, TCP/UDP)
* Anatomy of a simple DPDK application.
* Development and governance model
* Testpmd: DPDK CLI tool
* DDP - Dynamic Device Profiles
Rami Rosen is a Linux kernel expert and the author of "Linux Kernel Networking" (Apress, 2014).
Rami has published two articles about DPDK in the last year:
"Network acceleration with DPDK"
https://lwn.net/Articles/725254/
"Userspace Networking with DPDK"
https://www.linuxjournal.com/content/userspace-networking-dpdk
CRUSH is the powerful, highly configurable algorithm Red Hat Ceph Storage uses to determine how data is stored across the many servers in a cluster. A healthy Red Hat Ceph Storage deployment depends on a properly configured CRUSH map. In this session, we will review the Red Hat Ceph Storage architecture and explain the purpose of CRUSH. Using example CRUSH maps, we will show you what works and what does not, and explain why.
Presented at Red Hat Summit 2016-06-29.
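For reference, a minimal rule from a CRUSH map illustrates the kind of placement policy the session walks through; this is generic example syntax of that era, not a map from the talk:

  rule replicated_example {
      ruleset 0
      type replicated
      min_size 1
      max_size 10
      step take default
      step chooseleaf firstn 0 type host   # spread replicas across hosts
      step emit
  }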
This is a Korean translation of "OpenStack: Inside Out" by Etsuji Nakai of Red Hat.
Once again, many thanks to Etsuji Nakai for sharing such a good document.
http://www.slideshare.net/enakai/open-stack-insideoutv10
Best Practices & Performance Tuning - OpenStack Cloud Storage with Ceph. In this presentation, we discuss best practices and performance tuning for OpenStack cloud storage with Ceph to achieve high availability, durability, reliability and scalability at any point in time. We also discuss best practices for failure domains, recovery, rebalancing, backfilling, scrubbing, deep scrubbing, and operations.
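A hedged sketch of the recovery/backfill/scrub throttling such tuning usually touches; the values are illustrative only, and exact option names and defaults vary by Ceph release:

  [osd]
  osd max backfills = 1              # throttle concurrent backfills per OSD
  osd recovery max active = 1        # limit parallel recovery ops per OSD
  osd recovery op priority = 3       # favor client I/O over recovery traffic
  osd scrub load threshold = 0.5     # skip scrubs when the node is busy
  osd deep scrub interval = 1209600  # deep-scrub every two weeks (seconds)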
Accelerating HBase with NVMe and Bucket Cache - Nicolas Poggi
The Non-Volatile Memory Express (NVMe) standard promises an order of magnitude faster storage than regular SSDs, while at the same time being more economical than regular RAM in TB/$. This talk evaluates the use cases and benefits of NVMe drives for use in Big Data clusters with HBase and Hadoop HDFS.
First, we benchmark the different drives using system level tools (FIO) to get maximum expected values for each different device type and set expectations. Second, we explore the different options and use cases of HBase storage and benchmark the different setups. And finally, we evaluate the speedups obtained by the NVMe technology for the different Big Data use cases from the YCSB benchmark.
In summary, while the NVMe drives show up to 8x speedup in best case scenarios, testing the cost-efficiency of new device technologies is not straightforward in Big Data, where we need to overcome system level caching to measure the maximum benefits.
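As an illustration of the FIO methodology mentioned above, a typical raw-device random-read run looks like the sketch below; the device path and job parameters are placeholders, not the talk's actual benchmark settings:

  fio --name=randread --filename=/dev/nvme0n1 --direct=1 \
      --rw=randread --bs=4k --ioengine=libaio --iodepth=32 \
      --numjobs=4 --runtime=60 --time_based --group_reporting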
Accelerating HBase with NVMe and bucket cache - David Grier
This set of slides describes some initial experiments we designed to discover performance improvements for Hadoop technologies using NVMe technology.
Crimson: Ceph for the Age of NVMe and Persistent Memory - ScyllaDB
Ceph is a mature open source software-defined storage solution that was created over a decade ago.
During that time new faster storage technologies have emerged including NVMe and Persistent memory.
The Crimson project's aim is to create a better Ceph OSD that is better suited to those faster devices. The Crimson OSD is built on the Seastar C++ framework and can leverage these devices by minimizing latency, CPU overhead, and cross-core communication. This talk will discuss the project design, our current status, and our future plans.
Presentation on how GRNET uses Ceph as a storage backend on its Cloud Computing services. Technical specs, lessons learned, future plans.
Presentation held at the 1st GEANT SIG-CISS Meeting in Amsterdam, 2017-09-25.
GRNET - Greek Research and Technology network is the state-owned Greek NREN.
Vulkan and DirectX12 share many common concepts, but differ vastly from the APIs most game developers are used to. As a result, developing for DX12 or Vulkan requires a new approach to graphics programming and in many cases a redesign of the Game Engine. This lecture will teach the basic concepts common to Vulkan and DX12 and help developers overcome the main problems that often appear when switching to one of the new APIs. It will explain how those new concepts will help games utilize the hardware more efficiently and discuss best practices for game engine development.
For more, visit http://developer.amd.com/
Storage Spaces Direct - the new Microsoft SDS star - Carsten Rachfahl (ITCamp)
Storage Spaces Direct will provide new, previously unseen possibilities for the Microsoft hypervisor Hyper-V: on one hand, a highly performant, highly available Scale-Out File Server with the ability to use internal, non-shared disks such as SATA HDDs and SSDs and even NVMe devices; on the other hand, a hyper-converged Hyper-V cluster where the VMs and their storage run on the same servers. And let's not forget Azure Stack! The first version of Microsoft's private/hosted cloud solution will only be supported on the hyper-converged S2D infrastructure. Join this session to learn about this great new technology and the role it will play in future private and hosted cloud infrastructure implementations.
How to Accelerate Your Application Delivery Process on Top of Kubernetes Usin... - Mirantis
Learn how to ease the burden of Kubernetes operational challenges with DevOpsCare, powered by Lens. Get seamless visibility into monitoring, managing, and securing your cloud native apps. Automate in CI/CD and learn policy-based best practices so developers can go back to building applications.
Are you worried about granting too much access to resources on your Kubernetes cluster? With the extensible framework of Kubernetes, there is scarcely a day without a new tool popping up. In order to ensure the tools, users, and applications have appropriate security policies, a streamlined onboarding process is required.
Using Kubernetes to make cellular data plans cheaper for 50M users - Mirantis
A use case of Kubernetes-based NFV infrastructure used in production to run an open source evolved packet core. Presented by Facebook Connectivity and Mirantis at KubeCon + CloudNativeCon Europe 2020.
Slides from webinar by Mirantis about how to build a basic edge cloud using surveillance cameras. Watch the webinar recording at: https://bit.ly/mirantis-edge-cloud
Comparison of Current Service Mesh Architectures - Mirantis
Learn the differences between Envoy, Istio, Conduit, Linkerd and other service meshes and their components. Watch the recording including demo at: https://info.mirantis.com/service-mesh-webinar
OpenStack and the IoT: Where we are, where we're going, what we need to get t... - Mirantis
OpenStack Austin discussion from Spring, 2016, with Sean Collins, Niki Acosta, Nick Chase, Xiaoping Chen, Alexander Adamov discussing issues such as security, architecture, and other technical and social issues.
Boris Renski: OpenStack Summit Keynote Austin 2016 - Mirantis
We tend to split the cloud world today into just two paradigms - public and private. Public works. Private doesn’t…. Or so says Gartner. Let’s compare side by side.
Digital Disciplines: Attaining Market Leadership through the Cloud - Mirantis
Keynote by Joe Weinman, author of Cloudonomics and Digital Disciplines, at OpenStack Silicon Valley 2015.
Joe wraps the event by delineating four generic strategies used by leading tech companies and traditional blue chips to leverage a variety of information technologies: information excellence, i.e., better processes; solution leadership, i.e., cloud-connected smart, digital products and services; collective intimacy, i.e., big-data-based algorithms to enhance customer relationships; and accelerated innovation through open source, challenges, and innovation networks.
Decomposing Lithium's Monolith with Kubernetes and OpenStack - Mirantis
Keynote by Lachlan Evenson, Team Lead of Cloud Platform Engineering at Lithium Technologies, at OpenStack Silicon Valley 2015.
Application developers are rapidly moving to container-based models for dynamic service delivery and efficient cluster management. In this session, we will discuss an OpenStack production environment that is rapidly evolving to leverage a hybrid cloud platform to deliver containerized microservices in a SaaS development/continuous integration environment. Kubernetes is being used to simplify and automate the service delivery model across public/private (OpenStack, AWS, GCE) environments and is being introduced in a way that eliminates extra overhead and engineering effort. Lithium is actively contributing to key open source upstream projects and working closely with its engineering/development teams to optimize software efficiency with an elastic cloud architecture that delivers on the benefits of cloud automation.
OpenStack: Changing the Face of Service Delivery - Mirantis
Keynote by Lew Tucker, VP and CTO of Cloud Computing at Cisco, at OpenStack Silicon Valley 2015.
As more companies move to software-driven infrastructures, OpenStack opens up new possibilities for traditional network service providers, media production, and content providers. Micro-services, and carrier-grade service delivery become the new watchwords for those companies looking to disrupt traditional players with virtualized services running on OpenStack.
Keynote by Diane Bryant, SVP and GM of the Data Center Group at Intel, at OpenStack Silicon Valley 2015.
Cloud computing provides tremendous agility and efficiency to organizations and is the driver of the digital service economy. In her keynote, Diane Bryant will discuss how Intel was an early leader in the adoption of cloud computing under her tenure as CIO, and how this experience has shaped a broader strategy to deliver tens of thousands of new clouds across the enterprise with Intel's new Cloud for All initiative. Attendees can expect to learn about OpenStack's critical role in shaping the future of the enterprise data center, and about key industry efforts to make the OpenStack platform enterprise-ready.
Containers for the Enterprise: It's Not That Simple - Mirantis
Keynote by Alex Polvi, CEO of CoreOS, at OpenStack Silicon Valley 2015.
Containers are rapidly finding their way into enterprise data centers. But enterprises like to consume complete products. How do technologies like containers make their way from hyperscale ubiquity to enterprise nirvana? Alex offers some clues.
Protecting Yourself from the Container Shakeout - Mirantis
Keynote by Boris Renski, Co-Founder and CMO of Mirantis, and Lachlan Evenson, Team Lead of Cloud Platform Engineering at Lithium Technologies, at OpenStack Silicon Valley 2015.
The Docker-fueled container craze is much less of a threat to VMs or OpenStack than it is to PaaS vendors. The story of “do it the way Google does it” is proving just as tough to monetize with enterprises as commodity cloud was. Boris will talk about OpenStack as a “safe harbor” from the coming container shakeout, leveraging the project’s maturity as a place to try various container strategies until the winner emerges.
Lachlan Evenson of Lithium will join Boris to share how Lithium deployed Kubernetes on OpenStack.
Keynote by James Staten, Chief Strategist of the Cloud + Enterprise division of Microsoft, at OpenStack Silicon Valley 2015.
Clinton campaign manager James Carville reminded his team often that driving change came through winning the hearts and minds of the people and where government affects them the most: “It’s the economy, stupid.” In helping enterprises make the shift to cloud, the biggest issue isn’t the technology but the process change organizations have to go through that determine success. In this session, James Staten, chief strategist for the Microsoft Cloud+Enterprise division, and former lead cloud analyst at Forrester Research will share his findings and recommendations for helping enterprise organizations, particularly IT Orgs, successfully navigate a change to the cloud.
Keynote by OpenStack Foundation Executive Director Jonathan Bryce at OpenStack Silicon Valley 2015.
Hundreds of companies are running millions of cores in production with OpenStack. The work continues, but the platform is mature. Now, the community is evolving OpenStack into a platform for innovation—a reliable environment in which to test, try and adopt new technologies as they prove themselves.
Amit Tank of DIRECTV will join Jonathan Bryce to discuss his organization's plans for using OpenStack as the one platform for integration of VMs, containers and emergent technologies down the road.
5. What is Ceph?
• Ceph is a free & open distributed storage platform that provides unified object, block, and file storage.
6. Object Storage
• Native RADOS objects:
– hash-based placement
– synchronous replication
– radosgw provides S3- and Swift-compatible REST access to RGW objects (striped over multiple RADOS objects); see the sketch below.
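A minimal sketch of the S3-compatible access path, assuming a radosgw endpoint at rgw.example.com:7480 and credentials created on the gateway; host, port, and all names are placeholders (older Firefly-era gateways typically ran behind Apache on port 80):

  # create an S3 user on the gateway
  radosgw-admin user create --uid=demo --display-name="Demo User"
  # then talk plain S3 to radosgw, e.g. with s3cmd
  # (credentials via ~/.s3cfg or --access_key/--secret_key)
  s3cmd --host=rgw.example.com:7480 mb s3://bucket1
  s3cmd --host=rgw.example.com:7480 put file.dat s3://bucket1/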
7. Block storage
• RBD block devices are thinly provisioned over RADOS objects and can be accessed by QEMU via the librbd library (example below).
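For example (a sketch with made-up pool and image names), QEMU can create and attach an RBD-backed disk directly through librbd:

  # create a raw image stored as striped RADOS objects
  qemu-img create -f raw rbd:rbd/vm-disk0 10G
  # boot a guest from it via librbd
  qemu-system-x86_64 -m 2048 \
      -drive format=raw,file=rbd:rbd/vm-disk0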
8. File storage
• CephFS metadata servers (MDS) provide a POSIX-compliant file system backed by RADOS (mount example below).
– …still experimental?
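A minimal mount sketch, assuming a monitor at 192.168.0.10 and a keyring already deployed; addresses and paths are illustrative:

  # kernel client
  mount -t ceph 192.168.0.10:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret
  # or the FUSE client (reads monitors from /etc/ceph/ceph.conf)
  ceph-fuse /mnt/cephfs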
9. How does Ceph fit with OpenStack?
• RBD drivers for OpenStack tell libvirt to configure the QEMU interface to librbd (a sample config sketch follows this slide).
• Ceph benefits:
– Multi-node striping and redundancy for block storage (Cinder volumes, Nova ephemeral drives, Glance images)
– Copy-on-write cloning of images to volumes
– Unified pool of storage nodes for all types of data (objects, block devices, files)
– Live migration of Ceph-backed VMs
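A hedged sketch of the glue this implies on the OpenStack side; the section and pool names are typical examples rather than this deck's configuration, and exact option sets vary by release:

  # cinder.conf - RBD backend for volumes
  [DEFAULT]
  enabled_backends = ceph
  [ceph]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_pool = volumes
  rbd_user = cinder
  rbd_secret_uuid = <libvirt secret uuid>

  # nova.conf - RBD-backed ephemeral drives
  [libvirt]
  images_type = rbd
  images_rbd_pool = vms
  rbd_user = cinder
  rbd_secret_uuid = <libvirt secret uuid>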
11. Questions
• How much net storage, in what tiers?
• How many IOPS?
– Aggregated
– Per VM (average)
– Per VM (peak)
• What to optimize for?
– Cost
– Performance
12. Rules of thumb and magic numbers #1
• Ceph-OSD sizing (turned into a quick sizing sketch after this slide):
– Disks:
• 8-10 SAS HDDs per 1 x 10G NIC
• ~12 SATA HDDs per 1 x 10G NIC
• 1 x SSD for write journal per 4-6 OSD drives
– RAM:
• 1 GB of RAM per 1 TB of OSD storage space
– CPU:
• 0.5 CPU cores / 1 GHz of a core per OSD disk (1-2 CPU cores for SSD drives)
• Ceph-mon sizing: see sizing example #1 (slide 15)
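A small sketch (mine, not from the deck) that turns the rules of thumb above into per-node numbers; note the sample specs later in the deck round RAM up for headroom:

  # rough OSD-node sizing from the rules of thumb above (illustrative only)
  def osd_node_sizing(osd_disks, tb_per_disk, sas=True):
      ram_gb   = osd_disks * tb_per_disk      # 1 GB RAM per 1 TB of OSD space
      cores    = osd_disks * 0.5              # 0.5 cores per OSD disk
      journals = (osd_disks + 4) // 5         # 1 journal SSD per ~4-6 OSDs
      per_nic  = 9 if sas else 12             # HDDs per 1 x 10G NIC
      nics_10g = -(-osd_disks // per_nic)     # ceiling division
      return dict(ram_gb=ram_gb, cores=cores,
                  journal_ssds=journals, nics_10g=nics_10g)

  # the DL380 example below: 20 x 1.8 TB SAS drives
  print(osd_node_sizing(20, 1.8))
  # {'ram_gb': 36.0, 'cores': 10.0, 'journal_ssds': 4, 'nics_10g': 3}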
13. Rules of thumb and magic numbers #2
• For a typical “unpredictable” workload we usually assume:
– a 70/30 read/write IOPS split
– a ~4-8 KB random read pattern
• To estimate Ceph IOPS efficiency we usually apply these factors* (see the sketch below):
– 4-8K random read: 0.88
– 4-8K random write: 0.64
* Based on benchmark data and semi-empirical evidence
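The coefficients above translate into a simple estimate of client-visible IOPS (my sketch, not the deck's tooling):

  # estimated client IOPS from raw drive IOPS, using the deck's coefficients
  def ceph_iops(drives, iops_per_drive):
      raw = drives * iops_per_drive
      return raw * 0.88, raw * 0.64   # (random read, random write)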
15. Sizing example #1 – Ceph-mon
• Ceph monitors
– Sample spec:
• HP DL360 Gen9
• CPU: 2620v3 (6 cores)
• RAM: 64 GB
• HDD: 1 x 1 TB SATA
• NIC: 1 x 1 Gig NIC, 1 x 10 Gig
– How many?
• 1 x ceph-mon node per 15-20 OSD nodes
16. Sizing example #2 – Ceph-OSD
• Lower density, performance optimized
– Sample spec:
• HP DL380 Gen9
• CPU: 2620v3 (6 cores)
• RAM: 64 GB
• HDD (OS drives): 2 x SATA 500 GB
• HDD (OSD drives): 20 x SAS 1.8 TB (10k RPM, 2.5 inch – HGST C10K1800)
• SSD (write journals): 4 x SSD 128 GB Intel S3700
• NIC: 1 x 1 Gig, 4 x 10 Gig (2 bonds – 1 for ceph-public and 1 for ceph-replication)
– 1000 / (20 * 1.8 * 0.85) * 3 = 98 servers to serve 1 petabyte net (per server: 20 x 1.8 TB at an 85% fill ratio is ~30.6 TB usable; 1000 TB net times 3 replicas gives ~98 servers)
17. Expected performance level
• Conservative drive rating: 250 IOPS
• Cluster read rating:
– 250 * 20 * 98 * 0.88 = 431,200 IOPS
– Approx 800 IOPS per VM is available (431,200 / 500 VMs)
• Cluster write rating:
– 250 * 20 * 98 * 0.64 = 313,600 IOPS
– Approx 600 IOPS per VM is available (checked in the sketch below)
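Reproducing slides 16-17 in a few lines (the 500-VM figure comes from the overview at the top of this document):

  # sizing example #2: 20 x 1.8 TB SAS drives, 85% fill, 3x replication
  servers = round(1000 / (20 * 1.8 * 0.85) * 3)    # -> 98
  read_iops  = 250 * 20 * servers * 0.88           # -> 431,200
  write_iops = 250 * 20 * servers * 0.64           # -> 313,600
  print(read_iops / 500, write_iops / 500)         # ~862 / ~627 per VM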
18. Sizing example #3 – Ceph-OSD
• Higher density, cost optimized
– Sample spec:
• SuperMicro 6048R
• CPU: 2 x 2630v3 (16 cores)
• RAM: 192 GB
• HDD (OS drives): 2 x SATA 500 GB
• HDD (OSD drives): 28 x SATA 4 TB (7.2k RPM, 3.5 inch – HGST 7K4000)
• SSD (write journals): 6 x SSD 128 GB Intel S3700
• NIC: 1 x 1 Gig, 4 x 10 Gig (2 bonds – 1 for ceph-public and 1 for ceph-replication)
– 1000 / (28 * 4 * 0.85) * 3 = 32 servers to serve 1 petabyte net
19. Expected performance level
• Conservative drive rating: 150 IOPS
• Cluster read rating:
– 150 * 32 * 28 * 0.88 = 118,272 IOPS
– Approx 200 IOPS per VM is available
• Cluster write rating:
– 150 * 32 * 28 * 0.64 = 86,016 IOPS
– Approx 150 IOPS per VM is available
20. What about SSDs?
• In Firefly (0.8.x) an OSD is limited to ~5k IOPS, so full-SSD Ceph cluster performance disappoints
• In Giant (0.9.x) ~30k IOPS per OSD should be reachable, but this has yet to be verified
22. How we deploy (with Fuel)
Select storage options ⇒ Assign roles to nodes ⇒ Allocate disks
23. What we now know #1
• RBD is not suitable for IOPS-heavy workloads:
– Realistic expectations:
• 100 IOPS per VM (OSDs with HDD disks)
• 10K IOPS per VM (OSDs with SSD disks, pre-Giant Ceph)
• 10-40 ms latency expected and accepted
• high CPU load on OSD nodes at high IOPS (bumping up CPU requirements)
• Object storage for tenants and applications has limitations
– Swift API gaps in radosgw:
• bulk operations
• custom account metadata
• static websites
• expiring objects
• object versioning
24. What we now know #2
• NIC bonding
– Do Linux bonds, not OVS bonds. OVS underperforms and eats CPU
• Ceph and OVS
– DON'T. Bridges for Ceph networks should be connected directly to the NIC bond, not via the OVS integration bridge
• Nova evacuate doesn't work with Ceph RBD-backed VMs
– Fixed in Kilo
• Ceph network co-location (with management, public, etc.)
– DON'T. Use dedicated NICs/NIC pairs for ceph-public and ceph-internal (replication)
• Tested ceph.conf (a sample cache snippet follows)
– Enable RBD cache and others - https://bugs.launchpad.net/fuel/+bug/1374969
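A hedged sketch of the RBD cache settings this refers to; the exact values Fuel ships are in the linked bug, and these are just typical defaults of that era:

  [client]
  rbd cache = true
  rbd cache writethrough until flush = true   # stay safe until the guest flushes
  rbd cache size = 33554432                   # 32 MB cache per client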