Converged Network Adapters have been around for a while. Now UniPlex takes things a step further, giving you the very first converged fabric with Fabric Attached Memory, NVMe over Fabrics, SDN, and PCIe device sharing capabilities.
A deep dive into offloading techniques for Oracle database servers that takes both hardware and software solutions into consideration. The focus is clearly on boosting the efficiency of your already-paid licenses.
The UniPlex 1000 is a 24-bay NVMe JBOF that connects to hosts via PCIe, supporting up to 64 GB/s of throughput and 16 million IOPS. It comes in locked and unlocked versions: the locked version ships with pre-installed drives, while the unlocked version accepts any drives. You can start small with a single JBOF, then grow by connecting multiple JBOFs and tiers of storage, expanding into a full appliance using I/O drawers and storage controllers.
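As a quick plausibility check on those two headline figures (our own arithmetic; the 4 KiB I/O size is an assumption, since the summary does not state a block size):

```python
# Plausibility check of the quoted UniPlex 1000 figures. The 4 KiB block
# size is an assumption; the summary only quotes 64 GB/s and 16M IOPS.
BLOCK_SIZE_BYTES = 4096
IOPS = 16_000_000
throughput_bytes = IOPS * BLOCK_SIZE_BYTES      # bytes per second
throughput_gb = throughput_bytes / 1e9          # decimal gigabytes
print(f"{throughput_gb:.1f} GB/s")              # ~65.5 GB/s, close to the
                                                # quoted 64 GB/s ceiling
```

The two numbers are consistent: the IOPS ceiling at small block sizes saturates roughly the same bandwidth as the quoted sequential limit.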
Building an open memory-centric computing architecture using Intel Optane - UniFabric
OOW 2017 presentation showcasing Fabric Attached Memory with a 2-node RAC system based on two standard x86 servers, running at 200 GB/s with a data rate of 25 GB/s per licensed CPU core.
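The quoted aggregate rate divided by the per-core rate gives the licensed-core count directly; a trivial sketch of that arithmetic (how the cores split across the two nodes is our inference, not stated above):

```python
# The quoted aggregate rate divided by the per-core rate gives the number
# of licensed cores; the per-node split is our guess, not from the slides.
TOTAL_RATE_GB_S = 200   # aggregate data rate of the 2-node RAC demo
PER_CORE_GB_S = 25      # quoted rate per licensed CPU core
licensed_cores = TOTAL_RATE_GB_S // PER_CORE_GB_S
print(licensed_cores)   # 8 licensed cores total, e.g. 4 per node
```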
Sometimes you are happy with your desktop system, but you just need more RAM. Normally you would need to buy a server board ... or you can try our Desktop Memory & PCIe Expansion solution and save a lot of money.
vScaleDB is a universal database acceleration solution that provides in-memory read path optimization, access to data in the fastest tier using software-defined memory, and universal I/O offloading to save CPU cycles and licenses on the database host. It allows optimization of data center rack design through addition of hardware accelerators using PCIe and includes OptaneGRID I/O modules for maximum performance.
The document describes the UniPlex T1 Storage Supercharger, a solution that adds Tier 1 in-memory performance using NVDIMM to an existing storage array design. It builds on the proven Enigma III SDM Storage Appliance and uses software-defined memory spanning across NVDIMM (Tier 1), UniPlex NVMe PCIe JBODs (Tier 2), and an organization's existing storage (Tier 3 capacity tier). This provides a cost-efficient upgrade for high-performance workloads starting at $10,000 per "T1 brick".
In-memory computing is a reality. So are the limits of memory capacity. Data size constantly increases, while application developers and IT staff push for in-memory efficiencies; the conclusion is inevitable: we need to be able to access more memory than the DRAM capacity that the server provides. ScaleMP’s Software Defined Memory (SDM) technology allows for more system memory to be available per server, far beyond the hardware limits, by utilizing memory from other nodes (over fabric) or from locally installed non-volatile memory (NVM) such as NAND Flash or 3D XPoint – transparently and without any changes to operating system or applications. We shall present the benefits of SDM, discuss the relevant use-cases, and share performance data.
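A minimal sketch of the underlying idea, mapping byte-addressable storage into the address space so that plain loads and stores reach it like RAM. This is not ScaleMP's mechanism (SDM works transparently below the OS and applications); an ordinary temporary file merely stands in for an NVM device to illustrate the principle:

```python
import mmap
import tempfile

# Principle behind software-defined memory: map byte-addressable storage
# into the address space and use plain loads/stores, letting the OS page
# data in and out on demand. A temp file stands in for an NVM device here;
# ScaleMP's SDM does this transparently, with no application changes.
SIZE = 64 * 1024 * 1024  # pretend this is a 64 MiB slice of NVM

with tempfile.NamedTemporaryFile() as backing:
    backing.truncate(SIZE)
    buf = mmap.mmap(backing.fileno(), SIZE)  # behaves like ordinary memory
    buf[0:5] = b"hello"                      # a store, not a write() call
    readback = bytes(buf[0:5])               # a load, not a read() call
    buf.close()
print(readback)
```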
Software Defined Memory (SDM) uses new technologies like non-volatile RAM and flash storage to treat memory and storage as a unified persistent resource without traditional performance tiers. This can optimize Oracle database I/O performance by bypassing buffer caches and using fast kernel threads. Benchmarks showed a Plexistor SDM solution outperforming a traditional two-node Oracle RAC cluster. The best approach is to use fast storage like 3D XPoint as the secondary tier to maintain high performance even with cache misses. Combining SDM with solutions like FlashGrid and Oracle RAC could provide extremely high performance.
This document summarizes a presentation about FlashGrid, an alternative to Oracle Exadata that aims to achieve similar performance levels using commodity hardware. It discusses the key components of FlashGrid including the Linux kernel, networking protocols like Infiniband and NVMe, and hardware. Benchmarks show FlashGrid achieving comparable IOPS and throughput to Exadata on a single server. While Exadata has proprietary advantages, FlashGrid offers excellent raw performance at lower cost and with simpler maintenance through the use of standard technologies.
This document discusses Intel Memory Drive Technology, which allows using solid state drives as virtual memory. It explains that using SSDs for swap space can provide lower latency than conventional hard disks. Intel Memory Drive Technology uses software to treat SSD storage as memory and provide a pooled memory resource across servers. This approach allows in-memory applications like Oracle and SAP databases to process more data than would physically fit in server memory. The document provides examples of using Intel Optane drives or selected NVMe SSDs with this technology. It suggests performance is usually within 3-5% of using pure DRAM for many workloads.
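The "within 3-5% of pure DRAM" claim is easy to rationalize with a blended-latency model; the latencies and hit rate below are our assumptions for illustration, not Intel's figures:

```python
# Blended-latency model: if a DRAM tier serves nearly all accesses and
# only misses go to the Optane tier, the average stays near pure DRAM.
# Both latencies and the hit rate are assumptions for illustration.
DRAM_NS = 100        # assumed DRAM access latency (ns)
NVM_NS = 10_000      # assumed Optane-class media latency (ns)

def effective_ns(hit_rate):
    """Average access time with a DRAM hit rate in front of NVM."""
    return hit_rate * DRAM_NS + (1 - hit_rate) * NVM_NS

slowdown = effective_ns(0.9995) / DRAM_NS - 1
print(f"{slowdown:.0%} over pure DRAM")  # roughly 5% at a 99.95% hit rate
```

For workloads with good locality the DRAM tier absorbs almost everything, which is why the observed overhead stays in the single digits.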
This document provides an overview of JetStor's data storage platform. It introduces the JetStor SAN/NAS Platform which offers a single architecture for datastore, backup, disaster recovery, file storage and production storage. The platform includes various storage array models suited for hybrid-flash block storage, all-flash block storage, file storage and unified storage. Key features highlighted include RAID-EE for faster rebuild times, thin provisioning, snapshots, replication, tiering and a centralized management system. Performance comparisons show JetStor arrays outperforming other solutions. The document promotes JetStor's all-flash arrays for demanding workloads like VDI and virtualization clustering.
This session covers the engineering strategies and lessons learned at IBM creating industry leading in-memory data warehousing technology for use with both cloud and on-premises software. Along with rich in-memory SQL support for OLAP, data mining, and data warehousing leveraging memory optimized parallel vector processing, we’ll showcase the in-database analytics for R, spatial, and the built-in synchronization with Cloudant JSON NoSQL. We'll take a closer look at the architectural strategy for treating RAM as the new disk (and worth avoiding access to), while dramatically constraining the potential cost pressures of in-memory technology. We’ll describe how we designed for super-simplicity with load-and-go no-tuning technology for any size system, and of course… a demo. Ridiculously easy to use and freakishly fast. Not your grandmother’s IBM database.
NVMe and NVMe over Fabrics promise to change the flash and networking industries. NVMe enables storage systems to tap into the full potential of flash storage, and NVMe over Fabrics allows those systems to deliver in-server latencies. NVMe will fundamentally change storage. Are you ready? Join Storage Switzerland and Tegile for this webinar as they provide you with a path to NVMe.
Webinar: NVMe, NVMe over Fabrics and Beyond - Everything You Need to Know - Storage Switzerland
The document discusses NVMe, NVMe over Fabrics, and the future of composable storage. It begins by explaining that NVMe is a protocol designed for solid state storage that improves upon SCSI. NVMe over Fabrics allows networked NVMe to provide near in-server performance for shared storage. This paves the way for composable storage, which uses orchestration to dynamically allocate independent storage resources according to application needs. Kaminario was presented as offering a converged NVMe and NVMe-over-Fabrics all-flash array that preserves full functionality while improving agility.
Linux is usually at the leading edge of implementing new storage standards, and NVMe over Fabrics is no different in this regard. This presentation gives an overview of the Linux NVMe over Fabrics implementation on the host and target sides, highlighting how early prototyping feedback influenced the design of the protocol. It also covers the lessons learned while developing NVMe over Fabrics, and how they helped reshape parts of the Linux kernel to better support NVMe over Fabrics and other storage protocols.
This presentation was delivered at LinuxCon Japan 2016 by Christoph Hellwig
XPDDS17: How to Abstract Hardware Acceleration Device in Cloud Environment - ... - The Linux Foundation
Intel® QuickAssist Technology (QAT) offers acceleration for the compute-intensive workloads of cryptography and compression. It supports Single Root I/O Virtualization (SR-IOV), which allows a single physical device to be shared by multiple guests. To better support fair sharing of capacity in a multi-tenant environment, Intel supports the concept of service level agreements. The service level is expressed using the abstraction of “acceleration units”. In this talk we will explain why we chose to define such an abstraction, and why specifying the capacity using raw throughput or operation rate alone is insufficient for accelerators – in brief, because the capacity is so heavily dependent on factors such as algorithm, direction (encrypt/sign/compress vs. decrypt/verify/decompress), key size, request size, compression level, etc. We go on to describe how such SLAs can be used to ensure that guests can be guaranteed some minimum acceleration capacity, and/or limited to some maximum. Finally, we describe use cases where this might be useful, such as when offering “acceleration as a service” in a cloud or Network Functions Virtualization (NFV) environment.
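A toy sketch of why such an abstraction is needed. The algorithm names and unit weights below are invented for illustration; they are not Intel's actual QAT accounting. The point is that raw ops/s is a poor SLA metric when cost per request varies with algorithm, direction, and size, so requests are normalized into abstract units first:

```python
# Toy model of an "acceleration unit" SLA abstraction. The weights are
# hypothetical: heavier operations consume more units per request, so a
# tenant's consumption can be capped fairly regardless of workload mix.
COST = {  # relative cost per request (assumed values)
    ("aes-gcm", "encrypt"): 1.0,
    ("rsa-2048", "sign"): 8.0,
    ("deflate", "compress"): 3.0,
}

def units(workload):
    """workload: list of (algorithm, direction, request_count) tuples."""
    return sum(COST[(alg, direction)] * n for alg, direction, n in workload)

tenant = [("aes-gcm", "encrypt", 1000), ("rsa-2048", "sign", 50)]
print(units(tenant))  # 1400.0 units, to compare against the tenant's cap
```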
Salesforce uses Ceph for various storage needs including block storage, replacing some SAN scenarios, and as a general purpose blob store. They are experimenting with multiple small Ceph clusters across different availability zones. Performance testing shows good random read and write speeds for SSD-only pools. Challenges include scaling to meet their needs, ensuring security and isolation across multiple tenants, and managing clusters across many data centers.
This document summarizes the specifications and features of the TDS-16489U dual-processor application and storage server. Key points include:
- It features two Intel Xeon E5-2600 V3 series processors with support for up to 1TB of RAM and four 10GbE ports.
- It can run multiple virtual machines for applications like Windows Server, SQL Server, and Exchange while storing VMs directly on the server for high storage efficiency.
- It supports PCIe NVMe SSDs, 40GbE networking expansion, IPMI management, and VM disaster recovery via Double-Take Availability.
The document provides an introduction to NVMe over Fabrics, including:
- What NVMe over Fabrics is and its advantages like end-to-end NVMe semantics and low latency remote storage.
- How NVMe is being expanded to support message-based operations over various fabrics like RDMA, Fibre Channel, and Ethernet.
- Examples of how NVMe over Fabrics is being implemented in data center architectures and storage solutions.
XPDDS17: Xen-lite for ARM: Adapting Xen for a Samsung Exynos MicroServer with... - The Linux Foundation
This document summarizes a presentation on adapting Xen for multi-tenant virtualization on ARM-based embedded devices with FPGA acceleration. It discusses moving PV drivers and I/O handling into the hypervisor to reduce overhead. Performance tests show MicroVisor reduces boot times and I/O latency compared to stock Xen. Integrating FPGA acceleration for networking and storage aims to improve performance for network function virtualization workloads on low-power edge devices.
A Key-Value Store for Data Acquisition Systems - Intel® Software
1) DAQDB is a key-value store designed for data acquisition systems to provide fast pre-computing and long-term storage of large volumes of data from experiments like the LHC.
2) It uses optimized data structures like adaptive radix tries and distributed locking to process over 20,000 data fragments every millisecond from multiple sources at throughput of over 100 Gbps.
3) The storage is distributed across persistent memory and NVMe devices to maximize performance while ensuring reliability and persistence of data.
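The two quoted figures can be cross-checked with simple arithmetic (ours, not from the talk): 20,000 fragments per millisecond at 100 Gbps implies an average fragment size in the hundreds of bytes.

```python
# Cross-check of the quoted DAQDB numbers: fragment rate vs. link rate
# implies the average fragment size the store must sustain.
FRAGMENTS_PER_MS = 20_000
LINK_GBPS = 100
fragments_per_s = FRAGMENTS_PER_MS * 1000     # 20 million fragments/s
bytes_per_s = LINK_GBPS * 1e9 / 8             # 12.5 GB/s
avg_fragment = bytes_per_s / fragments_per_s
print(avg_fragment)  # 625.0 bytes per fragment on average
```

Small objects at that rate are exactly the regime where adaptive radix tries and fine-grained distributed locking pay off.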
Varrow datacenter storage today and tomorrow - pittmantony
The document summarizes changes in datacenter storage technologies. It discusses typical storage types used today like DAS, SAN, and NAS and how new technologies are changing them. Technologies discussed include PCIe flash, all-flash arrays, denser drives, InfiniBand, and cloud storage. It suggests storage architectures may move away from RAID with new flash-based solutions and caching algorithms optimized for flash performance rather than spinning disks.
The document provides information about QNAP's new Enterprise Storage NAS product line. It discusses the new Enterprise OS built on FreeBSD and ZFS, which provides enterprise-class features like unlimited snapshots, data deduplication, compression, high availability and more. It also compares the new product to QNAP's existing Turbo NAS line and other solutions, highlighting advantages like performance, efficiency, data integrity and protection.
Virtualizing SQL Server workloads can provide high availability, flexibility, and portability while maximizing performance. Key considerations for virtualized SQL include properly sizing and configuring virtual CPUs, memory, storage, and networking. Features like SQL Always-On clustering allow for high availability without shared storage. Host-based backups and SQL maintenance plans are both important for backup strategies. Templates simplify deployment and updates of virtualized SQL servers.
NVMe over Fabrics defines an architecture that supports transmitting the NVMe block storage protocol over networking fabrics like Ethernet, Fibre Channel, and InfiniBand. This allows NVMe devices to be accessed from longer distances within a data center while maintaining low latency. NVMe over Fabrics is expected to provide solutions in 2016 that can scale to hundreds of NVMe devices in large, shared storage systems. Looking ahead, post-flash memory solutions using NVMe over Fabrics may achieve latencies around 20-25 microseconds by 2017.
RONNIEE Express: A Dramatic Shift in Network Architecture - inside-BigData.com
In this slidecast, Emilio Billi from A3 Cube presents an overview of the company's RONNIEE Express network architecture.
"RONNIEE Express is a new High-Performance Cluster and data plane Interconnect based on a disruptive pure memory-mapped communication paradigm."
Learn more: http://www.a3cube-inc.com
Watch the video presentation: http://insidehpc.com/2014/02/25/ronniee-express-dramatic-shift-network-architecture/
Do more Apache Cassandra distributed database work with AMD EPYC 7601 processors - Principled Technologies
Private clouds require an investment in hardware that can often be costly, a cost that grows along with the size of the distributed database workloads a business deploys. The new AMD EPYC processor architecture can help ease that burden by increasing the available number of cores per socket, which could let businesses get more distributed database work done per server compared to a previous-generation Intel Xeon E5-2699 v4 processor architecture. In our tests, we found that a cluster based on 32-core AMD EPYC 7601 processors increased the operations per second an Apache Cassandra distributed database could process by 50 percent over a same-sized cluster based on 22-core Intel Xeon E5-2699 v4 processors. This means that businesses seeking to run these reliable, elastic databases on a private cloud setup could do so on an AMD EPYC 7601 processor-based server platform and experience faster updates and shorter data load times.
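A per-core view of the quoted result (our derivation from the figures in the summary, not an additional benchmark): the EPYC socket has about 45% more cores but delivered 50% more operations per second, so per-core throughput was roughly on par.

```python
# Per-core normalization of the quoted Cassandra result. Only the core
# counts and the 50% ops/s gain come from the summary; the rest is derived.
EPYC_CORES, XEON_CORES = 32, 22
OPS_RATIO = 1.50                      # EPYC ops/s relative to Xeon
core_ratio = EPYC_CORES / XEON_CORES  # ~1.45x more cores per socket
per_core_gain = OPS_RATIO / core_ratio - 1
print(f"{per_core_gain:.1%}")         # ~3% more ops per core
```

Most of the cluster-level gain therefore comes from core count rather than per-core speed, which is the crux of the per-socket licensing and density argument.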
Cozystack: Free PaaS platform and framework for building clouds - Andrei Kvapil
With Cozystack, you can transform your bunch of servers into an intelligent system with a simple REST API for spawning Kubernetes clusters, Database-as-a-Service, virtual machines, load balancers, HTTP caching services, and other services with ease.
You can use Cozystack to build your own cloud or to provide cost-effective development environments.
Enea NFV Access is a complete NFVI platform designed for deployment on white box uCPEs at the customer premises, and optimized for common vCPE and SD-WAN use cases. Not based on OpenStack, it is able to provide full throughput and performance with a minimal footprint. It runs on as little as one core and scales up to high-end Intel Xeon devices, allowing high deployment flexibility.
Enea NFV Access is a virtualization and management platform designed for white box universal customer premise equipment. It provides a small footprint and high networking performance for SD-WAN and security applications using virtual network functions. Enea NFV Access supports any white box hardware or virtual network functions, integrates with any orchestrator, and manages virtual infrastructure and functions through an integrated end-to-end solution over NETCONF.
Service Fabric is the foundational technology powering core Azure infrastructure and large-scale Microsoft services such as Azure Cosmos DB, Azure SQL Database, Dynamics 365, and Cortana. Come to this session for a developer's tour and deep dive into the latest and greatest Service Fabric capabilities, including containers, low-latency data processing, and .NET Core 2.0 and VS 2017 integration. We will also immerse you in our future roadmap, which makes building containerized microservice applications much easier.
This document discusses opportunities for Arm in data center and edge computing infrastructure. It outlines Arm's growing footprint in servers through partners like AWS, Ampere, Marvell, and provides an overview of the Neoverse roadmap. It also discusses how Arm can address markets like smartNICs and uCPE through integrated solutions with better performance and cost than x86.
Daniel Firestone and Gabriel Silva's presentation from the 2017 Open Networking Summit.
SDN is at the foundation of all large scale networks in the public cloud, such as Microsoft Azure - at past ONSes, Microsoft has detailed how all of Azure's virtual networks, load balancing, and security operate on SDN. But how do we make a software network scale to an era of 40, 50, and 100 gigabit networks on servers, providing great performance to end customers with ever increasing VM and container scale and density?
In this presentation, Daniel Firestone and Gabriel Silva will detail Azure Accelerated Networking, using Azure's FPGA-based SmartNICs. They will show how using FPGAs, we can achieve the programmability of a software network with the performance of a hardware one. They will detail how this and other host SDN advances have led to huge performance increases for Linux VMs in particular, and Linux-based NFV appliances, giving Azure industry-leading network performance.
Presented by Eran Bello at the "NFV & SDN Summit" held March 2014 in Paris, France
Ideal for Cloud DataCenter, Data Processing Platforms and Network Functions Virtualization
Leading SerDes Technology: High Bandwidth – Advanced Process
10/40/56Gb VPI with PCIe 3.0 Interface
10/40/56Gb High Bandwidth Switch: 36 ports of 10/40/56Gb or 64 ports of 10Gb
RDMA/RoCE technology: Ultra Low Latency Data Transfer
Software Defined Networking: SDN Switch and Control End to End Solution
Cloud Management: OpenStack integration
Paving the way to 100Gb/s Interconnect
End to End Network Interconnect for Compute/Processing and Switching
Software Defined Networking
High Bandwidth, Low Latency and Lower TCO: $/Port/Gb
Design and implementation of a reliable and cost-effective cloud computing in... (Francesco Taurino)
This document summarizes the INFN Napoli experience in designing and implementing a reliable and cost-effective cloud computing infrastructure. Key aspects included using existing hardware, virtualization and clustering technologies to consolidate services and reduce costs. A network with redundant switches and storage servers using GlusterFS provided high availability. Custom tools were developed to simplify administration tasks like provisioning, migration, and load balancing of virtual machines. The solution provided an efficient and reliable private cloud with over one year of uninterrupted uptime.
Optimized HPC/AI cloud with OpenStack acceleration service and composable har... (Shuquan Huang)
Today, data scientists are turning to the cloud for AI and HPC workloads. However, AI/HPC applications require high computational throughput that generic cloud resources cannot provide. There is a strong demand for OpenStack to support hardware-accelerated devices in a dynamic model.
In this session, we will introduce OpenStack Acceleration Service – Cyborg, which provides a management framework for accelerator devices (e.g. FPGA, GPU, NVMe SSD). We will also discuss Rack Scale Design (RSD) technology and explain how physical hardware resources can be dynamically aggregated to meet AI/HPC requirements. The ability to “compose on the fly” with workload-optimized hardware and accelerator devices through an API allows data center managers to manage these resources in an efficient, automated manner.
We will also introduce an enhanced telemetry solution with Gnocchi, bandwidth discovery and smart scheduling, leveraging RSD technology for efficient workload management in the HPC/AI cloud.
The document discusses QNAP's QIoT and QuAI solutions. QIoT allows users to build a private IoT cloud platform using QNAP NAS devices and containers, and supports various protocols and development boards. QuAI is an AI developer package that allows data scientists to develop AI models on a QNAP NAS with GPU acceleration, supporting frameworks like TensorFlow and Caffe. QuAI aims to address the limited resources of personal devices and the difficulty of setting up the required environment, allowing quick setup of AI modeling with tools for GPU configuration and container management.
This document provides an overview of container orchestration with Kubernetes. It begins with recapping container and Docker concepts like namespaces, cgroups, and union filesystems. It then introduces Kubernetes architecture including components like kube-apiserver, kubelet and kube-proxy. Common Kubernetes objects like pods, services, replica sets and deployments are described. The document also covers Kubernetes networking with options like NodePort, LoadBalancer and Ingress. Additional topics include service discovery, logging/monitoring and persistent storage.
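The summary above mentions Kubernetes objects like replica sets and deployments; the mechanism underneath them is a reconciliation control loop that continually drives actual state toward desired state. A self-contained toy sketch of that pattern (this is not Kubernetes source code; all class and method names are invented for illustration):

```python
# Toy model of the Kubernetes controller pattern: compare desired state
# (a ReplicaSet spec) against actual state (running pods) and converge.

class ReplicaSet:
    """Desired state: how many pod replicas should exist."""
    def __init__(self, name, replicas):
        self.name = name
        self.replicas = replicas

class Cluster:
    """Actual state: names of pods currently running."""
    def __init__(self):
        self.pods = []

    def create_pod(self, name):
        self.pods.append(name)

    def delete_pod(self):
        self.pods.pop()

def reconcile(rs, cluster):
    """One pass of the control loop: create or delete pods until
    actual replica count matches the desired count."""
    while len(cluster.pods) < rs.replicas:
        cluster.create_pod(f"{rs.name}-{len(cluster.pods)}")
    while len(cluster.pods) > rs.replicas:
        cluster.delete_pod()

cluster = Cluster()
rs = ReplicaSet("web", replicas=3)
reconcile(rs, cluster)   # scale up from 0 to 3 pods
rs.replicas = 1
reconcile(rs, cluster)   # scale back down to 1 pod
print(cluster.pods)      # -> ['web-0']
```

Real controllers run this loop continuously against the API server, which is what makes the system self-healing: if a pod dies, the next pass recreates it.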
Webinar: OpenEBS - Still Free and now FASTEST Kubernetes storage (MayaData Inc)
Webinar Session - https://youtu.be/_5MfGMf8PG4
In this webinar, we share how the Container Attached Storage pattern makes performance tuning more tractable, by giving each workload its own storage system, thereby decreasing the variables needed to understand and tune performance.
We then introduce MayaStor, a breakthrough in the use of containers and Kubernetes as a data plane. MayaStor is the first containerized data engine available that delivers near the theoretical maximum performance of underlying systems. MayaStor performance scales with the underlying hardware and has been shown, for example, to deliver in excess of 10 million IOPS in a particular environment.
CESNET and INVEA-TECH demonstrated transferring data at 100 Gbps over a single PCIe interface using an FPGA card and PCIe bifurcation. They used two of the FPGA's PCIe x8 interfaces connected to a single PCIe x16 slot, allowing the FPGA to achieve over 107 Gbps of throughput. While this shows PCIe bifurcation can enable high-speed transfers without an additional PCIe switch, scaling the packet processing on the CPU remains a challenge requiring distribution across multiple CPU cores.
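The summary above notes that after the PCIe link stops being the bottleneck, spreading packet processing across CPU cores becomes the remaining challenge. The standard approach is Receive Side Scaling (RSS): hash each packet's flow 5-tuple and use the hash to pick a core, so all packets of one flow land on the same core (preserving per-flow ordering) while distinct flows spread across cores. A minimal sketch, with `zlib.crc32` standing in for a NIC's Toeplitz hash and all names being illustrative:

```python
import zlib

NUM_CORES = 4

def rss_queue(src_ip, dst_ip, src_port, dst_port, proto, num_cores=NUM_CORES):
    """Map a flow 5-tuple to a core/queue index; same flow -> same core."""
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}/{proto}".encode()
    return zlib.crc32(key) % num_cores

# Packets of the same flow always hash to the same core,
# so per-flow packet ordering is preserved.
q1 = rss_queue("10.0.0.1", "10.0.0.2", 40000, 443, "tcp")
q2 = rss_queue("10.0.0.1", "10.0.0.2", 40000, 443, "tcp")
assert q1 == q2
```

In hardware the NIC computes this hash and steers packets into per-core receive queues before the CPU ever sees them.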
The document discusses Nexenta Storage in the Cloud and its key features and benefits for cloud storage. It summarizes that NexentaStor is a software-based unified storage appliance that runs on standard hardware and offers features like compression, thin provisioning, deduplication, high availability and end-to-end data integrity. Several case studies are presented showing how NexentaStor has provided cost-effective cloud storage and management solutions for large companies moving to the public cloud.
Building a Raspberry Pi Robot with Dot NET 8, Blazor and SignalR (Peter Gallagher)
In this session delivered at NDC Oslo 2024, I talk about how you can control a 3D printed Robot Arm with a Raspberry Pi, .NET 8, Blazor and SignalR.
I also show how you can use a Unity app on a Meta Quest 3 to control the arm in VR too.
You can find the GitHub repo and workshop instructions here;
https://bit.ly/dotnetrobotgithub
2. Architecture
[Architecture diagram] The UniFabric Appliance combines SDN (ExpressNIC), a PCIe Card I/O Drawer, an NVMe Gateway with NVMe LUNs, Fabric Attached Memory, and PCIe Device Sharing. A PCIe Switching Engine, together with a Device Lending API and SR-IOV, serves Clients 01–03, each attached over a PCIe Retimer Card; shared devices include NICs, GPUs, NVMe drives, FPGAs, and PMEM.
3. Core Idea
We have had Converged Network Adapters for a while, enabling a Unified Wire for multiple protocols. UniFabric takes that idea further, giving an Ultra Low Latency Converged Fabric:
- Ultra Low Latency Software Defined Networking using ExpressNIC technology
- Device Sharing using a PCIe Non-Transparent Bridge: share GPUs, FPGAs, NICs or any other PCIe device across the Data Center, permanently or on demand, as easily as zoning a LUN
- NVMe over Fabrics protocol transition layers allowing you to reuse your existing storage arrays while cutting latency and CPU load on the client using the NVMe protocol
- Use what's already there (PCIe): a simple PCIe Retimer Card connects each host to UniFabric
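The Device Lending API named on the architecture slide is not publicly documented, so the sketch below is only a hypothetical model of the workflow the slides describe: borrowing a PCIe device across the fabric "as easily as zoning a LUN". Every class and method name here is invented for illustration.

```python
# Hypothetical model of fabric-wide PCIe device lending: a registry of
# shareable devices behind the non-transparent bridge, lent to one client
# at a time and reclaimed back into the pool.

class FabricDevice:
    def __init__(self, dev_id, kind):
        self.dev_id = dev_id      # e.g. PCIe bus address on the appliance
        self.kind = kind          # "gpu", "fpga", "nic", "nvme", "pmem"
        self.lent_to = None       # client currently holding the device

class DeviceLendingFabric:
    """Registry of shareable PCIe devices on the fabric."""
    def __init__(self):
        self.devices = {}

    def register(self, device):
        self.devices[device.dev_id] = device

    def lend(self, dev_id, client):
        dev = self.devices[dev_id]
        if dev.lent_to is not None:
            raise RuntimeError(f"{dev_id} already lent to {dev.lent_to}")
        dev.lent_to = client      # device now appears in the client's PCIe tree
        return dev

    def reclaim(self, dev_id):
        self.devices[dev_id].lent_to = None

fabric = DeviceLendingFabric()
fabric.register(FabricDevice("0000:81:00.0", "gpu"))
gpu = fabric.lend("0000:81:00.0", client="client-02")   # on-demand borrow
fabric.reclaim("0000:81:00.0")                          # return to the pool
```

The exclusive-lend-then-reclaim semantics mirror LUN zoning in a SAN: the resource stays in the shared pool, and only its visibility to a given client changes.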