These are brief slides about composition in functional programming.
In this session we can have a free discussion about what functional programming is and/or what problems it presents.
The answers are up to you.
A Lisp-like lightweight functional language on .NET.
This slide shows how to generate expressions from the Nesp parser.
ML勉強会 (ML Study Group) #2 https://ml-lang.connpass.com/event/58151/
https://github.com/kekyo/Nesp
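The composition idea the slides discuss can be sketched in a few lines. This is an illustrative Python sketch of function composition, not Nesp code; the `compose` helper is a hypothetical name used only for this example.

```python
# A minimal sketch of function composition, the core idea behind
# "composition for functional" (illustrative only, not Nesp code).
from functools import reduce

def compose(*fns):
    """Compose functions right-to-left: compose(f, g)(x) == f(g(x))."""
    return reduce(lambda f, g: lambda x: f(g(x)), fns)

inc = lambda x: x + 1
double = lambda x: x * 2

inc_then_double = compose(double, inc)
print(inc_then_double(3))  # → 8, i.e. double(inc(3))
```

A Lisp-like language such as Nesp expresses the same idea as nested applications, e.g. `(double (inc 3))`.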
05 High-Density OpenPOWER Dual-Socket P9 System Design Example - Yutaka Kawai
The document describes the design of a new high-density dual-socket OpenPOWER server system using Power9 CPUs. It discusses the disadvantages of the company's current product lineup and proposes a new concept using two-socket Power9 nodes in a 3U chassis with direct-attached memory and a PCIe fabric backplane. The design process for the new "Nicole" motherboard is outlined, including surprises encountered during development related to power and memory requirements. Debugging issues are also summarized, such as CPU incompatibilities between Power8 and Power9, incorrect voltage rail connections, and signal integrity problems.
04 Accelerating DL Inference with (Open)CAPI and Posit Numbers - Yutaka Kawai
This was presented by Louis Ledoux and Marc Casas at OpenPOWER summit EU 2019. The original one is uploaded at:
https://static.sched.com/hosted_files/opeu19/1a/presentation_louis_ledoux_posit.pdf
This was presented by Dan Horák (Red Hat) at OpenPOWER summit EU 2019. The original one is uploaded at:
https://static.sched.com/hosted_files/opeu19/d2/op-eu-2019-desktop-openpower.pdf
02 AI Inference Acceleration with Components All in Open Hardware: OpenCAPI a... - Yutaka Kawai
This was presented by Peng Fei GOU (IBM China) at OpenPOWER summit EU 2019. The original one is uploaded at:
https://static.sched.com/hosted_files/opeu19/68/NVDLA%20on%20OpenCAPI.pdf
01 High Bandwidth Acquisition, Computing, Compression: All in a Box - Yutaka Kawai
This document discusses high bandwidth data acquisition, computing, and compression using an IBM Power9 server. It presents two options for the server configuration:
Option A involves intensive GPU processing using Nvidia GPUs with high bandwidth connectivity. Option B doubles the bandwidth by using two Power9 sockets, each connected to multiple GPUs and FPGAs with OpenCAPI links.
The document then discusses the steps involved: data acquisition with FPGAs, using unified host-GPU memory to reduce bandwidth needs, performing intensive computation on GPUs or FPGAs, hardware compression of data using the Power9's built-in NX-Gzip engine, and the high bandwidth capabilities of the AC922 server platform. Bandwidth test results are also presented.
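The acquire, compute, and compress steps above can be sketched as a simple software pipeline. This is a hedged illustration, not code from the presentation: `acquire` and `compute` are placeholders for the FPGA/GPU stages, and `zlib` stands in for the Power9 NX-Gzip hardware engine, which offloads the same gzip-compatible compression.

```python
# Hedged sketch of the acquire -> compute -> compress flow; zlib stands
# in for the Power9 NX-Gzip hardware engine, and the acquire/compute
# stages are placeholders for the FPGA/GPU work described above.
import zlib

def acquire(n):
    # Placeholder for FPGA data acquisition: produce n bytes of raw data.
    return bytes(i % 256 for i in range(n))

def compute(data):
    # Placeholder for GPU/FPGA processing: here, a trivial transform.
    return bytes(b ^ 0xFF for b in data)

def compress(data):
    # NX-Gzip does this in hardware; zlib is the software equivalent.
    return zlib.compress(data, level=6)

raw = acquire(1 << 16)
processed = compute(raw)
packed = compress(processed)
print(len(raw), "->", len(packed))
```

The win of a hardware engine like NX-Gzip is that the `compress` stage no longer consumes CPU cycles or memory bandwidth on the host cores.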
This document discusses OpenCAPI acceleration using the OpenCAPI Acceleration Framework (oc-accel). It provides an overview of the oc-accel components and workflow, benchmarks the OC-Accel bandwidth and latency, and provides examples of how to fully utilize OC-Accel capabilities to accelerate functions on an FPGA. The document also outlines the OC-Accel development process and previews upcoming features like support for ODMA to port existing PCIe accelerators to OpenCAPI.
The document describes a hybrid memory subsystem (HMS) developed by BittWare that combines different memory technologies including Samsung zNAND, Samsung DDR4 SDRAM, and Everspin MRAM. The HMS has a capacity of 1.5TB or 3TB, uses an OpenCAPI 3.0 interface, and is optimized for sequential workloads with an average read latency of around 1us and bandwidth of 20GB/s. It is designed to provide memory expansion and persistence without major application changes at a lower cost than using only DRAM.
0 Foundation Update (Final) - Mendy Furmanek - Yutaka Kawai
This slide was presented by Mendy Furmanek at OpenPOWER summit EU 2019. The original one is uploaded at:
https://static.sched.com/hosted_files/opeu19/9c/Final%20-%20Mendy%20F..pdf
This document describes job descriptions for an OpenPOWER AE China and Taiwan team. It outlines that NDA and SOW documents must be signed to receive AE support. The key items of the SOW include the scope of services to assist a partner in developing a server based on POWER technology for up to 1 person year. It also details facilities, hours of coverage, charges, deliverables, completion criteria, and tools/services to be provided such as training, documentation, and debug boards at no initial charge.
IBM has a long history of contributing to and supporting open source projects including Linux kernel, Docker, Kubernetes, and OpenStack. In recent years, IBM has expanded its efforts in open hardware by forming the OpenPOWER foundation to foster innovation around its POWER processors, contributing to open chip designs and reference architectures, and pledging further contributions to grow an open hardware ecosystem. This includes opening the POWER instruction set architecture, providing open reference designs, and establishing open governance.
This document provides an overview of the SNAP framework, which utilizes Power CAPI technology to enable coherent acceleration between CPUs and FPGAs. Key points:
- CAPI allows direct memory access between CPUs and FPGAs, avoiding overhead of device drivers and memory copies. This reduces latency significantly compared to traditional PCIe.
- The SNAP framework uses CAPI to share memory coherently between applications running on CPUs and accelerators implemented on FPGAs.
- It includes a kernel driver, user library, and models the hardware interface to allow co-simulation of applications and accelerators.
- This framework takes advantage of features like DMA, atomic operations, and wake-ups to provide efficient communication between host applications and FPGA accelerators.
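The copy-avoidance point above can be illustrated with a rough analogy. This is not the SNAP API; it only contrasts a driver-style path, where the device works on its own copy of a buffer, with a coherent CAPI-style path, where the device sees the host's memory directly (here modeled with a `memoryview`).

```python
# Rough analogy for coherent shared memory vs. a driver-managed copy.
# Hypothetical illustration only; this is not the SNAP API.
buf = bytearray(b"host data")

copied = bytes(buf)        # driver path: device receives its own copy
shared = memoryview(buf)   # CAPI path: device sees the same memory

buf[0:4] = b"HOST"         # host updates the buffer after handoff

print(bytes(shared))  # → b'HOST data' (shared view reflects the change)
print(copied)         # → b'host data' (the copy is stale)
```

Avoiding the copy, and the device-driver round trip that goes with it, is where CAPI's latency advantage over a traditional PCIe driver path comes from.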
This document summarizes an OpenPOWER/OpenCAPI meetup that took place on October 23, 2019 in Tokyo. The meetup included introductions, updates from the OpenPOWER foundation, feedback from the OpenPOWER Summit 2019 in North America, light talks from Xilinx Japan, KIOXIA, and NEC, as well as a Q&A session and free discussions.
1) The document introduces ExpEther and Wireless ExpEther, which extend PCI Express over Ethernet and provide reliable low-latency wireless connections, respectively.
2) ExpEther allows PCIe devices to be disaggregated over Ethernet networks while maintaining compatibility with existing software. Wireless ExpEther aggregates multiple wireless links to provide a virtual reliable connection with latency under 1ms.
3) NEC offers these technologies as IP cores and evaluation modules to enable wireless solutions for applications that require latency under 10ms, such as industrial robots, AGVs, and machine tools.
The document outlines the agenda for an OpenPOWER and OpenCAPI Meetup held on July 17, 2019 in Tokyo. The agenda included introductions, updates from the OpenPOWER foundation, presentations on NEC ExpEther virtual PCIe over Ethernet technology, an IBM AC922 performance demo, the H3 Falcon2 PCIe gen4 system, a light talk from Xilinx Japan, Q&A, and free discussions. Links were also provided to the Meetup group page and future events.
The 2018 OpenCAPI Contest attracted participants from universities and independent design houses. It helped promote the capabilities of CAPI/OpenCAPI and sparked further development work. After the contest, more design houses contacted IBM to discuss CAPI/OpenCAPI solutions, and universities recognized its advantages and are pursuing related research. The most effective way to further business is collaboration between IBM, design houses, and universities to develop demo solutions and bring real products to market.
The document discusses an OCP 48V solution presented at a 2019 Tokyo meetup. It references expansion boards, rackspace, and Google and Rackspace's P9 48V OCP design for powering servers more efficiently using a 48V standard. Component specifications are provided for PSUC1, C2 and PSUCE1, CE2 capacitors. A GitHub link is also included for the Zaius-Barreleye-G2 design.