Get an introduction to the Persistent Memory Development Kit (PMDK), based on the Non-Volatile Memory (NVM) Programming Model from SNIA*. Review the goals, successes, and challenges that remain.
Create C++ Applications with the Persistent Memory Development Kit - Intel® Software
Persistent memory retains data after a program crash or power failure. This demonstration shows how to make your application aware of persistent memory using the Persistent Memory Development Kit and includes a C++ code sample walk-through.
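The crash-consistency problem this demo addresses is language-neutral. PMDK itself is a C/C++ library, but as a rough sketch of the core idea, here is an atomic, power-fail-safe file update in Python (the filename is illustrative); PMDK's transactions provide an analogous guarantee for individual memory stores:

```python
import os
import tempfile

def atomic_write(path: str, data: bytes) -> None:
    """Crash-consistent update: write a temp file, flush it to stable
    storage, then atomically swap it into place. Readers see either
    the old contents or the new ones, never a torn mix."""
    dir_name = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # persist the data before the rename
        os.replace(tmp, path)       # atomic rename on POSIX and Windows
    except BaseException:
        os.unlink(tmp)
        raise

atomic_write("counter.db", b"42")
```

With persistent memory the same discipline applies at cache-line granularity: stores must be flushed and ordered before a commit record becomes visible, which is exactly the bookkeeping PMDK automates.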
A Key-Value Store for Data Acquisition Systems - Intel® Software
Get an overview of the Data Acquisition Database design. It's based on the Persistent Memory Development Kit (PMDK) and Storage Performance Development Kit (SPDK) to leverage Intel® Optane™ DC persistent memory and non-volatile memory express (NVMe) drives.
Big Data Uses with Distributed Asynchronous Object Storage - Intel® Software
Learn about the architecture and features of Distributed Asynchronous Object Storage (DAOS). This open source object store is based on the Persistent Memory Development Kit (PMDK) for massively distributed non-volatile memory applications.
Debugging Tools & Techniques for Persistent Memory Programming - Intel® Software
Learn about pmempool, a Persistent Memory Development Kit tool that helps you prevent, diagnose, and recover from data corruption. The session also covers other debugging tools for persistent memory programming.
Use cases like high-performance computing (HPC), AI, and IoT can generate a huge volume of data. Learn how Intel® Optane™ DC persistent memory can be an alternative to DRAM for applications that benefit from a very large volatile memory capacity.
Learn the ways to access persistent memory from Java*. Review how to use the Low-Level Persistence Library in the Persistent Memory Development Kit to retrofit the open source database Cassandra* for persistent memory.
Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions - Colleen Corrice
At Red Hat Storage Day Minneapolis on 4/12/16, Intel's Dan Ferber presented on Intel storage components, benchmarks, and contributions as they relate to Ceph.
Ceph is an open source distributed storage system designed for scalability and reliability. Ceph's block device, RADOS block device (RBD), is widely used to store virtual machines, and is the most popular block storage used with OpenStack.
In this session, you'll learn how RBD works, including how it:
* Uses RADOS classes to make access easier from user space and within the Linux kernel.
* Implements thin provisioning.
* Builds on RADOS self-managed snapshots for cloning and differential backups.
* Increases performance with caching of various kinds.
* Uses watch/notify RADOS primitives to handle online management operations.
* Integrates with QEMU, libvirt, and OpenStack.
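As a rough illustration of one of the mechanisms above, thin provisioning, here is a toy Python sketch (not the librbd API): space is consumed only for blocks that have actually been written, while unwritten blocks read back as zeroes, much as RBD allocates its backing RADOS objects lazily.

```python
class ThinImage:
    """Toy thin-provisioned block image. Illustrative sketch only."""

    def __init__(self, size_blocks: int, block_size: int = 4096):
        self.size_blocks = size_blocks
        self.block_size = block_size
        self.blocks = {}            # block index -> bytes; sparse map

    def write(self, index: int, data: bytes) -> None:
        assert 0 <= index < self.size_blocks
        self.blocks[index] = data[: self.block_size]

    def read(self, index: int) -> bytes:
        # Unwritten blocks read back as zeroes without consuming space.
        return self.blocks.get(index, b"\x00" * self.block_size)

    def allocated_bytes(self) -> int:
        return len(self.blocks) * self.block_size

img = ThinImage(size_blocks=1 << 20)   # 4 GiB virtual image
img.write(7, b"hello")
print(img.allocated_bytes())           # only one block is allocated
```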
Bridging Big - Small, Fast - Slow with Campaign Storage - inside-BigData.com
Peter Braam presented this deck at the MSST 2017 Mass Storage Conference.
"Economic considerations and technology developments are necessitating widely usable tiered storage. Untroubled by the worries of transparency and performance, Campaign Storage—invented at Los Alamos National Laboratory—offers radical revisions of old workflows and adapts to new technologies. But it also leverages widely available technologies and interfaces to offer stability from the ground up and blend in with the past. We'll discuss how a simple combination of components can support scalability, data analytics and efficient integration with memory based storage."
Peter Braam is a scientist and entrepreneur focused on large scale computing problems. After obtaining a PhD in mathematics under Michael Atiyah, he was an academic at several universities including Oxford, CMU and Cambridge. One of his startup companies developed the Lustre file system which is widely used. Most other products he designed were sold to major corporations. From 2013, Peter has been assisting computing design for the SKA telescope as a consultant. Currently Peter is doing research in storage and also architecting a product for Campaign Storage, LLC.
Watch the video: http://wp.me/p3RLHQ-gNC
Learn more: http://campaignstorage.com/
and
http://storageconference.us/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C... - Odinot Stanislas
After a short introduction to distributed storage and a description of Ceph, Jian Zhang runs through some interesting benchmarks in this presentation: sequential tests, random tests, and above all a comparison of results before and after optimization. The configuration parameters and optimizations applied (large page numbers, OMAP data on a separate disk, and so on) deliver at least a 2x performance improvement.
Storage tiering and erasure coding in Ceph (SCaLE13x) - Sage Weil
Ceph is designed around the assumption that all components of the system (disks, hosts, networks) can fail, and has traditionally leveraged replication to provide data durability and reliability. The CRUSH placement algorithm is used to allow failure domains to be defined across hosts, racks, rows, or datacenters, depending on the deployment scale and requirements.
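CRUSH itself adds device weights and a failure-domain hierarchy, but its key property, that any client can compute an object's placement from its name alone with no central lookup table, can be sketched with rendezvous (highest-random-weight) hashing. Names below are illustrative:

```python
import hashlib

def place(obj: str, osds: list[str], replicas: int = 3) -> list[str]:
    """Rendezvous hashing: score every OSD against the object name and
    pick the top scorers. Every client computes the same answer, and
    removing an unselected OSD does not move the object. This is a
    simplified stand-in for CRUSH, not the real algorithm."""
    def score(osd: str) -> int:
        digest = hashlib.sha256(f"{obj}:{osd}".encode()).digest()
        return int.from_bytes(digest[:8], "big")
    return sorted(osds, key=score, reverse=True)[:replicas]

osds = ["osd.0", "osd.1", "osd.2", "osd.3", "osd.4"]
print(place("rbd_data.abc.0000", osds))   # three distinct OSDs
```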
Recent releases have added support for erasure coding, which can provide much higher data durability and lower storage overheads. However, in practice erasure codes have different performance characteristics than traditional replication and, under some workloads, come at some expense. At the same time, we have introduced a storage tiering infrastructure and cache pools that allow alternate hardware backends (like high-end flash) to be leveraged for active data sets while cold data are transparently migrated to slower backends. The combination of these two features enables a surprisingly broad range of new applications and deployment configurations.
This talk will cover a few Ceph fundamentals, discuss the new tiering and erasure coding features, and then discuss a variety of ways that the new capabilities can be leveraged.
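Ceph's erasure-coded pools typically use Reed-Solomon codes, but the simplest instance, single XOR parity (RAID-5 style), already shows the core trade-off: k data chunks plus one parity chunk survive a single loss at (k+1)/k storage overhead, versus 3x for 3-way replication. A minimal sketch:

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(chunks: list) -> bytes:
    """k data chunks -> one parity chunk (a k+1 scheme). Erasure pools
    generalize this to k+m chunks via Reed-Solomon coding."""
    parity = chunks[0]
    for c in chunks[1:]:
        parity = xor_bytes(parity, c)
    return parity

def recover(survivors: list, parity: bytes) -> bytes:
    """Rebuild the single missing data chunk from survivors + parity."""
    missing = parity
    for c in survivors:
        missing = xor_bytes(missing, c)
    return missing

data = [b"AAAA", b"BBBB", b"CCCC"]          # k = 3 data chunks
parity = encode(data)
rebuilt = recover([data[0], data[2]], parity)  # lose chunk 1, rebuild
print(rebuilt)                                 # b'BBBB'
# storage overhead: 4/3x here, versus 3x for 3-way replication
```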
Red Hat Gluster Storage - Direction, Roadmap and Use-Cases - Red_Hat_Storage
Red Hat Gluster Storage is open, software-defined storage that helps you manage big, unstructured, and semistructured data. This product is based on the open source project GlusterFS, a distributed scale-out file system technology, and focuses on file sharing, analytics, and hyper-converged use cases.
In this session, you will:
See real-life case studies about Red Hat Gluster Storage’s usage in production environments, including ideal workloads.
Learn about the Red Hat Gluster Storage roadmap, including innovations from the GlusterFS community pipeline.
Gain insights into how the product will be integrated with Red Hat Enterprise Virtualization (including hyperconvergence), Red Hat Satellite, and Red Hat Enterprise Linux OpenStack Platform.
In this session, we'll discuss new volume types in Red Hat Gluster Storage. We will talk about erasure codes and storage tiers, and how they can work together. Future directions will also be touched on, including rule based classifiers and data transformations.
You will learn about:
How erasure codes lower the cost of storage.
How to configure and manage an erasure coded volume.
How to tune Gluster and Linux to optimize erasure code performance.
Using erasure codes for archival workloads.
How to utilize an SSD inexpensively as a storage tier.
Gluster's erasure code and storage tiering design.
Red Hat Storage Server Administration Deep Dive - Red_Hat_Storage
"In this session for administrators of all skill levels, you’ll get a deep technical dive into Red Hat Storage Server and GlusterFS administration.
We’ll start with the basics of what scale-out storage is, and learn about the unique implementation of Red Hat Storage Server and its advantages over legacy and competing technologies. From the basic knowledge and design principles, we’ll move to a live start-to-finish demonstration. Your experience will include:
Building a cluster.
Allocating resources.
Creating and modifying volumes of different types.
Accessing data via multiple client protocols.
A resiliency demonstration.
Expanding and contracting volumes.
Implementing directory quotas.
Recovering from and preventing split-brain.
Asynchronous parallel geo-replication.
Behind-the-curtain views of configuration files and logs.
Extended attributes used by GlusterFS.
Performance tuning basics.
New and upcoming feature demonstrations.
Those new to the scale-out product will leave this session with the knowledge and confidence to set up their first Red Hat Storage Server environment. Experienced administrators will sharpen their skills and gain insights into the newest features. IT executives and managers will gain a valuable overview to help fuel the drive for next-generation infrastructures."
"Data classification" is an umbrella term covering several features: locality-aware data placement; SSD/disk, normal/deduplicated, or erasure-coded data tiering; HSM; and so on. They share most of the same infrastructure, and so are proposed (for now) as a single feature.
HKG15-401: Ceph and Software Defined Storage on ARM servers - Linaro
HKG15-401: Ceph and Software Defined Storage on ARM servers
---------------------------------------------------
Speakers: Yazen Ghannam, Steve Capper
Date: February 12, 2015
---------------------------------------------------
★ Session Summary ★
Running Ceph in colocation environments, and ongoing optimizations
--------------------------------------------------
★ Resources ★
Pathable: https://hkg15.pathable.com/meetings/250828
Video: https://www.youtube.com/watch?v=RdZojLL7ttk
Etherpad: http://pad.linaro.org/p/hkg15-401
---------------------------------------------------
★ Event Details ★
Linaro Connect Hong Kong 2015 - #HKG15
February 9-13th, 2015
Regal Airport Hotel Hong Kong Airport
---------------------------------------------------
http://www.linaro.org
http://connect.linaro.org
This session will cover performance-related developments in Red Hat Gluster Storage 3 and share best practices for testing, sizing, configuration, and tuning.
Join us to learn about:
Current features in Red Hat Gluster Storage, including 3-way replication, JBOD support, and thin-provisioning.
Features that are in development, including network file system (NFS) support with Ganesha, erasure coding, and cache tiering.
New performance enhancements related to remote direct memory access (RDMA), small-file performance, FUSE caching, and solid-state disk (SSD) readiness.
Scalable PHP Applications in Kubernetes - Robert Lemke
Kubernetes is also called the "distributed Linux of the cloud" – which implies that it provides fundamental infrastructure, which can solve a lot of challenges. Let’s see how PHP applications fit into this picture. In this presentation, we are going to explore when Kubernetes is a good fit for operating your PHP application and how it can be done in practice. We’ll look at the whole lifecycle: how to build your application, create or choose the right Docker images, deploy and scale, and how to deal with performance and monitoring. At the end you will have a good understanding about all the different stages and building blocks for running a PHP application with Kubernetes in production.
The Forefront of the Development for NVDIMM on Linux Kernel - Yasunori Goto
This talk is from Open Source Summit Japan 2020.
--------------------------
NVDIMM (Non-Volatile DIMM) is an especially interesting device because it has the characteristics of both memory and storage. To support NVDIMM, the Linux kernel provides three access methods for users: Storage (Sector) mode, Filesystem DAX (Direct Access) mode, and Device DAX mode. Of these, Filesystem DAX is the most anticipated, because applications can write data to the NVDIMM area directly, and it is easier to use than Device DAX mode; some software already uses it with official support. However, Filesystem DAX is still in "experimental" status in the upstream community due to some difficult issues. In this session, Yasunori Goto will talk about the forefront of NVDIMM development, and Ruan Shiyang will talk about his work on these challenges, with the latest status since CLK2019.
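Filesystem DAX boils down to mapping a file and accessing it with plain loads and stores rather than read()/write() system calls. Python's mmap shows that programming model; on an actual DAX mount (for example, ext4 or XFS mounted with -o dax, with /mnt/pmem as a hypothetical mount point) the same pattern would touch NVDIMM media directly instead of the page cache.

```python
import mmap
import os

# Illustrative path; on real hardware the file would live on a DAX mount.
path = "pmem_demo.bin"

# Create and size the backing file.
with open(path, "wb") as f:
    f.truncate(4096)

fd = os.open(path, os.O_RDWR)
with mmap.mmap(fd, 4096) as m:
    m[0:5] = b"hello"   # plain stores, no read()/write() syscalls
    m.flush()           # on DAX this corresponds to cache-flush instructions
os.close(fd)

with open(path, "rb") as f:
    print(f.read(5))    # b'hello'
```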
Persistent Memory Programming: The Current State of the Ecosystem - inside-BigData.com
In this presentation, Andy Rudoff from Intel reports on the latest developments around persistent memory programming. He describes current discussions in the SNIA NVM Programming Technical Work Group, the current state of operating system support, and recent tool and library development, and finally he covers some of the upcoming challenges for high-performance persistent memory use.
Watch the video: https://wp.me/p3RLHQ-gUP
Learn more: http://storageconference.us/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
C++ Programming and the Persistent Memory Development Kit - Intel® Software
Topics
Introduction to Persistent Memory
Introduction to the Persistent Memory Development Kit (PMDK)
Working with PMDK
Persistent Memory Programming with PMDK C++ Bindings
Extending OpenShift Origin: Build Your Own Cartridge with Bill DeCoste of Red... - OpenShift Origin
Extending OpenShift Origin: Build Your Own Cartridge
Presenter: Bill DeCoste
Cartridges allow developers to provide services running on top of the Red Hat OpenShift Platform-as-a-Service (PaaS). OpenShift already provides cartridges for numerous web application frameworks and databases. Writing your own cartridges allows you to customize or enhance an existing service, or provide new services. In this session, the presenter will discuss best practices for cartridge development and the latest changes in the OpenShift cartridge support.
* Latest changes made in the platform to ease cartridge development
* OpenShift Cartridges vs. plugins
* Outline for development of a new cartridge
* Customization of existing cartridges
* Quickstarts: leveraging a cartridge or cartridges to provide a complete application
Ariel Waizel discusses the Data Plane Development Kit (DPDK), an API for developing fast packet processing code in user space.
* Who needs this library? Why bypass the kernel?
* How does it work?
* How good is it? What are the benchmarks?
* Pros and cons
Ariel worked on kernel development at the IDF, Ben Gurion University, and several companies. He is interested in networking, security, machine learning, and basically everything except UI development. Currently a Solution Architect at ConteXtream (an HPE company), which specializes in SDN solutions for the telecom industry.
Programming on Windows 8.1: The New Stream and Storage Paradigm (Raffaele Ria... - ITCamp
Looking at the Windows 8.1 development platform, stream and storage management is totally different from the past. Stream classes have changed, file and folder management is radically different, and a new set of classes exists in the WinRT library to support the Windows Store application model and the new asynchronous paradigm.
After a brief overview of asynchronous pattern in WinRT, the session will dig into the new streams and storage APIs showing practical examples of use for modern Windows Store applications.
Build your own discovery index of scholarly e-resources - Martin Czygan
Providing discovery systems for e-resources is essential for library services today. Commercial search engine indices have been a widely used solution in recent years. In contrast, running your own discovery service is undoubtedly a challenging task, but it promises full control over data processing, enrichment, performance, and quality. Building your own aggregated index of e-resources includes gathering the right mix of data sources, clearing licensing issues, and negotiating data availability. Technically, these tasks are handled by data harvesters, filters, and workflow orchestration tools.
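Those stages (harvesters, license filters, deduplication) can be sketched as small composable steps. The record fields below (doi, title, source) are illustrative, not a real metadata schema:

```python
# Toy aggregation pipeline: harvest -> license filter -> dedupe.

def harvest(sources):
    """Each source yields raw metadata records."""
    for source in sources:
        for record in source:
            yield record

def licensed(records, allowed_sources):
    """Drop records whose source we have no license to index."""
    return (r for r in records if r["source"] in allowed_sources)

def dedupe(records):
    """Keep the first record seen per DOI."""
    seen = set()
    for r in records:
        if r["doi"] not in seen:
            seen.add(r["doi"])
            yield r

crossref = [{"doi": "10.1/x", "title": "A", "source": "crossref"}]
other    = [{"doi": "10.1/x", "title": "A", "source": "other"},
            {"doi": "10.1/y", "title": "B", "source": "other"}]

index = list(dedupe(licensed(harvest([crossref, other]),
                             {"crossref", "other"})))
print([r["doi"] for r in index])   # ['10.1/x', '10.1/y']
```

In practice each stage would be a separate job wired together by a workflow orchestrator, but the dataflow shape is the same.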
Containerization is more than the new Virtualization: enabling separation of ... - Jérôme Petazzoni
Docker offers a new, lightweight approach to application portability. Applications are shipped using a common container format, and managed with a high-level API. Their processes run within isolated namespaces which abstract the operating environment, independently of the distribution, versions, network setup, and other details of this environment.
This "containerization" has often been nicknamed "the new virtualization". But containers are more than lightweight virtual machines. Beyond their smaller footprint, shorter boot times, and higher consolidation factors, they also bring a lot of new features and use cases which were not possible with classical virtual machines.
We will focus on one of those features: separation of operational concerns. Specifically, we will demonstrate how some fundamental tasks like logging, remote access, backups, and troubleshooting can be entirely decoupled from the deployment of applications and services. This decoupling results in independent, smaller, simpler moving parts, just like microservice architectures break down large monolithic apps into more manageable components.
AI for All: Biology is eating the world & AI is eating Biology - Intel® Software
Advances in cell biology, and the immense amount of data they create, are converging with advances in machine learning to analyze this data. Biology is experiencing its AI moment, driving the massive computation involved in understanding biological mechanisms and designing interventions. Learn how cutting-edge technologies such as Software Guard Extensions (SGX) in the latest Intel Xeon processors and Open Federated Learning (OpenFL), an open framework for federated learning developed by Intel, are helping advance AI in gene therapy, drug design, disease identification, and more.
Python Data Science and Machine Learning at Scale with Intel and Anaconda - Intel® Software
Python is the number-one language for data scientists, and Anaconda is the most popular Python platform. Intel and Anaconda have partnered to bring scalability and near-native performance to Python with simple installations. Learn how data scientists can now access oneAPI-optimized Python packages such as NumPy, scikit-learn, Modin, pandas, and XGBoost directly from the Anaconda repository through simple installation and minimal code changes.
Streamline End-to-End AI Pipelines with Intel, Databricks, and OmniSci - Intel® Software
Preprocess, visualize, and build AI faster at scale on Intel architecture. Develop end-to-end AI pipelines for inference, including data ingestion, preprocessing, and model inference with tabular, NLP, RecSys, video, and image data, using the Intel oneAPI AI Analytics Toolkit and other optimized libraries. Build performant at-scale pipelines with Databricks and end-to-end Xeon optimizations. Learn how to visualize with the OmniSci Immerse Platform, and experience a live demonstration of the Intel Distribution of Modin and OmniSci.
AI for good: Scaling AI in science, healthcare, and more - Intel® Software
How do we scale AI to its full potential to enrich the lives of everyone on earth? Learn about AI hardware and software acceleration and how Intel AI technologies are being used to solve critical problems in high energy physics, cancer research, financial inclusion, and more. Get started on your AI Developer Journey @ software.intel.com/ai
Software AI Accelerators: The Next Frontier | Software for AI Optimization Su... - Intel® Software
Software AI Accelerators deliver orders of magnitude performance gain for AI across deep learning, classical machine learning, and graph analytics and are key to enabling AI Everywhere. Get started on your AI Developer Journey @ software.intel.com/ai.
Advanced Techniques to Accelerate Model Tuning | Software for AI Optimization... - Intel® Software
Learn about the algorithms and associated implementations that power SigOpt, a platform for efficiently conducting model development and hyperparameter optimization. Get started on your AI Developer Journey @ software.intel.com/ai.
Reducing Deep Learning Integration Costs and Maximizing Compute Efficiency | S... - Intel® Software
oneDNN Graph API extends oneDNN with a graph interface which reduces deep learning integration costs and maximizes compute efficiency across a variety of AI hardware including AI accelerators. Get started on your AI Developer Journey @ software.intel.com/ai.
AWS & Intel Webinar Series - Accelerating AI Research - Intel® Software
Scale your research workloads faster with Intel on AWS. Learn how the performance and productivity of Intel Hardware and Software help bridge the gap between ideation and results in Data Science. Get started on your AI Developer Journey @ software.intel.com/ai.
Whether you are an AI, HPC, IoT, Graphics, Networking or Media developer, visit the Intel Developer Zone today to access the latest software products, resources, training, and support. Test-drive the latest Intel hardware and software products on DevCloud, our online development sandbox, and use DevMesh, our online collaboration portal, to meet and work with other innovators and product leaders. Get started by joining the Intel Developer Community @ software.intel.com.
Advanced Single Instruction Multiple Data (SIMD) Programming with Intel® Impl...Intel® Software
Explore practical elements, such as performance profiling, debugging, and porting advice. Get an overview of advanced programming topics, like common design patterns, SIMD lane interoperability, data conversions, and more.
Build a Deep Learning Video Analytics Framework | SIGGRAPH 2019 Technical Ses...Intel® Software
Explore how to build a unified framework based on FFmpeg and GStreamer to enable video analytics on all Intel® hardware, including CPUs, GPUs, VPUs, FPGAs, and in-circuit emulators.
Review state-of-the-art techniques that use neural networks to synthesize motion, such as mode-adaptive neural network and phase-functioned neural networks. See how next-generation CPUs with reinforcement learning can offer better performance.
RenderMan*: The Role of Open Shading Language (OSL) with Intel® Advanced Vect...Intel® Software
This talk focuses on the newest release in RenderMan* 22.5 and its adoption at Pixar Animation Studios* for rendering future movies. With native support for Intel® Advanced Vector Extensions, Intel® Advanced Vector Extensions 2, and Intel® Advanced Vector Extensions 512, it includes enhanced library features, debugging support, and an extensive test framework.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply doing machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring of JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
"Impact of front-end architecture on development cost", Viktor TurskyiFwdays
I have heard many times that architecture is not important for the front-end. I have also often seen developers implement front-end features by simply following a framework's standard rules, believing that this is enough to launch the project successfully, and then the project fails. How can you prevent this, and which approach should you choose? I have launched dozens of complex projects, and in this talk we will analyze which approaches have worked for me and which have not.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
DevOps and Testing slides at DASA ConnectKari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We ended with a lovely workshop in which the participants tried to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
4. SPDK, PMDK & VTune™ Summit
Persistent memory is coming…
Byte-addressable: use it like memory
But it is persistent
Some vendors had actually been shipping it already
Later named NVDIMM-N
Small capacity: 16–32 GB
When I first started looking at these devices, all access was through a driver interface
5. SPDK, PMDK & VTune™ Summit
Persistent memory is coming…
Step 1: how should it be exposed to applications?
How to name it and re-attach to it
How to enforce permissions
How to back it up and manage it
And some less technical goals, but just as important:
– Represent the interests of the ISVs
– Avoid vendor lock-in to a product-specific API
– As an Intel employee, acknowledge that an Intel-specific API doesn’t work here
Headed to SNIA…
[Diagram: the SNIA NVM Programming Model. Applications reach persistent memory three ways: standard raw device access through the NVDIMM driver (storage), the standard file API through a conventional file system (file), and a PM-aware file system that also enables direct load/store access through MMU mappings (memory). A management library and management UI handle administration via the standard file API in user space.]
6. SPDK, PMDK & VTune™ Summit
Ancient history
June 2012
Formed the NVM Programming TWG
Immediate participation from key OSVs, ISVs, IHVs
January 2013
Held the first PM Summit (actually called “NVM Summit”)
July 2013
Created first GitHub thought experiments (“linux-examples”)
January 2014
TWG published rev 1.0 of the NVM Programming Model
7. SPDK, PMDK & VTune™ Summit
SNIA Model Success… and then what?!
Open a pmem file on a pmem-aware file system
Map it into your address space
Okay, you’ve got a pointer to 3TB of memory, have fun!
The model is necessary, but not sufficient to make pmem an easy-to-program resource
Gathering requirements yielded fairly obvious top priorities:
Need a way to track pmem allocations (like malloc/free, but pmem-aware)
Need a way to make transactional updates
Need a library of pmem-aware containers: lists, queues, etc.
Need to make pmem programming not so error-prone
9. SPDK, PMDK & VTune™ Summit
Goals
Make persistent memory programming easier
Especially allocation, transactions, and atomic operations
Validate it thoroughly to save developers implementation time
Performance-tune it, improving over time
Later we realized we needed additional goals…
Help simplify RAS (bad block tracking, recovery)
Create new libraries for new use cases as they come up
Track new hardware features (example: MOVDIR64B)
10. SPDK, PMDK & VTune™ Summit
The result… PMDK
PMDK Provides a Menu of Libraries
Developers pull in just what they need
– Transaction APIs
– Persistent memory allocators
Instead of re-inventing the wheel
– PMDK libraries are fully validated
– PMDK libraries are performance tuned
PMDK Provides Tools for Developers
PMDK is Open Source and Product-Neutral
[Diagram: the pmem software stack with PMDK. The application opens and maps files via the standard file API on a PM-aware file system, then accesses pmem with direct loads and stores through MMU mappings; the PMDK libraries sit on top of this path in user space.]
11. SPDK, PMDK & VTune™ Summit
Current State of PMDK
Core libraries, roughly ten of them, in PMDK repo on GitHub
Over 8000 commits over a period of about five years
Dozens of users that we know about
– Some open source
– Some closed source
– Some code stealers (which we encourage)
Most intense activity has been on libpmemobj, the most flexible library
Team took over maintenance of libmemkind
For volatile use cases
Lots of interesting additions since the initial set of libraries…
12. SPDK, PMDK & VTune™ Summit
PMDK Evolution
New libraries based on use cases and customer feedback
Java support (See talk by Olasoji Denloye later today)
C++ support
– Some of the most interesting & challenging work in this space
– Lots more in this summit about C++ (See Piotr’s talk later today)
libpmemkv (See talk by Rob Dickinson later today)
libvmemcache (See talk by Usha and Piotr tomorrow)
Tools support (VTune, pmemcheck, pmreorder, etc.)
13. SPDK, PMDK & VTune™ Summit
PMDK Future
We’re not done!
As more use cases emerge, decide if current libraries cover them
– Invent new libraries when it makes sense
Get community more engaged
– So far, only a few pull requests from outside PMDK team
– Use SPDK as a model!
Continue to tune, enhance, refine what we have
– Example: more C++ containers, better C++ performance
– Example: support more languages