Continuous integration, delivery, and deployment (CI/CD) is widely used in DevOps communities, as it allows teams of all sizes to deploy rapidly changing hardware and software resources quickly and confidently.
This talk gives a brief overview of Kubernetes, with a short demo, followed by a more in-depth look at issues we've faced moving PHP projects into Docker and Kubernetes, such as signal propagation, init systems, and logging.
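The signal-propagation issue is worth a concrete illustration. The sketch below is in Python rather than PHP, purely to keep the example language-neutral: a container runtime stops a container by sending SIGTERM to PID 1, and a process that installs no handler may ignore it (PID 1 semantics) and only die on SIGKILL after the grace period.

```python
import os
import signal

shutdown = {"requested": False}

def handle_sigterm(signum, frame):
    # Graceful-shutdown hook: without a handler, a process running as
    # PID 1 in a container may never react to SIGTERM and is killed
    # hard (SIGKILL) once the runtime's grace period expires.
    shutdown["requested"] = True

signal.signal(signal.SIGTERM, handle_sigterm)

# Simulate the runtime's stop signal by sending SIGTERM to ourselves.
os.kill(os.getpid(), signal.SIGTERM)
print(shutdown["requested"])  # → True
```

The same pattern applies to a PHP master process (e.g. via `pcntl_signal`): catch SIGTERM, finish in-flight work, then exit before the grace period runs out.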
Talk from Cape Town PHP meetup on Feb. 7, 2016:
https://www.meetup.com/Cape-Town-PHP-Group/events/237226310/
Code: https://github.com/zoidbergwill/kubernetes-php-examples
Slides as markdown: http://www.zoidbergwill.com/presentations/2017/kubernetes-php/index.md
Composer is a dependency manager for PHP that allows projects to declare and install dependencies. It works by defining dependencies in a composer.json file and installing them into a vendor directory. This ensures all environments have identical dependency versions. Composer also handles autoloading so dependencies can be used immediately after including the vendor/autoload.php file. It is commonly used to manage library dependencies within a project and distribute PHP libraries to others via Packagist.
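As a concrete illustration of the workflow described above, a minimal composer.json might look like this (the package name and version constraint are just common examples, not taken from the talk):

```json
{
    "require": {
        "monolog/monolog": "^2.0"
    }
}
```

Running `composer install` resolves and downloads the dependency into `vendor/` and writes a `composer.lock` file, so every environment installs identical versions; the application then only needs `require 'vendor/autoload.php';` to use the library.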
This document provides release notes and supplementary information for Delphi 7. It notes that some components have been deprecated and recommends newer alternatives. It also describes changes made to string handling functions, warnings added by the compiler, and issues fixed in streaming of subcomponents. Finally, it provides notes on various other topics like Apache, UDDI, Windows XP input, and databases.
5/13/13 presentation to the Austin DevOps Meetup Group, describing our system for deploying 15 websites and supporting services in multiple languages to bare Red Hat 6 VMs. All system-wide software is installed from RPMs, and all application software is deployed from Git or tarballs.
LIGGGHTS is an open-source discrete element method (DEM) particle simulation software. LIGGGHTS stands for "LAMMPS Improved for General Granular and Granular Heat Transfer Simulations"; it builds on LAMMPS, a molecular dynamics code developed by Sandia National Labs. LIGGGHTS is written in C++ and can run on a single processor or on multiple processors. This document discusses installing LIGGGHTS on the Ubuntu releases 12.04 LTS, 13.04, 14.04 LTS, and 16.04 LTS. Installing LIGGGHTS requires a few libraries and tools, such as libvtk5-dev, libeigen2-dev, libopenmpi-dev, a C++ compiler, Open MPI, LPP, and ParaView. This article discusses the installation procedure for each of them in detail.
Chicago Docker Meetup Presentation - Mediafly (Mediafly)
This document discusses how Bryan Murphy uses Docker at his company Mediafly. It begins by introducing Bryan and his background. It then describes what Mediafly does, including content management systems, secure content delivery, document and video processing, and customizable user interfaces. The document highlights aspects of Mediafly that make it interesting, such as being multi-device, multi-tenant, service oriented, and distributed. It provides examples of technologies used at Mediafly and some key metrics. The document then discusses why Docker is used at Mediafly, covering benefits like being developer friendly, enabling faster iteration and testing, managing dependencies, sharing environments, standardization, isolation, and infrastructure freedom.
An Overview of the IHK/McKernel Multi-kernel Operating System (Linaro)
By Balazs Gerofi, RIKEN Advanced Institute For Computational Science
RIKEN Advanced Institute for Computational Science is in charge of leading the development of Japan's next-generation flagship supercomputer, the successor to the K computer. Part of this effort is to design and develop a system software stack that suits the needs of future extreme-scale computing. In this talk, we focus on operating system (OS) requirements for HPC and discuss IHK/McKernel, a multi-kernel based operating system framework. IHK/McKernel runs Linux side-by-side with a light-weight kernel (LWK) on compute nodes, with the primary motivation of providing scalable, consistent performance for large-scale HPC simulations while retaining a fully Linux-compatible execution environment. We provide an overview of the project and discuss the status of its support for the ARM architecture.
Balazs Gerofi Bio
Research Scientist at RIKEN Advanced Institute For Computational Science.
Email
bgerofi@riken.jp
For more info on the Linaro High Performance Computing (HPC) special interest group, visit https://www.linaro.org/sig/hpc/
Adopt DevOps philosophy on your Symfony projects (Symfony Live 2011), by Fabrice Bernhard
This is the presentation given at the Symfony Live 2011 conference. It is an introduction to DevOps, the new agile movement spreading through the technical operations community, and to adopting it on web development projects, in particular Symfony projects.
Plan of the slides:
- Configuration Management
- Development VM
- Scripted deployment
- Continuous deployment
Tools presented in the slides:
- Puppet
- Vagrant
- Fabric
- Jenkins / Hudson
The document provides instructions for getting started with Hyperledger Fabric blockchain technology. It covers three parts: 1) Using Blockchain as a Service on Bluemix cloud and starting a network locally, 2) Creating a smart contract in Java, and 3) Developing the client-side application. For part one, it explains how to deploy a blockchain network on Bluemix and also how to start a local network using Docker Compose with 4 peers, 1 CA, and sample configuration files. Part two describes how to build a Java smart contract using Eclipse that implements a crop insurance agreement between a farmer and insurer. Part three will cover initializing, deploying, querying and testing the client application.
The document describes a lab setup containing a Biclops pan-tilt unit with a Microsoft Kinect mounted on top. It provides instructions for installing and configuring the necessary ROS packages and libraries to control the Biclops unit and access the Kinect functionality. This includes creating a Biclops ROS package, services for homing the unit, modeling the unit in URDF, and implementing teleoperation of the unit using keyboard keys. It also provides instructions for installing Kinect drivers and libraries on Linux.
The document discusses two serverless computing platforms that support Swift - OpenWhisk and Fn.
OpenWhisk is an open source system that is event-driven, containerized, and allows chaining of actions. It is hosted on Bluemix but can be difficult to deploy elsewhere. Fn is container-native and deploys functions as containers communicating via standard input/output. Both allow simple Swift functions to be deployed and called remotely with REST APIs or command line tools. The document provides examples of writing, deploying and calling functions on each platform.
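The document's examples are in Swift, but the action shape carries over to other runtimes; for instance, an OpenWhisk action in Python (shown here only to keep this page's examples in a single language) is just a function that takes a dict of parameters and returns a JSON-serializable dict:

```python
def main(args):
    # OpenWhisk invokes main() with the request parameters as a dict
    # and returns the resulting dict as the JSON response, whether the
    # action is called via the REST API or the wsk CLI.
    name = args.get("name", "world")
    return {"greeting": "Hello, " + name + "!"}

print(main({"name": "Swift"}))  # → {'greeting': 'Hello, Swift!'}
```

Fn follows the same idea but packages the function as a container that communicates over standard input/output.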
This document provides instructions for installing various developer tools including Git, Vim, Java, Tomcat, Maven, and Psi Probe on Linux, Mac OSX, and Windows. It then outlines 3 homework assignments: 1) creating a basic Git repository, 2) forking and cloning a provided repository, adding a feature, resolving conflicts, and deploying the application, and 3) using Psi Probe to manage Tomcat web applications. Step-by-step instructions are provided for completing each task along with explanations of commands used.
OSCamp Kubernetes 2024 | Zero-Touch OS Infrastructure for Containers and Kubern... (NETWAYS)
In Kubernetes, we deploy applications as instances of a predefined container image whose properties are configured declaratively. This eases the automation and reproducibility of deployments, which in turn reduces operational risk. What if we extended these properties to server provisioning and treated the operating system itself like an application in Kubernetes? What if, instead of adapting general-purpose distributions to our needs, we rethought from the ground up how a "cloud-native" operating system should work? Applying the same expectations we have for handling Kubernetes applications, we present an alternative approach to deploying, configuring, and managing the lifecycle of the operating system. Using a strict separation of operating system and applications, we show how a maintainable, immutable, image-based operating system can be built. By extending this concept, we make provisioning painless and automatic updates low-risk. In this talk we will also cover some of the latest developments around operating systems and go beyond the established concept of a container Linux, toward a future based on composable images with systemd-sysext and a generic model for image-based Linux architectures.
This document provides instructions for installing, securing, and maintaining FreeBSD servers. It discusses pre-installation planning including partitioning, software selection, and kernel customization. Post-installation tasks covered include rebuilding the operating system to incorporate updates, installing software via packages and ports, and preparing for automated upgrades. The goal is to provide a secure, optimized system tailored to the server's purpose through careful configuration and removal of unnecessary components.
This document summarizes a workshop on network automation tools including Chef and Zero Touch Provisioning.
The agenda includes demonstrating ZTP to boot three bare metal switches, using Chef to orchestrate the baseline configuration of two switches and enforce configuration statements, creating a VXLAN tunnel between two leaf switches using Cisco's CVX, and starting an Opendaylight controller to configure Openflow on switches.
The workshop will require some VirtualBox experience and a notebook with at least 4 GB RAM and 10 GB of storage, with VirtualBox or a comparable virtualization solution installed as the hypervisor. Attendees should be DevOps engineers interested in the network side of DevOps.
The workshop will prepare VMs, demonstrate
This document discusses developing exploits for routers running MIPS binaries. It begins by setting up a Debian MIPS environment using QEMU for testing exploits. The document then analyzes a stack overflow vulnerability in MiniUPnPd version 1.0 as a target. Details are provided on obtaining the MiniUPnPd binary from router firmware, setting up remote debugging of the binary, and triggering the vulnerability with a long SOAP request. The document concludes by discussing restrictions in writing the exploit and finding an appropriate return-oriented programming chain to execute shellcode.
Automated Image & Restore (AIR) is an open source forensic imaging tool with a graphical user interface. It provides an easy front-end for disk/partition imaging using dd and dcfldd commands. Key features include support for hashing algorithms, SCSI tape drives, network imaging, splitting images, and detailed session logging. The tutorial demonstrates installing and using AIR to create a forensic image of a file on a Linux system and copy it to a CD-ROM for evidence preservation.
B-Translator helps to get feedback about l10n (translations of programs). It tries to collect very small translation contributions from a wide crowd of people and distill them into something useful. It is developed as a Drupal 7 profile, and the code is hosted on GitHub. Here I describe the development setup and process that I use for this project. Most of the tips are project-specific, but some of them can be used on any Drupal project.
Kernel developers may have experience writing makefiles for the Linux kernel. In many cases this means just adding lines like
"obj-$(CONFIG_FOO) += foo.o" to a makefile, but probably not many people really know what is going on behind this cool build system.
In this talk, Cao Jin dives into the Kbuild internals. Starting from the basics of GNU Make, he explains how Kbuild works and how, in the end, it produces vmlinux, bzImage, and modules. The talk also covers some smart tricks used in Kbuild. Finally, he gives an introduction to how the Xen project relates to this config/build system.
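To make the "obj-$(CONFIG_FOO)" idiom concrete, a typical kernel Makefile fragment looks like this (CONFIG_FOO and the file names are illustrative, not from the talk):

```makefile
# drivers/foo/Makefile (illustrative)
# If CONFIG_FOO=y, the line expands to obj-y and foo.o is linked into vmlinux;
# if CONFIG_FOO=m, it expands to obj-m and foo.ko is built as a module;
# if CONFIG_FOO is unset, the object is not built at all.
obj-$(CONFIG_FOO) += foo.o

# A module built from several source files lists its objects explicitly:
foo-y := foo_main.o foo_util.o
```

Kbuild's descent into subdirectories, and the final link of vmlinux, is driven entirely by these small per-directory fragments.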
The document provides an agenda for an embedded C programming lecture that includes the following topics: definitions of embedded systems and the differences between C for embedded systems and embedded C, the code compilation process and types of errors, code compilation using the command line, and a quick revision of C language syntax. It concludes with assigning a task for students.
Continuous Integration & Development with Gitlab (Ayush Sharma)
GitLab CI is a part of GitLab, a web application with an API that stores its state in a database. It manages projects/builds and provides a nice user interface, besides all the features of GitLab. GitLab Runner is an application which processes builds.
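As a minimal illustration of what a GitLab Runner picks up, a project's .gitlab-ci.yml might look like this (stage names and script contents are hypothetical):

```yaml
# .gitlab-ci.yml - GitLab CI reads this file from the repository root;
# each job below is executed by a GitLab Runner.
stages:
  - test
  - build

run-tests:
  stage: test
  script:
    - echo "running the test suite"

build-artifact:
  stage: build
  script:
    - echo "building the artifact"
```

Jobs in the same stage run in parallel; stages run in the order listed.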
The document provides instructions for a lab on Snort and firewall rules. It describes:
1) Setting up the virtual environment and configuring networking on the CyberOps Workstation VM.
2) Explaining the differences between firewall and IDS rules while noting their similarities, such as both having matching and action components.
3) Having students run commands to start a malware server, use Snort to monitor traffic, and download a file from the server to trigger an alert, observing the alert in the Snort log.
LibOS as a regression test framework for Linux networking #netdev1.1 (Hajime Tazaki)
This document describes using the LibOS framework to build a regression testing system for Linux networking code. LibOS allows running the Linux network stack in a library, enabling deterministic network simulation. Tests can configure virtual networks and run network applications and utilities to identify bugs in networking code by detecting changes in behavior across kernel versions. Example tests check encapsulation protocols like IP-in-IP and detect past kernel bugs. Results are recorded in JUnit format for integration with continuous integration systems.
Measures in SQL (SIGMOD 2024, Santiago, Chile), by Julian Hyde
SQL has attained widespread adoption, but Business Intelligence tools still use their own higher level languages based upon a multidimensional paradigm. Composable calculations are what is missing from SQL, and we propose a new kind of column, called a measure, that attaches a calculation to a table. Like regular tables, tables with measures are composable and closed when used in queries.
SQL-with-measures has the power, conciseness and reusability of multidimensional languages but retains SQL semantics. Measure invocations can be expanded in place to simple, clear SQL.
To define the evaluation semantics for measures, we introduce context-sensitive expressions (a way to evaluate multidimensional expressions that is consistent with existing SQL semantics), a concept called evaluation context, and several operations for setting and modifying the evaluation context.
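A sketch of what the proposal looks like in practice (syntax paraphrased, table and column names illustrative): a measure attaches an aggregate calculation to a table, and a later query can then use it as if it were a column, with the grouping context supplied at query time.

```sql
-- Illustrative sketch, not the paper's exact grammar: attach a measure
-- to a table...
CREATE VIEW orders_v AS
SELECT *, SUM(revenue) AS MEASURE total_revenue
FROM orders;

-- ...then use it like a column; the measure is evaluated in the
-- context of whatever grouping the outer query applies.
SELECT region, total_revenue
FROM orders_v
GROUP BY region;
```

This is the composability claim: the measure's calculation is defined once, and each query that touches the table reuses it under its own evaluation context.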
A talk at SIGMOD, June 9–15, 2024, Santiago, Chile
Authors: Julian Hyde (Google) and John Fremlin (Google)
https://doi.org/10.1145/3626246.3653374
Transform Your Communication with Cloud-Based IVR Solutions (TheSMSPoint)
Discover the power of Cloud-Based IVR Solutions to streamline communication processes. Embrace scalability and cost-efficiency while enhancing customer experiences with features like automated call routing and voice recognition. Accessible from anywhere, these solutions integrate seamlessly with existing systems, providing real-time analytics for continuous improvement. Revolutionize your communication strategy today with Cloud-Based IVR Solutions. Learn more at: https://thesmspoint.com/channel/cloud-telephony
Malibou Pitch Deck For Its €3M Seed Roundsjcobrien
French start-up Malibou raised a €3 million Seed Round to develop its payroll and human resources
management platform for VSEs and SMEs. The financing round was led by investors Breega, Y Combinator, and FCVC.
Hand Rolled Applicative User ValidationCode KataPhilip Schwarz
Could you use a simple piece of Scala validation code (granted, a very simplistic one too!) that you can rewrite, now and again, to refresh your basic understanding of Applicative operators <*>, <*, *>?
The goal is not to write perfect code showcasing validation, but rather, to provide a small, rough-and ready exercise to reinforce your muscle-memory.
Despite its grandiose-sounding title, this deck consists of just three slides showing the Scala 3 code to be rewritten whenever the details of the operators begin to fade away.
The code is my rough and ready translation of a Haskell user-validation program found in a book called Finding Success (and Failure) in Haskell - Fall in love with applicative functors.
What to do when you have a perfect model for your software but you are constrained by an imperfect business model?
This talk explores the challenges of bringing modelling rigour to the business and strategy levels, and talking to your non-technical counterparts in the process.
SOCRadar's Aviation Industry Q1 Incident Report is out now!
The aviation industry has always been a prime target for cybercriminals due to its critical infrastructure and high stakes. In the first quarter of 2024, the sector faced an alarming surge in cybersecurity threats, revealing its vulnerabilities and the relentless sophistication of cyber attackers.
SOCRadar’s Aviation Industry, Quarterly Incident Report, provides an in-depth analysis of these threats, detected and examined through our extensive monitoring of hacker forums, Telegram channels, and dark web platforms.
Using Query Store in Azure PostgreSQL to Understand Query PerformanceGrant Fritchey
Microsoft has added an excellent new extension in PostgreSQL on their Azure Platform. This session, presented at Posette 2024, covers what Query Store is and the types of information you can get out of it.
E-commerce Development Services- Hornet DynamicsHornet Dynamics
For any business hoping to succeed in the digital age, having a strong online presence is crucial. We offer Ecommerce Development Services that are customized according to your business requirements and client preferences, enabling you to create a dynamic, safe, and user-friendly online store.
UI5con 2024 - Boost Your Development Experience with UI5 Tooling ExtensionsPeter Muessig
The UI5 tooling is the development and build tooling of UI5. It is built in a modular and extensible way so that it can be easily extended by your needs. This session will showcase various tooling extensions which can boost your development experience by far so that you can really work offline, transpile your code in your project to use even newer versions of EcmaScript (than 2022 which is supported right now by the UI5 tooling), consume any npm package of your choice in your project, using different kind of proxies, and even stitching UI5 projects during development together to mimic your target environment.
Top Benefits of Using Salesforce Healthcare CRM for Patient Management.pdfVALiNTRY360
Salesforce Healthcare CRM, implemented by VALiNTRY360, revolutionizes patient management by enhancing patient engagement, streamlining administrative processes, and improving care coordination. Its advanced analytics, robust security, and seamless integration with telehealth services ensure that healthcare providers can deliver personalized, efficient, and secure patient care. By automating routine tasks and providing actionable insights, Salesforce Healthcare CRM enables healthcare providers to focus on delivering high-quality care, leading to better patient outcomes and higher satisfaction. VALiNTRY360's expertise ensures a tailored solution that meets the unique needs of any healthcare practice, from small clinics to large hospital systems.
For more info visit us https://valintry360.com/solutions/health-life-sciences
Artificia Intellicence and XPath Extension FunctionsOctavian Nadolu
The purpose of this presentation is to provide an overview of how you can use AI from XSLT, XQuery, Schematron, or XML Refactoring operations, the potential benefits of using AI, and some of the challenges we face.
UI5con 2024 - Keynote: Latest News about UI5 and it’s EcosystemPeter Muessig
Learn about the latest innovations in and around OpenUI5/SAPUI5: UI5 Tooling, UI5 linter, UI5 Web Components, Web Components Integration, UI5 2.x, UI5 GenAI.
Recording:
https://www.youtube.com/live/MSdGLG2zLy8?si=INxBHTqkwHhxV5Ta&t=0
When it is all about ERP solutions, companies typically meet their needs with common ERP solutions like SAP, Oracle, and Microsoft Dynamics. These big players have demonstrated that ERP systems can be either simple or highly comprehensive. This remains true today, but there are new factors to consider, including a promising new contender in the market that’s Odoo. This blog compares Odoo ERP with traditional ERP systems and explains why many companies now see Odoo ERP as the best choice.
What are ERP Systems?
An ERP, or Enterprise Resource Planning, system provides your company with valuable information to help you make better decisions and boost your ROI. You should choose an ERP system based on your company’s specific needs. For instance, if you run a manufacturing or retail business, you will need an ERP system that efficiently manages inventory. A consulting firm, on the other hand, would benefit from an ERP system that enhances daily operations. Similarly, eCommerce stores would select an ERP system tailored to their needs.
Because different businesses have different requirements, ERP system functionalities can vary. Among the various ERP systems available, Odoo ERP is considered one of the best in the ERp market with more than 12 million global users today.
Odoo is an open-source ERP system initially designed for small to medium-sized businesses but now suitable for a wide range of companies. Odoo offers a scalable and configurable point-of-sale management solution and allows you to create customised modules for specific industries. Odoo is gaining more popularity because it is built in a way that allows easy customisation, has a user-friendly interface, and is affordable. Here, you will cover the main differences and get to know why Odoo is gaining attention despite the many other ERP systems available in the market.
Mobile app Development Services | Drona InfotechDrona Infotech
Drona Infotech is one of the Best Mobile App Development Company In Noida Maintenance and ongoing support. mobile app development Services can help you maintain and support your app after it has been launched. This includes fixing bugs, adding new features, and keeping your app up-to-date with the latest
Visit Us For :
CI/CD
Contents
Overview
Bluefield Run
OMB Tests
IMB Tests
MPICH Tests
NAS Tests
References
Overview

CI/CD falls under DevOps (the joining of development and operations) and combines the practices of continuous integration and continuous delivery. CI/CD automates much or all of the manual human intervention traditionally needed to get new code from a commit into production, such as build, test, and deploy, as well as infrastructure provisioning. With a CI/CD pipeline, developers can make changes to code that are then automatically tested and pushed out for delivery and deployment. With CI/CD, code releases happen faster.
Continuous integration is the practice of integrating all code changes into the main branch of a shared source code repository early and often, automatically testing each change when a commit or merge happens, and automatically kicking off a build. With continuous integration, errors and security issues can be identified and fixed more easily, and much earlier in the software development lifecycle.

Continuous delivery is a software development practice that works in conjunction with continuous integration to automate the infrastructure provisioning and application release process. Once code has been tested and built as part of the CI process, continuous delivery takes over during the final stages to ensure the code is packaged with everything it needs, so it can be deployed to any environment at any time. Continuous delivery can cover everything from provisioning the infrastructure to deploying the application to the testing or production environment.

With continuous delivery, the software is built so that it can be deployed to production at any time. One can then trigger the deployments manually or move to continuous deployment, where deployments are automated as well.
Directory Tree

The pipeline lives in /global/home/users/rgopal/CITest/usr/bin. This is the base directory and where the GitLab runner is installed. From here, the structure of the directory looks like:

/global/home/users/rgopal/CITest/usr/bin
  |src/
    |...
    |<git pull location>
  |builds/
    |<commit_hash>
      |gcc/
        |install/
  |logs/
    |<commit_hash>
      |nas
      |mpich
      |imb
      |omb
  |tests/
  |tmp/
    |<commit_hash>
    |...
    |mv2
src/
The location that the gitlab-runner clones the most recent commit into. This directory is checked for changes at each new job. Do not change any files here in any of the jobs.

builds/
The install location of the built files from the most recent commit.

logs/
Where the testing logs live.

tests/
The binaries of all of the external tests (those not built alongside mv2).

tmp/
Location of temporary files.
Pipeline Structure

Currently, the pipeline is divided into 4 phases:

build
test
verify
clean

Within each phase, there are separate jobs being executed at the same time. For each new job, the GitLab runner makes a clone from master into a directory and tries to start from a clean slate.
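The phase ordering can be sketched as a small shell driver. This is a hypothetical illustration only (the phase and job names mirror the description above, but the real pipeline is driven by the GitLab runner, not by this script): phases run in sequence, while the jobs inside a phase run concurrently.

```shell
#!/bin/sh
# Hypothetical sketch of the pipeline's phase ordering: phases are
# sequential; the jobs inside one phase run at the same time.
set -e

run_phase() {
    phase="$1"; shift
    echo "=== phase: $phase ==="
    for job in "$@"; do
        ( echo "running $phase/$job" ) &   # each job is its own process
    done
    wait                                   # a phase ends when all its jobs do
}

run_phase build  gcc
run_phase test   nas mpich imb omb
run_phase verify nas mpich imb omb
run_phase clean  cleanup
```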
build

For the build step, the entire repo is copied into $GITLAB_BASE_DIR/tmp/mv2. This is done because if we run a build in the source directory, the following phases will notice that the files have changed; they'll try to do a reset and will eventually complain.

It then runs autogen, make, and make install, installing to /builds/<commit_hash>/gcc.
test

This phase just submits batch jobs using sbatch. Currently, we're using the thor partition. The scripts will save the output to $GITLAB_BASE_DIR/logs/<commit_hash> and, when a job is done, will generate a file in $GITLAB_BASE_DIR/tmp/<commit_hash>.

This phase should complete in a couple of seconds: we're just submitting jobs to sbatch and then checking on them in the verify phase.
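A test-phase job might look roughly like the sketch below. The directory layout follows the conventions above, but the script name (run-omb-tests.sh), the sbatch options, and the commit variable are assumptions; here the batch script is only generated, not actually submitted.

```shell
#!/bin/sh
# Sketch: generate a batch script that runs a benchmark, logs its output
# under logs/<commit_hash>, and drops a .done marker for the verify phase.
GITLAB_BASE_DIR="${GITLAB_BASE_DIR:-/tmp/citest}"   # assumed base layout
COMMIT="${CI_COMMIT_SHA:-deadbeef}"                 # assumed commit variable
TEST=omb

mkdir -p "$GITLAB_BASE_DIR/logs/$COMMIT/$TEST" "$GITLAB_BASE_DIR/tmp/$COMMIT"

cat > "$GITLAB_BASE_DIR/tmp/$COMMIT/$TEST.sbatch" <<EOF
#!/bin/sh
#SBATCH --partition=thor
#SBATCH --output=$GITLAB_BASE_DIR/logs/$COMMIT/$TEST/%j.out
./run-$TEST-tests.sh
touch "$GITLAB_BASE_DIR/tmp/$COMMIT/$TEST.done"
EOF

# The real job would now hand the script to the scheduler:
# sbatch "$GITLAB_BASE_DIR/tmp/$COMMIT/$TEST.sbatch"
```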
verify

Checks the status of the tests that we submitted to sbatch. The scripts in the test phase should generate a .done file in $GITLAB_BASE_DIR/tmp/<commit_hash>. The verify scripts loop through and check for that. Once it's found, they grep the output from the batch job to look for errors and log them in $GITLAB_BASE_DIR/logs/<commit_hash>/<test>/.
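The verify loop described above can be sketched like this. It is a simplified stand-in for the real verify scripts: the error patterns and the timeout are assumptions.

```shell
#!/bin/sh
# Sketch: wait for the .done marker from the batch job, then grep its
# output for failure signatures and record them in the logs directory.
verify_test() {
    done_file="$1"; job_log="$2"; err_log="$3"; tries=0
    until [ -f "$done_file" ]; do
        tries=$((tries + 1))
        if [ "$tries" -gt 60 ]; then echo "timed out"; return 1; fi
        sleep 1
    done
    # Assumed failure signatures; the real scripts may look for more.
    grep -E "Segmentation fault|Bus error|Abort|MPI_Abort" "$job_log" \
        > "$err_log" || true
    [ ! -s "$err_log" ]    # succeed only if no errors were found
}
```

For example, the omb job would be checked with something like `verify_test "$GITLAB_BASE_DIR/tmp/$COMMIT/omb.done" <job output> "$GITLAB_BASE_DIR/logs/$COMMIT/omb/errors.log"`.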
clean

Cleans up any extra files, and removes the builds and any generated hostfiles.
Bluefield Run

Building

After cloning, run ./build.sh. This script will run autogen, configure, make, and make install. If you open the script and set ARMSRC_DIR to another clone of this repo in a separate folder (make sure it's on the same commit), it will launch a parallel build.

Note: Open the script and set LICENSE=0 before running in order to build without the need for a license file.

Note 2: Building on ARM takes a long time; wait around 20 minutes for it to complete. The host build will finish much faster. Don't forget that both the host and ARM builds need to finish before you can run!

The script uses a separate set of configure flags in order to allow both SRC_DIR and ARMSRC_DIR to have the same --prefix. Basically, the ARMSRC_DIR flags will build mpicc, mpispawn, proxy_program, etc., all with an -arm suffix appended to distinguish the host binaries from the ARM binaries.

If you see an error "cannot execute binary file", please make sure that file ./install/bin/mpispawn reports an x86 executable, and file ./install/bin/proxy_program reports an aarch64 executable. If either is wrong, run either cp proxy_program-arm proxy_program or cp mpispawn-x86 mpispawn. If these files don't exist, rerun make && make install to regenerate them.
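The architecture check above can be scripted with a small helper around `file`. This is a hypothetical sketch; the expected match strings ("x86-64", "aarch64") are assumptions based on typical `file` output for ELF binaries.

```shell
#!/bin/sh
# Sketch: verify that a binary's architecture (as reported by `file -b`)
# matches what the launcher expects: an x86 mpispawn on the host and an
# aarch64 proxy_program on the BlueField.
ensure_arch() {
    binary="$1"; want="$2"
    case "$(file -b "$binary")" in
        *"$want"*) return 0 ;;
        *) echo "$binary does not look like a $want binary" >&2; return 1 ;;
    esac
}

# Intended usage on the install tree (paths from the build section):
# ensure_arch ./install/bin/mpispawn      x86-64
# ensure_arch ./install/bin/proxy_program aarch64
```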
Environment Setup

On HPCAC, the Thor hosts have two physical HCAs plugged in. One is the BF-2, and the other is a ConnectX-6:

[rgopal@thor011 xsc]$ ibstat
CA 'mlx5_0'
    CA type: MT4123
    Number of ports: 1
    Firmware version: 20.30.1004
    Hardware version: 0
    Node GUID: 0x98039b03008553e6
    System image GUID: 0x98039b03008553e6
    Port 1:
        State: Active
        Physical state: LinkUp
        Rate: 100
        Base lid: 60
        LMC: 0
        SM lid: 9
        Capability mask: 0x2651e848
        Port GUID: 0x98039b03008553e6
        Link layer: InfiniBand
CA 'mlx5_1'
    CA type: MT4123
    Number of ports: 1
    Firmware version: 20.30.1004
    Hardware version: 0
    Node GUID: 0x98039b03008553e7
    System image GUID: 0x98039b03008553e6
    Port 1:
        State: Active
        Physical state: LinkUp
        Rate: 100
        Base lid: 41
        LMC: 0
        SM lid: 1
        Capability mask: 0x2651e848
        Port GUID: 0x98039b03008553e7
        Link layer: InfiniBand
CA 'mlx5_2'
    CA type: MT41686
    Number of ports: 1
    Firmware version: 24.30.1004
    Hardware version: 0
    Node GUID: 0x043f720300ec7f0e
    System image GUID: 0x043f720300ec7f0e
    Port 1:
        State: Active
        Physical state: LinkUp
        Rate: 100
        Base lid: 301
        LMC: 0
        SM lid: 9
        Capability mask: 0x2651e848
        Port GUID: 0x043f720300ec7f0e
        Link layer: InfiniBand
CA 'mlx5_3'
    CA type: MT41686
    Number of ports: 1
    Firmware version: 24.30.1004
    Hardware version: 0
    Node GUID: 0x043f720300ec7f0f
    System image GUID: 0x043f720300ec7f0e
    Port 1:
        State: Active
        Physical state: LinkUp
        Rate: 100
        Base lid: 4
        LMC: 0
        SM lid: 1
        Capability mask: 0x2651e848
        Port GUID: 0x043f720300ec7f0f
        Link layer: InfiniBand

On the ARM cores, only the BF-2 is visible:

[rgopal@thor-bf11 ~]$ ibstat
CA 'mlx5_0'
    CA type: MT41686
    Number of ports: 1
    Firmware version: 24.30.1004
    Hardware version: 0
    Node GUID: 0x043f720300ec7f12
    System image GUID: 0x043f720300ec7f0e
    Port 1:
        State: Active
        Physical state: LinkUp
        Rate: 100
        Base lid: 321
        LMC: 0
        SM lid: 9
        Capability mask: 0x2641e848
        Port GUID: 0x043f720300ec7f12
        Link layer: InfiniBand
CA 'mlx5_1'
    CA type: MT41686
    Number of ports: 1
    Firmware version: 24.30.1004
    Hardware version: 0
    Node GUID: 0x043f720300ec7f13
    System image GUID: 0x043f720300ec7f0e
    Port 1:
        State: Active
        Physical state: LinkUp
        Rate: 100
        Base lid: 51
        LMC: 0
        SM lid: 1
        Capability mask: 0x2641e848
        Port GUID: 0x043f720300ec7f13
        Link layer: InfiniBand
In order to run the offload, set the following snippet in ~/.bashrc to select the BF-2 on the host while also setting the BF-2 on the ARM cores.

STR=`hostname`
SUB="bf"
if [[ "$STR" == *"$SUB"* ]]; then
    export MV2_IBA_HCA=mlx5_0
else
    export MV2_IBA_HCA=mlx5_2
fi
Running

Create a hostfile as usual (a hostfile contains the list of hostnames of the nodes to launch in the MPI job).

Create a file called dpufile. Fill it with the individual hostnames of each BlueField. Don't write any WPN information (like thor-bf01:2, or listing thor-bf01 twice) since the launcher will launch 8 WPN automatically. For example, you can generate a dpufile like this if you have a job allocation with SLURM:

scontrol show hostnames | grep bf | tee ./dpufile

Set MV2_USE_DPU=1 as an environment variable in mpirun_rsh.

Full run command example:

./bin/mpirun_rsh -np 128 -hostfile ./hostfile -dpufile ./dpufile MV2_USE_DPU=1 ./libexec/osu-micro-benchmarks/mpi/collective/osu_ialltoall
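The hostfile/dpufile preparation can be sketched as a small helper. This is hypothetical: it reproduces the scontrol pipeline above, but reads the hostname list from stdin so it also works outside a SLURM allocation, and it additionally splits out the host-side hostfile.

```shell
#!/bin/sh
# Sketch: split an allocation's hostnames into a hostfile (host nodes)
# and a dpufile (BlueField hostnames, one per BlueField, no :ppn info).
make_files() {
    hostfile="$1"; dpufile="$2"
    all=$(mktemp)
    sort -u > "$all"                  # read hostnames from stdin
    grep -v bf "$all" > "$hostfile"   # host side, for -hostfile
    grep    bf "$all" > "$dpufile"    # BlueField side, for -dpufile
    rm -f "$all"
}

# Inside a SLURM allocation one would feed it from scontrol, e.g.:
# scontrol show hostnames | make_files ./hostfile ./dpufile
```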
OMB Tests

OSU Micro-Benchmarks (OMB) is a benchmark suite developed by NOWLAB that is included in every installation of MVAPICH2 (including MVAPICH2-DPU): after building, the binaries are in the install-prefix/libexec/osu-micro-benchmarks folder, and the source code is in the osu_benchmarks folder. Additionally, it can be downloaded as a standalone package here: http://mvapich.cse.ohio-state.edu/benchmarks/.

The benchmark suite has tests for MPI point-to-point operations (sending and receiving between exactly two processes), collectives (communication among groups of processes), RMA (one-sided puts and gets, i.e. sending without a corresponding receive, or receiving without a corresponding send), and others.

As of this writing, the MVAPICH2-DPU package supports MPI_Ialltoall, MPI_Ibcast, and MPI_Iallgather collective offloads.

[A figure illustrating what these collectives do appeared here; each row is a buffer (sendbuf or recvbuf) on a single process.]
Also, it is important to note that there is an I in front of the collective name. This means the collective is actually nonblocking. To demonstrate this, let me show the usage of a blocking alltoall (i.e., MPI_Alltoall) and compare it with a nonblocking alltoall (i.e., MPI_Ialltoall).

MPI_Alltoall:

// Recvbuf empty
MPI_Alltoall(sendbuf, sendcount, sendtype, recvbuf,
             recvcount, recvtype, comm); // May take some time to complete
// Recvbuf guaranteed to be full

MPI_Ialltoall:

MPI_Request request;
// Recvbuf empty
MPI_Ialltoall(sendbuf, sendcount, sendtype, recvbuf,
              recvcount, recvtype, comm, &request); // Returns instantly
// Do another job while the nonblocking alltoall progresses
heavy_computation_which_does_not_depend_on_recvbuf();
// Recvbuf may or may not be filled
MPI_Wait(&request, MPI_STATUS_IGNORE); // May or may not take much time to
// complete, depending on how long
// heavy_computation_which_does_not_depend_on_recvbuf took
// Recvbuf guaranteed to be filled
Since MVAPICH2-DPU supports MPI_Ialltoall, MPI_Ibcast, and MPI_Iallgather, we are mainly interested in the output of osu_ialltoall, osu_iallgather, and osu_ibcast. Binaries for these can be found in the install-prefix/libexec/osu-micro-benchmarks/mpi/collective folder.

The following MPI tests are included in the OMB package:

Point-to-Point MPI Benchmarks: latency, multi-threaded latency, multi-pair latency, multiple bandwidth / message rate test, bandwidth, bidirectional bandwidth

Collective MPI Benchmarks: collective latency tests for various MPI collective operations such as MPI_Allgather, MPI_Barrier, MPI_Alltoall, MPI_Bcast, MPI_Allreduce, MPI_Gather, MPI_Reduce, MPI_Reduce_Scatter, MPI_Scatter, and vector collectives

Non-Blocking Collective (NBC) MPI Benchmarks: collective latency and overlap tests for various MPI collective operations such as MPI_Iallgather, MPI_Iallreduce, MPI_Ialltoall, MPI_Ibarrier, MPI_Ibcast, MPI_Igather, MPI_Ireduce, MPI_Iscatter, and vector collectives

One-sided MPI Benchmarks: one-sided put latency, one-sided put bandwidth, one-sided put bidirectional bandwidth, one-sided get latency, one-sided get bandwidth, one-sided accumulate latency, compare and swap latency, fetch and operate, and get_accumulate latency for MVAPICH2 (MPI-2 and MPI-3)
omb-refactor.sh runs the OMB tests.

The tests are run on 1, 2, 4, 8, and 16 nodes with full subscription. Each configuration is run with MV2_USE_DPU=0 and MV2_USE_DPU=1.

All tests are run with the options below:

COMMON="MV2_DEBUG_SHOW_BACKTRACE=2 MV2_ENABLE_AFFINITY=0"

MV2_DEBUG_SHOW_BACKTRACE
Show a backtrace when a process fails on errors like "Segmentation fault", "Bus error", "Illegal instruction", "Abort" or "Floating point exception".

MV2_ENABLE_AFFINITY
Enable CPU affinity by setting MV2_ENABLE_AFFINITY to 1 or disable it by setting MV2_ENABLE_AFFINITY to 0.

MPIEXEC_TIMEOUT
Set this to a limit, in seconds, on the execution time of the MPI application. This overrides the MV2_MPIRUN_TIMEOUT parameter.
Point-to-Point MPI Benchmarks

osu_latency - Latency Test
osu_latency_mt - Multi-threaded Latency Test
osu_latency_mp - Multi-process Latency Test
osu_bw - Bandwidth Test
osu_bibw - Bidirectional Bandwidth Test
osu_mbw_mr - Multiple Bandwidth / Message Rate Test
osu_multi_lat - Multi-pair Latency Test

Pt2Pt tests are run on 1 node [2 ppn] and 2 nodes [1 ppn].

Total Tests: (7 tests * 2 scenarios [host/dpu]) = 14
Collective MPI Benchmarks

osu_allgather - MPI_Allgather Latency Test
osu_allgatherv - MPI_Allgatherv Latency Test
osu_allreduce - MPI_Allreduce Latency Test
osu_alltoall - MPI_Alltoall Latency Test
osu_alltoallv - MPI_Alltoallv Latency Test
osu_barrier - MPI_Barrier Latency Test
osu_bcast - MPI_Bcast Latency Test
osu_gather - MPI_Gather Latency Test
osu_gatherv - MPI_Gatherv Latency Test
osu_reduce - MPI_Reduce Latency Test
osu_reduce_scatter - MPI_Reduce_scatter Latency Test
osu_scatter - MPI_Scatter Latency Test
osu_scatterv - MPI_Scatterv Latency Test

Non-Blocking Collective (NBC) MPI Benchmarks

osu_iallgather - MPI_Iallgather Latency Test
osu_iallgatherv - MPI_Iallgatherv Latency Test
osu_iallreduce - MPI_Iallreduce Latency Test
osu_ialltoall - MPI_Ialltoall Latency Test
osu_ialltoallv - MPI_Ialltoallv Latency Test
osu_ialltoallw - MPI_Ialltoallw Latency Test
osu_ibarrier - MPI_Ibarrier Latency Test
osu_ibcast - MPI_Ibcast Latency Test
osu_igather - MPI_Igather Latency Test
osu_igatherv - MPI_Igatherv Latency Test
osu_ireduce - MPI_Ireduce Latency Test
osu_iscatter - MPI_Iscatter Latency Test
osu_iscatterv - MPI_Iscatterv Latency Test

Collective tests are run on 1, 2, 4, 8, and 16 nodes with full subscription [16 ppn].
Total Tests: (26 tests * 2 scenarios [host/dpu]) - 1 = 51

Note: ialltoall is a time-consuming test, hence it is run with a max message size of 32KB; the rest are run with the default max message size.
One-sided MPI Benchmarks

osu_put_latency - Latency Test for Put with Active/Passive Synchronization
osu_get_latency - Latency Test for Get with Active/Passive Synchronization
osu_put_bw - Bandwidth Test for Put with Active/Passive Synchronization
osu_get_bw - Bandwidth Test for Get with Active/Passive Synchronization
osu_put_bibw - Bi-directional Bandwidth Test for Put with Active Synchronization
osu_acc_latency - Latency Test for Accumulate with Active/Passive Synchronization
osu_cas_latency - Latency Test for Compare and Swap with Active/Passive Synchronization
osu_fop_latency - Latency Test for Fetch and Op with Active/Passive Synchronization
osu_get_acc_latency - Latency Test for Get_accumulate with Active/Passive Synchronization

RMA tests are run on one or two nodes with ppn = 2 or ppn = 1, respectively.

Total Tests: (9 tests * 2 scenarios [host/dpu]) = 18
IMB Tests

The objectives of the Intel® MPI Benchmarks are:

• Provide a concise set of benchmarks targeted at measuring the most important MPI functions.
• Set forth a precise benchmark methodology.
• Report bare timings rather than provide interpretation of the measured results. Show throughput values if and only if these values are well-defined.

Intel® MPI Benchmarks is developed using ANSI C plus standard MPI.

Intel® MPI Benchmarks performs a set of performance measurements for point-to-point and global communication operations for a range of message sizes. The generated benchmark data fully characterizes:

• performance of a cluster system, including node performance, network latency, and throughput
• efficiency of the MPI implementation used

The Intel® MPI Benchmarks package consists of the following components:

• IMB-MPI1 - benchmarks for MPI-1 functions.
• Two components for MPI-2 functionality:
  • IMB-EXT - one-sided communications benchmarks.
  • IMB-IO - input/output (I/O) benchmarks.
• Two components for MPI-3 functionality:
  • IMB-NBC - benchmarks for nonblocking collective (NBC) operations.
  • IMB-RMA - one-sided communications benchmarks. These benchmarks measure the Remote Memory Access (RMA) functionality introduced in the MPI-3 standard.

Each component constitutes a separate executable file. You can run all of the supported benchmarks, or specify a single executable file in the command line to get results for a specific subset of benchmarks.
imb-refactor.sh runs the IMB tests.

IMB tests are run with full subscription on 16 nodes (16 nodes with 16 ppn = 256 processes). They are run with MV2_USE_DPU=0 and MV2_USE_DPU=1.

The tests are then repeated on 2 nodes and 4 nodes. For two and four nodes, the maximum message size is limited to 64KB and the number of iterations is kept at 500. Here too, tests are run with MV2_USE_DPU=0 and 1.

The number of iterations for all the tests is set to 500.

Tests are run with the "multi" option set to 1 and without it. This option defines whether the benchmark runs in multiple mode or not.

All tests are run with the options below:

COMMON="MV2_DEBUG_SHOW_BACKTRACE=2 MV2_ENABLE_AFFINITY=0 MV2_SUPPRESS_JOB_STARTUP_PERFORMANCE_WARNING=1 MPIEXEC_TIMEOUT=300"

MV2_DEBUG_SHOW_BACKTRACE
Show a backtrace when a process fails on errors like "Segmentation fault", "Bus error", "Illegal instruction", "Abort" or "Floating point exception".

MV2_ENABLE_AFFINITY
Enable CPU affinity by setting MV2_ENABLE_AFFINITY to 1 or disable it by setting MV2_ENABLE_AFFINITY to 0.

MPIEXEC_TIMEOUT
Set this to a limit, in seconds, on the execution time of the MPI application. This overrides the MV2_MPIRUN_TIMEOUT parameter.
The following table lists all IMB-NBC benchmarks:

Total Tests: (19 tests * 2 scenarios [host/dpu] * 2 modes [multi/non-multi]) = 76

IMB-RMA Benchmarks

The table below lists all IMB-RMA benchmarks:

Total Tests: (19 tests * 2 scenarios [host/dpu] * 2 modes [multi/non-multi]) = 76
MPICH Tests

The MVAPICH2 MPI library (by the Network-Based Computing Laboratory at Ohio State University) is a derivative of MPICH (by Argonne National Laboratory). There exist some tests originally from MPICH that can be found in the ./test folder of a fresh clone of the MVAPICH2 code. There are folders with tests for multiple parts of the code: pt2pt, collectives, rma, etc.

Each folder has a testlist file that can be read in by a script to know which tests to run. Since the MVAPICH2-DPU library at this time only has support for DPU-based collectives, we are mainly interested in passing the tests within the ./test/mpi/coll folder.

MPICH collective tests

Total Tests: (169 tests * 2 scenarios [host/dpu]) = 338

There are many dense tests here such as "alltoall", "bcasttest", "redscat", "red_scat_block", "gather_big", "opprod", "nbicbcast", "nbicallreduce", "nbic".

MPICH comm, pt2pt, and rma tests are yet to be investigated.
NAS Tests

The NAS Parallel Benchmarks (NPB) are a small set of programs designed to help evaluate the performance of parallel supercomputers. The benchmarks are derived from computational fluid dynamics (CFD) applications and consist of five kernels and three pseudo-applications in the original "pencil-and-paper" specification (NPB 1). The benchmark suite has been extended to include new benchmarks for unstructured adaptive meshes, parallel I/O, multi-zone applications, and computational grids. Problem sizes in NPB are predefined and indicated as different classes. Reference implementations of NPB are available in commonly-used programming models like MPI and OpenMP (NPB 2 and NPB 3).

NAS has 9 benchmarks. The following are the NAS benchmarks run:

"mg.B.x", "cg.B.x", "ft.B.x", "lu.B.x", "is.B.x", "sp.B.x", "ep.B.x", "bt.B.x.ep_io", "bt.B.x.mpi_io_full"

All benchmarks run successfully on 2 nodes with 2 ppn. They are yet to be scaled up to 4, 8, and 16 nodes with full subscription.
References

1. OMB user guide
2. IMB user guide
3. Wiki

End of Document