This document provides best practices for using CMake, including:
- Set the cmake_minimum_required version to ensure modern features while maintaining backward compatibility.
- Use targets to define executables and libraries, their properties, and dependencies.
- Fetch remote dependencies at configure time using FetchContent or integrate with package managers like Conan.
- Import library targets rather than reimplementing Find modules when possible.
- Treat CUDA as a first-class language in CMake projects.
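Several of these practices can be sketched in a minimal CMakeLists.txt (project, file, and target names here are illustrative, not from the original talk):

```cmake
# Version range: modern features, graceful policy fallback on older CMake
cmake_minimum_required(VERSION 3.14...3.22)
project(demo LANGUAGES CXX)

# Targets own their properties and dependencies
add_library(demo_lib src/lib.cpp)
target_include_directories(demo_lib PUBLIC include)
target_compile_features(demo_lib PUBLIC cxx_std_17)

add_executable(demo_app src/main.cpp)
target_link_libraries(demo_app PRIVATE demo_lib)
```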
While CMake has become the de facto standard build system for C++, its siblings CTest and CPack are less well known. This talk gives a lightspeed introduction to these three tools and then focuses on best practices for building, testing, and packaging.
Git, CMake, Conan - How to ship and reuse our C++ projects? - Mateusz Pusz
These are the slides from my CppCon 2018 talk. You can find the recording here: https://www.youtube.com/watch?v=S4QSKLXdTtA. All my other talks can be found here: https://train-it.eu/resources.
---
Git and CMake are already established standards in our community. However, it is uncommon to see them being used in an efficient way. As a result, many C++ projects have big problems with either importing other dependencies or making themselves easy to import by others. It gets even worse when we need to build many different configurations of our package on one machine.
This presentation lists and addresses the build-system and packaging problems that we have with large, multi-platform C++ projects that often have many external and internal dependencies. The talk will present how proper use of CMake and the Conan package manager can address those use cases. It will also describe current issues with the cooperation of those tools.
If you've attended or seen my talk at C++Now 2018, this time you can expect much more information on Conan and package creation using that tool. I will also describe how the integration of CMake and Conan changed over the last few months.
CMake is the most widespread tool for building projects written in C++ as of 2021.
CMake does not compile the code itself; it generates the appropriate input for other tools (e.g. make), which carry out the compilation.
Using tools like CMake is the only realistic option when a C++ project includes many files, various parameters, external dependencies, etc. In that case, development becomes exponentially harder as the project grows unless tools like CMake are adopted.
In the workshop we will show how to set up a typical project written in C++ and cover the most basic scenarios one needs to know, such as:
✅ Producing an executable file
✅ Setting the include path
✅ Creating a library for static or dynamic linking
✅ Controlling the various compilation flags
✅ Creating functions within CMake
✅ Parameterization via options
CMake is an open-source cross-platform build system. It is increasingly becoming the build system of choice for open source projects. The Qt project recently announced that Qbs, the replacement build system for qmake, will no longer be supported and future efforts will focus on CMake. It may become the default build system for Qt version 6.
CMake has offered support for building Qt applications for some time, and is supported within the Qt Creator IDE. In this webinar we will:
-Introduce you to CMake
-Cover its basic features and how to use it
-Show some CMake configurations including Qt-based applications
-Prove how easy it is to use CMake with Qt so you'll be ready to use it for your C++ and Qt-based applications!
An Introduction to Makefile.
About 23 slides presenting a quick start to the make utility, its usage, and working principles, with some tips/examples to help you understand and write your own Makefiles.
In this presentation you will learn why this utility continues to hold its top position in project build software, despite many younger competitors.
Visit Do you know Magazine : https://www.facebook.com/douknowmagazine
An introduction to docker; the concepts; how to use it and why. The presentation is mainly based on the following presentation by docker, but with added info about Docker Compose and Docker Swarm.
https://www.slideshare.net/Docker/docker-101-nov-2016
#container #docker #Trifork #TriforkSelected #GotoConf
These are the slides for a talk/workshop delivered to the Cloud Native Wales user group (@CloudNativeWal) on 2019-01-10.
In these slides, we go over some principles of gitops and a hands on session to apply these to manage a microservice.
You can find out more about GitOps online https://www.weave.works/technologies/gitops/
Tracing MariaDB server with bpftrace - MariaDB Server Fest 2021 - Valeriy Kravchuk
Bpftrace is a relatively new eBPF-based open source tracer for modern Linux versions (kernels 5.x.y) that is useful for analyzing production performance problems and troubleshooting software. Basic usage of the tool, as well as bpftrace one liners and advanced scripts useful for MariaDB DBAs are presented. Problems of MariaDB Server dynamic tracing with bpftrace and some possible solutions and alternative tracing tools are discussed.
This is a really short and compact introduction to the CMake mechanism and the common variables used, shown in a simple group meeting of the REVES team at INRIA Sophia Antipolis (France) to students/PhDs.
Slides that were presented during the WebRTC Qt CMake tutorial at IIT-RTC in October 2017 in Chicago. The slides are not yet complete, and will be updated later.
Makefile is actually an old concept from UNIX development. A makefile encodes the compilation rules for a project and improves development efficiency. In a big project there are many files in different folders. You could write a DOS batch file to build the whole project, but make can decide which steps must be done first, which steps can be skipped, and handle even more complicated goals. All of this is decided by the rules in the makefile instead of being specified manually.
Continuous integration, delivery, and deployment (CICD) is widely used in DevOps communities, as it allows teams of all sizes to deploy rapidly-changing hardware and software resources quickly and confidently.
Simple Build Tool (sbt) is an open-source build tool. It is a good choice for Scala projects and aims to do the basics well. It requires Java 1.6 or later.
Are you a QMake user who has not yet familiarized yourself with CMake? If so, this webinar is for you — it’s aimed at anyone using QMake who wants to learn more about CMake and the pros and cons of each. We will:
Provide an introduction to CMake
Discuss the differences in the two build systems and the benefits of using one over the other
Set up a basic project and review some of the potential issues you may run into when starting your new project in CMake or converting from existing QMake projects
Having an operating system distribution that answers all the prerequisites of an offensive actor is crucial, especially when it comes to configuration reviews, source code reviews, or intrusion tests at all levels: web, infrastructure, or external. Automating this process greatly helps team members focus on other tasks.
We aim to facilitate the engagement process by minimizing the configuration time of the various tools, the time needed to delete sensitive data, and the various updates to be performed.
The paper details the most-used approaches for system installation. It also discusses the Debian Kali-based distribution from Offensive Security. To guide the choice of automation method, we structure the formulated recommendations and give tutorial references.
A small introduction to get started on Kubernetes as a user. This explains the main concepts like pod, deployment and services and gives some hints to help you use kubectl command.
These slides were presented in Grenoble Docker meetup in November 2017.
Modern binary build systems have made shipping binary packages for Python much easier than ever before. This talk discusses three of the most popular build systems for Python packages using the new standards developed for packaging.
SciPy22 - Building binary extensions with pybind11, scikit-build, and cibuild... - Henry Schreiner
Building binary extensions is easier than ever thanks to several key libraries. Pybind11 provides a natural C++ language for extensions without requiring pre-processing or special dependencies. Scikit-build ties the premier C++ build system, CMake, into the Python extension build process. And cibuildwheel makes it easy to build highly compatible wheels for over 80 different platforms using CI or on your local machine. We will look at advancements to all three libraries over the last year, as well as future plans.
Scikit-HEP has grown rapidly over the last few years, not just to serve the needs of the High Energy Physics (HEP) community, but in many ways the Python ecosystem at large. AwkwardArray, boost-histogram/hist, and iMinuit are examples of libraries that are used beyond the original HEP focus. In this talk, we will look at key packages in the ecosystem, at how the collection of 30+ packages is developed and maintained, at the software ecosystem contributions made to packages like cibuildwheel, pybind11, nox, scikit-build, build, and pipx that support this effort, at the Scikit-HEP developer pages, and at initial WebAssembly support.
PyCon 2022 - Scikit-HEP Developer Pages: Guidelines for modern packaging - Henry Schreiner
This was a PyCon 2022 lightning talk over the Scikit-HEP developer pages. It highlights best practices and guides shown there, and the quick package creation cookiecutter. And finally it demos the Pyodide WebAssembly app embedded into the Scikit-HEP developer pages!
Talk at PyCon2022 over building binary packages for Python. Covers an overview and an in-depth look into pybind11 for binding, scikit-build for creating the build, and build & cibuildwheel for making the binaries that can be distributed on PyPI.
Digital RSE: automated code quality checks - RSE group meeting - Henry Schreiner
Given at a local RSE group meeting. Covers code quality practices, focusing on Python but over multiple languages, with useful tools highlighted throughout.
CMake best practices
1. CMake: Best Practices
Henry Schreiner
Software & Computing Roundtable 2021-2-2
Links:
The book: cliutils.gitlab.io/modern-cmake
My blog: iscinumpy.gitlab.io
The workshop: hsf-training.github.io/hsf-training-cmake-webpage
This talk: gitlab.com/CLIUtils/modern-cmake-interactive-talk
6. Features:
Understands when to build/rebuild
Doesn't understand how to build
Generic; can be used for anything
Examples
make : Classic, custom syntax
rake : Ruby make
ninja : Google's entry, not designed to be written by hand
7. Build system generator
Understands the files you are building
System independent
You give relationships
Can find libraries, etc
CMake is two-stage; the configuration step runs CMake, and the build step runs a build system ( make , ninja , an IDE, etc).
Aside: Modern CMake can run the install step directly without invoking the build system again.
8. Build system generator example (CMake):
# 01-rake/CMakeLists.txt
cmake_minimum_required(VERSION 3.11)
project(HelloWorld)
add_executable(hello_world hello_world.c)
10. C/C++ Examples
cmake : Cross-platform Make (also Fortran, CUDA, C#, Swift)
scons : Software Carpentry Construction (Python)
meson : Newer Python entry
bazel : Google's build system
Other languages often have their own build system (Rust, Go, Python, ...)
We will follow common convention and call these "build-systems" for short from now on
11. Why CMake?
It has become a standard
Approximately all IDEs support it
Many libraries have built-in support
Integrates with almost everything
12. Custom Buildsystems are going away
for CMake (note: C++17 only too)
Boost is starting to support CMake at a reasonable level along with BJam
Standout: Google is dual supporting Bazel and CMake
Qt 6 dropped QMake
13. Recent highlights
Thrust just received a major CMake overhaul
TBB / Intel PSTL nicely support CMake in recent years
Pybind11's CMake support was ramped up in 2.6
14. (More) Modern CMake
CMake is a new language to learn (and is a bit odd)
Classic CMake (CMake 2.8, from 2009) was ugly and had problems, but that's not Modern CMake!
15. Modern CMake
CMake 3.0 in 2014: Modern CMake begins
CMake 3.1-3.4 had important additions/fixes
CMake 3.12 in mid 2018 completed the "More Modern CMake" phase
Current CMake is 3.19 (3.20 in rc2 phase)
Eras of CMake
Classic CMake: Directory based
Modern CMake: Target based
More Modern CMake: Unified target behavior
CURRENT: Powerful CLI
16. Best Practice: Minimum Version is important!
CMake has a (as far as I know) unique version system.
If a file starts with this:
cmake_minimum_required(VERSION 3.0)
then CMake will set all policies (which cover all behavior changes) to their 3.0 settings. In theory this means it is extremely backward compatible: upgrading CMake will not break or change anything at all. In practice, all the behavior changes had very good reasons to change, so this will be much buggier and less useful than if you set it higher!
17. You can also do this:
cmake_minimum_required(VERSION 3.4...3.14)
Then:
CMake < 3.4 will be an error
CMake 3.4 -- 3.11 will set 3.4 policies (the feature was introduced in 3.12, but the syntax is valid)
CMake 3.12 -- 3.14 will set current policies
CMake 3.15+ will set 3.14 policies
18. Don't: set this low without a range - it will harm users. Setting < 3.9 will break IPO, for example, often in a way that can't be fixed by superprojects.
Do: set the highest minimum you can (build systems are hard/ugly enough as it is)
Do: test with the lowest version you support in at least one job.
Don't: expect a CMake version significantly older than your compiler to work with it (especially macOS/Windows, or CUDA).
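On CMake older than 3.12, the range syntax sets policies to the lower bound only, so a small guard is commonly added to re-apply policies up to the running version; a sketch of that idiom:

```cmake
cmake_minimum_required(VERSION 3.4...3.14)

# CMake < 3.12 does not understand the "..." range and only applies the 3.4
# policies, so explicitly set policies up to the currently running version.
if(${CMAKE_VERSION} VERSION_LESS 3.12)
    cmake_policy(VERSION ${CMAKE_MAJOR_VERSION}.${CMAKE_MINOR_VERSION})
endif()
```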
19. What minimum to choose - OS support:
3.4: The bare minimum. Never set less.
3.7: Debian old-stable.
3.10: Ubuntu 18.04.
3.11: CentOS 8 (use EPEL or AppStreams, though)
3.13: Debian stable.
3.16: Ubuntu 20.04.
3.18: pip
3.19: conda-forge/Chocolatey/direct download, etc. First to support Apple Silicon.
What minimum to choose - Features:
3.8: C++ meta features, CUDA, lots more
3.11: IMPORTED INTERFACE setting, faster, FetchContent, COMPILE_LANGUAGE in IDEs
3.12: C++20, cmake --build build -j N , SHELL: , FindPython
3.14/3.15: CLI, FindPython updates
3.16: Unity builds / precompiled headers, CUDA meta features
3.17/3.18: Lots more CUDA, metaprogramming
20. Best Practice: Running CMake
The classic method:
mkdir build
cd build
cmake ..
make
The modern method is cleaner and more cross-platform-friendly:
cmake -S . -B build
cmake --build build
The cmake --build CLI (CMake 3.14/3.15) supports -v (verbose), -j N (threads), -t target , and more
23. Properties include:
Header include directories
Compile flags and definitions
Link flags
C++ standard and/or compiler features required
Linked libraries
Note that other things have properties too, such as source files (for example LANGUAGE),
directories, and the global scope.
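The properties listed above are set with the target_* family of commands. As a short sketch, assuming a hypothetical mylib and myapp:

```cmake
# Hypothetical targets; a sketch of setting properties via target_* commands
add_library(mylib STATIC mylib.cpp)
target_include_directories(mylib PUBLIC include)   # header include directories
target_compile_definitions(mylib PUBLIC MYLIB_ON)  # compile definitions
target_compile_features(mylib PUBLIC cxx_std_11)   # compiler features required

add_executable(myapp main.cpp)
target_link_libraries(myapp PRIVATE mylib)         # linked libraries (and usage requirements)
```

PUBLIC properties propagate to consumers like myapp; PRIVATE ones apply only to the target itself.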
24. Tips for packaging
Do: use targets.
Don't: use common names for targets if you want to be used in subdirectory mode.
Don't: write a FindPackage for your own package. Always provide a PackageConfig
instead.
Export your targets to create a PackageTargets.cmake file. You can write a
PackageConfig.cmake.in; there you can recreate / reimport targets (generally, put that
in a shared X.cmake file instead of writing it twice).
Don't: hardcode any system/compiler/config details into exported targets. Put those only
in shared code included from the Config file, or use generator expressions.
Never allow your package to be distributed without the config files!
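A minimal sketch of exporting targets, for a hypothetical "MyLib" package (names and destinations are examples, not from the talk):

```cmake
# Install the library and record it in an export set
install(TARGETS mylib EXPORT MyLibTargets
        LIBRARY DESTINATION lib
        ARCHIVE DESTINATION lib
        RUNTIME DESTINATION bin
        INCLUDES DESTINATION include)

# Generate MyLibTargets.cmake with namespaced imported targets (MyLib::mylib)
install(EXPORT MyLibTargets
        FILE MyLibTargets.cmake
        NAMESPACE MyLib::
        DESTINATION lib/cmake/MyLib)

# A MyLibConfig.cmake (possibly from a .cmake.in template) would then
# include MyLibTargets.cmake and find any dependencies.
```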
25. Using other projects
1. See if the project provides a <name>Config.cmake file. If it does, you are good to
go!
2. If not, see if CMake has a built-in Find<name>.cmake for it. If it does, you are good to
go!
3. If not, see if the authors provide a Find<name>.cmake file. If they do, complain
loudly that they don't follow CMake best practices.
4. Write your own Find<name>.cmake and include it in a helper directory. You'll need
to build IMPORTED targets and all that.
5. You can also use FindPkgConfig to piggyback on classic pkg-config.
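Case 1 is the easy one. As a sketch, using fmt (which ships a Config file) and a hypothetical myapp target:

```cmake
# Consuming a package that ships a <name>Config.cmake
find_package(fmt 6.0 REQUIRED)                  # locates fmtConfig.cmake
target_link_libraries(myapp PRIVATE fmt::fmt)   # namespaced imported target
```

The namespaced target carries its include directories, definitions, and link requirements with it; no manual variable juggling needed.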
26. Best Practice: Handling remote dependencies
CMake can download your dependencies for you. It supports composable
(sub-)projects: one project can include another.
Subprojects do not have namespaces, which can cause target collisions!
Build-time downloads and builds: ExternalProject (classic)
Configure-time downloads: FetchContent (new in 3.11+)
You can also use git submodules (one of my favorite methods, but use with care)
You can also use Conan.io's CMake integration
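A minimal FetchContent sketch, using fmt as an example dependency (the tag is an example):

```cmake
include(FetchContent)

FetchContent_Declare(
  fmt
  GIT_REPOSITORY https://github.com/fmtlib/fmt.git
  GIT_TAG 7.1.3)

# 3.14+: downloads (at configure time) and add_subdirectory's the project
FetchContent_MakeAvailable(fmt)

target_link_libraries(myapp PRIVATE fmt::fmt)
```

On 3.11--3.13 you would instead call FetchContent_GetProperties / FetchContent_Populate and add_subdirectory manually.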
31. Conan.io's conan-cmake
Conan.io has a nice CMake integration tool. It should support binaries too, since Conan.io
supports them! You must have conan installed, though - I'm using conan from conda-forge.
It works with old versions of CMake, as well.

cmake_minimum_required(VERSION 3.14)
project(HelloWorld LANGUAGES CXX)

add_executable(hello_world hello_fmt.cpp)
target_link_libraries(hello_world PRIVATE CONAN_PKG::fmt)
target_compile_features(hello_world PRIVATE cxx_std_11)

33. You can also make a conanfile.txt and manage all your dependencies there.
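The CONAN_PKG::fmt target above comes from the cmake-conan wrapper. A sketch of the middle part that the example elides (the cmake-conan version and fmt version are assumptions):

```cmake
# Download the cmake-conan integration script into the build tree
if(NOT EXISTS "${CMAKE_BINARY_DIR}/conan.cmake")
  file(DOWNLOAD "https://raw.githubusercontent.com/conan-io/cmake-conan/v0.16.1/conan.cmake"
       "${CMAKE_BINARY_DIR}/conan.cmake")
endif()
include(${CMAKE_BINARY_DIR}/conan.cmake)

# Run conan: BASIC_SETUP + CMAKE_TARGETS produces CONAN_PKG::* targets
conan_cmake_run(REQUIRES fmt/6.1.2
                BASIC_SETUP CMAKE_TARGETS
                BUILD missing)
```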
35. Best Practice: Use IMPORTED targets for things you don't
build
Now (3.11+) they can be set up with standard target commands!
They can be made global with IMPORTED_GLOBAL.
You'll need to set them back up in your Config file (place in one common location for
both CMakeLists and Config).
add_library(ExternLib INTERFACE IMPORTED)

# Classic
set_property(
  TARGET ExternLib
  APPEND
  PROPERTY INTERFACE_INCLUDE_DIRECTORIES /my/inc
)

# Modern
target_include_directories(ExternLib INTERFACE /my/inc)
36. Best Practice: Use CUDA as a language
CUDA is now a first-class language in CMake! (3.9+) This replaces FindCUDA.
project(MY_PROJECT LANGUAGES CUDA CXX) # Super project might need matching? (3.14 at least)
# Or for optional CUDA support
project(MY_PROJECT LANGUAGES CXX)
include(CheckLanguage)
check_language(CUDA)
if(CMAKE_CUDA_COMPILER)
enable_language(CUDA)
endif()
37. Much like you can set C++ standards, you can set CUDA standards too:
if(NOT DEFINED CMAKE_CUDA_STANDARD)
set(CMAKE_CUDA_STANDARD 11) # Probably should be cached!
set(CMAKE_CUDA_STANDARD_REQUIRED ON)
endif()
You can add files with .cu extensions and they compile with nvcc. (You can always set the
LANGUAGE property on a file, too). Separable compilation is a property:
set_target_properties(mylib PROPERTIES
                      CUDA_SEPARABLE_COMPILATION ON)
39. New for CUDA in CMake 3.17 and 3.18!
A new CUDA_ARCHITECTURES property
You can now use Clang as a CUDA compiler
CUDA_RUNTIME_LIBRARY can be set to shared
FindCUDAToolkit is finally available
I rewrote the CUDA versions support in 3.20 to be more maintainable and more
accurate.
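A sketch of the two new per-target settings mentioned above (the architecture values are examples, not recommendations):

```cmake
# CMake 3.18+: list the real/virtual architectures to compile for
set_target_properties(mylib PROPERTIES CUDA_ARCHITECTURES "50;60;70")

# CMake 3.17+: pick the CUDA runtime library (Static is the usual default)
set_target_properties(mylib PROPERTIES CUDA_RUNTIME_LIBRARY Shared)
```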
40. Best Practice: Check for Integrated Tools First
CMake has a lot of great tools. When you need something, see if it is built-in first!
Useful properties (with matching CMAKE_* variables, great from the command line):
INTERPROCEDURAL_OPTIMIZATION : Add IPO
POSITION_INDEPENDENT_CODE : Add -fPIC
<LANG>_COMPILER_LAUNCHER : Add ccache
<LANG>_CLANG_TIDY
<LANG>_CPPCHECK
<LANG>_CPPLINT
<LANG>_INCLUDE_WHAT_YOU_USE
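These can be set project-wide via the CMAKE_* variables, either in CMakeLists or as -D flags on the command line. A sketch (assumes ccache and clang-tidy are installed):

```cmake
# Equivalent to passing -DCMAKE_...=... on the command line
set(CMAKE_INTERPROCEDURAL_OPTIMIZATION ON)  # IPO / LTO (check support first)
set(CMAKE_POSITION_INDEPENDENT_CODE ON)     # -fPIC
set(CMAKE_CXX_COMPILER_LAUNCHER ccache)     # wrap the compiler with ccache
set(CMAKE_CXX_CLANG_TIDY clang-tidy)        # run clang-tidy on each translation unit
```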
41. Useful modules:
CheckIPOSupported : See if IPO is supported by your compiler
CMakeDependentOption : Make one option depend on another
CMakePrintHelpers : Handy debug printing
FeatureSummary : Record or printout enabled features and found packages
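A short sketch combining a few of these modules (the option names are made up for illustration):

```cmake
# See if IPO works with this compiler before enabling it
include(CheckIPOSupported)
check_ipo_supported(RESULT ipo_supported OUTPUT ipo_error)

# Handy debug printing of variables
include(CMakePrintHelpers)
cmake_print_variables(ipo_supported)

# BUILD_EXTRA_TESTS is only offered when BUILD_TESTING is ON; otherwise forced OFF
include(CMakeDependentOption)
cmake_dependent_option(BUILD_EXTRA_TESTS "Extra tests" ON "BUILD_TESTING" OFF)
```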
42. Using clang-tidy in GitHub Actions, example .github/workflows/format.yml:

on:
  pull_request:
  push:

jobs:
  clang-tidy:
    runs-on: ubuntu-latest
    container: silkeh/clang:10
    steps:
      - uses: actions/checkout@v2
      - name: Configure
        run: cmake -S . -B build -DCMAKE_CXX_CLANG_TIDY="$(which clang-tidy);--warnings-as-errors=*"
      - name: Run
        run: cmake --build build -j 2
43. Best Practice: Use CMake-Format
There is a CMake formatting tool now too, called cmake-format! You should use it to
keep your CMake code from becoming messy and to keep it easy to merge.
Since you should always use pre-commit for formatting and style checking, here's the
.pre-commit-config.yaml:

# CMake formatting
- repo: https://github.com/cheshirekow/cmake-format-precommit
  rev: v0.6.13
  hooks:
    - id: cmake-format
      additional_dependencies: [pyyaml]
      types: [file]
      files: (.cmake|CMakeLists.txt)(.in)?$
44. And here's the GitHub Actions job:
jobs:
  pre-commit:
    name: Format
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
      - uses: pre-commit/action@v2.0.0
45. Compiler detection and flag checking
try_compile / try_run can tell you if a flag or a file works. However, first check the built-in modules:
CheckCXXCompilerFlag
CheckIncludeFileCXX
CheckStructHasMember
TestBigEndian
CheckTypeSize
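A sketch using two of these modules (mytarget and the flag choice are examples):

```cmake
# Probe a compiler flag; the result is cached in the given variable
include(CheckCXXCompilerFlag)
check_cxx_compiler_flag("-march=native" HAVE_MARCH_NATIVE)
if(HAVE_MARCH_NATIVE)
  target_compile_options(mytarget PRIVATE -march=native)
endif()

# Probe for a header
include(CheckIncludeFileCXX)
check_include_file_cxx(filesystem HAVE_FILESYSTEM)
```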
46. Best Bad Practice: WriteCompilerDetectionHeader will write out C/C++
macros for your compiler for you!
This module was removed in CMake 3.20, so using it pins you below 3.20 - often in a way
superprojects can't fix. Better to find another tool, or generate the header once and keep
the generated copy.
49. Best Practice: Use FindPython and the newest possible
CMake (3.18.2+ best)
FindPython is an exciting new way to discover Python.
venv/conda ready.
Handles multiple runs (using a unique caching system).
Not very usable until 3.15; not very usable for PyPy until 3.18.2+ and PyPy 7.3.2 (about
a week old at the time of this talk).
But possibly vendorable back to 3.7+.
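A sketch of FindPython in use (the module target and source file are hypothetical):

```cmake
# Finds a matching interpreter + development files together (venv/conda aware)
find_package(Python COMPONENTS Interpreter Development REQUIRED)

message(STATUS "Found Python ${Python_VERSION} at ${Python_EXECUTABLE}")

# Helper from FindPython for building an extension module
Python_add_library(mymodule MODULE mymodule.c)
```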
50. Scikit-build and CMake wheels
Scikit-build is an adaptor for setuptools (not a true PEP 517 builder). Combined with
pyproject.toml and Pip 10, it can be really useful, though!
Still not quite as reliable as setuptools (distutils) in non-standard setups
Doesn't work well without CMake wheels (but you can bypass PEP 518 if CMake is
already installed)
A little rough around some corners
Development a bit stuck.
51. Further investigation:
If we have some spare time, I can walk you through the CMake systems I've helped design:
github.com/pybind/pybind11 (dual CMake / setuptools)
github.com/pybind11/scikit_build_example
github.com/cliutils/cli11
github.com/scikit-hep/boost-histogram
github.com/goofit/goofit (CUDA and Scikit-Build)
I highly recommend my course (hsf-training.github.io/hsf-training-cmake-webpage) and my
book (cliutils.gitlab.io/modern-cmake) on CMake. Everything is linked from my blog,
iscinumpy.gitlab.io.
Also, CMake's documentation is fantastic; if you already know what you are looking for, it is
some of the best out there. The "new in" directives were finally added in the 3.20
documentation - no more scrolling through old versions!