RHEL 7 will use systemd as its init system, replacing upstart. Systemd is more than just an init system replacement - it is a system and service manager that provides features like dependency tracking, process supervision, on-demand starting of services, and a lightweight boot process. It introduces new unit file types to define system components and their relationships. Services can be customized by editing unit files and using systemctl commands.
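As a hedged sketch of such customization (the service name `myapp` and its binary path are hypothetical; the section and option names are standard systemd unit syntax), a minimal unit file might look like:

```ini
# /etc/systemd/system/myapp.service -- hypothetical example unit
[Unit]
Description=Example application service
After=network.target

[Service]
ExecStart=/usr/bin/myapp --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After editing a unit file, `systemctl daemon-reload` tells systemd to re-read its configuration; `systemctl enable myapp.service` followed by `systemctl start myapp.service` then activates it.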
Systemd: the modern Linux init system you will learn to love – Alison Chaiken
The talk combines a design overview of systemd with some tutorial information about how to configure it. Systemd's features and pitfalls are illustrated by short demos and real-life examples. Files used in the demos are listed under "Presentations" at http://she-devel.com/
Video of the live presentation will appear here:
http://www.meetup.com/Silicon-Valley-Linux-Technology/events/208133972/
While probably the most prominent, Docker is not the only tool for building and managing containers. Originally meant to be a "chroot on steroids" to help debug systemd, systemd-nspawn provides a fairly uncomplicated approach to working with containers. Being part of systemd, it is available on most recent distributions out-of-the-box and requires no additional dependencies.
This deck will introduce a few concepts involved in containers and will guide you through the steps of building a container from scratch. The payload will be a simple service, which will be automatically activated by systemd when the first request arrives.
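As an illustrative sketch of such socket activation (unit names, port, and binary path are hypothetical; the directives are standard systemd syntax), the payload could be wired up with a socket/service pair:

```ini
# /etc/systemd/system/echo.socket -- systemd listens here until the first request
[Unit]
Description=Listening socket for the demo payload

[Socket]
ListenStream=7777

[Install]
WantedBy=sockets.target

# /etc/systemd/system/echo.service -- started on the first incoming connection
[Unit]
Description=Demo payload service

[Service]
ExecStart=/usr/local/bin/echo-server
```

Enabling only `echo.socket` inside the container means the service itself is not started at boot; systemd launches it on demand when the first client connects.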
A short overview of current technologies plucked from the Texas Linux Fest schedule for 2014. Includes overviews of systemd, popular configuration management tools, docker, distributed log collection, and openstack.
Talk from Embedded Linux Conference, http://elcabs2015.sched.org/event/551ba3cdefe2d37c478810ef47d4ca4c?iframe=no&w=i:0;&sidebar=yes&bg=no#.VRUCknSQQQs
The latest releases of today’s popular Linux distributions include all the tools needed to do interesting things with Linux containers.
For the Makefile MicroVPS project, I set out to build a minimal virtual private server-like environment in a Linux container from scratch.
These are my requirements for the MicroVPS:
Minimal init sequence
Most of what happens in a rc.sysinit file is not needed (or wanted) in a container. However, to work like a virtual private server, the MicroVPS will need some kind of init system. The absolute minimum would be enough to start the network and at least one service.
Native network namespace
The MicroVPS will have a dedicated network namespace. It should be easy to configure.
Native package management
The package set installed in the container image will be managed using native tools like deb or rpm.
Automated build
An automated repeatable build process is a must.
Fast iteration cycle
The building and testing cycle must be fast enough not to drive me insane.
Easy management
It should be easy to distribute, monitor, and run a MicroVPS container.
In this tutorial, I will show how to use the tools included with Linux to build a virtual private server in a Linux container from scratch, using GNU Make to automate the build process.
Kernel Recipes 2015 - Porting Linux to a new processor architecture – Anne Nicolas
Getting the Linux kernel running on a new processor architecture is a difficult process. Worse still, there is not much documentation available describing the porting process.
After spending countless hours becoming almost fluent in many of the supported architectures, I discovered that a well-defined skeleton shared by the majority of ports exists. Such a skeleton can logically be split into two parts that intersect a great deal.
The first part is the boot code, meaning the architecture-specific code that is executed from the moment the kernel takes over from the bootloader until init is finally executed. The second part concerns the architecture-specific code that is regularly executed once the booting phase has been completed and the kernel is running normally. This second part includes starting new threads, dealing with hardware interrupts or software exceptions, copying data from/to user applications, serving system calls, and so on.
In this talk I will provide an overview of the procedure, or at least one possible procedure, that can be followed when porting the Linux kernel to a new processor architecture.
Joël Porquet – Joël was a post-doc at Pierre and Marie Curie University (UPMC) where he ported Linux to TSAR, an academic processor. He is now looking for new adventures.
Kernel Recipes 2015 - Kernel dump analysis – Anne Nicolas
Kernel dump analysis
Cloud this, cloud that… It's making everything easier, especially for web-hosted services. But what about the servers that are not supposed to crash? For applications built on the assumption that the OS won't fault or go down, what can you write in your post-mortem once the server has frozen and been restarted? How do you track down the bug that led to the service unavailability?
In this talk, we'll see how to set up kdump and how to panic a server to generate a core dump. Once you have the vmcore file, we'll see how to track down the issue with the "crash" tool and find out why your OS went down. Last but not least: with "crash" you can also modify your live kernel, the same way you would with gdb.
Adrien Mahieux – System administrator obsessed with performance and uptime, tracking down microseconds from hardware to software since 2011. The application must be seen as a whole to efficiently provide the requested service, which includes searching for bottlenecks and tradeoffs, design issues, and hardware optimizations.
Systemd is in all the major distributions nowadays, and there are many ways you can take advantage of it. It provides an easy way to manage your system and your services, and it interacts closely with kernel features added in recent years, such as cgroups. This talk will show you how to get the added value of systemd and easily do many things that were complicated in the past.
Learn Red Hat Enterprise Linux 7.1 for IBM z Systems by Examples. This session shows what's new in the installation method, systemd management, rescue mode and how to use the automatic LUN scanning for NPIV FCP devices.
Nagios Conference 2014 - Eric Mislivec - Getting Started With Nagios Core – Nagios
Eric Mislivec's presentation on getting started with Nagios Core. The presentation was given during the Nagios World Conference North America held Oct 13th - Oct 16th, 2014 in Saint Paul, MN. For more information on the conference (including photos and videos), visit: http://go.nagios.com/conference.
Linux Server Deep Dives (DrupalCon Amsterdam) – Amin Astaneh
Over the past few years the Linux kernel has gained features that allow us to learn more about what's really happening on our servers and the applications that run on them.
This talk will explore how these new features, particularly perf_events and ebpf, enable us to answer questions about what a Drupal site is doing in real time beyond what the standard logs, server performance tools, and even strace will reveal. Attendees will be provided a brief introduction to example uses of these tools to diagnose performance problems.
This talk is intended for attendees who are familiar with Linux and the command line and who have used host observability tools in the past (top, netstat, etc.).
Cloud Firewall (CFW) Logging also known as RFD 163 is a feature where we will start logging specific kinds of firewall records in a manner that doesn’t require as many per compute node resources.
This logging will allow us to pay attention to inbound packets that are dropped. We want to record new TCP connections or connectionless UDP sessions in a manner that fits in nicely and is "aggregatable" into a proper Triton deployment. To activate this, a user has to opt into logging by marking a firewall rule with the "log" attribute.
A talk presented at the Automotive Grade Linux All-Members meeting on September 8, 2015. The focus is on why AGL should adopt systemd, and the talk highlights two of the more difficult integration issues that may arise while doing so. The embedded SVG image, courtesy of Marko Hoyer of ADIT, is at http://she-devel.com/2015-07-23_amm_demo.svg
Workflow story: Theory versus Practice in large enterprises by Marcin Piebiak – NETWAYS
An uphill battle against large enterprise IT environments and IT corporate culture: how those difficulties turned into opportunities and clever implementations, with interesting modules, integrations, and workflow pieces.
Domino 10 is due in October and if you want to take advantage of all the new feature goodness, you need to know Linux. Domino for Docker is a Linux only release, for example, and now IBM supports RHEL 7 natively with all three V10 server products. Staying current here is critical: RHEL 7 is not the usual Linux upgrade. Join BillMal Your Linux Pal to cover as much RHEL 7 info as he can in an hour: Domino 10 status, systemd, journald, administration, and upgrade tips. This is a very technical session. Let's get ready for October!
Pluggable Infrastructure with CI/CD and Docker – Bob Killen
The docker cluster ecosystem is still young, and highly modular. This presentation covers some of the challenges we faced deciding on what infrastructure to deploy, and a few tips and tricks in making both applications and infrastructure easily adaptable.
Accelerate Enterprise Software Engineering with PlatformlessWSO2
Key takeaways:
Challenges of building platforms and the benefits of platformless.
Key principles of platformless, including API-first, cloud-native middleware, platform engineering, and developer experience.
How Choreo enables the platformless experience.
How key concepts like application architecture, domain-driven design, zero trust, and cell-based architecture are inherently a part of Choreo.
Demo of an end-to-end app built and deployed on Choreo.
Large Language Models and the End of Programming – Matt Welsh
Talk by Matt Welsh at Craft Conference 2024 on the impact that Large Language Models will have on the future of software development. In this talk, I discuss the ways in which LLMs will impact the software industry, from replacing human software developers with AI, to replacing conventional software with models that perform reasoning, computation, and problem-solving.
Climate Science Flows: Enabling Petabyte-Scale Climate Analysis with the Eart... – Globus
The Earth System Grid Federation (ESGF) is a global network of data servers that archives and distributes the planet’s largest collection of Earth system model output for thousands of climate and environmental scientists worldwide. Many of these petabyte-scale data archives are located in proximity to large high-performance computing (HPC) or cloud computing resources, but the primary workflow for data users consists of transferring data, and applying computations on a different system. As a part of the ESGF 2.0 US project (funded by the United States Department of Energy Office of Science), we developed pre-defined data workflows, which can be run on-demand, capable of applying many data reduction and data analysis to the large ESGF data archives, transferring only the resultant analysis (ex. visualizations, smaller data files). In this talk, we will showcase a few of these workflows, highlighting how Globus Flows can be used for petabyte-scale climate analysis.
Listen to the keynote address and hear about the latest developments from Rachana Ananthakrishnan and Ian Foster who review the updates to the Globus Platform and Service, and the relevance of Globus to the scientific community as an automation platform to accelerate scientific discovery.
May Marketo Masterclass, London MUG May 22 2024.pdf – Adele Miller
Can't make Adobe Summit in Vegas? No sweat because the EMEA Marketo Engage Champions are coming to London to share their Summit sessions, insights and more!
This is a MUG with a twist you don't want to miss.
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus... – Globus
Large Language Models (LLMs) are currently the center of attention in the tech world, particularly for their potential to advance research. In this presentation, we'll explore a straightforward and effective method for quickly initiating inference runs on supercomputers using the vLLM tool with Globus Compute, specifically on the Polaris system at ALCF. We'll begin by briefly discussing the popularity and applications of LLMs in various fields. Following this, we will introduce the vLLM tool, and explain how it integrates with Globus Compute to efficiently manage LLM operations on Polaris. Attendees will learn the practical aspects of setting up and remotely triggering LLMs from local machines, focusing on ease of use and efficiency. This talk is ideal for researchers and practitioners looking to leverage the power of LLMs in their work, offering a clear guide to harnessing supercomputing resources for quick and effective LLM inference.
First Steps with Globus Compute Multi-User Endpoints – Globus
In this presentation we will share our experiences around getting started with the Globus Compute multi-user endpoint. Working with the Pharmacology group at the University of Auckland, we have previously written an application using Globus Compute that can offload computationally expensive steps in the researcher's workflows, which they wish to manage from their familiar Windows environments, onto the NeSI (New Zealand eScience Infrastructure) cluster. Some of the challenges we have encountered were that each researcher had to set up and manage their own single-user globus compute endpoint and that the workloads had varying resource requirements (CPUs, memory and wall time) between different runs. We hope that the multi-user endpoint will help to address these challenges and share an update on our progress here.
TROUBLESHOOTING 9 TYPES OF OUTOFMEMORYERROR – Tier1 app
Even though at surface level 'java.lang.OutOfMemoryError' appears as one single error, there are actually 9 underlying types of OutOfMemoryError. Each type has different causes, diagnosis approaches, and solutions. This session equips you with the knowledge, tools, and techniques needed to troubleshoot and conquer OutOfMemoryError in all its forms, ensuring smoother, more efficient Java applications.
Check out the webinar slides to learn more about how XfilesPro transforms Salesforce document management by leveraging its world-class applications. For more details, please connect with sales@xfilespro.com
If you want to watch the on-demand webinar, please click here: https://www.xfilespro.com/webinars/salesforce-document-management-2-0-smarter-faster-better/
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc and I ... – Juraj Vysvader
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc. I didn't get rich from it, but my work did reach 63K downloads (possibly powering tens of thousands of websites).
Exploring Innovations in Data Repository Solutions - Insights from the U.S. G... – Globus
The U.S. Geological Survey (USGS) has made substantial investments in meeting evolving scientific, technical, and policy driven demands on storing, managing, and delivering data. As these demands continue to grow in complexity and scale, the USGS must continue to explore innovative solutions to improve its management, curation, sharing, delivering, and preservation approaches for large-scale research data. Supporting these needs, the USGS has partnered with the University of Chicago-Globus to research and develop advanced repository components and workflows leveraging its current investment in Globus. The primary outcome of this partnership includes the development of a prototype enterprise repository, driven by USGS Data Release requirements, through exploration and implementation of the entire suite of the Globus platform offerings, including Globus Flow, Globus Auth, Globus Transfer, and Globus Search. This presentation will provide insights into this research partnership, introduce the unique requirements and challenges being addressed and provide relevant project progress.
In software engineering, the right architecture is essential for robust, scalable platforms. Wix has undergone a pivotal shift from event sourcing to a CRUD-based model for its microservices. This talk will chart the course of this pivotal journey.
Event sourcing, which records state changes as immutable events, provided robust auditing and "time travel" debugging for Wix Stores' microservices. Despite its benefits, the complexity it introduced in state management slowed development. Wix responded by adopting a simpler, unified CRUD model. This talk will explore the challenges of event sourcing and the advantages of Wix's new "CRUD on steroids" approach, which streamlines API integration and domain event management while preserving data integrity and system resilience.
Participants will gain valuable insights into Wix's strategies for ensuring atomicity in database updates and event production, as well as caching, materialization, and performance optimization techniques within a distributed system.
Join us to discover how Wix has mastered the art of balancing simplicity and extensibility, and learn how the re-adoption of the modest CRUD has turbocharged their development velocity, resilience, and scalability in a high-growth environment.
Unleash Unlimited Potential with One-Time Purchase
BoxLang is more than just a language; it's a community. By choosing a Visionary License, you're not just investing in your success, you're actively contributing to the ongoing development and support of BoxLang.
Prosigns: Transforming Business with Tailored Technology Solutions – Prosigns
Unlocking Business Potential: Tailored Technology Solutions by Prosigns
Discover how Prosigns, a leading technology solutions provider, partners with businesses to drive innovation and success. Our presentation showcases our comprehensive range of services, including custom software development, web and mobile app development, AI & ML solutions, blockchain integration, DevOps services, and Microsoft Dynamics 365 support.
Custom Software Development: Prosigns specializes in creating bespoke software solutions that cater to your unique business needs. Our team of experts works closely with you to understand your requirements and deliver tailor-made software that enhances efficiency and drives growth.
Web and Mobile App Development: From responsive websites to intuitive mobile applications, Prosigns develops cutting-edge solutions that engage users and deliver seamless experiences across devices.
AI & ML Solutions: Harnessing the power of Artificial Intelligence and Machine Learning, Prosigns provides smart solutions that automate processes, provide valuable insights, and drive informed decision-making.
Blockchain Integration: Prosigns offers comprehensive blockchain solutions, including development, integration, and consulting services, enabling businesses to leverage blockchain technology for enhanced security, transparency, and efficiency.
DevOps Services: Prosigns' DevOps services streamline development and operations processes, ensuring faster and more reliable software delivery through automation and continuous integration.
Microsoft Dynamics 365 Support: Prosigns provides comprehensive support and maintenance services for Microsoft Dynamics 365, ensuring your system is always up-to-date, secure, and running smoothly.
Learn how our collaborative approach and dedication to excellence help businesses achieve their goals and stay ahead in today's digital landscape. From concept to deployment, Prosigns is your trusted partner for transforming ideas into reality and unlocking the full potential of your business.
Join us on a journey of innovation and growth. Let's partner for success with Prosigns.
How to Position Your Globus Data Portal for Success: Ten Good Practices – Globus
Science gateways allow science and engineering communities to access shared data, software, computing services, and instruments. Science gateways have gained a lot of traction in the last twenty years, as evidenced by projects such as the Science Gateways Community Institute (SGCI) and the Center of Excellence on Science Gateways (SGX3) in the US, The Australian Research Data Commons (ARDC) and its platforms in Australia, and the projects around Virtual Research Environments in Europe. A few mature frameworks have evolved with their different strengths and foci and have been taken up by a larger community such as the Globus Data Portal, Hubzero, Tapis, and Galaxy. However, even when gateways are built on successful frameworks, they continue to face the challenges of ongoing maintenance costs and how to meet the ever-expanding needs of the community they serve with enhanced features. It is not uncommon that gateways with compelling use cases are nonetheless unable to get past the prototype phase and become a full production service, or if they do, they don't survive more than a couple of years. While there is no guaranteed pathway to success, it seems likely that for any gateway there is a need for a strong community and/or solid funding streams to create and sustain its success. With over twenty years of examples to draw from, this presentation goes into detail for ten factors common to successful and enduring gateways that effectively serve as best practices for any new or developing gateway.
Field Employee Tracking System | MiTrack App | Best Employee Tracking Solution |... – informapgpstrackings
Keep tabs on your field staff effortlessly with Informap Technology Centre LLC. Real-time tracking, task assignment, and smart features for efficient management. Request a live demo today!
For more details, visit us : https://informapuae.com/field-staff-tracking/
Quarkus Hidden and Forbidden Extensions – Max Andersen
Quarkus has a vast extension ecosystem and is known for its subsonic and subatomic feature set. Some of these features are not as well known, and some extensions are less talked about, but that does not make them less interesting - quite the opposite.
Come join this talk to see some tips and tricks for using Quarkus and some of the lesser known features, extensions and development techniques.
Enhancing Project Management Efficiency: Leveraging AI Tools like ChatGPT.pdf – Jay Das
With the advent of artificial intelligence (AI) tools, project management processes are undergoing a transformative shift. By using tools like ChatGPT and Bard, organizations can empower their leaders and managers to plan, execute, and monitor projects more effectively.
top nidhi software solution free download – vrstrong314
This presentation emphasizes the importance of data security and legal compliance for Nidhi companies in India. It highlights how online Nidhi software solutions, like Vector Nidhi Software, offer advanced features tailored to these needs. Key aspects include encryption, access controls, and audit trails to ensure data security. The software complies with regulatory guidelines from the MCA and RBI and adheres to Nidhi Rules, 2014. With customizable, user-friendly interfaces and real-time features, these Nidhi software solutions enhance efficiency, support growth, and provide exceptional member services. The presentation concludes with contact information for further inquiries.
1. RHEL 7 Update
systemd
July 2014
1 RED HAT | Ingo Börnig
2. OVERVIEW
● RHEL 7.0 will ship with systemd, a new init system that replaces
upstart.
● But systemd is more than a SysVinit/upstart replacement
● It is a system and service manager for Linux.
● It can work as a drop-in replacement for sysvinit.
● It replaces inetd and xinetd for most scenarios
# ps -p 1
PID TTY TIME CMD
1 ? 00:00:01 systemd
3. Key Concepts
● UNITS:
● Services, Sockets,
● Devices, Mounts, Automounts, Swaps
● Timers, Paths,
● Targets, Snapshots
● Slices
● Unit/Service Dependency Tracking
● Process tracking with Service information
4. Benefits
● Dependency tracking for units and processes
● No more (sleep 60; do something) loops
● Properly kill daemons
● Minimal boot times
● Debugging – no early boot messages are lost
● Easy to learn and backwards compatible.
● Autospawn and Respawn for Services
● Tight integration with cgroups, the default interface in the future
5. Systemd - Units
● Naming convention is: name.type
● httpd.service, sshd.socket, or dev-hugepages.mount
● Service – Describe a daemon's type, execution, environment,
and how it's monitored.
● Socket – Endpoint for interprocess communication. File,
network, or Unix sockets.
● Target – Logical grouping of units. Replacement for runlevels.
● Device – Automatically created by the kernel. Can be provided
to services as dependents.
● Mounts, automounts, swap – Monitor the mounting/unmounting
of file systems.
6. Systemd - Units
● Snapshots – save the state of units – useful for testing
● Timers – Timer-based activation
● Paths – Uses inotify to monitor a path
● Slices – cgroup hierarchy for resource management.
● Scopes – Organizational units that group services' worker processes.
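A hedged sketch of timer-based activation (unit names and the script path are hypothetical; `OnCalendar=` and `Persistent=` are standard timer options):

```ini
# backup.timer -- hypothetical timer unit
[Unit]
Description=Run the backup once a day

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

# backup.service -- the matching service the timer activates
[Unit]
Description=Backup job

[Service]
Type=oneshot
ExecStart=/usr/local/bin/run-backup
```

Enabling `backup.timer` (rather than the service) schedules the job; with Persistent=true, a run missed while the machine was off is triggered at the next boot.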
7. Systemd – Dependency Resolution
● Example:
● Wait for block device
● Check file system for device
● Mount file system
● nfs-lock.service:
● Requires=rpcbind.service network.target
● After=network.target named.service rpcbind.service
● Before=remote-fs-pre.target
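The wait-for-device/check/mount chain above can be sketched with a native mount unit (device label and mount point are hypothetical; note that the unit name must be derived from the mount point, so /data becomes data.mount):

```ini
# /etc/systemd/system/data.mount -- hypothetical mount unit
[Unit]
Description=Data volume

[Mount]
What=/dev/disk/by-label/data
Where=/data
Type=ext4

[Install]
WantedBy=multi-user.target
```

systemd automatically adds a dependency on the backing device unit, so the mount waits until the block device actually appears.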
8. What about my System-V init scripts?
● systemd maintains 99% backwards compatibility with initscripts
and the exceptions are well documented.
● While we do encourage everyone to convert legacy scripts to
service unit files, it's not a requirement.
● Hint: we'll show you how to do this in a few minutes.
● Incompatibilities are listed here:
http://www.freedesktop.org/wiki/Software/systemd/Incompatibilities/
● Converting SysV Init Scripts:
http://0pointer.de/blog/projects/systemd-for-admins-3.html
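Following the converting-SysV-scripts article linked above, a hedged sketch of such a conversion might look like this (daemon name, paths, and PID file are hypothetical):

```ini
# Replaces a traditional /etc/init.d/mydaemon script with start/stop/status logic.
# /etc/systemd/system/mydaemon.service
[Unit]
Description=My converted daemon
After=network.target

[Service]
Type=forking
ExecStart=/usr/sbin/mydaemon
PIDFile=/var/run/mydaemon.pid

[Install]
WantedBy=multi-user.target
```

With Type=forking, systemd expects the daemon to background itself, as traditional SysV daemons do; the status, restart, and kill logic of the old script is handled by systemd itself.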
9. Faster Boot times
● Lennart Poettering says that “Fast booting isn't the goal of
systemd, it's a result of a well designed system.”
● As virt/cloud demand continues, the desire for light-weight,
reliable/resilient, and fast images grows.
● A stripped down image can boot in ~2 seconds.
● Less CPU cycles burned during the boot process
● Important for highly dense and dynamic environments.
● Even more important for containers.
11. Managing Services - Unit Files
● Via Init:
● Init scripts are stored in /etc/init.d & called from /etc/rc*
● Via systemd:
● Maintainer files: /usr/lib/systemd/system/
● User modifications: /etc/systemd/system/
● Note: unit files under /etc/ will take precedence over /usr
12. Managing Services - Start/Stop
● Via Init:
● $ service httpd {start,stop,restart,reload}
● Via systemctl:
● $ systemctl {start,stop,restart,reload} httpd.service
● Notes:
● systemctl places the “action” before the service name.
● If a unit isn't specified, .service is assumed.
● systemctl start httpd == systemctl start httpd.service
● Tab completion works great with systemctl; install bash-completion
● systemctl can connect to remote hosts over SSH using “-H”
13. Managing Services - Status
● Via Init:
● $ service httpd status
● Via systemctl:
● $ systemctl status httpd.service
● List loaded services:
● systemctl -t service
● List installed services:
● systemctl list-unit-files -t service (similar to chkconfig --list)
● View state:
● systemctl --state failed
14. Managing Services - Enable/Disable
● Via Init:
● $ chkconfig httpd {on,off}
● Via systemctl:
● $ systemctl {enable, disable, mask, unmask} httpd.service
● mask – “This will link these units to /dev/null, making it
impossible to start them. This is a stronger version of disable,
since it prohibits all kinds of activation of the unit, including
manual activation. Use this option with care.”
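Under the hood, masking is nothing more than a symlink to /dev/null shadowing the unit name. A sketch that imitates it in a throwaway directory instead of /etc/systemd/system (on a real system `systemctl mask`/`unmask` manage the link for you):

```shell
#!/bin/sh
# Imitate what `systemctl mask httpd.service` does, in a demo directory
# so this is safe to run anywhere.
demo_dir=$(mktemp -d)

# mask: the unit name now resolves to /dev/null, so no unit can be loaded
ln -s /dev/null "$demo_dir/httpd.service"
readlink "$demo_dir/httpd.service"

# unmask: remove the link so the real unit file becomes visible again
rm "$demo_dir/httpd.service"
rmdir "$demo_dir"
```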
16. What Runlevels?
● Runlevels == Targets
● “Runlevels” are exposed via target units
● /etc/inittab is no longer used
● Target names are more relevant:
● multi-user.target vs. runlevel3
● graphical.target vs. runlevel5
● Set the default via: `systemctl enable graphical.target --force`
● Change at run-time via: `systemctl isolate [target]`
19. Customizing Service Unit Files
● Unit files can be altered or extended by placing “drop-ins” under:
/etc/systemd/system/foobar.service.d/*.conf
● Changes are applied on top of maintainer unit files.
# cat /etc/systemd/system/httpd.service.d/50-httpd.conf
[Service]
Restart=always
StartLimitInterval=10
StartLimitBurst=5
StartLimitAction=reboot
CPUShares=2048
Nice=10
OOMScoreAdjust=1000
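A drop-in like the one above takes only two commands to create. A sketch; the 50-restart.conf name is illustrative, and UNIT_DIR defaults to a local demo directory so it is safe to try — on a real system point it at /etc/systemd/system and finish with `systemctl daemon-reload`:

```shell
#!/bin/sh
# Target directory; /etc/systemd/system on a real machine.
UNIT_DIR="${UNIT_DIR:-./demo-units}"

mkdir -p "$UNIT_DIR/httpd.service.d"

# Drop-in fragments only need the keys you want to add or override.
cat > "$UNIT_DIR/httpd.service.d/50-restart.conf" <<'EOF'
[Service]
Restart=always
StartLimitInterval=10
StartLimitBurst=5
EOF

cat "$UNIT_DIR/httpd.service.d/50-restart.conf"
```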
20. Customizing Service Unit Files
● Run `systemctl daemon-reload` after making changes to notify
systemd
● Drop-ins will be shown from `systemctl status`
# systemctl status httpd.service
httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled)
  Drop-In: /etc/systemd/system/httpd.service.d
           └─50-httpd.conf
21. Customizing Service Unit Files - Tips!
● Changes to unit files under /usr/lib/systemd/system/ could be
overwritten by updates. DON'T DO IT!
● /etc service files will take precedence over /usr
● Simply delete the drop-in to revert to defaults. Don't forget to run
`systemctl daemon-reload`
● systemd-delta – will show what is overridden and extended
between /usr & /etc.
● man 5 systemd.service, man 5 systemd.exec
23. Making Cgroups Easier
● View cgroup hierarchy via systemd-cgls
● View usage stats via systemd-cgtop (use for tuning)
● Default hierarchy
● system.slice – contains system services
● user.slice – contains user sessions
● machine.slice – contains virtual machines and containers
● Services can be promoted to their own slice if necessary.
24. Resource Management – Configuration
● systemctl can configure and persist cgroup attributes
● systemctl set-property httpd.service CPUShares=2048
● Add --runtime to not persist the settings:
● systemctl set-property --runtime httpd.service CPUShares=2048
● Alternatively settings can be placed in unit files
● [Service]
● CPUShares=2048
26. Remember what an init-file looks like?
#!/bin/bash
#
# httpd Startup script for the Apache HTTP Server
#
# chkconfig: 85
15
# description: The Apache HTTP Server is an efficient and extensible
# server implementing the current HTTP standards.
# processname: httpd
# config: /etc/httpd/conf/httpd.conf
# config: /etc/sysconfig/httpd
# pidfile: /var/run/httpd/httpd.pid
#
### BEGIN INIT INFO
# Provides: httpd
# RequiredStart:
$local_fs $remote_fs $network $named
# RequiredStop:
$local_fs $remote_fs $network
# ShouldStart:
distcache
# ShortDescription:
start and stop Apache HTTP Server
# Description: The Apache HTTP Server is an extensible server
# implementing the current HTTP standards.
### END INIT INFO
# Source function library.
. /etc/rc.d/init.d/functions
if [ f
/etc/sysconfig/httpd ]; then
. /etc/sysconfig/httpd
fi
# Start httpd in the C locale by default.
HTTPD_LANG=${HTTPD_LANG"
C"}
# This will prevent initlog from swallowing up a passphrase
prompt if
# mod_ssl needs a passphrase
from the user.
INITLOG_ARGS=""
# Set HTTPD=/usr/sbin/httpd.worker in /etc/sysconfig/httpd to use a server
# with the threadbased
"worker" MPM; BE WARNED that some modules may not
# work correctly with a threadbased
MPM; notably PHP will refuse to start.
27.
# Path to the apachectl script, server binary, and short-form for messages.
apachectl=/usr/sbin/apachectl
httpd=${HTTPD-/usr/sbin/httpd}
prog=httpd
pidfile=${PIDFILE-/var/run/httpd/httpd.pid}
lockfile=${LOCKFILE-/var/lock/subsys/httpd}
RETVAL=0
STOP_TIMEOUT=${STOP_TIMEOUT-10}

# check for 1.3 configuration
check13 () {
	CONFFILE=/etc/httpd/conf/httpd.conf
	GONE="(ServerType|BindAddress|Port|AddModule|ClearModuleList|"
	GONE="${GONE}AgentLog|RefererLog|RefererIgnore|FancyIndexing|"
	GONE="${GONE}AccessConfig|ResourceConfig)"
	if LANG=C grep -Eiq "^[[:space:]]*($GONE)" $CONFFILE; then
		echo
		echo 1>&2 " Apache 1.3 configuration directives found"
		echo 1>&2 " please read /usr/share/doc/httpd-2.2.22/migration.html"
		failure "Apache 1.3 config directives test"
		echo
		exit 1
	fi
}

# The semantics of these two functions differ from the way apachectl does
# things -- attempting to start while running is a failure, and shutdown
# when not running is also a failure. So we just do it the way init scripts
# are expected to behave here.
start() {
	echo -n $"Starting $prog: "
	check13 || exit 1
	LANG=$HTTPD_LANG daemon --pidfile=${pidfile} $httpd $OPTIONS
	RETVAL=$?
	echo
	[ $RETVAL = 0 ] && touch ${lockfile}
	return $RETVAL
}
28.
# When stopping httpd, a delay (of default 10 second) is required
# before SIGKILLing the httpd parent; this gives enough time for the
# httpd parent to SIGKILL any errant children.
stop() {
	echo -n $"Stopping $prog: "
	killproc -p ${pidfile} -d ${STOP_TIMEOUT} $httpd
	RETVAL=$?
	echo
	[ $RETVAL = 0 ] && rm -f ${lockfile} ${pidfile}
}
reload() {
	echo -n $"Reloading $prog: "
	if ! LANG=$HTTPD_LANG $httpd $OPTIONS -t >&/dev/null; then
		RETVAL=6
		echo $"not reloading due to configuration syntax error"
		failure $"not reloading $httpd due to configuration syntax error"
	else
		# Force LSB behaviour from killproc
		LSB=1 killproc -p ${pidfile} $httpd -HUP
		RETVAL=$?
		if [ $RETVAL -eq 7 ]; then
			failure $"httpd shutdown"
		fi
	fi
	echo
}

# See how we were called.
case "$1" in
  start)
	start
	;;
  stop)
	stop
	;;
  status)
	status -p ${pidfile} $httpd
	RETVAL=$?
	;;
  restart)
	stop
	start
	;;
29.
  condrestart|try-restart)
	if status -p ${pidfile} $httpd >&/dev/null; then
		stop
		start
	fi
	;;
  force-reload|reload)
	reload
	;;
  graceful|help|configtest|fullstatus)
	$apachectl $@
	RETVAL=$?
	;;
  *)
	echo $"Usage: $prog {start|stop|restart|condrestart|try-restart|force-reload|reload|status|fullstatus|graceful|help|configtest}"
	RETVAL=2
esac

exit $RETVAL
30. Contrast that with a systemd unit file syntax
[Unit]
Description=The Apache HTTP Server
After=network.target remote-fs.target nss-lookup.target

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/httpd
ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND
ExecReload=/usr/sbin/httpd $OPTIONS -k graceful
ExecStop=/usr/sbin/httpd $OPTIONS -k graceful-stop
KillSignal=SIGCONT
PrivateTmp=true

[Install]
WantedBy=multi-user.target
31. Test Unit File
● Copy the unit file
● cp [myapp].service /etc/systemd/system/
● Alert systemd of the changes:
● systemctl daemon-reload
● Start service
● systemctl start [myapp].service
● View status
● systemctl status [myapp].service
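A minimal [myapp].service to walk through those steps with (hypothetical; /bin/sleep stands in for a real daemon so `systemctl status` has something to show):

```ini
# /etc/systemd/system/myapp.service -- hypothetical minimal unit
[Unit]
Description=Demo service for testing the unit-file workflow

[Service]
# Type=simple is the default: ExecStart is the main process
ExecStart=/bin/sleep 300

[Install]
WantedBy=multi-user.target
```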
33. The Journal - Logging with systemd
● “The journal is a component of systemd, that captures Syslog
messages, Kernel log messages, initial RAM disk and early boot
messages as well as messages written to STDOUT/STDERR of
all services, indexes them and makes this available to the user”
● Indexed
● Formatted
● Errors in red
● Warnings in bold
● Security
● Reliability
● Intelligently rotated
34. Journal
● Does not replace rsyslog in RHEL 7
● rsyslog is enabled by default
● Use rsyslog for traditional logging w/ enterprise features
● The journal is not persistent by default; it keeps a
ring buffer in /run/log/journal.
● Collects event metadata
● Stored in key-value pairs
● man page: systemd.journal-fields(7)
● journalctl – utility for viewing the journal.
● Simple (or complex) filtering
● Interleave units, binaries, etc.
35. Using the Journal
● Enable persistence: `mkdir /var/log/journal`
● View from boot: `journalctl -b`
● Tail -f and -n work as expected:
● journalctl -f ; journalctl -n 50
● Filter by priority: `journalctl -p [level]`
0 emerg
1 alert
2 crit
3 err
4 warning
5 notice
6 info
7 debug
36. Using the Journal
● Other useful filters:
● --since=yesterday or YYYY-MM-DD (HH:MM:SS)
● --until=YYYY-MM-DD
● -u [unit]
● Pass a binary, e.g. /usr/sbin/dnsmasq
● View journal fields
● journalctl [tab] [tab]←bash-completion rocks!!
● Entire journal
● journalctl -o verbose (useful for grep)
38. Booting
● Boot process too fast to interact with? For an interactive boot,
append: systemd.confirm_spawn=1
● /var/log/boot.log – still works the same
● Enable debugging from grub by appending:
● systemd.log_level=debug systemd.log_target=kmsg
log_buf_len=1M
● Or send debug info to a serial console: systemd.log_level=debug
systemd.log_target=console console=ttyS0
● Enable early boot shell on tty9
● systemctl enable debug-shell.service
● ln -s /usr/lib/systemd/system/debug-shell.service
/etc/systemd/system/sysinit.target.wants/
● systemctl list-jobs
40. Control Groups Made Simple
Resource Management with cgroups can reduce application or VM
contention and improve throughput and predictability
41. Slices, Scopes, Services
● In RHEL7 systemd manages cgroups, new concept of
Scopes/Slices:
● Slice – Unit type for creating the cgroup hierarchy for resource
management.
● Scope – Organizational unit that groups a services' worker
processes.
● Service – Process or group of processes controlled by systemd
42. Control Groups - Usability Improvements: Scopes
Systemd puts all related worker PIDs into a cgroup called a ‘scope’.
● Services
● Apache processes in same services/apache scope
● MySQL processes in same services/MySQL scope
● Apache/Mysql get an equal “slice” of the system
● User accounts
● All users get an equal “slice”
● Machines
● All containers/VMs get an equal “slice”
● No service/user/machine can dominate system
43. Control Groups - Usability Improvements: Slices
Slices are special unit files for assigning resource constraints.
Services and scopes get assigned to slices.
● Systemd automatically assigns services to system.slice
● You can override resource settings with unit file configuration
● MemoryLimit=1g
● Command Line
● #> systemctl set-property httpd.service CPUShares=524
MemoryLimit=500M
● Systemd will assign containers to machine.slice
● You can override by editing
● /etc/systemd/system/big-machine.slice
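A sketch of what such a slice file might contain (the values are illustrative; resource-control directives for slice units go in a [Slice] section):

```ini
# /etc/systemd/system/big-machine.slice -- illustrative values
[Unit]
Description=Slice granting containers a larger share of resources

[Slice]
CPUShares=4096
MemoryLimit=8G
```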