Linux executables are in ELF format. This document covers Linux executable formats, compiling C programs in Linux using GCC, and executing programs. It also covers Linux libraries (static and shared), error handling using errno and assertions, signals and signal handling, process monitoring and the /proc filesystem, and parsing command-line arguments using getopt_long.
2. Table of Contents
- Linux executables.
- Compiling C programs in Linux.
- Executing programs in Linux.
- Using GCC.
- Hello, World!
- Sample password checking program in C.
- Introduction to signals in Linux.
- Linux process monitoring commands (kill, ps, top, …).
- Signal handling in Linux.
3. Linux Executables
- What is ELF?
- The file format for executables, object code, shared libraries, and core dumps in Linux.
- Not bound to any particular processor or architecture.
- Replaced a.out and COFF in many Unix-like operating systems.
- Chosen in 1999 as the standard binary file format for Unix-like OSes.
- Also used in many non-Unix-like OSes.
4. Linux Executables
Linux executables are in ELF format (Executable and Linkable Format).
There are 3 main kinds of ELF object files:
- Relocatable files (.o files)
- Shared object files (.so files)
- Executable files
Core dumps are also stored in ELF format.
6. ELF Layout
In general, an ELF file contains:
- ELF Header: contains information about the Program Header Table and the Section Header Table.
- Program Header Table: contains information about how to create a process image for this program to run (necessary in execution).
- Sections such as .text, .rodata, .data, …
- Section Header Table: contains information about the file's sections (necessary in linking).
7. Extra Points
- Shared objects (SOs) are position-independent code (PIC).
- PIC is code that may be loaded at a different address each time.
- When compiling code to be an SO, remember to use the -fPIC option in gcc.
- PE is the file format used for executables in Windows.
- Portable Executables (PE files) do not support PIC.
8. Compiling C Source
- Compiling a source file to create an executable is performed in the steps below (the compiler does these in one or more passes):
Preprocessing
Compiling
Assembling
Linking
9. Compiling C Source
- The compiler performs several analyses and transformations on the source file:
Lexical analysis
Syntax analysis
Semantic analysis
Intermediate code generation
Code optimization
Code generation
(supported throughout by a symbol table and an error handler)
14. Most simple use of GCC
- Write the source in a plain text file with the .c extension.
- Compile it using these commands:
gcc -c myfile.c
(tells gcc not to make the executable; the object file alone is enough)
gcc -o myfile myfile.o
(tells gcc to create the output (executable) file with this name; we give gcc the object file we created before)
16. Libraries in Linux
- Virtually all programs in Linux are linked against one or more
libraries.
- There are two types of libraries in Linux:
- Static libraries (archive files, like Windows .LIB files).
- Shared libraries (shared objects, like Windows DLLs).
- Libraries contain code and data that provide services to
independent programs.
17. Static Library
- A static library is a collection of object files.
- The linker extracts the needed object files from the archive and attaches them to your program (as if they were provided directly).
- When the linker encounters an archive on the command line, it searches the already-passed objects to see whether there is a reference to objects in this archive.
18. Static Library
You can use the “ar” command to create archives (static libs)
You may create object files to put in the archive using GCC
19. Static Library
- If the linker finds a reference, it extracts the object from the archive and puts it in our executable.
- If the linker cannot find any reference, it reports an error and stops.
IT IS IMPORTANT TO PASS THE COMMAND LINE OPTIONS IN THE CORRECT ORDER
22. Shared Library
- A shared library is also a collection of objects.
- When it is linked into another program, the program does not contain the objects themselves, just references to the shared library.
- It is not a collection of object files, but a single big object file that is a combination of object files.
- Shared libraries are position-independent code, because a function in an SO may be loaded at different addresses in different programs.
23. Shared Library
- The linker just includes the name of the "so" in the executable file.
- The operating system is responsible for finding the specified "so" file.
- By default, the system searches only "/lib" and "/usr/lib".
- You can indicate another path by setting the LD_LIBRARY_PATH environment variable.
24. Shared Library
You can create PIC object files to put in a shared library, using GCC.
Using GCC, you can then combine the object files and create a shared object (.so).
25. Libraries in Linux
- The ldd command shows the shared libraries that are linked into an executable (and their dependencies).
- Static libs cannot point to another lib, so you should include all dependent libs on the GCC command line.
- The included SOs need to be available during execution.
- The linker stops searching for libraries when it finds a directory containing the proper ".so" or ".a".
- ".so" has higher priority than ".a" unless explicitly specified otherwise (the -static option in gcc).
26. Shared Library
The linker searches the given objects for the specified symbol. If you call a function whose source is not defined anywhere, the linker brings up an error.
27. Shared Library
The "-L" option indicates that GCC should also search the specified folder for the libraries given later. The "-l" option indicates the libraries used in this program.
28. Shared Library
While compiling a source, you should indicate which libraries (which .so files) are needed. While executing the code, the indicated SOs must be available, otherwise the code will not run.
29. Shared Library
You can use the "LD_LIBRARY_PATH" environment variable to indicate the location of shared objects stored somewhere other than "/usr/lib" and "/lib". You may also use /etc/ld.so.conf and the ldconfig command.
30. Shared Library
What are the other shared objects included in our code?
31. Shared vs. Static
Shared:
- Saves space.
- Lib upgrades can be done without upgrading the whole program.
Static:
- Users can install software without admin privilege.
- Suitable for mission-critical code.
33. Error Handling
- A program may encounter a situation in which it cannot work correctly.
- In case of an error, your program may decide to:
• Ignore the error and continue running.
• Stop working immediately.
• Decide what to do next (is the error recoverable?).
- The ability of a program to deal with errors is called "error handling".
34. Error Handling
- The first step in handling an error is determining that it has happened.
- In your program, you are responsible for checking for errors.
- In Linux, when a system call fails, it sets the errno global variable.
- Most system calls return -1 on error and set errno accordingly.
- After performing any system call, it's up to you to check the return value of the call and deal with probable errors.
35. Error Handling
- The errno variable is global, so you should check it immediately after the desired call.
- errno is thread-safe.
- There are functions for working with errno that print meaningful error messages.
- Using strerror() and strerror_r() is one option for dealing with errors.
- These functions return a string describing the error code passed as the argument.
36. Error Handling
Because the value of errno may change with each system call, it is better to copy it into another variable right away.
37. Error Handling
Depending on the situation, you may decide to perform some actions in case of an error rather than terminating your program. What happens if a function returns -1 on success?
38. Error Handling
- You may also use the assert macro in your C program.
- One may use assert to properly generate some information in case of unpredicted situations.
- You can disable all asserts in your code by providing the -DNDEBUG option on your gcc command line.
- You should never perform any operation in your assert statement; just check.
39. Error Handling
Here, we never expect the i variable to have any value other than 2, so we check it in our assert. In this sample code, the value of i should never change (it should always stay 2), but we change it manually to test the assertion.
40. Error Handling
When we compile our code with the -DNDEBUG option, asserts are ignored during compilation. Otherwise assert() works fine: if i is not equal to 2, the assertion fails.
43. Signals
- Signals are mechanisms for communicating with and manipulating processes in Linux.
- Signals are asynchronous software interrupts.
- In Linux, each signal has its own specific number.
- Signal names and numbers are defined in "/usr/include/bits/signum.h".
- A program may receive signals from the OS itself, other processes, or users.
45. Signals
- A program may do one of several things when it receives a signal.
- For each signal there is a default disposition, which determines the default behavior of a program when it receives that signal (if the program does not specify a specific action).
- There are several ways to handle signals in Linux.
- The sigaction() function can be used to change the action taken by a process on receipt of a specific signal.
46. Signals
int sigaction( int signum, const struct sigaction * act,
               struct sigaction * oldact )
- signum: the signal number (only trappable signals). It's better to use signal NAMES instead of NUMBERs here (the mapping is in /usr/include/asm/signal.h).
- act: the new signal disposition.
- oldact: if not NULL, the old signal disposition is stored here.
47. Signals
- The sigaction structure is something like this:
struct sigaction
{
    void     (*sa_handler) (int);
    void     (*sa_sigaction) (int, siginfo_t *, void *);
    sigset_t   sa_mask;
    int        sa_flags;
    void     (*sa_restorer) (void);
};
- sa_handler: SIG_IGN, SIG_DFL, or a handler function that takes an integer (the signal number).
- sa_sigaction: if you want more information about the received signal, set this function instead of sa_handler.
48. Signals
Assignment to a sig_atomic_t value is done atomically and cannot be interrupted by another signal.
49. Signals
If you do not press CTRL+C 5 times, this program will never end by itself.
50. Signals
- There are also some other functions for working with signals, including psignal and strsignal (like what we have for dealing with errors).
- A program may send a signal to another process using the kill() system call.
- A program may send a signal to itself using the kill() or raise() system calls.
- Using pause(), you can wait until a handled or fatal signal arrives.
- Different Linux kernels have different permission schemes, but generally one user's process can't send signals to other users' processes.
52. Logging Events
- A program may log events and conditions during its run time.
- In Linux, logging can be done in the program itself or with the assistance of Linux facilities.
- Using syslog, a program can log events at different levels of priority.
- To log events with syslog, the "syslog" service must be started on the system.
53. Logging Events
- You can use the "syslog" function in your program in order to log events:
syslog (int priority, const char * format, …)
- priority = facility | level
- The format and format strings are the same as we use in printf.
55. Logging Events
- The "syslog" function will generate a log message.
- The log message will be distributed by "syslogd".
- The "syslogd" (syslog daemon) must be configured correctly to work.
- The configuration file for syslogd is "/etc/syslog.conf".
- The syslog daemon writes the logs to the appropriate files and sends them to suitable devices according to the configuration file.
61. /PROC
- A pseudo filesystem that contains process information.
- Not associated with a hardware device (like disk devices).
- A window into the running kernel.
- The contents of the files in this directory are not fixed blocks; they are generated by the Linux kernel when you read them.
- Some files in /proc allow kernel variables to be changed.
62. /PROC
- You can get the information you want by reading the contents of /proc files in your program.
- Some of these files are:
/proc/cpuinfo      Information about the CPU(s)
/proc/version      Version of the Linux kernel (uname)
/proc/meminfo      Information about memory usage
/proc/filesystems  What filesystem types are loaded in the kernel right now
/proc/mounts       What filesystems are mounted
63. /PROC
We read /proc files like regular files. The names, output formats, and semantics may change in new kernel releases. Can you compute the system uptime?
64. /PROC
- Each process has a directory dedicated to it in /proc (named by its PID).
- In each PID directory there is some information about the process.
- The "self" directory points to the running process itself.
- In each process directory there are files and subdirectories giving information about that process:
- cmdline, cwd, environ, exe, fd, maps, mem, root, … (see man 5 proc)
65. /PROC
- auxv: This contains the contents of the ELF interpreter information
passed to the process at exec time. The format is one unsigned long ID
plus one unsigned long value for each entry. The last entry contains two
zeros.
- coredump_filter: can be used to control which memory segments are
written to the core dump file.
- cpuset: the root to cpuset pseudo filesystem for this process (which
cpu and which memory unit should this process use)
66. /PROC
- oom_adj, oom_score: if system is out of memory, how the oom_killer
should act with this process?
- limits: Displays the soft limit, hard limit, and units of measurement for
each of the process's resource limits.
- numa_maps: information about Non-Uniform Memory Access policies
and allocation.
- task: information about process threads (each thread has a dir here)
67. /PROC
- Lots of system tools in Linux use /proc to gather needed information.
- Using /proc, you can write your own ps, top, …
- /proc contents might change across kernel releases.
- /sys is an alternative to /proc and aims to be more organized.
- Not all values in /proc are writable; some exist only to give information about the running kernel.
- In programming, one may access /proc files like ordinary files.
68. Managing command-line arguments
- By using getopt_long, you can manage the arguments passed to your program.
- The traditional argc, argv[] method is also available.
- getopt_long handles short options as well as long options.
- getopt_long generates errors in case of bad options and also takes care of options that require an argument.
- getopt_long needs argc and argv[] in order to work properly.
69. Managing command-line arguments
int getopt_long( int argc, const char * argv[], const char * optstring,
                 const struct option * longopts, int * longindex )
- getopt_long takes lots of arguments!
- argc and argv[]: as passed to main.
- optstring: the array of short options.
- longopts: a structure filled with the long option declarations.
- longindex: should be NULL or point to the index into the longopts array.
70. Managing command-line arguments
The long-options structure indicates, for example, that we have two options and that "-p" takes an argument. optarg is a pointer to the argument provided for the current option.
71. Creating Processes in Linux
- The simple way: using the system function.
- The flexible, secure, complex way: using fork and exec.
- By using system, you can create a subprocess running the standard Bourne shell (/bin/sh) and execute a command in it.
- By using the fork function, you can create a child process that is an exact copy of its parent.
- By using the exec family of functions, you can replace the current process image with a new one.
DOS and the Windows API use the spawn family instead of fork & exec.
72. Creating Processes in Linux
- The system function uses a shell to invoke the desired program.
- It has the same features, limitations, and security flaws as the system's shell.
int system (const char * command)
- system runs the command by calling "/bin/sh -c command".
- system returns the exit status of the command (see wait); 127 if the shell could not be run; -1 on any other error.
73. Creating Processes in Linux
Program_1 runs its code and calls system(program_2). Program_2 runs as a command in the Bourne shell; when it finishes, the remaining Program_1 code continues executing.
74. Creating Processes in Linux
If system fails to execute "ls", it will return 127; otherwise, the "ls" command will execute.
76. Creating Processes in Linux
- The fork function creates a child process that differs only in its PID from its parent.
pid_t fork ( void )
- fork does not need any arguments; it just creates a copy of the calling process.
- fork returns a PID on success (the child's PID in the parent, zero in the child) and -1 on failure.
77. Creating Processes in Linux
Program_1 runs its code and calls fork(). The child is an exact copy of Program_1 and starts executing from that point, while Program_1 (the parent) continues its operation from the same point.
78. Creating Processes in Linux
Calling fork creates the child process, in which we can decide what to do.
79. Creating Processes in Linux
- The exec family vary slightly in their capabilities and calling conventions:
• Functions containing the letter 'p' in their name accept a program name and search for it in the current execution PATH.
• Those containing the letter 'v' or 'l' in their name accept the argument list for the new program as an array or a list, respectively.
• Those containing the letter 'e' in their name accept an array of environment variables.
exec replaces the calling process with another one, so it never returns a value on success; on failure it returns -1.
80. Creating Processes in Linux
Program_1 runs its code and calls exec(program_2). Program_1 is now replaced by Program_2, which runs to its end; the remaining Program_1 code never executes if exec finishes successfully.
81. Creating Processes in Linux
- All of the exec family of functions use just one system call: execve().
- The execl() functions are variadic functions.
- When calling exec, remember that almost all Linux applications use argv[0] as their binary image name.
- When using the exec family, the new process image does not keep the previous one's signal handlers and other such state.
- The new process has the same values for its PID, PPID, priority, and permissions.
82. Creating Processes in Linux
Calling exec ends this program and replaces it, under the same PID, with a new one running BASH.
84. Process Permissions
- There are three types of UIDs for a process when running a program that is not set-UID:
•Real UID: equal to the UID of whoever actually ran the program.
•Effective UID: the one that matters when performing specific actions; equal to the RUID.
•Saved UID: equal to the RUID.
85. Process Permissions
- There are three types of UIDs for a process when running a program that is set-UID:
•Real UID: equal to the UID of whoever actually ran the program.
•Effective UID: the one that matters when performing specific actions; set to the UID of the owner of the file.
•Saved UID: set to the UID of the owner of the file.
So, what is FSUID?!
86. Process Permissions
- A process can change its real, effective, and saved UIDs during execution.
- Changing these UIDs is done using the setuid(), setreuid(), and seteuid() system calls (in POSIX).
- There are also some other non-POSIX system calls provided in Linux.
- Changes to these values are interdependent.
- It's better to always work with the EUID unless you need to use the others.
88. Process Termination
- Termination in Linux is performed by the exit() function.
- When main() returns, the runtime calls exit(), which ends in the _exit() system call.
- The exit status passed to exit() is the return value of our program.
- When a program exits, the kernel cleans up all of its resources, terminates the process, and signals its parent.
- When receiving signals like SIGKILL, the process also terminates, but no cleanup can be done inside the process.
- You may also call _exit() directly, although it's not advised.
89. Process Termination
There may be no explicit exit() call in the program, but the compiler/runtime puts one in its place when it sees that main() returns. atexit() registers exit functions on a stack (they run in reverse order of registration).
90. Process Termination
- When a child exits, the kernel sends SIGCHLD to its parent.
- The parent may use this signal to determine whether one of its children has died.
- The parent can get more useful information regarding the child's termination using the wait() and waitpid() system calls.
- wait() simply blocks until a child's status changes:
•The child was terminated.
•The child was stopped by a signal.
•The child was continued by a signal.
91. Process Termination
- Each parent should take care of cleaning up its children when they finish their job.
- If it does not, the children remain in the system as zombies.
- By calling the fork function you generate the child process, and by calling the wait function you wait for the child to finish and then clean it up.
92. Process Termination
- waitpid() is more powerful than wait().
- Using waitpid() you can wait for a specific child, knowing its PID.
- waitpid() can act in WNOHANG mode, which does not block.
- There are macros you can use on the exit status set by wait() and waitpid() {#include <sys/wait.h>}.
- All of these macros and options are described in wait.h(P).
94. Zombie processes
- If a program does not clean up its children and exits, a special process (init) will take care of them.
- init waits on all children periodically, so no zombie remains…
- Calling wait blocks the parent process until the child finishes its job.
- You can find zombie processes on a system by using the ps command and searching for "defunct" processes.
95. Zombie processes
The child process finishes its job after 5 seconds; the parent cleans up the child only after 12 seconds. Would we have a zombie process in this program? (Yes: between the child's exit and the parent's wait.)
96. Process Sessions and Groups
- Each process in Linux is a member of a process group.
- A session is a combination of one or more process groups.
- Each process group has a group leader, and each session has a session leader as well.
- In practice, only shells care about sessions (to perform job control).
- A signal sent to a process group is received by all members of the group.
- setsid, setpgid, getsid, and getpgid are the relevant system calls.
97. Daemon processes
- A daemon process is a process that does not interact with the user.
- Daemon processes run in the background and are not associated with a controlling terminal.
- Daemons are children of init.
- A daemon usually has the letter `d` at the end of its name.
- Daemons are usually started at boot time.
- Daemons perform tasks that are more low-level than interactive user programs'.
98. Daemon processes
- To create a daemon, you should:
•Call fork() and then exit() in the parent. Now the process is parented by init.
•Close any file descriptors inherited from the parent.
•Use chdir() to go to an existing, permanent location in the system.
•Use setsid() to get a new session; the process becomes the session leader as well as the process group leader.
•Do whatever you want with stdout, stdin, stderr (fds 1, 0, 2).
100. Scheduling the execution
- You can call wait immediately in the parent process, so it will not continue its job until the child is cleaned up.
- You can call wait3, wait4, or waitpid in the parent process (these functions can run in WNOHANG mode).
- The parent process may get informed about child termination using IPC and signals: when a child process terminates, the parent receives the SIGCHLD signal.
101. Scheduling the execution
- Multitasking can be performed cooperatively or preemptively.
- In cooperative mode, each process voluntarily stops running (yielding).
- The Linux scheduler works preemptively.
- The scheduling algorithm in Linux is round-robin for processes with different priorities and FCFS for ones with the same priority.
- I/O-bound processes yield the processor sooner than processor-bound ones.
- Linux deals with threads the same as processes; threads just share some kernel resources.
102. Scheduling the execution
- A process may yield execution using the sched_yield() system call.
- The kernel (the scheduler itself) decides better than the programmer in almost all cases.
- In SMP systems, one process may run on different CPUs.
- Processor affinity is the likelihood of a process running on the same CPU.
- Usually processes continue to run on the same CPU.
- Using sched_setaffinity() and sched_getaffinity(), a process may set its affinity.
103. Setting the priority of processes
- In Linux, you can assign a priority value to each process when executing it.
- Priorities can be given to processes using the nice command.
- nice does not force the scheduler to run a process at a specified priority; it just advises it.
- Nice values range from -20 (highest priority) to +19 (lowest priority).
Users can assign higher nice values to their processes but cannot reduce them.
104. Setting the priority of processes
- You can use nice, setpriority, and getpriority in order to get/set the process nice value.
int nice ( int inc )
- nice adds inc to the previous nice value of the calling process.
- nice returns the new nice value on success.
105. Setting the priority of processes
- You can use nice with an inc value of 0 to determine the current nice value, or you can use getpriority.
int getpriority( int which, int who )
- which: we pass PRIO_PROCESS to get the nice value of a process.
- who: an identifier for the previous argument (pass zero to get the nice value of the calling process).
- The return value is the nice value of the specified process.
106. The getpriority function returns the nice value on success, which might itself be -1, or -1 on error. So how can we make sure this is not an error?
107. Because there is no process with this PID, getpriority will return -1. Using errno, you can determine whether this is an error or a real nice value.
108. Dynamic Code Loading
- Allows you to load some code at run time.
- No need to explicitly link in the code.
- Can be used in applications supporting "plug-ins" to provide additional functionality.
Third-party developers can use this facility to create shared libraries and place them in a known location. Your program can then load the code in these libraries automatically.
109. Dynamic Code Loading
- You can use dlopen to open a shared library at run time.
- dlopen can load the code when we call dlopen, or lazily when it is referenced, or in some other situations, as determined by "flag".
void * dlopen( const char * filename, int flag )
- filename: the path or name of the library.
- flag: indicates how to load the library (usually we use the RTLD_LAZY flag).
- dlopen returns a handle to the library on success and NULL on failure.
110. Dynamic Code Loading
- You can call the dlsym function with the handle that dlopen returns (dlsym obtains the address of a symbol in the shared library).
void * dlsym( void * handle, const char * symbol )
- handle: the handle that dlopen returned.
- symbol: the name of the symbol in the shared library.
- dlsym returns the address of the symbol in memory on success and NULL on failure.
You can also use dlsym to access a static variable in the shared library.
111. Dynamic Code Loading
- If the library has already been loaded, dlopen simply increments the library's reference count.
- By calling dlclose you decrease this count; when it reaches zero the library can be unloaded.
int dlclose( void * handle )
- dlclose takes the handle that dlopen returned as its argument.
- dlclose returns 0 on success and non-zero on failure.
112. Dynamic Code Loading
- In case of a failure in dlopen or dlsym, you can call dlerror to get a human-readable explanation of the error.
char * dlerror( void )
- dlerror takes no arguments.
- dlerror returns a pointer to an explanation string (or NULL if no error has occurred since the last call).
113. Dynamic Code Loading
First we get a handle from dlopen; now we have loaded the library (whose name is stored in lib). We then take the address of the "do_it" function, which we are sure exists in the lib library. Finally we call our function (do_it) and close the shared library.
114. Threads
- Threads are mechanisms for doing more than one job at a time.
- Threads are finer-grained units of execution.
- Threads, unlike processes, share the same address space and other resources.
- The POSIX standard thread API is not included in the standard C library; it lives in libpthread.so.
- In Linux, threads are handled by LWPs (lightweight processes).
116. Creating threads
- Like processes, each thread has its own thread ID, of type pthread_t.
- You can create a thread by calling the pthread_create function.
int pthread_create( thread_id, attribute,
                    thread start routine, routine arg )
- thread_id: the thread ID, of type pthread_t.
- attribute: the thread attributes (joinable or what?).
- thread start routine: what this thread should do.
- routine arg: the argument passed to the thread function.
- Returns zero on success.
117. Creating threads
- pthread_create returns immediately, and the specified thread does its job separately.
- If one of the threads in a program calls exec, the whole process image is replaced.
- The argument passed to the thread routine is a void *.
- You can pass more data by passing a pointer to a structure, cast to void *.
118. Creating threads
Pay attention to implicit type conversions. How can you make sure that the threads do their job before main finishes? Look at the order of printed characters (how does the Linux scheduler switch between threads?).
119. Joining threads
- You can wait for a thread to finish its job using pthread_join.
- pthread_join is similar to the wait function for processes.
- Using pthread_join, you can also take the return value of a thread.
- A thread cannot call pthread_join to wait for itself; you can use the pthread_self function to get the TID of the running thread and decide what to do.
120. Joining threads
- Like processes, you can wait for a thread to finish its job:
int pthread_join( pthread_t thread_id, void ** return_value )
- thread_id: the thread ID you want to wait for.
- return_value: the return value of the thread is stored here.
- Returns zero on success.
122. Thread attributes
- The second parameter of pthread_create is the thread attribute.
- The most useful attribute of a thread is joinability.
- If a thread is joinable, it is not automatically cleaned up.
- To clean up a joinable thread, like a child process, you should call pthread_join.
- A detached thread is automatically cleaned up.
- A joinable thread may be turned into a detached one, but cannot be made joinable again.
- Using pthread_detach, you can turn a joinable thread into a detached one.
123. Thread attributes
- If you do not clean up a joinable thread, it becomes something like a zombie.
- To assign an attribute to a thread, you should:
• Create a pthread_attr_t object.
• Call pthread_attr_init to initialize the attribute object.
• Modify the attributes.
• Pass a pointer to the attribute object to pthread_create.
• Call pthread_attr_destroy to release the attribute object.
124. Thread attributes
There is no need to call pthread_join for detached threads, but we should still wait for the threads to finish their job before main returns.
125. Thread cancelation
- A thread may terminate by finishing its job, by calling pthread_exit, or by a request from another thread.
- The last case is called "thread cancelation".
- You can cancel a thread using pthread_cancel.
- If the canceled thread is not detached, you should join it after cancelation; otherwise it becomes a zombie.
- You can disable cancelation of a thread using pthread_setcancelstate().
126. Thread cancelation
- There are two cancelation types (pthread_setcanceltype):
- PTHREAD_CANCEL_ASYNCHRONOUS: asynchronously cancelable (cancel at any point of execution).
- PTHREAD_CANCEL_DEFERRED: synchronously cancelable (the thread checks for cancelation requests at cancelation points).
- There are two cancel states (pthread_setcancelstate): PTHREAD_CANCEL_ENABLE and PTHREAD_CANCEL_DISABLE.
- It's a good idea to set the state to uncancelable when entering a critical section…
127. Thread 2 cancels thread 1. How much time will this program
consume?
129. Critical Section
- The ultimate cause of most bugs involving threads is that they
access the same data at the same time.
- The section of code responsible for accessing the shared data is
called the Critical Section.
- A critical section is a part of the code that should be executed
completely or not at all (a thread should not be interrupted while it
is in this section).
- If you do not protect the critical section, your program might crash
because of a Race Condition.
130. Race Condition
- A Race Condition is a condition in which threads race each other
to change the same data structure.
- Because there is no way to know when the system scheduler will
interrupt one thread and execute another, the buggy program
may crash once and finish normally the next time.
- To eliminate race conditions, you need a way to make operations
atomic (uninterruptible).
131. In this case, the “number” variable is the shared resource
that the threads race to change. This part of the code should be
executed completely or not at all (the CS).
132. Mutual Exclusion
- Mutual Exclusion is a method to avoid race conditions.
- In this method, if a thread wants to enter a CS, it first checks
whether another thread is there or not.
- If there is another thread, it waits until that thread finishes its
job.
- If no thread is in the CS right now, the thread puts a lock
on the critical section.
133. Mutual Exclusion
- Linux guarantees that race conditions do not occur among threads
attempting to lock a MUTEX.
- You can create a MUTEX by creating a variable of type
pthread_mutex_t and then initializing it with pthread_mutex_init.
- A simpler way to create a mutex is to initialize it with the special value
PTHREAD_MUTEX_INITIALIZER.
- The mutex variable should be initialized only once.
134. Mutual Exclusion
- A thread may lock a mutex using pthread_mutex_lock and may
unlock it using pthread_mutex_unlock.
- If you forget to unlock a mutex, other threads cannot enter the CS.
- Mutual exclusion is a mechanism that allows a thread to block the
execution of another.
- Mutual exclusion opens up the possibility of a new type of bug,
called deadlock.
135. What is the problem in this code? The critical section is
protected with a MUTEX.
136. Deadlock
- A deadlock occurs when one or more threads are waiting for
something that will never happen.
- Deadlocks may happen in various conditions. One is that a thread
tries to lock a mutex twice without unlocking it once.
- Double locking a fast mutex (the default kind of mutexes in Linux)
will lead to a deadlock.
- An attempt to lock this kind of mutex blocks until the mutex is
unlocked.
138. Different Kinds of Mutex in Linux
- A recursive mutex may safely be locked many times by the same
thread.
- This kind of mutex will remember how many times pthread_mutex_lock
was called on it and waits for the same number of unlocks to get
unlocked.
- Linux will detect and flag a double lock on an error-checking mutex.
- The second consecutive call to pthread_mutex_lock for an error-
checking mutex will return the failure code EDEADLK.
139. Mutex Attribute
- The default type of mutexes in Linux is the fast mutex.
- You can set an arbitrary attribute for a mutex, just like thread attributes.
• First you should create an attribute object of type
pthread_mutexattr_t.
• Second you should call pthread_mutexattr_init to initialize it.
• Finally set the mutex type by calling
pthread_mutexattr_setkind_np (the portable name is
pthread_mutexattr_settype).
141. Mutex Tests
- If you just want to check the state of a mutex and then continue
with other work, you can use pthread_mutex_trylock, which is a non-
blocking function.
- If you call this function on an unlocked mutex, you will lock it.
- If the mutex is already locked, pthread_mutex_trylock returns
immediately (instead of blocking) with the error code EBUSY.
142. Semaphores
- A semaphore is a counter that can be used to synchronize multiple
threads.
- Using mutexes, you can stop threads from accessing a section of
code sooner than they should.
- Using semaphores, you can stop threads from leaving a section of
code sooner than they should.
- Each semaphore has a counter value which is a non-negative
integer.
143. Semaphores
- Each semaphore supports two basic operations:
• A wait operation decrements the value of the semaphore by 1.
If the value is already zero, the operation blocks until it
becomes positive.
• A post operation increments the value of the semaphore by 1.
If the value was previously zero and a thread is blocked in a
wait operation, it will get unblocked.
144. Semaphores
- If you use semaphores, include <semaphore.h>.
- Semaphores are of type sem_t and should be initialized before
use by calling the sem_init function.
- After you have finished your job with a semaphore, you may destroy it
using sem_destroy.
- You can wait on a semaphore with sem_wait or post to a
semaphore with sem_post.
145. Semaphores
- To get the current value of a semaphore, you can use
sem_getvalue.
- To perform a non-blocking wait on a semaphore (just like
mutexes), you can use the sem_trywait function, which will return the
error value EAGAIN in the case of a zero semaphore.
Is it a good idea to use the value returned by sem_getvalue to
make a decision whether to post to or wait on the semaphore?
147. Condition Variables
- A condition variable enables you to implement a condition
under which a thread executes or blocks.
- Linux guarantees that threads which are blocked on the
condition will be unblocked when the condition changes.
- Just like semaphores, a thread can wait on a condition variable
until another thread signals the same condition variable.
- Condition variables do not have any counter, so a thread must
wait on a condition variable before another one signals it.
148. Condition Variables
- Because condition variables are themselves a sort of shared resource
(two threads try to access them), a race condition may occur.
- You should always use a condition variable in conjunction with
a mutex.
- The action of unlocking the mutex and waiting on the condition
variable should be performed atomically.
149. Condition Variables
- To use a condition variable, you should create it of type
pthread_cond_t.
- pthread_cond_init can be used to initialize the condition
variable (the mutex should be initialized separately).
- pthread_cond_signal signals a condition variable, and a single
thread waiting on the variable will get unblocked.
- pthread_cond_broadcast will unblock all threads waiting on a
condition variable.
150. Condition Variables
- To wait on a condition variable, you can call pthread_cond_wait,
which will block the calling thread until the condition variable is
signaled.
- The second argument for pthread_cond_wait is a pointer to the
mutex.
- When pthread_cond_wait is called, the mutex must already be
locked by the calling thread.
151. Condition Variables
- You should follow these steps to perform an action that may
change the sense of the condition:
• Lock the mutex.
• Take the action.
• Signal or broadcast the condition variable.
• Unlock the mutex.
You can use the condition variable without a condition,
just to block a thread until another one wakes it up.
152. A condition variable can also be used to wait for a condition
and perform a task when the condition changes.
153. Deadlock Conditions
- There are several conditions in which deadlock may occur. Two
of them are more common:
• Two threads are blocked, each waiting for a condition to occur
that only the other one can cause.
• Two different threads (running the same routine) are trying to
lock the same two mutexes in a different order.
154. Signal Handling in Threads
- In Linux, threads are implemented like processes.
- Each process must take care of the signals it receives.
- When a signal is received in a multi-threaded program, it is
received during the execution of one of its threads.
- The parent process will hold the process ID of the main thread
of the child process’s program.
- Threads can send signals to each other using pthread_kill.
155. Processes Vs. Threads
All threads in a
program run the same
executable.
Child process may run
a different executable
(exec)
An errant thread can
harm other threads in
the same process
An errant process can
not harm others
There is no need to
copy the memory for a
new thread.
Copying memory for a
new process, adds
additional performance
overhead.
Sharing data among
threads is trivial.
Sharing data among
processes can be done
using IPC mechanisms
156. Processes Vs. Threads
In general, threads should be used for programs that need fine-grained
parallelism, and processes should be used for programs that need coarser
parallelism.
157. IPC
- Inter Process Communication (IPC) is transfer of data
between processes.
- In Linux there are some methods of IPC:
• Shared Memory.
• Mapped Memory.
• Pipe (Named and Unnamed).
• Socket (Remote, Local).
158. Linux Memory Model
- Linux (like many other operating systems) implements virtual
memory.
- Each process has its own mapping of physical memory.
- Data in memory is stored in pages.
- Each page is a fixed-length block of memory.
- The page table is a data structure that stores the mapping between
virtual addresses and physical addresses.
159. Linux Memory Model
- Using virtual memory, each process thinks it has a large
range of contiguous addresses.
- In reality, the parts of data the process is currently using are
scattered around the RAM.
- The inactive parts of data are saved in a disk file.
- If two or more processes map the same part of memory, that
part is shared between them.
161. Shared Memory
- The fastest form of interprocess communication; also
called Fast Local Communication.
- It allows two or more processes to access the same memory.
- The Linux kernel does not take care of synchronizing the
processes’ access to the shared memory.
- Process semaphores are a suitable way of synchronization.
- For each process, using shared memory is just like using memory
obtained with malloc.
M.Golyani
163. Using Shared Memory
- To use shared memory in Linux, one process must allocate the
segment.
- Each process desiring to access the segment must attach the
segment, and after finishing its job must detach the segment.
- One process at the end must deallocate the segment.
- Allocating a new shared memory segment causes virtual memory
pages to be created.
164. Using Shared Memory
- Allocating an existing segment does not create new pages,
but returns an identifier to the existing ones.
- All shared memory segments are allocated as integral
multiples of the system’s page size.
- On most Linux systems, the page size is 4 KB, but you should
obtain it by calling the getpagesize() function.
165. Using Shared Memory
- You can use the ipcs -m command to see the currently assigned
shared segments.
- Using the ipcrm -m command, you can remove unused shared
segments left behind by processes.
- There are some limitations on using shared memory in Linux.
- SHMALL, SHMMAX, SHMMIN and SHMMNI are the values corresponding
to these limitations, located under /proc/sys/kernel/.
166. Using Shared Memory
- SHMALL: system-wide maximum of shared memory pages.
- SHMMAX: maximum size in bytes for a shared memory
segment.
- SHMMIN: minimum size in bytes for a shared memory
segment.
- SHMMNI: system-wide maximum number of shared segments.
If the minimum size of a shared memory segment is
equal to the page size, then what is SHMMIN?
167. Using Shared Memory
- A process may allocate a shared memory segment using
shmget().
- Its first argument is a key specified for the shared segment.
- Other processes can access the same shared memory
segment using the same key.
- To ensure that the key is not previously used, you can use the
special constant IPC_PRIVATE.
168. Using Shared Memory
int shmget ( key_t key, size_t size, int shmflg )
- key: the key you wish to specify for the shared segment.
- size: the segment size (will be rounded up to a multiple of the
page size).
- shmflg: permission and other specifications of the shared
segment.
- Returns a valid segment identifier on success and -1 on error.
169. Using Shared Memory
- shmflg is the logical OR of flags. The most useful flags are:
• IPC_CREAT: Is used to create a new shared segment.
• IPC_EXCL: Is used with IPC_CREAT to ensure failure if the
segment already exists.
• Mode flags (see the manual page of stat.h for details).
If IPC_CREAT is used without IPC_EXCL, and the segment key
already exists, the existing segment’s id will be returned and no
error will occur.
170. Using Shared Memory
- Permission flags are:
Mode bit Meaning
S_IRWXU R, W, X by owner
S_IRUSR Read by owner
S_IWUSR Write by owner
S_IXUSR Execute by owner
S_IRWXG R, W, X by group
S_IRGRP Read by group
S_IWGRP Write by group
S_IXGRP Execute by group
S_IRWXO R, W, X by other
S_IROTH Read by other
S_IWOTH Write by other
S_IXOTH Execute by other
172. Using Shared Memory
The ipcs command shows the currently assigned shared memory
segments: ID, permissions, number of attached processes, and more.
Which decimal number is equal to D6A in hex?
173. Using Shared Memory
- To use a shared memory segment, a process must attach it.
- shmat() is used to attach to a shared memory segment with a
given segment identifier.
- You can tell shmat() where in your process address space to
map the shared memory.
- If you call fork(), the child will inherit the shared memory.
- When you are finished with the shared memory, you can detach it
using shmdt().
174. Using Shared Memory
void * shmat ( int shmid, const void * shmaddr, int shmflg )
- shmid: the segment ID (returned by shmget()).
- shmaddr: the address in your process address space at which you
want the shared memory to be mapped.
- shmflg: could be SHM_RND, SHM_RDONLY, SHM_REMAP (Linux specific).
- On success, returns the address of the attached shared memory.
On error, -1 is returned and errno is set.
175. Using Shared Memory
Because you do not want to create a new shared memory segment,
you do not need to specify the size. You can detach the shared
memory by calling shmdt().
176. Using Shared Memory
During the execution, we can see there is 1 process attached to
this shared memory.
177. Using Shared Memory
Any process that knows the key can access this shared memory
segment (subject to the permission flags).
179. Using Shared Memory
- shmctl() returns information about a shared memory
segment and can also modify it.
- Using shmctl(), you can also deallocate a shared memory
segment.
- Each shared memory segment should be deallocated explicitly.
- shmctl() fills a structure of type shmid_ds.
180. Using Shared Memory
int shmctl ( int id, int cmd, struct shmid_ds * buf )
- id: the segment ID (returned by shmget()).
- cmd: IPC_STAT, IPC_SET, IPC_RMID, IPC_INFO (Linux specific),
SHM_INFO (Linux specific), SHM_STAT (Linux specific),
SHM_LOCK (Linux specific), SHM_UNLOCK (Linux specific).
- buf: a structure containing the information you want to be set
or to be read.
- On success, the return value depends on cmd. On error, -1 is
returned and errno is set.
183. Process Semaphore
- Processes must coordinate access to shared memory.
- Process semaphores, like thread semaphores, are a kind of
counter with two operations: POST and WAIT.
- Process semaphores come in sets.
- The last process using a semaphore set must explicitly
remove it.
- Unlike shared memory, removing a semaphore set causes
Linux to deallocate it immediately.
184. Process Semaphore
- To use a semaphore set, you should first allocate it using semget().
- The semget() system call will return a semaphore ID corresponding
to the key you give it.
- After allocating the semaphore, you should initialize it using semctl().
- After initializing the semaphore set, you can do POST or WAIT on it using
the semop() system call.
- The last process must invoke semctl() to remove the semaphore.
185. Process Semaphore
- Allocating an existing semaphore will return its semaphore ID.
- semget() flags behave the same way as shmget()’s do.
- You can use the ipcs -s command to view information about existing
semaphore sets.
- Using ipcrm -s, you may remove semaphore sets.
- Each semaphore in a set has the following associated values:
unsigned short semval; /* semaphore value */
unsigned short semzcnt; /* # waiting for zero */
unsigned short semncnt; /* # waiting for increase */
pid_t sempid; /* PID that did last OP */
186. Allocating a semaphore set
int semget ( key_t key, int nsems, int semflg )
- key: the key you wish to specify for the semaphore set.
- nsems: the number of semaphores you wish to have in this set.
- semflg: permission and other specifications of the semaphore set.
- Returns a valid semaphore identifier on success and -1 on error.
187. Allocating a semaphore set
The flags are the same as we used in shmget(). The semaphore set
will have only 1 semaphore in it.
189. Initializing Semaphores
- To initialize a semaphore set, you must:
• Set the semaphore value of all members to the desired values.
• Set the last-change time of all members.
• Set other specifications of the semaphore set members.
- To do so, you can use the semctl() function.
- As mentioned in the semctl() manual page, the calling program must
define a semun union.
190. Initializing Semaphores
int semctl ( int semid, int semnum, int cmd, union semun args )
- semid: the semaphore ID (returned by semget()).
- semnum: the number of the desired semaphore in the set.
- cmd: IPC_STAT, IPC_SET, IPC_RMID, IPC_INFO (Linux specific),
SEM_INFO (Linux specific), SEM_STAT (Linux specific), GETALL,
SETALL, GETVAL, SETVAL, GETNCNT, GETZCNT, GETPID.
- args: use of the fourth argument depends on cmd (it might be
ignored).
- On success, the return value depends on cmd. On error, -1 is
returned and errno is set.
191. Initializing Semaphores
- Depending on cmd, you may need to provide the fourth argument.
- If so, you must define the union semun yourself, like below:
union semun
{
int val; /* Value for SETVAL */
struct semid_ds * buf; /* Buffer for IPC_STAT, IPC_SET */
unsigned short int * array; /* Array for GETALL, SETALL */
struct seminfo * __buf; /* Buffer for IPC_INFO (Linux specific) */
};
- semid_ds and ipc_perm (a structure inside semid_ds) are filled with
information about the semaphores.
193. Wait and post operation
- To do wait and post on a semaphore in a set, you can use semop().
- semop() performs the desired operation on selected semaphores
in a set.
- semop() takes an array of operation structures.
- Each operation structure is related to one semaphore in the set.
- The operation structure is of type sembuf and contains:
unsigned short sem_num; /* Semaphore number */
short sem_op; /* Semaphore operation */
short sem_flg; /* Semaphore flags */
194. Wait and post operation
We are waiting on this semaphore.
195. Semaphore Deallocation
- The semaphore operations are performed atomically.
- If the SEM_UNDO flag is set in the operation structure, the action
will be undone when the process terminates.
- You must deallocate the semaphore set when you are finished.
- Unlike shared memory segments, removing a semaphore set causes
Linux to deallocate it immediately.
196. Wait and post operations
We are posting on this semaphore.
What will happen if we do not deallocate the shared memory?
197. Mapped Memory
- Permits different processes to communicate via a shared file.
- Linux splits the file into page-sized chunks and copies them into
virtual memory (available in a process’s address space).
- Linux handles the file reading and writing operations.
- You can map an ordinary file to a process’s memory using mmap().
- You may map all or part of a file into memory.
- You can release the mapped memory using munmap().
What happens if you do not unmap the mapped memory?
198. Mapped Memory
void * mmap ( void * start, size_t length, int prots, int flags,
int fd, off_t offset )
- start: the address in memory at which you prefer the file to be
mapped.
- length: the size of data to be mapped into memory from the given
offset.
- prots: the desired memory protections (not the file open mode).
- flags: the type of the mapped object and its specifications.
- fd: the file descriptor of the file or the object you want to be
mapped.
- On success, returns a pointer to the mapped area. On error,
errno is set and -1 (MAP_FAILED) is returned.
199. Mapped Memory
- The flag value is a bitwise OR of the following standard flags:
MAP_FIXED: Place the mapping exactly at the given address (first
argument).
MAP_SHARED: Writes are immediately reflected in the file.
MAP_PRIVATE: Writes to the memory range go to a private copy of
the file.
- There are also some non-standard flags:
MAP_DENYWRITE, MAP_EXECUTABLE, MAP_NORESERVE,
MAP_LOCKED, MAP_GROWSDOWN, MAP_ANONYMOUS,
MAP_ANON, MAP_FILE, MAP_32BIT, MAP_POPULATE,
MAP_NONBLOCK
200. Mapped Memory
We need to put something in the file so that the file is created.
Even after closing the file, you still have access to it.
201. Mapped Memory
- On exit, the mapped memory will be automatically unmapped.
- You can call munmap() to unmap the memory yourself.
- Calling msync() will cause the buffers to be flushed to disk.
- Processes must coordinate access to the shared file.
- You can use semaphores to synchronize access to the file.
- Using fcntl() and file locks, you can simulate MUTEX operation on
the file.
202. Locking files
- Simultaneous access to a file must be managed.
- You can use fcntl() to perform various actions on a file.
- To acquire a lock without blocking, to test for a lock, or to block
until a lock can be acquired, you should use the F_SETLK, F_GETLK and
F_SETLKW commands respectively.
- To perform locking on a file, you must declare a struct flock in your code.
- When a file is locked, it is still accessible, and programs should check for
locks on files separately (e.g. using fcntl()).
203. Locking files
int fcntl ( int fd, int cmd, struct flock * lock );
- fd: the file descriptor to perform cmd on.
- cmd: the operation you want to perform on the file. It could be
lots of things; to acquire, release and test for locks, it could be
F_GETLK, F_SETLK or F_SETLKW.
- lock: the struct flock holds the lock information. It has many
members; l_type defines the lock type (read lock, write lock or
unlock).
- On error, -1 is returned and errno is set.
204. Locking files
We can use struct flock to lock a file. It’s a read lock; multiple
processes may put a read lock on a file.
205. Pipes
- A pipe is a communication device that permits unidirectional
communication.
- The first data written into the pipe is the first data read.
- If the writer writes faster than the reader reads and the pipe is full,
the writer blocks.
- If the reader tries to read an empty pipe, it blocks.
- You can create pipes using pipe().
206. Pipes
int pipe ( int filedes[2] );
- filedes: an array of size 2, which receives the read and write file
descriptors.
- On success, 0 is returned; on error, -1 is returned and errno is set.
- pipe() stores the reading file descriptor in array position 0 and the
writing file descriptor in position 1.
- The read and write file descriptors are available only in the calling
process and its children.
- You can also use pipes to communicate between threads in a process.
207. Pipes
In the writer function, we put data
into the file descriptor returned by
pipe()
In the reader function, we read data
until there is no more data.
208. Pipes
In child process we call the reader
function. And in parent process we
call the writer function.
209. Pipes
int dup2 (int oldfd, int newfd);
- oldfd: the file descriptor you want to duplicate.
- newfd: the new file descriptor into which the old one is copied.
- On success, the new descriptor is returned; on error, -1.
- Equated file descriptors share the same file position.
211. Pipes
FILE * popen( const char * command, const char * type)
- command: the command you wish to execute.
- type: might be “w” or “r”, for writing or reading.
- Returns a stream connected to the created process’s stdin (or
stdout).
- You can use popen() to send data to or receive data from a program
running in a subprocess.
- After closing the stream (using pclose()), pclose() waits for the child
process to terminate.
213. Named Pipes
int mkfifo (const char * pathname, mode_t mode)
- pathname: the name and location of the pipe file in the file system.
- mode: permission flags.
- Returns 0 on success and -1 on error.
- You can access a named pipe like an ordinary file.
- One program must open it for writing and another for reading.
215. Message Queue
- Another IPC mechanism in Linux.
- Implemented in both the SYS V and POSIX variants; its structure is
somewhat like a pipe (FIFO).
- A process initializes a message queue, and that process or others can
put messages in this queue, knowing the MSQID of the message queue.
- After finishing the job, one process should deallocate the queue.
- If you don’t remove a message queue, it will remain even after process
termination.
- You can use the ipcs command to view the current message queues.
216. Message Queue
int msgget (key_t key, int msgflg);
- key: the message queue key to create or to connect to. All key
features are the same as in the other IPC mechanisms.
- msgflg: the flags (and permissions) to use for the action
(IPC_CREAT, IPC_EXCL, …).
- Returns the message queue ID on success and -1 on error.
- You can connect to a queue, and also create one, using msgget().
217. Message Queue
int msgsnd (int msqid, const void * msgp, size_t msgsz,
int msgflg);
- msqid: the message queue ID to send messages through.
- msgp: a pointer to the struct you are going to send.
- msgsz: the size of the payload (the mtext member of your struct).
- msgflg: flags related to the message and IPC actions.
- Returns zero on success and -1 on error.
- The message you are going to send should be a struct containing two
members: mtype and mtext.
218. Message Queue
int msgrcv (int msqid, const void * msgp, size_t msgsz,
long msgtyp, int msgflg);
- msqid: the message queue ID to receive messages from.
- msgp: a pointer to the struct the received message is placed in.
- msgsz: the size of the payload (the mtext member of your struct).
- msgtyp: the mtype of the desired message, or 0 for the first
message, or a negative number.
- msgflg: flags related to the message and IPC actions.
- Returns the number of bytes copied on success and -1 on error.
- If the message is bigger than the size given here, the truncated
part of the message can be lost (MSG_NOERROR flag).
219. Message Queue
int msgctl (int msqid, int cmd, struct msqid_ds * buf);
- msqid: the message queue ID to operate on.
- cmd: the command you want to perform on this msgQ (like the
other IPC commands).
- buf: a pointer to a struct in which you want the returned data
placed.
- Returns zero, or the msqid, or an index on success, and -1 on
error.
- The msqid_ds type is described in sys/msg.h and contains
information about the desired message queue.
220. Sockets
- A socket is a communication device that permits bidirectional
communication.
- A socket can be used between two processes on the same machine or
between local and remote processes.
- Data transfer might be connection-oriented or connectionless
(connection style or datagram style).
- The protocol specifies how the data is transmitted.
- In the local namespace, a socket address is an ordinary file name.
- Reading from and writing to sockets is performed like files (read,
write).
221. Sockets
- The main system calls used in socket programming are:
- socket(): used to request a socket descriptor.
- connect(): used to connect to another socket.
- listen(): opens a port and configures the socket to accept
incoming connections.
- bind(): assigns an address and other information to the socket.
- close(): closes a file descriptor (in this case, a socket descriptor).
- accept(): accepts an incoming connection and assigns a new
socket to it.
222. Sockets
int socket (int domain, int type, int protocol);
- domain: the communication domain (aka namespace); selects the
protocol family which will be used.
- type: the socket type (aka communication style); specifies the
communication semantics.
- protocol: specifies a particular protocol to be used.
- Returns a socket descriptor (a file descriptor) on success and -1
on error (errno is set).
- Usually, for each domain-type combination there is only one suitable
protocol.
- In the IPV4 (PF_INET) domain with a connection-oriented type
(SOCK_STREAM), we usually pass 0 to select the default protocol
(TCP) for data transmission.
223. Sockets
- Currently these are the known domains to use:
•PF_UNIX, PF_LOCAL: used for local communication.
•PF_INET: IPV4 Internet protocols.
•PF_INET6: IPV6 Internet protocols.
•PF_IPX: IPX, the Novell protocols.
•PF_NETLINK: kernel/user interface device.
•PF_X25: ITU-T X.25 / ISO-8208 protocol.
•PF_AX25: amateur radio AX.25 protocol.
•PF_ATMPVC: access to raw ATM PVCs.
•PF_APPLETALK: AppleTalk protocols.
•PF_PACKET: low-level packet interface.
224. Sockets
- Currently these are the known types to use:
•SOCK_STREAM: connection-oriented, with support for out-of-band
(OOB) data.
•SOCK_DGRAM: datagram (connectionless transmission).
•SOCK_SEQPACKET: sequenced packets; like STREAM but not the same.
•SOCK_RAW: raw network protocol access.
•SOCK_RDM: reliable datagram, but no guarantee of ordering.
- You can check the supported protocols (the third argument of
socket()) in /etc/protocols.
225. Sockets
int connect(int sockfd, const struct sockaddr * serv_addr,
socklen_t addrlen);
- sockfd: the socket file descriptor that socket() has already
returned.
- serv_addr: the destination address to connect to, using the
previous argument (sockfd).
- addrlen: specifies the size of serv_addr.
- Returns zero on success and -1 on error (errno is set).
- The format of the address in serv_addr is determined by the
namespace of the socket sockfd.
227. Sockets
int listen (int sockfd, int backlog);
- sockfd: a socket descriptor that specifies which socket you are
going to listen on.
- backlog: defines the maximum length the queue of pending
connections may grow to.
- Returns zero on success and -1 on error (errno is set).
- The listen() call applies only to sockets of type SOCK_STREAM or
SOCK_SEQPACKET.
- In client-server programming, listen() is used on the server side.
228. Sockets
int bind (int sockfd, const struct sockaddr * my_addr,
socklen_t addrlen);
- sockfd: a socket descriptor that specifies which socket you are
going to assign the local address structure to.
- my_addr: the local address structure; its format depends on the
protocol family.
- addrlen: specifies the size of my_addr.
- Returns zero on success and -1 on error (errno is set).
- It’s normally necessary to assign a local address using bind() before a
SOCK_STREAM socket may receive connections.
229. Sockets
int accept(int sockfd, struct sockaddr * addr,
socklen_t * addrlen);
- sockfd: the main socket descriptor, which was returned by
socket().
- addr: a pointer to a sockaddr structure, which is filled with the
address of the peer socket.
- addrlen: initially contains the size of the addr structure; on
return it will contain the actual size.
- Returns a new socket descriptor on success and -1 on error
(errno is set).
- accept() is called on the server side, once for each client
connection.
230. Sockets
- When a server-client communication is established, the diagram looks
like:
server: socket() → bind() → listen() → accept() → send(), receive() → close()
client: socket() → connect() → send(), receive() → close()
231. Sockets
Here, we are creating a new connection-oriented socket to use over
the net. This protocol family (PF_INET) has its own structure,
described in ip(7). We must fill the structure properly.
(Note: add slides about the constructor and destructor attributes,
and the other GCC function attributes.)
Signals are simply preprocessor definitions that represent positive
integers; that is, every signal is also associated with an integer
identifier. The name-to-integer mapping for the signals is
implementation-dependent and varies among Unix systems, although the
first dozen or so signals are usually mapped the same way (SIGKILL is
infamously signal 9, for example). A good programmer will always use a
signal’s human-readable name, and never its integer value.
assert() calls abort(), and abort() sends the calling program the SIGABRT signal.
With signal(), if you want to check a signal’s previous disposition, you have
to set it at the same time, which is bad; better functions (such as
sigaction()) are available.
Interdependent: successfully changing any one of these values via the
relevant calls depends on the required conditions holding for the
other values.
Check the following /proc values for a zombie process…
The session leader’s PID is used as the session ID.
Migrating a process from one CPU to another is expensive.
Load balancing in SMP; LWP (lightweight processes).
[latency], [jitter], [real time]
You can set the Linux scheduling policy.
Explain the Linux scheduler…
A truncated message will be lost; using msgtyp, you can specify a
particular message type (messages whose mtype matches), or 0 for the
first message.