The document discusses processes and process scheduling in operating systems. It defines a process as a program in execution that changes state as it runs. Process information is stored in a process control block. Processes move between ready, running, waiting, and terminated states. The operating system uses long-term and short-term schedulers to select processes from queues such as the ready queue and device queues. Context switching occurs when the CPU switches between processes. Processes can cooperate through communication and synchronization using message passing or shared memory.
The document summarizes key aspects of process management in operating systems. It discusses the process manager and its role in managing processes, threads, and resources. It describes process descriptors that contain information about processes and threads. Process states like running, blocked, and ready are also summarized. The document outlines reusable and consumable resources and how resource managers allocate and track resources. Process hierarchies and generalizing process management policies are briefly covered at the end.
This document discusses interprocess communication (IPC) and message passing in distributed systems. It covers key topics such as:
- The two main approaches to IPC - shared memory and message passing
- Desirable features of message passing systems like simplicity, uniform semantics, efficiency, reliability, correctness, flexibility, security, and portability
- Issues in message passing IPC like message format, synchronization methods (blocking vs. non-blocking), and buffering strategies
The document discusses processes and operations on processes in an operating system. It defines a process as a program in execution that includes a program counter, stack, data section, and heap. A process goes through various states like running, ready, waiting, and terminated. The operating system uses process control blocks to store process information and schedules processes using queues and schedulers. It also covers process creation, where a parent process spawns child processes, and process termination when a process exits.
This document discusses shared memory in Linux, including creating shared memory segments using shmget, attaching and detaching shared memory using shmat and shmdt, controlling shared memory segments using shmctl, and using mmap to map files to shared memory. It provides details on the system calls used for shared memory and examples of creating and using shared memory between processes.
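A rough Python analogue of that System V workflow, using the standard library's `multiprocessing.shared_memory`: creating a segment stands in for `shmget`/`shmat`, attaching by name for `shmat` in a second process, `close` for `shmdt`, and `unlink` for `shmctl` with `IPC_RMID`. This is a sketch of the idea, not the C API the document describes:

```python
from multiprocessing import shared_memory

# Create a 32-byte segment (roughly shmget + shmat in one step).
creator = shared_memory.SharedMemory(create=True, size=32)
creator.buf[:5] = b"hello"

# A second handle attaches to the same segment by name (roughly shmat).
attached = shared_memory.SharedMemory(name=creator.name)
data = bytes(attached.buf[:5])

attached.close()   # detach, like shmdt
creator.close()
creator.unlink()   # remove the segment, like shmctl(..., IPC_RMID, ...)
```

Both handles see the same bytes without any copying between processes, which is the point of the mechanism.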
Supporting Time-Sensitive Applications on a Commodity OS (NamHyuk Ahn)
1) The document discusses Time-Sensitive Linux (TSL), which aims to support time-sensitive applications on commodity operating systems like Linux.
2) TSL improves kernel latency through an accurate timer mechanism called firm timers, a responsive kernel using lock-breaking preemption, and effective scheduling using proportion-based and priority-based algorithms.
3) Evaluation shows TSL reduces both timer latency and preemption latency to under 1 ms, improving synchronization of media playback under load compared to standard Linux.
Message passing between processes can use either synchronous or asynchronous communication. With synchronous communication, sending and receiving blocks until the operation completes, while asynchronous does not block. There are issues to consider with blocking sends and receives, such as processes crashing or messages getting lost. Non-blocking receives use polling or interrupts to check for incoming messages. Reliable message passing protocols use acknowledgments and retransmissions to ensure reliable delivery.
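The blocking versus non-blocking distinction can be sketched in Python, with a thread-safe queue standing in for the message channel; the names and timings here are illustrative, not from the document:

```python
import queue
import threading
import time

mailbox = queue.Queue()

def sender():
    time.sleep(0.05)        # simulate a remote sender arriving later
    mailbox.put("ping")

threading.Thread(target=sender).start()

# Non-blocking receive: poll, and do other work while the queue is empty.
polls = 0
while True:
    try:
        msg = mailbox.get_nowait()   # returns immediately, raises if empty
        break
    except queue.Empty:
        polls += 1                   # message not here yet; keep working
        time.sleep(0.01)

# Blocking receive: the call itself waits until a message is available.
mailbox.put("pong")
msg2 = mailbox.get()
```

A real protocol would add acknowledgments and retransmission on top of this, as the summary notes.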
1. A distributed system is a collection of independent computers that appears as a single coherent system to users. It is organized as middleware that extends over multiple machines.
2. Transparency in distributed systems hides where resources are located and whether they move, are replicated, or are shared concurrently. It also hides failures and whether resources reside in memory or on disk.
3. Scaling techniques include dividing work like form checking between servers and clients, partitioning namespaces, and distributing data and services across machines.
IPC (Inter-Process Communication) with Shared Memory (HEM DUTT)
This document presents an example of using shared memory for inter-process communication (IPC) between a server and client process. It shows the implementation of a server that creates a shared memory segment, writes data to it, and waits for the client to read from it. The client attaches to the shared memory, reads the data, and writes a character to indicate it has read the segment. The shared memory allows the processes to communicate data without copying.
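A minimal sketch of that server/client handshake, using Python threads and `multiprocessing.shared_memory` in place of the document's System V calls. The segment layout (byte 0 = state flag, byte 1 = message length, payload from byte 2) is an assumption made for this illustration:

```python
import threading
from multiprocessing import shared_memory

def run_demo():
    # Server side: create the segment. Byte 0 is a state flag
    # (0 = empty, 1 = data ready, 2 = read by client), byte 1 the length.
    seg = shared_memory.SharedMemory(create=True, size=64)
    received = {}

    def client(name):
        shm = shared_memory.SharedMemory(name=name)  # attach by name
        while shm.buf[0] != 1:                       # wait for data-ready flag
            pass
        n = shm.buf[1]
        received["msg"] = bytes(shm.buf[2:2 + n])
        shm.buf[0] = 2                               # signal "read" back
        shm.close()                                  # detach

    t = threading.Thread(target=client, args=(seg.name,))
    t.start()
    msg = b"hello from server"
    seg.buf[1] = len(msg)
    seg.buf[2:2 + len(msg)] = msg
    seg.buf[0] = 1                                   # publish: data ready
    t.join()
    ack = seg.buf[0]
    seg.close()
    seg.unlink()                                     # remove the segment
    return received["msg"], ack

msg_out, ack = run_demo()
```

The server writes the payload before raising the flag, so the client never observes a half-written message, mirroring the wait-for-read protocol the summary describes.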
GCMA: Guaranteed Contiguous Memory Allocator (SeongJae Park)
This document summarizes a presentation about GCMA, a Guaranteed Contiguous Memory Allocator for Linux kernels. GCMA aims to provide fast and guaranteed contiguous memory allocation while maintaining good memory utilization. It designates "discardable" pages used by frontswap and clean cache as secondary clients that can be quickly vacated from the reserved area. Evaluations show GCMA provides significantly faster allocation latency and system performance compared to CMA, especially under memory pressure or fragmentation. The presenter is working to upstream GCMA code and provide further evaluation results.
Multi-threaded processor architectures can improve parallelism at both the instruction-level and thread-level. Simultaneous multi-threading (SMT) allows multiple threads to issue and execute instructions simultaneously by dynamically sharing processor resources. SMT reduces underutilization of functional units and improves performance over multiprocessors. Multi-threaded designs are well-suited for digital signal processing applications that can benefit from parallel execution at multiple levels. Examples of multi-threaded real-time operating systems that support parallel DSP applications are discussed.
This document discusses real-time solutions in Linux. It explains that real-time systems focus on determinism, ensuring events and timing are known. There are several approaches to real-time Linux, including dual kernel systems like RTLinux that run a real-time kernel alongside Linux, and PREEMPT_RT kernels that modify Linux for lower latencies. Popular dual kernel options are RTAI and Xenomai, which provide real-time cores on top of Linux. RTAI focuses on x86 while Xenomai aims to facilitate porting and supports multiple personality "skins". The document provides references for further information on these real-time Linux solutions.
Analysis of interrupt latencies in a real-time kernel (Gabriele Modena)
This document analyzes interrupt latencies in a real-time kernel. It discusses how real-time operating systems must ensure operations complete within fixed deadlines to maintain predictability. When dealing with device drivers, this implies managing race conditions and fulfilling temporal constraints. The document evaluates the performance of the Linux real-time preempt patch using cyclictest to measure latency and compares its performance to the standard Linux kernel.
Operating Systems Part II - Process Scheduling, Synchronisation & Deadlock (Ajit Nayak)
This document discusses interprocess communication and CPU scheduling in operating systems. It covers two models of interprocess communication: shared memory and message passing. It also describes different CPU scheduling algorithms like first-come first-served (FCFS), shortest job first (SJF), priority scheduling, and round-robin. The key aspects of process synchronization, producer-consumer problem, and implementation of message passing are explained.
This document discusses distributed operating systems and CPU scheduling. It covers basic concepts of CPU scheduling like processes, context switching, and dispatching. It then discusses different scheduling algorithms like first-come first-served, shortest job first, priority scheduling, and round robin. It also covers multiple processor scheduling, real-time scheduling, and algorithm evaluation. Deadlocks are discussed including characterization, handling methods like prevention, avoidance, and detection. Memory management techniques like swapping, paging, segmentation and their implementation are also summarized.
The document discusses interprocess communication and distributed systems, noting that distributed systems rely on defined communication protocols like interprocess communication using message passing over networks. It also covers topics like overhead in communication systems, different data storage and transfer formats between systems like XML, and Java object serialization for complex data transfer. Effective communication protocols aim to minimize overhead while maximizing throughput.
The Linux Scheduler: a Decade of Wasted Cores (yeokm1)
The talk I gave at Papers We Love #20 (Singapore) about the academic paper "The Linux Scheduler: a Decade of Wasted Cores".
The video of this talk can be found here: https://engineers.sg/v/758
Here are some relevant links:
Paper: http://www.ece.ubc.ca/~sasha/papers/eurosys16-final29.pdf
Reference Slides: http://www.i3s.unice.fr/~jplozi/wastedcores/files/extended_talk.pdf
Reference summary: https://blog.acolyer.org/2016/04/26/the-linux-scheduler-a-decade-of-wasted-cores/
Improving Real-Time Performance on Multicore Platforms using MemGuard (Heechul Yun)
This document summarizes research on improving real-time performance on multicore platforms using MemGuard. It begins with an introduction to challenges in shared memory systems like unpredictable memory performance. It then presents MemGuard, which guarantees minimum memory bandwidth for each core through bandwidth reservation and best-effort sharing. A case study shows MemGuard improved real-time performance of a video capture task running with an X-server on an Intel Xeon system, eliminating deadline misses. The document concludes MemGuard can efficiently improve real-time performance but has limitations like coarse-grained enforcement that future work could address.
The document discusses the Linux kernel memory model (LKMM). It provides an overview of LKMM, including that it defines ordering rules for the Linux kernel due to weaknesses in the C language standard and need to support multiple hardware architectures. It describes ordering primitives like atomic operations and memory barriers provided by LKMM and how the LKMM was formalized into an executable model that can prove properties of parallel code against the LKMM.
There are several mechanisms for inter-process communication (IPC) in UNIX systems, including message queues, shared memory, and semaphores. Message queues allow processes to exchange data by placing messages into a queue that can be accessed by other processes. Shared memory allows processes to communicate by declaring a section of memory that can be accessed simultaneously. Semaphores are used to synchronize processes so they do not access critical sections at the same time.
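The semaphore use described here, keeping processes out of a critical section simultaneously, can be sketched with Python threads (a binary semaphore guarding a shared counter; the counts are illustrative):

```python
import threading

counter = 0
sem = threading.Semaphore(1)  # binary semaphore guarding the critical section

def worker():
    global counter
    for _ in range(10_000):
        with sem:             # acquire on entry, release on exit
            counter += 1      # critical section: read-modify-write

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000
```

Without the semaphore, the four read-modify-write sequences can interleave and lose updates; with it, the final count is always exactly 4 × 10,000.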
MemGuard: Memory Bandwidth Reservation System for Efficient Performance Isola... (Heechul Yun)
This document describes MemGuard, an operating system mechanism for providing efficient per-core memory performance isolation on commercial off-the-shelf hardware. MemGuard uses memory bandwidth reservation to guarantee each core's minimum memory bandwidth. It then performs predictive bandwidth donation and on-demand reclaiming to redistribute excess bandwidth, improving overall utilization. Evaluation shows MemGuard isolates performance and eliminates over 50% slowdown of a foreground real-time task due to interference, while maximizing throughput via bandwidth sharing.
This document discusses CPU scheduling in operating systems. It begins by introducing CPU scheduling and describing scheduling criteria such as CPU utilization, throughput, turnaround time, waiting time and response time. It then explains common scheduling algorithms like first-come first-served (FCFS), shortest job first (SJF), priority scheduling, and round robin (RR). The document also covers more advanced topics such as multilevel queue scheduling, real-time scheduling, thread scheduling, multiprocessor scheduling and examples of scheduling in Linux, Windows and Solaris.
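The FCFS versus SJF trade-off on the waiting-time criterion can be shown in a few lines; the burst times below are an arbitrary example (all jobs arriving at t=0), not taken from the document:

```python
def avg_wait(bursts):
    """Average waiting time when jobs run in the given order (all arrive at t=0)."""
    wait = elapsed = 0
    for b in bursts:
        wait += elapsed      # this job waited for everything scheduled before it
        elapsed += b
    return wait / len(bursts)

bursts = [6, 8, 7, 3]              # CPU burst times in ms
fcfs = avg_wait(bursts)            # first-come first-served: arrival order
sjf = avg_wait(sorted(bursts))     # shortest job first: run shortest first
print(fcfs, sjf)  # 10.25 7.0
```

Sorting by burst length is exactly why SJF is provably optimal for average waiting time when all jobs are available at once.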
This document discusses various techniques for improving cache performance, including reducing the miss rate and miss penalty. It describes reducing misses through larger block sizes, higher associativity, victim caches, and prefetching. It also covers reducing miss penalties via read priority on misses, non-blocking caches, and adding a second level cache. The goal is to improve CPU performance by lowering the miss rate, miss penalty, and time to hit in the cache.
This document discusses data replication in distributed systems. It describes two types of replication: synchronous and asynchronous. For asynchronous replication, there is primary site asynchronous and peer-to-peer asynchronous. The document then focuses on the lazy master framework, outlining components like the primary/secondary copies, ownership, propagation, refreshment, and configuration. It provides details on the system architecture involving log monitors, propagators, receivers, and refreshers. Finally, it covers topics like failure and recovery handling.
Sector is a distributed file system that stores files on local disks of nodes without splitting files. Sphere is a parallel data processing engine that processes data locally using user-defined functions like MapReduce. Sector/Sphere is open source, written in C++, and provides high performance distributed storage and processing for large datasets across wide areas using techniques like UDT for fast data transfer. Experimental results show it outperforms Hadoop for certain applications by exploiting data locality.
Sector is a distributed file system that stores files on local disks of nodes without splitting files. Sphere is a parallel data processing engine that processes data locally using user-defined functions like MapReduce. Sector/Sphere is open source, supports fault tolerance through replication, and provides security through user accounts and encryption. Performance tests show Sector/Sphere outperforms Hadoop for sorting and malware analysis benchmarks by processing data locally.
This document discusses the key components of a file system and disk drive architecture. It covers disk structure, scheduling, management, and swap space. It describes concepts like disk formatting, boot blocks, bad blocks, and swap space location. The document also provides a high-level overview of the Windows 2000 operating system, covering its architecture, kernel, processes and threads, exceptions, and subsystems.
This document discusses the key components of a file system and disk drive operation. It covers disk structure, scheduling, management, and swap space. Disks are made up of logical blocks that are mapped to physical sectors. Scheduling algorithms like FCFS, SSTF, SCAN and C-SCAN are used to optimize disk head movement. Disk formatting partitions disks and writes file system structures. Swap space is used for virtual memory paging.
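The head-movement savings from a sweep-based scheduler can be computed directly; the request sequence below is the classic textbook example (head starting at cylinder 53), and the LOOK variant shown reverses at the last pending request rather than the disk edge:

```python
def fcfs_movement(requests, head):
    """Total head movement servicing requests in arrival order."""
    total = 0
    for r in requests:
        total += abs(head - r)
        head = r
    return total

def look_movement(requests, head):
    """LOOK (SCAN that reverses at the last request): sweep up, then down."""
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    total, pos = 0, head
    for r in up + down:
        total += abs(pos - r)
        pos = r
    return total

reqs = [98, 183, 37, 122, 14, 124, 65, 67]   # pending cylinder requests
f = fcfs_movement(reqs, head=53)
l = look_movement(reqs, head=53)
print(f, l)  # 640 299
```

Servicing requests in sweep order more than halves the total seek distance here, which is why SCAN-family algorithms dominate FCFS in practice.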
.NET Framework Key Components - By Senthil Chinnakonda (talenttransform)
This document provides information on the history and key components of the .NET Framework as well as new features introduced in version 4.5. The framework began with version 1.0 in 2002, and the latest version is 4.5.2. The .NET Framework includes the Common Language Runtime (CLR), which manages code execution, the Common Type System (CTS), which defines supported data types, and garbage collection for automatic memory management. Version 4.5 introduced improvements such as larger array support, background garbage collection, and regular expression matching timeouts. The document also summarizes WCF concepts like contracts and bindings, and describes the transport, message, and transport-with-message-credential security modes.
The document discusses operating system layers and components. It describes how kernels and servers cooperate on invocation-related tasks such as communication, scheduling, and resource management. Kernels protect physical resources through memory management and address spaces, and processes have separate user and kernel address spaces so resources are accessed safely. Threads and processes are the core units managed by the operating system.
1. The document discusses using PHP and MySQL to manage hydrodynamic flood models. PHP scripts are used to manipulate flood model meshes and results from the command line and present visualizations.
2. A PHP class library has been developed over 7 years to manipulate spatial data and store it in a MySQL database. PHP acts as an interface between command line tools to analyze and visualize flood modeling outputs.
3. Examples show how PHP can be used from the command line to generate meshes, run flood models, and create 3D visualizations of results to effectively summarize large, complex spatial datasets.
The document describes an alarm system toolkit that includes programs and procedures for archiving, maintaining, and retrieving control system data from an alarm server. However, there is no warranty for the quality or performance of the programs. The document also provides an overview of the alarm system components, including the Java Message Service (JMS) for communication between clients and servers, a relational database for storing configuration and alarm state information, and tools for configuring and viewing alarms.
This document provides an overview of the Red Hat Cluster Suite, which delivers high availability solutions. It discusses the Cluster Manager technology, which provides application failover capability to make applications highly available. Cluster Manager uses shared storage, service monitoring, and communication between servers to detect failures and restart applications on healthy nodes. It ensures data integrity through techniques like I/O barriers, quorum partitions, and active/passive or active/active application configurations across nodes.
The implementation phase involves materializing ideas from analysis and design into the final solution. The authors implemented rejuvenation on three domains using warm and cold methods based on time and prediction policies. They also simulated rejuvenation of failing nodes and implemented Petri net modeling. KVM was selected as the virtualization platform and run on CentOS. C was used as the programming language and key libraries were included. NFS was configured to enable sharing of VM images between servers to allow live migration.
Introduction to OS LEVEL Virtualization & ContainersVaibhav Sharma
Â
This Presentation contains information about os level virtualization and Containers internals. It has used other material on slide share which is referenced in Notes of PPT
Sofware architure of a SAN storage Control SystemGrupo VirreySoft
Â
The document describes the software architecture of a storage control system that uses a cluster of Linux servers to provide storage virtualization and management in a heterogeneous storage area network (SAN) environment. The storage control system, also called the "virtualization engine", aggregates storage resources into a common pool and allocates storage to hosts. It enables advanced functions like fast-write caching, point-in-time copying, remote copying, and transparent data migration. The system is built using commodity hardware and open source software to reduce costs compared to traditional proprietary storage controllers.
The document proposes a cloud environment for backup and data storage using remote servers that can be accessed through the Internet. It involves using the disks of cluster nodes as a global storage system with PVFS2 parallel file system for improved performance. The proposed system aims to increase data availability and reduce information loss by storing data on a private cloud using PVFS2 and developing a multiplatform client application for fast data transfer. It allows reuse of existing infrastructure to reduce costs and gives users experience of managing a private cloud.
Sector - Presentation at Cloud Computing & Its Applications 2009Robert Grossman
Â
This is a presentation about Sector that I gave at the Cloud Computing and Its Applications (CCA 09) Workshop that took place in Chicago on October 20, 2009. Sector is an open source cloud computing framework designed for data intensive computing.
This document discusses character devices in Linux. It explains that character devices have three main registers: a data register for reading and writing small amounts of data, an action register that triggers a physical action when written to but returns zero when read, and a status register that provides information when read but is a no-op when written to. It then provides an overview of input/output in operating systems generally.
This document provides an overview of distributed key-value stores and Cassandra. It discusses key concepts like data partitioning, replication, and consistency models. It also summarizes Cassandra's features such as high availability, elastic scalability, and support for different data models. Code examples are given to demonstrate basic usage of the Cassandra client API for operations like insert, get, multiget and range queries.
Similar to AdvFS Storage (domain) Threshold Alerts (20)
Can Bitcoin Be Palestine’s Currency of Freedom?Justin Goldberg
Â
This document summarizes an interview with a Bitcoin user named Uqab living in Gaza. Uqab describes the dire economic and living conditions in Gaza due to ongoing conflict and blockade. He says that Bitcoin has allowed some Gazans to get out of poverty by gradually investing and has given them a way to send and receive money internationally more easily compared to traditional financial systems. Uqab believes that Bitcoin represents an opportunity for financial freedom and that more Palestinians will adopt the technology. The document provides historical context on Gaza's economic struggles over decades due to policies of forced reliance on Israel and later isolation.
The document summarizes the author's journey between different operating systems after losing faith in BeOS. He first tried Windows but found it boring. He then switched to Linux but struggled with stability issues and inconsistent user experience between applications. This led him to try Mac OS X, where he was able to rediscover the joy of computing and found an operating system that felt coherent and like a home environment.
This document provides instructions for building a VCX 7.1 system disk using an automated script. It begins by discussing what is new in VCX 7.1, including new hardware support and features. It then describes booting a Knoppix CD to initialize the disk and configure basic networking before running the installation script. Detailed steps are provided for installing VCX Linux, applications, and configuring the system. Appendix A describes building a special "FRU disk" that can be configured to different software versions in the field.
3com® University Instructor-Led Training A GUIDE TO TECHNICAL COURSES Tech - ...Justin Goldberg
Â
This document provides information on technical training courses offered by 3Com University to help customers master the technical aspects of 3Com products. It lists over 20 instructor-led courses ranging from 1 to 4 days covering topics such as installing and configuring 3Com switches, routers, firewalls, wireless networks, and IP telephony. The courses provide hands-on experience under expert guidance to help technicians gain practical skills with 3Com technologies.
This document describes the design for implementing property lists and access control lists (ACLs) in the AdvFS file system on HP-UX. It proposes storing small ACLs directly in file metadata, while larger ACLs are stored using a property list infrastructure. Property list files will initially use a simple flat file layout for ease of implementation, and may later transition to a more scalable B-tree indexed layout. The design aims to provide an efficient and scalable way to store up to 1024 property list elements and ACL entries per file.
The document discusses the on-disk structures used by AdvFS, including bitfiles, mcells, extent maps, tags, directory files, and fragments. It describes how AdvFS uses a two-level implementation with the File Access System (FAS) and Bitfile Access System (BAS). The BAS uses bitfiles, mcells, extent maps, and tag files to store and locate file data and metadata on disk.
The document discusses various issues encountered when using the new Salesforce system for customer support tickets. It notes defects related to inability to see service contract statuses, find associated accounts, input dispatch notes, transfer tickets between queues, and more. Workarounds and status updates are provided for some of the issues.
Streaming AllWorx Call Detail Records To External DevicesJustin Goldberg
Â
The Allworx server will listen on a TCP port on the LAN interface for connections. While a connection exists to the port, any new CDR records will be sent to the connecting client.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Fueling AI with Great Data with Airbyte WebinarZilliz
Â
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
Â
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Â
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Digital Banking in the Cloud: How Citizens Bank Unlocked Their MainframePrecisely
Â
Inconsistent user experience and siloed data, high costs, and changing customer expectations – Citizens Bank was experiencing these challenges while it was attempting to deliver a superior digital banking experience for its clients. Its core banking applications run on the mainframe and Citizens was using legacy utilities to get the critical mainframe data to feed customer-facing channels, like call centers, web, and mobile. Ultimately, this led to higher operating costs (MIPS), delayed response times, and longer time to market.
Ever-changing customer expectations demand more modern digital experiences, and the bank needed to find a solution that could provide real-time data to its customer channels with low latency and operating costs. Join this session to learn how Citizens is leveraging Precisely to replicate mainframe data to its customer channels and deliver on their “modern digital bank” experiences.
"Frontline Battles with DDoS: Best practices and Lessons Learned", Igor IvaniukFwdays
Â
At this talk we will discuss DDoS protection tools and best practices, discuss network architectures and what AWS has to offer. Also, we will look into one of the largest DDoS attacks on Ukrainian infrastructure that happened in February 2022. We'll see, what techniques helped to keep the web resources available for Ukrainians and how AWS improved DDoS protection for all customers based on Ukraine experience
Northern Engraving | Nameplate Manufacturing Process - 2024Northern Engraving
Â
Manufacturing custom quality metal nameplates and badges involves several standard operations. Processes include sheet prep, lithography, screening, coating, punch press and inspection. All decoration is completed in the flat sheet with adhesive and tooling operations following. The possibilities for creating unique durable nameplates are endless. How will you create your brand identity? We can help!
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
Â
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Connector Corner: Seamlessly power UiPath Apps, GenAI with prebuilt connectorsDianaGray10
Â
Join us to learn how UiPath Apps can directly and easily interact with prebuilt connectors via Integration Service--including Salesforce, ServiceNow, Open GenAI, and more.
The best part is you can achieve this without building a custom workflow! Say goodbye to the hassle of using separate automations to call APIs. By seamlessly integrating within App Studio, you can now easily streamline your workflow, while gaining direct access to our Connector Catalog of popular applications.
We’ll discuss and demo the benefits of UiPath Apps and connectors including:
Creating a compelling user experience for any software, without the limitations of APIs.
Accelerating the app creation process, saving time and effort
Enjoying high-performance CRUD (create, read, update, delete) operations, for
seamless data management.
Speakers:
Russell Alfeche, Technology Leader, RPA at qBotic and UiPath MVP
Charlie Greenberg, host
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
Â
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Dandelion Hashtable: beyond billion requests per second on a commodity serverAntonios Katsarakis
Â
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite efforts to optimize hashtables, that go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state-of-the-art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open-addressing and adopts a fully-featured and memory-aware closed-addressing design based on bounded cache-line-chaining. This design offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. In a commodity server and a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
Essentials of Automations: Exploring Attributes & Automation ParametersSafe Software
Â
Building automations in FME Flow can save time, money, and help businesses scale by eliminating data silos and providing data to stakeholders in real-time. One essential component to orchestrating complex automations is the use of attributes & automation parameters (both formerly known as “keys”). In fact, it’s unlikely you’ll ever build an Automation without using these components, but what exactly are they?
Attributes & automation parameters enable the automation author to pass data values from one automation component to the next. During this webinar, our FME Flow Specialists will cover leveraging the three types of these output attributes & parameters in FME Flow: Event, Custom, and Automation. As a bonus, they’ll also be making use of the Split-Merge Block functionality.
You’ll leave this webinar with a better understanding of how to maximize the potential of automations by making use of attributes & automation parameters, with the ultimate goal of setting your enterprise integration workflows up on autopilot.
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application...Alex Pruden
Â
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty, is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Â
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Â
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
Storage (Domain) Threshold Alerts
Design Specification
Version 0.3
NW
CASL
Building ZK3
110 Spit Brook Road
Nashua, NH 03062
Copyright (C) 2008 Hewlett-Packard Development Company, L.P.
Primary reviewers:
Design Specification Revision History
Version Date Changes
0.1 01/12/04 First draft for internal review
0.2 01/14/04 Addressed comments from review
0.3 01/29/04 Addressed DS PRT comments
1. Introduction
1.1 Overview
This project provides a means for administrators and/or the system
itself to respond when consumption of a filesystem's available
storage pool rises above or falls below a predetermined (threshold)
limit.
On Tru64 Unix via the AdvFS GUI, each AdvFS Domain and fileset could
optionally have a "free space alert" set. When free space used by a
given object crossed the alert threshold (based upon percentage in use)
an alert was issued. Upon issuance of an alert, a shell script
could optionally be executed.
1.2 Impacts
This project implements threshold management via AdvFS utilities and
threshold monitoring by the kernel with alerts generated in the form of
EVM events.
Because on HP-UX the user is not presented with the concept of
domains and filesets, even though the domain/fileset design still
exists under the covers, AdvFS on HP-UX will present thresholds as
Storage thresholds. A Storage threshold presented to the user will
be implemented as a domain threshold in the code. Thresholds will be
settable on original (non-snapset) Storages only, although any
storage consumed by a snapset comes from its parent filesystem's
storage pool and therefore counts toward that Storage's thresholds.
The original Tru64 Unix design implemented threshold functionality for
filesets as well. Although initially we will only be exposing domain
thresholds to the user (as Storage thresholds), the supporting code for
fileset thresholds will be defined in this document and implemented in
the coding stage as well.
As we still utilize a domain/fileset design in our AdvFS code, we will
be implementing kernel APIs, data structures, transaction FTX agent IDs
and RootDone records whose naming conventions follow the
domain/fileset model.
2. Proposed Design
2.1 High Level Design Overview
In this initial version of the threshold project, Storage(domain)
thresholds will be exposed to the user. Fileset thresholds will be
implemented as well, but not presented to the user via any API.
Possibly sometime in future versions, fileset thresholds will be made
available.
A threshold can be defined as follows:
A threshold consists of a limit, an interval, and an elapsed-time
value. The limit and interval are set by the user; the kernel is
responsible for maintaining the elapsed-time value.
The limit value is an integer that represents percent full for the
Storage. For a given filesystem (domain) or fileset, two threshold
limits may be set: an upper limit and a lower limit. These can be
thought of as high and low water marks.
The interval value is an integer that equals the number of minutes that
must elapse prior to generating subsequent EVM events. This is to
eliminate the possibility of a flood of EVM events being generated as
storage continues to be allocated or deallocated around the threshold
limit. Two points are worth noting. First, events are generated
only when a request for storage takes usage from below a threshold
limit to above it; further allocation above the limit generates no
event regardless of the event interval. Second, users may set the
interval value to zero, which relinquishes AdvFS's ability to
throttle a potential flood of EVM events should storage thrash
around the threshold limit.
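The event-generation rule above can be sketched as a small predicate.
This is an illustrative sketch only; the function and parameter names
are hypothetical, not actual AdvFS identifiers:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch of the event decision described above.
 * pct_before/pct_after: percent full before and after the request.
 * limit: upper threshold limit (0 = monitoring inactive).
 * interval_min: minimum minutes between events (0 = no throttling).
 * now_min/last_event_min: current time and time of last event, in
 * minutes. */
static bool
should_post_upper_event(uint32_t pct_before, uint32_t pct_after,
                        uint32_t limit, uint32_t interval_min,
                        uint64_t now_min, uint64_t last_event_min)
{
    if (limit == 0)
        return false;                   /* threshold inactive */
    if (!(pct_before < limit && pct_after >= limit))
        return false;                   /* only an actual crossing counts */
    if (interval_min != 0 && now_min - last_event_min < interval_min)
        return false;                   /* throttle repeated events */
    return true;
}
```

A request that merely grows usage already above the limit, or that
recrosses the limit within the interval, posts no event.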
The proposed design is divided into three parts. The first deals with
displaying threshold values, the second deals with managing threshold
values, and the third deals with monitoring for threshold crossing and
generating EVM events when conditions have been satisfied.
2.1.1 Displaying Threshold values
Storage threshold values can be viewed by users via the AdvFS utility
fsadm_advfs from the fsadm wrapper command. The following goes through
the high level design for viewing Storage thresholds. Remember that
these are the equivalent of Domain thresholds.
2.1.1.1 User interface
Storage threshold values can be viewed via the fsadm info command which
will access the kernel via the existing API msfs_get_dmnname_params().
Fileset threshold values will be retrievable using the existing API
msfs_fset_get_info() but will not be utilized in the initial version of
this project.
2.1.1.2 Kernel interface and syscall implementation
The libmsfs.so library interface makes a kernel syscall with the op-
type ADVFS_OP_GET_DMNNAME_PARAMS.
msfs_syscall_op_get_dmnname_params() will be the routine that gets
called for the operation. The threshold limit, event interval and time
of last event are copied into the user’s buffer from the in-memory
domainT structure.
2.1.2 Managing Threshold Values
User settable threshold values consist of both a threshold limit and an
event interval. A valid limit will be between 0 and 100, corresponding
to "percent full". A valid event interval will be between 0 and 10080,
corresponding to "minutes between EVM events". Intervals will be
settable in units of minutes or days, for example 5m or 3d. All
values are converted to minutes and handled by the kernel as such.
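The unit handling just described might look like the following. This
is an illustrative sketch with a hypothetical function name, not the
actual fsadm parsing code:

```c
#include <stdlib.h>

/* Illustrative parser for interval values such as "5m" or "3d",
 * returning minutes, or -1 for malformed input or a value outside
 * the valid range of 0..10080 minutes (one week). */
static long
parse_interval_minutes(const char *s)
{
    char *end;
    long v = strtol(s, &end, 10);

    if (end == s || v < 0)
        return -1;                  /* no digits, or negative */
    if (*end == 'd' && end[1] == '\0')
        v *= 24 * 60;               /* days -> minutes */
    else if (*end == 'm' && end[1] == '\0')
        ;                           /* already minutes */
    else if (*end != '\0')
        return -1;                  /* unknown unit */
    return v > 10080 ? -1 : v;
}
```

A bare number is accepted as minutes, matching the kernel's internal
representation.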
Thresholds are initialized at domain and fileset creation time to a
limit of 0. They remain inactive until activated by a System
Administrator by changing either the upper or lower limit to a valid
non-zero value. The following walks through the high level design for
managing each.
2.1.2.1 User interface
Storage thresholds will be managed via the fsadm -chfs utility and
will access the kernel via the new API advfs_set_dmnname_params().
The caller will have the ability to activate or deactivate threshold
monitoring for a given named, non-snap filesystem. If the caller wants
to activate threshold monitoring a valid filesystem name and threshold
limit must be given. This can be the upper and/or lower limit.
Providing an event interval is optional as a default value of 5 minutes
will be used if omitted by the caller. If the caller wishes to
deactivate threshold monitoring for a filesystem, fsadm -chfs would be
called with a valid Storage name and a threshold limit of 0 for both
the upper and lower values.
2.1.2.2 Kernel interface and syscall implementation
Storage (Domain) thresholds:
The libadvfs library interface makes a kernel syscall with the new
op-type ADVFS_OP_SET_DMNNAME_PARAMS.
advfs_syscall_op_set_dmnname_params() is the handling routine that gets
called for the operation.
Finally, bs_set_dmn_threshold_info() is called and the domain threshold
limits and event interval are copied into the in-memory domainT and
written out to disk in the BSR_DMN_MATTR record of the RBMT. This
record resides in the RBMT of the volume that holds the most current
Log for the domain.
Fileset thresholds:
The libadvfs library interface makes a kernel syscall with the op-type
ADVFS_OP_SET_BFSET_PARAMS. advfs_syscall_op_set_bfset_params() is the
handling routine that gets called for the operation.
Finally, bs_set_bfset_threshold_info() is called and the fileset
threshold limits and event interval are copied into the in-memory
bfSetT and written out to disk in the BSR_BFS_ATTR record for the
fileset. This record resides in the RBMT of the volume that holds the
most current Log for the domain.
2.1.3 Threshold monitoring / EVM event generation
2.1.3.1 Upper Limit
Storage(domain) and fileset threshold upper limits are checked anytime
additional on-disk storage is requested in bs_stg.c:add_stg(). When
new storage is requested, if the upper threshold limit would be crossed
by the request, and the event interval has been exceeded as compared to
the time of the last EVM threshold event, a rootdone operation is
attached to the parent transaction handling the request for additional
storage. When the parent transaction completes, the rootdone
operation is executed and an EVM event is generated announcing the
crossing of the threshold limit. The rootdone approach guarantees that
an EVM event will not be generated unless the request for additional
storage succeeds.
2.1.3.2 Lower limit
Storage(domain) and fileset threshold lower limits are checked anytime
on-disk storage is removed by file deletion in
bs_access.c:bs_close_one() and when files are truncated in
bs_stg.c:dealloc_stg(). When storage is being removed, if the lower
threshold limit would be crossed by the request, and the event interval
has been exceeded as compared to the time of the last EVM threshold
event, a rootdone operation is attached to the parent transaction
handling the storage deallocation. When the parent transaction
completes, the rootdone operation is executed and an EVM event is
generated announcing the crossing of the threshold limit. The rootdone
approach guarantees that an EVM event will not be generated unless the
storage deallocation succeeds.
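Both crossing checks can be sketched with a shared percent-full
helper. The names here are hypothetical; as described above, the real
checks live in bs_stg.c:add_stg(), bs_access.c:bs_close_one(), and
bs_stg.c:dealloc_stg():

```c
#include <stdbool.h>
#include <stdint.h>

/* Integer percent-full for a storage pool. */
static uint32_t
pct_full(uint64_t used_blks, uint64_t total_blks)
{
    return total_blks ? (uint32_t)((used_blks * 100) / total_blks) : 0;
}

/* Upward crossing: an allocation takes usage from below the upper
 * limit to at or above it (the add_stg() case). */
static bool
crossed_upper(uint64_t used, uint64_t req, uint64_t total,
              uint32_t limit)
{
    return limit != 0 && pct_full(used, total) < limit
                      && pct_full(used + req, total) >= limit;
}

/* Downward crossing: a deallocation takes usage from at or above the
 * lower limit to below it (the delete/truncate case). */
static bool
crossed_lower(uint64_t used, uint64_t freed, uint64_t total,
              uint32_t limit)
{
    return limit != 0 && pct_full(used, total) >= limit
                      && pct_full(used - freed, total) < limit;
}
```

When either predicate holds (and the event interval permits), the
rootdone operation carrying the EVM event is attached to the parent
transaction.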
3. Detailed Design Overview
The following describes the threshold project design in greater
detail. The approach will be to define new and changed data
structures, EVM events, FTX transaction types, and new functions for
the three general subsystems of the project followed by pseudo code of
the logic for each of these subsystems. Again, these are threshold
viewing, threshold management, and threshold monitoring. For the sake
of simplicity, threshold management and threshold viewing will be
combined in one subsection. First are listed the new and changed data
structures common to all three subsystems.
3.1 New or Changed Structures for Storage(domain)/fileset
Thresholds
3.1.1 bs_public.h:adv_threshold_t and adv_threshold_ods_t
These new data structures describe thresholds for domains and for
filesets. They will be incorporated into existing in-memory
structures in the case of adv_threshold_t and into on-disk records in
the case of adv_threshold_ods_t.
/*
* holds domain and fileset threshold values in memory.
*/
typedef struct {
uint32_t threshold_upper_limit;
uint32_t threshold_lower_limit;
uint32_t upper_event_interval;
uint32_t lower_event_interval;
uint64_t time_of_last_upper_event;
uint64_t time_of_last_lower_event;
} adv_threshold_t;
/*
* holds domain and fileset threshold values on disk.
*/
typedef struct {
uint32_t threshold_upper_limit;
uint32_t threshold_lower_limit;
uint32_t upper_event_interval;
uint32_t lower_event_interval;
} adv_threshold_ods_t;
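Since only the four limit/interval fields are persisted, initializing the in-memory form from the on-disk record might look like the following sketch (the helper name is an illustrative assumption; the event timestamps are runtime-only state and start at zero):

```c
#include <stdint.h>
#include <string.h>

/* In-memory threshold values (as in 3.1.1). */
typedef struct {
    uint32_t threshold_upper_limit;
    uint32_t threshold_lower_limit;
    uint32_t upper_event_interval;
    uint32_t lower_event_interval;
    uint64_t time_of_last_upper_event;
    uint64_t time_of_last_lower_event;
} adv_threshold_t;

/* On-disk threshold values (as in 3.1.1). */
typedef struct {
    uint32_t threshold_upper_limit;
    uint32_t threshold_lower_limit;
    uint32_t upper_event_interval;
    uint32_t lower_event_interval;
} adv_threshold_ods_t;

/* Build the in-memory threshold state from the on-disk record.  The
 * event timestamps are not persisted, so they begin at zero. */
void adv_threshold_load(adv_threshold_t *mem, const adv_threshold_ods_t *ods)
{
    memset(mem, 0, sizeof(*mem));
    mem->threshold_upper_limit = ods->threshold_upper_limit;
    mem->threshold_lower_limit = ods->threshold_lower_limit;
    mem->upper_event_interval  = ods->upper_event_interval;
    mem->lower_event_interval  = ods->lower_event_interval;
}
```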
3.1.2 bs_ods.h:bsDmnMAttrT
This is the existing on-disk RBMT record defined to hold domain mutable
attributes with the new thresholdT structure added.
#define BSR_DMN_MATTR 15
typedef struct {
onDiskVersionT ODSVersion;
.
.
.
thresholdT threshold;
.
.
} bsDmnMAttrT;
3.1.3 bs_ods.h:bsBfSetAttrT
This is the existing on-disk RBMT record defined to hold fileset
attributes with the new thresholdT structure added.
#define BSR_BFS_ATTR 8
typedef struct {
bfSetIdT bfSetId;
.
.
.
thresholdT threshold;
.
.
} bsBfSetAttrT;
3.1.4 advfs_evm.h: #defines
These are the new EVM event types needed to support this project. Note
the use of STORAGE in the naming conventions due to exposure to the
user.
#define EVENT_STORAGE_UPPER_THRESHOLD_CROSSED
_EvmSYSTEM_EVENT_NAME("fs.advfs.storage.upper.threshold.crossed")
#define EVENT_STORAGE_LOWER_THRESHOLD_CROSSED
_EvmSYSTEM_EVENT_NAME("fs.advfs.storage.lower.threshold.crossed")
#define EVENT_FSET_UPPER_THRESHOLD_CROSSED
_EvmSYSTEM_EVENT_NAME("fs.advfs.fset.upper.threshold.crossed")
#define EVENT_FSET_LOWER_THRESHOLD_CROSSED
_EvmSYSTEM_EVENT_NAME("fs.advfs.fset.lower.threshold.crossed")
3.1.5 advfs_evm.h:advfs_ev
This is the modified advfs_ev structure. It’s used to pass data values
to the EVM. The only change is the addition of threshold_limit values
and event_interval values.
typedef struct _advfs_event {
char *special;
char *domain;
.
.
.
uint32_t threshold_upper_limit; /* NEW */
uint32_t threshold_upper_interval; /* NEW */
uint32_t threshold_lower_limit; /* NEW */
uint32_t threshold_lower_interval; /* NEW */
} advfs_ev_t;
3.2 New or Changed Structures and Functions for
Threshold Viewing / Management
3.2.1 msfs_syscalls.h:ml_threshold_t
This is a new structure that is used as the buffer for passing
threshold data between the user commands and the kernel via the AdvFS
system calls for viewing and managing thresholds.
The freeBlks and totalBlks fields contain data needed at the user
level after a threshold has been set with the fsadm_advfs utility. The
utilities make an initial check for threshold crossing and need these
fields to do so.
typedef struct {
thresholdT threshold; /* dmn or bfSet threshold values */
uint64_t freeBlks; /* total free blocks in dmn or bfSet */
uint64_t totalBlks; /* total blocks in dmn or bfSet */
uint64_t unused[10]; /* zeroed and reserved for future use */
} ml_threshold_t;
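That initial user-level check could be sketched as follows. The percent-of-blocks-used interpretation of the limit and the helper name are assumptions for illustration, not confirmed semantics:

```c
#include <stdint.h>

/* Simplified view of the fields an fsadm utility would consult. */
typedef struct {
    uint32_t threshold_upper_limit;  /* assumed: percent of blocks used */
    uint64_t freeBlks;               /* free blocks in dmn or bfSet */
    uint64_t totalBlks;              /* total blocks in dmn or bfSet */
} threshold_view_t;

/* Return 1 if current usage already exceeds the upper threshold. */
int initial_upper_check(const threshold_view_t *t)
{
    if (t->threshold_upper_limit == 0 || t->totalBlks == 0)
        return 0;                              /* disabled or empty */
    uint64_t used = t->totalBlks - t->freeBlks;
    return (used * 100 / t->totalBlks) > t->threshold_upper_limit;
}
```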
3.2.2 msfs_syscalls.h:mlDmnParamsT
This is the new mlDmnParamsT structure, which now includes an instance
of an ml_threshold_t.
typedef struct {
mlBfDomainIdT bfDomainId; /* domain id */
.
.
.
ml_threshold_t threshold; /* domain threshold */
} mlDmnParamsT;
3.2.3 msfs_syscalls.h:libParamsT
The following are the one new and three existing union members of the
libParamsT structure used when making the four AdvFS system calls for
threshold viewing/management. Each structure name indicates the system
call it is associated with.
New:
typedef struct {
char *domain; /* in - domain name */
mlDmnParamsT dmnParms; /* in - domain params */
} opSetDmnNameParamsT;
Existing:
typedef struct {
char *domain; /* in - domain name */
mlDmnParamsT dmnParms; /* out - domain params */
} opGetDmnNameParamsT;
typedef struct {
mlBfSetIdT bfSetId; /* in - fileset id */
mlBfSetParamsT bfSetParams; /* in - fileset params */
} opSetBfSetParamsT;
typedef struct {
mlBfSetIdT bfSetId; /* in - fileset id */
mlBfSetParamsT bfSetParams; /* out - fileset params */
} opGetBfSetParamsT;
3.2.4 bs_params.c:bs_set_dmn_threshold_info()
This is a new routine defined to set domain threshold values in memory
and on disk as a result of an invocation of the fsadm -chfs utility.
definition:
statusT
bs_set_dmn_threshold_info(
domainT *dmnP, /* in - domain pointer */
thresholdT *threshold /* in - threshold pointer */
)
logic:
check validity of domain pointer.
start a root transaction.
if fail then return failure value.
get domain's log vd pointer.
malloc memory for a bsDmnMAttrT structure buffer.
if fail then end transaction and return ENO_MORE_MEMORY.
copy threshold data into malloc'ed memory buffer.
write buffer to BSR_DMN_MATTR record on vd of domain log.
if fail then free memory buffer, end ftx and return failure value.
copy threshold data to domainT in memory.
free malloc'ed memory buffer.
end transaction.
return EOK.
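The pseudo code above can be sketched as the following error-handling ladder, with the FTX and on-disk calls reduced to stubs. All names here are illustrative stand-ins, not the kernel interfaces:

```c
#include <stdint.h>
#include <stdlib.h>

typedef int statusT;
#define EOK             0
#define E_BAD_HANDLE    1
#define ENO_MORE_MEMORY 2

typedef struct { uint32_t upper, lower; } thresholdT;  /* simplified */
typedef struct { thresholdT threshold; } domainT;      /* simplified */

/* Stubs standing in for FTX begin/end and the RBMT record write. */
static statusT ftx_begin(void) { return EOK; }
static void    ftx_end(void)   { }
static statusT write_bsr_dmn_mattr(const thresholdT *buf) { (void)buf; return EOK; }

statusT set_dmn_threshold(domainT *dmnP, const thresholdT *thr)
{
    if (dmnP == NULL || thr == NULL)
        return E_BAD_HANDLE;                   /* validity check */
    if (ftx_begin() != EOK)
        return E_BAD_HANDLE;                   /* could not start ftx */
    thresholdT *buf = malloc(sizeof(*buf));
    if (buf == NULL) {
        ftx_end();
        return ENO_MORE_MEMORY;
    }
    *buf = *thr;                               /* stage the on-disk copy */
    if (write_bsr_dmn_mattr(buf) != EOK) {
        free(buf);
        ftx_end();
        return E_BAD_HANDLE;
    }
    dmnP->threshold = *thr;                    /* update in-memory copy */
    free(buf);
    ftx_end();
    return EOK;
}
```

Note the ordering: the in-memory copy is updated only after the on-disk write succeeds, so a failed write leaves both copies consistent.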
3.2.5 bs_bitfile_sets.c:bs_set_bfset_threshold_info()
This is a new routine defined to set fileset threshold values in memory
and on disk as part of future functionality. A user interface needs to
be defined as part of that future work.
definition:
statusT
bs_set_bfset_threshold_info(
bfSetT *bfSetp, /* in - bitfile-set's desc pointer */
thresholdT *threshold /* in - bitfile-set's threshold struct */
)
logic:
check validity of bfset pointer.
start a root transaction.
if fail then return failure value.
malloc memory for a bsBfSetThrshldAttrT structure buffer.
if fail then end transaction and return ENO_MORE_MEMORY.
copy threshold data into malloc'ed memory buffer.
write buffer to BSR_BFSET_THRESHOLD_ATTR record in the m-cell chain of
the fileset's Tag file.
if fail then free memory buffer, end ftx and return failure value.
copy threshold data to bfSetT in memory.
free malloc'ed memory buffer.
end transaction.
return EOK.
3.3 New or Changed Data Structures and Functions for
Threshold Monitoring
3.3.1 ftx_agents.h:ftxAgentIdT
A new transaction agent will be added to handle generating threshold
EVM events. The new agent added to the enumeration is
FTA_ADVFS_THRESHOLD_EVENT.
3.3.2 bs_stg.c:thresholdAlertRtDnRecT
This is the record for the FTA_ADVFS_THRESHOLD_EVENT agent that is
attached to the transaction passed to add_stg() and dealloc_stg(). It
is passed to the rootdone routine upon completion of the root
transaction. First there is a threshold type enumeration to denote
domain or fileset threshold followed by the definition of the
thresholdAlertRtDnRecT.
typedef enum {
DOMAIN_UPPER = 0,
DOMAIN_LOWER = 1,
FSET_UPPER = 2,
FSET_LOWER = 3
} thresholdTypeT;
typedef struct thresholdAlertRtDnRec
{
thresholdTypeT type;
bfSetIdT bfSetID;
} thresholdAlertRtDnRecT;
3.3.3 bs_stg.c:init_bs_stg_opx()
Add a call to ftx_register_agent() registering
advfs_threshold_rtdn_opx() as the rootdone routine for an
FTA_ADVFS_THRESHOLD_EVENT agent.
sts = ftx_register_agent(
FTA_ADVFS_THRESHOLD_EVENT,
NULL, /* undo */
&advfs_threshold_rtdn_opx /* root done */
);
3.3.4 bs_stg.c changes to add / remove storage routines
3.3.4.1 add_stg()
Additions to this routine are made to facilitate the monitoring of
domain and fileset upper limit thresholds. This code will be added to
the beginning of the routine.
logic:
Calculate potential domain and fileset blocks used if the request for
storage was to succeed.
if domain upper threshold limit is > zero and
total domain blocks is < the upper threshold limit and
potential domain blocks is > upper threshold limit and
event interval has expired then
Start a sub transaction and fill in rootdone record.
Attach the FTA_ADVFS_THRESHOLD_EVENT agent.
Set rootdone record type to DOMAIN_UPPER.
End sub transaction.
if fileset upper threshold limit is > zero and
total fileset blocks is < the upper threshold limit and
potential fileset blocks used is > upper threshold limit and
event interval has expired then
Start a sub transaction and fill in rootdone record.
Attach the FTA_ADVFS_THRESHOLD_EVENT agent.
Set rootdone record type to FSET_UPPER.
Set the bfSetId in the rootdone record equal to the
fileset’s bfSetId.
End sub transaction.
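The domain and fileset checks above share the same shape. A hedged sketch of the decision and the rootdone record it fills in follows; the helper function and the NULL-for-domain convention are illustrative assumptions:

```c
#include <stdint.h>
#include <stddef.h>

typedef enum {
    DOMAIN_UPPER = 0,
    DOMAIN_LOWER = 1,
    FSET_UPPER   = 2,
    FSET_LOWER   = 3
} thresholdTypeT;

typedef struct { uint64_t id; } bfSetIdT;   /* simplified stand-in */
typedef struct {
    thresholdTypeT type;
    bfSetIdT       bfSetID;
} thresholdAlertRtDnRecT;

/*
 * Apply the add_stg() upper-limit test; on a crossing, fill in the
 * rootdone record and return 1.  Passing fset == NULL means the check
 * is for the domain-level threshold.
 */
int build_upper_alert(uint64_t limit, uint64_t used_now, uint64_t used_after,
                      int interval_expired, const bfSetIdT *fset,
                      thresholdAlertRtDnRecT *rec)
{
    if (!(limit > 0 && used_now < limit && used_after > limit
          && interval_expired))
        return 0;                       /* no event for this request */
    if (fset != NULL) {
        rec->type = FSET_UPPER;         /* fileset-level threshold */
        rec->bfSetID = *fset;
    } else {
        rec->type = DOMAIN_UPPER;       /* domain-level threshold */
    }
    return 1;
}
```

When the helper returns 1, the caller would start the sub transaction and attach the FTA_ADVFS_THRESHOLD_EVENT agent with this record.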
3.3.4.2 dealloc_stg() and bs_close_one()
Additions to this routine are made to facilitate the monitoring of
domain and fileset lower limit thresholds. This code will be added to
the beginning of the routine.
logic:
Calculate potential domain and fileset blocks used if the deallocation
of storage was to succeed.
if domain lower threshold limit is > zero and
total domain blocks is > threshold limit and
potential domain blocks is < threshold limit and
event interval has expired then
Start a sub transaction and fill in rootdone record.
Attach the FTA_ADVFS_THRESHOLD_EVENT agent.
Set rootdone record type to DOMAIN_LOWER.
End sub transaction.
if fileset lower threshold limit is > zero and
total fileset blocks is > threshold limit and
potential fileset blocks is < threshold limit and
event interval has expired then
Start a sub transaction and fill in rootdone record.
Attach the FTA_ADVFS_THRESHOLD_EVENT agent.
Set rootdone record type to FSET_LOWER.
Set the bfSetId in the rootdone record equal to the
fileset’s bfSetId.
End sub transaction.
3.3.5 bs_stg.c:advfs_threshold_rtdn_opx()
This is the rootdone operation associated with the
FTA_ADVFS_THRESHOLD_EVENT agent. This is the routine that generates an
EVM event when a domain or fileset threshold has been crossed. Using
the rootdone approach ensures that an event is not prematurely
generated if the addition or deallocation of storage were to fail.
definition:
void
advfs_threshold_rtdn_opx(ftxHT ftxH, /* handle to transaction */
int32_t size, /* size of rootdone record */
void* address /* address of rootdone record */
)
logic:
copy rootdone record from address to local thresholdAlertRtDnRecT.
get system time.
if rootdone record type is DOMAIN_UPPER then
generate an EVENT_STORAGE_UPPER_THRESHOLD_CROSSED EVM event
update domain->threshold.time_of_last_upper_event with system time
return.
If rootdone record type is DOMAIN_LOWER then
generate an EVENT_STORAGE_LOWER_THRESHOLD_CROSSED EVM event
update domain->threshold.time_of_last_lower_event with system time
return.
if rootdone record type is FSET_UPPER or FSET_LOWER then
if domain of the transaction is not active then open fileset.
if open fails then panic domain and return.
if rootdone record type is FSET_UPPER then
generate an EVENT_FSET_UPPER_THRESHOLD_CROSSED EVM event.
update bfSetP->threshold.time_of_last_upper_event with
system time.
else if rootdone record type is FSET_LOWER then
generate an EVENT_FSET_LOWER_THRESHOLD_CROSSED EVM event.
update bfSetP->threshold.time_of_last_lower_event with
system time.
if fileset was opened then close.
if fail then panic domain.
return.
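The dispatch on record type maps directly onto the EVM event names defined in 3.1.4. A minimal sketch of that mapping (the helper function is an illustrative assumption; the event name strings are the ones defined above):

```c
#include <stddef.h>

typedef enum {
    DOMAIN_UPPER = 0,
    DOMAIN_LOWER = 1,
    FSET_UPPER   = 2,
    FSET_LOWER   = 3
} thresholdTypeT;

/* Map a rootdone record type to the EVM event name to post. */
const char *threshold_event_name(thresholdTypeT t)
{
    switch (t) {
    case DOMAIN_UPPER: return "fs.advfs.storage.upper.threshold.crossed";
    case DOMAIN_LOWER: return "fs.advfs.storage.lower.threshold.crossed";
    case FSET_UPPER:   return "fs.advfs.fset.upper.threshold.crossed";
    case FSET_LOWER:   return "fs.advfs.fset.lower.threshold.crossed";
    }
    return NULL;                        /* unknown record type */
}
```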
Appendix
I. Possible Future Enhancements
1. Implement fileset threshold viewing and management if the
domain/fileset environment is exposed to the user in a future HP-UX
release.
2. Add initial checks for threshold crossing at domain activation and
fileset mount time.
3. Add the rootdone operation to the parent transaction passed to
add_stg() directly, avoiding the need to start a sub transaction to
attach the FTA_ADVFS_THRESHOLD_EVENT agent.