Need for Speed (NFS) is a renowned racing video game series that has captivated gamers since 1994. Developed by various studios, it offers high-octane, illegal street racing experiences with exotic cars. NFS is known for its stunning graphics, intense police chases, and extensive car customization options, allowing players to modify both performance and aesthetics. The series explores themes like underground racing culture and professional circuit racing, providing a visually impressive and immersive racing experience. With realistic physics and diverse settings, NFS remains a top choice for both casual and hardcore racing enthusiasts.
NFS (Network File System) allows hosts to mount remote file systems and access them locally. There are three versions of NFS in use (v2, v3, v4). NFS implements a client-server model and uses RPC (Remote Procedure Call) to make file operations on remote servers appear local. NFS aims to support UNIX file semantics over the network in a stateless manner for scalability.
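The stateless model described above can be sketched in a few lines of Python (a toy illustration, not the real protocol: `StatelessFileServer` and the handle names are invented):

```python
# Sketch of why statelessness aids recovery: every READ carries the file
# handle, offset, and count, so the server keeps no per-client state and
# can crash and restart between calls without losing anything.

class StatelessFileServer:
    def __init__(self, files):
        self.files = files  # handle -> file contents

    def read(self, handle, offset, count):
        # Each request is self-describing; no open-file table is consulted.
        data = self.files[handle]
        return data[offset:offset + count]

server = StatelessFileServer({"fh1": b"hello, nfs"})
chunk1 = server.read("fh1", 0, 5)
# Simulate a server restart: a fresh instance serves the next call equally well.
server = StatelessFileServer({"fh1": b"hello, nfs"})
chunk2 = server.read("fh1", 7, 3)
```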
File service architecture and network file system by Sukhman Kaur
Distributed file systems allow users to access and share files located on multiple computer systems. They provide transparency so that clients can access local and remote files in the same way. Issues include maintaining consistent concurrent updates and caching files for improved performance. Network File System (NFS) is an open standard protocol that allows remote file access like a local file system. It uses remote procedure calls and has evolved through several versions to support features like locking, caching, and security.
Network File System (NFS) is a distributed file system protocol that allows users to access and share files located on remote computers as if they were local. NFS runs on top of RPC and supports operations like file reads, writes, lookups and locking. It uses a stateless client-server model where clients make requests to NFS servers, which are responsible for file storage and operations. NFS provides mechanisms for file sharing, locking, caching and replication to enable reliable access and performance across a network.
The Network File System (NFS) Version 4 is a distributed file system similar to previous versions of NFS in its straightforward design, simplified error recovery, and independence of transport protocols and operating systems for file access in a heterogeneous network.
NFS was developed by Sun Microsystems to provide distributed transparent file access in a heterogeneous network. It achieves this by being relatively simple in design and not relying too heavily on any particular file system model.
This presentation is based on the paper “The NFS Version 4 Protocol” by Brian Pawlowski, Spencer Shepler, Carl Beame, Brent Callaghan, Michael Eisler, David Noveck, David Robinson and Robert Thurlow.
A server algorithm that combines delayed allocation and pre-allocation can limit maximum concurrency levels. Delayed allocation reserves disk space for files but writes data to disk periodically rather than immediately. Pre-allocation reserves additional disk blocks upfront. To limit concurrency, the server restricts the number of concurrent client connections or operations that can be processed simultaneously.
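A toy sketch of the combination described above (all names invented; a real server works in terms of disk blocks and kernel threads rather than Python lists):

```python
# Delayed allocation batches writes, pre-allocation reserves extra blocks
# up front, and a counting semaphore caps concurrent operations.
import threading

class Server:
    def __init__(self, max_concurrent=4, prealloc_blocks=8):
        self._slots = threading.BoundedSemaphore(max_concurrent)
        self.prealloc_blocks = prealloc_blocks  # blocks reserved up front
        self.pending = []   # delayed writes, buffered but not yet on disk
        self.disk = []      # what has actually been flushed

    def write(self, data):
        with self._slots:               # limits concurrency
            self.pending.append(data)   # delayed: space reserved, write deferred
            if len(self.pending) >= self.prealloc_blocks:
                self.flush()

    def flush(self):
        self.disk.extend(self.pending)  # periodic write-out to disk
        self.pending.clear()

s = Server(max_concurrent=2, prealloc_blocks=3)
for block in [b"a", b"b", b"c", b"d"]:
    s.write(block)
# three blocks were flushed together; the fourth is still pending
```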
The document discusses processes, threads, communication, and synchronization in distributed systems. It covers key concepts like processes and threads, address spaces, creation of new processes, shared memory regions, thread communication and synchronization, remote invocation, and architectures for multi-threaded servers. Communication primitives provided by some distributed operating systems are also summarized.
- The Andrew File System (AFS) is a distributed file system that provides transparent access to shared files across a network like NFS. It aims to be scalable and efficient by caching entire files on client machines to reduce server load.
- AFS distinguishes between client machines that access files and dedicated server machines that store files. It achieves scalability through caching whole files on clients to reduce load on servers.
- AFS servers keep track of open files to inform clients of any updates through callbacks, providing location independence and transparency not present in NFS.
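The callback mechanism in the last bullet can be sketched as follows (a hedged toy model: `AFSServer`, `AFSClient`, and the method names are invented, and real AFS callbacks cover far more cases):

```python
# The server remembers which clients cache a file and notifies them on
# update, so clients can serve cached reads without re-checking the server.

class AFSServer:
    def __init__(self):
        self.files = {}
        self.callbacks = {}   # filename -> set of clients holding a callback

    def fetch(self, client, name):
        self.callbacks.setdefault(name, set()).add(client)
        return self.files[name]

    def store(self, name, data):
        self.files[name] = data
        for client in self.callbacks.pop(name, set()):
            client.invalidate(name)   # break the callback promise

class AFSClient:
    def __init__(self, server):
        self.server = server
        self.cache = {}

    def read(self, name):
        if name not in self.cache:    # whole-file fetch on a miss
            self.cache[name] = self.server.fetch(self, name)
        return self.cache[name]

    def invalidate(self, name):
        self.cache.pop(name, None)

server = AFSServer()
server.files["notes"] = b"v1"
c = AFSClient(server)
first = c.read("notes")        # fetched; callback registered
server.store("notes", b"v2")   # server breaks the callback
second = c.read("notes")       # cache was invalidated, so refetch
```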
This document discusses processes and threads in distributed systems. It covers how threads allow multitasking within a process by sharing resources. Threads can be implemented in either userspace or kernelspace. Distributed systems can use multithreaded clients and servers. Code migration involves moving a running program between machines for load balancing, reducing communication, or dynamic configuration. Weak mobility transfers just code, while strong mobility also moves execution state. References to local resources must also be handled during migration.
IBM Spectrum Scale fundamentals workshop for Americas, part 5: spectrum scale_c... by xKinAnx
This document provides information about clustered NFS (cNFS) in IBM Spectrum Scale. cNFS allows multiple Spectrum Scale servers to share a common namespace via NFS, providing high availability, performance, scalability and a single namespace as storage capacity increases. The document discusses components of cNFS including load balancing, monitoring, and failover. It also provides instructions for prerequisites, setup, administration and tuning of a cNFS configuration.
The document provides an overview of load balancing concepts and configurations using Cisco Application Centric Infrastructure (ACE). It discusses key topics such as virtual contexts, physical connections, connection management methods, address translation, offloading services, and integrating ACE virtual contexts in a data center environment. Example configurations are also provided for common load balancing modes like routed, bridged, and one-arm.
The Network File System (NFS) is the most widely used network-based file system. NFS’s initial simple design and Sun Microsystems’ willingness to publicize the protocol and code samples to the community contributed to making NFS the most successful remote access file system. NFS implementations are available for numerous Unix systems, several Windows-based systems, and others.
KubeCon Shanghai: Rook-deployed NFS clusters over CephFS (translator copy) by Hien Nguyen Van
This document discusses deploying scalable NFS clusters that export CephFS volumes using Rook and Kubernetes. CephFS is a distributed file system that uses an MDS and RADOS to manage metadata and data. Rook allows dynamically deploying Ceph clusters and NFS-Ganesha servers that export CephFS subvolumes. Coordinating grace periods across the NFS cluster during server reboots is challenging due to the stateful nature of both NFS and CephFS protocols. Future work includes improving migration and optimizing grace periods when scaling the NFS cluster size.
This document compares the architectures of Kafka and Kinesis. The two are broadly similar, with Kafka brokers storing messages in partitions and consumers subscribing to topics. The document finds that Kafka delivers higher throughput at lower cost than Kinesis, and notes the operational headaches caused by Kinesis' throughput limits and management overhead. For these reasons, it recommends switching from Kinesis to Kafka.
A Project Report on Linux Server Administration by Avinash Kumar
This is a project report on Linux server administration. It covers the key network services installed on Linux. The project was conducted on Red Hat Enterprise Linux 7.2.
ASIT is one of the leading providers of programming courses, including "Advanced Java", along with professional certification. We associate with industry experts to deliver the training requirements of job seekers and working professionals. For more details, please visit our website.
NFS allows remote hosts to mount file systems over a network as if they were local. It uses TCP and RPC processes to authenticate clients and grant access to exported file shares based on configuration in /etc/exports. Administrators can start and stop the NFS server and related services using the service command to export resources from centralized servers.
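The exports-based access control mentioned above can be illustrated with a much-simplified parser (real `/etc/exports` syntax is richer, with netgroups, wildcards, and many more options):

```python
# One /etc/exports-style line maps an exported directory to the hosts
# allowed to mount it, each with a parenthesized option list.
def parse_exports_line(line):
    path, *clients = line.split()
    allowed = {}
    for spec in clients:
        host, _, opts = spec.partition("(")
        allowed[host] = opts.rstrip(")").split(",") if opts else []
    return path, allowed

path, allowed = parse_exports_line("/srv/share client1(rw,sync) client2(ro)")
```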
We develop a consistent mutable replication extension for NFSv4 tuned to meet the rigorous demands of large-scale data sharing in global collaborations. The system uses a hierarchical replication control protocol that dynamically elects a primary server at various granularities. Experimental evaluation indicates a substantial performance advantage over a single server system. With the introduction of the hierarchical replication control, the overhead of replication is negligible even when applications mostly write and replication servers are widely distributed.
Understanding Apache Kafka P99 Latency at Scale by ScyllaDB
Apache Kafka is a highly popular distributed system used by many organizations to connect systems, build microservices, create a data mesh, etc. However, as a distributed system, understanding its performance can be a challenge, since so many moving parts exist.
In this talk, we are going to review the key moving parts (producers, consumers, replication, network, etc), a strategy to measure and interpret the performance results for consumers and producers and a general guideline for deciding about performance in Apache Kafka.
Attendees will take home a proven method to measure, evaluate and optimise the performance of an Apache Kafka-based infrastructure: a key skill for low-throughput users, and especially for the largest-scale deployments.
The document discusses three RPC communication protocols:
1. The R protocol uses asynchronous communication with only request messages and no reply. This improves performance.
2. The RR protocol fits requests and replies into a single packet and caches replies to reduce overhead.
3. The RRA protocol adds acknowledgements to replies to make communication more reliable as it ensures replies are received.
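The reply caching in the RR protocol can be sketched as follows (a toy model with invented names; a real RPC reply cache also bounds its size and ages out entries):

```python
# The server caches the reply to each request ID, so a retransmitted
# request is answered from the cache instead of re-executing the operation.

class RRServer:
    def __init__(self):
        self.reply_cache = {}   # request id -> cached reply
        self.executions = 0

    def handle(self, request_id, payload):
        if request_id in self.reply_cache:      # duplicate (retransmission)
            return self.reply_cache[request_id]
        self.executions += 1                    # execute at most once
        reply = payload.upper()                 # stand-in for the real work
        self.reply_cache[request_id] = reply
        return reply

server = RRServer()
r1 = server.handle(1, "read /a")
r2 = server.handle(1, "read /a")   # retransmitted: served from the cache
```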
This document discusses directory write leases in MagFS, a globally distributed file system. It introduces the concept of directory write leases, which allow clients to cache and execute namespace-modifying operations locally to improve performance over high-latency networks. Evaluation results show that directory write leases enable workloads to complete much faster with increasing network latency compared to synchronous approaches.
This document provides an introduction to Apache Kafka. It discusses why Kafka is needed for real-time streaming data processing and real-time analytics. It also outlines some of Kafka's key features like scalability, reliability, replication, and fault tolerance. The document summarizes common use cases for Kafka and examples of large companies that use it. Finally, it describes Kafka's core architecture including topics, partitions, producers, consumers, and how it integrates with Zookeeper.
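The partitioning idea at the core of Kafka's architecture can be illustrated without any Kafka client at all (a toy sketch; real Kafka uses murmur2 hashing and a far richer broker):

```python
# Messages with the same key hash to the same partition, so per-key
# ordering is preserved even though partitions are independent logs.
import zlib

def partition_for(key, num_partitions):
    # stable hash so the same key always lands in the same partition
    return zlib.crc32(key) % num_partitions

log = {p: [] for p in range(3)}   # toy "broker": partition -> message list
for key, value in [(b"user1", "a"), (b"user2", "b"), (b"user1", "c")]:
    log[partition_for(key, 3)].append(value)

same_part = log[partition_for(b"user1", 3)]   # holds both user1 messages, in order
```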
This document summarizes key aspects of distributed file systems (DFS), including their structure, naming and transparency, remote file access using caching, stateful versus stateless service models, file replication, and examples like the Sun Network File System (NFS). A DFS manages dispersed storage across a network, using caching to improve performance of remote file access and dealing with issues of consistency between cached and server copies. NFS provides a specific implementation of a DFS that integrates remote directories transparently and uses stateless remote procedure calls along with caching for efficiency.
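The cached-versus-server-copy consistency issue can be sketched with NFS-style attribute checking (invented names; real NFS clients cache attributes for a time window rather than re-checking on every read):

```python
# The client caches data plus the file's modification counter, and on
# each use re-checks the server's counter before trusting the cached copy.

class Server:
    def __init__(self):
        self.data = {}
        self.mtime = {}

    def getattr(self, name):
        return self.mtime[name]

    def read(self, name):
        return self.data[name]

    def write(self, name, data):
        self.data[name] = data
        self.mtime[name] = self.mtime.get(name, 0) + 1

class Client:
    def __init__(self, server):
        self.server = server
        self.cache = {}   # name -> (mtime, data)

    def read(self, name):
        mtime = self.server.getattr(name)   # cheap attribute check
        cached = self.cache.get(name)
        if cached and cached[0] == mtime:
            return cached[1]                # cache still valid
        data = self.server.read(name)       # stale or missing: refetch
        self.cache[name] = (mtime, data)
        return data

srv = Server()
srv.write("f", b"old")
cli = Client(srv)
first = cli.read("f")
srv.write("f", b"new")
second = cli.read("f")   # mtime changed, so the stale copy is discarded
```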
GFProxy: Scaling the GlusterFS FUSE Client by Gluster.org
GFProxy is a server and client that improves upon the native FUSE client for GlusterFS. It provides a single connection between the FUSE client and GFProxy server, rather than many connections between clients and bricks. This simplifies upgrades and eliminates client-side network magnification. If the GFProxy server fails, queued operations will be retried after it reconnects. Performance testing shows GFProxy FUSE outperforming native FUSE and NFS, especially for multi-streamed writes. Future work includes supporting multiple volumes and better integration.
The document summarizes key aspects of the Coda distributed file system. Coda was designed at Carnegie Mellon University to be scalable, secure, and highly available. It aims for transparency in naming, location, and failures. Coda uses a client-server model where clients interact with file servers through Venus processes. File data and metadata are cached at clients to allow for continued access when disconnected from servers. Coda employs replication and unique file identifiers to manage distributed data access transparently.
Parallel NFS (pNFS) is a standard defined in NFSv4.1 that separates file metadata and data to allow parallel and distributed access to file data. It defines protocols for clients to communicate with a metadata server to get file layouts describing the data locations and protocols to access multiple data servers directly in parallel. However, it does not define protocols between metadata and data servers, allowing flexibility in implementation. pNFS supports various storage layout types including file, block, and object storage and can provide high performance parallel I/O while maintaining NFS semantics.
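The metadata/data split can be sketched as follows (all classes invented; a real pNFS client issues the stripe reads in parallel over the network rather than sequentially in-process):

```python
# A metadata server hands out a layout describing where each stripe
# lives; the client then reads the stripes from the data servers directly.

class DataServer:
    def __init__(self, blocks):
        self.blocks = blocks   # block index -> bytes

class MetadataServer:
    def __init__(self, layout):
        self.layout = layout   # filename -> list of (data_server, block)

    def get_layout(self, name):
        return self.layout[name]

def read_file(mds, name):
    # Follow the layout: fetch each stripe from the server that holds it.
    return b"".join(ds.blocks[blk] for ds, blk in mds.get_layout(name))

ds1 = DataServer({0: b"par"})
ds2 = DataServer({0: b"allel"})
mds = MetadataServer({"f": [(ds1, 0), (ds2, 0)]})
content = read_file(mds, "f")
```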
This document discusses CPU scheduling and multithreaded programming. It covers key concepts in CPU scheduling like multiprogramming, CPU-I/O burst cycles, and scheduling criteria. It also discusses dispatcher role, multilevel queue scheduling, and multiple processor scheduling challenges. For multithreaded programming, it defines threads and their benefits. It compares concurrency and parallelism and discusses multithreading models, thread libraries, and threading issues.
RPC allows for remote procedure calls to be made across distributed systems in a similar way to local procedure calls. RPC uses stubs to pack and unpack arguments and results so that the calls appear transparent whether local or remote. RPC messages contain information about the procedure, arguments, and client for execution on the server and returning results. Marshalling converts data to a stream for transmission and decoding on the receiving end. Servers can be stateful, maintaining information across calls, or stateless, requiring all needed information be passed each time.
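The marshalling step can be illustrated with Python's `struct` module (a stand-in for what stubs do; real ONC RPC uses XDR encoding, which differs in detail):

```python
# Arguments are packed into a byte stream for transmission and unpacked
# on the receiving side, mirroring the client and server stubs.
import struct

def marshal_call(proc_id, arg):
    payload = arg.encode("utf-8")
    # network byte order: procedure id, payload length, then the payload
    return struct.pack("!II", proc_id, len(payload)) + payload

def unmarshal_call(message):
    proc_id, length = struct.unpack_from("!II", message, 0)
    payload = message[8:8 + length].decode("utf-8")
    return proc_id, payload

wire = marshal_call(3, "/etc/motd")
proc, arg = unmarshal_call(wire)
```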
Assessment and Planning in Educational technology.pptx by Kavitha Krishnan
In an education system, assessment is often understood as something done only to students, but the assessment of teachers is also an important aspect of the education system, one that ensures teachers are providing high-quality instruction. The assessment process can be used to provide feedback and support for professional development, to inform decisions about teacher retention or promotion, or to evaluate teacher effectiveness for accountability purposes.
ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and... by PECB
Denis is a dynamic and results-driven Chief Information Officer (CIO) with a distinguished career spanning information systems analysis and technical project management. With a proven track record of spearheading the design and delivery of cutting-edge Information Management solutions, he has consistently elevated business operations, streamlined reporting functions, and maximized process efficiency.
Certified as an ISO/IEC 27001: Information Security Management Systems (ISMS) Lead Implementer, Data Protection Officer, and Cyber Risks Analyst, Denis brings a heightened focus on data security, privacy, and cyber resilience to every endeavor.
His expertise extends across a diverse spectrum of reporting, database, and web development applications, underpinned by an exceptional grasp of data storage and virtualization technologies. His proficiency in application testing, database administration, and data cleansing ensures seamless execution of complex projects.
What sets Denis apart is his comprehensive understanding of Business and Systems Analysis technologies, honed through involvement in all phases of the Software Development Lifecycle (SDLC). From meticulous requirements gathering to precise analysis, innovative design, rigorous development, thorough testing, and successful implementation, he has consistently delivered exceptional results.
Throughout his career, he has taken on multifaceted roles, from leading technical project management teams to owning solutions that drive operational excellence. His conscientious and proactive approach is unwavering, whether he is working independently or collaboratively within a team. His ability to connect with colleagues on a personal level underscores his commitment to fostering a harmonious and productive workplace environment.
Date: May 29, 2024
Tags: Information Security, ISO/IEC 27001, ISO/IEC 42001, Artificial Intelligence, GDPR
A Strategic Approach: GenAI in Education by Peter Windle
Artificial Intelligence (AI) technologies such as Generative AI, Image Generators and Large Language Models have had a dramatic impact on teaching, learning and assessment over the past 18 months. The most immediate threat AI posed was to Academic Integrity with Higher Education Institutes (HEIs) focusing their efforts on combating the use of GenAI in assessment. Guidelines were developed for staff and students, policies put in place too. Innovative educators have forged paths in the use of Generative AI for teaching, learning and assessments leading to pockets of transformation springing up across HEIs, often with little or no top-down guidance, support or direction.
This Gasta posits a strategic approach to integrating AI into HEIs to prepare staff, students and the curriculum for an evolving world and workplace. We will highlight the advantages of working with these technologies beyond the realm of teaching, learning and assessment by considering prompt engineering skills, industry impact, curriculum changes, and the need for staff upskilling. In contrast, not engaging strategically with Generative AI poses risks, including falling behind peers, missed opportunities and failing to ensure our graduates remain employable. The rapid evolution of AI technologies necessitates a proactive and strategic approach if we are to remain relevant.
Main Java[All of the Base Concepts}.docxadhitya5119
This is part 1 of my Java Learning Journey. This Contains Custom methods, classes, constructors, packages, multithreading , try- catch block, finally block and more.
This presentation was provided by Steph Pollock of The American Psychological Association’s Journals Program, and Damita Snow, of The American Society of Civil Engineers (ASCE), for the initial session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session One: 'Setting Expectations: a DEIA Primer,' was held June 6, 2024.
Executive Directors Chat Leveraging AI for Diversity, Equity, and InclusionTechSoup
Let’s explore the intersection of technology and equity in the final session of our DEI series. Discover how AI tools, like ChatGPT, can be used to support and enhance your nonprofit's DEI initiatives. Participants will gain insights into practical AI applications and get tips for leveraging technology to advance their DEI goals.
This slide is special for master students (MIBS & MIFB) in UUM. Also useful for readers who are interested in the topic of contemporary Islamic banking.
How to Manage Your Lost Opportunities in Odoo 17 CRMCeline George
Odoo 17 CRM allows us to track why we lose sales opportunities with "Lost Reasons." This helps analyze our sales process and identify areas for improvement. Here's how to configure lost reasons in Odoo 17 CRM
Exploiting Artificial Intelligence for Empowering Researchers and Faculty, In...Dr. Vinod Kumar Kanvaria
Exploiting Artificial Intelligence for Empowering Researchers and Faculty,
International FDP on Fundamentals of Research in Social Sciences
at Integral University, Lucknow, 06.06.2024
By Dr. Vinod Kumar Kanvaria
How to Fix the Import Error in the Odoo 17Celine George
An import error occurs when a program fails to import a module or library, disrupting its execution. In languages like Python, this issue arises when the specified module cannot be found or accessed, hindering the program's functionality. Resolving import errors is crucial for maintaining smooth software operation and uninterrupted development processes.
2. Network File System (NFS)
[Diagram: on the client, user programs enter the syscall layer, which calls through the VFS into the NFS client; the NFS client talks to the NFS server via RPC over UDP or TCP; on the server, the NFS server calls through the VFS into a local file system (*FS).]
3. NFS Vnodes
[Diagram: under the syscall layer and VFS, nfs_vnodeops vector to the NFS client stubs, which issue RPCs across the network to the NFS server; the server calls through its VFS into a local *FS. Each NFS file in use on the client is represented by an nfsnode.]
The nfsnode holds client state needed to interact with the server to operate on the file.
struct nfsnode* np = VTONFS(vp);
The NFS protocol has an operation type for (almost) every
vnode operation, with similar arguments/results.
4. File Handles
Question: how does the client tell the server which file or
directory the operation applies to?
• Similarly, how does the server return the result of a lookup?
More generally, how to pass a pointer or an object reference as an
argument/result of an RPC call?
In NFS, the reference is a file handle or fhandle, a token/ticket
whose value is determined by the server.
• Includes all information needed to identify the file/object on
the server, and find it quickly.
An fhandle typically contains: volume ID, inode #, generation #.
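A minimal sketch of such a handle in C (field names and widths are illustrative; a real server treats the layout as private and the client treats the whole token as opaque):

```c
#include <stdint.h>

/* Hypothetical fhandle layout: only the server interprets the fields. */
struct fhandle {
    uint32_t volume_id;   /* which file system (volume) on the server */
    uint32_t inode_num;   /* which file within that volume */
    uint32_t generation;  /* detects handle reuse after inode recycling */
};

/* Server-side check: a handle is stale if the inode has been freed and
 * reused since the handle was issued. */
int fh_is_stale(const struct fhandle *fh, uint32_t current_generation) {
    return fh->generation != current_generation;
}
```

The generation number is what lets the server "find it quickly" yet still reject references to a file that has since been deleted and its inode reused.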
5. NFS: From Concept to Implementation
Now that we understand the basics, how do we make it fast?
• caching
  - data blocks
  - file attributes
  - lookup cache (dnlc): name->fhandle mappings
  - directory contents?
• read-ahead and write-behind
  - file I/O at wire speed
And of course we want the full range of other desirable “*ility”
properties....
6. NFS as a “Stateless” Service
A classical NFS server maintains no in-memory hard state.
The only hard state is the stable file system image on disk.
• no record of clients or open files
• no implicit arguments to requests
E.g., no server-maintained file offsets: read and write requests
must explicitly transmit the byte offset for each operation.
• no write-back caching on the server
• no record of recently processed requests
• etc., etc....
Statelessness makes failure recovery simple and efficient.
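The "no implicit arguments" point can be made concrete with a sketch (the names here are illustrative, not the NFS wire format): the client, not the server, tracks the file cursor, so every request carries an explicit offset and nothing is lost if the server restarts between requests.

```c
#include <stdint.h>

/* Stand-in for an NFS read request: no server-side cursor exists, so
 * the offset travels explicitly in every request. */
struct nfs_read_args {
    uint64_t fh_id;    /* stands in for the opaque fhandle */
    uint64_t offset;   /* explicit byte offset, supplied by the client */
    uint32_t count;    /* bytes requested */
};

/* Client-side open-file state: the cursor lives here, never on the server. */
struct client_file { uint64_t fh_id; uint64_t pos; };

/* Build the next sequential read; only the client advances the cursor. */
struct nfs_read_args next_read(struct client_file *f, uint32_t count) {
    struct nfs_read_args a = { f->fh_id, f->pos, count };
    f->pos += count;
    return a;
}
```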
7. Recovery in Stateless NFS
If the server fails and restarts, there is no need to rebuild in-memory state on the server.
• Client reestablishes contact (e.g., TCP connection).
• Client retransmits pending requests.
Classical NFS uses a connectionless transport (UDP).
• Server failure is transparent to the client; no connection to
break or reestablish.
A crashed server is indistinguishable from a slow server.
• Sun/ONC RPC masks network errors by retransmitting a
request after an adaptive timeout.
A dropped packet is indistinguishable from a crashed server.
8. Drawbacks of a Stateless Service
The stateless nature of classical NFS has compelling design
advantages (simplicity), but also some key drawbacks:
• Recovery-by-retransmission constrains the server interface.
ONC RPC/UDP has execute-at-least-once semantics (“send and
pray”), which compromises performance and correctness.
• Update operations are disk-limited.
Updates must commit synchronously at the server.
• NFS cannot (quite) preserve local single-copy semantics.
Files may be removed while they are open on the client.
Server cannot help in client cache consistency.
Let’s explore these problems and their solutions...
9. Problem 1: Retransmissions and Idempotency
For a connectionless RPC transport, retransmissions can saturate
an overloaded server.
Clients “kick ‘em while they’re down”, retransmitting into the overload and causing a steep “hockey stick” in the response-time curve.
Execute-at-least-once constrains the server interface.
• Service operations should/must be idempotent.
Multiple executions should/must have the same effect.
• Idempotent operations cannot capture the full semantics we
expect from our file system.
remove, append-mode writes, exclusive create
10. Solutions to the Retransmission Problem
1. Hope for the best and smooth over non-idempotent requests.
E.g., map ENOENT and EEXIST to ESUCCESS.
2. Use TCP or some other transport protocol that produces
reliable, in-order delivery.
higher overhead...and we still need sessions.
3. Implement an execute-at-most-once RPC transport.
TCP-like features (sequence numbers)...and sessions.
4. Keep a retransmission cache on the server [Juszczak90].
Remember the most recent request IDs and their results, and just
resend the result....does this violate statelessness?
DAFS persistent session cache.
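Option 4 can be sketched as follows (a toy in-memory cache keyed only by transaction ID; a real server would also key on client identity, bound the cache size with replacement, and worry about whether this soft state violates statelessness):

```c
#include <stdint.h>

#define DRC_SIZE 8  /* toy capacity */

struct drc_entry { uint32_t xid; int in_use; int result; };
static struct drc_entry drc[DRC_SIZE];

static int remove_executions = 0;  /* counts real executions of the op */

/* Stands in for a non-idempotent remove: the first execution succeeds;
 * a blind re-execution would fail (the file is already gone). */
static int do_remove_once(void) {
    return ++remove_executions == 1 ? 0 : -1;
}

/* Serve a remove at most once per transaction ID: a retransmitted xid
 * replays the cached result instead of re-running the operation. */
int serve_remove(uint32_t xid) {
    for (int i = 0; i < DRC_SIZE; i++)
        if (drc[i].in_use && drc[i].xid == xid)
            return drc[i].result;          /* duplicate: resend reply */
    int r = do_remove_once();              /* first time: execute */
    for (int i = 0; i < DRC_SIZE; i++)
        if (!drc[i].in_use) {
            drc[i].xid = xid; drc[i].in_use = 1; drc[i].result = r;
            break;
        }
    return r;
}
```

Without the cache, the retransmission would re-execute the remove and wrongly report a failure to the client.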
11. Problem 2: Synchronous Writes
Stateless NFS servers must commit each operation to stable
storage before responding to the client.
• Interferes with FS optimizations, e.g., clustering, LFS, and
disk write ordering (seek scheduling).
Damages bandwidth and scalability.
• Imposes disk access latency for each request.
Not so bad for a logged write; much worse for a complex
operation like an FFS file write.
The synchronous update problem occurs for any storage
service with reliable update (commit).
12. Speeding Up Synchronous NFS Writes
Interesting solutions to the synchronous write problem, used
in high-performance NFS servers:
• Delay the response until convenient for the server.
E.g., NFS write-gathering optimizations for clustered writes
(similar to group commit in databases).
Relies on write-behind from NFS I/O daemons (iods).
• Throw hardware at it: non-volatile memory (NVRAM)
Battery-backed RAM or UPS (uninterruptible power supply).
Use as an operation log (Network Appliance WAFL)...
...or as a non-volatile disk write buffer (Legato).
• Replicate server and buffer in memory (e.g., MIT Harp).
13. NFS V3 Asynchronous Writes
NFS V3 sidesteps the synchronous write problem by adding a
new asynchronous write operation.
• Server may reply to client as soon as it accepts the write,
before executing/committing it.
If the server fails, it may discard any subset of the accepted but
uncommitted writes.
• Client holds asynchronously written data in its cache, and
reissues the writes if the server fails and restarts.
When is it safe for the client to discard its buffered writes?
How can the client tell if the server has failed?
14. NFS V3 Commit
NFS V3 adds a new commit operation to go with async-write.
• Client may issue a commit for a file byte range at any time.
• Server must execute all covered uncommitted writes before
replying to the commit.
• When the client receives the reply, it may safely discard any
buffered writes covered by the commit.
• Server returns a verifier with every reply to an async write or
commit request.
The verifier is just an integer that is guaranteed to change if the
server restarts, and to never change back.
• What if the client crashes?
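The client's use of the verifier can be sketched like this (a simplification; a real client compares the verifier carried on every async-write and commit reply against the one it recorded when the data was written):

```c
#include <stdint.h>

enum commit_action { DISCARD_BUFFERED, REISSUE_WRITES };

/* The verifier changes iff the server restarted, so a mismatch means
 * uncommitted writes may have been discarded and must be reissued. */
enum commit_action on_commit_reply(uint64_t verf_seen_at_write,
                                   uint64_t verf_in_reply) {
    return verf_seen_at_write == verf_in_reply ? DISCARD_BUFFERED
                                               : REISSUE_WRITES;
}
```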
15. Problem 3: File Cache Consistency
Problem: Concurrent write sharing of files.
Contrast with read sharing or sequential write sharing.
Solutions:
• Timestamp invalidation (NFS).
Timestamp each cache entry, and periodically query the server:
“has this file changed since time t?”; invalidate cache if stale.
• Callback invalidation (AFS, Sprite, Spritely NFS).
Request notification (callback) from the server if the file
changes; invalidate cache and/or disable caching on callback.
• Leases (NQ-NFS) [Gray&Cheriton89,Macklem93,NFS V4]
• Later: distributed shared memory
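The NFS timestamp-invalidation scheme can be sketched as follows (field names are illustrative): each cached file remembers the modify time seen when its data was fetched, and a periodic getattr-style check invalidates the cached blocks iff the file has changed since.

```c
#include <stdint.h>

struct file_cache {
    uint64_t cached_mtime;  /* server mtime when data was fetched */
    int data_valid;         /* 1 while cached blocks may be used */
};

/* Periodic revalidation: ask the server "has this file changed since
 * time t?" and invalidate the cache if it has. */
void revalidate(struct file_cache *c, uint64_t server_mtime) {
    if (c->cached_mtime != server_mtime) {
        c->data_valid = 0;               /* stale: discard cached blocks */
        c->cached_mtime = server_mtime;  /* next fetch starts fresh */
    }
}
```

The weakness is the window between checks: another client's write is invisible until the next revalidation, which is why callback- and lease-based schemes exist.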
16. File Cache Example: NQ-NFS Leases
In NQ-NFS, a client obtains a lease on the file that permits
the client’s desired read/write activity.
“A lease is a ticket permitting an activity; the lease is valid until
some expiration time.”
• A read-caching lease allows the client to cache clean data.
Guarantee: no other client is modifying the file.
• A write-caching lease allows the client to buffer modified
data for the file.
Guarantee: no other client has the file cached.
Allows delayed writes: client may delay issuing writes to
improve write performance (i.e., client has a writeback cache).
17. Using NQ-NFS Leases
1. Client NFS piggybacks lease requests for a given file on
I/O operation requests (e.g., read/write).
NQ-NFS leases are implicit and distinct from file locking.
2. The server determines if it can safely grant the request, i.e.,
does it conflict with a lease held by another client.
read leases may be granted simultaneously to multiple clients
write leases are granted exclusively to a single client
3. If a conflict exists, the server may send an eviction notice
to the holder of the conflicting lease.
If a client is evicted from a write lease, it must write back.
Grace period: server grants extensions while the client writes.
Client sends vacated notice when all writes are complete.
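Step 2's conflict check reduces to a simple rule, shared readers but an exclusive writer, sketched here (a simplification that ignores expiry times and the eviction path):

```c
#include <stdbool.h>

enum lease_kind { LEASE_READ, LEASE_WRITE };

/* Server-side grant decision: read leases may be shared among clients;
 * a write lease must exclude all other lease holders. */
bool can_grant(enum lease_kind requested, int read_holders,
               int write_holders) {
    if (requested == LEASE_WRITE)
        return read_holders == 0 && write_holders == 0;
    return write_holders == 0;   /* LEASE_READ: any number of readers */
}
```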
18. NQ-NFS Lease Recovery
Key point: the bounded lease term simplifies recovery.
• Before a lease expires, the client must renew the lease.
• What if a client fails while holding a lease?
Server waits until the lease expires, then unilaterally reclaims the
lease; client forgets all about it.
If a client fails while writing back after an eviction, the server waits for a write slack time before granting a conflicting lease.
• What if the server fails while there are outstanding leases?
Wait for lease period + clock skew before issuing new leases.
• Recovering server must absorb lease renewal requests and/or
writes for vacated leases.
19. NQ-NFS Leases and Cache Consistency
• Every lease contains a file version number.
Invalidate the cache iff the version number has changed.
• Clients may disable client caching when there is concurrent
write sharing.
no-caching lease
• What consistency guarantees do NQ-NFS leases provide?
Does the server eventually receive/accept all writes?
Does the server accept the writes in order?
Are groups of related writes atomic?
How are write errors reported?
What is the relationship to NFS V3 commit?
20. The Distributed Lock Lab
The lock implementation is similar to DSM systems, with
reliability features similar to distributed file caches.
• use Java RMI
• lock token caching with callbacks
lock tokens are passed through the server, not peer-to-peer as in DSM
• synchronizes multiple threads on same client
• state bit for pending callback on client
• server must reissue callback each lease interval (or use RMI
timeouts to detect a failed client)
• client must renew token each lease interval
22. A Typical Unix File Tree
[Diagram: a file tree rooted at /, containing tmp, usr, and etc; usr holds bin (ls, sh) and vmunix; a grafted volume root holds project, users, and packages (tex, emacs), with the graft marked as a mount point.]
File trees are built by grafting volumes from different devices or from network servers.
Each volume is a set of directories and files; a host’s file tree is the set of directories and files visible to processes on a given host.
In Unix, the graft operation is the privileged mount system call, and each volume is a filesystem.
mount (coveredDir, volume)
coveredDir: directory pathname
volume: device specifier or network volume
volume root contents become visible at pathname coveredDir
23. Filesystems
Each file volume (filesystem) has a type, determined by its
disk layout or the network protocol used to access it.
ufs (ffs), lfs, nfs, rfs, cdfs, etc.
Filesystems are administered independently.
Modern systems also include “logical” pseudo-filesystems in
the naming tree, accessible through the file syscalls.
procfs: the /proc filesystem allows access to process internals.
mfs: the memory file system is a memory-based scratch store.
Processes access filesystems through common system calls.
24. VFS: the Filesystem Switch
[Diagram: the syscall layer (file, uio, etc.) in user space calls into the Virtual File System (VFS), which dispatches to NFS (over the TCP/IP network protocol stack), FFS, LFS, *FS, etc., above the device drivers.]
Sun Microsystems introduced the virtual file system interface in 1985 to accommodate diverse filesystem types cleanly.
VFS allows diverse specific file systems to coexist in a file tree, isolating all FS-dependencies in pluggable filesystem modules.
VFS was an internal kernel restructuring with no effect on the syscall interface.
It incorporates object-oriented concepts: a generic procedural interface with multiple implementations, based on abstract objects with dynamic method binding by type...in C.
Other abstract interfaces in the kernel: device drivers, file objects, executable files, memory objects.
25. Vnodes
In the VFS framework, every file or directory in active use is
represented by a vnode object in kernel memory.
[Diagram: vnodes in kernel memory, referenced from the syscall layer and pointing into NFS- or UFS-specific structures; inactive vnodes sit on a free list.]
Each vnode has a standard file attributes struct.
Vnode operations are macros that vector to filesystem-specific procedures.
The generic vnode points at a filesystem-specific struct (e.g., inode, rnode), seen only by the filesystem.
Each specific file system maintains a cache of its resident vnodes.
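The "dynamic method binding by type...in C" idea is just a per-filesystem table of function pointers hanging off each vnode. A toy sketch (names are modeled on, but not identical to, the real BSD interfaces):

```c
#include <stddef.h>

struct vnode;
struct vattr { int va_type; long va_size; };

/* Per-filesystem operations table: one instance per FS type. */
struct vnodeops {
    int (*vop_getattr)(struct vnode *vp, struct vattr *va);
};

struct vnode {
    const struct vnodeops *v_op;  /* binds the vnode to its FS type */
    void *v_data;                 /* FS-specific struct (inode, nfsnode, ...) */
};

/* Generic macro: the syscall layer never names the FS type. */
#define VOP_GETATTR(vp, va) ((vp)->v_op->vop_getattr((vp), (va)))

/* One toy implementation, standing in for nfs_vnodeops. */
static int toy_nfs_getattr(struct vnode *vp, struct vattr *va) {
    (void)vp;
    va->va_type = 1;     /* e.g., VREG */
    va->va_size = 4096;
    return 0;
}
const struct vnodeops toy_nfs_vnodeops = { toy_nfs_getattr };
```

Adding a new filesystem means supplying a new vnodeops table; the syscall layer is untouched.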
26. Vnode Operations and Attributes
directories only
vop_lookup (OUT vpp, name)
vop_create (OUT vpp, name, vattr)
vop_remove (vp, name)
vop_link (vp, name)
vop_rename (vp, name, tdvp, tvp, name)
vop_mkdir (OUT vpp, name, vattr)
vop_rmdir (vp, name)
vop_symlink (OUT vpp, name, vattr, contents)
vop_readdir (uio, cookie)
vop_readlink (uio)
files only
vop_getpages (page**, count, offset)
vop_putpages (page**, count, sync, offset)
vop_fsync ()
vnode attributes (vattr)
type (VREG, VDIR, VLNK, etc.)
mode (9+ bits of permissions)
nlink (hard link count)
owner user ID
owner group ID
filesystem ID
unique file ID
file size (bytes and blocks)
access time
modify time
generation number
generic operations
vop_getattr (vattr)
vop_setattr (vattr)
vhold()
vholdrele()
27. V/Inode Cache
[Diagram: the v/inode cache, hashed by HASH(fsid, fileid), with a VFS free list of inactive vnodes.]
Active vnodes are reference-counted by the structures that hold pointers to them:
- system open file table
- process current directory
- file system mount points
- etc.
Each specific file system maintains its own hash of vnodes (BSD):
- the specific FS handles initialization
- the free list is maintained by VFS
vget(vp): reclaim cached inactive vnode from VFS free list
vref(vp): increment reference count on an active vnode
vrele(vp): release reference count on a vnode
vgone(vp): vnode is no longer valid (file is removed)
28. Pathname Traversal
When a pathname is passed as an argument to a system call,
the syscall layer must “convert it to a vnode”.
Pathname traversal is a sequence of vop_lookup calls to descend
the tree to the named file or directory.
open(“/tmp/zot”)
vp = get vnode for / (rootdir)
vp->vop_lookup(&cvp, “tmp”);
vp = cvp;
vp->vop_lookup(&cvp, “zot”);
Issues:
1. crossing mount points
2. obtaining root vnode (or current dir)
3. finding resident vnodes in memory
4. caching name->vnode translations
5. symbolic (soft) links
6. disk implementation of directories
7. locking/referencing to handle races
with name create and delete operations
29. NFS Protocol
NFS is a network protocol layered above TCP/IP.
• Original implementations (and most today) use UDP
datagram transport for low overhead.
Maximum IP datagram size was increased to match FS block
size, to allow send/receive of entire file blocks.
Some implementations use TCP as a transport.
• The NFS protocol is a set of message formats and types.
Client issues a request message for a service operation.
Server performs requested operation and returns a reply message
with status and (perhaps) requested data.
30. Network Block Storage
One approach to scalable storage is to attach raw block
storage to a network.
• abstraction: OS addresses storage by <volume, sector>.
iSCSI, Petal, FC: access through souped-up device driver
• dedicated Storage Area Network or general-purpose network
FibreChannel vs. Ethernet
• shared access with scalable bandwidth and capacity
• volume-based administrative tools
backup, volume replication, remote sharing
• Called “raw” or “block”, “storage volumes” or just “SAN”.
31. “NAS vs. SAN”
In the commercial sector there is a raging debate today about
“NAS vs. SAN”.
• Network-Attached Storage has been the dominant approach
to shared storage since NFS.
NAS == NFS or CIFS: named files over Ethernet/Internet.
Network Appliance “filers”
• Proponents of FibreChannel SANs market them as a
fundamentally faster way to access shared storage.
no “indirection through a file server” (“SAD”)
lower overhead on clients
network is better/faster (if not cheaper) and dedicated/trusted
Brocade, HP, Emulex are some big players.
32. NAS vs. SAN: Cutting through the BS
• FibreChannel is a high-end technology incorporating NIC
enhancements to reduce host overhead....
...but bogged down in interoperability problems.
• Ethernet is getting faster, faster than FibreChannel is.
gigabit, 10-gigabit, + smarter NICs, + smarter/faster switches
• Future battleground is Ethernet vs. Infiniband.
• The choice of network is fundamentally orthogonal to
storage service design.
Well, almost: flow control, RDMA, user-level access (DAFS/VI)
• The fundamental questions are really about abstractions.
shared raw volume vs. shared file volume vs. private disks
33. Storage Abstractions
• relational database (IBM and Oracle)
tables, transactions, query language
• file system
hierarchical name space with ACLs
• block storage
SAN, Petal, RAID-in-a-box (e.g., EMC)
• object storage
object == file, with a flat name space: NASD, DDS
• persistent objects
pointer structures, requires transactions: OODB, ObjectStore
34. Storage Architecture
Any of these abstractions can be built using any, some, or all
of the others.
Use the “right” abstraction for your application.
The fundamental questions are:
• What is the best way to build the abstraction you want?
division of function between device, network, server, and client
• What level of the system should implement the features and
properties you want?
How does Frangipani answer them?
35. Cluster File Systems
[Diagram: multiple storage clients, each running a cluster FS, share a block storage service (FC/SAN, Petal, NASD).]
Examples: xFS [Dahlin95], Petal/Frangipani [Lee/Thekkath], GFS, Veritas, EMC Celerra.
Issues: trust; compatibility with NAS protocols; sharing, coordination, recovery.
36. Sharing and Coordination
[Diagram: *FS clients access *FS services layered over a shared storage service plus a lock manager; the NAS protocol boundary sits between clients and services, the “SAN” boundary between services and storage.]
Issues: block allocation and layout; locking/leases and their granularity; shared access; a separate lock service; logging and recovery; network partitions; reconfiguration.
What does Frangipani need from Petal?
How does Petal contribute to F’s *ility?
Could we build Frangipani without Petal?