This document discusses user space file systems and the FUSE framework. It provides background on previous efforts to develop file systems in user space and their limitations. The document then describes FUSE, how it allows file systems to be implemented and mounted in user space, and the typical workflow of a file system operation through FUSE using read() as an example. It notes that while user space file systems were previously thought to have significantly lower performance than kernel file systems, improvements to processors, memory, and FUSE itself have helped reduce overhead.
Performance degradation when using FUSE
Performance and Extension of User Space File Systems∗
Aditya Rajgarhia
Stanford University
Stanford, CA 94305, USA
aditya@cs.stanford.edu
Ashish Gehani
SRI International
Menlo Park, CA 94025, USA
ashish.gehani@sri.com
ABSTRACT
Several efforts have been made over the years for developing file systems in user space. Many of these efforts have failed to make a significant impact as measured by their use in production systems. Recently, however, user space file systems have seen a strong resurgence. FUSE is a popular framework that allows file systems to be developed in user space while offering ease of use and flexibility.
In this paper, we discuss the evolution of user space file systems with an emphasis on FUSE, and measure its performance using a variety of test cases. We also discuss the feasibility of developing file systems in high-level programming languages, by using as an example Java bindings for FUSE that we have developed. Our benchmarks show that FUSE offers adequate performance for several kinds of workloads.
Categories and Subject Descriptors
D.2.11 [Software Engineering]: Software Architectures—Language interconnection; H.3.4 [Information Storage and Retrieval]: Systems and Software—Performance evaluation
Keywords
user space, FUSE, performance, Java, language binding
1. INTRODUCTION
Developing in-kernel file systems for Unix is a challenging
task, due to a variety of reasons. This approach requires the
programmer to understand and deal with complicated kernel
code and data structures, making new code prone to bugs
caused by programming errors. Moreover, there is a steep
learning curve for doing kernel development due to the lack
of facilities that are available to application programmers.
For instance, the kernel code lacks memory protection, requires
careful use of synchronization primitives, and can be written
only in C, without even being linked against the standard C
library. Debugging kernel code is also tedious, and errors can
require rebooting the system.
∗
This material is based upon work supported by the Na-
tional Science Foundation under Grant OCI-0722068. Any
opinions, findings, and conclusions or recommendations ex-
pressed in this material are those of the authors and do not
necessarily reflect the views of the National Science Foun-
dation.
Permission to make digital or hard copies of all or part of this work for
personal or classroom use is granted without fee provided that copies are
not made or distributed for profit or commercial advantage and that copies
bear this notice and the full citation on the first page. To copy otherwise, to
republish, to post on servers or to redistribute to lists, requires prior specific
permission and/or a fee.
SAC’10 March 22-26, 2010, Sierre, Switzerland.
Copyright 2010 ACM 978-1-60558-638-0/10/03 ...$10.00.
Even a fully functional in-kernel file system has several
disadvantages. Porting a file system written for a particu-
lar flavor of Unix to a different one can require significant
changes in the design and implementation of the file sys-
tem, even though the use of similar file system interfaces
(such as the VFS layer) on several Unix-like systems makes
the task somewhat easier. Besides, an in-kernel file system
can be mounted only with superuser privileges. This can be
a hindrance for file system development and usage on cen-
trally administered machines, such as those in universities
and corporations.
In contrast to kernel development, programming in user
space mitigates, or completely eliminates, several of the above-
mentioned issues. Contemporary research in the area of
storage and file systems increasingly involves the addition
of rich functionality over a basic file system, as opposed to
designing low-level file systems directly. As an example, the
Ceph [37] distributed file system, designed to provide high
performance, reliability and scalability, uses a client that is
developed in user space. As we pointed out previously, de-
veloping extensive features in an in-kernel file system is not
a trivial task. On the other hand, by developing in user
space, the programmer has access to a wide range of famil-
iar programming languages, third-party tools and libraries,
and need not worry about the intricacies and challenges of
kernel-level programming. Of course, user space file systems
may still require some effort to port to different operating
systems, depending on the extent to which a file system’s im-
plementation is coupled to a particular operating system’s
internals.
FUSE [34] is a widely used framework available for Unix-
like operating systems, that allows nonprivileged users to
develop file systems in user space. It has been merged into
the Linux kernel, while ports are also available for several
mainstream operating systems. FUSE requires a program-
mer to deal only with a simple application programming
interface (API) consisting of familiar file system operations.
A variety of programming language bindings are available
for FUSE, which, coupled with its simplicity, makes file sys-
tem development accessible to a large number of program-
mers. FUSE file systems can also be mounted by nonprivi-
leged users, which motivates novel uses for file systems. For
instance, WikipediaFS [2] allows a user to view and edit
Wikipedia articles as if they were local files. SSHFS, which
is distributed with the FUSE user space library, allows ac-
cess to a remote file system via the SFTP protocol. Finally,
developing a FUSE file system does not require recompila-
tion or reboot of the kernel, adding to its convenience.
The prevailing view is that user space file systems suf-
fer from significantly lower performance as compared to in-
kernel implementations, due to overheads associated with
additional context switches and memory copies. However,
that may have changed due to increases in processor, mem-
ory, and bus speeds. Regular enhancements to the imple-
mentation of FUSE have also improved its performance.
The remainder of the paper is organized as follows. Sec-
tion 2 discusses previous work related to extending kernel
functionality in user space. Section 3 gives an overview of
FUSE, and explains the causes for its performance overhead.
Section 4 describes the use of various programming lan-
guages for file system development, and our implementation
of Java bindings for FUSE. Section 5 explains our bench-
marking methodology, while the results of the benchmarks
are presented in Section 6. Finally, Section 7 concludes.
2. BACKGROUND
Several efforts have been made to ease the development
of file systems, some of them to facilitate the extensibility
of operating systems in general.
Microkernel-based operating systems such as Mach [10]
and Spring [11] typically implement only the most basic
services such as address space management, process man-
agement, and interprocess communication within the ker-
nel, while other functionality, including device drivers and
file systems, is implemented as user space servers. How-
ever, the performance overhead of microkernels has been a
contentious issue, and they have not been deployed widely.
Other experimental approaches, such as the Exokernel [7]
and L4 [21] overcome the performance issue, but these are
still ongoing research efforts and their feasibility for widespread
adoption is not clear.
Extensible operating systems such as Spin [1] and Vino
[4] export interfaces to operating system services at varying
degrees of granularity. The goal is to let applications safely
modify system behavior at runtime, in order to achieve a
particular level of performance and functionality. Like mi-
crokernels, extensible operating systems are still in the re-
search domain.
Stackable file systems, first implemented in [28], allow new
features to be added incrementally. The most notable example
of this approach is FiST [40], which allows file systems to be de-
scribed using a high-level language. Using such a descrip-
tion, a code generator produces kernel file system modules
for Linux, FreeBSD and Solaris without any modifications to
the kernels. Maintaining flexibility while producing code for
a variety of kernels, though, means that FiST file systems
cannot easily manipulate low-level constructs such as the
block layout of files on disks, and metadata structures such
as inodes. Moreover, user space file systems have other ad-
vantages, such as the ability to be mounted by nonprivileged
users without any intervention by the system administrator.
Using NFS loopback servers, portable user space file sys-
tems can be created. The toolkit described in [24] facilitates
writing user space NFS loopback servers while avoiding sev-
eral issues associated with this approach. Using the NFS
interface can be beneficial for certain types of applications
while affording portability. On the other hand, the achievable
file system semantics are limited by NFS's weak cache
consistency; the stronger POSIX semantics are often more
suitable, for instance for file systems that depend on low
latency. Moreover, the performance of NFS loopback servers is
affected due to the overhead incurred in the network stack.
Various projects have aimed to support development of
user space file systems while exporting an API similar to
that of the VFS layer. UserFS [8] worked on Linux with
kernel versions as high as 2.2. It consisted of a kernel module
that registered a userfs file system type with the VFS. All
requests to this file system were then communicated to a
user space library through a file descriptor.
The Coda distributed file system [29] contains a user space
cache manager, Venus. In order to service file system oper-
ations, the Coda kernel module communicates with Venus
through a character device /dev/cfs0. UserVFS [22], which
was developed as a replacement for UserFS, used this Coda
character device for communication between the kernel mod-
ule and the user space library. Thus, UserVFS did not re-
quire patching the kernel, since the Coda file system was
already part of the Linux kernel. UserVFS was also ported
to Solaris. Similarly, Arla [38] is an AFS client that consists
of a kernel module, xfs, that communicates with the arlad
user space daemon to serve file system requests.
The ptrace() system call can also be used to build an in-
frastructure for developing file systems in user space [31].
An advantage of this technique is that all OS entry points,
rather than just file system operations, can be intercepted.
The downside is that the overhead of using ptrace() is signif-
icant, which makes this approach unsuitable for production-
level file systems.
Recent versions of NetBSD ship with puffs [15], which is
similar to FUSE. The FUSE user space library interface has
also been implemented on top of puffs, allowing the plethora
of FUSE file systems available to run on NetBSD using the
existing puffs kernel module [16].
Like some of the approaches mentioned above, FUSE con-
sists of a loadable kernel module (although it can be com-
piled into the kernel as well) along with a user space library,
libfuse. Unlike previous efforts, FUSE has been part of
the mainline Linux kernel since version 2.6.14, and ports are
also available for Mac OS X [30], OpenSolaris [9], FreeBSD
[12] and NetBSD [16]. This reduces concerns that develop-
ers using FUSE may have about compatibility of their file
systems with future versions of the kernel. Furthermore,
FUSE is released under a flexible licensing scheme, enabling
it to be used for free as well as commercial file systems. It is
interesting to analyze the functionality and performance of
FUSE since it is presently very widely used by researchers,
corporations, the open source community and even hobby-
ists, as is evident from the following examples.
TierStore [6] is a distributed file system that aims to
simplify the deployment of applications in network environ-
ments that lack the ability to support reliable, low-latency,
end-to-end communication sessions, such as those in devel-
oping regions. TierStore uses either FUSE or an NFS loop-
back server to provide a file system interface to applications.
An increasing trend has been to install both Microsoft
Windows as well as a flavor of Unix on the same machine.
Historically, there has been a lack of support for access-
ing files stored in one operating system on the other due to
incompatible file systems. NTFS-3G [25] is an open source
implementation of the Microsoft Windows NTFS file system
with read/write support. Since NTFS-3G is implemented
with FUSE, it can run unmodified on a variety of operating
systems.
Likewise, ZFS-FUSE [41] is a FUSE implementation of
the Sun Microsystems ZFS file system [32]. Porting ZFS to
the Linux kernel is complicated by the fact that the GNU
General Public License, which governs the Linux kernel, is
incompatible with CDDL, the license under which ZFS is
released. Since ZFS-FUSE runs in user space, though, it is
not restricted by the kernel’s license.
VMware’s Disk Mount utility [36] allows a user to mount
an unused virtual disk as a separate file system without
the need to run a virtual machine. On Linux hosts, this
is achieved by using FUSE.
3. FILE SYSTEM IN USER SPACE (FUSE)
We now describe the FUSE framework, and how a typical
file system operation works in FUSE.
Once a FUSE volume is mounted, all file system calls tar-
geting the mount point are forwarded to the FUSE kernel
module, which registers a fusefs file system with the VFS. As
an example, Figure 1 shows the call path through FUSE for
a read() operation. The user space file system functionality
is implemented as a set of callback functions in the userfs
program, which is passed the mountpoint for the FUSE file
system as a command line parameter. Assume the directory
/fuse in the underlying file system is chosen as the FUSE
mountpoint. When an application issues a read() system
call for the file /fuse/file, the VFS invokes the appropri-
ate handler in fusefs. If the requested data is found in the
page cache, it is returned immediately. Otherwise, the sys-
tem call is forwarded over a character device, /dev/fuse, to
the libfuse library, which in turn invokes the callback de-
fined in userfs for the read() operation. The callback may
take any action, and return the desired data in the supplied
buffer. For instance, it may do some pre-processing, request
the data from the underlying file system (such as Ext3),
and then post-process the read data. Finally, the result is
propagated back by libfuse, through the kernel, and to the
application that issued the read() system call. Other file
system operations work in a similar manner.
A helper utility, fusermount, is provided to allow nonpriv-
ileged users to mount file systems. The fusermount utility
allows several parameters to be customized at the time of
mounting the file system.
FUSE provides a choice of two different APIs for develop-
ing the user space file system. The low-level API resembles
the VFS interface closely, and the user space file system must
manage the inodes and perform pathname translations (and
any associated caching for the translations). Moreover, the
file system must manually fill the data structure used for re-
plying to the kernel. This interface is useful for file systems
that are developed from scratch, such as ZFS-FUSE, as op-
posed to those that add functionality to an existing native
file system.
On the other hand, the high-level API requires the file
system developer to deal only with pathnames, rather than
inodes, thus resembling system calls more closely than the
VFS interface. libfuse is responsible for performing and
caching the inode-to-pathname translations, and for filling
and sending the reply data structures back to the kernel.
The high-level API is sufficient for most purposes, and hence
is used by the vast majority of FUSE-based file systems.
Figure 1: Typical file system call path through
FUSE for a read() operation.
Performance Overhead of FUSE
When only a native file system such as Ext3 is used, there
are two user-kernel mode switches per file system operation
(to and from the kernel). No context switches need to be
performed, where we use the term context switch to refer
to the scheduler switching between processes with different
address spaces. User-kernel mode switches are inexpensive
and involve only switching the processor from unprivileged
user mode to privileged kernel mode, or vice versa. How-
ever, using FUSE introduces two context switches for each
file system call. There is a context switch from the user ap-
plication that issued the system call to the FUSE user space
library, and another one in the opposite direction. There-
fore, when FUSE further passes operations to the underlying
file system, a total of two context switches and two mode
switches are performed. A context switch can have a signifi-
cant cost, although the cost may depend vastly on a variety
of factors such as the processor type, workload, and mem-
ory access patterns of the applications between which the
context switch is performed. As discussed in [20, 5], the
overhead of a context switch comes from several sources:
the processor registers need to be saved and restored, cache
and translation lookaside buffer (TLB) entries for the evicted
process need to be flushed and then reloaded for the incom-
ing process, and the processor pipeline must be flushed.
FUSE also splits read and write requests into chunks of
128 KB, in order to minimize the number of dirty pages in
the kernel that depend on the user space file system to be
written to disk. If the kernel is under memory pressure due
to a large number of such dirty pages, a deadlock may occur
if the user space process that implements the responsible file
system has been swapped out. Until recently, FUSE requests
were split into chunks of only 4 KB, resulting in a significant
number of context switches for each read and write opera-
tion that operated on buffers of size greater than 4 KB. The
current 128 KB chunk size mitigates this issue since most
applications, including common UNIX utilities such as cp,
cat, tar and sftp, typically do not use buffers that are
more than 32 KB. From Linux kernel 2.6.27 onward, using
the big_writes mount option enables a 128 KB chunk size.
Figure 2 shows the times required to write a 16 MB file us-
ing chunk sizes varying from 4 KB to 128 KB. As expected,
[Bar chart omitted: execution time in milliseconds versus write() chunk
size (4 KB, 32 KB, 128 KB) for Native, FUSE with big_writes enabled,
and FUSE with big_writes disabled.]
Figure 2: Time for writing a 16 MB file using various
sizes for the write() buffer.
big_writes has no impact when the chunk size is 4 KB.
However, for a chunk size of 32 KB, FUSE with big_writes
enabled is approximately twice as fast as FUSE without
big_writes. Note that the time without big_writes also
decreases somewhat with an increase in chunk size, since
fewer system calls (and hence context switches) need to be
made. The time does not decrease when big_writes is used
and the buffer size is increased from 32 KB to 128 KB. This
may be due to the bottleneck being the memory bandwidth
or the time required to set up the kernel data structures.
Hence, 32 KB appears to be an ideal buffer size while using
FUSE. Although the overhead of FUSE appears significant
in Figure 2, it is only because a 16 MB file is small enough
to fit entirely in the page cache. This is not a true reflection
of FUSE’s performance, as discussed further in Section 6.
The above-mentioned deadlock issue is most easily demon-
strated through the use of shared writable mappings. With
such mappings, the user space file system can create vast
numbers of dirty pages without the kernel knowing about
it. For this reason, FUSE did not support shared, writable
mappings until Linux kernel 2.6.26, which added a dirty page
accounting infrastructure that allows the number of pages
used for write caching to be limited.
FUSE also introduces two additional memory copies, for
arguments that are passed to or returned from the user space
file system. Most significantly, these include the buffers con-
taining data that is read or written. While using the native
file system alone, data need be copied in memory only once,
either from the kernel’s page cache 1
to the application is-
suing the system call, or vice versa. While writing data to a
FUSE file system, the data is first copied from the applica-
tion to the page cache, then from the page cache to libfuse
via /dev/fuse, and finally from libfuse to the page cache
when the system call is made to the native file system. For
read(), the copies are similarly performed in the other direc-
tion. Note that if the FUSE file system was mounted with
the direct_io option, then the FUSE kernel module by-
passes the page cache, and forwards the application-supplied
1
Starting from kernel version 2.4.10, there is no separate
buffer cache in Linux. The page cache stores both pages
resulting from accesses through the VM subsystem and
those resulting from accesses through the VFS.
buffer directly to the user space daemon. In this case, only
one additional memory copy has to be performed. The ad-
vantage of using direct_io is that writes are significantly
faster due to the reduced memory copy. The downside is
that each read() request has to be forwarded to the user
space file system, as data is not present in the page cache,
thus affecting read performance severely. Nonetheless, the
direct_io parameter is essential when only the file system
can ensure the consistency of the data. As an example, if
the data is read from physical sensors then content from the
page cache should not be returned to the application.
Last, when using the native file system, all data that is
read from or written to the disk is cached in the kernel’s page
cache. With FUSE, fusefs also caches the data in the page
cache, resulting in two copies of the same data being cached.
As we mentioned previously, if the data requested by an
application is found in the page cache, it is returned im-
mediately rather than the request being passed to libfuse.
Although the use of the page cache by FUSE is very benefi-
cial for read operations since it avoids unnecessary context
switches and memory copies, the fact that the same data
is cached twice reduces the efficiency of the page cache. In
Linux, one can open files on the native file system using the
O_DIRECT flag and thereby eliminate caching by the native
file system. However, this is generally not a feasible solu-
tion since O_DIRECT imposes alignment restrictions on the
length and address of the write() buffers and on the file off-
sets. Moreover, these restrictions vary by file system and
kernel versions. As mentioned above, the direct_io mount
option can be used to disable the page cache for FUSE, but
the cost of doing so is far greater than the overhead of the
double caching.
4. LANGUAGE BINDINGS FOR FUSE
More than twenty different language bindings are available
for FUSE, allowing file systems to be written in languages
other than C. This means that programmers can use lan-
guages that are based on different programming paradigms,
offer different levels of type safety and type checking, and
are generally intended for different usage scenarios. For in-
stance, using a language such as C++ or C# allows the
file system to be written in a high-performance, object-
oriented language. On the other hand, functional languages
like Haskell and OCaml provide tools such as higher-order
functions and lazy evaluation that facilitate modularity and,
in turn, productivity [13].
Some programming languages are particularly suitable for
certain uses. For example, the Erlang programming lan-
guage was designed to support fault-tolerant, real-time, non-
stop applications in a distributed environment. As such, it
has been used in several distributed storage systems [23, 26].
Halfs [14] is a file system developed using Haskell, which is
well suited to high assurance development, a methodology
for creating software systems that meet rigorously defined
specifications with a high degree of confidence. The Ruby
language, in which every data type is an object, was designed
to encourage good interface design while providing a lot of
flexibility to the programmer. Ruby is used in one project
[35] to provide transparent access to online meteorological
databases through a local file system interface. Outside the
realm of file systems, the SPIN operating system [1] depends
on key language features of Modula-3 to safely export fine-
grained interfaces to operating system services.
Several projects [33, 18] use the Python bindings for FUSE,
since it allows for rapid software development. Python is an
increasingly popular programming language, and has an ex-
tensive set of third-party libraries beyond a comprehensive
standard library.
JavaFuse
We have implemented JavaFuse [27], a Java interface for
the FUSE high-level API, by using the Java Native Interface
(JNI) to communicate between the C and Java layers. Al-
though a basic Java interface had already existed for FUSE,
it is no longer maintained [19]. In addition, JavaFuse con-
tains JNI code that allows the corresponding system call to
be invoked from each Java callback method. This allows the
file system developer to build functionality over the native
file system without writing any JNI code.
JavaFuse allows file systems to be written in a combi-
nation of C and Java by utilizing the FUSE library, while
aiming to provide maximum portability, flexibility, and ease
of use. Since JavaFuse is a native executable, it inherits the
UNIX runtime, which can be valuable for implementing file
system features.
The developer writes the file system functionality as a
Java class, and registers this class with JavaFuse using a
command line parameter. For each FUSE callback, the
Java class may have two types of methods that can com-
municate with JavaFuse: a pre-call and a post-call. The
pre-call is passed the parameters that are received by the
FUSE callback. Upon its completion, these parameters are
copied back to JavaFuse and passed to the native system
call. Finally, the parameters along with the result of the
system call are passed to the post-call. Within a pre-call or
post-call method, the file system developer is free to invoke
any number of Java methods and use external libraries.
The behavior of JavaFuse can be customized for individ-
ual file systems by use of a configuration file. Each of the
three types of calls (pre-calls, post-calls and system calls)
can be switched on or off depending on the requirements
of the file system, in order to reduce unnecessary overhead.
Another important parameter that can be configured de-
fines the behavior of the read and write callbacks. These
operations may result in large amounts of data being copied
between the C and Java layers. In certain file systems, copy-
ing the data may not be needed, in which case it can be
avoided by specifying the meta-data-only parameter in the
configuration file. An alternative technique would have been
to use the ByteBuffer class provided by Java’s Non-blocking
I/O package, which would avoid memory copies between the
Java virtual machine (JVM) and the native code. However,
these buffers limit portability when compound data types
such as C structures are used. Moreover, they are not safe
for concurrent access, while FUSE uses multiple threads in
order to improve performance.
5. BENCHMARK METHODOLOGY
Our goal is to determine the feasibility of developing and
running file systems in user space. To this end, we have
performed two kinds of benchmarks.
5.1 Microbenchmarks
Microbenchmarks were used to measure the performance
of common file system operations and the raw throughput
attainable. We have used a modified version of the Bonnie
2.0.6 benchmark tool [3]. Given a file size as a command line
parameter, the benchmark proceeds in six phases. First, a
file of the specified size is written using putc(), one charac-
ter at a time. Then, the file is written from scratch again
using 16 KB blocks. In steps three and four, the file is read
one character at a time using getc(), and in 16 KB chunks
using read(), respectively. Finally, these two read opera-
tions are performed again, this time after clearing the page
cache before each step.
5.2 Macrobenchmarks
Although microbenchmarks are useful for determining the
overhead of individual operations, application-level bench-
marks provide an overall view of a file system’s performance,
and its viability for use in production systems. Hence, we
have performed the following benchmarks that measure the
file system performance for commonly used applications.
PostMark - This widely used benchmark was designed
to replicate the small file workloads seen in electronic mail,
Usenet news, and Web-based commerce under heavy load
[17]. PostMark generates an initial pool of random text
files whose range of sizes is configurable. Then, a specified
number of transactions occurs on these files. PostMark is
considered to be a good benchmark since it puts both file and
metadata operations under heavy stress. We used version
1.51 of PostMark, with the configuration set to use 5,000
files ranging from 1 KB to 64 KB, and 50,000 transactions.
This range of file sizes is representative of a typical Web
server’s workload [39].
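This configuration corresponds roughly to a PostMark command script along the following lines (a sketch; consult PostMark's interactive help for the exact command syntax):

```
set size 1024 65536
set number 5000
set transactions 50000
run
quit
```
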
Large File Copy - The time is measured for copying
a 1.1 GB movie file using the cp command. This is very
different from the PostMark benchmark since it involves a
single, large file. Such large files are often used in desktop
computers, multimedia servers, and scientific computing.
5.3 Tested Configurations
Comparisons were made between the following file system
configurations:
Native - the underlying Ext4 file system.
FUSE - A null FUSE file system written in C, which
simply passes each call to the native file system.
JavaFuse1 (metadata-only) - A null file system written
using JavaFuse, which does not copy the read() and write()
buffers over the JNI call.
JavaFuse2 (copy all data) - Same as above, but also
copies the read() and write() buffers over JNI.
5.4 Experimental Setup
All benchmarks were performed on a single machine with
a Pentium IV processor running at 3.40 GHz, 512 MB of
main memory (which was increased to 2 GB for the mac-
robenchmarks, which generate a large number of files and
hence require more memory to store the resulting file han-
dles), and a 320 GB Seagate ST3320813AS SATA disk with
8 MB cache, running at 7,200 RPM. The maximum sus-
tained transfer rate of this disk is 115 MB/s. Therefore, all
observed throughputs that exceed 115 MB/s are benefiting
either from the disk drive’s cache or from the file system’s
page cache, and likely from a combination of both. The op-
erating system was Linux, with kernel version 2.6.30.5, while
the version of the user space FUSE library was 2.8.0-pre1.
The native file system was Ext4.
6. DISCUSSION
The results of the benchmarks are analyzed below. All
benchmarks were performed five times, and the average value
was used. We cleared the page cache before running each
benchmark. The following discussion assumes an under-
standing of how FUSE works, as described in Section 3.
6.1 Microbenchmark Results
Figure 3 presents the output from the microbenchmarks.
For the per-character sequential output in Figure 3(a), FUSE
has approximately 25% overhead, caused by the large num-
ber of additional context switches. Likewise, using JavaFuse
degrades performance even further due to another layer of
context switches between libfuse and the JVM. For block
sequential output (Figure 3(b)), Native is much faster for file
sizes below 100 MB, after which the throughputs of Native and
FUSE converge. The reason is that smaller files are written
only to the page cache, and not to the disk. Writing
to the page cache does not consume much time, and hence
the overhead of FUSE appears relatively large. On the other
hand, once the file size is big enough to force data to be writ-
ten to the disk, the I/O time dominates, and there is
little relative difference between the Native and FUSE
throughputs.
Since the benchmark reads the same file that it wrote in
the preceding step, part of the file now resides in the page
cache. As we had explained earlier, data is cached twice
when FUSE is used. For per-character input (Figure 3(c)),
FUSE has less than 5% overhead for file sizes as large as 200
MB. After this point, the file can no longer be accommo-
dated completely in the page cache when accessed through
FUSE, and its throughput drops slightly. Similar to the case
of per-character input, files as large as 450 MB are stored
completely in the page cache in the case of Native for the
block input test (Figure 3(d)). For both FUSE and Java-
Fuse, the block input test contains a spike in the throughput
due to caching effects caused by the synthetic nature of the
benchmark; in the previous step (per-character sequential
input), when the file can no longer fit in the page cache, the
initial part is written out to disk. However, the latter part
of the file still resides in the page cache, and is thus accessed
directly in the current step, causing the spike.
To negate the effects of the test file already residing in
the page cache, the previous two steps are performed again
with empty page caches. Because the page cache was cleared
before this step, the throughputs start out slow for both na-
tive and FUSE in the per-character input test (Figure 3(e)).
The kernel determines that the file is being read sequen-
tially, and enables read-ahead caching in order to improve
performance. Surprisingly, FUSE obtains a higher sustained
throughput than Native. The 2.6 series of the Linux kernel
employs a complex read-ahead mechanism, and we believe
that FUSE makes better use of this mechanism, perhaps
because read-ahead is enabled for both fusefs and Ext4
when a file is accessed through
FUSE. When block input is performed (Figure 3(f)), Na-
tive and FUSE seem to benefit equally from the read-ahead
logic, and there is no visible overhead for FUSE regardless
of the file size.
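The page-cache and read-ahead behavior discussed above can be controlled per file on Linux. The following Python sketch (an illustration, not part of the benchmark suite) uses posix_fadvise() to evict a file's cached pages, a per-file alternative to clearing the whole page cache via /proc/sys/vm/drop_caches, and then hints sequential access, the pattern the kernel's read-ahead logic rewards:

```python
import os
import tempfile

def read_uncached_sequential(path, block=64 * 1024):
    """Read a file sequentially after asking the kernel to drop its
    cached pages and to expect a sequential access pattern."""
    fd = os.open(path, os.O_RDONLY)
    try:
        # Evict any cached pages so the read is served from disk.
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
        # Hint sequential access, encouraging aggressive read-ahead.
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)
        total = 0
        while chunk := os.read(fd, block):
            total += len(chunk)
        return total
    finally:
        os.close(fd)

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "input.dat")
    with open(path, "wb") as f:
        f.write(os.urandom(4 << 20))  # 4 MiB test file
    print(read_uncached_sequential(path))
```

The fadvise hints are advisory: the kernel may ignore them, which is one reason benchmark runs on a cold cache are usually forced system-wide instead.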
6.2 Macrobenchmark Results
The native ext3 file system completes the PostMark bench-
mark in 84 seconds, while FUSE requires 91 seconds (see
[Figure 4: Time for executing PostMark. Bars show execution time in seconds (0 to 200) for Native, FUSE, JavaFuse1, and JavaFuse2.]
Figure 4). Thus, FUSE incurs an overhead of less than 10%.
With Java, however, the overhead (more than 60%) is much
greater due to higher CPU and memory usage. Although
the benchmark is synthetic and does not by itself imply that
FUSE can be used for production servers, it does put heavy
stress on the file system, and is certainly an indication of
the performance that can be obtained by FUSE.
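The character of the PostMark workload can be sketched as follows. This Python fragment loosely emulates (it is not the actual PostMark tool) its mix of small-file create, delete, read, and append transactions, which stresses metadata operations:

```python
import os
import random
import tempfile
import time

def postmark_like(root, n_files=100, n_txns=500, size=4096):
    """Create a pool of small files, then time a sequence of random
    create/delete/read/append transactions over the pool."""
    rng = random.Random(42)  # fixed seed for repeatable runs
    files = []
    for i in range(n_files):
        p = os.path.join(root, f"f{i}")
        with open(p, "wb") as f:
            f.write(os.urandom(size))
        files.append(p)
    start = time.perf_counter()
    for _ in range(n_txns):
        op = rng.choice(("create", "delete", "read", "append"))
        if op == "create" or not files:
            p = os.path.join(root, f"f{len(files)}_{rng.randrange(1 << 30)}")
            with open(p, "wb") as f:
                f.write(os.urandom(size))
            files.append(p)
        elif op == "delete":
            os.remove(files.pop(rng.randrange(len(files))))
        elif op == "read":
            with open(rng.choice(files), "rb") as f:
                f.read()
        else:  # append
            with open(rng.choice(files), "ab") as f:
                f.write(os.urandom(size))
    elapsed = time.perf_counter() - start
    for p in files:  # clean up the remaining pool
        os.remove(p)
    return elapsed

with tempfile.TemporaryDirectory() as d:
    print(f"{postmark_like(d):.3f}s for 500 transactions")
```

Each transaction involves at least one metadata operation (open, create, or unlink), which is why this style of workload exposes the per-request cost of FUSE more than large sequential transfers do.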
[Figure 5: Time for copying a 1.1 GB video file. Bars show execution time in seconds (0 to 50) for Native, FUSE, JavaFuse1, and JavaFuse2.]
As seen in Figure 5, FUSE also performs comparably to
Native while copying a 1.1 GB file. This corroborates the
results of the block input throughputs from the microbench-
marks, where the overhead of FUSE was negligible when the
cost of performing disk I/O was the dominating factor.
7. CONCLUSION
Our benchmarks show that whether FUSE is a feasible
solution depends on the expected workload for a system. It
achieves performance comparable to in-kernel file systems
when large, sustained I/O is performed. On the other hand,
the overhead of FUSE is more visible when the workload
consists of a relatively large number of metadata operations,
such as those seen in Web servers and other systems that
deal with small files and a very large number of clients.
FUSE is certainly an adequate solution for personal com-
puters and small-scale servers, especially those that perform
large I/O transfers, such as Grid applications and multime-
dia servers.
Using an additional programming language layer in the
form of JavaFuse incurred visible overhead for CPU inten-
sive benchmarks due to the presence of the JVM. However,
with proper optimization, the overhead may be reduced.
For instance, if the language is compiled to native code and
shared buffers are used, then the overhead should be negli-
gible.
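The benefit of shared buffers can be illustrated even within a single process. This Python sketch (an analogy rather than JavaFuse code) contrasts slicing a large read buffer, which copies the data, with passing a memoryview, which shares the underlying memory:

```python
import time

# A 64 MiB payload, roughly the scale of data moved in a large read.
data = bytes(64 << 20)

start = time.perf_counter()
for _ in range(20):
    _ = data[1:]              # each slice copies ~64 MiB
copy_time = time.perf_counter() - start

view = memoryview(data)
start = time.perf_counter()
for _ in range(20):
    _ = view[1:]              # each slice shares the underlying buffer
share_time = time.perf_counter() - start

print(f"copying: {copy_time:.3f}s, shared: {share_time:.3f}s")
```

The shared case avoids the per-request copy entirely, which is the same saving a shared buffer between libfuse and the language runtime would provide.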