This document reviews several storage-specific solutions for providing quality of service (QoS) in storage area networks (SANs). It summarizes Stonehenge, pClock, Argon, Façade, and PARDA as approaches that have been developed to implement QoS at the storage level. Each approach aims to provide performance isolation and guarantees for different workloads and applications sharing storage resources, though they require running instances of the algorithms on individual storage devices, increasing overhead. The document concludes that current solutions do not provide end-to-end QoS when data traverses the network in an IP SAN.
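Several of the surveyed systems build on proportional-share scheduling: requests are tagged with virtual times so each workload receives throughput in proportion to its weight. A minimal start-time fair queuing sketch of that general idea follows; it is not the specific algorithm of Stonehenge, pClock, or any other named system, and the workload names and weights are made up.

```python
import heapq

class ProportionalShare:
    """Toy proportional-share dispatcher: each workload's requests are
    tagged with a virtual finish time that advances inversely to its
    weight, so heavier workloads are dispatched more often."""
    def __init__(self, weights):
        self.weights = weights
        self.vtime = {w: 0.0 for w in weights}
        self.queue = []

    def submit(self, workload, request):
        # Advance this workload's virtual clock by 1/weight per request.
        self.vtime[workload] += 1.0 / self.weights[workload]
        heapq.heappush(self.queue, (self.vtime[workload], workload, request))

    def dispatch(self):
        # Serve the request with the smallest virtual finish time.
        _, workload, request = heapq.heappop(self.queue)
        return workload, request

# Workload "a" has twice the weight of "b", so it should be served
# roughly twice as often.
sched = ProportionalShare({"a": 2, "b": 1})
for req in ["a1", "b1", "a2", "b2", "a3", "b3"]:
    sched.submit(req[0], req)
order = [sched.dispatch()[0] for _ in range(6)]
```

With these weights, the dispatch order interleaves two of "a"'s requests for each of "b"'s, which is the isolation property the surveyed systems aim for at the device level.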
Service Request Scheduling based on Quantification Principle using Conjoint A... (IJECEIAES)
This document presents a service request scheduling technique for heterogeneous distributed systems using quantification principles. It uses conjoint analysis to identify the most influential server attribute, and z-score to quantify attribute values. Servers are assigned a "servicing cutoff" percentage based on z-scores, indicating each server's share of total requests. Requests are prioritized and assigned to servers without exceeding capacity limits. The technique aims to evenly distribute workload among servers according to their quantified capacities. Experimental results showed improved performance over other scheduling principles.
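The z-score step can be sketched roughly as follows. The server names, capacity figures, and the shift-and-normalize conversion from z-scores to cutoff percentages are illustrative assumptions for this sketch, not the paper's exact formula.

```python
import statistics

def servicing_cutoffs(capacities):
    """Assign each server a 'servicing cutoff' share of total requests
    from the z-scores of its dominant attribute (here, an illustrative
    capacity figure per server)."""
    values = list(capacities.values())
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        # All servers identical: split requests evenly.
        return {s: round(100 / len(capacities), 1) for s in capacities}
    z = {s: (c - mean) / stdev for s, c in capacities.items()}
    # Shift z-scores to be positive, then normalize to percentages.
    # (This shift-and-normalize step is an assumption of the sketch.)
    lo = min(z.values())
    shifted = {s: v - lo + 1.0 for s, v in z.items()}
    total = sum(shifted.values())
    return {s: round(100 * v / total, 1) for s, v in shifted.items()}

caps = {"s1": 8, "s2": 16, "s3": 32}
print(servicing_cutoffs(caps))
```

The shares sum to 100%, and a server with higher capacity receives a proportionally larger cutoff, which is the even-distribution-by-capacity behavior the technique targets.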
This document discusses monitoring and troubleshooting a Hadoop cluster running MapR. It outlines tools for cluster monitoring including the MapR Control System and MapR Metrics. Common troubleshooting scenarios like slow nodes, out of memory errors, and time skew issues are described. The document also provides guidance on working with MapR support and things to avoid when troubleshooting.
Presentation about Oracle Active Data Guard which I gave together with my colleague Luca Canali on UKOUG 2012
http://2012.ukoug.org/default.asp?p=9339&dlgact=shwprs&prs_prsid=7240&day_dayid=63
HBaseCon 2013: A Developer’s Guide to Coprocessors (Cloudera, Inc.)
This document discusses coprocessors in HBase, which allow arbitrary code to run on each region server. It provides examples of using coprocessors for observers that react to events and endpoints that clients can explicitly call. The examples include expanding single-row JSON data into multiple columns, collecting real-time analytics, and optimizing searches through endpoints.
This document provides an introduction and overview of HBase coprocessors. It discusses the motivations for using coprocessors such as performing distributed and parallel computations directly on data stored in HBase without data movement. It describes the architecture of coprocessors and compares the HBase coprocessor model to Google's Bigtable coprocessor model. It also provides details on the different types of coprocessors (observers and endpoints), how they are implemented and used, and provides examples code for both.
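The observer idea can be illustrated with a toy Python stand-in. The hook names and classes below are invented for illustration only; the real HBase coprocessor API is Java-based and considerably richer.

```python
class ToyRegionServer:
    """Toy stand-in for a region server that fires observer hooks
    around writes, mimicking how observer coprocessors react to events
    in-place on the server hosting the data."""
    def __init__(self):
        self.store = {}
        self.observers = []

    def register(self, observer):
        self.observers.append(observer)

    def put(self, row, value):
        for ob in self.observers:
            value = ob.pre_put(row, value)   # hook may rewrite the cell
        self.store[row] = value
        for ob in self.observers:
            ob.post_put(row, value)          # hook reacts after the write

class CountingObserver:
    """Counts writes server-side, loosely mimicking the real-time
    analytics example mentioned above."""
    def __init__(self):
        self.writes = 0
    def pre_put(self, row, value):
        return value
    def post_put(self, row, value):
        self.writes += 1

rs = ToyRegionServer()
ob = CountingObserver()
rs.register(ob)
rs.put("row1", "a")
rs.put("row2", "b")
```

The key property shown is that the computation runs where the data lives: the counter updates inside the (toy) region server, with no data shipped to a client.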
Alibaba has built its data infrastructure on Apache Hadoop YARN since 2013, and it now manages more than 10,000 nodes. At Alibaba, YARN serves systems such as search, advertising, and recommendation. It runs not just batch jobs but also streaming, machine learning, OLAP, and even online services that directly affect Alibaba’s user experience. To extend YARN’s ability to support such complex scenarios, we have built on and contributed to many YARN 3.x improvements. In this talk, you will learn what these improvements are and how they helped solve difficult problems in large production clusters.
This includes:
1. Significantly improved performance with the Capacity Scheduler’s async scheduling framework
2. Better placement decisions with node attributes and placement constraints
3. Better resource utilization with opportunistic containers
4. A load balancer that evens out resource utilization across nodes
5. Generic resource type scheduling and isolation to manage new resources such as GPUs and FPGAs
In the presentation, we will also introduce how we built the entire ecosystem on top of YARN and how we keep evolving YARN’s ability to tackle the challenges brought by Alibaba’s continuously growing data and business.
Speakers
Weiwei Yang, Alibaba, Staff Software Engineer
Ren Chunde, Alibaba Group, Senior Engineer
Hadoop was born long before the Cloud Native era, but the question is still the same: what can it offer in the age of Kubernetes, containerization, and hybrid clouds?
Apache Hadoop Ozone is a new subproject of Hadoop. It consists of a generic low-level binary layer, the Hadoop Distributed Data Storage (HDDS), and an S3-compatible object store implementation on top of it.
But the HDDS data storage layer is not just for the object store. It can serve multiple purposes: enhancing the scalability of HDFS or providing block-level access to the managed storage space. With this approach, the same Hadoop Ozone cluster can provide Hadoop file system based storage, object store space, and block-level storage.
Storage remains a hot topic in Kubernetes and Cloud Native environments. The Container Storage Interface specification is a vendor-neutral standard for providing storage plugins to multiple container orchestration systems.
Quadra provides block-level access on top of the Hadoop Distributed Data Storage layer and is a first-class citizen of the containerized world. It implements the Container Storage Interface and can work as a Kubernetes dynamic volume provisioner.
In this talk we will demonstrate how Hadoop Ozone storage can be used from containers. We will explain the basic storage types of Kubernetes clusters and show how Hadoop Ozone and Quadra can help solve the storage problem in an industry-standard way.
This document outlines the steps for migrating from another Hadoop distribution to MapR, including planning the migration, deploying MapR, migrating components, applications, data, and nodes. The key steps are planning requirements and goals, deploying and testing MapR, migrating customized components to work with MapR, ensuring applications work with MapR filesystem APIs, using distcp or the file client to copy data to MapR, and adding decommissioned nodes from the original cluster to MapR.
Scale-out Storage on Intel® Architecture Based Platforms: Characterizing and ... (Odinot Stanislas)
From Intel's developer forum (IDF), this is a rather nice presentation on so-called "scale-out" storage, with an overview of the various solution vendors (slide 6), covering those offering file, block, and object modes, followed by benchmarks of some of them, including Swift, Ceph, and GlusterFS.
The document discusses scaling tier-based applications using Space Based Architecture (SBA). SBA uses a common data and processing grid to virtualize tiers, enabling applications to scale out processing across commodity hardware. This approach parallelizes transactions, reduces serialization overhead between tiers, and allows dynamic scalability through automated deployment of services on a grid. The session will provide examples of how financial and telecom applications achieve scalability using SBA.
[Hadoop Meetup] Apache Hadoop 3 community update - Rohith Sharma (Newton Alex)
Hadoop 3.0 will include several major new features and improvements, including HDFS erasure coding for improved storage efficiency, built-in support for long running services in YARN, and better resource isolation including Docker support. It also focuses on compatibility by preserving wire compatibility with Hadoop 2 clients and supporting rolling upgrades. Extensive testing is planned through alpha, beta, and GA releases to stabilize and validate the new features.
Hortonworks provides best practices for system testing Hadoop clusters. It recommends testing across different operating systems, configurations, workloads and hardware to mimic a production environment. The document outlines automating the testing process through continuous integration to test over 15,000 configurations. It provides guidance on test planning, including identifying requirements, selecting hardware and workloads to test upgrades, migrations and changes to security settings.
This document discusses client-side load balancing in a cloud computing environment. It describes how a client-side load balancer can distribute requests across backend web servers in a scalable way without requiring control of the infrastructure. The proposed architecture uses static anchor pages hosted on Amazon S3 that contain JavaScript code to select a web server based on its reported load. The JavaScript then proxies the request to that server and updates the page content. This approach achieves high scalability and adaptiveness without hardware load balancers or layer 2 optimizations.
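The server-selection step that the anchor page's JavaScript performs can be sketched as picking the backend with the lowest reported load. This Python sketch only illustrates the selection logic; the server names and load figures are made up, and the real system runs this in the browser.

```python
def pick_server(loads):
    """Client-side selection sketch: given each backend's self-reported
    load (0.0 = idle, 1.0 = saturated), choose the least-loaded one.
    The client then proxies its request to the chosen server."""
    return min(loads, key=loads.get)

# Hypothetical load report fetched by the anchor page.
loads = {"web-a": 0.72, "web-b": 0.31, "web-c": 0.55}
chosen = pick_server(loads)
```

Because the decision is made per-client from published load data, no dedicated load-balancer hardware sits in the request path, which is the scalability argument of the paper.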
HBaseCon 2015: Taming GC Pauses for Large Java Heap in HBase (HBaseCon)
In this presentation, we will introduce Hotspot's Garbage First collector (G1GC) as the most suitable collector for latency-sensitive applications running with large memory environments. We will first discuss G1GC internal operations and tuning opportunities, and also cover tuning flags that set desired GC pause targets, change adaptive GC thresholds, and adjust GC activities at runtime. We will provide several HBase case studies using Java heaps as large as 100GB that show how to best tune applications to remove unpredicted, protracted GC pauses.
Database replication is used to provide database resiliency in Exchange 2013 by replicating the active Mailbox database to other Mailbox servers in a Database Availability Group (DAG). The DAG configuration includes adding servers as members and deciding which databases will replicate to which members, with one server having the active copy and others storing passive copies. If the active database fails, a passive copy will become active with minimal interruption to users. DAGs require components like clustering and use replication to continuously sync transaction logs between copies. Multiple DAG configurations can be used depending on the environment.
This document provides best practices for YARN administrators and application developers. For administrators, it discusses YARN configuration, enabling ResourceManager high availability, configuring schedulers like Capacity Scheduler and Fair Scheduler, sizing containers, configuring NodeManagers, log aggregation, and metrics. For application developers, it discusses whether to use an existing framework or develop a native application, understanding YARN components, writing the client, and writing the ApplicationMaster.
Big Lab Problems Solved with Spectrum Scale: Innovations for the Coral Program (inside-BigData.com)
In this video from the DDN User Group at SC16, Sven Oehme, Chief Research Strategist at IBM, presents "Big Lab Problems Solved with Spectrum Scale: Innovations for the Coral Program."
Watch the video presentation: http://wp.me/p3RLHQ-g52
Sign up for our insideHPC Newsletter: http://wp.me/p3RLHQ-g52
SQL Server Reporting Services Disaster Recovery Webinar (Denny Lee)
This is the PASS DW/BI Webinar for SQL Server Reporting Services (SSRS) Disaster Recovery webinar. You can find the video at: http://www.youtube.com/watch?v=gfT9ETyLRlA
The document discusses load balancing and intelligent load balancing. It covers load balancing architecture, how the data collector and dynamic store work, and how performance counters are used. Intelligent load balancing techniques like load throttling are explained. Potential issues that could cause load imbalances like the "black hole effect" or failing to read performance counters are also reviewed. Troubleshooting techniques for resolving common problems are provided.
HBaseCon2017 Improving HBase availability in a multi-tenant environment (HBaseCon)
The document discusses improvements made by Hubspot's Big Data Team to increase the availability of HBase in a multi-tenant environment. It outlines reducing the cost of region server failures by improving mean time to recovery, addressing issues that slowed recovery, and optimizing the load balancer. It also details eliminating workload-driven failures through service limits and improving hardware monitoring to reduce impacts of failures. The changes resulted in 8-10x faster balancing, reduced recovery times from 90 to 30 seconds, and consistently achieving 99.99% availability across clusters.
Gluster is an open-source distributed scale-out storage system. It uses commodity hardware and has no centralized metadata server. Key concepts include bricks (storage units on servers), volumes (logical collections of bricks), and a trusted storage pool of nodes. Main volume types are distributed, replicated, distributed replicated, and striped. To set up Gluster, install packages, start services, create a storage pool, make volumes, and mount them on clients.
Load Balancing in Cloud Computing Environment: A Comparative Study of Service... (Eswar Publications)
Load balancing is a computer networking method to distribute workload across multiple computers or a computer cluster, network links, central processing units, disk drives, or other resources, in order to achieve optimal resource utilization, maximize throughput, minimize response time, and avoid overload. Using multiple components with load balancing, instead of a single component, may increase reliability through redundancy. The load balancing service is usually provided by dedicated software or hardware, such as a multilayer switch or a Domain Name System server. In this paper, the existing static algorithms used for simple cloud load balancing are identified, and a hybrid algorithm is suggested for future development.
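As a concrete instance of the static algorithms such surveys typically cover, a minimal round-robin dispatcher might look like this (server names are assumed for illustration):

```python
from itertools import cycle

class RoundRobin:
    """Minimal static round-robin dispatcher: requests are assigned to
    servers in a fixed rotation, ignoring current load. This is a sketch
    of the classic static approach, not any paper's specific algorithm."""
    def __init__(self, servers):
        self._ring = cycle(servers)

    def next_server(self):
        return next(self._ring)

rr = RoundRobin(["vm1", "vm2", "vm3"])
assignments = [rr.next_server() for _ in range(6)]
```

The weakness motivating hybrid schemes is visible here: the rotation is oblivious to each server's actual load, so a slow server receives the same share of requests as a fast one.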
The document discusses planning a MapR cluster, including hardware requirements and recommendations, operating system requirements, node configuration, service layout, and high availability cluster design considerations. The objectives are to understand MapR requirements, recommended hardware configurations for 50TB and 100TB clusters, how MapR services are arranged, and important factors for HA cluster design.
NonStop Hadoop - Applying the Paxos Family of Protocols to make Critical Hadoo... (DataWorks Summit)
This document discusses using Paxos and the WANdisco DConE coordination engine to provide high availability for Hadoop services like HDFS and HBase. It provides background on WANdisco and describes how Paxos works to achieve consensus across replicated servers. The DConE innovations beyond Paxos allow for features like concurrent proposals, dynamic reconfiguration, and self-healing. The document then explains how DConE can be used to replicate the HDFS namespace across multiple consensus nodes and replicate HBase region servers. This provides active-active replication and eliminates single points of failure for critical Hadoop services.
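The consensus primitive underlying such replication can be illustrated with a minimal single-decree Paxos sketch. This is a textbook-style toy, not WANdisco's DConE implementation (whose extensions, like concurrent proposals, go well beyond it).

```python
class Acceptor:
    """Single-decree Paxos acceptor: promises to ignore proposals below
    the highest ballot seen, and reports any previously accepted value."""
    def __init__(self):
        self.promised = -1
        self.accepted = None  # (ballot, value) or None

    def prepare(self, ballot):
        if ballot > self.promised:
            self.promised = ballot
            return True, self.accepted
        return False, self.accepted

    def accept(self, ballot, value):
        if ballot >= self.promised:
            self.promised = ballot
            self.accepted = (ballot, value)
            return True
        return False

def propose(acceptors, ballot, value):
    """Run phase 1 (prepare) then phase 2 (accept) against a majority."""
    promises = [a.prepare(ballot) for a in acceptors]
    granted = [p for ok, p in promises if ok]
    if len(granted) <= len(acceptors) // 2:
        return None  # no majority promised
    # If any acceptor already accepted a value, adopt the one with the
    # highest ballot -- this is what makes an agreed value durable.
    prior = [p for p in granted if p is not None]
    if prior:
        value = max(prior)[1]
    acks = sum(a.accept(ballot, value) for a in acceptors)
    return value if acks > len(acceptors) // 2 else None

acceptors = [Acceptor() for _ in range(3)]
chosen = propose(acceptors, 1, "edit-1")
```

Once a majority accepts "edit-1", any later proposal, even with a different value, rediscovers and re-proposes it; this is the property that lets replicated namespace edits stay consistent across nodes.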
In this session, you will learn the work Xiaomi has done to improve the availability and stability of our HBase clusters, including cross-site data and service backup and a coordinated compaction framework. You'll also learn about the Themis framework, which supports cross-row transactions on HBase based on Google's percolator algorithm, and its usage in Xiaomi's applications.
Performance of persistent apps on Container-Native Storage for Red Hat OpenSh... (Principled Technologies)
The document summarizes benchmark testing of Container-Native Storage (CNS) on Red Hat OpenShift Container Platform using two different storage media: solid-state drives (SSDs) and hard disk drives (HDDs). It tested CNS under IO-intensive and CPU-intensive workloads. For the IO-intensive workload, the SSD configuration significantly outperformed the HDD configuration, achieving over 5 times the maximum performance. However, for the CPU-intensive workload, the two configurations achieved similar maximum performance levels, with SSDs having less than a 5% improvement. The testing demonstrated that CNS can provide scalable storage and that storage performance depends on understanding how it matches the workload characteristics.
Hadoop Summit San Jose 2015: Towards SLA-based Scheduling on YARN Clusters (Sumeet Singh)
In this talk, we look at the YARN scheduler choices available today for Apache Hadoop 2 and discuss their pros and cons. We dive deeper into the Capacity Scheduler by providing a comprehensive overview of its various settings, with examples from real large-scale Hadoop clusters, to promote a broader understanding of the schedulers’ current state and the best practices in place today for queue nomenclature, planning, allocations, and ongoing management. We present detailed cluster, queue, and job behaviors from several different capacity management philosophies.
We then propose practical solutions, without any change to the scheduler or core Hadoop, that allow managing queue creation and capacity allocation while optimizing for cluster utilization and maintaining SLA guarantees. A unified queue nomenclature and admission and capacity re-allocation policies across business units, applications, and clusters make service automation possible. Transparency in resources consumed allows for defining realistic SLA expectations. Finally, consistent application tagging completes the feedback loop, with SLAs observed through application-level reporting.
Advanced resource allocation and service level monitoring for container orche... (Conference Papers)
This document proposes an architecture for advanced resource allocation and service level monitoring for container orchestration platforms. It begins with background on containerization and different container orchestration platforms like Docker Swarm, Kubernetes, and Mesos. It then discusses the need for resource-aware container placement and SLA-based monitoring to minimize container migration and ensure performance. The proposed architecture consists of different components like a request manager, information collector, policy manager, and resource manager to enable advanced scheduling and monitoring of containers on Kubernetes. The proposed solution aims to analyze future resource utilization to improve placement decisions and reduce issues after deployment.
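A resource-aware placement decision of the kind proposed can be sketched as a best-fit search over nodes' free CPU and memory. The node names, resource figures, and scoring rule below are assumptions for illustration, not the paper's actual algorithm.

```python
def place(container, nodes):
    """Best-fit resource-aware placement sketch: among nodes with enough
    free CPU and memory, choose the one that leaves the least spare
    capacity after hosting the container (to reduce fragmentation).
    Returns None if no node can host it."""
    cpu, mem = container
    feasible = {
        name: (free_cpu - cpu) + (free_mem - mem)
        for name, (free_cpu, free_mem) in nodes.items()
        if free_cpu >= cpu and free_mem >= mem
    }
    if not feasible:
        return None
    return min(feasible, key=feasible.get)

# Hypothetical free (CPU cores, memory GiB) per node.
nodes = {"node1": (4.0, 8.0), "node2": (2.0, 4.0), "node3": (8.0, 16.0)}
```

Scheduling with a forward-looking view of utilization, as the paper proposes, would replace the static `nodes` snapshot here with predicted future free capacity.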
Scale-out Storage on Intel® Architecture Based Platforms: Characterizing and ...Odinot Stanislas
Issue du salon orienté développeurs d'Intel (l'IDF) voici une présentation plutôt sympa sur le stockage dit "scale out" avec une présentation des différents fournisseurs de solutions (slide 6) comprenant ceux qui font du mode fichier, bloc et objet. Puis du benchmark sur certains d'entre eux dont Swift, Ceph et GlusterFS.
The document discusses scaling tier-based applications using Space Based Architecture (SBA). SBA uses a common data and processing grid to virtualize tiers, enabling applications to scale out processing across commodity hardware. This approach parallelizes transactions, reduces serialization overhead between tiers, and allows dynamic scalability through automated deployment of services on a grid. The session will provide examples of how financial and telecom applications achieve scalability using SBA.
[Hadoop Meetup] Apache Hadoop 3 community update - Rohith SharmaNewton Alex
Hadoop 3.0 will include several major new features and improvements, including HDFS erasure coding for improved storage efficiency, built-in support for long running services in YARN, and better resource isolation including Docker support. It also focuses on compatibility by preserving wire compatibility with Hadoop 2 clients and supporting rolling upgrades. Extensive testing is planned through alpha, beta, and GA releases to stabilize and validate the new features.
Hortonworks provides best practices for system testing Hadoop clusters. It recommends testing across different operating systems, configurations, workloads and hardware to mimic a production environment. The document outlines automating the testing process through continuous integration to test over 15,000 configurations. It provides guidance on test planning, including identifying requirements, selecting hardware and workloads to test upgrades, migrations and changes to security settings.
This document discusses client-side load balancing in a cloud computing environment. It describes how a client-side load balancer can distribute requests across backend web servers in a scalable way without requiring control of the infrastructure. The proposed architecture uses static anchor pages hosted on Amazon S3 that contain JavaScript code to select a web server based on its reported load. The JavaScript then proxies the request to that server and updates the page content. This approach achieves high scalability and adaptiveness without hardware load balancers or layer 2 optimizations.
HBaseCon 2015: Taming GC Pauses for Large Java Heap in HBaseHBaseCon
In this presentation, we will introduce Hotspot's Garbage First collector (G1GC) as the most suitable collector for latency-sensitive applications running with large memory environments. We will first discuss G1GC internal operations and tuning opportunities, and also cover tuning flags that set desired GC pause targets, change adaptive GC thresholds, and adjust GC activities at runtime. We will provide several HBase case studies using Java heaps as large as 100GB that show how to best tune applications to remove unpredicted, protracted GC pauses.
Database replication is used to provide database resiliency in Exchange 2013 by replicating the active Mailbox database to other Mailbox servers in a Database Availability Group (DAG). The DAG configuration includes adding servers as members and deciding which databases will replicate to which members, with one server having the active copy and others storing passive copies. If the active database fails, a passive copy will become active with minimal interruption to users. DAGs require components like clustering and use replication to continuously sync transaction logs between copies. Multiple DAG configurations can be used depending on the environment.
This document provides best practices for YARN administrators and application developers. For administrators, it discusses YARN configuration, enabling ResourceManager high availability, configuring schedulers like Capacity Scheduler and Fair Scheduler, sizing containers, configuring NodeManagers, log aggregation, and metrics. For application developers, it discusses whether to use an existing framework or develop a native application, understanding YARN components, writing the client, and writing the ApplicationMaster.
Big Lab Problems Solved with Spectrum Scale: Innovations for the Coral Programinside-BigData.com
In this video from the DDN User Group at SC16, Sven Oehme Chief Research Strategist, IBM, presents "Big Lab Problems Solved with Spectrum Scale: Innovations for the Coral Program."
Watch the video presentation: http://wp.me/p3RLHQ-g52
Sign up for our insideHPC Newsletter: http://wp.me/p3RLHQ-g52
SQL Server Reporting Services Disaster Recovery WebinarDenny Lee
This is the PASS DW/BI Webinar for SQL Server Reporting Services (SSRS) Disaster Recovery webinar. You can find the video at: http://www.youtube.com/watch?v=gfT9ETyLRlA
The document discusses load balancing and intelligent load balancing. It covers load balancing architecture, how the data collector and dynamic store work, and how performance counters are used. Intelligent load balancing techniques like load throttling are explained. Potential issues that could cause load imbalances like the "black hole effect" or failing to read performance counters are also reviewed. Troubleshooting techniques for resolving common problems are provided.
HBaseCon2017 Improving HBase availability in a multi tenant environmentHBaseCon
The document discusses improvements made by Hubspot's Big Data Team to increase the availability of HBase in a multi-tenant environment. It outlines reducing the cost of region server failures by improving mean time to recovery, addressing issues that slowed recovery, and optimizing the load balancer. It also details eliminating workload-driven failures through service limits and improving hardware monitoring to reduce impacts of failures. The changes resulted in 8-10x faster balancing, reduced recovery times from 90 to 30 seconds, and consistently achieving 99.99% availability across clusters.
Gluster is an open-source distributed scale-out storage system. It uses commodity hardware and has no centralized metadata server. Key concepts include bricks (storage units on servers), volumes (logical collections of bricks), and a trusted storage pool of nodes. Main volume types are distributed, replicated, distributed replicated, and striped. To set up Gluster, install packages, start services, create a storage pool, make volumes, and mount them on clients.
Load Balancing in Cloud Computing Environment: A Comparative Study of Service...Eswar Publications
Load balancing is a computer networking method to distribute workload across multiple computers or a computer cluster, network links, central processing units, disk drives, or other resources, to achieve optimal resource utilization, maximize throughput, minimize response time, and avoid overload. Using multiple components with load balancing, instead of a single component, may increase reliability through redundancy. The
load balancing service is usually provided by dedicated software or hardware, such as a multilayer switch or a Domain Name System server. In this paper, the existing static algorithms used for simple cloud load balancing have been identified and also a hybrid algorithm for developments in the future is suggested.
The document discusses planning a MapR cluster, including hardware requirements and recommendations, operating system requirements, node configuration, service layout, and high availability cluster design considerations. The objectives are to understand MapR requirements, recommended hardware configurations for 50TB and 100TB clusters, how MapR services are arranged, and important factors for HA cluster design.
NonStop Hadoop - Applying the Paxos Family of Protocols to make Critical Hadoo... (DataWorks Summit)
This document discusses using Paxos and the WANdisco DConE coordination engine to provide high availability for Hadoop services like HDFS and HBase. It provides background on WANdisco and describes how Paxos works to achieve consensus across replicated servers. The DConE innovations beyond Paxos allow for features like concurrent proposals, dynamic reconfiguration, and self-healing. The document then explains how DConE can be used to replicate the HDFS namespace across multiple consensus nodes and replicate HBase region servers. This provides active-active replication and eliminates single points of failure for critical Hadoop services.
In this session, you will learn the work Xiaomi has done to improve the availability and stability of our HBase clusters, including cross-site data and service backup and a coordinated compaction framework. You'll also learn about the Themis framework, which supports cross-row transactions on HBase based on Google's percolator algorithm, and its usage in Xiaomi's applications.
Performance of persistent apps on Container-Native Storage for Red Hat OpenSh... (Principled Technologies)
The document summarizes benchmark testing of Container-Native Storage (CNS) on Red Hat OpenShift Container Platform using two different storage media: solid-state drives (SSDs) and hard disk drives (HDDs). It tested CNS under IO-intensive and CPU-intensive workloads. For the IO-intensive workload, the SSD configuration significantly outperformed the HDD configuration, achieving over 5 times the maximum performance. However, for the CPU-intensive workload, the two configurations achieved similar maximum performance levels, with SSDs having less than a 5% improvement. The testing demonstrated that CNS can provide scalable storage and that storage performance depends on understanding how it matches the workload characteristics.
Hadoop Summit San Jose 2015: Towards SLA-based Scheduling on YARN Clusters (Sumeet Singh)
In this talk, we look at YARN scheduler choices available today for Apache Hadoop 2 and discuss their pros and cons. We dive deeper into the Capacity Scheduler, providing a comprehensive overview of its various settings with examples from real large-scale Hadoop clusters to promote a broader understanding of schedulers' current state and of today's best practices in queue nomenclature, planning, allocations, and ongoing management. We present detailed cluster, queue, and job behaviors from several different capacity management philosophies.
We then propose practical solutions, without any change to the scheduler or core Hadoop, that allow managing queue creation and capacity allocation while optimizing cluster utilization and maintaining SLA guarantees. A unified queue nomenclature and admission and capacity re-allocation policies across BUs, applications, and clusters make service automation possible. Transparency in resources consumed allows realistic SLA expectations to be defined. Finally, consistent application tagging completes the feedback loop, with SLAs observed through application-level reporting.
Advanced resource allocation and service level monitoring for container orche...Conference Papers
This document proposes an architecture for advanced resource allocation and service level monitoring for container orchestration platforms. It begins with background on containerization and different container orchestration platforms like Docker Swarm, Kubernetes, and Mesos. It then discusses the need for resource-aware container placement and SLA-based monitoring to minimize container migration and ensure performance. The proposed architecture consists of different components like a request manager, information collector, policy manager, and resource manager to enable advanced scheduling and monitoring of containers on Kubernetes. The proposed solution aims to analyze future resource utilization to improve placement decisions and reduce issues after deployment.
Score based deadline constrained workflow scheduling algorithm for cloud systems (ijccsa)
Cloud computing is an emerging trend in the information technology domain that offers utility-based IT services to users over the Internet. Workflow scheduling is one of the major problems in cloud systems: a good scheduling algorithm must minimize the execution time and cost of a workflow application while meeting the user's QoS requirements. In this paper we take deadline as the major constraint and propose a score-based deadline-constrained workflow scheduling algorithm that executes a workflow at manageable cost while meeting the user-defined deadline. The algorithm uses the concept of a score that represents the capabilities of hardware resources; this score value is used when allocating resources to the various tasks of the workflow application, so that the allocated resources are reliable, reduce execution cost, and complete the workflow within the user-specified deadline. The experimental results show that the score-based algorithm exhibits lower execution time and also reduces the failure rate of workflow applications at manageable cost. All simulations were done using the CloudSim toolkit.
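The score idea described above can be illustrated with a small sketch. This is not the paper's algorithm: the attribute names, the linear score formula, its weights, and the greedy allocation loop are all illustrative assumptions.

```python
# Hypothetical sketch of the "score" idea: each resource gets a score from
# its hardware attributes, and workflow tasks prefer high-score resources
# whose estimated finish time still meets the user-defined deadline.
# Attribute names, weights, and the greedy loop are illustrative only.

def score(resource, weights=(0.5, 0.3, 0.2)):
    """Combine normalized CPU, memory, and bandwidth into one score."""
    w_cpu, w_mem, w_bw = weights
    return (w_cpu * resource["cpu"] +
            w_mem * resource["mem"] +
            w_bw * resource["bw"])

def allocate(tasks, resources, deadline):
    """Greedily map each task to the highest-score resource that can
    finish it before the deadline; returns {task id: resource name}."""
    busy_until = {r["name"]: 0.0 for r in resources}
    plan = {}
    for task in tasks:  # each task carries a 'length' in abstract work units
        for r in sorted(resources, key=score, reverse=True):
            finish = busy_until[r["name"]] + task["length"] / r["cpu"]
            if finish <= deadline:
                busy_until[r["name"]] = finish
                plan[task["id"]] = r["name"]
                break
    return plan
```

Tasks that cannot meet the deadline on any resource are simply left unallocated in this sketch; a real scheduler would reject or re-plan them.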
Design of storage benchmark kit framework for supporting the file storage ret... (IJECEIAES)
The storage benchmark kit (SBK) is an open-source software framework for benchmarking storage system performance. SBK is designed to benchmark any storage client or device using any data type as a payload. It supports multiple concurrent readers and writers against a storage system handling large amounts of data, and allows end-to-end latency benchmarking across those writers and readers. SBK uses standardized performance measures for comparing and evaluating various storage systems and their combinations. The storage solutions supported by SBK include distributed file systems, distributed database systems, single-node or local databases, object storage systems, distributed streaming and messaging platforms, and key-value stores; for example, SBK supports performance benchmarking of XFS, Kafka streaming storage, and the Hadoop distributed file system (HDFS). The experimental results show that the proposed method achieves execution times of 65.530 s, 40.826 s and 30.351 s for 100k, 500k and 1000k files respectively, an improvement over existing methods such as the simple data interface and the distributed data protection system.
IRJET- Time and Resource Efficient Task Scheduling in Cloud Computing Environ... (IRJET Journal)
This document summarizes a research paper that proposes a Task Based Allocation (TBA) algorithm to efficiently schedule tasks in a cloud computing environment. The algorithm aims to minimize makespan (completion time of all tasks) and maximize resource utilization. It first generates an Expected Time to Complete (ETC) matrix that estimates the time each task will take on different virtual machines. It then sorts tasks by length and allocates each task to the VM that minimizes its completion time, updating the VM wait times. The algorithm is evaluated using CloudSim simulation and is shown to reduce makespan, execution time and costs compared to random and first-come, first-served scheduling approaches.
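The TBA steps as summarized above can be sketched in a few lines. This is an interpretation, not the paper's code: the sort direction (longest tasks first) and the speed-based ETC model are assumptions.

```python
# A minimal sketch of the TBA idea as summarized above: build an ETC
# (Expected Time to Complete) matrix, sort tasks by length, and assign
# each task to the VM that minimizes its completion time, updating VM
# wait times. Longest-first ordering and the length/speed ETC model
# are illustrative assumptions.

def tba_schedule(task_lengths, vm_speeds):
    """Return (assignment, makespan); assignment[i] is the VM index
    chosen for task i (indices refer to the original task order)."""
    etc = [[length / speed for speed in vm_speeds] for length in task_lengths]
    wait = [0.0] * len(vm_speeds)          # current wait time per VM
    assignment = [None] * len(task_lengths)
    # longest tasks first, so big jobs claim the fastest free VMs early
    order = sorted(range(len(task_lengths)),
                   key=lambda i: task_lengths[i], reverse=True)
    for i in order:
        # completion time on each VM = its current wait + ETC entry
        best_vm = min(range(len(vm_speeds)),
                      key=lambda j: wait[j] + etc[i][j])
        wait[best_vm] += etc[i][best_vm]
        assignment[i] = best_vm
    return assignment, max(wait)
```

The makespan is simply the largest accumulated wait time across VMs once all tasks are placed.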
Cloud service analysis using round-robin algorithm for quality-of-service awar... (IJECEIAES)
Round-robin (RR) is a resource-sharing approach in cloud computing in which each user gets a turn using the resources in an agreed order. It is suited to time-sharing systems since it reduces the problem of priority inversion, in which low-priority tasks are delayed. The time quantum is fixed, and each process runs for at most one quantum per turn in round-robin scheduling. The objective of this research is to improve the current RR method for scheduling actions in the cloud by lowering the average waiting, turnaround, and response times. The CloudAnalyst tool was used to enhance the RR technique by tuning parameter values for high accuracy and low cost. The results show overall minimum and maximum response times of 36.69 and 650.30 ms when running RR for 300 minutes. The cost of the virtual machines (VMs) ranges from $0.5 to $3; the longer the running time, the higher the data-transfer cost. This research is significant in improving communication and the quality of relationships within groups.
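The quantum-based turn-taking the abstract evaluates can be sketched as a small simulation that also computes the average waiting and turnaround times the study tries to minimize. For simplicity the sketch assumes every process arrives at time 0.

```python
# Round-robin with a fixed time quantum: processes take turns in order,
# running for at most one quantum before rejoining the back of the queue.
# Returns the average waiting and turnaround times. All processes are
# assumed to arrive at time 0 (an illustrative simplification).
from collections import deque

def round_robin(burst_times, quantum):
    remaining = list(burst_times)
    queue = deque(range(len(burst_times)))
    clock = 0
    finish = [0] * len(burst_times)
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)       # not done: back of the queue
        else:
            finish[i] = clock     # done: record completion time
    turnaround = finish                      # arrival = 0 for all
    waiting = [t - b for t, b in zip(turnaround, burst_times)]
    n = len(burst_times)
    return sum(waiting) / n, sum(turnaround) / n
```

With burst times [5, 3] and a quantum of 2, the processes interleave as P0, P1, P0, P1, P0, finishing at times 8 and 7.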
Scheduling of Heterogeneous Tasks in Cloud Computing using Multi Queue (MQ) A... (IRJET Journal)
This document proposes a Multi Queue (MQ) task scheduling algorithm for heterogeneous tasks in cloud computing. It aims to improve upon the Round Robin and Weighted Round Robin algorithms by overcoming their drawbacks. The MQ algorithm splits tasks and resources into separate queues based on size/length and speed. Small tasks are scheduled on slower resources and large tasks on faster resources. The document compares the performance of MQ to Round Robin and Weighted Round Robin algorithms based on makespan, average resource utilization, and load balancing level using CloudSim simulations. The results show that MQ scheduling performs better than the other algorithms in most cases in terms of these metrics.
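The queue-splitting rule described above can be sketched as follows. The median split and the per-queue round-robin dispatch are illustrative choices, not details taken from the paper.

```python
# Sketch of the MQ idea: tasks and resources are split into queues by
# length and speed, then small tasks go to slower resources and large
# tasks to faster ones. Median-based splitting and round-robin dispatch
# within each pool are assumptions for illustration.
import statistics

def mq_schedule(tasks, speeds):
    """tasks: list of task lengths; speeds: list of resource speeds.
    Returns a list mapping each task index to a resource index."""
    t_med = statistics.median(tasks)
    s_med = statistics.median(speeds)
    slow = [j for j, s in enumerate(speeds) if s <= s_med]
    fast = [j for j, s in enumerate(speeds) if s > s_med] or slow
    mapping = [None] * len(tasks)
    small_i = large_i = 0
    for i, length in enumerate(tasks):
        if length <= t_med:                   # small task -> slow pool
            mapping[i] = slow[small_i % len(slow)]
            small_i += 1
        else:                                 # large task -> fast pool
            mapping[i] = fast[large_i % len(fast)]
            large_i += 1
    return mapping
```

Keeping long tasks off the slow resources is what lets this scheme improve makespan over plain round robin, which ignores both task size and resource speed.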
A Novel Dynamic Priority Based Job Scheduling Approach for Cloud Environment (IRJET Journal)
The document proposes a new dynamic priority-based job scheduling algorithm for cloud environments to optimize the problem of starvation. It assigns priority to jobs based on criteria like CPU requirements, I/O requirements, and job criticality. The algorithm aims to reduce wait time, turnaround time, and increase throughput and CPU utilization. It was tested against the Shortest Job First algorithm in CloudSim simulation software. The results showed improvements in wait time, turnaround time, and total finish time compared to the SJF algorithm.
This document summarizes a research paper on developing an efficient and dynamic resource allocation mechanism for cloud infrastructure services based on genetic algorithms. The mechanism aims to reduce energy utilization and latency by exactly matching resource requirements to virtual machine capacities while tolerating variations in available infrastructure and workload requirements. It proposes classifying workloads and machines based on their heterogeneities and allocating tasks in a way that diversifies machine usage to reduce risks from potential attackers. The genetic algorithm-based approach is compared to other scheduling methods and experimental results demonstrate its effectiveness in lowering power consumption and delay. Future work could account for machines with capacities exceeding available resources and optimize allocation based on predicted capacities.
IRJET - Efficient Load Balancing in a Distributed Environment (IRJET Journal)
This document discusses load balancing algorithms for distributed computing environments. It begins by defining load balancing and describing its importance in distributed systems for optimizing resource utilization and system performance. Several static and dynamic load balancing algorithms are then summarized, including round robin, random, min-min, and max-min algorithms. The document also outlines key issues in load balancing, advantages, metrics for evaluating algorithms, and provides more detailed descriptions of 13 load balancing algorithms.
IRJET- Cloud Cost Analyzer and Optimizer (IRJET Journal)
This document proposes a system to monitor virtual machines (VMs or EC2 instances) on private clouds like Amazon or Google and provide solutions to reduce infrastructure costs from the customer's perspective. The system would monitor EC2 VM usage, performance metrics, and the customer's current cloud cost plan. It aims to optimize resource usage and save costs by proposing reductions to resources or cost plans. The system is designed to build a test bed using an Amazon account to connect to a user's resources and fetch performance data like RAM, CPU usage. It would then calculate pricing for storage, CPU usage, requests and other metrics to estimate overall setup costs and find opportunities for cost optimization.
This document proposes a new task scheduling algorithm called Dynamic Heterogeneous Shortest Job First (DHSJF) for heterogeneous cloud computing systems. DHSJF aims to improve performance metrics like reduced makespan and low energy consumption by considering the heterogeneity of resources and workloads. It discusses existing scheduling algorithms like Round Robin, First Come First Serve and their limitations. The proposed DHSJF algorithm prioritizes tasks with the shortest estimated completion time to optimize resource utilization and improve overall performance of the cloud computing system. Simulation results show that DHSJF provides better results for metrics like average waiting time and turnaround time as compared to Round Robin and First Come First Serve scheduling algorithms.
The document discusses the Open Grid Services Architecture (OGSA). It provides definitions and explanations of key concepts in OGSA including:
- OGSA defines standard protocols and formats to build large-scale, interoperable grid systems based on services.
- The Open Grid Services Infrastructure (OGSI) provides a specification for implementing grid services as stateful web services.
- Some major goals of OGSA are identifying use cases, core platform components, and defining models and profiles for interoperable solutions.
- Security is a key challenge in grid environments due to the need for integration with existing systems, interoperability across different hosting environments, and managing dynamic trust relationships.
A Host Selection Algorithm for Dynamic Container Consolidation in Cloud Data ... (IRJET Journal)
This document proposes a novel host selection algorithm called Energy-Efficient Particle Swarm Optimization (EE-PSO) for dynamic container consolidation in cloud data centers. The goal of the algorithm is to reduce energy consumption while maintaining quality of service levels. It was tested using the ContainerCloudSim toolkit on real-world workloads and was found to outperform existing algorithms in terms of energy savings, quality of service guarantees, number of new virtual machines created, and number of container migrations.
IRJET- Advance Approach for Load Balancing in Cloud Computing using (HMSO) Hy... (IRJET Journal)
This document proposes a new hybrid multi-swarm optimization (HMSO) algorithm for load balancing in cloud computing. It aims to minimize response time and costs while improving resource utilization and customer satisfaction. The HMSO algorithm uses multi-level particle swarm optimization to find an optimal resource allocation solution. Simulation results show that the proposed HMSO technique reduces response time and datacenter costs compared to other algorithms. It also achieves a more balanced load distribution across resources.
This document summarizes a dissertation on an improved load balancing technique for secure data in cloud computing. The dissertation discusses research issues in load balancing and data security in cloud computing. It proposes a load balancing methodology that uses a load balancer, Kerberos authentication, and Nginx load balancing algorithms like round robin and least connections to securely store and balance load of encrypted data across multiple cloud nodes. The methodology is implemented using tools like HP LoadRunner, Amazon Web Services, and Jelastic cloud platform. Performance is analyzed in terms of transaction time. The proposed technique aims to improve resource utilization, access control, data security, and efficiency in cloud environments.
This document summarizes a research paper that proposes a load balancing algorithm for cloud computing using process migration. The algorithm aims to improve resource utilization by transferring processes from heavily loaded virtual machines to lightly loaded or idle ones. It describes related work on existing load balancing approaches and process migration. The proposed mechanism designates a server virtual machine to monitor member virtual machines' workloads and a balancer virtual machine to determine overloaded and underloaded members and migrate processes between them using a VM process migrator module. This helps balance loads across virtual machines to avoid overloading and improve overall resource efficiency.
Energy-Efficient Task Scheduling in Cloud Environment (IRJET Journal)
1. The document discusses developing an energy-efficient task scheduling approach for cloud data centers using deep reinforcement learning.
2. It aims to minimize computational costs and cooling costs by optimizing task assignment to servers based on factors like temperature, CPU, and memory.
3. The proposed approach uses a greedy algorithm to schedule tasks to servers maintaining the lowest temperature, thus reducing energy consumption and improving data center performance.
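The greedy rule in point 3 can be sketched as: place each incoming task on the server currently reporting the lowest temperature among those with enough free capacity. The heat increment per task is a stand-in for a real thermal model, and the field names are illustrative.

```python
# Greedy thermal-aware placement: each task goes to the coolest server
# that has enough free CPU and memory. The fixed temperature bump per
# scheduled task is an assumed stand-in for a real thermal model.

def schedule(tasks, servers):
    """tasks: list of (cpu, mem) demands; servers: list of dicts with
    'temp', 'cpu', 'mem'. Returns a server index per task, or None
    when no server can host the task."""
    placement = []
    for cpu, mem in tasks:
        candidates = [i for i, s in enumerate(servers)
                      if s["cpu"] >= cpu and s["mem"] >= mem]
        if not candidates:
            placement.append(None)
            continue
        coolest = min(candidates, key=lambda i: servers[i]["temp"])
        servers[coolest]["cpu"] -= cpu
        servers[coolest]["mem"] -= mem
        servers[coolest]["temp"] += 0.5 * cpu   # assumed heat increment
        placement.append(coolest)
    return placement
```

Steering load toward the coolest servers keeps peak temperatures down, which is what reduces the cooling cost mentioned in point 2.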
This document proposes a cloud infrastructure that combines on-demand allocation of resources with opportunistic provisioning of idle cloud nodes. It uses Hadoop configuration with MapReduce algorithms to split large files into smaller parts and distribute the work across nodes to improve CPU and storage utilization. Encryption is also used to securely transmit data and address security challenges. The system aims to make better use of idle resources, process large datasets faster, and enhance security in cloud computing environments.
Similar to A Review of Storage Specific Solutions for Providing Quality of Service in Storage Area Networks
Text Mining in Digital Libraries using OKAPI BM25 Model (Editor IJCATR)
The emergence of the internet has made vast amounts of information available and easily accessible online. As a result, most libraries have digitized their content in order to remain relevant to their users and to keep pace with the advancement of the internet. However, these digital libraries have been criticized for using inefficient information retrieval models that do not perform relevance ranking on the retrieved results. This paper proposes the use of the Okapi BM25 model in text mining as a means of improving relevance ranking in digital libraries. Okapi BM25 was selected because it is a probability-based relevance ranking algorithm. A case study was conducted and the model design was based on information retrieval processes. The performance of the Boolean, vector space, and Okapi BM25 models was compared for data retrieval, with relevant ranked documents retrieved and displayed on the OPAC framework search page. The results revealed that Okapi BM25 outperformed the Boolean and vector space models. This paper therefore proposes using Okapi BM25 to reward terms according to their relative frequencies in a document, so as to improve the performance of text mining in digital libraries.
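The term-frequency reward the abstract describes is the heart of BM25, and a compact scorer makes it concrete. This is the standard formulation with the usual k1 and b defaults, not code from the paper.

```python
# A compact BM25 scorer: terms are rewarded by their in-document
# frequency, damped by k1 and normalized by document length via b.
# k1=1.2 and b=0.75 are conventional defaults, assumed here.
import math

def bm25_scores(query_terms, docs, k1=1.2, b=0.75):
    """docs: list of token lists. Returns one relevance score per doc."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    scores = []
    for doc in docs:
        score = 0.0
        for term in query_terms:
            n = sum(1 for d in docs if term in d)     # docs containing term
            idf = math.log((N - n + 0.5) / (n + 0.5) + 1)
            tf = doc.count(term)
            denom = tf + k1 * (1 - b + b * len(doc) / avgdl)
            score += idf * tf * (k1 + 1) / denom
        scores.append(score)
    return scores
```

Because tf appears in both numerator and denominator, repeated occurrences of a query term raise the score with diminishing returns, which is the "reward terms by relative frequency" behavior the paper exploits.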
Green Computing, eco trends, climate change, e-waste and eco-friendly (Editor IJCATR)
This document discusses green computing practices and sustainable IT services. It provides an overview of factors driving adoption of green computing to reduce costs and environmental impact of data centers, such as rising energy costs and density. Green strategies discussed include improving infrastructure efficiency, power management, thermal management, efficient product design, and virtualization to optimize resource utilization. The document examines how green computing aims to lower costs and environmental footprint, and how sustainable IT services take a broader approach considering economic, environmental and social impacts.
Policies for Green Computing and E-Waste in Nigeria (Editor IJCATR)
Computers today are an integral part of individuals’ lives all around the world, but unfortunately these devices are toxic to the environment given the materials used, their limited battery life and technological obsolescence. Individuals are concerned about the hazardous materials ever present in computers, even if the importance of various attributes differs, and that a more environment -friendly attitude can be obtained through exposure to educational materials. In this paper, we aim to delineate the problem of e-waste in Nigeria and highlight a series of measures and the advantage they herald for our country and propose a series of action steps to develop in these areas further. It is possible for Nigeria to have an immediate economic stimulus and job creation while moving quickly to abide by the requirements of climate change legislation and energy efficiency directives. The costs of implementing energy efficiency and renewable energy measures are minimal as they are not cash expenditures but rather investments paid back by future, continuous energy savings.
Performance Evaluation of VANETs for Evaluating Node Stability in Dynamic Sce... (Editor IJCATR)
Vehicular ad hoc networks (VANETs) are a promising area of research that enables interconnection between moving vehicles, and between vehicles and roadside units (RSUs). In VANETs, mobile vehicles can be organized into clusters to support communication links, and the cluster arrangement, in terms of size and geographical extent, has a serious influence on communication quality. VANETs are a subclass of mobile ad hoc networks with more complex mobility patterns; because of this mobility, the topology changes very frequently, raising a number of technical challenges including network stability. There is therefore a need for cluster configurations that lead to a more stable, realistic network. The paper investigates various simulation scenarios in which clusters are generated using the k-means algorithm and their number is varied to find the most stable configuration in a realistic road scenario.
Optimum Location of DG Units Considering Operation Conditions (Editor IJCATR)
The optimal sizing and placement of Distributed Generation units (DG) are becoming very attractive to researchers these days. In this paper a two stage approach has been used for allocation and sizing of DGs in distribution system with time varying load model. The strategic placement of DGs can help in reducing energy losses and improving voltage profile. The proposed work discusses time varying loads that can be useful for selecting the location and optimizing DG operation. The method has the potential to be used for integrating the available DGs by identifying the best locations in a power system. The proposed method has been demonstrated on 9-bus test system.
Analysis of Comparison of Fuzzy KNN, C4.5 Algorithm, and Naïve Bayes Classifi... (Editor IJCATR)
Early detection of diabetes mellitus (DM) can prevent or inhibit complications. Several laboratory tests must be done to detect DM, and the results of these tests are then converted into training data. The training data used in this study were generated from the UCI Pima database, with 6 attributes used to classify diabetes as positive or negative. Of the various classification methods commonly used, three were compared in this study on one identical case: fuzzy KNN, the C4.5 algorithm, and the Naïve Bayes Classifier (NBC). The objective was to create software to classify DM using the tested methods and to compare the three based on accuracy, precision, and recall. The results showed that the best method was fuzzy KNN, with average and maximum accuracy of 96% and 98%, respectively. In second place, NBC had average and maximum accuracy of 87.5% and 90%; lastly, the C4.5 algorithm had average and maximum accuracy of 79.5% and 86%.
Web Scraping for Estimating new Record from Source Site (Editor IJCATR)
Research in the field of competitive intelligence and research in the field of web scraping have a mutualistic, symbiotic relationship. In today's information age, websites serve as a main data source. This research focuses on how to extract data from websites and how to slow down the download intensity. One problem is that source websites are autonomous, so the structure of their content can change at any time; another is that the Snort intrusion detection system installed on the server detects crawler bots. The researchers therefore propose using the Mining Data Records (MDR) method together with exponential smoothing, so that the crawler adapts to changes in content structure and fetches automatically following the pattern of news occurrences. In the tests, with a threshold of 0.3 for MDR and a similarity threshold score of 0.65 for STM, recall and precision values produce an average f-measure of 92.6%. Exponential smoothing with α = 0.5 produces an MAE of 18.2 duplicate data records, and slows fetching from 21.8 to 3.6 data records relative to a fixed download schedule over the average time between news occurrences.
Evaluating Semantic Similarity between Biomedical Concepts/Classes through S... (Editor IJCATR)
Most of the existing semantic similarity measures that use ontology structure as their primary source can measure semantic similarity between concepts/classes using single ontology. The ontology-based semantic similarity techniques such as structure-based semantic similarity techniques (Path Length Measure, Wu and Palmer’s Measure, and Leacock and Chodorow’s measure), information content-based similarity techniques (Resnik’s measure, Lin’s measure), and biomedical domain ontology techniques (Al-Mubaid and Nguyen’s measure (SimDist)) were evaluated relative to human experts’ ratings, and compared on sets of concepts using the ICD-10 “V1.0” terminology within the UMLS. The experimental results validate the efficiency of the SemDist technique in single ontology, and demonstrate that SemDist semantic similarity techniques, compared with the existing techniques, gives the best overall results of correlation with experts’ ratings.
Semantic Similarity Measures between Terms in the Biomedical Domain within f... (Editor IJCATR)
Techniques and tests are the tools used to measure the goodness of an ontology or its resources. Measuring the similarity between biomedical classes/concepts is an important task for biomedical information extraction and knowledge discovery, and most semantic similarity techniques can be adapted for use in the biomedical domain (UMLS). Many experiments have been conducted to check the applicability of these measures. In this paper, we measure semantic similarity between two terms within a single ontology or across multiple ontologies in ICD-10 “V1.0” as the primary source, and compare the results to human experts' scores using the correlation coefficient.
A Strategy for Improving the Performance of Small Files in Openstack Swift (Editor IJCATR)
Adding an aggregate storage module is an effective way to improve the storage access performance of small files in OpenStack Swift. Because Swift incurs excessive disk operations when querying metadata, transfer performance for large numbers of small files is low. In this paper, we propose an aggregated storage strategy (ASS) and implement it in Swift. ASS comprises two parts: merge storage and index storage. In the first stage, ASS arranges the write request queue in chronological order and stores objects in volumes; these volumes are the large files actually stored in Swift. In the second stage, the object-to-volume mapping information is stored in a key-value store. The experimental results show that ASS can effectively improve Swift's small-file transfer performance.
Integrated System for Vehicle Clearance and Registration (Editor IJCATR)
Efficient management and control of government's cash resources rely on government banking arrangements. Nigeria, like many low income countries, employed fragmented systems in handling government receipts and payments. Later in 2016, Nigeria implemented a unified structure as recommended by the IMF, where all government funds are collected in one account would reduce borrowing costs, extend credit and improve government's fiscal policy among other benefits to government. This situation motivated us to embark on this research to design and implement an integrated system for vehicle clearance and registration. This system complies with the new Treasury Single Account policy to enable proper interaction and collaboration among five different level agencies (NCS, FRSC, SBIR, VIO and NPF) saddled with vehicular administration and activities in Nigeria. Since the system is web based, Object Oriented Hypermedia Design Methodology (OOHDM) is used. Tools such as Php, JavaScript, css, html, AJAX and other web development technologies were used. The result is a web based system that gives proper information about a vehicle starting from the exact date of importation to registration and renewal of licensing. Vehicle owner information, custom duty information, plate number registration details, etc. will also be efficiently retrieved from the system by any of the agencies without contacting the other agency at any point in time. Also number plate will no longer be the only means of vehicle identification as it is presently the case in Nigeria, because the unified system will automatically generate and assigned a Unique Vehicle Identification Pin Number (UVIPN) on payment of duty in the system to the vehicle and the UVIPN will be linked to the various agencies in the management information system.
Assessment of the Efficiency of Customer Order Management System: A Case Stu... (Editor IJCATR)
The Supermarket Management System deals with the automation of buying and selling goods and services, covering both the sale and purchase of items. The Supermarket Management System project is developed with the objective of making the system reliable, easier, faster, and more informative.
Energy-Aware Routing in Wireless Sensor Network Using Modified Bi-Directional A* (Editor IJCATR)
Energy is a key component of a wireless sensor network (WSN) [1]: without adequate power, the system cannot perform its function, and limited energy is a defining characteristic of wireless sensor networks [2]. Much research has been done on strategies to overcome this problem, one of which is clustering. A popular clustering technique is Low Energy Adaptive Clustering Hierarchy (LEACH) [3], in which clustering determines a Cluster Head (CH) that is then assigned to forward packets to the Base Station (BS). In this research, we propose another clustering technique that applies the social network analysis measure of Betweenness Centrality (BC) in the setup phase, while in the steady-state phase a heuristic search algorithm, Modified Bi-Directional A* (MBDA*), is implemented. The experiment statically deploys 100 nodes in a 100x100 area with one Base Station at coordinates (50,50); to assess the reliability of the system, the experiment runs for 5000 rounds. The performance of the designed routing protocol is evaluated on network lifetime, throughput, and residual energy. The results show that BC-MBDA* outperforms LEACH. This is due to LEACH's dynamic CH selection, which changes on every data transmission and therefore consumes energy recomputing the CH each time; in contrast, BC-MBDA* determines the CH statically, which decreases energy usage.
Security in Software Defined Networks (SDN): Challenges and Research Opportun...Editor IJCATR
In networks, the rapidly changing traffic patterns of search engines, Internet of Things (IoT) devices, Big Data and data centers have thrown up new challenges for legacy networks and prompted the need for a more intelligent and innovative way to dynamically manage traffic and allocate limited network resources. Software Defined Networking (SDN), which decouples the control plane from the data plane through network virtualization, aims to address these challenges. This paper explores the SDN architecture and its implementation with the OpenFlow protocol. It also assesses some of SDN's benefits over traditional network architectures, its security concerns and how they can be addressed in future research, and related works in emerging economies such as Nigeria.
Measure the Similarity of Complaint Document Using Cosine Similarity Based on...Editor IJCATR
Report handling on the "LAPOR!" (Laporan, Aspirasi dan Pengaduan Online Rakyat) system depends on the system administrator, who manually reads every incoming report [3]. Manual reading can lead to errors in handling complaints [4]; when the data flow is huge and grows rapidly, it takes at least three days to prepare a confirmation, and the process is sensitive to inconsistencies [3]. In this study, the authors propose a model that measures the similarity of a Query (incoming report) to a Document (archive). The authors employed a Class-Based Indexing term weighting scheme and Cosine Similarity to analyse document similarities. The CoSimTFIDF, CoSimTFICF and CoSimTFIDFICF values are used as classification features for a K-Nearest Neighbour (K-NN) classifier. The optimum evaluation result uses a 75% training / 25% test data ratio with the CoSimTFIDF feature, delivering a high accuracy of 84%; with k = 5, the accuracy reaches 84.12%.
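The cosine-similarity features described above can be sketched with plain TF-IDF weights (an illustrative sketch; the paper's Class-Based Indexing variants CoSimTFICF and CoSimTFIDFICF use different weightings not reproduced here):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build simple TF-IDF vectors for a list of tokenized documents."""
    n = len(docs)
    # Document frequency: in how many documents each term appears
    df = Counter(term for doc in docs for term in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

def cosine_similarity(a, b):
    """Cosine of the angle between two sparse term-weight vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

A query vector built the same way can then be compared against every archived complaint, and the top-k scores feed the K-NN classifier.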
Hangul Recognition Using Support Vector MachineEditor IJCATR
The recognition of Hangul images is more difficult than that of Latin script because of the structural arrangement: Hangul is arranged in two dimensions, while Latin runs only from left to right. The current research creates a system to convert Hangul images into Latin text in order to use it as learning material for reading Hangul. In general, the image recognition system is divided into three steps. The first step is preprocessing, which includes binarization, segmentation through the connected-component labeling method, and thinning with Zhang-Suen to reduce pattern information. The second is extracting the feature from every single image, whose identification is done through the chain code method. The third is the recognition process, using a Support Vector Machine (SVM) with several kernels. It works on letter images and Hangul word recognition. There are 34 letters, each of which has 15 different patterns, giving 510 patterns in total, divided into 3 data scenarios. The highest result achieved is 94.7%, using the SVM polynomial and radial basis function kernels. The recognition rate is influenced by the amount of training data. The Hangul word recognition process applies to type 2 Hangul words with 6 different patterns, where the difference between patterns comes from the change of font type. The fonts chosen for training data are Batang, Dotum, Gaeul, Gulim and Malgun Gothic; Arial Unicode MS is used for the test data. The lowest accuracy, 69%, is achieved with the SVM radial basis function kernel, while the SVM linear and polynomial kernels both give 72%.
Application of 3D Printing in EducationEditor IJCATR
This paper provides a review of literature concerning the application of 3D printing in the education system. The review identifies that 3D Printing is being applied across the Educational levels [1] as well as in Libraries, Laboratories, and Distance education systems. The review also finds that 3D Printing is being used to teach both students and trainers about 3D Printing and to develop 3D Printing skills.
Survey on Energy-Efficient Routing Algorithms for Underwater Wireless Sensor ...Editor IJCATR
In the underwater environment, the routing mechanism is used for retrieval of information. In the routing mechanism, three to four types of nodes are used: the sink node, which is deployed on the water surface and collects the information; courier/super/AUV (or dolphin) powerful nodes, which are deployed in the middle of the water for forwarding packets; ordinary nodes, which are also forwarder nodes and can be deployed from the bottom to the surface of the water; and source nodes, which are deployed at the seabed and extract valuable information from the bottom of the sea. In the underwater environment the battery power of the nodes is limited, and that power can be conserved through better selection of the routing algorithm. This paper focuses on energy-efficient routing algorithms and their routing mechanisms for prolonging the battery power of the nodes. It also presents a performance analysis of the energy-efficient algorithms, through which we can examine which route selection mechanism performs better and can prolong the battery power of the nodes.
Comparative analysis on Void Node Removal Routing algorithms for Underwater W...Editor IJCATR
The design of routing algorithms faces many challenges in the underwater environment, such as propagation delay, acoustic channel behaviour, limited bandwidth, high bit error rate, limited battery power, underwater pressure, node mobility, 3D localization and deployment, and underwater obstacles (voids). This paper focuses on underwater voids, which affect the overall performance of the entire network. Most researchers have used alternate-path selection mechanisms to remove voids, but this line of research still needs improvement. This paper also covers the architecture and operation, with the merits and demerits, of the existing algorithms, and further presents an analytical performance analysis of the existing algorithms, through which we identify the better approach for removal of voids.
Decay Property for Solutions to Plate Type Equations with Variable CoefficientsEditor IJCATR
In this paper we consider the initial value problem for a plate type equation with variable coefficients and memory in R^n (n >= 1), which is of the regularity-loss property. By using spectral resolution, we study the pointwise estimates in the spectral space of the fundamental solution to the corresponding linear problem. Appealing to these pointwise estimates, we obtain the global existence and the decay estimates of solutions to the semilinear problem by employing the fixed point theorem.
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an...Jason Yip
The typical problem in product engineering is not bad strategy, so much as “no strategy”. This leads to confusion, lack of motivation, and incoherent action. The next time you look for a strategy and find an empty space, instead of waiting for it to be filled, I will show you how to fill it in yourself. If you’re wrong, it forces a correction. If you’re right, it helps create focus. I’ll share how I’ve approached this in the past, both what works and lessons for what didn’t work so well.
"Scaling RAG Applications to serve millions of users", Kevin GoedeckeFwdays
How we managed to grow and scale a RAG application from zero to thousands of users in 7 months. Lessons from technical challenges around managing high load for LLMs, RAGs and Vector databases.
The Microsoft 365 Migration Tutorial For Beginner.pptxoperationspcvita
This presentation will help you understand the power of Microsoft 365. We have covered every productivity app included in Office 365, suggested common Office 365 migration scenarios, and explained how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
Dandelion Hashtable: beyond billion requests per second on a commodity serverAntonios Katsarakis
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite efforts to optimize hashtables, which go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state of the art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open-addressing and adopts a fully-featured and memory-aware closed-addressing design based on bounded cache-line-chaining. This design (1) offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. On a commodity server and a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
What is an RPA CoE? Session 2 – CoE RolesDianaGray10
In this session, we will review the players involved in the CoE and how each role impacts opportunities.
Topics covered:
• What roles are essential?
• What place in the automation journey does each role play?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
Discover top-tier mobile app development services, offering innovative solutions for iOS and Android. Enhance your business with custom, user-friendly mobile applications.
Freshworks Rethinks NoSQL for Rapid Scaling & Cost-EfficiencyScyllaDB
Freshworks creates AI-boosted business software that helps employees work more efficiently and effectively. Managing data across multiple RDBMS and NoSQL databases was already a challenge at their current scale. To prepare for 10X growth, they knew it was time to rethink their database strategy. Learn how they architected a solution that would simplify scaling while keeping costs under control.
"What does it really mean for your system to be available, or how to define w...Fwdays
We will talk about system monitoring from a few different angles. We will start by covering the basics, then discuss SLOs, how to define them, and why understanding the business well is crucial for success in this exercise.
"NATO Hackathon Winner: AI-Powered Drug Search", Taras KlobaFwdays
This is a session that details how PostgreSQL's features and Azure AI Services can be effectively used to significantly enhance the search functionality in any application.
In this session, we'll share insights on how we used PostgreSQL to facilitate precise searches across multiple fields in our mobile application. The techniques include using LIKE and ILIKE operators and integrating a trigram-based search to handle potential misspellings, thereby increasing the search accuracy.
We'll also discuss how the azure_ai extension on PostgreSQL databases in Azure and Azure AI Services were utilized to create vectors from user input, a feature beneficial when users wish to find specific items based on text prompts. While our application's case study involves a drug search, the techniques and principles shared in this session can be adapted to improve search functionality in a wide range of applications. Join us to learn how PostgreSQL and Azure AI can be harnessed to enhance your application's search capability.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
QA or the Highway - Component Testing: Bridging the gap between frontend appl...zjhamm304
These are the slides for the presentation, "Component Testing: Bridging the gap between frontend applications" that was presented at QA or the Highway 2024 in Columbus, OH by Zachary Hamm.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
How information systems are built or acquired puts information, which is what they should be about, in a secondary place. Our language adapted accordingly, and we no longer talk about information systems but applications. Applications evolved in a way to break data into diverse fragments, tightly coupled with applications and expensive to integrate. The result is technical debt, which is re-paid by taking even bigger "loans", resulting in an ever-increasing technical debt. Software engineering and procurement practices work in sync with market forces to maintain this trend. This talk demonstrates how natural this situation is. The question is: can something be done to reverse the trend?
From Natural Language to Structured Solr Queries using LLMsSease
This talk draws on experimentation to enable AI applications with Solr. One important use case is to use AI for better accessibility and discoverability of the data: while User eXperience techniques, lexical search improvements, and data harmonization can take organizations to a good level of accessibility, a structural (or “cognitive” gap) remains between the data user needs and the data producer constraints.
That is where AI – and most importantly, Natural Language Processing and Large Language Model techniques – could make a difference. This natural language, conversational engine could facilitate access and usage of the data leveraging the semantics of any data source.
The objective of the presentation is to propose a technical approach and a way forward to achieve this goal.
The key concept is to enable users to express their search queries in natural language, which the LLM then enriches, interprets, and translates into structured queries based on the Solr index’s metadata.
This approach leverages the LLM’s ability to understand the nuances of natural language and the structure of documents within Apache Solr.
The LLM acts as an intermediary agent, offering a transparent experience to users automatically and potentially uncovering relevant documents that conventional search methods might overlook. The presentation will include the results of this experimental work, lessons learned, best practices, and the scope of future work that should improve the approach and make it production-ready.
"Choosing proper type of scaling", Olena SyrotaFwdays
Imagine an IoT processing system that is already quite mature and production-ready and for which client coverage is growing and scaling and performance aspects are life and death questions. The system has Redis, MongoDB, and stream processing based on ksqldb. In this talk, firstly, we will analyze scaling approaches and then select the proper ones for our system.
A Review of Storage Specific Solutions for Providing Quality of Service in Storage Area Networks
International Journal of Computer Applications Technology and Research
Volume 5, Issue 6, 364–367, 2016, ISSN: 2319–8656
www.ijcat.com
A Review of Storage Specific Solutions for Providing
Quality of Service in Storage Area Networks
Joseph Kithinji
Department of Computer Science and Information Technology,
School of Information Technology and Engineering,
Meru University of Science and Technology,
Meru, Kenya
Abstract: Predictable storage performance is a vital requirement for assuring the performance of the applications that use it, and it is the systems administrator's job to ensure that storage performance meets the requirements of the applications. Most storage solutions are able to virtualize the amount of storage presented to the host in a flexible way, but the same storage devices have no QoS features. Storage level agreements provided by storage devices do not provide predictability in service delivery due to the absence of prioritization (QoS) mechanisms in storage devices. This paper reviews some of the storage-specific solutions developed to implement quality of service in storage area networks.
Keywords: Starvation, latency, burst handling, quanta, performance isolation.
1. INTRODUCTION
Storage area networks play a key role in business continuity, enterprise-wide storage consolidation and disaster recovery strategies, in which storage resources are most often distributed over many distant data centers [10]. Future storage systems are required to scale to large sizes due to the amount of information that is being generated. In a SAN, large numbers of magnetic disks are attached to a network through custom storage controllers or general-purpose PCs and provide storage to application servers [6].
In a SAN, a single host's requests may flood the resources of a storage pool, causing poor performance for all hosts utilizing that particular pool [5]. Hence, the performance of a given host utilizing a shared pool resource is unpredictable by the nature of resource sharing. To address this problem, a mechanism for providing QoS based on some policy is required. Storage service level agreements are intended to provide predictability in service delivery, but they are not effective due to the absence of QoS mechanisms in storage devices [5].
QoS is essential in a mixed environment where various users with different levels of priorities and preferences access the storage systems simultaneously. For example, in an enterprise network, web hosting, data analysis and data editing may be running at the same time [10]. Providing QoS in SANs has been a challenge, which has led to the design of many approaches.
2. STORAGE SPECIFIC QUALITY OF SERVICE SOLUTIONS
2.1 Stonehenge
In a mixed environment with limited resources, it is important to ensure that critical tasks get satisfactory performance. Stonehenge [8] was developed to solve the issues of storage scalability, manageability and quality of service. Stonehenge is built on IP networks, IDE hard drives, IDE controllers and off-the-shelf low-end personal computers. To implement QoS, Stonehenge dedicates a set of storage servers to manage the disk arrays and a single personal computer to perform the controlling functions, such as storage reservation and run-time management [7].
2.2 pClock
Gulati et al. [1] developed pClock based on arrival curves that capture the bandwidth and burst requirements of applications. When implemented, pClock showed efficient performance isolation and burst handling. It also showed an ability to allocate spare capacity to speed up selected applications. When a request arrives, the pClock algorithm performs three functions: updating the number of tokens, checking and adjusting tags, and computing the tags. The update-tokens function updates the arrival upper bound function for the present arrival time, the check-and-adjust-tags function resynchronizes flows to avoid starvation, and the compute-tags function assigns start and finish tags. The pClock algorithm allows multiple workloads to share storage, with each workload receiving the level of service it requires. pClock allows each workload to specify its throughput, burst size and desired latency [1] [8].
The pClock algorithm is as follows:

Request arrival:
    Let t be the arrival time of request r from flow fi;
    UpdateNumTokens();
    CheckAndAdjustTags();
    ComputeTags();

Request scheduling:
    Choose the request with the minimum finish tag Fwk and dispatch it to the server;
    Let the chosen request be from flow fk with start tag Swk;
    MinS = Swk; [5]
2.2.1 updateTokens
In order to assign tags, this function brings the arrival upper bound function Uia() up to the current time t. It maintains a variable numtokens for each flow fi. The difference Uia(t) - Ri(0, t) is the difference between the AUB at time t and the cumulative number of arrivals up to that time. The value obtained indicates the number of requests that fi can make at t without violating the arrival constraints. Hence, when Uia(t) - Ri(0, t) < 1, a well-behaved flow cannot make any request at t [9].
2.2.2 computeTags
This function assigns start and finish tags (Sir and Fir) to the request r from fi arriving at time t. The value assigned to the start tag Sir depends on whether the request is within the AUB or exceeds it. When numtokensi >= 1, Sir is set to the current time t. If the total number of requests made by fi through time t exceeds the AUB (numtokensi < 1), the start tag is assigned a future time greater than t. In particular, the start tag is set to the time it would have taken a well-behaved flow to send that number of requests [2].
pClock guarantees that well-behaved flows are not missed and that the requests of background jobs are done in batches, which can lead to better disk utilization since many background jobs tend to be sequential [8]. The algorithm has the ability to allocate spare capacity to the workloads or to the background jobs. It is also lightweight to implement and efficient to execute. However, it does not offer control over how QoS mechanisms interact with storage devices [7].
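To make the tag mechanics above concrete, here is a minimal Python sketch of a pClock-style scheduler. This is a hedged illustration, not the published algorithm: the class and method names are invented, the token and tag arithmetic follows only the description above for a single (rate, burst, latency) specification, and CheckAndAdjustTags (the starvation-avoidance resynchronization) is omitted.

```python
import heapq

class Flow:
    """One workload with a (throughput, burst, latency) spec, as in pClock."""
    def __init__(self, name, rate, burst, latency):
        self.name = name
        self.rate = rate          # requests per second (throughput spec)
        self.burst = burst        # maximum burst size
        self.latency = latency    # desired latency bound
        self.tokens = burst       # numtokens: slack under the arrival upper bound
        self.last_update = 0.0

    def update_tokens(self, t):
        # updateTokens: advance the arrival upper bound to the current time t
        self.tokens = min(self.burst,
                          self.tokens + (t - self.last_update) * self.rate)
        self.last_update = t

    def compute_tags(self, t):
        # computeTags: within the AUB the start tag is "now"; beyond it, the
        # future time a well-behaved flow would have needed for this request
        if self.tokens >= 1:
            start = t
        else:
            start = t + (1 - self.tokens) / self.rate
        self.tokens -= 1
        finish = start + self.latency
        return start, finish

class PClockScheduler:
    def __init__(self):
        self.queue = []  # min-heap ordered by finish tag

    def arrive(self, flow, t, request):
        flow.update_tokens(t)
        start, finish = flow.compute_tags(t)
        heapq.heappush(self.queue, (finish, start, flow.name, request))

    def dispatch(self):
        # Request scheduling: pick the request with the minimum finish tag
        if self.queue:
            finish, start, name, request = heapq.heappop(self.queue)
            return name, request
        return None
```

With two flows arriving simultaneously, the flow with the tighter latency bound acquires the smaller finish tag and is dispatched first.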
2.3 Argon
The Argon storage server explicitly manages its resources to bound the inefficiency arising from inter-service disk and cache interference in traditional systems. The goal is to provide each service with at least a configured fraction of the throughput it achieves when it has the storage server to itself. Argon uses automatically configured prefetch/write-back sizes to insulate streaming efficiency from disk seeks introduced by competing workloads, employing prefetching and write-back aggregation as tools for performance insulation [4] [6].
Argon adapts, extends and applies existing mechanisms to provide performance insulation for shared storage servers. Several operating systems, such as the Eclipse operating system, use time slicing of disk head time to achieve performance insulation. Argon goes beyond this approach by automatically determining the lengths of the time slices required and by adding appropriate, automatically configured cache partitioning and prefetch/write-back [8].
Argon uses a QoS-aware disk scheduler in place of strict time slicing for workloads whose access patterns would not interfere when combined. To implement fairness or weighted fair sharing between workloads, Argon uses amortization, cache partitioning and quanta-based scheduling. It assumes that network bandwidth and CPU time have no effect on efficiency [9]. To achieve complete isolation, Argon does not allow requests from different workloads to be mixed; instead it uses strict quanta-based scheduling. This ensures that each client gets exclusive access to the disk during a scheduling quantum, which avoids starvation because active clients' quanta are scheduled in a round-robin manner [5].
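The strict quanta-based scheduling described above can be sketched as a simple round-robin over per-workload request queues (an illustrative sketch with invented names, not Argon's implementation):

```python
from collections import deque

def quanta_schedule(queues, quantum_reqs):
    """Strict quanta-based round-robin in the spirit of Argon's disk-time
    slicing: each active workload gets exclusive disk access for one quantum
    per round, so requests from different workloads are never mixed and no
    active workload starves. A quantum is modeled here as a fixed number of
    requests rather than a length of disk time."""
    order = deque(name for name, q in queues.items() if q)
    timeline = []
    while order:
        name = order.popleft()
        q = queues[name]
        burst, queues[name] = q[:quantum_reqs], q[quantum_reqs:]
        timeline.append((name, burst))   # exclusive disk time for this workload
        if queues[name]:
            order.append(name)           # still active: rejoin the round
    return timeline
```

Each entry in the returned timeline is one quantum served entirely from a single workload's queue, mirroring the exclusive-access guarantee.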
Traditional disk and cache management allow interference among services' access patterns to significantly reduce efficiency [7]. Argon combines and automatically configures prefetch/write-back, cache partitioning and quanta-based disk time scheduling to provide each service with a configurable fraction of the efficiency it would receive without competition. This increases both efficiency and predictability when services share a storage server [4].
However, as with the other storage-specific solutions, Argon runs on the storage device itself, which requires multiple instances of it to be implemented on all the devices. This increases overhead and CPU time. Moreover, since there is no centralized management of QoS, QoS is not taken care of while the storage data is in transit from source to destination. The Argon design also assumes that bandwidth is not a factor in QoS; however, with IP SANs bandwidth management is very important, since the storage data moves from source to destination via the IP network [6].
2.4 Façade
Lumb et al. [3] developed Façade as a dynamic storage controller for controlling multiple input/output streams going to a shared storage device and ensuring that each of the input/output streams receives the performance specified by its service level objective. Façade provides performance guarantees in highly volatile scenarios. To achieve QoS, Façade is implemented as a virtual storage controller that sits between hosts and storage devices in the network and throttles individual input/output requests from multiple clients so that devices do not saturate [2].
Figure: Façade structure [3] (components: capacity planning, Façade, storage devices; signals: SLO storage allocation, allocate stores, I/Os, overload alarm)
The capacity planner allocates storage for each workload on
the storage device and ensures that the device has adequate
capacity and bandwidth to meet the aggregate demands of the workloads assigned to it. The allocation is adjusted depending on the workload. Requests arriving at Façade are queued in per-workload input queues. To determine which requests are admitted to the storage devices, Façade relies on three components: the I/O scheduler, the statistics monitor and the controller [8].
The I/O scheduler maintains a target queue depth value and a per-workload latency target, which it tries to meet using earliest deadline first (EDF) scheduling. The deadline for a request from a workload WK is arrivalTime(WK) + latencyTarget(WK), where arrivalTime(WK) is its arrival time and latencyTarget(WK) is a target supplied for WK by the controller. Requests are admitted to the devices in two cases: if the device queue depth is less than the current queue length target, or if the deadline for any workload is already past. The intent of controlling queue depth is to allow workloads with low latency requirements to satisfy their SLOs [3].
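A minimal sketch of this admission logic, assuming invented names and a single device queue (not Façade's actual code): deadlines are arrival time plus the controller-supplied latency target, and a pending request is admitted while the device queue is below its target depth or once its deadline has passed.

```python
import heapq

class FacadeScheduler:
    """Façade-style EDF admission sketch: per-request deadlines ordered in a
    min-heap; admission is gated by a target device queue depth."""
    def __init__(self, queue_depth_target):
        self.queue_depth_target = queue_depth_target
        self.pending = []        # min-heap of (deadline, workload, request)
        self.in_device = 0       # requests currently outstanding at the device

    def arrive(self, workload, request, arrival_time, latency_target):
        deadline = arrival_time + latency_target  # EDF deadline
        heapq.heappush(self.pending, (deadline, workload, request))

    def admit(self, now):
        admitted = []
        while self.pending:
            deadline, workload, request = self.pending[0]
            # Admit if the device queue is below target, or the deadline passed
            if self.in_device < self.queue_depth_target or deadline <= now:
                heapq.heappop(self.pending)
                self.in_device += 1
                admitted.append((workload, request))
            else:
                break
        return admitted

    def complete(self):
        self.in_device -= 1      # a request finished at the device
```

Lowering `queue_depth_target` trades device utilization for latency, which is exactly the knob the controller turns in the next subsections.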
2.4.1 Statistics monitor
The statistics monitor receives I/O arrivals and completions. It reports the completions to the I/O scheduler and also computes the average latency and the read and write request arrival rates for the active workloads every P seconds, reporting them to the controller [10].
2.4.2 Controller
The controller adjusts the target workload latencies and the target device queue length. Target workload latencies must be adjusted because the workload request rates vary, and it is therefore necessary to give those requests a different latency based on the workload's SLO. The device queue depth must also be adjusted to meet the varying workload requirements [8]. The controller tries to keep the queue as full as possible to enhance device utilization; however, this increases latency. This means that when any workload demands a low latency, the controller reduces the target queue depth. The controller uses the I/O statistics it receives from the monitor every P seconds to compute a new latency target based on the SLO for each workload, as follows.
Let the SLO for WK be ((r1, tr1, tw1), (r2, tr2, tw2), …, (rn, trn, twn)) with a window w, and let fr be the fraction of reads reported.
Let r0 = 0, rn+1 = ∞ and trn+1 = twn+1 = ∞. Then
latencyTarget(WK) = tri · fr + twi · (1 − fr),
where i is the index such that ri−1 <= readRate(WK) + writeRate(WK) < ri [7].
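The band-selection rule can be written directly as a small function (an illustrative sketch; the SLO is a list of (ri, tri, twi) bands as defined above, and the names are ours):

```python
import math

def latency_target(slo, read_rate, write_rate, read_fraction):
    """Compute a per-workload latency target from a banded SLO.
    `slo` is a list of (r_i, tr_i, tw_i) tuples sorted by r_i: up to request
    rate r_i, reads should see latency tr_i and writes tw_i. The band whose
    [r_{i-1}, r_i) interval contains the observed total rate supplies the
    targets, blended by the observed read fraction."""
    rate = read_rate + write_rate
    prev_r = 0.0
    for r_i, tr_i, tw_i in slo:
        if prev_r <= rate < r_i:
            return tr_i * read_fraction + tw_i * (1 - read_fraction)
        prev_r = r_i
    return math.inf  # rate beyond the last band: no latency guarantee

# Example bands (hypothetical): up to 100 IOPS reads get 5 ms / writes 10 ms;
# up to 500 IOPS, 20 ms / 40 ms.
slo = [(100, 5.0, 10.0), (500, 20.0, 40.0)]
```

At 200 IOPS with a 50% read fraction, the second band applies and the target is 0.5 * 20 + 0.5 * 40 = 30 ms.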
Façade is able to efficiently utilize resources and balance the load among multiple backend devices while satisfying the performance requirements of many different client applications. Façade is also able to adapt to workloads whose performance requirements change over time. However, Façade cannot handle large workloads, because the multiple instances of Façade that run in every storage device cannot cooperate to handle them [3].
2.5 PARDA
PARDA enforces proportional-share fairness among distributed hosts accessing a storage array without assuming any support from the array itself. PARDA uses latency measurements to detect overload and adjusts issue queue lengths to provide fairness [7]. Numerous algorithms for network QoS have been proposed, including many variants of fair queueing. However, these approaches are suitable only in a centralized setting where a single controller manages all requests for resources [2].
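The idea of latency-driven issue-queue adjustment can be sketched as a one-step control update (a hedged sketch of the general approach, with invented names and constants; PARDA's actual control law and parameters differ):

```python
def adjust_window(window, avg_latency, latency_threshold,
                  shares, gamma=0.8, max_window=64, min_window=4):
    """One step of a PARDA-style host-side control loop: the host smooths its
    issue queue length toward a value scaled by observed latency versus a
    system-wide threshold, weighted by the host's shares, so hosts back off
    under overload in proportion to their entitlement."""
    target = window * (latency_threshold / avg_latency) * shares
    new_window = (1 - gamma) * window + gamma * target
    # Clamp to keep the window in a sane operating range
    return max(min_window, min(max_window, new_window))
```

When the measured latency exceeds the threshold the window shrinks; when the array is underloaded it grows back, with `shares` biasing the split between hosts.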
3. Discussion and Conclusion
PARDA enforces proportional fairness among distributed hosts accessing a storage array without assuming any support from the array itself. PARDA uses latency measurements to detect overload and adjusts issue queue lengths to provide fairness. However, this technique requires each host to run an instance of the algorithm, which results in overhead caused by running the algorithm. Façade provides performance guarantees by throttling individual input/output requests from multiple clients so that devices do not saturate. Façade provides performance isolation in that the performance experienced by the workload from a given customer must not suffer because of variations in the workloads from other customers. Façade is able to use resources much more efficiently and to balance the load among multiple backend devices while satisfying the performance requirements of many different client applications. However, it cannot handle large workloads well, and it requires multiple instances of the same algorithm to run in all storage devices.
Stonehenge was developed as a technique for providing QoS guarantees in storage area networks. All the above techniques require that multiple instances of the same algorithm run on every storage device. This increases the overhead caused by processing the algorithms. These techniques are implemented on the storage device and therefore do not provide service guarantees while storage traffic is traversing the network, which matters because in an IP SAN storage traffic interacts with other traffic in the network.
4. ACKNOWLEDGMENTS
My thanks go to all my friends who have contributed towards the development of this paper.
5. REFERENCES
[1] Gulati, A., Merchant, A., & Varman, P. J. (2007). pClock: An Arrival Curve Based Approach for QoS Guarantees in Shared Storage Systems. ACM.
[2] Gulati, A., & Waldspurger, C. A. (2009). PARDA: Proportional Allocation of Resources for Distributed Storage Access. In 7th USENIX Conference on File and Storage Technologies (pp. 85–98).
[3] Lumb, C. R., Merchant, A., & Alvarez, G. A. (2003). Façade: Virtual Storage Devices with Performance Guarantees. In File and Storage Technologies (pp. 131–144).
[4] Wachs, M., Abd-El-Malek, M., Thereska, E., & Ganger, G. R. (2007). Argon: Performance Insulation for Shared Storage Servers. In File and Storage Technologies (pp. 61–76).
[5] Gulati, A., & Ahmad, I. (2008). Towards Distributed Storage Resource Management Using Flow Control. ACM SIGOPS Operating Systems Review, 42(6), pp. 10–16.
[6] Bjørgeengen, J. (2010). Using TCP/IP Traffic Shaping to Achieve iSCSI Service Predictability. In Proceedings of LISA '10: 24th Large Installation System Administration Conference (pp. 91–107).
[7] Traver, L., Tarin, C., & Cardona, N. (2009). Bandwidth Resource Management for Neural Signal Telemetry. IEEE Transactions on Information Technology in Biomedicine, 13(6), pp. 1083–1084.
[8] Wachs, M., Abd-El-Malek, M., Thereska, E., & Ganger, G. R. (2007). Argon: Performance Insulation for Shared Storage Servers. In Proceedings of the 5th USENIX Conference on File and Storage Technologies. USENIX Association.
[9] Van der Stok, P., Jarnikov, D., Kozlov, S., van Hartskamp, M., & Lukkien, J. J. (2007). Hierarchical Resource Allocation for Robust In-Home Video Streaming. Journal of Systems and Software, 80(7), pp. 951–961.
[10] Peter, M. O., & Babatunde, P. J. (2012). Software Prototyping: A Strategy to Use When User Lacks Data Processing Experience. ARPN Journal of Systems and Software, 2(6), pp. 219–223.