The document discusses live migration of Docker containers using checkpoint and restore in userspace (CRIU). It analyzes network, memory, and CPU usage during live migration in different scenarios. Four scenarios are simulated: 1) one-way migration from platform 1 to platform 2, 2) one-way migration from platform 2 to platform 1, 3) two-way migration with one container, and 4) two-way migration with three containers. The results show the time taken for checkpoint and restore in each platform and scenario, and memory and CPU usage are analyzed before and after checkpoint and restore. Live migration using CRIU is found to migrate containers effectively while minimizing downtime, with performance depending on factors such as the number of memory pages changed.
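As a rough illustration of the measurement setup (not taken from the document), the following Python sketch times the checkpoint and restore steps using Docker's experimental checkpoint support, which relies on CRIU; the container name "web" and checkpoint name "cp1" are placeholders.

import subprocess, time

def timed(cmd):
    # Run a command and return how long it took, in seconds.
    start = time.monotonic()
    subprocess.run(cmd, check=True)
    return time.monotonic() - start

# Checkpoint on the source host (leaves the container stopped).
t_checkpoint = timed(["docker", "checkpoint", "create", "web", "cp1"])

# (Out of band: copy the checkpoint data and image to the target host.)

# Restore on the target host from the transferred checkpoint.
t_restore = timed(["docker", "start", "--checkpoint", "cp1", "web"])

print(f"checkpoint: {t_checkpoint:.2f}s, restore: {t_restore:.2f}s")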
High performance and flexible networking (John Berkmans)
This document summarizes a paper presented at the 11th USENIX Symposium on Networked Systems Design and Implementation (NSDI '14), held April 2-4, 2014 in Seattle, WA. The paper proposes NetVM, a platform that uses virtualization to run complex network functions at line speed on commodity servers while providing flexibility. NetVM leverages the DPDK framework to allow virtual machines to access packets directly from the NIC without kernel involvement. It introduces innovations such as inter-VM communication through shared memory, a hypervisor switch for state-dependent packet routing, and security domains. Evaluation shows NetVM can process packets at 10 Gbps throughput across multiple VMs, over 250% faster than existing SR-IOV-based approaches.
IRJET- An Adaptive Scheduling based VM with Random Key Authentication on Clou... (IRJET Journal)
This document summarizes a research paper on an adaptive scheduling-based virtual machine (VM) approach with random key authentication for cloud data access. The paper proposes allocating VMs to servers in a way that flexibly utilizes cloud resources while guaranteeing job deadlines. It employs time sliding and bandwidth scaling in resource allocation to better match resources to job requirements and cloud availability. Simulations showed the approach can accept more jobs than existing solutions while increasing provider revenue and lowering tenant costs. The paper also discusses generating random keys for user authentication and reviewing related work on scheduling methods and cloud resource provisioning.
This document analyzes the performance of application protocols like HTTP and FTP in a mobile ad hoc network (MANET) using the network simulation tool OPNET. It describes 6 simulation scenarios with varying numbers of nodes. Traffic is generated using FTP and HTTP applications. Key metrics like throughput, network load, and media access delay are observed. The document finds that as the number of nodes increases, these performance metrics are affected.
Impact of network quality deterioration on user’s perceived operability and l... (IJCNCJournal)
The remote desktop environment (Virtual Desktop Infrastructure) is attracting interest as a way to strengthen security and support mobile access or telework. To realize remote desktop environments, a remote desktop protocol is required to transfer information over a network about the user's keyboard and mouse operations on a terminal to the remote server. The growing popularity of remote desktop environments makes it important to determine the factors that govern the user's perceived operability with a remote desktop protocol. It is also important to find out the conditions for a wide-area live migration of virtual machines, in order to use resources efficiently in remote desktop environments.
This paper examines the impact of network quality deterioration (long network delay, high packet loss, small downlink bandwidth) on a user's perceived operability in remote desktop environments, assuming RDP, PCoIP and ICA as the remote desktop protocols. Next, the paper studies the impact of network quality on the performance of a live migration of virtual machines in remote desktop environments.
DCE is an architecture defined by OSF to provide a distributed computing platform. It includes services like RPC, directory service, security, time, and file service. DCE defines a framework for client-server communication and developing distributed applications across networked computers. It aims to address challenges of distributed computing like scalability, availability and security.
This document summarizes distributed computing environment (DCE). DCE provides a vendor-independent platform for building distributed applications. It uses remote procedure calls (RPC) to allow systems to access remote procedures simply by calling them. DCE provides security services like authentication, authorization, and encryption. Its components include a thread package, RPC facility, time service, name service, and file and security services. DCE has applications in security, the world wide web, and distributed objects.
The document discusses storage virtualization. It defines storage virtualization as presenting a logical view of physical storage resources. It describes different forms of virtualization including memory, network, server, and storage virtualization. It then focuses on storage virtualization, discussing the SNIA taxonomy, configurations such as in-band vs out-of-band, challenges, and types including block-level and file-level virtualization.
The document provides an introduction to the Distributed Computing Environment (DCE). It discusses the goals of DCE, which include allowing applications to run on different operating systems and networks, providing a platform for distributed applications, and tools for authentication and access protection. It describes the core DCE services, which include distributed file service, thread service, RPC, time service, directory service, and security service. It also explains how to write a DCE client and server, bind them, and perform an RPC.
IRJET - Torcloud - An Energy-Efficient Public Cloud for Imparting Files (IRJET Journal)
This document proposes TorCloud, a software-as-a-service cloud platform that allows users to download torrent files more efficiently. It does this by intermediating between users and peer computers, allowing cloud servers to quickly find peers and download files with minimal requests from users. This reduces the load on users' computers compared to traditional torrent downloading. The system distributes files across multiple cloud servers and monitors their CPU, memory, disk space, and load usage to balance resources.
The document discusses the World Wide Web (WWW) and Hypertext Transfer Protocol (HTTP). It describes the basic architecture of the WWW including clients, servers, web pages, and URLs. It explains that web pages can be static, dynamic, or active. The document then discusses HTTP in more detail, including how HTTP requests and responses are structured, how persistent connections work in HTTP 1.1, and how caching can improve performance.
2. Distributed Systems Hardware & Software concepts (Prajakta Rane)
This document discusses distributed system software and middleware. It describes three types of operating systems used in distributed systems - distributed operating systems, network operating systems, and middleware operating systems. Middleware operating systems provide a common set of services for local applications and independent services for remote applications. Common middleware models include remote procedure call, remote method invocation, CORBA, and message-oriented middleware. Middleware offers services like naming, persistence, messaging, querying, concurrency control, and security.
An Empirical study on Peer-to-Peer sharing of resources in Mobile Cloud Envi... (IJECEIAES)
The increasing usage of mobile devices with internet access and the interoperability among cloud services intensify the role of distributed environments in today's real-world applications. Modern technologies are important for building rich, scalable and interoperable applications. To meet client requirements, the cloud service provider should offer adequate infrastructure, especially under heavy multi-client load. To provide a solution for large-scale requirements and to satisfy mobile clients in critical situations such as lack of bandwidth, connectivity issues and low service completion ratio, we present an ad hoc virtual cloud model for different scenarios that include single and multiple client configurations with various file sizes and file formats for retrieving files in the mobile cloud environment. We evaluate the strategies with socket and RMI implementations in Java and identify the best model for real-world applications. Performance evaluation is carried out on the results obtained, with recommendations on when sockets and RMI can be appropriately used in a peer-to-peer environment where the mobile user cannot connect directly to the cloud services.
Client server computing in mobile environments (Praveen Joshi)
Client-server computing in mobile environments: a versatile, message-based, modular infrastructure intended to improve usability, flexibility, interoperability and scalability compared with centralized, mainframe, time-sharing computing.
It is intended to reduce network traffic.
Communication uses RPC or SQL, as sketched below.
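For illustration only, a minimal RPC round trip of the kind the slides mention, using Python's built-in xmlrpc modules; the procedure name, port and data are arbitrary choices, not taken from the slides.

from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy
import threading

def lookup_balance(account_id):
    # Server-side procedure; a real system would query a database here.
    return {"account": account_id, "balance": 42.0}

server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
server.register_function(lookup_balance)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The mobile client calls the remote procedure as if it were local.
client = ServerProxy("http://localhost:8000")
print(client.lookup_balance("A-17"))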
A Secure Cloud Storage System with Data Forwarding using Proxy Re-encryption ... (IJTET Journal)
1. The document describes a secure cloud storage system that uses proxy re-encryption to allow authorized data sharing among multiple users. It focuses on privacy issues in cloud storage and proposes a solution using proxy re-encryption.
2. Proxy re-encryption schemes allow a proxy (like a cloud server) to alter an encrypted file so that it can be decrypted by another user, without revealing the content to the proxy. The proposed system uses this to share files encrypted for one user so they can be decrypted by another authorized user.
3. The system assigns different trust levels to control what data different users can access. A high trust level allows access to more data fields, while a low trust level restricts access. This trust
A Secure Cloud Storage System with Data Forwarding using Proxy Re-encryption ... (IJTET Journal)
Cloud computing provides the facility to access shared resources and common support, contributing services on demand over the network to perform operations that meet changing business needs. A cloud storage system, consisting of a collection of storage servers, affords long-term storage services over the internet. Storing data in a third-party cloud system causes serious concerns over data confidentiality; without considering the local infrastructure limitations, the cloud services allow the user to enjoy the cloud applications. As different users may be working in a collaborative relationship, data sharing becomes significant to achieve productive benefit during data access. The existing security system focuses only on authentication, ensuring that a user's private data cannot be accessed by fake users. To address this cloud storage privacy issue, a shared authority based privacy-preserving authentication protocol (SAPA) is used. In the SAPA, shared access authority is achieved by anonymous access requests and privacy consideration, and attribute-based access control allows users to access their own data fields. To provide data sharing among multiple users, a proxy re-encryption scheme is applied by the cloud server. The privacy-preserving data access authority sharing is attractive for multi-user collaborative cloud applications.
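To illustrate the proxy re-encryption idea in general terms, here is a toy, insecure Python sketch with tiny parameters in the spirit of the classic BBS98 ElGamal-based scheme; it is not the protocol from the paper, and real systems use large groups and hybrid encryption.

import secrets

p, q, g = 2039, 1019, 4          # p = 2q + 1; g generates the order-q subgroup

def keygen():
    sk = secrets.randbelow(q - 1) + 1
    return sk, pow(g, sk, p)

def encrypt(pk, m):               # m is an integer in [1, p-1]
    k = secrets.randbelow(q - 1) + 1
    return (m * pow(g, k, p) % p, pow(pk, k, p))   # (m*g^k, g^(sk*k))

def decrypt(sk, ct):
    c1, c2 = ct
    gk = pow(c2, pow(sk, -1, q), p)                # recover g^k
    return c1 * pow(gk, -1, p) % p

def rekey(sk_from, sk_to):
    return sk_to * pow(sk_from, -1, q) % q         # rk = b / a (mod q)

def reencrypt(rk, ct):
    c1, c2 = ct
    return (c1, pow(c2, rk, p))                    # g^(ak) -> g^(bk)

a, pk_a = keygen()                # Alice (data owner)
b, pk_b = keygen()                # Bob (authorized user)
ct = encrypt(pk_a, 123)           # stored in the cloud, encrypted for Alice
ct_b = reencrypt(rekey(a, b), ct) # proxy (cloud server) re-encrypts for Bob
assert decrypt(b, ct_b) == 123    # Bob decrypts; the proxy never saw 123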
Resumption of virtual machines after adaptive deduplication of virtual machin... (IJECEIAES)
In cloud computing, load balancing and energy utilization are critical problems addressed by virtual machine (VM) migration. Live migration is the live movement of VMs from an overloaded or underloaded physical machine to a suitable one. During this process, transferring large disk image files takes more time, hence longer migration and down time. In the proposed adaptive deduplication, based on the image file size, the file undergoes both fixed- and variable-length deduplication processes. The significance of this paper is the resumption of VMs with reunited deduplicated disk image files. Performance is measured by calculating the percentage reduction of VM image size after deduplication, the time taken to migrate the deduplicated file, and the time taken for each VM to resume after the migration. The results show an 83% reduction in overall image size and an 89.76% reduction in migration time. For a deduplication ratio of 92%, it takes an overall time of 3.52 minutes, a 7% reduction in resumption time, compared with the time taken for the total QCOW2 files at their original size. For VMDK files the resumption time is reduced by a maximum of 17% (7.63 minutes) compared with that for the original files.
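As a rough, hypothetical illustration of fixed-length deduplication (not the paper's adaptive algorithm), the following Python sketch hashes equal-sized chunks of an image file and stores each unique chunk once; the 4 KiB chunk size and file names are placeholders.

import hashlib

def dedupe(path, chunk_size=4096):
    store, recipe = {}, []            # unique chunks, and the order needed to rebuild
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)
            recipe.append(digest)
    return store, recipe

def restore(store, recipe, out_path):
    # "Reunite" the image on the target host before resuming the VM.
    with open(out_path, "wb") as f:
        for digest in recipe:
            f.write(store[digest])

# Example: store, recipe = dedupe("vm.qcow2"); the size reduction is
# 1 - (len(store) * 4096) / original_size for this fixed-length variant.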
Overview of Network Programming, Remote Procedure Calls, Remote Method Invocation, Message Oriented Communication, and web services in distributed systems
SDN: A New Approach to Networking Technology (IRJET Journal)
This document summarizes SDN (Software Defined Networking) and its relationship to network virtualization and NFV (Network Function Virtualization). It discusses how SDN separates the control plane from the data plane to make networks programmable. It also describes how network virtualization allows multiple virtual networks to run simultaneously on top of a physical network. NFV aims to virtualize network functions like firewalls and load balancers that were traditionally hardware-based. The document argues that SDN, network virtualization, and NFV work together to provide flexible, easily reconfigurable networks and reduce costs. When combined, they allow networks to be centrally programmed and abstracted from physical hardware.
This document discusses distributed multimedia systems (DMMS). It defines DMMS as consisting of multimedia databases, proxy servers, and clients intended to distribute multimedia content over networks. Key requirements of DMMS include supporting continuous media like audio and video, quality of service management to ensure requests are met, and multiparty communications. The document outlines the basic DMMS architecture and discusses factors like server bandwidth that affect the system. It also covers approaches like proxy-based and clustered servers, as well as quality of service management through techniques such as buffering, bandwidth reservation and traffic shaping. Examples of DMMS applications include digital libraries, distance learning, and audio/video streaming.
This chapter describes network-attached storage (NAS), including its components, implementations, file-sharing protocols, and factors that affect performance. NAS uses file-level access for input/output operations and is optimized for file serving functions. There are two types of NAS implementations - integrated, where all components are in a single enclosure, and gateway, where the NAS head shares storage with a SAN. Common file-sharing protocols for NAS include NFS and CIFS.
A distributed system is a collection of computational and storage devices connected through a communications network. In this type of system, data, software, and users are distributed.
This document discusses managing and monitoring the storage infrastructure. It describes monitoring components like hosts, networks, and storage arrays to ensure accessibility, capacity, performance, and security. Key storage management activities are also outlined, such as availability, capacity, performance, and security management. Emerging standards for storage management are presented, along with challenges in monitoring today's complex storage environments and ideal solutions.
The document summarizes research on security risks in cloud computing due to multi-tenancy. It discusses how researchers were able to:
1) Map the physical layout of Amazon EC2 instances to determine placement parameters to achieve co-residence with target VMs.
2) Verify co-residence through network checks and a covert channel with over 60% success.
3) Cause co-residence by launching many probes or targeting recently launched instances, achieving up to 40% success.
4) Exploit co-residence to measure cache usage and network traffic, allowing for load monitoring and covert channels to leak information.
This document outlines 7 key challenges in designing distributed systems: heterogeneity, openness, security, scalability, failure handling, concurrency, and transparency. It discusses each challenge in detail, providing examples. Heterogeneity refers to differences in networks, hardware, operating systems, and programming languages that must be addressed. Openness means a system can be extended and implemented in various ways. Security concerns confidentiality, integrity, and availability of resources. Scalability means a system remains effective as resources and users increase significantly. Failure handling techniques include detecting, masking, tolerating, and recovering from failures. Concurrency ensures correct and high performance sharing of resources. Transparency aims to make distributed components appear as a single system regardless of location, access
This document discusses several key concepts in distributed operating systems:
1. Transparency allows applications to operate without regard to whether the system is distributed or implementation details. Inter-process communication enables communication within and between nodes.
2. Process management provides policies and mechanisms for sharing resources between distributed processes like load balancing.
3. Resource management distributes resources like memory and files across nodes and implements policies for load sharing and balancing.
4. Reliability is achieved through fault avoidance, tolerance, and detection/recovery to prevent and recover from errors.
The document discusses backup and recovery concepts including the purposes of backups for disaster recovery, operational backups, and archiving. It covers backup considerations like retention periods and file sizes. Backup methods can be full, incremental, differential or synthetic. The backup process involves a backup server coordinating with clients and storage nodes. Restore is initiated manually. Common backup topologies are direct-attached, LAN-based, and SAN-based.
Chapter 1 characterisation of distributed systems (AbDul ThaYyal)
This document discusses the key concepts and challenges of distributed systems. It defines distributed systems as networked computers that communicate by passing messages in order to share resources. Some of the main challenges discussed include heterogeneity, security, scalability, failure handling, and transparency. Transparency refers to hiding the complexities of the distributed nature of the system from users, such as hiding the physical location of resources or ability to access both local and remote resources uniformly.
The document discusses four key challenges for implementing embedded cloud computing:
1. Configuring systems for data timeliness and reliability across multiple data streams and applications.
2. Configuring ad-hoc datacenters for remote operations in a timely manner based on available cloud resources.
3. Ensuring configuration accuracy so data timeliness and reliability are optimized for the given computing resources.
4. Reducing development complexity to allow systems to readily configure and operate across different cloud environments and applications.
The document discusses a cloud operating system (OS) that runs on Linux and provides cloud computing services. The cloud OS allows users to access cloud resources through a web interface similar to desktop programs. It provides file management, productivity apps, and communication tools. The cloud OS manages virtual machines across cloud nodes and provides APIs for distributed process and application management. Key features include resource measurement, abstraction and publishing resources, and distributed user authentication.
Efficient architectural framework of cloud computing (Souvik Pal)
This document discusses an efficient architectural framework for cloud computing. It begins by providing background on cloud computing and discusses challenges such as security, privacy, and reliability. It then proposes a new architectural framework that separates infrastructure as a service (IaaS) into three sub-modules: IaaS itself, a hypervisor monitoring environment (HME), and resources as a service (RaaS). The HME acts as middleware between IaaS and physical resources, using a hypervisor to allocate resources from a pool managed by RaaS. This proposed framework is intended to improve performance and access speed for cloud computing.
This document provides an overview of distributed computing paradigms such as cloud computing, jungle computing, and fog computing. It defines distributed computing as utilizing multiple autonomous computers located across different areas to solve large problems. Cloud computing is described as internet-based computing using shared online resources and data storage. Jungle computing combines distributed systems for high performance, while fog computing extends cloud computing to network edges for low latency applications. The document discusses characteristics, architectures, advantages and disadvantages of these paradigms.
In recent years, mobile devices such as smartphones and tablets have been empowered with tremendous technological advancements. Augmenting computing capability with the distant cloud lets us envision a new computing era named mobile cloud computing (MCC). However, the distant cloud has several limitations, such as communication delay and bandwidth, which bring the idea of a proximate cloud, or cloudlet. The cloudlet has distinct advantages and is free from several limitations of the distant cloud, making it a viable way to augment mobile device tasks to the nearest small-scale cloud. However, cloudlet resources are finite, and the limited resources negatively impact cloudlet performance as the number of users grows, at some point appearing as a resource scarcity problem. In this paper, we analyse the impact of the cloudlet resource scarcity problem on overall cloudlet performance in mobile cloud computing. In addition, for empirical analysis, we make some definitions, assumptions and research boundaries, and we experimentally examine the impact of finite resources on overall cloudlet performance. Through this empirical analysis, we explicitly establish the research gap and present the cloudlet finite resource problem in mobile cloud computing. We then propose a Performance Enhancement Framework of Cloudlet (PEFC) which enhances the performance of a resource-constrained cloudlet. Our aim is to increase cloudlet performance with these limited cloudlet resources and provide a better experience for the cloudlet user in mobile cloud computing.
An Efficient Queuing Model for Resource Sharing in Cloud Computing (theijes)
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
The papers for publication in The International Journal of Engineering & Science are selected through rigorous peer reviews to ensure originality, timeliness, relevance, and readability.
Ant colony Optimization: A Solution of Load balancing in Cloud (dannyijwest)
Cloud computing is a new style of computing over the internet. It has many advantages along with some crucial issues to be resolved in order to improve the reliability of the cloud environment. These issues relate to load management, fault tolerance and various security concerns in the cloud environment. In this paper the main concern is load balancing in cloud computing. The load can be CPU load, memory capacity, delay or network load. Load balancing is the process of distributing the load among various nodes of a distributed system to improve both resource utilization and job response time, while also avoiding a situation where some nodes are heavily loaded while other nodes are idle or doing very little work. Load balancing ensures that every processor in the system, or every node in the network, does approximately the same amount of work at any instant of time. Many methods have been devised to resolve this problem, such as Particle Swarm Optimization, hash methods, genetic algorithms and several scheduling-based algorithms. In this paper we propose a method based on Ant Colony Optimization to resolve the problem of load balancing in the cloud environment.
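The following Python sketch shows the general ant-colony idea applied to assigning tasks to nodes; the pheromone update, cost model (makespan of estimated task runtimes) and parameters are illustrative choices, not the algorithm proposed in the paper.

import random

def aco_balance(task_costs, n_nodes, n_ants=20, n_iters=50, rho=0.1, deposit=1.0):
    n_tasks = len(task_costs)
    pher = [[1.0] * n_nodes for _ in range(n_tasks)]   # pheromone per (task, node)
    best, best_makespan = None, float("inf")
    for _ in range(n_iters):
        for _ant in range(n_ants):
            load = [0.0] * n_nodes
            assign = []
            for t in range(n_tasks):
                # Prefer nodes with strong pheromone and light current load.
                weights = [pher[t][n] / (1.0 + load[n]) for n in range(n_nodes)]
                node = random.choices(range(n_nodes), weights=weights)[0]
                assign.append(node)
                load[node] += task_costs[t]
            makespan = max(load)
            if makespan < best_makespan:
                best, best_makespan = assign, makespan
            # Evaporate, then deposit pheromone inversely to the makespan.
            for t in range(n_tasks):
                for n in range(n_nodes):
                    pher[t][n] *= (1.0 - rho)
                pher[t][assign[t]] += deposit / makespan
    return best, best_makespan

tasks = [random.uniform(1, 10) for _ in range(30)]
print(aco_balance(tasks, n_nodes=4))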
Adaptive offloading in mobile cloud computing, through an automatic task-partitioning approach, aims to augment execution by migrating heavy computation from mobile devices to resourceful cloud servers and then receiving the results via wireless networks. Offloading is an effective way to overcome the resource and functionality constraints of mobile devices, since it releases them from intensive processing and increases the performance of mobile applications in terms of response time. Offloading brings many potential benefits, such as energy saving, performance improvement, reliability improvement, ease for software developers and better exploitation of contextual information. Parameters about method transitions, response times, cost and energy consumption are dynamically re-estimated at runtime during application execution.
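A hedged sketch of the kind of runtime offloading decision described above: compare estimated local execution against remote execution (transfer plus cloud run) on response time and energy. All constants, field names and weights below are placeholders, not values from the paper.

def should_offload(cycles, data_bytes,
                   local_mips=2_000, cloud_mips=20_000,
                   bandwidth_bps=2_000_000, rtt_s=0.05,
                   local_power_w=2.0, radio_power_w=1.0,
                   w_time=0.5, w_energy=0.5):
    # Estimated time and energy if the method runs on the device.
    t_local = cycles / (local_mips * 1e6)
    e_local = t_local * local_power_w

    # Estimated time and energy if the method is offloaded.
    t_tx = data_bytes * 8 / bandwidth_bps + rtt_s
    t_remote = t_tx + cycles / (cloud_mips * 1e6)
    e_remote = t_tx * radio_power_w          # device mostly idles while the cloud computes

    cost_local = w_time * t_local + w_energy * e_local
    cost_remote = w_time * t_remote + w_energy * e_remote
    return cost_remote < cost_local

print(should_offload(cycles=5e9, data_bytes=200_000))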
Ijarcce9 b a anjan a comparative analysis grid cluster and cloud computing (Harsh Parashar)
1) The document compares and contrasts three computing technologies: cluster computing, grid computing, and cloud computing.
2) Cluster computing involves connecting multiple nodes together to function as a single entity for improved performance and fault tolerance. Grid computing shares resources from multiple geographically dispersed locations.
3) Cloud computing provides on-demand access to dynamically scalable virtual resources as a utility over the Internet. It has advantages like cost savings, flexibility, and reliability.
ENHANCING AND MEASURING THE PERFORMANCE IN SOFTWARE DEFINED NETWORKING (IJCNCJournal)
Software Defined Networking (SDN) is a challenging chapter in today's networking era. It is a network design approach in which the network can be controlled, or 'programmed', intelligently and centrally using software applications. SDN is a significant advancement that promises a better way of delivering Quality of Service (QoS) than present communication frameworks. SDN fundamentally changes the behaviour and functionality of network devices through a single high-level program. It separates the network control and forwarding functions, enabling the network control to become directly programmable, and it provides more functionality and more flexibility than traditional networks: a network administrator can easily shape traffic without touching individual switches and services in the network. The main technologies for implementing SDN are the separation of the data plane and control plane and network virtualization through programmability. The total time in which a user gets a response is called response time, and throughput is how fast a network can send data. In this paper, we design a network through which we measure response time and throughput, comparing with Real-time Online Interactive Applications (ROIA), a Multiple Packet Scheduler, and NOX.
DYNAMIC TENANT PROVISIONING AND SERVICE ORCHESTRATION IN HYBRID CLOUD (ijccsa)
The advent of container orchestration and cloud computing, as well as associated security and compliance complexities, makes it challenging for enterprises to develop robust, secure, manageable and extendable architectures applicable to both the public and the private cloud. The main challenges stem from the fact that on-premises, private cloud and third-party, public cloud services often have seemingly different and sometimes conflicting requirements for tenant provisioning, service deployment, security and compliance, which can lead to rather different architectures that still have a lot of commonalities but evolve independently. Understanding and bridging the functionality gaps between such architectures is highly desirable in terms of common approaches, API/SPI as well as maintainability and extendibility. The authors discuss and propose common architectural approaches to dynamic tenant provisioning and service orchestration in public, private and hybrid clouds, focusing on deployment, security, compliance, scalability and extendibility of stateful Kubernetes runtimes.
This document summarizes a research paper that proposes using a Google File System (GFS) configuration with MapReduce to improve CPU and storage utilization in a cloud computing system. It discusses how GFS with MapReduce can split large files into chunks and distribute the processing of those chunks across idle cloud nodes to make better use of resources. The document also addresses using encryption to improve security of data in the cloud.
Comparative Analysis, Security Aspects & Optimization of Workload in Gfs Base... (IOSR Journals)
This document summarizes a research paper that proposes using Google File System (GFS) and MapReduce in a cloud computing environment to improve resource utilization and processing of large datasets. The paper discusses GFS architecture with a master node and chunk servers, and how MapReduce can split large files into chunks and process them in parallel across idle cloud nodes. It also proposes encrypting data for security and using a third party to audit client files. The goal is to provide fault tolerance, optimize workload processing time, and maximize utilization of cloud resources for data-intensive applications.
This document summarizes a proposed enhancement to the OpenStack Nova scheduler to incorporate network factors into virtual machine scheduling decisions. The current Nova scheduler only considers CPU, memory, and storage utilization when placing VMs, but not network utilization or connectivity. The proposed enhancement adds a network filter and weighting to Nova's filtering scheduler. It would check network interface status and bandwidth when initially placing VMs to ensure connectivity. It would also enable dynamic VM migration if a host's network card fails. This aims to optimize VM placement and improve performance by considering network factors.
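As a standalone illustration of the filter-and-weigh idea behind such a scheduler (not OpenStack code and not the proposed patch), the sketch below first filters hosts on NIC status and free bandwidth and then weighs the survivors; the host fields and thresholds are invented for the example.

hosts = [
    {"name": "h1", "nic_up": True,  "free_bw_mbps": 400, "free_ram_mb": 8192},
    {"name": "h2", "nic_up": True,  "free_bw_mbps": 900, "free_ram_mb": 4096},
    {"name": "h3", "nic_up": False, "free_bw_mbps": 950, "free_ram_mb": 16384},
]

def network_filter(host, req_bw_mbps):
    # Reject hosts whose NIC is down or whose spare bandwidth is too small.
    return host["nic_up"] and host["free_bw_mbps"] >= req_bw_mbps

def schedule(hosts, req_bw_mbps=300, req_ram_mb=2048):
    candidates = [h for h in hosts
                  if network_filter(h, req_bw_mbps) and h["free_ram_mb"] >= req_ram_mb]
    # Weigh: more free bandwidth is better (real schedulers combine several weights).
    return max(candidates, key=lambda h: h["free_bw_mbps"], default=None)

print(schedule(hosts)["name"])   # -> "h2": h3 is filtered out by its failed NIC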
HYBRID OPTICAL AND ELECTRICAL NETWORK FLOWS SCHEDULING IN CLOUD DATA CENTRES (ijcsit)
This document summarizes a research paper on scheduling flows in hybrid optical and electrical networks for cloud data centers. The paper proposes a strategy for selecting which flows are suitable to switch from the electrical packet network to the optical circuit network. It presents techniques for detecting bottlenecks in the packet network and selecting flows to offload. Simulation results showed improved network performance from this flow selection approach, including higher average throughput, lower configuration delay, and more stable offloaded flows.
SECURE THIRD PARTY AUDITOR (TPA) FOR ENSURING DATA INTEGRITY IN FOG COMPUTING (IJNSA Journal)
Fog computing is an extended version of cloud computing. It minimizes latency by incorporating fog servers as intermediaries between the cloud server and users, and it provides services similar to the cloud, such as storage, computation, resource utilization and security. Fog systems are capable of processing large amounts of data locally, operate on-premises, are fully portable, and can be installed on heterogeneous hardware. These features make the fog platform highly suitable for time- and location-sensitive applications; for example, Internet of Things (IoT) devices are required to quickly process a large amount of data. The significance of enterprise data and increased access rates from low-resource terminal devices demand reliable and low-cost authentication protocols, and many researchers have proposed authentication protocols with varied efficiencies. As part of our contribution, we propose a protocol to ensure data integrity which is best suited for the fog computing environment.
Performance Analysis of Server Consolidation Algorithms in Virtualized Cloud... (Susheel Thakur)
This document discusses server consolidation algorithms for virtualized cloud environments. It begins with an introduction to cloud computing and virtualization. It then reviews several existing server consolidation algorithms from literature, including Sandpiper, Khanna's algorithm, and Entropy. Sandpiper aims to mitigate hotspots by migrating virtual machines between physical machines. Khanna's algorithm aims for server consolidation by packing virtual machines to minimize the number of physical machines needed. Entropy aims to minimize the number of migrations required during consolidation. The document evaluates the performance of these algorithms in a virtualized cloud test environment.
Cloud computing challenges with emphasis on amazon ec2 and windows azure (IJCNCJournal)
Cloud computing has received much attention from the IT business world. Compared to common computing platforms, cloud computing is more flexible in supporting real-time computation and is considered a more powerful model for hosting and delivering services over the Internet. However, since cloud computing is still in its infancy, it faces many challenges that stand against its growth and spread. This article discusses some challenges facing cloud computing growth and conducts a comparison study between Amazon EC2 and Windows Azure in dealing with such challenges. It concludes that Amazon EC2 generally offers better solutions than Windows Azure; nevertheless, the selection between them depends on the needs of customers.
Similar to Live migration using checkpoint and restore in userspace (CRIU): Usage analysis of network, memory and CPU
Square transposition: an approach to the transposition process in block cipher (journalBEEI)
The transposition process is needed in cryptography to create a diffusion effect on data encryption standard (DES) and advanced encryption standard (AES) algorithms as standard information security algorithms by the National Institute of Standards and Technology. The problem with DES and AES algorithms is that their transposition index values form patterns and do not form random values. This condition will certainly make it easier for a cryptanalyst to look for a relationship between ciphertexts because some processes are predictable. This research designs a transposition algorithm called square transposition. Each process uses square 8 × 8 as a place to insert and retrieve 64-bits. The determination of the pairing of the input scheme and the retrieval scheme that have unequal flow is an important factor in producing a good transposition. The square transposition can generate random and non-pattern indices so that transposition can be done better than DES and AES.
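Purely as an illustration of the mechanism (the paper's actual insertion and retrieval schemes differ), the following Python sketch fills an 8 x 8 square row by row and reads it back column by column with alternating direction, permuting a 64-bit block.

def square_transpose(bits):
    # bits: a list of 64 values standing in for the bits of one block.
    assert len(bits) == 64
    grid = [bits[r * 8:(r + 1) * 8] for r in range(8)]      # insert row-major
    out = []
    for c in range(8):                                       # retrieve column-major, zigzag
        col = [grid[r][c] for r in range(8)]
        out.extend(col if c % 2 == 0 else reversed(col))
    return out

block = list(range(64))               # stand-in for a 64-bit block
scrambled = square_transpose(block)
print(scrambled[:8])                  # indices no longer follow 0, 1, 2, ...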
Hyper-parameter optimization of convolutional neural network based on particl...journalBEEI
The document proposes using a particle swarm optimization (PSO) algorithm to optimize the hyperparameters of a convolutional neural network (CNN) for image classification. The PSO algorithm is used to find optimal values for CNN hyperparameters like the number and size of convolutional filters. In experiments on the MNIST handwritten digit dataset, the optimized CNN achieved a testing error rate of 0.87%, which is competitive with state-of-the-art models. The proposed approach finds optimized CNN architectures automatically without requiring manual design or encoding strategies during training.
Supervised machine learning based liver disease prediction approach with LASS...journalBEEI
In this contemporary era, the uses of machine learning techniques are increasing rapidly in the field of medical science for detecting various diseases such as liver disease (LD). Around the globe, a large number of people die because of this deadly disease. By diagnosing the disease in a primary stage, early treatment can be helpful to cure the patient. In this research paper, a method is proposed to diagnose the LD using supervised machine learning classification algorithms, namely logistic regression, decision tree, random forest, AdaBoost, KNN, linear discriminant analysis, gradient boosting and support vector machine (SVM). We also deployed a least absolute shrinkage and selection operator (LASSO) feature selection technique on our taken dataset to suggest the most highly correlated attributes of LD. The predictions with 10 fold cross-validation (CV) made by the algorithms are tested in terms of accuracy, sensitivity, precision and f1-score values to forecast the disease. It is observed that the decision tree algorithm has the best performance score where accuracy, precision, sensitivity and f1-score values are 94.295%, 92%, 99% and 96% respectively with the inclusion of LASSO. Furthermore, a comparison with recent studies is shown to prove the significance of the proposed system.
A secure and energy saving protocol for wireless sensor networksjournalBEEI
The research domain for wireless sensor networks (WSN) has been extensively conducted due to innovative technologies and research directions that have come up addressing the usability of WSN under various schemes. This domain permits dependable tracking of a diversity of environments for both military and civil applications. The key management mechanism is a primary protocol for keeping the privacy and confidentiality of the data transmitted among different sensor nodes in WSNs. Since node's size is small; they are intrinsically limited by inadequate resources such as battery life-time and memory capacity. The proposed secure and energy saving protocol (SESP) for wireless sensor networks) has a significant impact on the overall network life-time and energy dissipation. To encrypt sent messsages, the SESP uses the public-key cryptography’s concept. It depends on sensor nodes' identities (IDs) to prevent the messages repeated; making security goals- authentication, confidentiality, integrity, availability, and freshness to be achieved. Finally, simulation results show that the proposed approach produced better energy consumption and network life-time compared to LEACH protocol; sensors are dead after 900 rounds in the proposed SESP protocol. While, in the low-energy adaptive clustering hierarchy (LEACH) scheme, the sensors are dead after 750 rounds.
Plant leaf identification system using convolutional neural networkjournalBEEI
This paper proposes a leaf identification system using convolutional neural network (CNN). This proposed system can identify five types of local Malaysia leaf which were acacia, papaya, cherry, mango and rambutan. By using CNN from deep learning, the network is trained from the database that acquired from leaf images captured by mobile phone for image classification. ResNet-50 was the architecture has been used for neural networks image classification and training the network for leaf identification. The recognition of photographs leaves requested several numbers of steps, starting with image pre-processing, feature extraction, plant identification, matching and testing, and finally extracting the results achieved in MATLAB. Testing sets of the system consists of 3 types of images which were white background, and noise added and random background images. Finally, interfaces for the leaf identification system have developed as the end software product using MATLAB app designer. As a result, the accuracy achieved for each training sets on five leaf classes are recorded above 98%, thus recognition process was successfully implemented.
Customized moodle-based learning management system for socially disadvantaged...journalBEEI
This study aims to develop Moodle-based LMS with customized learning content and modified user interface to facilitate pedagogical processes during covid-19 pandemic and investigate how teachers of socially disadvantaged schools perceived usability and technology acceptance. Co-design process was conducted with two activities: 1) need assessment phase using an online survey and interview session with the teachers and 2) the development phase of the LMS. The system was evaluated by 30 teachers from socially disadvantaged schools for relevance to their distance learning activities. We employed computer software usability questionnaire (CSUQ) to measure perceived usability and the technology acceptance model (TAM) with insertion of 3 original variables (i.e., perceived usefulness, perceived ease of use, and intention to use) and 5 external variables (i.e., attitude toward the system, perceived interaction, self-efficacy, user interface design, and course design). The average CSUQ rating exceeded 5.0 of 7 point-scale, indicated that teachers agreed that the information quality, interaction quality, and user interface quality were clear and easy to understand. TAM results concluded that the LMS design was judged to be usable, interactive, and well-developed. Teachers reported an effective user interface that allows effective teaching operations and lead to the system adoption in immediate time.
Understanding the role of individual learner in adaptive and personalized e-l...journalBEEI
Dynamic learning environment has emerged as a powerful platform in a modern e-learning system. The learning situation that constantly changing has forced the learning platform to adapt and personalize its learning resources for students. Evidence suggested that adaptation and personalization of e-learning systems (APLS) can be achieved by utilizing learner modeling, domain modeling, and instructional modeling. In the literature of APLS, questions have been raised about the role of individual characteristics that are relevant for adaptation. With several options, a new problem has been raised where the attributes of students in APLS often overlap and are not related between studies. Therefore, this study proposed a list of learner model attributes in dynamic learning to support adaptation and personalization. The study was conducted by exploring concepts from the literature selected based on the best criteria. Then, we described the results of important concepts in student modeling and provided definitions and examples of data values that researchers have used. Besides, we also discussed the implementation of the selected learner model in providing adaptation in dynamic learning.
Prototype mobile contactless transaction system in traditional markets to sup...journalBEEI
1) Researchers developed a prototype contactless transaction system using QR codes and digital payments to support physical distancing during the COVID-19 pandemic in traditional markets.
2) The system allows sellers and buyers in traditional markets to conduct fast, secure transactions via smartphones without direct cash exchange. Buyers scan sellers' QR codes to view product details and make e-wallet payments.
3) Testing showed the system's functions worked properly and users found it easy to use and useful for supporting contactless transactions and digital transformation of traditional markets. However, further development is needed to increase trust in digital payments for users unfamiliar with the technology.
Wireless HART stack using multiprocessor technique with laxity algorithmjournalBEEI
The use of a real-time operating system is required for the demarcation of industrial wireless sensor network (IWSN) stacks (RTOS). In the industrial world, a vast number of sensors are utilised to gather various types of data. The data gathered by the sensors cannot be prioritised ahead of time. Because all of the information is equally essential. As a result, a protocol stack is employed to guarantee that data is acquired and processed fairly. In IWSN, the protocol stack is implemented using RTOS. The data collected from IWSN sensor nodes is processed using non-preemptive scheduling and the protocol stack, and then sent in parallel to the IWSN's central controller. The real-time operating system (RTOS) is a process that occurs between hardware and software. Packets must be sent at a certain time. It's possible that some packets may collide during transmission. We're going to undertake this project to get around this collision. As a prototype, this project is divided into two parts. The first uses RTOS and the LPC2148 as a master node, while the second serves as a standard data collection node to which sensors are attached. Any controller may be used in the second part, depending on the situation. Wireless HART allows two nodes to communicate with each other.
Implementation of double-layer loaded on octagon microstrip yagi antennajournalBEEI
This document describes the implementation of a double-layer structure on an octagon microstrip yagi antenna (OMYA) to improve its performance at 5.8 GHz. The double-layer consists of two double positive (DPS) substrates placed above the OMYA. Simulation and experimental results show that the double-layer configuration increases the gain of the OMYA by 2.5 dB compared to without the double-layer. The measured bandwidth of the OMYA with double-layer is 14.6%, indicating the double-layer can increase both the gain and bandwidth of the OMYA.
The calculation of the field of an antenna located near the human headjournalBEEI
In this work, a numerical calculation was carried out in one of the universal programs for automatic electro-dynamic design. The calculation is aimed at obtaining numerical values for specific absorbed power (SAR). It is the SAR value that can be used to determine the effect of the antenna of a wireless device on biological objects; the dipole parameters will be selected for GSM1800. Investigation of the influence of distance to a cell phone on radiation shows that absorbed in the head of a person the effect of electromagnetic radiation on the brain decreases by three times this is a very important result the SAR value has decreased by almost three times it is acceptable results.
Exact secure outage probability performance of uplinkdownlink multiple access...journalBEEI
In this paper, we study uplink-downlink non-orthogonal multiple access (NOMA) systems by considering the secure performance at the physical layer. In the considered system model, the base station acts a relay to allow two users at the left side communicate with two users at the right side. By considering imperfect channel state information (CSI), the secure performance need be studied since an eavesdropper wants to overhear signals processed at the downlink. To provide secure performance metric, we derive exact expressions of secrecy outage probability (SOP) and and evaluating the impacts of main parameters on SOP metric. The important finding is that we can achieve the higher secrecy performance at high signal to noise ratio (SNR). Moreover, the numerical results demonstrate that the SOP tends to a constant at high SNR. Finally, our results show that the power allocation factors, target rates are main factors affecting to the secrecy performance of considered uplink-downlink NOMA systems.
Design of a dual-band antenna for energy harvesting applicationjournalBEEI
This report presents an investigation on how to improve the current dual-band antenna to enhance the better result of the antenna parameters for energy harvesting application. Besides that, to develop a new design and validate the antenna frequencies that will operate at 2.4 GHz and 5.4 GHz. At 5.4 GHz, more data can be transmitted compare to 2.4 GHz. However, 2.4 GHz has long distance of radiation, so it can be used when far away from the antenna module compare to 5 GHz that has short distance in radiation. The development of this project includes the scope of designing and testing of antenna using computer simulation technology (CST) 2018 software and vector network analyzer (VNA) equipment. In the process of designing, fundamental parameters of antenna are being measured and validated, in purpose to identify the better antenna performance.
Transforming data-centric eXtensible markup language into relational database...journalBEEI
eXtensible markup language (XML) appeared internationally as the format for data representation over the web. Yet, most organizations are still utilising relational databases as their database solutions. As such, it is crucial to provide seamless integration via effective transformation between these database infrastructures. In this paper, we propose XML-REG to bridge these two technologies based on node-based and path-based approaches. The node-based approach is good to annotate each positional node uniquely, while the path-based approach provides summarised path information to join the nodes. On top of that, a new range labelling is also proposed to annotate nodes uniquely by ensuring the structural relationships are maintained between nodes. If a new node is to be added to the document, re-labelling is not required as the new label will be assigned to the node via the new proposed labelling scheme. Experimental evaluations indicated that the performance of XML-REG exceeded XMap, XRecursive, XAncestor and Mini-XML concerning storing time, query retrieval time and scalability. This research produces a core framework for XML to relational databases (RDB) mapping, which could be adopted in various industries.
Key performance requirement of future next wireless networks (6G)journalBEEI
The document provides an overview of the key performance indicators (KPIs) for 6G wireless networks compared to 5G networks. Some of the major KPIs discussed for 6G include: achieving data rates of up to 1 Tbps and individual user data rates up to 100 Gbps; reducing latency below 10 milliseconds; supporting up to 10 million connected devices per square kilometer; improving spectral efficiency by up to 100 times through technologies like terahertz communications and smart surfaces; and achieving an energy efficiency of 1 pico-joule per bit transmitted through techniques like wireless power transmission and energy harvesting. The document outlines how 6G aims to integrate terrestrial, aerial and maritime communications into a single network to provide ubiquitous connectivity with higher
Noise resistance territorial intensity-based optical flow using inverse confi...journalBEEI
This paper presents the use of the inverse confidential technique on bilateral function with the territorial intensity-based optical flow to prove the effectiveness in noise resistance environment. In general, the image’s motion vector is coded by the technique called optical flow where the sequences of the image are used to determine the motion vector. But, the accuracy rate of the motion vector is reduced when the source of image sequences is interfered by noises. This work proved that the inverse confidential technique on bilateral function can increase the percentage of accuracy in the motion vector determination by the territorial intensity-based optical flow under the noisy environment. We performed the testing with several kinds of non-Gaussian noises at several patterns of standard image sequences by analyzing the result of the motion vector in a form of the error vector magnitude (EVM) and compared it with several noise resistance techniques in territorial intensity-based optical flow method.
Modeling climate phenomenon with software grids analysis and display system i...journalBEEI
This study aims to model climate change based on rainfall, air temperature, pressure, humidity and wind with grADS software and create a global warming module. This research uses 3D model, define, design, and develop. The results of the modeling of the five climate elements consist of the annual average temperature in Indonesia in 2009-2015 which is between 29oC to 30.1oC, the horizontal distribution of the annual average pressure in Indonesia in 2009-2018 is between 800 mBar to 1000 mBar, the horizontal distribution the average annual humidity in Indonesia in 2009 and 2011 ranged between 27-57, in 2012-2015, 2017 and 2018 it ranged between 30-60, during the East Monsoon, the wind circulation moved from northern Indonesia to the southern region Indonesia. During the west monsoon, the wind circulation moves from the southern part of Indonesia to the northern part of Indonesia. The global warming module for SMA/MA produced is feasible to use, this is in accordance with the value given by the validate of 69 which is in the appropriate category and the response of teachers and students through a 91% questionnaire.
An approach of re-organizing input dataset to enhance the quality of emotion ...journalBEEI
The purpose of this paper is to propose an approach of re-organizing input data to recognize emotion based on short signal segments and increase the quality of emotional recognition using physiological signals. MIT's long physiological signal set was divided into two new datasets, with shorter and overlapped segments. Three different classification methods (support vector machine, random forest, and multilayer perceptron) were implemented to identify eight emotional states based on statistical features of each segment in these two datasets. By re-organizing the input dataset, the quality of recognition results was enhanced. The random forest shows the best classification result among three implemented classification methods, with an accuracy of 97.72% for eight emotional states, on the overlapped dataset. This approach shows that, by re-organizing the input dataset, the high accuracy of recognition results can be achieved without the use of EEG and ECG signals.
Parking detection system using background subtraction and HSV color segmentationjournalBEEI
Manual system vehicle parking makes finding vacant parking lots difficult, so it has to check directly to the vacant space. If many people do parking, then the time needed for it is very much or requires many people to handle it. This research develops a real-time parking system to detect parking. The system is designed using the HSV color segmentation method in determining the background image. In addition, the detection process uses the background subtraction method. Applying these two methods requires image preprocessing using several methods such as grayscaling, blurring (low-pass filter). In addition, it is followed by a thresholding and filtering process to get the best image in the detection process. In the process, there is a determination of the ROI to determine the focus area of the object identified as empty parking. The parking detection process produces the best average accuracy of 95.76%. The minimum threshold value of 255 pixels is 0.4. This value is the best value from 33 test data in several criteria, such as the time of capture, composition and color of the vehicle, the shape of the shadow of the object’s environment, and the intensity of light. This parking detection system can be implemented in real-time to determine the position of an empty place.
Quality of service performances of video and voice transmission in universal ...journalBEEI
The universal mobile telecommunications system (UMTS) has distinct benefits in that it supports a wide range of quality of service (QoS) criteria that users require in order to fulfill their requirements. The transmission of video and audio in real-time applications places a high demand on the cellular network, therefore QoS is a major problem in these applications. The ability to provide QoS in the UMTS backbone network necessitates an active QoS mechanism in order to maintain the necessary level of convenience on UMTS networks. For UMTS networks, investigation models for end-to-end QoS, total transmitted and received data, packet loss, and throughput providing techniques are run and assessed and the simulation results are examined. According to the results, appropriate QoS adaption allows for specific voice and video transmission. Finally, by analyzing existing QoS parameters, the QoS performance of 4G/UMTS networks may be improved.
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTjpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon
reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been
referred to as the "New Great Game." This research centres on the power struggle, considering
geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil
politics, and conventional and nontraditional security are all explored and explained by the researcher.
Using Mackinder's Heartland, Spykman Rimland, and Hegemonic Stability theories, examines China's role
in Central Asia. This study adheres to the empirical epistemological method and has taken care of
objectivity. This study analyze primary and secondary research documents critically to elaborate role of
china’s geo economic outreach in central Asian countries and its future prospect. China is thriving in trade,
pipeline politics, and winning states, according to this study, thanks to important instruments like the
Shanghai Cooperation Organisation and the Belt and Road Economic Initiative. According to this study,
China is seeing significant success in commerce, pipeline politics, and gaining influence on other
governments. This success may be attributed to the effective utilisation of key tools such as the Shanghai
Cooperation Organisation and the Belt and Road Economic Initiative.
The CBC machine is a common diagnostic tool used by doctors to measure a patient's red blood cell count, white blood cell count and platelet count. The machine uses a small sample of the patient's blood, which is then placed into special tubes and analyzed. The results of the analysis are then displayed on a screen for the doctor to review. The CBC machine is an important tool for diagnosing various conditions, such as anemia, infection and leukemia. It can also help to monitor a patient's response to treatment.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELgerogepatton
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long-Short-Term Memory algorithms (LSTM). We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to
precisely delineate tumor boundaries from magnetic resonance imaging (MRI)
scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating
the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The
model is rigorously trained and evaluated, exhibiting remarkable performance
metrics, including an impressive global accuracy of 99.286%, a high-class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted
IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of
our proposed model. These findings underscore the model’s competence in precise brain tumor localization, underscoring its potential to revolutionize medical
image analysis and enhance healthcare outcomes. This research paves the way
for future exploration and optimization of advanced CNN models in medical
imaging, emphasizing addressing false positives and resource efficiency.
UNLOCKING HEALTHCARE 4.0: NAVIGATING CRITICAL SUCCESS FACTORS FOR EFFECTIVE I...amsjournal
The Fourth Industrial Revolution is transforming industries, including healthcare, by integrating digital,
physical, and biological technologies. This study examines the integration of 4.0 technologies into
healthcare, identifying success factors and challenges through interviews with 70 stakeholders from 33
countries. Healthcare is evolving significantly, with varied objectives across nations aiming to improve
population health. The study explores stakeholders' perceptions on critical success factors, identifying
challenges such as insufficiently trained personnel, organizational silos, and structural barriers to data
exchange. Facilitators for integration include cost reduction initiatives and interoperability policies.
Technologies like IoT, Big Data, AI, Machine Learning, and robotics enhance diagnostics, treatment
precision, and real-time monitoring, reducing errors and optimizing resource utilization. Automation
improves employee satisfaction and patient care, while Blockchain and telemedicine drive cost reductions.
Successful integration requires skilled professionals and supportive policies, promising efficient resource
use, lower error rates, and accelerated processes, leading to optimized global healthcare outcomes.
NATURAL DEEP EUTECTIC SOLVENTS AS ANTI-FREEZING AGENT
Live migration using checkpoint and restore in userspace (CRIU): Usage analysis of network, memory and CPU
Bulletin of Electrical Engineering and Informatics
Vol. 10, No. 2, April 2021, pp. 837~847
ISSN: 2302-9285, DOI: 10.11591/eei.v10i2.2742
Journal homepage: http://beei.org
Live migration using checkpoint and restore in userspace
(CRIU): Usage analysis of network, memory and CPU
Adityas Widjajarto, Deden Witarsyah Jacob, Muharman Lubis
School of System and Industrial Engineering, Telkom University, Indonesia
Article Info

Article history:
Received Oct 25, 2020
Revised Jan 17, 2021
Accepted Feb 15, 2021

Keywords:
Container
CRIU
Docker
Live migration

ABSTRACT

Cloud service providers currently use a variety of operational mechanisms to support their customers' business processes. The services are hosted on the provider's servers and delivered in the form of infrastructure, platform, software and function. Several vulnerabilities are faced during implementation, such as system failure, natural disasters, human error or attacks from unauthorized parties. The time during which services are unavailable can be minimized by performing live migration (LM); many servers now use containers, rather than the more resource-hungry virtual machines, to act as service providers, and for Docker containers the migration process can be carried out with checkpoint and restore in userspace (CRIU). In this research, LM of Docker containers is performed using CRIU and analyzed in terms of quality of service (QoS), memory and CPU usage. The simulations are carried out by establishing LM between two different platforms through scenarios with one and three containers, respectively. The performance analysis examines several indicators and compares them with the achievable results in order to reduce problems that occur in cloud services.
This is an open access article under the CC BY-SA license.
Corresponding Author:
Adityas Widjajarto
School of System and Industrial Engineering, Telkom University
Jalan Telekomunikasi, Dayehkolot, Bandung, Indonesia
Email: adtwjrt@telkomuniversity.ac.id
1. INTRODUCTION
Containerization, or container-based virtualization, has been known since the late 1960s and allows platforms to provide more efficient services than virtual machines (VMs). It has begun to displace the traditional VM for system maintenance, fault management and load balancing, while sparing end users costly service downtime and perceptible interruptions of service [1]. In fact, live migration (LM) across cloud services can improve the aggregation of resources, extending provisioning from a single data center to multiple geographically disparate data centers. It is also possible to execute the process through ad-hoc solutions that use network file systems, replication of private storage arrays or even proprietary block devices, with software coordinated with the more popular memory migration methods. This loose set of mechanisms makes the relay structures more complex, inflexible and unreliable, and their performance is extremely poor compared with traditional LAN (local area network) migration. In addition, the platform must be online at all times to provide robust service to every client, while shutdowns and restarts for maintenance must be organized so as to prevent data corruption or loss. Individuals and even companies prefer paid cloud platforms from service providers over developing their own servers because of practicality and manageability [2]. Thus, the use of containers, a lightweight technology, has proven to be helpful and supportive for application
management in the cloud, although copying memory to a remote host introduces a period of frozen time [3]. On the other hand, container implementations can be very useful in the cloud for migrating application services with Docker [4], which is risky because it requires considerable consideration of factors such as time and cost.
There are roughly three methods of live migration of VMs, namely pre-copy, post-copy and hybrid, with live migration mainly performed by pre-copy or a variation of it [5]. In short, LM has been defined as a solution to reduce platform downtime during maintenance by moving containers to a backup platform, so that both source and destination remain online and the monitoring process can continue simultaneously [6]. It also has the ability to freeze the running applications, record the memory content according to the current state of the application, move the recorded data to the backup platform, restore that data on the backup platform and keep the application running in the same state as at the checkpoint [7, 8]. This is very useful for maintaining service availability when the platform has to pause at certain moments because of connection issues. Docker, on the other hand, is an open-source project based on Linux containers that provides an open platform for developers and administrators to automatically build, package and run applications as lightweight containers [9], while also allowing users to separate applications from infrastructure in order to build software quickly [10]. It provides the ability to run applications in an isolated environment; because of its lightweight nature, processes run without the added burden of a hypervisor, so more containers can be run on a given hardware configuration than with virtual machines [11]. In this case, LM is performed in real time by moving containers one by one between different physical machines without disconnecting the network connection to the client [12].
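As an illustration of this freeze/record/restore flow, the sketch below drives Docker's experimental checkpoint support (which relies on CRIU) from Python. The container name "magento", the checkpoint name "cp1" and the checkpoint directory are hypothetical examples; the snippet assumes a Docker daemon with experimental features enabled and CRIU installed, and shows only the single-host commands, not the transfer between platforms.

    import subprocess

    def checkpoint_container(container, checkpoint, directory):
        # "docker checkpoint create" freezes the container and writes the
        # CRIU image files; --checkpoint-dir selects where they are stored.
        subprocess.run(["docker", "checkpoint", "create",
                        "--checkpoint-dir", directory,
                        container, checkpoint], check=True)

    def restore_container(container, checkpoint, directory):
        # "docker start --checkpoint" restores the container from the
        # previously written CRIU images instead of starting it from scratch.
        subprocess.run(["docker", "start",
                        "--checkpoint-dir", directory,
                        "--checkpoint", checkpoint,
                        container], check=True)

    if __name__ == "__main__":
        # Hypothetical names used only for illustration.
        checkpoint_container("magento", "cp1", "/var/lib/docker/checkpoints")
        restore_container("magento", "cp1", "/var/lib/docker/checkpoints")

In the two-platform setup used later in this study, the checkpoint directory would first be copied to the destination (for example over NFS) and a container created from the same image there before the restore is issued.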
This study uses Ubuntu 16.04 for several reasons: this version is compatible with the requirements of Docker and CRIU, and the operating system (OS) is well known for its community support for troubleshooting and operational purposes. When performing a migration, the container enters a frozen time; that is, while the container is paused, the source node records the memory blocks, processes, file systems and network connections to capture the state of the container at that moment. Performance evaluation is then carried out based on various criteria according to the research objectives, in this case related to usage analysis, and the results are studied to verify the goal. Many data migration projects rush into the main project without considering whether migration is possible, how long it will take, the technology it will require and the risks that lie ahead. A pre-migration impact assessment is recommended to verify the cost and likely outcome of migration. This study simulates data migration and analyzes several aspects of its performance using CRIU, based on network bandwidth and throughput, memory and CPU usage.
2. MATERIAL AND METHOD
The container record data is copied to the destination node, which restores it from the source so that the container can operate and unfreeze normally [13]. CRIU is a software tool on the Linux operating system that is able to freeze and checkpoint an application running on a platform and to restore it on a different physical machine in the same condition as at the time it was frozen [14]. It allows administrators to conduct LM of containers through a communication protocol, a set of rules that must be obeyed for two or more computers to communicate. These protocols are needed to ensure that all data communication on each computer can be carried out according to the needs and objectives of the user. The basic functions of a communication protocol include connection, addressing, error control, data flow control and synchronization [15]. TCP is a protocol used to transmit data with good reliability and has a congestion control system to manage data transfer bottlenecks [16]. It is a byte-stream protocol that controls data flow based on byte numbers; the smallest unit of data transmitted on the Internet is a data segment or packet, each identified by its data octet number. When a destination receives a data segment, the host acknowledges its acceptance by issuing an acknowledgment packet (ACK) to notify the sender of the status of the packet [17]. UDP, on the other hand, is a protocol for transferring data between computer network applications; it is called a connectionless protocol because it does not require a connection between hosts before sending or receiving data, so a host can service many other hosts by sending the same data [18]. Netlink is a protocol used on Linux operating systems for data communication between the kernel and userspace; it functions as a protocol between forwarding engine components and control plane components to determine IP services [19].
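Since throughput over such a TCP byte stream is one of the quantities measured later, the sketch below shows a minimal loopback measurement in Python; the port number 5001 and the 64 MiB transfer size are arbitrary choices for illustration, not values used in this study.

    import socket
    import threading
    import time

    PAYLOAD = b"x" * (1 << 20)   # 1 MiB per send
    TOTAL_CHUNKS = 64            # 64 MiB in total

    def receiver(port, ready):
        # Accept one connection and measure how fast the bytes arrive.
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        ready.set()
        conn, _ = srv.accept()
        received, start = 0, time.time()
        while True:
            chunk = conn.recv(1 << 16)
            if not chunk:
                break
            received += len(chunk)
        elapsed = time.time() - start
        print(f"received {received / 1e6:.1f} MB in {elapsed:.2f} s "
              f"-> throughput {received / 1e6 / elapsed:.2f} MB/s")
        conn.close()
        srv.close()

    def sender(port):
        # Push the payload through the TCP connection as fast as possible.
        sock = socket.create_connection(("127.0.0.1", port))
        for _ in range(TOTAL_CHUNKS):
            sock.sendall(PAYLOAD)
        sock.close()

    if __name__ == "__main__":
        ready = threading.Event()
        t = threading.Thread(target=receiver, args=(5001, ready))
        t.start()
        ready.wait()
        sender(5001)
        t.join()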
Quality of service (QoS) is the ability of a network to provide good service by supplying bandwidth and overcoming delay [20]. Delay is the time needed for a packet to travel from the sender to the recipient and is one of the QoS parameters [21]. In a network, delay can be used as a reference for assessing network quality: the smaller the delay, the better the network performance. On the other hand,
packet loss is the loss of a number of data packets on the way to the destination address [22]. Packet loss has a large effect on computer networks; a certain amount of packet loss will cause TCP interconnections to slow down [23]. Throughput is the rate at which a service can process requests [24], measured once the transmission of data to a host or client has finished; the most important requirement for high throughput is sufficient bandwidth. Random access memory (RAM) is volatile hardware in a computer whose data is lost when the power is turned off [25]. In RAM, data can be accessed randomly; its function is to store temporary data and to read data. The central processing unit (CPU) is the component that makes a computer function; it acts as the brain of the computer, processing input data and producing output [26]. In addition to processing data, the CPU also controls all components of the computer so that they work well, and it consists of three components: the control unit, the arithmetic logic unit (ALU) and registers [27]. The live migration mechanism itself is illustrated in Figure 1. Lastly, a process is a program that is being executed or running; the process is the smallest unit of work in the operating system. A process passes through a series of states, and a variety of events can occur that cause state changes [28].
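To make the relationship between these metrics concrete, the following sketch computes delay, packet loss and throughput from aggregate counters of the kind a capture tool can export; the numbers are invented for illustration and the formulas are the usual textbook definitions, not measurements from this study.

    # Minimal QoS calculations from aggregate capture statistics.
    def average_delay(total_latency_s, packets_received):
        # Delay: mean time a packet needs from sender to receiver.
        return total_latency_s / packets_received

    def packet_loss(packets_sent, packets_received):
        # Packet loss: fraction of sent packets that never arrived.
        return (packets_sent - packets_received) / packets_sent

    def throughput(bytes_received, duration_s):
        # Throughput: received volume divided by transfer time, in MB/s.
        return bytes_received / 1e6 / duration_s

    if __name__ == "__main__":
        sent, received = 120_000, 119_880          # hypothetical counters
        print(f"delay      : {average_delay(119.88, received) * 1000:.2f} ms")
        print(f"packet loss: {packet_loss(sent, received) * 100:.2f} %")
        print(f"throughput : {throughput(150e6, 17.9):.2f} MB/s")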
Figure 1. Live migration (LM) mechanism
The testing mechanism in this study is the structure and concept describing how the LM process takes place. The container on a platform consists of memory, volume and root stored on disk. On the source platform, CRIU serves as the software for taking memory checkpoints of containers. The results of the checkpoint taken by CRIU are stored on disk in the form of checkpoint files. NFS serves as the protocol for sending the checkpoint files from the source platform to the destination platform. On the destination platform, CRIU serves as the software for restoring containers from the checkpoint files obtained from the source platform. LM is executed in the four test scenarios of this study, listed below; a timing sketch of how one such run can be measured follows the list.
‒ First scenario; one-way live migration from platform 1 as the source platform to platform 2 as the destination platform.
‒ Second scenario; one-way live migration from platform 2 as the source platform to platform 1 as the destination platform.
‒ Third scenario; two-way live migration with one service.
‒ Fourth scenario; two-way live migration with three services.
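A minimal way of timing the two phases reported in the following results is sketched below. The helper commands are the hypothetical Docker/CRIU invocations shown earlier, and the checkpoint directory is assumed to be an NFS export mounted at the same path on both platforms.

    import subprocess
    import time

    def timed(cmd):
        # Run a command and return the wall-clock time it took, in seconds.
        start = time.monotonic()
        subprocess.run(cmd, check=True)
        return time.monotonic() - start

    if __name__ == "__main__":
        # Hypothetical commands: the checkpoint runs on the source platform,
        # the restore on the destination platform, with the checkpoint files
        # shared through an NFS-mounted directory.
        checkpoint_cmd = ["docker", "checkpoint", "create",
                          "--checkpoint-dir", "/mnt/nfs/checkpoints",
                          "magento", "cp1"]
        restore_cmd = ["docker", "start",
                       "--checkpoint-dir", "/mnt/nfs/checkpoints",
                       "--checkpoint", "cp1", "magento"]

        print(f"checkpoint took {timed(checkpoint_cmd):.3f} s")
        print(f"restore took    {timed(restore_cmd):.3f} s")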
3. RESULTS AND ANALYSIS
System testing is executed using a Docker container and CRIU as the tools for LM, run on Ubuntu Linux as the operating system on two platforms. It follows the previously designed test scenarios; in these tests, four different LM scenarios were carried out.
3.1. First scenario; one-way live migration from platform 1 to platform 2
In scenario I, the checkpoint process is carried out on platform 1 and lasts 1.320 s, while the restore on platform 2 lasts 17.963 s. LM of VMs is a very powerful tool for cluster administrators in many common scenarios, such as load balancing, where VMs can be reorganized across physical devices in a group to relieve the load on busy hosts. It can also be used for online maintenance and proactive fault tolerance, since a physical machine may need a service upgrade to prevent future system malfunctions. The administrator must then move the running virtual machines to alternative devices in order to release the original machine for maintenance, thereby improving serviceability and system availability [8].
From the results of these tests, platform 1 shows a memory usage of 1.69 GB before the checkpoint and 1.56 GB after it, while platform 2 shows 1.86 GB before the restore and 2.28 GB after it. The checkpoint takes 1.320 s on platform 1, while the restore takes 17.963 s on platform 2. Based on these results, more memory is used before the checkpoint than after it, because upon completion the migrated instance returns to the running state on the destination while the old host's copy is deleted, and the session disconnects for several seconds before automatically reconnecting. In the restore process, CRIU transforms itself into the target task through an ordered series of steps: opening the images to read, forking and pre-mmapping, opening file mappings, opening shared mappings, diving into the restorer context and restoring the mappings in their places. LM therefore needs considerable time and resources, following the procedure of copying the task's memory to the destination host, as shown in Figures 2-4.
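The memory and CPU figures reported here are point samples taken around each phase. One simple way to collect such samples is sketched below; the psutil package is not mentioned in the paper and is used here only as an illustrative assumption, as is the checkpoint command.

    import subprocess
    import psutil

    def snapshot(label):
        # Record system-wide memory use (GB) and CPU utilisation (%) once.
        mem_gb = psutil.virtual_memory().used / 1e9
        cpu_pct = psutil.cpu_percent(interval=1.0)
        print(f"{label}: memory {mem_gb:.2f} GB, CPU {cpu_pct:.1f} %")

    if __name__ == "__main__":
        snapshot("before checkpoint")
        # Hypothetical checkpoint command, as in the earlier sketch.
        subprocess.run(["docker", "checkpoint", "create", "magento", "cp1"],
                       check=True)
        snapshot("after checkpoint")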
Figure 2. CPU usage scenario I, (a) Checkpoint, (b) Restore
Figure 3. Memory usage scenario I, (a) Checkpoint, (b) Restore
Figure 4. Time usage scenario I
3.2. Second scenario; one-way live migration from platform 2 to platform 1
The checkpoint in scenario II is carried out on platform 2 and lasts 1.189 s, while the restore on platform 1 lasts 18.091 s. In a power management scenario, the load and performance of servers are generally uneven but statistically uniform over different periods. When some of the virtual machines on distributed hosts run light workloads, they can be consolidated onto fewer hosts, and the unloaded hosts can be shut down once migration is complete. This strategy helps companies reduce IT operating expenses and benefits the natural environment. The performance of the source significantly influences the total time needed to complete the LM, which in scenarios I and II is indicated by the fluctuating usage percentages over the change cycles. The number of pages moved depends on how effectively the VM handles and manipulates its memory pages; the more pages that have been changed or rearranged, the longer every page takes to transfer to the destination server. After the process completes, the destination server creates the working set for VM testing, at the same moment the migration process started. From the results of these tests, platform 2 shows 2.02 GB of memory used before the checkpoint and 1.90 GB after it, while platform 1 shows 1.47 GB before the restore and 1.90 GB after it. The checkpoint takes 1.189 s on platform 2, while the restore takes 18.091 s on platform 1. In each solution there are three basic types of state that must be migrated: the physical memory of the virtual machine, the network connections, and the state of devices such as SCSI storage, as shown in Figures 5-7. The most difficult problem is migrating physical memory pages, as this is the main factor affecting downtime, that is, the period when the services in the virtual machine are not fully available.
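As a back-of-the-envelope illustration of why the number of changed pages dominates transfer time, the sketch below estimates how long one round of page copying takes for a given dirty-page count and link speed; the page size and bandwidth are assumptions chosen for illustration, not parameters of this study.

    PAGE_SIZE = 4096            # bytes, typical x86 page size (assumption)

    def round_duration(dirty_pages, bandwidth_mbps):
        # Time needed to ship one round of dirty pages over the migration link.
        bytes_to_send = dirty_pages * PAGE_SIZE
        return bytes_to_send * 8 / (bandwidth_mbps * 1e6)

    if __name__ == "__main__":
        for pages in (10_000, 100_000, 500_000):      # hypothetical workloads
            print(f"{pages:>7} dirty pages -> "
                  f"{round_duration(pages, bandwidth_mbps=100):.2f} s "
                  f"on a 100 Mbps link")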
Figure 5. CPU usage scenario II, (a) Checkpoint, (b) Restore
Figure 6. Memory usage scenario II, (a) Checkpoint, (b) Restore
Figure 7. Time usage scenario II
3.3. Third scenario; two-way live migration with one service
In scenario III, the checkpoint process lasts 0.994 s on platform 1 and 1.037 s on platform 2, and the restore process is carried out by both platforms simultaneously. The restore in scenario III lasts 44.8 s on platform 1 and 18.111 s on platform 2. On platform 1, memory usage is 2.09 GB before the checkpoint, 1.99 GB after the checkpoint, 1.97 GB before the restore and 2.70 GB after the restore. On platform 2, the values are 2.62 GB before the checkpoint, 2.34 GB after the checkpoint, 2.34 GB before the restore and 2.76 GB after the restore. The checkpoint takes 0.994 s on platform 1 and 1.037 s on platform 2, while the restore requires 44.8 s on platform 1 and 18.111 s on platform 2. When the rate at which memory pages are dirtied is faster than the replication rate of the pre-copy procedure, all the pre-copy work becomes inefficient and the virtual machine has to be stopped immediately so that all remaining pages can be copied from memory to the destination host, as shown in Figures 8 and 9. Some memory-intensive workloads therefore do not benefit from a pre-copy algorithm, and downtime can increase to several seconds; this limitation makes the algorithm applicable only on high-speed local networks. Second, some para-virtualized optimization techniques, such as those that skip or rewrite unassigned memory pages, can have a negative impact on the user experience, especially for responsive interactive services. Finally, the pre-copy algorithm does not restore the contents of the CPU caches; although the migration may not fail on the target host, massive cache and TLB misses can cause performance degradation once the virtual machine takes over service.
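The pre-copy behaviour described above follows a simple control loop; a schematic version of it is sketched below, with the dirty-page tracking and page-sending functions left as toy placeholders, since this study performs checkpoint/restore rather than iterative pre-copy.

    def pre_copy_migrate(get_dirty_pages, send_pages, stop_and_copy,
                         max_rounds=30, dirty_threshold=50):
        # Schematic pre-copy loop: keep resending dirty pages until the
        # remaining set is small, then stop the VM and copy the rest.
        for _ in range(max_rounds):
            dirty = get_dirty_pages()
            if len(dirty) <= dirty_threshold:
                break                      # small enough for a short pause
            send_pages(dirty)              # VM keeps running during this round
        # Final stop-and-copy phase: freeze the VM and move what is left.
        stop_and_copy(get_dirty_pages())

    if __name__ == "__main__":
        # Toy stand-ins: the dirty set shrinks by half in each round.
        state = {"dirty": list(range(1000))}
        def get_dirty_pages(): return state["dirty"]
        def send_pages(pages): state["dirty"] = pages[: len(pages) // 2]
        def stop_and_copy(pages): print(f"stop-and-copy of {len(pages)} pages")
        pre_copy_migrate(get_dirty_pages, send_pages, stop_and_copy)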
Figure 8. CPU usage scenario III, (a) Checkpoint, (b) Restore
Figure 9. Memory usage scenario III, (a) Checkpoint, (b) Restore
3.4. Fourth scenario; two-way live migration with three services
In scenario IV, the checkpoint process is carried out by platform 1 and platform 2, each running three Magento services. The checkpoint lasts 3.939 s on platform 1 and 2.831 s on platform 2. The restore process in scenario IV is performed by platform 1 and platform 2 simultaneously, with three containers to be restored on each; it lasts 59.584 s on platform 1 and 59.902 s on platform 2. In general, replayed execution can run faster than the original execution with the log, because during normal execution the process may block waiting for input and output events, while during replay all events can be reproduced immediately, bypassing the waiting time of those instructions. During the migration, physical memory pages are sent across the network to the new destination while the source host continues to operate; pages modified during replication must be resent to ensure consistency. After a limited number of iterative rounds, a very short stop-and-copy phase moves the remaining dirty pages. Figures 10 and 11 show the results of checkpoint and restore testing on platform 1 and platform 2. From the test results, platform 1 shows 3.63 GB of memory used before the checkpoint, 3.27 GB after the checkpoint, 3.27 GB before the restore and 3.61 GB after the restore. Platform 2 shows 4.12 GB before the checkpoint, 3.78 GB after the checkpoint, 3.76 GB before the restore and 4.22 GB after the restore. The checkpoint takes 3.939 s on platform 1 and 2.831 s on platform 2, while the restore takes 59.584 s on platform 1 and 59.902 s on platform 2, as shown in Figure 11. In a LAN environment, since the migrated virtual machine keeps the same network address as before, any ongoing interactions at the network level are not interrupted.
Figure 10. CPU usage scenario IV, (a) Checkpoint, (b) Restore
Figure 11. Memory usage scenario IV, (a) Checkpoint, (b) Restore
Table 1 shows the difference in the number of files accessed by each process in scenarios I, II, III and IV. During the checkpoint process, the number of system process files being accessed decreases, because the service is stopped after the checkpoint has been taken. During the restore process, additional system process files are accessed, because of the additional containers that result from the restore.
Table 1. The difference of the accessed files in the live migration process

                                               Platform 1 (process, PID)              Platform 2 (process, PID)
  Scenario      Checkpoint/restore             Dockerd    Dockerd    Docker-co        Dockerd    Dockerd    Docker-co
                                               (1305)     (1230)     (1478)           (1403)     (1447)     (1747)
  Scenario I    Checkpoint (files reduced)     8 files    -          -                -          -          -
                Restore (files increased)      -          -          -                10 files   -          -
  Scenario II   Checkpoint (files reduced)     -          -          -                -          8 files    3 files
                Restore (files increased)      -          10 files   3 files          -          -          -
  Scenario III  Checkpoint (files reduced)     -          9 files    3 files          -          8 files    3 files
                Restore (files increased)      -          8 files    3 files          -          8 files    3 files
  Scenario IV   Checkpoint (files reduced)     -          24 files   9 files          -          24 files   9 files
                Restore (files increased)      -          30 files   9 files          -          30 files   9 files
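Counts of this kind can be reproduced by reading a process's file-descriptor directory under /proc before and after each phase; the dockerd PID below is only a placeholder, and the snippet is an illustrative way of obtaining numbers comparable to Table 1, not the procedure used in the study.

    import os

    def open_fd_count(pid):
        # Each entry in /proc/<pid>/fd is one descriptor the process holds
        # (regular files, sockets, pipes, ...).
        return len(os.listdir(f"/proc/{pid}/fd"))

    if __name__ == "__main__":
        dockerd_pid = 1230                      # placeholder PID
        before = open_fd_count(dockerd_pid)
        input("take the checkpoint now, then press Enter... ")
        after = open_fd_count(dockerd_pid)
        print(f"open descriptors: {before} before, {after} after "
              f"({before - after} fewer after the checkpoint)")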
3.5. Network analysis
3.5.1. Quality of service analysis
QoS analysis of the LM process has been carried out, and the summary of the analysis is divided into two parts: the results on platform 1 and on platform 2. Table 2 shows the delay, packet loss and throughput on platform 1 for the test scenarios that were run, and Table 3 shows the same metrics on platform 2; the data are obtained from the analysis described above. The relatively large volume of storage data, such as the virtual disk of a virtual machine, combined with the small network bandwidth between data centers, makes storage data migration a bottleneck for direct VM migration over a WAN. A network file sharing system is implemented between the source and destination data centers to prevent the migration of storage data from generating high disk input and output latency. In fact, this architecture results in the storage data being transferred twice during the migration, from the source data center to the shared storage system and from the shared storage system to the target data center, which generates a large amount of network traffic. When each data center uses a local storage system instead, storage data migration performance can be improved with a variety of methods, such as deduplication or snapshots [10]. QoS should provide an online assurance of a range of measurable service features to end-to-end users and applications in terms of delay, jitter, available bandwidth and packet loss.
Table 2. Quality of service of the live migration process of platform 1
  Scenario   Delay   Packet loss   Throughput
  1          1 ms    0.01%         0.35 MB/s
  2          1 ms    0.03%         9.45 MB/s
  3          1 ms    0.08%         9.95 MB/s
  4          1 ms    0.59%         8.54 MB/s

Table 3. Quality of service of the live migration process of platform 2
  Scenario   Delay   Packet loss   Throughput
  1          1 ms    0.98%         9.48 MB/s
  2          1 ms    0.06%         0.34 MB/s
  3          1 ms    0.40%         9.95 MB/s
  4          1 ms    -0.49%        8.55 MB/s
3.5.2. System process in network analysis
The analysis of system processes is summarized according to the test scenarios and the data obtained. Among the protocols reported by lsof, the only protocol used to transfer data from the source platform to the destination platform is TCP; this is based on the .pcap output file from Wireshark, which shows the Transmission Control Protocol accounting for 100% of the traffic in all LM tests. However, each improvement faces a trade-off between the newly introduced overhead and the performance benefit it brings. For example, a deduplication feature computes hashed fingerprints of storage data to detect blocks that already exist at the destination site, in order to reduce the migration time. If the deduplication process cannot find enough duplicate blocks at the destination site, it can instead extend the total migration time, since the computational burden of eliminating the duplicate data exceeds its contribution to the data transfer. Therefore, there is a need to systematically rethink storage data migration to greatly improve its performance. In the other direction, migration can also lead to disruptions in service and communication overheads, but the decisions regarding the time and
Migration itself can also cause service disruption and communication overhead, and the decisions about when and where to transfer depend on many aspects, such as user mobility, communication channel characteristics, and resource availability. The current cloud-based architecture results in a large geographic separation between users and infrastructure. In such an arrangement, end-to-end communication involves several network hops, resulting in high latency, while the bandwidth into the cloud can also become saturated because it is accessed on a many-to-one basis [29]. As a result, service applications must remain relatively close to their end users to ensure low latency and high bandwidth connectivity. Decision-making algorithms should therefore balance these trade-offs; they usually need to anticipate future service requests with some precision, or to buffer queued service requests so that as many of them as possible can be served in batches after migration [30-33]. A simple decision rule in this spirit is sketched below.
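As an illustration only, and not an algorithm taken from [30-33], the following sketch expresses one way such a trade-off could be balanced: migrate when the latency saved over the predicted residence time exceeds the one-off cost of transferring the checkpoint and interrupting the service. All parameters are assumptions.

# Illustrative decision rule only (all inputs are assumptions): migrate a service
# when the latency saved over its predicted residence time at the new site
# outweighs the one-off cost of moving it.
def should_migrate(current_rtt_ms: float,
                   target_rtt_ms: float,
                   predicted_residence_s: float,
                   request_rate_per_s: float,
                   checkpoint_bytes: int,
                   link_MBps: float,
                   downtime_penalty_ms: float = 500.0) -> bool:
    # Cumulative latency saved by serving future requests from the closer site.
    saving_ms = (current_rtt_ms - target_rtt_ms) * request_rate_per_s * predicted_residence_s
    # One-off cost: shipping the checkpoint image plus the service interruption.
    transfer_ms = checkpoint_bytes / (link_MBps * 1e6) * 1000.0
    cost_ms = transfer_ms + downtime_penalty_ms
    return saving_ms > cost_ms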
Table 4. System processes which use the TCP protocol in live migration
Command PID User FD Type Device Size/Off Node Name
Dockerd 1403 root 31u sock 0.9 0t0 96280 Protocol: TCP
Dockerd 1230 root 31u sock 0.9 0t0 92456 Protocol: TCP
Dockerd 1230 root 31u sock 0.9 0t0 128883 Protocol: TCP
Dockerd 1230 root 31u sock 0.9 0t0 270184 Protocol: TCP
Dockerd 1230 root 41u sock 0.9 0t0 276728 Protocol: TCP
Dockerd 1230 root 51u sock 0.9 0t0 281782 Protocol: TCP
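The 100% TCP share and the dockerd sockets listed in Table 4 can be cross-checked with a short script; the sketch below assumes the third-party scapy package and a hypothetical capture file name, and is not part of the original test procedure.

# Minimal cross-check sketch: compute the share of TCP frames in the Wireshark
# capture and list the dockerd sockets reported by lsof, as in Table 4.
# Assumes the scapy package; the capture file name is hypothetical.
import subprocess
from typing import List

from scapy.all import rdpcap
from scapy.layers.inet import TCP

def tcp_share(pcap_path: str) -> float:
    packets = rdpcap(pcap_path)
    tcp_frames = sum(1 for p in packets if p.haslayer(TCP))
    return 100.0 * tcp_frames / len(packets)

def dockerd_tcp_sockets() -> List[str]:
    # `lsof -nP` lists open files and sockets; keep the dockerd lines mentioning TCP.
    output = subprocess.run(["lsof", "-nP"], capture_output=True, text=True).stdout
    return [line for line in output.splitlines()
            if line.startswith("dockerd") and "tcp" in line.lower()]

if __name__ == "__main__":
    print(f"TCP share: {tcp_share('live_migration.pcap'):.1f}%")  # assumed file name
    print("\n".join(dockerd_tcp_sockets()))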
4. CONCLUSION
Based on the testing and analysis of the Docker LM scenarios using CRIU, it can be concluded that the approach makes it possible to back up multiple services, and that platform 2 uses more memory than platform 1. The amount of RAM on a platform does not affect the speed of the backup process for LM. The number of CPU cores affects CPU usage during checkpoint and restore: the more cores used, the lower the CPU usage. The increase in CPU usage during the checkpoint process is caused by the running process consuming more resources while the checkpoint is taken; this is supported by the absence of additional system process files accessed during checkpointing. The increase in CPU usage during the restore process is caused by the process accessing more files while restoring; this is supported by the additional files accessed during restore, which result from restoring the container files. Based on the analysis of the LM process of the Docker container with CRIU, LM uses only TCP as the transport-layer protocol. The other protocols shown in the LSOF output, namely UDP and NETLINK, are used by the Docker container to provide the services it runs. Future work can define an optional interface between the framework and the application, so that the application can independently decide whether the migration process should be accepted or rejected, thereby improving overall performance.
REFERENCES
[1] L. Shu, “The design and implementation of cloud-scale live migration,” Rutgers University Libraries, Master
Thesis, Graduate School New Brunswick, New Jersey, pp. 1-51, 2014.
[2] I. Foster, Y. Zhao, I. Raicu, and S. Lu, “Cloud computing and grid computing 360-degree compared,” 2008 Grid
Computing Environments Workshop, Austin, TX, pp. 1-10, 2008.
[3] C. Pahl, A. Brogi, J. Soldani, and P. Jamshidi, “Cloud container technologies: a state-of-the-art review,” in IEEE
Transactions on Cloud Computing, vol. 7, no. 3, pp. 677-692, 1 July-Sept 2019.
[4] M. Abdelbaky, J. Diaz-Montes, M. Parashar, M. Unuvar, and M. Steinder, “Docker containers across multiple
clouds and data centers,” 2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC),
Limassol, pp. 368-371, 2015.
[5] D. S. Linthicum, “Moving to autonomous and self-migrating containers for cloud applications,” in IEEE Cloud
Computing, vol. 3, no. 6, pp. 6-9, Nov-Dec 2016.
[6] A. Strunk, “Costs of virtual machine live migration: A Survey,” 2012 IEEE Eighth World Congress on Services,
Honolulu, HI, pp. 323-329, 2012.
[7] A. J. Elmore, S. Das, D. Agrawal, and A.E. Abbadi, “Zephyr: Live migration in shared nothing databases for elastic
cloud platforms categories and subject descriptors,” Sigmod, pp. 301-312, 2011.
[8] H. Liu, H. Jin, X. Liao, L. Hu, and C. Yu, “Live migration of virtual machine based on full system trace and
replay,” ACM HPDC, pp. 101-110, 2009.
[9] B. Kavitha and P. Varalakshmi, “Performance analysis of virtual machines and docker containers,” Data Science
Analytics and Applications, pp. 99-113, 2017.
[10] F. Zhang, “Challenges and new solutions for live migration of virtual machines in cloud computing environments,”
Doctoral Thesis Georg-August University School of Science (Gottingen), pp. 1-69, 2018.
[11] Z. Kozhirbayev and R. O. Sinnott, “Performance comparison of container-based technologies for Cloud,” Future
Generation Computer System, vol. 68, pp. 175-182, 2017.
[12] F. Zhang, G. Liu, X. Fu, and R. Yahyapour, “A survey on virtual machine migration: challenges, techniques and
open issues,” IEEE Communications Surveys & Tutorials, vol. 20, no. 2, pp. 1206-1243, 2018.
[13] R. Morabito, “Virtualization on Internet of Things edge devices with container technologies: A performance
evaluation,” in IEEE Access, vol. 5, pp. 8835-8850, 2017.
[14] P. Karhula, J. Janak, and H. Schulzrinne, “Checkpointing and migration of IoT edge functions,” EdgeSys '19:
Proceedings of the 2nd International Workshop on Edge Systems, Analytics and Networking, pp. 60-65, 2019.
[15] F. Aïssaoui, G. Cooperman, T. Monteil, and S. Tazi, “Smart scene management for IoT-based constrained devices
using checkpointing,” 2016 IEEE 15th International Symposium on Network Computing and Applications (NCA),
Cambridge, MA, pp. 170-174, 2016.
[16] R. Amrutha and V. Nithya, “Curbing of TCP incast in data center networks,” 2015 4th International Conference on
Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and Future Directions), Noida, pp. 1-5,
2015.
[17] K-C. Leung, V. O. Li and D. Yang, “An overview of packet reordering in transmission control protocol (TCP):
problems, solutions, and challenges,” in IEEE Transactions on Parallel and Distributed Systems, vol. 18, no. 4, pp.
522-535, April 2007.
[18] T. Li, W. Zhu, J. Xu, and Y. Cheng, “The analysis and implementation of UDP-based cross-platform data
transmission,” 2012 2nd International Conference on Consumer Electronics, Communications and Networks
(CECNet), Yichang, pp. 628-630, 2012.
[19] S. R. U. Kakakhel, L. Mukkala, T. Westerlund, and J. Plosila, “Virtualization at the network edge: A technology
perspective,” 2018 Third International Conference on Fog and Mobile Edge Computing (FMEC), Barcelona, pp.
87-92, 2018.
[20] C. Cicconetti, L. Lenzini, E. Mingozzi, and C. Eklund, “Quality of service support in IEEE 802.16 networks,” in
IEEE Network, vol. 20, no. 2, pp. 50-55, March-April 2006.
[21] H. Li and W. Zhang, “QoS routing in smart grid,” 2010 IEEE Global Telecommunications Conference
GLOBECOM 2010, Miami, FL, pp. 1-6, 2010.
[22] D. Chen and P. K. Varshney, “QoS support in wireless sensor networks: A survey,” Proceedings of the
International Conference on Wireless Networks, (ICWN ’04), Las Vegas, pp. 227-233, 2004.
[23] K. Chandran, S. Raghunathan, S. Venkatesan, and R. Prakash, “A feedback-based scheme for improving TCP
performance in ad hoc wireless networks,” in IEEE Personal Communications, vol. 8, no. 1, pp. 34-39, Feb 2001.
[24] D. A. Menasce, “QoS issues in Web services,” in IEEE Internet Computing, vol. 6, no. 6, pp. 72-75, Nov-Dec,
2002.
[25] E. E. Hassan, T. K. A. Z. Zakaria, N. Bahaman, and M. H. Jifri, “Maximum loadability enhancement with a hybrid
optimization method,” Bulletin of Electrical Engineering and Informatics, vol. 7, no. 3, pp. 323-330, 2018.
[26] N. M. N. Mathivanan, N. A. M. Ghani, and R. M. Janor, “Improving classification accuracy using clustering
technique,” Bulletin of Electrical Engineering and Informatics, vol. 7, no. 3, pp. 465-470, 2018.
[27] R. Fauzi, M. Hariadi, M. Lubis, and S. M. S. Nugroho, “Defense behavior of real time strategy games: Comparison
between HFSM and FSM,” Indonesia Journal of Electrical Engineering and Computer Science, vol. 13, no. 2, pp.
634-642, 2019.
[28] Julham, H. A. Adam, A. R. Lubis, and M. Lubis, “Development of Soil Moisture Measurement with Wireless
Sensor Web-Based Concept,” Indonesia Journal of Electrical Engineering and Computer Science, vol. 13, no. 2,
pp. 514-520, 2019.
[29] A. Machen, S. Wang, K. K. Leung, B. J. Ko, and T. Salonidis, “Live service migration in mobile edge clouds,” in
IEEE Wireless Communications, vol. 25, no. 1, pp. 140-147, February 2018.
[30] A. Machen, S. Wang, K. K. Leung, B. J. Ko, and T. Salonidis, “Poster: Migrating running applications across mobile edge clouds,” MobiCom ’16: Proceedings of the 22nd Annual International Conference on Mobile Computing and Networking, pp. 435-436, 2016.
[31] J. Sinti, F. Jiffry and M. Aiash, “Investigating the impact of live migration on the network infrastructure in
enterprise environments,” 2014 28th International Conference on Advanced Information Networking and
Applications Workshops, Victoria, BC, pp. 154-159, 2014.
[32] J. Fesl, V. Gokhale, M. Dolezalova, J. Cehak and J. Janecek, “Cloud infrastructures protection technique based on
virtual machines live migration,” Proc. ACIT, 2019.
[33] A. Riaz, H. F. Ahmad, A. K. Kiani, J. Qadir, R. U. Rasool, and U. Younis, “Intrusion detection systems in cloud
computing: A contemporary review of techniques and solutions,” Journal of Information Science and Engineering,
vol. 33, pp. 611-634, 2017.
BIOGRAPHIES OF AUTHORS
Adityas Widjajarto received his Bachelor and Master degrees from Institut Teknologi Bandung in 1999 and 2007, respectively. He joined the School of Industrial Engineering, Telkom University, as a Lecturer in 2013. He teaches subjects related to networking, such as basic operating systems, network service management, information system security, computer forensics, and cyber security. His research interests include simulation and modeling, business intelligence, data communication, and networking.
Deden Witarsyah Jacob received his Bachelor's degree in Electrical and Information Engineering from Universitas Bhayangkara Surabaya in 1997 and his Master's degree in Computer Engineering from Curtin University of Technology, Australia, in 2006. He received his Doctor of Philosophy from University Tun Hussein Onn in 2018. He joined the School of Industrial Engineering, Telkom University, as a Lecturer in 2013. He teaches subjects related to cyberlaw and ethics, such as introduction to information systems, professional ethics, regulation and international culture, as well as research methodology and academic writing. His research focuses on Open Data, the Internet of Things, and Decision Support Systems.
Muharman Lubis received his Doctor of Philosophy and Master's degree in Information Technology from International Islamic University Malaysia in 2011, and his Bachelor's degree in Information Technology from University Utara Malaysia in 2008. He joined the School of Industrial Engineering, Telkom University, as a Lecturer in 2017. He teaches several subjects related to design and network management, such as network design, user experience design, and customer relationship management. His research interests include privacy protection, information security awareness, knowledge management, and project management.