Wireless Sensor Networks (WSNs) are used in many application fields, such as military, healthcare, and environmental surveillance. WSN operating systems based on the event-driven model do not support real-time, multi-task applications, while those based on the thread-driven model consume considerable energy through frequent context switches. Because sensor nodes are deployed densely and at large scale, it is very difficult to collect them to update their software. Furthermore, sensor nodes are vulnerable to security attacks because their communication is broadcast and they operate unattended. This paper presents a task- and resource-self-adaptive embedded real-time microkernel, which adopts a hybrid programming model and offers a two-level scheduling strategy to support real-time multi-tasking. A communication scheme, which borrows the tuple space and the IN/OUT primitives from Linda, is proposed to support collaborative and distributed tasks. In addition, the kernel implements a run-time over-the-air updating mechanism and provides a security policy to resist attacks and ensure reliable node operation. A performance evaluation is presented, and the experimental results show that the kernel is task-oriented and resource-aware and can serve both event-driven and real-time multi-task applications.
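The Linda-style communication scheme mentioned above can be illustrated with a minimal tuple space. This is a sketch only: the class and method names are illustrative assumptions, not the kernel's actual API, and Python is used purely for readability since the paper's kernel code is not shown.

```python
import threading

class TupleSpace:
    """A minimal Linda-style tuple space with OUT/IN primitives."""

    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, tup):
        """OUT: deposit a tuple into the shared space."""
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def in_(self, pattern):
        """IN: remove and return a tuple matching the pattern.

        A pattern is a tuple of the same length; None fields match
        anything. Blocks until a matching tuple appears.
        """
        with self._cond:
            while True:
                for t in self._tuples:
                    if len(t) == len(pattern) and all(
                        p is None or p == f for p, f in zip(pattern, t)
                    ):
                        self._tuples.remove(t)
                        return t
                self._cond.wait()

# One task publishes a reading, a collaborating task consumes it.
space = TupleSpace()
space.out(("temp", 7, 23.5))            # node 7 reports 23.5 degrees
print(space.in_(("temp", None, None)))  # -> ('temp', 7, 23.5)
```

Because IN blocks until a match arrives, producer and consumer tasks are decoupled in both space and time, which is what makes the model attractive for collaborative distributed tasks.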
This document discusses using wireless sensor networks for habitat monitoring. It presents requirements for a system to monitor seabird nesting environments and behaviors. The currently deployed network consists of 32 sensor nodes on an island off the coast of Maine that stream live data online. The application-driven design helps identify important areas for further work like data sampling, communications, network tasks, and health monitoring.
High performance energy efficient multicore embedded computing (Ankit Talele)
The document discusses high-performance energy efficient multicore embedded computing. It describes the challenges of achieving both high performance and low power consumption. Several approaches are examined at the architectural level, including heterogeneous multicore processors, cache partitioning, and wireless interconnects. Hardware-assisted middleware approaches like power gating and software approaches like task scheduling and migration are also covered. Finally, examples of commercial high-performance energy efficient multicore processors are provided. The document concludes that high-performance energy efficient computing is an important research area with applications from consumer electronics to supercomputing.
Load balancing in public cloud by division of cloud based on the geographical... (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international, peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in the fields of Engineering and Technology. It brings together scientists, academicians, field engineers, scholars, and students of related fields of Engineering and Technology.
Achieving High Performance Distributed System: Using Grid, Cluster and Cloud ... (IJERA Editor)
To increase the efficiency of any task, we require a system that provides high performance along with flexibility and cost efficiency for the user. Distributed computing has become very popular over the past decade, and it has three major types: cluster, grid, and cloud. To develop a high-performance distributed system, we need to utilize all three. In this paper, we first introduce the three types of distributed computing. We then examine trends in computing and in green, sustainable computing that can enhance the performance of a distributed system. Finally, after presenting the future scope, we conclude by suggesting a path toward a green, high-performance distributed system that uses cluster, grid, and cloud computing.
Using Virtualization Technique to Increase Security and Reduce Energy Consump... (IJORCS)
This paper presents an approach for creating a secure environment on an Internet-based virtual computing platform while reducing energy consumption in green cloud computing. The proposed approach continuously checks the accuracy of stored data through a central control service inside the network environment, and checks system security by isolating individual virtual machines within a common virtual environment. The approach was simulated on two types of Virtual Machine Monitor (VMM), QEMU (Quick EMUlator) and Xen HVM (Hardware Virtual Machine); the simulation outputs in VMInsight show that when a service is used singly, its performance overhead increases. As a secure system, the proposed approach can recognize malicious behavior and assure service security by means of operational integrity measurement. Moreover, system efficiency was evaluated according to energy consumption on five applications (defragmentation, compression, Linux boot, decompression, and kernel boot). The results indicate that to secure a multi-tenant environment, administrators would otherwise have to install a separate security monitoring system for each virtual machine (VM), which imposes a heavy management workload, whereas the proposed approach can watch over all VMs with just one virtual machine acting as supervisor.
1. The document discusses the journey to cloud computing through three phases - classic data center, virtual data center, and cloud. In a classic data center, components like compute, applications, databases, storage, and networks are physically separate.
2. The next phase is a virtual data center, where virtualization allows multiple operating systems to run simultaneously on a single physical machine. Virtual machines act like physical machines but are logical files.
3. Security is a key challenge in cloud computing due to the use of virtualization. Various security measures like authentication, access control, and virtual machine theft prevention need to be implemented to secure data and systems in the cloud.
Review on operating systems and routing protocols for wireless sensor (IAEME Publication)
This document provides an overview of operating systems and routing protocols for wireless sensor networks. It discusses several popular operating systems used in wireless sensor networks, including TinyOS, MANTIS, Contiki, RetOS, and MagnetOS. It describes the key features and limitations of each operating system. The document also reviews common routing protocols for wireless sensor networks, and discusses flooding and its variants as a basic routing technique.
The document discusses the need for middleware in wireless sensor networks. It describes some of the challenges in designing middleware for sensor networks, including limited resources, scalability, and heterogeneity. It then summarizes several approaches to sensor network middleware, including virtual machine approaches, modular programming approaches, database approaches, and message-oriented middleware.
The document discusses four key challenges for implementing embedded cloud computing:
1. Configuring systems for data timeliness and reliability across multiple data streams and applications.
2. Configuring ad-hoc datacenters for remote operations in a timely manner based on available cloud resources.
3. Ensuring configuration accuracy so data timeliness and reliability are optimized for the given computing resources.
4. Reducing development complexity to allow systems to readily configure and operate across different cloud environments and applications.
Run-Time Adaptive Processor Allocation of Self-Configurable Intel IXP2400 Net... (CSCJournals)
An ideal network processor, that is, a programmable multi-processor device, must offer both the flexibility and the speed required for packet processing. Current network processor systems generally fall short of this benchmark because of the traffic fluctuations inherent in packet networks; the resulting workload variation on individual pipeline stages over time ultimately degrades the overall performance of an otherwise sound system. One potential solution is to change the code running at these stages so as to adapt to the fluctuations. This paper introduces and studies such a system: a dynamically adaptive processor that withstands traffic fluctuations by reconfiguring the entire system. We achieve this with a decision-making model that transfers the binary code to the processor through the SOAP protocol.
- An operating system consists of kernel mode and user mode. A microkernel breaks the kernel into separate processes called servers that run in either kernel or user space and communicate via message passing.
- The microkernel only includes essential functions for memory management, IPC, and I/O interrupt handling while less critical services are built as external subsystems that interact with the microkernel through message passing.
- A potential disadvantage is performance, as message passing takes longer than direct calls, but this can be improved by minimizing the microkernel size.
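The message-passing structure described in these notes can be sketched as follows. The `Microkernel` class, the port name, and the message fields are hypothetical; the point is only to show a service replying through kernel-routed messages rather than direct calls.

```python
import queue
import threading

class Microkernel:
    """Toy message router: the kernel only registers ports and
    forwards messages; services run as separate server threads."""

    def __init__(self):
        self._ports = {}

    def register(self, name):
        # Each server gets a mailbox (port) the kernel routes to.
        self._ports[name] = queue.Queue()
        return self._ports[name]

    def send(self, name, msg):
        # Message passing instead of a direct function call.
        self._ports[name].put(msg)

kernel = Microkernel()
port = kernel.register("memory_server")
reply_port = queue.Queue()

def memory_server():
    # A less critical service living outside the kernel proper.
    req = port.get()
    if req["op"] == "alloc":
        reply_port.put({"status": "ok", "addr": 0x1000})

t = threading.Thread(target=memory_server)
t.start()
kernel.send("memory_server", {"op": "alloc", "size": 64})
t.join()
print(reply_port.get())
```

The queue operations here stand in for the kernel's IPC primitives; each `put`/`get` pair is the kind of extra hop that makes message passing slower than a direct call, which is why keeping the microkernel small matters.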
This document summarizes a research paper that proposes a new approach called DiffP for more energy efficient ad hoc reprogramming of sensor networks. DiffP aims to mitigate the effects of program layout modifications and maximize similarity between old and new software. It also organizes global variables in a novel way to eliminate the effect of variable shifting. The document provides background on challenges with reprogramming deployed sensor networks due to limited energy, processing and memory resources. It reviews related work on dissemination protocols and reprogramming schemes, noting limitations such as producing large patches from layout changes or variable shifts. DiffP is presented as a potential improvement over existing approaches.
International Journal of Computer Science and Security Volume (3) Issue (3) (CSCJournals)
The document discusses system-level modeling and simulation of a customizable network-on-chip (NoC) architecture using system level tools. It proposes a methodology with three phases: 1) Using FSP to model concurrency aspects, 2) Modeling each NoC component as a customizable class in MLD, and 3) Integrating the components in MLD to realize a 4x4 mesh NoC. Simulation results for various parameters like scheduling criteria, injection rates and buffer sizes are presented to validate the NoC operation. The NoC is also implemented on an FPGA to obtain area results. The goal is to develop a rapid customizable system-level NoC design that can be evaluated by a system architect.
The document provides an overview of key components and concepts in operating system structures. It discusses common system components like process management, memory management, file management, and protection systems. It also covers operating system services, system calls, system programs, virtual machines, and approaches to system design and implementation.
The document defines a distributed system and provides examples. It outlines the challenges in designing distributed systems, including heterogeneity, openness, security, scalability, failure handling, concurrency, and transparency. Distributed systems divide tasks across networked computers and aim to appear as a single computer to users.
A DDS-Based Scalable and Reconfigurable Framework for Cyber-Physical Systems (ijseajournal)
Cyber-Physical Systems (CPSs) involve the interconnection of heterogeneous computing devices that are closely integrated with the physical processes under control. Often, these systems are resource-constrained and require specific features such as the ability to adapt to dynamic environments in a timely and efficient fashion. They must also support fault tolerance and avoid single points of failure. This paper describes a scalable framework for CPSs based on the OMG DDS standard. The proposed solution allows this kind of system to be reconfigured at run-time and its resources to be managed efficiently.
Issues and Challenges in Distributed Sensor Networks - A Review (IOSR Journals)
1) The document discusses various design issues and challenges in distributed sensor networks, including limited resources of sensor nodes, scalability, frequent topology changes, and data aggregation.
2) Data aggregation aims to reduce redundant data by having sensor nodes combine and summarize correlated sensor readings. This helps reduce transmission costs and bandwidth usage.
3) Time synchronization is also an important challenge as many sensor network applications require correlating sensor readings with physical times, but achieving precise synchronization is difficult given the networks' constraints.
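The aggregation idea in point 2 can be sketched briefly. The `aggregate` function and its (min, max, mean) summary format are illustrative assumptions, not a protocol from the reviewed paper.

```python
def aggregate(readings, window=3):
    """Hypothetical in-network aggregation: summarize correlated
    readings as (min, max, mean) per window, so a cluster head
    forwards one small summary instead of every raw sample."""
    summaries = []
    for i in range(0, len(readings), window):
        chunk = readings[i:i + window]
        summaries.append((min(chunk), max(chunk), sum(chunk) / len(chunk)))
    return summaries

# Nine raw temperature samples collapse to three summaries,
# cutting the number of transmitted values from 9 to 9/3 tuples.
samples = [21.0, 21.2, 20.8, 21.1, 21.3, 21.0, 22.0, 21.9, 22.1]
print(aggregate(samples))
```

Even this trivial scheme shows the trade-off the review points at: transmission cost drops roughly by the window size, at the price of losing the individual samples.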
EDEM is a general-purpose CAE tool that uses state-of-the-art discrete element modeling technology for the simulation and analysis of particle handling and manufacturing operations.
This document summarizes a research paper that proposes a file system level snapshot for the Ext4 file system in Linux kernels 2.6 and higher. It begins by discussing different types of snapshot techniques, including volume-based and system-based approaches. It then reviews related work on snapshot implementations in other file systems. The document motivates the need for a snapshot feature in Ext4, as it has replaced Ext3 as the default file system in many Linux distributions, though Ext3 included snapshots. It provides details on Ext3 and Ext4 file systems. The goal of the proposed work is to enable effective large-scale backups through a new system-based snapshot for Ext4.
A file system level snapshot in Ext4 (Alexander Decker, www.iiste.org)
This document describes a proposed file system level snapshot feature for the Ext4 file system in Linux kernels 2.6 and higher. It begins with background on existing snapshot techniques, including volume-based and system-based approaches. It then discusses limitations of existing solutions like SnapFS and LVM. The document outlines key aspects of the Ext4 file system and proposes a system-based snapshot approach for Ext4 that allocates a new indirect inode to copy address space information when a snapshot is created, avoiding actual block copies. This allows for space-efficient snapshots in Ext4.
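The snapshot idea, copying only address (block-map) information and sharing data blocks until a write occurs, can be modeled in a few lines. The `File` class below is a toy model of the copy-on-write principle, not Ext4 code.

```python
class File:
    """Toy model of the snapshot approach: a snapshot copies only
    the block map (the 'indirect inode' in the proposal), not the
    data blocks, so blocks are shared until one side rewrites them."""

    def __init__(self, blocks):
        self.storage = list(blocks)                 # shared block store
        self.block_map = list(range(len(blocks)))   # indices into storage

    def snapshot(self):
        snap = File.__new__(File)
        snap.block_map = list(self.block_map)  # copy addresses only
        snap.storage = self.storage            # data blocks shared
        return snap

    def write(self, n, data):
        # Copy-on-write: allocate a fresh block and repoint the live
        # map, leaving the old block intact for any snapshot.
        self.storage.append(data)
        self.block_map[n] = len(self.storage) - 1

    def read(self, n):
        return self.storage[self.block_map[n]]

f = File(["a", "b"])
s = f.snapshot()
f.write(0, "A")
print(f.read(0), s.read(0))  # live file sees "A", snapshot still "a"
```

This is why the proposal is space-efficient: creating the snapshot costs only the size of the address map, and extra blocks are consumed only for data that actually changes afterwards.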
IRJET - Research Paper on Energy-Aware Virtual Machine Migration for Cloud Com... (IRJET Journal)
This document discusses using artificial neural networks and genetic algorithms to optimize virtual machine migration for energy efficiency in cloud computing. Virtual machine migration allows live movement of virtual machines between physical machines for load balancing and maintenance. The proposed approach uses an artificial neural network for virtual machine classification and a genetic algorithm for optimization to reduce energy consumption and service level agreement violations during migration. The goal is to develop an energy-aware virtual machine migration method for cloud computing data centers.
The document defines distributed and parallel systems. A distributed system consists of independent computers that communicate over a network to collaborate on tasks. It has features like no common clock and increased reliability. Examples include telephone networks and the internet. Advantages are information sharing and scalability, while disadvantages include difficulty developing software and security issues. A parallel system uses multiple processors with shared memory to solve problems. Examples are supercomputers and server clusters. Advantages are concurrency and saving time, while the main disadvantage is lack of scalability between memory and CPUs.
International Journal of Engineering Research and Applications (IJERA) is not a publication service or a private publication run for monetary benefit; it is an association of scientists and academics focused solely on supporting authors who want to publish their work. The articles published in the journal can be accessed online, and all articles are archived for real-time access.
The journal primarily aims to bring out the research talent and the work of scientists, academics, engineers, practitioners, scholars, and postgraduate students of engineering and science. It covers scientific research in a broad sense rather than a niche area, enabling researchers from various verticals to publish their papers, and it aims to provide a platform for publishing in a shorter time so that authors can continue their research. All published articles are freely available to researchers in government agencies, to educators, and to the general public. We are making serious efforts to promote the journal across the globe so that it acts as a scientific platform for all researchers to publish their work online.
This document outlines the course outcomes and topics to be covered for a Cloud Computing elective course. The course aims to describe system models, analyze virtualization mechanisms, demonstrate cloud architectural design and security, and construct cloud-based software applications. The topics covered in Unit 1 include scalable computing over the internet, technologies for network-based systems, system models for distributed and cloud computing, software environments, and performance, security and energy efficiency. Specific topics in Unit 1 range from multicore CPUs and virtualization to models like clusters, grids, peer-to-peer networks and cloud computing.
A distributed system is a collection of independent computers that appears as a single coherent system to users. It provides advantages like cost-effectiveness, reliability, scalability, and flexibility but introduces challenges in achieving transparency, dependability, performance, and flexibility due to its distributed nature. A true distributed system that solves all these challenges perfectly is difficult to achieve due to limitations like network complexity and security issues.
Basic Architecture of Wireless Sensor Network (Karthik)
The document discusses software architecture design considerations for wireless sensor networks. It examines four key characteristics of wireless sensor networks that impact software architecture: self-organization, cooperative processing, energy efficiency, and modularity. It then describes common components of service-oriented wireless sensor network architectures, including sensor applications, node applications, network applications, and middleware. Finally, it analyzes two proposed software architectures and how they address the requirements of wireless sensor networks.
DESIGN ISSUES ON SOFTWARE ASPECTS AND SIMULATION TOOLS FOR WIRELESS SENSOR NE... (IJNSA Journal)
In this paper, various existing simulation environments for general-purpose and special-purpose WSNs are discussed, and the features of a number of sensor network simulators and operating systems are compared. We present an overview of the most commonly used operating systems and the different approaches they take to the common problems of WSNs. Because different simulation environments implement different layers, components, and protocols, they are difficult to compare; even when the same protocol is simulated in two different simulators, the implementations differ, since their functionality is not exactly the same. The choice of simulator is therefore driven by the application, as each simulator's performance varies widely with the application.
IRJET - Analytical Study of Hierarchical Routing Protocols for Virtual Wi... (IRJET Journal)
This document analyzes and compares several hierarchical routing protocols for virtual wireless sensor networks (VWSNs). It first provides background on VWSNs and how virtualization allows a single physical sensor network to serve multiple applications simultaneously. It then reviews several common cluster-based routing protocols for wireless sensor networks, including LEACH, ModLEACH, SEP, and ZSEP. Through simulation and analysis of network lifetime, load balancing, energy consumption, and packets received, the document aims to provide insights on how well different routing protocols can be utilized for VWSNs under various conditions.
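As background for the cluster-based protocols named above, the classic LEACH cluster-head election can be sketched. The helper names and the choice of `p` are illustrative; the threshold T(n) = p / (1 - p * (r mod 1/p)) is the standard LEACH rule for nodes that have not yet served as head in the current epoch.

```python
import random

def leach_threshold(p, r):
    """LEACH threshold T(n) = p / (1 - p * (r mod 1/p)) for round r,
    where p is the desired fraction of cluster heads."""
    return p / (1 - p * (r % (1 / p)))

def elect_heads(nodes, p, r, rng):
    """Each eligible node independently becomes a cluster head
    with probability T(n) for this round."""
    t = leach_threshold(p, r)
    return [n for n in nodes if rng.random() < t]

rng = random.Random(42)
nodes = list(range(100))
heads = elect_heads(nodes, p=0.05, r=0, rng=rng)
print(len(heads), "cluster heads elected this round")
```

The threshold grows as a round progresses through an epoch, so nodes that have not yet served become increasingly likely to be elected; this rotation is what spreads energy consumption across the network.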
The document discusses four key challenges for implementing embedded cloud computing:
1. Configuring systems for data timeliness and reliability across multiple data streams and applications.
2. Configuring ad-hoc datacenters for remote operations in a timely manner based on available cloud resources.
3. Ensuring configuration accuracy so data timeliness and reliability are optimized for the given computing resources.
4. Reducing development complexity to allow systems to readily configure and operate across different cloud environments and applications.
Run-Time Adaptive Processor Allocation of Self-Configurable Intel IXP2400 Net...CSCJournals
An ideal Network Processor, that is, a programmable multi-processor device must be capable of offering both the flexibility and speed required for packet processing. But current Network Processor systems generally fall short of the above benchmarks due to traffic fluctuations inherent in packet networks, and the resulting workload variation on individual pipeline stage over a period of time ultimately affects the overall performance of even an otherwise sound system. One potential solution would be to change the code running at these stages so as to adapt to the fluctuations; a near robust system with standing traffic fluctuations is the dynamic adaptive processor, reconfiguring the entire system, which we introduce and study to some extent in this paper. We achieve this by using a crucial decision making model, transferring the binary code to the processor through the SOAP protocol.
- An operating system consists of kernel mode and user mode. A microkernel breaks the kernel into separate processes called servers that run in either kernel or user space and communicate via message passing.
- The microkernel only includes essential functions for memory management, IPC, and I/O interrupt handling while less critical services are built as external subsystems that interact with the microkernel through message passing.
- A potential disadvantage is performance, as message passing takes longer than direct calls, but this can be improved by minimizing the microkernel size.
This document summarizes a research paper that proposes a new approach called DiffP for more energy efficient ad hoc reprogramming of sensor networks. DiffP aims to mitigate the effects of program layout modifications and maximize similarity between old and new software. It also organizes global variables in a novel way to eliminate the effect of variable shifting. The document provides background on challenges with reprogramming deployed sensor networks due to limited energy, processing and memory resources. It reviews related work on dissemination protocols and reprogramming schemes, noting limitations such as producing large patches from layout changes or variable shifts. DiffP is presented as a potential improvement over existing approaches.
International Journal of Computer Science and Security Volume (3) Issue (3)CSCJournals
The document discusses system-level modeling and simulation of a customizable network-on-chip (NoC) architecture using system level tools. It proposes a methodology with three phases: 1) Using FSP to model concurrency aspects, 2) Modeling each NoC component as a customizable class in MLD, and 3) Integrating the components in MLD to realize a 4x4 mesh NoC. Simulation results for various parameters like scheduling criteria, injection rates and buffer sizes are presented to validate the NoC operation. The NoC is also implemented on an FPGA to obtain area results. The goal is to develop a rapid customizable system-level NoC design that can be evaluated by a system architect.
The document provides an overview of key components and concepts in operating system structures. It discusses common system components like process management, memory management, file management, and protection systems. It also covers operating system services, system calls, system programs, virtual machines, and approaches to system design and implementation.
The document defines a distributed system and provides examples. It outlines the challenges in designing distributed systems, including heterogeneity, openness, security, scalability, failure handling, concurrency, and transparency. Distributed systems divide tasks across networked computers and aim to appear as a single computer to users.
A DDS-Based Scalable and Reconfigurable Framework for Cyber-Physical Systemsijseajournal
Cyber-Physical Systems (CPSs) involve the interconnection of heterogeneous computing devices which are
closely integrated with the physical processes under control. Often, these systems are resource-constrained
and require specific features such as the ability to adapt in a timely and efficient fashion to dynamic
environments. They must also support fault tolerance and avoid single points of failure. This paper
describes a scalable framework for CPSs based on the OMG DDS standard. The proposed solution allows
such systems to be reconfigured at run time and their resources to be managed efficiently.
Issues and Challenges in Distributed Sensor Networks - A Review (IOSR Journals)
1) The document discusses various design issues and challenges in distributed sensor networks, including limited resources of sensor nodes, scalability, frequent topology changes, and data aggregation.
2) Data aggregation aims to reduce redundant data by having sensor nodes combine and summarize correlated sensor readings. This helps reduce transmission costs and bandwidth usage.
3) Time synchronization is also an important challenge as many sensor network applications require correlating sensor readings with physical times, but achieving precise synchronization is difficult given the networks' constraints.
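The pairwise two-way message exchange that synchronization protocols such as TPSN and NTP build on can be sketched as follows (a minimal illustration; the timestamps are hypothetical and a symmetric propagation delay is assumed):

```python
def estimate_offset(t1, t2, t3, t4):
    """Two-way message exchange (NTP/TPSN-style pairwise sync).
    t1: request sent (A's clock), t2: request received (B's clock),
    t3: reply sent (B's clock),   t4: reply received (A's clock).
    Assumes the propagation delay is symmetric in both directions."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0  # how far B's clock is ahead of A's
    delay = ((t2 - t1) + (t4 - t3)) / 2.0   # one-way propagation delay
    return offset, delay

# Simulated exchange: B's clock runs 5.0 time units ahead, one-way
# delay is 0.2, with 0.05 of processing time at B (all hypothetical).
t1 = 100.0
t2 = t1 + 0.2 + 5.0
t3 = t2 + 0.05
t4 = t3 + 0.2 - 5.0
offset, delay = estimate_offset(t1, t2, t3, t4)
print(offset, delay)  # recovers the injected offset and delay
```

Clock skew (drift in rate, not just offset) is what forces protocols to repeat this exchange periodically rather than synchronize once.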
EDEM is a general-purpose CAE tool that uses state-of-the-art discrete element modeling technology for the simulation and analysis of particle handling and manufacturing operations.
This document summarizes a research paper that proposes a file system level snapshot for the Ext4 file system in Linux kernels 2.6 and higher. It begins by discussing different types of snapshot techniques, including volume-based and system-based approaches. It then reviews related work on snapshot implementations in other file systems. The document motivates the need for a snapshot feature in Ext4, as it has replaced Ext3 as the default file system in many Linux distributions, though Ext3 included snapshots. It provides details on Ext3 and Ext4 file systems. The goal of the proposed work is to enable effective large-scale backups through a new system-based snapshot for Ext4.
A File System Level Snapshot in Ext4 (www.iiste.org, Alexander Decker)
This document describes a proposed file system level snapshot feature for the Ext4 file system in Linux kernels 2.6 and higher. It begins with background on existing snapshot techniques, including volume-based and system-based approaches. It then discusses limitations of existing solutions like SnapFS and LVM. The document outlines key aspects of the Ext4 file system and proposes a system-based snapshot approach for Ext4 that allocates a new indirect inode to copy address space information when a snapshot is created, avoiding actual block copies. This allows for space-efficient snapshots in Ext4.
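The copy-the-address-map-not-the-blocks idea behind the proposed snapshot can be illustrated with a toy copy-on-write sketch (purely conceptual, not Ext4 code; the class and method names are invented for illustration):

```python
class CowFile:
    """Toy illustration of the snapshot idea: a snapshot copies only
    the block *map* (addresses), not the data blocks themselves;
    data is duplicated lazily, on the next write to a block."""
    def __init__(self, blocks):
        self.blocks = list(blocks)   # block map: index -> block data

    def snapshot(self):
        return list(self.blocks)     # copy references only, O(map size)

    def write(self, i, data):
        self.blocks[i] = data        # new block; snapshot keeps the old one

f = CowFile(["A", "B", "C"])
snap = f.snapshot()
f.write(1, "B2")
print(f.blocks, snap)  # ['A', 'B2', 'C'] ['A', 'B', 'C']
```

The snapshot stays space-efficient because unchanged blocks are shared between the live file and the snapshot.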
IRJET- Research Paper on Energy-Aware Virtual Machine Migration for Cloud Com... (IRJET Journal)
This document discusses using artificial neural networks and genetic algorithms to optimize virtual machine migration for energy efficiency in cloud computing. Virtual machine migration allows live movement of virtual machines between physical machines for load balancing and maintenance. The proposed approach uses an artificial neural network for virtual machine classification and a genetic algorithm for optimization to reduce energy consumption and service level agreement violations during migration. The goal is to develop an energy-aware virtual machine migration method for cloud computing data centers.
The document defines distributed and parallel systems. A distributed system consists of independent computers that communicate over a network to collaborate on tasks. It has features like no common clock and increased reliability. Examples include telephone networks and the internet. Advantages are information sharing and scalability, while disadvantages include difficulty developing software and security issues. A parallel system uses multiple processors with shared memory to solve problems. Examples are supercomputers and server clusters. Advantages are concurrency and saving time, while the main disadvantage is lack of scalability between memory and CPUs.
This document outlines the course outcomes and topics to be covered for a Cloud Computing elective course. The course aims to describe system models, analyze virtualization mechanisms, demonstrate cloud architectural design and security, and construct cloud-based software applications. The topics covered in Unit 1 include scalable computing over the internet, technologies for network-based systems, system models for distributed and cloud computing, software environments, and performance, security and energy efficiency. Specific topics in Unit 1 range from multicore CPUs and virtualization to models like clusters, grids, peer-to-peer networks and cloud computing.
A distributed system is a collection of independent computers that appears as a single coherent system to users. It provides advantages like cost-effectiveness, reliability, scalability, and flexibility but introduces challenges in achieving transparency, dependability, performance, and flexibility due to its distributed nature. A true distributed system that solves all these challenges perfectly is difficult to achieve due to limitations like network complexity and security issues.
Basic Architecture of Wireless Sensor Network (Karthik)
The document discusses software architecture design considerations for wireless sensor networks. It examines four key characteristics of wireless sensor networks that impact software architecture: self-organization, cooperative processing, energy efficiency, and modularity. It then describes common components of service-oriented wireless sensor network architectures, including sensor applications, node applications, network applications, and middleware. Finally, it analyzes two proposed software architectures and how they address the requirements of wireless sensor networks.
DESIGN ISSUES ON SOFTWARE ASPECTS AND SIMULATION TOOLS FOR WIRELESS SENSOR NE... (IJNSA Journal)
In this paper, various existing simulation environments for general-purpose and special-purpose WSNs are discussed. The features of a number of different sensor network simulators and operating systems are compared. We present an overview of the most commonly used operating systems and the different approaches they take to the common problems of WSNs. Different simulation environments implement different layers, components and protocols, which makes them difficult to compare. Even when the same protocol is simulated in two different simulators, each implementation differs, since their functionality is not exactly the same. The choice of simulator is therefore driven by the application, since each simulator's performance varies widely with the application.
IRJET - Analytical Study of Hierarchical Routing Protocols for Virtual Wi... (IRJET Journal)
This document analyzes and compares several hierarchical routing protocols for virtual wireless sensor networks (VWSNs). It first provides background on VWSNs and how virtualization allows a single physical sensor network to serve multiple applications simultaneously. It then reviews several common cluster-based routing protocols for wireless sensor networks, including LEACH, ModLEACH, SEP, and ZSEP. Through simulation and analysis of network lifetime, load balancing, energy consumption, and packets received, the document aims to provide insights on how well different routing protocols can be utilized for VWSNs under various conditions.
ACCELERATED DEEP LEARNING INFERENCE FROM CONSTRAINED EMBEDDED DEVICES (IAEME Publication)
Hardware looping is a feature of some processor instruction sets in which the hardware repeats the body of a loop automatically, rather than requiring software instructions, which take up cycles (and therefore time) to do so. Loop unrolling is a loop transformation that attempts to improve a program's execution speed at the expense of its binary size, an approach known as a space-time tradeoff. A convolutional neural network is implemented with simple loops, with hardware looping, with loop unrolling, and with both hardware looping and loop unrolling, and the variants are compared to evaluate the effectiveness of the two techniques. Hardware loops alone contribute a reduction in cycle count, while the combination of hardware loops and dot-product instructions decreases the clock-cycle count further. The CNN is simulated in Xilinx Vivado 2021.1 running on a Zynq-7000 FPGA.
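The space-time tradeoff of loop unrolling can be sketched with a dot product (shown in Python for clarity; in practice the transformation is applied to C or assembly on the target core, where the saved loop-control instructions actually matter):

```python
def dot_plain(a, b):
    """Baseline: one multiply-accumulate plus loop control per iteration."""
    s = 0
    for i in range(len(a)):
        s += a[i] * b[i]
    return s

def dot_unrolled4(a, b):
    """Body repeated 4x per iteration: fewer loop-control steps per
    element, at the cost of a larger body (the space-time tradeoff)."""
    s = 0
    n = len(a)
    i = 0
    while i + 4 <= n:
        s += a[i]*b[i] + a[i+1]*b[i+1] + a[i+2]*b[i+2] + a[i+3]*b[i+3]
        i += 4
    while i < n:          # remainder loop for n not divisible by 4
        s += a[i] * b[i]
        i += 1
    return s

a = [1, 2, 3, 4, 5, 6, 7]
b = [7, 6, 5, 4, 3, 2, 1]
print(dot_plain(a, b), dot_unrolled4(a, b))  # 84 84
```

Hardware looping removes the remaining loop-control overhead as well, which is why the paper evaluates the two techniques both separately and combined.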
Implementation of the Open Source Virtualization Technologies in Cloud Computing (neirew J)
This document summarizes the implementation of open source virtualization technologies in cloud computing. It discusses setting up a 3 node cluster using KVM as the hypervisor with Debian GNU/Linux 7 as the base operating system. Key steps included installing Ganeti software, configuring LVM and VLAN networking, adding nodes to the cluster from the master node, and enabling DRBD for redundant storage across nodes. The goal was to create a basic virtualized infrastructure using open source tools to demonstrate cloud computing concepts.
Implementation of the Open Source Virtualization Technologies in Cloud Computing (ijccsa)
“Virtualization and Cloud Computing” is a recent buzzword in the digital world. Behind this fancy
poetic phrase lies a true picture of future computing, in both technical and social perspective.
Though virtualization and cloud computing are recent, the idea of centralizing computation and
storage in distributed data centres maintained by third-party companies is not new; it emerged in
the 1990s along with distributed computing approaches like grid computing, clustering and network
load balancing. Cloud computing provides IT as a service to users on an on-demand basis. This
service has greater flexibility, availability, reliability and scalability with a utility computing
model. This new concept of computing has immense potential to be used in the field of e-governance
and in the overall IT development perspective in developing countries like Bangladesh.
Remote temperature and humidity monitoring system using wireless sensor networks (eSAT Journals)
Abstract Today’s world has become very advanced, with smart appliances and devices like laptops, tablets, televisions and smart phones, whose usage has been increasing enormously in our day-to-day life, driven by technology advancements in digital electronics and micro-electro-mechanical systems. In this scenario the most important role is played by Wireless Sensor Networks, which are being developed and used in heterogeneous fields and in several contexts: home automation, process control systems and health monitoring systems all make wide use of wireless sensor networks. With a WSN we can moreover monitor environments and their conditions. We design a protocol to monitor environmental temperature and humidity under different conditions. The architecture is simple to construct, easy to implement, and has the advantage of low power consumption. The aim of our paper is to describe and show how to create a simple protocol for environment monitoring using a wireless development kit, based on Crossbow motes and NesC programming. Keywords: Motes, WSN, sensor, TinyOS, NesC.
This document summarizes and compares different operating systems and simulation tools for wireless sensor networks. It discusses TinyOS, LiteOS, and MANTIS OS as examples of operating systems designed for wireless sensor networks. It also outlines some of the key features and considerations of simulating wireless sensor networks, such as the need for simulation given the resource constraints of sensor nodes. The document provides an overview of different operating systems and simulation environments used for wireless sensor network research.
A modified clustered based routing protocol to secure wireless sensor network... (eSAT Journals)
Abstract A Wireless Sensor Network (WSN) comprises sensor nodes, which form a network by building connections wirelessly to send sensed information from source to destination. Routing protocols are used to form valid and shortest routes between a starting node (source) and an ending node (destination). Many WSN applications use hierarchical routing protocols. Low Energy Adaptive Clustering Hierarchy (LEACH) is the first hierarchical routing protocol; it follows the principle of forming clusters of nodes and randomly picking a cluster head among the nodes for inter-cluster communication. This style of cluster-head election is open to attack by an adversary node, hence a Modified LEACH algorithm is proposed in this paper. The performance of LEACH and Modified LEACH is evaluated using different performance metrics, and Modified LEACH is found to be very effective in enhancing the overall performance of the WSN. MATLAB is used to simulate the scenario. Keywords: WSN, LEACH, Hierarchical protocols, Cluster Head.
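LEACH's randomized cluster-head election uses a well-known rotating threshold; a minimal sketch (the node IDs and the `rng` hook are illustrative, and the Modified LEACH hardening against adversarial nodes is not shown):

```python
import random

def leach_threshold(p, r):
    """LEACH threshold T(n) for election round r with desired
    cluster-head fraction p. Nodes that already served as head in
    the current epoch (every 1/p rounds) are assigned T(n) = 0."""
    return p / (1 - p * (r % round(1 / p)))

def elect_heads(node_ids, p, r, rng=random.random):
    """Each eligible node becomes a cluster head for this round when
    its private random draw falls below the round's threshold."""
    t = leach_threshold(p, r)
    return [n for n in node_ids if rng() < t]

# With p = 0.1 the threshold rises from 0.1 (round 0) toward 1.0
# (round 9), guaranteeing every node serves once per 10-round epoch.
print(leach_threshold(0.1, 0), leach_threshold(0.1, 9))
```

Because the draw is purely random, a malicious node can simply declare itself head; that is the weakness the Modified LEACH election addresses.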
Classification of Virtualization Environment for Cloud Computing (Souvik Pal)
Cloud Computing is a relatively new field gaining more popularity day by day for a wide range of
applications among Internet users. Virtualization plays a significant role in managing and coordinating
access from the resource pool to multiple virtual machines on which multiple heterogeneous applications
are running. The various virtualization methodologies are of significant importance because they help to
overcome complex workloads, frequent application patching and updating, and multiple software
architectures. Although a lot of research and study has been conducted on virtualization, the range of
issues involved has mostly been presented in isolation. We have therefore made an attempt to present a
comprehensive survey of different aspects of virtualization. We present our classification of
virtualization methodologies and a brief explanation of each, based on their working principle and
underlying features.
An Intuitionistic Fuzzy Sets Implementation for Key Distribution in Hybrid Me... (IJAAS Team)
This document presents a new model for securing key distribution in wireless sensor networks (WSNs) using intuitionistic fuzzy sets. The model determines the appropriate length of an intermediate encryption key based on network parameters like node count, node authentication history, trusted neighbor count, key change frequency, and desired key length. An intuitionistic fuzzy sets implementation assigns membership and non-membership values to each parameter. Based on the combinations of parameter values, intuitionistic rules output a session key scale and corresponding key length ranging from very low to very high security. The model was experimentally shown to efficiently create and distribute keys compared to other models.
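The membership/non-membership mechanics of intuitionistic fuzzy sets can be sketched as follows (a toy illustration: the aggregation rule, the parameter values and the 0..1 key scale are assumptions, not the paper's rule base):

```python
def hesitation(mu, nu):
    """An intuitionistic fuzzy element carries a membership value mu
    and a non-membership value nu with mu + nu <= 1; the remainder
    is the hesitation (uncertainty) margin."""
    assert 0.0 <= mu and 0.0 <= nu and mu + nu <= 1.0
    return 1.0 - (mu + nu)

def key_scale(params):
    """Toy aggregation: average net support (mu - nu) across the
    network parameters, mapped onto a 0..1 session-key scale that
    a rule base would translate into a key length."""
    support = [mu - nu for mu, nu in params]
    return (sum(support) / len(support) + 1.0) / 2.0

# Hypothetical (mu, nu) pairs for node count, authentication history,
# trusted-neighbour count, key-change frequency and desired key length.
params = [(0.8, 0.1), (0.6, 0.3), (0.9, 0.0), (0.5, 0.4), (0.7, 0.2)]
print(hesitation(0.8, 0.1), key_scale(params))
```

The hesitation margin is what distinguishes intuitionistic fuzzy sets from ordinary fuzzy sets, where nu is implicitly 1 - mu and the margin is always zero.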
VIRTUAL ARCHITECTURE AND ENERGY-EFFICIENT ROUTING PROTOCOLS FOR 3D WIRELESS SE... (ijwmn)
This paper proposes a virtual architecture for three-dimensional (3D) wireless sensor networks (WSNs), a
dynamic coordinate system, and a scalable energy-efficient training protocol for collections of nodes
deployed in the space that are initially anonymous, asynchronous, and unaware of their initial location.
The 3D WSNs considered comprise massively deployed tiny energy-constrained commodity sensors and
one or more sink nodes that provide an interface to the outside world. The proposed architecture is a
generalization of a two-dimensional virtual architecture previously proposed in the literature, in which a
flexible and intuitive coordinate system is imposed onto the deployment area and the anonymous nodes are
partitioned into clusters where data can be gathered from the environment and synthesized under local
control. The architecture solves the hidden sensors problem that occurs because of irregularities in rugged
deployment areas or environments containing buildings by training the network of nodes arbitrarily
dispersed in the 3D space. In addition, we derive two simple and energy-efficient routing protocols,
respectively for dense and sparse networks, based on the proposed dynamic coordinate system. They are
used to minimize the power expended in collecting and routing data to the sink node, thus increasing the
lifetime of the network.
Concepts and evolution of research in the field of wireless sensor networks (IJCNCJournal)
This document provides a comprehensive review of research in the field of wireless sensor networks (WSNs). It begins with an introduction to WSNs and discusses sensor nodes, network architectures, communication protocols, applications, and open research issues. It then reviews several related surveys that focus on specific aspects of WSNs such as applications, energy consumption techniques, security protocols, and testbeds. The document aims to provide an overview of all aspects of WSN research and identify promising directions like bio-inspired solutions. It discusses sensors and network types, architectures and services, and challenges like fault tolerance.
IRJET- An Overview of Slum Rehabilitation by IN-SITU Technique (IRJET Journal)
This document discusses integrating wireless sensor networks with cloud computing using middleware services. It begins by providing background on cloud computing and wireless sensor networks, and describes how merging the two can allow for easy management of remotely connected sensor nodes and generated data. A model is proposed that uses middleware between the wireless sensor network and cloud for data compatibility, bandwidth management, security and connectivity. Networked control systems are presented as an example middleware that can collect sensor data, utilize cloud services for processing, and provide information to different types of users.
IRJET- Integrating Wireless Sensor Networks with Cloud Computing and Emerging... (IRJET Journal)
This document discusses integrating wireless sensor networks with cloud computing through the use of middleware services. It proposes a model that combines wireless sensor networks and cloud computing, allowing for easy management of remotely connected sensor nodes and the data they generate. The model uses middleware as an intermediary layer between the wireless sensor networks and cloud to provide data compatibility, bandwidth management, security, and connectivity. It describes how sensor data can be collected via heterogeneous wireless networks, additional computational capabilities provided through cloud services, and information delivered to different types of end users through a networked control system. Load balancing of the cloud computing environment is achieved using a honey bee foraging strategy algorithm.
Analysis of programming aspects of wireless sensor networks (iaemedu)
This document discusses programming aspects of wireless sensor networks and non-uniformity issues. It analyzes programming efforts for developing test cases using wireless sensor network solutions versus traditional tools. It also explores system performance based on metrics like overhead, energy usage, and resource utilization. Key frameworks discussed include WiSeKit for adaptive applications, Remora for component-based programming, and extensions that enable distributed sensor services, dynamic reconfiguration, and integration with Internet systems.
This document summarizes a paper on the challenges of wireless sensor networks, with a focus on time synchronization issues. It discusses how wireless sensor networks face many constraints including limited energy, bandwidth, and resources. It also outlines various challenges such as security, deployment, and design constraints. The document then discusses the importance of time synchronization for applications requiring coordination between sensor nodes. It describes issues that can cause clocks to drift like clock skew. It also analyzes different communication methods and synchronization protocols for wireless sensor networks, comparing their advantages and disadvantages.
Similar to TASK & RESOURCE SELF-ADAPTIVE EMBEDDED REAL-TIME OPERATING SYSTEM MICROKERNEL FOR WIRELESS SENSOR NODES
ANALYSIS OF LAND SURFACE DEFORMATION GRADIENT BY DINSAR (cscpconf)
The progressive development of Synthetic Aperture Radar (SAR) systems diversifies the exploitation of the images they generate in different applications of geoscience. Detection and monitoring of surface deformations produced by various phenomena have benefited from this evolution and have been realized with interferometry (InSAR) and differential interferometry (DInSAR) techniques. Nevertheless, spatial and temporal decorrelation of the interferometric couples used strongly limits the precision of the results obtained with these techniques. In this context, we propose a methodological approach for detecting and analysing surface deformation with differential interferograms, to show the limits of this technique according to noise quality and level. The detectability model is generated from the deformation signatures by simulating a linear fault merged into image couples from the ERS1/ERS2 sensors acquired over a region of the Algerian south.
4D AUTOMATIC LIP-READING FOR SPEAKER'S FACE IDENTIFICATION (cscpconf)
A novel trajectory-guided, concatenative approach for synthesizing high-quality, photorealistic video from real image samples is proposed. The automated lip-reading system searches the library for the real image sample sequence closest to the trajectory predicted by an HMM from the video data. The object trajectory is obtained by projecting the face patterns into a KDA feature space. For speaker face identification, the identity surface of a subject's face is synthesized from a small sample of patterns that sparsely cover the view sphere. A KDA algorithm is used to discriminate the lip-reading images; the fundamental lip feature vector is then reduced to a low-dimensional representation using the 2D-DCT. The dimensionality of the mouth area is further reduced with PCA to obtain the eigen-lips approach proposed in [33]. The subjective performance results of the cost function under the automatic lip-reading model did not illustrate superior performance of the method.
MOVING FROM WATERFALL TO AGILE PROCESS IN SOFTWARE ENGINEERING CAPSTONE PROJE... (cscpconf)
Universities offer a software engineering capstone course to simulate a real-world working environment in which students can work in a team for a fixed period to deliver a quality product. The objective of this paper is to report on our experience in moving from a Waterfall process to an Agile process in conducting the software engineering capstone project. We present the capstone course designs for both the Waterfall-driven and Agile-driven methodologies, highlighting the structure, deliverables and assessment plans. To evaluate the improvement, we conducted a survey across two different sections taught by two different instructors to evaluate students' experience in moving from the traditional Waterfall model to an Agile-like process. Twenty-eight students filled in the survey, which consisted of eight multiple-choice questions and an open-ended question to collect feedback. The survey results show that students were able to gain hands-on experience that simulates a real-world working environment. The results also show that the Agile approach helped students to produce an overall better design and avoid mistakes made in the initial design completed in the first phase of the capstone project. In addition, they were able to assess their team capabilities and training needs, and thus learn the required technologies earlier, which is reflected in the final product quality.
PROMOTING STUDENT ENGAGEMENT USING SOCIAL MEDIA TECHNOLOGIES (cscpconf)
This document discusses using social media technologies to promote student engagement in a software project management course. It describes the course and objectives of enhancing communication. It discusses using Facebook for 4 years, then switching to WhatsApp based on student feedback, and finally introducing Slack to enable personalized team communication. Surveys found students engaged and satisfied with all three tools, though less familiar with Slack. The conclusion is that social media promotes engagement but familiarity with the tool also impacts satisfaction.
A SURVEY ON QUESTION ANSWERING SYSTEMS: THE ADVANCES OF FUZZY LOGIC (cscpconf)
Using a computer to answer questions has been a human dream since the beginning of the digital era. Question-answering systems are referred to as intelligent systems that provide responses to the questions asked by a user based on facts or rules stored in a knowledge base, and they can generate answers to questions asked in natural language. One of the first main ideas of fuzzy logic was to work on the problem of computer understanding of natural language. This survey paper therefore provides an overview of what question answering is, its system architecture, and its possible relationship and differences with fuzzy logic, as well as previous related research with respect to the approaches that were followed. At the end, the survey provides an analytical discussion of the proposed QA models, alone or combined with fuzzy logic, and their main contributions and limitations.
DYNAMIC PHONE WARPING – A METHOD TO MEASURE THE DISTANCE BETWEEN PRONUNCIATIONS cscpconf
Human beings generate different speech waveforms while speaking the same word at different times. Also, different human beings have different accents and generate significantly varying speech waveforms for the same word. There is a need to measure the distances between various words, which facilitates the preparation of pronunciation dictionaries. A new algorithm called Dynamic Phone Warping (DPW) is presented in this paper. It uses dynamic programming for global alignment and shortest-distance measurement. The DPW algorithm can be used to enhance the pronunciation dictionaries of well-known languages like English, or to build pronunciation dictionaries for less-known sparse languages. The precision measurement experiments show 88.9% accuracy.
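The global-alignment dynamic programming DPW relies on can be sketched as an edit-distance computation over phone sequences (the unit costs and the example pronunciations are illustrative, not the paper's exact settings):

```python
def phone_distance(seq1, seq2, sub_cost=1, indel_cost=1):
    """Global alignment distance between two phone sequences via
    Needleman-Wunsch-style dynamic programming: d[i][j] is the
    cheapest way to align the first i phones of seq1 with the
    first j phones of seq2."""
    m, n = len(seq1), len(seq2)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * indel_cost          # delete all of seq1[:i]
    for j in range(1, n + 1):
        d[0][j] = j * indel_cost          # insert all of seq2[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if seq1[i - 1] == seq2[j - 1] else sub_cost
            d[i][j] = min(d[i - 1][j] + indel_cost,   # deletion
                          d[i][j - 1] + indel_cost,   # insertion
                          d[i - 1][j - 1] + cost)     # match/substitute
    return d[m][n]

# Two "tomato" pronunciations differing in one vowel phone:
print(phone_distance("t ah m ey t ow".split(),
                     "t ah m aa t ow".split()))  # 1
```

In a full system the substitution cost would typically depend on phonetic similarity between the two phones rather than being a flat 1.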
INTELLIGENT ELECTRONIC ASSESSMENT FOR SUBJECTIVE EXAMS (cscpconf)
In education, the use of electronic (E) examination systems is not a novel idea, as E-examination systems have been used to conduct objective assessments for the last few years. This research deals with randomly designed E-examinations and proposes an E-assessment system that can be used for subjective questions. The system assesses answers to subjective questions by finding a matching ratio between the keywords in the instructor's and the student's answers. The matching ratio is computed from semantic and document similarity. The assessment system is composed of four modules: preprocessing, keyword expansion, matching, and grading. A survey and a case study were used in the research design to validate the proposed system. The examination assessment system will help instructors save time, costs, and resources, while increasing efficiency and improving the productivity of exam setting and assessment.
TWO DISCRETE BINARY VERSIONS OF AFRICAN BUFFALO OPTIMIZATION METAHEURISTICcscpconf
African Buffalo Optimization (ABO) is one of the most recent swarms intelligence based metaheuristics. ABO algorithm is inspired by the buffalo’s behavior and lifestyle. Unfortunately, the standard ABO algorithm is proposed only for continuous optimization problems. In this paper, the authors propose two discrete binary ABO algorithms to deal with binary optimization problems. In the first version (called SBABO) they use the sigmoid function and probability model to generate binary solutions. In the second version (called LBABO) they use some logical operator to operate the binary solutions. Computational results on two knapsack problems (KP and MKP) instances show the effectiveness of the proposed algorithm and their ability to achieve good and promising solutions.
DETECTION OF ALGORITHMICALLY GENERATED MALICIOUS DOMAINcscpconf
In recent years, many malware writers have relied on Dynamic Domain Name Services (DDNS) to maintain their Command and Control (C&C) network infrastructure to ensure a persistence presence on a compromised host. Amongst the various DDNS techniques, Domain Generation Algorithm (DGA) is often perceived as the most difficult to detect using traditional methods. This paper presents an approach for detecting DGA using frequency analysis of the character distribution and the weighted scores of the domain names. The approach’s feasibility is demonstrated using a range of legitimate domains and a number of malicious algorithmicallygenerated domain names. Findings from this study show that domain names made up of English characters “a-z” achieving a weighted score of < 45 are often associated with DGA. When a weighted score of < 45 is applied to the Alexa one million list of domain names, only 15% of the domain names were treated as non-human generated.
GLOBAL MUSIC ASSET ASSURANCE DIGITAL CURRENCY: A DRM SOLUTION FOR STREAMING C...cscpconf
The document proposes a blockchain-based digital currency and streaming platform called GoMAA to address issues of piracy in the online music streaming industry. Key points:
- GoMAA would use a digital token on the iMediaStreams blockchain to enable secure dissemination and tracking of streamed content. Content owners could control access and track consumption of released content.
- Original media files would be converted to a Secure Portable Streaming (SPS) format, embedding watermarks and smart contract data to indicate ownership and enable validation on the blockchain.
- A browser plugin would provide wallets for fans to collect GoMAA tokens as rewards for consuming content, incentivizing participation and addressing royalty discrepancies by recording
IMPORTANCE OF VERB SUFFIX MAPPING IN DISCOURSE TRANSLATION SYSTEMcscpconf
This document discusses the importance of verb suffix mapping in discourse translation from English to Telugu. It explains that after anaphora resolution, the verbs must be changed to agree with the gender, number, and person features of the subject or anaphoric pronoun. Verbs in Telugu inflect based on these features, while verbs in English only inflect based on number and person. Several examples are provided that demonstrate how the Telugu verb changes based on whether the subject or pronoun is masculine, feminine, neuter, singular or plural. Proper verb suffix mapping is essential for generating natural and coherent translations while preserving the context and meaning of the original discourse.
EXACT SOLUTIONS OF A FAMILY OF HIGHER-DIMENSIONAL SPACE-TIME FRACTIONAL KDV-T...cscpconf
In this paper, based on the definition of conformable fractional derivative, the functional
variable method (FVM) is proposed to seek the exact traveling wave solutions of two higherdimensional
space-time fractional KdV-type equations in mathematical physics, namely the
(3+1)-dimensional space–time fractional Zakharov-Kuznetsov (ZK) equation and the (2+1)-
dimensional space–time fractional Generalized Zakharov-Kuznetsov-Benjamin-Bona-Mahony
(GZK-BBM) equation. Some new solutions are procured and depicted. These solutions, which
contain kink-shaped, singular kink, bell-shaped soliton, singular soliton and periodic wave
solutions, have many potential applications in mathematical physics and engineering. The
simplicity and reliability of the proposed method is verified.
AUTOMATED PENETRATION TESTING: AN OVERVIEWcscpconf
The document discusses automated penetration testing and provides an overview. It compares manual and automated penetration testing, noting that automated testing allows for faster, more standardized and repeatable tests but has limitations in developing new exploits. It also reviews some current automated penetration testing methodologies and tools, including those using HTTP/TCP/IP attacks, linking common scanning tools, a Python-based tool targeting databases, and one using POMDPs for multi-step penetration test planning under uncertainty. The document concludes that automated testing is more efficient than manual for known vulnerabilities but cannot replace manual testing for discovering new exploits.
CLASSIFICATION OF ALZHEIMER USING fMRI DATA AND BRAIN NETWORKcscpconf
Since the mid of 1990s, functional connectivity study using fMRI (fcMRI) has drawn increasing
attention of neuroscientists and computer scientists, since it opens a new window to explore
functional network of human brain with relatively high resolution. BOLD technique provides
almost accurate state of brain. Past researches prove that neuro diseases damage the brain
network interaction, protein- protein interaction and gene-gene interaction. A number of
neurological research paper also analyse the relationship among damaged part. By
computational method especially machine learning technique we can show such classifications.
In this paper we used OASIS fMRI dataset affected with Alzheimer’s disease and normal
patient’s dataset. After proper processing the fMRI data we use the processed data to form
classifier models using SVM (Support Vector Machine), KNN (K- nearest neighbour) & Naïve
Bayes. We also compare the accuracy of our proposed method with existing methods. In future,
we will other combinations of methods for better accuracy.
VALIDATION METHOD OF FUZZY ASSOCIATION RULES BASED ON FUZZY FORMAL CONCEPT AN...cscpconf
The document proposes a new validation method for fuzzy association rules based on three steps: (1) applying the EFAR-PN algorithm to extract a generic base of non-redundant fuzzy association rules using fuzzy formal concept analysis, (2) categorizing the extracted rules into groups, and (3) evaluating the relevance of the rules using structural equation modeling, specifically partial least squares. The method aims to address issues with existing fuzzy association rule extraction algorithms such as large numbers of extracted rules, redundancy, and difficulties with manual validation.
PROBABILITY BASED CLUSTER EXPANSION OVERSAMPLING TECHNIQUE FOR IMBALANCED DATAcscpconf
In many applications of data mining, class imbalance is noticed when examples in one class are
overrepresented. Traditional classifiers result in poor accuracy of the minority class due to the
class imbalance. Further, the presence of within class imbalance where classes are composed of
multiple sub-concepts with different number of examples also affect the performance of
classifier. In this paper, we propose an oversampling technique that handles between class and
within class imbalance simultaneously and also takes into consideration the generalization
ability in data space. The proposed method is based on two steps- performing Model Based
Clustering with respect to classes to identify the sub-concepts; and then computing the
separating hyperplane based on equal posterior probability between the classes. The proposed
method is tested on 10 publicly available data sets and the result shows that the proposed
method is statistically superior to other existing oversampling methods.
CHARACTER AND IMAGE RECOGNITION FOR DATA CATALOGING IN ECOLOGICAL RESEARCHcscpconf
Data collection is an essential, but manpower intensive procedure in ecological research. An
algorithm was developed by the author which incorporated two important computer vision
techniques to automate data cataloging for butterfly measurements. Optical Character
Recognition is used for character recognition and Contour Detection is used for imageprocessing.
Proper pre-processing is first done on the images to improve accuracy. Although
there are limitations to Tesseract’s detection of certain fonts, overall, it can successfully identify
words of basic fonts. Contour detection is an advanced technique that can be utilized to
measure an image. Shapes and mathematical calculations are crucial in determining the precise
location of the points on which to draw the body and forewing lines of the butterfly. Overall,
92% accuracy were achieved by the program for the set of butterflies measured.
SOCIAL MEDIA ANALYTICS FOR SENTIMENT ANALYSIS AND EVENT DETECTION IN SMART CI...cscpconf
Smart cities utilize Internet of Things (IoT) devices and sensors to enhance the quality of the city
services including energy, transportation, health, and much more. They generate massive
volumes of structured and unstructured data on a daily basis. Also, social networks, such as
Twitter, Facebook, and Google+, are becoming a new source of real-time information in smart
cities. Social network users are acting as social sensors. These datasets so large and complex
are difficult to manage with conventional data management tools and methods. To become
valuable, this massive amount of data, known as 'big data,' needs to be processed and
comprehended to hold the promise of supporting a broad range of urban and smart cities
functions, including among others transportation, water, and energy consumption, pollution
surveillance, and smart city governance. In this work, we investigate how social media analytics
help to analyze smart city data collected from various social media sources, such as Twitter and
Facebook, to detect various events taking place in a smart city and identify the importance of
events and concerns of citizens regarding some events. A case scenario analyses the opinions of
users concerning the traffic in three largest cities in the UAE
SOCIAL NETWORK HATE SPEECH DETECTION FOR AMHARIC LANGUAGEcscpconf
The anonymity of social networks makes it attractive for hate speech to mask their criminal
activities online posing a challenge to the world and in particular Ethiopia. With this everincreasing
volume of social media data, hate speech identification becomes a challenge in
aggravating conflict between citizens of nations. The high rate of production, has become
difficult to collect, store and analyze such big data using traditional detection methods. This
paper proposed the application of apache spark in hate speech detection to reduce the
challenges. Authors developed an apache spark based model to classify Amharic Facebook
posts and comments into hate and not hate. Authors employed Random forest and Naïve Bayes
for learning and Word2Vec and TF-IDF for feature selection. Tested by 10-fold crossvalidation,
the model based on word2vec embedding performed best with 79.83%accuracy. The
proposed method achieve a promising result with unique feature of spark for big data.
GENERAL REGRESSION NEURAL NETWORK BASED POS TAGGING FOR NEPALI TEXTcscpconf
This article presents Part of Speech tagging for Nepali text using General Regression Neural
Network (GRNN). The corpus is divided into two parts viz. training and testing. The network is
trained and validated on both training and testing data. It is observed that 96.13% words are
correctly being tagged on training set whereas 74.38% words are tagged correctly on testing
data set using GRNN. The result is compared with the traditional Viterbi algorithm based on
Hidden Markov Model. Viterbi algorithm yields 97.2% and 40% classification accuracies on
training and testing data sets respectively. GRNN based POS Tagger is more consistent than the
traditional Viterbi decoding technique.
Northern Engraving | Modern Metal Trim, Nameplates and Appliance PanelsNorthern Engraving
What began over 115 years ago as a supplier of precision gauges to the automotive industry has evolved into being an industry leader in the manufacture of product branding, automotive cockpit trim and decorative appliance trim. Value-added services include in-house Design, Engineering, Program Management, Test Lab and Tool Shops.
LF Energy Webinar: Carbon Data Specifications: Mechanisms to Improve Data Acc...DanBrown980551
This LF Energy webinar took place June 20, 2024. It featured:
-Alex Thornton, LF Energy
-Hallie Cramer, Google
-Daniel Roesler, UtilityAPI
-Henry Richardson, WattTime
In response to the urgency and scale required to effectively address climate change, open source solutions offer significant potential for driving innovation and progress. Currently, there is a growing demand for standardization and interoperability in energy data and modeling. Open source standards and specifications within the energy sector can also alleviate challenges associated with data fragmentation, transparency, and accessibility. At the same time, it is crucial to consider privacy and security concerns throughout the development of open source platforms.
This webinar will delve into the motivations behind establishing LF Energy’s Carbon Data Specification Consortium. It will provide an overview of the draft specifications and the ongoing progress made by the respective working groups.
Three primary specifications will be discussed:
-Discovery and client registration, emphasizing transparent processes and secure and private access
-Customer data, centering around customer tariffs, bills, energy usage, and full consumption disclosure
-Power systems data, focusing on grid data, inclusive of transmission and distribution networks, generation, intergrid power flows, and market settlement data
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
In our second session, we shall learn all about the main features and fundamentals of UiPath Studio that enable us to use the building blocks for any automation project.
📕 Detailed agenda:
Variables and Datatypes
Workflow Layouts
Arguments
Control Flows and Loops
Conditional Statements
💻 Extra training through UiPath Academy:
Variables, Constants, and Arguments in Studio
Control Flow in Studio
How information systems are built or acquired puts information, which is what they should be about, in a secondary place. Our language adapted accordingly, and we no longer talk about information systems but applications. Applications evolved in a way to break data into diverse fragments, tightly coupled with applications and expensive to integrate. The result is technical debt, which is re-paid by taking even bigger "loans", resulting in an ever-increasing technical debt. Software engineering and procurement practices work in sync with market forces to maintain this trend. This talk demonstrates how natural this situation is. The question is: can something be done to reverse the trend?
In the realm of cybersecurity, offensive security practices act as a critical shield. By simulating real-world attacks in a controlled environment, these techniques expose vulnerabilities before malicious actors can exploit them. This proactive approach allows manufacturers to identify and fix weaknesses, significantly enhancing system security.
This presentation delves into the development of a system designed to mimic Galileo's Open Service signal using software-defined radio (SDR) technology. We'll begin with a foundational overview of both Global Navigation Satellite Systems (GNSS) and the intricacies of digital signal processing.
The presentation culminates in a live demonstration. We'll showcase the manipulation of Galileo's Open Service pilot signal, simulating an attack on various software and hardware systems. This practical demonstration serves to highlight the potential consequences of unaddressed vulnerabilities, emphasizing the importance of offensive security practices in safeguarding critical infrastructure.
Session 1 - Intro to Robotic Process Automation.pdfUiPathCommunity
👉 Check out our full 'Africa Series - Automation Student Developers (EN)' page to register for the full program:
https://bit.ly/Automation_Student_Kickstart
In this session, we shall introduce you to the world of automation, the UiPath Platform, and guide you on how to install and setup UiPath Studio on your Windows PC.
📕 Detailed agenda:
What is RPA? Benefits of RPA?
RPA Applications
The UiPath End-to-End Automation Platform
UiPath Studio CE Installation and Setup
💻 Extra training through UiPath Academy:
Introduction to Automation
UiPath Business Automation Platform
Explore automation development with UiPath Studio
👉 Register here for our upcoming Session 2 on June 20: Introduction to UiPath Studio Fundamentals: https://community.uipath.com/events/details/uipath-lagos-presents-session-2-introduction-to-uipath-studio-fundamentals/
"What does it really mean for your system to be available, or how to define w...Fwdays
We will talk about system monitoring from a few different angles. We will start by covering the basics, then discuss SLOs, how to define them, and why understanding the business well is crucial for success in this exercise.
"Frontline Battles with DDoS: Best practices and Lessons Learned", Igor IvaniukFwdays
At this talk we will discuss DDoS protection tools and best practices, discuss network architectures and what AWS has to offer. Also, we will look into one of the largest DDoS attacks on Ukrainian infrastructure that happened in February 2022. We'll see, what techniques helped to keep the web resources available for Ukrainians and how AWS improved DDoS protection for all customers based on Ukraine experience
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip , presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
"Scaling RAG Applications to serve millions of users", Kevin GoedeckeFwdays
How we managed to grow and scale a RAG application from zero to thousands of users in 7 months. Lessons from technical challenges around managing high load for LLMs, RAGs and Vector databases.
Freshworks Rethinks NoSQL for Rapid Scaling & Cost-EfficiencyScyllaDB
Freshworks creates AI-boosted business software that helps employees work more efficiently and effectively. Managing data across multiple RDBMS and NoSQL databases was already a challenge at their current scale. To prepare for 10X growth, they knew it was time to rethink their database strategy. Learn how they architected a solution that would simplify scaling while keeping costs under control.
"Choosing proper type of scaling", Olena SyrotaFwdays
Imagine an IoT processing system that is already quite mature and production-ready and for which client coverage is growing and scaling and performance aspects are life and death questions. The system has Redis, MongoDB, and stream processing based on ksqldb. In this talk, firstly, we will analyze scaling approaches and then select the proper ones for our system.
What is an RPA CoE? Session 1 – CoE VisionDianaGray10
In the first session, we will review the organization's vision and how this has an impact on the COE Structure.
Topics covered:
• The role of a steering committee
• How do the organization’s priorities determine CoE Structure?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
The Department of Veteran Affairs (VA) invited Taylor Paschal, Knowledge & Information Management Consultant at Enterprise Knowledge, to speak at a Knowledge Management Lunch and Learn hosted on June 12, 2024. All Office of Administration staff were invited to attend and received professional development credit for participating in the voluntary event.
The objectives of the Lunch and Learn presentation were to:
- Review what KM ‘is’ and ‘isn’t’
- Understand the value of KM and the benefits of engaging
- Define and reflect on your “what’s in it for me?”
- Share actionable ways you can participate in Knowledge - - Capture & Transfer
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application...Alex Pruden
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty, is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
The Microsoft 365 Migration Tutorial For Beginner.pptxoperationspcvita
This presentation will help you understand the power of Microsoft 365. However, we have mentioned every productivity app included in Office 365. Additionally, we have suggested the migration situation related to Office 365 and how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
76 Computer Science & Information Technology (CS & IT)
3. To update code remotely and dynamically.
4. To provide security services on resource-constrained nodes.
This paper presents a novel task and resource self-adaptive real-time microkernel for wireless
sensor nodes. The kernel provides a hybrid programming model that combines the benefits of the
event-driven and thread-driven models. Correspondingly, the kernel adopts a two-level
event-and-thread scheduling strategy to support real-time multi-tasking. Moreover, to let nodes
cope with new tasks and adjust their behaviour in different environments, the kernel provides an
over-the-air reprogramming mechanism to update the code of the OS dynamically. The kernel also
provides a security service to defend against attacks and ensure the normal operation of nodes.
The rest of this paper is organized as follows: Section 2 discusses related work on WSN OS
design. Section 3 describes the kernel in detail, including its architecture, hybrid programming
model, scheduling strategy, run-time updating mechanism, and security policy. Section 4
evaluates the performance and overheads. Finally, Section 5 gives a brief conclusion.
2. RELATED WORK
At present, there are two research methodologies for WSN OSs. One is to streamline a
general-purpose RTOS (such as µC/OS-II [1], VxWorks [2], or Windows CE [3]), but these are not
suitable for resource-constrained nodes. The other is to develop a dedicated OS according to the
characteristics of WSNs, such as TinyOS [4], SOS [5], MantisOS [6], and Contiki [7]. However,
most dedicated WSN OSs are built around network protocols and do not support multi-tasking and
event-driven execution at the same time.
2.1. Architecture
The kernel architecture influences the size of the OS kernel as well as the way it provides
services to application programs. For resource-constrained sensor nodes, we must take
performance and flexibility into consideration at the same time.

A WSN OS should have an architecture that results in a small kernel size, and hence a small
memory footprint. The architecture must also be flexible, that is, only application-required
services get loaded onto the system [8]. Table 1 summarizes the features of the three mainstream
architectures [9].
Table 1. Features of the three mainstream architectures.

Monolithic
• a single image for the node
• not suitable for frequent changes

Modular
• organized as independent modules
• fits well with reconfiguration
• extra overhead of loading and unloading modules

Virtual machine
• the whole network of nodes is a single entity
• gives flexibility in developing applications
2.2. Programming Model
An appropriate programming model not only facilitates software development but also promotes
good programming practices [10]. The choice between the event-driven and thread-driven models
has been debated in the WSN field for years. Table 2 shows the advantages and disadvantages of
the event-driven and thread-driven models, together with corresponding OSs [11].
Table 2. Event-driven vs. thread-driven models

Event-driven (TinyOS, SOS, Contiki)
• Advantages: concurrency with low resources; inexpensive scheduling;
complements the way networking protocols work; highly portable
• Disadvantages: the event loop is in control; events cannot be preempted
and do not support multi-tasking; bounded-buffer producer-consumer problem

Thread-driven (Contiki, MantisOS)
• Advantages: eliminates the bounded-buffer problem; automatic scheduling;
real-time performance; simulates parallel execution
• Disadvantages: complex shared memory; expensive context switches;
complex stack analysis
2.3. Run-time Remote-updating
Due to the dense and large-scale deployment of sensor nodes, we should provide a mechanism to
update nodes over the air: collecting nodes to apply updates is often tedious or even dangerous
[12]. Remote code-update schemes for sensor nodes face two key challenges: (1) resource
consumption and (2) integration into the system architecture. Existing approaches can be
classified into three main categories.
Full-Image Replacement: Techniques such as XNP [14] or Deluge [15] operate by disseminating a
new binary image of an application and the OS in the network. However, frequent full-image
replacements result in huge energy consumption and a short node lifetime.

Virtual Machines (VM): VMs can reduce the energy cost of disseminating new functionality in the
network, as VM code is commonly more compact than native code [16]. However, as mentioned in
Section 2.1, a VM is hard to realize on a resource-constrained sensor node.

Dynamic Module Loading: This is also called incremental update. Unlike full-image replacement,
the kernel is not modified; modules are just added or removed, as in Contiki, SOS, and TinyOS.
This mechanism reduces the update code size and energy consumption.
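The idea behind dynamic module loading can be sketched as a small module table that the kernel
consults at run time: only module images are disseminated and installed, while the kernel image
itself is never rewritten. This is a minimal illustrative sketch; the descriptor layout, the
`module_load` name, and the convention that id 0 marks a free slot are assumptions, not the
interface of any particular OS.

```c
#include <assert.h>
#include <stddef.h>

#define MAX_MODULES 4

/* Hypothetical module descriptor carried in an incremental update.
 * A module id of 0 marks an unused table slot (assumption). */
typedef struct {
    int id;                     /* nonzero module identifier */
    int version;
    void (*init)(void);        /* entry point resolved after loading */
} module_t;

static module_t mod_table[MAX_MODULES];

/* Install a new module or replace an existing one in place;
 * the kernel image stays untouched, only the table slot changes. */
static int module_load(module_t m)
{
    for (int i = 0; i < MAX_MODULES; i++) {
        if (mod_table[i].id == m.id || mod_table[i].id == 0) {
            mod_table[i] = m;
            return i;           /* slot the module was installed into */
        }
    }
    return -1;                  /* table full */
}
```

Replacing a module reuses its slot, which is what keeps an incremental update cheap compared to
a full-image replacement.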
2.4. Security Guarantee
WSNs suffer from many constraints, including low computation capability, small memory, limited
energy, susceptibility to physical capture, and the use of insecure wireless communication
channels. These constraints make security in WSNs a challenge.

At present, research on WSN security falls into five categories: cryptography, key management,
secure routing, secure data aggregation, and intrusion detection. Among them, cryptography and
key management are regarded as the core issues in WSN security.
3. TASK & RESOURCE SELF-ADAPTIVE OPERATING SYSTEM DESIGN
3.1. Microkernel Design
3.1.1. Architecture
As the kernel is task and resource self-adaptive, we must take performance and flexibility into
consideration at the same time, so the kernel adopts a modular structure, as illustrated below.

Figure 1. Architecture of the task-oriented and resource-aware operating system

Figure 1 shows the kernel structure of the task and resource self-adaptive WSN OS. The hardware
drivers are isolated from the kernel. Applications can access kernel functions through APIs and
system calls. Moreover, applications are loaded as modules for processing specific tasks. In
this way, the node can adapt to its environment automatically.
3.1.2. Hybrid programming model
As mentioned in Section 2, events cannot be preempted and do not support real-time
multi-tasking, while the thread-driven model consumes much energy on context switching. The
kernel integrates the advantages of the event-driven and thread-driven models into a hybrid
programming model. The kernel is defined as a set of events, and an event is in turn defined as
a set of threads, as shown in Equations (1) and (2):

K = {Ei : i = 1, 2, ···, n; E1 → E2 → ··· → En} (1)
Ei = {Ti,j : j = 1, 2, ···, n; Ti,1 // Ti,2 // ··· // Ti,n} (2)

Here K is an instance of the kernel, E represents an event, and T a thread. The symbol ‘→’
indicates the precedence operation and ‘//’ indicates the concurrent operation.

The feature of the hybrid programming model is that the node can easily switch its operation
mode between the two typical programming models according to the kind of task. If the event
number is set to 1, the kernel runs in the manner of the multi-thread model, as shown in
Figure 2, while if each event contains a single thread, the kernel runs in the manner of the
event-driven model.
The kernel thus adapts automatically to work in different modes for different tasks, which
greatly extends its scope of application.

Figure 2. Switching between the thread-driven and event-driven modes
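The structure behind Equations (1) and (2) can be sketched with two C types: the kernel holds an
ordered list of events, and each event holds a set of concurrent threads. The field names and
the two mode predicates below are illustrative assumptions, not the kernel's actual interface.

```c
#include <assert.h>
#include <stddef.h>

#define MAX_THREADS 4

/* An event E_i = {T_i,1 // ... // T_i,n}: a set of concurrent threads. */
typedef struct {
    int id;
    int nthreads;
    void (*thread_fn[MAX_THREADS])(void);
} event_t;

/* The kernel K = {E_1 -> E_2 -> ... -> E_n}: events run in sequence. */
typedef struct {
    int nevents;
    event_t *events;
} kernel_t;

/* One event containing all threads degenerates to the multi-thread model. */
static int runs_as_thread_model(const kernel_t *k)
{
    return k->nevents == 1;
}

/* One thread per event degenerates to the pure event-driven model. */
static int runs_as_event_model(const kernel_t *k)
{
    for (int i = 0; i < k->nevents; i++)
        if (k->events[i].nthreads != 1)
            return 0;
    return 1;
}
```

The mode switch in Figure 2 is then just a matter of how tasks are partitioned into events and
threads, with no change to the kernel itself.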
3.1.3. Communication Scheme
From the view of the operating system, communication refers to two parts: internal communication
within a single node and communication between nodes. In addition, most WSN OSs are designed for
a single node and do not support communication in a distributed network. Therefore, the kernel
proposes a scheme that unifies inter-process communication and communication between nodes. This
scheme is based on the “tuple space”, an idea from the parallel programming language Linda [17],
and uses “In/Out” system primitives to achieve communication between events, between threads,
between events/threads and peripherals, and between CPUs, as shown in Figure 3. Each tuple is
identified by a numeric identifier, which can be assigned to a local or distributed buffer, and
which the user can customize.
Figure 3. Tuple-based unified communication scheme
3.1.4. Scheduling Strategy
Based on hybrid programming model and tuple space, the kernel offers a two-level scheduling
strategy: ‘event-driven’ (scheduling for events) and ‘priority-based preemptive’ (scheduling for
threads).
Event management
Each event is associated with a unique tuple identified by a numeric identifier. A tuple has a private buffer space used to store messages or signals received from hardware or software interrupts. Each event also has a priority level, which is adapted automatically according to the number of messages pending in the event's tuple and the priority pre-defined by the programmer, as shown in Equation (3):
Event_next = select_event( tuple_msg_num, priority) (3)
The strategy is sensitive to memory consumption and provides a solution to the problem that events cannot be preempted. Thus, the kernel can respond rapidly to urgent tasks.
Thread scheduler
In comparison with events, threads have one additional state, 'Suspend'. If a thread's operating condition is not met, the thread is blocked and its state changes to 'Suspend'; once the condition is satisfied, the suspended thread is activated to resume running and its state changes to 'Ready'. Each thread is allocated a unique priority level, and a thread can be preempted by other threads in the same event. The kernel adopts a priority-based preemptive scheduling scheme: the scheduler decides the next thread to run by looking up the values of thread_priority and thread_timeslice, as shown in Equation (4):
thread_next = select_thread(priority, time_slice)   (4)
By switching between threads, the system can process multiple tasks concurrently. For periodic tasks such as data collection, sampling and interval routing, the kernel can be customized to the thread-driven mode and process them at the same time; if, however, the system is used to detect events, the kernel can be switched to the event-driven mode to make a rapid response.
3.2. Run-time over-the-air updating mechanism
As the kernel is task-oriented, nodes must provide a run-time over-the-air updating mechanism. It is described below at three levels, according to the granularity of the updated code.
Global variable modification
The kernel pre-defines many global variables that determine its operating routines. By modifying a global variable in RAM, the node can be switched to another operation mode. The kernel therefore maintains a variable table that keeps the information of each global variable, including its name and physical address. After the node reboots, the new mode is activated. A set of remote commands is designed to inform the node to update the global variables. This method consumes relatively little power because only a few bytes are transmitted and modified.
Dynamically loading Module
With the modular architecture, the kernel can load a new module or unload an existing one at run time without rebooting the node. Thus only the necessary modules need to be transmitted, which is an efficient way to reduce the update size and optimize power usage. The procedure can be divided into the following steps [18]:
1. Meta-Data Generation: the meta-data consists of information about the whole program, component-specific information, symbol information, and the relocation table.
2. Code Distribution: the new modules, together with their meta-data, are transmitted from the base station or the sink node.
3. Data Storage on the Node: once the modules with their meta-data are received at the node's radio interface, they are stored in the external FLASH memory.
4. Module Registration: the kernel merges the new module's meta-data with the old symbol table so that it can access the new module and invoke its functions.
5. Relocation Table Substitution: the kernel substitutes the old relocation table with the new one.
6. Reference Updating: through the new relocation table, the kernel updates the references to global variables in the data segment and to function addresses in the code segment.
Full image replacement
This level is designed for updating the whole kernel image: when the new version of the kernel differs greatly from the old one, the node erases the whole image and programs the new version into flash. For resource-constrained nodes the kernel is unlikely to be modified frequently, so full image replacement is rarely used.
3.3. Security policy
Due to the nature of wireless communication and unattended operation, a sensor node is prone to security attacks. The kernel provides a security policy for sensor nodes to guarantee their normal operation and to support reliable and efficient code distribution.
3.3.1. Cryptography
Traditional cryptographic algorithms fall into two categories: symmetric primitives (encryption algorithms such as RC4 and RC5, and hash functions such as SHA-1 and MD5) and asymmetric cryptography (such as RSA and ECC). The kernel adopts a two-level cryptography mechanism, as shown in Figure 4. Among the symmetric primitives, the hash algorithms (MD5 and SHA-1) incur almost an order of magnitude higher overhead than the encryption algorithms (RC4, RC5); thus RC5 is adopted for communication between resource-constrained nodes. Meanwhile RSA, a typical asymmetric algorithm, is applied for communication between nodes and the sink node or base station, which has adequate resources.
Figure 4. The two-level cryptography mechanism
3.3.2. Key Management
In sensor networks, end-to-end encryption is impractical because of the large number of communicating nodes and because each node is incapable of storing a large number of encryption keys. Assuming the network contains N sensor nodes, storing the keys of the other N-1 nodes would occupy a large share of a node's limited memory.
To reduce the memory the keys occupy, the kernel adopts a hop-by-hop encryption mechanism in which each sensor node stores only the encryption keys shared with its immediate neighbours. The keys stored on a node thus fall into two parts: the keys Kn shared with its neighbours and the key Ks shared with the base station or sink node. This mechanism reduces both the memory occupied by keys and the power consumed transmitting them.
4. EVALUATION
4.1. Sensor platform
The kernel has been tested on the AT91SAM7S evaluation board, which includes an AT91SAM7S256 processor (ARM7 core), 256 KB of Flash, 64 KB of SRAM, and peripherals such as UART, SPI and I2C. The RF module is an XBee-PRO [20] (ZigBee-based), which communicates with the processor over UART (RS-232 protocol).
4.2. Performance analysis
4.2.1. Memory usage
Table 3 shows the storage usage of the kernel. The kernel occupies no more than 5 KB and is thus suitable for resource-constrained sensor nodes. The code size of the whole image, which includes the kernel, hardware drivers and applications, is less than 30 KB, so the kernel is flexible and portable to heterogeneous sensor nodes.
Table 3. Code size and data size of the kernel and the whole system

                                    Code size (bytes)   Data size (bytes)
Kernel                              3572                1272
Kernel + Drivers + Applications     16988               11929
4.2.2. Power analysis
To reduce energy consumption, the main energy-aware modules of the sensor node (the microprocessor and the wireless RF module) support a low-power mode, as shown in Table 4.
Table 4. Power consumption evaluation

Chip                     Mode         Power consumption
AT91SAM7S256 (48 MHz)    Normal       < 50 mA
                         Low power    < 60 µA
XBee-PRO                 Transmit     < 250 mA
                         Receive      < 50 mA
                         Sleep        < 50 µA
When the system is idle (no data transmission and no running components), the kernel can reconfigure the node to run in low-power mode, reducing consumption and prolonging the node's lifetime. This demonstrates that the kernel is task- and resource-self-adaptive.
4.2.3. Overhead of scheduling strategy
The kernel adopts the two-level event/thread scheduling strategy to support real-time multi-tasking. This paper evaluates the scheduling overhead from three aspects: interrupt response time, event switch time and thread switch time.
Table 5. Overhead of the scheduling strategy

                          Overhead (instruction cycles)   Time (µs, at 48 MHz)
                          min    avg    max               min    avg    max
Event switch time         164    168    516               3.4    3.5    10.7
Thread switch time        176    93     798               3.6    1.9    16.6
Interrupt response time   157    194    395               3.2    4.0    8.2
Table 5 shows the overhead of the scheduling strategy, obtained by measuring the times mentioned above.
4.2.4. Overhead of run-time updating mechanism
As mentioned in Section 3, the paper proposes a run-time remote-updating scheme with three levels according to the granularity of the updated code. The overhead and power consumption are largely determined by the amount of code disseminated. The paper evaluates the overhead of the two larger-grained levels: dynamic loading of modules and full image replacement.
Table 6. Energy consumption of dynamic loading vs. full image replacement

Step                    Dynamic loading   Full image replacement
Code size               160 bytes         28917 bytes
Receive data            17 mJ             330 mJ
Store data              1.1 mJ            22 mJ
Reloc & ref updating    14 mJ             0 mJ
As meta-data generation is executed on the sink node or base station, the paper evaluates the energy consumption of the remaining steps, which are executed on the sensor node. Table 6 compares the energy consumption of dynamic loading and full image replacement.
5. CONCLUSION
This paper presents a novel task- and resource-self-adaptive real-time microkernel for wireless sensor nodes. Based on a modular architecture, the kernel integrates the benefits of the event-driven and thread-driven models, making it capable of processing real-time and multi-task workloads. Moreover, to adapt to complex environments, sensor nodes can be customized and configured dynamically through the run-time over-the-air updating mechanism. A two-level cryptography mechanism and a key management scheme are also proposed for resource-constrained nodes to guarantee reliable and efficient communication and code distribution.
The evaluation results show that the kernel performs well in communication throughput, real-time task support, and over-the-air updating. Compared with other dedicated WSN OSs, however, the kernel consumes more energy and memory. Verification of the integrity of updating code should also be added to the over-the-air updating mechanism, and there is still much room to optimize the cryptography algorithms and manage keys more efficiently.
Finally, the kernel extends the range of WSN applications and provides reliability and robustness for complex applications.
REFERENCES
[1] J.J. Labrosse. MicroC/OS-II, the Real-Time Kernel, Second edition. Technical Document. 2002: 2-3.
[2] Dedicated Systems Magazine. VxWorks/x86 5.3.1 evaluation, RTOS evaluations. URL:
http://www.dedicated-systems.com. 2000.
[3] Microsoft Corporation. Which to choose: Evaluating Microsoft windows CE.NET and windows XP
embedded. Technical Report. 2001: 3-6.
[4] Levis P, Madden S, Polastre J, et al. TinyOS: An Operating System for Sensor Networks. Ambient
Intelligence(Part II), 2005, 35: 115-148.
[5] C.-C. Han, R. Kumar, R. Shea, E. Kohler and M. Srivastava. A Dynamic Operating System for Sensor
Nodes. In Proceedings of the 3rd International Conference on Mobile Systems, Applications, and
Services. New York, USA. 2005: 163-176.
[6] Abrach H, Bhatti S, Carlson J, et al. MANTIS OS: An Embedded Multithreaded Operating System for Wireless Micro Sensor Platforms. Mobile Networks & Applications (MONET), 2005, 10(4): 50-59.
[7] Dunkels, Adam, Bjorn Gronvall, and Thiemo Voigt. "Contiki-a lightweight and flexible operating
system for tiny networked sensors." Local Computer Networks, 2004. 29th Annual IEEE International
Conference on. IEEE, 2004.
[8] Farooq, Muhammad Omer, and Thomas Kunz. "Operating systems for wireless sensor networks: A
survey." Sensors 11.6 (2011): 5900-5930.
[9] Anil Kumar Sharma, Surendra Kumar Patel, Gupteshwar Gupta. Design issues and classification of WSNs Operating Systems. International Journal of Smart Sensors and Ad Hoc Networks (IJSSAN), ISSN No. 2248-9738 (Print), Vol-2, Iss-3,4, 2012.
[10] Chen, Yu-Ting, Ting-Chou Chien, and Pai H. Chou. "Enix: a lightweight dynamic operating system
for tightly constrained wireless sensor platforms." Proceedings of the 8th ACM Conference on
Embedded Networked Sensor Systems. ACM, 2010.
[11] Moubarak, Mohamed, and Mohamed K. Watfa. "Embedded operating systems in wireless sensor
networks." Guide to Wireless Sensor Networks. Springer London, 2009. 323-346.
[12] Pompili, Dario, Tommaso Melodia, and Ian F. Akyildiz. "Deployment analysis in underwater acoustic wireless sensor networks." Proceedings of the 1st ACM international workshop on Underwater networks. ACM, 2006.
[13] Munawar, Waqaas, et al. "Dynamic Tinyos: Modular and transparent incremental code-updates for
sensor networks." Communications (ICC), 2010 IEEE International Conference on. IEEE, 2010.
[14] J. Jeong, S. Kim, and A. Broad, “Network reprogramming,” Aug 12,2003. [Online]. Available:
http://www.tinyos.net/tinyos-1.x/doc/Xnp.pdf
[15] J. W. Hui and D. Culler, “The dynamic behavior of a data dissemination protocol for network
programming at scale,” in SenSys ’04, 2004.
[16] R. Müller, G. Alonso, and D. Kossmann, "A virtual machine for sensor networks", SIGOPS Oper. Syst. Rev., vol. 41, no. 3, pp. 145-158, 2007.
[17] Steiner, Rodrigo, et al. "An operating system runtime reprogramming infrastructure for WSN."
Computers and Communications (ISCC), 2012 IEEE Symposium on. IEEE, 2012.
[18] Poschmann, Axel, Dirk Westhoff, and Andre Weimerskirch. "Dynamic code update for the efficient
usage of security components in wsns."Communication in Distributed Systems (KiVS), 2007 ITG-GI
Conference. VDE, 2007.
[19] Atmel Ltd. AT91SAM7S-EK Evaluation Board User Guide. Technical Document. 2006: 12-19.
[20] MaxStream, Inc. XBee™/XBee-PRO™ OEM RF Modules Product Manual. Technical Document. 2006.