Abstract
Smart Grid (SG) is an intelligent electricity network that incorporates advanced information, control, and communication technologies to increase the reliability of the existing power grid. These technologies, in turn, require the smart grid to deploy a complex information management model. This paper presents a cloud-service-based information management model and examines its issues and benefits from the perspective of both the smart grid domain and the cloud domain of the system model. The overall cost of data management includes storage, computation, upload, download, and communication costs, all of which need to be optimized. This paper provides an optimization framework for reducing the overall cost of data management and integration in the smart grid model. The optimization model focuses on the sizes of the data items to be stored in the clouds under consideration; the data items considered are customer behavior data and Phasor Measurement Unit (PMU) data from the smart grid environment. The management model usually comprises four domains: the smart grid domain, cloud domain, broker domain, and network domain. For simplicity of the model considered, the present work focuses mainly on the smart grid and cloud domains and on optimizing the costs related to these domains. The proposed model is optimized using several evolutionary optimization techniques, namely Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Differential Evolution (DE). The results of these techniques on the proposed model are compared in terms of performance measures, and the most suitable technique for cloud-based data management is identified.
Keywords: Smart grid, Information management, Optimization, Cloud Computing.
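The cost-minimization idea above can be sketched with a minimal rand/1/bin Differential Evolution loop. The per-GB cost coefficients, the demand figure, and the penalty weight below are invented for illustration; they are not taken from the paper's model.

```python
import random

# Two data types (customer behavior, PMU); per-GB (storage, computation,
# communication) costs are hypothetical.
COSTS = [(0.023, 0.010, 0.050),
         (0.030, 0.015, 0.080)]
DEMAND = 100.0   # total GB that must be stored across the two types

def total_cost(sizes):
    """Overall data-management cost plus a soft penalty on unmet demand."""
    c = sum(s * sum(unit) for s, unit in zip(sizes, COSTS))
    return c + 10.0 * abs(sum(sizes) - DEMAND)

def differential_evolution(f, dim, lo, hi, pop=20, gens=100, F=0.5, CR=0.9):
    rng = random.Random(42)
    P = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.sample([p for j, p in enumerate(P) if j != i], 3)
            trial = [min(max(a[k] + F * (b[k] - c[k]), lo), hi)
                     if rng.random() < CR else P[i][k]
                     for k in range(dim)]
            if f(trial) < f(P[i]):   # greedy selection
                P[i] = trial
    return min(P, key=f)

best = differential_evolution(total_cost, dim=2, lo=0.0, hi=200.0)
print(round(sum(best), 1), round(total_cost(best), 2))
```

GA and PSO would plug into the same `total_cost` objective, which is how the paper's comparison across techniques can be made like-for-like.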
Service oriented cloud architecture for improved performance of smart grid ap... (eSAT Journals)
Abstract: An effective and flexible computational platform is needed for the data coordination and processing associated with real-time operational and application services in the smart grid. A server environment where multiple applications are hosted by a common pool of virtualized server resources demands an open source structure to ensure operational flexibility. In this paper, an open source architecture is proposed for real-time services that involve data coordination and processing. The architecture enables secure and reliable exchange of information and transactions with users over the internet to support various services. Prioritizing the applications based on complexity enhances the efficiency of resource allocation in such situations. A priority-based scheduling algorithm is proposed for application-level performance management in this structure. An analytical model based on queuing theory is developed to evaluate the performance of the test bed. The implementation uses an OpenStack cloud, and the test results show a significant gain of 8% with the proposed algorithm. Index Terms: Service Oriented Architecture, Smart grid, Mean response time, OpenStack, Queuing model
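A standard queuing-theory building block for this kind of analysis is the non-preemptive priority M/M/1 queue, whose mean waiting times can be computed in closed form. The two-class arrival rates below are invented; the paper's own analytical model may differ in detail.

```python
# Mean waiting time per class in a non-preemptive priority M/M/1 queue with
# exponential service (E[S^2] = 2/mu^2, so mean residual work R = sum lam/mu^2).
def priority_wait_times(arrival_rates, service_rate):
    """arrival_rates[0] is the highest-priority class."""
    R = sum(lam / service_rate**2 for lam in arrival_rates)
    rhos = [lam / service_rate for lam in arrival_rates]
    waits, cum = [], 0.0
    for rho in rhos:
        prev = cum
        cum += rho
        waits.append(R / ((1 - prev) * (1 - cum)))  # W_k = R/((1-s_{k-1})(1-s_k))
    return waits

# Two application classes sharing one virtualized server pool:
w_high, w_low = priority_wait_times([0.3, 0.4], service_rate=1.0)
print(w_high, w_low)
```

The gap between `w_high` and `w_low` is exactly the effect a priority-based scheduler trades on: complex, latency-sensitive applications see a shorter mean response time at the expense of the lower class.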
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Power consumption prediction in cloud data center using machine learning (IJECEIAES)
The flourishing development of the cloud computing paradigm provides numerous services to the industrial business world. Power consumption by cloud data centers is one of the crucial issues for service providers in the cloud computing domain. With the rapid technology enhancements in cloud environments and the expansion of data centers, power utilization in data centers is expected to grow unabated. A diverse set of connected devices, engaged with the ubiquitous cloud, results in unprecedented power utilization by the data centers, accompanied by increased carbon footprints. Nearly a million physical machines (PMs) are running across the data centers, along with five to six million virtual machines (VMs). In the next five years, the power needs of this domain are expected to spiral up to 5% of global power production. Reducing virtual machine power consumption in turn diminishes the power drawn by the PMs; since data center power consumption keeps changing year by year, prediction methods can aid cloud vendors in planning for it. Sudden fluctuations in power utilization can cause power outages in cloud data centers. This paper aims to forecast VM power consumption with the help of regressive predictive analysis, one of the Machine Learning (ML) techniques. The approach predicts future values using a Multi-Layer Perceptron (MLP) regressor, which achieves 91% accuracy during the prediction process.
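An MLP regressor of the kind described can be sketched from scratch with one hidden layer and plain gradient descent. The synthetic "power vs. load" curve below is invented for the demo (the paper presumably trains on real VM telemetry, likely with a library implementation such as scikit-learn's `MLPRegressor`).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for VM power readings: power as a smooth function of CPU
# load, standardized for stable training. The curve is made up.
X = rng.uniform(0, 1, (200, 1))
y = 100 + 150 * X[:, 0] + 20 * np.sin(6 * X[:, 0])
y = (y - y.mean()) / y.std()

# One-hidden-layer MLP regressor trained with batch gradient descent.
H, lr = 16, 0.05
W1 = rng.normal(0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)            # hidden layer activations
    pred = (h @ W2 + b2)[:, 0]          # linear output layer
    err = pred - y
    gh = (err[:, None] * W2.T) * (1 - h ** 2)   # backprop through tanh
    W2 -= lr * h.T @ err[:, None] / len(X)
    b2 -= lr * err.mean()
    W1 -= lr * X.T @ gh / len(X)
    b1 -= lr * gh.mean(axis=0)

mse = float(np.mean((pred - y) ** 2))
print("final training MSE:", round(mse, 4))
```

On standardized targets the baseline (predicting the mean) has MSE 1.0, so any value well below that indicates the regressor has captured the load-to-power relationship.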
Target Response Electrical usage Profile Clustering using Big Data (IRJET Journal)
This document discusses using clustering algorithms to analyze large datasets from smart meters to identify patterns in electricity usage. It proposes a new method for clustering micro-clusters that uses a density graph to explicitly represent the density of data points between micro-clusters. This allows the micro-clusters to be re-clustered into a smaller number of final clusters. The algorithm involves constructing a minimum spanning tree from the density graph, partitioning it into trees representing clusters, and selecting representative features from each micro-cluster. This clustering-based feature subset selection aims to improve the efficiency and accuracy of load profiling and short-term load forecasting using big data from smart meters.
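The merge step described above can be illustrated with a small union-find sketch: build a spanning structure over the density graph favoring the densest links, and stop after n-k unions so that exactly k final clusters remain (the weakest inter-cluster links are never merged). The micro-cluster ids and density values are invented.

```python
# Re-clustering micro-clusters via their density graph (a simplified sketch).
def merge_microclusters(n, density_edges, k):
    """density_edges: (i, j, density) between micro-clusters; returns labels."""
    parent = list(range(n))
    def find(x):                       # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    merges = 0
    # Process densest links first; n-k unions leave exactly k components.
    for i, j, _ in sorted(density_edges, key=lambda e: -e[2]):
        ri, rj = find(i), find(j)
        if ri != rj and merges < n - k:
            parent[ri] = rj
            merges += 1
    roots = sorted({find(i) for i in range(n)})
    label = {r: c for c, r in enumerate(roots)}
    return [label[find(i)] for i in range(n)]

# Six micro-clusters: dense links within {0,1,2} and {3,4,5}, weak links across.
edges = [(0, 1, 9), (1, 2, 8), (3, 4, 9), (4, 5, 7), (2, 3, 1), (0, 5, 2)]
labels = merge_microclusters(6, edges, k=2)
print(labels)
```

The low-density edges (2,3) and (0,5) are exactly the ones the minimum-spanning-tree partitioning in the paper would cut.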
A survey on energy efficient with task consolidation in the virtualized cloud... (eSAT Journals)
Abstract: Cloud computing is a new model of computing that is widely used in today's industry, organizations and society to deliver information technology services as a utility. It enables organizations to reduce both operational and capital expenditure. However, a cloud with underutilized resources still consumes unacceptably more energy than one with fully utilized resources. Many techniques for optimizing energy consumption in the virtualized cloud have been proposed. This paper surveys different energy-efficient models with task consolidation in the virtualized cloud computing environment. Keywords: Cloud computing, Virtualization, Task consolidation, Energy consumption, Virtual machine
Comparative Study of Neural Networks Algorithms for Cloud Computing CPU Sched... (IJECEIAES)
Cloud computing is the most powerful computing model of our time. While the major IT providers and consumers are competing to exploit the benefits of this computing model in order to grow their profits, most cloud computing platforms are still built on operating systems that use basic CPU (Central Processing Unit) scheduling algorithms that lack the intelligence needed for such an innovative computing model. Correspondingly, this paper presents the benefits of applying Artificial Neural Network algorithms to enhance CPU scheduling for the cloud computing model. Furthermore, a set of characteristics and theoretical metrics is proposed for comparing the different Artificial Neural Network algorithms and finding the most accurate algorithm for cloud computing CPU scheduling.
IRJET- Reducing electricity usage in Internet using transactional data (IRJET Journal)
This document summarizes a research paper that proposes a method to reduce electricity usage and costs for internet services by optimizing how transactional data is mapped across geographically distributed data centers. It formulates the problem as a stochastic programming problem to maximize energy utilization within a cost budget. An efficient online algorithm is developed using Lyapunov optimization to map user requests to data centers based on changing factors like electricity prices and workload, with the goal of significantly reducing costs compared to baseline strategies. The system architecture involves front-end servers collecting user requests and dispatching them to appropriate back-end data centers for processing.
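The mapping step can be shown in a deliberately simplified one-shot form: each batch of requests is dispatched to the cheapest data center that still has capacity. (The paper's Lyapunov-based algorithm optimizes this online over time against a cost budget; the prices and capacities below are invented.)

```python
# Greedy price-aware request mapping across geo-distributed data centers.
def dispatch(requests, prices, capacity):
    """Map each request batch to a data center id, cheapest feasible first."""
    load = [0] * len(prices)
    assignment = []
    for size in requests:
        # Data centers that can still absorb this batch, ordered by price.
        feasible = sorted(
            (dc for dc in range(len(prices)) if load[dc] + size <= capacity[dc]),
            key=lambda dc: prices[dc],
        )
        dc = feasible[0]          # assumes some DC always has room
        load[dc] += size
        assignment.append(dc)
    return assignment

# Three DCs: DC1 is cheapest but small, so later batches spill over to DC0.
result = dispatch([40, 40, 40], prices=[0.07, 0.04, 0.09], capacity=[100, 60, 100])
print(result)
```

The spill-over behavior is the essence of the trade-off the paper formalizes: cheap electricity attracts load until capacity (or the budget) forces requests elsewhere.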
DYNAMIC ENERGY MANAGEMENT IN CLOUD DATA CENTERS: A SURVEY (ijccsa)
This document summarizes dynamic energy management techniques in cloud data centers. It discusses that cloud data centers consume enormous amounts of energy, resulting in high costs and environmental impacts. It then categorizes energy management approaches as either static or dynamic. Dynamic energy management techniques dynamically reconfigure hardware and software systems based on workload variability to optimize energy usage. The document surveys techniques at the hardware level including for processors, networks, and storage, and also virtualization-assisted techniques like server consolidation and virtual machine placement and migration. The goal of these dynamic techniques is to improve energy efficiency in cloud data centers through runtime adaptation.
An efficient approach on spatial big data related to wireless networks and it... (eSAT Journals)
Abstract
Spatial big data plays an important role in wireless network applications. Spatial and spatio-temporal problems have a distinct character in big data compared to common relational problems. To address them, we describe three applications for spatial big data, each of which imposes specific design requirements. We develop highly scalable parallel processing for spatial big data on the Hadoop framework using the MapReduce computational model. Our results show that Hadoop enables highly scalable implementations of algorithms for spatial data processing problems, although developing these implementations requires specialized knowledge and is not yet user friendly.
Keywords: Spatial Big Data, Hadoop, Wireless Networks, Map reduce
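The MapReduce pattern for spatial aggregation can be sketched locally: map each point to a grid cell key, then reduce by summing counts per cell. On Hadoop the mapper and reducer would run as distributed tasks over HDFS splits; the cell size and sample points here are invented.

```python
from collections import defaultdict

CELL = 10.0  # grid cell width (illustrative)

def mapper(point):
    """Emit (grid-cell key, 1) for one spatial point."""
    x, y = point
    return (int(x // CELL), int(y // CELL)), 1

def reducer(pairs):
    """Sum the counts emitted for each grid-cell key."""
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

points = [(1.0, 2.0), (3.5, 9.9), (12.0, 2.0), (15.0, 15.0)]
result = reducer(map(mapper, points))
print(result)
```

Because each mapper output depends only on its own point, the map phase parallelizes trivially, which is what makes this formulation scale on Hadoop.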
An adaptive algorithm for task scheduling for computational grid (eSAT Journals)
Abstract
Grid computing is a collection of computing and storage resources drawn from multiple administrative domains, which can be applied to reach a common goal. Since computational grids enable the sharing and aggregation of a wide variety of geographically distributed computational resources, effective task scheduling is vital for managing the tasks. Efficient scheduling algorithms are needed to utilize the unused CPU cycles distributed across various geographic locations. The existing job scheduling algorithms in grid computing concentrate mainly on the system's performance rather than on user satisfaction. This research work presents a new algorithm that focuses on better meeting the deadlines of the statically available jobs, as expected by the users. The algorithm also concentrates on better utilization of the available heterogeneous resources.
Keywords: Task Scheduling, Computational Grid, Adaptive Scheduling and User Deadline.
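Deadline-oriented dispatch of statically known jobs can be illustrated with an earliest-deadline-first sketch over a pool of resources. This is a generic EDF illustration, not the paper's algorithm; the job runtimes and deadlines are invented.

```python
# Earliest-deadline-first assignment of grid jobs to the least-loaded resource.
def edf_schedule(jobs, n_resources):
    """jobs: (name, runtime, deadline); returns (met, missed) assignments."""
    free_at = [0.0] * n_resources           # when each resource becomes idle
    schedule, missed = [], []
    for name, runtime, deadline in sorted(jobs, key=lambda j: j[2]):
        r = min(range(n_resources), key=lambda i: free_at[i])
        finish = free_at[r] + runtime
        free_at[r] = finish
        (schedule if finish <= deadline else missed).append((name, r, finish))
    return schedule, missed

jobs = [("A", 4.0, 5.0), ("B", 2.0, 3.0), ("C", 3.0, 9.0)]
sched, missed = edf_schedule(jobs, n_resources=2)
print(sched, missed)
```

Sorting by deadline rather than by system throughput is precisely the user-satisfaction orientation the abstract contrasts with existing system-centric schedulers.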
The document discusses grid computing in remote sensing data processing. It describes how grid computing can help process huge amounts of remote sensing data in real-time by distributing processing across networked computers. Key requirements for a grid environment for remote sensing include sharing computational and software resources, managing resources, and supporting different data formats. Case studies demonstrate how grid middleware can improve the efficiency of tasks like image deblurring and generating lookup tables for aerosol analysis.
Energy efficient clustering using the AMHC (adaptive multi-hop clustering) t... (IJECEIAES)
IoT has gained considerable attention in several fields such as industrial applications, agriculture, monitoring and surveillance, and parallel growth has been observed in the field of WSNs. A WSN is one of the primary components of IoT when it comes to sensing data in various environments. Clustering is one of the basic approaches for obtaining measurable performance in WSNs, and several clustering algorithms aim at efficient data collection, data gathering and routing. In this paper, a novel AMHC (Adaptive Multi-Hop Clustering) algorithm is proposed for the homogeneous model; its main aim is to obtain higher efficiency and make the network energy efficient. The algorithm consists of three stages: assembling, coupling and discarding. The first stage assembles maximum independent sets, the second stage couples the independent sets, and the last stage discards superfluous nodes, which helps achieve higher efficiency. Since AMHC is a coloring algorithm, different colors are used at the different stages for coloring the nodes. AMHC is then compared with an existing system that combines second-order data Coupled Clustering (CC) and Compressive-Projection Principal Component Analysis (PCA); the results show that our algorithm excels in terms of several parameters, such as energy efficiency, network lifetime and number of rounds performed.
AN ENTROPIC OPTIMIZATION TECHNIQUE IN HETEROGENEOUS GRID COMPUTING USING BION... (ijcsit)
This document summarizes a research paper that proposes a new method for improving both fault tolerance and load balancing in grid computing networks. The method converts the tree structure of grid computing nodes into a distributed R-tree index structure and then applies an entropy estimation technique. This entropy estimation helps discard nodes with high entropy from the tree, reducing complexity. The method then uses thresholding and control algorithms to select optimal route paths based on load balance and fault tolerance. Various optimization techniques like genetic algorithms, ant colony optimization, and particle swarm optimization are also applied to reach better solutions. Experimental results showed the proposed method improved performance over other existing methods.
A SURVEY: TO HARNESS AN EFFICIENT ENERGY IN CLOUD COMPUTING (ijujournal)
Cloud computing affords huge potential for dynamism, flexibility and cost-effective IT operations. Cloud computing requires many tasks to be executed by the provided resources to achieve good performance, the shortest response time and high utilization of resources. To meet these challenges, there is a need to develop a new energy-aware scheduling algorithm that produces an appropriate task allocation map to optimize energy consumption. This study examines the existing techniques, which mainly focus on reducing energy consumption.
An Approach to Reduce Energy Consumption in Cloud data centers using Harmony ... (ijccsa)
The fast development of knowledge and communication technologies has established a new computational style known as cloud computing. One of the main issues for cloud infrastructure providers is to minimize costs and maximize profitability, and energy management in cloud data centers is very important for achieving this goal. Energy consumption can be reduced either by releasing idle nodes or by reducing virtual machine migrations. For the latter, one of the challenges is to select the placement approach for the migrated virtual machines on an appropriate node. In this paper, an approach to reduce the energy consumption in cloud data centers is proposed. This approach adapts the harmony search algorithm to migrate the virtual machines. It performs the placement by sorting the nodes and virtual machines based on their priority in descending order, where the priority is calculated from the workload. The proposed approach is simulated; the evaluation results show a reduction in virtual machine migrations, an increase in efficiency and a reduction in energy consumption.
KEYWORDS: Energy Consumption, Virtual Machine Placement, Harmony Search Algorithm, Server Consolidation
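The priority-ordered placement step can be shown in isolation as a first-fit-decreasing sketch: sort nodes and VMs by workload-derived priority in descending order, then place each VM on the first node with enough spare capacity. The harmony-search refinement is omitted, and the capacities and workloads are invented.

```python
# Priority-descending VM placement (first-fit decreasing by workload).
def place_vms(vm_loads, node_capacities):
    vms = sorted(range(len(vm_loads)), key=lambda v: -vm_loads[v])
    nodes = sorted(range(len(node_capacities)), key=lambda n: -node_capacities[n])
    free = list(node_capacities)          # remaining capacity per node
    placement = {}
    for v in vms:
        for n in nodes:                   # first node with room, largest first
            if free[n] >= vm_loads[v]:
                free[n] -= vm_loads[v]
                placement[v] = n
                break
    return placement, free

placement, free = place_vms([6, 2, 5, 3], [8, 8])
print(placement, free)
```

Packing the heaviest VMs first tends to fill nodes completely, which is what lets consolidation release idle nodes and cut energy.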
Mobile Agents based Energy Efficient Routing for Wireless Sensor Networks (Eswar Publications)
Energy efficiency and prolonged network lifetime are among the major areas of concern. The energy consumption rate of sensor nodes can be reduced in various ways. Data aggregation, result sharing and filtration of aggregated data among sensor nodes deployed in unattended regions have been some of the most researched areas in the field of wireless sensor networks. While data aggregation is concerned with minimizing the information transfer from source to sink to reduce network traffic and relieve congestion, result sharing focuses on sharing information among agents pertinent to the tasks at hand, and filtration removes redundant information from the aggregated data. Various algorithms exist for data aggregation and filtration using different mobile agents. In the proposed work, the same mobile agent is used to perform both data aggregation and data filtration. This approach advocates the sharing of resources and reduces the energy consumption level of sensor nodes.
Empirical studies have revealed that a significant amount of energy is lost unnecessarily in network architectures, protocols, routers and various other network devices. Thus there is a need for techniques to achieve green networking in computer architecture, which can lead to energy savings. Green networking is an emerging phenomenon in the computer industry because of its economic and environmental benefits: saving energy leads to cost cutting and lower emission of greenhouse gases, which are among the major threats to the environment. 'Greening', as the name suggests, is the process of constructing a network architecture so as to avoid unnecessary loss of power and energy in its various components. It can be implemented using various techniques, four of which are covered in this review paper: Adaptive Link Rate (ALR), Dynamic Voltage and Frequency Scaling (DVFS), interface proxying, and energy-aware applications and software.
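The DVFS technique mentioned above rests on the standard dynamic-power relation P = C * V^2 * f; since the feasible supply voltage scales roughly with frequency, halving the clock cuts power by about 8x while each task takes 2x longer, for roughly 4x less energy per task. The capacitance and frequencies below are purely illustrative.

```python
# Back-of-the-envelope DVFS energy accounting.
def dynamic_power(capacitance, voltage, frequency):
    return capacitance * voltage**2 * frequency       # P = C * V^2 * f

def energy_per_task(capacitance, voltage, frequency, cycles):
    return dynamic_power(capacitance, voltage, frequency) * (cycles / frequency)

C, cycles = 1e-9, 2e9
full = energy_per_task(C, 1.2, 2.0e9, cycles)   # full speed
half = energy_per_task(C, 0.6, 1.0e9, cycles)   # half speed; V scales with f
print(full, half, full / half)
```

This quadratic-in-voltage saving is why DVFS is attractive whenever links or processors are idle-capable, and it is the same lever ALR pulls at the link level.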
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Stochastic Scheduling Algorithm for Distributed Cloud Networks using Heuristi... (Eswar Publications)
Rule-based heuristic scheduling algorithms are employed for resource and task scheduling in real-time and cloud computing systems since they are easy to implement for NP-complete problems. However, while they are simple, there is much room to improve them. This study presents a heuristic scheduling algorithm, called the high-performance Hyper-Heuristic Scheduling Algorithm (HHSA), to find better scheduling solutions for real-time and cloud computing systems. Two operators, a diversity detection operator and an improvement detection operator, are employed in this algorithm to determine when to switch heuristics; together they dynamically select a low-level heuristic that can find a better solution. To evaluate the performance of this method, the authors compared it with several scheduling algorithms, and the results show that HHSA can significantly decrease the makespan of task scheduling compared with all the other scheduling algorithms. The proposed novel high-performance hyper-heuristic algorithm targets scheduling on cloud computing systems to reduce the makespan, and it can be applied to both sequence-dependent and sequence-independent scheduling problems.
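A toy version of the hyper-heuristic idea can be sketched as follows: two low-level heuristics (a greedy load-balancing move and a random diversification move) alternate, with an improvement-detection check keeping the current heuristic only while it still lowers the makespan. The task times and machine count are invented, and this is far simpler than HHSA itself.

```python
import random

def makespan(assign, times):
    loads = {}
    for task, m in enumerate(assign):
        loads[m] = loads.get(m, 0) + times[task]
    return max(loads.values())

def hyper_heuristic(times, n_machines, rounds=50, seed=1):
    rng = random.Random(seed)
    def greedy(a):   # move one task from the busiest machine to the least loaded
        loads = [sum(t for i, t in enumerate(times) if a[i] == m)
                 for m in range(n_machines)]
        busiest = loads.index(max(loads))
        task = next(i for i in range(len(a)) if a[i] == busiest)
        b = list(a); b[task] = loads.index(min(loads)); return b
    def shake(a):    # diversification: move a random task to a random machine
        b = list(a); b[rng.randrange(len(b))] = rng.randrange(n_machines); return b
    current, heuristic = [0] * len(times), greedy
    best = list(current)
    for _ in range(rounds):
        candidate = heuristic(current)
        if makespan(candidate, times) < makespan(current, times):
            current = candidate                   # improvement: keep heuristic
        else:
            heuristic = shake if heuristic is greedy else greedy
            if heuristic is shake:                # accept non-improving move once
                current = candidate
        if makespan(current, times) < makespan(best, times):
            best = list(current)
    return best, makespan(best, times)

best, ms = hyper_heuristic([4, 3, 3, 2, 2], n_machines=2)
print(best, ms)
```

With these five tasks on two machines the optimal makespan is 7 (e.g. {4,3} vs {3,2,2}), which the alternating scheme reaches quickly.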
Providing a multi-objective scheduling tasks by Using PSO algorithm for cost ... (Editor IJCATR)
This article uses a multi-objective PSO algorithm for scheduling tasks for cost management in cloud computing. Any migration cost due to supply failure is considered as one objective; each task is a small particle, recognized by means of an appropriate fitness (scheduling) function describing the particle arrangement, so that the schedule costs the least total expense. In addition, a weight is assigned to each expenditure, reflecting the importance of that cost. The data used to simulate the proposed method are a series of academic and research data prepared from the Internet, and MATLAB is used for the simulation. We simulate two problems: in the first, we consider four tasks on four vehicles and divide the tasks; in the second, we make the problem more complicated and consider six tasks on four vehicles. We record the PSO output for both problems over various iterations. Finally, the particle dispersion as well as the output of the cost function were computed for each pa
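The PSO machinery involved can be sketched minimally: particles track personal and global bests while a velocity update pulls them toward both, minimizing a weighted cost. The weights and quadratic cost terms below stand in for the article's expense function and are invented.

```python
import random

# Minimal particle swarm optimizer (inertia + cognitive + social terms).
def pso(fitness, dim, n_particles=15, iters=100, seed=3):
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in X]
    gbest = min(pbest, key=fitness)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * r1 * (pbest[i][d] - X[i][d])
                           + 1.5 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            if fitness(X[i]) < fitness(pbest[i]):
                pbest[i] = list(X[i])
                if fitness(X[i]) < fitness(gbest):
                    gbest = list(X[i])
    return gbest

# Weighted expenses: execution cost and migration cost (invented quadratics).
weights = (0.6, 0.4)
cost = lambda x: weights[0] * (x[0] - 2) ** 2 + weights[1] * (x[1] + 1) ** 2
best = pso(cost, dim=2)
print([round(v, 2) for v in best], round(cost(best), 4))
```

The per-expense weights play the same role as in the article: they tilt the swarm toward whichever cost component matters most.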
This document discusses electrical energy management and load forecasting in smart grids using artificial neural networks. It presents a study applying backpropagation neural networks to short-term load forecasting for Sudan's National Electric Company. The neural network model was used to forecast load, with error calculated by comparing forecasted and actual load data. The document also discusses generation dispatch, demand forecasting techniques, and designing a neural network for one-day load forecasting. It evaluates network performance and error for different training data sizes, finding that a ten-day training dataset produced the best results with minimum error. The neural network approach was able to reliably predict the nonlinear relationship between historical data and load.
A SURVEY ON REDUCING ENERGY SPRAWL IN CLOUD COMPUTING (aciijournal)
Cloud computing is the confluence of autonomic computing, grid computing and utility computing. Cloud providers are there to rescue their customers from the problem of dynamism. The providers focus on resource sharing and on improving performance. Energy consumption is a major factor that degrades performance, and reducing energy sprawl improves it. This paper delineates the different techniques for scheduling the workload of the servers in order to minimize energy sprawl.
IRJET- Enhanced Density Based Method for Clustering Data Stream (IRJET Journal)
The document presents a new incremental density-based algorithm called Enhanced Density-based Data Stream (EDDS) for clustering data streams. EDDS modifies the traditional DBSCAN algorithm to represent clusters using only surface core points. It detects clusters and outliers in incoming data chunks, merges new clusters with existing ones, and filters outliers for the next round. The algorithm prunes internal core points using heuristics and removes aged core points/outliers using a fading function. It was evaluated on datasets and found to improve clustering correctness with time complexity comparable to existing methods.
Energy efficient resource allocation in distributed computing systems (Cemal Ardil)
The document discusses energy efficient resource allocation in distributed computing systems. It proposes using a solution from cooperative game theory called the Nash Bargaining Solution (NBS) to allocate tasks to machines in a computational grid in a way that minimizes both power consumption and makespan.
The key points are:
- It formulates the problem of mapping tasks to machines as a multi-constrained multi-objective extension of the Generalized Assignment Problem (GAP) called Power-aware Task Allocation (PATA).
- It models a cooperative game where each machine is a player, and the goal is to minimize both the total power consumed and the overall makespan when allocating tasks.
- The proposed NBS
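The bi-objective flavor of the PATA formulation can be illustrated with a simple greedy baseline; this is not the paper's NBS bargaining scheme, and the machine speeds, power ratings and weight alpha are invented for illustration:

```python
def greedy_pata(tasks, machines, alpha=0.5):
    # Greedy baseline for a power-aware task allocation objective: assign each
    # task to the machine minimizing a weighted sum of the energy that task
    # adds (power * runtime) and the machine's resulting finish time.
    finish = [0.0] * len(machines)
    assignment = []
    for work in tasks:
        best, best_cost = 0, float("inf")
        for m, (speed, power) in enumerate(machines):
            t = work / speed
            cost = alpha * power * t + (1 - alpha) * (finish[m] + t)
            if cost < best_cost:
                best, best_cost = m, cost
        finish[best] += work / machines[best][0]
        assignment.append(best)
    return assignment, max(finish)

tasks = [4.0, 2.0, 6.0, 3.0]
machines = [(2.0, 10.0), (1.0, 4.0)]  # (speed, power) per machine
plan, makespan = greedy_pata(tasks, machines)
print(plan, makespan)
```

Varying alpha trades power against makespan, the same tension the cooperative game resolves via bargaining.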
A Survey on Neural Network Based Minimization of Data Center in Power Consump... (IJSTA)
This document summarizes a survey on using neural networks to minimize data center power consumption. It discusses how load prediction using neural networks can balance server loads and optimize energy usage by selectively powering down unused servers. The document reviews several related works applying neural network prediction to schedule virtual machines and transition servers to low-power sleep states. The survey concludes neural network models trained on historical load data can accurately predict future loads to enable reliable and optimized load balancing across servers.
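At its simplest, "power down unused servers based on predicted load" reduces to keeping only enough servers for the forecast plus a safety headroom; the capacity and headroom values here are invented:

```python
import math

def servers_needed(predicted_load, per_server_capacity, headroom=0.2):
    # Keep enough servers to cover the predicted load plus headroom; the
    # remainder can be transitioned to a low-power sleep state.
    return max(1, math.ceil(predicted_load * (1 + headroom) / per_server_capacity))

print(servers_needed(predicted_load=450.0, per_server_capacity=100.0))
```

The quality of the neural-network load prediction directly determines how aggressively the headroom can be shrunk without risking overload.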
This document discusses trends in big data analytics. It covers the rapidly increasing scale of data from sources like social media, sensors, and scientific experiments. This poses challenges for data storage, processing, and analysis. Current hardware platforms for analytics include distributed data centers with compute and storage resources. Emerging hardware trends involve using non-volatile memory technologies closer to processors to improve performance and efficiency. Software systems need to be highly scalable and support a variety of analytic workloads. Overall, big data analytics requires new techniques and architectures to effectively handle massive, diverse datasets.
Electronic structure and magnetic properties of TiMn3N, TiMn3 and MnTi3 comp... (eSAT Journals)
Abstract
The Tight Binding Linear Muffin Tin Orbital method has been employed to obtain the electronic and magnetic properties of TiMn3N and ordered TiMn3 compounds. The equilibrium volume has been found for each compound using self-consistent band structure calculations at several lattice parameters. The obtained results reveal that TiMn3 is ferromagnetic, with a magnetic moment of 3.3624 μB at Mn sites and -1.2174 μB at Ti sites. Hence TiMn3 attains a stable ferromagnetic order in the ferromagnetic and non-magnetic calculations, while TiMn3N shows the non-magnetic phase as the stable one in FM and NM calculations.
Keywords: Ferromagnetic, Non-Magnetic, Tight Binding Linear Muffin Tin Orbital method
This document discusses Human Area Networks (HAN), a type of personal area network that uses the human body as a transmission medium to pass data. It proposes using capacitive coupling to generate an alternating current field on the body's surface to propagate signals between two bodies in contact, without radiation into the surroundings. This allows for a highly secure form of data transfer. The document provides background on computer networking and discusses various standard types of networks like WLAN, LAN, WAN, and MAN. It also reviews some common short-range wireless technologies that could enable HAN, such as Bluetooth, ZigBee, and Wi-Fi, and their respective standards.
DYNAMIC ENERGY MANAGEMENT IN CLOUD DATA CENTERS: A SURVEY (ijccsa)
This document summarizes dynamic energy management techniques in cloud data centers. It discusses that cloud data centers consume enormous amounts of energy, resulting in high costs and environmental impacts. It then categorizes energy management approaches as either static or dynamic. Dynamic energy management techniques dynamically reconfigure hardware and software systems based on workload variability to optimize energy usage. The document surveys techniques at the hardware level including for processors, networks, and storage, and also virtualization-assisted techniques like server consolidation and virtual machine placement and migration. The goal of these dynamic techniques is to improve energy efficiency in cloud data centers through runtime adaptation.
An efficient approach on spatial big data related to wireless networks and it... (eSAT Journals)
Abstract
Spatial big data plays an important role in wireless network applications. Spatial and spatio-temporal problems have a distinct character in big data compared with common relational problems, and we describe three spatial big data applications that illustrate this. Each application imposes specific design requirements, and we develop highly scalable parallel processing for spatial big data on the Hadoop framework using the MapReduce computational model. Our results show that Hadoop enables highly scalable implementations of algorithms for spatial data processing problems, although developing these implementations requires specialized knowledge and is not yet user friendly.
Keywords: Spatial Big Data, Hadoop, Wireless Networks, Map reduce
An adaptive algorithm for task scheduling for computational grid (eSAT Journals)
Abstract
Grid Computing is a collection of computing and storage resources that are collected from multiple administrative domains. Grid resources can be applied to reach a common goal. Since computational grids enable the sharing and aggregation of a wide variety of geographically distributed computational resources, an effective task scheduling is vital for managing the tasks. Efficient scheduling algorithms are the need of the hour to achieve efficient utilization of the unused CPU cycles distributed geographically in various locations. The existing job scheduling algorithms in grid computing are mainly concentrated on the system’s performance rather than the user satisfaction. This research work presents a new algorithm that mainly focuses on better meeting the deadlines of the statically available jobs as expected by the users. This algorithm also concentrates on the better utilization of the available heterogeneous resources.
Keywords: Task Scheduling, Computational Grid, Adaptive Scheduling and User Deadline.
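A deadline-focused ordering in the spirit of the algorithm can be sketched with plain Earliest Deadline First; the paper's exact method is not reproduced here, and the job runtimes and deadlines are invented:

```python
def edf_schedule(jobs):
    # jobs: list of (name, runtime, deadline). Run jobs in earliest-deadline
    # order on one resource and report which meet their user deadline.
    order = sorted(jobs, key=lambda j: j[2])
    t, met = 0.0, []
    for name, runtime, deadline in order:
        t += runtime
        met.append((name, t <= deadline))
    return met

jobs = [("a", 3.0, 10.0), ("b", 2.0, 4.0), ("c", 6.0, 8.0)]
print(edf_schedule(jobs))
```

Here job "a" misses its deadline even under EDF, which is the kind of user-facing outcome the proposed algorithm optimizes for rather than raw system throughput.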
The document discusses grid computing in remote sensing data processing. It describes how grid computing can help process huge amounts of remote sensing data in real-time by distributing processing across networked computers. Key requirements for a grid environment for remote sensing include sharing computational and software resources, managing resources, and supporting different data formats. Case studies demonstrate how grid middleware can improve the efficiency of tasks like image deblurring and generating lookup tables for aerosol analysis.
Energy efficient clustering using the AMHC (adaptive multi-hop clustering) t... (IJECEIAES)
IoT has gained considerable attention in fields such as industrial applications, agriculture, monitoring and surveillance, and parallel growth has been observed in WSNs, which are a primary component of IoT when it comes to sensing data in various environments. Clustering is a basic approach for obtaining measurable performance in WSNs, and several clustering algorithms aim at efficient data collection, data gathering and routing. In this paper, a novel AMHC (Adaptive Multi-Hop Clustering) algorithm is proposed for the homogeneous model; its main aim is to obtain higher efficiency and make the network energy efficient. The algorithm comprises three stages: assembling, coupling and discarding. The first stage assembles (maximum) independent sets, the second couples the independent sets, and the last discards superfluous nodes, which helps achieve higher efficiency. Since AMHC is a coloring algorithm, different colors are used at the different stages for coloring the nodes. AMHC is then compared with an existing system combining second-order data Coupled Clustering (CC) and Compressive-Projection Principal Component Analysis (PCA), and the results show that it excels in several parameters such as energy efficiency, network lifetime and number of rounds performed.
AN ENTROPIC OPTIMIZATION TECHNIQUE IN HETEROGENEOUS GRID COMPUTING USING BION... (ijcsit)
This document summarizes a research paper that proposes a new method for improving both fault tolerance and load balancing in grid computing networks. The method converts the tree structure of grid computing nodes into a distributed R-tree index structure and then applies an entropy estimation technique. This entropy estimation helps discard nodes with high entropy from the tree, reducing complexity. The method then uses thresholding and control algorithms to select optimal route paths based on load balance and fault tolerance. Various optimization techniques like genetic algorithms, ant colony optimization, and particle swarm optimization are also applied to reach better solutions. Experimental results showed the proposed method improved performance over other existing methods.
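The entropy-estimation step can be illustrated with Shannon entropy over a node's load distribution, discarding nodes above a threshold; the threshold and the distributions below are invented, not the paper's data:

```python
import math

def shannon_entropy(probs):
    # H = -sum(p * log2(p)) over nonzero probabilities, in bits.
    return -sum(p * math.log2(p) for p in probs if p > 0)

def discard_high_entropy(nodes, threshold=1.5):
    # Keep nodes whose load-distribution entropy is at or below the threshold;
    # high-entropy (unpredictable) nodes are dropped from the tree.
    return [name for name, probs in nodes if shannon_entropy(probs) <= threshold]

nodes = [
    ("n1", [0.9, 0.05, 0.05]),          # predictable load, low entropy
    ("n2", [0.25, 0.25, 0.25, 0.25]),   # uniform load, entropy = 2 bits
]
print(discard_high_entropy(nodes))
```

Dropping the high-entropy node shrinks the R-tree index before route selection, which is the complexity reduction the paper describes.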
A SURVEY: TO HARNESS AN EFFICIENT ENERGY IN CLOUD COMPUTING (ijujournal)
Cloud computing affords huge potential for dynamism, flexibility and cost-effective IT operations. It requires many tasks to be executed by the provided resources to achieve good performance, short response times and high resource utilization. Meeting these challenges calls for a new energy-aware scheduling algorithm that produces an allocation map of tasks that optimizes energy consumption. This study surveys the existing techniques, which mainly focus on reducing energy consumption.
An Approach to Reduce Energy Consumption in Cloud data centers using Harmony ... (ijccsa)
The fast development of knowledge and communication technology has established a new computational style known as cloud computing. One of the main issues for cloud infrastructure providers is to minimize costs and maximize profitability, and energy management in cloud data centers is central to that goal. Energy consumption can be reduced either by releasing idle nodes or by reducing virtual machine migrations; for the latter, one challenge is selecting where to place the migrated virtual machines among the appropriate nodes. In this paper, an approach to reduce the energy consumption in cloud data centers is proposed. It adapts the harmony search algorithm to migrate the virtual machines, performing placement by sorting nodes and virtual machines by priority in descending order, where priority is calculated from the workload. The proposed approach is simulated, and the evaluation results show a reduction in virtual machine migrations, increased efficiency and reduced energy consumption.
Keywords: Energy Consumption, Virtual Machine Placement, Harmony Search Algorithm, Server Consolidation
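The descending-priority placement step described above can be sketched as a sort-then-first-fit pass; the harmony search refinement itself is omitted, and the VM loads and host capacities are invented:

```python
def place_by_priority(vms, hosts):
    # Sort VMs and hosts by workload-derived priority in descending order,
    # then place each VM on the first host with enough remaining capacity.
    vms = sorted(vms, key=lambda v: v[1], reverse=True)       # (name, load)
    hosts = sorted(hosts, key=lambda h: h[1], reverse=True)   # (name, capacity)
    free = {name: cap for name, cap in hosts}
    placement = {}
    for vm, load in vms:
        for host, _ in hosts:
            if free[host] >= load:
                free[host] -= load
                placement[vm] = host
                break
    return placement

vms = [("vm1", 2.0), ("vm2", 5.0), ("vm3", 3.0)]
hosts = [("h1", 6.0), ("h2", 5.0)]
print(place_by_priority(vms, hosts))
```

Packing the heaviest VMs first onto the highest-capacity hosts tends to leave whole nodes idle, which can then be released to save energy.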
Mobile Agents based Energy Efficient Routing for Wireless Sensor Networks (Eswar Publications)
Energy efficiency and prolonged network lifetime are major areas of concern, and the energy consumption rates of sensor nodes can be reduced in various ways. Data aggregation, result sharing and filtration of aggregated data among sensor nodes deployed in unattended regions are among the most researched areas in wireless sensor networks. Data aggregation minimizes information transfer from source to sink to reduce network traffic and remove congestion; result sharing distributes information among agents relevant to the tasks at hand; and filtration removes redundant information from the aggregated data. Various algorithms exist for data aggregation and filtration using different mobile agents. In this proposed work, the same mobile agent performs both data aggregation and data filtration, an approach that advocates the sharing of resources and reduces the energy consumption of sensor nodes.
Empirical studies have revealed that a significant amount of energy is lost unnecessarily in network architectures, protocols, routers and various other network devices. There is thus a need for techniques that achieve green networking in computer architecture and lead to energy savings. Green networking is an emerging phenomenon in the computer industry because of its economic and environmental benefits: saving energy cuts costs and lowers emissions of greenhouse gases, which are one of the major threats to the environment. 'Greening', as the name suggests, is the process of constructing network architecture so as to avoid unnecessary loss of power and energy in its various components. It can be implemented using various techniques, four of which are covered in this review: Adaptive Link Rate (ALR), Dynamic Voltage and Frequency Scaling (DVFS), interface proxying, and energy-aware applications and software.
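Of the four techniques, DVFS exploits the CMOS dynamic-power relation P = C·V²·f: scaling frequency and voltage down together reduces power cubically. A small numeric sketch (the capacitance, voltage and frequency values are illustrative):

```python
def dynamic_power(c_eff, voltage, freq):
    # CMOS dynamic power: P = C_eff * V^2 * f.
    return c_eff * voltage ** 2 * freq

full = dynamic_power(c_eff=1e-9, voltage=1.2, freq=2.0e9)
scaled = dynamic_power(c_eff=1e-9, voltage=0.6, freq=1.0e9)  # V and f both halved
print(full / scaled)
```

Halving both voltage and frequency cuts dynamic power roughly eightfold, at the cost of doubled execution time, hence the net energy saving DVFS targets.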
Stochastic Scheduling Algorithm for Distributed Cloud Networks using Heuristi... (Eswar Publications)
Rule-based heuristic scheduling algorithms are employed for resource and task scheduling in real-time and cloud computing systems because they are practical for NP-complete problems. They are simple, but there is much room for improvement. This study presents a hyper-heuristic scheduling algorithm, called the High-Performance Hyper-Heuristic Scheduling Algorithm (HHSA), that uses detection operators to find better scheduling solutions for real-time and cloud computing systems. Two operators, diversity detection and improvement detection, are employed to dynamically determine which low-level heuristic to apply in the search for a better solution. To evaluate the method, the authors compared it with several scheduling algorithms; the results show that HHSA significantly decreases the makespan of task scheduling compared with the other algorithms. The proposed high-performance hyper-heuristic algorithm reduces makespan on cloud computing systems and can be applied to both sequence-dependent and sequence-independent scheduling problems.
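The improvement-detection idea can be sketched as: keep applying one low-level heuristic while it improves the incumbent solution, and rotate to the next when it stalls. The toy cost function and perturbation heuristics below are invented stand-ins, not HHSA's actual operators:

```python
import random

def hyper_heuristic(cost, start, heuristics, iters=200, seed=1):
    # Improvement detection: stay with the current low-level heuristic while
    # it keeps finding better solutions; switch to the next one when it stalls.
    rng = random.Random(seed)
    best, h = start, 0
    for _ in range(iters):
        cand = heuristics[h](best, rng)
        if cost(cand) < cost(best):
            best = cand                     # improvement: keep this heuristic
        else:
            h = (h + 1) % len(heuristics)   # stall: rotate heuristic
    return best

cost = lambda x: (x - 3.0) ** 2             # toy objective, minimum at x = 3
small = lambda x, rng: x + rng.uniform(-0.1, 0.1)
large = lambda x, rng: x + rng.uniform(-1.0, 1.0)
result = hyper_heuristic(cost, start=10.0, heuristics=[small, large])
print(result)
```

The diversity-detection operator (which decides when to re-diversify the search) is omitted here for brevity.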
Providing a multi-objective scheduling tasks by Using PSO algorithm for cost ... (Editor IJCATR)
This article uses a multi-objective PSO algorithm for scheduling tasks for cost management in cloud computing. Any migration cost due to supply failure is treated as one objective, and each task is a particle, recognized through an appropriate fitness (schedule) function describing the particle arrangement that minimizes the total expense. In addition, a weight is assigned to each expenditure, reflecting the importance of that cost. The data used to simulate the proposed method are series of academic and research data obtained from the Internet, and MATLAB is used for the simulation. We simulate two problems: in the first, four tasks are divided among four vehicles; in the second, the problem is made more complicated with six tasks and four vehicles. We record PSO's output for both problems over various iterations. Finally, the particle dispersion as well as the output of the cost function were computed for each pa
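The particle/fitness setup described above can be illustrated with a canonical PSO minimizing a stand-in cost function; the article's task/vehicle data is not reproduced, and all parameters here are generic defaults:

```python
import random

def pso(cost, dim, n_particles=20, iters=100, seed=0):
    # Canonical PSO: each particle tracks its personal best; the swarm tracks
    # a global best; velocities blend inertia, cognitive and social pulls.
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=cost)[:]
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest

# Stand-in scheduling cost: total weighted expense, minimized at the origin.
best = pso(lambda x: sum(v * v for v in x), dim=4)
print(sum(v * v for v in best))
```

In the article's setting the cost function would instead sum the weighted task expenses, including migration cost, for a particle's task-to-vehicle arrangement.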
Delay tolerant network routing protocol a comprehensive survey with hybrid... (eSAT Journals)
Abstract
Delay/Disruption Tolerant Networks (DTNs) are sparse wireless networks recently used to connect devices in challenging environments, including underdeveloped areas of the world. In a DTN, a complete path from source to target does not exist for the majority of the time, which creates the difficulty of how to route packets in such an environment. A communications network capable of storing packets temporarily in intermediate nodes, until an end-to-end route is re-established or regenerated, is known as a delay tolerant network. Routing in such networks is very difficult, and different routing protocols have been developed for it. In this survey paper we discuss various routing strategies and, at the end, compare the different routing protocols on their various performance metrics.
Keywords: Delay tolerant networks (DTNs), Erasure coding, Replication, Routing.
A new approach “ring” for restricting web pages from script access (eSAT Journals)
Abstract Web pages meant for user registration, commenting in blogs, online polls, logging in to a web portal, or getting indexed in search engines are open to access by scripts, meaning it cannot be known whether a page is being requested by a human user or by a machine. Some websites provide specific APIs intended for script use, for example an API for sending SMS through a script. But there are situations where a web page must be restricted from script access, i.e., the user must be a human being. Popular techniques for restricting a web page from script access include the use of CAPTCHA or of OTP. In this paper a different approach, named RinG (Random id name Generator), is proposed and compared with the popularly used approaches on different parameters. Keywords: Captcha, OTP (One Time Password), RinG, API.
A comparative analysis for stabilize the temperature variation of a water bod... (eSAT Journals)
Abstract
Minimization of performance indices is a very effective criterion for real-time systems, and the process basically indicates a stable system. Fixing the temperature of a water body is an important practical problem where fuzzy logic applies naturally. In this paper we show how minimizing the error parameter using an optimal regulator equally affects the fuzzy system, so that the system moves toward stability.
Keywords - Crisp Set, Fuzzy Set, FLC, Cost Function, HJB Method
Effect of multipath routing in autonomous mobile mesh networks (eSAT Journals)
Abstract
Autonomous mobile mesh networks combine mobile ad hoc networks and mesh networks. Mobile ad hoc networks are formed temporarily for a specific purpose, without any fixed infrastructure, while mesh networks are fully connected networks supported by fixed infrastructure. The autonomous mobile mesh network (AMMNET) [1] is a framework designed to support mesh networks that are mobile yet still expect connectivity, as opposed to ad hoc networks. This paper proposes a multipath routing technology for AMMNETs that provides a better packet delivery ratio. The simulations are done in ns2, and the results show that multipath routing works well with the AMMNET framework.
Keywords: AMMNET, routing, multipath, ns2.
An approach to control inter cellular interference using load matrix in multi... (eSAT Journals)
Abstract
This paper deals with the reduction of inter-cellular interference in multi-carrier communication systems. Previously, the Load Matrix (LM) was proposed to allocate power to different users in a network based on the signal to noise plus interference ratio (SNIR) so as to reduce inter-cellular interference, and was studied for single-carrier systems. In multi-carrier systems the SNIR differs in each carrier, so a single SNIR for power allocation is not optimal. In this paper, to optimize power allocation in multi-carrier systems, Load Matrix coding with bifurcated SNIR (LM-BFSNIR) is proposed. With this approach, inter-cellular interference is observed to be reduced further compared to a single-carrier system, evaluated over the 3GPP-LTE standard.
Keywords: Power allocation, Inter cellular interference, Multi-Carrier mobile Communication system.
Quality assurance and its management for engineering college in India (eSAT Journals)
Abstract The concept of Quality Assurance (QA) in engineering institutions is used as a long-term policy. To be effective, QA usually requires continuing evaluation of design factors in order to provide confidence. In a competitive environment, QA has gained wide importance in industry and is increasingly being introduced experimentally in educational institutions. However, the educational system cannot be treated as an industry, although there are similarities in their sub-components: each can be viewed as a system consisting of input, process, management, resources, output and feedback. An educational institution is a complex system, as it involves human beings as both input and output; students are the input and the customer too. Keywords: Quality Assurance, evaluation factors, process, resource
Secure and privacy preserving data centric sensor networks with multi query o... (eSAT Journals)
Abstract
In large-scale wireless sensor networks, effective use of the vast amounts of data gathered requires scalable, self-organizing, and energy-efficient data dissemination and retrieval techniques. Data Centric Sensor (DCS) networks are a good approach, in which sensor nodes send the sensed data to a designated sensor node whose name is associated with that data. However, due to the unattended nature of wireless sensor networks, these storage nodes are susceptible to different types of attacks: an attacker may compromise them and obtain data of interest. In this paper we propose a secure and privacy-preserving DCS network that adds security and privacy support to DCS networks, using a multi-level key structure and cryptographic algorithms. In addition, we propose a multi-query optimization technique that aggregates similar queries over a small periodic window and constructs a single query message, reducing the number of messages required to serve multiple similar queries. Simulation and experimental results show that our work provides a secure, privacy-preserving data centric sensor network based on cryptographic keys, reduces message overhead, and incurs minimal communication cost compared to previous works.
Keywords: Data Centric Sensor Networks, Privacy preserving, Query Optimization, Security, Steiner tree, Wireless Sensor Networks
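The multi-query optimization step (grouping similar queries arriving within one periodic window into a single message) can be sketched as grouping by query key; the field names here are illustrative, not the paper's message format:

```python
def aggregate_queries(queries):
    # Group queries arriving within one periodic window by their target
    # attribute/region so that one message can serve many similar queries.
    batches = {}
    for q in queries:
        key = (q["attribute"], q["region"])
        batches.setdefault(key, []).append(q["requester"])
    # Emit one message per batch instead of one message per query.
    return [{"attribute": a, "region": r, "requesters": reqs}
            for (a, r), reqs in batches.items()]

queries = [
    {"attribute": "temp", "region": "R1", "requester": "u1"},
    {"attribute": "temp", "region": "R1", "requester": "u2"},
    {"attribute": "humidity", "region": "R2", "requester": "u3"},
]
msgs = aggregate_queries(queries)
print(len(msgs))  # 3 queries collapse into 2 messages
```

Fewer query messages in the network translates directly into the reduced communication cost the paper reports.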
Isolating values from big data with the help of four v’s (eSAT Journals)
Abstract
Big Data refers to the massive amounts of data that accumulate over time and are difficult to analyze and handle using common database management tools. It includes business transactions, e-mail messages, photos, surveillance videos and activity logs, as well as unstructured text posted on the Web, such as blogs and social media. Big Data has shown a lot of potential in real-world industry and the research community, and we support its power and potential for solving real-world problems. However, it is imperative to understand Big Data through the lens of the 4 Vs; the 4th V, 'Value', is the desired output for industry challenges and issues. We provide a brief survey of the 4 Vs of Big Data in order to understand Big Data and the Value concept in general. Finally we conclude by presenting our vision of improved healthcare, a product of Big Data utilization, as future work for researchers and students.
Keywords: Big Data, Surveillance videos, blogs, social media, four Vs.
Recent technological developments in pv+thermal technology a review (eSAT Journals)
Abstract A large amount of work has been carried out, and is ongoing, in the research and technological development of solar energy systems. Many systems have been innovated and approved as products by industrial bodies according to their market potential. Theoretical models have been developed, manufactured within specified design constraints, and tested to obtain the desired results. Many researchers have optimized these systems using advanced tools, some have developed software techniques such as neural networks, and the developed products are studied for market potential. The journey continues toward increasing system efficiency and competing with conventional energy prices. This article gives an overview of the trend of solar technology development and the future key areas in which researchers must work for sustainable and efficient solar technology. Keywords: PV/T, solar energy, efficiency of PV cells, hybrid systems.
Study on 100% energy efficient sustainable buildings (eSAT Journals)
Abstract This paper addresses an approach to minimize the energy consumption and cost of a house while providing comfort to the people living within. This can be achieved by proper design of the structure and the use of renewable resources. Energy can be harnessed on site using solar generation, which can be stored for consumption in the absence of daylight. To achieve zero-energy houses, we first need to conserve energy during construction and execution, and then create energy from renewable resources; the amount of energy required for proper operation is thus created on site, and there is no need for any external source of energy. A zero-energy home guarantees long-term energy and cost stability for the homeowner. The aim of the present study is to develop an open-access, consistent database of both embodied energy and carbon for construction materials.
Keywords: Energy, Energy saving, Cost saving, Emission reduction
Widipay a cross layer design for mobile payment system over lte directeSAT Journals
Abstract LTE Direct, with its features of device-to-device networking and proximate discovery, is a new and emerging technology that offers a whole new perspective on mobile payments. In this work, we propose a new mobile payment system using LTE Direct and its features. A sensitive mobile payment system must meet high security requirements in order to be trusted by users and businesses. These requirements are taken into account in our proposed system design, and solutions to the security considerations are provided. The system's security and usability features are designed for implementation from the physical to the application layer to address the identified issues. Within the scope of this work, we provide conceptual design solutions that make the system as solid and secure as possible while remaining convenient enough not to degrade the user's experience.
Keywords: LTE Direct, Mobile Payment, Internet of Things, Device-To-Device Networking
Usability analysis of sms alert system for immunization in the context of ban...eSAT Journals
Abstract Both the market and academia strongly encourage the development of usable systems, relying on a number of standards, guidelines, research, and good-practice streams. Unfortunately the health sector, whilst owning standards for many purposes and topics, still seems to lag behind where the conceptual issues and practical implications of usability are concerned. In this study, it was found that the use of mobile applications through SMS is growing significantly in developing countries, particularly in Bangladesh, and high public satisfaction was shown with mobile health services delivered through SMS. In this paper, usability is analytically investigated through a simulated health-oriented action setting and against a prototype of SMS-based health services in Bangladesh, and several provoking conclusions in terms of "rethinking usability" applied to academic actions and decision making are derived. Various health institutes can be influenced by this study to challenge existing difficulties against usability potential. Keywords: ICT, mHealth, Mobile applications, SMS, Usability
Study on the dependency of steady state response on the ratio of larmor and r...eSAT Journals
Abstract In this project we simulate, with very high accuracy, the dependency of the steady-state power and dispersion output on the ratio (r) between the Larmor and Rabi frequencies for an electron spin resonance experiment, using MATLAB (version 7.9.0.529, R2009b). The sample material (DPPH) is kept in a strong static magnetic field (B0), and a high-frequency electromagnetic field (B1(t)) is applied in the orthogonal direction. We divide the simulation into two parts. In the first part we ignore the terms and observe the dependency of the power maximum on the amplitude of the oscillating e.m. field B1 (for fixed Larmor frequency ωL) and on ωL (for fixed B1); we also observe a clear shift (Δω) of the power maximum (Pmax) from ωL. In the second part we include the terms and the ratio (r) between the Larmor and Rabi frequencies, and observe that both the shift (Δω) of the power maximum from ωL and the change in peak-to-peak line width (ΔBPP) with B1 depend on the ratio r. We consider various ranges of r ([0.83, 5], [16, 100], [88.3, 500], [1000, 2000], [833.3, 5000]) and observe these dependencies. As the ratio r increases, the shift (Δω) and the change in ΔBPP with B1 decrease and converge to the case of neglected terms. We also observe that the shift (Δω) follows a nonlinear relationship with B1. Keywords: E.S.R., Larmor, Rabi, Ratio r, Spin
Analysis of various mcm algorithms for reconfigurable rrc fir filtereSAT Journals
This document analyzes various multiple constant multiplication (MCM) algorithms for implementing reconfigurable root raised cosine (RRC) finite impulse response (FIR) filters. It compares digit based recoding, canonic sign digit (CSD), common subexpression elimination (CSE), and binary common subexpression elimination (BCSE) algorithms in terms of area, power, and speed. The results show that the BCSE algorithm provides the best performance, reducing area by up to 11.7% and power consumption compared to the other methods. The BCSE technique reuses common binary bit patterns within filter coefficients to optimize constant multiplications.
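As a rough illustration of the BCSE idea described above (not the paper's exact algorithm), the sketch below scans filter coefficients, given as binary words, for recurring multi-bit patterns; each shared pattern can be computed once as a shifted add and reused, which is where the area and power savings come from. The coefficient values, word width, and pattern length are made-up assumptions for the example.

```python
from collections import Counter

def common_subexpressions(coeffs, width=8, pattern_len=3):
    """Count recurring binary bit patterns across filter coefficients.

    In BCSE, a pattern that appears in several coefficients is computed
    once (as a shifted add) and the partial result is reused.
    """
    counts = Counter()
    for c in coeffs:
        bits = format(c, f"0{width}b")
        for i in range(width - pattern_len + 1):
            window = bits[i:i + pattern_len]
            if window.count("1") >= 2:  # only multi-bit patterns save adders
                counts[window] += 1
    return counts

# Hypothetical 8-bit coefficients, chosen so that the pattern "011" recurs
counts = common_subexpressions([0b01100110, 0b00110011, 0b01101100])
```

Here the pattern "011" occurs six times across the three coefficients, so a hardware implementation would compute that partial sum once and shift it into place wherever it appears.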
A study of the constraints affecting the proper utilization of computer appli...eSAT Journals
Abstract
Construction is one of the areas in which computer application software is highly required to perform different tasks at various stages of a construction project, and computers have been used to enhance the effectiveness of construction management. Efficient utilization of computer application software is key to proper management of construction activities, which contributes to the successful implementation of a construction project, and careful selection of such software is required by construction companies for proper management of their resources. The difficulties of computerization, especially with regard to the use of computer application software in resource management, require the knowledge and expertise of those working in construction companies who are directly involved in the management of construction resources. This study was carried out to determine the obstacles confronting the proper utilization of computer application software in resource management. The results of the questionnaire survey used in this research clearly indicated those constraints, which include: lack of understanding of the software's potential; lack of qualified personnel to use the software; unawareness of most of the resource management software available in the market; a communication gap between vendors and users, which contributes to misunderstanding of the full scope of the software; and the greater know-how required from staff, among numerous others. A total of 22 construction companies were selected for the study. An open-ended questionnaire consisting of 22 questions on the constraints affecting the proper utilization of computer application software in resource management was designed, validated, and distributed among engineers, project managers, and executives directly involved in the activities of the construction site.
Keywords: resource management, application software, construction management
A study of region based segmentation methods for mammogramseSAT Journals
Abstract Breast cancer is one of the most common diseases found in women, and the number of women affected is increasing year by year. Detecting cancer in the late stages leads to very complicated surgeries and a high chance of death, whereas early detection of breast cancer allows less complicated procedures and early recovery. Many tests have been developed to detect cancer, such as mammography and ultrasound. Mammography helps in the early detection of breast cancer, but finding a mass and its spread in a mammographic image is very difficult, and expert radiologists are needed for accurate reading of a mammogram. Researchers have been working for years on algorithms for easy detection and segmentation of breast masses; feature extraction and classification have also been studied extensively so that documented cases can be compared to diagnose new ones. Segmentation of cancerous mass regions from the breast tissue is a difficult process, and many algorithms have been proposed for it, including region growing, watershed segmentation, and clustering. The region growing method is based on two major factors: seed point selection and the stopping criterion. Watershed segmentation, on the other hand, is based on the geographical concept of watersheds and catchment basins, and uses a technique called flooding. These two major region-based methods, region growing and watershed segmentation, are compared and detailed in this paper. Keywords: Mammography, Mass Detection, Segmentation, Region Growing, Watershed Segmentation.
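The seed-point-plus-stopping-criterion mechanism of region growing described in the abstract can be sketched as follows. This is a minimal illustration on a toy intensity array, not code from the paper; the intensity-tolerance stopping criterion and 4-connectivity are assumptions chosen for the example.

```python
def region_grow(image, seed, tol=10):
    """Seeded region growing: start from `seed` and absorb 4-connected
    neighbours whose intensity is within `tol` of the seed value."""
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region, stack = set(), [seed]
    while stack:
        r, c = stack.pop()
        if (r, c) in region or not (0 <= r < rows and 0 <= c < cols):
            continue
        if abs(image[r][c] - seed_val) <= tol:
            region.add((r, c))
            stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    return region

# Toy "mammogram" patch: a bright region (values near 200) on dark tissue
img = [
    [10,  12, 200, 201],
    [11,  13, 198, 202],
    [12, 199, 200, 205],
]
mass = region_grow(img, (0, 2), tol=10)  # grow from a seed inside the bright mass
```

In a real pipeline the seed would come from a radiologist's click or a detection stage, and the stopping criterion is where most of the tuning effort goes.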
An Exploration of Grid Computing to be Utilized in Teaching and Research at TUEswar Publications
This document describes building a simple grid computing environment from existing computing resources at Taiz University in Yemen. It outlines:
1) Installing and configuring software like Globus Toolkit, Tomcat, and OGCE portal on three machines to set up basic grid services like a certificate authority server, MyProxy server, and portal server.
2) Configuring the hardware nodes, installing the portal server, setting up the certificate authority server, and MyProxy server.
3) Testing basic grid services like credential delegation to MyProxy, retrieval from MyProxy, and GridFTP file transfers.
The results indicate the proposed grid model is promising for teaching and research at Taiz University and could serve as a
IRJET- Cost Effective Workflow Scheduling in BigdataIRJET Journal
This document summarizes research on cost-effective workflow scheduling in big data environments. It discusses a Pointer Gossip Content Addressable Network Montage Framework that allows automatic selection of target clouds, uniform access to clouds, and workflow data management while meeting service level agreements at lowest cost. The framework is evaluated using a real scientific workflow application in different deployment scenarios. Results show it can execute workflows with expected performance and quality at lowest cost.
Energy Optimized Link Selection Algorithm for Mobile Cloud ComputingEswar Publications
Mobile cloud computing is a revolutionary distributed computing research area spanning three domains: cloud computing, wireless networks, and mobile computing. It aims to improve the computational capabilities of mobile devices while minimizing their energy consumption. Heavy computations can be offloaded to the cloud to reduce the mobile device's energy use, yet in some mobile cloud applications it has proven more energy inefficient to use the cloud than to perform the computation locally. Despite mobile cloud computing being a reliable idea, mobile phones still face several problems, such as limited storage and short battery life, and low energy consumption is one of the most important concerns. Different network links have different uplink and downlink bandwidths for task and data transmission between mobile and cloud. In this paper, a novel optimal link selection algorithm is proposed to minimize mobile energy. In the first phase, all available networks are scanned and their signal strengths are calculated. The calculated signals, along with network locations, are given as input to the optimal link selection algorithm, which then selects an optimal network link.
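The two-phase selection described above (scan the available links, then pick the best one) can be sketched as below. The energy model used here, transmit power multiplied by transfer time, and the link parameters are illustrative assumptions, not the paper's actual formula, which the summary does not give.

```python
def select_link(links, data_bits):
    """Pick the scanned link that minimises estimated transfer energy.

    Assumed energy model: energy [J] = transmit_power [W] * time [s],
    where time = data_bits / bandwidth. Real models would also account
    for signal strength, tail energy, and idle listening.
    """
    def energy(link):
        return link["tx_power_w"] * data_bits / link["bandwidth_bps"]
    return min(links, key=energy)

# Hypothetical scan results for one offloading decision
scanned = [
    {"name": "wifi", "bandwidth_bps": 20e6, "tx_power_w": 0.8},
    {"name": "lte",  "bandwidth_bps": 50e6, "tx_power_w": 2.5},
    {"name": "3g",   "bandwidth_bps": 2e6,  "tx_power_w": 1.2},
]
best = select_link(scanned, data_bits=80e6)
```

With these numbers the Wi-Fi link wins: its lower transmit power outweighs LTE's higher bandwidth, which mirrors the paper's point that offloading is only energy-efficient over the right link.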
This document discusses architectural and security management for grid computing. It begins by defining grid computing as an environment that enables sharing of distributed resources across organizations to achieve common goals. It then describes the key components of a grid, including computation resources, storage, communications, software/licenses, and special equipment. The document outlines a four-level grid architecture including a fabric level, core middleware level, user middleware level, and application level. It also discusses important aspects of grid computing such as resource balancing, reliability through distribution, parallel CPU capacity, and management of different projects. Finally, it emphasizes that security is a major concern for grid computing due to the open nature of sharing resources across organizational boundaries.
Analysis of Cloud Computing Security Concerns and MethodologiesIRJET Journal
This document analyzes security concerns and methodologies related to cloud computing. It begins with an abstract that introduces cloud computing and discusses security concerns with utilizing cloud-based services. The document then reviews literature on cloud computing fundamentals, performance, scalability, availability, and security challenges. It examines common security threats to cloud computing like data loss, malicious insiders, insecure interfaces, and account hijacking. The document also outlines requirements for security in cloud computing and analyzes cryptographic algorithms and their parameters that can help strengthen security. It concludes by comparing different cryptographic algorithms based on parameters like level of data protection and availability.
Cloud Computing Task Scheduling Algorithm Based on Modified Genetic AlgorithmIRJET Journal
This document presents a cloud computing task scheduling algorithm based on a modified genetic algorithm. It begins with an abstract discussing scalable cloud computing and the need for efficient task scheduling and virtual machine allocation. It then discusses the problem of existing scheduling algorithms having high overhead and slow convergence. The proposed methodology uses a heuristic-based prediction model with a logistic normal distribution technique to improve data transmission prediction. Simulation results show the proposed approach has better throughput and computation time than existing algorithms for different data packet sizes. The conclusion discusses overcoming drawbacks of earlier algorithms and future work focusing on algorithms with better tradeoffs between performance characteristics.
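The summary above does not give the modified algorithm's details, but the baseline it builds on, a genetic algorithm assigning tasks to virtual machines to minimize makespan, can be sketched as follows. Population size, operators, and the toy task/VM data are all assumptions for illustration, not the paper's configuration.

```python
import random

def makespan(assignment, task_len, vm_speed):
    """Finish time of the busiest VM under a task -> VM assignment."""
    load = [0.0] * len(vm_speed)
    for task, vm in enumerate(assignment):
        load[vm] += task_len[task] / vm_speed[vm]
    return max(load)

def ga_schedule(task_len, vm_speed, pop=30, gens=60, seed=1):
    """Toy GA: keep the fittest half, one-point crossover, point mutation."""
    rng = random.Random(seed)
    n, m = len(task_len), len(vm_speed)
    population = [[rng.randrange(m) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda a: makespan(a, task_len, vm_speed))
        survivors = population[: pop // 2]   # elitism: best always kept
        children = []
        while len(survivors) + len(children) < pop:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)
            child = p1[:cut] + p2[cut:]
            child[rng.randrange(n)] = rng.randrange(m)  # mutation
            children.append(child)
        population = survivors + children
    return min(population, key=lambda a: makespan(a, task_len, vm_speed))

# Six tasks (lengths) on two VMs (speeds); VM 1 is twice as fast
best = ga_schedule([4, 8, 6, 2, 9, 5], [1.0, 2.0])
```

Modified GAs like the one summarized above typically replace the random initial population or the mutation step with a heuristic (here, the paper's prediction model) to speed up convergence.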
Job Scheduling Mechanisms in Fog Computing Using Soft Computing Techniques.IRJET Journal
This document discusses job scheduling mechanisms in fog computing using soft computing techniques. It begins with an abstract that introduces fog computing and the need for dynamic load balancing algorithms to efficiently allocate computing resources. It then provides background on fog computing and its advantages over cloud computing for applications requiring low latency. The document discusses different types of scheduling mechanisms (resource, task, workflow, job scheduling) and optimization techniques that can be used, including metaheuristic algorithms like particle swarm optimization and genetic algorithms. It provides details on these algorithms and their application to scheduling in fog computing environments.
IRJET- Fog Route:Distribution of Data using Delay Tolerant NetworkIRJET Journal
This document summarizes a research paper that proposes using delay tolerant network (DTN) approaches for data dissemination in fog computing networks. It describes a hybrid data dissemination framework with a two-plane architecture: 1) the cloud serves as a control plane to process content updates and organize data flows, and 2) geometrically distributed fog servers form a data plane to disseminate data among themselves using DTN techniques. This allows non-urgent, high-volume content to be distributed across fog servers in an efficient manner without relying on expensive bandwidth between the fog and cloud layers.
SECURE FILE STORAGE IN THE CLOUD WITH HYBRID ENCRYPTIONIRJET Journal
The document proposes a new public audit program for secure cloud storage using a dynamic hash table (DHT) data structure. A DHT would store data integrity information in the Third Party Auditor (TPA) to reduce computational costs and communication overhead compared to previous schemes. The approach combines public-key-based homomorphic authenticators with a TPA-generated random mask to achieve data protection and batch verification. This allows the cloud storage system to meet security requirements for data confidentiality, integrity, and availability while improving audit efficiency.
IRJET- Cost Effective Scheme for Delay Tolerant Data TransmissionIRJET Journal
This document proposes two schemes, the deadline cost (DC) scheme and the deadline shortest queue first (DSQF) scheme, to improve the rate of data meeting its deadline with minimal data transmission cost in a wireless mesh network of IoT gateways. The DC scheme selects the cheapest gateway that meets the data deadline, while DSQF selects the fastest gateway, with the other metric as the secondary factor. The schemes aim to reduce overall data transmission costs compared to traditional greedy cost and shortest queue first schemes. According to tests, the proposed schemes can meet over 98% of data deadlines while reducing costs by 5.74% on average.
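The DC scheme's selection rule, pick the cheapest gateway that still meets the data deadline, can be sketched as below. The gateway records and the fallback behaviour when no gateway is feasible are illustrative assumptions; the summary does not specify how the schemes handle that case.

```python
def deadline_cost(gateways, data_deadline):
    """DC scheme sketch: among gateways whose queue delay meets the
    deadline, pick the cheapest, breaking cost ties on queue delay."""
    feasible = [g for g in gateways if g["queue_delay"] <= data_deadline]
    if not feasible:  # assumption: fall back to the fastest gateway
        return min(gateways, key=lambda g: g["queue_delay"])
    return min(feasible, key=lambda g: (g["cost"], g["queue_delay"]))

# Hypothetical IoT gateways (cost per transmission, current queue delay)
gws = [
    {"name": "g1", "cost": 5.0, "queue_delay": 2.0},
    {"name": "g2", "cost": 1.0, "queue_delay": 9.0},
    {"name": "g3", "cost": 3.0, "queue_delay": 4.0},
]
choice = deadline_cost(gws, data_deadline=5.0)  # cheap g2 misses the deadline
```

The DSQF scheme would simply swap the sort key, minimizing queue delay first and using cost as the tie-breaker.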
An Efficient Cloud Scheduling Algorithm for the Conservation of Energy throug...IJECEIAES
Broadcasting is a well-known operation used to support different computing protocols in cloud computing. Attaining energy efficiency is one of the prominent challenges in the scheduling process used in cloud computing, as there are fixed limits that the system must meet. In this research paper, we focus on cloud server maintenance and scheduling, using an interactive broadcasting energy-efficient computing technique together with the cloud computing server. The remote host machines used for cloud services dissipate more power and consequently consume more and more energy, and power consumption is one of the main factors determining the cost of computing resources. The idea is to use avoidance technology to assign data center resources dynamically, depending on application demands, and to support cloud computing by optimizing the number of servers in use.
Grid computing, or network computing, was developed to make computing power available in the same way electric power is available from the grid: we just plug in, and whoever needs power may use it. In grid computing, if a system needs more power than it has available, it can share the computation with other machines connected to the grid. In this way we can use the power of a supercomputer without the huge cost, and CPU cycles that were previously wasted can be utilized. To perform grid computation across computers joined through the internet, software that supports grid computation must be installed on each computer inside the VO. The software handles information queries, storage management, processing scheduling, authentication, and data encryption to ensure information security.
This document discusses security protocols for high performance grid computing architectures. It analyzes the different network layers in grid computing protocols and identifies various security disciplines. It also analyzes various security suites available in the TCP/IP protocol architecture. The paper aims to define security disciplines at different levels of cluster computing architecture and propose applicable security suites from the TCP/IP security protocol suite. Grid computing allows sharing and aggregation of distributed computing resources to enable more powerful applications. Security is an important consideration in grid computing due to sharing resources across administrative domains.
Secure cloud storage privacy preserving public auditing for data storage secu...rajender147
This document proposes a privacy-preserving public auditing system for data storage security in cloud computing. It describes a system with four entities - the data owner who stores data in the cloud, the cloud service provider who provides storage, a third party auditor who verifies the integrity of stored data on behalf of owners, and granted applications that can access the stored data. The system uses secret keys to preprocess and encrypt the owner's data before storage. It allows the third party auditor to efficiently verify the correctness and integrity of stored data through proof of irretrievability protocols, while preserving the privacy of the owner's data. The system aims to ensure data security and privacy during public auditing in cloud storage.
Efficient architectural framework of cloud computing Souvik Pal
This document discusses an efficient architectural framework for cloud computing. It begins by providing background on cloud computing and discusses challenges such as security, privacy, and reliability. It then proposes a new architectural framework that separates infrastructure as a service (IaaS) into three sub-modules: IaaS itself, a hypervisor monitoring environment (HME), and resources as a service (RaaS). The HME acts as middleware between IaaS and physical resources, using a hypervisor to allocate resources from a pool managed by RaaS. This proposed framework is intended to improve performance and access speed for cloud computing.
Load Balance in Data Center SDN Networks IJECEIAES
In the last two decades, networks have changed in line with rapidly changing requirements. Current Data Center Networks (DCNs) contain large numbers of hosts (tens of thousands) with special bandwidth needs, as cloud networking and multimedia content computing increase. Conventional DCNs are strained by the growing number of users and bandwidth requirements, which in turn impose many implementation limitations. Current networking devices, with their coupled control and forwarding planes, result in network architectures that are not suitable for dynamic computing and storage needs. Software Defined Networking (SDN) was introduced to change this notion of traditional networks by decoupling the control and forwarding planes. Due to the rapid increase in the number of applications, websites, and storage demands, some network resources are underutilized because of static routing mechanisms. To overcome these limitations, a Software Defined Network based OpenFlow data center network architecture is used to obtain better performance parameters and to implement a traffic load balancing function. Load balancing distributes traffic requests over the connected servers to diminish network congestion and reduce the underutilization of servers. As a result, SDN affords more effective configuration, enhanced performance, and more flexibility for huge network designs.
Energy Meters using Internet of Things PlatformIRJET Journal
This document proposes an architecture and implementation for integrating energy meters with an Internet of Things (IoT) platform. The key aspects of the approach are: 1) Integrating smart grid applications and home applications using a common IoT infrastructure, 2) Collecting data from different sensor communication protocols, 3) Providing secure and customized data access, and 4) Mapping sensors and actuators to a common abstraction layer to enable multiple concurrent applications. The proposed system was demonstrated with a kit using Zigbee meters and gateways connected to an IoT server and custom user interface.
IRJET- Edge Computing the Next Computational LeapIRJET Journal
The document discusses edge computing, which involves processing data at the edge of networks, close to where it is generated by IoT devices, rather than sending all data to centralized cloud servers. Edge computing can reduce latency, bandwidth costs, and improve privacy and security by keeping data processing localized. It describes how edge computing is needed as more data is generated by devices and applications like self-driving cars require real-time processing. Edge computing provides advantages over traditional cloud-based approaches like reduced latency and energy consumption. Potential applications of edge computing include smart cities and autonomous vehicles. Challenges to address include programming heterogeneous edge devices and ensuring security and privacy.
An advanced ensemble load balancing approach for fog computing applicationsIJECEIAES
Fog computing has emerged as a viable concept for expanding the capabilities of cloud computing to the periphery of the network, allowing for efficient processing and analysis of data from internet of things (IoT) devices. Load balancing is essential in fog computing because it ensures optimal resource utilization and performance among distributed fog nodes. This paper proposes an ensemble-based load-balancing approach for fog computing environments. The advanced ensemble load balancing approach (AELBA) uses real-time monitoring and analysis of fog node metrics, such as resource utilization, network congestion, and service response times, to facilitate effective load distribution. These metrics are fed into a centralized load-balancing controller, which dynamically adjusts the load distribution across fog nodes based on the ensemble's collective decision-making. The performance of the proposed approach is evaluated and compared to traditional load-balancing techniques in fog using extensive simulation experiments. The results demonstrate that the ensemble-based approach outperforms individual load-balancing algorithms in response time, resource utilization, and scalability, and that it adapts to dynamic fog environments, providing efficient load balancing even under varying workload conditions.
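The metric-driven node selection described above can be illustrated with a simple weighted score. This is an assumption for illustration only: the paper's ensemble combines several algorithms' decisions, which is not reproduced here; the controller below just picks the fog node with the lowest combined utilisation/congestion/response score.

```python
def score(node, weights):
    """Lower is better: weighted sum of the monitored metrics
    (all assumed pre-normalised to [0, 1])."""
    return sum(weights[k] * node[k] for k in weights)

def pick_node(nodes, weights):
    """Controller sketch: route the next request to the best-scoring node."""
    return min(nodes, key=lambda n: score(n, weights))

# Hypothetical fog node metrics reported to the controller
fog_nodes = [
    {"id": "f1", "util": 0.9, "congestion": 0.2, "resp": 0.4},
    {"id": "f2", "util": 0.3, "congestion": 0.5, "resp": 0.3},
    {"id": "f3", "util": 0.6, "congestion": 0.6, "resp": 0.9},
]
w = {"util": 0.5, "congestion": 0.3, "resp": 0.2}
target = pick_node(fog_nodes, w)
```

An ensemble version would run several such scorers (or entirely different algorithms) and take a vote, which is what gives AELBA its robustness under varying workloads.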
Similar to An optimization framework for cloud based data management model in smart grid (20)
Mechanical properties of hybrid fiber reinforced concrete for pavementseSAT Journals
Abstract
The effect of the addition of mono fibers and hybrid fibers on the mechanical properties of a concrete mixture is studied in the present investigation. Steel fibers at 1% and polypropylene fibers at 0.036% were added individually to the concrete mixture as mono fibers, and then together to form a hybrid fiber reinforced concrete. Mechanical properties such as compressive, split tensile, and flexural strength were determined. The results show that hybrid fibers improve the compressive strength only marginally compared to mono fibers, whereas hybridization improves split tensile strength and flexural strength noticeably.
Keywords: Hybridization, mono fibers, steel fiber, polypropylene fiber, improvement in mechanical properties.
Material management in construction – a case studyeSAT Journals
Abstract
The objective of the present study is to understand the problems occurring in a company because of improper material management. In construction project operations there is often a project cost variance in terms of materials, equipment, manpower, subcontractors, overhead cost, and general conditions. Material is the main component of construction projects; therefore, if materials are not properly managed, a project cost variance is created. Project cost can be controlled by taking corrective actions against the cost variance. A methodology to diagnose and evaluate the procurement process involved in material management and to launch continuous improvement was therefore developed and applied. A thorough study was carried out, including case studies, surveys, and interviews with professionals involved in this area. As a result, a methodology for diagnosis and improvement was proposed and tested in selected projects. The results obtained show that the main problems of procurement are related to schedule delays and a lack of the specified quality for the project. To prevent this situation it is often necessary to dedicate important resources, such as money, personnel, and time, to monitor and control the process. A great potential for improvement was detected if state-of-the-art technologies, such as electronic mail, electronic data interchange (EDI), and analysis, were applied to the procurement process. These helped to eliminate the root causes of many of the problems detected.
Managing drought short term strategies in semi arid regions a case studyeSAT Journals
Abstract
Drought management needs multidisciplinary action; interdisciplinary efforts among experts in various fields in drought-prone areas help to achieve a tangible and permanent solution to this recurring problem. The Gulbarga district has a total area of around 16,240 sq. km, accounting for 8.45 per cent of the area of Karnataka state. The district is situated at latitude 17º 19' 60" North and longitude 76º 49' 60" East, entirely on the Deccan plateau at a height of 300 to 750 m above MSL. Its climate is sub-tropical and semi-arid, and it is among the drought-prone districts of Karnataka state, so drought management is very important for a district like Gulbarga. In this paper various short-term strategies are discussed to mitigate drought conditions in the district.
Keywords: Drought, South-West monsoon, Semi-Arid, Rainfall, Strategies etc.
Life cycle cost analysis of overlay for an urban road in bangaloreeSAT Journals
Abstract
Pavements are subjected to severe stresses and weathering effects from the day they are constructed and opened to traffic, mainly due to fatigue behavior and environmental effects. Pavement rehabilitation is therefore one of the most important components of the entire road system. This paper highlights the design of a concrete pavement with added mono fibers, such as polypropylene and steel, and hybrid fibres for a widened portion of an existing concrete pavement, along with various overlay alternatives for an existing bituminous pavement on an urban road in Bangalore. Life cycle cost analyses of these sections are done by the Net Present Value (NPV) method to identify the most feasible option. The results show that although the initial cost of constructing a concrete overlay is high, over a period of time it proves better than the bituminous overlay when the whole life cycle cost is considered. The economic analysis also indicates that, of the three fibre options, hybrid reinforced concrete would be economical without compromising the performance of the pavement.
Keywords: Fatigue, Life cycle cost analysis, Net Present Value method, Overlay, Rehabilitation
Laboratory studies of dense bituminous mixes ii with reclaimed asphalt materialseSAT Journals
Abstract
The growing demand on our nation's roadways over the past couple of decades, decreasing budgetary funds, and the need to provide a safe, efficient, and cost-effective roadway system have led to a dramatic increase in the need to rehabilitate existing pavements, and raise the issue of building sustainable road infrastructure in India. These emerging needs are today's burning issue and have become the purpose of this study.
In the present study, samples of existing bituminous layer materials were collected from the NH-48 (Devahalli to Hassan) site. The mixtures were designed by the Marshall method as per the Asphalt Institute (MS-II) at 20% and 30% Reclaimed Asphalt Pavement (RAP). RAP material was blended with virgin aggregate such that all specimens were tested for the Dense Bituminous Macadam-II (DBM-II) gradation as per the Ministry of Roads, Transport, and Highways (MoRT&H), and cost analysis was carried out to establish the economics. Laboratory results and analysis showed that the use of recycled materials introduces significant variability in Marshall stability, and that the variability increases with RAP content. Savings can be realized from utilization of recycled materials as per the methodology: the reduction in total cost is 19% and 30% compared with the virgin mixes.
Keywords: Reclaimed Asphalt Pavement, Marshall Stability, MS-II, Dense Bituminous Macadam-II
Laboratory investigation of expansive soil stabilized with natural inorganic ...eSAT Journals
This document summarizes a study on stabilizing expansive black cotton soil with the natural inorganic stabilizer RBI-81. Laboratory tests were conducted to evaluate the effect of RBI-81 on the soil's engineering properties. The tests showed that with 2% RBI-81 and 28 days of curing, the unconfined compressive strength increased by around 250% and the CBR value improved by approximately 400% compared to the untreated soil. Overall, the study found that RBI-81 effectively improved the strength properties of the black cotton soil and its suitability as a soil stabilizer was supported.
Influence of reinforcement on the behavior of hollow concrete block masonry p...eSAT Journals
Abstract
Reinforced masonry was developed to exploit the strength potential of masonry and to solve its lack of tensile strength. Experimental
and analytical studies have been carried out to investigate the effect of reinforcement on the behavior of hollow concrete block
masonry prisms under compression and to predict ultimate failure compressive strength. In the numerical program, three dimensional
non-linear finite elements (FE) model based on the micro-modeling approach is developed for both unreinforced and reinforced
masonry prisms using ANSYS (14.5). The proposed FE model uses multi-linear stress-strain relationships to model the non-linear
behavior of hollow concrete block, mortar, and grout. Willam-Warnke’s five parameter failure theory has been adopted to model the
failure of masonry materials. The comparison of the numerical and experimental results indicates that the FE models can successfully
capture the highly nonlinear behavior of the physical specimens and accurately predict their strength and failure mechanisms.
Keywords: Structural masonry, Hollow concrete block prism, grout, Compression failure, Finite element method,
Numerical modeling.
Influence of compaction energy on soil stabilized with chemical stabilizereSAT Journals
This document summarizes a study on the influence of compaction energy on soil stabilized with a chemical stabilizer. Laboratory tests were conducted on locally available loamy soil treated with a patented polymer liquid stabilizer and compacted at four different energy levels. The study found that increasing the compaction effort increased the density of both untreated and treated soil, but the rate of increase was lower for stabilized soil. Treating the soil with the stabilizer improved its unconfined compressive strength and resilient modulus, and reduced accumulated plastic strain, with these properties further improved by higher compaction efforts. The stabilized soil exhibited strength and performance benefits compared to the untreated soil.
Geographical information system (gis) for water resources managementeSAT Journals
This document describes a hydrological framework developed in the form of a Hydrologic Information System (HIS) to meet the information needs of various government departments related to water management in a state. The HIS consists of a hydrological database coupled with tools for collecting and analyzing spatial and non-spatial water resources data. It also incorporates a hydrological model to indirectly assess water balance components over space and time. A web-based GIS portal was created to allow users to access and visualize the hydrological data, as well as outputs from the SWAT hydrological model. The framework is intended to facilitate integrated water resources planning and management across different administrative levels.
Forest type mapping of bidar forest division, karnataka using geoinformatics ...eSAT Journals
Abstract
The study demonstrate the potentiality of satellite remote sensing technique for the generation of baseline information on forest types
including tree plantation details in Bidar forest division, Karnataka covering an area of 5814.60Sq.Kms. The Total Area of Bidar
forest division is 5814Sq.Kms analysis of the satellite data in the study area reveals that about 84% of the total area is Covered by
crop land, 1.778% of the area is covered by dry deciduous forest, 1.38 % of mixed plantation, which is very threatening to the
environmental stability of the forest, future plantation site has been mapped. With the use of latest Geo-informatics technology proper
and exact condition of the trees can be observed and necessary precautions can be taken for future plantation works in an appropriate
manner
Keywords:-RS, GIS, GPS, Forest Type, Tree Plantation
Factors influencing compressive strength of geopolymer concreteeSAT Journals
Abstract
To study effects of several factors on the properties of fly ash based geopolymer concrete on the compressive strength and also the
cost comparison with the normal concrete. The test variables were molarities of sodium hydroxide(NaOH) 8M,14M and 16M, ratio of
NaOH to sodium silicate (Na2SiO3) 1, 1.5, 2 and 2.5, alkaline liquid to fly ash ratio 0.35 and 0.40 and replacement of water in
Na2SiO3 solution by 10%, 20% and 30% were used in the present study. The test results indicated that the highest compressive
strength 54 MPa was observed for 16M of NaOH, ratio of NaOH to Na2SiO3 2.5 and alkaline liquid to fly ash ratio of 0.35. Lowest
compressive strength of 27 MPa was observed for 8M of NaOH, ratio of NaOH to Na2SiO3 is 1 and alkaline liquid to fly ash ratio of
0.40. Alkaline liquid to fly ash ratio of 0.35, water replacement of 10% and 30% for 8 and 16 molarity of NaOH and has resulted in
compressive strength of 36 MPa and 20 MPa respectively. Superplasticiser dosage of 2 % by weight of fly ash has given higher
strength in all cases.
Keywords: compressive strength, alkaline liquid, fly ash
Experimental investigation on circular hollow steel columns in filled with li...eSAT Journals
Abstract
Composite Circular hollow Steel tubes with and without GFRP infill for three different grades of Light weight concrete are tested for
ultimate load capacity and axial shortening , under Cyclic loading. Steel tubes are compared for different lengths, cross sections and
thickness. Specimens were tested separately after adopting Taguchi’s L9 (Latin Squares) Orthogonal array in order to save the initial
experimental cost on number of specimens and experimental duration. Analysis was carried out using ANN (Artificial Neural
Network) technique with the assistance of Mini Tab- a statistical soft tool. Comparison for predicted, experimental & ANN output is
obtained from linear regression plots. From this research study, it can be concluded that *Cross sectional area of steel tube has most
significant effect on ultimate load carrying capacity, *as length of steel tube increased- load carrying capacity decreased & *ANN
modeling predicted acceptable results. Thus ANN tool can be utilized for predicting ultimate load carrying capacity for composite
columns.
Keywords: Light weight concrete, GFRP, Artificial Neural Network, Linear Regression, Back propagation, orthogonal
Array, Latin Squares
Experimental behavior of circular hsscfrc filled steel tubular columns under ...eSAT Journals
This document summarizes an experimental study that tested circular concrete-filled steel tube columns with varying parameters. 45 specimens were tested with different fiber percentages (0-2%), tube diameter-to-wall-thickness ratios (D/t from 15-25), and length-to-diameter (L/d) ratios (from 2.97-7.04). The results found that columns filled with fiber-reinforced concrete exhibited higher stiffness, equal ductility, and enhanced energy absorption compared to those filled with plain concrete. The load carrying capacity increased with fiber content up to 1.5% but not at 2.0%. The analytical predictions of failure load closely matched the experimental values.
Evaluation of punching shear in flat slabseSAT Journals
Abstract
Flat-slab construction has been widely used in construction today because of many advantages that it offers. The basic philosophy in
the design of flat slab is to consider only gravity forces; this method ignores the effect of punching shear due to unbalanced moments
at the slab column junction which is critical. An attempt has been made to generate generalized design sheets which accounts both
punching shear due to gravity loads and unbalanced moments for cases (a) interior column; (b) edge column (bending perpendicular
to shorter edge); (c) edge column (bending parallel to shorter edge); (d) corner column. These design sheets are prepared as per
codal provisions of IS 456-2000. These design sheets will be helpful in calculating the shear reinforcement to be provided at the
critical section which is ignored in many design offices. Apart from its usefulness in evaluating punching shear and the necessary
shear reinforcement, the design sheets developed will enable the designer to fix the depth of flat slab during the initial phase of the
design.
Keywords: Flat slabs, punching shear, unbalanced moment.
Evaluation of performance of intake tower dam for recent earthquake in indiaeSAT Journals
Abstract
Intake towers are typically tall, hollow, reinforced concrete structures and form entrance to reservoir outlet works. A parametric
study on dynamic behavior of circular cylindrical towers can be carried out to study the effect of depth of submergence, wall thickness
and slenderness ratio, and also effect on tower considering dynamic analysis for time history function of different soil condition and
by Goyal and Chopra accounting interaction effects of added hydrodynamic mass of surrounding and inside water in intake tower of
dam
Key words: Hydrodynamic mass, Depth of submergence, Reservoir, Time history analysis,
Evaluation of operational efficiency of urban road network using travel time ...eSAT Journals
This document evaluates the operational efficiency of an urban road network in Tiruchirappalli, India using travel time reliability measures. Traffic volume and travel times were collected using video data from 8-10 AM on various roads. Average travel times, 95th percentile travel times, and buffer time indexes were calculated to assess reliability. Non-motorized vehicles were found to most impact reliability on one road. A relationship between buffer time index and traffic volume was developed. Finally, a travel time model was created and validated based on length, speed, and volume.
Estimation of surface runoff in nallur amanikere watershed using scs cn methodeSAT Journals
Abstract
The development of watershed aims at productive utilization of all the available natural resources in the entire area extending from
ridge line to stream outlet. The per capita availability of land for cultivation has been decreasing over the years. Therefore, water and
the related land resources must be developed, utilized and managed in an integrated and comprehensive manner. Remote sensing and
GIS techniques are being increasingly used for planning, management and development of natural resources. The study area, Nallur
Amanikere watershed geographically lies between 110 38’ and 110 52’ N latitude and 760 30’ and 760 50’ E longitude with an area of
415.68 Sq. km. The thematic layers such as land use/land cover and soil maps were derived from remotely sensed data and overlayed
through ArcGIS software to assign the curve number on polygon wise. The daily rainfall data of six rain gauge stations in and around
the watershed (2001-2011) was used to estimate the daily runoff from the watershed using Soil Conservation Service - Curve Number
(SCS-CN) method. The runoff estimated from the SCS-CN model was then used to know the variation of runoff potential with different
land use/land cover and with different soil conditions.
Keywords: Watershed, Nallur watershed, Surface runoff, Rainfall-Runoff, SCS-CN, Remote Sensing, GIS.
Estimation of morphometric parameters and runoff using rs & gis techniqueseSAT Journals
This document summarizes a study that used remote sensing and GIS techniques to estimate morphometric parameters and runoff for the Yagachi catchment area in India over a 10-year period. Morphometric analysis was conducted to understand the hydrological response at the micro-watershed level. Daily runoff was estimated using the SCS curve number model. The results showed a positive correlation between rainfall and runoff. Land use/land cover changes between 2001-2010 were found to impact estimated runoff amounts. Remote sensing approaches provided an effective means to model runoff for this large, ungauged area.
Effect of variation of plastic hinge length on the results of non linear anal...eSAT Journals
Abstract The nonlinear Static procedure also well known as pushover analysis is method where in monotonically increasing loads are applied to the structure till the structure is unable to resist any further load. It is a popular tool for seismic performance evaluation of existing and new structures. In literature lot of research has been carried out on conventional pushover analysis and after knowing deficiency efforts have been made to improve it. But actual test results to verify the analytically obtained pushover results are rarely available. It has been found that some amount of variation is always expected to exist in seismic demand prediction of pushover analysis. Initial study is carried out by considering user defined hinge properties and default hinge length. Attempt is being made to assess the variation of pushover analysis results by considering user defined hinge properties and various hinge length formulations available in literature and results compared with experimentally obtained results based on test carried out on a G+2 storied RCC framed structure. For the present study two geometric models viz bare frame and rigid frame model is considered and it is found that the results of pushover analysis are very sensitive to geometric model and hinge length adopted. Keywords: Pushover analysis, Base shear, Displacement, hinge length, moment curvature analysis
Effect of use of recycled materials on indirect tensile strength of asphalt c...eSAT Journals
Abstract
Depletion of natural resources and aggregate quarries for the road construction is a serious problem to procure materials. Hence
recycling or reuse of material is beneficial. On emphasizing development in sustainable construction in the present era, recycling of
asphalt pavements is one of the effective and proven rehabilitation processes. For the laboratory investigations reclaimed asphalt
pavement (RAP) from NH-4 and crumb rubber modified binder (CRMB-55) was used. Foundry waste was used as a replacement to
conventional filler. Laboratory tests were conducted on asphalt concrete mixes with 30, 40, 50, and 60 percent replacement with RAP.
These test results were compared with conventional mixes and asphalt concrete mixes with complete binder extracted RAP
aggregates. Mix design was carried out by Marshall Method. The Marshall Tests indicated highest stability values for asphalt
concrete (AC) mixes with 60% RAP. The optimum binder content (OBC) decreased with increased in RAP in AC mixes. The Indirect
Tensile Strength (ITS) for AC mixes with RAP also was found to be higher when compared to conventional AC mixes at 300C.
Keywords: Reclaimed asphalt pavement, Foundry waste, Recycling, Marshall Stability, Indirect tensile strength.
We have compiled the most important slides from each speaker's presentation. This year’s compilation, available for free, captures the key insights and contributions shared during the DfMAy 2024 conference.
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTjpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon
reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been
referred to as the "New Great Game." This research centres on the power struggle, considering
geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil
politics, and conventional and nontraditional security are all explored and explained by the researcher.
Using Mackinder's Heartland, Spykman Rimland, and Hegemonic Stability theories, examines China's role
in Central Asia. This study adheres to the empirical epistemological method and has taken care of
objectivity. This study analyze primary and secondary research documents critically to elaborate role of
china’s geo economic outreach in central Asian countries and its future prospect. China is thriving in trade,
pipeline politics, and winning states, according to this study, thanks to important instruments like the
Shanghai Cooperation Organisation and the Belt and Road Economic Initiative. According to this study,
China is seeing significant success in commerce, pipeline politics, and gaining influence on other
governments. This success may be attributed to the effective utilisation of key tools such as the Shanghai
Cooperation Organisation and the Belt and Road Economic Initiative.
Low power architecture of logic gates using adiabatic techniquesnooriasukmaningtyas
The growing significance of portable systems to limit power consumption in ultra-large-scale-integration chips of very high density, has recently led to rapid and inventive progresses in low-power design. The most effective technique is adiabatic logic circuit design in energy-efficient hardware. This paper presents two adiabatic approaches for the design of low power circuits, modified positive feedback adiabatic logic (modified PFAL) and the other is direct current diode based positive feedback adiabatic logic (DC-DB PFAL). Logic gates are the preliminary components in any digital circuit design. By improving the performance of basic gates, one can improvise the whole system performance. In this paper proposed circuit design of the low power architecture of OR/NOR, AND/NAND, and XOR/XNOR gates are presented using the said approaches and their results are analyzed for powerdissipation, delay, power-delay-product and rise time and compared with the other adiabatic techniques along with the conventional complementary metal oxide semiconductor (CMOS) designs reported in the literature. It has been found that the designs with DC-DB PFAL technique outperform with the percentage improvement of 65% for NOR gate and 7% for NAND gate and 34% for XNOR gate over the modified PFAL techniques at 10 MHz respectively.
Introduction- e - waste – definition - sources of e-waste– hazardous substances in e-waste - effects of e-waste on environment and human health- need for e-waste management– e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal treatment methods of e- waste – mechanism of extraction of precious metal from leaching solution-global Scenario of E-waste – E-waste in India- case studies.
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
ACRP 4-09 Risk Assessment Method to Support Modification of Airfield Separat...
An optimization framework for cloud based data management model in smart grid
IJRET: International Journal of Research in Engineering and Technology eISSN: 2319-1163 | pISSN: 2321-7308
_______________________________________________________________________________________
Volume: 04 Issue: 02 | Feb-2015, Available @ http://www.ijret.org 751
AN OPTIMIZATION FRAMEWORK FOR CLOUD-BASED DATA
MANAGEMENT MODEL IN SMART GRID
P.M. Devie 1, S. Kalyani 2
1 Assistant Professor, EEE Department, Kamaraj College of Engineering and Technology, Tamil Nadu, India
2 Professor & Head, EEE Department, Kamaraj College of Engineering and Technology, Tamil Nadu, India
Abstract
Smart Grid (SG) is an intelligent electricity network that incorporates advanced information, control and communication technologies to increase the reliability of the existing power grid. With these advanced communication and information technologies, the smart grid deploys a complex information management model. This paper presents a cloud service based information management model and examines its issues and benefits from the perspective of both the smart grid domain and the cloud domain of the system model. The overall cost of data management includes storage, computation, upload, download and communication costs, all of which need to be optimized. This paper provides an optimization framework for reducing the overall cost of data management and integration in the smart grid model. The optimization model focuses on optimizing the size of the data items to be stored in the clouds under concern; the data items considered are customer behavior data and Phasor Measurement Unit (PMU) data in the smart grid environment. The management model usually comprises four domains, viz., the smart grid domain, cloud domain, broker domain and network domain. For simplicity of the model considered, the present work focuses mainly on the smart grid and cloud domains and on optimizing the costs related to these domains. The proposed model is optimized using various evolutionary optimization techniques, such as Genetic Algorithm (GA), Particle Swarm Optimization (PSO) and Differential Evolution (DE). The results of these techniques, when implemented for the proposed model, are compared in terms of performance measures, and the most suitable technique is identified for cloud-based data management.
Keywords: Smart grid, Information management, Optimization, Cloud Computing.
--------------------------------------------------------------------***----------------------------------------------------------------------
1. INTRODUCTION
Smart grid is the replacement of the aging power system by an intelligent power system incorporating Energy Technology (ET) and Information and Communication Technology (ICT). The implementation of smart grid technologies significantly increases the complexity of handling data in the information management model, which means that the overall cost of data manipulation in the data management model increases [2]. Cloud based data integration offers a higher level of scalability, an efficient data sharing mechanism, easy data handling for highly complex systems and better outsourcing for new entities. Hence, the design of an efficient model and the selection of a suitable simulation tool are required for overall cost reduction in data management.
Cloud computing, a next-generation distributed computing paradigm, plays a key role in information management by serving large data centers through cloud providers offering massive storage and computation
services [10]. Cloud consumers access public utility resources such as network bandwidth, storage and processing power. Virtualization technology can provision resources to consumers based on their specified requirements; an operating system packaged together with its applications is termed a virtual machine. Chaisiri
et al. [6] proposed an optimal cloud resource provisioning
algorithm to provision resources offered by multiple cloud
providers. Endo et al. [3] discussed the main challenges
which are inherent in the process of resource allocation for
distributed clouds in highly complex data management systems. Adam Hahn et al. [4] proposed the modeling of a secure testbed for smart grid networks, the design of a testbed architecture and the evaluation of application performance to enhance system reliability. Feng Guo et al. [5]
developed a platform for real-time simulation of smart grid networks and provided an effective approach to examining communication and distributed control related issues in
smart grids. The developed model can also be used to
evaluate reconfiguration strategies of communication
networks in smart grid environment. Fang et al. [7]
discussed the evolving opportunities, designing of working
models and applications of smart grid data management
using cloud computing. Recently, researchers have focused on key issues in handling the highly complex data of large smart grid systems using cloud based services such as dynamic resource allocation, distributed computing, virtualization technology and data sharing mechanisms.
The optimization of the smart grid model, which minimizes the data storage and computation cost, deals with equality constraints, namely the data redundancy and task execution constraints. Furthermore, the model can also be reformulated to incorporate inequality constraints, viz., the data splitting, data exclusive, data upload and inter-cloud data transfer constraints, for better fidelity, though this greatly increases the complexity of the model. This paper provides an optimization framework for reducing the overall cost of data management and integration in the smart grid
model. The proposed model is optimized using various
evolutionary optimization techniques such as Real Coded
Genetic Algorithm (RCGA), Particle Swarm Optimization
(PSO) and Differential Evolution (DE). The results of the various techniques, when implemented for the proposed model, are compared in terms of performance measures, and the most suitable technique is identified for efficient cloud based data integration.
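To make the workflow of these techniques concrete, the following is a minimal Differential Evolution loop (the rand/1/bin scheme) applied to a toy quadratic cost. The population size, scale factor F and crossover rate CR are illustrative choices only, not the settings used in this work.

```python
import random

random.seed(42)  # reproducible illustration

def differential_evolution(cost, bounds, pop_size=20, F=0.8, CR=0.9, generations=200):
    """Minimal DE (rand/1/bin): mutation, binomial crossover, greedy selection."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fitness = [cost(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # pick three distinct individuals different from the target
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = random.randrange(dim)  # guarantee at least one mutated gene
            trial = []
            for j in range(dim):
                if random.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    trial.append(min(max(v, lo), hi))  # clip to the search bounds
                else:
                    trial.append(pop[i][j])
            f_trial = cost(trial)
            if f_trial <= fitness[i]:  # greedy selection keeps the better vector
                pop[i], fitness[i] = trial, f_trial
    best = min(range(pop_size), key=lambda k: fitness[k])
    return pop[best], fitness[best]

# Toy example: minimize a shifted quadratic with optimum at (3, -1).
best_x, best_f = differential_evolution(
    lambda x: (x[0] - 3) ** 2 + (x[1] + 1) ** 2,
    bounds=[(-10, 10), (-10, 10)])
```

The same loop applies to the data management model once the cost function of eqn. (1) and its constraint handling are supplied in place of the toy quadratic.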
Nomenclature
OC                 Overall Cost
Di                 Set of Data Items
U                  Set of Users
Tk                 Set of Tasks
C                  Set of Clouds
DiS                Set of Data Items that need to be split
DiO                Set of Data Items that must be stored in one cloud
TDi(d)             Set of Tasks taking data item 'd' as input
TTk(t)             Set of Tasks taking the output of task 't' as input
SDi(d)             Size of Data Item 'd'
STk(t)             Size of the output of task 't'
SDi(d,c)           Size of the portion of Data Item 'd' stored in cloud 'c'
UTk(t)             Set of Users requesting the output of task 't'
PUP(d,c)           Unit Price of uploading data item 'd' to cloud 'c'
PDW(c,u)           Unit Price of downloading data from cloud 'c' to user 'u'
PS(c)              Unit Storage Price in cloud 'c'
PICT(c1,c2)        Unit Inter-Cloud Transfer Price from cloud 'c1' to cloud 'c2'
Cp(t,c)            Computation cost charged by cloud 'c' for task 't'
BUP(d,c)           Binary Variable indicating whether data item 'd' is uploaded to cloud 'c' for data manipulation
BTk(t,c)           Binary Variable indicating whether task 't' is performed in cloud 'c'
BICT(t1,t2,c1,c2)  Binary Variable indicating whether intermediate data is transferred from cloud 'c1' to cloud 'c2' because task 't2' takes the output of task 't1' as input
λ(d)               Storage Splitting ratio of data item 'd'
ρ(d)               Storage Redundancy ratio of data item 'd'
2. OVERVIEW OF SYSTEM MODEL
Basically, the system model comprises four domains: the smart grid domain, cloud domain, network domain and broker domain [1]. Our model focuses on the smart grid and cloud domains and optimizes the control variables (data items) related to these domains only. Fig. 1 shows the cloud based data management model under concern for our study and depicts the networks involved in detail.
Fig -1: Data Management Model of Smart Grid
1) Smart Grid Domain:
It is composed of seven sub domains as defined by the
National Institute of Standards and Technology [21]. In
this paper, we introduce three concepts related to data
management – data item, computational project and
user.
Data Item: It is a piece of information or object
generated by information sources in the smart grid
network such as Phasor Measurement Units [9],
which should be stored securely. Data items are
utilized by computational project for performing
computation.
Computational Project: It consists of one or more
tasks in which it uses some data items as input and
performs required computing operations.
User: A user is a consumer who is interested in accessing some of the stored data items.
2) Cloud Domain:
It consists of one or more clouds to provide storage and
computation services for data items of smart grid
network. Each cloud has its own pricing policy, such as on-demand, reserved and spot pricing, based on its requirements. For example, the Amazon Simple Storage Service is available in seven regions with different pricing policies [17].
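Such per-cloud pricing policies can be captured in a simple lookup table. The unit prices below are hypothetical placeholders for illustration, not actual Amazon S3 region prices.

```python
# Hypothetical unit prices per cloud, in $/GB; values are illustrative only.
PRICING = {
    "PC_I":    {"storage": 0.030, "upload": 0.000, "download": 0.090},
    "PC_II":   {"storage": 0.033, "upload": 0.000, "download": 0.120},
    "private": {"storage": 0.010, "upload": 0.000, "download": 0.000},
}

def storage_cost(cloud, size_gb):
    """Storage charge for `size_gb` gigabytes held in `cloud`."""
    return PRICING[cloud]["storage"] * size_gb
```

A candidate allocation is then priced by looking up each cloud it touches, which is how differing regional policies enter the overall cost.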
3) Network Domain:
In this domain, networking providers own the network
infrastructure & communications and provide data
transmission service between any two domains of smart
grid network.
4) Broker Domain:
It consists of one or more brokers who act as a mediator
between the smart grid and cloud domain for gathering
information from cloud consumers and assisting the
cloud providers in providing cloud services. The need
for cloud brokers becomes inevitable due to increasing
difficulty in satisfying the cloud consumer requirements
[8]. Cloud brokers differ from traditional brokers and
aim specifically to solve the smart grid data
management problems.
In our model, the smart grid, network and cloud domains are of primary concern; the broker domain data are updated at frequent intervals of time in the existing smart grid network. To facilitate
efficient data integration in the smart grid, our work focuses on optimizing the cost of resource allocation (data items) for data storage and computation. The objective function of the model combines the two sub-models of the smart grid, viz., data storage and computation. For processing the data items collected from smart grid customers using the data management model, we consider two Public Clouds (PC I and PC II) and one private cloud, with the pricing scheme of the US West (Northern California) region [17].
3. AN OPTIMIZATION MODEL FOR DATA
MANAGEMENT IN SMART GRID
The overall cost minimization problem for cloud-based data
storage and computation is formulated as follows:
Objective Function:
Min (Overall Cost) OC =
    Σ_{d∈Di} Σ_{c∈C(d)} SDi(d,c) · PS(c)                                           (storage)
  + Σ_{d∈Di} Σ_{c∈C(d)} SDi(d,c) · PUP(d,c)                                        (data upload)
  + Σ_{d∈Di} Σ_{c∈C(d)} Σ_{u∈U(d)} SDi(d,c) · PDW(c,u)                             (data download)
  + Σ_{d∈Di} Σ_{c∈C(d)} BUP(d,c) · SDi(d) · PUP(d,c)                               (upload for computation)
  + Σ_{t∈Tk} Σ_{c∈C(t)} BTk(t,c) · Cp(t,c)                                         (computation)
  + Σ_{t1∈Tk} Σ_{t2∈TTk(t1)} Σ_{c1∈C(t1)} Σ_{c2∈C(t2)} BICT(t1,t2,c1,c2) · STk(t1) · PICT(c1,c2)   (inter-cloud transfer)
  + Σ_{t∈Tk} Σ_{c∈C(t)} Σ_{u∈UTk(t)} BTk(t,c) · STk(t) · PDW(c,u)                  (output download)
(1)
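As a concrete reading of eqn. (1), the sketch below evaluates the overall cost of one candidate allocation. Only the storage, data upload, data download and computation terms are shown (the remaining terms follow the same pattern), the per-user download price is collapsed to a per-cloud price for brevity, and all sizes and prices are invented for illustration.

```python
def overall_cost(S, B_tk, P_S, P_UP, P_DW, Cp, users_of):
    """Partial evaluation of eqn. (1): storage + upload + download + computation.

    S[d][c]     : portion of data item d stored in cloud c (GB)
    B_tk[t]     : cloud chosen for task t (one cloud per task, per eqn. (3))
    users_of[d] : users who download data item d
    """
    cost = 0.0
    for d, portions in S.items():
        for c, size in portions.items():
            cost += size * P_S[c]              # storage term: SDi(d,c) * PS(c)
            cost += size * P_UP[c]             # upload term:  SDi(d,c) * PUP(d,c)
            for _u in users_of.get(d, []):
                cost += size * P_DW[c]         # download-to-user term
    for t, c in B_tk.items():
        cost += Cp[(t, c)]                     # computation term: BTk(t,c) * Cp(t,c)
    return cost

# Illustrative numbers (hypothetical): one PMU data item, redundancy ratio 2,
# so two full copies are stored, one in each of two clouds.
S = {"pmu": {"c1": 2.0, "c2": 2.0}}
cost = overall_cost(
    S, B_tk={"t1": "c1"},
    P_S={"c1": 0.03, "c2": 0.01}, P_UP={"c1": 0.0, "c2": 0.0},
    P_DW={"c1": 0.09, "c2": 0.12}, Cp={("t1", "c1"): 0.5},
    users_of={"pmu": ["u1"]})
```

This function is the natural fitness function for the GA, PSO and DE runs, with constraint handling added on top.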
Subject to
* Equality Constraints
1. Data Redundancy Constraint
Σ_{c∈C(d)} S(d, c) = ρ(d) · S(d),  ∀ d∈Di
(2)
2. Task Execution Constraint
Σ_{c∈C(t)} BTk(t, c) = 1,  ∀ t∈Tk
(3)
* Inequality Constraints
3. Data Splitting Constraint
S(d, c) ≥ S(d)/φ(d),  ∀ c∈C(d), d∈Di^S (splittable data items)
(4)
4. Data Exclusive Constraint
S(d, c)/S(d) ∈ {0, 1},  ∀ c∈C(d), d∈Di^O (exclusively stored data items);
S(d, c) ∈ [0, ∞),  ∀ c∈C(d), d∈Di
(5)
4. IJRET: International Journal of Research in Engineering and Technology eISSN: 2319-1163 | pISSN: 2321-7308
_______________________________________________________________________________________
Volume: 04 Issue: 02 | Feb-2015, Available @ http://www.ijret.org 754
5. Data Upload Constraint
BUP(d, c) ≤ Σ_{t∈Tk(d)} BTk(t, c) ≤ |Tk(d)| · BUP(d, c),  ∀ c∈C(d), d∈Di
(6)
where Tk(d) is the set of tasks that take data item d as input.
6. Inter Cloud Data Transfer Constraint
(1/2)·(BTk(t1, c1) + BTk(t2, c2)) − 1/2 ≤ BICT(t1, t2, c1, c2)
≤ (1/3)·(BTk(t1, c1) + BTk(t2, c2)) + 1/3,
∀ c2∈C(t2), c1∈C(t1), t2∈T(t1), t1∈Tk
(7)
where T(t1) is the set of tasks that take the output of task t1 as input.
In this model, the fitness function, Eqn. (1), minimizes the
overall cost of smart grid data management (data upload, data
download, communication, inter-cloud data transfer, transfer-in
and transfer-out costs).
Elucidation of Constraints:
Eqn. (2) represents the data redundancy constraint,
which involves the storage redundancy ratio ρ(d) of a
data item, assumed here to be 2. The total size of the
copies of a data item stored in the cloud domain is the
product of the size of the data item and its storage
redundancy ratio.
Eqn. (3) represents the task execution constraint, i.e.,
each task is executed in exactly one of the clouds.
Eqn. (4) represents the data splitting constraint: for a
splittable data item, the fraction S(d, c)/S(d) stored in
any cloud must not be less than the reciprocal of the
storage splitting ratio φ(d).
Eqn. (5) represents the data exclusive constraint,
which requires each exclusively stored data item to be
stored completely in one of the clouds.
Eqn. (6) represents the data upload constraint, which
ensures that BUP(d, c) is '1' only if BTk(t, c) is one for
some task t that uses data item d.
Eqn. (7) represents the inter-cloud data transfer
constraint, which ensures that BICT(t1, t2, c1, c2) is '1'
if and only if BTk(t1, c1) = BTk(t2, c2) = 1; i.e., if task
't1' is executed in cloud 'c1', task 't2' is executed in
cloud 'c2', and 't2' takes the output of 't1' as input,
then c1 will transfer the output of 't1' to 'c2'.
In this paper, the optimized cost of data management is
evaluated by incorporating the equality constraints in the
fitness function. The use of the inequality constraints given by
Eqns. (4)–(7) increases the complexity of the proposed model.
However, including the inequality constraints in the
optimization algorithm does not have a significant impact on
the solution variables. Hence, for simplicity of mathematical
analysis, the inequality constraints are neglected in the
present work.
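The incorporation of the equality constraints in the fitness evaluation can be sketched as follows. This is an illustrative Python sketch, not the MATLAB code used in the paper; the array names, the restriction to storage and computation costs, and the quadratic penalty weight are all assumptions made for the example.

```python
import numpy as np

# Illustrative sketch: the overall-cost fitness of Eqn. (1) restricted to
# storage and computation costs, with the redundancy constraint of Eqn. (2)
# folded in as a quadratic penalty term (an assumed penalty formulation).

def overall_cost(S_storage, S_comp, p_store, p_comp, rho, item_size,
                 penalty=1e3):
    """S_storage[d, c]: GB of data item d stored in cloud c.
    S_comp[t, c]: GB of task t's data processed in cloud c.
    p_store[c], p_comp[c]: unit storage / computation prices of cloud c.
    rho[d]: storage redundancy ratio of item d (set to 2 in the paper).
    item_size[d]: total size S(d) of data item d."""
    cost = (S_storage @ p_store).sum() + (S_comp @ p_comp).sum()
    # Eqn. (2): sum_c S(d, c) must equal rho(d) * S(d) for every item d;
    # any violation is penalized quadratically.
    gap = S_storage.sum(axis=1) - rho * item_size
    return cost + penalty * np.sum(gap ** 2)
```

An evolutionary algorithm then minimizes `overall_cost` over the entries of `S_storage` and `S_comp`, with the equality constraints enforced through the penalty rather than explicitly.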
4. OPTIMIZATION TECHNIQUES USED
In this paper, the proposed optimization model is evaluated
using various techniques such as Real Coded Genetic
Algorithm (RCGA), Particle Swarm Optimization (PSO)
and Differential Evolution (DE) for minimizing the overall
cost of data integration (Eqn. 1) in smart grid network.
4.1 Real Coded Genetic Algorithm:
Genetic Algorithm (GA) belongs to the class of randomized
heuristic search techniques. GA is a general-purpose search
procedure that uses principles inspired by natural genetic
populations to evolve solutions [16]. The traditional GA uses
binary representation of strings which is not preferred in
continuous search space domain. Real Coded Genetic
algorithm (RCGA) gives a straightforward representation of
chromosomes by directly coding all variables. The
chromosome X is represented as X = {SDi, STk}, where SDi
and STk denote the sizes of data items stored in each cloud
for storage and computation, respectively.
Unlike traditional binary coded GA, decision variables can
be directly used to compute the fitness value. The RCGA
uses selection, crossover and mutation operators to
reproduce offspring for the existing population [12]. The
RCGA–Smart grid model incorporates Roulette Wheel
selection to decide chromosomes for next generation. The
selected chromosomes are placed in a mating pool for
crossover and mutation operations. The crossover operation
enhances the global search property of GA and mutation
operation prevents the permanent loss of any gene value. In
this work, Arithmetic Crossover and Polynomial Mutation,
described in Ref. [13], have been used to perform crossover
and mutation, respectively. The detailed procedure of RCGA
applied for the problem of SG data management is shown in
the form of flowchart in Fig. 2.
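A minimal Python sketch of the RCGA loop described above (roulette wheel selection, arithmetic crossover, polynomial mutation) is given below; it is not the authors' MATLAB implementation, and the population size, mutation parameters, the added elitism step, and the toy fitness function in the test are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def roulette_select(pop, cost):
    # Minimization: lower cost gets a larger slice of the wheel.
    weight = cost.max() - cost + 1e-12
    idx = rng.choice(len(pop), size=len(pop), p=weight / weight.sum())
    return pop[idx]

def arithmetic_crossover(p1, p2):
    a = rng.random()  # each child is a convex combination of the parents
    return a * p1 + (1 - a) * p2, a * p2 + (1 - a) * p1

def polynomial_mutation(x, lo, hi, eta=20.0, pm=0.1):
    y = x.copy()
    for j in range(len(y)):
        if rng.random() < pm:
            u = rng.random()
            delta = ((2 * u) ** (1 / (eta + 1)) - 1 if u < 0.5
                     else 1 - (2 * (1 - u)) ** (1 / (eta + 1)))
            y[j] = np.clip(y[j] + delta * (hi[j] - lo[j]), lo[j], hi[j])
    return y

def rcga(fitness_fn, lo, hi, pop_size=20, generations=50):
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(generations):
        cost = np.array([fitness_fn(x) for x in pop])
        elite = pop[cost.argmin()].copy()     # elitism: keep the best chromosome
        mating = roulette_select(pop, cost)   # mating pool
        children = [elite]
        for i in range(0, pop_size - 1, 2):
            c1, c2 = arithmetic_crossover(mating[i], mating[i + 1])
            children += [polynomial_mutation(c1, lo, hi),
                         polynomial_mutation(c2, lo, hi)]
        pop = np.array(children[:pop_size])
    cost = np.array([fitness_fn(x) for x in pop])
    return pop[cost.argmin()], cost.min()
```

For the SG model, `fitness_fn` would be the overall-cost function of Eqn. (1) evaluated over the chromosome X = {SDi, STk}, with `lo` and `hi` set to the admissible data item sizes.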
4.2 PSO Algorithm:
Particle Swarm Optimization (PSO) is an evolutionary
computation technique developed by Kennedy and Eberhart
in 1995, inspired by social behavior of birds flocking in a
multidimensional space. The system is initialized with a
population of random solutions and searches for optima by
updating the generations [19]. In PSO, each candidate solution
is called a particle. Every particle has a fitness value,
evaluated by the fitness function to be optimized, and a
velocity that directs its flight [20]. To
discover the optimal solution, each and every particle is
updated by two ‘best’ values. The first one called Pbest is
the best position particle has achieved so far. The second
best value is the overall best value obtained among all
particles in the population, called as Gbest. After finding
these two best values, each particle changes its velocity and
position according to the cognition part (Pbest) and social
part (Gbest). The update equations for particle’s velocity
and position are given by Eqns. (8) and (9).
V_id^(k+1) = w · V_id^k + c1 · rand1 · (Pbest_id − X_id^k) + c2 · rand2 · (Gbest_d − X_id^k)
(8)
X_id^(k+1) = X_id^k + V_id^(k+1)
(9)
w = w_max − ((w_max − w_min)/Max. Iterations) · Current Iteration
(10)
where w is the inertia weight calculated by Eqn. (10), Vid is
the particle velocity, Xid is the current particle position
(solution), rand1 and rand2 are random numbers in (0, 1), and
c1 and c2 are the cognition and social learning factors,
respectively.
PSO Algorithm for SG Data Management
1) Randomly initialize a population of particles with
positions Xid and velocities Vid, where i indexes the
particle and d the dimension.
2) Set the parameters of PSO: C1 = C2 = 2, Wmax = 0.9,
Wmin = 0.5.
3) Evaluate the fitness of each particle in the population
for the SG model built with each particle's position
(SDi and STk).
4) Compare the current position with particle’s previous
best experience, Pbest, in terms of fitness value and
hence update Pbest for each particle in the
population.
5) After updating Pbest, choose the best value among all
particles' Pbest values and call it the global best,
Gbest.
6) Update the particle's velocity using Eqn. (8),
clamping it to the minimum (Vmin) or maximum (Vmax)
limit if either is violated.
7) Move the particle to its next position using Eqn. (9),
bounded to its upper and lower limits.
8) Stop the algorithm and print the near-optimal solution
(final Gbest) if the termination criterion or maximum
number of iterations is reached; otherwise loop to
Step 3 and continue the process.
Fig -2: Flowchart for Real Coded Genetic Algorithm
(RCGA)
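The PSO steps above, with the velocity and position updates of Eqns. (8) and (9) and the inertia weight schedule of Eqn. (10), can be sketched in Python as follows. C1 = C2 = 2, Wmax = 0.9 and Wmin = 0.5 follow the settings in Step 2; the velocity clamp at 20% of the search range and the toy fitness function in the test are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(fitness_fn, lo, hi, n_particles=20, max_iter=100,
        c1=2.0, c2=2.0, w_max=0.9, w_min=0.5):
    dim = len(lo)
    X = rng.uniform(lo, hi, size=(n_particles, dim))
    v_max = 0.2 * (hi - lo)                     # assumed velocity clamp
    V = rng.uniform(-v_max, v_max, size=(n_particles, dim))
    pbest = X.copy()                            # best position of each particle
    pbest_f = np.array([fitness_fn(x) for x in X])
    g = pbest_f.argmin()
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]  # best over the whole swarm
    for k in range(max_iter):
        w = w_max - (w_max - w_min) / max_iter * k            # Eqn. (10)
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)  # Eqn. (8)
        V = np.clip(V, -v_max, v_max)           # clamp velocity (Step 6)
        X = np.clip(X + V, lo, hi)              # Eqn. (9), bounded (Step 7)
        f = np.array([fitness_fn(x) for x in X])
        improved = f < pbest_f                  # Step 4: update Pbest
        pbest[improved] = X[improved]
        pbest_f[improved] = f[improved]
        g = pbest_f.argmin()                    # Step 5: update Gbest
        if pbest_f[g] < gbest_f:
            gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    return gbest, gbest_f
```

For the SG data management model, each particle's position encodes (SDi, STk) and `fitness_fn` is the overall-cost function of Eqn. (1).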
4.3 Differential Evolution Algorithm:
Differential Evolution, one of the evolutionary optimization
techniques, was introduced by Storn and Price in 1995 [14].
DE is highly effective and suitable for high-dimensionality
problems involving high nonlinearity and multiple
optima. Like any other evolutionary algorithm, DE also
starts with random initialization of population vector with
uniform distribution in search space. DE has three
operations–mutation, crossover and selection [15].
The crucial idea behind DE is that crossover and mutation
are used for generating trial vectors [18]. Mutation operation
generates new parameter vector by adding the weighted
difference between two population vectors to a third vector.
Crossover operation mixes the mutated vector with target
vector to yield a trial vector. Based on the computation of
trial vectors, selection operation decides the survival of new
vectors to the next generation. There are several strategies
that can be used in the DE algorithm. In this paper, we have
used the commonly adopted strategy denoted 'DE/rand/1/bin'
[14]. In this notation, 'rand' indicates that the base vector
of the mutant is chosen at random, '1' is the number of
difference vectors used, and 'bin' denotes the binomial
crossover scheme. The
DE algorithm used in this work for SG Data Management is
briefed herein.
DE Algorithm for SG Data Management
1) Randomly initialize a population of individuals Xid
denoting the ith individual in d dimension.
2) Specify the DE parameters; difference vector scale
factor F = 0.05, minimum and maximum crossover
probability CRmin= 0.1 and CRmax= 0.9.
3) Evaluate the fitness value of each individual in the
population. The fitness value is the overall cost of
proposed data management of smart grid.
4) Generate the mutant vector for each individual Xi
according to Eqn. (11):
Vi = Xs1 + F · (Xs2 − Xs3)
(11)
The indices s1, s2 and s3 are randomly chosen from the
population. It is important to ensure that these
indices are different from each other and also from
the running index i.
5) Perform crossover to yield the trial vector u by
combining the mutant vector v with target vector x
using Eqn. (12).
Uij = Vij, if rand(j) ≤ CR or j = randn(i)
Uij = Xij, otherwise
(12)
where rand(j) ∈ [0, 1] is the jth evaluation of a
uniform random number generator and randn(i) ∈ {1, 2, .
. . , D} is a randomly chosen index ensuring that Ui
gets at least one element from the mutant vector Vi. CR
is the time-varying crossover probability constant
determined using Eqn. (13):
CR = CRmin + ((CRmax − CRmin)/Max. Iterations) · Iteration
(13)
6) Perform selection operation based on fitness value
and generate new population. If the trial vector ui
yields a better fitness, then xi is replaced by ui, else xi
is retained at its old value.
7) If the stopping criterion (maximum iterations) is
reached, stop and print the optimized parameter set
(SDi and STk); else increase the iteration count and
loop to Step 3.
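The DE/rand/1/bin loop above can be sketched in Python as follows. The greedy selection of Step 6 and the time-varying CR of Eqn. (13) are implemented directly; the default F = 0.5 in the sketch is an assumption made so the toy test converges quickly (the paper sets F = 0.05), as are the bound handling by clipping and the toy fitness function.

```python
import numpy as np

rng = np.random.default_rng(2)

def de_rand_1_bin(fitness_fn, lo, hi, pop_size=20, max_iter=100,
                  F=0.5, cr_min=0.1, cr_max=0.9):
    dim = len(lo)
    X = rng.uniform(lo, hi, size=(pop_size, dim))
    f = np.array([fitness_fn(x) for x in X])
    for it in range(max_iter):
        CR = cr_min + (cr_max - cr_min) / max_iter * it       # Eqn. (13)
        for i in range(pop_size):
            # Step 4: three distinct indices, all different from i
            s1, s2, s3 = rng.choice([j for j in range(pop_size) if j != i],
                                    size=3, replace=False)
            v = X[s1] + F * (X[s2] - X[s3])                   # Eqn. (11): mutant
            j_rand = rng.integers(dim)                        # forced crossover index
            cross = rng.random(dim) <= CR                     # Eqn. (12): binomial
            cross[j_rand] = True
            u = np.clip(np.where(cross, v, X[i]), lo, hi)     # trial vector
            fu = fitness_fn(u)
            if fu <= f[i]:                                    # Step 6: greedy selection
                X[i], f[i] = u, fu
    b = f.argmin()
    return X[b], f[b]
```

As with the other algorithms, applying this to the SG model amounts to letting each individual encode (SDi, STk) and using the overall cost of Eqn. (1) as `fitness_fn`.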
5. RESULTS & DISCUSSIONS
The program code for optimizing the cost of data
management using the various evolutionary algorithms was
implemented in the MATLAB environment on a processor with
the configuration specified below.
Processor Configuration:
The SG data integration model was built in the MATLAB
environment using MATLAB version 8.1. The optimization
framework was run on a PC with a 2.5 GHz Core i3 CPU and
4 GB RAM.
The simulation is performed by considering one large
electric utility, two public clouds (PC-I and PC-II), and two
types of data item (customer behavior data and PMU data).
This electric utility maintains a single private cloud by itself.
The number of residential areas served by the utility was
varied from 50 to 500 in increments of 50. The number of
households in each residential area was set to G(a, b, c, d),
where 'G' denotes a Gaussian variable with mean 'a',
variance 'b', and range [c, d].
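One plausible realization of the G(a, b, c, d) notation, assumed here since the paper does not state how out-of-range draws are handled, is rejection sampling of a Gaussian until the value falls inside [c, d]:

```python
import numpy as np

rng = np.random.default_rng(42)

def g_sample(a, b, c, d, size=1):
    """Sample from G(a, b, c, d): Gaussian with mean a and variance b,
    restricted to the range [c, d] by rejection (an assumed interpretation)."""
    x = rng.normal(a, np.sqrt(b), size=size)
    bad = (x < c) | (x > d)
    while bad.any():
        x[bad] = rng.normal(a, np.sqrt(b), size=int(bad.sum()))
        bad = (x < c) | (x > d)
    return x
```

The same helper can generate, for example, the communication prices G(0.1, 0.05, 0.05, 0.15) used later in this section.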
Each household is equipped with one smart meter. It is
assumed that each smart meter generates one megabyte of
information per day at a sampling interval of 15 minutes.
The data aggregated from the smart meters in each
residential area is represented as one data item and
categorized as the customer behavior data. These data items
can be stored in public and/or private clouds. The sensitivity
of the customer demand (behavior) data forces some of the
data items to be stored only in the private clouds, and the
storage splitting ratio and the redundancy ratio are both set
to 2.
It is further assumed that all the data items are stored for a
period of six months and that the storage cost is evaluated
for one day. The unit storage price for one gigabyte per month
of PC-I is $0.125 and that of PC-II is $0.14. The unit storage
price for one gigabyte per month of a private cloud is
uniformly distributed between $0.15 and $0.2.
The unit upload (download) price is equal to the sum of the
transfer-in (transfer-out) price charged by the clouds and the
communication price charged by networking providers. The
average transfer-in (transfer-out) price of PC-I and PC-II is
$0 ($0.12) per gigabyte. The average transfer-in (transfer-
out) price of the private clouds is assumed to be $0.2 ($0.2)
per gigabyte. The communication price was assumed to be
G (0.1, 0.05, 0.05, 0.15).
A computational project that analyzes electricity-saving
products once a month (every 30 days) is also evaluated. The
settings of clouds, residential areas, upload prices, and
download prices are the same as in the storage pricing of data items.
In addition, the price of PC-I for high memory on-demand
computational task is $2.28 per hour, and that of PC-II is
$2.00 per hour. The price of private clouds for
computational task is $3.00 per hour. The unit inter-cloud
transfer price between clouds c1 and c2 is equal to the sum
of the transfer-out price of c1, the transfer-in price of c2, and
the communication price between c1 and c2.
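The pricing rules above compose as in the following worked sketch (hypothetical helper names; the communication price is fixed at its mean of $0.10/GB for illustration):

```python
# Prices in $/GB, taken from the scenario described above.
transfer_in  = {"PC-I": 0.0,  "PC-II": 0.0,  "private": 0.2}
transfer_out = {"PC-I": 0.12, "PC-II": 0.12, "private": 0.2}
comm_price = 0.10  # mean of the communication price G(0.1, 0.05, 0.05, 0.15)

def unit_upload_price(cloud):
    # Unit upload price = transfer-in price of the destination cloud
    # plus the communication price charged by the networking provider.
    return transfer_in[cloud] + comm_price

def unit_intercloud_price(c1, c2):
    # Unit inter-cloud transfer price = transfer-out of c1
    # + transfer-in of c2 + communication price between c1 and c2.
    return transfer_out[c1] + transfer_in[c2] + comm_price
```

Uploading one gigabyte to the private cloud thus costs about $0.30, and moving one gigabyte of task output from PC-I to the private cloud about $0.42.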
The proposed optimization model is framed under the above-
specified scenario and the optimized cost of data
management is evaluated using different evolutionary
algorithms, namely Real Coded Genetic Algorithm, Particle
Swarm Optimization and Differential Evolution, whose
detailed algorithms were discussed in the previous
section. The performance of each evolutionary optimization
technique when applied for the proposed data integration
model is evaluated and depicted in Table 1. The consistency
of each algorithm is measured in terms of the statistical
parameters mean and standard deviation. Further, the
near-global-optimal sizing of data items for storage and
computation and the total overall cost of storage (fitness
function) obtained by the various algorithms are compared
in Table 1.
On comparing the results of the various techniques, the
Particle Swarm Optimization (PSO) technique is shown to
produce better performance in terms of minimum fitness
function value and convergence time per trial. Fig. 3 shows
the convergence characteristics of the different evolutionary
algorithms. It is also seen from Fig. 3 that the PSO algorithm
outperforms the other evolutionary algorithms, as it converges
more quickly and reaches a markedly lower fitness function
value.
Fig -3: Convergence Curves for Different Optimization
Techniques
Table -1: Performance Evaluation of Various Optimization Tools

Performance Parameters              | RCGA                | PSO                  | DE
------------------------------------|---------------------|----------------------|--------------------
Storage size of each cloud (GB)     | 8.510, 6.490, 0.000 | 10.000, 5.000, 0.000 | 9.902, 5.081, 0.017
Computation size of each cloud (GB) | 0.063, 0.064, 0.073 | 0.100, 0.100, 0.000  | 0.063, 0.098, 0.039
Fitness function ($/GB)             | 84.7464             | 84.6531              | 84.6868
Mean ($/GB)                         | 85.9553             | 84.6534              | 84.8491
Standard deviation                  | 0.79074             | 0.0033834            | 0.10155
Convergence time per trial (sec)    | 0.16365             | 0.05905              | 0.33835
No. of iterations for convergence   | 33                  | 4                    | 1
6. CONCLUSION
The proposed work presents an optimization model for
cloud-based data management of a smart grid network. The
model provides an optimization framework that incorporates
data items such as customer behavior data and PMU data of
the smart grid network. This work also aims to identify an
appropriate evolutionary technique for overall cost
minimization of data integration. Analysis of the simulation
results shows that the PSO algorithm outperforms the other
evolutionary algorithms and is therefore identified as the most
suitable optimization tool. The PSO simulation of the proposed
model also takes comparatively less time, which makes it
better suited to online computation in a smart grid network.
The model can be extended further to include detailed data
patterns of the smart grid, such as customer account data and
electric vehicle charging data, in a deregulated environment.
REFERENCES
[1] Xi Fang, Dejun Yang and Guoliang Xue, "Evolving
Smart Grid Information Management Cloudward: A
Cloud Optimization Perspective", IEEE Transactions
on Smart Grid, Vol. 4, No. 1, March 2013, pp. 111-119.
[2] Zhong Fan, Parag Kulkarni, et al., "Smart Grid
Communications: Overview of Research Challenges,
Solutions, and Standardization Activities", IEEE
Communication Surveys & Tutorials, Vol. 15, No. 1,
2013, pp. 21-38.
[3] P. Endo, A. de Almeida Palhares, N. Pereira, G.
Goncalves, D. Sadok, J. Kelner, B. Melander, and J.
Mangs. Resource allocation for distributed cloud:
concepts and research challenges. IEEE Network,
pages 42–46, 2011.
[4] Adam Hahn, Aditya Ashok, Siddharth Sridhar and
Manimaran Govindarasu, "Cyber-Physical Security
Testbeds: Architecture, Application, and Evaluation
for Smart Grid", IEEE Transactions on Smart Grid,
Vol. 4, No. 2, June 2013, pp. 847-855.
[5] S. Chaisiri, B.-S. Lee, and D. Niyato, “Optimization
of resource provisioning cost in cloud computing,”
IEEE Trans. Services Comput., vol.5, no. 2, pp. 164–
177, Apr.–Jun., 2012.
[6] X. Fang, S. Misra, G. Xue, and D. Yang, “Managing
smart grid information in the cloud: Opportunities,
model, and applications,” IEEE Netw., vol. 26, no. 4,
pp. 32–38, Jul.–Aug., 2012.
[7] S. Sakr, A. Liu, D. M. Batista, and M. Alomari,
"A survey of large scale data management approaches
in cloud environments," IEEE Commun. Surveys
Tuts., vol. 13, no. 3, pp. 311–336, 2011.
[8] X. Fang, S. Misra, G. Xue, and D. Yang. Smart grid -
the new and improved power grid: A survey. IEEE
Communications surveys and tutorial (Digital Object
Identifier: 10.1109/SURV.2011.101911.00087).
[9] Simmhan, Y.; Aman, S.; Kumbhare, A.; Rongyang
Liu; Stevens, S.; Qunzhi Zhou; Prasanna, V., "Cloud-
Based Software Platform for Big Data Analytics in
Smart Grids," Computing in Science & Engineering,
vol. 15, no. 4, pp. 38–47, July-Aug. 2013.
[10] Wu C, Tzeng G, Goo Y, Fang W. “A real-valued
genetic algorithm to optimize the parameters of
support vector machine for predicting bankruptcy”,
Expert System Applications 2007; 32(2):397–408.
[11] Goldberg D. “Genetic algorithms in search
optimization and machine learning” Addison-Wesley;
1989.
[12] Deb K. “Multi-objective optimization using
evolutionary algorithms” Wiley; 2001.
[13] Storn R, Price K., "Differential evolution – a simple
and efficient heuristic for global optimization over
continuous spaces", Journal of Global Optimization,
11(4), pp. 341–359, 1997.
[14] Arya L, Choube S, Arya R. “Differential evolution
applied for reliability optimization of radial
distribution systems”, International Journal of Electric
Power and Energy Systems, Volume 33, Issue 2, pp.
271–277,2011.
[15] Kalyani S and Swarup K S, “Pattern Analysis and
Classification for Security Evaluation in Power
Networks”, International Journal of Electric Power
and Energy Systems, volume 44 ,pp. 547–560,2013.
[16] Amazon Simple Storage Service [Online]. Available:
http://aws. amazon.com/s3/.
[17] Zhou S, Wu L, Yuan X, Tan W. “Parameters
selection of SVM for function approximation based
on differential evolution”, Proceedings of the
international conference on intelligent systems and
knowledge engineering (ISKE 2007), Chengdu,
China; 2007.
[18] Kennedy J, Eberhart R, et al. “Particle swarm
optimization” Proceedings of IEEE international
conference on neural networks, vol. 4. Piscataway,
NJ: IEEE; pp. 1942–1948, 1995.
[19] Mostafa H, El-Sharkawy M, Emary A, Yassin K.
“Design and allocation of power system stabilizers
using the particle swarm optimization technique for
an interconnected power system” International
Journal of Electric Power and Energy Systems,
Volume 34, Issue 1,pp. 57-65, January 2012.
[20] National Institute of Standards and Technology, NIST
Framework and Roadmap for Smart Grid
Interoperability Standards, Release 1.0, 2010.
BIOGRAPHIES
P. M. Devie received her bachelor’s degree
in Electrical and Electronics Engineering
from KLN College of Information
Technology, Madurai, in the year 2012.
She completed her Masters Degree in
Power Systems Engineering from Kamaraj
College of Engineering & Technology,
Virudhunagar, in the year 2014. She is currently working as
an Assistant Professor at Kamaraj College of Engineering
& Technology, Virudhunagar, and is pursuing a Ph.D. at Anna
University, Chennai, on a part-time basis. Her research interests
are power system stability, data mining techniques for power
system classification problems, smart grid technologies and
its applications.
S. Kalyani is currently working as
Professor and Head in the Department of
Electrical & Electronics Engineering at
Kamaraj College of Engineering &
Technology, Virudhunagar. She obtained
her PhD degree from Indian Institute of
Technology Madras, Chennai in year
2011. She received her Master’s Degree
in Power Systems Engineering from Thiagarajar College of
Engineering, Madurai in December 2002 and Bachelors
Degree in Electrical and Electronics Engineering from
Alagappa Chettiar College of Engineering Karaikudi, in the
year 2000. She has about 12 years of teaching experience at
various levels. Her research interests are power system
stability, Pattern Recognition, Machine Learning, Soft
Computing Techniques to Power System studies. She is a
Member of IEEE, member of IE (India) and also life
member of ISTE.