This document proposes a flexible phasor data concentrator system called FIPS to address issues with existing synchrophasor systems. FIPS would receive, store, and share synchrophasor data efficiently using open-source software technologies. It describes a flat file database to store synchrophasor data in an ordered fashion for fast retrieval. FIPS would provide a robust foundation for applications using real-time and stored synchrophasor data.
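To make the flat-file idea concrete, here is a minimal sketch of an append-only, time-ordered record store with binary-search range retrieval; the record layout, field names, and the `PhasorStore` class are illustrative assumptions, not the FIPS design.

```python
import struct, bisect, os

# Minimal sketch: fixed-width records appended in timestamp order, so a
# time-range query is two binary searches plus a sequential read.
# Record layout (an assumption, not the FIPS format):
#   double timestamp, float magnitude, float phase angle
RECORD = struct.Struct("<dff")

class PhasorStore:
    def __init__(self, path):
        self.path = path
        open(path, "ab").close()          # create the file if missing

    def append(self, ts, magnitude, angle):
        # Caller is expected to append in non-decreasing timestamp order.
        with open(self.path, "ab") as f:
            f.write(RECORD.pack(ts, magnitude, angle))

    def query(self, t0, t1):
        with open(self.path, "rb") as f:
            data = f.read()
        n = len(data) // RECORD.size
        ts = [RECORD.unpack_from(data, i * RECORD.size)[0] for i in range(n)]
        lo, hi = bisect.bisect_left(ts, t0), bisect.bisect_right(ts, t1)
        return [RECORD.unpack_from(data, i * RECORD.size) for i in range(lo, hi)]

if os.path.exists("phasors.dat"):
    os.remove("phasors.dat")              # keep the demo idempotent
store = PhasorStore("phasors.dat")
store.append(1.0, 0.98, 12.5)
store.append(2.0, 0.97, 13.1)
print(store.query(0.5, 1.5))              # -> [(1.0, 0.98..., 12.5)]
```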
This document provides a survey of file replication techniques used in grid systems. It begins with an introduction to grid systems and discusses their use of replication to improve response times and reduce bandwidth consumption. It then categorizes replication techniques as static or dynamic and describes challenges of replication including maintaining consistency and overhead. The document surveys various replication strategies for different grid topologies like peer-to-peer, tree and hybrid. It evaluates strategies based on factors like access latency, bandwidth consumption and fault tolerance. Specific replication techniques are discussed for peer-to-peer architectures aimed at availability, placement strategies and balancing workloads.
This document discusses the challenges of collecting, storing, and analyzing large volumes of internet measurement data. It examines issues such as distributed and resilient data collection, handling multi-timescale and heterogeneous data from various sources, and developing standardized tools and formats. The paper proposes the "datapository" - an internet data repository designed to address these challenges through a collaborative framework for data sharing, storage, and analysis tools. The goal is to help both network operators and researchers more effectively harness the wealth of data available.
International Journal of Engineering Research and Development (IJERD) - IJERD Editor
This document presents a comparative study of flat-based/data-centric routing protocols specific to wireless sensor networks (WSNs). It first provides background on data-centric approaches in WSNs and then discusses popular flat-based/data-centric routing protocols, including Directed Diffusion, Minimum Cost Forwarding Algorithm (MCFA), Threshold sensitive Energy Efficient sensor Network protocol (TEEN), Adaptive Periodic Threshold sensitive Energy Efficient sensor Network protocol (APTEEN), Energy Aware Data-centric (EAD) Routing Protocol, Rumor Routing, Sensor Protocols for Information via Negotiation (SPIN), Constrained Anisotropic Diffusion Routing (CADR), COUGAR,
The document discusses building a data warehouse by migrating data from legacy systems using an iterative methodology. It emphasizes the importance of high quality metadata to handle changes during the migration process and minimize errors. Uniform data access times across all machines are optimal for parallel query execution to avoid data skew. The crossbar switch architecture connects all machines equally, eliminating data skew issues seen in other architectures.
A New Architecture for Group Replication in Data Grid - Editor IJCATR
Nowadays, grid systems are a vital technology for running high-performance programs and solving large-scale problems in science, engineering, and business. In grid systems, heterogeneous computational resources and data are shared between independent organizations that are geographically scattered. A data grid is a type of grid that relates computational and storage resources. Data replication is an efficient way for a data grid to achieve high performance and high availability by keeping numerous replicas in different locations, e.g., grid sites. In this research, we propose a new architecture for dynamic group data replication. In our architecture, we add two components to the OptorSim architecture: a Group Replication Management (GRM) component and a Management of Popular Files Group (MPFG) component. OptorSim was developed by the European DataGrid project to evaluate replication algorithms. Using this architecture, groups of popular files are replicated to grid sites at the end of each predefined time interval.
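As a rough illustration of the interval-driven behaviour described in this abstract, the sketch below counts file accesses during a window and replicates the most popular group of files to each site when the interval ends; the class name, group size, and site model are hypothetical stand-ins, since the abstract does not specify the GRM/MPFG internals.

```python
from collections import Counter

# Hypothetical sketch of interval-driven group replication (GRM/MPFG-style).
class GroupReplicator:
    def __init__(self, sites, group_size=3):
        self.access_counts = Counter()   # file -> accesses this interval
        self.sites = sites               # site name -> set of held files
        self.group_size = group_size

    def record_access(self, file_id):
        self.access_counts[file_id] += 1

    def end_of_interval(self):
        # Form the "popular files group" from this interval's hottest files.
        popular = [f for f, _ in self.access_counts.most_common(self.group_size)]
        for held in self.sites.values():
            held.update(popular)         # replicate the whole group to each site
        self.access_counts.clear()       # start the next interval fresh
        return popular

sites = {"site_a": {"f1"}, "site_b": set()}
rep = GroupReplicator(sites)
for f in ["f2", "f2", "f3", "f2", "f3", "f4"]:
    rep.record_access(f)
print(rep.end_of_interval())  # ['f2', 'f3', 'f4']
print(sites)                  # both sites now hold the popular group
```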
Data characterization towards modeling frequent pattern mining algorithms - csandit
Big data has quickly come under the spotlight in recent years. Since big data applications are expected to handle extremely large amounts of data, it is natural that demand is increasing for computational environments that accelerate and scale out big data applications. The behavior of big data applications, however, is not yet clearly defined. Among big data applications, this paper focuses specifically on stream mining applications, whose behavior varies according to the characteristics of the input data. The parameters for data characterization, however, are not yet clearly defined, and no study has investigated explicit relationships between the input data and stream mining applications. Therefore, this paper takes frequent pattern mining as a representative stream mining application and interprets the relationships between the characteristics of the input data and the behavior of signature algorithms for frequent pattern mining.
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
The Impact of Data Replication on Job Scheduling Performance in Hierarchical ... - graphhoc
In data-intensive applications, data transfer is a primary cause of job execution delay. Data access time depends on bandwidth, and the major bottleneck to fast data access in grids is the high latency of wide area networks and the Internet. Effective scheduling can reduce the amount of data transferred across the Internet by dispatching a job to where the needed data are present. Another solution is to use a data replication mechanism; the objective of dynamic replica strategies is to reduce file access time, which in turn reduces job runtime. In this paper we develop a job scheduling policy and a dynamic data replication strategy, called HRS (Hierarchical Replication Strategy), to improve data access efficiency. We study our approach and evaluate it through simulation. The results show that our algorithm improves on current strategies by 12%.
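The core scheduling idea, dispatching each job to where its data already sit, can be sketched as follows; the cost model (missing bytes divided by site bandwidth) and all names are assumptions for illustration, not the HRS algorithm itself.

```python
# Sketch of data-aware dispatch: send each job to the site where the
# least transfer time would be needed (missing bytes / site bandwidth).
def pick_site(job_files, sites, file_sizes, bandwidth):
    def transfer_cost(site):
        missing = [f for f in job_files if f not in sites[site]]
        return sum(file_sizes[f] for f in missing) / bandwidth[site]
    return min(sites, key=transfer_cost)

sites = {"s1": {"a", "b"}, "s2": {"a"}}          # site -> files already held
file_sizes = {"a": 100, "b": 400, "c": 50}       # MB (illustrative)
bandwidth = {"s1": 10.0, "s2": 100.0}            # MB/s (illustrative)
print(pick_site({"a", "b", "c"}, sites, file_sizes, bandwidth))  # -> s2
```

Here the job goes to s2 even though s1 holds more of its files, because s2's higher bandwidth makes fetching the missing data cheaper.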
REPLICATION STRATEGY BASED ON DATA RELATIONSHIP IN GRID COMPUTING - csandit
This study discusses the utilization of three types of relationships in performing data replication. As grid computing offers the ability to share huge amounts of resources, resource availability is an important issue to be addressed. The undertaken approach combines the viewpoints of the user, the system, and the grid itself in ensuring resource availability. The proposed strategy is realized in OptorSim and evaluated on execution time, storage usage, network bandwidth, and computing element usage. The results suggest that the proposed strategy produces a better outcome than an existing method even when varied job workloads are introduced.
A novel cache resolution technique for cooperative caching in wireless mobile... - csandit
Cooperative caching is used in mobile ad hoc networks to reduce the latency perceived by mobile clients while retrieving data and to reduce the traffic load in the network. Caching also increases the availability of data despite server disconnections. The implementation of a cooperative caching technique essentially involves four major design considerations: (i) cache placement and resolution, which decide where to place and how to locate the cached data; (ii) cache admission control, which decides which data to cache; (iii) cache replacement, which makes the replacement decision when the cache is full; and (iv) consistency maintenance, i.e., maintaining consistency between the data on the server and in the cache. In this paper we propose an effective cache resolution technique that reduces the number of messages flooded into the network to find the requested data. The experimental results are promising on the metrics studied.
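A toy node-local cache makes the four design considerations concrete; the admission rule (cache only items whose origin is several hops away) and LRU replacement are common textbook choices used here for illustration, not the paper's specific resolution technique.

```python
from collections import OrderedDict

class CooperativeCache:
    """Toy cache illustrating admission control + LRU replacement."""
    def __init__(self, capacity, min_hops_to_admit=2):
        self.store = OrderedDict()            # key -> value, in LRU order
        self.capacity = capacity
        self.min_hops = min_hops_to_admit

    def resolve(self, key):
        if key in self.store:                 # resolution: local hit
            self.store.move_to_end(key)
            return self.store[key]
        return None                           # caller then asks neighbours/server

    def admit(self, key, value, hops_to_origin):
        if hops_to_origin < self.min_hops:    # admission control: cheap to refetch
            return
        if len(self.store) >= self.capacity:  # replacement: evict least recently used
            self.store.popitem(last=False)
        self.store[key] = value

    def invalidate(self, key):                # consistency: drop stale copies
        self.store.pop(key, None)

cache = CooperativeCache(capacity=2)
cache.admit("d1", b"...", hops_to_origin=3)
cache.admit("d2", b"...", hops_to_origin=1)   # rejected: origin too close
print(cache.resolve("d1") is not None, cache.resolve("d2"))  # True None
```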
This document summarizes an article from the International Journal of Computer Engineering and Technology (IJCET) that proposes a new dynamic data replication and job scheduling strategy for data grids. The strategy aims to improve data access time and reduce bandwidth consumption by replicating data based on file popularity, storage limitations at nodes, and data category. It replicates more popular files that are in the same category as frequently accessed data to nodes close to where jobs are run. This is intended to optimize performance by locating data and jobs close together. The document provides context on related work and outlines the proposed system architecture and replication/scheduling approach.
ENHANCING KEYWORD SEARCH OVER RELATIONAL DATABASES USING ONTOLOGIES - csandit
This document summarizes a research paper that proposes a system to enhance keyword search over relational databases using ontologies. The system builds structures during pre-processing like a reachability index to store connectivity information and an ontology concept graph. During querying, it maps keywords to concepts, uses the ontology to find related concepts and tuples, and generates top-k answer trees combining syntactic and semantic matches while limiting redundant results. The system is expected to perform better than existing approaches by reducing storage requirements through its approach to materializing neighborhood information in the reachability index.
This document provides a survey of security techniques for the Border Gateway Protocol (BGP). It reviews recent techniques categorized as cryptographic/attestation, database, overlay/group protocols, penalty methods, and data-plane testing. The techniques are reviewed at a high level and their shortcomings summarized to provide readers a quick understanding of the direction of research in BGP security.
Peer to peer cache resolution mechanism for mobile ad hoc networks - ijwmn
In this paper we investigate the problem of cache resolution in a mobile peer-to-peer ad hoc network. In our vision, cache resolution should satisfy the following requirements: (i) it should result in low message overhead, and (ii) the information should be retrieved with minimum delay. We show that these goals can be achieved by splitting the one-hop neighbours into two sets based on transmission range. The proposed approach reduces the number of messages flooded into the network to find the requested data. The scheme is fully distributed and comes at very low cost in terms of cache overhead. The experimental results are promising on the metrics studied.
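A minimal sketch of the neighbour-splitting idea follows, assuming each node knows an estimated distance to its one-hop neighbours; the cutoff of half the transmission range is an illustrative choice, not the paper's parameter.

```python
# Split one-hop neighbours into a "near" and a "far" set by estimated
# distance, so a query can be sent to far neighbours first: they cover
# more new area per message, reducing flooding.
def split_neighbours(neighbours, tx_range, threshold_fraction=0.5):
    near, far = set(), set()
    cutoff = tx_range * threshold_fraction
    for node, dist in neighbours.items():
        (near if dist <= cutoff else far).add(node)
    return near, far

neighbours = {"n1": 10.0, "n2": 45.0, "n3": 70.0}   # metres (illustrative)
near, far = split_neighbours(neighbours, tx_range=100.0)
print(near, far)   # {'n1', 'n2'} {'n3'}
```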
Efficient Cost Minimization for Big Data Processing - IRJET Journal
This document discusses efficient cost minimization techniques for big data processing. It characterizes big data processing using a two-dimensional Markov chain model to evaluate expected completion time. The problem is formulated as a mixed non-linear programming problem to optimize data assignment, placement, and migration across distributed data centers. A weighted bloom filter approach is presented to reduce communication costs through distributed incomplete pattern matching.
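The weighted Bloom filter component can be sketched as below: items with higher query weight get more hash probes, lowering their false-positive rate at the cost of more bits set. The sizing and the weight-to-probe mapping are illustrative assumptions, not the paper's parameters.

```python
import hashlib

class WeightedBloomFilter:
    """Sketch: hotter items get more hash probes, so their false-positive
    rate is lower than that of rarely-queried items."""
    def __init__(self, m_bits=1024, k_for_weight=lambda w: min(8, 1 + int(w))):
        self.bits = bytearray(m_bits // 8)
        self.m = m_bits
        self.k_for_weight = k_for_weight

    def _positions(self, item, k):
        for i in range(k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item, weight):
        for p in self._positions(item, self.k_for_weight(weight)):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item, weight):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item, self.k_for_weight(weight)))

wbf = WeightedBloomFilter()
wbf.add("chunk-42", weight=5.0)                          # hot item: 6 probes
print(wbf.might_contain("chunk-42", 5.0))                # True
print(wbf.might_contain("chunk-99", 5.0))                # almost surely False
```

Note that the same weight must be supplied when adding and when querying an item, since it determines how many probe positions are checked.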
Ad hoc networks are infrastructure-less networks in which nodes are connected by multi-hop wireless links. Each node acts as a router, since it supports distributed routing. Routing is challenging because node mobility causes frequent path breaks. Application domains include military applications, emergency search-and-rescue operations, and collaborative computing. Existing protocols are divided into proactive and on-demand routing protocols, and various new routing algorithms have been designed to optimize network performance on various parameters. Dual reinforcement routing is a learning-based approach to routing. This paper describes its implementation and mathematical evaluation, and analyzes the results to judge the performance of the network.
IRJET - An Integrity Auditing & Data Dedupe with Effective Bandwidth in Cloud St... - IRJET Journal
This document proposes a system for secure cloud storage that uses data deduplication, integrity auditing by a third party auditor (TPA), and encryption to improve security, reduce storage usage, and verify data integrity. It compares different levels of data deduplication (byte-level, block-level, file-level) and proposes using a combination of SHA-512 hashing, Merkle hash trees, and AES-128 encryption. Performance analysis shows the proposed system requires less storage space than existing systems by removing duplicate data, and the third party auditor can verify data integrity more efficiently than the cloud service provider.
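A minimal sketch of block-level deduplication using SHA-512 digests follows; the block size and the in-memory block store are assumptions, and the paper's system layers Merkle hash trees, AES-128 encryption, and third-party auditing on top of this basic idea.

```python
import hashlib

# Sketch of block-level deduplication: store each unique block once,
# keyed by its SHA-512 digest, and represent a file as a list of digests.
BLOCK_SIZE = 4096   # illustrative block size

block_store = {}    # digest -> block bytes (stands in for the cloud store)

def dedupe_upload(data: bytes):
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha512(block).hexdigest()
        if digest not in block_store:        # upload only unseen blocks
            block_store[digest] = block
        recipe.append(digest)
    return recipe                            # enough to reassemble the file

def reassemble(recipe):
    return b"".join(block_store[d] for d in recipe)

a = dedupe_upload(b"A" * 8192)
b = dedupe_upload(b"A" * 8192 + b"B" * 10)   # shares the first two blocks
print(len(block_store))                      # 2: one "A" block + one "B" tail
print(reassemble(a) == b"A" * 8192)          # True
```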
Advancing life sciences with IBM reference architecture for genomics - Patrick Berghaeger
IBM has created a reference architecture for genomics in collaboration with researchers and partners to address the challenges of processing, storing, and analyzing massive amounts of genomic data. The architecture defines an end-to-end solution with components for data management, workflow orchestration, and access. It supports large-scale genomics research and clinical applications through a scalable, software-defined, and data-centric approach.
Method and apparatus for scheduling resources on a switched underlay network - Tal Lavian Ph.D.
A method and apparatus for resource scheduling on a switched underlay network enables coordination, scheduling, and scheduling optimization to take place taking into account the availability of the data and the network resources comprising the switched underlay network. Requested transfers may be fulfilled by assessing the requested transfer parameters, the availability of the network resources required to fulfill the request, the availability of the data to be transferred, the availability of sufficient storage resources to receive the data, and other potentially conflicting requested transfers. In one embodiment, the requests are under-constrained to enable transfer scheduling optimization to occur. The under-constrained nature of the requests enables requests to be scheduled taking into account factors such as transfer priority, transfer duration, the amount of time it has been since the transfer request was submitted, and many other factors.
https://www.google.com/patents/US20050076336?dq=20050076336&hl=en&sa=X&ei=e-JVVOb3J8G0uATNmILgAQ&ved=0CB8Q6AEwAA
The document discusses data stream mining and summarizes some key challenges and techniques. It describes how traditional data mining cannot be directly applied to data streams due to their continuous, rapid arrival. It then outlines several techniques used for summarizing and extracting knowledge from data streams, including sampling, sketching, load shedding, synopsis data structures, and algorithms modified from basic data mining to handle streams.
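As a concrete example of the summarization techniques listed, here is reservoir sampling, which maintains a uniform random sample of fixed size k over an unbounded stream in O(k) memory:

```python
import random

# Reservoir sampling: after seeing n items, each item has probability
# k/n of being in the reservoir, regardless of stream length.
def reservoir_sample(stream, k):
    reservoir = []
    for n, item in enumerate(stream):
        if n < k:
            reservoir.append(item)
        else:
            j = random.randint(0, n)   # new item survives with probability k/(n+1)
            if j < k:
                reservoir[j] = item
    return reservoir

print(reservoir_sample(range(1_000_000), k=5))
```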
Hadoop is an open-source implementation of the MapReduce framework in the realm of distributed processing. A Hadoop cluster is a type of computational cluster designed for storing and analyzing large datasets across a cluster of workstations. To handle data at massive scale, Hadoop exploits the Hadoop Distributed File System (HDFS). Like most distributed file systems, HDFS faces a familiar problem of data sharing and availability among compute nodes, which often degrades performance. This paper is an experimental evaluation of Hadoop's computing performance, made by designing a rack-aware cluster that uses Hadoop's default block placement policy to improve data availability. Additionally, an adaptive data replication scheme that predicts access counts using Lagrange interpolation is adapted to fit the scenario. Experiments conducted on the rack-aware cluster setup significantly reduced task completion time, but once the volume of data being processed increases there is a considerable cutback in computational speed due to update cost. Finally, the threshold that balances update cost and replication factor is identified and presented graphically.
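The access-count prediction step can be sketched with plain Lagrange interpolation: fit the polynomial through recent (epoch, access-count) observations and evaluate it one step ahead. The history values and the replication threshold below are made up for illustration.

```python
# Lagrange interpolation: evaluate at t the unique polynomial passing
# through the given (x, y) points.
def lagrange_predict(points, t):
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (t - xj) / (xi - xj)
        total += term
    return total

history = [(1, 4), (2, 9), (3, 18)]           # (epoch, access count), illustrative
predicted = lagrange_predict(history, 4)
print(predicted)                               # 31.0 for this quadratic trend
if predicted > 25:                             # hypothetical replication trigger
    print("increase replication factor for this block")
```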
A time efficient and accurate retrieval of range aggregate queries using fuzz... - IJECEIAES
This document presents a new approach called Fuzzy Clustering Means (FCM) to efficiently retrieve range aggregate queries from big data. Existing approaches have issues with inefficient retrieval times and clustering inaccuracies for large datasets. The FCM approach first partitions big data into independent partitions using balanced partitioning. It then creates an estimation sketch for each partition. When a range query is received, it estimates the result from each partition and summarizes the local estimates to provide the final output. Analysis on a dataset of 200,000 records shows the FCM approach has higher accuracy, lower error rates, and faster execution times for queries compared to existing approaches. Future work will investigate extending this solution to handle more complex query formats and using FCM to boost general
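The partition-then-summarize pattern can be sketched as follows; a plain equi-width histogram stands in for the paper's fuzzy-clustering estimation sketch, and all values are illustrative.

```python
# Split the data into partitions, keep a tiny per-partition summary
# (count and sum per bucket), and answer a range-aggregate query by
# combining the local estimates from every partition.
def build_summary(partition, bucket_width=10):
    summary = {}
    for v in partition:
        b = int(v // bucket_width)
        cnt, s = summary.get(b, (0, 0.0))
        summary[b] = (cnt + 1, s + v)
    return summary

def estimate_range_sum(summaries, lo, hi, bucket_width=10):
    total = 0.0
    for summary in summaries:                  # one local estimate per partition
        for b, (cnt, s) in summary.items():
            if lo <= b * bucket_width and (b + 1) * bucket_width <= hi:
                total += s                     # bucket fully inside the range
    return total

parts = [[3, 12, 18, 25], [7, 14, 33]]
summaries = [build_summary(p) for p in parts]
print(estimate_range_sum(summaries, 10, 30))   # 12+18+25+14 = 69.0
```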
This document offers advice on curing rosacea naturally. It recommends avoiding triggers such as alcohol, spicy food, and cosmetics, and keeping a diary of foods and environment to identify one's own triggers. It also suggests that rosacea cures can be devised by oneself without doctors, since an individual cure is more effective and cheaper than paying for medical treatments.
Value stream mapping (VSM) is a tool that uses symbols to depict and improve the flow of inventory and information through a process. It makes waste visible and allows organizations to plan its elimination. VSM involves mapping the current state, identifying areas for improvement, and designing a future state with minimum waste. Key steps include selecting a process to map, collecting data on times and flows, critiquing the current state, and creating an action plan to implement the future state design.
Conceptual Pedagogy is a theory developed by Miguel de Subiría in 1998. It is based on two postulates, one psychological and one pedagogical. It uses the Human Triangle and the Pedagogical Hexagon to explain the phases of learning. It proposes instruments of knowledge as mental tools for understanding reality, including notions, propositions, concepts, and categories that progress from the simple to the complex. It concludes that every class should teach something, about something
Presentación1 Guillain-Barré grupo de EIDY MONTAÑO YEPEZ - Eidy Montaño Yepez
This document describes Guillain-Barré syndrome, a neurological disorder in which the immune system attacks part of the peripheral nervous system. It affects people of all ages and sexes and can cause flaccid paralysis, muscle weakness, and respiratory difficulty. Treatment includes plasmapheresis, corticosteroids, and immunoglobulin to reduce inflammation, as well as physiotherapy and respiratory support. The prognosis depends on
This document discusses several ways to use quizzes and mysteries in the classroom to engage students. It describes having students use detective skills to find answers provided in clues like photos or diagrams. It also suggests creating treasure hunts where students search a passage for a specific fact or concept. Another idea is making map-based quizzes where students label locations on a diagram. The document advises having students create their own quizzes to assess what they've learned. It also proposes competitive team-based quizzes and games to motivate students.
The document describes malabsorption syndrome, which occurs when the gastrointestinal tract fails to absorb macronutrients and micronutrients. It is classified into four types according to the underlying cause. Clinical manifestations include chronic diarrhea, abdominal distension, and failure to thrive. Causes vary with the patient's age and may include conditions such as celiac disease, food allergies, or infections. Examination and treatment require a comprehensive approach that includes
This document presents the development of a didactic module for laboratory practice with proximity detectors for the Electronic Instrumentation course at the Universidad Industrial de Santander. The module includes the selection and design of capacitive, inductive, photoelectric, and ultrasonic sensors, as well as the required materials and components. Additionally, laboratory guides were created to explain the operation and applications of each sensor.
The document expresses a man's melancholy and existential reflections on the passage of time, the fleetingness of life, and nature. It poses questions about why the beautiful things of life disappear, such as the smell of the forest, the breeze, fruit, and youth. It also questions the fate of his words, emotions, and talents, and the meaning of life and death.
The document describes the different types of sensors used in remote sensing, including passive sensors such as cameras and microwave radiometers, and active sensors such as radar and lidar. It explains how these sensors work and their technical characteristics. It also presents some key Earth observation programs such as Landsat and describes the orbital and instrumentation characteristics of the Landsat-7 satellite.
This is a frequently asked question, and to understand it, we need to differentiate between dominance and frequency of expression. While many traits may be e.
1. Energy is the capacity to produce change. It is measured in joules and can be transferred between systems as heat or work.
2. Open, closed, and isolated systems can exchange energy. Energy is conserved and can be transferred by conduction, convection, or radiation.
3. Mechanical energy includes kinetic and potential energy. Work modifies mechanical energy, and power measures the rate of work.
Privacy Preserved Distributed Data Sharing with Load Balancing Scheme - Editor IJMTER
Data sharing services are provided in a peer-to-peer (P2P) environment. Federated database technology is used to manage locally stored data with a federated DBMS and provide unified data access. Information brokering systems (IBSs) connect large-scale, loosely federated data sources via a brokering overlay. Information brokers redirect client queries to the requested data servers. Privacy-preserving methods are used to protect the data location and the data consumer. Brokers are trusted to enforce server-side access control for data confidentiality. Query and access control rules are maintained, together with shared data details, in metadata. A semantic-aware index mechanism routes queries based on their content and allows users to submit queries without data or server information.

Distributed data sharing is managed with the Privacy Preserved Information Brokering (PPIB) scheme, which handles attribute-correlation and inference attacks. The PPIB overlay infrastructure consists of two types of brokering components: brokers and coordinators. The brokers act as mix anonymizers and are responsible for user authentication and query forwarding. The coordinators, concatenated in a tree structure, enforce access control and query routing based on automata. Automaton segmentation and query segment encryption schemes are used in the privacy-preserving Query Brokering (QBroker) component. The automaton segmentation scheme logically divides the global automaton into multiple independent segments, and the query segment encryption scheme consists of pre-encryption and post-encryption modules.

The PPIB scheme is enhanced to support dynamic site distribution and a load balancing mechanism. Peer workloads and the trust level of each peer are integrated into the site distribution process. The PPIB is further improved with a self-reconfiguration mechanism, and an automated decision support system for administrators is included.
A data estimation for failing nodes using fuzzy logic with integrated microco... - IJECEIAES
Continuous data transmission in wireless sensor networks (WSNs) is one of the most important characteristics which makes sensors prone to failure. A backup strategy needs to co-exist with the infrastructure of the network to assure that no data is missing. The proposed system relies on a backup strategy of building a history file that stores all collected data from these nodes. This file is used later on by fuzzy logic to estimate missing data in case of failure. An easily programmable microcontroller unit is equipped with a data storage mechanism used as cost worthy storage media for these data. An error in estimation is calculated constantly and used for updating a reference “optimal table” that is used in the estimation of missing data. The error values also assure that the system doesn’t go into an incremental error state. This paper presents a system integrated of optimal data table, microcontroller, and fuzzy logic to estimate missing data of failing sensors. The adapted approach is guided by the minimum error calculated from previously collected data. Experimental findings show that the system has great potentials of continuing to function with a failing node, with very low processing capabilities and storage requirements.
Reduce the False Positive and False Negative from Real Traffic with Intrusion... - inventy
Research Inventy : International Journal of Engineering and Science is published by the group of young academic and industrial researchers with 12 Issues per year. It is an online as well as print version open access journal that provides rapid publication (monthly) of articles in all areas of the subject such as: civil, mechanical, chemical, electronic and computer engineering as well as production and information technology. The Journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers will be published by rapid process within 20 days after acceptance and peer review process takes only 7 days. All articles published in Research Inventy will be peer-reviewed.
A Cooperative Cache Management Scheme for IEEE802.15.4 based Wireless Sensor ... - IJECEIAES
Wireless Sensor Networks (WSNs) based on the IEEE 802.15.4 MAC and PHY layer standards is a recent trend in the market. It has gained tremendous attention due to its low energy consumption characteristics and low data rates. However, for larger networks minimizing energy consumption is still an issue because of the dissemination of large overheads throughout the network. This consumption of energy can be reduced by incorporating a novel cooperative caching scheme to minimize overheads and to serve data with minimal latency and thereby reduce the energy consumption. This paper explores the possibilities to enhance the energy efficiency by incorporating a cooperative caching strategy.
Data Retrieval Scheduling For Unsynchronized Channel in Wireless Broadcast Sy... - IJERA Editor
Wireless data broadcast disseminates data to a large number of mobile clients. In many information services, users may query multiple data items at a time. The environment under consideration is asymmetric in that the information server has much more bandwidth available than the clients. To maximize the number of downloads within a deadline, the paper defines a problem called largest number data retrieval (LNDR), proves that the decision version of LNDR is NP-hard, and investigates an approximation algorithm for it. It also defines another problem, minimum cost data retrieval (MCDR), which aims to download a set of requested data items with the least response time and energy consumption. The data scheduling problem is considered over unsynchronized channels at the server side. In the proposed system, LNDR and MCDR are used in both push-based and pull-based broadcast models. The proposed approximation algorithms efficiently schedule the retrieval of multiple data items from multiple channels. When the time needed for channel switching can be ignored, a maximum-matching optimal algorithm is exhibited for LNDR that requires only polynomial time; when switching time cannot be neglected, simulation results demonstrate the practical efficiency of the proposed algorithms.
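The matching view of LNDR can be sketched as below: with switching time ignored and a single receiver, each requested item can be downloaded in any time slot where some channel broadcasts it, so maximizing downloads before the deadline is a bipartite matching between items and slots. The schedule and deadline are illustrative.

```python
# Bipartite matching via augmenting paths: items on one side, time slots
# on the other; an edge exists where the item is on air in that slot.
def max_downloads(appearances, deadline):
    match = {}                      # time slot -> item

    def augment(item, slots, seen):
        for t in slots:
            if t <= deadline and t not in seen:
                seen.add(t)
                if t not in match or augment(match[t], appearances[match[t]], seen):
                    match[t] = item
                    return True
        return False

    count = 0
    for item, slots in appearances.items():
        if augment(item, slots, set()):
            count += 1
    return count, match

appearances = {"d1": {1, 4}, "d2": {1}, "d3": {2, 4}}   # item -> on-air slots
print(max_downloads(appearances, deadline=4))
# (3, {1: 'd2', 4: 'd1', 2: 'd3'}): all three items fit before the deadline
```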
A Survey of File Replication Techniques In Grid Systems - Editor IJCATR
Grid is a type of parallel and distributed system designed to provide reliable access to data and computational resources in wide area networks. These resources are distributed across different geographical locations. Efficient data sharing in global networks is complicated by erratic node failure, unreliable network connectivity, and limited bandwidth. Replication is a technique used in grid systems to improve applications' response time and to reduce bandwidth consumption. In this paper, we present a survey of basic and new replication techniques that have been proposed by other researchers, followed by a full comparative study of these replication strategies.
This document summarizes a paper that presents a novel method for passive resource discovery in cluster grid environments. The method monitors network packet frequency from nodes' network interface cards to identify nodes with available CPU cycles (<70% utilization) by detecting latency signatures from frequent context switching. Experiments on a 50-node testbed showed the method can consistently and accurately discover available resources by analyzing existing network traffic, including traffic passed through a switch. The paper also proposes algorithms for distributed two-level resource discovery, replication and utilization to optimize resource allocation and access costs in distributed computing environments.
International Journal of Engineering and Science Invention (IJESI) - inventionjournals
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of engineering, science, and technology, including new teaching methods, assessment, validation, and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
IRJET - AC Duct Monitoring and Cleaning Vehicle for Train Coaches - IRJET Journal
This document summarizes research on techniques for handling concept drift in data stream mining. It begins with an introduction to the challenges of concept drift in data streams and the two main approaches for handling concept drift using ensembles: online and block-based. It then reviews several existing studies on concept drift detection and handling in data streams. Finally, it proposes an adaptive online ensemble approach that uses an internal change detector to dynamically determine block sizes and capture concept drifts in a timely manner. Experimental results show this approach outperforms other ensemble techniques, especially on datasets with sudden concept changes.
IRJET - A Data Stream Mining Technique Dynamically Updating a Model with Dynam... - IRJET Journal
This document summarizes several techniques for handling concept drift in data stream mining. It discusses how ensemble methods are commonly used to deal with concept drift and categorizes ensemble approaches into online and block-based. It also reviews several existing studies on handling concept drift, including methods that use adaptive windowing and online learning as well as techniques for detecting concept drift and efficiently updating models. The document concludes by discussing the need for approaches that can adapt to different types of concept drift and changes in non-stationary data streams.
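A sketch of the kind of error-rate change detector such adaptive ensembles use to size their blocks is shown below; it mirrors the classic DDM heuristic (flag drift when the streaming error rate rises several standard deviations above its best observed level), with a conventional 3-sigma threshold rather than either paper's exact detector.

```python
import math

class DriftDetector:
    """Flag concept drift when the running error rate p (plus its standard
    deviation s) significantly exceeds the best p + s seen so far."""
    def __init__(self):
        self.n = 0
        self.errors = 0
        self.best = float("inf")   # lowest observed p + s

    def update(self, mispredicted: bool) -> bool:
        self.n += 1
        self.errors += int(mispredicted)
        p = self.errors / self.n
        s = math.sqrt(p * (1 - p) / self.n)
        self.best = min(self.best, p + s)
        return p + s > self.best + 3 * s   # drift: error rose significantly

det = DriftDetector()
stream = [False] * 200 + [True] * 40       # sudden concept change at item 200
for i, miss in enumerate(stream):
    if det.update(miss):
        print("drift detected at instance", i)   # fires a few items after 200
        break
```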
MAP/REDUCE DESIGN AND IMPLEMENTATION OF APRIORI ALGORITHM FOR HANDLING VOLUMIN... - acijjournal
Apriori is one of the key algorithms for generating frequent itemsets. Analysing frequent itemsets is a crucial step in analysing structured data and in finding association relationships between items. It stands as an elementary foundation for supervised learning, which encompasses classifier and feature extraction methods. Applying this algorithm is crucial to understanding the behaviour of structured data. Most structured data in the scientific domain is voluminous, and processing such data requires state-of-the-art computing machines. Setting up such an infrastructure is expensive, so a distributed environment such as a clustered setup is employed for tackling such scenarios. The Apache Hadoop distribution is one of the cluster frameworks for distributed environments that helps by distributing voluminous data across a number of nodes in the framework. This paper focuses on the map/reduce design and implementation of the Apriori algorithm for structured data analysis.
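One map/reduce pass of Apriori can be sketched as follows: mappers count candidate itemsets within their split of the transactions, and the reducer sums the partial counts and applies the minimum-support threshold. Hadoop would shuffle by key between the phases; here they are simulated in-process as an illustration, not the paper's implementation.

```python
from collections import Counter
from itertools import combinations

def map_phase(transactions, candidates):
    # Each mapper counts candidate itemsets in its own data split.
    counts = Counter()
    for t in transactions:
        for c in candidates:
            if c <= t:                    # candidate itemset is a subset
                counts[c] += 1
    return counts

def reduce_phase(partial_counts, min_support):
    # The reducer sums partial counts and prunes by minimum support.
    total = Counter()
    for pc in partial_counts:
        total.update(pc)
    return {c: n for c, n in total.items() if n >= min_support}

splits = [
    [frozenset("ab"), frozenset("abc")],   # split on node 1
    [frozenset("bc"), frozenset("abd")],   # split on node 2
]
candidates = [frozenset(p) for p in combinations("abcd", 2)]
partials = [map_phase(s, candidates) for s in splits]
print(reduce_phase(partials, min_support=2))
# {frozenset({'a','b'}): 3, frozenset({'b','c'}): 2}
```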
The Overview of Discovery and Reconciliation of LTE Network - IRJET Journal
This document provides an overview of the Discovery and Reconciliation of LTE Network system. The system discovers physical and logical network assets from the LTE network and reconciles them with records stored in the Adaptive Inventory database. It identifies any discrepancies between the network assets and database records, along with ways to resolve the discrepancies either manually or automatically. The system uses various modules like the NMS Sweep Module and LTE Module to discover different parts of the LTE network, and the Equipment Hierarchy Module reconciles the physical network information.
MULTIDIMENSIONAL ANALYSIS FOR QOS IN WIRELESS SENSOR NETWORKS - ijcses
Nodes in a mobile ad-hoc network are connected wirelessly and the network is auto-configuring [1]. This paper introduces the usefulness of a data warehouse as an alternative for managing data collected by WSNs. A wireless sensor network produces huge quantities of data that need to be processed and homogenised so as to help researchers and other people interested in the information. Collected data is managed and compared with data coming from other data sources, so that systems can participate in technical reporting and decision making. This paper proposes a model to design, extract, transform, and normalize data collected by wireless sensor networks by implementing a multidimensional warehouse for comparing many aspects of a WSN, such as routing protocol [4], sensor, sensor mobility, and cluster. Hence, a data warehouse defined and applied to this context is presented as a useful approach that gives specialists raw data and information for decision processes and lets them navigate from one aspect to another.
Analysis of service-oriented traffic classification with imperfect traffic cl...IOSR Journals
This document proposes a new approach to network traffic classification called service-oriented traffic classification (SOTC). SOTC relies on identifying network services running on specific IP addresses and ports, and then classifying any traffic directed to that IP/port as belonging to that service. This reduces computational requirements compared to other methods. The accuracy of SOTC depends on correctly identifying the services in the initial stage. Evaluating SOTC on real traffic data confirmed it can improve classification accuracy while meeting scalability needs for large networks.
This document discusses dynamic adaptation techniques for optimizing data transfer performance over networks. It describes how the number of concurrent data transfer streams can be adjusted dynamically according to changing network conditions, without relying on historical measurements or external profiling. The proposed approach gradually increases the level of parallelism during a transfer to find a near-optimal number of streams based on instant throughput measurements, allowing it to adapt to varying environments and network utilization over time.
A New Data Stream Mining Algorithm for Interestingness-rich Association RulesVenu Madhav
Frequent itemset mining and association rule generation are challenging tasks in data streams. Even though various algorithms have been proposed to address them, it has been found that frequency alone does not decide the significance or interestingness of the mined itemsets, and hence of the association rules. This has driven algorithms to mine association rules based on utility, i.e., the proficiency of the mined rules. However, few algorithms in the literature deal with utility, as most of them focus on reducing the complexity of frequent itemset/association rule mining. Moreover, those few algorithms consider only the overall utility of the association rules, not the consistency of the rules throughout a defined number of periods. To solve this issue, this paper proposes an enhanced association rule mining algorithm. The algorithm introduces a new weightage validation into conventional association rule mining to validate the utility of the mined association rules and its consistency. The utility is validated by an integrated calculation of the cost/price efficiency of the itemsets and their frequency. The consistency validation is performed at every defined number of windows using the probability distribution function, assuming that the weights are normally distributed. Hence, the validated rules are frequent and utility efficient, and their interestingness is distributed throughout the entire time period. The algorithm is implemented and the resultant rules are compared against the rules obtained from conventional mining algorithms.
A novel cloud storage system with support of sensitive data applicationijmnct
Most users are willing to store their data in a cloud storage system and use the many facilities of the cloud, but their sensitive data applications face potentially serious security threats. In this paper, the security requirements of sensitive data applications in the cloud are analyzed, and an improved structure for the typical cloud storage system architecture is proposed. A hardware USB key is used in the proposed architecture to enhance the security of user identity and of the interaction between users and the cloud storage system. Moreover, drawing on the idea of active data protection, a data security container is introduced into the system to enhance the security of the data transmission process by encapsulating the encrypted data and adding appropriate access control and data management functions. Static data blocks are replaced with a dynamic, executable data security container. An enhanced security architecture for the cloud storage terminal software is then proposed for better adaptation to users' specific requirements; its functions and components can be customized. The proposed architecture is also capable of detecting whether the execution environment conforms to predefined environment requirements.
Centralized Data Verification Scheme for Encrypted Cloud Data ServicesEditor IJMTER
Cloud environment supports data sharing between multiple users. Data integrity is violated
due to hardware / software failures and human errors. Data owners and public verifiers are involved to
efficiently audit cloud data integrity without retrieving the entire data from the cloud server. File and
block signatures are used in the integrity verification process.
“One Ring to RUle Them All” (Oruta) scheme is used for privacy-preserving public auditing process. In
oruta homomorphic authenticators are constructed using Ring Signatures. Ring signatures are used to
compute verification metadata needed to audit the correctness of shared data. The identity of the signer
on each block in shared data is kept private from public verifiers. Homomorphic authenticable ring
signature (HARS) scheme is applied to provide identity privacy with blockless verification. Batch
auditing mechanism supports to perform multiple auditing tasks simultaneously. Oruta is compatible
with random masking to preserve data privacy from public verifiers. Dynamic data management process
is handled with index hash tables. Traceability is not supported in oruta scheme. Data dynamism
sequence is not managed by the system. The system obtains high computational overhead
The proposed system is designed to perform public data verification with privacy. Traceability features
are provided with identity privacy. Group manager or data owner can be allowed to reveal the identity of
the signer based on verification metadata. Data version management mechanism is integrated with the
system.
Fig. 1. An overview of FIPS.
Fig. 2. IEEE C37.118 data frame format (simplified).
Fig. 3. IEEE C37.118 configuration frame format (simplified).
… parsing the format. Data frames cannot stand alone, as they do not describe the data they contain. The data must therefore be combined with data from the configuration frame to be useful. The data may be transformed to an intermediate format to alleviate this issue. Also, this conversion process allows processing tools to operate on data in many formats, and the output data may be readily converted to any desired format.
III. DATABASE DESIGN
Within the power system operation domain, synchrophasor data represents a new challenge in data storage. Data has historically been sampled and archived at much lower rates, such as one sample per 4 s in many applications. To accommodate the high data rate of synchrophasor data, a robust database system is required.

TABLE I. DATA INSERTION RATES INTO MYSQL TABLES
First, synchrophasor data arrives at a high rate, typically 30 samples per second (sps), but in some cases up to 60 sps. With many PMUs, and many sampled channels, databases quickly become strained. When certain communication protocols are used, data may arrive out of order, or with excessive latency. In some cases, data may be corrupted in transit, and if the communication protocol or medium is sufficiently unreliable, it may be lost entirely. The most important of these issues to address is ensuring that the database used can achieve a sufficient data rate.
Initially, the open-source MySQL database [7] was tested. Testing was conducted to verify that an adequate data insertion rate could be achieved. The results of that testing are shown in Table I. The testing was conducted on a server with eight 2-GHz Intel Xeon processor cores, 16 GB of random-access memory, and a 6-TB array of four 7200-r/min hard disks. The results show that the system is not very scalable, because even with a modest number of channels, data tables become full quickly. This leads to a large number of data tables, making data access a complex task.

Fig. 4. A well-balanced binary search tree.
Fig. 5. A pathologically unbalanced search tree.
In MySQL, data tables must be indexed to achieve high performance in data extraction. If the tables are not indexed, the entire table must be searched to retrieve data matching the desired criteria. This occurs because the data itself is not stored in an ordered fashion on disk [7]. The indexing makes use of a class of data structures known as search trees. For example, let us consider indexing time-tagged data using a simple binary search tree. In a binary search tree, larger values are placed to the right and smaller values are placed to the left as the tree grows. Each node in the search tree contains a pointer to the data which is indexed by that node. For instance, Fig. 4 represents a well-constructed binary search tree for a data set. Fig. 5 represents a pathological case, in which data has been inserted into the tree in sorted order. The well-constructed tree has a number of levels on the order of log2(n), where n is the number of items in the tree. But in the worst case, the tree actually has n levels, increasing the number of operations required to find a given item. To avoid the worst case, database systems use balancing algorithms [8] to keep the trees balanced. However, these algorithms add overhead to the insertion of data into the tree.
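To make this degradation concrete, the following minimal Python sketch (our illustration, not code from FIPS or MySQL) inserts the same keys into a naive, unbalanced binary search tree in random order and in sorted order, and reports the resulting depths. The sorted insertion produces the degenerate tree of Fig. 5.

```python
# Illustration (not FIPS code): depth of a naive binary search tree under
# random versus sorted insertion. Sorted keys yield the degenerate,
# linked-list-shaped tree of Fig. 5.
import random

class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    # Iterative insert: larger keys go right, smaller keys go left.
    node = Node(key)
    if root is None:
        return node
    cur = root
    while True:
        if key < cur.key:
            if cur.left is None:
                cur.left = node
                return root
            cur = cur.left
        else:
            if cur.right is None:
                cur.right = node
                return root
            cur = cur.right

def depth(root):
    if root is None:
        return 0
    return 1 + max(depth(root.left), depth(root.right))

keys = list(range(500))
shuffled = keys[:]
random.shuffle(shuffled)

random_tree = None
for k in shuffled:
    random_tree = insert(random_tree, k)
sorted_tree = None
for k in keys:
    sorted_tree = insert(sorted_tree, k)

print(depth(random_tree))  # on the order of log2(500): a few dozen at most
print(depth(sorted_tree))  # 500 levels: every lookup degrades to a scan
```

Balancing algorithms such as those in [8] keep the first, shallow shape even under sorted insertion, but at the cost of extra work on every insert.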
Other database systems, including PostgreSQL [9] and Berkeley DB [10], were briefly investigated, but it was determined that a better-suited database could be developed, starting from the assumption that data will arrive at the database in an approximately ordered fashion. For data tables with many columns, only a few of which require indexing, the use of search trees is practical. But when data arrives in order, and data records are not large compared to the indexed fields, search trees create a large amount of unnecessary overhead.
In the proposed database design, data is stored in a single, flat file per channel, and is buffered for a period before being written to the database. Synchrophasor data generally arrives approximately in order, so the buffer only needs to reorder the few data points that arrive out of order. The data is then written to the file in a strictly ordered fashion. The scheme for writing the data to the database is shown in Fig. 6. In the figure, the dark gray (red in online color version) cell in the queue on the right represents a missing piece of data. Before the upper gray cells in the queue can be output to the data file, this dark gray (red) cell must be filled or discarded. If it is discarded, the data intended to occupy the dark gray (red) cell cannot be added to the database later, because the data in the file would then no longer be in order. If the buffer is made sufficiently large, and the time before discard sufficiently long, the probability of data loss can be reduced to a suitably small value.

Fig. 6. Buffering system for data inbound to the database.
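A minimal sketch of such a reorder buffer follows. It is illustrative only, not the FIPS implementation: the write callback, the hold time, and the late-discard policy are our assumptions based on the description above.

```python
# Sketch of the write buffer described above: samples are held for a
# configurable period, emitted to the flat file in strict timestamp
# order, and anything arriving after its slot has been flushed is
# discarded (it can no longer be inserted without breaking file order).
import heapq
import time

class ReorderBuffer:
    def __init__(self, hold_seconds, write):
        self.hold = hold_seconds
        self.write = write            # callback that appends to the flat file
        self.heap = []                # min-heap ordered by sample timestamp
        self.last_flushed = float("-inf")

    def push(self, timestamp, value):
        if timestamp <= self.last_flushed:
            return False              # too late: the file is already past this point
        heapq.heappush(self.heap, (timestamp, time.monotonic(), value))
        return True

    def flush(self):
        # Emit every buffered sample whose hold time has expired,
        # always taking the earliest timestamp first.
        now = time.monotonic()
        while self.heap and now - self.heap[0][1] >= self.hold:
            ts, _, value = heapq.heappop(self.heap)
            self.write(ts, value)
            self.last_flushed = ts
```

Enlarging the heap's hold time trades memory and latency for a lower probability of discarding late data, exactly the trade-off described above.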
Once data is in the database, the binary search algorithm is employed. This algorithm operates in a similar fashion to the binary search tree previously described, but operates directly on the data. It assumes that the data is already in sorted order. As can be seen from Fig. 7, a list of data in sorted order can be treated as if it were the balanced tree in Fig. 4. But when random access to the stored data is possible, it is unnecessary to construct or store the actual tree structure to gain the same performance benefits.

To find data given a timestamp or range, first the data set is divided in half. The search then proceeds either to the left or to the right of the split, depending on whether the timestamp at the midpoint is greater or less than the timestamp sought. This subset is once again divided in half, and in this fashion, the search continues recursively until either the desired data point is found or all points are eliminated. Because the data is repeatedly divided in half, it can be shown that this search algorithm requires on the order of log2(n) operations, where n is the number of data points.

Fig. 7. The binary search algorithm. Here, the value 29 is sought.
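As an illustration of searching the flat file directly, the sketch below assumes a hypothetical fixed-size record layout (one 64-bit timestamp and one 64-bit value per record; the actual FIPS file format is not specified here). With fixed-size records, the midpoint of any index range can be reached with a single seek, so a lookup costs on the order of log2(n) seeks and no index structure is needed.

```python
# Sketch (assumed record layout, not the actual FIPS format): binary
# search over a flat file of fixed-size, timestamp-ordered records.
import struct

RECORD = struct.Struct("<dd")   # (timestamp, value), 16 bytes per record

def find(f, target):
    """Return the first record with timestamp >= target, or None."""
    f.seek(0, 2)                        # seek to end to learn the file size
    n = f.tell() // RECORD.size         # number of records in the file
    lo, hi = 0, n
    while lo < hi:                      # classic binary search on record index
        mid = (lo + hi) // 2
        f.seek(mid * RECORD.size)       # random access: one seek per halving
        ts, _ = RECORD.unpack(f.read(RECORD.size))
        if ts < target:
            lo = mid + 1
        else:
            hi = mid
    if lo == n:
        return None
    f.seek(lo * RECORD.size)
    return RECORD.unpack(f.read(RECORD.size))

# Usage: with open("channel_0042.dat", "rb") as f: find(f, 1263556800.0)
```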
IV. COMMUNICATION ISSUES
In developing a phasor measurement network, reliable, timely communication is of paramount importance. Many utilities today use multiprotocol label switching (MPLS) virtual private networks [11], or frame relay circuits [12], to provide this communication between PMUs and the control center. These communication media provide a guaranteed bit rate but are not guaranteed to be error free. Thus, it is important for a data concentrator to be resilient in responding to errors while maintaining the ability to provide a low-delay data stream to real-time applications. If a high degree of reliability cannot be built into the network upstream of the data concentrator, some dropped data may need to be tolerated in this stream.
IEEE C37.118 uses a cyclic redundancy check to ensure data has not been corrupted in transit. However, this does not make C37.118 a reliable protocol. In the field of communication networks, a reliable protocol is one which can make certain reasonable guarantees about data delivery. The concept of layered protocols provides some relief: IEEE C37.118 frames are typically not sent directly over a network, but instead are encapsulated within frames of some other protocol. This protocol can provide the reliability guarantees desired in a given application.
Many networks use simple first-in, first-out (FIFO) queuing systems. The end systems are assumed to detect congestion in the network, and react appropriately by reducing the data transmission rate. The bandwidth is divided among various flows, which are not assumed to have a particular bit rate. Simple FIFO queuing presents a problem when real-time or constant-bit-rate (CBR) data transmission is required. CBR sources have some advantage in queues, because the transmission rate is not decreased to accommodate congestion. However, when queues become full, packets are dropped from the tail of the queue. Unless queues within the network are configured using existing techniques to separately accommodate the CBR data flows, intermittent data loss issues are likely.
If TCP is used to transmit a CBR flow, retransmission accounts for data loss. But TCP implements a congestion window mechanism to automatically avoid congestion in a FIFO-queued network [13]. In general, the TCP congestion window follows a policy known as additive-increase, multiplicative-decrease, illustrated in Fig. 8. When congestion is not detected, the rate of transmission is gradually increased, as TCP attempts to discover the maximum capacity of the link. When congestion is detected, the window size is rapidly decreased to avoid a phenomenon known as congestion collapse [14]. As TCP implementations have evolved, so have the methods used to avoid congestion [15]. But TCP does not provide any built-in mechanism for priority-based flow control. Thus, high-priority flows such as synchrophasor data should be identified and marked at the network edge, and queued separately within the network, to avoid conflict with other TCP flows sharing the same link. Otherwise, when the congestion window is reduced, the TCP buffer is likely to overflow due to the constant input rate.

Fig. 8. TCP behavior on a lossy link.
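The following toy simulation (ours, not from the paper) traces the additive-increase, multiplicative-decrease policy: the congestion window grows by one segment per round trip and is halved whenever loss is signalled, producing the sawtooth pattern sketched in Fig. 8.

```python
# Toy AIMD simulation: additive increase of one segment per RTT,
# multiplicative decrease (halving) whenever a loss event occurs.
def aimd(loss_rounds, rounds=20, window=1.0):
    trace = []
    for r in range(rounds):
        if r in loss_rounds:
            window = max(1.0, window / 2)  # multiplicative decrease on loss
        else:
            window += 1.0                  # additive increase per round trip
        trace.append(window)
    return trace

print(aimd(loss_rounds={8, 14}))
# The repeated ramp-up and halving in the output is the sawtooth of Fig. 8.
```

A CBR synchrophasor source cannot follow this sawtooth, which is why the text recommends marking and separately queuing such flows rather than letting them contend with ordinary TCP traffic.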
Because communication links must be appropriately provisioned and engineered to provide congestion-free flow of synchrophasor data, the congestion-avoidance portion of TCP represents additional overhead. This overhead may be avoided by the use of protocols other than TCP. For instance, the User Datagram Protocol (UDP) [16] allows for the simple transmission of packets over the communication medium. However, UDP does not provide for guaranteed reliable delivery.
The Real-time Transport Protocol (RTP) can provide further improvement. Consider that a typical PMU data frame consisting of 10 phasors and 10 analog values is 120 B. Added to this is a minimum 20-B IP header and a minimum 20-B TCP header. This represents an overhead due to headers of 33%. RTP supports a form of header compression [17] which enables the header overhead to be reduced to 2 B. In remote locations, where bandwidth is scarce, this savings could enable synchrophasor streaming where it was previously impossible. But RTP still does not provide reliability in the event of bit errors or other packet losses.
To provide for resilience in the face of errors, a “selective repeat” protocol for reliable, real-time data transport may be used. In this protocol, data is sent as datagrams using UDP. The sender does not wait for acknowledgements from the receiver, but instead maintains a large buffer of previously sampled data. While data arrival is not guaranteed, the sender need not be concerned with this fact. The receiver can detect, based on timestamps or sequence numbers, when data has been lost, and request that the sender retransmit data from its local storage. Many PMUs already support local data storage, so implementation of this protocol should not be difficult. This protocol may also be implemented by connecting the PMU over a high-reliability link to an intermediate system.
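A sketch of the receiver side of such a protocol is shown below. The datagram framing (a leading 32-bit sequence number), the retransmission-request message, and the addresses are all invented for illustration; the paper does not define a wire format.

```python
# Sketch of the "selective repeat" idea: the receiver tracks sequence
# numbers and, on seeing a gap, asks the sender to retransmit the
# missing datagrams from its local buffer. Framing here is hypothetical.
import socket
import struct

SENDER = ("192.0.2.10", 4713)     # placeholder sender address and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 4713))

expected_seq = None
while True:
    packet, _addr = sock.recvfrom(2048)
    seq = struct.unpack_from("<I", packet)[0]   # leading 32-bit sequence number
    if expected_seq is not None and seq > expected_seq:
        # Gap detected: request retransmission of each missing datagram.
        for missing in range(expected_seq, seq):
            sock.sendto(b"RTX" + struct.pack("<I", missing), SENDER)
    expected_seq = seq + 1
    # ... hand the packet's payload to the data concentrator here ...
```

Because the sender never blocks waiting for acknowledgements, the real-time stream is unaffected by retransmissions, which arrive later and fill gaps in the archive.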
Of course, no communication protocol can provide redundancy if the physical medium is damaged, e.g., an errant backhoe destroying the fiber. Redundant communication links may be built into the network to provide additional reliability. Ideally, physical-layer redundancy schemes would be employed. For instance, the same signal may be sent down two separate optical fibers or wires. The receiver may then choose the stronger signal, providing redundancy against transient noise and physically damaged media.
V. SIMPLE PHASOR STREAMING PROTOCOL
The Simple Phasor Streaming Protocol (SPSP) is a protocol developed for the transmission of data which is tagged in both time and space dimensions. The protocol data unit is a time-tagged data frame which contains tagged data values in a variety of different formats. SPSP data frames can be transported using a variety of underlying protocols, such as UDP or RTP, or a custom protocol for reliable transport as described previously. In FIPS, this protocol is used within a given node to avoid the need to deal with disparate, possibly complex protocols within each system component. It may also be used as a protocol for communication between FIPS instances at different locations.
Fig. 9. SPSP protocol data unit format (simplified).
The protocol has been designed for ease of data concentration. It is simpler to process than protocols such as IEEE C37.118, due to the fact that all data arrives with both time and location tags. In existing protocols, more complex parsing and additional state information is required to determine the specific measurements contained within a protocol data unit. In the C37.118 protocol data frame shown in Fig. 2, phasors, analog, and digital values are not tagged with unique identifiers. Their significance is given only by position within the frame. This has the advantage of saving a small amount of bandwidth. Once data reaches the data concentrator, speed of processing is of greater concern. Therefore, the SPSP data frame tags each value with a channel identifier, as seen in Fig. 9. IEEE C37.118 has the ability to transmit human-readable channel identifier information in configuration frames, such as the one in Fig. 3. But this data is of little use to a data concentrator. Thus, manual configuration is required. The use of the intermediate SPSP protocol eliminates the need for this manual configuration within the core of the data concentrator. If used to transmit data between data concentrators, and care is taken to avoid overlap in channel IDs, it also eliminates the need for manual configuration of those communication channels.
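Since Fig. 9 is simplified and the exact SPSP field layout is not given in the text, the encoding below is purely hypothetical. It illustrates the key design point: every value carries its own channel identifier, making frames self-describing at the cost of a few bytes per value.

```python
# Hypothetical SPSP-style frame encoding (our assumption, not the real
# format): a timestamp and count, followed by (channel ID, value) pairs.
import struct

HEADER = "<dH"   # float64 timestamp, uint16 value count
ENTRY = "<If"    # uint32 channel ID, float32 value

def pack_frame(timestamp, tagged_values):
    """tagged_values: list of (channel_id, value) pairs."""
    out = [struct.pack(HEADER, timestamp, len(tagged_values))]
    for channel_id, value in tagged_values:
        out.append(struct.pack(ENTRY, channel_id, value))
    return b"".join(out)

def unpack_frame(frame):
    timestamp, count = struct.unpack_from(HEADER, frame)
    values, offset = [], struct.calcsize(HEADER)
    for _ in range(count):
        channel_id, value = struct.unpack_from(ENTRY, frame, offset)
        values.append((channel_id, value))
        offset += struct.calcsize(ENTRY)
    return timestamp, values
```

A positional format like the C37.118 data frame saves those per-value identifier bytes, but requires external configuration state to interpret; the tagged form trades bandwidth for processing simplicity, as argued above.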
VI. TIME ALIGNMENT
One function of a data concentrator is to align data in time. This consists of forming data packets with a given timestamp, and assembling all data received with that timestamp into a single packet. The basic algorithm is simple: construct a new frame when the first frame with that timestamp is received. Then, as additional frames with the same timestamp arrive, combine them with the first frame. Once a certain time expires, or all the data is received, transmit the frame, then ignore any additional frames with timestamps less than or equal to that of the frame just sent.
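A minimal sketch of this algorithm follows; the channel bookkeeping and callback interface are our assumptions, not details from the paper.

```python
# Sketch of the time-alignment algorithm described above: frames are
# keyed by timestamp, emitted when complete or when their wait expires,
# and anything at or before the last emitted timestamp is discarded.
import time

class TimeAligner:
    def __init__(self, expected_channels, max_wait, emit):
        self.expected = set(expected_channels)
        self.max_wait = max_wait     # seconds between first arrival and send
        self.emit = emit             # callback taking (timestamp, {channel: value})
        self.pending = {}            # timestamp -> (first_arrival, {channel: value})
        self.last_sent = float("-inf")

    def receive(self, timestamp, channel, value):
        if timestamp <= self.last_sent:
            return                   # frame already sent; ignore late data
        first, frame = self.pending.setdefault(
            timestamp, (time.monotonic(), {}))
        frame[channel] = value
        if set(frame) == self.expected:
            self._send(timestamp)    # all data received: send immediately

    def tick(self):
        # Called periodically: send any frame whose wait has expired.
        now = time.monotonic()
        for ts in sorted(t for t, (first, _) in self.pending.items()
                         if now - first >= self.max_wait):
            self._send(ts)

    def _send(self, timestamp):
        _, frame = self.pending.pop(timestamp)
        self.emit(timestamp, frame)
        self.last_sent = max(self.last_sent, timestamp)
```

The max_wait parameter is exactly the configurable delay discussed next: lengthening it admits more late data into each frame at the cost of output latency.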
A configurable parameter is the time delay between the first data arrival and the frame’s transmission. By varying this parameter, a shift is made from low delay to high reliability. Thus, assuming the data concentrator has sufficient throughput capabilities and negligible processing delay, in an unreliable network the data concentrator cannot improve latency without impacting reliability. Improving both of these desired attributes simultaneously requires improvements at the lower levels of the system.
The time alignment algorithm can be easily adapted to handle inbound streams of different data rates. In this case, the algorithm must be aware that certain channels will not be represented in all data frames. Also, if the output is to be in the C37.118 protocol, data must be resampled, since C37.118 does not allow some values to be missing from a frame. If output is to be in SPSP format, as in FIPS, the data need not be resampled at the time-alignment phase. Only the data expected for a given timestamp will be sent. A conversion stage can handle resampling tasks if this SPSP stream is to be converted to another protocol. This conversion stage may either downsample by simple decimation, resulting in minimal delay, or attempt to upsample and interpolate the missing data points from slower channels. The specific design of these resampling algorithms is outside the scope of the phasor system design.
VII. USER INTERFACE
A data concentrator should provide a flexible user interface with the ability to perform many functions. A user interface has been developed which provides this flexibility in a number of important ways. By using the open-source Ruby on Rails model-view-controller framework [18], the user interface is independent of the means used to store the data, and new functionalities can easily be added simply by implementation of new views and controllers which access the data in the model layer. Data can easily be exported in multiple formats, visualization tools are available to show PMU data geographically or on traditional plots, and users can easily see an inventory of available PMUs and measurement channels. For example, Fig. 10 shows a listing of disturbance events stored in a database, and Fig. 11 shows PMUs in the database on a map, using the Google Maps API [19]. Note that only a subset of existing PMUs is shown in the figure.

Fig. 10. A listing of events in the database.
Fig. 11. A map generated from the PMU database.
In addition, the framework allows controllers to be developed which expose an interface to external applications. By accessing specially formatted addresses on the data concentrator, applications may authenticate with the system and request data from a given measurement channel in a number of formats. This interface can be accessed by code written in any language which has support for the HTTP protocol.
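For instance, a client might retrieve a channel's data with nothing more than a standard HTTP library. The URL scheme and parameters below are invented for illustration; the paper does not specify the address format.

```python
# Hypothetical client call against a FIPS-style HTTP interface; the URL
# layout and query parameters are our assumptions, not a documented API.
import urllib.request

url = ("http://fips.example.org/channels/42/data.csv"
       "?start=2010-01-15T00:00:00Z&end=2010-01-15T01:00:00Z")
with urllib.request.urlopen(url) as response:
    csv_data = response.read().decode("utf-8")
```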
VIII. PHASOR GATEWAYS
The North American SynchroPhasor Initiative (NASPI) has proposed a concept known as NASPInet [20], which represents a peer-to-peer architecture for the distribution of phasor data. NASPInet consists of an entity called a data bus to which phasor gateways are connected. These phasor gateways communicate with each other to exchange data on an as-needed basis.
FIPS can provide the functionality of a phasor gateway with the addition of a few network services. First, a service must be provided to allow access to metadata, such as channel identifiers. Remote access must be available to data stored in the database, and a means to stream real-time data as it is received is also necessary, as is a system for authentication of remote systems. But these functionalities are straightforward to implement given a robust foundation.
An example network topology for peer-to-peer data sharing is shown in Fig. 12. The FIPS server at the top of the diagram might be located at a large regional control center such as an ISO. The servers at the bottom might be located at transmission owners. All data is routed through the MPLS links to the central location, and the router at that point must transfer data between the two TOs, if that is desired.

Fig. 12. An MPLS network topology for simple peer-to-peer data sharing.
When operating as a phasor gateway, the possibility of multicast for data transport should be considered. Multicast, as the name suggests, permits data to be transmitted from one source to many receivers. Furthermore, with network-layer multicast, the replication of data occurs within the network elements, reducing the bandwidth requirements at the network edges. It is conceivable that a phasor stream will be distributed from one source to many destinations, and bandwidth can be conserved by the use of multicasting. In MPLS networks, multicasting can occur at the service provider level, reducing the level of traffic leaving any endpoint [21]. Fig. 13 shows the network topology with network-layer multicast. Data being exchanged between the TOs is no longer required to travel twice through the MPLS network.

Fig. 13. MPLS network topology with network-layer multicast.
Data sharing is one of the basic goals of the FIPS system. When data is being shared via communication networks, security is of concern. Public networks such as the Internet provide no guarantees of security, and malicious attackers may be able to intercept or tamper with data if cryptographic techniques are not employed.
Security concerns are especially important in the multicast-based network described above. The most important security concern in synchrophasor measurement is data integrity: ensuring that no attacker is able to pose as one of the authorized entities in the system. This can be achieved with a number of widely known asymmetric encryption algorithms. One such algorithm is RSA [22]. In these schemes, every data source has a secret private key. This key has a corresponding public key.
These keys may be considered as mathematical functions p (public) and s (secret) such that s(p(m)) = m for a message m, but such that it is not easy to determine m from p and p(m) alone, or from s and s(m) alone; that is, p and s are not easily invertible. (In fact, the keys are large numeric parameters to more general functions, but the simplification in notation used here can be made without loss of generality.) Note that p and s must be invertible by definition, since s(p(m)) = m and p(s(m)) = m, but knowledge of p should not yield information about s, and vice versa. Most algorithms in use generate p and s as a pair. Generally, one key is made public, and is assumed to be widely known and distributed. The other is kept a secret, and is not divulged to anyone other than the entity which created it.
As an illustration, using the nomenclature customary to cryptographic discussions, suppose Alice generates a pair of keys p_A and s_A. Alice keeps s_A a secret, but publishes her key p_A. If Bob wishes to send a message m to Alice, he should find Alice’s function p_A from a trusted source. Then, he can transmit p_A(m) to Alice, confident that even given the public knowledge of p_A, no one other than Alice is able to decrypt the message. Since Eve (an eavesdropper) cannot compute m as s_A(p_A(m)) without knowing s_A, the encrypted message is secure. Similarly, Alice can prove her identity to Bob by sending a message as s_A(m). If the value p_A(s_A(m)) represents a sensible message, it can be concluded with some reasonable probability that Eve could not have generated the message.
There is a catch: Eve can construct her own key pair p_E and s_E and try to convince Bob that p_E is really Alice’s p_A. Then Eve can receive the message, compute m = s_E(p_E(m)), then send p_A(m) onward to Alice, and intercept the message without Alice or Bob knowing of the interception. Also, Eve can send forged messages to Bob. This is referred to as a man-in-the-middle attack. To prevent this attack, it is clearly very important to ensure that the users of cryptographic keys obtain them only from trusted sources, such as their original creators, and only obtain them through trusted communication channels, such as an in-person exchange.
Asymmetric encryption algorithms typically represent considerable overhead. Therefore it is not desirable to evaluate s(m) or p(m) where m consists of a large amount of data. One-way functions, or hashes, are used to avoid this problem. A hash H(m) is a function which returns a fixed, short, unique representation of m. H should satisfy a few properties: it should be difficult to compute a value of m for which H(m) is equal to some known value h. It should also be unlikely that H(m1) = H(m2) for two different inputs m1 and m2. Finally, H should be fast to compute relative to p or s. Given this function, we need only to compute s(H(m)), not s(m), to create proof that the data originated from an authentic source. The receiver computes H(m) and p(s(H(m))) and compares them, with knowledge of the correct function p for the authentic source. Given the properties of H, p, and s, it is very difficult for an attacker to construct a value m' such that H(m') = H(m), which is the necessary condition to have a receiver accept false data.
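As a numeric illustration of signing a hash rather than a full message, the toy sketch below uses textbook RSA with deliberately tiny parameters. It is completely insecure and is not the authors' implementation; it only demonstrates the relation p(s(H(m))) = H(m) described above.

```python
# Toy illustration (tiny, insecure textbook RSA): sign the hash of the
# data, not the data itself. The signer publishes s(H(m)); the receiver
# accepts only if p(s(H(m))) equals the locally computed H(m).
import hashlib

# Tiny RSA key pair: n = 61 * 53 = 3233, public exponent e, private d.
n, e, d = 3233, 17, 413

def H(message):
    # Reduce the SHA-256 digest mod n only because the toy modulus is tiny.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message):
    return pow(H(message), d, n)                # s(H(m)): private-key operation

def verify(message, signature):
    return pow(signature, e, n) == H(message)   # check p(s(H(m))) == H(m)

data = b"phasor frame bytes"
sig = sign(data)
print(verify(data, sig))                # True: authentic data and signature
print(verify(b"tampered frame", sig))   # almost surely False for this modulus
```

Because H is cheap and its output is short, the expensive private-key operation runs over a few bytes instead of the whole data stream, which is the point of the construction.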
In the event that data confidentiality is required, another cryptographic scheme can be adopted. In a multicast medium, the same encrypted data must be sent to all receivers, but it is desirable to allow the data to be decrypted by a set of many keys. To accomplish this goal, the data can be encrypted with a symmetric encryption algorithm c = E_k(m), whose inverse m = E_k^{-1}(c) is easily computed given the single key k. An example of such an algorithm is Blowfish [23]. This key may be generated randomly and used for only a short period before it is changed. Then the key k is encrypted successively using the public key p_i of each authorized receiver i, and transmitted as a key block. The key block thus contains a series of encrypted keys: p_1(k), p_2(k), ..., p_n(k). Each receiver may attempt to decrypt the key using its private key s_i. If the receiver is authorized to decrypt the data associated with the key block, one of these decryptions will be successful. Once the key is obtained, the receiver may decrypt the multicast data. Any receiver may be deauthorized the next time the symmetric key is changed simply by not including that receiver’s key in the next key block.
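The key-block construction can be sketched in the same toy notation. The RSA parameters are again tiny and insecure, and a real deployment would of course use an established symmetric cipher such as Blowfish for the bulk data, which is omitted here.

```python
# Toy sketch of a multicast key block (tiny, insecure RSA parameters;
# illustration only). The short-lived symmetric key k is encrypted under
# each authorized receiver's public key p_i; a receiver recovers k by
# applying its private key s_i to the matching entry.
import random

receivers = {                 # name -> (public (e, n), private (d, n))
    "TO-1": ((17, 3233), (413, 3233)),    # n = 61 * 53
    "TO-2": ((7, 3599), (1243, 3599)),    # n = 59 * 61
}

def make_key_block(k, authorized):
    # One entry per authorized receiver: p_i(k) = k^e_i mod n_i.
    return [pow(k, e, n) for e, n in (receivers[r][0] for r in authorized)]

k = random.randrange(2, 3000)             # short-lived symmetric key
block = make_key_block(k, ["TO-1", "TO-2"])

# An authorized receiver tries its private key on every entry;
# exactly one entry decrypts to the symmetric key.
d, n = receivers["TO-2"][1]
assert k in [pow(c, d, n) for c in block]

# Deauthorizing TO-2 is simply omitting it from the next key block:
next_block = make_key_block(random.randrange(2, 3000), ["TO-1"])
```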
It is important to note that a detailed cryptographic analysis of these cryptographic strategies has not yet been conducted, and that the authors may not be aware of all possible attacks against them. They are presented here for illustrative purposes. While the underlying algorithms are generally well-tested, minor implementation errors can result in large security issues. Before being implemented in a production system, the techniques described in this section should be subjected to additional scrutiny and review by experts in the field of cryptography.
IX. CONCLUSION AND FUTURE WORK
FIPS will provide an important building block in a future distributed synchrophasor measurement system. It provides many of the attributes of an ideal phasor processing system in a cohesive, integrated form. It is important to continue to consider communication issues when new PMUs are being installed. An FIPS pilot deployment is planned at ISO New England in June 2010.
ACKNOWLEDGMENT
The authors would like to acknowledge the support of the RPI Power System Research Consortium Industry Members: AEP, FirstEnergy, ISO-NE, NYISO, and PJM. The authors would also like to acknowledge the FNET project at Virginia Polytechnic Institute, and J. R. Carroll and P. Trachian at the Tennessee Valley Authority, who have provided us with data for testing purposes. We would also like to thank M. Shukla, X. Luo, and D. Bertagnolli of ISO New England for providing a list of disturbance events, and L. Vanfretti for providing input and assistance with data collection.
REFERENCES
[1] IEC 60870-6/TASE.2: ICCP, IEC Standard 60870-6-802, 2005.
[2] J. Zuo, R. Carroll, P. Trachian, J. Dong, S. Affare, B. Rogers, L. Beard, and Y. Liu, “Development of TVA SuperPDC: Phasor applications, tools, and event replay,” in Proc. 2008 IEEE Power and Energy Soc. General Meeting—Conversion and Delivery of Electrical Energy in the 21st Century, Jul. 2008, pp. 1–8.
[3] D. Shi, D. Tylavsky, N. Logic, and K. Koellner, “Identification of short transmission-line parameters from synchrophasor measurements,” Sep. 2008, pp. 1–8.
[4] L. Vanfretti, J. Chow, S. Sarawgi, D. Ellis, and B. Fardanesh, “A framework for estimation of power systems based on synchronized phasor measurement data,” Jul. 2009, pp. 1–6.
[5] IEEE Standard for Synchrophasors for Power Systems, IEEE Standard C37.118, 2006.
[6] IEEE Standard for Synchrophasors for Power Systems, IEEE Standard 1344, 1995.
[7] “MySQL 5.0 Reference Manual,” Sun Microsystems Inc., 2010 [Online]. Available: http://dev.mysql.com/doc/refman/5.0/en/index.html
[8] L. J. Guibas and R. Sedgewick, “A dichromatic framework for balanced trees,” in Proc. 19th Annu. Symp. Found. Comput. Sci., Oct. 1978, pp. 8–21.
[9] PostgreSQL Documentation, PostgreSQL Global Development Group, 2009 [Online]. Available: http://www.postgresql.org/docs/
[10] Oracle Berkeley DB, Oracle Corp. [Online]. Available: http://www.oracle.com/technology/products/berkeley-db/index.html
[11] E. Rosen, A. Viswanathan, and R. Callon, RFC 3031: Multiprotocol Label Switching Architecture, 2001 [Online]. Available: http://www.ietf.org/rfc/rfc3031.txt
[12] T. Bradley, C. Brown, and A. Malis, RFC 1490: Multiprotocol Interconnect Over Frame Relay, 1993 [Online]. Available: http://www.faqs.org/rfcs/rfc1490.html
[13] M. Allman, V. Paxson, and W. Stevens, RFC 2581: TCP Congestion Control, 1999 [Online]. Available: http://www.ietf.org/rfc/rfc2581.txt
[14] V. Jacobson and M. J. Karels, Congestion Avoidance and Control, 1988.
[15] K. Fall and S. Floyd, “Simulation-based comparisons of Tahoe, Reno, and SACK TCP,” Comput. Commun. Rev., vol. 26, pp. 5–21, 1996.
[16] J. Postel, RFC 768: User Datagram Protocol, 1980 [Online]. Available: http://www.ietf.org/rfc/rfc768.txt
[17] S. Casner and V. Jacobson, RFC 2508: Compressing IP/UDP/RTP Headers for Low-Speed Serial Links, 1999 [Online]. Available: http://www.ietf.org/rfc/rfc2508.txt
[18] D. H. Hansson, Ruby on Rails: Documentation, 2010 [Online]. Available: http://rubyonrails.org/documentation
[19] Google Maps API, Google Inc., 2008 [Online]. Available: http://code.google.com/apis/maps/
[20] Y. Hu, Data Bus Technical Specifications for NASPInet, 2008.
[21] B. Yang and P. Mohapatra, “Multicasting in MPLS domains,” Comput. Commun., vol. 27, pp. 162–170, 2003.
[22] R. L. Rivest, A. Shamir, and L. M. Adleman, “Cryptographic communications system and method,” U.S. Patent 4 405 829, Sep. 20, 1983.
[23] B. Schneier, “Description of a new variable-length key, 64-bit block cipher (Blowfish),” in Fast Software Encryption, Cambridge Security Workshop Proc., Dec. 1993, pp. 191–204.
Andrew Armenia (S’08) received the B.S. degree in electrical engineering and computer and systems engineering from Rensselaer Polytechnic Institute (RPI), Troy, NY, in 2008. He is currently working toward the M.S./Ph.D. degrees in the Electrical, Computer, and Systems Engineering Department at RPI. His interests currently include computer networking, database systems, and power systems.
Joe H. Chow (S’72–M’78–SM’84–F’92) received the M.S. and Ph.D. degrees from the University of Illinois, Urbana-Champaign. After working in the General Electric Power System business in Schenectady, NY, he joined Rensselaer Polytechnic Institute, Troy, NY, in 1987. He is currently a Professor of Electrical, Computer, and Systems Engineering and the Associate Dean of Engineering for Research and Graduate Programs. His research interests include multivariable control, power system dynamics and control, voltage-sourced converter-based FACTS Controllers, and synchronized phasor data.