This document summarizes a proposed security model for transaction management in distributed mobile database environments. The model uses encryption/decryption between base stations and mobile hosts to securely transmit data. It also uses a damage assessment algorithm to detect the spread of malicious transactions across replicated distributed databases and aid in recovery. The architecture includes fixed proxy servers that update an encrypted mobile request and return an encrypted result. The damage assessment algorithm evaluates logs to assess attack impacts and supports both "coldstart" and "warmstart" recovery methods.
Optimal software-defined network topology for distributed denial of service a... (journalBEEI)
Distributed denial of service (DDoS) attacks are a major threat to all internet services; their goal is to disrupt normal traffic and overwhelm the target. Software-defined networking (SDN) is a network architecture in which the control and data planes are separated. A successful attack may block the SDN controller, which may then stop processing new requests and lead to total disruption of the whole network. The main goal of this paper is to find the optimal network topology and size that can handle a DDoS attack without exhausting management-channel bandwidth or running out of SDN controller CPU and memory. Simulations show that mesh topologies with more connections between switches are more resistant to DDoS attacks than linear network topologies.
This document describes a distributed storage system called UniversalDistributedStorage. It discusses distributed computing principles like data hashing, replication, and leader election. UniversalDistributedStorage uses consistent hashing to store data across servers and replicates data for fault tolerance. It elects leaders using the Bully algorithm and synchronizes data asynchronously across multiple masters. The system aims to provide distributed transactions, data independence, fault tolerance and transparency.
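The summary says UniversalDistributedStorage places data with consistent hashing. As a hedged illustration only (the class name, hash function, and virtual-node count are assumptions, not details from the document), a minimal consistent-hash ring might look like:

```python
import hashlib
from bisect import bisect_right

def _ring_pos(key: str) -> int:
    # Map a key onto a fixed 32-bit ring position.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % (2 ** 32)

class ConsistentHashRing:
    """Minimal sketch of a consistent-hash ring with virtual nodes."""

    def __init__(self, servers, vnodes=100):
        # Each server contributes `vnodes` points to smooth the distribution.
        self.ring = sorted((_ring_pos(f"{s}#{i}"), s)
                           for s in servers for i in range(vnodes))
        self.points = [p for p, _ in self.ring]

    def server_for(self, key: str) -> str:
        # First ring point clockwise from the key's hash (wrapping around).
        i = bisect_right(self.points, _ring_pos(key)) % len(self.ring)
        return self.ring[i][1]

ring3 = ConsistentHashRing(["s1", "s2", "s3"])
ring2 = ConsistentHashRing(["s1", "s2"])
owner = ring3.server_for("user:42")
```

The design payoff: removing a server only remaps the keys that server owned, since the remaining servers' ring points are unchanged.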
IRJET - HHH - A Hyped-up Handling of Hadoop based SAMR-MST for DDOS Attacks... (IRJET Journal)
This document proposes a novel scheme called SAMR-MST to detect DDoS attacks using Hadoop's MapReduce framework more efficiently. It introduces the SAMR (Self-Adaptive MapReduce) scheduling algorithm, which uses historical task performance data to identify slow tasks and launch backup tasks. It then enhances SAMR with Minimum Spanning Tree clustering to tune SAMR's parameters, improving its ability to find slow tasks. The proposed approach is evaluated against existing MapReduce schedulers like FIFO and LATE, showing it can reduce execution time by up to 25% in heterogeneous cloud environments subject to DDoS attacks.
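The core SAMR idea, using historical performance to flag slow tasks and launch backups, can be sketched as follows. This is a simplified heuristic, not the paper's algorithm: the rate comparison and the threshold value are illustrative assumptions.

```python
def find_slow_tasks(progress, elapsed, hist_rate, threshold=0.5):
    """Flag tasks whose observed progress rate falls well below the
    historical rate recorded for that task type on this node.

    progress:  task -> fraction complete (0.0 to 1.0)
    elapsed:   task -> seconds running
    hist_rate: task -> historical progress rate (fraction per second)
    threshold: illustrative cutoff; SAMR tunes its parameters adaptively.
    """
    slow = []
    for task, done in progress.items():
        rate = done / elapsed[task] if elapsed[task] > 0 else 0.0
        if rate < threshold * hist_rate.get(task, rate):
            slow.append(task)  # candidate for a speculative backup task
    return slow

slow = find_slow_tasks(
    progress={"t1": 0.9, "t2": 0.2},
    elapsed={"t1": 10.0, "t2": 10.0},
    hist_rate={"t1": 0.09, "t2": 0.09},
)
```

In the paper's scheme, MST clustering then tunes the parameters this sketch hard-codes.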
The document discusses process migration as a way to balance workload across systems. It describes how a process can be transferred between machines and resume where it left off. Key aspects covered include kernel modules, ELF files, advantages of process migration like load balancing and fault tolerance, and potential applications in distributed and multi-user systems.
Hadoop provides high availability through replication of data across multiple nodes. Replication handles data integrity through checksums and automatic re-replication of corrupt blocks. Rack failures are reduced by dual networking and more replication bandwidth. NameNode failures are rare but cause downtime, so Hadoop 1 adds cold failover of Namenodes using VMware HA or RedHat HA. Hadoop 2 introduces live failover of Namenodes using a quorum journal manager to eliminate single points of failure. Full stack high availability adds monitoring and restart of all services.
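The checksum-and-re-replicate behavior described above can be sketched as below. Note the hash choice is a stand-in: HDFS actually uses per-chunk CRC32C checksums, and the function names here are illustrative.

```python
import hashlib

def block_checksum(data: bytes) -> str:
    # SHA-256 stands in for HDFS's per-chunk CRC32C for simplicity.
    return hashlib.sha256(data).hexdigest()

def corrupt_replicas(replicas, expected):
    """Return indices of replicas whose checksum no longer matches;
    these would be dropped and re-replicated from a healthy copy."""
    return [i for i, blk in enumerate(replicas)
            if block_checksum(blk) != expected]

block = b"datanode block payload"
expected = block_checksum(block)
bad = corrupt_replicas([block, b"bit-flipped copy", block], expected)
```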
The document discusses various techniques for resource management in distributed systems. It describes approaches like task assignment, load balancing, and load sharing. It also covers desirable features of scheduling algorithms and discusses techniques like task assignment in detail with an example. Furthermore, it discusses concepts like load balancing approaches, task assignment, location policies, state information exchange policies, and priority assignment policies.
Network Game Design: Hints and Implications of Player Interaction (Academia Sinica)
While psychologists analyze network game-playing behavior in terms of players' social interaction and experience, understanding user behavior is equally important to network researchers, because how users act determines how well network systems, such as online games, perform. To gain a better understanding of patterns of player interaction and their implications for game design, we analyze a 1,356-million-packet trace of ShenZhou Online, a mid-sized commercial MMORPG. This work aims to draw out hints and implications of player-interaction patterns, inferred from network-level traces, for online games.
We find that the dispersion of players in a virtual world is heavy-tailed, which implies that static and fixed-size partitioning of game worlds is inadequate. Neighbors and teammates tend to be closer to each other in network topology. This property is an advantage, because message delivery between the hosts of interacting players can be faster than between those of unrelated players. In addition, the property can make game playing fairer, since interacting players tend to have similar latencies to their servers. We also find that participants who have a higher degree of social interaction tend to play much longer, and players who are closer in network topology tend to team up for longer periods. This suggests that game designers could increase the “stickiness” of games by encouraging, or even forcing, team playing.
Process Migration in Heterogeneous Systems (ijsrd.com)
This document discusses process migration in heterogeneous distributed systems. Process migration involves transferring the entire state of a process from one computer to another so execution can continue. In heterogeneous systems, data must be translated between the source and destination computer formats. The document proposes using an external data representation standard to reduce the complexity of the translation software needed. It also describes challenges in migrating processes that use floating point numbers or signed values and how these can be addressed through the external data representation. The advantages of process migration for load balancing and improving system reliability and performance are also summarized.
This document proposes a new robust hybrid watermarking scheme that embeds data in all frequencies of an image using both the discrete cosine transform (DCT) and singular value decomposition (SVD). It first applies DCT to the cover image and maps the coefficients into four quadrants representing different frequency bands. SVD is then applied to each quadrant. The singular values in each quadrant are modified by the singular values of the DCT-transformed visual watermark. Embedding data in all frequencies makes the scheme robust against attacks that target specific frequencies.
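The central embedding step, modifying a quadrant's singular values by those of the watermark, reduces to a simple additive perturbation. A minimal sketch of that step (the gain `alpha` and function names are assumptions; the real scheme operates on DCT coefficients and full SVD factorizations):

```python
def embed_singular_values(s_host, s_mark, alpha=0.1):
    """Perturb the host quadrant's singular values by the watermark's
    singular values; alpha is an illustrative embedding strength."""
    return [h + alpha * w for h, w in zip(s_host, s_mark)]

def extract_singular_values(s_marked, s_host, alpha=0.1):
    # Inverse of the embedding step, given the original host values.
    return [(m - h) / alpha for m, h in zip(s_marked, s_host)]

s_host = [10.0, 5.0, 2.0]      # singular values of one DCT quadrant
s_mark = [3.0, 1.0, 0.5]       # singular values of the watermark
s_marked = embed_singular_values(s_host, s_mark)
recovered = extract_singular_values(s_marked, s_host)
```

Because this is repeated in all four frequency quadrants, an attack that destroys one band leaves the watermark recoverable from the others.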
This document discusses physical infrastructure designs to support logical network architectures in data centers. It examines Top of Rack (ToR) and End of Row (EoR) access models. ToR uses an access switch in each cabinet, requiring connections for each server. EoR uses chassis switches in the row middle, connecting cabinets within cable length limits. Designs must map logical networks to physical cable routing and manage connectivity growth.
Fault Tolerance in Big Data Processing Using Heartbeat Messages and Data Repl... (IJSRD)
Big data is a popular term describing the exponential growth and availability of data, both structured and unstructured. The explosive growth of demands on big data processing imposes a heavy burden on computation, communication, and storage in geographically distributed data centers, so it is necessary to minimize the cost of big data processing, which includes the cost of fault tolerance. Big data processing involves two types of faults: node failure and data loss. Both can be recovered from using heartbeat messages, which act as acknowledgement messages between two servers. This paper presents a study of node failure and recovery, data replication, and heartbeat messages.
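Heartbeat-based failure detection, as described above, amounts to tracking when each node was last heard from. A minimal sketch (the class name and timeout value are illustrative assumptions):

```python
import time

class HeartbeatMonitor:
    """Declare a node failed if no heartbeat arrives within `timeout` seconds."""

    def __init__(self, timeout=3.0):
        self.timeout = timeout
        self.last_seen = {}

    def heartbeat(self, node, now=None):
        # Record the arrival time of a heartbeat from `node`.
        self.last_seen[node] = time.monotonic() if now is None else now

    def failed_nodes(self, now=None):
        # Nodes silent for longer than the timeout are candidates for recovery.
        now = time.monotonic() if now is None else now
        return [n for n, t in self.last_seen.items() if now - t > self.timeout]

mon = HeartbeatMonitor(timeout=3.0)
mon.heartbeat("a", now=0.0)
mon.heartbeat("b", now=0.0)
mon.heartbeat("a", now=5.0)   # "a" stays alive; "b" goes silent
```

Once a node lands in `failed_nodes`, the system would re-replicate the blocks it held, which is the recovery path the paper studies.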
Software for the new COMPASS data acquisition system (bodlosh)
The document summarizes the evaluation of software for a new data acquisition system. It was decided not to use the existing DATE software due to complexity, and instead develop a simpler system. A minimal run control system was implemented using DIM for communication between a master node and slave nodes. Initial testing showed the system could reliably exchange messages between nodes. Further implementation of error reporting and simulation tools is planned.
Perceiving and recovering degraded data on secure cloud (IAEME Publication)
This document discusses securing data stored on cloud systems. It proposes a method using tokens to represent file blocks distributed across multiple servers. A third party auditor verifies the integrity of tokens and can detect corrupted data by checking signatures. The system uses erasure coding and fault tolerance techniques like retransmission to recover lost data blocks and make the file system tolerant to node failures without data loss. Performance is evaluated, showing that optimal token size balances processing time against overhead of managing many small tokens.
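The erasure-coding recovery idea can be illustrated with the simplest possible code, single-block XOR parity. This is a stand-in for the paper's actual coding scheme (which tolerates more failures), assuming equal-length blocks:

```python
def xor_parity(blocks):
    """RAID-4-style parity: XOR of all data blocks (assumed equal length)."""
    parity = bytes(len(blocks[0]))
    for b in blocks:
        parity = bytes(x ^ y for x, y in zip(parity, b))
    return parity

def recover(blocks_with_gap, parity):
    """Rebuild the one missing block (marked None) from parity and survivors."""
    missing = parity
    for b in blocks_with_gap:
        if b is not None:
            missing = bytes(x ^ y for x, y in zip(missing, b))
    return missing

blocks = [b"aa", b"bb", b"cc"]          # file blocks spread across servers
parity = xor_parity(blocks)             # stored on a separate server
rebuilt = recover([b"aa", None, b"cc"], parity)   # server holding "bb" failed
```

Real erasure codes (e.g. Reed-Solomon) generalize this so any k of n fragments suffice, which is what lets the system survive node failures without data loss.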
Software Defined Networking: A Concept and Related Issues (Eswar Publications)
SDN (Software Defined Networking) is a networking architecture that has gained the attention of researchers in the recent past and is the future of programmable networks. Traditional networks were complex and difficult to manage; SDN changes this by offering a standard interface (OpenFlow) between the control plane and the networking devices (data plane). Its implementation is fully supported by software, so the behavior of networking devices can be controlled programmatically. This programmatic control provides new ways to find breakpoints and failures in networking devices. Today SDN has become an important part of networking, so it is important to emulate its behavior. SDN supports virtualization, which makes it scalable and flexible. Data traffic resides in the data plane, and the main function of the intelligent controller is to decide the routing policy and manage the traffic in the data plane. SDN thus emerges as a networking architecture with the ability to solve the problems found in the traditional architecture. In this paper the authors discuss the historical perspective of SDN, languages that support SDN, emulation tools, security issues with SDN, and the advantages that make SDN a suitable choice for today's networks.
This document discusses resource management techniques in distributed systems. It covers three main scheduling techniques: task assignment approach, load balancing approach, and load sharing approach. It also outlines desirable features of good global scheduling algorithms such as having no a priori knowledge about processes, being dynamic in nature, having quick decision-making capability, balancing system performance and scheduling overhead, stability, scalability, fault tolerance, and fairness of service. Finally, it discusses policies for load estimation, process transfer, state information exchange, location, priority assignment, and migration limiting that distributed load balancing algorithms employ.
This document discusses resource management techniques in distributed systems. It describes three main approaches: task assignment, load balancing, and load sharing. Task assignment involves scheduling related tasks to optimize performance metrics like turnaround time. Load balancing aims to evenly distribute workloads across nodes to utilize resources efficiently. Load sharing is a simpler approach that prevents idle nodes when others are heavily loaded. The document also outlines desirable properties for scheduling algorithms and categorizes different types of load balancing techniques.
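The transfer and location policies that these summaries mention can be sketched in a few lines. The threshold value and node names below are illustrative assumptions, not values from the documents:

```python
def should_transfer(local_load, threshold=0.8):
    """Transfer policy: offload work only when the local node is heavily
    loaded (threshold is an illustrative tuning parameter)."""
    return local_load > threshold

def pick_destination(remote_loads):
    """Location policy: choose the least-loaded remote node."""
    return min(remote_loads, key=remote_loads.get)

loads = {"n1": 0.9, "n2": 0.3, "n3": 0.6}
dest = None
if should_transfer(loads["n1"]):
    dest = pick_destination({k: v for k, v in loads.items() if k != "n1"})
```

A load-sharing variant would replace `pick_destination` with "any node below the threshold", trading balance quality for less state-information exchange.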
Detecting network intrusions protects a computer network from unauthorized users, possibly including insiders. The intrusion-detector learning task is to build a predictive model (i.e., a classifier) capable of distinguishing between "bad" connections, called intrusions or attacks, and "good" normal connections.
The German Climate Computing Centre (DKRZ) provides computing resources and data management for climate research. It needed more powerful systems to handle increasing data demands. DKRZ purchased an IBM POWER6 cluster with QLogic InfiniBand switches. This exceeded performance expectations, ranking 27th on the Top500 list. The switches also simplified storage integration and management.
This document describes research into developing a discovery method to ensure DNSSEC information can be delivered to end hosts. Measurements using RIPE ATLAS probes found that 64% of recursive resolvers could perform basic DNSSEC queries, while only 40% could process authenticated wildcard information. The proposed discovery method has stub resolvers first try the default recursive resolver, then the ISP resolver, a public resolver, or full recursion if needed, to balance functionality and efficiency.
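The proposed fallback order can be sketched as a simple first-match search. The resolver labels and the capability-check callback are illustrative assumptions; the real discovery method issues actual DNSSEC probe queries:

```python
def choose_dnssec_resolver(resolvers, can_validate):
    """Try resolvers in the paper's suggested order and return the first
    that validates DNSSEC; None means fall back to full recursion on
    the stub resolver itself."""
    for resolver in resolvers:
        if can_validate(resolver):
            return resolver
    return None

# Illustrative capability table; in practice this is learned by probing.
order = ["default", "isp", "public"]
caps = {"default": False, "isp": False, "public": True}
chosen = choose_dnssec_resolver(order, lambda r: caps[r])
```

Trying the default resolver first keeps the common case efficient, while the later fallbacks preserve functionality when upstream resolvers strip DNSSEC data.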
This paper proposes improvements to the Chord peer-to-peer network protocol to increase reliability of data transfer. The Chord protocol uses a ring topology to organize nodes and determine routing, but direct routes sometimes fail. The paper suggests having nodes store failed packets locally and route them through multiple neighboring nodes to reach the destination. It also considers available bandwidth when selecting next-hop nodes. Simulations show the proposed methods increase packet delivery ratio compared to the base Chord protocol, especially as node movement increases, improving the reliability of data transfer in the peer-to-peer network.
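Chord's basic placement rule, which the proposed improvements build on, assigns each key to its successor: the first node clockwise on the identifier ring at or after the key. A minimal sketch (the ring size is an illustrative assumption):

```python
def chord_successor(node_ids, key, ring_bits=8):
    """Return the node responsible for `key` on a Chord identifier ring:
    the first node at or after the key's position, wrapping around."""
    space = 2 ** ring_bits
    key %= space
    candidates = sorted(n % space for n in node_ids)
    for n in candidates:
        if n >= key:
            return n
    return candidates[0]  # wrapped past the largest id back to the smallest

owner = chord_successor([10, 80, 150, 220], 100)
```

The paper's contribution sits on top of this routing: when the direct route to the successor fails, packets are buffered and retried via multiple neighbors, with next hops chosen by available bandwidth.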
This document provides an overview of multicast communication concepts. It discusses IP multicast and how it allows efficient single-message delivery to groups. Reliable multicast is described as ensuring validity, integrity, and agreement even if the sender crashes. Ordered multicast can provide FIFO, causal, or total ordering guarantees for message delivery across group members. Practical implementations rely on techniques like sequence numbers, acknowledgments, and negative acknowledgments to ensure reliability and ordering.
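FIFO ordering via per-sender sequence numbers, one of the techniques mentioned above, can be sketched as a receiver that holds back out-of-order messages until the gap is filled (class and field names are illustrative):

```python
class FifoReceiver:
    """Deliver each sender's messages in sequence-number order,
    buffering any message that arrives ahead of its turn."""

    def __init__(self):
        self.expected = {}   # sender -> next sequence number to deliver
        self.held = {}       # sender -> {seq: message} buffered out of order
        self.delivered = []  # messages handed to the application, in order

    def receive(self, sender, seq, msg):
        self.held.setdefault(sender, {})[seq] = msg
        nxt = self.expected.get(sender, 0)
        # Deliver every consecutive message now available from this sender.
        while nxt in self.held[sender]:
            self.delivered.append(self.held[sender].pop(nxt))
            nxt += 1
        self.expected[sender] = nxt

r = FifoReceiver()
r.receive("p1", 1, "b")   # arrives early: held back
r.receive("p1", 0, "a")   # fills the gap: both become deliverable
```

Causal and total ordering need more machinery (vector clocks or a sequencer), but the buffer-until-deliverable pattern is the same.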
Introduction: What is clock synchronization?
The challenges of clock synchronization.
Basic Concepts: Software and hardware clocks. Basic clock synchronization algorithm
Algorithms: Deep dive into landmark papers
NTP: Internet-scale time synchronization
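One landmark idea from this outline, Cristian-style offset estimation, fits in a few lines. The sketch assumes symmetric network delay, which is the algorithm's key assumption; timestamps below are illustrative:

```python
def cristian_offset(t_request, t_server, t_response):
    """Estimate the local clock's offset from a time server.

    t_request:  local clock when the request was sent
    t_server:   server clock in the reply
    t_response: local clock when the reply arrived
    """
    rtt = t_response - t_request
    # Assume the server stamped its clock at the midpoint of the round trip.
    estimated_server_now = t_server + rtt / 2
    return estimated_server_now - t_response

# Local clock sent at 100.0, reply stamped 150.0, received at 104.0.
offset = cristian_offset(100.0, 150.0, 104.0)
```

NTP refines this with multiple samples, filtering, and a hierarchy of servers, but the per-exchange offset calculation has the same shape.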
This document discusses peer-to-peer systems and middleware for managing distributed resources at a large scale. It describes key characteristics of peer-to-peer systems like nodes contributing equal resources and decentralized operation. Middleware systems like Pastry and Tapestry are overlay networks that route requests to distributed objects across nodes through knowledge at each node. They provide simple APIs and support scalability, load balancing, and dynamic node availability.
This document discusses process management in distributed systems. It describes how distributed operating systems aim to make the best use of processing resources across an entire system by sharing processors among all processes. Key concepts discussed include processor allocation, process migration, and threads. Process migration involves transferring a running process from one machine to another to achieve goals like load balancing and fault tolerance. The challenges and mechanisms of freezing, transferring, and restarting a migrating process's address space and forwarding messages are also covered.
Design of a Remotely Accessible PC based Temperature Monitoring System (IDES Editor)
An innovative data-acquisition circuit for temperature monitoring and control is designed and interfaced to the printer port of a web server computer. Further, an interactive web application program has been developed and kept running on the server computer for controlling the operation of the data-acquisition circuit. Authenticated clients can access the web-based instrumentation system through the Internet/Intranet.
This document discusses two approaches for distributed clustering of data streams from sensor networks: DGClust and L2GClust. DGClust performs local discretization and representative clustering to improve computation and communication loads for clustering sensor data streams at a central server. L2GClust performs local clustering based on each sensor's sketch of its own data and its neighbors' estimates of the global clustering, allowing each sensor to estimate the overall network clustering with limited resources and communication. Evaluation shows L2GClust achieves high agreement with centralized clustering while reducing storage, communication and sensitivity to uncertainty.
This document summarizes a research paper that proposes a content-based hybrid DWT-DCT watermarking technique for image authentication in color images. The technique embeds statistical features extracted from the host image as the watermark. Four different statistical features are used to generate the watermark - the Frobenius norm, mean, standard deviation, and combined mean and standard deviation of the host image blocks. The watermark is then embedded into the host image by applying both DWT and DCT transforms. During extraction, the same process is applied to extract the watermark for authentication. Experimental results show the technique is robust against various attacks like compression, noise, and filters.
The document discusses tensile properties of long jute fiber reinforced polypropylene composites. It begins with an abstract that states the objective is to test tensile properties of composites made from chemically treated long jute fibers reinforced in polypropylene at different weight ratios. The results showed tensile strength and modulus increased for treated fiber composites compared to plain polypropylene, with up to a 28.4% increase for 15% NaOH treated fibers at 10% weight ratio. The introduction provides background on composites and defines them. It also describes the phases in a composite including polypropylene matrix and jute fiber reinforcement. Experimental details on materials and fiber extraction are then presented.
This document proposes a new robust hybrid watermarking scheme that embeds data in all frequencies of an image using both the discrete cosine transform (DCT) and singular value decomposition (SVD). It first applies DCT to the cover image and maps the coefficients into four quadrants representing different frequency bands. SVD is then applied to each quadrant. The singular values in each quadrant are modified by the singular values of the DCT-transformed visual watermark. Embedding data in all frequencies makes the scheme robust against attacks that target specific frequencies.
This document discusses physical infrastructure designs to support logical network architectures in data centers. It examines Top of Rack (ToR) and End of Row (EoR) access models. ToR uses an access switch in each cabinet, requiring connections for each server. EoR uses chassis switches in the row middle, connecting cabinets within cable length limits. Designs must map logical networks to physical cable routing and manage connectivity growth.
Fault Tolerance in Big Data Processing Using Heartbeat Messages and Data Repl...IJSRD
Big data is a popular term used to define the exponential evolution and availability of data, includes both structured and unstructured data. The volatile progression of demands on big data processing imposes heavy burden on computation, communication and storage in geographically distributed data centers. Hence it is necessary to minimize the cost of big data processing, which also includes fault tolerance cost. Big Data processing involves two types of faults: node failure and data loss. Both the faults can be recovered using heartbeat messages. Here heartbeat messages acts as an acknowledgement messages between two servers. This paper depicts about the study of node failure and recovery, data replication and heartbeat messages.
Software for the new COMPASS data acquisition systembodlosh
The document summarizes the evaluation of software for a new data acquisition system. It was decided not to use the existing DATE software due to complexity, and instead develop a simpler system. A minimal run control system was implemented using DIM for communication between a master node and slave nodes. Initial testing showed the system could reliably exchange messages between nodes. Further implementation of error reporting and simulation tools is planned.
Perceiving and recovering degraded data on secure cloudIAEME Publication
This document discusses securing data stored on cloud systems. It proposes a method using tokens to represent file blocks distributed across multiple servers. A third party auditor verifies the integrity of tokens and can detect corrupted data by checking signatures. The system uses erasure coding and fault tolerance techniques like retransmission to recover lost data blocks and make the file system tolerant to node failures without data loss. Performance is evaluated, showing that optimal token size balances processing time against overhead of managing many small tokens.
Software Defined Networking: A Concept and Related IssuesEswar Publications
SDN (Software Defined Networking) is the networking architecture that has gained attention of researchers in recent past. It is the future of programmable networks. Traditional networks were very complex and difficult to manage. SDN is going to change this by offering a standard interface (OpenFlow) between the control plane and the networking devices (data plane). Its implementation is fully supported by software so that we can control the behavior of networking devices through programmatic control. This programmatic control provides various new ways to find breakpoints and failures in networking devices. Today SDN has become an important part of networking, so it is important to emulate its behavior. SDN support virtualization which makes it scalable and flexible. Data traffic resides in the data plane. The main function of intelligent controller is to decide the routing
policy and manage the traffic in data plane. So effectively SDN emerges as a networking architecture that has the ability to solve all problems those were found in traditional architecture In this paper the authors discussed historical perspective of SDN, languages that support SDN, emulation tools, security issues with SDN and advantages that makes SDN suitable choice for today’s network.
This document discusses resource management techniques in distributed systems. It covers three main scheduling techniques: task assignment approach, load balancing approach, and load sharing approach. It also outlines desirable features of good global scheduling algorithms such as having no a priori knowledge about processes, being dynamic in nature, having quick decision-making capability, balancing system performance and scheduling overhead, stability, scalability, fault tolerance, and fairness of service. Finally, it discusses policies for load estimation, process transfer, state information exchange, location, priority assignment, and migration limiting that distributed load balancing algorithms employ.
This document discusses resource management techniques in distributed systems. It describes three main approaches: task assignment, load balancing, and load sharing. Task assignment involves scheduling related tasks to optimize performance metrics like turnaround time. Load balancing aims to evenly distribute workloads across nodes to utilize resources efficiently. Load sharing is a simpler approach that prevents idle nodes when others are heavily loaded. The document also outlines desirable properties for scheduling algorithms and categorizes different types of load balancing techniques.
To detect network intrusions protects a computer network from unauthorized users, including perhaps insiders. The intrusion detector learning task is to build a predictive model (i.e. a classifier) capable of distinguishing between "bad" connections, called intrusions or attacks, and "good" normal connections
The German Climate Computing Centre (DKRZ) provides computing resources and data management for climate research. It needed more powerful systems to handle increasing data demands. DKRZ purchased an IBM POWER6 cluster with QLogic InfiniBand switches. This exceeded performance expectations, ranking 27th on the Top500 list. The switches also simplified storage integration and management.
This document describes research into developing a discovery method to ensure DNSSEC information can be delivered to end hosts. Measurements using RIPE ATLAS probes found that 64% of recursive resolvers could perform basic DNSSEC queries, while only 40% could process authenticated wildcard information. The proposed discovery method has stub resolvers first try the default recursive resolver, then the ISP resolver, a public resolver, or full recursion if needed, to balance functionality and efficiency.
This paper proposes improvements to the Chord peer-to-peer network protocol to increase reliability of data transfer. The Chord protocol uses a ring topology to organize nodes and determine routing, but direct routes sometimes fail. The paper suggests having nodes store failed packets locally and route them through multiple neighboring nodes to reach the destination. It also considers available bandwidth when selecting next-hop nodes. Simulations show the proposed methods increase packet delivery ratio compared to the base Chord protocol, especially as node movement increases, improving the reliability of data transfer in the peer-to-peer network.
This document provides an overview of multicast communication concepts. It discusses IP multicast and how it allows efficient single-message delivery to groups. Reliable multicast is described as ensuring validity, integrity, and agreement even if the sender crashes. Ordered multicast can provide FIFO, causal, or total ordering guarantees for message delivery across group members. Practical implementations rely on techniques like sequence numbers, acknowledgments, and negative acknowledgments to ensure reliability and ordering.
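As a minimal illustration of the sequence-number technique mentioned above, a FIFO-ordered receiver with a hold-back queue might look like this (class and field names are assumptions for the sketch):

```python
from collections import defaultdict

# Sketch of FIFO-ordered delivery: message seq s from sender p is
# delivered only after all of p's earlier messages; out-of-order
# arrivals wait in a hold-back queue.

class FifoReceiver:
    def __init__(self):
        self.expected = defaultdict(int)   # next seq expected per sender
        self.holdback = defaultdict(dict)  # seq -> message, per sender
        self.delivered = []

    def receive(self, sender, seq, msg):
        self.holdback[sender][seq] = msg
        # Deliver as many in-order messages as the hold-back queue allows.
        while self.expected[sender] in self.holdback[sender]:
            s = self.expected[sender]
            self.delivered.append((sender, s, self.holdback[sender].pop(s)))
            self.expected[sender] += 1

r = FifoReceiver()
r.receive("p", 1, "second")  # arrives out of order: held back
r.receive("p", 0, "first")   # releases both messages in FIFO order
print(r.delivered)
```

Causal and total ordering need more machinery (vector clocks or a sequencer); this sketch covers only the FIFO guarantee.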
Introduction: What is clock synchronization?
The challenges of clock synchronization.
Basic Concepts: Software and hardware clocks. Basic clock synchronization algorithm
Algorithms: Deep dive into landmark papers
NTP: Internet scale time synchronization
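As a concrete example of the kind of basic clock synchronization algorithm this outline covers, Cristian's method estimates the server clock from a single round trip (the timestamps below are illustrative, in milliseconds):

```python
# Sketch of Cristian's algorithm: the client estimates the server clock
# as the reported server time plus half the measured round-trip time,
# assuming symmetric network delay.

def cristian_estimate(t_request, t_reply, server_time):
    """Return the client's estimate of current server time at t_reply."""
    rtt = t_reply - t_request
    return server_time + rtt / 2

# Client sent at t=1000 ms, got a reply at t=1008 ms carrying
# server time 205000 ms.
print(cristian_estimate(1000, 1008, 205000))  # -> 205004.0
```

NTP refines the same idea with multiple samples, offset/delay filtering, and a server hierarchy rather than a single round trip.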
This document discusses peer-to-peer systems and middleware for managing distributed resources at a large scale. It describes key characteristics of peer-to-peer systems like nodes contributing equal resources and decentralized operation. Middleware systems like Pastry and Tapestry are overlay networks that route requests to distributed objects across nodes through knowledge at each node. They provide simple APIs and support scalability, load balancing, and dynamic node availability.
This document discusses process management in distributed systems. It describes how distributed operating systems aim to make the best use of processing resources across an entire system by sharing processors among all processes. Key concepts discussed include processor allocation, process migration, and threads. Process migration involves transferring a running process from one machine to another to achieve goals like load balancing and fault tolerance. The challenges and mechanisms of freezing, transferring, and restarting a migrating process's address space and forwarding messages are also covered.
Design of a Remotely Accessible PC based Temperature Monitoring System (IDES Editor)
An innovative data-acquisition circuit for temperature monitoring and control is designed and interfaced to the printer port of a web server computer. Further, an interactive web application program has been developed and kept running on the server computer to control the operation of the data-acquisition circuit. Authenticated clients can access the web-based instrumentation system through the Internet or an Intranet.
This document discusses two approaches for distributed clustering of data streams from sensor networks: DGClust and L2GClust. DGClust performs local discretization and representative clustering to improve computation and communication loads for clustering sensor data streams at a central server. L2GClust performs local clustering based on each sensor's sketch of its own data and its neighbors' estimates of the global clustering, allowing each sensor to estimate the overall network clustering with limited resources and communication. Evaluation shows L2GClust achieves high agreement with centralized clustering while reducing storage, communication and sensitivity to uncertainty.
This document summarizes a research paper that proposes a content-based hybrid DWT-DCT watermarking technique for image authentication in color images. The technique embeds statistical features extracted from the host image as the watermark. Four different statistical features are used to generate the watermark - the Frobenius norm, mean, standard deviation, and combined mean and standard deviation of the host image blocks. The watermark is then embedded into the host image by applying both DWT and DCT transforms. During extraction, the same process is applied to extract the watermark for authentication. Experimental results show the technique is robust against various attacks like compression, noise, and filters.
The document discusses tensile properties of long jute fiber reinforced polypropylene composites. It begins with an abstract that states the objective is to test tensile properties of composites made from chemically treated long jute fibers reinforced in polypropylene at different weight ratios. The results showed tensile strength and modulus increased for treated fiber composites compared to plain polypropylene, with up to a 28.4% increase for 15% NaOH treated fibers at 10% weight ratio. The introduction provides background on composites and defines them. It also describes the phases in a composite including polypropylene matrix and jute fiber reinforcement. Experimental details on materials and fiber extraction are then presented.
The document analyzes the bit error rate (BER) performance of the mobile WiMAX physical layer under different communication channels and modulation techniques. It simulates BER and signal-to-noise ratio (SNR) using the Stanford University Interim (SUI) channel models, which model six different channel conditions for varying terrain types. The performance is evaluated for different data rates and modulation schemes like BPSK and OFDMA under the SUI channel models.
This document summarizes research into optimizing process parameters for Eli-Twist yarn production. Eli-Twist yarn is produced using Suessen Elite compact spinning technology and has advantages over traditional two-ply yarns. The distance between roving strands and negative pressure applied in the suction zone can substantially impact yarn quality. Ten trials were conducted varying these two parameters. The effects on yarn fineness, strength, elongation, imperfections and hairiness were evaluated. A process capability index (Cpk) was used to assess yarn quality. The goal was to optimize the parameters to improve Eli-Twist yarn quality.
The document proposes a fast handoff scheme for IEEE 802.11 wireless networks using virtual access points. It aims to reduce handoff latency, which is primarily caused by the probe delay during scanning for new access points. The scheme uses selective scanning to identify neighboring access points with strong signals. It performs pre-registration of the mobile host with neighboring access points to transfer security contexts in advance. A virtual access point handles communication between the mobile host and registered access points to enable fast switching during handoff. Buffering at the virtual access point allows seamless data transfer when the connection changes between access points.
This document summarizes challenges with migrating applications between cloud environments and discusses potential solutions. It addresses three main points:
1) Application architecture impacts migration ability, and architectures like asynchronous apps are better suited for cloud portability.
2) Standards like OVF could help by providing universal metadata for virtual machines, but full standards adoption will take time.
3) Tools to automate migration are needed to move apps without rewriting them for each cloud, but current tools often result in multiple versions that are difficult to manage.
This document summarizes and analyzes different Flash Translation Layer (FTL) schemes for Solid State Drives (SSDs). It discusses the strengths and weaknesses of page mapping, block mapping, and hybrid mapping FTL schemes. It proposes that the optimal FTL scheme would have infrequent garbage collection, short garbage collection latency, low computation and memory overhead, and maintain good average and worst-case write performance. The document suggests that modified page mapping and FAST mapping schemes may achieve this if their computation overhead and worst-case latency issues are addressed without hurting average performance.
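A page-mapping FTL of the kind compared here can be sketched in a few lines; the class shape and the FIFO free-page list are simplifying assumptions, and no real garbage collector is modeled:

```python
# Sketch of a page-mapping FTL: any logical page can map to any physical
# page, writes go out-of-place to a fresh page, and the old physical page
# is marked invalid (to be reclaimed later by garbage collection).

class PageMapFTL:
    def __init__(self, num_physical_pages):
        self.mapping = {}                    # logical page -> physical page
        self.free = list(range(num_physical_pages))
        self.invalid = set()                 # stale pages awaiting GC

    def write(self, lpn, flash, data):
        ppn = self.free.pop(0)               # out-of-place: always a fresh page
        if lpn in self.mapping:
            self.invalid.add(self.mapping[lpn])  # old copy becomes garbage
        self.mapping[lpn] = ppn
        flash[ppn] = data
        return ppn

flash = {}
ftl = PageMapFTL(num_physical_pages=8)
ftl.write(0, flash, "v1")
ftl.write(0, flash, "v2")  # rewrite: new physical page, old marked invalid
print(ftl.mapping[0], sorted(ftl.invalid))
```

The memory overhead the document mentions comes from this full per-page mapping table; block and hybrid mappings trade that table size against garbage-collection cost.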
This document summarizes a research paper that proposes improvements to the probabilistic packet marking (PPM) algorithm for detecting the path of distributed denial-of-service attacks. The PPM algorithm allows routers to mark attack packets with identification information based on a predetermined probability. However, its termination condition is not well-defined, which can result in an incorrectly constructed attack path. The paper proposes a modified PPM algorithm called rectified PPM (RPPM) that defines a precise termination condition to guarantee the constructed attack path is correct with a specified level of confidence. An experimental framework is designed to test the RPPM algorithm under different packet marking probabilities and network structures.
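The basic PPM marking step (the baseline, not the paper's RPPM refinement) can be sketched as follows; the marking probability, router names, and single-field mark are illustrative simplifications:

```python
import random

# Sketch of probabilistic packet marking: each router on the path
# overwrites the mark field with its own identity with probability p,
# so the victim eventually collects marks from every router en route.

P_MARK = 0.04  # marking probability commonly used in the PPM literature

def forward(packet, router, rng):
    if rng.random() < P_MARK:
        packet["mark"] = router  # router stamps its identity
    return packet

rng = random.Random(1)  # seeded for repeatability
seen = set()
for _ in range(2000):               # many packets along path r1 -> r2 -> r3
    pkt = {"mark": None}
    for router in ("r1", "r2", "r3"):
        forward(pkt, router, rng)
    seen.add(pkt["mark"])
print(sorted(m for m in seen if m))  # routers recovered from the marks
```

The termination problem the paper addresses is visible here: the victim has no principled rule for deciding when `seen` covers the whole path, which is what RPPM's confidence-based stopping condition supplies.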
The document describes the theory of connectivism, an approach to learning that recognizes the impact of technology on everyday life. It explains that teachers must adapt to technological change in order to better understand their students and teach them to use technology effectively and responsibly. It also suggests that using digital tools such as PowerPoint and interactive whiteboards can improve student learning if teachers are prepared to
This document is an invoice detailing the products sold to a customer, Cristobal Colon, including a sofa, soda, potatoes, books, boxes, and a computer. The invoice lists the quantity of each product, its description, unit price and line total, as well as the subtotal, taxes, shipping, and the total amount due of $9,635.
Paulo Freire was born in 1921 in Brazil. He experienced poverty during the Great Depression, which shaped his educational focus on the poor. He studied philosophy and the psychology of language at university. In 1946 he headed the department of education in Pernambuco, where he developed an unorthodox method for teaching the poor to read and write. In 1961 he directed the cultural extension service of the University of Recife, and in 1962 he taught 300 workers
The document lists the names and assigned groups of four nutritionists at a school. Carmen Elizabeth Carrillo Bravo covers infants, toddlers, and preschool 1. Claudia Yolanda Armenta Delgado covers preschool 2, preschool 3, and primary 1-A. Araceli Galindo Tejeda covers primary 1-B, primary 2-3, and primary 3-4. Estefanía del Carmen López Díaz covers primary 4-5, primary 5
Sector is a distributed file system that stores files on local disks of nodes without splitting files. Sphere is a parallel data processing engine that processes data locally using user-defined functions like MapReduce. Sector/Sphere is open source, supports fault tolerance through replication, and provides security through user accounts and encryption. Performance tests show Sector/Sphere outperforms Hadoop for sorting and malware analysis benchmarks by processing data locally.
Sector is a distributed file system that stores files on local disks of nodes without splitting files. Sphere is a parallel data processing engine that processes data locally using user-defined functions like MapReduce. Sector/Sphere is open source, written in C++, and provides high performance distributed storage and processing for large datasets across wide areas using techniques like UDT for fast data transfer. Experimental results show it outperforms Hadoop for certain applications by exploiting data locality.
DFAA: A Dynamic Flow Aggregation Approach Against SDDOS Attacks in Cloud (IRJET Journal)
This document proposes a new method called DFAA (Dynamic Flow Aggregation Approach) to detect periodic shrew distributed denial of service (DDoS) attacks in cloud computing. The method uses frequency-domain characteristics extracted from the autocorrelation of network flows as clustering features. It groups end-user flows using the BIRCH clustering algorithm and then refines the clusters. The evaluation shows the method can categorize abnormal network flows with fast response times and high detection accuracy, while avoiding lower impact groups of abnormal flows.
Real Application Cluster (RAC) allows multiple computers to simultaneously run Oracle RDBMS while accessing a single database, providing clustering. RAC provides high availability, scalability, and ease of administration by making multiple instances transparent to users. Nodes must have identical environments. Oracle Clusterware manages node additions and removals. Instances from different nodes write to the same physical database. The presentation covers RAC architecture, components, startup sequence, single instance configuration, node eviction, and tips for monitoring and improving the RAC environment.
IRJET: Detection of Distributed Denial-of-Service (DDoS) Attack on Software D... (IRJET Journal)
This document discusses detecting distributed denial-of-service (DDoS) attacks on software defined networks (SDNs). It first provides background on SDNs and DDoS attacks. It then reviews related research on DDoS detection methods for SDNs. The document evaluates these methods based on results using the KDD99 dataset in a simulated SDN environment. It finds that the Double P-value of Transductive Confidence Machines for K-Nearest Neighbors (DPTCM-KNN) method achieved the highest true positive rate and lowest false positive rate, making it the most efficient approach for detecting anomalous flows in SDNs.
IRJET: Detecting and Securing of IP Spoofing Attack by using SDN (IRJET Journal)
This document discusses detecting and preventing IP spoofing attacks using software-defined networking (SDN). It begins with an abstract that outlines using SDN architecture to implement controls for IP spoofing through an algorithm to manage flows of unused IP addresses via the shortest path. It then discusses how IP spoofing works by creating packets with fake source IP addresses. The proposed approach uses SDN destination networking to associate source networks with cryptographic keys added to packets for authentication by routers. This provides incentives for internet service providers to implement spoofing prevention. Evaluation shows the proposed approach improves performance metrics like IP address usage, intrusion detection, secure data transmission, and synchronization compared to existing methods.
IRJET: Security Based Data Transfer and Privacy Storage through Watermark Dete... (IRJET Journal)
Gowtham T., Pradeep Kumar G., "Security Based Data Transfer and Privacy Storage through Watermark Detection", International Research Journal of Engineering and Technology (IRJET), Volume 2, Issue 01, April 2015. e-ISSN: 2395-0056, p-ISSN: 2395-0072. www.irjet.net. Published by Fast Track Publications.
Abstract
Digital watermarking has been proposed as a technology to ensure copyright protection by embedding an imperceptible yet detectable signal in visual multimedia content such as images or video. Security is a key concern in every field, and privacy is a critical issue when data owners outsource data storage or processing to a third-party computing service. Several attempts have been made to strengthen security and avoid data loss, but existing systems leave room for further parameter refinement. In this paper, improvements are made to the successive compressive sensing (CS) reconstruction stage and to the Peak Signal-to-Noise Ratio (PSNR). A further goal is to increase the CS rate by de-emphasizing predictive variables that become uncorrelated with the measurement data, which eliminates the need for CS reconstruction.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Intrusion Detection and Marking Transactions in a Cloud of Databases Environm... (neirew J)
Cloud computing is a paradigm for large-scale distributed computing that incorporates several existing technologies. A database management system is a collection of programs that lets you store, modify, and extract information from a database. Databases have now moved to the cloud, but this shift also introduces a set of threats that target cloud database systems. The unification of transaction-based applications in these environments likewise presents vulnerabilities and threats targeting the cloud database environment. In this context, we propose an intrusion detection and transaction marking scheme for a cloud of databases environment.
INTRUSION DETECTION AND MARKING TRANSACTIONS IN A CLOUD OF DATABASES ENVIRONMENT (ijccsa)
EFFICIENT IDENTIFICATION AND REDUCTION OF MULTIPLE ATTACKS ADD VICTIMISATION ... (IRJET Journal)
This document discusses efficient identification and reduction of multiple attacks in IoT networks using deep learning techniques. It proposes a Deep Learning based secure RPL routing (DLRP) protocol to detect attacks like rank, version number, and Denial of Service attacks. The DLRP protocol first creates a complex dataset of normal and attack behaviors using network simulation. It then trains a machine learning model using this dataset to efficiently identify attack behaviors. Additionally, it classifies attack types using a Generative Adversarial Network to reduce the dataset dimensionality. Simulation results show the DLRP protocol improves attack detection accuracy and fits IoT environments well, achieving 80% packet delivery ratio using only 1474 control packets in a 30 node IoT scenario.
IRJET: SDN Simulation in Mininet to Provide Security Via Firewall (IRJET Journal)
This document discusses implementing a firewall application in a Software Defined Networking (SDN) environment using Mininet and the POX controller. The authors create an SDN network topology in Mininet with hosts and switches. They develop an OpenFlow-based firewall that checks incoming packets against rules defined in the POX controller. This allows filtering of traffic and blocking of unauthorized access in a centralized, software-based way without dedicated hardware. The firewall implementation and experiment results using this SDN testbed are presented.
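The rule-checking core of such a firewall can be sketched in plain Python; this is not the actual POX API, and the rule fields and addresses below are illustrative assumptions:

```python
# Minimal sketch of the firewall's match logic: incoming packets are
# checked against block rules of the kind a POX controller would install;
# unmatched traffic is allowed through (default-allow).

BLOCK_RULES = [
    {"src": "10.0.0.1", "dst": "10.0.0.3"},  # block this host pair
    {"dst_port": 23},                        # block telnet to any host
]

def allowed(packet):
    for rule in BLOCK_RULES:
        # A rule matches when every field it names agrees with the packet.
        if all(packet.get(k) == v for k, v in rule.items()):
            return False  # matched a block rule: drop
    return True

print(allowed({"src": "10.0.0.1", "dst": "10.0.0.3", "dst_port": 80}))  # False
print(allowed({"src": "10.0.0.2", "dst": "10.0.0.3", "dst_port": 80}))  # True
```

In the real system the controller would translate such decisions into OpenFlow flow-table entries on the switches so that subsequent packets are filtered in the data plane, not at the controller.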
Deep dive into the high availability technologies of Exchange 2010 by... (Microsoft Technet France)
You too can become an expert on Exchange High Availability! A technical session, in English, given by the guru of Exchange high availability technologies: Scott Schnoll. Scott is a speaker at Microsoft's TechReady and TechEd events, has written numerous reference books, and will be there exclusively to lead this session. Topics covered include: how do I separate my log replication traffic from my client traffic? When a DAG (Database Availability Group) goes down, how does the system choose the right database copy to replicate? Go beyond the basic high-availability features and learn what really happens inside an Exchange DAG. This session covers the internal workings of DAGs; we will discuss DAG networks, Active Manager, how the system selects the best database copies to activate, and Datacenter Activation Coordination Mode.
ASSURED NEIGHBOR BASED COUNTER PROTOCOL ON MAC-LAYER PROVIDING SECURITY IN MO... (cscpconf)
In this paper, we address security at the Medium Access Control layer by implementing an Assured Neighbor based Security Protocol that provides authentication and confidentiality while sustaining high-speed transmission, applying security in parallel at both the routing and link layers of mobile ad hoc networks. The protocol is divided into two segments. The first, based on routing-layer information, implements a scheme for detecting and isolating malicious nodes: a trust counter is maintained for each node and is increased or decreased according to the node's packet-forwarding behavior, and a threshold is defined to separate malicious from non-malicious nodes; if a node's trust counter falls below the threshold, the node is considered malicious. The second segment provides security at the link layer, using the CTR (Counter) mode approach for authentication and encryption. Simulating the results in NS-2, we conclude that the proposed protocol attains a high packet delivery ratio in the presence of various intruders while maintaining low delays and overheads.
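The trust-counter mechanism described above can be sketched as follows; the increment, decrement, and threshold values are illustrative assumptions, not the paper's parameters:

```python
# Sketch of the trust-counter idea: each node's counter rises on observed
# successful forwards and falls on drops; once it falls below the
# threshold the node is flagged malicious and can be isolated.

THRESHOLD = 0.5  # illustrative cut-off between trusted and malicious

class TrustTable:
    def __init__(self):
        self.trust = {}

    def observe(self, node, forwarded):
        t = self.trust.get(node, 1.0)           # new nodes start fully trusted
        t = t + 0.1 if forwarded else t - 0.2   # penalize drops more heavily
        self.trust[node] = max(0.0, min(1.0, t))

    def malicious(self, node):
        return self.trust.get(node, 1.0) < THRESHOLD

table = TrustTable()
for ok in (False, False, False):  # node "m" keeps dropping packets
    table.observe("m", ok)
print(table.malicious("m"))       # True: trust has fallen below the threshold
```

Asymmetric increments (slow to earn trust, quick to lose it) are a common design choice so a malicious node cannot cheaply rebuild its reputation between attacks.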
Data Center Network Design Using Metrics And Their Results... (Sharon Roberts)
The document discusses implementing complex protocols on network systems. It notes that networks are currently built by implementing protocols across devices like routers, switches, and middleware. Network administrators manually configure new policies by converting them into low-level device commands. This approach has issues with security, scalability, and manageability. Previous efforts tried to make networks programmable through systems like Forces, Routing Control Platform, Ethane, and OpenFlow, but SDN aims to better address these issues.
Application independent based multicast routing protocols in mobile ad hoc ne... (Alexander Decker)
This document summarizes and compares several application-independent multicast routing protocols for mobile ad hoc networks (MANETs). It discusses the key challenges in designing multicast routing protocols for MANETs, including robustness, efficiency, control overhead, and dependency on unicast routing. It also presents a reference model architecture for multicast routing protocols and classifications based on topology (tree-based vs. mesh-based) and approach (reactive vs. proactive). Several specific multicast routing protocols are described, including AMRoute, AMRIS, and ODMRP, focusing on their mechanisms for group management, tree/mesh construction, and maintenance in dynamic network conditions.
This document summarizes a study on the impact of malicious nodes on throughput, packets dropped, and average latency in mobile ad hoc networks (MANETs). The study used the NS-2 simulator to model different MANET scenarios with varying numbers of malicious nodes, from 0 to 10 nodes per group and 0 to 40 nodes total. Key findings from the simulations include: (1) network throughput was highest with 0 malicious nodes, (2) packet drops were lowest with 4 malicious nodes, and (3) average latency was lowest with 4 malicious nodes. As the number of malicious nodes increased, network throughput decreased while packet drops and latency increased. The document concludes the presence of malicious nodes degrades MANET performance but having a small number
This document discusses the impact of data mining on business intelligence. It begins by defining business intelligence as using new technologies to quickly respond to changes in the business environment. Data mining is an important part of the business intelligence lifecycle, which includes determining requirements, collecting and analyzing data, generating reports, and measuring performance. Data mining allows businesses to access real-time, accurate data from multiple sources to improve decision making. Using business intelligence and data mining techniques can help businesses become more efficient and make better decisions to increase profits and customer satisfaction. The expected results of applying business intelligence include improved decision making through accurate, timely information to support organizational goals and strategic plans.
This document presents a novel technique for solving the transcendental equations of selective harmonics elimination pulse width modulation (SHEPWM) inverters based on the secant method. The proposed algorithm uses the secant method to simplify the numerical solution of the nonlinear equations and solve them faster compared to other methods. Simulation results validate that the proposed method accurately estimates the switching angles to eliminate specific harmonics from the output voltage waveform and achieves near sinusoidal output current for various modulation indices and numbers of harmonics eliminated.
This document summarizes a research paper that designed and implemented a dual tone multi-frequency (DTMF) based GSM-controlled car security system. The system uses a DTMF decoder and GSM module to allow a car to be remotely controlled and secured from a mobile phone. It works by sending DTMF tones from the phone through calls to the GSM module in the car. The decoder interprets the tones and a microcontroller executes commands to disable the ignition or control other devices. The system was created to improve car security and accessibility through remote monitoring and control with DTMF and GSM technology.
This document presents an algorithm for imperceptibly embedding a DNA-encoded watermark into a color image for authentication purposes. It applies a multi-resolution discrete wavelet transform to decompose the image. The watermark, encoded into DNA nucleotides, is then embedded into the third-level wavelet coefficients through a quantization process. Specifically, the watermark nucleotides are complemented and used to quantize coefficients in the middle frequency band, modifying the coefficients. The watermarked image is reconstructed through inverse wavelet transform. Extraction reverses these steps to recover the watermark without the original image. The algorithm aims to balance imperceptibility and robustness through this wavelet-based, blind watermarking scheme.
1) The document analyzes the dynamic saturation point of a deep-water channel in Shanghai port based on actual traffic data and a ship domain model.
2) A dynamic channel transit capacity model is established that considers factors like channel width, ship density, speed, and reductions due to traffic conditions.
3) Based on AIS data from the channel, the average traffic flow is calculated to be 15.7 ships per hour, resulting in a dynamic saturation of 32.5%, or 43.3% accounting for uneven day/night traffic volumes.
The document summarizes research on the use of earth air tunnels and wind towers as passive solar techniques. Key findings include:
- Earth air tunnels circulate air through underground pipes to take advantage of the stable temperature 4 meters below ground for cooling in summer and heating in winter. Testing showed the technique can reduce ambient temperatures by up to 14 degrees Celsius.
- Wind towers circulate air through tall shafts to cool air entering buildings at night and provide downward airflow of cooled air during the day.
- Experimental testing of an earth air tunnel system over multiple months found maximum temperature reductions of 33% in spring and minimum reductions of 15% in summer.
The document compares the mechanical and physical properties of low density polyethylene (LDPE) thin films and sheets reinforced with graphene nanoparticles. LDPE/graphene thin films were produced via solution casting, while sheets were made by compression molding. Testing showed that the thin films had enhanced tensile strength, lower melt flow index, and higher thermal stability compared to sheets. The tensile strength of thin films increased by up to 160% with 1% graphene, while sheets increased by 70%. Melt flow index decreased more for thin films, indicating higher viscosity. Thin films also showed greater improvement in glass transition temperature. These results demonstrate that processing technique affects the properties of LDPE/graphene nanocomposites.
The document describes improvements made to a friction testing machine. A stepper motor and PLC control system were added to automatically vary the load on friction pairs, replacing the manual method. Tests using the improved machine found that the friction coefficient decreases as the load increases, and that abrasive and adhesive wear increased with higher loads. The improved machine allows more accurate and convenient testing of friction pairs under varying load conditions.
This document summarizes a research article that investigates the steady, two-dimensional Falkner-Skan boundary layer flow over a stationary wedge with momentum and thermal slip boundary conditions. The flow considers a temperature-dependent thermal conductivity in the presence of a porous medium and viscous dissipation. Governing partial differential equations are non-dimensionalized and transformed into ordinary differential equations using similarity transformations. The equations are highly nonlinear and cannot be solved analytically, so a numerical solver is used. Numerical results are presented for the skin friction coefficient, local Nusselt number, velocity and temperature profiles for varying parameters like the Falkner-Skan parameter and Eckert number.
An improvised white board compass was designed and developed to enhance the teaching of geometrical construction concepts in basic technology courses. The compass allows teachers to visually demonstrate geometric concepts and constructions on a white board in an engaging, hands-on manner. It supports constructivist learning principles by enabling students to observe and emulate the teacher. The design process utilized design and development research methodology to test educational theories and validate the practical application of the compass. The improvised compass was found to effectively engage students and improve their performance in learning geometric constructions.
The document describes the design of an energy meter that calculates energy using a one second logic for improved accuracy. The meter samples voltage and current values using an ADC synchronized to the line frequency via PLL. It calculates active and reactive power by averaging the sampled values over each second. The accumulated active power for each second is multiplied by one second to calculate energy, which is accumulated and converted to kWh. Test results showed the meter achieved an error of 0.3%, within the acceptable limit for class 1 meters. Considering energy over longer durations like one second helps reduce percentage error in the calculation.
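The one-second energy accumulation can be illustrated with a short calculation; the sample counts and power values below are made up for the example:

```python
# Sketch of the one-second energy logic: average the sampled instantaneous
# power over each second, multiply by 1 s to get that second's energy in
# joules, and accumulate into kWh.

def accumulate_kwh(per_second_samples):
    """per_second_samples: list of lists of instantaneous power samples (W),
    one inner list per second. Returns accumulated energy in kWh."""
    joules = 0.0
    for samples in per_second_samples:
        avg_power = sum(samples) / len(samples)  # mean power over the second
        joules += avg_power * 1.0                # energy for that second (W*s)
    return joules / 3_600_000                    # J -> kWh

# Two seconds of a constant 1 kW load (4 samples per second):
print(accumulate_kwh([[1000.0] * 4, [1000.0] * 4]))
```

Averaging over a full second before accumulating is what reduces the percentage error the document mentions: per-sample rounding errors largely cancel within each one-second window.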
This document presents a two-stage method for solving fuzzy transportation problems where the costs, supplies, and demands are represented by symmetric trapezoidal fuzzy numbers. In the first stage, the problem is solved to satisfy minimum demand requirements. Remaining supplies are then distributed in the second stage to further minimize costs. A numerical example demonstrates using robust ranking techniques to convert the fuzzy problem into a crisp one, which is then solved using a zero suffix method. The total optimal costs from both stages provide the solution to the original fuzzy transportation problem.
1) The document proposes using an Adaptive Neuro-Fuzzy Inference System (ANFIS) controller for a Distributed Power Flow Controller (DPFC) to improve voltage regulation and power quality in a transmission system.
2) A DPFC is placed at a load bus in an IEEE 4 bus system and its performance is compared using a PI controller and ANFIS controller.
3) Simulation results show the ANFIS controller provides faster convergence and better voltage profile maintenance during voltage sags and swells compared to the PI controller.
The document describes an improved particle swarm optimization algorithm to solve vehicle routing problems. It introduces concepts of leptons and hadrons to particles in the algorithm. Leptons interact weakly based on individual and neighborhood best positions, while hadrons (local best particles) undergo strong interactions by colliding with the global best particle. When stagnation occurs, particle decay is used to increase diversity. Simulations show the improved algorithm avoids premature convergence and finds better solutions compared to the basic particle swarm optimization.
This document presents a method for analyzing photoplethysmographic (PPG) signals using correlative analysis. The method involves calculating the autocorrelation function of the PPG signal, extracting the envelope of the autocorrelation function using a low pass filter, and approximating the envelope by determining attenuation coefficients. Ten PPG signals were collected from volunteers and analyzed using this method. The attenuation coefficients were found to have similar values around 0.46, providing a potentially useful parameter for medical diagnosis.
This document describes the simulation and design of a process to recover monoethylene glycol (MEG) from effluent waste streams of a petrochemical company in Iran. Aspen Plus simulation software was used to model the process, which involves separating water, salts, and various glycols (MEG, DEG, TEG, TTEG) using a series of distillation columns. Sensitivity analyses were performed to optimize column parameters such as pressure, reflux ratio, and boilup ratio. The results showed that MEG, DEG, TEG, and TTEG could be recovered at rates of 5.01, 2.039, 0.062, and 0.089 kg/hr, respectively.
This document presents a numerical analysis of fluid flow and heat transfer characteristics of ventilated disc brake rotors using computational fluid dynamics (CFD). Two types of rotor configurations are considered: circular pillared (CP) and diamond pillared radial vane (DP). A 20° sector of each rotor is modeled and meshed. Governing equations for mass, momentum, and energy are solved using ANSYS CFX. Boundary conditions include 900K and 1500K isothermal rotor walls for different speeds. Results show the DP rotor has 70% higher mass flow and 24% higher heat dissipation than the CP rotor. Velocity and pressure distributions are more uniform for the DP rotor at higher speeds, ensuring more uniform cooling. The
This document describes the design and testing of an automated cocoa drying house prototype in Trinidad and Tobago. The prototype included automated features like a retractable roof, automatic heaters, and remote control. It aims to address issues with the traditional manual sun drying process, which is time-consuming and relies on human monitoring of changing weather conditions. Initial testing with farmers showed interest in the automated system as a potential solution.
This document presents the design of a telemedical system for remote monitoring of cardiac insufficiency. The system includes an electrocardiography (ECG) device that collects and digitizes ECG signals. The ECG signals undergo digital signal processing including autocorrelation analysis. Graphical interfaces allow patients and doctors to view ECG data and attenuation coefficients derived from autocorrelation analysis. Data is transmitted between parties using TCP/IP protocol. The system aims to facilitate remote monitoring of cardiac patients to reduce hospitalizations through early detection of health changes.
The document summarizes a polygon oscillating piston engine invention. The engine uses multiple pistons arranged around the sides of a polygon within cylinders. As the pistons oscillate, they compress and combust air-fuel mixtures to produce power. This design achieves a very high power-to-weight ratio of up to 2 hp per pound. Engineering analysis and design of a prototype 6-sided engine is presented, showing it can produce 168 hp from a 353 cubic feet per minute air flow at 12,960 rpm. The invention overcomes issues with prior oscillating piston designs by keeping the pistons moving in straight lines within cylinders using conventional piston rings.
More from International Journal of Engineering Inventions www.ijeijournal.com (20)
call for papers, research paper publishing, where to publish research paper, journal publishing, how to publish research paper, Call For research paper, international journal, publishing a paper, IJEI, call for papers 2012,journal of science and technolog
International Journal of Engineering Inventions
ISSN: 2278-7461, www.ijeijournal.com
Volume 1, Issue 4 (September 2012) PP: 10-14
Security Management for Distributed Environment
Ms. Smita Chaudhari¹, Mrs. Seema Kolkur²
¹Assistant Professor, S. S. Jondhale College of Engineering, Dombivli, Mumbai University, INDIA
²Associate Professor, Thadomal Shahani College of Engineering, Mumbai University, INDIA
Abstract––A mobile database is a database that can be connected to by a mobile computing device over a mobile network. Information processed in mobile database systems is distributed, heterogeneous, and replicated. Such systems are endangered by various threats arising from user mobility and from the restricted resources of portable devices and wireless links. Since mobile environments can be highly dynamic, standard protection mechanisms do not work well in them, so our proposed model enhances the security of the mobile database system. In this paper we develop a security model for a transaction management framework for peer-to-peer environments. If an attack does occur on a database system, damage evaluation must be performed as soon as the attack is identified. The attack recovery problem has two aspects: damage assessment and damage repair. The complexity of attack recovery is mainly caused by a phenomenon called damage spreading. This paper focuses on damage assessment and recovery procedures for distributed database systems.
Keywords––Mobile Database, Transaction Management, Security
I. INTRODUCTION
In a mobile environment, several mobile computers collectively form the distributed system of interest. These mobile computers may communicate in an ad hoc manner through networks that are formed on demand. Such communication may occur over wired (fixed) or wireless (ad hoc) networks. Distributed database systems are made up of mobile nodes with peer-to-peer connections. These nodes are peers and may be replicated both for fault tolerance and dependability and to compensate for nodes that are currently disconnected. Several sites of the system must participate in transaction synchronization. Different transaction models [5] are available for the mobile computing environment, but data transmission between the base station (BS) and the mobile hosts (MHs) is not secure, which leads to data inconsistency as well as a large number of rejected transactions. Typical operating system security features such as memory and file protection, resource access control, and user authentication are not sufficient for a distributed environment. A key requirement in such an environment is to support and secure the communication of the mobile database. This paper focuses on security management processing for the MCTO (Multi-Check-out Timestamp Order) [2] model, using symmetric encryption and decryption [1] between the base station (BS) and the mobile host (MH) with the aim of achieving secure data management at the mobile host.
If an attack occurs on a database system, damage evaluation must be performed as soon as the attack is identified. If it is not performed soon after the attack, the initial damage will spread to other parts of the database via valid transactions, eventually resulting in denial of service. As more and more data items become affected, the spread of damage accelerates. Damage assessment is a complicated task due to the intricate transaction relationships among distributed sites: for the assessment, the logs must be checked thoroughly for the effects of the attack. Damage recovery [6] can follow either a "Coldstart" or a "Warmstart" method. This paper focuses on a system that uses the "Coldstart" method for damage assessment and recovery. The proposed system uses a DAA (Damage Assessment Algorithm) [3] to detect the spread of malicious transactions in a distributed replicated database system. After the affected transactions are detected, they are recovered using the recovery procedure.
II. THE PROPOSED MODEL
The architecture of the proposed system is shown in Fig.1. The mobile host in the mobile network first sends an encrypted request to the fixed proxy server. The fixed proxy server updates the data, and the result is returned, encrypted, to the mobile network.
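The paper does not name a particular cipher beyond symmetric encryption/decryption [1], so the request/result round trip can be sketched with a toy XOR-keystream scheme. This is an illustration only: the function names and the keystream construction are our assumptions, and a real deployment would use a vetted symmetric cipher such as AES.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream from the shared key by hashing a
    # counter (illustrative only, not a production cipher).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, data: bytes) -> bytes:
    # XOR the data with the keystream; applying it twice restores the data,
    # so the same function serves as the decryption algorithm at the MH.
    return bytes(d ^ k for d, k in zip(data, keystream(key, len(data))))

decrypt = encrypt  # symmetric: same shared key at the BS and the MH

# Mobile host encrypts a request; the fixed proxy server decrypts it,
# updates the database, and returns the result encrypted the same way.
key = b"shared-secret"
request = encrypt(key, b"SELECT balance FROM accounts")
print(decrypt(key, request))  # b'SELECT balance FROM accounts'
```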
[Fig.1: the mobile network passes an encrypted request through an Enc/Dec layer to the fixed proxy server on the fixed network, which runs DAA Pass1/Pass2, updates the database, and returns the result, which is decrypted at the mobile host. DAA: Damage Assessment Algorithm; Enc: encryption algorithm; Dec: decryption algorithm.]
Fig.1 Architecture of the proposed system
The proposed model includes encryption and decryption algorithms located at both the BS and the MH(s), as shown in Fig.1. The encryption algorithm runs when data is transferred; the decryption algorithm runs when encrypted data is received. The DAA (Damage Assessment Algorithm) on the fixed network uses the local logs.
The proposed system uses the MCTO model [2]. The model has two types of networks: the fixed network and the mobile network. In the fixed network, all sites are logically organized in the form of a two-dimensional grid structure. For example, if the network consists of twenty-five sites, they are logically organized as a 5 x 5 grid. Each site has a master data file.
A. Diagonal Replication on Grid (DRG) Technique
For replication, the proposed system uses the DRG [4] technique. In the fixed network, the data file is replicated to the diagonal sites, while in the ad hoc network the data file is replicated asynchronously at only one site, the most frequently visited one. For example, in a 5 x 5 grid, the same file is replicated to s(1,1), s(2,2), s(3,3), s(4,4), and s(5,5). The "commonly visited site" is defined as the site that most frequently requests the same data at the fixed network (the commonly visited sites can either be given by a user or selected automatically from a log file/database at each centre).
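The diagonal placement rule above can be sketched as a one-liner (a minimal illustration; the function name is ours, not the paper's):

```python
def diagonal_replica_sites(n: int) -> list:
    """Sites s(i, i) of an n x n grid that receive a replica under DRG."""
    return [(i, i) for i in range(1, n + 1)]

# For the 5x5 grid in the example above:
print(diagonal_replica_sites(5))  # [(1, 1), (2, 2), (3, 3), (4, 4), (5, 5)]
```

Replicating only to the n diagonal sites keeps the replica count linear in the grid side length rather than in the total number of sites.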
B. Damage evaluation protocol
The purpose of this model [3] is to provide an efficient method for assessing the effects of a malicious transaction in a fully distributed replicated database system. The model is based on the following assumptions. The local schedules are logged at each site, and the attacker cannot destroy the log. The extended log is assumed to include all read operations in addition to the write operations. The attacking transactions are identified. Blind writes are not permitted; that is, if a transaction writes a data item, it is assumed to read the value of that item first.
The following transaction classifications are used in the model: malicious, authentic, affected, bad, and unaffected transactions. Consider a distributed database system consisting of the two logs shown in Fig.3, which are replicated at different sites. Since this is a replicated distributed database system, any change in one log propagates to every site where that log is replicated.
Site 1               Site 2
R1(a) W1(x)          R2.2(c)
R2.1(b) W2.1(b)      R4.2(c) W4.2(c)
R4.1(d)              R5(g)
R6.1(d) W6.1(d)      R6.2(h) W6.2(h)
R7(d)                R8(g) W8(g)
R12.1(e) W12.1(e)    R9.2(g) W9.2(g)
R14.1(a)             R12.2(k)
R25(d) W25(y)        R21.2(g) W21.2(g)
Fig.3 Effect of transaction distribution
In Fig.3, T6 at Site 1 and T8 at Site 2 are the malicious transactions. Let us first consider the log at Site 1, in which T6 is marked as the first malicious transaction. When the DAA procedure is executed at Site 1, the affected transactions are detected and added to the undo list (the list of transactions whose effects must be removed from the database). For example, transactions T7 and T25 are affected, since both read the damaged data item d, which was updated by T6. At Site 2, T8 is marked as malicious; hence T9 and T21 are detected as affected. After the affected transactions have been assessed, they are all repaired using the damage recovery procedure.
i) Proposed Algorithm for Damage Assessment
Input: the update log, the read log, and the set of malicious transactions M.
Output: the set of bad transactions TB, the set of dirty items D, and the set of global bad transactions GB.
1. Initializations: TB = {}, tmp_bad_list = {}, D = {}, tmp_dirty_list = {}.
2. Find the first malicious transaction committed in the log.
3. For each transaction Ti, read its entry Pij(x) in the log.
   3.1 If Ti is in M then
         If Pij is a write operation then
           D = D U {x}  /* Add the data item to the dirty list */
   3.2 Else
       3.2.1 Case Pij is a read operation:
             If x is in D then
               tmp_bad_list = tmp_bad_list U {Ti}  /* Add Ti to tmp_bad_list */
       3.2.2 Case Pij is a write operation:
             tmp_dirty_list = tmp_dirty_list U {x}  /* Add x to tmp_dirty_list */
       3.2.3 Case Pij is an abort operation:
             Delete Ti from tmp_bad_list.
             Delete x from tmp_dirty_list.
       3.2.4 Case Pij is a commit operation:
             If Ti is in tmp_bad_list then
               Move Ti from tmp_bad_list to TB.
               Move all the data items of Ti from tmp_dirty_list to D.
The algorithm starts by adding the data items updated by malicious transactions to the dirty list. It then scans the log for presumed-bad transactions and, when they commit, adds the data items they updated to the dirty list as well.
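Assuming the site's log is available as a sequence of (transaction, operation, item) entries in schedule order, the local assessment pass can be sketched in Python. The names and the log encoding are ours, and only the local pass is shown; the aggregation of the global bad set GB across sites is not.

```python
def damage_assessment(log, malicious):
    """Sketch of the Damage Assessment Algorithm (local pass).
    `log` holds (tid, op, item) entries, with op one of 'R' (read),
    'W' (write), 'A' (abort), 'C' (commit); item is None for abort/commit.
    Returns (bad transactions TB, dirty items D)."""
    TB, D = set(), set()      # confirmed bad transactions and dirty items
    tmp_bad = set()           # presumed-bad transactions, not yet committed
    tmp_dirty = {}            # tid -> items it wrote while presumed bad
    for tid, op, item in log:
        if tid in malicious:
            if op == "W":
                D.add(item)            # malicious writes dirty the item
        elif op == "R" and item in D:
            tmp_bad.add(tid)           # reading a dirty item taints Ti
        elif op == "W" and tid in tmp_bad:
            tmp_dirty.setdefault(tid, set()).add(item)
        elif op == "A":
            tmp_bad.discard(tid)       # aborted transactions leave no damage
            tmp_dirty.pop(tid, None)
        elif op == "C" and tid in tmp_bad:
            TB.add(tid)                # commit confirms the damage
            D |= tmp_dirty.pop(tid, set())
    return TB, D

# Site 1 of Fig.3, with commit records added for illustration:
log = [("T6", "W", "d"), ("T7", "R", "d"), ("T7", "C", None),
       ("T25", "R", "d"), ("T25", "W", "y"), ("T25", "C", None)]
print(damage_assessment(log, {"T6"}))  # ({'T7', 'T25'}, {'d', 'y'})
```

As in the paper's example, T7 and T25 land in the bad set because both read the dirty item d, and T25's write makes y dirty in turn.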
ii) Proposed Algorithm for Damage Recovery
Input: the update log, the set of malicious transactions M, and the set of affected transactions A.
Output: the set of recovered transactions whose effects have been undone, UN.
Intermediate input/output: the set of bad transactions TB, the set of dirty items D, difference, temp.
1. Move to the position in the log where the first malicious transaction appeared.
2. For each transaction Ti in M:
   temp ← old value of item x.
3. For each affected transaction Ti in A, read its entry Pij(x) in the log.
   3.1 If x is a numeric value:
       Read the old and new values of item x from the log.
       Calculate difference ← new value of x - old value of x.
   3.2 Else:
       Read the old and new values of item x from the log.
4. For every transaction Ti in the update log, read its entry Pij(x) in the log.
   4.1 If Ti is in A:
       4.2 If x is a numeric value:
           Old value of x ← temp.
           New value of x ← temp - difference.
       4.3 Else:
           Copy temp to the new value of x.
5. Update the original table's state according to the recovered transactions in the log.
The algorithm begins by scanning every affected transaction in the update log. If the item updated by an affected transaction is a numeric value, it calculates the difference between the old and new values of the item and subtracts that difference from the item's value. If the item is a character value, the old value is copied back over the new value. Finally, the changes are applied to the tables to bring the database back to a running state.
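Under one reading of steps 3-4, undoing a numeric write amounts to subtracting the difference the affected write introduced, while a character write is undone by restoring the logged old value. A compact sketch, with our own encoding of the update log as (transaction, item, old value, new value) tuples:

```python
def damage_recovery(update_log, affected):
    """Sketch of the damage-recovery procedure (our reading of the paper's
    steps 3-4). `update_log` holds (tid, item, old_value, new_value) tuples
    in commit order; `affected` is the bad-transaction set produced by
    damage assessment. Returns a dict item -> recovered value."""
    recovered = {}
    for tid, item, old, new in update_log:
        if tid not in affected:
            continue
        if isinstance(new, (int, float)):
            # Numeric item: subtract the change the bad write introduced.
            difference = new - old
            current = recovered.get(item, new)
            recovered[item] = current - difference
        else:
            # Character item: restore the pre-attack value directly.
            recovered[item] = old
    return recovered

# Undo T25's bad write of y (old value 100, new value 140):
print(damage_recovery([("T25", "y", 100, 140)], {"T25"}))  # {'y': 100}
```

Subtracting the difference, rather than blindly restoring the old value, is what lets numeric items keep legitimate updates applied after the bad write.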
III. IMPLEMENTATION ISSUES
A. System Design
The system uses VB as the front end and Oracle9i as the back end. The proposed security management system works in two parts. In the first part, the mobile user retrieves the desired information from the distributed database system, connecting to it over wireless communication. During transmission, the request or result may be tampered with; to prevent this, the communication between the mobile user and the distributed database system is secured using the MD5 algorithm. The database is replicated across the distributed database system using Oracle's MMR (Multi-Master Replication) so that consistent data is available at different sites.
In the second part, the Damage Assessment and Repair module is developed. The main component of this module is the log, which captures the different operations on the database. Two types of log are required: a read log and an update log. The prototype is implemented on top of an Oracle server. Since the Oracle redo log structure is proprietary and difficult to handle, read and write information is maintained manually. In particular, a proxy mediates every user transaction, and specific triggers log the write operations: a trigger is associated with each user table, and all write operations on that table are recorded in the log table.
Oracle triggers cannot capture every read operation: they can capture reads on a data item that is updated or deleted, but not read-only operations. To capture every read operation, the read set is extracted from the SQL statement and stored in the read log table.
B. Performance Issues
The performance of the first part, i.e. data retrieval, can be measured by the response time for user transactions: the time from when the mobile user enters a query to when the result is returned. It depends on the number of records the user wants to retrieve.
The performance of the attack recovery subsystem can be measured by the average repair time, which indicates how efficiently the subsystem repairs damage. The average repair time depends on the number of affected transactions.
IV. CONCLUSIONS AND FUTURE WORK
In this paper we have developed a mobile transaction model that captures the data and movement nature of mobile transactions. The model is based on multi-check-out and describes mobile transaction management by timestamp order. Encryption and decryption algorithms are used for secure data transmission between the base station (BS) and the mobile hosts (MHs). After an attack, the DAA procedures are used to assess and repair the effects of malicious transactions.
As future work, an agent can be deployed on the fixed proxy server. When the mobile client is disconnected in the MCTO model, the result of the transaction is not lost but is stored with the mobile agent. When the transaction is completed, the agent returns and delivers the result to the user; if the user is disconnected, it waits until the user reconnects. The agent can also be used to maintain serializability in multi-check-out mode, applying timestamp ordering to serialize the mobile transactions at the fixed proxy server.
REFERENCES
1. Abdul-Mehdi, Z. T. and Mahmod, R., "Security Management Model for Mobile Databases Transaction Management," 3rd International Conference on Information and Communication Technologies: From Theory to Applications, 7-11 April 2008, pp. 1-6.
2. Mehdi, Z. T., Mamat, A. B., Ibrahim, H., and Dirs, Mustafa M., "Multi-Check-Out Timestamp Order Technique (MCTO) for Planned Disconnections in Mobile Database," 2nd IEEE International Conference on Information & Communication Technologies: From Theory to Applications, 24-28 April 2006, Damascus, Syria, Vol. 1, pp. 491-498.