This document introduces distributed computing and provides an overview of key concepts. It defines distributed computing as autonomous processors communicating over a network with no shared memory or clock. Characteristics of distributed systems like geographical separation and heterogeneity are discussed. The document outlines challenges in areas like communication mechanisms, processes, naming, synchronization, and transparency in distributed systems. It also summarizes different execution models in distributed computing.
This presentation elaborates on multiprocessor operating systems, multiprocessor hardware, multiprocessing models and frameworks, multiprocessor synchronization, multiprocessor scheduling, applications of multiprocessing systems, advantages, disadvantages and solutions, and new trends in multiprocessing.
Parallel processing architectures allow for simultaneous computation across multiple processing elements. There are four main types of parallel architectures: single instruction single data (SISD), single instruction multiple data (SIMD), multiple instruction single data (MISD), and multiple instruction multiple data (MIMD). MIMD systems are the most common and can have either shared or distributed memory. Effective parallel programming requires approaches like message passing or shared memory models to facilitate communication between processing elements.
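The message-passing model mentioned above can be sketched in a few lines: two processes with no shared state communicate only through explicit queues. This is a minimal illustration, not tied to any particular system in the text; the worker function and the doubling workload are assumptions for demonstration.

```python
# Message passing between processes: a worker receives items over an
# explicit queue, transforms them, and sends results back over another
# queue. No memory is shared between the two processes.
from multiprocessing import Process, Queue

def worker(inbox: Queue, outbox: Queue) -> None:
    """Receive numbers, double them, and send the results back."""
    while True:
        item = inbox.get()
        if item is None:          # sentinel: no more work
            break
        outbox.put(item * 2)

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    p = Process(target=worker, args=(inbox, outbox))
    p.start()
    for n in [1, 2, 3]:
        inbox.put(n)
    inbox.put(None)               # tell the worker to stop
    results = [outbox.get() for _ in range(3)]
    p.join()
    print(results)                # [2, 4, 6]
```

In a shared-memory model the same computation would instead update a common data structure under a lock; the queue-based version trades that synchronization for explicit communication.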
The document discusses processes, threads, and synchronization techniques in operating systems. It covers:
- Processes allow pseudo-parallelism through rapid switching between programs by the CPU.
- Threads are lightweight processes that share resources like memory within a process and allow greater parallelism than processes alone.
- Synchronization techniques like semaphores and mutexes are needed to control access to shared resources and prevent race conditions when multiple threads access the same data concurrently.
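The mutex point in the list above can be shown concretely: several threads increment one shared counter, and a lock serializes the read-modify-write so no update is lost. This is a minimal sketch; the counter and thread counts are illustrative assumptions.

```python
# Race-condition fix with a mutex: four threads each increment a shared
# counter 10,000 times. The Lock ensures the read-modify-write of the
# counter is atomic, so the final value is exactly 40,000.
import threading

counter = 0
lock = threading.Lock()

def increment(times: int) -> None:
    global counter
    for _ in range(times):
        with lock:            # only one thread updates at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000; without the lock, interleaved updates could be lost
```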
Field-programmable gate array design of image encryption and decryption usin... (IJECEIAES)
This article presents a simple and efficient masking technique based on Chua chaotic system synchronization. It feeds the masked signal back to the master system and uses it to drive the slave system for synchronization. The proposed system is implemented on a field-programmable gate array (FPGA) device using the Xilinx System Generator tool. To achieve synchronization, the Pecora-Carroll identical cascading synchronization approach is used. The transmitted signal is mixed or masked with a chaotic carrier and can be recovered by the receiver without any distortion or loss. For different images, the security analysis is performed using the histogram, correlation coefficient, and entropy. In addition, FPGA hardware co-simulation on a Xilinx Artix-7 xc7a100t-1csg324 device was used to verify the encryption and decryption of the images in hardware.
Comparing ICH-Leach and Leach Descendants on Image Transfer using DCT (IJECEIAES)
The development and miniaturization of CMOS devices (microphones and cameras) in recent years have allowed the creation of wireless multimedia sensor networks (WMSNs). Transferring multimedia content through such a network has therefore become an important field of research: recorded multimedia data is transmitted wirelessly from node to node until it reaches the sink (base station). Routing protocols make a big contribution to this process because they help optimize each node's resource usage, yet the Leach protocol was designed only to minimize the energy consumption of the network. The goal of this paper is to compare our protocol with other Leach descendants when transferring images. At the application layer, we applied JPEG compression in the frequency domain to images before sending them over the network. Readers will find statistics concerning the lifetime of the network, the energy consumption, and, most importantly, the received images. We used the Castalia framework to simulate realistic transmission conditions, and the simulation results proved the efficiency of our protocol, which prolongs the lifetime of the network and transmits more images with better quality than the other protocols.
This document provides an overview of key concepts in mechatronics engineering. It defines mechatronic systems as integrated electronic-mechanical systems combining sensors, actuators and digital electronics. Important life cycle factors for mechatronic systems include delivery, reliability, maintainability, serviceability, upgradeability and disposability. Modeling represents real systems using mathematical equations and logic. Key properties for the design process are strength, reliability, maintainability and manufacturability. Applications of mechatronic systems include fuzzy-based washing machines, auto-focus cameras, engine management systems and autonomous robots. Main operator risks are trapping, entanglement, impact and ejection.
From Simulation to Online Gaming: the need for adaptive solutions (Gabriele D'Angelo)
In many fields, such as distributed simulation and online gaming, the missing piece is adaptivity. There is a strong need for dynamic and adaptive solutions that can improve performance and react to problems.
Checkpoint and recovery protocols are commonly used in distributed applications to provide fault tolerance. A distributed system may take checkpoints from time to time so that, in case of failure, it can roll back to a checkpoint where global consistency is preserved. Checkpointing is a fault-tolerance technique for recovering from faults and restarting jobs quickly, and checkpointing algorithms for distributed systems have been under study for years.
Checkpointing and rollback recovery are widely used techniques that allow a distributed computation to make progress in spite of a failure. There are two fundamental approaches to checkpointing and recovery. In the asynchronous approach, processes take their checkpoints independently; taking a checkpoint is very simple, but the absence of a recent consistent global checkpoint may force a rollback of the computation. The synchronous approach assumes that a single process, distinct from the application processes, invokes the checkpointing algorithm periodically to determine a consistent global checkpoint.
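The synchronous approach can be sketched as a toy coordinator that asks every process to record its state, with the recorded states together forming a consistent global checkpoint to roll back to. All class and method names here are illustrative assumptions; a real protocol (e.g. Chandy-Lamport) must also capture messages in transit, which this sketch omits.

```python
# Toy synchronous checkpointing: a coordinator (not an application
# process) records every process's state, then restores it on failure.
class Proc:
    def __init__(self, pid: int) -> None:
        self.pid = pid
        self.state = 0            # the application state to be saved

    def step(self) -> None:
        self.state += 1           # stand-in for real computation

    def take_checkpoint(self) -> dict:
        return {"pid": self.pid, "state": self.state}

class Coordinator:
    def __init__(self, processes: list) -> None:
        self.processes = processes
        self.global_checkpoint = None

    def checkpoint_all(self) -> None:
        # Quiesce every process and record its state; together these
        # form one consistent global checkpoint.
        self.global_checkpoint = [p.take_checkpoint() for p in self.processes]

    def rollback(self) -> None:
        for p, saved in zip(self.processes, self.global_checkpoint):
            p.state = saved["state"]

procs = [Proc(pid) for pid in range(3)]
coord = Coordinator(procs)
for p in procs:
    p.step()
coord.checkpoint_all()            # global checkpoint with every state == 1
for p in procs:
    p.step()                      # more work, then a failure occurs...
coord.rollback()                  # ...so restore the last checkpoint
print([p.state for p in procs])   # [1, 1, 1]
```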
This presentation explains the OSI model in the context of TCP/IP (Transmission Control Protocol/Internet Protocol), with details of all seven layers:
Application layer
Presentation Layer
Session Layer
Transport Layer
Network Layer
Data Link Layer
Physical Layer
This paper focuses on developing a platform that helps researchers create, verify, and implement their machine learning algorithms on a humanoid robot in a real environment. The presented platform is durable, easy to fix and upgrade, fast to assemble, and cheap. Using this platform, we also present an approach to the humanoid balancing problem that uses only a fully connected neural network for real-time balancing. The method consists of three main steps: 1) different types of sensors detect the current position of the body and generate the input for the neural network, 2) the fully connected neural network produces the correct output, and 3) servomotors make the movements that change the current position to the new one. In field tests, the humanoid robot can balance on a moving platform that tilts up to 10 degrees in any direction. Finally, we show that with our platform we can run and compare different neural networks under similar conditions, which is important for researchers doing analyses in machine learning and robotics.
SA_IT241_5
• Compromise between access lists and capability lists.
• Each object has a list of unique bit patterns, called locks.
• Each domain has a list of unique bit patterns, called keys.
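The lock-key scheme above can be sketched in a few lines: each object carries a set of locks (unique bit patterns), each domain a set of keys, and a domain may access an object only if it holds a key matching one of the object's locks. The object names, domain names, and bit patterns below are illustrative assumptions.

```python
# Lock-key access control: access is granted iff the intersection of the
# domain's keys and the object's locks is non-empty.
object_locks = {
    "file_a": {0b1010, 0b0111},
    "printer": {0b1100},
}
domain_keys = {
    "user_domain": {0b1010},
    "admin_domain": {0b1010, 0b1100},
}

def can_access(domain: str, obj: str) -> bool:
    """A domain may access an object iff one of its keys matches a lock."""
    return bool(domain_keys[domain] & object_locks[obj])

print(can_access("user_domain", "file_a"))   # True: key 0b1010 matches a lock
print(can_access("user_domain", "printer"))  # False: no matching key
```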
This thesis presents research in the field of distributed systems. We present the dynamic organization of geo-distributed edge nodes into micro data-centers forming micro clouds that can cover any arbitrary area and expand capacity, availability, and reliability. The design takes cloud organization as an influence, with adaptations for a different environment, a clear separation of concerns, and a native application model that can leverage the newly formed system. With this separation of concerns, the edge-native application model, and a unified node organization, we move toward the idea of edge computing as a service, like any other utility in cloud computing. We also give formal models for all protocols used to create such a system.
1) Researchers have built the first computer entirely using carbon nanotube field-effect transistors (CNFETs). This overcomes substantial imperfections in CNT technology that had previously prevented complex circuits.
2) The CNT computer runs a basic operating system capable of multitasking, demonstrated by concurrently running counting and integer sorting programs. It also executes 20 instructions from the commercial MIPS instruction set.
3) The CNT computer implements a "SUBNEG" instruction that subtracts two values and conditionally branches based on the sign of the result. Its operation is controlled by a 3-clock cycle and uses 178 CNFETs made of 10-200 carbon nanotubes each.
The document discusses the history and evolution of operating systems over four generations:
1) Vacuum tubes (1945-1955): Large, slow computers programmed in machine language
2) Transistors and batch systems (1955-1965): Mainframes with batch processing via punch cards
3) Integrated circuits and multiprogramming (1965-1980): OS/360 introduced multiprogramming and spooling to improve efficiency
4) Personal computers (1980-present): Operating systems adapted to smaller, cheaper personal computers like CP/M, DOS, Windows
An operating system (OS) is software that manages computer hardware and software. Common desktop operating systems include Windows, Mac OS X, and Linux.
The document describes various architectures for multiple processor systems, including shared memory, message passing, and distributed systems. It discusses topics like uniform memory access (UMA) vs non-uniform memory access (NUMA), synchronization issues, and different communication methods between processors like remote procedure calls. The document also covers distributed computing concepts such as distributed shared memory, middleware, networking protocols, and file-based middleware.
This document provides an introduction to distributed systems. It discusses why distributed systems are developed, defines what a distributed system is, and provides examples. It then compares different types of distributed systems and networked operating systems. The rest of the document outlines various concepts, issues, and algorithms related to distributed systems, including advantages and disadvantages over centralized systems, software concepts, mutual exclusion, synchronization, and clock synchronization.
This document discusses multiprocessor and multicomputer systems. It defines a multiprocessor system as having more than one processor that shares common memory, while a multicomputer has more than one processor each with local memory. Processors may be closely coupled on a shared bus or loosely coupled distributed on a network. The document also covers Flynn's taxonomy of computer architectures and examples of single instruction single data stream (SISD), single instruction multiple data stream (SIMD), multiple instruction single data stream (MISD), and multiple instruction multiple data stream (MIMD) systems.
Comparative Study of Neural Networks Algorithms for Cloud Computing CPU Sched... (IJECEIAES)
Cloud computing is the most powerful computing model of our time. While the major IT providers and consumers are competing to exploit the benefits of this computing model to grow their profits, most cloud computing platforms are still built on operating systems that use basic CPU (central processing unit) scheduling algorithms lacking the intelligence needed for such an innovative computing model. Correspondingly, this paper presents the benefits of applying artificial neural network algorithms to enhance CPU scheduling for the cloud computing model. Furthermore, a set of characteristics and theoretical metrics is proposed for comparing the different artificial neural network algorithms and finding the most accurate algorithm for cloud computing CPU scheduling.
This document provides an overview of a computer system architecture course. It outlines the chapters that will be covered in the class, including digital logic circuits, digital components, data representation, register transfer and microoperations, basic computer organization and design, programming the basic computer, microprogrammed control, central processing unit, pipeline and vector processing, computer arithmetic, input-output organization, memory organization, and multiprocessors. The class aims to teach students how a computer works by building a "Mano machine" and learning about its hardware and software architecture in detail. Students will complete homework problems and have the option to complete a design report. Their grade will be based on homework, exams, an optional report, and class participation.
Detailed Simulation of Large-Scale Wireless Networks (Gabriele D'Angelo)
WiFra is a new framework for the detailed simulation of very large-scale wireless networks. It is based on the parallel and distributed simulation approach and provides high scalability in terms of the size of the simulated networks and the number of execution units running the simulation. To improve the performance of distributed simulation, additional techniques are proposed; their aim is to reduce the communication overhead and to maintain a good level of load balancing. Simulation architectures composed of low-cost Commercial-Off-The-Shelf (COTS) hardware are specifically supported by WiFra. The framework dynamically reconfigures the simulation, monitoring the performance of each part of the execution architecture and dealing with unpredictable fluctuations in the available computation power and communication load on the individual execution units. A fine-grained model of the 802.11 DCF protocol has been used for the performance evaluation of the proposed framework. The results demonstrate that the distributed approach is suitable for the detailed simulation of very large-scale wireless networks.
This document discusses computer architecture and organization concepts such as cache memory and input/output in computer systems. It defines computer architecture and organization, describes the major components of the CPU and their functions. It also explains the concept of interconnection within a computer system including bus interconnection, describes different types of cache memory mapping (direct, associative, set-associative) and cache initialization. Finally, it defines input/output modules, lists common I/O devices and describes I/O buses and interface modules.
ACPI and SMI handlers: some limits to trusted computing (UltraUploader)
This document discusses limits to trusted computing related to the System Management Mode (SMM) handler and the Advanced Configuration and Power Interface (ACPI) tables. It presents an original mechanism for attackers to alter the SMI handler to escalate privileges. It also describes how rogue functions could be injected into ACPI tables to be triggered by an external stimulus like unplugging and replugging the power supply. The paper explores countermeasures to prevent such modifications.
Secure communication through synchronization between two identical chaotic systems has recently gained a lot of interest. To implement a robust secure system based on synchronization, there is always a need to generate new discrete dynamical systems and investigate their performance in terms of the amount of randomness they exhibit and their ability to achieve synchronization smoothly. In this work, a new chaotic system, named Nahrain, is proposed and tested for possible use in secure transmission via chaos synchronization as well as in cryptography applications. The performance of the proposed chaotic system is evaluated using the 0-1 test, while the NIST suite is used to check its statistical randomness properties. Nonlinear control laws are used to verify the synchronization of the master and slave parts of the proposed system. The simulation results show that the Nahrain system exhibits chaotic behavior and is synchronizable, while the equivalent binary sequence of the system has excellent statistical randomness properties. The numerical results obtained using MATLAB were 0.9864 for the 0-1 test, 0.4202 for the frequency test, and 0.4311 for the frequency test within a block. As a result, the new proposed system can be used to develop efficient encryption and synchronization algorithms for multimedia secure transmission applications.
The document discusses input/output (I/O) systems and their impact on computer performance. It begins by recapping parallel processing architectures and noting that I/O performance increasingly limits overall system performance. It then describes I/O systems, including storage devices like magnetic disks. Key metrics for evaluating I/O performance like response time and throughput are introduced. The document outlines techniques for processor-I/O interface like memory mapping and interrupts. It emphasizes that the fastest computer is only as good as its slowest external component based on Amdahl's law.
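The Amdahl's-law point above can be made concrete with a short calculation: if a fixed fraction of the workload is I/O that cannot be accelerated, it caps the overall speedup no matter how fast the CPU becomes. The 10% I/O fraction below is an assumed figure chosen for illustration.

```python
# Amdahl's law: overall speedup when only a fraction of the work is
# accelerated. The unimproved remainder (here, I/O) bounds the result.
def amdahl_speedup(enhanced_fraction: float, factor: float) -> float:
    """Overall speedup when `enhanced_fraction` of the time runs `factor`x faster."""
    return 1.0 / ((1.0 - enhanced_fraction) + enhanced_fraction / factor)

# Assume 90% of the time is computation we can accelerate; 10% is fixed I/O.
for cpu_factor in (2, 10, 1000):
    print(cpu_factor, round(amdahl_speedup(0.9, cpu_factor), 2))
# Even an infinitely fast CPU yields at most 1 / 0.1 = 10x overall,
# which is why the slowest external component dominates.
```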
This document provides an overview of wireless sensor networks (WSNs). It discusses the background and components of sensor nodes, including sensors, processors, radio transceivers, and power sources. The document also covers sensor network architectures like clustered and layered networks. It describes data dissemination in WSNs using data diffusion models and explores concepts like interests, gradients, and reinforcement. Finally, it introduces the problem of gathering sensor data efficiently to prolong network lifetime and discusses optimizing the energy-delay metric.
MODELLING AND SIMULATION OF 128-BIT CROSSBAR SWITCH FOR NETWORK-ON-CHIP (VLSICS Design)
It is widely accepted that Network-on-Chip (NoC) represents a promising solution for forthcoming complex embedded systems. Current SoC solutions are built from heterogeneous hardware and software components integrated around a complex communication infrastructure, and the crossbar is a vital component of any NoC router. In this work, we have designed a crossbar interconnect for serial-bit data transfer and 128-bit parallel data transfer, and we compare the power and delay of serial and parallel transfers through the crossbar switch. The design is implemented in 0.18 micron TSMC technology. The bit rate achieved in serial transfer is low compared with parallel data transfer. The simulation results show that the critical path delay is lower for parallel-bit data transfer, but the power dissipation is higher.
In this presentation OSI Model of TCP/IP Explained with details of all seven layers of Transmission Control Protocol/ Internet Protocol.
Application layer
Presentation Layer
Session Layer
Transport Layer
Network Layer
Datalink Layer
Physical Layer
This paper is focused on developing a platform that
helps researchers to create verify and implement their
machine learning algorithms to a humanoid robot in real
environment. The presented platform is durable, easy to fix,
upgrade, fast to assemble and cheap. Also, using this platform
we present an approach that solves a humanoid balancing
problem, which uses only fully connected neural network as a
basic idea for real time balancing. The method consists of 3
main conditions: 1) using different types of sensors detect the
current position of the body and generate the input
information for the neural network, 2) using fully connected
neural network produce the correct output, 3) using servomotors make movements that will change the current position
to the new one. During field test the humanoid robot can
balance on the moving platform that tilts up to 10 degrees to
any direction. Finally, we have shown that using our platform
we can do research and compare different neural networks in
similar conditions which can be important for the researchers
to do analyses in machine learning and robotics.
SA_IT241_5
• Compromise between access lists and capability lists. • Each object has list of unique bit patterns, called locks. • Each domain as list of unique bit patterns called keys.
This thesis presents research in the field of distributed systems. We present the dynamic organization of geo- distributed edge nodes into micro data-centers forming micro clouds to cover any arbitrary area and expand capacity, availability, and reliability. A cloud organization is used as an influence with adaptations for a different environment with a clear separation of concerns, and native applications model that can leverage the newly formed system. With the separation of concerns setup, edge-native applications model, and a unified node organization, we are moving towards the idea of edge computing as a service, like any other utility in cloud computing. We also give formal models for all protocols used for the creation of such a system.
1) Researchers have built the first computer entirely using carbon nanotube field-effect transistors (CNFETs). This overcomes substantial imperfections in CNT technology that had previously prevented complex circuits.
2) The CNT computer runs a basic operating system capable of multitasking, demonstrated by concurrently running counting and integer sorting programs. It also executes 20 instructions from the commercial MIPS instruction set.
3) The CNT computer implements a "SUBNEG" instruction that subtracts two values and conditionally branches based on the sign of the result. Its operation is controlled by a 3-clock cycle and uses 178 CNFETs made of 10-200 carbon nanotubes each.
The document discusses the history and evolution of operating systems over four generations:
1) Vacuum tubes (1945-1955): Large, slow computers programmed in machine language
2) Transistors and batch systems (1955-1965): Mainframes with batch processing via punch cards
3) Integrated circuits and multiprogramming (1965-1980): OS/360 introduced multiprocessing and spooling to improve efficiency
4) Personal computers (1980-present): Operating systems adapted to smaller, cheaper personal computers like CP/M, DOS, Windows
An operating system (OS) is software that manages computer hardware and software. Common desktop operating systems include Windows, Mac OS X, and Linux.
The document describes various architectures for multiple processor systems, including shared memory, message passing, and distributed systems. It discusses topics like uniform memory access (UMA) vs non-uniform memory access (NUMA), synchronization issues, and different communication methods between processors like remote procedure calls. The document also covers distributed computing concepts such as distributed shared memory, middleware, networking protocols, and file-based middleware.
This document provides an introduction to distributed systems. It discusses why distributed systems are developed, defines what a distributed system is, and provides examples. It then compares different types of distributed systems and networked operating systems. The rest of the document outlines various concepts, issues, and algorithms related to distributed systems, including advantages and disadvantages over centralized systems, software concepts, mutual exclusion, synchronization, and clock synchronization.
This document discusses multiprocessor and multicomputer systems. It defines a multiprocessor system as having more than one processor that shares common memory, while a multicomputer has more than one processor each with local memory. Processors may be closely coupled on a shared bus or loosely coupled distributed on a network. The document also covers Flynn's taxonomy of computer architectures and examples of single instruction single data stream (SISD), single instruction multiple data stream (SIMD), multiple instruction single data stream (MISD), and multiple instruction multiple data stream (MIMD) systems.
Comparative Study of Neural Networks Algorithms for Cloud Computing CPU Sched...IJECEIAES
Cloud Computing is the most powerful computing model of our time. While the major IT providers and consumers are competing to exploit the benefits of this computing model in order to thrive their profits, most of the cloud computing platforms are still built on operating systems that uses basic CPU (Core Processing Unit) scheduling algorithms that lacks the intelligence needed for such innovative computing model. Correspdondingly, this paper presents the benefits of applying Artificial Neural Networks algorithms in regards to enhancing CPU scheduling for Cloud Computing model. Furthermore, a set of characteristics and theoretical metrics are proposed for the sake of comparing the different Artificial Neural Networks algorithms and finding the most accurate algorithm for Cloud Computing CPU Scheduling.
This document provides an overview of a computer system architecture course. It outlines the chapters that will be covered in the class, including digital logic circuits, digital components, data representation, register transfer and microoperations, basic computer organization and design, programming the basic computer, microprogrammed control, central processing unit, pipeline and vector processing, computer arithmetic, input-output organization, memory organization, and multiprocessors. The class aims to teach students how a computer works by building a "Mano machine" and learning about its hardware and software architecture in detail. Students will complete homework problems and have the option to complete a design report. Their grade will be based on homework, exams, an optional report, and class participation.
Detailed Simulation of Large-Scale Wireless Networks - Gabriele D'Angelo
WiFra is a new framework for the detailed simulation of very large-scale wireless networks. It is based on the parallel and distributed simulation approach and provides high scalability in terms of size of simulated networks and number of execution units running the simulation. In order to improve the performance of distributed simulation, additional techniques are proposed. Their aim is to reduce the communication overhead and to maintain a good level of load-balancing. Simulation architectures composed of low-cost Commercial-Off-The-Shelf (COTS) hardware are specifically supported by WiFra. The framework dynamically reconfigures the simulation, taking care of the performance of each part of the execution architecture and dealing with unpredictable fluctuations of the available computation power and communication load on the single execution units. A fine-grained model of the 802.11 DCF protocol has been used for the performance evaluation of the proposed framework. The results demonstrate that the distributed approach is suitable for the detailed simulation of very-large scale wireless networks.
This document discusses computer architecture and organization concepts such as cache memory and input/output in computer systems. It defines computer architecture and organization, describes the major components of the CPU and their functions. It also explains the concept of interconnection within a computer system including bus interconnection, describes different types of cache memory mapping (direct, associative, set-associative) and cache initialization. Finally, it defines input/output modules, lists common I/O devices and describes I/O buses and interface modules.
ACPI and SMI handlers: some limits to trusted computing - UltraUploader
This document discusses limits to trusted computing related to the System Management Mode (SMM) handler and the Advanced Configuration and Power Interface (ACPI) tables. It presents an original mechanism for attackers to alter the SMI handler to escalate privileges. It also describes how rogue functions could be injected into ACPI tables to be triggered by an external stimulus like unplugging and replugging the power supply. The paper explores countermeasures to prevent such modifications.
Secure communication through synchronization between two identical chaotic systems has recently gained a lot of interest. To implement a robust secure system based on synchronization, there is always a need to generate new discrete dynamical systems and to investigate their performance in terms of the amount of randomness they provide and their ability to achieve synchronization smoothly. In this work, a new chaotic system, named Nahrain, is proposed and tested for possible use in secure transmission via chaos synchronization as well as in cryptography applications. The performance of the proposed chaotic system is evaluated using the 0-1 test, while the NIST suite is used to check its statistical randomness properties. Nonlinear control laws are used to verify the synchronization of the master-slave parts of the proposed system. The simulation results show that the Nahrain system exhibits chaotic behavior and is synchronizable, while the equivalent binary sequence of the system has excellent statistical randomness properties. The numerical results obtained using MATLAB were 0.9864 for the 0-1 test, 0.4202 for the frequency test, and 0.4311 for the frequency test within a block. As a result, the new proposed system can be used to develop efficient encryption and synchronization algorithms for multimedia secure transmission applications.
The document discusses input/output (I/O) systems and their impact on computer performance. It begins by recapping parallel processing architectures and noting that I/O performance increasingly limits overall system performance. It then describes I/O systems, including storage devices like magnetic disks. Key metrics for evaluating I/O performance like response time and throughput are introduced. The document outlines techniques for processor-I/O interface like memory mapping and interrupts. It emphasizes that the fastest computer is only as good as its slowest external component based on Amdahl's law.
This document provides an overview of wireless sensor networks (WSNs). It discusses the background and components of sensor nodes, including sensors, processors, radio transceivers, and power sources. The document also covers sensor network architectures like clustered and layered networks. It describes data dissemination in WSNs using data diffusion models and explores concepts like interests, gradients, and reinforcement. Finally, it introduces the problem of gathering sensor data efficiently to prolong network lifetime and discusses optimizing the energy-delay metric.
MODELLING AND SIMULATION OF 128-BIT CROSSBAR SWITCH FOR NETWORK-ON-CHIP - VLSICS Design
It is widely accepted that Network-on-Chip (NoC) represents a promising solution for forthcoming complex embedded systems. Current SoC solutions are built from heterogeneous hardware and software components integrated around a complex communication infrastructure. The crossbar is a vital component of any NoC router. In this work, we have designed a crossbar interconnect for serial-bit data transfer and 128-bit parallel data transfer, and we compare power and delay for serial and parallel data transfer through the crossbar switch. The design is implemented in 0.180 micron TSMC technology. The bit rate achieved in serial transfer is lower than in parallel data transfer. The simulation results show that the critical path delay is less for parallel data transfer, but its power dissipation is higher.
Determination of Equivalent Circuit parameters and performance characteristic... - pvpriya2
Includes the testing of an induction motor to draw its circle diagram, with a step-wise procedure and the corresponding calculations. Also explains the working and applications of the induction generator.
Accident detection system project report.pdf - Kamal Acharya
The rapid growth of technology and infrastructure has made our lives easier. The advent of technology has also increased traffic hazards, and road accidents take place frequently, causing huge loss of life and property because of poor emergency facilities. Many lives could have been saved if emergency services could get accident information and reach the scene in time. Our project provides an optimum solution to this drawback. A piezoelectric sensor can be used as a crash or rollover detector of the vehicle during and after a crash; from its signals, a severe accident can be recognized. In this project, when a vehicle meets with an accident or rolls over, the piezoelectric sensor immediately detects the signal. Then, with the help of a GSM module and a GPS module, the location is sent to the emergency contact, and after confirming the location, the necessary action is taken. If the person meets with a small accident and there is no serious threat to anyone's life, the alert message can be terminated by the driver via a provided switch, to avoid wasting the valuable time of the medical rescue team.
A High-Speed Communication System based on the Design of a Bi-NoC Router, ... - DharmaBanothu
The Network on Chip (NoC) has emerged as an effective
solution for intercommunication infrastructure within System on
Chip (SoC) designs, overcoming the limitations of traditional
methods that face significant bottlenecks. However, the complexity
of NoC design presents numerous challenges related to
performance metrics such as scalability, latency, power
consumption, and signal integrity. This project addresses the
issues within the router's memory unit and proposes an enhanced
memory structure. To achieve efficient data transfer, FIFO buffers
are implemented in distributed RAM and virtual channels for
FPGA-based NoC. The project introduces advanced FIFO-based
memory units within the NoC router, assessing their performance
in a Bi-directional NoC (Bi-NoC) configuration. The primary
objective is to reduce the router's workload while enhancing the
FIFO internal structure. To further improve data transfer speed,
a Bi-NoC with a self-configurable intercommunication channel is
suggested. Simulation and synthesis results demonstrate
guaranteed throughput, predictable latency, and equitable
network access, showing significant improvement over previous
designs.
Impartiality as per ISO/IEC 17025:2017 Standard - MuhammadJazib15
This document provides basic guidelines for the impartiality requirement of ISO/IEC 17025 and defines in detail how it is met.
Open Channel Flow: fluid flow with a free surface - Indrajeet sahu
Open Channel Flow: This topic focuses on fluid flow with a free surface, such as in rivers, canals, and drainage ditches. Key concepts include the classification of flow types (steady vs. unsteady, uniform vs. non-uniform), hydraulic radius, flow resistance, Manning's equation, critical flow conditions, and energy and momentum principles. It also covers flow measurement techniques, gradually varied flow analysis, and the design of open channels. Understanding these principles is vital for effective water resource management and engineering applications.
Applications of artificial Intelligence in Mechanical Engineering.pdf - Atif Razi
Historically, mechanical engineering has relied heavily on human expertise and empirical methods to solve complex problems. With the introduction of computer-aided design (CAD) and finite element analysis (FEA), the field took its first steps towards digitization. These tools allowed engineers to simulate and analyze mechanical systems with greater accuracy and efficiency. However, the sheer volume of data generated by modern engineering systems and the increasing complexity of these systems have necessitated more advanced analytical tools, paving the way for AI.
AI offers the capability to process vast amounts of data, identify patterns, and make predictions with a level of speed and accuracy unattainable by traditional methods. This has profound implications for mechanical engineering, enabling more efficient design processes, predictive maintenance strategies, and optimized manufacturing operations. AI-driven tools can learn from historical data, adapt to new information, and continuously improve their performance, making them invaluable in tackling the multifaceted challenges of modern mechanical engineering.
1. Chapter 1: Introduction
Ajay Kshemkalyani and Mukesh Singhal
Distributed Computing: Principles, Algorithms, and Systems
Cambridge University Press
A. Kshemkalyani and M. Singhal (Distributed Computing) Introduction CUP 2008 1 / 36
2. Definition
Autonomous processors communicating over a communication network
Some characteristics
I No common physical clock
I No shared memory
I Geographical separation
I Autonomy and heterogeneity
3. Distributed System Model
Figure 1.1: A distributed system connects processors (P), each with its own memory bank(s) (M), by a communication network (WAN/LAN).
4. Relation between Software Components
Figure 1.2: Interaction of the software components at each process. The distributed application runs over distributed software (middleware libraries) at the application layer, which in turn uses the operating system's network protocol stack (transport, network, and data link layers).
5. Motivation for Distributed Systems
Inherently distributed computation
Resource sharing
Access to remote resources
Increased performance/cost ratio
Reliability
I availability, integrity, fault-tolerance
Scalability
Modularity and incremental expandability
6. Parallel Systems
Multiprocessor systems (direct access to shared memory, UMA model)
I Interconnection network - bus, multi-stage switch
I E.g., Omega, Butterfly, Clos, Shuffle-exchange networks
I Interconnection generation function, routing function
Multicomputer parallel systems (no direct access to shared memory, NUMA model)
I bus, ring, mesh (with or without wraparound), hypercube topologies
I E.g., NYU Ultracomputer, CM* Connection Machine, IBM Blue Gene
Array processors (colocated, tightly coupled, common system clock)
I Niche market, e.g., DSP applications
7. UMA vs. NUMA Models
Figure 1.3: Two standard architectures for parallel systems. (a) Uniform memory access (UMA) multiprocessor system. (b) Non-uniform memory access (NUMA) multiprocessor. In both architectures, the processors (P) and memory units (M) are linked by an interconnection network, and the processors may locally cache data from memory.
9. Omega Network
n processors, n memory banks
log n stages, with n/2 switches of size 2×2 in each stage
Interconnection function (a perfect shuffle): output i of a stage is connected to input j of the next stage, where
j = 2i for 0 ≤ i ≤ n/2 − 1
j = 2i + 1 − n for n/2 ≤ i ≤ n − 1
Routing function: at any switch in any stage s, to route to destination j: if the (s + 1)-th MSB of j is 0, route on the upper wire; else route on the lower wire.
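The interconnection and routing functions above can be sketched directly; this is a minimal Python illustration, and the function names are ours, not the book's:

```python
import math

def omega_next(i, n):
    """Perfect-shuffle interconnection: output i of one stage feeds
    input j of the next stage (n must be a power of two)."""
    if i <= n // 2 - 1:
        return 2 * i
    return 2 * i + 1 - n

def omega_route(dest, n):
    """Self-routing by destination: at stage s, the (s+1)-th MSB of
    dest selects the upper (0) or lower (1) output of the switch."""
    stages = int(math.log2(n))
    return ["upper" if (dest >> (stages - 1 - s)) & 1 == 0 else "lower"
            for s in range(stages)]
```

Note that the route depends only on the destination bits, which is what makes the Omega network self-routing.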
10. Interconnection Topologies for Multiprocessors
Figure 1.5: (a) 2-D mesh with wraparound (a.k.a. torus); (b) 4-D hypercube with nodes labelled 0000-1111. Each node is a processor with its own memory.
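The hypercube topology has a particularly clean routing rule: neighbours differ in exactly one bit of the node label, so a message can be routed by fixing the differing bits one dimension at a time. A small sketch with assumed function names:

```python
def hypercube_neighbors(node, d):
    """In a d-dimensional hypercube, each node's neighbors differ
    from it in exactly one bit of the node label."""
    return [node ^ (1 << k) for k in range(d)]

def hypercube_route(src, dest, d):
    """Dimension-order (e-cube) routing: fix differing bits one at a
    time; the path length equals the Hamming distance src-to-dest."""
    path = [src]
    cur = src
    for k in range(d):
        if (cur ^ dest) & (1 << k):   # bit k still differs
            cur ^= (1 << k)           # traverse dimension k
            path.append(cur)
    return path
```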
11. Flynn's Taxonomy
Figure 1.6: SIMD, MISD, and MIMD modes (C: control unit; P: processing unit; I: instruction stream; D: data stream).
SISD: Single Instruction Stream Single Data Stream (traditional)
SIMD: Single Instruction Stream Multiple Data Stream
I scientific applications, applications on large arrays
I vector processors, systolic arrays, Pentium/SSE, DSP chips
MISD: Multiple Instruction Stream Single Data Stream
I E.g., visualization
MIMD: Multiple Instruction Stream Multiple Data Stream
I distributed systems, vast majority of parallel systems
12. Terminology
Coupling
I Interdependency/binding among modules, whether hardware or software (e.g.,
OS, middleware)
Parallelism: T(1)/T(n).
I Function of program and system
Concurrency of a program
I Measures productive CPU time vs. waiting for synchronization operations
Granularity of a program
I Amt. of computation vs. amt. of communication
I Fine-grained program suited for tightly-coupled system
13. Message-passing vs. Shared Memory
Emulating MP over SM:
I Partition shared address space
I Send/Receive emulated by writing/reading from special mailbox per pair of
processes
Emulating SM over MP:
I Model each shared object as a process
I Write to shared object emulated by sending message to owner process for the
object
I Read from shared object emulated by sending query to owner of shared object
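The second emulation above can be sketched with threads and queues standing in for processes and messages; this is a minimal illustration, and all names are ours:

```python
import queue

def object_owner(inbox):
    """Owner process for one shared object: serialises all reads and
    writes, which arrive as messages, emulating SM over MP."""
    value = None
    while True:
        op, arg, reply = inbox.get()
        if op == "write":
            value = arg
            reply.put("ack")          # write completes when acked
        elif op == "read":
            reply.put(value)          # read answered by the owner
        else:                         # any other op shuts the owner down
            return

def sm_write(inbox, v):
    """Write to a shared object, emulated by a message to its owner."""
    reply = queue.Queue()
    inbox.put(("write", v, reply))
    return reply.get()

def sm_read(inbox):
    """Read from a shared object, emulated by a query to its owner."""
    reply = queue.Queue()
    inbox.put(("read", None, reply))
    return reply.get()
```

Because the owner handles one message at a time, all accesses to the object are serialised, which is exactly what the emulation needs.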
14. Classification of Primitives (1)
Synchronous (send/receive)
I Handshake between sender and receiver
I Send completes when Receive completes
I Receive completes when data copied into buffer
Asynchronous (send)
I Control returns to process when data copied out of user-specified buffer
15. Classification of Primitives (2)
Blocking (send/receive)
I Control returns to invoking process after processing of primitive (whether sync
or async) completes
Nonblocking (send/receive)
I Control returns to process immediately after invocation
I Send: even before data copied out of user buffer
I Receive: even before data may have arrived from sender
16. Non-blocking Primitive
Send(X, destination, handle_k) // handle_k is a return parameter
...
...
Wait(handle_1, handle_2, . . . , handle_k, . . . , handle_m) // Wait always blocks
Figure 1.7: A nonblocking send primitive. When the Wait call returns, at least one of its parameters is posted.
Return parameter returns a system-generated handle
I Use later to check for status of completion of call
I Keep checking (loop or periodically) if handle has been posted
I Issue Wait(handle_1, handle_2, . . .) call with list of handles
I Wait call blocks until one of the stipulated handles is posted
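The handle mechanism can be sketched as follows, with a thread standing in for the kernel's background transfer; this is a hypothetical illustration, not the book's API:

```python
import queue
import threading
import time

class Handle:
    """Hypothetical system-generated handle, posted on completion."""
    def __init__(self):
        self._event = threading.Event()
    def post(self):
        self._event.set()
    def posted(self):
        return self._event.is_set()

def nb_send(mailbox, data, handle):
    """Nonblocking send sketch: control returns to the caller
    immediately; a background thread posts the handle once the data
    has been copied out of the user buffer (here: into the mailbox)."""
    def transfer():
        mailbox.put(data)
        handle.post()
    threading.Thread(target=transfer).start()

def wait(*handles):
    """Wait blocks until at least one of the stipulated handles is posted."""
    while True:
        for h in handles:
            if h.posted():
                return h
        time.sleep(0.001)   # poll; a real kernel would block the caller
```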
17. Blocking/Nonblocking; Synchronous/Asynchronous Send/Receive Primitives
Figure 1.8: Illustration of four send and two receive primitives: (a) blocking synchronous Send with blocking Receive; (b) nonblocking synchronous Send with nonblocking Receive; (c) blocking asynchronous Send; (d) nonblocking asynchronous Send. In the figure, S/S_C mark the start/completion of send processing, R/R_C the start/completion of receive processing, P the processing of the primitive, and W a Wait call issued to check completion of a nonblocking operation; the blocked durations are those in which the process issuing the primitive cannot proceed.
18. Asynchronous Executions: Message-passing System
Figure 1.9: Asynchronous execution in a message-passing system: processes P0-P3 interleave internal, send, and receive events while exchanging messages m1-m7.
19. Synchronous Executions: Message-passing System
Figure 1.10: Synchronous execution in a message-passing system: processes P0-P3 proceed in lockstep through rounds 1-3.
In any round/step/phase: (send | internal)* (receive | internal)*
(1) Sync Execution(int k, n) //k rounds, n processes.
(2) for r = 1 to k do
(3) proc i sends msg to (i + 1) mod n and (i − 1) mod n;
(4) each proc i receives msg from (i + 1) mod n and (i − 1) mod n;
(5) compute app-specific function on received values.
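The rounds above can be simulated directly; in this minimal sketch (assumed names), each process sends its value to both ring neighbours, receives theirs, and applies a sample app-specific function, here the sum of the three values:

```python
def sync_execution(k, n, initial):
    """Simulated synchronous execution: k rounds, n processes on a
    ring. In each round every process exchanges values with neighbours
    (i+1) mod n and (i-1) mod n, then computes on what it received."""
    vals = list(initial)
    for _ in range(k):
        nxt = []
        for i in range(n):
            left, right = vals[(i - 1) % n], vals[(i + 1) % n]
            nxt.append(vals[i] + left + right)   # app-specific function
        vals = nxt   # all messages delivered within the same round
    return vals
```

The single assignment `vals = nxt` per round captures the synchronous assumption: every message sent in a round is delivered in that round.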
20. Synchronous vs. Asynchronous Executions (1)
Sync vs async processors; Sync vs async primitives
Sync vs async executions
Async execution
I No processor synchrony, no bound on drift rate of clocks
I Message delays finite but unbounded
I No bound on time for a step at a process
Sync execution
I Processors are synchronized; clock drift rate bounded
I Message delivery occurs in one logical step/round
I Known upper bound on time to execute a step at a process
21. Synchronous vs. Asynchronous Executions (2)
Difficult to build a truly synchronous system; can simulate this abstraction
Virtual synchrony:
I async execution, processes synchronize as per application requirement;
I execute in rounds/steps
Emulations:
I Async program on sync system: trivial (A is special case of S)
I Sync program on async system: tool called synchronizer
22. System Emulations
Figure 1.11: Sync ↔ async, and shared memory ↔ message-passing emulations among the four system classes: asynchronous message-passing (AMP), synchronous message-passing (SMP), asynchronous shared memory (ASM), and synchronous shared memory (SSM). A-S/S-A convert between async and sync, and SM-MP/MP-SM between shared memory and message passing.
Assumption: failure-free system
System A emulated by system B:
I If not solvable in B, not solvable in A
I If solvable in A, solvable in B
23. Challenges: System Perspective (1)
Communication mechanisms: E.g., Remote Procedure Call (RPC), remote
object invocation (ROI), message-oriented vs. stream-oriented
communication
Processes: Code migration, process/thread management at clients and
servers, design of software and mobile agents
Naming: easy-to-use identifiers are needed to locate resources and processes transparently and scalably
Synchronization
Data storage and access
I Schemes for data storage, search, and lookup should be fast and scalable
across network
I Revisit file system design
Consistency and replication
I Replication for fast access, scalability, avoid bottlenecks
I Require consistency management among replicas
24. Challenges: System Perspective (2)
Fault-tolerance: correct and efficient operation despite link, node, process
failures
Distributed systems security
I Secure channels, access control, key management (key generation and key
distribution), authorization, secure group management
Scalability and modularity of algorithms, data, services
Some experimental systems: Globe, Globus, Grid
25. Challenges: System Perspective (3)
API for communications, services: ease of use
Transparency: hiding implementation policies from user
I Access: hide differences in data rep across systems, provide uniform operations
to access resources
I Location: locations of resources are transparent
I Migration: relocate resources without renaming
I Relocation: relocate resources as they are being accessed
I Replication: hide replication from the users
I Concurrency: mask the use of shared resources
I Failure: reliable and fault-tolerant operation
26. Challenges: Algorithm/Design (1)
Useful execution models and frameworks: to reason with and design correct
distributed programs
I Interleaving model
I Partial order model
I Input/Output automata
I Temporal Logic of Actions
Dynamic distributed graph algorithms and routing algorithms
I System topology: distributed graph, with only local neighborhood knowledge
I Graph algorithms: building blocks for group communication, data
dissemination, object location
I Algorithms need to deal with dynamically changing graphs
I Algorithm efficiency: also impacts resource consumption, latency, traffic,
congestion
27. Challenges: Algorithm/Design (2)
Time and global state
I 3D space, 1D time
I Physical time (clock) accuracy
I Logical time captures inter-process dependencies and tracks relative time
progression
I Global state observation: inherent distributed nature of system
I Concurrency measures: concurrency depends on program logic, execution
speeds within logical threads, communication speeds
28. Challenges: Algorithm/Design (3)
Synchronization/coordination mechanisms
I Physical clock synchronization: hardware drift needs correction
I Leader election: select a distinguished process, due to inherent symmetry
I Mutual exclusion: coordinate access to critical resources
I Distributed deadlock detection and resolution: need to observe global state;
avoid duplicate detection, unnecessary aborts
I Termination detection: global state of quiescence; no CPU processing and no
in-transit messages
I Garbage collection: Reclaim objects no longer pointed to by any process
29. Challenges: Algorithm/Design (4)
Group communication, multicast, and ordered message delivery
I Group: processes sharing a context, collaborating
I Multiple joins, leaves, fails
I Concurrent sends: semantics of delivery order
Monitoring distributed events and predicates
I Predicate: condition on global system state
I Debugging, environmental sensing, industrial process control, analyzing event
streams
Distributed program design and verification tools
Debugging distributed programs
30. Challenges: Algorithm/Design (5)
Data replication, consistency models, and caching
I Fast, scalable access;
I coordinate replica updates;
I optimize replica placement
World Wide Web design: caching, searching, scheduling
I Global scale distributed system; end-users
I Read-intensive; prefetching over caching
I Object search and navigation are resource-intensive
I User-perceived latency
31. Challenges: Algorithm/Design (6)
Distributed shared memory abstraction
I Wait-free algorithm design: process completes execution, irrespective of
actions of other processes, i.e., n − 1 fault-resilience
I Mutual exclusion
F Bakery algorithm, semaphores, based on atomic hardware primitives, fast
algorithms when contention-free access
I Register constructions
F Revisit assumptions about memory access
F What behavior under concurrent unrestricted access to memory?
Foundation for future architectures, decoupled from technology (semiconductor, biocomputing, quantum, . . .)
I Consistency models:
F coherence versus access cost trade-off
F Weaker models than strict consistency of uniprocessors
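The bakery algorithm mentioned above can be sketched as a shared-memory mutual exclusion lock. This is a minimal CPython simulation (names are ours); it relies on the interpreter's effectively sequentially consistent interleaving, not on real hardware memory models:

```python
import sys
import threading

sys.setswitchinterval(1e-4)    # short GIL slices keep the spin-waits responsive

N = 3                          # number of simulated processes
ITERS = 200
number = [0] * N               # ticket numbers
choosing = [False] * N         # doorway flags
counter = {"value": 0}         # shared object guarded by the lock

def lock(i):
    """Lamport's bakery algorithm: take a ticket one larger than any
    outstanding ticket, then wait out every process whose
    (ticket, id) pair is smaller."""
    choosing[i] = True
    number[i] = 1 + max(number)
    choosing[i] = False
    for j in range(N):
        while choosing[j]:     # wait until j is through the doorway
            pass
        while number[j] != 0 and (number[j], j) < (number[i], i):
            pass               # j goes first

def unlock(i):
    number[i] = 0

def worker(i):
    for _ in range(ITERS):
        lock(i)
        counter["value"] += 1  # critical section
        unlock(i)
```

Note the algorithm needs no atomic hardware primitives, only reads and writes of shared variables, which is why it appears under the shared-memory abstraction here.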
32. Challenges: Algorithm/Design (7)
Reliable and fault-tolerant distributed systems
I Consensus algorithms: processes reach agreement in spite of faults (under
various fault models)
I Replication and replica management
I Voting and quorum systems
I Distributed databases, commit: ACID properties
I Self-stabilizing systems: ”illegal” system state changes to ”legal” state;
requires built-in redundancy
I Checkpointing and recovery algorithms: roll back and restart from earlier
”saved” state
I Failure detectors:
F Difficult to distinguish a ”slow” process/message from a failed process or a never-sent message
F algorithms that ”suspect” a process as having failed and converge on a
determination of its up/down status
33. Challenges: Algorithm/Design (8)
Load balancing: to reduce latency, increase throughput, dynamically. E.g.,
server farms
I Computation migration: relocate processes to redistribute workload
I Data migration: move data, based on access patterns
I Distributed scheduling: across processors
Real-time scheduling: difficult without global view, network delays make task
harder
Performance modeling and analysis: Network latency to access resources
must be reduced
I Metrics: theoretical measures for algorithms, practical measures for systems
I Measurement methodologies and tools
34. Applications and Emerging Challenges (1)
Mobile systems
I Wireless communication: unit disk model; broadcast medium (MAC), power
management etc.
I CS perspective: routing, location management, channel allocation, localization
and position estimation, mobility management
I Base station model (cellular model)
I Ad-hoc network model (rich in distributed graph theory problems)
Sensor networks: Processor with electro-mechanical interface
Ubiquitous or pervasive computing
I Processors embedded in and seamlessly pervading environment
I Wireless sensor and actuator mechanisms; self-organizing; network-centric,
resource-constrained
I E.g., intelligent home, smart workplace
35. Applications and Emerging Challenges (2)
Peer-to-peer computing
I No hierarchy; symmetric roles; self-organizing; efficient object storage and lookup; scalable; dynamic reconfiguration
Publish/subscribe, content distribution
I Filtering information to extract that of interest
Distributed agents
I Processes that move and cooperate to perform specific tasks; coordination,
controlling mobility, software design and interfaces
Distributed data mining
I Extract patterns/trends of interest
I Data not available in a single repository
36. Applications and Emerging Challenges (3)
Grid computing
I Grid of shared computing resources; use idle CPU cycles
I Issues: scheduling, QoS guarantees, security of machines and jobs
Security
I Confidentiality, authentication, availability in a distributed setting
I Manage wireless, peer-to-peer, grid environments
F Issues: e.g., Lack of trust, broadcast media, resource-constrained, lack of
structure