This document provides an introduction to distributed databases. It defines a distributed database as a collection of logically related databases distributed over a computer network. It describes distributed computing and how distributed databases partition data across multiple computers. The document outlines different types of distributed database systems including homogeneous and heterogeneous. It also discusses distributed data storage techniques like replication, fragmentation, and allocation. Finally, it lists several advantages and objectives of distributed databases as well as some disadvantages.
Transport layer protocols provide services like reliable data transfer and connection establishment between applications on networked devices. They address this need through protocols like TCP and UDP. TCP provides reliable, ordered data streams using mechanisms like three-way handshake, sequence numbers, acknowledgments, retransmissions, flow control via sliding windows, and connection termination handshaking. UDP provides simple datagram transmissions without reliability or flow control.
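UDP's connectionless datagram service described above can be shown in a few lines. This is a minimal loopback sketch, not a production server: note there is no handshake, no acknowledgment, and message boundaries are preserved per datagram.

```python
import socket

# "Server" socket bound to an ephemeral port on localhost.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
server.settimeout(5)                 # avoid blocking forever in a demo
server_addr = server.getsockname()

# "Client" socket: sendto() transmits one datagram with no reliability,
# ordering, or flow-control guarantees (contrast with TCP's stream service).
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", server_addr)

# Each recvfrom() returns one whole datagram (message boundaries preserved).
data, client_addr = server.recvfrom(1024)
print(data)  # b'hello'

server.close()
client.close()
```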
Discussed different types of dynamic interconnection networks. Graphically demonstrated single and multiple bus interconnection networks. Discussed different types of switch-based interconnection networks. Graphically showed the mechanisms of crossbar, single-stage, and multistage interconnection networks. Graphically explained the working principles of the omega, Benes, and baseline networks.
The application layer is the top layer of the OSI model and controls how applications communicate over a network. It provides services for applications including mail, file transfer, domain name translation and network security. Protocols at this layer include HTTP, FTP, SMTP, DNS and others that allow applications to access remote files and exchange messages over the internet in a standardized way. The application layer hides the complexities of the underlying network from applications and ensures reliable and secure communication between devices.
Fault tolerance is important for distributed systems to continue functioning in the event of partial failures. There are several phases to achieving fault tolerance: fault detection, diagnosis, evidence generation, assessment, and recovery. Common techniques include replication, where multiple copies of data are stored at different sites to increase availability if one site fails, and check pointing, where a system's state is periodically saved to stable storage so the system can be restored to a previous consistent state if a failure occurs. Both techniques have limitations around managing consistency with replication and overhead from checkpointing communications and storage requirements.
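The checkpointing technique above can be sketched as follows. This is a hedged, minimal illustration (the file names and state dictionary are hypothetical): state is serialized to stable storage with a write-then-rename step so a crash mid-write never corrupts the previous checkpoint.

```python
import os
import pickle
import tempfile

def save_checkpoint(state, path):
    # Write to a temp file, then atomically rename over the old checkpoint,
    # so the last consistent checkpoint always survives a crash mid-write.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, path)

def load_checkpoint(path):
    # On restart, restore the most recent consistent state.
    with open(path, "rb") as f:
        return pickle.load(f)

ckpt = os.path.join(tempfile.gettempdir(), "demo.ckpt")
save_checkpoint({"step": 42, "acc": 0.9}, ckpt)
restored = load_checkpoint(ckpt)
print(restored["step"])  # 42
```

The overhead mentioned in the summary shows up here as the serialization and storage cost paid on every checkpoint interval.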
The presentation layer is responsible for data representation, compression, encryption and formatting for transmission between applications. It encodes application data into messages and decodes received messages. Common data representations include ASN.1 and XDR. Lossy and lossless compression techniques are used to reduce file sizes. Encryption transforms plaintext into ciphertext using keys to protect confidentiality during transmission.
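The lossless-compression service mentioned above can be demonstrated with the standard library's zlib (DEFLATE): redundant data shrinks, and decompression restores it exactly.

```python
import zlib

# Highly redundant sample payload compresses well; decompression is exact.
data = b"AAAA" * 256
compressed = zlib.compress(data)
print(len(compressed) < len(data))          # True: smaller on the wire
print(zlib.decompress(compressed) == data)  # True: lossless round-trip
```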
4.1 Introduction
- Potential Threats and Attacks on Computer System
- Confinement Problems
- Design Issues in Building Secure Distributed Systems
4.2 Cryptography
- Symmetric Cryptosystem Algorithm: DES
- Asymmetric Cryptosystem
4.3 Secure Channels
- Authentication
- Message Integrity and Confidentiality
- Secure Group Communication
4.4 Access Control
- General Issues
- Firewalls
- Secure Mobile Code
4.5 Security Management
- Key Management
- Issues in Key Distribution
- Secure Group Management
- Authorization Management
This document provides an overview of distributed web-based systems, including the key components and technologies that enable them. It discusses the World Wide Web and how documents are accessed via URLs. It also describes HTTP and how connections and requests/responses work. Other topics covered include caching, content distribution networks, web services, traditional and multi-tiered web architectures, web server clusters, and web security protocols like SSL.
This document discusses deterministic finite automata (DFA) minimization. It defines the components of a DFA and provides an example of a non-minimized DFA that accepts strings with 'a' or 'b'. The document then introduces an algorithm to minimize a DFA by identifying redundant states that are not necessary to recognize the language. The algorithm works by iteratively labeling states as distinct or equivalent based on their transitions and whether they are accepting states. This process combines equivalent states to produce a minimized DFA with the smallest number of states.
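The minimization algorithm described above can be sketched by partition refinement (Moore's algorithm): start from the accepting/non-accepting split, and keep states together only if, for every symbol, their transitions land in the same block. The example DFA below is hypothetical (it accepts strings over {a, b} ending in 'b'), not the one from the document.

```python
def minimize(states, alphabet, delta, accepting):
    """Return the blocks of equivalent DFA states as frozensets."""
    # Initial partition: accepting vs. non-accepting states.
    partition = [set(accepting), set(states) - set(accepting)]
    partition = [b for b in partition if b]
    while True:
        def block_of(s):
            return next(i for i, b in enumerate(partition) if s in b)
        refined = []
        for block in partition:
            # Group states by which blocks their transitions land in.
            groups = {}
            for s in sorted(block):
                sig = tuple(block_of(delta[s, a]) for a in alphabet)
                groups.setdefault(sig, set()).add(s)
            refined.extend(groups.values())
        if len(refined) == len(partition):  # no block split: partition stable
            return [frozenset(b) for b in refined]
        partition = refined

# Redundant DFA: q1 and q2 behave identically, so they merge.
states = {"q0", "q1", "q2"}
alphabet = ["a", "b"]
delta = {
    ("q0", "a"): "q0", ("q0", "b"): "q1",
    ("q1", "a"): "q0", ("q1", "b"): "q2",
    ("q2", "a"): "q0", ("q2", "b"): "q2",
}
blocks = minimize(states, alphabet, delta, {"q1", "q2"})
print(sorted(len(b) for b in blocks))  # [1, 2]: q1 and q2 merged
```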
This document discusses process migration in distributed systems. It defines process migration as relocating a process from its current node to another node, which can occur either before or during process execution. The key aspects of process migration covered include selecting processes to migrate, transferring process state such as CPU registers and address space, forwarding messages, and handling communication between related processes migrated to different nodes. Various process migration mechanisms and their tradeoffs are also summarized.
The document discusses database issues related to mobile computing. It describes how mobile devices cache data from servers to reduce latency when the device is offline. The cached data is referred to as being "hoarded" in the device database. It discusses different database architectures including one-tier architectures where the database is specific to a mobile device and two-tier architectures involving client-server models. It also describes different cache invalidation mechanisms used to maintain consistency between cached data on devices and data on servers.
The document provides information about the CCNA certification exam, including the exam number, total marks, duration, passing score, question types, and benefits of obtaining the certification. It also discusses common networking devices, network interface cards, hubs, switches, routers, common network topologies, and the functions of LANs, MANs and WANs. Finally, it introduces the OSI model and its seven layers.
A distributed system consists of multiple autonomous computers that are linked through software to appear as a single integrated system. Distributed systems have several desirable features including resource sharing, concurrency to allow multiple users simultaneous access, openness through public specifications, transparency so users are unaware of remote resources, scalability to grow by adding computers, and fault tolerance to continue functioning if components fail. Examples of distributed systems include the internet, university computing centers, and ATM networks.
The network layer is responsible for packet forwarding including routing through intermediate routers. It controls the operation of the subnet and decides which physical path data takes. Routing is the process of moving packets from source to destination, usually performed by routers. Internetworking connects different network technologies like LANs and WANs using devices like routers. The network layer uses IP addressing to identify devices and enable routing. Private IP addresses identify internal devices while public addresses provide external internet access.
Logical clocks assign sequence numbers to distributed system events to determine causality without a global clock. Lamport's algorithm uses logical clocks to impose a partial ordering on events. Vector clocks extend this to also detect concurrent events that are not causally related, providing a full happened-before relation between all events. Each process maintains a vector clock that is incremented after local events and updated when receiving messages from other processes.
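The vector-clock mechanism summarized above can be sketched as follows (a minimal two-process illustration; process indices and event names are hypothetical): each process increments its own entry on local events and takes the element-wise maximum on receive.

```python
class VectorClock:
    def __init__(self, pid, n):
        self.pid = pid
        self.clock = [0] * n

    def tick(self):
        # Local event or send: increment this process's own entry.
        self.clock[self.pid] += 1
        return list(self.clock)

    def receive(self, other):
        # Merge the sender's timestamp element-wise, then count the receive.
        self.clock = [max(a, b) for a, b in zip(self.clock, other)]
        self.clock[self.pid] += 1
        return list(self.clock)

def happened_before(u, v):
    # u -> v iff u <= v element-wise and u != v; neither way means concurrent.
    return all(a <= b for a, b in zip(u, v)) and u != v

p0, p1 = VectorClock(0, 2), VectorClock(1, 2)
a = p0.tick()        # p0 sends: [1, 0]
b = p1.tick()        # independent p1 event: [0, 1]
c = p1.receive(a)    # p1 merges the message: [1, 2]
print(happened_before(a, c))                                     # True
print(not happened_before(a, b) and not happened_before(b, a))   # True: concurrent
```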
There are 5 levels of virtualization implementation:
1. Instruction Set Architecture Level, which uses emulation to run legacy code on different hardware.
2. Hardware Abstraction Level which uses a hypervisor to virtualize hardware components and allow multiple users to use the same hardware simultaneously.
3. Operating System Level which creates an isolated container on the physical server that functions like a virtual server.
4. Library Level which uses API hooks to control communication between applications and the system.
5. Application Level which virtualizes only a single application rather than an entire platform.
This document provides information about error detection and correction techniques used in computer networks. It discusses different types of errors that can occur, such as single-bit and burst errors. It explains that redundancy is needed to detect or correct errors by adding extra bits. Detection techniques discussed include parity checks, checksumming, and cyclic redundancy checks. Parity checks can only detect an odd number of errors. Cyclic redundancy checks use polynomial arithmetic to generate a checksum. Forward error correction allows detection and correction of errors by adding redundant bits to distinguish different error possibilities. Hamming code is an example of an error-correcting code that can detect and correct single-bit errors.
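The Hamming code mentioned above can be sketched concretely with Hamming(7,4): three parity bits protect four data bits, and the recomputed parity checks (the syndrome) spell out the position of a single-bit error so it can be flipped back.

```python
def hamming74_encode(d):
    # d = [d1, d2, d3, d4]; parity bits sit at codeword positions 1, 2, 4.
    p1 = d[0] ^ d[1] ^ d[3]   # checks positions 3, 5, 7
    p2 = d[0] ^ d[2] ^ d[3]   # checks positions 3, 6, 7
    p3 = d[1] ^ d[2] ^ d[3]   # checks positions 5, 6, 7
    # Codeword positions 1..7: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_correct(c):
    c = list(c)
    # Recompute each parity check; the syndrome is the error position (0 = none).
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 * 1 + s2 * 2 + s3 * 4
    if pos:
        c[pos - 1] ^= 1               # flip the erroneous bit
    return [c[2], c[4], c[5], c[6]]   # extract the data bits

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                          # inject a single-bit error
print(hamming74_correct(word))        # [1, 0, 1, 1]: error located and fixed
```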
The document describes a three-tier architecture for mobile computing. It consists of a presentation tier, application tier, and data tier. The presentation tier handles the user interface and rendering. The application tier controls transaction processing and accommodates many users. The data tier manages database access and storage. Middleware sits between operating systems and user applications to handle functions like network management and security across tiers. This three-tier architecture provides benefits like improved performance, flexibility, maintainability and scalability.
Fast Ethernet increased the bandwidth of standard Ethernet from 10 Mbps to 100 Mbps. It used the same CSMA/CD access method and frame format as standard Ethernet but with some changes to address the higher speed. Fast Ethernet was implemented over twisted pair cables using 100BASE-TX or over fiber optic cables using 100BASE-FX. The increased speed enabled Fast Ethernet to compete with other high-speed LAN technologies of the time like FDDI.
Client-Centric Consistency
- Provides guarantees about the ordering of operations only for a single client, i.e.:
- The effects of an operation depend on the client performing it
- The effects also depend on the history of the client's operations
- Applied only when requested by the client
- No guarantees concerning concurrent accesses by different clients
Assumption:
- Clients can access different replicas, e.g. mobile users
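One client-centric guarantee, monotonic reads, can be sketched as follows. This is a hypothetical, minimal illustration (the Replica class and integer version scheme are not from the source): the client carries the highest write version it has seen, and any replica it contacts must be at least that fresh before serving the read.

```python
class Replica:
    def __init__(self):
        self.version = 0
        self.value = None

    def write(self, value, version):
        self.version, self.value = version, value

    def read(self, min_version):
        # Refuse to serve a client that has already seen newer data elsewhere.
        if self.version < min_version:
            raise RuntimeError("replica too stale for this client")
        return self.value, self.version

r1, r2 = Replica(), Replica()
r1.write("v1", 1)

seen = 0                         # client-side session state
value, seen = r1.read(seen)      # reads "v1" from r1; seen is now 1
refused = False
try:
    r2.read(seen)                # r2 has not replicated the write yet
except RuntimeError:
    refused = True               # the stale replica is refused, not read
print(value, refused)            # v1 True
```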
This document contains a study of peer-to-peer distributed systems. It covers three models of distributed systems, namely the centralized, decentralized, and hybrid models, along with the pros and cons of each. The Skype and BitTorrent architectures are also discussed. This tutorial can be very helpful for beginners.
Network layer design issues: Store-and-Forward Packet Switching, Services Provided to the Transport Layer, Which Service Is the Best, Implementation of Service, Implementation of Connectionless Service, Implementation of Connection-Oriented Service.
This document discusses key concepts in distributed database systems including relational algebra operators, Cartesian products, joins, theta joins, equi-joins, semi-joins, horizontal fragmentation, derived horizontal fragmentation, and ensuring correctness through completeness, reconstruction, and disjointness of fragmentations. Horizontal fragmentations can be primary, defined directly on a relation, or derived, defined on a relation based on the fragmentation of another related relation it joins with. Ensuring correctness of fragmentations involves checking they are complete, the global relation can be reconstructed from fragments, and fragments are disjoint.
This document provides an overview of various topics related to the network layer, including IPv4, IPv6, ARP, RARP, mobile IP, routing algorithms, and routing protocols. It begins with basics of IPv4 such as its addressing scheme and role in interconnecting networks. IPv6 is then introduced, along with reasons for its development and key features like its large 128-bit addresses. Address Resolution Protocol (ARP) and Reverse ARP (RARP) are also covered. The document concludes by discussing routing algorithms like link-state and distance-vector, as well as protocols including RIP, OSPF, and BGP.
This document discusses multiprocessor architecture types and limitations. It describes tightly coupled and loosely coupled multiprocessing systems. Tightly coupled systems have shared memory that all CPUs can access, while loosely coupled systems have each CPU connected through message passing without shared memory. Examples given are symmetric multiprocessing (SMP) and Beowulf clusters. Interconnection structures like common buses, multiport memory, and crossbar switches are also outlined. The advantages of multiprocessing include improved performance from parallel processing, increased reliability, and higher throughput.
Recursive transition networks (RTNs) are used to define languages by representing them as graphs with nodes and labeled edges. Strings in the language are produced by paths from a start node to a final node, where the labels of the edges along the path combine to form the string. RTNs allow for defining infinite languages more efficiently than listing all strings, as adding edges increases the number of possible paths and strings. The power of RTNs increases dramatically when cycles are added to the graph, allowing for infinitely many possible paths and strings.
This document provides an overview of computer security concepts, including risks, authentication, encryption, public key cryptography, wireless network security, and hacking tools and techniques. It discusses how attackers can sniff network traffic, crack wireless encryption, scan for vulnerabilities, and use social engineering to compromise systems. The document recommends maintaining up-to-date software, using strong passwords, limiting network access, and backing up data to help secure systems from potential threats.
Secure Communication (Distributed Computing) - Sri Prasanna
The document discusses secure communication and digital signatures. It begins by explaining symmetric cryptography and the key distribution problem. It then describes Diffie-Hellman key exchange, which allows two parties to agree on a secret key over an insecure channel without pre-shared secrets. It also covers RSA public key cryptography. The document discusses using hybrid cryptosystems that combine public key techniques for key exchange and symmetric encryption for bulk data. Finally, it explains how digital signatures using public key cryptography allow a message to be authenticated and integrity protected without encryption.
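The Diffie-Hellman exchange described above can be sketched numerically. The group parameters here are toy values for illustration only (a 32-bit prime; real deployments use standardized groups of 2048 bits or more), but the algebra is the same: both sides arrive at the identical secret without it ever crossing the channel.

```python
import secrets

p = 4294967291          # a small prime (2**32 - 5); demo only
g = 2                   # public generator

a = secrets.randbelow(p - 2) + 1   # Alice's private exponent
b = secrets.randbelow(p - 2) + 1   # Bob's private exponent

A = pow(g, a, p)        # Alice sends A over the insecure channel
B = pow(g, b, p)        # Bob sends B over the insecure channel

# Each side combines the other's public value with its own private exponent:
# (g^b)^a = (g^a)^b (mod p), so both secrets match.
alice_secret = pow(B, a, p)
bob_secret = pow(A, b, p)
print(alice_secret == bob_secret)  # True
```

In a hybrid cryptosystem, as the summary notes, this shared secret would then key a symmetric cipher for the bulk data.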
The document discusses various topics related to network security including encryption, authentication, and protocols. It provides an overview of symmetric and public key cryptography, algorithms like DES and RSA, digital signatures, protocols like SSL and IPsec, and applications like PGP. Common security threats like packet sniffing, IP spoofing, and denial of service attacks are also summarized.
Eve could replay the same request multiple times, with delays, to impersonate Alice and confuse the server. At worst, Eve can resend the request a day later while still pretending to be Alice.
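The standard defense against this replay can be sketched as follows (a minimal illustration; the freshness window and field names are hypothetical): each request carries a fresh nonce and a timestamp, and the server rejects any nonce it has already seen as well as any request older than the window.

```python
import secrets
import time

seen_nonces = set()
MAX_AGE = 300  # seconds; illustrative freshness window

def make_request(payload):
    return {"payload": payload, "nonce": secrets.token_hex(16),
            "timestamp": time.time()}

def server_accepts(req):
    if time.time() - req["timestamp"] > MAX_AGE:
        return False                  # stale: the day-old replay fails here
    if req["nonce"] in seen_nonces:
        return False                  # immediate replay fails here
    seen_nonces.add(req["nonce"])
    return True

req = make_request("transfer 10 to Bob")
first = server_accepts(req)      # True: first delivery accepted
replayed = server_accepts(req)   # False: Eve's replay is rejected
print(first, replayed)           # True False
```

In a real protocol the nonce and timestamp would also be covered by a MAC or signature, so Eve cannot simply mint a fresh nonce for a forged request.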
This document discusses key management and protocols for distributing public keys for use in public-key cryptography. It describes several problems with directly exchanging or posting public keys and introduces the concept of a public-key authority or certificate authority that can validate public keys. The document outlines some proposed solutions like a centralized public-key authority that maintains a directory of public keys or a hierarchical system of certificate authorities that can validate certificates within a public key infrastructure. It also discusses the X.509 standard format for public key certificates.
This document discusses key management and protocols for distributing public keys securely. It presents three solutions: appending public keys to emails, posting them on websites, and using a public key authority or certificate authority. The main challenges are authenticating users' identities and distributing updated keys securely. A public key authority maintains a directory of public keys but introduces a bottleneck. A certificate authority signs certificates that bind keys to identities, avoiding the need to be constantly online but requiring in-person identity verification. Standards like X.509 were introduced to format certificates uniformly. A public key infrastructure model is proposed using a hierarchy of certificate authorities to issue certificates at various levels.
Asymmetric key cryptography uses two keys - a public key that can be shared publicly and a private key that is kept secret. This allows two parties who have never shared secrets before, like Alice and Bob, to communicate securely by encrypting messages with each other's public keys. Common asymmetric algorithms discussed are RSA, which uses prime number factorization, and ECC, which is based on elliptic curve discrete logarithms. A public key infrastructure (PKI) with certificate authorities (CAs) is required to authenticate users and manage public keys.
The document discusses various authentication protocols including:
- Reusable passwords, which are stored as hashes on the server but remain vulnerable to theft and reuse
- One-time passwords, which generate a new password for each login so a stolen password cannot be reused
- Challenge-response authentication, which uses cryptographic functions to verify identity without transmitting passwords
- Public key authentication, which uses digital signatures to authenticate users based on their private/public key pairs
- Kerberos, which uses tickets and session keys issued by a trusted server to allow authentication between users and services on a network
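The challenge-response bullet above can be sketched with a stdlib HMAC over a server nonce. This is a minimal, hypothetical illustration (the shared secret and message shapes are invented); real protocols such as CHAP or SCRAM additionally bind identities, sessions, and counters.

```python
# Minimal challenge-response sketch: the client proves knowledge of a
# shared secret without ever transmitting it (illustration only).
import hmac, hashlib, secrets

shared_secret = b"correct horse battery staple"  # provisioned out of band

# Server issues a fresh random challenge (nonce).
challenge = secrets.token_bytes(16)

# Client answers with an HMAC of the challenge under the shared secret.
response = hmac.new(shared_secret, challenge, hashlib.sha256).digest()

# Server recomputes the expected answer and compares in constant time.
expected = hmac.new(shared_secret, challenge, hashlib.sha256).digest()
assert hmac.compare_digest(response, expected)

# A replayed response fails against the next, different challenge.
new_challenge = secrets.token_bytes(16)
expected2 = hmac.new(shared_secret, new_challenge, hashlib.sha256).digest()
assert not hmac.compare_digest(response, expected2)
```

Because each challenge is fresh, a captured response is useless later, which is exactly the property the reusable-password scheme above lacks.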
Fundamentals of digital security. Some notes I made over the years as a refresher for digital security: a basic primer for beginners. If you are an expert, comments and feedback are welcome.
This document discusses message authentication and digital signatures for verifying message integrity and authenticity. It describes how message digests and message authentication codes (MACs) allow parties to verify that messages have not been altered. Digital signatures provide non-repudiation by allowing recipients to verify the sender and prove to others that the sender did sign the message. Certificate authorities issue digital certificates that bind public keys to identities, allowing keys to be verified. A public key infrastructure (PKI) establishes a framework of certificate authorities, registration authorities, and protocols to manage certificates and keys.
This document provides an overview of public key infrastructure (PKI). It discusses how PKI uses public key cryptography and digital signatures to establish secure communication channels between users. Certification authorities issue and sign digital certificates that map users' public keys to their identities, allowing other users to verify signatures and establish encryption keys. PKIs can be organized in different models like hierarchies or networks to distribute trust. Overall, PKI aims to provide the flexibility of key servers without requiring direct communication with a central authority.
This document provides a high-level summary of Transport Layer Security (TLS):
- TLS establishes an encrypted connection between a client and server through a handshake that authenticates the server and negotiates encryption parameters.
- The handshake includes the client sending a ClientHello, the server responding with a Certificate and ServerHello, and agreeing on encryption keys.
- Once established, the connection uses the record protocol to securely transmit encrypted and authenticated data between the client and server. Sessions can also be resumed later using the agreed session ID.
This document discusses improving web performance through protocols like HTTP/2.0. It begins by looking at ways to speed up the web by reducing unnecessary data transfers and latency. It then describes HTTP/2.0 in more detail, including how it uses a single TCP connection for multiple data streams and binary framing of data. Key changes from HTTP/1.x are a binary protocol instead of ASCII and support for multiple streams over a single TCP connection. The document also covers HTTP/2.0 framing and different frame types like SETTINGS, DATA, HEADERS and RST_STREAM.
2. Trusted Intermediaries (3rd Party)
A trusted intermediary is someone who sits between two parties, entities that do not naturally trust each other, and provides a bridge between them.
Ex. Banks act as intermediaries between depositors (seeking interest) and borrowers (seeking loans).
3. Who are the Trusted Intermediaries in Network Security?
Why is this third party required?
4. Symmetric key problem
• How do two entities establish a shared secret key over the network?
Solution: a trusted key distribution center (KDC) acting as an intermediary between the entities.

Public key problem
• When Alice obtains Bob's public key (from a web site, e-mail, or diskette), how does she know it is Bob's public key and not Trudy's?
Solution: a trusted certification authority (CA).
5. Key Management
In a public-key setup, every participant has a pair of keys: a public key and a private key.
Bob: (eb, db)
Alice: (ea, da)
Vinod: (ev, dv)
Praveen: (ep, dp)
If Alice wants to send a message to Praveen, she has to get Praveen's public key. How will Alice get Praveen's public key?
6. There are two distinct aspects of the use of public-key encryption:
--- the distribution of public keys;
--- the use of public-key encryption to distribute secret keys.
7. Distribution of Public Keys
1. Public announcement: can be forged.
2. Publicly available directory: can be tampered with.
3. Public-key authorities (KDC): a trusted 3rd party maintains the directory.
4. Public-key certificates (CAs): certificates are issued to avoid the bottleneck of public-key authorities.
9. Public Key Directory (PKD)
ID        Public Key
Alice     1011001101011101
Bob       0110011110100011
Vinod     11001....................011
Praveen
This file is maintained in the public domain (on web sites, etc.).
Suppose Alice wants to send a message to Praveen. Alice looks up Praveen's public key (ep) in the directory and sends the encrypted message E ep (m); Praveen decrypts it with his key pair (ep, dp).
PROBLEM: an attacker can change a public key easily, as the directory is publicly available.
What is the solution?
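The tampering problem on this slide can be made concrete with textbook RSA over toy primes. This is an illustrative sketch only: the key sizes, the absence of padding, and the concrete values are all unrealistic simplifications of what real libraries do.

```python
# Textbook RSA with toy primes -- for illustration only; real systems
# use vetted libraries, large keys, and randomized padding.

def make_keypair(p, q, e):
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))   # modular inverse (Python 3.8+)
    return (e, n), (d, n)               # (public key, private key)

def encrypt(pub, m):
    e, n = pub
    return pow(m, e, n)

def decrypt(priv, c):
    d, n = priv
    return pow(c, d, n)

# Public Key Directory: an unauthenticated ID -> public key mapping.
ep, dp = make_keypair(61, 53, 17)       # Praveen's pair (ep, dp)
directory = {"Praveen": ep}

# Alice looks up Praveen's key and sends E_ep(m).
m = 42
c = encrypt(directory["Praveen"], m)
assert decrypt(dp, c) == m              # Praveen reads the message

# PROBLEM: anyone can rewrite the public entry.
et, dt = make_keypair(89, 97, 5)        # Trudy's pair
directory["Praveen"] = et               # tampered directory entry
c2 = encrypt(directory["Praveen"], m)
assert decrypt(dt, c2) == m             # Trudy now reads Alice's message
```

Nothing in the directory itself lets Alice tell Praveen's genuine key from Trudy's substitute, which is why the next slides introduce a signing authority.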
13. How to obtain a certificate from the authorized party?
The authority holds the key pair (eAuth, dAuth); Bob holds (eb, db) and Alice holds (ea, da).
Suppose Bob needs a certificate. Bob sends T || IDB || eb to the authority, which signs it with its private key:
E dAuth ( T || IDB || eb ) = CB
Similarly, Alice sends T || IDA || ea and receives:
E dAuth ( T || IDA || ea ) = CA
14. How will Alice and Bob use the certificate to communicate?
Bob holds (eb, db) and his certificate CB = E dAuth ( T || IDB || eb ).
Alice asks Bob for his certificate and receives CB.
Alice then verifies the certificate by decrypting it with the authority's public key:
D eAuth (CB) = D eAuth ( E dAuth ( T || IDB || eb ) ) = T || IDB || eb
Having recovered eb, Alice can encrypt her message as E eb (m).
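Slides 13 and 14 can be traced end to end with textbook RSA over toy primes. Everything here is a hedged sketch: the primes, timestamps, IDs, and the integer "concatenation" of T || ID || e are invented simplifications, not a real certificate format.

```python
# Toy end-to-end run of slides 13-14 with textbook RSA (no padding,
# tiny keys -- illustration only).

def make_keypair(p, q, e):
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))   # modular inverse (Python 3.8+)
    return (e, n), (d, n)

def rsa(key, m):                         # E and D are the same modular power
    k, n = key
    return pow(m, k, n)

e_auth, d_auth = make_keypair(10007, 10009, 65537)  # authority (eAuth, dAuth)
e_b, d_b = make_keypair(61, 53, 17)                 # Bob's pair (eb, db)

def pack(T, ID, e):                      # toy concatenation of T || ID || e
    return (T * 100 + ID) * 100 + e      # values kept small so msg < n

# Slide 13: Bob sends T || IDB || eb; the authority signs it with dAuth.
msg = pack(2024, 7, e_b[0])
CB = rsa(d_auth, msg)                    # CB = E_dAuth(T || IDB || eb)

# Slide 14: Alice verifies CB with eAuth and recovers eb.
assert rsa(e_auth, CB) == msg            # D_eAuth(CB) restores the contents
recovered_eb = rsa(e_auth, CB) % 100     # unpack the trailing eb field
assert recovered_eb == e_b[0]

# Alice now encrypts for Bob with the certified key: E_eb(m).
m = 42
c = pow(m, recovered_eb, e_b[1])
assert rsa(d_b, c) == m                  # Bob decrypts with db
```

The point of the exercise: Alice never has to trust the channel that delivered CB, only the authority's public key eAuth.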
16. Suppose I want to modify or change the public key
Sukh holds (es, ds); the authority holds (eAuth, dAuth).
Old certificate: Cs = E dAuth ( T || IDS || es )
Sukh sends the new timestamp and key, T* || IDS || es*, to the authority.
New certificate generated by the authority: Cs* = E dAuth ( T* || IDS || es* )
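The re-issuance step can be sketched the same way with textbook RSA over toy primes. All concrete values (Sukh's ID, the timestamps, the exponents, and the integer packing of T || IDS || es) are hypothetical stand-ins.

```python
# Toy re-issuance of a certificate (slide 16) with textbook RSA --
# illustration only, not a real certificate format.

def make_keypair(p, q, e):
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))   # modular inverse (Python 3.8+)
    return (e, n), (d, n)

e_auth, d_auth = make_keypair(10007, 10009, 65537)  # authority (eAuth, dAuth)

def sign(priv, m):
    d, n = priv
    return pow(m, d, n)

def verify(pub, s):
    e, n = pub
    return pow(s, e, n)

def pack(T, ID, e):                     # toy concatenation of T || ID || e
    return (T * 100 + ID) * 100 + e     # values kept small so msg < n

Cs      = sign(d_auth, pack(2024, 7, 17))   # old cert over T  || IDS || es
Cs_star = sign(d_auth, pack(2025, 7, 23))   # new cert over T* || IDS || es*

assert verify(e_auth, Cs_star) == pack(2025, 7, 23)
assert Cs_star != Cs                    # the newer timestamp T* distinguishes them
```

Real PKIs add explicit revocation (CRLs, OCSP) on top of this, since verifiers who cached the old Cs must also learn it is no longer valid.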