This document discusses the randomized Byzantine generals problem and its solution proposed by Michael Rabin in 1983. The key points are:
- The Byzantine generals problem models reaching consensus in a distributed system where some processes may be unreliable or malicious.
- Rabin proposed a randomized algorithm where processes agree on a common value through multiple rounds of exchanging signed messages. This algorithm ensures agreement with high probability within a bounded number of rounds.
- The algorithm uses authentication techniques like digital signatures to ensure traitors can only lie about other traitors, not impersonate others. It also uses a "lottery" procedure for processes to randomly select a coordinator in each round.
- Rabin's randomized algorithm guarantees consensus with probability 1, terminating in an expected constant number of rounds independent of the number of processes.
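The round structure described above can be sketched in code. The following is a simplified, hypothetical simulation only: it assumes honest processes that all see the same exchanged values, and a plain random bit stands in for Rabin's predistributed "lottery" coin. It is not the full authenticated protocol, and all names and thresholds are illustrative.

```python
import random

def randomized_consensus_round(values, n, f, shared_coin):
    """One round of a simplified randomized binary consensus:
    adopt the clear majority value if one exists, otherwise adopt
    the shared coin (the role Rabin's lottery plays)."""
    next_values = []
    for _ in range(n):
        ones = sum(values)
        if ones > (n + f) / 2:            # clear majority for 1
            next_values.append(1)
        elif n - ones > (n + f) / 2:      # clear majority for 0
            next_values.append(0)
        else:                             # no clear majority: use the coin
            next_values.append(shared_coin)
    return next_values

# Toy run: 7 processes tolerating f = 2 faults, starting with mixed bits.
values = [1, 0, 1, 1, 0, 0, 1]
n, f = 7, 2
for round_no in range(10):
    coin = random.randint(0, 1)           # stand-in for the shared lottery
    values = randomized_consensus_round(values, n, f, coin)
    if len(set(values)) == 1:             # everyone agrees: done
        break
```

Once the coin round makes all values identical, every later round trivially preserves agreement, which is why the expected number of rounds is constant rather than growing with n.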
This document presents a brief biography of Bryan Sebastián Barrigas Cevallos. He was born in Riobamba, Ecuador on November 2, 1996. He attended kindergarten and primary school at the Unidad Educativa "San Felipe Neri". He competed in athletics representing the province of Chimborazo and won the silver medal in the long jump at the Juegos Nacionales Infantiles.
When I am child-contaminated, I cloud my grown-up thinking with beliefs from my childhood. These are fantasies, evoked by feelings, that are taken as facts. Berne used the word "delusion" to describe the kind of belief that typically arises from child contamination. When the content of a child contamination comes from earlier childhood, the delusion is likely to be more bizarre.
The document provides an overview of Bitcoin, including its history, key concepts, and technical aspects. It discusses how Bitcoin works as a decentralized digital currency using blockchain technology. Some key points covered include how Bitcoin is sent through peer-to-peer transactions, the role of miners in verifying transactions and creating new blocks, and how wallets are used to store public/private keys and interact with the Bitcoin network.
An introduction to Bitcoin, explained for newcomers and finance people, and easy to understand.
Language
English 99%
Thai 1% (only "Bitcoin in Thailand")
Agenda
- What is Bitcoin
- Bitcoin and gold: how the human economy evolved
- The Bitcoin bubble
- How to get Bitcoins
- What is Bitcoin Mining
- Total Bitcoins in circulation
- Bitcoin Supply
- How long does it take to mine a single Bitcoin
- Bitcoin consumption power
- B-Commerce
- Silk Road Case
- Tulip Mania 2.0?
- Bitcoin in Thailand
- Reference
Bitcoin - Introduction to Virtual Currency / Cryptocurrency (Swaminath Sam)
The PowerPoint presentation covers the history and features of Bitcoin, how it works, and the challenges involved in using this new, innovative financial instrument...
This presentation shows the evolution of blockchain implementations from simple financial transactions to complex computer programs (i.e. Smart Contracts)
Ethereum, at its simplest, is an open software platform based on blockchain technology.
Ethereum allows developers to build and deploy decentralized applications.
This document contains a single URL repeated many times with no other text. Therefore, there is no essential information to summarize in 3 sentences or less from the given document.
Agreement Protocols, Distributed Resource Management: Issues in distributed File Systems, Mechanism for building distributed file systems, Design issues in Distributed Shared Memory, Algorithm for Implementation of Distributed Shared Memory.
This document discusses fault tolerance and distributed systems concepts. It defines key terms like failure, error and fault. It describes different types of faults like hard and soft faults. It discusses failure detection metrics like MTBF, MTTD and MTTR. It also covers different failure models like fail-stop, Byzantine and omission failures. The document then discusses distributed algorithms, their properties of safety and liveness, and timing models like synchronous, asynchronous and partial synchrony. It covers distributed consensus algorithms and how they ensure agreement, validity and termination properties. It provides examples of synchronous fail-stop and Byzantine consensus algorithms.
This document discusses agreement protocols in distributed systems. It defines three main agreement problems: Byzantine agreement, consensus, and interactive consistency. Byzantine agreement requires all non-faulty processors to agree on a single value initialized by a source processor. Consensus requires agreement on a single value when each processor begins with a different initial value. Interactive consistency requires agreement on a set of values when initial values differ across processors. The document outlines solutions for these problems under synchronous and asynchronous models with crash, omission, and Byzantine faults.
The document discusses various topics related to data link layer protocols in computer networks, including:
- Sliding window protocol which uses imaginary boxes (windows) on the sender and receiver sides to track frames and allow the sender to transmit multiple frames before requiring an acknowledgment.
- ARQ (Automatic Repeat Request) protocols like Stop-and-Wait, Go-Back-N, and Selective Repeat which determine the rules for retransmitting frames if errors are detected. Stop-and-Wait sends one frame at a time, Go-Back-N retransmits all outstanding frames, and Selective Repeat retransmits only corrupted frames.
- Piggybacking, a technique that improves the efficiency of bidirectional transmission by attaching an acknowledgment to an outgoing data frame instead of sending it in a separate frame.
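As a toy illustration of the ARQ idea summarized above (not taken from the presentation itself), here is a Stop-and-Wait simulation. The loss rate, seed, and function names are all illustrative assumptions.

```python
import random

def stop_and_wait(frames, loss_rate=0.3, seed=42):
    """Simulate Stop-and-Wait ARQ: send one frame, wait for the ACK,
    and retransmit on (simulated) loss until every frame is acknowledged."""
    rng = random.Random(seed)
    delivered, attempts = [], 0
    for seq, frame in enumerate(frames):
        while True:
            attempts += 1
            lost = rng.random() < loss_rate       # frame or ACK lost in transit
            if not lost:
                delivered.append((seq % 2, frame))  # 1-bit sequence number
                break                               # ACK received: next frame
    return delivered, attempts

frames = ["F0", "F1", "F2", "F3"]
delivered, attempts = stop_and_wait(frames)
```

Go-Back-N and Selective Repeat differ only in what is retransmitted after a loss: all outstanding frames in the window, or just the corrupted one.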
Economics of Decentralized Currency Systems (Ernie Teo)
This presentation examines the justifications for a decentralized currency system, looking at the main beneficiaries of such a system and comparing it to a centralized currency. Next, the Byzantine General’s Problem will be discussed from a game-theoretic perspective. We will look at how various solutions, such as mining protocols (proof of work and proof of stake, as in Bitcoin) and consensus protocols (as in Ripple and Hyperledger), attempt to tackle the problem. The talk will conclude by comparing the Ripple and Bitcoin systems, looking at the pros and cons and the participation incentives of nodes.
This document discusses algorithms for leader election and Byzantine agreement in distributed systems. It presents 3 algorithms for leader election in synchronous rings with non-anonymous processes. The most efficient uses bidirectional message passing and has O(n log n) complexity. Byzantine agreement, where some nodes may behave maliciously, is discussed for complete graphs. An algorithm is presented that works when n > 3f, where n is the number of nodes and f is the maximum faulty nodes. Additional conditions are needed for incomplete graphs.
This document discusses agreement protocols in distributed systems. It begins by defining agreement, why it is needed, and common problems that require agreement. It then covers assumptions made in consensus algorithms like failure models, synchronous/asynchronous communication, and network properties. Several failure types are described including crash, omission, and Byzantine faults. Solutions to the agreement problem are explored for synchronous and asynchronous systems under different failure assumptions. Key approaches and impossibility results are referenced.
This presentation goes over consensus fundamentals, which consensus algorithms are used in Hyperledger blockchain projects today, and how they work. It was presented at the April 2nd SF Hyperledger Meetup @ PubNub.
To transfer data across a network from one device to another with acceptable accuracy, the system must guarantee that the received data are identical to the transmitted data. Ideally there should be no errors; when an error does occur, there are several ways it can be detected and corrected.
- Errors can be single bit or burst, affecting multiple bits. Three common redundancy check methods are vertical redundancy check (VRC), longitudinal redundancy check (LRC), and cyclic redundancy check (CRC).
- VRC adds a parity bit to the data unit to detect single bit and odd-length burst errors. LRC organizes data into a table and calculates a parity bit for each column. CRC performs binary division of the data unit using a predetermined divisor and appends the remainder as redundant bits.
- CRC can detect all burst errors affecting an odd number of bits and has a very high probability of detecting longer bursts. It is the most powerful of the three detection methods, using a generator to create redundant bits that are appended to the data unit before transmission.
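The binary-division step behind CRC can be shown concretely. This is a minimal bit-level sketch (function names are illustrative), using the common textbook example of data 100100 with generator 1101:

```python
def crc_remainder(data_bits, divisor_bits):
    """Compute the CRC remainder by modulo-2 (XOR) long division.
    Appending this remainder to the data gives the transmitted codeword."""
    k = len(divisor_bits)
    # Append len(divisor) - 1 zero bits, then divide.
    bits = list(data_bits) + [0] * (k - 1)
    for i in range(len(data_bits)):
        if bits[i] == 1:                  # divide only when the leading bit is 1
            for j in range(k):
                bits[i + j] ^= divisor_bits[j]
    return bits[-(k - 1):]

# Textbook example: data 100100, generator 1101 -> remainder 001.
data = [1, 0, 0, 1, 0, 0]
divisor = [1, 1, 0, 1]
remainder = crc_remainder(data, divisor)   # [0, 0, 1]
```

The receiver repeats the same division over the received codeword; a nonzero remainder signals an error.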
FastBFT is a scalable Byzantine fault tolerant consensus protocol that uses hardware-assisted secret sharing to achieve high performance. It uses a trusted execution environment to implement a lightweight secret sharing scheme and assign unique sequence numbers to requests. Replicas are organized in a tree topology to distribute communication and computation costs, allowing the protocol to reach consensus in a constant number of message rounds regardless of the number of replicas. The protocol takes an optimistic approach where a subset of replicas participate in agreement while others passively update their state.
Computer Networks Error Detection and Correction.ppt (Jayaprasanna4)
This document discusses error detection and correction in data transmission. It covers the following key points:
- There are two main types of errors: single-bit errors and burst errors. Burst errors are more common in serial transmission.
- Error detection verifies data accuracy without having the original message. It uses redundancy like vertical and longitudinal redundancy checks. Cyclic redundancy checks use polynomial division to detect errors.
- Error correction automatically fixes certain errors. Single-bit error correction reverses the value of the altered bit. Hamming codes use additional redundant bits to detect and correct single-bit errors.
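The Hamming-code mechanism mentioned above can be sketched with the classic Hamming(7,4) layout. This is an illustrative implementation, not code from the slides; the bit layout is [p1 p2 d1 p3 d2 d3 d4]:

```python
def hamming74_encode(d):
    """Encode 4 data bits as a Hamming(7,4) codeword.
    Each parity bit covers the positions whose 1-based index
    has the corresponding bit set."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4        # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4        # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4        # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Recompute the parity checks; the syndrome gives the 1-based
    position of a single flipped bit (0 means no error detected)."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1      # flip the erroneous bit back
    return c

code = hamming74_encode([1, 0, 1, 1])
corrupted = list(code)
corrupted[4] ^= 1                 # flip one bit "in transit"
```

The syndrome directly encodes the error position because each bit position is covered by a unique combination of parity checks.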
This document proposes TorCoin and TorPath as a solution to improve the speed of the Tor network. TorCoin is a proof-of-bandwidth cryptocurrency where relays can "mine" TorCoins by providing bandwidth to transfer data. TorPath is an anonymous and verifiable protocol for assigning clients to circuits. It allows circuits to collectively sign TorCoins mined to verify bandwidth was provided, while preserving the anonymity of circuit members. The proposal aims to address challenges around incentivizing bandwidth contributions to Tor in a way that maintains its trust and anonymity properties.
The document discusses real-time embedded communication and networking concepts. It describes explicit and implicit flow control, where explicit uses acknowledgments and implicit relies on redundancy. Media access control methods like TDMA, polling, token passing, and CSMA/CD are explained. Controller Area Network (CAN) is introduced as an example real-time embedded network protocol.
This document discusses snapshots in distributed systems. It begins by defining a snapshot as recording the simultaneous local states of all processes and communication channels. Snapshots can be used for deadlock detection, monitoring systems, and checkpointing distributed databases. Determining a global state is difficult due to the distributed nature of systems with no shared memory or clocks. Consistent cuts that do not cross message orderings can accurately capture a global state. The document then discusses several snapshot algorithms, including Chandy-Lamport for FIFO systems using markers, and Lai-Yang for non-FIFO systems using message coloring.
The document discusses error detection and correction techniques used in data transmission. It explains that errors can occur during transmission and redundancy is added through encoding schemes to detect or correct corrupted data. Error detection allows a receiver to detect if an error occurred, while error correction enables locating and replacing the exact bits in error. Block coding and convolution coding are two common coding techniques used. The document also discusses forward error correction versus retransmission for error handling.
The document discusses information theory and source coding. It defines information and entropy, explaining that the amount of information contained in a message depends on its probability. The entropy of a data source measures the average information content. Huffman coding is presented as a method to assign variable-length codes to symbols to minimize the average code length. Error detection and correction codes are also summarized, including parity checking, cyclic redundancy checks (CRC), linear block codes, and convolutional codes.
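The Huffman-coding idea summarized above — merge the two least-frequent nodes until one tree remains, so frequent symbols get short codewords — can be sketched as follows. This is an illustrative stand-alone implementation, not code from the document:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code by repeatedly merging the two
    least-frequent nodes; returns {symbol: bitstring}."""
    freq = Counter(text)
    # The integer tiebreaker keeps heap comparisons away from the dicts.
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(sorted(freq.items()))]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

codes = huffman_codes("abracadabra")
encoded_len = sum(len(codes[ch]) for ch in "abracadabra")
```

For "abracadabra" the frequent symbol 'a' receives a 1-bit codeword while rare symbols receive 3-bit codewords, minimizing the average code length.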
Consensus and agreement algorithms - Introduction.pdf (AzmiNizar1)
This document discusses consensus and agreement algorithms. It defines the Byzantine agreement problem which requires processes to reach agreement on an initial value despite faulty processes. The key properties are agreement, validity, and termination. It also describes the consensus problem which is similar but each process starts with a value, and the interactive consistency problem where processes must agree on a set of values. The document outlines common assumptions made in studying these problems, such as failure models and synchronous/asynchronous communication.
This document discusses error detection and correction techniques used in data communication. It describes different types of errors like single-bit errors and burst errors. It then explains various error detection methods like vertical redundancy check (VRC), longitudinal redundancy check (LRC), cyclic redundancy check (CRC), and checksum that work by adding redundant bits. The document also covers error correction techniques like single-bit error correction using Hamming code which allows detecting and correcting single-bit errors.
Blockchain is a distributed ledger technology that allows for the safe distribution of a ledger across multiple nodes. It works by having each transaction digitally signed and added in a "block" along with a proof of work. This prevents double spending and allows nodes to reach consensus on the transaction history without a centralized authority. Smart contracts enable decentralized applications to run transactions automatically according to the program. However, first generation blockchains face challenges around centralization, scalability, and smart contract quality. New solutions aim to address these through alternative consensus methods, off-chain transactions, and designed smart contract languages.
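The hash-chaining and proof-of-work idea described above can be made concrete with a toy sketch. This is a drastically simplified illustration (no signatures, no network, trivial difficulty), with all names and parameters chosen for the example:

```python
import hashlib
import json

def mine_block(prev_hash, transactions, difficulty=2):
    """Find a nonce so the block hash starts with `difficulty` zero hex
    digits: a toy stand-in for Bitcoin-style proof of work."""
    nonce = 0
    while True:
        header = json.dumps({"prev": prev_hash, "txs": transactions,
                             "nonce": nonce}, sort_keys=True)
        digest = hashlib.sha256(header.encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return {"prev": prev_hash, "txs": transactions,
                    "nonce": nonce, "hash": digest}
        nonce += 1

# Each block commits to its predecessor's hash, so tampering with an
# old block invalidates every later block in the chain.
genesis = mine_block("0" * 64, ["alice pays bob 5"])
block2 = mine_block(genesis["hash"], ["bob pays carol 2"])
```

The chaining, plus the cost of redoing the proof of work for every subsequent block, is what lets nodes agree on a single transaction history without a central authority.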
Generative adversarial networks (GANs) are a class of machine learning frameworks where two neural networks contest with each other in a game. One network generates new data instances, while the other evaluates them for authenticity. The generator creates synthetic instances to fool the discriminator, while the discriminator learns to identify the generator's fakes from true instances.
Interpretable Learning Model for Lower Dimensional Feature Space: A Case stud... (Kishor Datta Gupta)
Detecting brown spot on rice leaves is an urgent problem in agriculture, as brown spot disease lessens the rice yield remarkably. Several segmentation techniques have been applied to identify and extract the infected portion of the rice leaf, and machine learning algorithms such as decision trees and support vector machines have been applied to detect this infection. In particular, combinations of convolutional neural networks with these algorithms have also been tried. Although this approach has achieved good accuracy (96.8%), such approaches raise issues regarding the size and interpretability of the feature space and the interpretability of the decision model. Indeed, deep learning networks automatically create a feature space that usually contains a massive number of features, many of them not necessarily appropriate. This vast number of features worsens the non-interpretability of the model, and training with so many features is computationally expensive. To resolve these issues, we propose a method to extract a few interpretable features from rice-leaf images and construct a low-dimensional feature space; interpretation shows that these features deserve significant credit for the decent accuracy of our classification model.
Similar to Randomized Byzantine Problem by Rabin (20)
A safer approach to build recommendation systems on unidentifiable data (Kishor Datta Gupta)
Conference: 14th International Conference on Agents and Artificial Intelligence (ICAART 2022)
In recent years, data security has been one of the biggest concerns, and individuals have grown increasingly worried about the security of their personal information. Personalization typically necessitates the collection of individual data for analysis, exposing customers to privacy concerns. Companies create an illusion of safety to make people feel safe using a mainstream word, "encryption". Though encryption protects personal data from an external breach, the companies can still exploit personal data collected from users as they own the encryption keys. We present a naive yet secure approach for recommending movies to consumers without collecting any personally identifiable information. Our proposed approach can assist a movie recommendation system understand user preferences using the user's movie watch-time and watch history only. We conducted a comprehensive and comparative study on the performance of three deep reinforcement learning architectures, namely DQN, DDQN, and D3QN, on the same task. We observed that D3QN outperformed the other two architectures and achieved a precision of 0.880, recall of 0.805, and F1 score of 0.830. The results show that we can build a competitive movie recommendation system using unidentifiable data.
This document summarizes a presentation on machine learning models, adversarial attacks, and defense strategies. It discusses adversarial attacks on machine learning systems, including GAN-based attacks. It then covers various defense strategies against adversarial attacks, such as filter-based adaptive defenses and outlier-based defenses. The presentation also addresses issues around bias in AI systems and the need for explainable and accountable AI.
The document discusses adversarial attacks (AAs) on AI/ML systems. It outlines different types of attacks like poisoning attacks, evasion attacks, and Trojan attacks. It also describes various evasion-based attack methods like one pixel attacks and gradient attacks. Additionally, it notes that AAs can be transferable between models and are less effective in physical environments. The document discusses current defense strategies like retraining models, input reconstruction, and model modifications. However, it notes limitations like reduced accuracy and vulnerability to adaptive attacks. It also does not sufficiently test defense practicality or computational costs. In conclusion, the document argues that defending against adaptive attacks and Trojan attacks is particularly challenging and requires end-to-end protections.
Robust Filtering Schemes for Machine Learning Systems to Defend Adversarial A...Kishor Datta Gupta
This presentation discusses robust filtering schemes to defend machine learning systems against adversarial attacks. It outlines three main defense schemes: input filtering, output filtering, and an end-to-end protection scheme. The input filtering scheme uses a genetic algorithm to determine an optimal sequence of filters to detect adversarial examples. The output filtering scheme formulates the detection of adversarial inputs as an outlier detection problem. The end-to-end scheme integrates components for adversarial detection, filtering, and classification into a unified framework for protection. Experimental results show the proposed approaches can effectively detect various adversarial attack types while maintaining high classification accuracy.
Zero-shot learning allows a model to recognize classes that it was not trained on by utilizing auxiliary information about both seen and unseen classes during training. The model is trained to predict this auxiliary information, like word embeddings or manually designed features, for the seen classes. During testing, the model predicts the auxiliary information for an unseen class and assigns it to the class whose auxiliary information is closest, even if that class was not part of the training data. This allows the model to generalize to new classes without requiring labeled examples of those classes.
Using Negative Detectors for Identifying Adversarial Data Manipulation in Mac...Kishor Datta Gupta
With the increased popularity of Machine Learning (ML) in real-world applications, adversarial attacks are emerging to subvert the ML-based decision support systems. It appears that the existing adversarial defenses are ineffective against adaptive attacks since these are highly depend on knowledge of prior attacks and the ML model architecture. To alleviate the challenges, We propose a negative filtering strategy that does not require any adversarial knowledge and can work independent of ML models. This filtering strategy relies on salient features of clean (training) data and employs a complementary approach to cover possible attack surface in an application. Our empirical experiments with different data sets demonstrate that the negative filters could effectively detect wide-range of adversarial inputs and update itself to protect against adaptive attacks.
Deep Reinforcement Learning based Recommendation with Explicit User-ItemInter...Kishor Datta Gupta
—Recommendation is crucial in both academia andindustry, and various techniques are proposed such as content-based collaborative filtering, matrix factorization, logistic re-gression, factorization machines, neural networks and multi-armed bandits. However, most of the previous studies sufferfrom two limitations: (1) considering the recommendation asa static procedure and ignoring the dynamic interactive naturebetween users and the recommender systems; (2) focusing on theimmediate feedback of recommended items and neglecting thelong-term rewards. To address the two limitations, in this paperwe propose a novel recommendation framework based on deepreinforcement learning, called DRR. The DRR framework treatsrecommendation as a sequential decision making procedure andadopts an “Actor-Critic” reinforcement learning scheme to modelthe interactions between the users and recommender systems,which can consider both the dynamic adaptation and long-term rewards. Further more, a state representation module isincorporated into DRR, which can explicitly capture the interac-tions between items and users. Three instantiation structures aredeveloped. Extensive experiments on four real-world datasets areconducted under both the offline and online evaluation settings.The experimental results demonstrate the proposed DRR methodindeed outperforms the state-of-the-art competitors
Machine learning can be applied in various areas of computer security like network security, endpoint protection, application security, user behavior analysis, and process behavior analysis. Some common machine learning techniques that are useful for security include regression for prediction and detection of anomalies, classification to identify threats and attacks, and clustering for forensic analysis and to detect outliers. Example applications of machine learning in security include using regression to detect anomalies in network traffic, classification to identify malware, and clustering to separate malware from legitimate files.
Policy Based reinforcement Learning for time series Anomaly detectionKishor Datta Gupta
This document discusses a policy-based reinforcement learning approach called PTAD for time series anomaly detection. PTAD formulates anomaly detection as a Markov Decision Process and uses an asynchronous actor-critic algorithm to learn a stochastic policy. The agent takes as input current and previous time series data and actions, and outputs a decision of normal or anomalous. It is rewarded based on a confusion matrix calculation. Experimental results show PTAD achieves best performance both within and across datasets by adjusting to different behaviors. The stochastic policy allows exploring precision-recall tradeoffs. While interesting, it is not compared to neural network based techniques like autoencoders.
This document discusses intrusion detection systems (IDS). It covers the key components of an IDS, including methods of intrusion detection like audit trail processing, on-the-fly processing, profiles of normal behavior, signatures of abnormal behavior, and parameter pattern matching. The document also discusses building network-based IDS using tools like Snort and host-based IDS. It provides examples of labs to analyze network and wireless intrusion detection using machine learning techniques.
understanding the pandemic through mining covid news using natural language p...Kishor Datta Gupta
This document summarizes a research presentation on analyzing Covid-19 news reports from newspapers in developed and developing countries using natural language processing. It introduces the research aim to understand how newspapers portray the pandemic using NLP techniques on reports from the US and Bangladesh. The researchers collected over 1000 news articles to create the NNK Dataset, which they preprocessed and analyzed to extract keywords, sentiments, and case numbers. Word clouds of frequent terms and numeric extractions showed how coverage evolved over time. The dataset was made publicly available to encourage further analysis of portraying pandemics through newspapers.
The document discusses using different representation spaces for digits when classifying MNIST data. It shows that classifying digits when each digit has its own best representation space that other digits are compared to leads to much higher accuracy, ranging from 97-99%, compared to using the same representation space for all digits which only achieves 64% accuracy.
"Can NLP techniques be utilized as a reliable tool for medical science?" -Bui...Kishor Datta Gupta
Artificial intelligence persists on being a right-hand tool for many branches of biology. From preliminary advices and treatments, such as understanding if symptoms related to fever or cold, to critical detection of cancerous cell or classification of X-rays, traditional machine learning and deep learning techniques achieved remarkable feats. However, total dependency on machine-based prediction is yet a far fetched concept. In this paper, we provide a framework utilizing several Natural Language Processing (NLP) algorithms to construct a comparative analysis. We create an ensemble of top-performing algorithms to accomplish classification task on medical reports. We compare both the traditional machine learning and deep learning techniques and evaluate their probabilities of being reliable on analyzing medical diagnosis. We concluded that an ensemble approach can provide reliable outcomes with accuracy over 92% and that the current state of the art is unequipped to provide the result with the standard needed for health sectors but an ensemble of these techniques can be a pathway for future research direction.
Conference: IEEE 11th Annual Information Technology, Electronics and Mobile Communication Conference (IEEE IEMCON 2020)At: Vancouver
Applicability issues of Evasion-Based Adversarial Attacks and Mitigation Tech...Kishor Datta Gupta
Adversarial attacks are considered security risks for Artificial Intelligence-based systems. Researchers have been studying different defense techniques appropriate for adversarial attacks. Evaluation strategies of these attacks and corresponding defenses are primarily conducted on trivial benchmark analysis. We have observed that most of these analyses have practical limitations for both attacks and for defense methods. In this work, we analyzed the adversarial attacks based on how these are performed in real-world problems and what steps can be taken to mitigate their effects. We also studied practicability issues of well-established defense techniques against adversarial attacks and proposed some guidelines for better and effective solutions. We demonstrated that the adversarial attacks detection rate and destruction rate co-related inversely, which can be used in designing defense techniques. Based on our experimental results, we suggest an adversarial defense model incorporating security policies that are suitable for practical purposes.
https://www.researchgate.net/publication/344463103_Applicability_issues_of_Evasion-Based_Adversarial_Attacks_and_Mitigation_Techniques
Adversarial Input Detection Using Image Processing Techniques (IPT)Kishor Datta Gupta
Modern deep learning models for the computer vision domain are vulnerable against adversarial attacks. Image prepossessing technique based defense against malicious input is currently considered obsolete as this defense is not effective against all types of attacks. The advanced adaptive attack can easily defeat pre-processing based defenses. In this paper, we proposed a framework that will generate a set of image processing sequences (several image processing techniques in a series). We randomly select a set of Image processing technique sequences (IPTS) dynamically to answer the obscurity question in testing time. This paper outlines methodology utilizing varied datasets examined with various adversarial data manipulations. For specific attack types and dataset, it produces unique IPTS. The outcome of our empirical experiments shows that the method can efficiently employ as processing for any machine learning models. The research also showed that our process works against adaptive attacks as we are using a non-deterministic set of IPTS for each adversarial input.
This document discusses clustering clean and adversarial images from the MNIST dataset using K-means, LDA, and T-SNE clustering methods. It contains 10,000 clean images and 10,000 adversarial images generated using the FGSM attack method from 10 classes in MNIST. The document applies principal component analysis to extract features from the images before clustering them to visualize how the different methods group the clean and adversarial samples.
This document discusses basic digital image concepts including image data representations, color channels, bit depth, CMYK vs RGB color models, image blur filters using kernels, and kernel operations used in convolutional neural networks (CNN).
An empirical study on algorithmic bias (aiml compsac2020)Kishor Datta Gupta
In all goal-oriented selection activities, an existence of certain level of bias is unavoidable and may be desired for efficient artificial intelligence based decision support systems. However, a fair independent comparison of all eligible entities is essential to alleviate explicit bias in competitive marketplace. For example, searching online for a good or service, it is expected that the underlying algorithm will provide fair results by searching all available entities in the category mentioned. However, a biased search can make a narrow or collaborative query, ignoring competitive outcomes, resulting customers in costing more or getting lower quality products or services for the money they spend. This paper describes algorithmic bias in different contexts with examples and scenarios, best practices to detect bias, and two case studies to identify algorithmic bias.
The use of Nauplii and metanauplii artemia in aquaculture (brine shrimp).pptxMAGOTI ERNEST
Although Artemia has been known to man for centuries, its use as a food for the culture of larval organisms apparently began only in the 1930s, when several investigators found that it made an excellent food for newly hatched fish larvae (Litvinenko et al., 2023). As aquaculture developed in the 1960s and ‘70s, the use of Artemia also became more widespread, due both to its convenience and to its nutritional value for larval organisms (Arenas-Pardo et al., 2024). The fact that Artemia dormant cysts can be stored for long periods in cans, and then used as an off-the-shelf food requiring only 24 h of incubation makes them the most convenient, least labor-intensive, live food available for aquaculture (Sorgeloos & Roubach, 2021). The nutritional value of Artemia, especially for marine organisms, is not constant, but varies both geographically and temporally. During the last decade, however, both the causes of Artemia nutritional variability and methods to improve poorquality Artemia have been identified (Loufi et al., 2024).
Brine shrimp (Artemia spp.) are used in marine aquaculture worldwide. Annually, more than 2,000 metric tons of dry cysts are used for cultivation of fish, crustacean, and shellfish larva. Brine shrimp are important to aquaculture because newly hatched brine shrimp nauplii (larvae) provide a food source for many fish fry (Mozanzadeh et al., 2021). Culture and harvesting of brine shrimp eggs represents another aspect of the aquaculture industry. Nauplii and metanauplii of Artemia, commonly known as brine shrimp, play a crucial role in aquaculture due to their nutritional value and suitability as live feed for many aquatic species, particularly in larval stages (Sorgeloos & Roubach, 2021).
Phenomics assisted breeding in crop improvementIshaGoswami9
As the population is increasing and will reach about 9 billion upto 2050. Also due to climate change, it is difficult to meet the food requirement of such a large population. Facing the challenges presented by resource shortages, climate
change, and increasing global population, crop yield and quality need to be improved in a sustainable way over the coming decades. Genetic improvement by breeding is the best way to increase crop productivity. With the rapid progression of functional
genomics, an increasing number of crop genomes have been sequenced and dozens of genes influencing key agronomic traits have been identified. However, current genome sequence information has not been adequately exploited for understanding
the complex characteristics of multiple gene, owing to a lack of crop phenotypic data. Efficient, automatic, and accurate technologies and platforms that can capture phenotypic data that can
be linked to genomics information for crop improvement at all growth stages have become as important as genotyping. Thus,
high-throughput phenotyping has become the major bottleneck restricting crop breeding. Plant phenomics has been defined as the high-throughput, accurate acquisition and analysis of multi-dimensional phenotypes
during crop growing stages at the organism level, including the cell, tissue, organ, individual plant, plot, and field levels. With the rapid development of novel sensors, imaging technology,
and analysis methods, numerous infrastructure platforms have been developed for phenotyping.
Or: Beyond linear.
Abstract: Equivariant neural networks are neural networks that incorporate symmetries. The nonlinear activation functions in these networks result in interesting nonlinear equivariant maps between simple representations, and motivate the key player of this talk: piecewise linear representation theory.
Disclaimer: No one is perfect, so please mind that there might be mistakes and typos.
dtubbenhauer@gmail.com
Corrected slides: dtubbenhauer.com/talks.html
Unlocking the mysteries of reproduction: Exploring fecundity and gonadosomati...AbdullaAlAsif1
The pygmy halfbeak Dermogenys colletei, is known for its viviparous nature, this presents an intriguing case of relatively low fecundity, raising questions about potential compensatory reproductive strategies employed by this species. Our study delves into the examination of fecundity and the Gonadosomatic Index (GSI) in the Pygmy Halfbeak, D. colletei (Meisner, 2001), an intriguing viviparous fish indigenous to Sarawak, Borneo. We hypothesize that the Pygmy halfbeak, D. colletei, may exhibit unique reproductive adaptations to offset its low fecundity, thus enhancing its survival and fitness. To address this, we conducted a comprehensive study utilizing 28 mature female specimens of D. colletei, carefully measuring fecundity and GSI to shed light on the reproductive adaptations of this species. Our findings reveal that D. colletei indeed exhibits low fecundity, with a mean of 16.76 ± 2.01, and a mean GSI of 12.83 ± 1.27, providing crucial insights into the reproductive mechanisms at play in this species. These results underscore the existence of unique reproductive strategies in D. colletei, enabling its adaptation and persistence in Borneo's diverse aquatic ecosystems, and call for further ecological research to elucidate these mechanisms. This study lends to a better understanding of viviparous fish in Borneo and contributes to the broader field of aquatic ecology, enhancing our knowledge of species adaptations to unique ecological challenges.
PPT on Direct Seeded Rice presented at the three-day 'Training and Validation Workshop on Modules of Climate Smart Agriculture (CSA) Technologies in South Asia' workshop on April 22, 2024.
hematic appreciation test is a psychological assessment tool used to measure an individual's appreciation and understanding of specific themes or topics. This test helps to evaluate an individual's ability to connect different ideas and concepts within a given theme, as well as their overall comprehension and interpretation skills. The results of the test can provide valuable insights into an individual's cognitive abilities, creativity, and critical thinking skills
Authoring a personal GPT for your research and practice: How we created the Q...Leonel Morgado
Thematic analysis in qualitative research is a time-consuming and systematic task, typically done using teams. Team members must ground their activities on common understandings of the major concepts underlying the thematic analysis, and define criteria for its development. However, conceptual misunderstandings, equivocations, and lack of adherence to criteria are challenges to the quality and speed of this process. Given the distributed and uncertain nature of this process, we wondered if the tasks in thematic analysis could be supported by readily available artificial intelligence chatbots. Our early efforts point to potential benefits: not just saving time in the coding process but better adherence to criteria and grounding, by increasing triangulation between humans and artificial intelligence. This tutorial will provide a description and demonstration of the process we followed, as two academic researchers, to develop a custom ChatGPT to assist with qualitative coding in the thematic data analysis process of immersive learning accounts in a survey of the academic literature: QUAL-E Immersive Learning Thematic Analysis Helper. In the hands-on time, participants will try out QUAL-E and develop their ideas for their own qualitative coding ChatGPT. Participants that have the paid ChatGPT Plus subscription can create a draft of their assistants. The organizers will provide course materials and slide deck that participants will be able to utilize to continue development of their custom GPT. The paid subscription to ChatGPT Plus is not required to participate in this workshop, just for trying out personal GPTs during it.
ESPP presentation to EU Waste Water Network, 4th June 2024 “EU policies driving nutrient removal and recycling
and the revised UWWTD (Urban Waste Water Treatment Directive)”
EWOCS-I: The catalog of X-ray sources in Westerlund 1 from the Extended Weste...Sérgio Sacani
Context. With a mass exceeding several 104 M⊙ and a rich and dense population of massive stars, supermassive young star clusters
represent the most massive star-forming environment that is dominated by the feedback from massive stars and gravitational interactions
among stars.
Aims. In this paper we present the Extended Westerlund 1 and 2 Open Clusters Survey (EWOCS) project, which aims to investigate
the influence of the starburst environment on the formation of stars and planets, and on the evolution of both low and high mass stars.
The primary targets of this project are Westerlund 1 and 2, the closest supermassive star clusters to the Sun.
Methods. The project is based primarily on recent observations conducted with the Chandra and JWST observatories. Specifically,
the Chandra survey of Westerlund 1 consists of 36 new ACIS-I observations, nearly co-pointed, for a total exposure time of 1 Msec.
Additionally, we included 8 archival Chandra/ACIS-S observations. This paper presents the resulting catalog of X-ray sources within
and around Westerlund 1. Sources were detected by combining various existing methods, and photon extraction and source validation
were carried out using the ACIS-Extract software.
Results. The EWOCS X-ray catalog comprises 5963 validated sources out of the 9420 initially provided to ACIS-Extract, reaching a
photon flux threshold of approximately 2 × 10−8 photons cm−2
s
−1
. The X-ray sources exhibit a highly concentrated spatial distribution,
with 1075 sources located within the central 1 arcmin. We have successfully detected X-ray emissions from 126 out of the 166 known
massive stars of the cluster, and we have collected over 71 000 photons from the magnetar CXO J164710.20-455217.
2. Motivation
• One of the fundamental problems in distributed systems is
the consensus problem.
• A faulty processor can send inconsistent information to the
other processors, violating the correctness properties of the
distributed system.
• Influenced by the popularity of the dining philosophers
problem, Leslie Lamport framed the problem as a story about
generals to draw attention to fault-tolerant system design.
3. History
• In 1975 the Two Generals Problem (sometimes called the
Chinese Generals Problem) was introduced by E. A.
Akkoyunlu, K. Ekanadham, and R. V. Huber.
• In 1978 it was named the "Two Generals Paradox" by Jim Gray.
• In 1980 Leslie Lamport published "Reaching Agreement in
the Presence of Faults" (with Marshall Pease and Robert
Shostak). This paper shows that "Byzantine" faults, in which
a faulty processor sends inconsistent information to the
other processors, can defeat any traditional
three-processor algorithm.
4. History (cont)
• In April 1982 Leslie Lamport wrote "Byzantine Generals and
Transaction Commit Protocols" (with Michael Fischer),
relating the Byzantine problem to transaction commit.
• In July 1982, Leslie Lamport, with Marshall Pease and
Robert Shostak, published the paper that established the
name "The Byzantine Generals Problem".
• In 1982 Ben-Or published "Another advantage of free choice:
Completely asynchronous agreement protocols".
5. History (cont)
• In July 1983 Leslie Lamport published a weaker variant, "The
Weak Byzantine Generals Problem".
• In 1983 Michael O. Rabin proposed the "Randomized Byzantine
Generals" solution.
6. Background
Consensus Problem:
The consensus problem requires agreement among a number of
processes (or agents) on a single data value. Some of the
processes (agents) may fail or be unreliable in other ways, so
consensus protocols must be fault tolerant, or resilient. The
processes must somehow put forth their candidate values,
communicate with one another, and agree on a single consensus
value.
7. Consensus Problem(cont)
• Consensus problem example:
Suppose processes p1, p2, p3 decide that a data value is X,
while another process p4 decides that the value is not X.
Reaching a single agreed value despite such disagreement is
the consensus problem.
• Practical examples: mutual exclusion, leader election,
transactions
8. Consensus Problem
• Every processor has an input x ∈ X.
• Termination: Eventually every non-faulty processor
must decide on a value y.
• Agreement: All decisions by non-faulty processors must
be the same.
• Validity: If all inputs are the same, then the decision of a
non-faulty processor must equal the common input (this
avoids trivial solutions).
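The three conditions can be illustrated with a toy decision rule. The sketch below is a hypothetical, crash-fault-only example (the function name `decide` and the tie-breaking rule are illustrative assumptions, not part of any real protocol); it works only when every processor sees the same multiset of inputs, which Byzantine faults can prevent — that is exactly what the rest of the deck addresses.

```python
# Toy sketch: a deterministic majority rule that satisfies Termination,
# Agreement, and Validity when all processors see the same inputs.
from collections import Counter

def decide(inputs):
    """Every non-faulty processor decides the most frequent input value.

    Termination: always returns a value.
    Agreement:   the same inputs always yield the same result.
    Validity:    if all inputs are equal, the common value wins.
    """
    counts = Counter(inputs)
    # Break ties deterministically (by value) so all callers agree.
    value, _ = max(counts.items(), key=lambda kv: (kv[1], kv[0]))
    return value

# Validity: identical inputs force the common value.
assert decide(["X", "X", "X", "X"]) == "X"
# Agreement: every processor running the same rule gets the same answer.
assert decide(["X", "X", "Y"]) == decide(["X", "X", "Y"]) == "X"
```

A Byzantine processor defeats this rule by reporting different inputs to different peers, so the processors no longer vote over the same multiset.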
9. Consensus Problem(cont)
Reasons consensus can fail:
• Link failure
• Processor failure
• Byzantine fault
Byzantine fault: one or more processors become malicious or
faulty and send wrong messages.
10. Byzantine Fault
Characteristics
• Most difficult to handle
• Most common
The difference between a processor failure and a Byzantine
failure is that in a Byzantine failure the process sends wrong
information, while in a processor failure the processor sends
nothing at all. That is why a Byzantine failure is more
difficult to detect than a processor failure.
12. Byzantine Problem
• “We imagine that several divisions of the Byzantine army
are camped outside an enemy city, each division
commanded by its own general. The generals can
communicate with one another only by messenger. After
observing the enemy, they must decide upon a common
plan of action. However, some of the generals may be
traitors, trying to prevent the loyal generals from
reaching agreement.”----LESLIE LAMPORT, ROBERT
SHOSTAK, and MARSHALL PEASE
13. General Byzantine
Problem
o Each division of the Byzantine army is directed by its own general.
o There are n generals, some of whom are traitors.
o All divisions are camped outside the enemy castle, observing the enemy.
o They communicate with each other by messengers.
o Requirements:
• A: All loyal generals decide upon the same plan of action.
• B: A small number of traitors cannot cause the loyal generals to
adopt a bad plan.
Note: We do not need to identify the traitors.
14. Naïve Solution
All generals send messages to all other generals; the majority
result is taken as the decision.
Why the solution fails:
o Traitors may send different values to different
generals.
o Loyal generals might get conflicting values from
traitors.
15. Reduction
Interactive Consistency Conditions:
o IC1: All loyal lieutenants obey the same order.
o IC2: If the commanding general is loyal, then every loyal
lieutenant obeys the order he sends.
Note: If the general is loyal, IC2 => IC1.
16. Example Scenario
Conditions
• 3 generals, 1 traitor among them.
• Message: Attack / Retreat
For LIEUTENANT1, who is the traitor: the COMMANDER or LIEUTENANT2?
In Fig. 1 LIEUTENANT1 has to attack to satisfy IC2. In Fig. 2
LIEUTENANT1 attacks while LIEUTENANT2 retreats, so IC1 is
violated.
So it is an impossible situation.
17. Limit of traitors
Proof by contradiction:
o Assume there is a solution for 3m generals with m
traitors.
o Reduce it to the 3-general problem, which we just showed
impossible.
So we can conclude that no solution with fewer than
3m+1 generals can cope with m traitors.
18. Solution by Oral Message
Assumptions
A1 – Every message that is sent is delivered correctly.
Assures: Traitors cannot interfere with communication as a
third party.
A2 – The receiver of a message knows who sent it.
Assures: Traitors cannot send fake messages.
A3 – The absence of a message can be detected.
Assures: Traitors cannot interfere by being silent.
Note: Default the order to "retreat" for a silent traitor.
19. Oral Message Algorithm
Algorithm OM(0):
(1) The commander sends his value to every lieutenant.
(2) Each lieutenant uses the value he receives from the
commander, or uses the value RETREAT if he receives no value.
20. Oral Message
Algorithm(cont)
For m > 0:
o The commander sends his value to every lieutenant (vi).
o Each lieutenant acts as commander for OM(m-1) and sends
vi to the other n-2 lieutenants (or RETREAT if no value was
received).
o For each i, and each j ≠ i, let vj be the value lieutenant i
receives from lieutenant j in step (2) using OM(m-1).
Lieutenant i uses the value majority(v1, ..., vn-1).
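The OM(m) recursion can be sketched as a small simulation. This is a minimal illustration, not a faithful distributed implementation: the traitor strategy (sending ATTACK or RETREAT depending on receiver parity) is one arbitrary malicious behaviour chosen for the demo, and all message delivery is simulated by function calls.

```python
# Minimal simulation of Lamport's Oral Message algorithm OM(m).
from collections import Counter

ATTACK, RETREAT = "ATTACK", "RETREAT"

def send(sender, receiver, value, traitors):
    """A loyal sender forwards its value; a traitor sends conflicting orders."""
    if sender in traitors:
        return ATTACK if receiver % 2 == 0 else RETREAT
    return value

def majority(values):
    """Deterministic majority so all loyal processors break ties identically."""
    counts = Counter(values)
    value, _ = max(counts.items(), key=lambda kv: (kv[1], kv[0]))
    return value

def om(m, commander, lieutenants, order, traitors):
    """Return {lieutenant: decided order} after running OM(m)."""
    # Step 1: the commander sends its value to every lieutenant.
    received = {L: send(commander, L, order, traitors) for L in lieutenants}
    if m == 0:
        return received  # OM(0): each lieutenant uses the value received
    # Step 2: each lieutenant acts as commander in OM(m-1) for the rest.
    sub = {L: om(m - 1, L, [x for x in lieutenants if x != L],
                 received[L], traitors)
           for L in lieutenants}
    # Step 3: each lieutenant takes the majority of its own value and
    # the values the others report for it.
    return {L: majority([received[L]] +
                        [sub[j][L] for j in lieutenants if j != L])
            for L in lieutenants}
```

With n = 4 and m = 1: a traitorous commander still leaves all loyal lieutenants with the same decision (IC1), and a loyal commander with one traitorous lieutenant has every loyal lieutenant obey his order (IC2).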
23. Issue With Oral Message
• Needs 3f+1 nodes to tolerate f failures, which is expensive.
• Difficult because traitors can lie about what others said.
• Message complexity: O(n^m)
24. Signed Message Solution
New assumptions:
(a) A loyal general's signature cannot be forged, and any
alteration of the contents of his signed messages can be
detected.
(b) Anyone can verify the authenticity of a general's
signature.
Steps:
• Each lieutenant maintains a set V of properly signed
orders received so far.
• The commander sends a signed order to the lieutenants.
25. Signed Message
Algorithm
• A lieutenant receives an order from someone (either
from commander or other lieutenants),
o Verifies authenticity and puts it in V.
o If there are less than m distinct signatures on the order
• Augments orders with signature
• Relays messages to lieutenants who have not seen the order.
• When lieutenant receives no new messages, and use
choice(V) as the desired action.
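The signed-message relay rule can be sketched as follows. This is a simplified model, not the full SM(m) algorithm: signatures are modelled as an append-only list of signer ids (unforgeable by assumption (a)), traitorous relays behave honestly here (the interesting misbehaviour in this sketch is a traitorous commander sending different signed orders), and `choice(V)` defaults to RETREAT when the orders in V conflict.

```python
# Simplified sketch of the Signed Message algorithm SM(m).
ATTACK, RETREAT = "ATTACK", "RETREAT"

def sm(m, commander, lieutenants, order, traitors):
    """Return {lieutenant: decided order} under a signed-message relay rule."""
    V = {L: set() for L in lieutenants}  # signed orders seen so far

    def relay(receiver, value, sigs):
        # sigs models the chain of signatures on the order; by the
        # unforgeability assumption the value cannot be altered in transit.
        if value in V[receiver]:
            return                      # already seen: ignore
        V[receiver].add(value)
        if len(sigs) <= m:              # fewer than m lieutenant signatures
            for other in lieutenants:   # augment signature and relay on
                if other != receiver and other not in sigs:
                    relay(other, value, sigs + [receiver])

    # The commander signs and sends; a traitorous commander may send
    # different signed orders to different lieutenants.
    for L in lieutenants:
        value = ((ATTACK if L % 2 == 0 else RETREAT)
                 if commander in traitors else order)
        relay(L, value, [commander])

    def choice(values):
        # Deterministic choice: obey a unique order, else default RETREAT.
        return values.pop() if len(values) == 1 else RETREAT

    return {L: choice(set(V[L])) for L in lieutenants}
```

Because a traitorous commander's conflicting orders all end up in every loyal lieutenant's set V, the loyal lieutenants apply the same `choice(V)` and still agree, which is why SM(m) needs far fewer nodes than OM(m).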
27. Advantages and Issues
Advantages:
• Needs only f+2 nodes (tolerates f traitors for any n ≥ f+2).
• Easier because traitors can only lie about other traitors.
Issues:
• Communication overhead.
• Cost of generating and verifying signatures.
What if not all generals can reach all other generals directly?
28. Missing Paths
If the communication graph is 3m-regular and there are at most m traitors, the problem can still be solved.
29. Reliability
Achieving reliability in the face of arbitrary malfunctioning is a difficult problem, and its solution seems to be inherently expensive.
To avoid this expense, it is often assumed that a computer may fail to respond but will never respond incorrectly (fail-stop rather than Byzantine behavior).
30. Asynchronous Systems
No deterministic algorithm can guarantee to reach consensus in an asynchronous system, even with a single process crash failure (the FLP impossibility result).
31. Randomized Byzantine Generals
Based on two known limits:
• With n processes of which up to t are faulty, t+1 computing phases are necessary for reaching Byzantine Agreement deterministically.
• The BG problem has no deterministic solution in the asynchronous case.
Notation: n = number of processes, t = number of faulty processes, Gi = the generals (processes).
32. Why Randomized?
• It is certain to achieve Byzantine Agreement.
• The expected number of rounds required to do so is 4, independent of n and t.
• The total expected number of messages is c·n·t, where c is a small constant.
• Some variants employ a fixed number R of rounds to reach Byzantine Agreement, but with a probability 2^-R of error.
34. Basic Concept
Assumptions:
• Every Gi can directly exchange messages with every other Gj, where i, j ∈ {1, …, n}.
• Every Gi agrees on a common value in the message.
• If a Gi is proved faulty for not having the common value, it remains classified as faulty even if it later obtains the common value.
35. Basic Concept (cont.)
• If all the proper (non-faulty) processes have the same initial message, the system is called proper; otherwise it is faulty.
• The processes reach agreement on the common message by exchanging information.
• We assume that each process Gi has a local phase clock p(i), and that Gi assigns p(i) := p(i) + 1 at the end of each phase.
36. Authentication
• A public directory contains, for each participant B, a public key KB.
• When participant B needs to authenticate a message M, he employs a secret key DB to compute another message DB(M) = N.
• Every other user can recover M from N by use of KB.
Note: The public-key directory is part of the data in each Gi, and must be incorporated by a non-faulty "dealer" at the creation of the processes Gi.
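The DB/KB relationship above can be illustrated with textbook RSA over tiny primes. This is insecure and purely illustrative (real signatures sign a hash of M with full-size keys); the numbers below are the standard classroom example, not values from Rabin's paper.

```python
# Toy textbook-RSA signature: B signs with the secret key, anyone verifies
# with the public key. Tiny primes for illustration only -- never use in practice.
p, q = 61, 53
n = p * q                   # public modulus (3233)
phi = (p - 1) * (q - 1)     # 3120
e = 17                      # public key K_B = (e, n)
d = pow(e, -1, phi)         # secret key D_B = (d, n)

def sign(m):
    """B computes N = D_B(M) = M^d mod n."""
    return pow(m, d, n)

def verify(sig):
    """Anyone recovers M from N using K_B: M = N^e mod n."""
    return pow(sig, e, n)

msg = 42
tagged = sign(msg)
assert verify(tagged) == msg        # authentic message is recovered
assert verify(tagged + 1) != msg    # any alteration is detected
```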
37. Lottery
• The algorithm employs a lottery procedure by which the proper Gi's can agree on a randomly chosen s ∈ {0, 1}.
• It uses Shamir's secret-sharing scheme to distribute shares of pre-dealt random bits, so no coalition of faulty processes can learn or bias the coin on its own.
• The lottery procedure admits a parameter m < N, so that Lottery(m) is the m-th lottery round.
• At the end of the execution of Lottery(m) by the proper Gi, all the proper processes will share the random value Sm.
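The sharing/reconstruction step behind the lottery can be sketched with Shamir's (k, n) secret sharing over a prime field. The field size and the choice k = 3, n = 7 below are arbitrary demo parameters, not Rabin's; this shows only the dealer's sharing of one random bit and its reconstruction by any k proper processes.

```python
import random

PRIME = 2_147_483_647  # prime modulus for the share arithmetic

def share(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

coin = random.randrange(2)              # the dealer's random bit s_m
shares = share(coin, k=3, n=7)          # one share handed to each G_i
assert reconstruct(shares[:3]) == coin  # any 3 proper processes recover s_m
```

Fewer than k shares reveal nothing about the bit, which is why faulty minorities cannot predict the lottery outcome in advance.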
38. The Agreement Protocol (BAP)
Structure of each round:
Polling: Gi polls the other processes on their value of the message.
Lottery: the Gi's decide on a common random value s.
Decision: using s, Gi determines whether to adopt the plurality-candidate version of the message obtained through Polling or to keep his current version of the message.
These three steps repeat for up to R rounds, where R is the desired reliability parameter.
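One round of the Polling → Lottery → Decision cycle might be sketched as follows. The threshold rule below is a simplification, not Rabin's exact decision rule; the point is only that the shared coin s selects between two quorum thresholds, so faulty processes cannot predict which tally the plurality must beat.

```python
from collections import Counter

def bap_round(values, n, t, coin):
    """One simplified BAP round for a single process Gi (illustrative
    thresholds, not Rabin's exact rule): poll -> lottery -> decision."""
    # Polling: tally the message values reported by all n processes
    maj, tally = Counter(values).most_common(1)[0]
    # Decision: the shared lottery coin picks one of two quorum thresholds
    threshold = n // 2 + t if coin == 1 else n // 2 + 1
    # Adopt the plurality candidate only if it clears the chosen threshold
    return maj if tally >= threshold else "DEFAULT"
```

For example, with n = 9 and t = 2, a 5-to-4 split clears the low threshold (coin = 0) but not the high one (coin = 1), so the coin decides whether the plurality value or the default is adopted that round.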
43. Correctness
If the initial message of every proper process is M:
Proper system: at the end of BAP we shall have message(i) = M for all proper Gi.
Faulty system: with probability at least 1 - 2^-R, all proper processes Gi will have the same value for message(i).
Note: The probability of not reaching agreement within R rounds is 2^-R.
44. Bounded Expected Time
• As long as t < n/10, agreement is reached in a bounded expected number of rounds, independent of n and t.
• It requires O(n^2) messages.
• Using a wake-up call, this can be reduced to O(nt).
45. Advantages
• Applies to both the synchronous and the asynchronous version.
• Better than Ben-Or's coin-tossing approach.
• Under some conditions it is a robust solution for the distributed commit problem.
• It is simpler than other existing approaches.
46. Disadvantages
• A synchronized system simplifies the algorithm but also raises issues for coordinated agreement.
• For the Byzantine consensus problem, a wake-up protocol also needs to be implemented.
• It is only resilient under appropriate conditions (e.g., t < n/10).
47. Practical Implementations
• This solution could be implemented for Bitcoin's peer-to-peer network.
• For the distributed commit problem this solution can be implemented, and in some cases it has been.
48. Conclusion
The randomized solution has advantages but also limits. However, Rabin's 1983 solution had such an impact that it was later used as the basis for various extended solutions for Byzantine agreement.
49. References
• [1] Leslie Lamport, Robert Shostak and Marshall Pease. The Byzantine Generals Problem. ACM Transactions on Programming Languages and Systems, 4(3): 382-401, July 1982.
• [2] "AWS S3 Availability Event", http://status.aws.amazon.com/s3-20080720.html. Amazon, retrieved 10/2013.
• https://en.wikipedia.org/wiki/Consensus_(computer_science)
• http://marknelson.us/2007/07/23/byzantine/