The “intelligent adaptive participant’s presumption protocol” (iAP3) is an integrated atomic commit protocol. It interoperates implicit yes-vote, a one-phase commit protocol, with presumed abort and presumed commit, the two most widely adopted two-phase commit protocol variants. The aim of this combination is to achieve the performance advantages of one-phase commit protocols, on one hand, and the wide applicability of two-phase commit protocols, on the other. iAP3 interoperates the three protocols dynamically and on a per-participant basis, in spite of the incompatibilities among them. Besides that, the protocol is backward compatible with the standardized presumed abort protocol. Whereas iAP3 was initially proposed for the two-level (or flat) transaction execution model, this article extends the protocol to the multi-level distributed transaction execution model, the model adopted by the database standards and widely implemented in commercial database systems, thus broadening the applicability scope of iAP3.
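As background for the presumed-abort variant mentioned above, here is a minimal sketch of a two-phase commit coordinator with the presumed-abort optimization: the coordinator force-logs only commit decisions, so a transaction with no log record is presumed aborted. All class and function names are invented for this illustration; this is not the iAP3 protocol itself.

```python
# Toy two-phase commit with presumed abort. Phase 1 collects votes;
# phase 2 distributes the decision. Only a commit is force-logged.

class Participant:
    def __init__(self, name, will_commit=True):
        self.name = name
        self.will_commit = will_commit
        self.state = "active"

    def prepare(self):
        # Phase 1: vote yes (and become prepared) or vote no.
        if self.will_commit:
            self.state = "prepared"
            return "yes"
        self.state = "aborted"
        return "no"

    def decide(self, outcome):
        # Phase 2: apply the coordinator's decision.
        self.state = outcome

def two_phase_commit(participants, log):
    votes = [p.prepare() for p in participants]
    if all(v == "yes" for v in votes):
        log.append("commit")      # forced log write only on commit
        outcome = "committed"
    else:
        outcome = "aborted"       # presumed abort: no log record needed
    for p in participants:
        if p.state == "prepared":
            p.decide(outcome)
    return outcome

log = []
ok = two_phase_commit([Participant("A"), Participant("B")], log)
bad = two_phase_commit([Participant("A"), Participant("B", will_commit=False)], log)
```

Note that the aborting run leaves no trace in the log, which is exactly what lets a recovering participant presume abort for unknown transactions.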
Analyzing consistency models for semi active data replication protocol in dis... – ijfcstjournal
Data replication is generally used to increase the accessibility, availability, performance and scalability of database systems. One of the important problems in implementing data replication mechanisms is consistency. In this paper, the performance tradeoffs of consistency models for the semi-active data replication protocol in distributed systems are analyzed. A brief discussion of consistency models in data replication is given, and research on how client-centric guarantees relate to data-centric models is reviewed. How the guaranteeing conditions of data-centric and client-centric consistency models are provided is also analyzed. An analysis of the consistency-model guarantees in single-client and multi-client settings, for the semi-active data replication protocol without failure and without leader death, is presented. The experimental results show that the semi-active data replication protocol is appropriate for distributed systems with multi-client replication, such as web services.
Distributed Deadlock & Recovery: Deadlock concept, Deadlock in centralized systems, Deadlock in distributed systems – detection, prevention, avoidance, Wait-Die algorithm, Wound-Wait algorithm. Recovery in DBMS: Types of failure, Methods to control failure, Different techniques of recoverability, Write-Ahead Logging protocol, Advanced recovery techniques – shadow paging, fuzzy checkpoint, ARIES, RAID levels, Two-Phase and Three-Phase commit protocols.
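The Wait-Die and Wound-Wait rules listed above resolve lock conflicts using transaction timestamps, where a lower timestamp means an older transaction. A minimal sketch (function names are illustrative):

```python
# Timestamp-based deadlock prevention: when `requester` asks for a lock
# held by `holder`, the scheduler makes it wait, aborts (kills) it, or
# wounds the holder, depending on relative age.

def wait_die(requester_ts, holder_ts):
    # Wait-Die: an older requester waits; a younger requester dies.
    return "wait" if requester_ts < holder_ts else "die"

def wound_wait(requester_ts, holder_ts):
    # Wound-Wait: an older requester wounds (aborts) the holder;
    # a younger requester waits.
    return "wound" if requester_ts < holder_ts else "wait"
```

Both rules abort only the younger transaction of a conflicting pair, which is what makes circular waits, and hence deadlocks, impossible.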
Formal Verification of Distributed Checkpointing Using Event-B – ijcsit
The development of complex systems makes correct software development a challenging task. Due to faulty specifications, software may contain errors, and traditional testing methods are not sufficient to verify the correctness of such complex systems. In order to capture correct system requirements and reason rigorously about the problems, formal methods are required. Formal methods are mathematical techniques that provide precise specifications of problems, their solutions, and proofs of correctness. In this paper, we present a formal verification of the checkpointing process in a distributed database system using Event-B, an event-driven formal method used to develop formal models of distributed database systems. In a distributed database system, the database is stored at different sites connected through a network. A checkpoint is a recovery point that contains the state information of a site. In order to recover a distributed transaction, a global checkpoint number (GCPN) is required; it decides which transactions will be included for recovery. All transactions whose timestamps are less than the GCPN are marked as before-checkpoint transactions (BCPT) and are considered for recovery; transactions whose timestamps are greater than the GCPN are marked as after-checkpoint transactions (ACPT) and become part of the next global checkpoint.
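The GCPN classification rule described above can be sketched in a few lines; the function and variable names are illustrative, not from the paper's Event-B model.

```python
# Split transactions around the global checkpoint number: timestamps below
# the GCPN are before-checkpoint (BCPT, recovered now); those above it are
# after-checkpoint (ACPT, deferred to the next global checkpoint).

def classify_transactions(timestamps, gcpn):
    bcpt = [ts for ts in timestamps if ts < gcpn]
    acpt = [ts for ts in timestamps if ts > gcpn]
    return bcpt, acpt

bcpt, acpt = classify_transactions([3, 7, 12, 18], gcpn=10)
```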
Agreement Protocols, Distributed Resource Management: Issues in distributed file systems, Mechanisms for building distributed file systems, Design issues in distributed shared memory, Algorithms for implementation of distributed shared memory.
Trafodion brings a completely distributed, scalable transaction management implementation integrated into HBase. It does not suffer from the scale and performance limitations of other transaction managers on HBase.
This presentation reviews the architecture and how it is leveraged to provide full ACID SQL transactional capabilities across multiple rows, tables, statements, and region servers. It follows the life of a transaction from BEGIN WORK, through updates, to ABORT WORK or COMMIT WORK, and then discusses the recovery and high-availability capabilities provided. An accompanying white paper explains this animated presentation in more depth.
Given the increasing interest in transaction managers on Hadoop, and in providing transactional capabilities for NoSQL users when needed, the Trafodion community can certainly open up this distributed transaction management support to be leveraged by implementations other than Trafodion.
On deferred constraints in distributed database systems – ijma
An atomic commit protocol (ACP) is a distributed algorithm used to ensure the atomicity property of transactions in distributed database systems. Although ACPs are designed to guarantee atomicity, they add a significant extra cost to each transaction's execution time. This added cost is due to the overhead of the coordination messages and log writes required at each involved database site to achieve atomicity. For this reason, continuing research efforts have led to a number of optimizations that reduce this cost. The optimizations most commonly adopted in the database standards and in commercial database management systems are those designed around the early release of transactions' read locks. In this type of optimization, certain participating sites may start releasing the read locks held by transactions before the transactions are fully terminated across all participants, greatly enhancing concurrency among executing transactions and, consequently, overall system performance. However, this type of optimization introduces possible “execution infections” in the presence of deferred consistency constraints, a devastating complication that may lead to non-serializable executions of transactions. Thus, given the importance of preserving database consistency in the presence of deferred constraints, this type of optimization could be considered useless unless the complication is resolved in a practical and efficient manner. This is the essence of the “unsolicited deferred consistency constraints validation” mechanism presented in this paper.
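To make the notion of a deferred constraint concrete, here is a minimal sketch in which immediate checks would run at each write but deferred checks run only at commit time, against the transaction's final state. The names and the toy transfer constraint are invented for this example; this is not the paper's validation mechanism.

```python
# A transaction buffers its writes and validates deferred constraints only
# at commit: the constraint may be violated transiently mid-transaction.

class Transaction:
    def __init__(self, db, deferred_constraints):
        self.db = db
        self.writes = {}
        self.deferred = deferred_constraints

    def write(self, key, value):
        self.writes[key] = value   # buffered; not yet visible

    def commit(self):
        merged = {**self.db, **self.writes}
        # Deferred checks run against the final state, at commit time.
        if all(check(merged) for check in self.deferred):
            self.db.update(self.writes)
            return "committed"
        return "aborted"

db = {"debit": 0, "credit": 0}
# Hypothetical deferred constraint: a transfer must balance at commit.
balanced = lambda state: state["debit"] == state["credit"]

t = Transaction(db, [balanced])
t.write("debit", 100)      # transiently violates the constraint
t.write("credit", 100)     # balance restored before commit
result = t.commit()
```

The hazard the paper addresses arises when another transaction is allowed to read `debit` after the first write but before the commit-time validation, observing a state that may never become valid.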
Transaction concept, ACID properties, Objectives of transaction management, Types of transactions, Objectives of distributed concurrency control, Concurrency control anomalies, Methods of concurrency control, Serializability and recoverability, Distributed serializability, Enhanced lock-based and timestamp-based protocols, Multiple granularity, Multiversion schemes, Optimistic concurrency control techniques
DEADLOCK RECOVERY TECHNIQUE IN BUS ENHANCED NOC ARCHITECTURE – VLSICS Design
The increase in processor speed has given communication a crucial role in system performance. As a result, routing is considered one of the most important subjects of Network-on-Chip (NoC) architecture. Deadlock-avoidance routing algorithms restrict the routes of packets rather than routing them purely on network traffic conditions, which reduces performance, especially under non-uniform traffic patterns. On the other hand, the True Fully Adaptive Routing algorithm routes packets entirely based on traffic conditions; however, deadlock detection and recovery mechanisms are then needed to handle deadlocks. Using a global bus beside the NoC as a parallel supportive environment provides a platform offering the advantages of both the bus and the NoC. This bus is useful for broadcast and multicast operations, sending delay-sensitive signals, system management, and other services. In this research, we use this bus as an escape path for a deadlock recovery technique. According to the simulation results, this bus is a suitable platform for the deadlock recovery technique.
Optimistic concurrency control in Distributed Systems – mridul mishra
This deck covers what optimistic concurrency control is, how and why it is applied to distributed systems, an overview of the Kung-Robinson algorithm, and its advantages and disadvantages.
A distributed database system is a collection of loosely coupled sites that are independent of each other.
Distributed transaction model
Concurrency control
2 phase commit protocol
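The validation step at the heart of the Kung-Robinson algorithm mentioned above can be sketched as backward validation: a committing transaction compares its read set with the write sets of transactions that committed during its read phase. This is an illustrative fragment, not the full algorithm.

```python
# Backward validation for optimistic concurrency control: a transaction may
# commit only if its read set does not intersect the write set of any
# transaction that committed while it was executing.

def validate(read_set, overlapping_committed_write_sets):
    for write_set in overlapping_committed_write_sets:
        if read_set & write_set:
            return False   # conflict: restart the transaction
    return True            # no conflict: safe to commit

ok = validate({"x", "y"}, [{"z"}, {"w"}])
conflict = validate({"x", "y"}, [{"y", "z"}])
```

This is why the optimistic approach pays off under low contention: validation is cheap when conflicts are rare, but frequent overlaps mean frequent restarts.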
A Review on Nanofluids Thermal Properties Determination Using Intelligent Tec... – IJSRD
Nanofluids are dispersions of nano-sized particles in base fluids. They have wide scope for application as coolants in many engineering fields because of their higher thermal conductivity and more desirable thermal properties. Numerous mathematical and experimental models have been proposed over the past two decades to predict their thermophysical properties, and many discrepancies have been noticed between the mathematical and experimental results, in particular for thermal conductivity and viscosity. To mitigate those discrepancies, intelligent techniques with a flexible mathematical structure capable of identifying complex non-linear relationships between input and output data have been utilized to accurately predict the thermal properties of nanofluids. Data mining models based on genetic neural networks have been widely applied to mining the thermophysical properties of nanofluids to acquire pattern knowledge. This paper reviews the research publications on the thermal conductivity of nanofluids that are linked with soft computing tools. The outcome of this review should help optimize nanofluid properties in heat transfer applications and reduce the experimental test runs and the number of hypotheses posed by different investigators.
A Review on High Speed Rail Project between Ahmedabad and Mumbai – IJSRD
The Indian Railway network is one of the largest rail networks in the world; it connects all major and minor cities and is one of the fastest and most convenient travel options for ordinary people. However, most of the fastest trains run at an average speed of just 50 km/h. Therefore, to save time and for the convenience of passengers, a better and more suitable option should be introduced. Bullet trains are high-speed trains that offer economical, high-speed travel, a good option for routine and solitary travellers.
IP specifies the format of packets, also called datagrams, and the addressing scheme. Most networks combine IP with a higher-level protocol called the Transmission Control Protocol (TCP), which establishes a virtual connection between a source and a destination.
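The datagram format IP specifies can be illustrated by packing a minimal fixed-size IPv4 header; the field values below are arbitrary examples, and the checksum is left at zero for simplicity.

```python
# A minimal sketch of the fixed IPv4 header (20 bytes with no options),
# packed in network byte order. Field values are illustrative.
import struct

def ipv4_header(src, dst, payload_len, proto=6):   # proto 6 = TCP
    version_ihl = (4 << 4) | 5         # version 4, header length 5 * 4 bytes
    total_length = 20 + payload_len
    return struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl, 0, total_length,  # version/IHL, TOS, total length
        0, 0,                          # identification, flags/fragment offset
        64, proto, 0,                  # TTL, protocol, checksum (left 0 here)
        bytes(src), bytes(dst),        # source and destination addresses
    )

hdr = ipv4_header([10, 0, 0, 1], [10, 0, 0, 2], payload_len=100)
```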
AN ACCOMPLISHED MINIMUM-OPERATION DEPENDABLE RECOVERY LINE COMPILATION SCHEME... – IAEME Publication
While dealing with mobile distributed frameworks, we come across issues such as mobility, the low bandwidth of wireless channels, the lack of stable storage on mobile nodes, disconnections, limited battery power, and the high failure rate of mobile nodes. These issues make traditional Dependable Recovery Line Compilation (DRL-compilation) techniques designed for distributed frameworks unsuitable for mobile environments. In this paper, we design a minimum-operation algorithm for mobile distributed frameworks in which no useless retrieval-marks are taken, and an effort has been made to optimize the filibustering of operations. We propose to delay the processing of selective reckoning-communications at the receiver end only during the DRL-compilation period. A process is allowed to perform its normal reckonings and send reckoning-communications during its filibustering period; in this way, we try to keep the filibustering of operations to a bare minimum. To keep the filibustering time minimal, we collect the dependency vectors and compute the exact minimum set at the beginning of the algorithm. The number of operations that take retrieval-marks is minimized to 1) avoid awakening Mob_Nodes in doze mode of operation, 2) minimize thrashing of Mob_Nodes with DRL-compilation activity, and 3) save the limited battery life of Mob_Nodes and the low bandwidth of wireless channels. In coordinated DRL-compilation, if a single operation fails to take its retrieval-mark, the entire DRL-compilation effort goes to waste, because each operation has to abort its partially-committed retrieval-mark. In order to take its partially-committed retrieval-mark, a Mob_Node needs to transfer large retrieval-mark data to its local Mob_Supp_St over wireless channels. The DRL-compilation effort may therefore be exceedingly high due to frequent aborts, especially in mobile frameworks. We try to minimize the loss of DRL-compilation effort when any operation fails to take its retrieval-mark in coordination with others.
DEALING WITH RECURRENT TERMINATES IN ORCHESTRATED RELIABLE RECOVERY LINE ACCU... – IAEME Publication
We propose a least-interacting-routine orchestrated RRL-accumulation (Reliable Recovery Line Accumulation) mechanism for non-deterministic mobile distributed frameworks, in which no inoperable restoration-spots are captured. We use the following technique to minimize the filibustering of routines: during the period in which a routine has consigned its causal-interrelationship set to the originator and is waiting to receive the least-interacting-set, it may receive reckoning-communications that would add new members to the already computed least-interacting-set. Such reckoning-communications are delayed at the receiver side. It should be noted that the duration for which the reckoning-communications are delayed at the receiver’s end is negligibly small. We also try to minimize the loss of RRL-accumulation effort when any routine miscarries to seize its restoration-spot in synchronization with others. We propose that, in the first phase, all concerned Mobl-Nodules seize transient restoration-spots only; a transient restoration-spot is stored in the memory of the Mobl-Nodule. In this case, if some routine miscarries to seize its restoration-spot in the first phase, the Mobl-Nodules need to abort their transient restoration-spots only. The effort of capturing a transient restoration-spot is negligible compared to that of a partially-committed one. We propose a three-phase mechanism as planned in the previous chapter, but in the planned mechanism, synchronization with the originator Mobl_Supp_St is done without consigning explicit synchronization reckoning-communications. We want to emphasize that in all orchestrated RRL-accumulation schemes available in the literature, synchronization among the routines and the originator takes place by consigning explicit synchronization reckoning-communications. In this way, we try to significantly reduce the synchronization overhead in orchestrated RRL-accumulation.
Basic principles of blind write protocol – journalBEEI
The current approach to handling interleaved write operations and preserving consistency in relational database systems still relies on the locking protocol. If an entity is locked by a transaction, it becomes temporarily unavailable to other transactions until the lock is released. This temporary unavailability occurs more often as the number of write operations increases, as happens in application systems that use IoT technology or smartphone devices to collect data. To solve this problem, this research proposes a blind write protocol that does not lock the entity while the transaction is performing a write operation. This paper presents the basic principles of blind write protocol implementation in a relational database system.
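One common way to realize the idea of a blind write, i.e. writing without first reading or locking the entity, is to append a new version per write so that writers never block one another. The sketch below is illustrative only and is not the specific protocol from the paper.

```python
# Versioned store: a blind write appends a new version instead of locking
# and updating in place; a read returns the latest version.

import itertools

class VersionedStore:
    def __init__(self):
        self.versions = {}                 # key -> list of (seq, value)
        self.seq = itertools.count(1)

    def blind_write(self, key, value):
        # No read, no lock: just append a new version.
        self.versions.setdefault(key, []).append((next(self.seq), value))

    def read(self, key):
        return self.versions[key][-1][1]   # latest version wins

store = VersionedStore()
store.blind_write("sensor-1", 21.5)        # e.g. an IoT device reporting
store.blind_write("sensor-1", 22.0)        # a second write, never blocked
latest = store.read("sensor-1")
```

This suits the IoT workload described above, where many devices write frequently and would otherwise queue behind each other's locks.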
CHECKPOINTING WITH MINIMAL RECOVERY IN ADHOCNET BASED TMR – ijujournal
This paper describes a two-fold approach to utilizing Triple Modular Redundancy (TMR) in a Wireless Adhoc Network (AdocNet). A distributed checkpointing and recovery protocol is proposed. The protocol eliminates useless checkpoints and helps in selecting only the dependent processes in the concerned checkpointing interval for recovery. A process starts recovery from its last checkpoint only if it finds that it is dependent (directly or indirectly) on the faulty process. The recovery protocol also prevents the occurrence of missing or orphan messages. In AdocNet, a set of three nodes connected to each other is considered to form a TMR set, with the nodes designated as main, primary and secondary. A main node in one set may serve as primary or secondary in another. Computation is not triplicated, but a checkpoint taken by the main node is duplicated at its primary so that the primary can continue if the main fails; the primary's checkpoint is in turn duplicated at the secondary in case the primary fails too.
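The selective-recovery rule above, where a process rolls back only if it depends directly or transitively on the faulty process, amounts to a reachability check over a dependency graph. A minimal sketch with invented names:

```python
# A process must recover iff the faulty process is reachable from it along
# its (direct or transitive) dependency edges in this checkpoint interval.

def must_recover(process, faulty, depends_on):
    # depends_on maps a process to the set of processes it depends on
    # (e.g. those it received messages from in this interval).
    seen, stack = set(), [process]
    while stack:
        p = stack.pop()
        if p == faulty:
            return True
        if p not in seen:
            seen.add(p)
            stack.extend(depends_on.get(p, ()))
    return False

deps = {"P1": {"P2"}, "P2": {"P3"}, "P4": set()}
r1 = must_recover("P1", "P3", deps)   # indirect dependency via P2
r4 = must_recover("P4", "P3", deps)   # independent: keeps running
```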
Concurrency Control Mechanism for Nested Transactions in Mobile Environment – Ms. Nyo Nyo Yee and Ms. Hninn Aye Thant
Generating Keys in Elliptic Curve Cryptosystems – Dragan Vidakovic and Dusko Parezanovic
Cost Estimation of Information Technology Risks and Instituting Appropriate Controls – Princewill Aigbe and Jackson Akpojaro
An Efficient Access Control Model for Wireless Sensor Network – Behzad Molavi, Hamed Bashirpour and Dr. Morteza Nikooghadam
Three Tank System Control Using Neuro-Fuzzy Model Predictive Control – Adel Abdurahman
The Computer-Linguistic Analysis of Socio-Demographic Profile of Virtual Community Member – Yuriy Syerov, Andriy Peleschyshyn and Solomia Fedushko
Using Analytic Hierarchy Process (AHP) to Select and Rank a Strategy Based Technology – Majid Nili Ahmadabadi, Masoud Najafi, Peyman Gholami and Payam Gholami
Explore the Possibility of Moving the Government to the Web 2 in IRAN – Alireza Shirvani, Ameneh Malmir and Fariba Azizzadeh
DEALING WITH FREQUENT TERMINATES IN SYNCHRONIZED DEPENDENCY RECOVERY LINE COM... – IAEME Publication
While dealing with mobile distributed frameworks, we come across issues such as mobility, the low bandwidth of wireless channels, the lack of stable storage on mobile nodes, disconnections, limited battery power, and the high failure rate of mobile nodes. In this paper, we design a minimum-operation methodology for mobile distributed frameworks in which no useless retrieval-marks are taken, and an effort has been made to optimize the filibustering of operations. To keep the filibustering time minimal, we collect the dependency vectors and compute the exact minimum set at the beginning of the methodology. In synchronized Dependable Recovery Line Compilation (DRL-compilation), if a single operation fails to take its retrieval-mark, the entire DRL-compilation effort goes to waste, because each operation has to terminate its partially-committed retrieval-mark. In order to take its partially-committed retrieval-mark, a Mob_Node (mobile host) needs to transfer large retrieval-mark data to its local Mob_Supp_Stn over wireless channels. The DRL-compilation effort may be exceedingly high due to frequent terminates, especially in mobile frameworks. We try to minimize the loss of DRL-compilation effort when any operation fails to take its retrieval-mark in coordination with others.
PERFORMANCE ENHANCEMENT WITH SPECULATIVE-TRACE CAPPING AT DIFFERENT PIPELINE ...caijjournal
Simultaneous Multi-Threading (SMT) processors improve system performance by allowing concurrent execution of multiple independent threads, sharing key datapath components and making better utilization of resources. Speculative execution allows modern processors to fetch continuously and reduce the delays of control instructions. However, a significant amount of resources is usually wasted due to mis-speculation, resources which could have been used by other valid instructions, and such waste is even more pronounced in an SMT system. In order to minimize the waste of resources, a speculative trace capping technique [1] was proposed to limit the number of speculative instructions in the system. In this paper, a thorough analysis is given to investigate the trade-offs of applying this capping mechanism at different pipeline stages so as to maximize its benefits. Our simulations show that the best choice can improve overall system throughput by a very significant margin (up to 46%) without sacrificing execution fairness among the threads.
Computer Applications: An International Journal (CAIJ)caijjournal
Computer Applications: An International Journal (CAIJ) is a Quarterly open access peer-reviewed journal that publishes articles which contribute new results in all areas of the Computer Science Applications. The journal is devoted to the publication of high quality papers on theoretical and practical aspects of computer science applications. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on Computer science application advancements, and establishing new collaborations in these areas. Original research papers, state-of-the-art reviews are invited for publication in all areas of Computer Science Applications.
Authors are solicited to contribute to the journal by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the areas of Computer Science Applications.
A PROFICIENT MINIMUM-ROUTINE RELIABLE RECOVERY LINE ACCUMULATION SCHEME FOR N...IAEME Publication
We propose a least-routine orchestrated Reliable Recovery Line Accumulation arrangement for non-deterministic nomadic distributed frameworks, where no ineffectual restoration-spots are registered. An effort has been made to curtail the filibustering of routines and synchronization communiqué expenses. We capture the partial transitive causal-interrelationships during normal accomplishment by piggybacking causal-interrelationship vectors onto reckoning communiqués. Frequent terminations of the Reliable Recovery Line Accumulation arrangement may happen in nomadic frameworks due to exhausted batteries, non-voluntary disengagements of Mobl-Nodules, or poor cellular connectivity. Therefore, we propose that in the first stage, all pertinent Mobl-Nodules register an interim restoration-spot only. The interim restoration-spot is stored in the memory of the Mobl-Nodule only. In this case, if some routine fails to register its restoration-spot in the first stage, then the Mobl-Nodules need to call off their interim restoration-spots only. In this way, we try to curtail the forfeiture of Reliable Recovery Line Accumulation work when any routine fails to register its restoration-spot in harmonization with others.
Management of Distributed TransactionsAnkita Dubey
Distributed Database System
A distributed database system consists of loosely coupled sites that share no physical component
Database systems that run on each site are independent of each other
Transactions may access data at one or more sites
The management of distributed transactions requires dealing with several problems which are strictly interconnected, such as:
Reliability
Concurrency control
Efficient utilization of the resources of the whole system.
In this paper, a review of the consistency of data replication protocols is presented. A brief deliberation about consistency models in data replication is given. We also discuss propagation techniques such as eager and lazy propagation. Differences among replication protocols from the consistency viewpoint are studied, and the advantages and disadvantages of the replication protocols are shown. We delve into essential technical details and make careful comparisons in order to determine the protocols' respective contributions as well as their restrictions. Finally, some literature research strategies in replication and consistency techniques are reviewed.
Glap a global loopback anomaly prevention mechanism for multi level distribu...ijdms
The multi-level/hierarchical distributed transaction execution model is currently the model specified in the
database standards and practiced in the implementations of commercial database management systems. In
this model, a transaction may execute more than one subtransaction with different origins at a
participating site, causing the same transaction to appear more than once at the participating site. This
system state is well recognized in the literature and is commonly known as a “loopback”. When a loopback
occurs at a site, certain types of anomalies may arise. The effects of these anomalies on the system
behaviour vary depending on the type of the anomaly. In an extreme case, a loopback anomaly may lead to
non-serializable executions of transactions, sacrificing the consistency of the entire distributed database
system. Thus, it is imperative to characterize the different types of loopback anomalies and to provide a
practical and efficient solution to those that are most devastating on the system behaviour. This is the focus
of this article.
DevOps and Testing slides at DASA ConnectKari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We then held a lovely workshop with the participants, trying to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
Enhancing adoption of Open Source Libraries. A case study on Albumentations.AIVladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speed up fuzzing campaigns by pinpointing and eliminating uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries: Libxml's xmllint, a tool for parsing xml documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns, and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
20 Comprehensive Checklist of Designing and Developing a WebsitePixlogix Infotech
Dive into the world of Website Designing and Developing with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
The UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex ProofsAlex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
Extending the Intelligent Adaptive Participant's Presumption Protocol to the Multi-Level Distributed Transaction Execution Model
International Journal of Database Management Systems ( IJDMS ) Vol.7, No.6, December 2015
DOI : 10.5121/ijdms.2015.7603
EXTENDING THE INTELLIGENT ADAPTIVE PARTICIPANT’S PRESUMPTION PROTOCOL TO THE MULTI-LEVEL DISTRIBUTED TRANSACTION EXECUTION MODEL
Yousef J. Al-Houmaily
Department of Computer and Information Programs, Institute of Public Administration,
Riyadh, Saudi Arabia
ABSTRACT
The “intelligent adaptive participant’s presumption protocol” (iAP3) is an integrated atomic commit protocol. It interoperates implicit yes-vote, which is a one-phase commit protocol, with presumed abort and presumed commit, the most commonly pronounced two-phase commit protocol variants. The aim of this combination is to achieve the performance advantages of one-phase commit protocols, on one hand, and the wide applicability of two-phase commit protocols, on the other. iAP3 interoperates the three protocols in a dynamic fashion and on a per-participant basis, in spite of the incompatibilities among the three protocols. Besides that, the protocol is backward compatible with the standardized presumed abort protocol. Whereas iAP3 was initially proposed for the two-level (or flat) transaction execution model, this article extends the protocol to the multi-level distributed transaction execution model, the model adopted by the database standards and widely implemented in commercial database systems. Thus, the applicability scope of the iAP3 is broadened.
KEYWORDS
Atomic Commit Protocols, Database Recovery, Database Systems, Distributed Transaction Processing,
Two-Phase Commit, Voting Protocols
1. INTRODUCTION
The two-phase commit (2PC) protocol [1, 2] is the first known and used atomic commit protocol
(ACP) [3]. It ensures atomicity of distributed transactions but with a substantial added cost to
each transaction execution time. This added cost significantly affects the overall system performance. For this reason, a large number of 2PC variants and optimizations address this important issue (see [4] for a survey of such variants and optimizations).
One-phase commit (1PC) protocols [5, 6, 7] reduce the cost of commit processing by eliminating
the explicit first phase of 2PC. However, these protocols achieve this at the expense of placing
assumptions on either transactions or the database management systems. In 1PC protocols, each
participant is required to acknowledge each operation after its execution. This is because, in these
protocols, an operation acknowledgment (ACK) does not only mean that the transaction preserves
the isolation and cascadless properties at the executing site, but it also means that the transaction
is not in violation of any existing consistency constraints at the site. Although this assumption is
not too restrictive since commercial systems implement rigorous schedulers and database
standards specify operation ACK, it clearly restricts the implementation of applications that wish
to utilize the option of deferred consistency constraints validation. This option is currently part of the SQL standards [8] and, by using this option, the evaluation of the consistency constraints is
delayed until the end of the execution of transactions [9, 10]. Hence, when a transaction uses this
option, there is a need to synchronize the evaluation of deferred constraints across all
participating database sites at commit time of the transaction, making 1PC protocols unusable in
this case.
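To make the role of the operation acknowledgment concrete, the following minimal Python sketch contrasts immediate and deferred constraint checking from a 1PC participant's point of view. All names here are hypothetical illustrations, not from the paper or any standard:

```python
def acknowledge_operation(executed_ok: bool, constraint_checking: str) -> str:
    """Sketch of what a 1PC operation ACK can and cannot assert.

    In 1PC protocols, an ACK means the operation executed, the
    transaction preserves isolation and cascadelessness, AND no
    consistency constraint is violated at this site. With deferred
    checking, that last claim cannot be made at operation time,
    because validation only happens at commit.
    """
    if not executed_ok:
        return "NACK"
    if constraint_checking == "immediate":
        # Constraints were already validated during execution,
        # so a single phase suffices to commit safely.
        return "ACK"
    # Deferred checking: consistency is unknown until commit time,
    # so an explicit voting phase (as in 2PC) is still required.
    return "ACK-without-consistency-guarantee"
```

The sketch shows why a transaction using deferred constraints forces a voting phase: the participant's per-operation ACK simply cannot vouch for consistency in that mode.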
The adaptive participant’s presumption protocol (AP3) [11] alleviates the above applicability limitation of 1PC protocols by integrating the implicit yes-vote (IYV) protocol [6], which is a one-phase commit protocol, with the best known two-phase commit variants, namely, presumed abort (PrA) [12] and presumed commit (PrC) [12]. Thus, achieving the performance advantages of 1PC protocols whenever possible, on one hand, and the broad applicability of 2PC protocols, on the other. The Intelligent AP3 (iAP3) extends the (basic) AP3 [13] by incorporating four advanced features that address and resolve four important issues in the design of atomic commit protocols: two of which enhance efficiency while the other two enhance applicability.
Whereas both the (basic) AP3 and the iAP3 were proposed for the two-level transaction execution (TLTE) model, it is imperative to extend these protocols to the more general multi-level transaction execution (MLTE) model. This is to provide both of them with a pragmatically wider applicability scope, as the MLTE model is the one currently adopted by database standards and implemented in the majority of commercial database management systems. For this reason, and because the iAP3 is a superset of the (basic) AP3, this paper extends the iAP3 to the MLTE model, forming the multi-level iAP3 (ML-iAP3).
The structure of the rest of this paper is as follows: Section 2 presents the extension of the protocol to the MLTE model while Section 3 presents the extension of the advanced features of the iAP3 to the MLTE model. Following that, Section 4 discusses the recovery aspects of the protocol in the events of failures. Lastly, Section 5 provides some concluding remarks.
2. THE BASICS OF THE ML-iAP3
The main difference between the ML-iAP3 and the two-level iAP3 is the existence of cascaded coordinators (i.e., non-root and non-leaf participants) in the execution trees of transactions. This type of participant, which acts as a root coordinator with its direct descendants and as a leaf participant with its direct ancestors, does not exist in the execution tree of a transaction in the two-level iAP3. This is because a participant in the two-level iAP3 is either the root participant (i.e., the coordinator) or a leaf participant.
In ML-iAP3, the behavior of a cascaded coordinator depends on the protocol selected by each direct descendant and the protocol finally decided by the root coordinator, leading to three possible cases, which are as follows:
1. All participants are 1PC,
2. All participants are 2PC,
3. Participants are mixed 1PC and 2PC.
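The three cases above amount to a simple classification over the protocols selected by a coordinator's direct descendants. The following Python fragment is a hypothetical sketch of that classification; the names are illustrative and do not come from the paper:

```python
from enum import Enum

class Protocol(Enum):
    ONE_PC = "1PC"   # e.g., implicit yes-vote (IYV)
    TWO_PC = "2PC"   # e.g., presumed abort (PrA) or presumed commit (PrC)

def classify_case(descendant_protocols):
    """Return which of the three ML-iAP3 cases applies, given the
    protocol selected by each direct descendant of a coordinator."""
    protocols = set(descendant_protocols)
    if protocols == {Protocol.ONE_PC}:
        return "all-1PC"
    if protocols == {Protocol.TWO_PC}:
        return "all-2PC"
    return "mixed"
```

A cascaded coordinator would evaluate this classification over its own subtree, since its behavior toward its ancestor differs in each of the three cases.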
2.1 THE ML-iAP3 WHEN ALL PARTICIPANTS ARE 1PC
In the ML-iAP3, each operation submitted to a participant (whether the participant is a cascaded coordinator or a leaf participant) is augmented with the identity of the root coordinator. Thus, when a participant receives an operation from a direct ancestor for the first time and participates in the execution of a transaction, following the IYV protocol, the participant records the identity of the root coordinator in its recovery-coordinators’ list (RCL) and force writes its RCL onto stable storage. The RCL is to facilitate recovery of the participant in the case it fails. A participant removes the
identity of a root coordinator from its RCL when it commits or aborts the last transaction
submitted by the root coordinator.
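The RCL bookkeeping described above can be sketched as follows. This is a hypothetical Python illustration, with `force_write` standing in for a forced write to stable storage; none of the names come from the paper:

```python
class Participant:
    """Minimal sketch of recovery-coordinators' list (RCL) upkeep."""

    def __init__(self, force_write=lambda rcl: None):
        self.rcl = set()          # identities of root coordinators
        self.active = {}          # root coordinator id -> set of live transactions
        self.force_write = force_write

    def on_first_operation(self, root_id, txn_id):
        # First operation from this root coordinator: record its
        # identity and force the RCL onto stable storage.
        if root_id not in self.rcl:
            self.rcl.add(root_id)
            self.force_write(self.rcl)
        self.active.setdefault(root_id, set()).add(txn_id)

    def on_terminate(self, root_id, txn_id):
        # Committing or aborting the last transaction submitted by a
        # root coordinator removes its identity from the RCL.
        live = self.active.get(root_id, set())
        live.discard(txn_id)
        if not live:
            self.rcl.discard(root_id)
            self.active.pop(root_id, None)
```

The force write on first contact is the key point: after a failure, the surviving RCL tells the recovering participant which root coordinators to consult.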
As in other multi-level commit protocols, when a cascaded coordinator receives an operation from its direct ancestor in the transaction execution tree, it forwards the operation to the appropriate direct descendant(s) for execution. Since we are discussing the case where all participants are 1PC, and as IYV is the 1PC protocol used in the ML-iAP3, the behavior of a cascaded coordinator in this case is similar to the behavior of cascaded coordinators in multi-level IYV [6].
In IYV, a participant aborts a transaction if it fails to process one of its operations. Once the
transaction is aborted, the participant sends a negative acknowledgment (NACK) to its direct
ancestor. If the participant itself is a cascaded coordinator, it also sends an abort message to each
implicitly prepared direct descendant. Then, the participant forgets the transaction. When the root
coordinator or a cascaded coordinator receives NACK from a direct descendant, it aborts the
transaction and sends abort messages to all implicitly prepared direct descendants and forgets the
transaction. A root coordinator of a transaction also aborts the transaction when it receives an abort primitive from the transaction, in which case it sends an abort message to each direct descendant. If a descendant is a cascaded coordinator and receives an abort request from its direct ancestor, it sends an abort message to each of its direct descendants and forgets the transaction. When a leaf participant receives an abort request, it aborts the transaction without writing a decision log record for the transaction or acknowledging the decision. This is because ML-iAP3 adopts the presumed abort version of IYV, whereby a participant never acknowledges an abort decision [6].
On the other hand, if a cascaded coordinator receives ACKs from all its direct descendants that
have participated in the execution of an operation, the cascaded coordinator sends a collective
ACK message to its direct ancestor in the transaction execution tree signaling the successful
execution of the operation. This message also contains any redo log records generated during the
execution of the operation whether at the cascaded coordinator’s site or at any of its descendants.
Thus, when a transaction finishes its execution, all its redo records are replicated at the root
coordinator, which is responsible for maintaining the replicated redo log records. By the time the transaction finishes its execution phase, the root coordinator also knows all the participants, both leaf participants and cascaded coordinators.
When the root coordinator receives a commit request from a transaction after the successful
execution of all its operations, the coordinator commits the transaction. In this case, the
coordinator force writes a commit log record. Then, it sends a commit message to each direct descendant. If the direct descendant is a leaf participant, it commits the transaction and writes a
non-forced commit log record. The participant acknowledges the commit decision once the
commit record is written onto the stable log.
If the descendant is a cascaded coordinator, it commits the transaction, writes a non-forced
commit log record, and forwards the commit decision to each of its direct descendants. Then, the
cascaded coordinator waits for the commit ACKs. Once the commit ACKs arrive and the commit log record has been flushed onto the stable log, the cascaded coordinator writes a non-forced end
log record. Then, it acknowledges the commit decision. Thus, the ACK of a cascaded coordinator
serves as a collective ACK for the entire cascaded coordinator branch. When the root coordinator
receives the commit ACKs from its direct descendants, it writes a non-forced end log record.
Then, it forgets the transaction.
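The ordered commit-time actions of a cascaded coordinator in the all-1PC case can be summarized as follows. This is a descriptive sketch of the sequence above; the step strings are illustrative:

```python
def cascaded_commit_actions(descendants):
    """Ordered steps of a cascaded coordinator on receiving a commit
    decision when all participants are 1PC (illustrative sketch)."""
    steps = ["commit transaction", "write non-forced commit log record"]
    steps += [f"forward commit to {d}" for d in descendants]
    steps += [
        # The end record is written only after both conditions hold.
        "wait for commit ACKs and for commit record to reach stable log",
        "write non-forced end log record",
        "send collective ACK to direct ancestor",
    ]
    return steps
```

The key point the sketch captures is ordering: the collective ACK goes out last, so it can stand in for the ACKs of the entire branch.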
2.2 THE ML-iAP3 WHEN ALL PARTICIPANTS ARE 2PC
In iAP3, when a participant executes a deferred consistency constraint during the execution of a transaction, it switches to either PrA or PrC, depending on the anticipated results of the
consistency constraint. Thus, at the end of the transaction execution phase, the coordinator
declares the transaction as 2PC if all participants have switched to 2PC. If all participants have
switched to PrC, the coordinator selects PrC. Otherwise, the coordinator selects PrA. In either case, iAP3 can be extended to the MLTE model in a manner similar to multi-level PrC and multi-level PrA, depending on the selected protocol. The only distinction between ML-iAP3 and the other two protocols is that the coordinator has to inform the participants about the finally decided protocol during the first phase. In addition, when PrC is used, ML-iAP3 does not realize the commit presumption of PrC on every two adjacent levels of the transaction execution tree. This reduces the costs associated with the initiation (or collecting) records of PrC. Thus, in this respect, ML-iAP3 is similar to rooted PrC, in which only the root coordinator, and not cascaded coordinators, force writes an initiation log record for the transaction [14].
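The coordinator's choice of 2PC variant at the end of the execution phase reduces to a one-line rule, sketched below with illustrative names (`switched` maps each participant that requested a switch to the variant it asked for):

```python
def decide_2pc_variant(switched):
    """Root coordinator's final 2PC variant: PrC only if every
    switched participant asked for PrC, otherwise PrA (sketch)."""
    if switched and all(v == "PrC" for v in switched.values()):
        return "PrC"
    return "PrA"
```

The decided variant is then carried in the prepare messages of the first phase, which is how participants learn which protocol the branch will follow.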
2.3 THE ML-iAP3 WHEN PARTICIPANTS ARE MIXED 1PC AND 2PC
Based on the information received from the different participants during the execution of a
transaction, at commit time, the coordinator of the transaction knows the protocol of each of the
participants. It also knows the execution tree of the transaction. That is, it knows all the ancestors
of each participant and whether a participant is a cascaded coordinator or a leaf participant.
Based on this knowledge, the coordinator considers a direct descendant to be 1PC if the
descendant and all the participants in its branch are 1PC. Otherwise, the coordinator considers the
direct descendant 2PC. For a 1PC branch, the coordinator uses the 1PC part of ML-iAP3 with the branch, as discussed in Section 2.1. For a 2PC branch, the coordinator uses the decided 2PC protocol variant, regardless of whether the direct descendant itself is 1PC or 2PC. That is, the coordinator uses the 2PC part of ML-iAP3 discussed in Section 2.2. Thus, except for the way a coordinator decides which protocol to use with each of its direct descendants, the coordinator's protocol proceeds as in two-level iAP3 [13].

Figure 1. Mixed participants in a 2PC cascaded coordinator's branch when PrC is decided: (a) commit case; (b) abort case.
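The branch classification above is naturally recursive: a direct descendant's branch is 1PC only if the descendant and everything below it are 1PC. A minimal sketch, with `tree` mapping a node to its direct descendants and `protocol` mapping a node to "1PC" or "2PC" (illustrative names):

```python
def branch_is_1pc(tree, node, protocol):
    """True iff the given node and every participant below it
    in the transaction execution tree are 1PC (sketch)."""
    if protocol[node] != "1PC":
        return False
    # All sub-branches must themselves be pure 1PC.
    return all(branch_is_1pc(tree, c, protocol) for c in tree.get(node, []))
```

A single 2PC participant anywhere in a branch therefore makes the whole branch 2PC from the coordinator's point of view.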
In ML-iAP3, each leaf participant behaves in the same way as in two-level iAP3, regardless of whether it descends from a 1PC or a 2PC branch. That is, a participant behaves as a 1PC participant if it has not requested a protocol switch, or as the decided 2PC protocol variant if it has made such a request during the execution of the transaction.
On the other hand, the behaviour of cascaded coordinators is different and depends on the types of the descendant participants in their branches. A cascaded coordinator uses multi-level IYV when all the participants in its branch, including itself, are 1PC. Similarly, a cascaded coordinator uses the multi-level version of the decided 2PC protocol variant when all the participants in its branch, including itself, are 2PC. Thus, in these two situations, a cascaded coordinator uses ML-iAP3 as discussed in the previous two sections.
When the protocol used by a cascaded coordinator is different from the protocol used by at least
one of its descendants (not necessarily a direct descendant), there are two scenarios to consider.
The first scenario is when the cascaded coordinator is 2PC while the second scenario is when the
cascaded coordinator is 1PC. Since, for each scenario, cascaded coordinators behave the same
way at any level of the transaction execution tree, below we discuss the case of the last cascaded
coordinator in a branch with mixed 1PC and 2PC protocols.
2.3.1 SCENARIO ONE: A 2PC CASCADED COORDINATOR’S BRANCH WHEN PrC IS DECIDED
When PrC is decided and a cascaded coordinator with mixed participants receives a prepare
message from its ancestor after the transaction has finished its execution, the cascaded
coordinator forwards the message to each 2PC participant indicating the decided PrC protocol
(Figure 1). Then, it waits for the descendants' votes. If any descendant has decided to abort, the cascaded coordinator force writes an abort log record, aborts the transaction, and sends a "no" vote to its direct ancestor and an abort message to each prepared-to-commit direct descendant (including 1PC descendants). Then, it waits for the ACKs of the prepared 2PC direct descendants. Once the cascaded coordinator receives the required ACKs, it writes a non-forced end log record. Then, it forgets the transaction. On the other hand, when the cascaded coordinator and all its 2PC direct descendants vote "yes", the cascaded coordinator force writes a prepared log record. Then, it
sends a collective “yes” vote, reflecting the vote of the entire branch, to its direct ancestor and
waits for the final decision.
If the final decision is a commit (Figure 1 (a)), the cascaded coordinator forwards the decision to
each of its direct descendants, both 1PC and 2PC, and writes a commit log record. The commit
log record of the cascaded coordinator is written in a non-forced manner, following PrC protocol.
Unlike PrC, however, a cascaded coordinator expects each 1PC participant to acknowledge the
commit message but not 2PC participants since they follow PrC. When a cascaded coordinator
receives ACKs from 1PC participants, it writes a non-forced end log record. Once the record is
written onto the stable log, the cascaded coordinator sends an ACK to its direct ancestor. Then, it
forgets the transaction.
On the other hand, if the final decision is an abort (Figure 1 (b)), the cascaded coordinator sends
an abort message to each of its descendants and writes a forced abort log record (following PrC
protocol). When 2PC participants acknowledge the abort decision, the cascaded coordinator
writes a non-forced end log record. Once the end record is written onto stable storage due to a
subsequent flush of the log buffer, the cascaded coordinator sends a collective ACK to its direct
ancestor and forgets the transaction.
It should be noted that a cascaded coordinator, in this scenario, has to acknowledge both commit
and abort decisions. A commit ACK reflects the ACKs of all 1PC participants while an abort
ACK reflects the ACKs of all 2PC participants (including the cascaded coordinator’s ACK).
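The asymmetric ACK rule just described can be sketched as a small helper. This is illustrative only; `participants` maps each direct descendant to "1PC" or "2PC":

```python
def expected_acks(decision, participants):
    """Direct descendants a 2PC cascaded coordinator waits on when
    PrC is decided (sketch): commit ACKs come from 1PC participants,
    abort ACKs from 2PC participants."""
    wanted = "1PC" if decision == "commit" else "2PC"
    return {p for p, kind in participants.items() if kind == wanted}
```

The asymmetry follows directly from PrC's presumption: 2PC participants never acknowledge commits under PrC, while 1PC (IYV) participants never acknowledge aborts under presumed abort.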
2.3.2 SCENARIO ONE: A 2PC CASCADED COORDINATOR’S BRANCH WHEN PrA IS DECIDED
When PrA is decided and a cascaded coordinator with mixed participants receives a prepare
message from its ancestor after the transaction has finished its execution, the cascaded
coordinator forwards the message to each 2PC participant indicating the decided PrA protocol
(Figure 2). Then, it waits for the descendants' votes. If any descendant has decided to abort, the cascaded coordinator writes a non-forced abort log record, aborts the transaction, and sends a "no" vote to its direct ancestor and an abort message to each prepared-to-commit direct descendant (including 1PC descendants). Then, it forgets the transaction. On the other hand, if the cascaded coordinator and all its 2PC direct descendants vote "yes", the cascaded coordinator force writes a prepared log record. Then, it sends a collective "yes" vote, reflecting the vote of the entire branch,
to its direct ancestor and waits for the final decision.
If the final decision is a commit (Figure 2 (a)), the cascaded coordinator, following PrA, force writes a commit log record and forwards the decision to each of its direct descendants (both 1PC and 2PC). Then, the cascaded coordinator waits for the direct descendants' ACKs. When the cascaded coordinator receives ACKs from both 1PC and 2PC direct descendants, it writes a non-forced end log record. When the end record is written onto the stable log, the cascaded coordinator sends a collective ACK to its direct ancestor and forgets the transaction.

Figure 2. Mixed participants in a 2PC cascaded coordinator's branch when PrA is decided: (a) commit case; (b) abort case.
On the other hand, if the final decision is an abort (Figure 2 (b)), the cascaded coordinator sends
an abort message to each of its descendants and writes a non-forced abort log record (following
PrA protocol). Then, it forgets the transaction.
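The logging disciplines of Sections 2.3.1 and 2.3.2 are mirror images of each other and can be summarized in one lookup table. This is a summary sketch, not part of the protocol itself:

```python
# Log-write discipline of a mixed-branch 2PC cascaded coordinator,
# keyed by (decided protocol, final decision).
LOG_DISCIPLINE = {
    ("PrC", "commit"): "non-forced commit record",
    ("PrC", "abort"):  "forced abort record",
    ("PrA", "commit"): "forced commit record",
    ("PrA", "abort"):  "non-forced abort record",
}
```

Each variant forces only the log record that contradicts its presumption: PrC must make aborts durable, PrA must make commits durable.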
2.3.3 SCENARIO TWO: A 1PC CASCADED COORDINATOR’S BRANCH WHEN PrC IS DECIDED
In ML-iAP3, a 1PC cascaded coordinator with 2PC participants is dealt with as 2PC with respect to messages. Based on that, when a 1PC cascaded coordinator receives a prepare message from its
to messages. Based on that, when a 1PC cascaded coordinator receives a prepare message from its
ancestor, it forwards the message to each 2PC participant and waits for their votes. If any
participant has decided to abort, assuming that PrC is decided, the cascaded coordinator aborts the
transaction. On an abort, the cascaded coordinator force writes an abort log record, then, sends a
“no” vote to its direct ancestor and an abort message to each prepared participant (including 1PC
participants). After that, it waits for the abort ACKs from the prepared PrC participants. Once the
ACKs arrive, the cascaded coordinator writes a non-forced end log record. Then, it forgets the
transaction. If all the PrC participants had voted “yes”, the cascaded coordinator sends a “yes”
vote. This vote reflects the vote of the entire branch, as shown in Figure 3. Then, the cascaded
coordinator waits for the final decision.
Figure 3. Mixed participants in a 1PC cascaded coordinator's branch when PrC is decided: (a) commit case; (b) abort case.
If the final decision is a commit (Figure 3 (a)), the cascaded coordinator forwards the decision to
each of its direct descendants, both 1PC and 2PC, and writes a non-forced commit log record,
following IYV protocol. Unlike IYV, however, a cascaded coordinator expects each 1PC
participant to acknowledge the commit message but not 2PC participants since they follow PrC.
When a cascaded coordinator receives ACKs from 1PC participants, it writes a non-forced end
log record. Once the end record is written onto the stable log due to a subsequent flush of the log buffer, the cascaded coordinator sends a collective ACK to its direct ancestor. Then, it forgets the
transaction.
On the other hand, if the final decision is an abort (Figure 3 (b)), the cascaded coordinator sends
an abort message to each of its descendants and writes a non-forced abort log record (following
IYV protocol). When 2PC participants acknowledge the abort decision, the cascaded coordinator
writes a non-forced end log record. Once the end record is written onto the stable storage, the
cascaded coordinator sends an ACK to its direct ancestor. Then, it forgets the transaction.
Notice that a 1PC participant that is a cascaded coordinator has to acknowledge both commit and abort decisions. As in the case of a 2PC cascaded coordinator with mixed participants when PrC is decided, a commit ACK reflects the ACKs of all 1PC participants (including the cascaded coordinator's own ACK) while an abort ACK reflects the ACKs of all 2PC participants.
Figure 4. Mixed participants in a 1PC cascaded coordinator's branch when PrA is decided: (a) commit case; (b) abort case.
2.3.4 SCENARIO TWO: A 1PC CASCADED COORDINATOR’S BRANCH WHEN PrA IS DECIDED
When a 1PC cascaded coordinator receives a prepare message from its ancestor, it forwards the
message to each 2PC participant and waits for their votes. If any participant has decided to abort,
assuming that PrA is decided, the cascaded coordinator aborts the transaction. In this case, the
cascaded coordinator writes a non-forced abort log record. Then, it sends a “no” vote to its direct
ancestor and an abort message to each of its direct descendants. After that, it forgets the transaction. On the other hand, if the cascaded coordinator and all the PrA participants vote "yes", the cascaded coordinator sends a "yes" vote. This vote reflects the vote of the entire
branch, as shown in Figure 4. Then, the cascaded coordinator waits for the final decision.
If the final decision is a commit (Figure 4 (a)), the cascaded coordinator forwards the decision to
each of its direct descendants, both 1PC and 2PC, and writes a commit log record. The commit
log record of the cascaded coordinator is written in a non-forced manner, following IYV protocol.
When the cascaded coordinator receives ACKs from all its direct descendants, it writes a non-
forced end log record and sends a collective ACK to its direct ancestor. Then, it forgets the
transaction. The ACK message of the cascaded coordinator is sent only after the end record is
written onto the stable log due to a subsequent forced write of a log record or log buffer overflow.
On the other hand, if the final decision is an abort (Figure 4 (b)), the cascaded coordinator aborts
the transaction, sends an abort message to each of its descendants and writes a non-forced abort
log record (following IYV protocol). Then, it forgets the transaction.
3. THE ML-iAP3 AND THE ADVANCED FEATURES OF THE iAP3
This section extends the four advanced features of two-level iAP3 to the multi-level distributed transaction execution model. For completeness, a presentation of the details of each feature in the context of two-level iAP3 precedes the extension of the feature to ML-iAP3.
3.1. THE ML-iAP3 AND READ-ONLY TRANSACTIONS
For read-only transactions, iAP3 uses the principles of the unsolicited update-vote (UUV) optimization [14]. More specifically, once a transaction starts executing, it is marked as a read-only transaction. It continues this way so long as its coordinator does not receive any ACK that contains redo log records or any ACK that indicates a protocol switch. This is because only
update operations generate redo log records or are associated with consistency constraints. When
the coordinator receives an ACK that contains redo log records or an ACK with a switch flag, it
means that the transaction has become an update transaction. Accordingly, the coordinator
changes the state of the transaction in its protocol table.
At commit time of the transaction, the coordinator refers to its protocol table and identifies read-only participants and update participants. For an update participant, the coordinator also identifies the protocol chosen by the participant. If all participants are read-only, the coordinator sends a read-only message to each one of them and forgets the transaction without writing any log records. Otherwise, the coordinator initiates the voting phase with the 2PC participants (if any) and, at the same time, sends a read-only message to each read-only participant. Then, the coordinator removes the read-only participants from its protocol table. A read-only message received by a participant means that the transaction has finished its execution. Consequently, the participant releases all the resources held by the transaction once it has received a read-only message, without writing any log records or acknowledging the message. For update participants, both the coordinator and each of the participants follow iAP3.
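The coordinator's commit-time split of participants can be sketched as follows. Names are illustrative; 1PC update participants (which simply receive the final decision) are left out of the voting set:

```python
def commit_time_messages(participants):
    """Coordinator's commit-time handling (sketch): read-only
    participants get a read-only message and are dropped from the
    protocol table; 2PC participants ("PrA"/"PrC") enter the voting
    phase. Returns (read_only_targets, voting_targets)."""
    read_only = {p for p, s in participants.items() if s == "read-only"}
    voting = {p for p, s in participants.items() if s in ("PrA", "PrC")}
    return read_only, voting
```

When the read-only set covers every participant, the coordinator writes no log records at all, which is the best case of the optimization.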
When all the participants are read-only, ML-iAP3 works in a manner similar to two-level iAP3. That is, the coordinator of a read-only transaction sends a read-only message to each of its
direct descendants and then forgets the transaction. Similarly, when a cascaded coordinator
receives a read-only message, it releases all the resources held by the transaction and sends a
read-only message to each of its descendants. A leaf participant also releases all the resources
held by the transaction and forgets the transaction after receiving a read-only message. Thus, for
an exclusively read-only transaction, the coordinator sends one message to each direct descendant
without writing any log records. A cascaded coordinator also sends one message to each of its
direct descendants without writing any log records. On the other hand, a leaf participant does not send any messages or write any log records.
If the transaction is partially read-only, the coordinator sends a read-only message to each read-
only direct descendant. For update direct descendants, the coordinator initiates the decided
protocol with them. Similarly, each cascaded coordinator sends a read-only message to each read-
only direct descendant and follows the decided protocol with the other direct descendants. A
cascaded coordinator knows which of its direct descendants is read-only and which is not based
on the received ACK messages and the included control information during the execution of the
transaction. The behaviour of leaf participants remains the same as in two-level iAP3. Hence, in ML-iAP3, only non-read-only participants (both cascaded coordinators and leaf participants) stay involved in the finally decided commit protocol until its end.
3.2. THE ML-iAP3 AND FORWARD RECOVERY
In iAP3, instead of aborting a partially executed transaction during the recovery procedure after a communication or a site failure, the transaction is allowed to continue its execution after the failure is fixed. This forward recovery option is applicable so long as the transaction is context-free [15], and it was originally defined and used in IYV [6]. When a transaction chooses to use
this option, it indicates its choice to its coordinator at the beginning of its execution. This option
allows a transaction to wait out any delays that it may encounter during its execution due to a failure, instead of being aborted.
When a transaction chooses the forward recovery option, each 1PC participant replicates both the
redo log records and the read locks of the transaction at the coordinator’s site. This is
accomplished by propagating the generated redo records and read locks in the ACK messages of
the operations of the transaction as the transaction executes at the participant. In this way, the
coordinator’s protocol table contains a partial image of each 1PC participant log and lock table.
After a participant failure, the participant can re-construct the missing parts of its log and lock table with the help of the coordinators. Thus, the states of forward recoverable transactions are recovered, allowing them to continue their execution instead of being aborted.
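The piggybacking of recovery state on operation ACKs can be sketched as below. The field names are illustrative; the key point is that read locks travel in the ACK only while the transaction remains forward recoverable:

```python
def build_ack(op_result, redo_records, read_locks, forward_recoverable):
    """Sketch of a 1PC participant's operation ACK under the forward
    recovery option: redo log records are always piggybacked; read
    locks only while the transaction is still forward recoverable."""
    ack = {"result": op_result, "redo": list(redo_records)}
    if forward_recoverable:
        ack["read_locks"] = list(read_locks)
    return ack
```

Accumulating these ACKs gives the coordinator the partial image of each participant's log and lock table that forward recovery relies on.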
As it is impossible, in general, to determine in advance whether a transaction that has chosen the forward recovery option will create a run-time context at a participant, iAP3 detects such transactions at run-time and overrides their choices. In iAP3, when the first operation
of a 1PC forward recoverable transaction creates a context at a participant, the participant changes
the state of the transaction to non-forward-recoverable and informs the transaction coordinator
accordingly. This is accomplished by requesting a protocol switch in the ACK of the operation.
After that, the participant refrains from sending any read locks for the transaction.
When the coordinator receives an ACK indicating a protocol switch for a 1PC forward-
recoverable transaction, the coordinator marks the transaction as non-forward recoverable in its
protocol table. After that, the coordinator starts informing the other 1PC participants as it submits
new operations to them for execution.
The forward recovery option of two-level iAP3 can be extended to the MLTE model in a straightforward manner. As in two-level iAP3, this option is applicable to transactions that are 1PC across
all participants. That is, if the state of a transaction becomes 2PC at a participant, then, the
transaction cannot be forward recoverable. Additionally, when a transaction chooses the option of
forward recovery, its coordinator indicates that the transaction is forward recoverable in the first
operation that it sends to each participant.
As in two-level iAP3, each participant propagates the read locks that the transaction acquires and the redo log records that are generated during the execution of the transaction to its ancestor along with the operations' ACKs. These ACKs and control information are eventually received by the coordinator and stored in a similar way as in two-level iAP3. If a participant decides to change the
state of the transaction to become non-forward recoverable, the participant informs its ancestor
about this state change in the ACK of the operation that caused the participant to change the state
of the transaction. This state change is propagated along with the ACK message of the operation
that caused the state change from one ancestor to another until it reaches the coordinator. At that
point, the coordinator becomes aware of the change and informs the other participants as it
submits new operations to them for execution.
3.3. THE ML-iAP3 AND UPDATING LARGE AMOUNTS OF DATA
In iAP3, when a transaction updates large amounts of data at a participant and the updated data is prohibitively large to be propagated to the coordinator of the transaction, the participant uses a large amounts of data (LAD) flag. More specifically, when a 1PC participant updates large
amounts of data during the execution of an operation and the updated data is not associated with
deferred consistency constraints, it sets this flag in the ACK of the operation and switches to PrC.
If the updated data is associated with deferred constraints, the participant chooses the appropriate
2PC variant. The choice of the 2PC variant, in this case, depends on the tendency of the
evaluation of these constraints at commit time of the transaction. Once the participant has
switched to 2PC, it does not send any more redo log records in the ACKs of update operations.
Besides that, the participant changes the state of the transaction to be non-forward recoverable.
That is, of course, if the transaction was set as forward recoverable. If this occurs, the participant also stops sending any more read locks for the transaction to the transaction's coordinator. If a participant switches to PrC and later on executes an operation that is associated with deferred consistency constraints that tend to be violated, the participant changes its previously selected
protocol to PrA in the ACK of the operation. Thus, PrA is used only when the transaction is
associated with deferred constraints that tend to be violated at commit processing time.
When the coordinator receives an ACK with a set LAD flag from a participant, it marks the participant as either PrC or PrA in its protocol table, depending on the 2PC protocol chosen by the participant. The coordinator also changes the state of the transaction to be non-forward
recoverable if the transaction was set as forward recoverable and starts informing the other 1PC
participants about this new state of the transaction. This is accomplished by indicating the state
change in the first operation that the coordinator sends to each of the other 1PC participants.
When a participant receives a state change indication in an operation, the participant stops
sending any more read locks in the ACK of each operation that it executes on behalf of the
transaction.
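The participant's choice of 2PC variant when setting the LAD flag reduces to a small rule, sketched here with illustrative names:

```python
def lad_switch(has_deferred_constraints, constraints_tend_to_violate):
    """Protocol a 1PC participant switches to on setting the LAD flag
    (sketch of the rule above): PrA only when the update is tied to
    deferred constraints that tend to be violated at commit time;
    PrC in every other case."""
    if has_deferred_constraints and constraints_tend_to_violate:
        return "PrA"
    return "PrC"
```

PrC is the default because a large update that is expected to commit benefits from the commit presumption; PrA is reserved for updates that are likely to be rolled back.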
Extending the above iAP3 feature to the MLTE model is straightforward, as any participant can set the LAD flag when necessary. Once the flag is set, the participant becomes a 2PC participant.
Then, the behavior of the participant depends on the location of the participant in the transaction
execution tree. That is, if the participant is a leaf participant, it follows the 2PC protocol that is
finally decided by the coordinator. On the other hand, if the participant is a cascaded coordinator,
it follows one of the two extensions to the basic protocol discussed in Sections 2.2 and 2.3, depending on the protocol finally chosen by the coordinator.
3.4. THE ML-iAP3 AND BACKWARD COMPATIBILITY
The iAP3 protocol is backward compatible with both PrA coordinators and PrA participants. In iAP3, an iAP3 participant keeps a list called presumed-abort coordinators (PAC) in which it records the identities of all pre-existing coordinators that use PrA. The PAC list is created at system installation time and is continuously updated as new PrA coordinators join or existing ones leave the system. Thus, this list is maintained so long as some PrA coordinators exist in the system.
An iAP3 participant refers to its PAC list after the initiation of any new transaction at its site. This
is to determine if the coordinator of the transaction is a pre-existing PrA coordinator. If the
coordinator is a pre-existing PrA site, the participant deals with it using PrA. That is, the
participant marks the transaction as a PrA transaction in its protocol table and does not include
any redo log records or read locks in the ACK of any operation that it executes for the transaction.
Besides that, the participant deals with the coordinator using PrA at commit processing time of
the transaction, including the use of the traditional read-only optimization [12] if this optimization
is supported by the coordinator.
In iAP3, an iAP3 coordinator keeps a list called presumed-abort participants (PAP) in which it records the identities of all pre-existing PrA participants. Before launching a transaction at a participant, the coordinator refers to its PAP list to determine if the participant is a pre-existing PrA site. If the participant is a pre-existing PrA site, the coordinator converts the transaction to become non-forward recoverable, given that the transaction was set as forward recoverable, before initiating the transaction at the participant. Then, it starts informing the other participants as it sends them new operations for execution.
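The coordinator-side PAP check can be sketched as follows. This is a minimal sketch; the transaction is modeled as a mutable dict with illustrative field names:

```python
def launch_at_participant(participant, pap_list, txn):
    """Coordinator-side PAP check before launching a transaction at a
    participant (sketch): a pre-existing PrA participant makes the
    transaction non-forward recoverable and ties it to PrA."""
    if participant in pap_list:
        txn["forward_recoverable"] = False
        txn["protocol"] = "PrA"
    return txn
```

The symmetric check on the participant side consults the PAC list instead, so that either a legacy coordinator or a legacy participant forces the transaction onto plain PrA.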
Using the PAC and PAP lists of two-level iAP3, the protocol can be easily extended to the MLTE model. In ML-iAP3, when the root coordinator or any participant is a pre-existing PrA site, the whole transaction becomes PrA. When the root coordinator is an iAP3 site, it knows that a transaction has to be PrA once the transaction submits an operation that is to be executed at a pre-existing PrA participant, according to the PAP list stored at the coordinator's site. Once the coordinator knows that the transaction has to be PrA, it informs all iAP3 participants as it submits new operations to them for execution, or during the commit processing stage (if the operations to be executed by pre-existing PrA participants are the last operations to arrive from the transaction for execution). On the other hand, when the coordinator is a PrA site, an iAP3 participant knows that the transaction has to be PrA once it receives the first operation from the coordinator. This is because the identity of the coordinator is included in the PAC list of the participant. Hence, a transaction that executes at a pre-existing PrA site becomes PrA across all sites. As such, the root coordinator and all participants follow multi-level PrA. Not only that, but for read-only transactions, all sites follow the traditional read-only optimization if it is supported by the root coordinator.
4. FAILURE RECOVERY IN THE ML-iAP3
The operational correctness criterion [16] represents a guiding principle for the correctness of any practical ACP. It specifically states that: 1) all sites participating in a transaction's execution should reach the same outcome for the final state of the transaction, and 2) all participating sites should be able to, eventually, forget the transaction and garbage collect the transaction's log
records. The operational correctness criterion should hold even in the case of failures and
regardless of their number and frequency.
Thus far, we have extensively discussed the ML-iAP3 during normal processing. The discussion clearly
shows that the protocol strictly observes the operational correctness criterion. This section shows
that ML-iAP3
also observes the operational correctness criterion in the case of site and
communication failures, which are detected by timeouts. The section starts by discussing the
recovery aspects of the protocol in the presence of communication failures. Then, it discusses the
recovery aspects of the protocol in the presence of site failures.
4.1. COMMUNICATION FAILURES
4.1.1 ROOT COORDINATOR COMMUNICATION FAILURES
In ML-iAP3, there are three points at which a coordinator may timeout while waiting for a
message. In the first point, a coordinator may timeout while waiting for an operation ACK from a
participant. When a coordinator times out while waiting for an operation ACK from a participant,
it aborts the transaction and sends out abort messages to the rest of the participants.
In the second point, a coordinator may timeout while waiting for a vote from a 2PC participant.
When this occurs, the communication failure is dealt with as if it were a “no” vote, leading to an
abort decision. As during normal processing, in this case, the coordinator sends out abort
messages to all accessible participants and waits for the required ACKs. The anticipated ACKs
depend on the finally decided protocol that the coordinator sent in the prepare messages (i.e., the
ACKs of PrC participants when PrC is used with iAP3
participants). These ACKs enable the
coordinator to write an end log record for the transaction and to forget it. If a participant has
already voted “yes” before a communication failure, the participant is left blocked. In this case, it
is the responsibility of the participant to inquire about the transaction’s status after the failure is
fixed. When a participant inquires about a transaction’s status, it has to include the used protocol
with the transaction in the inquiry message. This piece of information guides the ancestors of the
participant in their response to the inquiry message if the transaction has already been forgotten.
If the transaction was using a presumed-abort based protocol, the direct ancestor of the participant
can respond to the inquiry message of the participant with an abort decision without the need to
consult with its own direct ancestor. On the other hand, if the used protocol is PrC, the direct
ancestor cannot make such a decision alone and has to consult with its own direct ancestor until
possibly reaching the root coordinator. This is because only the root coordinator force writes a
switch log record in iAP3
and can accurately determine the status of the transaction.
The third point occurs when the coordinator of a transaction times out while waiting for the
ACKs of a final decision. As the coordinator needs these ACKs in order to complete the protocol
and to forget the transaction, it re-submits the decision to the appropriate participants once
communication failures are fixed. In iAP3, a coordinator re-submits a commit decision to each
inaccessible 1PC participant, pre-existing PrA participant, and 2PC iAP3 participant when PrA is
used with iAP3 participants. For an abort decision, the coordinator re-submits the decision to
each inaccessible iAP3 participant when PrC is used with 2PC iAP3 participants. When a participant
receives a decision, it complies with the decision, if it has not done so before the failure, and then
acknowledges the decision.
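Following the text above literally, the set of participants that receive a re-submitted decision can be sketched as below. This is a hedged illustration; the participant kinds and the function name are assumptions, not the paper’s notation:

```python
# Sketch: which inaccessible participants a coordinator re-contacts when it
# times out waiting for decision ACKs in iAP3.

def resubmission_targets(decision, participants, protocol_with_2pc):
    """participants: (site, kind) pairs, kind in {'1PC', 'PrA', '2PC-iAP3'};
    protocol_with_2pc: 'PrA' or 'PrC', as decided for 2PC iAP3 participants."""
    targets = []
    for site, kind in participants:
        if decision == "commit":
            if kind in ("1PC", "PrA"):
                targets.append(site)       # these protocols never presume commit
            elif kind == "2PC-iAP3" and protocol_with_2pc == "PrA":
                targets.append(site)       # PrA cannot presume commit either
        elif kind in ("1PC", "2PC-iAP3") and protocol_with_2pc == "PrC":
            targets.append(site)           # PrC presumes commit, so aborts need ACKs
    return targets

assert resubmission_targets("commit", [("a", "1PC"), ("b", "2PC-iAP3")], "PrC") == ["a"]
assert resubmission_targets("abort", [("a", "1PC"), ("b", "2PC-iAP3")], "PrC") == ["a", "b"]
```

The asymmetry mirrors the presumptions: a decision that matches a protocol’s presumption needs no ACK, so only decisions contrary to the presumption are re-submitted.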
4.1.2 LEAF PARTICIPANT COMMUNICATION FAILURES
Similar to the root coordinator communication failures, there are three points at which a leaf
participant may timeout while waiting for a message. In the first point, a participant may detect a
communication failure and it has a pending operation ACK. In this case, the participant aborts the
transaction.
The second point is when the participant detects a communication failure and the participant has
no pending operation ACK. If this occurs and the transaction is 1PC at the participant, the
participant is blocked until the communication failure is fixed. Once the failure is fixed, the
participant inquires about the transaction’s status. The participant will receive either a final
decision or a still active message. If the participant receives a decision, it enforces the decision.
The participant also acknowledges the decision if it is a commit decision. If the participant
receives a still-active message, it means that the transaction is still executing in the system and no
decision has been made yet regarding its final status. Based on that, the participant waits for
further instructions. On the other hand, if the communication failure occurs and the participant is
2PC, whether it is an iAP3
or a pre-existing PrA, the participant aborts the transaction.
The third point occurs when a participant is 2PC and is in a prepared to commit state. In this case,
if the participant is an iAP3, the participant inquires its direct ancestor about the status of the
transaction with a message that indicates the used protocol with the transaction once the
communication failure is fixed. Otherwise, being pre-existing PrA, the participant does not
indicate its used protocol in the inquiry message. In either of the two cases, the participant will
receive the correct final decision from its direct ancestor regardless of whether the transaction is
still remembered by its ancestors in the transaction execution tree or not.
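The three timeout points above can be condensed into a small decision table. This is a minimal sketch under assumed labels (`phase`, `kind`, and the return strings are ours, not the protocol’s):

```python
# Sketch: a leaf participant's reaction to a detected communication failure
# in ML-iAP3, condensing the three timeout points described in the text.

def participant_on_comm_failure(pending_op_ack, phase, kind):
    """phase in {'executing', 'prepared'}; kind in {'1PC', '2PC-iAP3', 'PrA'}."""
    if pending_op_ack:
        return "abort"                 # first point: operation never acknowledged
    if phase == "prepared":
        return "inquire-ancestor"      # third point: blocked, ask for the decision
    if kind == "1PC":
        return "block-then-inquire"    # second point, 1PC: wait until the failure is fixed
    return "abort"                     # second point, 2PC not yet prepared: safe to abort

assert participant_on_comm_failure(True, "executing", "1PC") == "abort"
assert participant_on_comm_failure(False, "prepared", "2PC-iAP3") == "inquire-ancestor"
assert participant_on_comm_failure(False, "executing", "PrA") == "abort"
```

Note that only an iAP3 participant tags its inquiry with the used protocol; a pre-existing PrA participant inquires without such a flag, as described above.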
4.1.3 CASCADED COORDINATOR COMMUNICATION FAILURES
In ML-iAP3, there are six points at which a cascaded coordinator may detect a communication
failure. Three of these failures may occur with the direct ancestor while the other three may occur
with a direct descendant.
4.1.3.1 COMMUNICATION FAILURES WITH THE DIRECT ANCESTOR
In the first point, a cascaded coordinator may detect a communication failure and it has a pending
operation ACK (either generated locally or received from one of its direct descendants). In this
case, the cascaded coordinator aborts the transaction and sends an abort message to each of its
direct descendants.
The second point is when a cascaded coordinator detects a communication failure and it does not
have a pending operation ACK. In this case, if the transaction is 1PC at the cascaded coordinator,
the cascaded coordinator is blocked until communication is re-established with its direct ancestor.
Once the communication failure is fixed, the cascaded coordinator inquires its direct ancestor
about the transaction’s status. The cascaded coordinator will receive either a final decision or a
still active message. In the former case, the cascaded coordinator enforces the final decision.
Then, if the decision is commit, the cascaded coordinator also acknowledges it. In the latter case,
the cascaded coordinator waits for further operations. On the other hand, if the communication
failure occurs and the cascaded coordinator or one of its direct descendants is 2PC, the cascaded
coordinator aborts the transaction. Once the cascaded coordinator has aborted the transaction, it
sends out an abort message to each of its direct descendants.
The third point occurs when a cascaded coordinator is 2PC and is in a prepared-to-commit state.
In this case, if the cascaded coordinator is an iAP3, it inquires its direct ancestor about the status
of the transaction, indicating the used protocol with the transaction. If the cascaded coordinator is
a pre-existing PrA, it also inquires its direct ancestor about the status of the transaction with a
message that, of course, does not indicate the used protocol. In either of the two cases, the
cascaded coordinator will receive the correct final decision from its direct ancestor regardless of
whether the transaction is still remembered by its ancestors in the transaction execution tree or
not.
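The three ancestor-side failure points differ from the leaf-participant case mainly in that aborts must also be propagated to the direct descendants. A condensed sketch (labels and return values assumed for illustration):

```python
# Sketch: a cascaded coordinator's reaction to a communication failure with
# its direct ancestor in ML-iAP3.

def cascaded_on_ancestor_failure(pending_op_ack, phase, any_2pc_in_subtree):
    """phase in {'executing', 'prepared'}; the pending ACK may be local or
    received from a direct descendant."""
    if pending_op_ack:
        return ("abort", "notify-descendants")   # first point: abort the subtree
    if phase == "prepared":
        return ("inquire-ancestor", None)        # third point: blocked in prepared state
    if any_2pc_in_subtree:
        return ("abort", "notify-descendants")   # second point, 2PC below: abort and propagate
    return ("block-then-inquire", None)          # second point, purely 1PC: wait, then inquire

assert cascaded_on_ancestor_failure(True, "executing", False) == ("abort", "notify-descendants")
assert cascaded_on_ancestor_failure(False, "prepared", True) == ("inquire-ancestor", None)
```
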
4.1.3.2 COMMUNICATION FAILURES WITH A DIRECT DESCENDANT
As mentioned above, there are three points at which a cascaded coordinator may timeout while
waiting for a message from a direct descendant. In the first point, a cascaded coordinator may
timeout while waiting for an operation ACK. In this case, the cascaded coordinator aborts the
transaction and sends out an abort message to its direct ancestor and to each of its accessible
direct descendants.
In the second point, a cascaded coordinator may timeout while waiting for the votes of 2PC direct
descendants. In this case, the cascaded coordinator treats communication failures as “no” votes
and aborts the transaction. On an abort, the cascaded coordinator sends out an abort message to its
direct ancestor and each accessible direct descendant. Then it waits for the required ACKs. The
anticipated ACKs depend on the finally decided protocol that the root coordinator sent in the
prepare messages (i.e., the ACKs of PrC participants when PrC is used with iAP3
participants).
These ACKs are necessary for the cascaded coordinator. They enable the cascaded coordinator to
write an end log record for the transaction and to forget it. If a participant has already voted “yes”
before a communication failure, the participant is left blocked. In this case, it is the responsibility
of the participant to inquire about the transaction’s status after the failure is fixed. When a
participant inquires about a transaction’s status, it has to include the used protocol with the
transaction in the inquiry message. This piece of information guides the ancestors of the
participant in their response to the inquiry message if the transaction has already been forgotten.
If the transaction was using a presumed-abort based protocol, the direct ancestor of the participant
can respond to the inquiry message of the participant with an abort decision without the need to
consult with its own direct ancestor. On the other hand, if the used protocol is PrC, the direct
ancestor cannot make such a decision alone and has to consult with its own direct ancestor until
possibly reaching the root coordinator. Again, this is because only the root coordinator force
writes a switch log record in iAP3
and can accurately determine the status of the transaction.
The third point occurs when the cascaded coordinator of a transaction times out while waiting for
the ACKs of a final decision. As the cascaded coordinator needs these ACKs in order to complete
the protocol and to forget the transaction, it re-submits the decision to the appropriate participants
once communication failures are fixed. In iAP3, a coordinator re-submits a commit decision to
each inaccessible 1PC participant, pre-existing PrA participant, and 2PC iAP3 participant when
PrA is used with iAP3 participants. For an abort decision, the coordinator re-submits the decision
to each inaccessible iAP3 participant when PrC is used with 2PC iAP3 participants. When a participant
receives a decision, it complies with the decision, if it has not done so before the failure, and then
acknowledges the decision.
4.2 SITE FAILURES
4.2.1 ROOT COORDINATOR SITE FAILURES
During the initial scan of the log after a site failure, the coordinator re-builds its protocol table
and identifies each incomplete transaction. If the coordinator is a pre-existing PrA coordinator, it
will correctly handle iAP3
participants using its own failure recovery mechanisms. This is because
each iAP3
participant knows, using its own PAC list, that the recovering coordinator is a pre-
existing PrA coordinator. Based on that, each iAP3
participant will deal with the recovering
coordinator as a PrA participant. On the other hand, for a recovering iAP3
coordinator, the
coordinator needs to consider the following types of transactions during its failure recovery:
• Transactions with only switch records: the coordinator knows that PrC was used with 2PC
iAP3
participants, as only PrC uses this type of record. The coordinator also knows that the
commit processing for each one of these transactions was interrupted before the decision was
propagated to the participants. Based on that, the coordinator aborts the transaction and sends
an abort message to each 2PC iAP3
participant recorded in the switch record. Then, the
coordinator waits for an ACK from each one of them. The iAP3 participants that did not request a
protocol switch during the execution of the transaction will inquire about the transaction’s status
after the coordinator has recovered. When the coordinator receives an
inquiry message that does not include a flag that determines the used protocol after it has
received the required ACK messages and forgotten the transaction, the coordinator will
assume that the protocol is a presumed-abort based protocol. Based on that, it will correctly
reply with an abort decision that is consistent with the presumption of the protocol used by the
participant.
• Transactions with switch records and corresponding commit records but without end records:
the coordinator knows that PrC was used with the 2PC iAP3
participants of each one of these
transactions. However, the coordinator cannot be sure whether all the participants in the
execution of each transaction are 2PC or not. For this reason, the coordinator refers to each
transaction’s switch record to find out this piece of information. If all participants are 2PC, the
transaction is considered a completed transaction. Otherwise, the coordinator identifies the set of
1PC participants and sends to them commit messages. Then, it waits for their ACKs.
• Transactions with only commit records: the coordinator knows that the protocol used with
each one of these transactions has to be a presumed-abort based protocol. This is because PrC
requires a switch log record before the commit decision can be made and written onto the log.
Based on that, the coordinator knows that either PrA or IYV was used with the transaction. In
either case, the coordinator re-sends its commit decision to all the participants of each
transaction and waits for their ACKs.
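The log-based case analysis above can be sketched as a small classification function. This is an illustrative sketch; the flag and return-value names are assumptions made for the example:

```python
# Sketch: how a recovering iAP3 root coordinator treats each incomplete
# transaction, based on which record types survive in its log.

def recovery_action(has_switch, has_commit, has_end, all_participants_2pc=False):
    if has_end:
        return "completed"              # protocol finished before the crash
    if has_switch and has_commit:
        # PrC decision made; only 1PC participants may have missed it
        return "completed" if all_participants_2pc else "resend-commit-to-1pc"
    if has_switch:
        return "abort-2pc-iap3"         # switch written, no decision: abort those participants
    if has_commit:
        return "resend-commit-to-all"   # PrA/IYV commit: re-send and await ACKs
    return "completed"                  # all other cases are safely ignored

assert recovery_action(True, False, False) == "abort-2pc-iap3"
assert recovery_action(True, True, False) == "resend-commit-to-1pc"
assert recovery_action(False, True, False) == "resend-commit-to-all"
```

The `all_participants_2pc` flag stands in for the check against the switch record’s recorded participant set described in the second bullet.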
When a participant receives a decision message from a coordinator after a failure, it means that
the coordinator needs an ACK. If the participant had been left blocked awaiting the decision, it
enforces the received decision and then acknowledges it. Otherwise, it simply replies with an
ACK.
The other types of transactions recorded in the coordinator’s log can be safely considered
completed transactions and ignored during the recovery procedure of the coordinator. If a
participant inquires about a transaction that is not within the coordinator’s protocol table after a
failure, the coordinator responds with a decision that matches the presumption of the protocol
indicated in the inquiry message. If the inquiry message does not include any indication about the
used protocol, it has to be from an IYV or a pre-existing PrA participant. In this case, the
coordinator responds with an abort message.
4.2.2 LEAF PARTICIPANT SITE FAILURES
For an iAP3
participant, the participant checks its stably stored RCL upon its recovery from a site
failure. If the list is empty, it means that the participant can recover its state using its own log and
without communicating with any coordinator in the system. Otherwise, it means that there may be
some missing records from the participant’s log. According to IYV, these records are replicated at
the coordinators’ logs. To retrieve these missing records, the participant needs to determine the
largest log sequence number (LSN). This number is associated with the last record written onto
the log that survived the failure. Once the largest LSN is determined, the participant sends a
recovering message that includes the largest LSN to all iAP3
coordinators recorded in the RCL.
When a coordinator receives a recovering message, it uses the LSN included in the message when
identifying the missing redo log records from the participant’s log.
While waiting for the reply messages to arrive, the participant initiates the undo phase of its
recovery procedure and when completed, the redo phase. This is accomplished using its own
local log. That is, the effects of completed transactions, both committed and aborted, are replayed
locally while waiting for the reply messages to arrive. This is because of the use of write-ahead
logging (WAL).
When an iAP3
coordinator receives a recovering message from a participant, the coordinator
checks its protocol table. The coordinator needs to determine each transaction for which the
failed participant has executed some operations and that is either still active in the
system or has terminated but did not finish the protocol. The former means that the transaction is
still executing at other sites and no decision has been made about its final status, yet; while the
latter means that a final decision has been made about the transaction but the participant was not
aware of the decision prior to its failure. For each forward recoverable transaction, the
coordinator includes in its response the list of redo log records that are stored in its log and
have LSNs greater than the one received in the recovering message. For each forward
recoverable transaction, the coordinator also includes all the read-locks received from the
participant during the execution of the transaction. On the other hand, for a committed
transaction, the coordinator responds with a commit status along with the list of all the
transaction’s redo records that are stored in its log and have LSNs greater than the one that was
included in the recovering message of the participant.
The coordinator sends all the above responses in a single repair message to the participant. If a
coordinator has no active transactions at the participant’s site before the participant’s failure, the
coordinator responds with an empty repair message. This latter reply message indicates that there
is no extra information available at the responding coordinator beyond the information that is
already available at the participant’s site and can be used for the recovery of the participant.
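The construction of a repair message described above can be sketched as follows. The data structure and field names are assumptions for illustration, not the protocol’s wire format:

```python
# Sketch: a coordinator assembling a single repair message for a recovering
# participant, using the largest LSN reported in the recovering message.

def build_repair_message(txns, largest_lsn):
    """txns: transactions the failed participant took part in, each a dict
    with 'status' ('active' or 'committed'), 'redo' as [(lsn, record)] pairs
    replicated at this coordinator, and 'read_locks'."""
    repair = []
    for t in txns:
        # redo records the participant lost in the crash
        entry = {"redo": [rec for lsn, rec in t["redo"] if lsn > largest_lsn]}
        if t["status"] == "active":
            entry["read_locks"] = t["read_locks"]   # forward recoverable transaction
        else:
            entry["status"] = "commit"              # decided but unacknowledged
        repair.append(entry)
    return repair                                   # [] means an empty repair message

msg = build_repair_message(
    [{"status": "committed", "redo": [(5, "w(x)"), (9, "w(y)")], "read_locks": []}], 7)
assert msg == [{"redo": ["w(y)"], "status": "commit"}]
```

An empty return value corresponds to the empty repair message of the text: the coordinator holds nothing beyond what is already available in the participant’s own log.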
When the participant receives the reply messages, it repairs its log and lock table, and then
completes the redo phase. During the recovery procedure of an iAP3
participant, the participant
also needs to resolve the states of any prepared-to-commit 2PC transactions that were coordinated
by either iAP3
coordinators or pre-existing PrA coordinators. A failed participant accomplishes
this by identifying such transactions during the analysis phase of the recovery procedure. For each
one of these transactions, the participant inquires its direct ancestor in the transaction tree about
the final status of the transaction, indicating the used protocol with the transaction as recorded in
the prepared log record. If the coordinator of a transaction is a pre-existing PrA, the participant
inquires its direct ancestor without making any indication about the used protocol (following PrA
protocol).
When an ancestor receives an inquiry message regarding the status of a transaction, it replies with
the decision that it still remembers. If the ancestor does not remember the transaction, it uses the
indicated protocol in the inquiry message to guide it in its response. The response of the ancestor
is abort if the indicated protocol is PrA. The response is also abort if the message does not
indicate the used protocol with the transaction. On the other hand, if the indicated protocol is PrC,
the ancestor propagates the inquiry message to its own direct ancestor, and so on. This process
continues until one of the ancestors still remembers the transaction and responds with the decision
that it still remembers or the message reaches the root coordinator which is the only one that can
make a correct presumption about unremembered PrC transactions.
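The escalation chain above can be sketched recursively. This is a minimal sketch with a hypothetical `Node` structure; it also assumes the standard PrC presumption, namely that a PrC transaction forgotten at the root coordinator is presumed committed:

```python
# Sketch: ancestor-side resolution of a status inquiry in ML-iAP3.
# Presumed-abort protocols are answered locally; PrC inquiries climb the
# transaction tree until a remembered decision or the root coordinator.

class Node:
    def __init__(self, parent=None, remembered=None):
        self.parent = parent                 # direct ancestor; None at the root
        self.remembered = remembered or {}   # txn_id -> decision still in the log

def resolve_inquiry(node, txn_id, indicated_protocol):
    if txn_id in node.remembered:
        return node.remembered[txn_id]       # decision still remembered: reply with it
    if indicated_protocol != "PrC":
        return "abort"                       # PrA-based or unflagged: abort presumption
    if node.parent is None:
        return "commit"                      # root: forgotten PrC txn, presume commit
    return resolve_inquiry(node.parent, txn_id, indicated_protocol)

root = Node()
mid = Node(parent=root, remembered={"t1": "commit"})
assert resolve_inquiry(mid, "t1", "PrC") == "commit"
assert resolve_inquiry(mid, "t2", "PrA") == "abort"
assert resolve_inquiry(mid, "t2", "PrC") == "commit"
```
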
For a PrA leaf participant, the participant follows the recovery procedure of PrA protocol. In this
case, the transaction has to be PrA across all participants in the transaction execution tree and the
participant will receive the correct decision from its direct ancestor.
4.2.3 CASCADED COORDINATOR SITE FAILURES
During failure recovery after a site failure, being an intermediate site, a cascaded coordinator has
to synchronise its recovery with its direct ancestor, on one side, and its direct descendants, on
the other.
As a descendant, a cascaded coordinator checks its stably stored RCL. If the list is empty, it
means that there were no iAP3
coordinators with active transactions at the cascaded coordinator’s
site before the cascaded coordinator’s failure. In this case, the cascaded coordinator does not
communicate with any iAP3
coordinator for recovery purposes. This is because all the necessary
information needed for recovery is available locally in its own log. On the other hand, if the RCL
is not empty, it means that there may be some missing records from the cascaded coordinator’s
log. According to IYV, these records are replicated at the coordinators’ logs. To retrieve these
missing records, the cascaded coordinator needs to determine the largest LSN. Then, the cascaded
coordinator sends a recovering message that contains the largest LSN to all iAP3
coordinators
recorded in the RCL. This LSN is used by iAP3
coordinators to determine missing redo log
records at the cascaded coordinator which are replicated in their logs and are needed by the
cascaded coordinator to fully recover.
When an iAP3
coordinator receives a recovering message from a cascaded coordinator, it means
that the cascaded coordinator has failed and is recovering from a failure. In this case, the
coordinator needs to determine each transaction for which the failed cascaded coordinator has
executed some operations and that is either still active in the system or has terminated but
did not finish the protocol. The former means that the transaction is still executing at other sites
and no decision has been made about its final status, yet; while the latter means that a final
decision has been made about the transaction but the cascaded coordinator was not aware of the
decision prior to its failure. For each forward recoverable transaction, the coordinator includes
in its response the list of redo log records that are stored in its log and have LSNs greater than
the one received in the recovering message. For each forward recoverable transaction, the
coordinator also includes all the read-locks received from the cascaded coordinator during the execution of
the transaction. On the other hand, for a committed transaction, the coordinator responds with a
commit status along with the list of all the transaction’s redo records that are stored in its log and
have LSNs greater than the one that was included in the message of the cascaded coordinator.
The coordinator sends all the above responses in a single repair message to the cascaded
coordinator. If a coordinator has no active transactions at the cascaded coordinator’s site before
the cascaded coordinator’s failure, the coordinator responds with an empty repair message. This
latter reply message indicates that there is no extra information available at the responding
coordinator beyond the information that is already available at the cascaded coordinator’s site and
can be used for the recovery of the cascaded coordinator.
During the recovery procedure of the cascaded coordinator, the cascaded coordinator also needs
to resolve the states of any prepared-to-commit 2PC transactions that were coordinated by either
iAP3
coordinators or pre-existing PrA coordinators. A failed cascaded coordinator accomplishes
this by identifying such transactions during the analysis phase of the recovery procedure. For each
one of these transactions, the cascaded coordinator inquires its direct ancestor in the transaction
tree about the final status of the transaction, indicating the used protocol with the transaction as
recorded in the prepared log record. If the coordinator of a transaction is a pre-existing PrA, the
cascaded coordinator inquires its direct ancestor without making any indication about the used
protocol (following PrA protocol).
When an ancestor receives an inquiry message regarding the status of a transaction, it replies with
the decision that it still remembers. If the ancestor does not remember the transaction, it uses the
indicated protocol in the inquiry message to guide it in its response. The response of the ancestor
is abort if the indicated protocol is PrA. The response is also abort if the message does not
indicate the used protocol with the transaction. On the other hand, if the indicated protocol is PrC,
the ancestor propagates the inquiry message to its own direct ancestor, and so on. This process
continues until one of the ancestors still remembers the transaction and responds with the decision
that it still remembers or the message reaches the root coordinator which is the only one that can
make a correct presumption about unremembered PrC transactions.
While waiting for the reply messages to arrive, the cascaded coordinator initiates the undo phase
of its recovery procedure and when completed, the redo phase. This is accomplished using its
own local log. That is, the effects of completed transactions, both committed and aborted, are
replayed locally while waiting for the reply messages to arrive. This is because of the use of
write-ahead logging (WAL).
When the cascaded coordinator receives the required reply messages from the iAP3 coordinators
recorded in its RCL, the cascaded coordinator repairs its log and lock table, and then completes
the redo phase.
As an ancestor, the cascaded coordinator needs to finish commit processing for each
prepared-to-commit transaction that was interrupted due to the failure without finalizing its commit protocol
with the direct descendants. This is accomplished by following the decided protocol recorded in
the prepared record of each transaction as during normal processing.
5. CONCLUSIONS
For practicality reasons, any newly proposed ACP has to be extended to the multi-level
distributed transaction execution model, as it is the one currently adopted by the database
standards. Not only that, but it is considered the de facto model in the database systems industry.
As the intelligent adaptive participant’s presumption protocol (iAP3) exhibits highly appealing
efficiency and applicability characteristics, this article concentrated on the details of extending it
to the more general multi-level distributed transaction execution model. The extension of iAP3
includes extending its advanced features and not only the basic ones. We believe that this work
should help in the design of future practical atomic commit protocols.
REFERENCES
[1] Gray, J. “Notes on Database Operating Systems”, in Bayer, R., Graham, R. M. & Seegmuller, G. (Eds.): Operating Systems: An Advanced Course, LNCS, Vol. 60, Springer, 1979.
[2] Lampson, B. “Atomic Transactions”, in Lampson, B., Paul, M. & Siegert, H.J. (Eds.): Distributed Systems: Architecture and Implementation - An Advanced Course, LNCS, Vol. 105, Springer, 1981.
[3] Al-Houmaily, Y. & Samaras, G. “Two-Phase Commit”, in Liu, L. & Tamer Özsu, M. (Eds.): Encyclopedia of Database Systems, Springer, 2009.
[4] Al-Houmaily, Y. “Atomic Commit Protocols, their Integration, and their Optimisations in Distributed Database Systems”, Int’l J. of Intelligent Info. and Database Sys., Vol. 4, No. 4, pp. 373-412, 2010.
[5] Stamos, J. & Cristian, F. “Coordinator Log Transaction Execution Protocol”, Distributed and Parallel Databases, Vol. 1, No. 4, pp. 383-408, 1993.
[6] Al-Houmaily, Y. & Chrysanthis, P. “An Atomic Commit Protocol for Gigabit-Networked Distributed Database Systems”, J. of Systems Architecture, Vol. 46, pp. 809-833, 2000.
[7] Abdallah, M., Guerraoui, R. & Pucheral, P. “Dictatorial Transaction Processing: Atomic Commitment without Veto Right”, Distributed and Parallel Databases, Vol. 11, No. 3, pp. 239-268, 2002.
[8] ISO. “Information Technology - Database Languages - SQL - Part 2: Foundation (SQL/Foundation)”, ISO/IEC 9075-2, 2008.
[9] Al-Houmaily, Y. “On Deferred Constraints in Distributed Database Systems”, Int’l Journal of Database Management Systems, Vol. 5, No. 6, December 2013.
[10] Al-Houmaily, Y. “GLAP: A Global Loopback Anomaly Prevention Mechanism for Multi-Level Distributed Transactions”, Int’l Journal of Database Management Systems, Vol. 6, No. 3, June 2014.
[11] Al-Houmaily, Y. “On Interoperating Incompatible Atomic Commit Protocols in Distributed Databases”, Proc. of the 1st IEEE Int’l Conf. on Computers, Comm., and Signal Processing, 2005.
[12] Mohan, C., Lindsay, B. & Obermarck, R. “Transaction Management in the R* Distributed Data Base Management System”, ACM TODS, Vol. 11, No. 4, pp. 378-396, 1986.
[13] Al-Houmaily, Y. “An Intelligent Adaptive Participant’s Presumption Protocol for Atomic Commitment in Distributed Databases”, Int’l J. of Intel. Info. and Database Sys., Vol. 7, No. 3, 2013.
[14] Al-Houmaily, Y., Chrysanthis, P. & Levitan, S. “An Argument in Favor of the Presumed Commit Protocol”, Proc. of the 13th ICDE, 1997.
[15] Gray, J. & Reuter, A. “Transaction Processing: Concepts and Techniques”, Morgan Kaufmann Inc., USA, 1993.
[16] Al-Houmaily, Y. & Chrysanthis, P. “Atomicity with Incompatible Presumptions”, Proc. of the 18th ACM PODS, 1999.
AUTHOR
Yousef J. Al-Houmaily received his BSc in Computer Engineering from King Saud University, Saudi Arabia in 1986, MSc in Computer Science from George Washington University, Washington DC in 1990, and PhD in Computer Engineering from the University of Pittsburgh in 1997. Currently, he is an Associate Professor in the Department of Computer and Information Programs at the Institute of Public Administration, Riyadh, Saudi Arabia. His current research interests are in the areas of database management systems, mobile distributed computing systems and sensor networks.
IEEE Int’l Conf. on Computers, Comm., and Signal Processing, 2005.
Obermarck, R. “Transaction Management in the R* Distributed Data Base
Houmaily, Y. “An Intelligent Adaptive Participant’s Presumption Protocol for Atomic
and Database Sys., Vol. 7, No. 3, 2013.
Levitan, S. “An Argument in Favor of the Presumed Commit
essing: Concepts and Techniques”, Morgan Kaufmann Inc.,
Chrysanthis, P. “Atomicity with Incompatible Presumptions”, Proc. of the 18th