The document discusses challenges in distributed systems including managing asynchrony and partial failures. It describes how distributed consistency aims to provide guarantees around safety and liveness despite asynchrony. Specifically, it discusses approaches like object-level consistency using convergent data structures and flow-level consistency using asynchronous dataflow models to reason about distributed applications and their guarantees.
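The object-level approach mentioned above builds on convergent replicated data types, whose merge operation is commutative, associative, and idempotent, so replicas agree on a value regardless of the order in which updates arrive. A minimal sketch of one such structure, a grow-only counter (the class and names are illustrative, not taken from the talk):

```python
# Minimal G-Counter CRDT sketch: each replica increments only its own slot,
# and merge takes the element-wise maximum. Because max is commutative,
# associative, and idempotent, replicas converge no matter how state
# exchanges are ordered or repeated.

class GCounter:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}  # replica_id -> that replica's local increment total

    def increment(self, n=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        # Element-wise max over both replicas' slots.
        for rid, n in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), n)

# Two replicas increment independently, then exchange state in either order.
a, b = GCounter("a"), GCounter("b")
a.increment(2)
b.increment(3)
a.merge(b)
b.merge(a)
assert a.value() == b.value() == 5
```

Re-merging the same state is a no-op, which is what lets such objects tolerate duplicated and reordered messages without coordination.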
2. Outline
1. Mourning the death of transactions
2. What is so hard about distributed systems?
3. Distributed consistency: managing asynchrony
4. Fault-tolerance: progress despite failures
3. The transaction concept
DEBIT_CREDIT:
  BEGIN_TRANSACTION;
  GET MESSAGE;
  EXTRACT ACCOUNT_NUMBER, DELTA, TELLER, BRANCH FROM MESSAGE;
  FIND ACCOUNT(ACCOUNT_NUMBER) IN DATA BASE;
  IF NOT_FOUND | ACCOUNT_BALANCE + DELTA < 0 THEN
    PUT NEGATIVE RESPONSE;
  ELSE DO;
    ACCOUNT_BALANCE = ACCOUNT_BALANCE + DELTA;
    POST HISTORY RECORD ON ACCOUNT (DELTA);
    CASH_DRAWER(TELLER) = CASH_DRAWER(TELLER) + DELTA;
    BRANCH_BALANCE(BRANCH) = BRANCH_BALANCE(BRANCH) + DELTA;
    PUT MESSAGE ('NEW BALANCE =' ACCOUNT_BALANCE);
  END;
  COMMIT;
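The all-or-nothing contract this pseudocode relies on maps directly onto a modern database transaction. Here is a rough, hypothetical Python/SQLite sketch of the same logic; the schema, table names, and sample values are invented for illustration and are not from the talk:

```python
import sqlite3

# Illustrative schema for Gray-style DEBIT_CREDIT (invented, not from the talk).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER);
CREATE TABLE teller  (id INTEGER PRIMARY KEY, cash INTEGER);
CREATE TABLE branch  (id INTEGER PRIMARY KEY, balance INTEGER);
CREATE TABLE history (account INTEGER, delta INTEGER);
INSERT INTO account VALUES (1, 100);
INSERT INTO teller  VALUES (7, 0);
INSERT INTO branch  VALUES (3, 0);
""")

def debit_credit(account_number, delta, teller, branch):
    """All four updates commit atomically, or none of them do."""
    with conn:  # BEGIN ... COMMIT, or ROLLBACK on exception
        row = conn.execute("SELECT balance FROM account WHERE id = ?",
                           (account_number,)).fetchone()
        if row is None or row[0] + delta < 0:
            return "NEGATIVE RESPONSE"
        conn.execute("UPDATE account SET balance = balance + ? WHERE id = ?",
                     (delta, account_number))
        conn.execute("INSERT INTO history VALUES (?, ?)",
                     (account_number, delta))
        conn.execute("UPDATE teller SET cash = cash + ? WHERE id = ?",
                     (delta, teller))
        conn.execute("UPDATE branch SET balance = balance + ? WHERE id = ?",
                     (delta, branch))
        return row[0] + delta

print(debit_credit(1, 50, 7, 3))    # -> 150
print(debit_credit(1, -500, 7, 3))  # -> NEGATIVE RESPONSE
```

The point is the holistic guarantee: the application asserts its invariant (balance never goes negative) once, and the store upholds it across all four writes.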
14. Transactions: a holistic contract
[Diagram: an application issues reads and writes against an opaque store, mediated by transactions. Application invariant: Assert: balance > 0.]
18. Incidental complexities
• The “Internet.” Searching it.
• Cross-datacenter replication schemes
• CAP Theorem
• Dynamo & MapReduce
• “Cloud”
19. Fundamental complexity
“[…] distributed systems require that the programmer be aware of latency, have a different model of memory access, and take into account issues of concurrency and partial failure.”
Jim Waldo et al., A Note on Distributed Computing (1994)
20. A holistic contract …stretched to the limit
[Diagram: the application / transactions / opaque store picture again, stretched.]
22. Are you blithely asserting that transactions aren’t webscale?
“Some people just want to see the world burn. Those same people want to see the world use inconsistent databases.”
– Emin Gün Sirer
23. Alternative to top-down design?
The “bottom-up,” systems tradition:
Simple, reusable components first.
Semantics later.
32. The “bottom-up” ethos
Simple, reusable components first.
Semantics later.
This is how we live now.
Question: Do we ever get those
application-level guarantees back?
38. When do contracts compose?
[Diagram: an application built atop a distributed service. Application invariant: Assert: balance > 0.]
39. Ew, did I get Mongo in my Riak?
Assert: balance > 0
40. Composition is the last hard problem
Composing modules is hard enough.
We must learn how to compose guarantees.
41. Outline
1. Mourning the death of transactions
2. What is so hard about distributed systems?
3. Distributed consistency: managing asynchrony
4. Fault-tolerance: progress despite failures
47. (asynchrony × partial failure) = hard²
Tackling one clown at a time:
Poor strategy for programming distributed systems.
Winning strategy for analyzing distributed programs.
48. Outline
1. Mourning the death of transactions
2. What is so hard about distributed systems?
3. Distributed consistency: managing asynchrony
4. Fault-tolerance: progress despite failures
81. Graph queries as dataflow
[Dataflow 1: graph store + memory allocator → transitive closure → garbage collector. Every component is confluent except the garbage collector.]
[Dataflow 2: graph store + transaction manager → transitive closure → deadlock detector. Every component is confluent.]
82. Graph queries as dataflow
[The same two dataflows, with an annotation on the garbage-collector dataflow: “Coordinate here.”]
83. Coordination: what is that?
Strategy 1: Establish a total order
[The garbage-collector dataflow again, annotated “Coordinate here” at the non-confluent garbage collector.]
84. Coordination: what is that?
Strategy 2: Establish a producer-consumer barrier
[The garbage-collector dataflow again, with a barrier inserted at the “Coordinate here” point.]
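Strategy 2 can be made concrete: the producer “seals” its stream with a barrier token, and the non-confluent consumer waits for the seal before acting. A toy sketch, where the edge data, node set, and the simplistic reachability check are all invented for illustration:

```python
import threading, queue

# Sentinel marking the end of the edge stream (the "seal" / barrier).
SEAL = object()

edges = queue.Queue()
collected = []

def producer():
    # Stream graph edges, then promise that no more will ever arrive.
    for e in [("a", "b"), ("b", "c")]:
        edges.put(e)
    edges.put(SEAL)

def garbage_collector():
    # Not confluent: it must see the *complete* input before deciding
    # that a node is unreachable, so it blocks until the seal arrives.
    graph = []
    while True:
        e = edges.get()
        if e is SEAL:
            break  # input complete; now safe to decide unreachability
        graph.append(e)
    roots = {"a"}
    reachable = roots | {dst for _, dst in graph}  # toy reachability
    collected.extend(sorted({"a", "b", "c", "d"} - reachable))

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=garbage_collector)
t1.start(); t2.start(); t1.join(); t2.join()
print(collected)  # -> ['d']
```

The barrier is much cheaper than a total order: the collector never cares in what order edges arrived, only that the stream is finished.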
85. Fundamental costs: FT via replication
(mostly) free!
[Diagram: two replicas of the deadlock-detector dataflow (graph store, transaction manager, transitive closure, deadlock detector). Every component is confluent, so the replicas need no coordination.]
86. Fundamental costs: FT via replication
global synchronization!
[Diagram: two replicas of the garbage-collector dataflow (graph store, transaction manager, transitive closure, garbage collector). The garbage collector is not confluent, so the replicas must synchronize via Paxos.]
87. Fundamental costs: FT via replication
“The first principle of successful scalability is to batter the consistency mechanisms down to a minimum.” – James Hamilton
[Diagram: the same replicated dataflows, with barriers in place of Paxos at the non-confluent garbage collectors.]
88. Language-level consistency
DSLs for distributed programming?
• Capture consistency concerns in the type system
[Stack diagram: Application / Language / Flow / Object / Storage.]
92. Let’s review
• Consistency is tolerance to asynchrony
• Tricks:
– focus on data in motion, not at rest
– avoid coordination when possible
– choose coordination carefully otherwise
(Tricks are great, but tools are better)
93. Outline
1. Mourning the death of transactions
2. What is so hard about distributed systems?
3. Distributed consistency: managing asynchrony
4. Fault-tolerance: progress despite failures
94. Grand challenge: composition
Hard problem:
Is a given component fault-tolerant?
Much harder:
Is this system (built up from components) fault-tolerant?
98. Example: Kafka replication bug
Three “correct” components:
1. Primary/backup replication
2. Timeout-based failure detectors
3. Zookeeper
One nasty bug:
Acknowledged writes are lost
99. A guarantee would be nice
Bottom-up approach:
• Use formal methods to verify individual components (e.g. protocols)
• Build systems from verified components
Shortcomings:
• Hard to use
• Hard to compose
[Graphic: investment vs. returns]
102. Composing bottom-up assurances
Issue 1: Incompatible failure models (e.g., crash failures vs. omissions)
Issue 2: Specs do not compose (FT is an end-to-end property)
“If you take 10 components off the shelf, you are putting 10 world views together, and the result will be a mess.” – Butler Lampson
110. End-to-end testing would be nice
Top-down approach:
• Build a large-scale system
• Test the system under faults
Shortcomings:
• Hard to identify complex bugs
• Fundamentally incomplete
[Graphic: investment vs. returns]
111. Lineage-driven fault injection
Goal: top-down testing that
• finds all of the fault-tolerance bugs, or
• certifies that none exist
114. Lineage-driven fault injection (LDFI)
Approach: think backwards from outcomes
Question: could a bad thing ever happen?
Reframe:
• Why did a good thing happen?
• What could have gone wrong along the way?
115. Thomasina: What a faint-heart! We must
work outward from the middle of the
maze. We will start with something simple.
116. The game
• Both players agree on a failure model
• The programmer provides a protocol
• The adversary observes executions and
chooses failures for the next execution.
121. Dedalus: it’s about time
consequence@when :- premise[s]

node(Node, Neighbor)@next :- node(Node, Neighbor);    (state change)
log(Node2, Pload)@async :- bcast(Node1, Pload),
                           node(Node1, Node2);        (communication; natural join on bcast.Node1 == node.Node1)
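These rules are small enough to hand-evaluate. A toy Python rendering of the slide's three rules follows; the starting facts and the fixed one-step delivery delay are assumptions made here for determinism (a real Dedalus @async delay is nondeterministic):

```python
# Facts are tuples tagged with a timestamp.
node  = {("a", "b", 1)}     # node(Node, Neighbor)@1
bcast = {("a", "data", 1)}  # bcast(Node1, Pload)@1
log   = set()

MAX_T = 3
for t in range(1, MAX_T):
    # State change: node(Node, Neighbor)@next :- node(Node, Neighbor)
    node |= {(n, m, t + 1) for (n, m, tt) in node if tt == t}
    # Communication: log(Node2, Pload)@async :- bcast(Node1, Pload),
    #                                           node(Node1, Node2)
    # Natural join on Node1; delivery at t + 1 is an assumption.
    log |= {(n2, p, t + 1)
            for (n1, p, tb) in bcast if tb == t
            for (m1, n2, tn) in node if tn == t and m1 == n1}

print(sorted(log))  # -> [('b', 'data', 2)]
```

Note that `bcast` is not persisted by any rule here, so the payload is sent exactly once; that single send is precisely the fragility the adversary exploits in the rounds that follow.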
122. The match
Protocol: Reliable broadcast
Specification:
Pre: A correct process delivers a message m
Post: All correct processes deliver m
Failure model:
(Permanent) crash failures
Message loss / partitions
130. An execution is a (fragile) “proof” of an outcome
[Lineage diagram: the delivery log(B, data)@5 is derived step by step, via rules r1, r2, and r3, from log(A, data)@1 and node(A, B)@1 (which required a message from A to B at time 1).]
144. Round 2: counterexample
[Space/time diagram: process a sends log to c, but its log to b is LOST; then a CRASHES at time 2.]
The adversary wins!
145. Round 3
Same as in Round 2, but symmetrical.
bcast(N, P)@next :- log(N, P);
146. Round 3 in space / time
[Space/time diagram: processes a, b, and c retransmit log messages to one another at every timestep, 1 through 5.]
Redundancy in space and time
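The Round 3 fix (every process retransmits everything it has logged, every round) is easy to simulate. A toy sketch follows; the node names match the slides, but the failure model passed in (which sends drop, who has crashed) is chosen here for illustration:

```python
def run(drops, crashed=frozenset(), rounds=5):
    """Simulate constant rebroadcast: bcast(N, P)@next :- log(N, P)."""
    nodes = {"a", "b", "c"}
    logs = {"a": {"data"}, "b": set(), "c": set()}  # a starts with the payload
    for r in range(1, rounds + 1):
        # Redundancy in space: every logger sends to every other node.
        sends = [(src, dst, m)
                 for src in nodes - crashed
                 for m in logs[src]
                 for dst in nodes - {src}]
        for src, dst, m in sends:
            if (src, dst, r) not in drops:  # the adversary drops some sends
                logs[dst].add(m)
    return logs

# The Round 2 counterexample dropped a's only send to b.  With constant
# retransmission (redundancy in time), b still gets the payload, either
# from a's retry or relayed by c.
logs = run(drops={("a", "b", 1)})
print(logs["b"])  # -> {'data'}
```

Even dropping every a-to-b send forever is not enough for the adversary now, because c relays the payload; breaking all derivations requires a larger failure set.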
153. Let’s reflect
Fault-tolerance is redundancy in space and time.
Best strategy for both players: reason backwards from outcomes using lineage.
Finding bugs: find a set of failures that “breaks” all derivations.
Fixing bugs: add additional derivations.
154. The role of the adversary can be automated
1. Break a proof by dropping any contributing message.
(AB1 ∨ BC2)  (a disjunction)
155. The role of the adversary can be automated
1. Break a proof by dropping any contributing message.
2. Find a set of failures that breaks all proofs of a good outcome.
(AB1 ∨ BC2) ∧ (AC1) ∧ (AC2)  (a conjunction of disjunctions, AKA CNF)
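Finding a failure set that breaks every proof is a hitting-set problem over that CNF formula. A brute-force sketch over the slide's three clauses (a real adversary like Molly would hand the formula to a SAT solver instead):

```python
from itertools import combinations

# Each proof of the good outcome is broken by dropping any one of its
# contributing messages, so each proof yields a disjunction of drops.
proofs = [{"AB1", "BC2"}, {"AC1"}, {"AC2"}]
messages = sorted(set().union(*proofs))

def smallest_failure_set(proofs):
    """Return a minimum-size set of dropped messages hitting every proof."""
    for k in range(1, len(messages) + 1):
        for failures in combinations(messages, k):
            if all(proof & set(failures) for proof in proofs):
                return set(failures)
    return None  # no failure set breaks all proofs: the outcome is robust

print(smallest_failure_set(proofs))  # a 3-element set, e.g. AC1, AC2, and AB1
```

If the search returns None within the failure model's budget, that is exactly the "certifies that none exist" half of the LDFI goal.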
157. Molly, the LDFI prototype
Molly finds fault-tolerance violations quickly or guarantees that none exist.
Molly finds bugs by explaining good outcomes – then it explains the bugs.
Bugs identified: 2PC, 2PC-CTP, 3PC, Kafka
Certified correct: Paxos (synod), Flux, bully leader election, reliable broadcast
158. Commit protocols
Problem:
Atomically change things
Correctness properties:
1. Agreement (All or nothing)
2. Termination (Something)
159. Two-phase commit
[Space/time diagram: the coordinator sends prepare to agents a, b, and d; each agent replies with a vote; the coordinator then sends commit to all three.]
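That message flow can be sketched as a function. The agent names match the diagram, but everything else (the vote inputs, the crash flag, the result labels) is invented here to show why 2PC satisfies Agreement yet can violate Termination:

```python
def two_phase_commit(votes, coordinator_crashes_after_prepare=False):
    """votes maps each agent to its phase-1 vote (True = yes)."""
    # Phase 1: coordinator sends prepare; agents reply with votes.
    if coordinator_crashes_after_prepare:
        # The decision message never arrives; agents block forever.
        return {agent: "blocked" for agent in votes}
    # Phase 2: commit iff every agent voted yes (all-or-nothing).
    decision = "commit" if all(votes.values()) else "abort"
    return {agent: decision for agent in votes}

print(two_phase_commit({"a": True, "b": True, "d": True}))   # all commit
print(two_phase_commit({"a": True, "b": False, "d": True}))  # all abort
print(two_phase_commit({"a": True, "b": True, "d": True},
                       coordinator_crashes_after_prepare=True))  # all blocked
```

The "blocked" branch is exactly the Termination violation shown a few slides later, and it is what the collaborative termination protocol and 3PC try to repair.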
160–162. Two-phase commit
[The same diagram, built up with a gloss on each phase: prepare (“Can I kick it?”), vote (“YES YOU CAN”), commit (“Well I’m gone”).]
163. Two-phase commit
[Space/time diagram: the coordinator sends prepare, collects votes, then CRASHES before sending any decision; the agents block.]
Violation: Termination
164. The collaborative termination protocol
Basic idea: Agents talk amongst themselves when the coordinator fails.
Protocol: On timeout, ask other agents about the decision.
166. 2PC - CTP
[Space/time diagram: the coordinator sends prepare, collects votes, then CRASHES; on timeout, agents a, b, and d exchange decision_req messages amongst themselves, but none of them learned the decision (“Can I kick it?” “YES YOU CAN” “……?”).]
167. 3PC
Basic idea:
Add a round, a state, and simple failure detectors (timeouts).
Protocol:
1. Phase 1: Just like in 2PC
   – Agent timeout → abort
2. Phase 2: send canCommit, collect acks
   – Agent timeout → commit
3. Phase 3: Just like phase 2 of 2PC
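The agent-side timeout rules can be sketched as a tiny state machine. The state names below are invented for illustration; the point is that the extra round makes timeouts safe to act on, since before precommit nobody can have committed, and after precommit everybody must have voted yes:

```python
def on_timeout(agent_state):
    """What a 3PC agent does when it stops hearing from the coordinator."""
    if agent_state == "waiting_for_decision_phase1":
        # Phase 1 (as in 2PC): no one can have committed yet.
        return "abort"
    if agent_state == "acked_precommit":
        # Phase 2: precommit was sent, so every agent voted yes.
        return "commit"
    raise ValueError("unknown state: " + agent_state)

print(on_timeout("waiting_for_decision_phase1"))  # -> abort
print(on_timeout("acked_precommit"))              # -> commit
```

These rules are sound under pure crash failures, but, as the next slides show, a network partition can put some agents in the "commit on timeout" state while the coordinator decides to abort.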
168. 3PC
[Space/time diagram: the coordinator sends cancommit to the other processes; they reply vote_msg; it sends precommit; they reply ack; it sends commit.]
169. 3PC
[The same diagram, annotated: a timeout before precommit means abort; a timeout after acking precommit means commit.]
170. Network partitions make 3pc act crazy
[Space/time diagram: the cancommit and precommit rounds complete; process d CRASHES; the coordinator's abort messages to the surviving agents are LOST; those agents time out and commit.]
174. Network partitions make 3pc act crazy
[The same diagram with all annotations: agent d crashes; d is dead, so the coordinator decides to abort; a brief network partition loses the abort messages; agents a and b time out, decide to commit, and learn a commit decision.]
175. Kafka durability bug
[Space/time diagram: replicas a, b, and c, Zookeeper, and a client exchange messages (labeled m, l, a, c, and w in the original figure); one replica CRASHES partway through.]
179. Kafka durability bug
[The same diagram with all annotations: a brief network partition; a becomes leader and sole replica; a ACKs the client write; data loss.]
180. Molly summary
Lineage allows us to reason backwards from good outcomes.
Molly: surgically-targeted fault injection.
Investment similar to testing; returns similar to formal methods.
181. Where we’ve been; where we’re headed
1. Mourning the death of transactions
2. What is so hard about distributed systems?
3. Distributed consistency: managing asynchrony
4. Fault-tolerance: progress despite failures
182. Where we’ve been; where we’re headed
1. We need application-level guarantees
2. What is so hard about distributed systems?
3. Distributed consistency: managing asynchrony
4. Fault-tolerance: progress despite failures
184. Where we’ve been; where we’re headed
1. We need application-level guarantees
2. (asynchrony X partial failure) = too hard to
hide! We need tools to manage it.
3. Distributed consistency: managing asynchrony
4. Fault-tolerance: progress despite failures
185. Where we’ve been; where we’re headed
1. We need application-level guarantees
2. asynchrony X partial failure = too hard to hide!
We need tools to manage it.
3. Distributed consistency: managing asynchrony
4. Fault-tolerance: progress despite failures
186. Where we’ve been; where we’re headed
1. We need application-level guarantees
2. asynchrony X partial failure = too hard to hide!
We need tools to manage it.
3. Focus on flow: data in motion
4. Fault-tolerance: progress despite failures
187. Outline
1. We need application-level guarantees
2. asynchrony X partial failure = too hard to hide!
We need tools to manage it.
3. Focus on flow: data in motion
4. Fault-tolerance: progress despite failures
188. Outline
1. We need application-level guarantees
2. asynchrony X partial failure = too hard to hide!
We need tools to manage it.
3. Focus on flow: data in motion
4. Backwards from outcomes
189. Remember
1. We need application-level guarantees
2. asynchrony X partial failure = too hard to hide! We
need tools to manage it.
3. Focus on flow: data in motion
4. Backwards from outcomes
Composition is the hardest problem
190. A happy crisis
Valentine: “It makes me so happy. To be at
the beginning again, knowing almost
nothing.... It's the best possible time of
being alive, when almost everything you
thought you knew is wrong.”
Editor's Notes
USER-CENTRIC
OMG pause here. Remember brewer 2012? Top-down vs bottom-up designs? We had this top-down thing and it was beautiful.
It was so beautiful that it didn’t matter that it was somewhat ugly
The abstraction was so beautiful,
IT DOESN'T MATTER WHAT'S UNDERNEATH. Wait, or does it? When does it?
We’ve known for a long time that it is hard to hide the complexities of distribution
Focus not on semantics, but on the properties of components: thin interfaces, understandable latency & failure modes. DEV-centric
But can we ever recover those guarantees? I mean real guarantees, at the application level? Are my (app-level) constraints upheld? No? What can go wrong?
FIX ME: joe’s idea: sketch of a castle being filled in, vs bricks
In a world without transactions, one programmer must risk inconsistency to build a distributed application out of individually-verified components
Meaning: translation
DS are hard because of uncertainty – nondeterminism – which is fundamental to the environment and can “leak” into the results.
It’s astoundingly difficult to face these demons at the same time – tempting to try to defeat them one at a time.
Async isn’t a problem: just need to be careful to number messages and interleave correctly. Ignore arrival order.
Whoa, this is easy so far.
Failure isn’t a problem: just do redundant computation and store redundant data. Make more copies than there will be failures.
I win.
We can’t do deterministic interleaving if producers may fail.
Nondeterministic message order makes it hard to keep replicas in agreement
To guard against failures, we replicate.
NB: asynchrony => replicas might not agree
Very similar looking criteria (1 safe 1 live). Takes some work, even on a single site. But hard in our scenario: disorder => replica disagreement, partial failure => missing partitions
FIX: make it about translation vs. prayer
Ie, reorderability, batchability, tolerance to duplication / retry
Now programmer must map from application invariants to object API (with richer semantics than read/write).
Convergence is a property of component state. It rules out divergence, but it does not readily compose.
However, not sufficient to synchronize GC.
Perhaps more importantly, not *compositional* -- what guarantees does my app – pieced together from many convergent objects – give?
To reason compositionally, need guarantees about what comes OUT of my objects, and how it transits the app.
*** main point to make here: we’d like to reason backwards from the outcomes, at the level of abstraction of the application.
We are interested in the properties of component *outputs* rather than just internal state. Hence we are interested in a different property: confluence.
A confluent module behaves like a function from sets (of inputs) to sets (of outputs)
Confluence is compositional: Composing confluent components yields a confluent dataflow
All of these components are confluent! Composing confluent components yields a confluent dataflow
But annotations are burdensome
A separate question is choosing a coordination strategy that “fits” the problem without “overpaying.” for example, we could establish a global ordering of messages, but that would essentially cost us what linearizable storage cost us. We can solve the GC problem with SEALING: establishing a big barrier; damming the stream.
M – a semantic property of code – implies confluence
An appropriately constrained language provides a conservative syntactic test for M.
Also note that a data-centric language give us the dataflow graph automatically, via dependencies (across LOC, modules, processes, nodes, etc)
Try to not use it! Learn how to choose it. Tools help!
Start with a hard problem: does my FT protocol work?
Harder: is the composition of my components FT?
Point: we need to replicate data to both copies of a replica
We need to commit multiple partitions together
Examples! 2pc and replication. Properties, etc etc