Justin Thaler
Georgetown University and a16z crypto research
Joint work with:
Srinath Setty (Microsoft Research), Riad Wahby
(CMU), Arasu Arun (NYU), Sam Ragsdale (a16z),
Michael Zhu (a16z)
Lasso + Jolt: A Deep Dive
Presentation Outline
• What are lookup arguments?
• What are Lasso/Jolt?
• Lasso in detail.
• Jolt in detail.
• How to think about Lasso as a tool.
• And where else will lookup arguments be useful outside of zkVMs?
Lookup arguments: what are they?
• Unindexed lookup argument:
• Lets P commit to a vector a ∈ F^m, and prove that every entry of a resides in a pre-determined table t ∈ F^N.
• For every entry a_i there is an index b_i such that a_i = t[b_i].
• Indexed lookup argument:
• Lets P commit to vectors a, b ∈ F^m, and prove that a_i = t[b_i] for all i.
• We call a the vector of lookup values and b the indices.
• Unindexed lookups are proofs of a subset relationship (i.e., batch set-membership
proofs).
• 𝑎 specifies a subset of 𝑡.
• Indexed lookups are reads into a read-only memory.
• t is the memory, and a_i = t[b_i] is a read of memory cell b_i.
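To make the two relations concrete, here is a minimal Python sketch of just the statements being proven (plain checks, with no commitments or proof system; the names are illustrative):

```python
# The relations behind lookup arguments (no cryptography, just the claims).

def unindexed_lookup_holds(a, t):
    # Every entry of the committed vector a must appear somewhere in the table t.
    table = set(t)
    return all(a_i in table for a_i in a)

def indexed_lookup_holds(a, b, t):
    # a_i must equal t[b_i] for every i: reads into a read-only memory t.
    return len(a) == len(b) and all(a_i == t[b_i] for a_i, b_i in zip(a, b))

t = [3, 1, 4, 1, 5, 9, 2, 6]   # the table (read-only memory)
a = [4, 9, 1]                  # looked-up values
b = [2, 5, 1]                  # claimed indices
assert unindexed_lookup_holds(a, t)
assert indexed_lookup_holds(a, b, t)
```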
Lasso+Jolt: what are they?
• Lasso: new family of (indexed) lookup arguments.
• P is an order of magnitude faster than in prior works.
• Addresses key bottleneck for P: commitment costs.
• P commits to fewer field elements, and all of them are small.
• No commitment to 𝑡 needed for many tables.
• Support for gigantic tables (decomposable, or LDE-structured).
• P commitment costs: O(c(m + N^{1/c})) field elements.
• Jolt: new zkVM technique.
• Much lower commitment costs for P than prior works.
• Primitive instructions are implemented via one lookup into the
entire evaluation table of the instruction.
Lasso in Detail
Lasso costs in detail
• For 𝑚 indexed lookups into a table of size 𝑁, using parameter 𝑐:
• P commits to 3cm + c·N^{1/c} field elements.
• All of them are small, say, in the set {0, 1, … , 𝑚}.
• With MSM-based polynomial commitment schemes, P does (roughly)
just one group operation per (small) committed field element.
• Examples: KZG-based, IPA/Bulletproofs, Hyrax, Dory, etc.
• c = 1 is a special case. I call it “Basic-Lasso”.
• P commits to only 𝑚+𝑁 field elements.
• Even amongst these 𝑚+𝑁, many are 0.
• Hence “free” to commit to with MSM-based schemes.
• Specifically, at most 2𝑚 are non-zero.
• If every read is of a different table cell, 𝑚 of the field elements are
equal to 1, and the rest are 0s.
• V costs:
• 𝑂(log 𝑚) field ops and hash evaluations (from Fiat-Shamir).
• Plus one evaluation proof for a committed polynomial of size N^{1/c}.
• Low enough V costs to reduce further via composition/recursion.
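As a rough illustration of where the m + N committed values and their sparsity come from, here is a sketch that assumes, in the spirit of offline memory checking, that the extra committed data consists of per-read timestamps plus per-cell final read counts; the exact committed polynomials in Basic-Lasso differ in the details.

```python
def memory_checking_counters(b, N):
    # Sketch: per-read timestamps plus per-cell final read counts, the kind of
    # "counter" data memory-checking-based lookup arguments have P commit to.
    final_counts = [0] * N
    read_ts = []
    for b_i in b:                      # one read per lookup
        read_ts.append(final_counts[b_i])
        final_counts[b_i] += 1
    return read_ts, final_counts

N, b = 16, [3, 7, 7, 12]               # m = 4 indexed reads
read_ts, final_counts = memory_checking_counters(b, N)
m = len(b)
nonzero = sum(1 for v in read_ts + final_counts if v != 0)
assert nonzero <= 2 * m                              # sparse: cheap to commit with MSMs
assert all(v <= m for v in read_ts + final_counts)   # and small: in {0, 1, ..., m}
```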
Lasso applied to huge tables: 𝑐>1
• Most big lookup tables arising in practice are decomposable.
• Can answer an (indexed) lookup into the big table of size N by performing roughly c lookups into tables of size N^{1/c} and “collating” the results.
• Lasso handles the collation with the sum-check protocol.
• No extra commitment costs for P.
• Can view Lasso with 𝑐>1 as a generic reduction from lookups into big,
decomposable tables to lookups into small tables.
• Can use any lookup argument for the small tables.
• Lasso uses Basic-Lasso on the small tables.
• Major caveat: the small-table lookup argument must be indexed.
• There are known transformations from unindexed lookup arguments to
indexed ones.
• But they either do not preserve “smallness” of table entries or do not
preserve decomposability of the big table.
• Because they “pack” indices and values together into a single field element.
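A minimal sketch of the decomposability idea, using a toy identity table of size 2^16 split into c = 2 subtables of size 2^8; the chunking convention and collation below are illustrative stand-ins for Lasso's actual interfaces.

```python
# Toy decomposition: one lookup into a size-2^16 table becomes c = 2 lookups
# into size-2^8 subtables plus a cheap "collation" of the results.

C, CHUNK_BITS = 2, 8
SUBTABLE = list(range(1 << CHUNK_BITS))   # identity subtable (toy example)

def big_table_lookup_via_subtables(index):
    # Split the 16-bit index into c 8-bit chunks (low chunk first).
    chunks = [(index >> (CHUNK_BITS * j)) & 0xFF for j in range(C)]
    sub_results = [SUBTABLE[ch] for ch in chunks]     # c small-table lookups
    # Collation: recombine the subtable outputs; Lasso proves this step with
    # the sum-check protocol, at no extra commitment cost for P.
    return sum(r << (CHUNK_BITS * j) for j, r in enumerate(sub_results))

idx = 0xBEEF
assert big_table_lookup_via_subtables(idx) == idx     # matches the size-2^16 identity table
```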
Background: Grand Product Arguments
• All known lookup arguments use something called a grand product argument.
• A SNARK for proving the product of 𝑛 committed values.
• Popular grand product arguments today have P commit to 𝑛 extra values (partial
products).
• This is unnecessary.
• T13: gave an optimized variant of the GKR protocol (sum-check-based interactive proof
for circuit evaluation).
• No commitment costs for P.
• P does a linear number of field operations.
• Proof size/V time is O(log^2 n) field ops (and hash evaluations from Fiat-Shamir).
• Much less than FRI, concretely and asymptotically.
• [Lee, Setty 2019] reduce V costs to about O(log n) with a slight increase in commitment costs for P.
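To see why no partial products need to be committed, here is a sketch of the log(n)-depth binary product tree whose layers the GKR/T13-style argument proves with sum-check; the protocol itself is omitted and the field modulus is a stand-in.

```python
P = 2**127 - 1   # stand-in prime field modulus

def product_tree_layers(values):
    # Layer 0 is the n committed leaves; each next layer takes pairwise products,
    # ending in the single grand product. GKR/T13 proves these layers with
    # sum-check, so the prover never commits to the partial products.
    layers = [list(values)]
    while len(layers[-1]) > 1:
        prev = layers[-1]
        layers.append([(prev[2*i] * prev[2*i+1]) % P for i in range(len(prev) // 2)])
    return layers

leaves = [3, 5, 7, 11, 13, 17, 19, 23]   # n = 8 committed values (power of two)
layers = product_tree_layers(leaves)
grand_product = 1
for v in leaves:
    grand_product = (grand_product * v) % P
assert layers[-1][0] == grand_product
assert len(layers) - 1 == 3              # log2(n) layers of multiplications
```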
Key Performance Insight in Basic-Lasso
• For many existing lookup arguments, if you swap out the invoked grand product
argument for T13, P commits only to small field elements.
• See upcoming work on LogUp by Papini and Haböck.
• More involved than just a simple swap of a grand product argument.
• Remember: Lasso/Jolt need an indexed lookup argument that plays nicely with
collating small-table lookup results into big-table results.
• Technical takeaway: The community has still not fully internalized the power of
sum-check to avoid commitment costs for P.
• See my second a16z talk for details on how Basic-Lasso works.
• Last part of this talk: more info about how to think of Lasso as a tool.
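Since sum-check is the recurring tool here, below is a self-contained toy sum-check for a multilinear polynomial given by its table of evaluations over {0,1}^v; a real deployment would replace the final direct evaluation with a polynomial-commitment opening and derive challenges via Fiat-Shamir.

```python
import random

P = 2**127 - 1     # stand-in prime field

def fold(table, r):
    # Fix the first (most-significant) variable of a multilinear table to r.
    half = len(table) // 2
    return [(table[i] + r * (table[half + i] - table[i])) % P for i in range(half)]

def mle_eval(table, point):
    for r in point:
        table = fold(table, r)
    return table[0]

def sumcheck(table):
    # P proves H = sum of f over {0,1}^v; V does O(v) field ops per round
    # plus ONE evaluation of f at a random point (a commitment opening, in a SNARK).
    claim = sum(table) % P
    H, point = claim, []
    for _ in range(len(table).bit_length() - 1):               # v rounds
        half = len(table) // 2
        g0, g1 = sum(table[:half]) % P, sum(table[half:]) % P  # degree-1 round polynomial
        assert (g0 + g1) % P == claim                          # verifier's round check
        r = random.randrange(P)
        claim = (g0 + r * (g1 - g0)) % P
        table = fold(table, r)
        point.append(r)
    return H, point, claim

f = [random.randrange(P) for _ in range(2**4)]    # evaluations of f over {0,1}^4
H, point, final_claim = sumcheck(list(f))
assert final_claim == mle_eval(f, point)          # the single f-evaluation check
```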
Jolt in Detail
Front-ends today for VM execution
• Say P claims to have run a computer program for 𝑚 steps.
• Say the program is written in the assembly language for a VM.
• Popular VMs targeted: RISC-V, Ethereum Virtual Machine (EVM).
• Today, front-ends produce a circuit that, for each step of the
computation:
1. Figures out what instruction to execute at that step.
2. Executes that instruction.
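For concreteness, a toy sketch of the per-step transition that such front-ends arithmetize; the mini instruction set and state layout are illustrative, not RISC-V.

```python
# Toy VM step that a front-end turns into a circuit:
# (1) figure out the instruction at the current program counter, (2) execute it.

PROGRAM = [("ADD", 0, 1, 2), ("AND", 2, 1, 0), ("ADD", 0, 0, 0)]   # illustrative program

def step(pc, regs):
    op, dst, src1, src2 = PROGRAM[pc]                  # step 1: fetch/decode
    if op == "ADD":                                    # step 2: execute
        regs[dst] = (regs[src1] + regs[src2]) % 2**64
    elif op == "AND":
        regs[dst] = regs[src1] & regs[src2]
    return pc + 1, regs

pc, regs = 0, [7, 9, 0, 0]
for _ in range(len(PROGRAM)):
    pc, regs = step(pc, regs)
```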
Jolt: A new front-end paradigm
• Say P claims to have run a computer program for 𝑚 steps.
• Say the program is written in the assembly language for a VM.
• Popular VMs targeted: RISC-V, Ethereum Virtual Machine (EVM).
• Today, front-ends produce a circuit that, for each step of the
computation:
1. Figures out what instruction to execute at that step.
2. Executes that instruction.
• Lasso lets one replace Step 2 with a single lookup.
• For each instruction, the table stores the entire evaluation
table of the instruction.
• If instruction f operates on two 64-bit inputs, the table stores f(x, y) for every pair of 64-bit inputs (x, y).
• This table has size 2^{128}.
• Jolt shows that all RISC-V instructions are decomposable.
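A sketch of the "one lookup per instruction" view, scaled down to 8-bit operands so the evaluation tables can actually be materialized; with 64-bit operands the table has size 2^{128} and is only ever handled implicitly via its decomposition.

```python
W = 8                                                      # toy word size (Jolt targets 64)

def eval_table(f):
    # The (virtual) lookup table for instruction f: entry (x << W) | y stores f(x, y).
    return [f(x, y) for x in range(1 << W) for y in range(1 << W)]

ADD_TABLE = eval_table(lambda x, y: (x + y) % (1 << W))    # overflow bit dropped
AND_TABLE = eval_table(lambda x, y: x & y)

x, y = 200, 77
assert ADD_TABLE[(x << W) | y] == (x + y) % 256            # executing ADD = one indexed lookup
assert AND_TABLE[(x << W) | y] == x & y
```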
Jolt in a picture
query to be split into “chunks” which are fed into different subtables. The prover provides these chunks as
advice, which are c in number for some small constant c, and hence approximately W/c or 2W/c bits long,
depending on the structure of z. The constraint system must verify that the chunks correctly constitute z,
but need not perform any range checks as the Lasso algorithm itself later implicitly enforces these on the
chunks.
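A sketch of the chunking step described in the excerpt: the prover supplies the chunks of z as advice, and the constraint system only checks that they reconstitute z, leaving the range checks to Lasso; the chunk count, width (W/c here), and ordering are illustrative.

```python
W, C = 64, 8
CHUNK_BITS = W // C        # the excerpt notes chunks may also be 2W/c bits, depending on z

def chunk_advice(z):
    # Prover-supplied advice: the c chunks of z, low chunk first (illustrative ordering).
    return [(z >> (CHUNK_BITS * j)) & ((1 << CHUNK_BITS) - 1) for j in range(C)]

def chunks_constitute_z(z, chunks):
    # The only constraint the R1CS needs here: the chunks recombine to z.
    # Range checks on the chunks are enforced implicitly by Lasso itself.
    return z == sum(ch << (CHUNK_BITS * j) for j, ch in enumerate(chunks))

z = 0x0123456789ABCDEF
assert chunks_constitute_z(z, chunk_advice(z))
```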
Jolt in context
• Jolt is a realization of Barry Whitehat’s “lookup singularity” vision (?)
• Auditability/Simplicity/Extensibility benefits.
• Performance benefits.
• A qualitatively different way of building zkVMs.
• Yet with many similarities to things people are already doing.
• People are already computing functions like bitwise-AND by
doing several lookups into small tables and combining the
results.
• Differences/keys to Jolt:
• The new small-table lookup argument is much faster for P.
• The new small-table lookup argument is naturally indexed.
• The collation technique is much faster for P.
• “Free” to multiply and add results of small-table lookups.
• These differences let us do almost everything in VM emulation
with lookups.
Three Examples of Jolt’s
decompositions
Example 1: Bitwise-AND
• Decomposable: to compute bitwise-AND of two 64-bit inputs 𝑥, 𝑦:
• Break each of 𝑥, 𝑦 into, say, 𝑐 = 8 chunks of 8 bits.
• Compute the bitwise-AND of each chunk.
• Concatenate the results.
• i.e., output is ∑_{i=1}^{8} 2^{8(i−1)} · bitwiseAND(x_i, y_i).
• LDE-structured:
• bitwiseAND(x, y) = ∑_{i=1}^{64} 2^{i−1} · x_i · y_i, where x_i, y_i are the i-th bits of x and y.
• This is a multilinear polynomial that can be evaluated with under 200 field operations.
• Avoiding an honest party committing to the sub-table:
• bitwiseAND(x_i, y_i) = ∑_{j=1}^{8} 2^{j−1} · x_{i,j} · y_{i,j}, summing over the 8 bits of each chunk.
• This is a multilinear polynomial that can be evaluated with under 25 field operations.
• The only information the Lasso V needs about the sub-table is one evaluation of this polynomial.
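A sketch, over a stand-in prime field, of evaluating the bitwise-AND multilinear extension at arbitrary points; on Boolean inputs it reproduces bitwise-AND, and the same formula on 8-bit chunks is the sub-table MLE the Lasso verifier evaluates once.

```python
import random

P = 2**127 - 1                        # stand-in prime field (big enough for 2^63 weights)

def and_mle(rx, ry):
    # Multilinear extension of bitwise-AND: sum_i 2^(i-1) * x_i * y_i.
    # On 64 bits this is roughly 200 field operations; on an 8-bit chunk, roughly 25.
    return sum((1 << i) * xi * yi for i, (xi, yi) in enumerate(zip(rx, ry))) % P

def bits(v, n):
    return [(v >> i) & 1 for i in range(n)]        # little-endian bit decomposition

x, y = 0xDEADBEEFCAFEF00D, 0x0123456789ABCDEF
assert and_mle(bits(x, 64), bits(y, 64)) == x & y  # agrees with AND on Boolean inputs

rx = [random.randrange(P) for _ in range(8)]       # the Lasso verifier needs only a single
ry = [random.randrange(P) for _ in range(8)]       # such evaluation of each sub-table's MLE
_ = and_mle(rx, ry)
```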
Example 2: RISC-V Addition
• For adding two 64-bit numbers 𝑥, 𝑦, RISC-V prescribes that they be added and
any “overflow bit” be ignored.
• Jolt computes 𝑧 = 𝑥 + 𝑦 in the finite field (via one constraint added to the
ancillary R1CS), and then uses lookups to identify the overflow bit, if any, and
adjust the result accordingly.
• P commits to the “limb decomposition” (b_1, …, b_c) of the field element z = x + y.
• Let M = 2^{64/c} denote the max value any limb should take.
• A constraint is added to the R1CS to confirm z = ∑_{j=1}^{c} M^{j−1} · b_j, and each b_j is range checked via a lookup into the subtable that stores {0, …, M − 1}.
• These checks guarantee that (b_1, …, b_c) is really the prescribed limb decomposition of z.
• To identify the overflow bit, one can do a lookup at index b_c into a table whose i-th entry spits out the relevant high-order bit of i.
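A toy sketch of this addition gadget with c = 4 limbs; letting the top limb carry the potential overflow bit is an illustrative simplification of the range-check arrangement.

```python
C = 4
SHIFT = 64 // C
M = 1 << SHIFT                                   # max value a limb should take

def add64_via_lookups(x, y):
    z = x + y                                    # one R1CS constraint computes z in the field
    # Prover advice: a limb decomposition of z, low limb first. Illustrative choice:
    # the top limb keeps the potential overflow bit, so it may reach 2M - 1.
    limbs = [(z >> (SHIFT * j)) & (M - 1) for j in range(C - 1)] + [z >> (SHIFT * (C - 1))]
    assert z == sum(b * M**j for j, b in enumerate(limbs))   # R1CS recombination check
    assert all(b < M for b in limbs[:-1])                    # range checks = subtable lookups
    overflow = limbs[-1] >> SHIFT                # lookup: high-order bit of the top limb
    return z - (overflow << 64)                  # RISC-V ADD drops the overflow bit

x, y = 2**63 + 5, 2**63 + 7
assert add64_via_lookups(x, y) == (x + y) % 2**64
assert add64_via_lookups(12, 34) == 46
```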
Example 3: LESS THAN UNSIGNED
• Decomposable: to compute LESS-THAN (unsigned) of two 64-bit inputs x, y:
• Break each of x, y into, say, c = 8 chunks of 8 bits.
• Compute LESS-THAN (LT) and EQUALITY (EQ) on each chunk.
• Output is: ∑_{i=1}^{8} LT(x_i, y_i) · ∏_{j=i+1}^{8} EQ(x_j, y_j), where chunk 8 is the most significant.
• LDE-structured:
• EQ(x_i, y_i) = ∏_{k=1}^{8} ( x_{i,k} · y_{i,k} + (1 − x_{i,k})(1 − y_{i,k}) ).
• LT(x_i, y_i) = ∑_{k=1}^{8} (1 − x_{i,k}) · y_{i,k} · ∏_{ℓ=k+1}^{8} ( x_{i,ℓ} · y_{i,ℓ} + (1 − x_{i,ℓ})(1 − y_{i,ℓ}) ).
• Plugging the above into the output expression gives a multilinear polynomial that can be evaluated with under 200 field operations.
• Avoiding commitments to the two subtables: the EQ and LT expressions above are multilinear polynomials that can each be evaluated with under 50 field operations.
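A sketch, over a stand-in prime field, of the chunk-level LT/EQ formulas above, checked against ordinary integer comparison on Boolean inputs; the little-endian bit and chunk ordering is an illustrative convention.

```python
import random

P = 2**127 - 1                                   # stand-in prime field

def eq_mle(xb, yb):
    # EQ over bits: prod_k ( x_k * y_k + (1 - x_k)(1 - y_k) ).
    out = 1
    for xk, yk in zip(xb, yb):
        out = out * (xk * yk + (1 - xk) * (1 - yk)) % P
    return out

def lt_mle(xb, yb):
    # LT over bits: sum_k (1 - x_k) * y_k * EQ on the more-significant bits.
    return sum((1 - xb[k]) * yb[k] * eq_mle(xb[k+1:], yb[k+1:]) for k in range(len(xb))) % P

def ltu64(x, y, c=8, w=8):
    # Chunk-level output: LT on one chunk, EQ on every more-significant chunk.
    xc = [[(x >> (w*i + k)) & 1 for k in range(w)] for i in range(c)]
    yc = [[(y >> (w*i + k)) & 1 for k in range(w)] for i in range(c)]
    total = 0
    for i in range(c):
        term = lt_mle(xc[i], yc[i])
        for j in range(i + 1, c):
            term = term * eq_mle(xc[j], yc[j]) % P
        total = (total + term) % P
    return total

for _ in range(100):
    x, y = random.getrandbits(64), random.getrandbits(64)
    assert ltu64(x, y) == int(x < y)
```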
General intuition for Lasso as a tool
• Lasso supports simple operations on the bit-decompositions of field
elements, without requiring P to commit to the individual bits.
• The sub-tables have quickly-evaluable multilinear extensions if each corresponds to a simple function of (the bits of) the table indices.
• This ensures no honest party has to commit to them in pre-processing.
• Can compute, say, bitwiseAND of two field elements in {0, 1, …, 2^64-1}
with lower P costs than, say, Plonk incurs per addition or multiplication
gate.
• Remember: Lookup arguments are all about economies of scale. They
only make sense to use if doing many lookups into one table (i.e.,
computing many invocations of the same function).
Viewing indexed lookup
arguments as SNARKs for
repeated function evaluation
SNARKs for repeated function evaluation
• Many previous works have studied SNARKs for repeated function
evaluation.
• Computing the same function f on many different inputs x_1, …, x_m.
• They consider a “polynomial” amount of data parallelism.
• If 𝑓 takes inputs of length n, the number of different
inputs is 𝑚 = poly 𝑛 .
• They still force P to evaluate 𝑓 in a very specific way.
• Executing a specific circuit to compute 𝑓.
Zooming out: a new view on lookup arguments
• View a lookup table as storing all evaluations of a function 𝑓.
• A lookup argument is then a SNARK for highly repeated evaluation
of 𝑓.
• It lets P prove that a committed vector ((a_1, f(a_1)), …, (a_m, f(a_m))) consists of correct evaluations of f at different inputs a_1, …, a_m.
• Due to the O(c(m + N^{1/c})) cost for P, Lasso is effective only if the number of lookups m is not too much smaller than the table size N.
• i.e., the number of copies of f should be exponential in the input size to f.
High-level message of this viewpoint
• Lasso is useful wherever the same function is evaluated many times.
• zkVMs are only one such example.
• By definition, the VM abstraction represents the computation as repeated application
of primitive instructions.
• But implementing a VM abstraction comes with substantial performance costs in
general.
• Interesting direction for future work:
• Other/better ways to isolate repeated structure in computation.
• Example work (with Yinuo Zhang and Sriram Sridhar):
• Bit-slicing.
• To evaluate a hash function or block cipher like SHA/AES, naturally computed by a Boolean circuit C, on, say, 64 different inputs:
• Pack the first bit of each input into a single field element, the second bit of each input
into a single field element, and so on.
• Replace each AND gate in C with bitwiseAND, each OR gate in C with bitwiseOR, etc.
• Now each output gate of C computes (one bit of) all 64 evaluations of SHA/AES.
• Apply Lasso to this circuit.
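A toy bit-slicing sketch with a small Boolean "choose" circuit standing in for SHA/AES: the j-th input bit of each of 64 instances is packed into one 64-bit word, Boolean gates become word-wise bitwise operations, and each output word carries one bit of all 64 evaluations.

```python
import random

def toy_circuit(a, b, c):
    # A stand-in Boolean circuit (SHA/AES would be a much larger circuit of the same shape).
    return (a & b) ^ (~a & c)

def pack(bits_across_lanes):
    # Pack one wire's bit from each of the 64 instances into a single 64-bit word.
    return sum(bit << lane for lane, bit in enumerate(bits_across_lanes))

NUM_LANES, NUM_WIRES = 64, 16
inputs = [[random.getrandbits(1) for _ in range(NUM_WIRES * 3)] for _ in range(NUM_LANES)]

# Bit-sliced evaluation: one pass over the circuit computes all 64 instances at once.
packed = [pack([inputs[lane][w] for lane in range(NUM_LANES)]) for w in range(NUM_WIRES * 3)]
mask = (1 << NUM_LANES) - 1
sliced_out = [toy_circuit(packed[w], packed[NUM_WIRES + w], packed[2*NUM_WIRES + w]) & mask
              for w in range(NUM_WIRES)]

# Check lane 5 against a direct per-instance evaluation.
lane = 5
direct = [toy_circuit(inputs[lane][w], inputs[lane][NUM_WIRES + w], inputs[lane][2*NUM_WIRES + w]) & 1
          for w in range(NUM_WIRES)]
assert [(word >> lane) & 1 for word in sliced_out] == direct
```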
THANK YOU!
More Related Content

What's hot

競技プログラミングで便利な外部ツールを大量紹介
競技プログラミングで便利な外部ツールを大量紹介競技プログラミングで便利な外部ツールを大量紹介
競技プログラミングで便利な外部ツールを大量紹介xryuseix
 
Linux Networking Explained
Linux Networking ExplainedLinux Networking Explained
Linux Networking ExplainedThomas Graf
 
From Data Platforms to Dataspaces: Enabling Data Ecosystems for Intelligent S...
From Data Platforms to Dataspaces: Enabling Data Ecosystems for Intelligent S...From Data Platforms to Dataspaces: Enabling Data Ecosystems for Intelligent S...
From Data Platforms to Dataspaces: Enabling Data Ecosystems for Intelligent S...Edward Curry
 
Using ARI and AGI to Connect Asterisk Instances
Using ARI and AGI to Connect Asterisk Instances Using ARI and AGI to Connect Asterisk Instances
Using ARI and AGI to Connect Asterisk Instances Jöran Vinzens
 
KubeConEU - NATS Deep Dive
KubeConEU - NATS Deep DiveKubeConEU - NATS Deep Dive
KubeConEU - NATS Deep Divewallyqs
 
G1 collector and tuning and Cassandra
G1 collector and tuning and CassandraG1 collector and tuning and Cassandra
G1 collector and tuning and CassandraChris Lohfink
 
Plny12 galera-cluster-best-practices
Plny12 galera-cluster-best-practicesPlny12 galera-cluster-best-practices
Plny12 galera-cluster-best-practicesDimas Prasetyo
 
Lecture: Regular Expressions and Regular Languages
Lecture: Regular Expressions and Regular LanguagesLecture: Regular Expressions and Regular Languages
Lecture: Regular Expressions and Regular LanguagesMarina Santini
 
OVN operationalization at scale at eBay
OVN operationalization at scale at eBayOVN operationalization at scale at eBay
OVN operationalization at scale at eBayAliasgar Ginwala
 
strassen matrix multiplication algorithm
strassen matrix multiplication algorithmstrassen matrix multiplication algorithm
strassen matrix multiplication algorithmevil eye
 
Simplification of cfg ppt
Simplification of cfg pptSimplification of cfg ppt
Simplification of cfg pptShiela Rani
 
inversion counting
inversion countinginversion counting
inversion countingtmaehara
 
Using VPP and SRIO-V with Clear Containers
Using VPP and SRIO-V with Clear ContainersUsing VPP and SRIO-V with Clear Containers
Using VPP and SRIO-V with Clear ContainersMichelle Holley
 

What's hot (20)

競技プログラミングで便利な外部ツールを大量紹介
競技プログラミングで便利な外部ツールを大量紹介競技プログラミングで便利な外部ツールを大量紹介
競技プログラミングで便利な外部ツールを大量紹介
 
Linux Networking Explained
Linux Networking ExplainedLinux Networking Explained
Linux Networking Explained
 
From Data Platforms to Dataspaces: Enabling Data Ecosystems for Intelligent S...
From Data Platforms to Dataspaces: Enabling Data Ecosystems for Intelligent S...From Data Platforms to Dataspaces: Enabling Data Ecosystems for Intelligent S...
From Data Platforms to Dataspaces: Enabling Data Ecosystems for Intelligent S...
 
Using ARI and AGI to Connect Asterisk Instances
Using ARI and AGI to Connect Asterisk Instances Using ARI and AGI to Connect Asterisk Instances
Using ARI and AGI to Connect Asterisk Instances
 
KubeConEU - NATS Deep Dive
KubeConEU - NATS Deep DiveKubeConEU - NATS Deep Dive
KubeConEU - NATS Deep Dive
 
G1 collector and tuning and Cassandra
G1 collector and tuning and CassandraG1 collector and tuning and Cassandra
G1 collector and tuning and Cassandra
 
動的計画法
動的計画法動的計画法
動的計画法
 
Plny12 galera-cluster-best-practices
Plny12 galera-cluster-best-practicesPlny12 galera-cluster-best-practices
Plny12 galera-cluster-best-practices
 
Astricon 10 (October 2013) - SIP over WebSocket on Kamailio
Astricon 10 (October 2013) - SIP over WebSocket on KamailioAstricon 10 (October 2013) - SIP over WebSocket on Kamailio
Astricon 10 (October 2013) - SIP over WebSocket on Kamailio
 
Pda to cfg h2
Pda to cfg h2Pda to cfg h2
Pda to cfg h2
 
Chomsky Normal Form
Chomsky Normal FormChomsky Normal Form
Chomsky Normal Form
 
TOC 6 | CFG Design
TOC 6 | CFG DesignTOC 6 | CFG Design
TOC 6 | CFG Design
 
Lecture: Regular Expressions and Regular Languages
Lecture: Regular Expressions and Regular LanguagesLecture: Regular Expressions and Regular Languages
Lecture: Regular Expressions and Regular Languages
 
OVN operationalization at scale at eBay
OVN operationalization at scale at eBayOVN operationalization at scale at eBay
OVN operationalization at scale at eBay
 
strassen matrix multiplication algorithm
strassen matrix multiplication algorithmstrassen matrix multiplication algorithm
strassen matrix multiplication algorithm
 
Simplification of cfg ppt
Simplification of cfg pptSimplification of cfg ppt
Simplification of cfg ppt
 
暗認本読書会6
暗認本読書会6暗認本読書会6
暗認本読書会6
 
inversion counting
inversion countinginversion counting
inversion counting
 
JOIss2014
JOIss2014JOIss2014
JOIss2014
 
Using VPP and SRIO-V with Clear Containers
Using VPP and SRIO-V with Clear ContainersUsing VPP and SRIO-V with Clear Containers
Using VPP and SRIO-V with Clear Containers
 

Similar to zkStudyClub - Lasso/Jolt (Justin Thaler, GWU/a16z)

splaytree-171227043127.pptx NNNNNNNNNNNNNNNNNNNNNNN
splaytree-171227043127.pptx NNNNNNNNNNNNNNNNNNNNNNNsplaytree-171227043127.pptx NNNNNNNNNNNNNNNNNNNNNNN
splaytree-171227043127.pptx NNNNNNNNNNNNNNNNNNNNNNNratnapatil14
 
Python Programming and GIS
Python Programming and GISPython Programming and GIS
Python Programming and GISJohn Reiser
 
MongoDB's New Aggregation framework
MongoDB's New Aggregation frameworkMongoDB's New Aggregation framework
MongoDB's New Aggregation frameworkChris Westin
 
Lecture 12 Bottom-UP Parsing.pptx
Lecture 12 Bottom-UP Parsing.pptxLecture 12 Bottom-UP Parsing.pptx
Lecture 12 Bottom-UP Parsing.pptxYusra11491
 
Chapter 4.pptx
Chapter 4.pptxChapter 4.pptx
Chapter 4.pptxTekle12
 
04-Data-Analysis-Overview.pptx
04-Data-Analysis-Overview.pptx04-Data-Analysis-Overview.pptx
04-Data-Analysis-Overview.pptxShree Shree
 
mongodb-aggregation-may-2012
mongodb-aggregation-may-2012mongodb-aggregation-may-2012
mongodb-aggregation-may-2012Chris Westin
 
SFDC Introduction to Apex
SFDC Introduction to ApexSFDC Introduction to Apex
SFDC Introduction to ApexSujit Kumar
 
A New Paradigm for Alignment Extraction
A New Paradigm for Alignment ExtractionA New Paradigm for Alignment Extraction
A New Paradigm for Alignment Extractioncmeilicke
 
DIG1108C Lesson 5 Fall 2014
DIG1108C Lesson 5 Fall 2014DIG1108C Lesson 5 Fall 2014
DIG1108C Lesson 5 Fall 2014David Wolfpaw
 
Tajolabigdatacamp2014 140618135810-phpapp01 hyunsik-choi
Tajolabigdatacamp2014 140618135810-phpapp01 hyunsik-choiTajolabigdatacamp2014 140618135810-phpapp01 hyunsik-choi
Tajolabigdatacamp2014 140618135810-phpapp01 hyunsik-choiData Con LA
 

Similar to zkStudyClub - Lasso/Jolt (Justin Thaler, GWU/a16z) (20)

Query processing System
Query processing SystemQuery processing System
Query processing System
 
Splay tree
Splay treeSplay tree
Splay tree
 
splaytree-171227043127.pptx NNNNNNNNNNNNNNNNNNNNNNN
splaytree-171227043127.pptx NNNNNNNNNNNNNNNNNNNNNNNsplaytree-171227043127.pptx NNNNNNNNNNNNNNNNNNNNNNN
splaytree-171227043127.pptx NNNNNNNNNNNNNNNNNNNNNNN
 
Chap11 slides
Chap11 slidesChap11 slides
Chap11 slides
 
Python Tutorial Part 1
Python Tutorial Part 1Python Tutorial Part 1
Python Tutorial Part 1
 
The PostgreSQL Query Planner
The PostgreSQL Query PlannerThe PostgreSQL Query Planner
The PostgreSQL Query Planner
 
Python Programming and GIS
Python Programming and GISPython Programming and GIS
Python Programming and GIS
 
e_lumley.pdf
e_lumley.pdfe_lumley.pdf
e_lumley.pdf
 
MongoDB's New Aggregation framework
MongoDB's New Aggregation frameworkMongoDB's New Aggregation framework
MongoDB's New Aggregation framework
 
Should i Go there
Should i Go thereShould i Go there
Should i Go there
 
Searching Algorithms
Searching AlgorithmsSearching Algorithms
Searching Algorithms
 
PHP - Introduction to PHP
PHP -  Introduction to PHPPHP -  Introduction to PHP
PHP - Introduction to PHP
 
Lecture 12 Bottom-UP Parsing.pptx
Lecture 12 Bottom-UP Parsing.pptxLecture 12 Bottom-UP Parsing.pptx
Lecture 12 Bottom-UP Parsing.pptx
 
Chapter 4.pptx
Chapter 4.pptxChapter 4.pptx
Chapter 4.pptx
 
04-Data-Analysis-Overview.pptx
04-Data-Analysis-Overview.pptx04-Data-Analysis-Overview.pptx
04-Data-Analysis-Overview.pptx
 
mongodb-aggregation-may-2012
mongodb-aggregation-may-2012mongodb-aggregation-may-2012
mongodb-aggregation-may-2012
 
SFDC Introduction to Apex
SFDC Introduction to ApexSFDC Introduction to Apex
SFDC Introduction to Apex
 
A New Paradigm for Alignment Extraction
A New Paradigm for Alignment ExtractionA New Paradigm for Alignment Extraction
A New Paradigm for Alignment Extraction
 
DIG1108C Lesson 5 Fall 2014
DIG1108C Lesson 5 Fall 2014DIG1108C Lesson 5 Fall 2014
DIG1108C Lesson 5 Fall 2014
 
Tajolabigdatacamp2014 140618135810-phpapp01 hyunsik-choi
Tajolabigdatacamp2014 140618135810-phpapp01 hyunsik-choiTajolabigdatacamp2014 140618135810-phpapp01 hyunsik-choi
Tajolabigdatacamp2014 140618135810-phpapp01 hyunsik-choi
 

More from Alex Pruden

zkStudyClub - zkSaaS (Sruthi Sekar, UCB)
zkStudyClub - zkSaaS (Sruthi Sekar, UCB)zkStudyClub - zkSaaS (Sruthi Sekar, UCB)
zkStudyClub - zkSaaS (Sruthi Sekar, UCB)Alex Pruden
 
zkStudyClub - ProtoStar (Binyi Chen & Benedikt Bünz, Espresso Systems)
zkStudyClub - ProtoStar (Binyi Chen & Benedikt Bünz, Espresso Systems)zkStudyClub - ProtoStar (Binyi Chen & Benedikt Bünz, Espresso Systems)
zkStudyClub - ProtoStar (Binyi Chen & Benedikt Bünz, Espresso Systems)Alex Pruden
 
zkStudyClub - cqlin: Efficient linear operations on KZG commitments
zkStudyClub - cqlin: Efficient linear operations on KZG commitments zkStudyClub - cqlin: Efficient linear operations on KZG commitments
zkStudyClub - cqlin: Efficient linear operations on KZG commitments Alex Pruden
 
Eos - Efficient Private Delegation of zkSNARK provers
Eos  - Efficient Private Delegation of zkSNARK proversEos  - Efficient Private Delegation of zkSNARK provers
Eos - Efficient Private Delegation of zkSNARK proversAlex Pruden
 
zkStudyClub: HyperPlonk (Binyi Chen, Benedikt Bünz)
zkStudyClub: HyperPlonk (Binyi Chen, Benedikt Bünz)zkStudyClub: HyperPlonk (Binyi Chen, Benedikt Bünz)
zkStudyClub: HyperPlonk (Binyi Chen, Benedikt Bünz)Alex Pruden
 
Caulk: zkStudyClub: Caulk - Lookup Arguments in Sublinear Time (A. Zapico)
Caulk: zkStudyClub: Caulk - Lookup Arguments in Sublinear Time (A. Zapico)Caulk: zkStudyClub: Caulk - Lookup Arguments in Sublinear Time (A. Zapico)
Caulk: zkStudyClub: Caulk - Lookup Arguments in Sublinear Time (A. Zapico)Alex Pruden
 
zkStudyClub: Zero-Knowledge Proofs Security, in Practice [JP Aumasson, Taurus]
zkStudyClub: Zero-Knowledge Proofs Security, in Practice [JP Aumasson, Taurus]zkStudyClub: Zero-Knowledge Proofs Security, in Practice [JP Aumasson, Taurus]
zkStudyClub: Zero-Knowledge Proofs Security, in Practice [JP Aumasson, Taurus]Alex Pruden
 
zkStudy Club: Subquadratic SNARGs in the Random Oracle Model
zkStudy Club: Subquadratic SNARGs in the Random Oracle ModelzkStudy Club: Subquadratic SNARGs in the Random Oracle Model
zkStudy Club: Subquadratic SNARGs in the Random Oracle ModelAlex Pruden
 
ZK Study Club: Sumcheck Arguments and Their Applications
ZK Study Club: Sumcheck Arguments and Their ApplicationsZK Study Club: Sumcheck Arguments and Their Applications
ZK Study Club: Sumcheck Arguments and Their ApplicationsAlex Pruden
 
Ecfft zk studyclub 9.9
Ecfft zk studyclub 9.9Ecfft zk studyclub 9.9
Ecfft zk studyclub 9.9Alex Pruden
 
Quarks zk study-club
Quarks zk study-clubQuarks zk study-club
Quarks zk study-clubAlex Pruden
 
zkStudyClub: CirC and Compiling Programs to Circuits
zkStudyClub: CirC and Compiling Programs to CircuitszkStudyClub: CirC and Compiling Programs to Circuits
zkStudyClub: CirC and Compiling Programs to CircuitsAlex Pruden
 

More from Alex Pruden (12)

zkStudyClub - zkSaaS (Sruthi Sekar, UCB)
zkStudyClub - zkSaaS (Sruthi Sekar, UCB)zkStudyClub - zkSaaS (Sruthi Sekar, UCB)
zkStudyClub - zkSaaS (Sruthi Sekar, UCB)
 
zkStudyClub - ProtoStar (Binyi Chen & Benedikt Bünz, Espresso Systems)
zkStudyClub - ProtoStar (Binyi Chen & Benedikt Bünz, Espresso Systems)zkStudyClub - ProtoStar (Binyi Chen & Benedikt Bünz, Espresso Systems)
zkStudyClub - ProtoStar (Binyi Chen & Benedikt Bünz, Espresso Systems)
 
zkStudyClub - cqlin: Efficient linear operations on KZG commitments
zkStudyClub - cqlin: Efficient linear operations on KZG commitments zkStudyClub - cqlin: Efficient linear operations on KZG commitments
zkStudyClub - cqlin: Efficient linear operations on KZG commitments
 
Eos - Efficient Private Delegation of zkSNARK provers
Eos  - Efficient Private Delegation of zkSNARK proversEos  - Efficient Private Delegation of zkSNARK provers
Eos - Efficient Private Delegation of zkSNARK provers
 
zkStudyClub: HyperPlonk (Binyi Chen, Benedikt Bünz)
zkStudyClub: HyperPlonk (Binyi Chen, Benedikt Bünz)zkStudyClub: HyperPlonk (Binyi Chen, Benedikt Bünz)
zkStudyClub: HyperPlonk (Binyi Chen, Benedikt Bünz)
 
Caulk: zkStudyClub: Caulk - Lookup Arguments in Sublinear Time (A. Zapico)
Caulk: zkStudyClub: Caulk - Lookup Arguments in Sublinear Time (A. Zapico)Caulk: zkStudyClub: Caulk - Lookup Arguments in Sublinear Time (A. Zapico)
Caulk: zkStudyClub: Caulk - Lookup Arguments in Sublinear Time (A. Zapico)
 
zkStudyClub: Zero-Knowledge Proofs Security, in Practice [JP Aumasson, Taurus]
zkStudyClub: Zero-Knowledge Proofs Security, in Practice [JP Aumasson, Taurus]zkStudyClub: Zero-Knowledge Proofs Security, in Practice [JP Aumasson, Taurus]
zkStudyClub: Zero-Knowledge Proofs Security, in Practice [JP Aumasson, Taurus]
 
zkStudy Club: Subquadratic SNARGs in the Random Oracle Model
zkStudy Club: Subquadratic SNARGs in the Random Oracle ModelzkStudy Club: Subquadratic SNARGs in the Random Oracle Model
zkStudy Club: Subquadratic SNARGs in the Random Oracle Model
 
ZK Study Club: Sumcheck Arguments and Their Applications
ZK Study Club: Sumcheck Arguments and Their ApplicationsZK Study Club: Sumcheck Arguments and Their Applications
ZK Study Club: Sumcheck Arguments and Their Applications
 
Ecfft zk studyclub 9.9
Ecfft zk studyclub 9.9Ecfft zk studyclub 9.9
Ecfft zk studyclub 9.9
 
Quarks zk study-club
Quarks zk study-clubQuarks zk study-club
Quarks zk study-club
 
zkStudyClub: CirC and Compiling Programs to Circuits
zkStudyClub: CirC and Compiling Programs to CircuitszkStudyClub: CirC and Compiling Programs to Circuits
zkStudyClub: CirC and Compiling Programs to Circuits
 

Recently uploaded

Install Stable Diffusion in windows machine
Install Stable Diffusion in windows machineInstall Stable Diffusion in windows machine
Install Stable Diffusion in windows machinePadma Pradeep
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesSinan KOZAK
 
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptxMaking_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptxnull - The Open Security Community
 
Snow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter RoadsSnow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter RoadsHyundai Motor Group
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubKalema Edgar
 
Artificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraArtificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraDeakin University
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Alan Dix
 
Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Enterprise Knowledge
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Scott Keck-Warren
 
Pigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions
 
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr LapshynFwdays
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsMemoori
 
Build your next Gen AI Breakthrough - April 2024
Build your next Gen AI Breakthrough - April 2024Build your next Gen AI Breakthrough - April 2024
Build your next Gen AI Breakthrough - April 2024Neo4j
 
costume and set research powerpoint presentation
costume and set research powerpoint presentationcostume and set research powerpoint presentation
costume and set research powerpoint presentationphoebematthew05
 

Recently uploaded (20)

Install Stable Diffusion in windows machine
Install Stable Diffusion in windows machineInstall Stable Diffusion in windows machine
Install Stable Diffusion in windows machine
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen Frames
 
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptxMaking_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
 
Snow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter RoadsSnow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter Roads
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding Club
 
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptxE-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
 
Hot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort Service
Hot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort ServiceHot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort Service
Hot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort Service
 
Artificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraArtificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning era
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
 
DMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special EditionDMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special Edition
 
The transition to renewables in India.pdf
The transition to renewables in India.pdfThe transition to renewables in India.pdf
The transition to renewables in India.pdf
 
Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping Elbows
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024
 
Pigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food Manufacturing
 
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial Buildings
 
Vulnerability_Management_GRC_by Sohang Sengupta.pptx
Vulnerability_Management_GRC_by Sohang Sengupta.pptxVulnerability_Management_GRC_by Sohang Sengupta.pptx
Vulnerability_Management_GRC_by Sohang Sengupta.pptx
 
Build your next Gen AI Breakthrough - April 2024
Build your next Gen AI Breakthrough - April 2024Build your next Gen AI Breakthrough - April 2024
Build your next Gen AI Breakthrough - April 2024
 
costume and set research powerpoint presentation
costume and set research powerpoint presentationcostume and set research powerpoint presentation
costume and set research powerpoint presentation
 

zkStudyClub - Lasso/Jolt (Justin Thaler, GWU/a16z)

  • 1. Justin Thaler Georgetown University and a16z crypto research Joint work with: Srinath Setty (Microsoft Research), Riad Wahby (CMU), Arasu Arun (NYU), Sam Ragsdale (a16z), Michael Zhu (a16z) Lasso + Jolt: A Deep Dive
  • 2. Presentation Outline • What are lookup arguments? • What are Lasso/Jolt? • Lasso in detail. • Jolt in detail. • How to think about Lasso as a tool. • And where else will lookup arguments be useful outside of zkVMs?
  • 3. Lookup arguments: what are they? • Unindexed lookup argument: • Lets P commit to a vector 𝑎 ∈ 𝑭!, and prove that every entry of 𝑎 resides in a pre-determined table 𝑡 ∈ 𝑭". • For every entry 𝑎# there is an index 𝑏# such that 𝑎# = 𝑡 𝑏# . • Indexed lookup argument: • Lets P commit to vectors 𝑎, 𝑏 ∈ 𝑭!, and prove that 𝑎# = 𝑡 𝑏# for all 𝑖. • We call 𝑎 the vector of lookup values and 𝑏 the indices.
  • 4. Lookup arguments: what are they? • Unindexed lookup argument: • Lets P commit to a vector 𝑎 ∈ 𝑭!, and prove that every entry of 𝑎 resides in a pre-determined table 𝑡 ∈ 𝑭". • For every entry 𝑎# there is an index 𝑏# such that 𝑎# = 𝑡 𝑏# . • Indexed lookup argument: • Lets P commit to vectors 𝑎, 𝑏 ∈ 𝑭!, and prove that 𝑎# = 𝑡 𝑏# for all 𝑖. • We call 𝑎 the vector of lookup values and 𝑏 the indices. • Unindexed lookups are proofs of a subset relationship (i.e., batch set-membership proofs). • 𝑎 specifies a subset of 𝑡. • Indexed lookups are reads into a read-only memory. • 𝑡 is the memory, and 𝑎# = 𝑡 𝑏# is a read of memory cell 𝑏#.
  • 5. Lasso+Jolt: what are they? • Lasso: new family of (indexed) lookup arguments. • P is an order of magnitude faster than in prior works. • Addresses key bottleneck for P: commitment costs. • P commits to fewer field elements, and all of them are small. • No commitment to 𝑡 needed for many tables. • Support for gigantic tables (decomposable, or LDE-structured). • P commitment costs: 𝑂(𝑐(𝑚 + 𝑁$/&)) field elements. • Jolt: new zkVM technique. • Much lower commitment costs for P than prior works. • Primitive instructions are implemented via one lookup into the entire evaluation table of the instruction.
  • 6. Lasso+Jolt: what are they? • Lasso: new family of (indexed) lookup arguments. • P is an order of magnitude faster than in prior works. • Addresses key bottleneck for P: commitment costs. • P commits to fewer field elements, and all of them are small. • No commitment to 𝑡 needed for many tables. • Support for gigantic tables (decomposable, or LDE-structured). • P commitment costs: 𝑂(𝑐(𝑚 + 𝑁$/&)) field elements. • Jolt: new zkVM technique. • Much lower commitment costs for P than prior works. • Primitive instructions are implemented via one lookup into the entire evaluation table of the instruction.
  • 7. Lasso+Jolt: what are they? • Lasso: new family of (indexed) lookup arguments. • P is an order of magnitude faster than in prior works. • Addresses key bottleneck for P: commitment costs. • P commits to fewer field elements, and all of them are small. • No commitment to 𝑡 needed for many tables. • Support for gigantic tables (decomposable, or LDE-structured). • P commitment costs: 𝑂(𝑐(𝑚 + 𝑁$/&)) field elements. • Jolt: new zkVM technique. • Much lower commitment costs for P than prior works. • Primitive instructions are implemented via one lookup into the entire evaluation table of the instruction.
  • 9. Lasso costs in detail • For 𝑚 indexed lookups into a table of size 𝑁, using parameter 𝑐: • P commits to 3𝑐𝑚 + 𝑐𝑁!/# field elements. • All of them are small, say, in the set {0, 1, … , 𝑚}. • With MSM-based polynomial commitment schemes, P does (roughly) just one group operation per (small) committed field element. • Examples: KZG-based, IPA/Bulletproofs, Hyrax, Dory, etc. • 𝑐=1 is a special case. • P commits to only 𝑚+𝑁 field elements. • Even amongst these 𝑚+𝑁, many are 0. • Hence “free” to commit to with MSM-based schemes. • Specifically, at most 2𝑚 are non-zero. • If every read is of a different table cell, 𝑚 of the field elements are equal to 1, and the rest are 0s. • V costs: • 𝑂(log 𝑚) field ops and hash evaluations (from Fiat-Shamir). • Plus one evaluation proof for a committed polynomial of size 𝑁!/#. • Low enough V costs to reduce further via composition/recursion.
  • 10. Lasso costs in detail • For 𝑚 indexed lookups into a table of size 𝑁, using parameter 𝑐: • P commits to 3𝑐𝑚 + 𝑐𝑁!/# field elements. • All of them are small, say, in the set {0, 1, … , 𝑚}. • With MSM-based polynomial commitment schemes, P does (roughly) just one group operation per (small) committed field element. • Examples: KZG-based, IPA/Bulletproofs, Hyrax, Dory, etc. • 𝑐=1 is a special case. I call it “Basic-Lasso”. • P commits to only 𝑚+𝑁 field elements. • Even amongst these 𝑚+𝑁, many are 0. • Hence “free” to commit to with MSM-based schemes. • Specifically, at most 2𝑚 are non-zero. • If every read is of a different table cell, 𝑚 of the field elements are equal to 1, and the rest are 0s. • V costs: • 𝑂(log 𝑚) field ops and hash evaluations (from Fiat-Shamir). • Plus one evaluation proof for a committed polynomial of size 𝑁!/#. • Low enough V costs to reduce further via composition/recursion.
  • 11. Lasso costs in detail • For 𝑚 indexed lookups into a table of size 𝑁, using parameter 𝑐: • P commits to 3𝑐𝑚 + 𝑐𝑁!/# field elements. • All of them are small, say, in the set {0, 1, … , 𝑚}. • With MSM-based polynomial commitment schemes, P does (roughly) just one group operation per (small) committed field element. • Examples: KZG-based, IPA/Bulletproofs, Hyrax, Dory, etc. • 𝑐=1 is a special case. I call it “Basic-Lasso”. • P commits to only 𝑚+𝑁 field elements. • Even amongst these 𝑚+𝑁, many are 0. • Hence “free” to commit to with MSM-based schemes. • Specifically, at most 2𝑚 are non-zero. • If every read is of a different table cell, 𝑚 of the field elements are equal to 1, and the rest are 0s. • V costs: • 𝑂(log 𝑚) field ops and hash evaluations (from Fiat-Shamir). • Plus one evaluation proof for a committed polynomial of size 𝑁!/#. • Low enough V costs to reduce further via composition/recursion.
  • 12. Lasso applied to huge tables: 𝑐>1 • Most big lookup tables arising in practice are decomposable. • Can answer an (indexed) lookup into the big table of size 𝑁 by performing roughly 𝑐 lookups into tables of size 𝑁$/& and “collating” the results. • Lasso handles the collation with the sum-check protocol. • No extra commitment costs for P. • Can view Lasso with 𝑐>1 as a generic reduction from lookups into big, decomposable tables to lookups into small tables. • Can use any lookup argument for the small tables, not just Lasso with 𝑐 =1. • Major caveat: the small-table lookup argument must be indexed. • There are known transformations from unindexed lookup arguments to indexed ones. • But they either do not preserve “smallness” of table entries or do not preserve decomposability of the big table!
  • 13. Lasso applied to huge tables: 𝑐>1 • Most big lookup tables arising in practice are decomposable. • Can answer an (indexed) lookup into the big table of size 𝑁 by performing roughly 𝑐 lookups into tables of size 𝑁$/& and “collating” the results. • Lasso handles the collation with the sum-check protocol. • No extra commitment costs for P. • Can view Lasso with 𝑐>1 as a generic reduction from lookups into big, decomposable tables to lookups into small tables. • Can use any lookup argument for the small tables. • Lasso uses Basic-Lasso on the small tables. • Major caveat: the small-table lookup argument must be indexed. • There are known transformations from unindexed lookup arguments to indexed ones. • But they either do not preserve “smallness” of table entries or do not preserve decomposability of the big table!
  • 14. Lasso applied to huge tables: 𝑐>1 • Most big lookup tables arising in practice are decomposable. • Can answer an (indexed) lookup into the big table of size 𝑁 by performing roughly 𝑐 lookups into tables of size 𝑁$/& and “collating” the results. • Lasso handles the collation with the sum-check protocol. • No extra commitment costs for P. • Can view Lasso with 𝑐>1 as a generic reduction from lookups into big, decomposable tables to lookups into small tables. • Can use any lookup argument for the small tables. • Lasso uses Basic-Lasso on the small tables. • Major caveat: the small-table lookup argument must be indexed. • There are known transformations from unindexed lookup arguments to indexed ones. • But they either do not preserve “smallness” of table entries or do not preserve decomposability of the big table. • Because they “pack” indices and values together into a single field element.
  • 15. Background: Grand Product Arguments • All known lookup arguments use something called a grand product argument. • A SNARK for proving the product of 𝑛 committed values. • Popular grand product arguments today have P commit to 𝑛 extra values (partial products). • This is unnecessary. • T13: gave an optimized variant of the GKR protocol (sum-check-based interactive proof for circuit evaluation). • No commitment costs for P. • P does linear number of field operations. • Proof size/V time is 𝑂 log 𝑛 $ field ops (and hash evaluations from Fiat-Shamir). • Much less than FRI, concretely and asymptotically. • [Lee, Setty 2019] reduce V costs to about 𝑂 log(𝑛) with slight increase in commitment costs for P.
  • 16. Key Performance Insight in Basic-Lasso • For many existing lookup arguments, if you swap out the invoked grand product argument for T13, P commits only to small field elements. • See upcoming work on LogUp by Papini and Haböck. • More involved than just a simple swap of the grand product argument. • Remember: Jolt needs an indexed lookup argument that plays nicely with collating small-table lookup results into big-table results. • See my second a16z talk for details on how Basic-Lasso works.
  • 17. Key Performance Insight in Basic-Lasso • For many existing lookup arguments, if you swap out the invoked grand product argument for T13, P commits only to small field elements. • See upcoming work on LogUp by Papini and Haböck. • More involved than just a simple swap of a grand product argument. • Remember: Lasso/Jolt need an indexed lookup argument that plays nicely with collating small-table lookup results into big-table results. • Technical takeaway: The community has still not fully internalized the power of sum-check to avoid commitment costs for P.
  • 18. Key Performance Insight in Basic-Lasso • For many existing lookup arguments, if you swap out the invoked grand product argument for T13, P commits only to small field elements. • See upcoming work on LogUp by Papini and Haböck. • More involved than just a simple swap of a grand product argument. • Remember: Lasso/Jolt need an indexed lookup argument that plays nicely with collating small-table lookup results into big-table results. • Technical takeaway: The community has still not fully internalized the power of sum-check to avoid commitment costs for P. • See my second a16z talk for details on how Basic-Lasso works. • Last part of this talk: more info about how to think of Lasso as a tool.
  • 20. Front-ends today for VM execution • Say P claims to have run a computer program for 𝑚 steps. • Say the program is written in the assembly language for a VM. • Popular VM’s targeted: RISC-V, Ethereum Virtual Machine (EVM) • Today, front-ends produce a circuit that, for each step of the computation: 1. Figures out what instruction to execute at that step. 2. Executes that instruction. Lasso lets one replace Step 2 with a single lookup. For each instruction, the table stores the entire evaluation table of the function. If instruction 𝑓 operations on two 64-bit inputs, the table stores 𝑓(𝑥, 𝑦) for every pair of 64-bit inputs 𝑥, 𝑦 . This table has size 2()*. All RISC-V instructions are decomposable.
  • 21. Jolt: A new front-end paradigm • Say P claims to have run a computer program for 𝑚 steps. • Say the program is written in the assembly language for a VM. • Popular VM’s targeted: RISC-V, Ethereum Virtual Machine (EVM) • Today, front-ends produce a circuit that, for each step of the computation: 1. Figures out what instruction to execute at that step. 2. Executes that instruction. • Lasso lets one replace Step 2 with a single lookup. • For each instruction, the table stores the entire evaluation table of the instruction. If instruction 𝑓 operations on two 64-bit inputs, the table stores 𝑓(𝑥, 𝑦) for every pair of 64-bit inputs 𝑥, 𝑦 . This table has size 2()*. All RISC-V instructions are decomposable.
  • 22. Jolt: A new front-end paradigm • Say P claims to have run a computer program for 𝑚 steps. • Say the program is written in the assembly language for a VM. • Popular VM’s targeted: RISC-V, Ethereum Virtual Machine (EVM) • Today, front-ends produce a circuit that, for each step of the computation: 1. Figures out what instruction to execute at that step. 2. Executes that instruction. • Lasso lets one replace Step 2 with a single lookup. • For each instruction, the table stores the entire evaluation table of the instruction. • If instruction 𝑓 operations on two 64-bit inputs, the table stores 𝑓(𝑥, 𝑦) for every pair of 64-bit inputs 𝑥, 𝑦 . • This table has size 2()*. • Jolt shows that all RISC-V instructions are decomposable.
  • 23. Jolt in a picture • (Text excerpt from the figure:) …query to be split into "chunks" which are fed into different subtables. The prover provides these chunks as advice, which are c in number for some small constant c, and hence approximately W/c or 2W/c bits long, depending on the structure of z. The constraint system must verify that the chunks correctly constitute z, but need not perform any range checks, as the Lasso algorithm itself later implicitly enforces these on the chunks.
  • 24. Jolt in context • Jolt is a realization of Barry Whitehat’s “lookup singularity” vision (?) • Auditability/Simplicity/Extensibility benefits. • Performance benefits. • A qualitatively different way of building zkVMs. • Yet with many similarities to things people are already doing. • People are already computing functions like bitwise-AND by doing several lookups into small tables and combining the results. • Differences/keys to Jolt: • The new small-table lookup argument is much faster for P. • The new small-table lookup argument is naturally indexed. • The collation technique is much faster for P. • “Free” to multiply and add results of small-table lookups. • These differences let us do almost everything in VM emulation with lookups.
  • 26. Three Examples of Jolt’s decompositions
  • 27. Example 1: Bitwise-AND • Decomposable: to compute bitwise-AND of two 64-bit inputs 𝑥, 𝑦: • Break each of 𝑥, 𝑦 into, say, 𝑐 = 8 chunks of 8 bits. • Compute the bitwise-AND of each chunk. • Concatenate the results. • i.e., output is $\sum_{i=1}^{8} 2^{8(i-1)} \cdot \text{bitwiseAND}(x_i, y_i)$. • LDE-structured: $\text{bitwiseAND}(x, y) = \sum_{i=1}^{64} 2^{i-1} \cdot x_i \cdot y_i$, where $x_i, y_i$ denote the bits of 𝑥 and 𝑦. • This is a multilinear polynomial that can be evaluated with under 200 field operations.
  • 28. Example 1: Bitwise-AND • Decomposable: to compute bitwise-AND of two 64-bit inputs 𝑥, 𝑦: • Break each of 𝑥, 𝑦 into, say, 𝑐 = 8 chunks of 8 bits. • Compute the bitwise-AND of each chunk. • Concatenate the results. • i.e., output is $\sum_{i=1}^{8} 2^{8(i-1)} \cdot \text{bitwiseAND}(x_i, y_i)$. • Avoiding an honest party committing to the sub-table: • $\text{bitwiseAND}(x_i, y_i) = \sum_{j=1}^{8} 2^{j-1} \cdot x_{i,j} \cdot y_{i,j}$, where $x_{i,j}, y_{i,j}$ denote the bits of the chunks $x_i, y_i$. • This is a multilinear polynomial that can be evaluated with under 25 field operations. • The only information the Lasso V needs about the sub-table is one evaluation of this polynomial.
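Here is a minimal Python sketch of the decomposition and of the sub-table polynomial above (my own illustration; the prime `P` and the function names are assumptions made for the sketch, not Lasso's actual field or API):

```python
# Toy illustration: chunked bitwise-AND of 64-bit inputs, and the multilinear
# polynomial of the 8-bit AND sub-table, over an illustrative prime field.
P = 2**61 - 1   # illustrative prime modulus (an assumption, not the field Lasso uses)

def and_via_chunks(x: int, y: int, c: int = 8, w: int = 8) -> int:
    """Compute 64-bit bitwise-AND by ANDing c chunks of w bits and concatenating."""
    out = 0
    for i in range(c):
        xi = (x >> (w * i)) & ((1 << w) - 1)   # i-th chunk of x (chunk 0 = least significant)
        yi = (y >> (w * i)) & ((1 << w) - 1)   # i-th chunk of y
        out += (xi & yi) << (w * i)            # 2^{w*i} * bitwiseAND(x_i, y_i)
    return out

def and_subtable_poly(x_bits, y_bits):
    """Evaluate sum_j 2^j * x_j * y_j mod P (bits 0-indexed), at arbitrary field points.
    On Boolean inputs this equals the bitwise-AND of the two w-bit chunks."""
    acc = 0
    for j, (xj, yj) in enumerate(zip(x_bits, y_bits)):
        acc = (acc + pow(2, j, P) * xj * yj) % P
    return acc

x, y = 0xDEADBEEFCAFEF00D, 0x0123456789ABCDEF
assert and_via_chunks(x, y) == x & y

bits = lambda v, w=8: [(v >> j) & 1 for j in range(w)]
assert and_subtable_poly(bits(0xA5), bits(0x3C)) == (0xA5 & 0x3C)
```

On Boolean points this polynomial agrees with the sub-table entries and it is multilinear, so it is the sub-table's multilinear extension; the verifier only ever needs its value at a single point, which is why no honest party has to commit to the sub-table.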
  • 29. Example 2: RISC-V Addition • For adding two 64-bit numbers 𝑥, 𝑦, RISC-V prescribes that they be added and any “overflow bit” be ignored. • Jolt computes 𝑧 = 𝑥 + 𝑦 in the finite field (via one constraint added to the ancillary R1CS), and then uses lookups to identify the overflow bit, if any, and adjust the result accordingly.
  • 30. Example 2: RISC-V Addition • For adding two 64-bit numbers 𝑥, 𝑦, RISC-V prescribes that they be added and any "overflow bit" be ignored. • Jolt computes 𝑧 = 𝑥 + 𝑦 in the finite field (via one constraint added to the ancillary R1CS), and then uses lookups to identify the overflow bit, if any, and adjust the result accordingly. • P commits to the "limb decomposition" $(b_1, \dots, b_c)$ of the field element $z = x + y$. • Let $M = 2^{64/c}$ denote the max value any limb should take. • A constraint is added to the R1CS to confirm $z = \sum_{i=1}^{c} M^{i-1} \cdot b_i$, and each $b_i$ is range checked via a lookup into the subtable that stores $\{0, \dots, M-1\}$. • These checks guarantee that $(b_1, \dots, b_c)$ is really the prescribed limb decomposition of $z$. • To identify the overflow bit, one can do a lookup at index $b_c$ into a table whose $i$-th entry spits out the relevant high-order bit of $i$.
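A toy Python sketch of this limb bookkeeping (my own illustration, with assumptions: $c = 4$ limbs, the top limb is allowed to carry the overflow bit, and ordinary Python integers stand in for field elements since $x + y$ never wraps in a 256-bit field):

```python
# Toy illustration of 64-bit ADD via limb decomposition plus an overflow lookup.
# This mirrors the bookkeeping described above, not Jolt's actual R1CS.
C = 4
W = 64 // C          # bits per limb
M = 1 << W           # limb radix; limbs other than the top one lie in {0, ..., M-1}

def add_with_overflow_lookup(x: int, y: int):
    z = x + y                                    # "field" addition: no wraparound here
    # Prover-supplied limb decomposition of z (the top limb may carry the overflow bit).
    limbs = [(z >> (W * i)) & (M - 1) for i in range(C - 1)]
    limbs.append(z >> (W * (C - 1)))             # top limb, in {0, ..., 2M - 1}
    # R1CS-style consistency check: the limbs really recompose to z.
    assert z == sum(b * (M ** i) for i, b in enumerate(limbs))
    # The range checks would be lookups into {0, ..., M-1}; a lookup on the top limb
    # also reveals its high-order bit, which is exactly the 64-bit overflow bit.
    overflow = limbs[-1] >> W
    result = z - (overflow << 64)                # RISC-V ADD drops the overflow bit
    return result, overflow

r, o = add_with_overflow_lookup(2**64 - 1, 5)
assert r == (2**64 - 1 + 5) % 2**64 and o == 1
```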
  • 31. Example 3: LESS THAN UNSIGNED • LESS-THAN • Decomposable: to compute LESS-THAN of two 64-bit inputs 𝑥, 𝑦: • Break each of 𝑥, 𝑦 into, say, 𝑐 = 8 chunks of 8 bits. • Compute LESS-THAN (LT) and EQUALITY (EQ) on each chunk. • Output is: $\sum_{i=1}^{8} \text{LT}(x_i, y_i) \cdot \prod_{j=i+1}^{8} \text{EQ}(x_j, y_j)$. • LDE-structured: $\text{EQ}(x_i, y_i) = \prod_{j=1}^{8} \bigl( x_{i,j} y_{i,j} + (1 - x_{i,j})(1 - y_{i,j}) \bigr)$ and $\text{LT}(x_i, y_i) = \sum_{j=1}^{8} (1 - x_{i,j})\, y_{i,j} \prod_{k=j+1}^{8} \bigl( x_{i,k} y_{i,k} + (1 - x_{i,k})(1 - y_{i,k}) \bigr)$. • Plugging the above into the output expression gives a multilinear polynomial that can be evaluated with under 200 field operations.
  • 32. Example 3: LESS THAN UNSIGNED • LESS-THAN • Decomposable: to compute LESS-THAN of two 64-bit inputs 𝑥, 𝑦: • Break each of 𝑥, 𝑦 into, say, 𝑐 = 8 chunks of 8 bits. • Compute LESS-THAN (LT) and EQUALITY (EQ) on each chunk. • Output is: $\sum_{i=1}^{8} \text{LT}(x_i, y_i) \cdot \prod_{j=i+1}^{8} \text{EQ}(x_j, y_j)$. • Avoiding commitments to the two subtables: • $\text{EQ}(x_i, y_i) = \prod_{j=1}^{8} \bigl( x_{i,j} y_{i,j} + (1 - x_{i,j})(1 - y_{i,j}) \bigr)$. • $\text{LT}(x_i, y_i) = \sum_{j=1}^{8} (1 - x_{i,j})\, y_{i,j} \prod_{k=j+1}^{8} \bigl( x_{i,k} y_{i,k} + (1 - x_{i,k})(1 - y_{i,k}) \bigr)$. • These are multilinear polynomials that can be evaluated with under 50 field operations.
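A toy Python sketch of the chunked LESS-THAN collation and of the bit-level EQ/LT formulas above (my own illustration; chunks and bits are 0-indexed here, least significant first):

```python
# Toy illustration: unsigned LESS-THAN of 64-bit values via c = 8 chunks of 8 bits,
# collated from chunk-wise LT and EQ, plus the bit-level formulas for the subtables.
C, W = 8, 8
MASK = (1 << W) - 1

def chunks(v):
    # chunk 0 is least significant, chunk C-1 most significant
    return [(v >> (W * i)) & MASK for i in range(C)]

def ltu_via_chunks(x: int, y: int) -> int:
    xs, ys = chunks(x), chunks(y)
    total = 0
    for i in range(C):
        lt_i = 1 if xs[i] < ys[i] else 0
        eq_higher = 1 if all(xs[j] == ys[j] for j in range(i + 1, C)) else 0
        total += lt_i * eq_higher               # at most one term is nonzero
    return total

# Bit-level formulas for the two subtables (evaluated here on Boolean inputs):
def eq_poly(xb, yb):
    out = 1
    for xj, yj in zip(xb, yb):
        out *= xj * yj + (1 - xj) * (1 - yj)
    return out

def lt_poly(xb, yb):
    return sum((1 - xb[j]) * yb[j] * eq_poly(xb[j + 1:], yb[j + 1:]) for j in range(len(xb)))

bits = lambda v: [(v >> j) & 1 for j in range(W)]
for a, b in [(3, 200), (200, 3), (7, 7)]:
    assert lt_poly(bits(a), bits(b)) == (1 if a < b else 0)
    assert eq_poly(bits(a), bits(b)) == (1 if a == b else 0)
assert ltu_via_chunks(123456789, 987654321) == 1
assert ltu_via_chunks(987654321, 123456789) == 0
```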
  • 33. General intuition for Lasso as a tool • Lasso supports simple operations on the bit-decompositions of field elements, without requiring P to commit to the individual bits. • The sub-tables have quickly-evaluable multilinear extensions if each corresponds to a simple function of (the bits of) the table indices. • This ensures no honest party has to commit to them in pre-processing. • Can compute, say, bitwiseAND of two field elements in {0, 1, …, 2^64-1} with lower P costs than, say, Plonk incurs per addition or multiplication gate. • Remember: Lookup arguments are all about economies of scale. They only make sense to use if doing many lookups into one table (i.e., computing many invocations of the same function).
  • 34. Viewing indexed lookup arguments as SNARKs for repeated function evaluation
  • 35. SNARKs for repeated function evaluation • Many previous works have studied SNARKs for repeated function evaluation. • Computing the same function 𝑓 on many different inputs $x_1, \dots, x_m$. • They consider a "polynomial" amount of data parallelism. • If 𝑓 takes inputs of length $n$, the number of different inputs is $m = \text{poly}(n)$. • They still force P to evaluate 𝑓 in a very specific way. • Executing a specific circuit to compute 𝑓.
  • 36. Zooming out: a new view on lookup arguments • View a lookup table as storing all evaluations of a function 𝑓. • A lookup argument is then a SNARK for highly repeated evaluation of 𝑓. • It lets P prove that a committed vector $((a_1, f(a_1)), \dots, (a_m, f(a_m)))$ consists of correct evaluations of 𝑓 at different inputs $a_1, \dots, a_m$. • Due to the $O(c(m + N^{1/c}))$ cost for P, Lasso is effective only if the number of lookups $m$ is not too much smaller than the table size $N$. • The number of copies of 𝑓 should be exponential in the input size to 𝑓.
  • 37. Zooming out: a new view on lookup arguments • View a lookup table as storing all evaluations of a function 𝑓. • A lookup argument is then a SNARK for highly repeated evaluation of 𝑓. • It lets P prove that a committed vector $((a_1, f(a_1)), \dots, (a_m, f(a_m)))$ consists of correct evaluations of 𝑓 at different inputs $a_1, \dots, a_m$. • Due to the $O(c(m + N^{1/c}))$ cost for P, Lasso is effective only if the number of lookups $m$ is not too much smaller than the table size $N$. • i.e., the number of copies of 𝑓 should be exponential in the input size to 𝑓.
  • 38. High-level message of this viewpoint • Lasso is useful wherever the same function is evaluated many times. • zkVMs are only one such example. • By definition, the VM abstraction represents the computation as repeated application of primitive instructions. • But implementing a VM abstraction comes with substantial performance costs in general. • Interesting direction for future work: • Other/better ways to isolate repeated structure in computation. • Example work (with Yinuo Zhang and Sriram Sridhar): • Bit-slicing. • To evaluate a hash function or block cipher like SHA/AES naturally computed by a Boolean circuit C on, say, 64 different inputs: • Pack the first bit of each input into a single field element, the second bit of each input into a single field element, and so on. • Replace each AND gate in C with bitwiseAND, each OR gate in C with bitwiseOR, etc. • Now each output gate of C computes (one bit of) all 64 evaluations of SHA/AES. • Apply Lasso to this circuit.
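A minimal Python sketch of the bit-slicing transform just described, using a tiny stand-in Boolean circuit rather than SHA/AES (the circuit and all names here are illustrative assumptions):

```python
# Toy illustration of bit-slicing: run a small Boolean circuit on 64 independent
# inputs at once by packing, for each input wire, that wire's bit from all 64
# instances into one 64-bit word. Each AND/XOR/NOT gate on bits then becomes a
# single bitwiseAND/bitwiseXOR/bitwiseNOT on words -- exactly the word-level
# operations that Lasso handles with one lookup each.
import random

def circuit_bit(a, b, c):
    # Stand-in Boolean circuit C on single bits: (a AND b) XOR (NOT c).
    return (a & b) ^ (1 - c)

def circuit_sliced(wa, wb, wc):
    # The same circuit, gate for gate, on packed 64-bit words.
    return ((wa & wb) ^ ~wc) & ((1 << 64) - 1)

random.seed(0)
instances = [(random.randint(0, 1), random.randint(0, 1), random.randint(0, 1))
             for _ in range(64)]

# Pack wire a of all 64 instances into word wa, wire b into wb, wire c into wc.
wa = sum(a << i for i, (a, _, _) in enumerate(instances))
wb = sum(b << i for i, (_, b, _) in enumerate(instances))
wc = sum(c << i for i, (_, _, c) in enumerate(instances))

out_word = circuit_sliced(wa, wb, wc)
for i, (a, b, c) in enumerate(instances):
    assert (out_word >> i) & 1 == circuit_bit(a, b, c)   # bit i = instance i's output
```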