This week's session covers new work from Justin Thaler (GWU) et al. on Lasso/Jolt.
Lasso is a new lookup argument (more on this below) with a dramatically faster prover. Our initial implementation provides roughly a 10x speedup over the lookup argument in the popular, well-engineered halo2 toolchain; we expect improvements of around 40x when optimizations are complete. To demonstrate, we’re releasing the open source implementation, written in Rust. We invite the community to help us make Lasso as fast and robust as possible.
The second, accompanying innovation to Lasso is Jolt, a new approach to zkVM (zero knowledge virtual machine) design that builds on Lasso. Jolt realizes the “lookup singularity” – a vision initially laid out by Barry Whitehat of the Ethereum Foundation for simpler tooling and lightweight, lookup-centric circuits (more on why this matters below). Relative to existing zkVMs, we expect Jolt to achieve similar or better performance – and importantly, a more streamlined and accessible developer experience. With Jolt, it will be easier for developers to write fast SNARKs in their high-level language of choice.
Lasso: https://people.cs.georgetown.edu/jthaler/Lasso-paper.pdf
Jolt: https://people.cs.georgetown.edu/jthaler/Jolt-paper.pdf
Paper: https://eprint.iacr.org/2022/1355
Plonk is a widely used succinct non-interactive proof system that uses univariate polynomial commitments. Plonk is quite flexible: it supports circuits with low-degree ``custom'' gates as well as circuits with lookup gates (a lookup gate ensures that its input is contained in a predefined table). For large circuits, the bottleneck in generating a Plonk proof is the need for computing a large FFT.
In this work, the authors present HyperPlonk, an adaptation of Plonk to the boolean hypercube, using multilinear polynomial commitments. HyperPlonk retains the flexibility of Plonk but provides several additional benefits. First, it avoids the need for an FFT during proof generation. Second, and more importantly, it supports custom gates of much higher degree than Plonk without harming the running time of the prover. Both of these can dramatically speed up the prover's running time. Since HyperPlonk relies on multilinear polynomial commitments, the authors revisit two elegant constructions: one from Orion and one from Virgo. The authors also show how to reduce the Orion opening proof size to less than 10kb (an almost factor 1000 improvement) and show how to make the Virgo FRI-based opening proof simpler and shorter.
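To make the hypercube setting concrete, here is a minimal sketch (in Python, with an illustrative prime modulus; this is not HyperPlonk's actual code) of evaluating the multilinear extension of a table of values indexed by the boolean hypercube {0,1}^m, the kind of object a multilinear commitment scheme commits to:

```python
# Sketch: evaluating the multilinear extension (MLE) of a vector of values
# indexed by the boolean hypercube {0,1}^m. Illustrative only; the modulus
# and function names are ours, not HyperPlonk's.

P = 2**61 - 1  # stand-in field modulus for illustration

def eq(x_bits, r):
    """Multilinear Lagrange basis at hypercube point x_bits, evaluated at r."""
    acc = 1
    for x, ri in zip(x_bits, r):
        acc = acc * ((ri if x else (1 - ri)) % P) % P
    return acc

def mle_eval(values, r):
    """Evaluate the unique multilinear polynomial agreeing with `values`
    on {0,1}^m at an arbitrary point r in F^m (m = log2(len(values)))."""
    m = len(r)
    assert len(values) == 1 << m
    total = 0
    for idx, v in enumerate(values):
        bits = [(idx >> (m - 1 - j)) & 1 for j in range(m)]
        total = (total + v * eq(bits, r)) % P
    return total

# On hypercube points the MLE reproduces the table exactly:
vals = [3, 1, 4, 1]                 # values on {0,1}^2
assert mle_eval(vals, [0, 0]) == 3
assert mle_eval(vals, [1, 1]) == 1
```

Because the MLE agrees with the table on the hypercube and is degree 1 in each variable, sum-check-style protocols can process it directly, which is why no FFT is needed during proof generation.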
ZK Study Club: SuperNova (Srinath Setty - MS Research) - Alex Pruden
This week, Srinath Setty (MS Research) will present SuperNova, a new recursive proof system for incrementally producing succinct proofs of correct execution of programs on a stateful machine with a particular instruction set (e.g., EVM, RISC-V). A distinguishing aspect of SuperNova is that the cost of proving a step of a program is proportional only to the size of the circuit representing the instruction invoked by the program step. This is a stark departure from prior works that employ universal circuits where the cost of proving a program step is proportional at least to the sum of sizes of circuits representing each supported instruction—even though a particular program step invokes only one of the supported instructions. Naturally, SuperNova can support a rich instruction set without affecting the per-step proving costs. SuperNova achieves its cost profile by building on Nova, a prior high-speed recursive proof system, and leveraging its internal building block, folding schemes, in a new manner. We formalize SuperNova’s approach as a way to realize non-uniform IVC, a generalization of IVC. Furthermore, SuperNova’s prover costs and the recursion overhead are the same as Nova’s, and in fact, SuperNova is equivalent to Nova for machines that support a single instruction.
https://eprint.iacr.org/2022/1758
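The per-step cost claim can be seen in a toy model (all circuit sizes and names below are made-up numbers for illustration, not SuperNova's actual measurements):

```python
# Toy cost model for the claim above: with a universal circuit, each program
# step pays for every supported instruction; with SuperNova-style non-uniform
# IVC, each step pays only for the instruction it actually invokes.
# The instruction set and circuit sizes are invented for illustration.

instr_sizes = {"ADD": 1_000, "MUL": 1_500, "HASH": 30_000, "ECRECOVER": 120_000}

def universal_step_cost(sizes):
    # Universal circuit: proportional to the SUM of all instruction circuits.
    return sum(sizes.values())

def supernova_step_cost(sizes, invoked):
    # Non-uniform IVC: proportional only to the invoked instruction's circuit.
    return sizes[invoked]

program = ["ADD", "ADD", "MUL", "HASH"]  # a hypothetical 4-step trace
universal = sum(universal_step_cost(instr_sizes) for _ in program)
supernova = sum(supernova_step_cost(instr_sizes, op) for op in program)
print(universal, supernova)
```

Adding a rarely used but expensive instruction (like the hypothetical ECRECOVER above) inflates every step under the universal-circuit model, but only the steps that actually invoke it under SuperNova's model.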
This week, Benedikt Bünz and Binyi Chen of Espresso Systems present ProtoStar:
Accumulation is a simple yet powerful primitive that enables incrementally verifiable computation (IVC) without the need for recursive SNARKs. We provide a generic, efficient accumulation (or folding) scheme for any (2k − 1)-move special-sound protocol. The prover in each accumulation/IVC step is also only logarithmic in the number of supported circuits and independent of the table size in the lookup argument.
https://eprint.iacr.org/2023/620
zkStudyClub - Improving performance of non-native arithmetic in SNARKs (Ivo K...) - Alex Pruden
In this zkStudyClub session, Ivo presents techniques for applying log-derivative lookup tables in a circuit using LegoSNARK-style commitments. As an application, he shows how such a lookup table can be used to implement range checks, and specifically how to apply it to non-native arithmetic. Using these optimisations, the proof time for a BN254 pairing in Groth16 drops to approximately 5 seconds (on an M1 MacBook Pro). The technique also works for PLONKish arithmetisation.
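A hedged sketch of the identity behind log-derivative lookups (the field modulus and function names here are illustrative; the real argument proves this relation succinctly inside a circuit rather than recomputing it): a multiset of values {a_i} lies entirely in a table {t_j} exactly when sum_i 1/(X − a_i) = sum_j m_j/(X − t_j) for the multiplicities m_j, which can be tested at a random field element by Schwartz-Zippel.

```python
# Sketch of the log-derivative lookup identity: the looked-up values {a_i}
# all lie in the table {t_j}, with multiplicities m_j, iff
#   sum_i 1/(X - a_i) == sum_j m_j/(X - t_j)
# as rational functions. We test the identity at random points; the modulus
# and trial count are illustrative, not the paper's parameters.

import random
from collections import Counter

P = 2**61 - 1  # illustrative prime field

def inv(x):
    return pow(x, P - 2, P)  # modular inverse via Fermat's little theorem

def logderiv_lookup_holds(a, t, trials=8):
    m = Counter(a)  # multiplicity of each looked-up value
    for _ in range(trials):
        x = random.randrange(1, P)
        if any((x - v) % P == 0 for v in set(a) | set(t)):
            continue  # skip the (vanishingly rare) pole collision
        lhs = sum(inv((x - v) % P) for v in a) % P
        rhs = sum(m.get(tj, 0) * inv((x - tj) % P) for tj in t) % P
        if lhs != rhs:
            return False
    return True

table = [0, 1, 2, 3, 4, 5, 6, 7]  # e.g. a 3-bit range-check table
assert logderiv_lookup_holds([1, 1, 5, 7], table)
assert not logderiv_lookup_holds([1, 9, 5], table)  # 9 is out of range
```

This is exactly how a lookup table implements a range check: the table enumerates the valid range, and any out-of-range value breaks the identity with overwhelming probability.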
Multi-scalar multiplication: state of the art and new ideas - Gus Gutoski
A 90-minute online presentation for zkStudyClub, delivered 2020-06-01. I present a new idea with a demonstrated 5% speed-up for multi-scalar multiplication. When combined with precomputation, this method could yield upwards of 20% speed-up.
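For context, the baseline that such ideas improve on is the bucket (Pippenger) method. Here is a minimal sketch, using plain integer addition as a stand-in for the elliptic-curve group so the windowed bookkeeping is easy to check; window size and names are illustrative:

```python
# Sketch of the bucket (Pippenger) method for multi-scalar multiplication.
# The "group" here is just integers under addition so correctness is easy to
# verify; a real MSM would use elliptic-curve points with the same control flow.

def msm_bucket(scalars, points, c=4):
    """Compute sum_i scalars[i] * points[i] using c-bit windows."""
    total = 0
    nbits = max(s.bit_length() for s in scalars)
    nwin = (nbits + c - 1) // c
    for w in reversed(range(nwin)):          # most-significant window first
        for _ in range(c):                   # "double" c times per window
            total += total
        buckets = [0] * (1 << c)             # one bucket per window digit
        for s, p in zip(scalars, points):
            digit = (s >> (w * c)) & ((1 << c) - 1)
            buckets[digit] += p              # one group addition per term
        running, window_sum = 0, 0           # sum_j j*buckets[j] via suffix sums
        for j in reversed(range(1, 1 << c)):
            running += buckets[j]
            window_sum += running
        total += window_sum
    return total

scalars, points = [5, 11, 2], [100, 10, 1]
assert msm_bucket(scalars, points) == 5*100 + 11*10 + 2*1  # == 612
```

The suffix-sum trick at the end recovers sum_j j·buckets[j] with roughly 2·2^c additions instead of a scalar multiplication per bucket, which is where the method's savings come from; precomputation-based variants (as mentioned in the talk) shave further additions off this baseline.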
This week, Luke Pearson (Polychain Capital) and Joshua Fitzgerald (Anoma) present their work on Plonkup, a protocol that combines Plookup and PLONK into a single, efficient protocol. The protocol relies on a new hash function, called Reinforced Concrete, written by Dmitry Khovratovich. The three of them will present their work together at this week's edition of zkStudyClub!
Slides:
---
To follow the Zero Knowledge Podcast, go to https://www.zeroknowledge.fm
To the listeners of Zero Knowledge Podcast, if you like what we do:
- Follow us on Twitter - @zeroknowledgefm
- Join us on Telegram - https://t.me/joinchat/TORo7aknkYNLHmCM
- Support our Gitcoin Grant - https://gitcoin.co/grants/329/zero-knowledge-podcast-2
- Support us on Patreon - https://www.patreon.com/zeroknowledge
zkStudyClub - cqlin: Efficient linear operations on KZG commitments - Alex Pruden
This week, Liam Eagen (Blockstream Research) and Ariel Gabizon (Zeta Function Technologies) present cqlin - Efficient linear operations on KZG commitments with cached quotients.
Given two KZG-committed polynomials f(X) and g(X), a matrix M ∈ F^(n×n), and a subgroup H of order n, we present a protocol for checking that f|_H = M·(g|_H). After preprocessing, the prover makes O(n) field and group operations. This presents a significant improvement over the lincheck protocols in [CHMMVW, COS], where the prover's run-time (also after preprocessing) was quasilinear in the number of non-zeroes of M, which could be n^2.
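Spelled out naively (illustrative field and values; the point of cqlin is to prove this relation succinctly, not to recompute it), the lincheck relation over the evaluation vectors is just a matrix-vector product:

```python
# What a lincheck protocol proves, checked naively: viewing the restrictions
# of f and g to the subgroup H as length-n vectors of evaluations, the claim
# is the linear relation f|_H = M * (g|_H) over the field. This naive check
# costs O(#nonzeros of M); cqlin's prover runs in O(n) after preprocessing,
# independent of M's density. Field size and values are illustrative.

P = 101  # small illustrative prime

def lincheck_naive(M, g_evals, f_evals):
    n = len(g_evals)
    for i in range(n):
        row = sum(M[i][j] * g_evals[j] for j in range(n)) % P
        if row != f_evals[i] % P:
            return False
    return True

M = [[1, 2], [0, 3]]
g = [4, 5]
f = [(1*4 + 2*5) % P, (3*5) % P]   # [14, 15]
assert lincheck_naive(M, g, f)
assert not lincheck_naive(M, g, [14, 16])
```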
Robert Haas
Why does my query need a plan? Sequential scan vs. index scan. Join strategies. Join reordering. Joins you can't reorder. Join removal. Aggregates and DISTINCT. Using EXPLAIN. Row count and cost estimation. Things the query planner doesn't understand. Other ways the planner can fail. Parameters you can tune. Things that are nearly always slow. Redesigning your schema. Upcoming features and future work.
A MAC URISA event. This talk is oriented to GIS users looking to learn more about the Python programming language. The Python language is incorporated into many GIS applications. Python also has a considerable installation base, with many freely available modules that help developers extend their software to do more.
The beginning third of the talk discusses the history and syntax of the language, along with why a GIS specialist would want to learn how to use the language. The middle of the talk discusses how Python is integrated with the ESRI ArcGIS Desktop suite. The final portion of the talk discusses two Python projects and how they can be used to extend your GIS capabilities and improve efficiency.
Recording of the talk: https://www.youtube.com/watch?v=F1_FqvbXHb4
ChatGPT
Data analysis is the process of inspecting, cleaning, transforming, and modeling data to discover useful information, draw conclusions, and support decision-making. It involves applying various techniques and methods to extract insights from data sets, often with the goal of uncovering patterns, trends, relationships, or making predictions.
Here's an overview of the key steps and techniques involved in data analysis:
Data Collection: The first step in data analysis is gathering relevant data from various sources. This can include structured data from databases, spreadsheets, or surveys, as well as unstructured data such as text documents, social media posts, or sensor readings.
Data Cleaning and Preprocessing: Once the data is collected, it often needs to be cleaned and preprocessed to ensure its quality and suitability for analysis. This involves handling missing values, removing duplicates, addressing inconsistencies, and transforming data into a suitable format for analysis.
Exploratory Data Analysis (EDA): EDA involves examining and understanding the data through summary statistics, visualizations, and statistical techniques. It helps identify patterns, distributions, outliers, and potential relationships between variables. EDA also helps in formulating hypotheses and guiding further analysis.
Data Modeling and Statistical Analysis: In this step, various statistical techniques and models are applied to the data to gain deeper insights. This can include descriptive statistics, inferential statistics, hypothesis testing, regression analysis, time series analysis, clustering, classification, and more. The choice of techniques depends on the nature of the data and the research questions being addressed.
Data Visualization: Data visualization plays a crucial role in data analysis. It involves creating meaningful and visually appealing representations of data through charts, graphs, plots, and interactive dashboards. Visualizations help in communicating insights effectively and spotting trends or patterns that may be difficult to identify in raw data.
Interpretation and Conclusion: Once the analysis is performed, the findings need to be interpreted in the context of the problem or research objectives. Conclusions are drawn based on the results, and recommendations or insights are provided to stakeholders or decision-makers.
Reporting and Communication: The final step is to present the results and findings of the data analysis in a clear and concise manner. This can be in the form of reports, presentations, or interactive visualizations. Effective communication of the analysis results is crucial for stakeholders to understand and make informed decisions based on the insights gained.
Data analysis is widely used in various fields, including business, finance, marketing, healthcare, social sciences, and more. It plays a crucial role in extracting value from data, supporting evidence-based decision-making, and driving actionable insights.
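The steps above can be sketched end-to-end with the standard library alone (the dataset and column names are invented for illustration):

```python
# A minimal sketch of the collect -> clean -> explore -> report loop described
# above, using only the standard library. The rows and fields are made up.

import statistics

# 1. Collection: rows as dicts (stand-in for a database/CSV source)
rows = [
    {"region": "east", "sales": 120.0},
    {"region": "east", "sales": None},   # a missing value
    {"region": "west", "sales": 80.0},
    {"region": "west", "sales": 80.0},   # an exact duplicate
    {"region": "west", "sales": 95.0},
]

# 2. Cleaning: drop missing values and exact duplicates
seen, clean = set(), []
for r in rows:
    key = (r["region"], r["sales"])
    if r["sales"] is None or key in seen:
        continue
    seen.add(key)
    clean.append(r)

# 3. Exploratory analysis: summary statistics per group
by_region = {}
for r in clean:
    by_region.setdefault(r["region"], []).append(r["sales"])
summary = {k: statistics.mean(v) for k, v in by_region.items()}

# 4. Interpretation/report: a one-line finding for stakeholders
best = max(summary, key=summary.get)
print(f"highest mean sales: {best} ({summary[best]:.1f})")
```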
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs - Alex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
A decade of active research has led to practical constructions of zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARKs) that are now being used in a wide variety of applications. Despite this astonishing progress, overheads in proof generation time remain significant.
In this work, we envision a world where consumers with low computational resources can outsource the task of proof generation to a group of untrusted servers in a privacy-preserving manner. The main requirement is that these servers should be able to collectively generate proofs at a faster speed (than the consumer). Towards this goal, we introduce a framework called zk-SNARKs-as-a-service (zkSaaS) for faster computation of zk-SNARKs. Our framework allows for distributing proof computation across multiple servers such that each server is expected to run for a shorter duration than a single prover. Moreover, the privacy of the prover's witness is ensured against any minority of colluding servers.
We design custom protocols in this framework that can be used to obtain faster runtimes for widely used zk-SNARKs, such as Groth16 [EUROCRYPT 2016], Marlin [EUROCRYPT 2020], and Plonk [EPRINT 2019]. We implement proof-of-concept zkSaaS for the Groth16 and Plonk provers. In comparison to generating these proofs on commodity hardware, we show that not only can we generate proofs for a larger number of constraints (without memory exhaustion), but we can also get a speed-up when run with 128 parties, both for Groth16 constraint systems and for Plonk gate systems.
https://eprint.iacr.org/2023/905
Eos - Efficient Private Delegation of zkSNARK provers - Alex Pruden
Succinct zero knowledge proofs (i.e. zkSNARKs) are powerful cryptographic tools that enable a prover to convince a verifier that a given statement is true without revealing any additional information. Unfortunately, existing systems for generating zkSNARKs are expensive, which limits the applications in which these proofs can be used.
This new work (presented by co-author Pratyush Mishra) achieves security against malicious workers without relying on heavyweight cryptographic tools. We implement and evaluate our delegation protocols for a state-of-the-art zkSNARK in a variety of computational and bandwidth settings, and demonstrate that our protocols are concretely efficient. When compared to local proving, using our protocols to delegate proof generation from a recent smartphone (a) reduces end-to-end latency by up to 26×, (b) lowers the delegator's active computation time by up to 1447×, and (c) enables proving up to 256× larger instances.
https://www.usenix.org/system/files/sec23fall-prepub-492-chiesa.pdf
zkStudyClub: Caulk - Lookup Arguments in Sublinear Time (A. Zapico) - Alex Pruden
This week, Arantxa Zapico of the Ethereum Foundation presents new work (co-authored with Vitalik Buterin, Dmitry Khovratovich, Mary Maller, Anca Nitulescu, and Mark Simkin) called Caulk, which examines position-hiding linkability for vector commitment schemes. One can prove in zero knowledge that one or more values in a commitment cm all belong to the vector committed to in C. Caulk can be used for membership proofs and lookup arguments, and outperforms all existing alternatives in prover time by orders of magnitude.
https://eprint.iacr.org/2022/621
zkStudyClub: Zero-Knowledge Proofs Security, in Practice [JP Aumasson, Taurus] - Alex Pruden
Slides accompanying zkStudyClub talk: Zero-Knowledge Proofs Security, in Practice. JP Aumasson (co-creator of the BLAKE hash function family) will share his experience doing security auditing for various projects that use zero-knowledge proofs. He will describe his approach, the common pitfalls in the different components of a proof system, as well as a catalog of bugs that have been discovered in various projects
zkStudy Club: Subquadratic SNARGs in the Random Oracle Model - Alex Pruden
Slides for Eylon Yogev's (Bar-Ilan University) presentation at zkStudyClub, covering his new work (co-authored with Alessandro Chiesa of UC Berkeley) on subquadratic SNARGs in the random oracle model.
Link to the original paper: https://eprint.iacr.org/2021/281.pdf
ZK Study Club: Sumcheck Arguments and Their Applications - Alex Pruden
Talk given at the ZK Study Club by Jonathan Bootle and Katerina Sotiraki about the universality of sumcheck arguments and their importance in zero-knowledge cryptography.
zkStudyClub: CirC and Compiling Programs to Circuits - Alex Pruden
The programming languages community, the cryptography community, and others rely on translating programs in high-level source languages (e.g., C) to logical constraint representations. Unfortunately, building compilers for this task is difficult and time consuming. In this work, Alex Ozdemir et al present CirC, an infrastructure for building compilers for SNARKs that build upon a common abstraction: stateless, non-deterministic computations called existentially quantified circuits, or EQCs.
Neuro-symbolic is not enough, we need neuro-*semantic* - Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
JMeter webinar - integration with InfluxDB and Grafana - RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Connector Corner: Automate dynamic content and events by pushing a button - DianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... - UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
State of ICS and IoT Cyber Threat Landscape Report 2024 preview - Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on countries – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Transcript: Selling digital books in 2024: Insights from industry leaders - T... - BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
UiPath Test Automation using UiPath Test Suite series, part 3 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Speakers:
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 - Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 - Tobias Schneck
As AI technology pushes into IT, I found myself wondering, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our lovely cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and guide you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply them to our own infrastructure and make them work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working for real.
Generating a custom Ruby SDK for your web service or Rails API using Smithy - g2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... - Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. Fostering a culture of innovation, however, takes considerable work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
GraphRAG is All You Need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
UiPath Test Automation using UiPath Test Suite series, part 4 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
1. Lasso + Jolt: A Deep Dive
Justin Thaler
Georgetown University and a16z crypto research
Joint work with: Srinath Setty (Microsoft Research), Riad Wahby (CMU), Arasu Arun (NYU), Sam Ragsdale (a16z), Michael Zhu (a16z)
2. Presentation Outline
• What are lookup arguments?
• What are Lasso/Jolt?
• Lasso in detail.
• Jolt in detail.
• How to think about Lasso as a tool.
• Where else might lookup arguments be useful, outside of zkVMs?
4. Lookup arguments: what are they?
• Unindexed lookup argument:
  • Lets P commit to a vector a ∈ F^m, and prove that every entry of a resides in a pre-determined table t ∈ F^N.
  • For every entry a_i there is an index b_i such that a_i = t[b_i].
• Indexed lookup argument:
  • Lets P commit to vectors a, b ∈ F^m, and prove that a_i = t[b_i] for all i.
  • We call a the vector of lookup values and b the indices.
• Unindexed lookups are proofs of a subset relationship (i.e., batch set-membership proofs).
  • a specifies a subset of t.
• Indexed lookups are reads into a read-only memory.
  • t is the memory, and a_i = t[b_i] is a read of memory cell b_i.
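As a plain sketch (no cryptography, just the relation being proven), the two flavors of lookup argument can be phrased as predicates on a committed vector:

```rust
// Sketch of the *claims* that lookup arguments prove (no cryptography here):
// indexed: given values a, indices b, and a public table t, check a[i] = t[b[i]]
// for every i. Unindexed: only check that every a[i] appears somewhere in t.

fn indexed_lookup_holds(a: &[u64], b: &[usize], t: &[u64]) -> bool {
    a.len() == b.len() && a.iter().zip(b).all(|(&ai, &bi)| t.get(bi) == Some(&ai))
}

fn unindexed_lookup_holds(a: &[u64], t: &[u64]) -> bool {
    a.iter().all(|ai| t.contains(ai))
}

fn main() {
    let t = vec![10, 20, 30, 40];
    // Indexed: values plus the exact memory cells they were read from.
    assert!(indexed_lookup_holds(&[20, 40, 20], &[1, 3, 1], &t));
    assert!(!indexed_lookup_holds(&[20, 40], &[1, 2], &t));
    // Unindexed: just batch set-membership.
    assert!(unindexed_lookup_holds(&[30, 30, 10], &t));
}
```

A real lookup argument proves these predicates about *committed* vectors without revealing them; the sketch only pins down the statement.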
5. Lasso+Jolt: what are they?
• Lasso: new family of (indexed) lookup arguments.
  • P is an order of magnitude faster than in prior works.
  • Addresses key bottleneck for P: commitment costs.
    • P commits to fewer field elements, and all of them are small.
  • No commitment to t needed for many tables.
  • Support for gigantic tables (decomposable, or LDE-structured).
  • P commitment costs: O(c(m + N^{1/c})) field elements.
• Jolt: new zkVM technique.
  • Much lower commitment costs for P than prior works.
  • Primitive instructions are implemented via one lookup into the entire evaluation table of the instruction.
10. Lasso costs in detail
• For m indexed lookups into a table of size N, using parameter c:
  • P commits to 3cm + cN^{1/c} field elements.
  • All of them are small, say, in the set {0, 1, …, m}.
  • With MSM-based polynomial commitment schemes, P does (roughly) just one group operation per (small) committed field element.
    • Examples: KZG-based, IPA/Bulletproofs, Hyrax, Dory, etc.
• c = 1 is a special case. I call it “Basic-Lasso”.
  • P commits to only m + N field elements.
  • Even amongst these m + N, many are 0.
    • Hence “free” to commit to with MSM-based schemes.
  • Specifically, at most 2m are non-zero.
    • If every read is of a different table cell, m of the field elements are equal to 1, and the rest are 0s.
• V costs:
  • O(log m) field ops and hash evaluations (from Fiat-Shamir).
  • Plus one evaluation proof for a committed polynomial of size N^{1/c}.
  • Low enough V costs to reduce further via composition/recursion.
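To get a feel for the 3cm + cN^{1/c} commitment count, here is a back-of-the-envelope calculation (plain arithmetic, not part of the protocol; the concrete numbers are illustrative):

```rust
// Rough count of committed field elements in Lasso: 3*c*m + c*N^(1/c),
// for m lookups into a table of size N = 2^log_n.
fn committed_elements(c: u32, m: u64, log_n: u32) -> u64 {
    let subtable = 1u64 << (log_n / c); // N^(1/c), assuming c divides log2(N)
    3 * (c as u64) * m + (c as u64) * subtable
}

fn main() {
    let m = 1u64 << 20; // 2^20 lookups
    // A table of size N = 2^128 can never be materialized directly, but with
    // c = 8 the prover only ever touches subtables of size 2^16.
    let with_c8 = committed_elements(8, m, 128);
    assert_eq!(with_c8, 3 * 8 * (1u64 << 20) + 8 * (1u64 << 16));
    assert!(with_c8 < 1u64 << 26); // ~25.7 million elements, all small
}
```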
14. Lasso applied to huge tables: c > 1
• Most big lookup tables arising in practice are decomposable.
  • Can answer an (indexed) lookup into the big table of size N by performing roughly c lookups into tables of size N^{1/c} and “collating” the results.
  • Lasso handles the collation with the sum-check protocol.
    • No extra commitment costs for P.
• Can view Lasso with c > 1 as a generic reduction from lookups into big, decomposable tables to lookups into small tables.
  • Can use any lookup argument for the small tables.
    • Lasso uses Basic-Lasso on the small tables.
  • Major caveat: the small-table lookup argument must be indexed.
    • There are known transformations from unindexed lookup arguments to indexed ones.
    • But they either do not preserve “smallness” of table entries or do not preserve decomposability of the big table.
      • Because they “pack” indices and values together into a single field element.
15. Background: Grand Product Arguments
• All known lookup arguments use something called a grand product argument.
  • A SNARK for proving the product of n committed values.
• Popular grand product arguments today have P commit to n extra values (partial products).
  • This is unnecessary.
• T13: gave an optimized variant of the GKR protocol (a sum-check-based interactive proof for circuit evaluation).
  • No commitment costs for P.
  • P does a linear number of field operations.
  • Proof size/V time is O(log² n) field ops (and hash evaluations from Fiat-Shamir).
    • Much less than FRI, concretely and asymptotically.
• [Lee, Setty 2019] reduce V costs to about O(log n) with a slight increase in commitment costs for P.
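The “partial products” in question are the internal layers of a binary multiplication tree over the n committed values. A toy sketch (over an illustrative small prime field) shows what popular arguments commit to and what the GKR/sum-check route avoids committing to:

```rust
// Binary multiplication tree over n committed leaves (n assumed a power of 2).
// Popular grand product arguments have P commit to every internal layer
// (~n extra values in total); the GKR-based approach (T13) proves the root
// product without committing to any of them. Toy modulus for illustration.
const P: u64 = 2147483647; // 2^31 - 1, an illustrative prime

fn product_tree(leaves: &[u64]) -> Vec<Vec<u64>> {
    let mut layers = vec![leaves.to_vec()];
    while layers.last().unwrap().len() > 1 {
        let prev = layers.last().unwrap();
        // Each node is the product of its two children, reduced mod P.
        let next: Vec<u64> = prev.chunks(2).map(|c| c[0] * c[1] % P).collect();
        layers.push(next);
    }
    layers
}

fn main() {
    let leaves = vec![3, 5, 7, 11];
    let layers = product_tree(&leaves);
    assert_eq!(layers.last().unwrap()[0], 3 * 5 * 7 * 11 % P);
    // Total size of the internal layers (the avoidable commitments): n - 1.
    let extra: usize = layers[1..].iter().map(|l| l.len()).sum();
    assert_eq!(extra, leaves.len() - 1);
}
```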
18. Key Performance Insight in Basic-Lasso
• For many existing lookup arguments, if you swap out the invoked grand product argument for T13, P commits only to small field elements.
  • See upcoming work on LogUp by Papini and Haböck.
  • More involved than just a simple swap of a grand product argument.
  • Remember: Lasso/Jolt need an indexed lookup argument that plays nicely with collating small-table lookup results into big-table results.
• Technical takeaway: the community has still not fully internalized the power of sum-check to avoid commitment costs for P.
• See my second a16z talk for details on how Basic-Lasso works.
• Last part of this talk: more info about how to think of Lasso as a tool.
20. Front-ends today for VM execution
• Say P claims to have run a computer program for m steps.
• Say the program is written in the assembly language for a VM.
  • Popular VMs targeted: RISC-V, Ethereum Virtual Machine (EVM).
• Today, front-ends produce a circuit that, for each step of the computation:
  1. Figures out what instruction to execute at that step.
  2. Executes that instruction.
22. Jolt: A new front-end paradigm
• Say P claims to have run a computer program for m steps.
• Say the program is written in the assembly language for a VM.
  • Popular VMs targeted: RISC-V, Ethereum Virtual Machine (EVM).
• Today, front-ends produce a circuit that, for each step of the computation:
  1. Figures out what instruction to execute at that step.
  2. Executes that instruction.
• Lasso lets one replace Step 2 with a single lookup.
  • For each instruction, the table stores the entire evaluation table of the instruction.
  • If instruction f operates on two 64-bit inputs, the table stores f(x, y) for every pair of 64-bit inputs (x, y).
    • This table has size 2^128.
  • Jolt shows that all RISC-V instructions are decomposable.
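A toy illustration of “execute = one lookup”: real instructions take 64-bit operands (tables of size 2^128, never materialized), but with 4-bit operands the full evaluation tables fit in memory. The instruction names and encoding below are made up for illustration:

```rust
// Toy Jolt-style front-end: execute each primitive instruction by one lookup
// into its full evaluation table. 4-bit operands keep the tables tiny
// (256 entries); the encoding (x << 4 | y) is a hypothetical choice.
#[derive(Clone, Copy)]
enum Op { And, Add }

// Evaluation table of instruction f: entry (x << 4 | y) holds f(x, y).
fn eval_table(op: Op) -> Vec<u8> {
    (0u16..256)
        .map(|i| {
            let (x, y) = ((i >> 4) as u8, (i & 0xf) as u8);
            match op {
                Op::And => x & y,
                Op::Add => (x + y) & 0xf, // wraparound: overflow bit dropped
            }
        })
        .collect()
}

fn execute(op: Op, x: u8, y: u8) -> u8 {
    // Step 2 of the front-end, reduced to a single indexed lookup.
    // (Rebuilding the table per call is fine for a sketch.)
    eval_table(op)[((x as usize) << 4) | y as usize]
}

fn main() {
    assert_eq!(execute(Op::And, 0b1100, 0b1010), 0b1000);
    assert_eq!(execute(Op::Add, 15, 3), 2); // (15 + 3) mod 16
}
```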
23. Jolt in a picture
…query to be split into “chunks” which are fed into different subtables. The prover provides these chunks as advice, which are c in number for some small constant c, and hence approximately W/c or 2W/c bits long, depending on the structure of z. The constraint system must verify that the chunks correctly constitute z, but need not perform any range checks as the Lasso algorithm itself later implicitly enforces these on the chunks.
24. Jolt in context
• Jolt is a realization of Barry Whitehat’s “lookup singularity” vision (?)
  • Auditability/Simplicity/Extensibility benefits.
  • Performance benefits.
• A qualitatively different way of building zkVMs.
  • Yet with many similarities to things people are already doing.
    • People are already computing functions like bitwise-AND by doing several lookups into small tables and combining the results.
• Differences/keys to Jolt:
  • The new small-table lookup argument is much faster for P.
  • The new small-table lookup argument is naturally indexed.
  • The collation technique is much faster for P.
    • “Free” to multiply and add results of small-table lookups.
• These differences let us do almost everything in VM emulation with lookups.
27. Example 1: Bitwise-AND
• Decomposable: to compute bitwise-AND of two 64-bit inputs x, y:
  • Break each of x, y into, say, c = 8 chunks of 8 bits.
  • Compute the bitwise-AND of each chunk.
  • Concatenate the results.
  • i.e., output is ∑_{i=1}^{8} 2^{8(i−1)} · bitwiseAND(x_i, y_i).
• LDE-structured:
  • bitwiseAND(x, y) = ∑_{i=1}^{64} 2^{i−1} · x_i · y_i.
  • This is a multilinear polynomial that can be evaluated with under 200 field operations.
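The chunk decomposition above can be checked directly against the native operation:

```rust
// The slide's decomposition, verified bit-for-bit: 64-bit bitwise-AND equals
// the weighted sum (i.e., concatenation) of the 8-bit chunk ANDs,
// with chunk i carrying weight 2^(8*(i-1)).
fn and_via_chunks(x: u64, y: u64) -> u64 {
    (0..8).fold(0u64, |acc, i| {
        let (xi, yi) = ((x >> (8 * i)) & 0xff, (y >> (8 * i)) & 0xff);
        acc + ((xi & yi) << (8 * i)) // weight 2^{8i} for (0-indexed) chunk i
    })
}

fn main() {
    let (x, y) = (0xdead_beef_1234_5678u64, 0xffff_0000_ff00_00ffu64);
    assert_eq!(and_via_chunks(x, y), x & y);
    assert_eq!(and_via_chunks(u64::MAX, 0x1234), 0x1234);
}
```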
28. Example 1: Bitwise-AND
• Decomposable: to compute bitwise-AND of two 64-bit inputs x, y:
  • Break each of x, y into, say, c = 8 chunks of 8 bits.
  • Compute the bitwise-AND of each chunk.
  • Concatenate the results.
  • i.e., output is ∑_{i=1}^{8} 2^{8(i−1)} · bitwiseAND(x_i, y_i).
• Avoiding an honest party committing to the sub-table:
  • bitwiseAND(x_i, y_i) = ∑_{j=1}^{8} 2^{j−1} · x_{i,j} · y_{i,j}.
  • This is a multilinear polynomial that can be evaluated with under 25 field operations.
  • The only information the Lasso V needs about the sub-table is one evaluation of this polynomial.
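That sub-table polynomial can be sketched concretely. On boolean inputs it reproduces the table entry, but it is defined over the whole field, which is what lets the verifier evaluate it at one random point (the prime modulus below is a toy stand-in, not Lasso's field):

```rust
// Multilinear extension of the 8-bit AND subtable:
//   AND~(x, y) = sum_{j=1}^{8} 2^(j-1) * x_j * y_j,
// over a toy prime field. On boolean inputs it agrees with bitwise AND;
// the Lasso verifier only ever needs a single evaluation of it.
const P: u64 = 2147483647; // illustrative prime, 2^31 - 1

fn and_mle(x_bits: &[u64; 8], y_bits: &[u64; 8]) -> u64 {
    (0..8).fold(0u64, |acc, j| {
        (acc + (1u64 << j) % P * (x_bits[j] * y_bits[j] % P)) % P
    })
}

fn bits(v: u8) -> [u64; 8] {
    core::array::from_fn(|j| ((v >> j) & 1) as u64)
}

fn main() {
    // On the boolean hypercube, the MLE reproduces the table entry:
    for (x, y) in [(0xA5u8, 0x3Cu8), (0xFF, 0x0F)] {
        assert_eq!(and_mle(&bits(x), &bits(y)), (x & y) as u64);
    }
    // ...and it can equally be evaluated at non-boolean field points,
    // costing on the order of 25 field operations.
    let _ = and_mle(&[7u64; 8], &[11u64; 8]);
}
```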
30. Example 2: RISC-V Addition
• For adding two 64-bit numbers x, y, RISC-V prescribes that they be added and any “overflow bit” be ignored.
• Jolt computes z = x + y in the finite field (via one constraint added to the ancillary R1CS), and then uses lookups to identify the overflow bit, if any, and adjust the result accordingly.
  • P commits to the “limb-decomposition” (b_1, …, b_c) of the field element z = x + y.
  • Let M = 2^{64/c} denote the max value any limb should take.
  • A constraint is added to the R1CS to confirm z = ∑_{j=1}^{c} M^{j−1} · b_j, and each b_j is range checked via a lookup into the subtable that stores {0, …, M − 1}.
  • These checks guarantee that (b_1, …, b_c) is really the prescribed limb-decomposition of z.
  • To identify the overflow bit, one can do a lookup at index b_c into a table whose i-th entry spits out the relevant high-order bit of i.
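A minimal sketch of the idea, with two 32-bit limbs plus an explicit overflow “limb” (the limb arrangement here is illustrative; Jolt's actual constraint system differs in detail):

```rust
// Sketch of the addition trick: compute z = x + y without wraparound (in Jolt,
// one R1CS constraint over the field), decompose z into limbs b_j < M, check
// the recomposition constraint, and discard the overflow bit. The range
// checks on the limbs would each be one lookup into the table {0,...,M-1}.
const W: u32 = 64; // operand width
const C: u32 = 2;  // number of limbs
const M: u128 = 1 << (W / C); // each limb must lie in {0, ..., M-1}

fn riscv_add(x: u64, y: u64) -> u64 {
    let z: u128 = x as u128 + y as u128; // field-style add: no wraparound yet
    // Limb decomposition (b_1, b_2) plus the overflow bit as a third "limb".
    let b: [u128; 3] = [z % M, (z / M) % M, z / (M * M)];
    // The R1CS recomposition constraint z = sum_j M^(j-1) * b_j:
    assert_eq!(z, b[0] + M * b[1] + M * M * b[2]);
    // The overflow "limb" b[2] is simply ignored, as RISC-V prescribes:
    (b[0] + M * b[1]) as u64
}

fn main() {
    assert_eq!(riscv_add(u64::MAX, 1), 0); // overflow bit dropped
    assert_eq!(riscv_add(123, 456), 579);
    assert_eq!(riscv_add(u64::MAX, u64::MAX), u64::MAX - 1);
}
```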
31. Example 3: LESS THAN UNSIGNED
• Decomposable: to compute LESS-THAN of two 64-bit inputs x, y:
  • Break each of x, y into, say, c = 8 chunks of 8 bits.
  • Compute LESS-THAN (LT) and EQUALITY (EQ) on each chunk.
  • Output is: ∑_{i=1}^{8} LT(x_i, y_i) · ∏_{j=i+1}^{8} EQ(x_j, y_j).
• LDE-structured:
  • EQ(x_i, y_i) = ∏_{k=1}^{8} ( x_{i,k} · y_{i,k} + (1 − x_{i,k})(1 − y_{i,k}) ).
  • LT(x_i, y_i) = ∑_{k=1}^{8} (1 − x_{i,k}) · y_{i,k} · ∏_{l=k+1}^{8} ( x_{i,l} · y_{i,l} + (1 − x_{i,l})(1 − y_{i,l}) ).
  • Plugging the above into the output expression gives a multilinear polynomial that can be evaluated with under 200 field operations.
32. Example 3: LESS THAN UNSIGNED
• Decomposable: to compute LESS-THAN of two 64-bit inputs x, y:
  • Break each of x, y into, say, c = 8 chunks of 8 bits.
  • Compute LESS-THAN (LT) and EQUALITY (EQ) on each chunk.
  • Output is: ∑_{i=1}^{8} LT(x_i, y_i) · ∏_{j=i+1}^{8} EQ(x_j, y_j).
• Avoiding commitments to the two subtables:
  • EQ(x_i, y_i) = ∏_{k=1}^{8} ( x_{i,k} · y_{i,k} + (1 − x_{i,k})(1 − y_{i,k}) ).
  • LT(x_i, y_i) = ∑_{k=1}^{8} (1 − x_{i,k}) · y_{i,k} · ∏_{l=k+1}^{8} ( x_{i,l} · y_{i,l} + (1 − x_{i,l})(1 − y_{i,l}) ).
  • These are multilinear polynomials that can be evaluated with under 50 field operations.
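The chunk-level decomposition of LESS-THAN can be checked directly: the first unequal chunk (scanning from the most significant end) decides the comparison, so at most one term of the sum is nonzero:

```rust
// The LESS-THAN decomposition, checked against `<`:
//   LT(x, y) = sum_i LT(x_i, y_i) * prod_{j>i} EQ(x_j, y_j)
// over 8-bit chunks (chunk 0 least significant). At most one summand is 1.
fn lt_via_chunks(x: u64, y: u64) -> bool {
    let chunk = |v: u64, i: usize| (v >> (8 * i)) & 0xff;
    (0..8)
        .map(|i| {
            let lt_i = chunk(x, i) < chunk(y, i);
            // All more-significant chunks must be equal for chunk i to decide.
            let higher_eq = (i + 1..8).all(|j| chunk(x, j) == chunk(y, j));
            (lt_i && higher_eq) as u64
        })
        .sum::<u64>()
        == 1
}

fn main() {
    assert!(lt_via_chunks(100, 200));
    assert!(!lt_via_chunks(0xdead_beef, 0xdead_beef));
    assert!(!lt_via_chunks(300, 200));
    assert!(lt_via_chunks(0x00ff_0000_0000_0000, 0x0100_0000_0000_0000));
}
```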
33. General intuition for Lasso as a tool
• Lasso supports simple operations on the bit-decompositions of field elements, without requiring P to commit to the individual bits.
• The sub-tables have quickly-evaluable multilinear extensions if each corresponds to a simple function of the (bits of the) table indices.
  • This ensures no honest party has to commit to them in pre-processing.
• Can compute, say, bitwiseAND of two field elements in {0, 1, …, 2^64 − 1} with lower P costs than, say, Plonk incurs per addition or multiplication gate.
• Remember: lookup arguments are all about economies of scale. They only make sense to use if doing many lookups into one table (i.e., computing many invocations of the same function).
35. SNARKs for repeated function evaluation
• Many previous works have studied SNARKs for repeated function evaluation.
  • Computing the same function f on many different inputs x_1, …, x_m.
• They consider a “polynomial” amount of data parallelism.
  • If f takes inputs of length n, the number of different inputs is m = poly(n).
• They still force P to evaluate f in a very specific way.
  • Executing a specific circuit to compute f.
37. Zooming out: a new view on lookup arguments
• View a lookup table as storing all evaluations of a function f.
• A lookup argument is then a SNARK for highly repeated evaluation of f.
  • It lets P prove that a committed vector ((a_1, f(a_1)), …, (a_m, f(a_m))) consists of correct evaluations of f at different inputs a_1, …, a_m.
• Due to the O(c(m + N^{1/c})) cost for P, Lasso is effective only if the number of lookups m is not too much smaller than the table size N.
  • i.e., the number of copies of f should be exponential in the input size to f.
38. High-level message of this viewpoint
• Lasso is useful wherever the same function is evaluated many times.
• zkVMs are only one such example.
  • By definition, the VM abstraction represents the computation as repeated application of primitive instructions.
  • But implementing a VM abstraction comes with substantial performance costs in general.
• Interesting direction for future work:
  • Other/better ways to isolate repeated structure in computation.
• Example work (with Yinuo Zhang and Sriram Sridhar): bit-slicing.
  • To evaluate a hash function or block cipher like SHA/AES, naturally computed by a Boolean circuit C, on, say, 64 different inputs:
    • Pack the first bit of each input into a single field element, the second bit of each input into a single field element, and so on.
    • Replace each AND gate in C with bitwiseAND, each OR gate in C with bitwiseOR, etc.
    • Now each output gate of C computes (one bit of) all 64 evaluations of SHA/AES.
    • Apply Lasso to this circuit.
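The bit-slicing transformation can be sketched on a toy circuit (a made-up 2-gate circuit, not SHA/AES): pack bit k of all 64 inputs into one word, then each gate becomes a single word-wide bitwise operation covering all 64 evaluations at once.

```rust
// Bit-slicing sketch: evaluate a tiny boolean circuit on 64 inputs at once.
// Word s_k holds bit k of every input; each AND/XOR gate then acts on whole
// words, so one gate evaluation covers all 64 circuit evaluations.
fn bit_slice(inputs: &[u8; 64], k: usize) -> u64 {
    inputs
        .iter()
        .enumerate()
        .fold(0u64, |w, (i, &v)| w | ((((v >> k) & 1) as u64) << i))
}

// Toy circuit on 3 input bits: out = (b0 AND b1) XOR b2.
fn circuit(b0: bool, b1: bool, b2: bool) -> bool {
    (b0 && b1) ^ b2
}

fn main() {
    // 64 arbitrary 3-bit inputs.
    let inputs: [u8; 64] = core::array::from_fn(|i| (i * 37 % 8) as u8);
    let (s0, s1, s2) = (bit_slice(&inputs, 0), bit_slice(&inputs, 1), bit_slice(&inputs, 2));
    // Sliced evaluation: each gate is now one word-wide bitwise operation.
    let sliced_out = (s0 & s1) ^ s2;
    // Bit i of the output word equals the circuit evaluated on input i.
    for (i, &v) in inputs.iter().enumerate() {
        let expected = circuit(v & 1 != 0, v & 2 != 0, v & 4 != 0);
        assert_eq!((sliced_out >> i) & 1 == 1, expected);
    }
}
```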