
Securing Sharded Networks with Swarm


Dmitry Kurinskiy @ Swarm Summit
May 24, 2019 – Madrid



  1. Securing Sharded Networks with Swarm (Dmitry Kurinskiy)
  2. Agenda: (1) decentralized databases, (2) security considerations for decentralized databases, (3) a hybrid approach to low-redundancy shard security
  3. What is Fluence? A permissionless decentralized database platform. One-click deployment for SQL and NoSQL databases (think Redis or SQLite)
  4. The decentralized database gap: existing decentralized networks are computationally bound and do not support sophisticated queries
  5. Approach #1: transfer data to the client. Problem: too much data needs to be transferred
  6. Approach #2: use a blockchain. Problem: excessive redundancy
  7. Redundancy in the cloud environment • Database replicas: ~2×–5× • Backup storage: ~3×
  8. Goal: decentralize open-source DBs • Cloud databases: AWS RDS (MySQL & PostgreSQL), AWS ElastiCache (Memcached & Redis) • Database = Storage + Computations: apply built-in and user-defined functions to data, bulk updates, stored procedures
  9. How to make miniature shards secure?
  10. Sharding on a budget: assuming a budget of only 5× to 10× redundancy, can we simply run a 7-node shard with a BFT consensus? Well, there are security threats
  11. Isolated shard security • A permissioned BFT consensus (such as Tendermint) can tolerate up to k malicious nodes out of 3k+1 • If 2k+1 (two thirds) of the nodes are malicious, they can "tolerate" the honest ones instead
  12. How to deal with a malicious shard? With 10% malicious nodes in the network, there is a 0.017% chance of a 7-node shard takeover. Can we do better with the same budget?
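The takeover chance quoted on this slide can be reproduced with a binomial tail: taking over a 7-node BFT shard requires more than two thirds of its nodes (at least 5 of 7) to be malicious. A minimal sketch of the arithmetic, with illustrative function names:

```python
from math import comb

def takeover_prob(n, threshold, p):
    """Probability that at least `threshold` of n randomly assigned
    nodes are malicious (binomial tail with per-node probability p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(threshold, n + 1))

# 7-node shard, BFT broken once >= 5 of 7 nodes are malicious,
# 10% malicious nodes in the overall network:
risk = takeover_prob(7, 5, 0.10)
print(f"{risk:.4%}")  # ≈ 0.0177%, i.e. the ~0.017% quoted above
```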
  13. Shared verifier pool • A shard should be secured not only by its own nodes • But how do we transfer the transaction history to the verifier pool?
  14. Swarm to the rescue • A shard uploads a block to Swarm and provides a proof of upload • Rotated validators pick data from Swarm without overloading the shard • Fishermen download different fragments of the transaction history • Fishermen can be picked at random
  15. Shared verifier pool with Swarm
  16. Real-time shards (Tendermint, WebAssembly) • Ethereum holds the registry of deployed databases • BFT consensus-based replication between DB nodes • Direct frontend <–> database interaction • Transaction history is uploaded to Swarm
  17. Tendermint blocks
  18. Blocks uploaded to Swarm
  19. Swarm as a chain storage • Store Swarm receipts in Swarm and organize the data into a chain so all of it can be fetched • Once in a while, the pointer to the last Swarm-uploaded block is checkpointed to Ethereum • In between, the manifest chain is kept in interim storage • Why not Swarm MRU? Because of possible forks
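The receipt chain described on this slide can be sketched as a linked list of content-addressed manifests: a verifier holding only the latest checkpointed reference can walk back through the whole history. Everything below (the in-memory store standing in for Swarm, the manifest shape) is a hypothetical illustration, not Fluence's actual data format:

```python
import hashlib
import json

store = {}  # stand-in for Swarm: content-addressed storage

def put(obj):
    """Store an object under the hash of its canonical encoding."""
    ref = hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()
    store[ref] = obj
    return ref

# Each manifest holds a block's Swarm receipt plus a pointer to the
# previous manifest, forming a chain; only the head reference ever
# needs to be checkpointed to Ethereum.
head = None
for receipt in ["receipt-0", "receipt-1", "receipt-2"]:
    head = put({"prev": head, "receipt": receipt})

def walk(ref):
    """Follow `prev` links from the checkpointed head back to genesis."""
    while ref is not None:
        manifest = store[ref]
        yield manifest["receipt"]
        ref = manifest["prev"]

print(list(walk(head)))  # most recent receipt first
```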
  20. Preventing forks
  21. Verifications. Composition: • transaction history is verified segment by segment • segments are sequentially verified by several fishermen. Fishermen: • are randomly selected from the shared network pool • verify that preceding validations were autonomous • do not know whether there will be a subsequent validation
  22. Verification: 1. Download the old state from Swarm 2. Download the transaction history segment from Swarm 3. Replay the transaction history 4. Reach the new state 5. Upload the new state to Swarm 6. Drop the transaction history segment
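The six verification steps can be expressed as a short loop. The in-memory `swarm` dict and the trivial key-value replay below are assumptions standing in for the real Swarm client and database engine:

```python
import hashlib
import json

swarm = {}  # toy in-memory "Swarm": content-addressed store

def upload(obj):
    data = json.dumps(obj, sort_keys=True).encode()
    ref = hashlib.sha256(data).hexdigest()
    swarm[ref] = data
    return ref

def download(ref):
    return json.loads(swarm[ref])

def verify_segment(old_state_ref, segment_ref):
    """Fisherman's job: replay a transaction-history segment against
    the old state and publish the resulting state."""
    state = download(old_state_ref)   # 1. old state from Swarm
    segment = download(segment_ref)   # 2. history segment from Swarm
    for key, value in segment:        # 3. replay the transactions...
        state[key] = value            # 4. ...reaching the new state
    new_ref = upload(state)           # 5. new state back to Swarm
    del swarm[segment_ref]            # 6. drop the verified segment
    return new_ref

old = upload({"balance": 10})
seg = upload([["balance", 15]])
new = verify_segment(old, seg)
assert download(new) == {"balance": 15}
```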
  23. Verification game • Any database transaction may be disputed • The verification game narrows the dispute down to a single WebAssembly instruction • An Ethereum smart contract repeats the instruction
  24. Hybrid approach • Speed layer: real-time validators • Security layer: shared fishermen pool • Data availability layer: Swarm • Dispute resolution layer
  25. Sharding on a budget • For misbehaviour to go unnoticed, the shard must be taken over and all the fishermen must be malicious • With 4 validator nodes in a shard and 3 fishermen nodes: 0.00037% risk of shard takeover, two orders of magnitude lower than with the naive approach
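The 0.00037% figure follows from combining two independent events: at least 3 of the 4 validators are malicious (2k+1 with k=1), and all 3 independently selected fishermen are malicious, again with 10% malicious nodes in the network. A sketch of the arithmetic:

```python
from math import comb

def tail(n, k_min, p):
    """Probability that at least k_min of n nodes are malicious."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_min, n + 1))

p_malicious = 0.10

# Shard takeover (>= 3 of 4 validators malicious) AND all 3 fishermen
# malicious; the events are independent because fishermen are drawn
# at random from the shared network pool.
risk = tail(4, 3, p_malicious) * p_malicious**3
print(f"{risk:.5%}")  # 0.00037%
```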
  26. Bottom line • Real-time validators are stateful and contain only a few workers: + low response latencies + cost efficiency – cartels are possible • Fishermen are stateless and independent: + fishermen cartels are not possible – it takes time to reach finality • Swarm decouples the Block Producer from the Finality Gadget: + independent decentralized storage with data availability guarantees + cost-efficient: no need to store the data forever, redundancy is limited
  27. That's all folks!