Decouple transaction pool from the rest of Ethereum node

https://ethresear.ch/t/decouple-transaction-pool-from-the-rest-of-ethereum-node/7710

Based on some explanations given in “Difference between Stateless Ethereum and ReGenesis regarding transaction pool” (below), I will propose a hypothetical architecture for an Ethereum node that decouples transaction pool handling from most of the other things that happen in it. This decoupling means that we could create a special type of Ethereum node, called a “Transaction Pool node”, whose task would be exclusively to receive, verify, and propagate transactions around the network. One of the main reasons to propose this is that if transaction pool nodes can be separated, engineering work on them can be performed by people specialising in this subject area; they can add functionality and optimise within the confines of the protocol, and later on suggest and implement improvements to the protocol itself.

The main idea is to make merkle proofs of sender accounts (which include balance and nonce; those are required for basic anti-spam measures) mandatory in transactions. This will make transactions larger (roughly 3k bytes more if we do not switch the state to a binary merkle tree, and about 1k more if we do), and it will also make transactions harder to produce. The question is: will this be an acceptable tradeoff?
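
To make the proposal concrete, here is a minimal sketch in Go of what such a self-proving transaction envelope might look like. All names are hypothetical, not an actual protocol definition, and the size figures in the comments are the estimates from above.

```go
package txpool

// Transaction stands in for the usual signed transaction payload.
type Transaction struct {
	Nonce    uint64
	GasLimit uint64
	GasPrice uint64 // in wei
	// ... to, value, data, signature, etc.
}

// ProvedTransaction carries a merkle proof of the sender's account
// (balance and nonce), so that pool nodes can verify it without state.
type ProvedTransaction struct {
	Tx          Transaction
	SenderProof [][]byte // ~8*15*32 = 3840 bytes in the current hexary trie,
	// ~32*32 = 1024 bytes in a binary trie
}
```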

If this is done, it will be just the first step towards gradually shifting the burden of maintaining the state from the core of the network (relaying nodes and mining nodes) to its perimeter (nodes that create and inject transactions). I believe this will make the whole system more incentive-compatible as it grows further.


Difference between Stateless Ethereum and ReGenesis regarding transaction pool

https://ethresear.ch/t/difference-between-stateless-ethereum-and-regenesis-regarding-transaction-pool/7709

On one of the Stateless Ethereum calls we had a great question from @cdetrio about the relationship between transaction propagation and witnesses. The question is this: if we do not assume that Ethereum nodes hold any state, how would they verify transactions before propagating them? Here is the context: when a transaction arrives at an Ethereum node from a peer, the node needs to perform some basic verifications on it before sending it further:

  1. Extract the sending account from the transaction (sender), then read the ETH balance of sender from the current state (sender.balance), and verify that the sender has enough ETH to at least pay for gas: sender.balance >= tx.limit * tx.gasprice.
  2. Extract the sending account from the transaction (sender), then read the nonce of the sender from the current state (sender.nonce), as well as the nonces of other transactions from sender currently in the transaction pool. Verify that there are no nonce gaps, i.e. there is no number nonce_gap with sender.nonce < nonce_gap < tx.nonce for which the pool contains no transaction tx' from sender with tx'.nonce = nonce_gap. (A Go sketch of both checks follows this list.)
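
Here is that sketch; the types and names are illustrative, not any client's actual API.

```go
package txpool

import (
	"errors"
	"math/big"
)

// Account holds the only sender state the checks need.
type Account struct {
	Balance *big.Int
	Nonce   uint64
}

// Tx is a minimal stand-in for a pooled transaction.
type Tx struct {
	GasLimit uint64
	GasPrice *big.Int
	Nonce    uint64
}

// preCheck performs the two basic anti-spam verifications on an incoming
// transaction. pooled maps nonce -> transaction for the same sender.
func preCheck(sender Account, tx Tx, pooled map[uint64]Tx) error {
	// 1. The sender must be able to at least pay for gas.
	cost := new(big.Int).Mul(new(big.Int).SetUint64(tx.GasLimit), tx.GasPrice)
	if sender.Balance.Cmp(cost) < 0 {
		return errors.New("insufficient balance for gas")
	}
	// 2. Every nonce between the account nonce and tx.Nonce must already
	// be filled by another pooled transaction, i.e. no nonce gaps.
	for n := sender.Nonce; n < tx.Nonce; n++ {
		if _, ok := pooled[n]; !ok {
			return errors.New("nonce gap")
		}
	}
	return nil
}
```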

These basic checks stop the node from acting as an amplification vector for basic spamming attacks that could take down critical nodes such as mining pools. Looking at the checks, one can easily come up with such attacks:

  1. Unfunded transactions – create a lot of transactions coming from lots of generated accounts with no ETH on them.
  2. Funded transactions that will never be valid. Take an account with some ETH in it, and create a lot of transactions with a very high gas price but with a nonce gap, which means that none of the created transactions will ever be able to get into a block, and they will be stuck in the transaction pool.

There can be refinements to these verifications. For example, if there are multiple transactions from the same sender without nonce gaps, we can require that sender.balance >= tx1.limit*tx1.gasprice + tx2.limit*tx2.gasprice + ...; in other words, the sender has enough ETH to pay for all of its outstanding transactions. Another refinement could be based on a minimum viable gas price.
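
Continuing the sketch above, the cumulative-balance refinement could look like this:

```go
// cumulativeCost sums limit*gasprice over all of a sender's outstanding
// pooled transactions; the refined check admits a new transaction only if
// sender.Balance >= cumulativeCost of all pooled txs including the new one.
func cumulativeCost(txs []Tx) *big.Int {
	total := new(big.Int)
	for _, tx := range txs {
		cost := new(big.Int).Mul(new(big.Int).SetUint64(tx.GasLimit), tx.GasPrice)
		total.Add(total, cost)
	}
	return total
}
```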

One common theme in all these verifications is that they require access to one item in the current state, to read sender.balance and sender.nonce. If the node is “purely” stateless, where will it get this information from? That was the question raised on the call.

On the same call, the suggested solution was to require transactions to carry their sender.balance (as an extra field, perhaps), with sender.nonce implied (meaning that for the tx to be mineable, we need tx.nonce == sender.nonce). Additionally, the transaction would carry the merkle proof, or witness, certifying the correctness of sender.balance and sender.nonce. Such a merkle proof might be fairly large: if we assume a saturated state trie 8 levels deep, with 15 sibling hashes on each level, the proof size is around 8 * 15 * 32 = 3840 bytes – not the end of the world, but still large, adding about 61k gas to the transaction cost if we assume 16 gas per byte. A switch to binary trees would reduce the size of such a merkle proof to around 32 * 32 = 1024 bytes (or about 16k gas).
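
A sketch of how a stateless node might perform this admission check. The helper verifyAccountProof, and all other names here, are assumptions of the sketch; the helper would walk the supplied merkle proof and check that it hashes up to the given state root.

```go
package stateless

import (
	"errors"
	"math/big"
)

// Account is what the witness proves about the sender.
type Account struct {
	Balance *big.Int
	Nonce   uint64
}

// verifyAccountProof is an assumed helper: it would hash the proof nodes
// from the account leaf up and compare the result with stateRoot.
func verifyAccountProof(stateRoot [32]byte, sender [20]byte, proof [][]byte) (Account, error) {
	// ... omitted in this sketch
	return Account{}, errors.New("not implemented")
}

// admit checks a transaction that carries its sender's account data plus
// a witness. The nonce is implied: the transaction is only mineable when
// txNonce equals the account nonce proven by the witness.
func admit(stateRoot [32]byte, sender [20]byte, txNonce uint64, gasCost *big.Int, proof [][]byte) error {
	acc, err := verifyAccountProof(stateRoot, sender, proof)
	if err != nil {
		return err // the witness does not check out against the state root
	}
	if acc.Balance.Cmp(gasCost) < 0 {
		return errors.New("insufficient balance for gas")
	}
	if txNonce < acc.Nonce {
		return errors.New("nonce already used")
	}
	return nil
}
```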

Another interesting detail about this merkle proof inside a transaction is that it will have been constructed against some state root from the past. Therefore, to verify the proof, the Ethereum node will need access to the relevant segment of the header chain.

ReGenesis is a middle ground between hypothetical “pure” stateless nodes and the fully stateful nodes we have now. Here is its main implication for the sender merkle proofs: between two “resets” (ReGenesis events) of the state, any sender only needs to include the merkle proof of its balance and nonce until its first transaction makes it into a block. This is because, under the rules of ReGenesis, any state data attached to transactions with correct proofs gets added to the “active state”, and this active state survives until the next ReGenesis event.
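
In pool-node terms, the ReGenesis rule might reduce to a check like this, a sketch with a hypothetical active-state set:

```go
package regenesis

// needsSenderProof sketches the rule above: a witness is required only
// until the sender's account has entered the active state, which happens
// once a transaction with a correct proof makes it into a block.
func needsSenderProof(activeState map[[20]byte]bool, sender [20]byte) bool {
	return !activeState[sender]
}
```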


Simpler Ethereum sync: Major/minor state snapshots, blockchain files, receipt files

https://ethresear.ch/t/simpler-ethereum-sync-major-minor-state-snapshots-blockchain-files-receipt-files/7672

Recently, we decided to stop preparing the Merry-Go-Round snapshot sync algorithm in turbo-geth. Why? Because we think there is a way to achieve most of what it would deliver, but without most of the complexity. What is described here will not be part of the first turbo-geth release. In the first release, the only way to sync will be to download all blocks starting from genesis and execute them. The timing is not that bad – it definitely won’t take a month. On a machine with an NVMe device, it can take around 50 hours to obtain a node synced to the head of mainnet, with all history present and indexed.

In subsequent releases, we would like to introduce the ability to sync from a more recent state than genesis. The initial idea is this. Let’s say that every 1m blocks (~6 months), we will manually (or, in the future, automatically) create a snapshot of the entire state, and of all the blocks and receipts prior to that point. This will result in 3 files (with approximate sizes if this were done about now):

  1. State snapshot file, containing all the accounts, contract storage, and contract bytecodes. ~50Gb
  2. Blockchain file, containing all block headers and block bodies from genesis up to the snapshot point. ~160Gb
  3. Receipt file, containing all the historical receipts from genesis up to the snapshot point. ~130Gb

These 3 files would be seeded over BitTorrent (and perhaps Swarm) by the turbo-geth nodes (we want to try to plug in a BitTorrent library).

One slight technical challenge for such seeding is being able to utilise the state snapshot file while seeding it; otherwise these 50Gb would just be “wasted”, in the sense that the space would only be useful for seeding to other nodes and nothing else. It should be possible to organise the state database as an overlay, where the actively modifiable state “sits” on top of the immutable snapshot file. Any time we read from the state, we first look in the modifiable state, and if the item is not found there, we look it up in the snapshot file (which means the snapshot file needs to contain an index).
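
A minimal sketch of such an overlay read, with hypothetical names. A real implementation would also need deletion markers (tombstones) in the modifiable layer, so that removed items do not resurface from the snapshot.

```go
package state

import "errors"

// SnapshotFile stands in for the immutable snapshot; Lookup would use the
// index embedded in the file.
type SnapshotFile interface {
	Lookup(key []byte) (value []byte, ok bool)
}

// Overlay layers the actively modifiable state over the snapshot.
type Overlay struct {
	modified map[string][]byte
	snapshot SnapshotFile
}

var errNotFound = errors.New("not found")

// Get reads from the modifiable state first, falling back to the snapshot.
func (o *Overlay) Get(key []byte) ([]byte, error) {
	if v, ok := o.modified[string(key)]; ok {
		return v, nil
	}
	if v, ok := o.snapshot.Lookup(key); ok {
		return v, nil
	}
	return nil, errNotFound
}
```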

As you might have guessed, there would be two alternative ways to sync:

  1. Download the blockchain file, and execute all the blocks from genesis. Result: entire history from genesis, receipts, and the current state.
  2. Download the most recent state snapshot file, download the blocks after the snapshot point (from the eth network), and execute them. Result: history only from the snapshot point onwards, plus the current state. If historical receipts are required, they can be downloaded as the receipt file.

How large would that “modifiable” state be (the one that sits on top of the state snapshot file as an overlay)? Here is a rough calculation. As of block 10’416’641, there were 92’430’646 accounts and 334’797’797 storage items in the state, or 427’228’443 items in total. Given the ~50Gb snapshot size, that makes roughly 125 bytes per item on average.
The number of accounts modified between blocks 9’416’641 and 10’416’641 was 25’191’312, and the number of modified storage items was 88’451’010 – together 113’642’322 items, or around 13Gb at 125 bytes per item. This is how large the modified state would grow over these latest 1m blocks. After that point, it would be merged into the snapshot, and the new snapshot would be seeded over the BitTorrent (or Swarm) network.

The second approach to syncing (from a recent state snapshot) still requires executing up to 1m blocks. Depending on the performance of the implementation, this might take anything from a few hours to a couple of days.


Terminology: What do we call “witness” in “Stateless Ethereum” and why it is appropriate

https://ethresear.ch/t/terminology-what-do-we-call-witness-in-stateless-ethereum-and-why-it-is-appropriate/7602

In order to communicate our ideas and designs more clearly, we need good terminology. There is an opinion that “Stateless” is not a good term for what we are trying to design, and I tend to agree. We might need to move away from this term in our next pivot. For now, let’s see whether “Witness” is an appropriate term. From my point of view, it is. Here is why.

The way the Ethereum state transition is usually described is that we have an environment E (block hash, timestamp, gas price, etc.), a block B (containing transactions), and the current state S, and we compute the next state S' like this:

S' = D(S, B, E)

where D is a function that can be described by a deterministic algorithm, which parses the block, takes out each transaction, runs it through the state, gathers all the changes to the state, and outputs the modified state.
The same action can be viewed in an alternative way:

HS' = ND(HS, B, E)

where we have a non-deterministic algorithm ND, which takes the merkle hash HS of the state as input, instead of the state S itself, and outputs the merkle hash HS' of the modified state S', instead of the full modified state.

How does this non-deterministic algorithm work? It requires a so-called oracle input, or auxiliary input, to operate. This input is provided by some abstract entity (the Oracle) that knows everything that can be known, including the full state S. For example, imagine that the first thing the block execution does is read the balance of an account A. The non-deterministic algorithm does not have this information, so it needs the Oracle to inject it as a piece of auxiliary input. And not only that: the algorithm also needs to check that the Oracle is not cheating. Essentially, the Oracle will provide the balance of A together with a merkle proof that leads to HS; this satisfies the algorithm that it has the correct information, and it proceeds.
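
The core step of ND can be sketched like this; the Oracle interface and the verification helper are assumptions of the sketch.

```go
package nd

import "errors"

// Oracle supplies the auxiliary input: a state value together with a
// merkle proof that is supposed to lead to the state root HS.
type Oracle interface {
	Provide(key []byte) (value []byte, proof [][]byte)
}

// verifyAgainstRoot is an assumed helper that would hash the proof nodes
// from the (key, value) leaf up and compare the result with root.
func verifyAgainstRoot(root [32]byte, key, value []byte, proof [][]byte) bool {
	// ... omitted in this sketch
	return false
}

// readState asks the Oracle for a value and refuses to proceed unless the
// accompanying proof checks out against HS; note that Provide may simply
// never return if the Oracle ignores us.
func readState(hs [32]byte, key []byte, o Oracle) ([]byte, error) {
	value, proof := o.Provide(key)
	if !verifyAgainstRoot(hs, key, value, proof) {
		return nil, errors.New("oracle input failed merkle proof verification")
	}
	return value, nil
}
```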

The reason this kind of algorithm is non-deterministic is that it cannot “force” the Oracle to do anything; it is completely up to the Oracle whether the algorithm ever succeeds in computing HS'. The Oracle can completely ignore the algorithm and never provide the input, and the algorithm will just keep “hanging”. The Oracle may also provide wrong input, in which case the algorithm will most probably fail (because the input will not pass the merkle proof verification). Why “most probably”? Because if the Oracle is very, very powerful, it may be able to find a preimage for Keccak256 (or whatever hash function we are using in the merkle tree) and forge merkle proofs of incorrect data. Although this may happen, it is very unlikely, and the degree to which we are sure it won’t happen is called “soundness”.

What about the term “witness”? The auxiliary input that the Oracle provides to a non-deterministic algorithm is often called a “witness”. Therefore it is appropriate to call the pieces of merkle proofs that we would like to attach to blocks or transactions “witnesses”. If we look at “Stateless” execution as a non-deterministic algorithm, then it all makes sense 🙂

“Witness” is a more general term than “merkle proof”, because there could be other types of witnesses, for example, proofs for polynomial commitments, SNARKs, STARKs, etc.

Hope this helps someone 😍


ReGenesis – resetting Ethereum to reduce the burden of large blockchain and state

https://ethresear.ch/t/regenesis-resetting-ethereum-to-reduce-the-burden-of-large-blockchain-and-state/7582

Lessons from Cosmos Hub

If you observed how Cosmos Hub performed its upgrade from version 1 to version 2, and then from version 2 to version 3, you would know that it was essentially done by re-starting the blockchain from a fresh genesis. For the upgrade, node operators had to shut down their nodes, generate a snapshot of the state of Cosmos Hub, and then effectively use that snapshot as the genesis for launching a new blockchain, from block 1.
Now anyone who wants to join Cosmos Hub needs to acquire the genesis for CosmosHub-3, download all the blocks of CosmosHub-3 (but not CosmosHub-1 or CosmosHub-2), and replay them.

Can we “relaunch” Ethereum 1?

Let’s look at a hypothetical application of this method to Ethereum, where we have a really large blockchain (150-160Gb) as well as a fairly large state (40-100Gb, depending on how you store it). The obvious gain of such a “relaunch” would be that newly joining nodes would start with the 40Gb genesis state rather than 150Gb worth of blocks. But downloading a 40Gb genesis is still not a great experience.

State in Ethereum is implicit, only its merkle root hash is explicit

Let us now assume that we can keep these 40Gb implicitly stored “off-chain”, with only the root hash used as the genesis. And let us also start with an empty state. How, then, do we let transactions access parts of the implicit state?

Bear in mind that even now the 40Gb is implicit, and the exact way of acquiring it is an implementation detail. You can run through all 10 million blocks to compute it, or you can download a snapshot of it via fast sync or warp sync, or you can even copy it from someone’s external disk and then re-verify it. Although the state is implicit, we assume that block composers (usually mining pools) have access to that implicit state and will always be able to process all transactions. What we are removing is the assumption that all other validating nodes have access to that implicit state to check that the transactions in a block are valid and that the state root hash presented in the block header matches the result of executing that block.

Isn’t it Stateless Ethereum?

If you have followed Stateless Ethereum at all, you may recognise that this is exactly what we are trying to do there – keep the assumption that block composers have access to the implicit state, but remove the assumption that all validating nodes have the same access. We propose to do this by obligating block composers to add extra proofs to the block, and we call these proofs the “block witness”.

Proofs in the blocks vs proofs in the transactions?

When people first learn about this, they often assume that these extra proofs are provided by the transaction senders and become part of the transaction payload, and we have to explain that no, it is the block composers’ job. But later we discovered that transactions will have to include some extra proof after all. Namely, they will need to prove that the sending address has enough ETH to pay the gas for this transaction, and also for all other transactions from this account with lower nonces. They might also need to prove the nonce of the sending account, so that the node can figure out whether there are any nonce gaps and therefore a potential DDoS attack via a storm of non-viable transactions. More stringent checks can be done, but knowing the ETH balance and the nonce of the sending account is necessary (though perhaps not sufficient) for most of the DDoS-protection heuristics.

Critique of proofs in the transactions

Let us now imagine that we do, in fact, want transaction senders to include proofs for all the pieces of state that their transactions touch. This would be nice because it would simplify the work of charging extra gas for witnesses. The main critique of this usually has to do with Dynamic State Access (DSA, as opposed to Static State Access, SSA). If a transaction goes into a particularly complex smart contract, specifically one involving lots of nested calls to other contracts, the items of state that the transaction will touch can be hard to pre-calculate. This can even be used to “trap” a user by front-running their transactions and making them fail due to insufficient proofs.

ReGenesis provides mitigation

The concern about DSA cannot easily be solved completely, but it can be mitigated to the point that users will very rarely see the inconvenience, and will never get permanently “trapped” and unable to achieve their desired state transition. The mitigation relies on an extra rule: any proof provided with a transaction that checks out against the state root (even if it is not sufficient for the transaction to succeed) becomes part of the implicit state. Therefore, repeated attempts by the user to execute the transaction will keep growing the implicit state, and eventually the transaction will succeed. An attacker trying to “trap” the user would have to come up with ever more sophisticated ways to redirect the transaction’s state access outside of the implicit state, and eventually the attacker will fail.
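
A sketch of this mitigation rule, with hypothetical names: every attached proof that checks out is absorbed into the active (implicit) state before execution is attempted, so even a failed attempt leaves the state larger than before.

```go
package regenesis

// Proof carries a claimed state value plus the merkle nodes backing it.
type Proof struct {
	Value []byte
	Nodes [][]byte
}

// checksOutAgainst is an assumed helper: it would verify Nodes from the
// value leaf up to the given state root.
func (p Proof) checksOutAgainst(root [32]byte) bool {
	// ... omitted in this sketch
	return false
}

// absorbProofs applies the extra rule: any proof that checks out against
// the state root joins the active state, whether or not the transaction
// that carried it then succeeds. Repeated attempts therefore keep growing
// the active state until the transaction finally goes through.
func absorbProofs(activeState map[string][]byte, txProofs map[string]Proof, stateRoot [32]byte) {
	for key, p := range txProofs {
		if p.checksOutAgainst(stateRoot) {
			activeState[key] = p.Value
		}
	}
}
```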

As the implicit state grows from nothing (just after the “relaunch”) to include more and more of the actively accessed state, the proofs that transactions need to provide will shrink. After a while, most transactions won’t need to attach any proofs at all – only those touching very old and “dusty” parts of the state will.

We can keep doing it

I call this “relaunch” ReGenesis, and it can be done regularly to ease the burden on non-mining nodes. It also represents a less dramatic version of Stateless Ethereum.

Downsides

I have not explored many of them yet, but there are three that I already see:

  1. Users may need to have access to the full implicit state to create transactions. I actually see this as a fair compromise.
  2. Users may need to repeat transactions (due to Dynamic State Access) until they eventually achieve the desired state transition.
  3. Some roll-up techniques (that rely on blockchain data for data availability) may break if the network effectively “archives” all the blocks prior to a ReGenesis event.

Please consider the discussion open!
