Settlement is the "final step in the transfer of ownership, involving the physical exchange of securities or payment".
After settlement, the obligations of all the parties have been discharged and the transaction is considered complete.
Any individual EVM is a window into the shared state of the World Computer.
Tl;dr node operators stake ETH to gain the role of validator, earn rewards, and secure Ethereum. This stake can be slashed (partially deducted) for malicious behavior.
Today, the World Computer is SLOW. The EVM is not a high performance environment, both execution and storage are expensive, and we are already pushing up against the limits of Ethereum.
And so, we must look for (credibly neutral) ways to scale.
After years of research and development, the Ethereum community has found the best path forward: rollups.
Rollups are independent, high performance blockchains that settle to Ethereum. Rollups can be fast (and centralized) and STILL benefit from Ethereum security.
But rollups are only part of the solution; while they provide a high-performance execution environment, they do not scale Ethereum's storage capabilities.
In fact, because they are so fast (generating so much data) rollups make the problem worse.
As of today, we have a plan: Danksharding. But we are still far from implementation, and many details need to be filled in. So, let's begin with the big picture idea.
We'll begin with blobs.
Imagine the blockchain like a database that contains all the transactions that have ever happened on the World Computer.
It is critical that this information is always directly available to any node; this is the internal state of Ethereum.
Rollups, on the other hand, are completely outside of Ethereum. Yes, they settle (post a reconstructable copy of all transactions) on the World Computer, but that's just a copy.
It's NOT important that nodes can directly access this data.
What IS important is the guarantee that this data was posted to Ethereum, is completely public, is requestable by anyone, and is 100% available for download.
So this is our design space: data blobs that exist outside of the EVM.
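To make this design space concrete, here is a hedged sketch of a blob-carrying transaction. The field names are illustrative, not the exact EIP-4844 spec: the idea is that the EVM only ever sees a small commitment, while the large blob data lives outside the EVM.

```python
from dataclasses import dataclass

# Illustrative sketch (NOT the exact spec): a transaction that carries
# blob data alongside the block, outside the EVM's state.
@dataclass
class BlobTx:
    to: str
    blob_commitments: list  # small commitments, visible to the EVM
    blobs: list             # large data; public and downloadable, but
                            # never accessible from inside the EVM

tx = BlobTx(
    to="0xRollupInbox",                  # hypothetical rollup contract
    blob_commitments=["commitment-1"],   # what contracts can reference
    blobs=[b"\x00" * 4096],              # the actual bulk data
)
```

Contracts can check that data *was* committed to, without Ethereum's internal state ever having to hold the data itself.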
Today, tomorrow, and forever, it will be 100% necessary for every node to download every block.
But our new scheme will not force every node (or even any single node) to download all the blob data; it only needs to ensure that the data is available in aggregate across the entire network.
We can achieve this effect with some clever peer-to-peer (P2P) networking design.
Tl;dr in P2P networks nodes communicate directly with each other (instead of a centralized node). We can organize a network to store huge amounts of data without crushing any single node.
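The core trick is data availability sampling. Here is a minimal sketch, with assumed names and parameters: the blob is erasure-coded so that any half of the extended chunks can reconstruct the original, and a node samples a few random chunks instead of downloading everything.

```python
import random

def sample_availability(extended_chunks, num_samples=30):
    """Return True if every randomly sampled chunk is present (not None).

    If a malicious publisher withholds enough chunks to make the blob
    unrecoverable (more than half, under erasure coding), each sample
    catches a missing chunk with probability >= 1/2, so 30 samples miss
    the withholding with probability <= 2^-30.
    """
    indices = random.sample(range(len(extended_chunks)), num_samples)
    return all(extended_chunks[i] is not None for i in indices)

extended = [b"chunk"] * 512           # a fully available extended blob
print(sample_availability(extended))  # every sample is present
```

No single node stores everything; collectively, the network's samples make withheld data detectable with overwhelming probability.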
Good news and bad news:
EIP-4844 will deliver the following changes to Ethereum consensus:
EIP-4844 is a huge step forward, creating the blob market and making all the changes needed to the execution layer of Ethereum.
But there is still a lot of work that needs to be done, and a lot of designs that need to be finalized.
It doesn't matter how complete the architecture is without the actual process of data availability sampling.
We also still need to finalize the implementation of the KZG commitment scheme (the theory/math is well understood).
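For intuition, here is a toy illustration of the idea behind polynomial commitments. This is NOT real KZG, which uses elliptic-curve pairings and a trusted setup; it only shows the binding step, where the blob is interpreted as the coefficients of a polynomial and committed to by evaluating it at a point the committer cannot choose.

```python
P = 2**31 - 1  # small prime field for the sketch (real KZG uses curve groups)

def commit(blob_coeffs, s):
    """Evaluate the blob-polynomial at the setup point s, mod P (Horner's rule)."""
    acc = 0
    for c in reversed(blob_coeffs):
        acc = (acc * s + c) % P
    return acc

blob = [7, 0, 3, 12]      # toy blob, read as polynomial coefficients
c = commit(blob, s=9973)  # tiny commitment standing in for a huge blob
```

The useful property: the commitment is constant-size regardless of blob size, and changing even one field element changes the commitment, so sampled chunks can be checked against it.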
Full Danksharding is dependent on another, independent Ethereum upgrade: enshrined-PBS.
Although PBS was originally conceived in the context of MEV, it will become incredibly important for Danksharding.
It turns out that a lot of the work that will go into constructing a blob is pretty computationally intense and will (probably) be unrealistic for a minimal Ethereum node.
PBS will allow blob builders to centralize and specialize without compromising on security.
A future with both PBS and Danksharding might look like this:
This workflow assumes a robust block and blob market, with at least 2 honest competitors bidding for proposer selections.
But, in the worst case, the validator can just build their own; the blocks will simply be suboptimal and the blobs will not be full.
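The proposer's side of this workflow can be sketched in a few lines (function and field names are hypothetical): specialized builders submit bids alongside their blocks, and the proposer simply takes the highest bid, falling back to self-building if no bids arrive.

```python
def select_block(bids, local_block):
    """Pick the highest-bidding builder's block; fall back to self-building.

    bids: list of {"bid": <payment to proposer>, "block": <built payload>}
    local_block: the (suboptimal) block the validator can build itself
    """
    if not bids:
        return local_block  # worst case: no builder market, build your own
    return max(bids, key=lambda b: b["bid"])["block"]

bids = [
    {"bid": 0.9, "block": "builder-A-payload"},
    {"bid": 1.4, "block": "builder-B-payload"},
]
print(select_block(bids, "self-built"))  # highest bidder wins
```

Note the proposer never needs to do the heavy block/blob construction work itself; it only compares bids, which is what keeps the validator role cheap.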
Deep Dive: LMD-GHOST
Another important aspect of Ethereum PoS that needs to change is the fork-choice rule.
Today, LMD-GHOST only looks at blocks. Under Danksharding, the protocol will also need to consider blobs (although some/all of this logic may be released with EIP-4844).
The new rule introduces the concept of "tight coupling" which states that a block is only eligible if all blobs in that block have passed a data availability check.
With tight coupling, if the chain contains even a single invalid blob, the entire chain is invalid.
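Tight coupling can be stated as a tiny validity rule (helper names here are hypothetical, not from the spec): a block is eligible only if every blob it references passes a data availability check, and chain validity is the conjunction over all blocks.

```python
def block_eligible(block, blob_available):
    """A block is eligible only if ALL of its blobs are available."""
    return all(blob_available(blob_id) for blob_id in block["blobs"])

def chain_valid(chain, blob_available):
    """One unavailable blob anywhere invalidates the entire chain."""
    return all(block_eligible(block, blob_available) for block in chain)

available = {"blob-1", "blob-2"}
chain = [
    {"blobs": ["blob-1"]},
    {"blobs": ["blob-2", "blob-3"]},  # blob-3 was withheld
]
print(chain_valid(chain, lambda b: b in available))  # chain is rejected
```

This is what lets the fork-choice rule treat data availability as a first-class validity condition rather than an afterthought.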
The rest of the changes needed are less interesting and more about implementation. Things like "which fields need to be added to blocks" and "how to distribute validators when validator count is unreasonably low."
But if you've made it this far, you get the big picture.
Source Material - Twitter Link
Source Material - PDF