12 changes: 5 additions & 7 deletions RIPS/rip-7728.md
@@ -61,7 +61,8 @@ The introduction of the `L1SLOAD` precompile may increase the requirement for th

**Prerequisite 2**: The L2 sequencer has a notion of the *latest seen L1 block*, which is deterministic over all L2 nodes, i.e. it is part of the L2 state machine. The exact mechanism is not in scope for this RIP.

**Implementation**: When the L2 node encounters a call to the `L1SLOAD` precompiled contract, it first verifies that its input is well-formed. It then retrieves its latest seen L1 block number `l1BlockNumber` and sends an RPC query `eth_getStorageAt(address, storageKey, l1BlockNumber)` to the L1 node. Finally, it writes the received storage value to the designated output buffer.
**Implementation**: When the L2 sequencer encounters a call to the `L1SLOAD` precompiled contract, it first verifies that its input is well-formed. It then retrieves its latest seen L1 block number `l1BlockNumber`, sends an RPC query `eth_getStorageAt(address, storageKey, l1BlockNumber)` to the L1 node, and writes the received storage value to the designated output buffer. Finally, it appends the storage values received for all `L1SLOAD` calls it encountered in the block, in order, to the end of the L2 batch it submits on L1. \

This is an interesting suggestion and it does indeed have some benefits.

My concern is, wouldn't this make "L2 follower nodes" (the ones that receive unsafe blocks on L2 p2p) impossible? Since batch submission delay for most current rollups is 30s or more, they usually sync on L2 p2p, not directly from L1.

On a more general note, I think this RIP should just define the interface and semantics of L1SLOAD, and the underlying implementation can be left to the individual rollups.

Contributor Author

Hey @Thegaram thanks for the feedback!
While the RIP could define only the interface and the expected behavior, I think it would be nice to have a suggested implementation that lets rollups support the opcode without needing their L2 nodes to run L1 archive nodes, as that is a pain point for several rollups right now. Wdyt?
It can definitely be moved to a different section as a recommendation, though I'm not sure it's in the scope of this PR.

Contributor

@Thegaram regarding "L2 follower nodes", since only the sequencer appends the values to the batch and writes them to L1, I think they'll work exactly as they would without this change. They'll need to get the values from L1 just like the sequencer would, and don't benefit from this change. However, they don't need an archive node since they're using recent state. This change was meant primarily to solve the problem for validators that sync from L1, i.e. validating the entire chain. Without this change they need an archive node and multiple RPC calls to it. With this change they don't. Correct?

And as @shahafn said, I think it's good to suggest an implementation that doesn't require an archive node even though it's an implementation detail and each rollup can decide for itself how to handle it. The only "mandatory" part is the interface.

@yoavw, thank you for a well-written explanation of the benefits this brings. IMHO this is a pretty good idea and I'm keen on this solution.
As everyone here already knows, the biggest issue with this RIP was the archive node requirement, and unless I'm missing something, this fixes the issue completely :)

We should restructure the RIP to put more emphasis on the standard interface, and leave the underlying implementation up to the rollup. We can include examples of how one would implement this. We can do this in another PR.

During batch validation, when the L2 node encounters a call to the `L1SLOAD` precompile, it retrieves the value at the corresponding position in the `L1SLOAD` values array at the end of the batch, and writes it to the designated output buffer.
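
As an illustration of the suggested implementation (both the sequencer path and the batch-validation path described above), here is a minimal Go sketch assuming a go-ethereum-style client. The `Sequencer` and `BatchValidator` types and their method names are hypothetical, not part of the RIP:

```go
// Sketch only: illustrates the sequencer and batch-validation paths described
// above. All type and method names here are hypothetical, not part of the RIP.
package l1sload

import (
	"context"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethclient"
)

// Sequencer resolves L1SLOAD calls against the latest seen L1 block and
// remembers the returned values so they can be appended to the batch.
type Sequencer struct {
	l1 *ethclient.Client
	// latestSeenL1Block is the deterministic "latest seen L1 block" from
	// Prerequisite 2; how it is chosen is out of scope for the RIP.
	latestSeenL1Block *big.Int
	// batchValues collects, in execution order, every value returned by
	// L1SLOAD in the current block, to be appended to the end of the batch.
	batchValues []common.Hash
}

// HandleL1Sload services one L1SLOAD call on the sequencer.
func (s *Sequencer) HandleL1Sload(ctx context.Context, addr common.Address, key common.Hash) (common.Hash, error) {
	// eth_getStorageAt(address, storageKey, l1BlockNumber)
	raw, err := s.l1.StorageAt(ctx, addr, key, s.latestSeenL1Block)
	if err != nil {
		return common.Hash{}, err
	}
	value := common.BytesToHash(raw)
	s.batchValues = append(s.batchValues, value)
	return value, nil
}

// BatchValidator replays L1SLOAD calls during batch validation without any
// L1 RPC: the i-th L1SLOAD call reads the i-th value from the array that the
// sequencer appended to the end of the batch.
type BatchValidator struct {
	values []common.Hash // decoded from the end of the submitted batch
	next   int
}

// HandleL1Sload services one L1SLOAD call during batch validation.
func (v *BatchValidator) HandleL1Sload(addr common.Address, key common.Hash) common.Hash {
	value := v.values[v.next]
	v.next++
	return value
}
```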

### Errors

@@ -98,13 +99,10 @@ function loadFromL1(address l1Address, uint256 key1, uint256 key2) public view r

### Which L1 block does `L1SLOAD` read the storage value at?

According to the specification defined above, `L1SLOAD` returns the storage value at the latest L1 block known to the L2 sequencer. There are two related issues:
- How to guarantee the consistent return value of `L1SLOAD`

I think the consistency note should be kept for these reasons.

- Nodes syncing from L2 p2p instead of L1 need to fetch from the same L1 height as the sequencer.
- For proving the values appended to the batch, we need to know which L1 height they were fetched from.

That said, I don't think consistency is an issue; this section just emphasizes that the L2 needs to have a notion of "latest known L1 block" that is consistent across multiple nodes, that's all.

Collaborator

- The risk of loading the storage value from a very stale L1 state.
According to the specification defined above, `L1SLOAD` returns the storage value at the latest L1 block known to the L2 sequencer. \
There is one related issue: the risk of loading the storage value from a very stale L1 state.

First, to ensure the return value is consistent during transaction replay, L2 chains should provide a system contract that stores the information of the latest L1 block known to L2 sequencer. Optimism already provides a predeployed contract [`L1Block`](https://docs.optimism.io/stack/protocol/rollup/smart-contracts#l1block). Scroll has a new system contract [design](https://www.notion.so/scrollzkp/L1Blocks-System-Contract-b1a137eacea74819a3fa57d7d6e52498?pvs=4) that trustlessly imports the L1 block information and also stores other header fields such as state root, timestamp, RANDAO, etc.

Second, L2 protocols determine the L1 block import delay at their own discretion. To make `L1SLOAD` more useful and reduce the risk of reading stale L1 storage states, we argue that the import delay should not be too long, e.g., waiting for finalized state that usually takes about 18-19 minutes. We suggest to wait for around 10 L1 block confirmations that has low risk of Ethereum chain re-organization while the import delay is fairly short (around 2 min). To adopt this, the L2 sequencers need to be capable of handling the situation when there is a long L1 chain re-organization. Furthermore, if the application is sensitive to stale storage reads, developers can limit the difference between the L1 block number retrieved from the system contract and the latest L1 block number per application requirement.
L2 protocols determine the L1 block import delay at their own discretion. To make `L1SLOAD` more useful and reduce the risk of reading stale L1 storage states, we argue that the import delay should not be too long, e.g., waiting for finalized state, which usually takes about 18-19 minutes. We suggest waiting for around 10 L1 block confirmations, which carries low risk of Ethereum chain re-organization while keeping the import delay fairly short (around 2 min). To adopt this, the L2 sequencers need to be capable of handling the situation when there is a long L1 chain re-organization. Furthermore, if the application is sensitive to stale storage reads, developers can limit the difference between the L1 block number retrieved from the system contract and the latest L1 block number per application requirement.
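
For illustration, here is a rough Go sketch of how a sequencer might pick the L1 block to import at a fixed confirmation depth and detect a re-organization deeper than that depth. The `NextImport` function and the `confirmations` constant are illustrative assumptions, not prescribed by the RIP:

```go
// Sketch only: choosing the L1 block to import at a fixed confirmation depth
// and detecting a re-org deeper than that depth. Names are illustrative.
package l1import

import (
	"context"
	"errors"
	"math/big"

	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/ethclient"
)

const confirmations = 10 // ~2 min import delay with low re-org risk

// NextImport returns the L1 header the sequencer should import next. It also
// re-checks the previously imported header: if that block is no longer
// canonical, the re-org was deeper than the confirmation depth and must be
// handled by the sequencer (e.g. by rolling back imported L1 information).
func NextImport(ctx context.Context, l1 *ethclient.Client, prev *types.Header) (*types.Header, error) {
	head, err := l1.BlockNumber(ctx)
	if err != nil {
		return nil, err
	}
	if head < confirmations {
		return nil, errors.New("L1 chain shorter than confirmation depth")
	}
	target := new(big.Int).SetUint64(head - confirmations)

	next, err := l1.HeaderByNumber(ctx, target)
	if err != nil {
		return nil, err
	}

	if prev != nil {
		cur, err := l1.HeaderByNumber(ctx, prev.Number)
		if err != nil {
			return nil, err
		}
		if cur.Hash() != prev.Hash() {
			return nil, errors.New("deep L1 re-org: previously imported block was replaced")
		}
	}
	return next, nil
}
```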

### The overhead to L2 sequencers from additional RPC latency
The `L1SLOAD` precompile introduces risks of additional RPC latency and intermittent RPC errors. Both risks can be mitigated by running an L1 node in the same cluster as the L2 sequencer. It is preferable for an L2 operator to run their own L1 node instead of using a third party, to get better security and reliability. We will perform more benchmarks to quantify the latency overhead in such a setting.
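
Purely as an illustration (the RIP does not prescribe this), a sequencer could also bound the impact of RPC latency and transient errors with a short per-attempt timeout and a bounded retry around the `eth_getStorageAt` query; the helper below and its parameters are assumptions:

```go
// Sketch only: bounding the latency of, and retrying transient errors from,
// the eth_getStorageAt query behind L1SLOAD. Parameters are illustrative.
package l1sload

import (
	"context"
	"math/big"
	"time"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethclient"
)

// storageAtWithRetry queries L1 storage with a short per-attempt timeout and
// a bounded number of retries, so a slow or flaky L1 RPC endpoint cannot
// stall L2 block building indefinitely.
func storageAtWithRetry(ctx context.Context, l1 *ethclient.Client,
	addr common.Address, key common.Hash, block *big.Int) ([]byte, error) {

	var lastErr error
	for attempt := 0; attempt < 3; attempt++ {
		callCtx, cancel := context.WithTimeout(ctx, 200*time.Millisecond)
		val, err := l1.StorageAt(callCtx, addr, key, block)
		cancel()
		if err == nil {
			return val, nil
		}
		lastErr = err
		time.Sleep(50 * time.Millisecond) // brief backoff before retrying
	}
	return nil, lastErr
}
```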