
Conversation

@hopinheimer
Member

#8208
Currently the implementation follows a naive approach of batching every 50ms, but I do have an idea for a greedy batcher, which I'll be implementing in following commits.
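For illustration, a fixed-interval batcher like the one described can be sketched as follows. This is a minimal, self-contained model, not Lighthouse's actual implementation: the channel of `u64` items stands in for the queue of gossip attestations, and `BATCH_DELAY` stands in for the 50ms window.

```rust
use std::sync::mpsc;
use std::time::{Duration, Instant};

// Stand-in for the 50ms batching window described in the PR.
const BATCH_DELAY: Duration = Duration::from_millis(50);

// Drain the channel into batches: each batch starts with the first item
// to arrive, then collects everything else received within BATCH_DELAY.
fn drain_in_batches(rx: mpsc::Receiver<u64>) -> Vec<Vec<u64>> {
    let mut batches = Vec::new();
    // Block until the first item of the next batch arrives; stop when the
    // sender side is dropped.
    while let Ok(first) = rx.recv() {
        let mut batch = vec![first];
        let deadline = Instant::now() + BATCH_DELAY;
        // Collect everything else that arrives before the window closes.
        loop {
            let now = Instant::now();
            if now >= deadline {
                break;
            }
            match rx.recv_timeout(deadline - now) {
                Ok(item) => batch.push(item),
                Err(_) => break, // window expired or sender dropped
            }
        }
        batches.push(batch);
    }
    batches
}

fn main() {
    let (tx, rx) = mpsc::channel();
    for i in 1..=3u64 {
        tx.send(i).unwrap();
    }
    drop(tx);
    // All three items were already queued, so they land in one batch.
    println!("{:?}", drain_in_batches(rx)); // [[1, 2, 3]]
}
```

A greedy batcher, by contrast, would flush as soon as a worker becomes available rather than always waiting out the full window.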

@michaelsproul michaelsproul added optimization Something to make Lighthouse run more efficiently. beacon-processor Glorious beacon processor, guardian against chaos yet chaotic itself labels Oct 27, 2025
@hopinheimer hopinheimer changed the title Attestation processing optimization with greedy batching Attestation processing optimization with batching Nov 2, 2025
@hopinheimer hopinheimer marked this pull request as ready for review November 2, 2025 22:45
@hopinheimer hopinheimer requested a review from jxs as a code owner November 2, 2025 22:45
@hopinheimer hopinheimer requested a review from eserilev November 2, 2025 22:45
let mut attestations = GossipAttestationBatch::new();
let mut iter = batch_attestation.0.into_iter();

if let Some(first) = iter.next() {
Collaborator

Would it be easier to read to do if !is_empty() { let first = batch_attestation.first().expect("not empty") }?
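The two styles under discussion can be contrasted in isolation. This is an illustrative sketch only; the helper names and the `u32` element type are made up, not taken from the PR.

```rust
// Style used in the PR: consume the collection through an iterator and
// take the first element with `next()`.
fn head_via_iter(batch: Vec<u32>) -> Option<u32> {
    let mut iter = batch.into_iter();
    iter.next()
}

// Style suggested by the reviewer: guard on emptiness first, then use
// `first().expect(..)`, making the non-empty assumption explicit.
fn head_via_expect(batch: &[u32]) -> Option<u32> {
    if !batch.is_empty() {
        Some(*batch.first().expect("not empty"))
    } else {
        None
    }
}

fn main() {
    println!("{:?}", head_via_iter(vec![5, 6])); // Some(5)
    println!("{:?}", head_via_expect(&[5, 6])); // Some(5)
    println!("{:?}", head_via_expect(&[])); // None
}
```

Both are equivalent; the trade-off is purely readability, and the iterator form additionally lets the remaining elements be consumed afterwards.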

pub const QUEUED_ATTESTATION_DELAY: Duration = Duration::from_secs(12);

/// Batched attestation delay.
pub const QUEUED_BATCH_ATTESTATION_DELAY: Duration = Duration::from_millis(50);
Collaborator


This must be a flag, and setting it to 0 should disable the mechanism entirely, so that we have a way to turn it off if we observe propagation degradation. The flag value should also be upper-bounded to a reasonable maximum.
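The flag semantics the reviewer asks for (0 disables, values clamped to an upper bound) could be sketched like this. The constant, function name, and 500ms cap are hypothetical, not Lighthouse's actual CLI:

```rust
use std::time::Duration;

// Hypothetical upper bound on the batching delay flag, in milliseconds.
const MAX_BATCH_DELAY_MS: u64 = 500;

// Interpret the flag value: 0 disables batching entirely (returns None);
// any other value is clamped to MAX_BATCH_DELAY_MS.
fn batch_delay_from_flag(ms: u64) -> Option<Duration> {
    if ms == 0 {
        None // batching disabled
    } else {
        Some(Duration::from_millis(ms.min(MAX_BATCH_DELAY_MS)))
    }
}

fn main() {
    println!("{:?}", batch_delay_from_flag(0)); // None
    println!("{:?}", batch_delay_from_flag(50)); // Some(50ms)
    println!("{:?}", batch_delay_from_flag(10_000)); // Some(500ms), clamped
}
```

Returning `Option<Duration>` makes the "disabled" state explicit at call sites instead of special-casing a zero duration.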

