
Conversation

@twoeths
Contributor

@twoeths twoeths commented Sep 22, 2025

Motivation

  • starting from fulu, we need to track the number of peers per data column subnet

Description

  • track it in gossipsub (a rough sketch of the idea is included after this list)
  • also track peers and topics by fork boundary, not fork name
  • will need to render this on the main Grafana dashboard after this PR
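
Roughly, the idea is to walk the gossipsub mesh, parse each data column subnet topic, and bump a per-boundary counter that then feeds the gauges. A minimal sketch of that idea — countDataColumnSubnetPeers, parseTopic, and the surrounding types are hypothetical stand-ins, not the actual Lodestar code:

// Minimal sketch (hypothetical helpers and types, not the actual Lodestar code):
// count gossipsub mesh peers per data column subnet, keyed by fork boundary label.
type ForkBoundaryLabel = string; // e.g. "fulu" or a BPO boundary label

function countDataColumnSubnetPeers(
  meshPeersByTopic: Map<string, Set<string>>, // gossip topic -> peer ids currently in the mesh
  parseTopic: (topic: string) => {boundary: ForkBoundaryLabel; dataColumnSubnet?: number} | null
): Map<ForkBoundaryLabel, number[]> {
  const peersBySubnetByBoundary = new Map<ForkBoundaryLabel, number[]>();
  for (const [topic, peers] of meshPeersByTopic) {
    const parsed = parseTopic(topic);
    if (parsed == null || parsed.dataColumnSubnet === undefined) continue; // not a data column subnet topic
    const counts = peersBySubnetByBoundary.get(parsed.boundary) ?? [];
    counts[parsed.dataColumnSubnet] = (counts[parsed.dataColumnSubnet] ?? 0) + peers.size;
    peersBySubnetByBoundary.set(parsed.boundary, counts);
  }
  return peersBySubnetByBoundary;
}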

@twoeths twoeths requested a review from a team as a code owner September 22, 2025 07:36
@gemini-code-assist
Contributor

Summary of Changes

Hello @twoeths, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the network's gossipsub layer by implementing peer tracking for data column subnets, which is crucial for future network upgrades. It also standardizes how network peers and topics are categorized by transitioning from fork names to more granular fork boundaries, ensuring more accurate and future-proof metric collection.

Highlights

  • Data Column Subnet Peer Tracking: Introduced new functionality to track the number of peers connected to each data column subnet within the gossipsub network, a requirement for the upcoming 'fulu' update.
  • Fork Boundary-based Tracking: Refactored the peer and topic tracking mechanism to use ForkBoundaryLabel instead of ForkName, providing a more precise and consistent way to categorize network metrics across different forks.
  • Metrics Update: Updated existing Prometheus metrics for gossip mesh and topic peers to reflect the change from fork to boundary labels, and added new metrics specifically for data column subnet peers.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces tracking for the number of peers per data column subnet topic, a feature needed for the fulu fork. It also refactors the existing peer and topic tracking to use fork boundaries instead of fork names for more precise metrics. The changes look good and are consistent with the stated goals. I've added one suggestion to improve maintainability by reducing code duplication in the metrics collection logic.

Comment on lines 253 to 271
  for (const [boundary, peersByBeaconAttSubnet] of peersByBeaconAttSubnetByBoundary.map) {
    for (let subnet = 0; subnet < ATTESTATION_SUBNET_COUNT; subnet++) {
      metricsGossip.peersByBeaconAttestationSubnet.set(
-       {fork, subnet: attSubnetLabel(subnet)},
+       {boundary, subnet: attSubnetLabel(subnet)},
        peersByBeaconAttSubnet[subnet] ?? 0
      );
    }
  }
- for (const [fork, peersByBeaconSyncSubnet] of peersByBeaconSyncSubnetByFork.map) {
+ for (const [boundary, peersByBeaconSyncSubnet] of peersByBeaconSyncSubnetByBoundary.map) {
    for (let subnet = 0; subnet < SYNC_COMMITTEE_SUBNET_COUNT; subnet++) {
      // SYNC_COMMITTEE_SUBNET_COUNT is < 9, no need to prepend a 0 to the label
-     metricsGossip.peersBySyncCommitteeSubnet.set({fork, subnet}, peersByBeaconSyncSubnet[subnet] ?? 0);
+     metricsGossip.peersBySyncCommitteeSubnet.set({boundary, subnet}, peersByBeaconSyncSubnet[subnet] ?? 0);
    }
  }
  for (const [boundary, peersByDataColumnSubnet] of peersByDataColumnSubnetByBoundary.map) {
    for (let subnet = 0; subnet < NUMBER_OF_COLUMNS; subnet++) {
      metricsGossip.peersByDataColumnSubnet.set({boundary, subnet}, peersByDataColumnSubnet[subnet] ?? 0);
    }
  }
Contributor

medium

There's some code duplication in these loops for setting subnet-based metrics. To improve maintainability, you could extract this logic into a helper function. This function could take the map of peers, the metrics gauge, the subnet count, and an optional subnet label formatter as arguments.

For example:

function setSubnetMetrics(
  // the ...ByBoundary containers expose their underlying Map via a .map property
  peersBySubnetByBoundary: {map: Map<ForkBoundaryLabel, number[]>},
  metricsGauge: {set: (labels: {boundary: ForkBoundaryLabel; subnet: string | number}, value: number) => void},
  subnetCount: number,
  subnetLabelFn: (subnet: number) => string | number = (s) => s
) {
  for (const [boundary, peersBySubnet] of peersBySubnetByBoundary.map) {
    for (let subnet = 0; subnet < subnetCount; subnet++) {
      metricsGauge.set(
        {boundary, subnet: subnetLabelFn(subnet)},
        peersBySubnet[subnet] ?? 0
      );
    }
  }
}

Then you could replace the loops with calls to this helper:

setSubnetMetrics(
  peersByBeaconAttSubnetByBoundary,
  metricsGossip.peersByBeaconAttestationSubnet,
  ATTESTATION_SUBNET_COUNT,
  attSubnetLabel
);

setSubnetMetrics(
  peersByBeaconSyncSubnetByBoundary,
  metricsGossip.peersBySyncCommitteeSubnet,
  SYNC_COMMITTEE_SUBNET_COUNT
);

setSubnetMetrics(
  peersByDataColumnSubnetByBoundary,
  metricsGossip.peersByDataColumnSubnet,
  NUMBER_OF_COLUMNS
);

@codecov

codecov bot commented Sep 22, 2025

Codecov Report

❌ Patch coverage is 3.63636% with 53 lines in your changes missing coverage. Please review.
✅ Project coverage is 52.23%. Comparing base (feed916) to head (ebf29be).
⚠️ Report is 7 commits behind head on unstable.

Additional details and impacted files
@@             Coverage Diff              @@
##           unstable    #8442      +/-   ##
============================================
- Coverage     52.24%   52.23%   -0.02%     
============================================
  Files           853      853              
  Lines         64770    64797      +27     
  Branches       4766     4767       +1     
============================================
+ Hits          33841    33845       +4     
- Misses        30859    30882      +23     
  Partials         70       70              

@github-actions
Contributor

Performance Report

🚀🚀 Significant benchmark improvement detected

Benchmark suite Current: dc21c68 Previous: feed916 Ratio
forkChoice updateHead vc 600000 bc 64 eq 300000 13.028 ms/op 40.195 ms/op 0.32
Full benchmark results
Benchmark suite Current: dc21c68 Previous: feed916 Ratio
getPubkeys - index2pubkey - req 1000 vs - 250000 vc 886.28 us/op 1.0171 ms/op 0.87
getPubkeys - validatorsArr - req 1000 vs - 250000 vc 35.836 us/op 37.361 us/op 0.96
BLS verify - blst 841.95 us/op 863.21 us/op 0.98
BLS verifyMultipleSignatures 3 - blst 1.2940 ms/op 1.2661 ms/op 1.02
BLS verifyMultipleSignatures 8 - blst 2.0984 ms/op 1.9167 ms/op 1.09
BLS verifyMultipleSignatures 32 - blst 5.9158 ms/op 5.6415 ms/op 1.05
BLS verifyMultipleSignatures 64 - blst 9.9918 ms/op 10.860 ms/op 0.92
BLS verifyMultipleSignatures 128 - blst 19.016 ms/op 17.584 ms/op 1.08
BLS deserializing 10000 signatures 691.34 ms/op 710.77 ms/op 0.97
BLS deserializing 100000 signatures 6.9311 s/op 7.1602 s/op 0.97
BLS verifyMultipleSignatures - same message - 3 - blst 911.82 us/op 956.30 us/op 0.95
BLS verifyMultipleSignatures - same message - 8 - blst 1.0195 ms/op 1.2327 ms/op 0.83
BLS verifyMultipleSignatures - same message - 32 - blst 1.7429 ms/op 1.7587 ms/op 0.99
BLS verifyMultipleSignatures - same message - 64 - blst 2.5262 ms/op 2.6545 ms/op 0.95
BLS verifyMultipleSignatures - same message - 128 - blst 4.5324 ms/op 4.5207 ms/op 1.00
BLS aggregatePubkeys 32 - blst 23.350 us/op 19.875 us/op 1.17
BLS aggregatePubkeys 128 - blst 75.417 us/op 70.841 us/op 1.06
notSeenSlots=1 numMissedVotes=1 numBadVotes=10 67.264 ms/op 59.818 ms/op 1.12
notSeenSlots=1 numMissedVotes=0 numBadVotes=4 53.457 ms/op 55.045 ms/op 0.97
notSeenSlots=2 numMissedVotes=1 numBadVotes=10 41.835 ms/op 39.644 ms/op 1.06
getSlashingsAndExits - default max 94.057 us/op 75.264 us/op 1.25
getSlashingsAndExits - 2k 399.28 us/op 350.21 us/op 1.14
isKnown best case - 1 super set check 438.00 ns/op 224.00 ns/op 1.96
isKnown normal case - 2 super set checks 433.00 ns/op 214.00 ns/op 2.02
isKnown worse case - 16 super set checks 430.00 ns/op 217.00 ns/op 1.98
InMemoryCheckpointStateCache - add get delete 2.6300 us/op 2.4790 us/op 1.06
validate api signedAggregateAndProof - struct 1.6906 ms/op 1.8987 ms/op 0.89
validate gossip signedAggregateAndProof - struct 1.4955 ms/op 1.4548 ms/op 1.03
batch validate gossip attestation - vc 640000 - chunk 32 113.90 us/op 121.40 us/op 0.94
batch validate gossip attestation - vc 640000 - chunk 64 98.992 us/op 108.82 us/op 0.91
batch validate gossip attestation - vc 640000 - chunk 128 102.85 us/op 101.03 us/op 1.02
batch validate gossip attestation - vc 640000 - chunk 256 104.52 us/op 107.08 us/op 0.98
pickEth1Vote - no votes 794.73 us/op 1.0685 ms/op 0.74
pickEth1Vote - max votes 9.6444 ms/op 7.8945 ms/op 1.22
pickEth1Vote - Eth1Data hashTreeRoot value x2048 14.258 ms/op 14.372 ms/op 0.99
pickEth1Vote - Eth1Data hashTreeRoot tree x2048 31.341 ms/op 22.212 ms/op 1.41
pickEth1Vote - Eth1Data fastSerialize value x2048 351.77 us/op 453.67 us/op 0.78
pickEth1Vote - Eth1Data fastSerialize tree x2048 3.6438 ms/op 4.4694 ms/op 0.82
bytes32 toHexString 519.00 ns/op 374.00 ns/op 1.39
bytes32 Buffer.toString(hex) 421.00 ns/op 269.00 ns/op 1.57
bytes32 Buffer.toString(hex) from Uint8Array 700.00 ns/op 325.00 ns/op 2.15
bytes32 Buffer.toString(hex) + 0x 448.00 ns/op 295.00 ns/op 1.52
Object access 1 prop 0.36300 ns/op 0.13900 ns/op 2.61
Map access 1 prop 0.36100 ns/op 0.13100 ns/op 2.76
Object get x1000 6.4610 ns/op 6.1500 ns/op 1.05
Map get x1000 5.9440 ns/op 6.6480 ns/op 0.89
Object set x1000 22.858 ns/op 31.131 ns/op 0.73
Map set x1000 19.038 ns/op 21.374 ns/op 0.89
Return object 10000 times 0.30140 ns/op 0.29920 ns/op 1.01
Throw Error 10000 times 3.8097 us/op 4.6789 us/op 0.81
toHex 98.727 ns/op 150.70 ns/op 0.66
Buffer.from 105.32 ns/op 134.16 ns/op 0.79
shared Buffer 88.206 ns/op 86.486 ns/op 1.02
fastMsgIdFn sha256 / 200 bytes 2.7160 us/op 2.2840 us/op 1.19
fastMsgIdFn h32 xxhash / 200 bytes 404.00 ns/op 232.00 ns/op 1.74
fastMsgIdFn h64 xxhash / 200 bytes 519.00 ns/op 301.00 ns/op 1.72
fastMsgIdFn sha256 / 1000 bytes 6.9330 us/op 8.1230 us/op 0.85
fastMsgIdFn h32 xxhash / 1000 bytes 725.00 ns/op 378.00 ns/op 1.92
fastMsgIdFn h64 xxhash / 1000 bytes 541.00 ns/op 380.00 ns/op 1.42
fastMsgIdFn sha256 / 10000 bytes 63.658 us/op 68.993 us/op 0.92
fastMsgIdFn h32 xxhash / 10000 bytes 2.1880 us/op 1.8540 us/op 1.18
fastMsgIdFn h64 xxhash / 10000 bytes 1.7080 us/op 1.3360 us/op 1.28
send data - 1000 256B messages 17.661 ms/op 19.113 ms/op 0.92
send data - 1000 512B messages 20.063 ms/op 22.571 ms/op 0.89
send data - 1000 1024B messages 28.871 ms/op 30.502 ms/op 0.95
send data - 1000 1200B messages 21.997 ms/op 28.167 ms/op 0.78
send data - 1000 2048B messages 23.068 ms/op 27.648 ms/op 0.83
send data - 1000 4096B messages 23.451 ms/op 30.816 ms/op 0.76
send data - 1000 16384B messages 41.740 ms/op 48.940 ms/op 0.85
send data - 1000 65536B messages 94.144 ms/op 128.05 ms/op 0.74
enrSubnets - fastDeserialize 64 bits 1.1360 us/op 932.00 ns/op 1.22
enrSubnets - ssz BitVector 64 bits 622.00 ns/op 369.00 ns/op 1.69
enrSubnets - fastDeserialize 4 bits 432.00 ns/op 160.00 ns/op 2.70
enrSubnets - ssz BitVector 4 bits 634.00 ns/op 335.00 ns/op 1.89
prioritizePeers score -10:0 att 32-0.1 sync 2-0 258.23 us/op 248.11 us/op 1.04
prioritizePeers score 0:0 att 32-0.25 sync 2-0.25 248.77 us/op 282.95 us/op 0.88
prioritizePeers score 0:0 att 32-0.5 sync 2-0.5 409.50 us/op 401.90 us/op 1.02
prioritizePeers score 0:0 att 64-0.75 sync 4-0.75 702.85 us/op 757.43 us/op 0.93
prioritizePeers score 0:0 att 64-1 sync 4-1 893.79 us/op 915.38 us/op 0.98
array of 16000 items push then shift 1.2658 us/op 1.6707 us/op 0.76
LinkedList of 16000 items push then shift 9.1050 ns/op 9.1490 ns/op 1.00
array of 16000 items push then pop 89.569 ns/op 85.822 ns/op 1.04
LinkedList of 16000 items push then pop 8.5300 ns/op 8.6020 ns/op 0.99
array of 24000 items push then shift 1.8676 us/op 2.4616 us/op 0.76
LinkedList of 24000 items push then shift 8.1110 ns/op 9.6990 ns/op 0.84
array of 24000 items push then pop 102.92 ns/op 119.21 ns/op 0.86
LinkedList of 24000 items push then pop 6.6700 ns/op 8.4330 ns/op 0.79
intersect bitArray bitLen 8 5.6090 ns/op 6.5010 ns/op 0.86
intersect array and set length 8 34.239 ns/op 40.821 ns/op 0.84
intersect bitArray bitLen 128 27.825 ns/op 30.432 ns/op 0.91
intersect array and set length 128 761.85 ns/op 643.92 ns/op 1.18
bitArray.getTrueBitIndexes() bitLen 128 1.4030 us/op 1.0270 us/op 1.37
bitArray.getTrueBitIndexes() bitLen 248 2.0670 us/op 1.9080 us/op 1.08
bitArray.getTrueBitIndexes() bitLen 512 3.9960 us/op 3.9280 us/op 1.02
Buffer.concat 32 items 983.00 ns/op 694.00 ns/op 1.42
Uint8Array.set 32 items 1.0430 us/op 1.0180 us/op 1.02
Buffer.copy 1.9740 us/op 2.0770 us/op 0.95
Uint8Array.set - with subarray 1.7730 us/op 1.6840 us/op 1.05
Uint8Array.set - without subarray 1.3240 us/op 1.0100 us/op 1.31
getUint32 - dataview 403.00 ns/op 219.00 ns/op 1.84
getUint32 - manual 328.00 ns/op 131.00 ns/op 2.50
Set add up to 64 items then delete first 2.0921 us/op 2.0925 us/op 1.00
OrderedSet add up to 64 items then delete first 3.3709 us/op 3.5337 us/op 0.95
Set add up to 64 items then delete last 2.5803 us/op 2.5690 us/op 1.00
OrderedSet add up to 64 items then delete last 3.5524 us/op 5.4131 us/op 0.66
Set add up to 64 items then delete middle 2.8372 us/op 2.4632 us/op 1.15
OrderedSet add up to 64 items then delete middle 7.4928 us/op 7.1417 us/op 1.05
Set add up to 128 items then delete first 6.4364 us/op 7.2003 us/op 0.89
OrderedSet add up to 128 items then delete first 8.5762 us/op 11.934 us/op 0.72
Set add up to 128 items then delete last 4.5330 us/op 7.1051 us/op 0.64
OrderedSet add up to 128 items then delete last 8.4739 us/op 11.572 us/op 0.73
Set add up to 128 items then delete middle 4.7643 us/op 6.9516 us/op 0.69
OrderedSet add up to 128 items then delete middle 15.763 us/op 18.325 us/op 0.86
Set add up to 256 items then delete first 13.554 us/op 14.598 us/op 0.93
OrderedSet add up to 256 items then delete first 15.176 us/op 23.374 us/op 0.65
Set add up to 256 items then delete last 9.5214 us/op 14.073 us/op 0.68
OrderedSet add up to 256 items then delete last 15.358 us/op 21.720 us/op 0.71
Set add up to 256 items then delete middle 11.916 us/op 13.898 us/op 0.86
OrderedSet add up to 256 items then delete middle 43.436 us/op 51.492 us/op 0.84
transfer serialized Status (84 B) 2.6640 us/op 2.2390 us/op 1.19
copy serialized Status (84 B) 1.7130 us/op 1.2120 us/op 1.41
transfer serialized SignedVoluntaryExit (112 B) 2.0670 us/op 2.2070 us/op 0.94
copy serialized SignedVoluntaryExit (112 B) 1.2450 us/op 1.1550 us/op 1.08
transfer serialized ProposerSlashing (416 B) 2.3600 us/op 2.4420 us/op 0.97
copy serialized ProposerSlashing (416 B) 1.6130 us/op 1.4300 us/op 1.13
transfer serialized Attestation (485 B) 2.0780 us/op 2.3190 us/op 0.90
copy serialized Attestation (485 B) 1.4970 us/op 1.4330 us/op 1.04
transfer serialized AttesterSlashing (33232 B) 2.0510 us/op 2.3070 us/op 0.89
copy serialized AttesterSlashing (33232 B) 3.1610 us/op 4.1910 us/op 0.75
transfer serialized Small SignedBeaconBlock (128000 B) 2.8350 us/op 2.9060 us/op 0.98
copy serialized Small SignedBeaconBlock (128000 B) 7.4660 us/op 13.202 us/op 0.57
transfer serialized Avg SignedBeaconBlock (200000 B) 2.2560 us/op 3.4080 us/op 0.66
copy serialized Avg SignedBeaconBlock (200000 B) 11.419 us/op 20.047 us/op 0.57
transfer serialized BlobsSidecar (524380 B) 3.0330 us/op 3.9270 us/op 0.77
copy serialized BlobsSidecar (524380 B) 64.286 us/op 71.285 us/op 0.90
transfer serialized Big SignedBeaconBlock (1000000 B) 3.5020 us/op 3.9930 us/op 0.88
copy serialized Big SignedBeaconBlock (1000000 B) 121.47 us/op 120.09 us/op 1.01
pass gossip attestations to forkchoice per slot 2.5865 ms/op 2.9264 ms/op 0.88
forkChoice updateHead vc 100000 bc 64 eq 0 451.06 us/op 482.04 us/op 0.94
forkChoice updateHead vc 600000 bc 64 eq 0 3.5071 ms/op 3.7556 ms/op 0.93
forkChoice updateHead vc 1000000 bc 64 eq 0 4.6938 ms/op 5.6089 ms/op 0.84
forkChoice updateHead vc 600000 bc 320 eq 0 2.3269 ms/op 3.3334 ms/op 0.70
forkChoice updateHead vc 600000 bc 1200 eq 0 2.4435 ms/op 3.4249 ms/op 0.71
forkChoice updateHead vc 600000 bc 7200 eq 0 2.7556 ms/op 3.4553 ms/op 0.80
forkChoice updateHead vc 600000 bc 64 eq 1000 10.126 ms/op 10.833 ms/op 0.93
forkChoice updateHead vc 600000 bc 64 eq 10000 10.165 ms/op 10.956 ms/op 0.93
forkChoice updateHead vc 600000 bc 64 eq 300000 13.028 ms/op 40.195 ms/op 0.32
computeDeltas 500000 validators 300 proto nodes 3.7230 ms/op 4.1146 ms/op 0.90
computeDeltas 500000 validators 1200 proto nodes 4.2051 ms/op 4.1245 ms/op 1.02
computeDeltas 500000 validators 7200 proto nodes 4.1671 ms/op 4.1381 ms/op 1.01
computeDeltas 750000 validators 300 proto nodes 6.1870 ms/op 6.1319 ms/op 1.01
computeDeltas 750000 validators 1200 proto nodes 5.9515 ms/op 6.1703 ms/op 0.96
computeDeltas 750000 validators 7200 proto nodes 6.1105 ms/op 6.0755 ms/op 1.01
computeDeltas 1400000 validators 300 proto nodes 11.371 ms/op 11.467 ms/op 0.99
computeDeltas 1400000 validators 1200 proto nodes 12.614 ms/op 11.400 ms/op 1.11
computeDeltas 1400000 validators 7200 proto nodes 13.097 ms/op 11.561 ms/op 1.13
computeDeltas 2100000 validators 300 proto nodes 19.794 ms/op 17.342 ms/op 1.14
computeDeltas 2100000 validators 1200 proto nodes 19.809 ms/op 17.299 ms/op 1.15
computeDeltas 2100000 validators 7200 proto nodes 18.212 ms/op 17.151 ms/op 1.06
altair processAttestation - 250000 vs - 7PWei normalcase 3.6668 ms/op 2.2016 ms/op 1.67
altair processAttestation - 250000 vs - 7PWei worstcase 4.8535 ms/op 3.0831 ms/op 1.57
altair processAttestation - setStatus - 1/6 committees join 165.91 us/op 137.19 us/op 1.21
altair processAttestation - setStatus - 1/3 committees join 281.51 us/op 243.62 us/op 1.16
altair processAttestation - setStatus - 1/2 committees join 343.53 us/op 322.45 us/op 1.07
altair processAttestation - setStatus - 2/3 committees join 421.29 us/op 424.21 us/op 0.99
altair processAttestation - setStatus - 4/5 committees join 692.58 us/op 579.57 us/op 1.19
altair processAttestation - setStatus - 100% committees join 804.61 us/op 680.55 us/op 1.18
altair processBlock - 250000 vs - 7PWei normalcase 6.7303 ms/op 6.2051 ms/op 1.08
altair processBlock - 250000 vs - 7PWei normalcase hashState 35.507 ms/op 27.952 ms/op 1.27
altair processBlock - 250000 vs - 7PWei worstcase 44.148 ms/op 41.515 ms/op 1.06
altair processBlock - 250000 vs - 7PWei worstcase hashState 96.901 ms/op 88.186 ms/op 1.10
phase0 processBlock - 250000 vs - 7PWei normalcase 2.7675 ms/op 2.5123 ms/op 1.10
phase0 processBlock - 250000 vs - 7PWei worstcase 25.404 ms/op 24.715 ms/op 1.03
altair processEth1Data - 250000 vs - 7PWei normalcase 357.26 us/op 379.24 us/op 0.94
getExpectedWithdrawals 250000 eb:1,eth1:1,we:0,wn:0,smpl:15 13.963 us/op 5.8510 us/op 2.39
getExpectedWithdrawals 250000 eb:0.95,eth1:0.1,we:0.05,wn:0,smpl:219 42.494 us/op 42.266 us/op 1.01
getExpectedWithdrawals 250000 eb:0.95,eth1:0.3,we:0.05,wn:0,smpl:42 18.230 us/op 9.8930 us/op 1.84
getExpectedWithdrawals 250000 eb:0.95,eth1:0.7,we:0.05,wn:0,smpl:18 10.706 us/op 9.4940 us/op 1.13
getExpectedWithdrawals 250000 eb:0.1,eth1:0.1,we:0,wn:0,smpl:1020 150.43 us/op 152.78 us/op 0.98
getExpectedWithdrawals 250000 eb:0.03,eth1:0.03,we:0,wn:0,smpl:11777 1.7900 ms/op 1.9004 ms/op 0.94
getExpectedWithdrawals 250000 eb:0.01,eth1:0.01,we:0,wn:0,smpl:16384 1.8389 ms/op 2.4067 ms/op 0.76
getExpectedWithdrawals 250000 eb:0,eth1:0,we:0,wn:0,smpl:16384 2.2761 ms/op 2.4349 ms/op 0.93
getExpectedWithdrawals 250000 eb:0,eth1:0,we:0,wn:0,nocache,smpl:16384 5.0090 ms/op 4.9024 ms/op 1.02
getExpectedWithdrawals 250000 eb:0,eth1:1,we:0,wn:0,smpl:16384 1.9744 ms/op 2.4294 ms/op 0.81
getExpectedWithdrawals 250000 eb:0,eth1:1,we:0,wn:0,nocache,smpl:16384 5.0647 ms/op 4.9302 ms/op 1.03
Tree 40 250000 create 620.65 ms/op 437.51 ms/op 1.42
Tree 40 250000 get(125000) 141.99 ns/op 149.21 ns/op 0.95
Tree 40 250000 set(125000) 1.4254 us/op 1.4801 us/op 0.96
Tree 40 250000 toArray() 22.592 ms/op 15.690 ms/op 1.44
Tree 40 250000 iterate all - toArray() + loop 25.098 ms/op 16.105 ms/op 1.56
Tree 40 250000 iterate all - get(i) 58.081 ms/op 60.667 ms/op 0.96
Array 250000 create 4.3567 ms/op 2.4855 ms/op 1.75
Array 250000 clone - spread 4.2151 ms/op 826.81 us/op 5.10
Array 250000 get(125000) 0.58300 ns/op 0.42700 ns/op 1.37
Array 250000 set(125000) 0.74800 ns/op 0.45600 ns/op 1.64
Array 250000 iterate all - loop 80.570 us/op 85.993 us/op 0.94
phase0 afterProcessEpoch - 250000 vs - 7PWei 40.230 ms/op 44.816 ms/op 0.90
Array.fill - length 1000000 3.0419 ms/op 3.7504 ms/op 0.81
Array push - length 1000000 18.527 ms/op 14.760 ms/op 1.26
Array.get 0.27246 ns/op 0.29748 ns/op 0.92
Uint8Array.get 0.35381 ns/op 0.45037 ns/op 0.79
phase0 beforeProcessEpoch - 250000 vs - 7PWei 21.612 ms/op 17.748 ms/op 1.22
altair processEpoch - mainnet_e81889 339.17 ms/op 281.89 ms/op 1.20
mainnet_e81889 - altair beforeProcessEpoch 19.964 ms/op 18.983 ms/op 1.05
mainnet_e81889 - altair processJustificationAndFinalization 5.9000 us/op 5.6110 us/op 1.05
mainnet_e81889 - altair processInactivityUpdates 4.1322 ms/op 4.3490 ms/op 0.95
mainnet_e81889 - altair processRewardsAndPenalties 52.444 ms/op 45.242 ms/op 1.16
mainnet_e81889 - altair processRegistryUpdates 1.0320 us/op 798.00 ns/op 1.29
mainnet_e81889 - altair processSlashings 607.00 ns/op 185.00 ns/op 3.28
mainnet_e81889 - altair processEth1DataReset 492.00 ns/op 179.00 ns/op 2.75
mainnet_e81889 - altair processEffectiveBalanceUpdates 1.0811 ms/op 1.2225 ms/op 0.88
mainnet_e81889 - altair processSlashingsReset 1.4390 us/op 995.00 ns/op 1.45
mainnet_e81889 - altair processRandaoMixesReset 1.7400 us/op 1.2840 us/op 1.36
mainnet_e81889 - altair processHistoricalRootsUpdate 478.00 ns/op 200.00 ns/op 2.39
mainnet_e81889 - altair processParticipationFlagUpdates 1.0380 us/op 538.00 ns/op 1.93
mainnet_e81889 - altair processSyncCommitteeUpdates 495.00 ns/op 141.00 ns/op 3.51
mainnet_e81889 - altair afterProcessEpoch 45.561 ms/op 46.016 ms/op 0.99
capella processEpoch - mainnet_e217614 1.0466 s/op 926.75 ms/op 1.13
mainnet_e217614 - capella beforeProcessEpoch 68.613 ms/op 66.433 ms/op 1.03
mainnet_e217614 - capella processJustificationAndFinalization 5.3660 us/op 6.6150 us/op 0.81
mainnet_e217614 - capella processInactivityUpdates 14.408 ms/op 14.425 ms/op 1.00
mainnet_e217614 - capella processRewardsAndPenalties 199.65 ms/op 201.05 ms/op 0.99
mainnet_e217614 - capella processRegistryUpdates 9.6290 us/op 6.9760 us/op 1.38
mainnet_e217614 - capella processSlashings 505.00 ns/op 180.00 ns/op 2.81
mainnet_e217614 - capella processEth1DataReset 457.00 ns/op 178.00 ns/op 2.57
mainnet_e217614 - capella processEffectiveBalanceUpdates 3.7136 ms/op 5.4153 ms/op 0.69
mainnet_e217614 - capella processSlashingsReset 1.4260 us/op 1.1250 us/op 1.27
mainnet_e217614 - capella processRandaoMixesReset 1.6360 us/op 1.2590 us/op 1.30
mainnet_e217614 - capella processHistoricalRootsUpdate 465.00 ns/op 196.00 ns/op 2.37
mainnet_e217614 - capella processParticipationFlagUpdates 1.0190 us/op 655.00 ns/op 1.56
mainnet_e217614 - capella afterProcessEpoch 110.64 ms/op 123.93 ms/op 0.89
phase0 processEpoch - mainnet_e58758 381.10 ms/op 294.79 ms/op 1.29
mainnet_e58758 - phase0 beforeProcessEpoch 91.098 ms/op 71.326 ms/op 1.28
mainnet_e58758 - phase0 processJustificationAndFinalization 6.0960 us/op 5.7320 us/op 1.06
mainnet_e58758 - phase0 processRewardsAndPenalties 40.461 ms/op 41.077 ms/op 0.99
mainnet_e58758 - phase0 processRegistryUpdates 2.8990 us/op 3.2900 us/op 0.88
mainnet_e58758 - phase0 processSlashings 404.00 ns/op 192.00 ns/op 2.10
mainnet_e58758 - phase0 processEth1DataReset 415.00 ns/op 176.00 ns/op 2.36
mainnet_e58758 - phase0 processEffectiveBalanceUpdates 5.0039 ms/op 1.2178 ms/op 4.11
mainnet_e58758 - phase0 processSlashingsReset 1.6340 us/op 1.0510 us/op 1.55
mainnet_e58758 - phase0 processRandaoMixesReset 1.5750 us/op 1.4290 us/op 1.10
mainnet_e58758 - phase0 processHistoricalRootsUpdate 507.00 ns/op 206.00 ns/op 2.46
mainnet_e58758 - phase0 processParticipationRecordUpdates 1.5320 us/op 1.0220 us/op 1.50
mainnet_e58758 - phase0 afterProcessEpoch 34.083 ms/op 36.911 ms/op 0.92
phase0 processEffectiveBalanceUpdates - 250000 normalcase 1.5113 ms/op 2.6004 ms/op 0.58
phase0 processEffectiveBalanceUpdates - 250000 worstcase 0.5 4.8447 ms/op 1.9453 ms/op 2.49
altair processInactivityUpdates - 250000 normalcase 17.100 ms/op 19.438 ms/op 0.88
altair processInactivityUpdates - 250000 worstcase 19.217 ms/op 22.724 ms/op 0.85
phase0 processRegistryUpdates - 250000 normalcase 7.2280 us/op 6.0080 us/op 1.20
phase0 processRegistryUpdates - 250000 badcase_full_deposits 183.36 us/op 344.64 us/op 0.53
phase0 processRegistryUpdates - 250000 worstcase 0.5 119.20 ms/op 113.67 ms/op 1.05
altair processRewardsAndPenalties - 250000 normalcase 26.223 ms/op 28.658 ms/op 0.92
altair processRewardsAndPenalties - 250000 worstcase 28.252 ms/op 27.691 ms/op 1.02
phase0 getAttestationDeltas - 250000 normalcase 10.166 ms/op 7.9372 ms/op 1.28
phase0 getAttestationDeltas - 250000 worstcase 6.8205 ms/op 6.2981 ms/op 1.08
phase0 processSlashings - 250000 worstcase 87.412 us/op 81.700 us/op 1.07
altair processSyncCommitteeUpdates - 250000 10.560 ms/op 11.298 ms/op 0.93
BeaconState.hashTreeRoot - No change 521.00 ns/op 257.00 ns/op 2.03
BeaconState.hashTreeRoot - 1 full validator 82.385 us/op 97.676 us/op 0.84
BeaconState.hashTreeRoot - 32 full validator 1.4446 ms/op 853.37 us/op 1.69
BeaconState.hashTreeRoot - 512 full validator 11.210 ms/op 10.564 ms/op 1.06
BeaconState.hashTreeRoot - 1 validator.effectiveBalance 109.67 us/op 106.51 us/op 1.03
BeaconState.hashTreeRoot - 32 validator.effectiveBalance 1.7344 ms/op 1.5601 ms/op 1.11
BeaconState.hashTreeRoot - 512 validator.effectiveBalance 21.660 ms/op 28.137 ms/op 0.77
BeaconState.hashTreeRoot - 1 balances 75.036 us/op 83.129 us/op 0.90
BeaconState.hashTreeRoot - 32 balances 736.43 us/op 1.0514 ms/op 0.70
BeaconState.hashTreeRoot - 512 balances 8.8772 ms/op 8.9011 ms/op 1.00
BeaconState.hashTreeRoot - 250000 balances 156.25 ms/op 184.95 ms/op 0.84
aggregationBits - 2048 els - zipIndexesInBitList 22.382 us/op 22.640 us/op 0.99
byteArrayEquals 32 48.396 ns/op 54.596 ns/op 0.89
Buffer.compare 32 16.176 ns/op 17.503 ns/op 0.92
byteArrayEquals 1024 1.3234 us/op 1.6203 us/op 0.82
Buffer.compare 1024 24.768 ns/op 26.120 ns/op 0.95
byteArrayEquals 16384 20.882 us/op 26.142 us/op 0.80
Buffer.compare 16384 208.02 ns/op 265.83 ns/op 0.78
byteArrayEquals 123687377 158.56 ms/op 197.23 ms/op 0.80
Buffer.compare 123687377 6.3205 ms/op 6.7971 ms/op 0.93
byteArrayEquals 32 - diff last byte 48.681 ns/op 54.037 ns/op 0.90
Buffer.compare 32 - diff last byte 17.028 ns/op 17.502 ns/op 0.97
byteArrayEquals 1024 - diff last byte 1.3130 us/op 1.6267 us/op 0.81
Buffer.compare 1024 - diff last byte 23.421 ns/op 25.484 ns/op 0.92
byteArrayEquals 16384 - diff last byte 21.010 us/op 25.947 us/op 0.81
Buffer.compare 16384 - diff last byte 184.65 ns/op 205.94 ns/op 0.90
byteArrayEquals 123687377 - diff last byte 160.31 ms/op 194.49 ms/op 0.82
Buffer.compare 123687377 - diff last byte 7.0718 ms/op 6.7294 ms/op 1.05
byteArrayEquals 32 - random bytes 5.1010 ns/op 5.2210 ns/op 0.98
Buffer.compare 32 - random bytes 16.360 ns/op 17.404 ns/op 0.94
byteArrayEquals 1024 - random bytes 5.0740 ns/op 5.2440 ns/op 0.97
Buffer.compare 1024 - random bytes 16.133 ns/op 17.471 ns/op 0.92
byteArrayEquals 16384 - random bytes 5.1980 ns/op 5.2400 ns/op 0.99
Buffer.compare 16384 - random bytes 16.062 ns/op 17.469 ns/op 0.92
byteArrayEquals 123687377 - random bytes 7.9800 ns/op 6.6500 ns/op 1.20
Buffer.compare 123687377 - random bytes 18.940 ns/op 18.740 ns/op 1.01
regular array get 100000 times 36.569 us/op 33.753 us/op 1.08
wrappedArray get 100000 times 33.573 us/op 33.683 us/op 1.00
arrayWithProxy get 100000 times 10.760 ms/op 12.276 ms/op 0.88
ssz.Root.equals 46.169 ns/op 46.907 ns/op 0.98
byteArrayEquals 45.185 ns/op 46.257 ns/op 0.98
Buffer.compare 9.3370 ns/op 10.457 ns/op 0.89
processSlot - 1 slots 9.2660 us/op 13.868 us/op 0.67
processSlot - 32 slots 1.8681 ms/op 2.1723 ms/op 0.86
getEffectiveBalanceIncrementsZeroInactive - 250000 vs - 7PWei 2.6205 ms/op 3.0323 ms/op 0.86
getCommitteeAssignments - req 1 vs - 250000 vc 1.8732 ms/op 2.1883 ms/op 0.86
getCommitteeAssignments - req 100 vs - 250000 vc 3.6919 ms/op 4.1685 ms/op 0.89
getCommitteeAssignments - req 1000 vs - 250000 vc 3.9097 ms/op 4.4051 ms/op 0.89
findModifiedValidators - 10000 modified validators 923.92 ms/op 712.44 ms/op 1.30
findModifiedValidators - 1000 modified validators 738.57 ms/op 699.61 ms/op 1.06
findModifiedValidators - 100 modified validators 278.88 ms/op 291.65 ms/op 0.96
findModifiedValidators - 10 modified validators 257.59 ms/op 136.06 ms/op 1.89
findModifiedValidators - 1 modified validators 207.16 ms/op 215.37 ms/op 0.96
findModifiedValidators - no difference 182.08 ms/op 256.09 ms/op 0.71
compare ViewDUs 7.9071 s/op 6.1927 s/op 1.28
compare each validator Uint8Array 1.2934 s/op 1.7608 s/op 0.73
compare ViewDU to Uint8Array 1.4559 s/op 1.0366 s/op 1.40
migrate state 1000000 validators, 24 modified, 0 new 817.26 ms/op 858.47 ms/op 0.95
migrate state 1000000 validators, 1700 modified, 1000 new 1.0918 s/op 1.1354 s/op 0.96
migrate state 1000000 validators, 3400 modified, 2000 new 1.1348 s/op 1.2630 s/op 0.90
migrate state 1500000 validators, 24 modified, 0 new 927.58 ms/op 895.08 ms/op 1.04
migrate state 1500000 validators, 1700 modified, 1000 new 1.1086 s/op 1.1197 s/op 0.99
migrate state 1500000 validators, 3400 modified, 2000 new 1.2887 s/op 1.2631 s/op 1.02
RootCache.getBlockRootAtSlot - 250000 vs - 7PWei 5.7400 ns/op 4.2800 ns/op 1.34
state getBlockRootAtSlot - 250000 vs - 7PWei 299.39 ns/op 664.18 ns/op 0.45
naive computeProposerIndex 100000 validators 43.578 ms/op 48.142 ms/op 0.91
computeProposerIndex 100000 validators 1.3232 ms/op 1.5235 ms/op 0.87
naiveGetNextSyncCommitteeIndices 1000 validators 5.9328 s/op 6.9417 s/op 0.85
getNextSyncCommitteeIndices 1000 validators 89.019 ms/op 113.17 ms/op 0.79
naiveGetNextSyncCommitteeIndices 10000 validators 6.1591 s/op 6.9375 s/op 0.89
getNextSyncCommitteeIndices 10000 validators 99.444 ms/op 112.17 ms/op 0.89
naiveGetNextSyncCommitteeIndices 100000 validators 5.8735 s/op 6.9501 s/op 0.85
getNextSyncCommitteeIndices 100000 validators 90.353 ms/op 112.52 ms/op 0.80
naive computeShuffledIndex 100000 validators 20.439 s/op 23.369 s/op 0.87
cached computeShuffledIndex 100000 validators 492.12 ms/op 550.39 ms/op 0.89
naive computeShuffledIndex 2000000 validators 390.69 s/op 458.80 s/op 0.85
cached computeShuffledIndex 2000000 validators 15.146 s/op 31.805 s/op 0.48
computeProposers - vc 250000 528.95 us/op 644.62 us/op 0.82
computeEpochShuffling - vc 250000 38.964 ms/op 42.954 ms/op 0.91
getNextSyncCommittee - vc 250000 9.2281 ms/op 10.668 ms/op 0.87
computeSigningRoot for AttestationData 16.871 us/op 21.805 us/op 0.77
hash AttestationData serialized data then Buffer.toString(base64) 1.1610 us/op 1.6194 us/op 0.72
toHexString serialized data 951.83 ns/op 1.1956 us/op 0.80
Buffer.toString(base64) 103.98 ns/op 156.54 ns/op 0.66
nodejs block root to RootHex using toHex 99.421 ns/op 145.47 ns/op 0.68
nodejs block root to RootHex using toRootHex 67.442 ns/op 85.679 ns/op 0.79
nodejs fromhex(blob) 97.085 ms/op 113.30 ms/op 0.86
nodejs fromHexInto(blob) 77.231 ms/op 96.556 ms/op 0.80
browser block root to RootHex using the deprecated toHexString 179.17 ns/op 212.56 ns/op 0.84
browser block root to RootHex using toHex 148.68 ns/op 172.10 ns/op 0.86
browser block root to RootHex using toRootHex 141.47 ns/op 160.99 ns/op 0.88
browser fromHexInto(blob) 669.20 us/op 827.84 us/op 0.81
browser fromHex(blob) 683.02 ms/op 796.65 ms/op 0.86

by benchmarkbot/action

peersByDataColumnSubnet: register.gauge<{subnet: SubnetID; boundary: ForkBoundaryLabel}>({
  name: "lodestar_gossip_mesh_peers_by_data_column_subnet_count",
  help: "Number of connected mesh peers per data column subnet",
  labelNames: ["subnet", "boundary"],
Member

this metric can be pretty expensive in testnets if BPOs are close to each other, e.g. if we have fulu + 5 BPOs this results in 768 metric entries (6 boundaries × 128 data column subnets), not a blocker but just need to keep that in mind

Contributor Author

to add more on that, if someone comes up with a config like BPO at epoch n, n+1, n+2, n+3, n+4 then I think we have bigger issues to deal with than the metric

sample config from fusaka-devnet-5

BLOB_SCHEDULE:
  - EPOCH: 512
    MAX_BLOBS_PER_BLOCK: 15
  - EPOCH: 768
    MAX_BLOBS_PER_BLOCK: 21
  - EPOCH: 1024
    MAX_BLOBS_PER_BLOCK: 33
  - EPOCH: 1280
    MAX_BLOBS_PER_BLOCK: 48
  - EPOCH: 1536
    MAX_BLOBS_PER_BLOCK: 72

we only subscribe to topics within ±2 epochs of a fork boundary (FORK_EPOCH_LOOKAHEAD = 2), so most of the time we only get metrics for 1 fork boundary
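
For illustration, a minimal sketch of that subscription window — boundariesToSubscribe and boundaryEpochs are hypothetical names, not the actual Lodestar scheduler; only FORK_EPOCH_LOOKAHEAD = 2 comes from the comment above:

// Sketch (hypothetical, not the actual Lodestar code): topics are only subscribed
// for the active boundary plus boundaries within ±FORK_EPOCH_LOOKAHEAD epochs,
// so gauges for other boundary labels sit at zero most of the time.
const FORK_EPOCH_LOOKAHEAD = 2;

function boundariesToSubscribe(
  currentEpoch: number,
  boundaryEpochs: Map<string, number> // boundary label -> epoch at which it activates
): Set<string> {
  const active = new Set<string>();
  let currentBoundary: string | undefined;
  let currentBoundaryEpoch = -Infinity;
  for (const [label, epoch] of boundaryEpochs) {
    if (epoch <= currentEpoch && epoch > currentBoundaryEpoch) {
      // latest boundary that has already activated
      currentBoundary = label;
      currentBoundaryEpoch = epoch;
    }
    if (Math.abs(epoch - currentEpoch) <= FORK_EPOCH_LOOKAHEAD) {
      // close enough to a transition to subscribe ahead of / keep shortly after it
      active.add(label);
    }
  }
  if (currentBoundary !== undefined) active.add(currentBoundary);
  return active;
}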

Member

but the node would still serve them via /metrics until the next restart, and based on prometheus retention they would still be stored there

I wonder if we even need the fork boundary label or can just assess the metrics based on time, you would probably see a drop at each fork boundary and then it should go up again

Contributor Author

I think we should only do that if there is a clear performance issue
seeing different peer counts at different fork boundaries gives us confidence in the implementation of lodestar nodes, and helps us investigate issues just in case
we did not even get through any fork boundaries on mainnet yet, can revisit later if needed

Member

I tend to agree with @nflaig that we will just see the boundary in the metric dislocation. The forks will never overlap, so we will never have two lines at the same time that need to be differentiated by different labels; they will always be mutually exclusive, and I'm not sure it's worth all the extra buckets/data-points for a label that is not "technically differentiating anything", because there would never be boundary overlap. There will just be a metric dislocation. I guess I also see what @twoeths is saying: if the metric stays the same across a boundary and there is no color change on the dashboard, we will not see the change (if there should be one), but I'm not sure that is compelling enough to offset the hundreds of "0" data points we will collect to support the second label.

Member

I suppose I'm fine with either but just wanted to bring that up. What do you think @wemeetagain?

Member

is there a cheaper way to get the information we're looking for? Imo the primary pieces of information / questions we're answering are:

  • how many subnets have 0 mesh peers?
  • what is the distribution of mesh peers for the subnets?

I don't think we really need to know that e.g. subnet 13 has 3 mesh peers. It's not actionable.
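
For comparison, if only those two aggregate answers matter, a cheaper shape could be a single gauge for subnets with zero mesh peers plus a histogram of per-subnet peer counts; a rough sketch with hypothetical metric names (not what this PR adds):

// Hypothetical aggregate metrics (not what this PR adds): one gauge for
// "subnets with zero mesh peers" plus a histogram over per-subnet peer counts.
function observeDataColumnSubnetPeers(
  peersBySubnet: number[], // mesh peer count per data column subnet
  metrics: {
    subnetsWithZeroPeers: {set: (value: number) => void};
    peersPerSubnet: {observe: (value: number) => void};
  }
): void {
  let zeroPeerSubnets = 0;
  for (const peerCount of peersBySubnet) {
    if (peerCount === 0) zeroPeerSubnets++;
    metrics.peersPerSubnet.observe(peerCount); // answers "what is the distribution?"
  }
  metrics.subnetsWithZeroPeers.set(zeroPeerSubnets); // answers "how many subnets have 0 mesh peers?"
}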

Contributor Author

sorry I did not notice we had a concern before merging the PR
imo the performance concern applies to attnets/syncnets as well, so we should resolve all of them if someone can prove there is a performance issue

I don't think we really need to know that eg subnet 13 has 3 mesh peers. Its not actionable.

if we miss the data column of subnet 13 for a block, the block is not imported, and it's not easy to find which DataColumnSidecars are missing just by looking at the gossip log. Then this metric should help

also we already have discovery metrics, for example lodestar_discovery_custody_groups_to_connect
these metrics are an addition to those for when we need to investigate

Contributor Author

created this issue to track: #8454

@twoeths twoeths merged commit 4efea58 into unstable Sep 23, 2025
25 of 26 checks passed
@twoeths twoeths deleted the te/gossip_metrics_by_fork_boundary branch September 23, 2025 12:08
@wemeetagain
Member

🎉 This PR is included in v1.35.0 🎉
