# Gorgoroth RLPx Compression Validation Plan
Document Version: 1.0
Date: December 9, 2025
Status: Draft (repeatable runbook)
Target Build: main branch @ latest commit
Scope: Gorgoroth 3-node Fukuii network, RLPx compression/decompression instrumentation
## Overview
This plan defines a repeatable workflow for validating recent RLPx compression/decompression changes using the Gorgoroth 3-node topology. The test enforces a single mining leader (node1) while keeping node2 and node3 as passive peers to isolate block header propagation and protocol message handling. Results feed into compression diagnostics, log harvesting, and future harness automation.
## Objectives
- Generate Deterministic Enodes – Start all three nodes so that fresh node keys and enode URLs can be captured and synchronized via tooling.
- Apply Targeted Mining Mix – Use the new `miner_start`/`miner_stop` RPC endpoints to keep node1 mining while node2/node3 remain passive, avoiding config flips or restarts.
- Exercise RLPx Stack – Let node1 mine at least 30 blocks while peers stay synced via static connections.
- Collect Evidence – Capture logs and docker inspection data for post-run parsing.
- Detect Regressions – Scan logs for RLPx compression errors, decompression failures, or missing block headers on passive nodes.
## Success Criteria

- ✅ `net_connectToPeer` succeeds for all three pairings (node1↔node2, node1↔node3, node2↔node3) without restarting containers.
- ✅ Node1 reports `eth_mining` = `true`; node2 and node3 return `false`.
- ✅ All three nodes maintain ≥2 peers (i.e., a fully connected triangle) during the run.
- ✅ Block height on node2/node3 trails node1 by ≤1 block at steady state.
- ✅ No log entries matching `compression error`, `decompression failed`, or Snappy failures.
- ✅ RLPx header propagation confirmed via consistent `eth_getBlockByNumber("latest")` hashes across nodes.
- ✅ Log bundle archived in a `./logs/rplx-<timestamp>` directory with a README summary.
## Prerequisites
- Docker ≥ 20.10 and Docker Compose v2
- `ops/tools/fukuii-cli.sh` available (either via relative path or installed as `fukuii-cli`)
- ≥8 GB RAM, 10 GB free disk space
- `jq`, `rg` (ripgrep), and `watch` utilities for analysis (optional but recommended)
- Baseline images pulled
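Before starting, a quick preflight can confirm the tooling above is present; this is a minimal sketch and the baseline image names are intentionally not specified here, since they depend on your registry and tagging scheme.

```bash
# Preflight: confirm the prerequisite tooling is installed.
docker --version
docker compose version
jq --version
rg --version
watch --version

# Pull your baseline Fukuii images here; names/tags depend on your setup
# and are not specified in this plan.
```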
## Configuration Checklist

- Mining Controls – Leave `fukuii.mining.mining-enabled` at repo defaults; Phase 2 relies on the `miner_start`/`miner_stop` RPCs to toggle roles at runtime without editing configs.
- Clean Volumes (Optional) – If prior state is undesirable, wipe the Docker volumes before bringing the network up (see the sketch after this list).
- Environment Variables – Export helper variables for later steps (see the sketch after this list).
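A minimal sketch of the optional cleanup and the helper exports, assuming the Gorgoroth compose file lives at `ops/gorgoroth/docker-compose.yml` (path assumed; adjust to your checkout). `$RPLX_LOG_DIR` is the log bundle directory referenced by the Log Analysis Checklist later in this plan.

```bash
# Optional: remove previous containers and volumes (compose path is an assumption).
docker compose -f ops/gorgoroth/docker-compose.yml down -v

# Helper variables used in later phases; RPLX_LOG_DIR is the log bundle target.
export RPLX_TS="$(date +%Y%m%d-%H%M%S)"
export RPLX_LOG_DIR="./logs/rplx-${RPLX_TS}"
mkdir -p "$RPLX_LOG_DIR"
```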
## Test Procedure
### Phase 1 – Bring Up 3-Node Topology
- Start Network (Generates Enodes) – see the bring-up sketch at the end of this phase.
- Verify Containers – confirm all three containers are running (covered in the same sketch).
- Wire Up Static Triangle via RPC (No Restart)
- Capture fresh enodes from each node (covered in the peering sketch at the end of this phase).
- Push the pairings through the new `net_connectToPeer` endpoint so every node dials every other node (triangle); see the peering sketch at the end of this phase.
- Validate the mesh without bouncing containers:

```bash
for port in 8545 8547 8549; do
  curl -s -X POST -H "Content-Type: application/json" \
    --data '{"jsonrpc":"2.0","method":"net_listPeers","params":[],"id":2}' \
    http://localhost:$port | jq '.result.peers | length'
done
```

- Expect `2` peers per node (fully connected triangle).
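For the Start Network and Verify Containers steps, a minimal bring-up sketch, assuming the topology is launched with Docker Compose from `ops/gorgoroth/docker-compose.yml` (path assumed) and that containers follow the `gorgoroth-fukuii-node*` naming used elsewhere in this plan; substitute your `fukuii-cli` workflow if that is the supported entry point.

```bash
# Bring up the 3-node Gorgoroth topology (compose file path is an assumption).
docker compose -f ops/gorgoroth/docker-compose.yml up -d

# Verify all three containers are running and their RPC ports are published.
docker ps --filter "name=gorgoroth-fukuii" \
  --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
```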
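For the enode capture and `net_connectToPeer` steps, a hedged sketch: `admin_nodeInfo` is assumed to be exposed (it is mentioned under Optional Manual Spot Checks), and the parameter shape of `net_connectToPeer` (a single enode URL) is an assumption to be checked against the endpoint's actual signature. Note that the enode reported by a container may use its internal Docker address; adjust if your setup requires different reachability.

```bash
# Capture fresh enodes from each node (assumes admin_nodeInfo is exposed).
declare -A ENODE
for port in 8545 8547 8549; do
  ENODE[$port]=$(curl -s -X POST -H "Content-Type: application/json" \
    --data '{"jsonrpc":"2.0","method":"admin_nodeInfo","params":[],"id":1}' \
    http://localhost:$port | jq -r '.result.enode')
done

# Dial every other node from every node to form the triangle
# (single-enode-URL parameter shape is an assumption).
for src in 8545 8547 8549; do
  for dst in 8545 8547 8549; do
    [ "$src" = "$dst" ] && continue
    curl -s -X POST -H "Content-Type: application/json" \
      --data "{\"jsonrpc\":\"2.0\",\"method\":\"net_connectToPeer\",\"params\":[\"${ENODE[$dst]}\"],\"id\":1}" \
      http://localhost:$src | jq '.result'
  done
done
```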
### Phase 2 – Validate Mining Roles
- Set Mining Roles via RPC (No Restart Needed)
```bash
# Start mining on node1 (8545)
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"miner_start","params":[],"id":1}' \
  http://localhost:8545 | jq

# Ensure node2/node3 stay passive
for port in 8547 8549; do
  curl -s -X POST -H "Content-Type: application/json" \
    --data '{"jsonrpc":"2.0","method":"miner_stop","params":[],"id":1}' \
    http://localhost:$port | jq '.result'
done
```

- Optional: run `miner_getStatus` on each port for a one-shot status view.
- Check Mining Status – query `eth_mining` on each node (see the sketch at the end of this phase).
- Expected: `8545` → `true`, `8547`/`8549` → `false`.
- Confirm Peering – query the peer count on each node (see the sketch at the end of this phase).
- Convert hex to decimal; each should read `0x2` (two peers).
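A minimal sketch for the mining-status and peering checks above, using the standard `eth_mining` and `net_peerCount` JSON-RPC methods (using `net_peerCount` for the hex peer count is an assumption based on the `0x2` expectation).

```bash
# Mining role per node: expect true on 8545, false on 8547/8549.
for port in 8545 8547 8549; do
  printf "eth_mining %s -> " "$port"
  curl -s -X POST -H "Content-Type: application/json" \
    --data '{"jsonrpc":"2.0","method":"eth_mining","params":[],"id":1}' \
    http://localhost:$port | jq -r '.result'
done

# Peer count per node: expect 0x2 (two peers) everywhere.
for port in 8545 8547 8549; do
  printf "net_peerCount %s -> " "$port"
  curl -s -X POST -H "Content-Type: application/json" \
    --data '{"jsonrpc":"2.0","method":"net_peerCount","params":[],"id":1}' \
    http://localhost:$port | jq -r '.result'
done
```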
### Phase 3 – Produce Blocks & Capture Telemetry

- Allow Mining Window – Let node1 run for ≥10 minutes (≈40 blocks). Optional watcher: see the sketch at the end of this phase.
- Verify Propagation
```bash
cat > check-blocks.sh <<'EOF'
#!/bin/bash
for port in 8545 8547 8549; do
  BLOCK=$(curl -s -X POST -H "Content-Type: application/json" \
    --data '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["latest",false],"id":1}' \
    http://localhost:$port | jq -r '.result.number // "0x0"')
  HASH=$(curl -s -X POST -H "Content-Type: application/json" \
    --data '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["latest",false],"id":1}' \
    http://localhost:$port | jq -r '.result.hash // "0x0"')
  printf "Port %s -> Block %d, Hash %s\n" "$port" "$((16#${BLOCK#0x}))" "$HASH"
done
EOF
chmod +x check-blocks.sh
./check-blocks.sh
```

- Expect identical hashes across ports.
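For the optional watcher, a minimal sketch using the `watch` utility from the prerequisites; it simply re-runs the `check-blocks.sh` helper defined in the Verify Propagation step (the 15-second interval is arbitrary).

```bash
# Re-run the propagation check every 15 seconds while node1 mines.
watch -n 15 ./check-blocks.sh
```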
### Phase 4 – Collect Logs
Artifacts include container logs, Docker inspect output, compose config, and a summary README for traceability.
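A minimal collection sketch, assuming container names `gorgoroth-fukuii-node1/2/3` (only node1's name appears elsewhere in this plan), the compose file path used earlier, and the `$RPLX_LOG_DIR` helper from the Configuration Checklist.

```bash
mkdir -p "$RPLX_LOG_DIR"

# Container logs and inspect output for each node.
for node in gorgoroth-fukuii-node1 gorgoroth-fukuii-node2 gorgoroth-fukuii-node3; do
  docker logs "$node" > "$RPLX_LOG_DIR/${node}.log" 2>&1
  docker inspect "$node" > "$RPLX_LOG_DIR/${node}.inspect.json"
done

# Resolved compose configuration and a short summary README.
docker compose -f ops/gorgoroth/docker-compose.yml config > "$RPLX_LOG_DIR/compose-config.yml"
{
  echo "Gorgoroth RLPx compression validation run"
  echo "Timestamp: $(date -u +%Y-%m-%dT%H:%M:%SZ)"
  echo "Commit: $(git rev-parse --short HEAD 2>/dev/null || echo unknown)"
} > "$RPLX_LOG_DIR/README.md"
```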
## Log Analysis Checklist
| Check | Command | Expected |
|---|---|---|
| RLPx compression errors | `rg -n "compression\|decompress\|snappy" "$RPLX_LOG_DIR"` | No matches containing `error`, `failed`, `invalid`, or `Snappy decompression failed` |
| Block header propagation | `rg -n "Imported new chain segment" "$RPLX_LOG_DIR"` | Node2 and node3 show headers shortly after node1 |
| Peer churn | `rg -n "Disconnected" "$RPLX_LOG_DIR"` | Minimal churn; no disconnects tied to compression |
| Message monitor | `ops/gorgoroth/test-scripts/monitor-decompression.sh gorgoroth-fukuii-node1` | No `FAILED` lines |
| Mining role | `rg -n "miner" "$RPLX_LOG_DIR"` | Only node1 logs contain `Starting miner` |
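The table's `rg` checks can also be run in one pass; a minimal sketch, assuming `$RPLX_LOG_DIR` points at the bundle collected in Phase 4.

```bash
# One-pass scan: compression/decompression/snappy lines that also mention a failure.
if rg -n -i "compression|decompress|snappy" "$RPLX_LOG_DIR" | rg -i "error|failed|invalid"; then
  echo "FAIL: possible RLPx compression regression (see matches above)"
else
  echo "PASS: no compression-related failures found"
fi

# Quick looks at the remaining table rows.
rg -n "Imported new chain segment" "$RPLX_LOG_DIR" | tail -n 5
rg -n "Disconnected" "$RPLX_LOG_DIR" | head -n 20
rg -n "Starting miner" "$RPLX_LOG_DIR"
```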
## Harness Integration

- Watchdog Script: Wrap steps 2–4 inside `ops/gorgoroth/test-scripts` by cloning the pattern from `monitor-decompression.sh`. Trigger via a CI job to gate RLPx changes.
- Metrics Export: Feed Docker stats into Prometheus by enabling the Grafana stack under `ops/gorgoroth/grafana` for longer experiments.
- JUnit Adapter: Convert log analysis results into XML using `tests/tools/log_parser.py` (if available) so CI dashboards can display pass/fail.
## Optional Manual Spot Checks

- Compression handshake – Search for `"rlpx"`, `"snappy"`, `"compression"` inside node logs.
- Header timing – Compare timestamps of `"Sealing new block"` (node1) vs `"Imported new chain segment"` (node2/node3) to ensure propagation <2s; see the sketch below.
- RPC verification – Use `net_listPeers` (or `admin_nodeInfo`) on each node to confirm all peers remain connected.
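A minimal sketch for the header-timing spot check, assuming the per-node log files collected in Phase 4 land under `$RPLX_LOG_DIR` with the container name as the file name; the timestamp comparison is left manual because log line formats vary.

```bash
# Show sealing events on node1 alongside import events on the passive nodes;
# compare the leading timestamps by eye (target: <2s propagation).
rg -n "Sealing new block" "$RPLX_LOG_DIR/gorgoroth-fukuii-node1.log" | tail -n 5
rg -n "Imported new chain segment" \
  "$RPLX_LOG_DIR/gorgoroth-fukuii-node2.log" \
  "$RPLX_LOG_DIR/gorgoroth-fukuii-node3.log" | tail -n 10
```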
## Reporting Template
After each run, append a row to `docs/testing/GORGOROTH_VALIDATION_STATUS.md` with:
| Date | Commit | Operator | Blocks Mined | RLPx Errors | Propagation Lag | Notes |
|---|---|---|---|---|---|---|
| 2025-12-09 | `<short-sha>` | `<name>` | ~40 | None | `<≤2s>` | Node1-only mining setup |
## Cleanup
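A minimal teardown sketch, assuming the same compose file path used above; drop `-v` to keep chain data between runs.

```bash
# Tear down the 3-node network; -v also removes the chain-data volumes.
docker compose -f ops/gorgoroth/docker-compose.yml down -v
```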
## Next Steps
- Automate the checklist via a dedicated script that toggles mining flags and parses logs automatically.
- Integrate the RLPx validation into `test-launcher-integration.sh` once stable.
- Consider extending the test to mixed-client scenarios (Core-Geth/Besu) for cross-implementation coverage.