Fast Sync Testing Plan for Fukuii Clients

Document Version: 1.0
Date: December 8, 2025
Status: Active
Test Environment: Gorgoroth 6-Node Internal Test Network

Executive Summary

This document outlines a comprehensive testing plan for validating fast sync functionality in Fukuii Ethereum Classic clients using the Gorgoroth internal test network. The plan addresses fast sync performance, peer connectivity, message decompression, and multi-node synchronization scenarios.

Objectives

Primary Goals

  1. Validate Fast Sync Functionality: Ensure fast sync correctly synchronizes blockchain state across multiple Fukuii nodes
  2. Verify Message Decompression: Confirm RLPx/ETH protocol messages are properly compressed and decompressed during sync
  3. Test Peer Connectivity: Validate peer discovery and maintenance during fast sync operations
  4. Performance Benchmarking: Measure sync time, bandwidth usage, and resource consumption

Success Criteria

  • ✅ Fast sync completes successfully on all test nodes
  • ✅ Block state verification passes after sync completion
  • ✅ No message decompression errors in logs
  • ✅ Peer connections remain stable throughout sync (≥2 peers minimum)
  • ✅ Sync completes within expected timeframe (baseline to be established)

Test Environment

Network Configuration: Gorgoroth 6-Node

Node         | Role             | HTTP RPC | WebSocket | P2P   | Fast Sync Enabled
fukuii-node1 | Seed (Full Sync) | 8545     | 8546      | 30303 | No
fukuii-node2 | Seed (Full Sync) | 8547     | 8548      | 30304 | No
fukuii-node3 | Seed (Full Sync) | 8549     | 8550      | 30305 | No
fukuii-node4 | Fast Sync        | 8551     | 8552      | 30306 | Yes
fukuii-node5 | Fast Sync        | 8553     | 8554      | 30307 | Yes
fukuii-node6 | Fast Sync        | 8555     | 8556      | 30308 | Yes

Network Properties

Property              | Value
Network Name          | Gorgoroth
Network ID            | 7
Chain ID              | 0x3f (63)
Consensus             | Proof of Work (Ethash-compatible)
Genesis Configuration | Mordor-aligned (see ops/gorgoroth/conf/app-gorgoroth-override.conf)
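
The values above can be spot-checked against a running node before any test phase, assuming the client exposes the standard net_version and eth_chainId RPC methods (shown here against seed node 1):

# Expected results: net_version returns "7", eth_chainId returns "0x3f"
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"net_version","params":[],"id":1}' \
  http://localhost:8545 | jq -r '.result'

curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' \
  http://localhost:8545 | jq -r '.result'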

Test Phases

The testing will be conducted in three distinct phases to ensure comprehensive validation.

Phase 1: Environment Setup and Seed Node Preparation

Objective

Establish a stable blockchain with sufficient history for meaningful fast sync testing.

Steps

1.1 Start Seed Nodes (nodes 1-3)

cd ops/gorgoroth

# Start only the first 3 nodes with full sync
docker compose -f docker-compose-6nodes.yml up -d fukuii-node1 fukuii-node2 fukuii-node3

# Wait for nodes to initialize
sleep 45

# Synchronize static peers
fukuii-cli sync-static-nodes

1.2 Verify Seed Node Connectivity

# Check peer count for each seed node
for port in 8545 8547 8549; do
  echo "Node on port $port:"
  curl -s -X POST -H "Content-Type: application/json" \
    --data '{"jsonrpc":"2.0","method":"net_peerCount","params":[],"id":1}' \
    http://localhost:$port | jq
done

Expected: Each seed node should report ≥2 peers.

1.3 Generate Blockchain History

# Ensure mining is enabled on the seed miner (default: node1 only)
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_mining","params":[],"id":1}' \
  http://localhost:8545 | jq

# Wait for substantial block history (target: 1000+ blocks)
# Monitor block height
watch -n 10 'curl -s -X POST -H "Content-Type: application/json" \
  --data "{\"jsonrpc\":\"2.0\",\"method\":\"eth_blockNumber\",\"params\":[],\"id\":1}" \
  http://localhost:8545 | jq'

Target: Generate at least 1000 blocks (approximately 4+ hours at 15s block time).

1.4 Verify Blockchain State

# Get current block number
BLOCK_NUM=$(curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
  http://localhost:8545 | jq -r '.result')

echo "Current block height: $((16#${BLOCK_NUM:2}))"

# Verify all seed nodes are synchronized
for port in 8545 8547 8549; do
  BLOCK=$(curl -s -X POST -H "Content-Type: application/json" \
    --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
    http://localhost:$port | jq -r '.result')
  echo "Port $port: Block $((16#${BLOCK:2}))"
done

Expected: All seed nodes should be within 1-2 blocks of each other.

Success Criteria for Phase 1

  • Seed nodes (1-3) running and connected
  • Minimum 1000 blocks mined
  • All seed nodes synchronized to same block height (±2 blocks)
  • No errors in seed node logs

Phase 2: Fast Sync Testing

Objective

Validate fast sync functionality by bringing up nodes 4-6 and synchronizing them with the seed nodes.

Test Scenario 2.1: Single Node Fast Sync

2.1.1 Configure Node 6 for Fast Sync

Verify that node 6 configuration includes fast sync settings:

File: ops/gorgoroth/conf/node6/gorgoroth.conf

fukuii {
  sync {
    do-fast-sync = true
    fast-sync-throttle = 100.milliseconds
    peers-scan-interval = 3.seconds

    # Pivot block offset (sync from N blocks before chain tip)
    pivot-block-offset = 100
  }
}

2.1.2 Start Node 6

# Start node 6 with fast sync enabled
docker compose -f docker-compose-6nodes.yml up -d fukuii-node6

# Monitor logs for fast sync activity
docker compose -f docker-compose-6nodes.yml logs -f fukuii-node6

2.1.3 Monitor Fast Sync Progress

# Check sync status
curl -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}' \
  http://localhost:8556 | jq

# Monitor peer connections
curl -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"net_peerCount","params":[],"id":1}' \
  http://localhost:8556 | jq

# Get current block
curl -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
  http://localhost:8556 | jq

2.1.4 Log Analysis

# Collect logs for analysis
fukuii-cli collect-logs 6nodes ./logs/fast-sync-test-$(date +%Y%m%d-%H%M%S)

# Search for fast sync indicators
docker compose logs fukuii-node6 | grep -i "fast.*sync"
docker compose logs fukuii-node6 | grep -i "pivot"
docker compose logs fukuii-node6 | grep -i "state.*sync"

Expected Log Patterns

✅ "Starting fast sync"
✅ "Pivot block selected: <block_number>"
✅ "Downloading headers from pivot"
✅ "Downloading block bodies"
✅ "Downloading receipts"
✅ "Fast sync completed"
✅ "Switching to full sync"

Test Scenario 2.2: Message Decompression Validation

2.2.1 Enable Debug Logging

Add to node 6 configuration:

fukuii {
  network {
    protocol {
      # Enable detailed protocol logging
      log-messages = true
    }
  }
}

2.2.2 Monitor for Decompression Issues

# Watch for decompression errors in real-time
docker compose logs -f fukuii-node6 | grep -i "decompress"

# Search for Snappy compression messages
docker compose logs fukuii-node6 | grep -i "snappy"

# Look for RLPx handshake completion
docker compose logs fukuii-node6 | grep -i "rlpx"

Expected Behavior

✅ "RLPx handshake completed with peer <enode>"
✅ No "decompression failed" errors
✅ No "invalid compressed data" errors
✅ Successful message exchange with peers

Error Patterns to Investigate

❌ "Failed to decompress message"
❌ "Snappy decompression error"
❌ "Invalid message format after decompression"
❌ "Peer disconnected: decompression failure"

Test Scenario 2.3: Multi-Node Fast Sync

2.3.1 Start All Fast Sync Nodes (4, 5, 6)

# Start nodes 4, 5, and 6 simultaneously
docker compose -f docker-compose-6nodes.yml up -d fukuii-node4 fukuii-node5 fukuii-node6

# Update static peers for all nodes
fukuii-cli sync-static-nodes

2.3.2 Monitor Parallel Sync Progress

# Create monitoring script
cat > monitor-sync.sh << 'EOF'
#!/bin/bash
while true; do
  clear
  echo "=== Fast Sync Progress Monitor ==="
  echo "Timestamp: $(date)"
  echo

  for port in 8552 8554 8556; do
    node_num=$((($port - 8552) / 2 + 4))
    echo "Node $node_num (port $port):"

    # Get sync status
    SYNC=$(curl -s -X POST -H "Content-Type: application/json" \
      --data '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}' \
      http://localhost:$port)

    # Get peer count
    PEERS=$(curl -s -X POST -H "Content-Type: application/json" \
      --data '{"jsonrpc":"2.0","method":"net_peerCount","params":[],"id":1}' \
      http://localhost:$port | jq -r '.result')

    # Get current block
    BLOCK=$(curl -s -X POST -H "Content-Type: application/json" \
      --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
      http://localhost:$port | jq -r '.result')

    echo "  Peers: $PEERS"
    echo "  Current Block: $((16#${BLOCK:2}))"
    echo "  Sync Status: $SYNC"
    echo
  done

  sleep 10
done
EOF

chmod +x monitor-sync.sh
./monitor-sync.sh

2.3.3 Peer Competition Analysis

Monitor whether fast sync nodes compete for peers or maintain stable connections:

# Check peer distribution across all nodes
for port in 8545 8547 8549 8552 8554 8556; do
  echo "Port $port peers:"
  curl -s -X POST -H "Content-Type: application/json" \
    --data '{"jsonrpc":"2.0","method":"admin_peers","params":[],"id":1}' \
    http://localhost:$port | jq '.result | length'
done

Success Criteria for Phase 2

  • Node 6 completes fast sync successfully
  • No message decompression errors in logs
  • Fast sync nodes maintain ≥2 peer connections throughout sync
  • All fast sync nodes converge to same block height as seed nodes
  • Fast sync completes faster than full sync from genesis
  • State verification passes after sync completion

Phase 3: Validation and Performance Analysis

Test Scenario 3.1: State Verification

3.1.1 Block Hash Verification

# Get block hash from seed node
SEED_HASH=$(curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["latest",false],"id":1}' \
  http://localhost:8545 | jq -r '.result.hash')

# Compare with fast sync nodes
for port in 8552 8554 8556; do
  NODE_HASH=$(curl -s -X POST -H "Content-Type: application/json" \
    --data '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["latest",false],"id":1}' \
    http://localhost:$port | jq -r '.result.hash')

  if [ "$SEED_HASH" == "$NODE_HASH" ]; then
    echo "Port $port: ✅ MATCH"
  else
    echo "Port $port: ❌ MISMATCH (seed: $SEED_HASH, node: $NODE_HASH)"
  fi
done

3.1.2 State Root Verification

# Compare state roots at same block height
BLOCK="0x3e8"  # Block 1000 in hex

for port in 8545 8547 8549 8552 8554 8556; do
  STATE_ROOT=$(curl -s -X POST -H "Content-Type: application/json" \
    --data "{\"jsonrpc\":\"2.0\",\"method\":\"eth_getBlockByNumber\",\"params\":[\"$BLOCK\",false],\"id\":1}" \
    http://localhost:$port | jq -r '.result.stateRoot')
  echo "Port $port: $STATE_ROOT"
done

Expected: All nodes should have identical state roots for the same block.
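
The comparison can also be automated rather than read by eye; a minimal sketch that flags any node whose state root differs from seed node 1 at the same block:

BLOCK="0x3e8"  # Block 1000 in hex

# Reference state root from seed node 1
REF=$(curl -s -X POST -H "Content-Type: application/json" \
  --data "{\"jsonrpc\":\"2.0\",\"method\":\"eth_getBlockByNumber\",\"params\":[\"$BLOCK\",false],\"id\":1}" \
  http://localhost:8545 | jq -r '.result.stateRoot')

for port in 8547 8549 8552 8554 8556; do
  ROOT=$(curl -s -X POST -H "Content-Type: application/json" \
    --data "{\"jsonrpc\":\"2.0\",\"method\":\"eth_getBlockByNumber\",\"params\":[\"$BLOCK\",false],\"id\":1}" \
    http://localhost:$port | jq -r '.result.stateRoot')
  [ "$ROOT" == "$REF" ] && echo "Port $port: ✅ MATCH" || echo "Port $port: ❌ MISMATCH ($ROOT)"
done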

3.1.3 Account State Verification

# Check pre-funded account balance on all nodes
ACCOUNT="0x1000000000000000000000000000000000000001"

for port in 8545 8547 8549 8552 8554 8556; do
  BALANCE=$(curl -s -X POST -H "Content-Type: application/json" \
    --data "{\"jsonrpc\":\"2.0\",\"method\":\"eth_getBalance\",\"params\":[\"$ACCOUNT\",\"latest\"],\"id\":1}" \
    http://localhost:$port | jq -r '.result')
  echo "Port $port: $BALANCE"
done

Test Scenario 3.2: Performance Metrics

3.2.1 Sync Time Measurement

# Extract sync start and completion times from logs
docker compose logs fukuii-node6 | grep "Starting fast sync" | head -1
docker compose logs fukuii-node6 | grep "Fast sync completed" | head -1

# Calculate duration
# (Manual calculation based on timestamps)
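
The manual step can be scripted; a minimal sketch using docker compose's --timestamps flag, assuming GNU date is available to parse the timestamps it prepends:

# Use compose-generated timestamps (-t), strip the "service |" prefix, keep the first field
START_TS=$(docker compose logs -t fukuii-node6 | grep "Starting fast sync" | head -1 | sed 's/^[^|]*| *//' | awk '{print $1}')
END_TS=$(docker compose logs -t fukuii-node6 | grep "Fast sync completed" | head -1 | sed 's/^[^|]*| *//' | awk '{print $1}')

# Convert both timestamps to epoch seconds and subtract
echo "Fast sync duration: $(( $(date -d "$END_TS" +%s) - $(date -d "$START_TS" +%s) )) seconds"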

3.2.2 Bandwidth Analysis

# Monitor container network statistics
docker stats fukuii-node6 --no-stream --format \
  "table {{.Container}}\t{{.NetIO}}\t{{.BlockIO}}\t{{.MemUsage}}"

3.2.3 Resource Utilization

# CPU and memory usage during sync
docker stats fukuii-node4 fukuii-node5 fukuii-node6 --no-stream

Test Scenario 3.3: Post-Sync Functionality

3.3.1 Transaction Broadcast

# Send a test transaction from fast-synced node
# (Requires setting up accounts and sending transactions)
# This validates that the synced state is functional

# Example (pseudo-code):
# 1. Unlock account on node 6
# 2. Send transaction to another address
# 3. Verify transaction is mined
# 4. Check transaction appears on all nodes
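
A minimal sketch of steps 2-4 is shown below, assuming a funded account that is managed and already unlocked on node 6 (the pre-funded address from scenario 3.1.3 and the recipient address are purely illustrative; adjust to the accounts actually provisioned in the genesis):

FROM="0x1000000000000000000000000000000000000001"   # illustrative funded account, assumed unlocked on node 6
TO="0x2000000000000000000000000000000000000002"     # arbitrary recipient, illustrative only

# 2. Send a test transaction from the fast-synced node
TX_HASH=$(curl -s -X POST -H "Content-Type: application/json" \
  --data "{\"jsonrpc\":\"2.0\",\"method\":\"eth_sendTransaction\",\"params\":[{\"from\":\"$FROM\",\"to\":\"$TO\",\"value\":\"0x1\"}],\"id\":1}" \
  http://localhost:8556 | jq -r '.result')
echo "Transaction hash: $TX_HASH"

# 3. Wait for the transaction to be mined, then fetch its receipt
sleep 30
curl -s -X POST -H "Content-Type: application/json" \
  --data "{\"jsonrpc\":\"2.0\",\"method\":\"eth_getTransactionReceipt\",\"params\":[\"$TX_HASH\"],\"id\":1}" \
  http://localhost:8556 | jq '.result.blockNumber'

# 4. Verify the transaction is also visible from a seed node
curl -s -X POST -H "Content-Type: application/json" \
  --data "{\"jsonrpc\":\"2.0\",\"method\":\"eth_getTransactionByHash\",\"params\":[\"$TX_HASH\"],\"id\":1}" \
  http://localhost:8545 | jq '.result.hash'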

3.3.2 Mining After Fast Sync

# Start mining on node 6
curl -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"miner_start","params":[],"id":1}' \
  http://localhost:8556

# Wait for blocks to be mined
sleep 60

# Verify node 6 mined blocks
curl -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["latest",false],"id":1}' \
  http://localhost:8556 | jq '.result.miner'
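
If node 6 uses the coinbase from the Appendix A template, the returned miner field should equal that address; a quick automated check (the coinbase value below is the template's and should be adjusted if node 6 is configured differently):

EXPECTED_COINBASE="0x6000000000000000000000000000000000000006"   # from the Appendix A template
MINER=$(curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["latest",false],"id":1}' \
  http://localhost:8556 | jq -r '.result.miner')
[ "$MINER" == "$EXPECTED_COINBASE" ] && echo "✅ Node 6 mined the latest block" || echo "❌ Latest block mined by $MINER"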

Success Criteria for Phase 3

  • Block hashes match across all nodes at same height
  • State roots match across all nodes at same height
  • Account balances consistent across all nodes
  • Fast sync time documented and within acceptable range
  • Fast-synced nodes can mine blocks successfully
  • Fast-synced nodes can broadcast transactions

Test Scenarios Summary

Test ID | Scenario                         | Priority | Duration  | Dependencies
FS-1.1  | Seed node setup                  | Critical | 30 min    | None
FS-1.2  | Blockchain history generation    | Critical | 4+ hours  | FS-1.1
FS-2.1  | Single node fast sync            | Critical | 30-60 min | FS-1.2
FS-2.2  | Message decompression validation | High     | 30-60 min | FS-1.2
FS-2.3  | Multi-node fast sync             | High     | 30-60 min | FS-1.2
FS-3.1  | State verification               | Critical | 15 min    | FS-2.1
FS-3.2  | Performance metrics              | Medium   | 15 min    | FS-2.1
FS-3.3  | Post-sync functionality          | High     | 30 min    | FS-2.1

Known Issues and Troubleshooting

Issue 1: Message Decompression Errors

Symptoms:
  • Logs show "Failed to decompress message" errors
  • Peer connections frequently drop
  • Sync progress stalls

Investigation Steps:
  1. Check RLPx handshake completion in logs
  2. Verify Snappy compression is enabled on both ends
  3. Examine message size and format
  4. Check for protocol version mismatches

Potential Solutions:
  • Ensure all nodes use compatible protocol versions
  • Verify the Snappy compression library is working correctly
  • Check network MTU settings for fragmentation issues

Issue 2: Fast Sync Not Starting

Symptoms:
  • Node shows "syncing: false" but block height is 0
  • No "Starting fast sync" message in logs
  • Node remains at genesis block

Investigation Steps:
  1. Verify do-fast-sync = true in configuration
  2. Check peer count (need ≥1 peer for fast sync)
  3. Examine pivot block selection logic
  4. Review fast sync prerequisites

Potential Solutions:
  • Ensure sufficient blockchain history on seed nodes (≥100 blocks)
  • Verify peer connections are established
  • Check that seed nodes are not also in fast sync mode
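
A quick triage can combine these three checks; a minimal sketch (the configuration path is the node 6 file referenced in scenario 2.1.1):

# 1. Confirm fast sync is enabled in the node 6 configuration
grep -n "do-fast-sync" ops/gorgoroth/conf/node6/gorgoroth.conf

# 2. Confirm node 6 has at least one peer
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"net_peerCount","params":[],"id":1}' \
  http://localhost:8556 | jq -r '.result'

# 3. Confirm the seed chain is long enough for pivot selection
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
  http://localhost:8545 | jq -r '.result'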

Issue 3: Peer Connection Issues

Symptoms:
  • Peer count remains 0 or very low
  • "No suitable peers for fast sync" in logs
  • Frequent peer disconnections

Investigation Steps:
  1. Check static-nodes.json configuration
  2. Verify Docker network connectivity
  3. Examine firewall/port configurations
  4. Review peer selection logic in logs

Potential Solutions:
  • Run fukuii-cli sync-static-nodes to update the peer list
  • Verify the Docker network is functioning: docker network inspect gorgoroth_gorgoroth
  • Check that enode URLs are reachable from all containers

Issue 4: Sync Stalls at Specific Block

Symptoms:
  • Sync progress stops at the same block repeatedly
  • "Waiting for pivot block" appears in logs
  • Block number doesn't advance

Investigation Steps:
  1. Check seed node block height
  2. Verify the pivot block is available from peers
  3. Examine fast sync state machine transitions
  4. Review block validation errors

Potential Solutions:
  • Ensure seed nodes are actively mining and advancing the chain
  • Restart the fast sync node to trigger a new pivot block selection (see the command below)
  • Check for block validation failures in logs
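
Restarting the fast sync node to force a new pivot selection can be done with a single compose command, for example:

docker compose -f docker-compose-6nodes.yml restart fukuii-node6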

Automation Scripts

Script 1: Fast Sync Test Runner

Location: ops/gorgoroth/test-scripts/test-fast-sync.sh
Permissions: Executable (chmod +x applied)
Usage: ./test-fast-sync.sh

#!/bin/bash
# Fast Sync Test Script for Fukuii Gorgoroth Network
# Tests fast sync functionality on the 6-node network

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
GORGOROTH_DIR="$(dirname "$SCRIPT_DIR")"

echo "=== Fukuii Fast Sync Test Suite ==="
echo "Starting at: $(date)"
echo

# Configuration
MIN_BLOCKS=1000
SYNC_TIMEOUT=3600  # 1 hour

# Step 1: Start seed nodes
echo "Step 1: Starting seed nodes (1-3)..."
cd "$GORGOROTH_DIR"
docker compose -f docker-compose-6nodes.yml up -d fukuii-node1 fukuii-node2 fukuii-node3
sleep 45

# Step 2: Verify seed connectivity
echo "Step 2: Verifying seed node connectivity..."
for port in 8545 8547 8549; do
  PEERS=$(curl -s -X POST -H "Content-Type: application/json" \
    --data '{"jsonrpc":"2.0","method":"net_peerCount","params":[],"id":1}' \
    http://localhost:$port | jq -r '.result')
  PEER_COUNT=$((16#${PEERS:2}))
  echo "  Port $port: $PEER_COUNT peers"
  if [ $PEER_COUNT -lt 2 ]; then
    echo "  ❌ ERROR: Insufficient peers on port $port"
    exit 1
  fi
done
echo "  ✅ All seed nodes connected"

# Step 3: Check blockchain height
echo "Step 3: Checking blockchain height..."
BLOCK_HEX=$(curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
  http://localhost:8545 | jq -r '.result')
BLOCK_NUM=$((16#${BLOCK_HEX:2}))
echo "  Current block height: $BLOCK_NUM"

if [ $BLOCK_NUM -lt $MIN_BLOCKS ]; then
  echo "  ⚠️  Insufficient blocks for testing (need $MIN_BLOCKS, have $BLOCK_NUM)"
  echo "  Waiting for more blocks to be mined..."
  # In automated environment, would wait or skip
  exit 1
fi
echo "  ✅ Sufficient blockchain history"

# Step 4: Start fast sync node (node 6)
echo "Step 4: Starting fast sync node (node 6)..."
docker compose -f docker-compose-6nodes.yml up -d fukuii-node6
sleep 30

# Step 5: Monitor fast sync progress
echo "Step 5: Monitoring fast sync progress..."
START_TIME=$(date +%s)
SYNCED=false

while [ $(($(date +%s) - START_TIME)) -lt $SYNC_TIMEOUT ]; do
  # Check if syncing is complete
  SYNCING=$(curl -s -X POST -H "Content-Type: application/json" \
    --data '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}' \
    http://localhost:8556 | jq -r '.result')

  if [ "$SYNCING" == "false" ]; then
    # Get current block
    NODE_BLOCK_HEX=$(curl -s -X POST -H "Content-Type: application/json" \
      --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
      http://localhost:8556 | jq -r '.result')
    NODE_BLOCK=$((16#${NODE_BLOCK_HEX:2}))

    if [ $NODE_BLOCK -gt 0 ]; then
      SYNCED=true
      break
    fi
  fi

  echo "  Still syncing... (elapsed: $(($(date +%s) - START_TIME))s)"
  sleep 10
done

if [ "$SYNCED" = true ]; then
  ELAPSED=$(($(date +%s) - START_TIME))
  echo "  ✅ Fast sync completed in ${ELAPSED}s"
else
  echo "  ❌ Fast sync timeout after ${SYNC_TIMEOUT}s"
  exit 1
fi

# Step 6: Verify state consistency
echo "Step 6: Verifying state consistency..."
SEED_HASH=$(curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["latest",false],"id":1}' \
  http://localhost:8545 | jq -r '.result.hash')

NODE6_HASH=$(curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["latest",false],"id":1}' \
  http://localhost:8556 | jq -r '.result.hash')

if [ "$SEED_HASH" == "$NODE6_HASH" ]; then
  echo "  ✅ Block hashes match"
else
  echo "  ❌ Block hash mismatch!"
  echo "     Seed: $SEED_HASH"
  echo "     Node6: $NODE6_HASH"
  exit 1
fi

# Step 7: Check for decompression errors
echo "Step 7: Checking for decompression errors..."
DECOMPRESS_ERRORS=$(docker compose logs fukuii-node6 2>&1 | grep -i "decompress.*error" | wc -l)
if [ $DECOMPRESS_ERRORS -eq 0 ]; then
  echo "  ✅ No decompression errors found"
else
  echo "  ⚠️  Found $DECOMPRESS_ERRORS decompression error(s)"
  docker compose logs fukuii-node6 2>&1 | grep -i "decompress.*error" | head -5
fi

echo
echo "=== Fast Sync Test Complete ==="
echo "Results:"
echo "  - Sync time: ${ELAPSED}s"
echo "  - Final block: $NODE_BLOCK"
echo "  - State verified: ✅"
echo "  - Decompression errors: $DECOMPRESS_ERRORS"
echo
echo "Finished at: $(date)"

Script 2: Decompression Monitor

Location: ops/gorgoroth/test-scripts/monitor-decompression.sh
Permissions: Executable (chmod +x applied)
Usage: ./monitor-decompression.sh [container_name]

#!/bin/bash
# Monitor for message decompression issues in real-time

CONTAINER_NAME="${1:-fukuii-node6}"

echo "=== Monitoring $CONTAINER_NAME for decompression issues ==="
echo "Press Ctrl+C to stop"
echo

docker compose logs -f "$CONTAINER_NAME" 2>&1 | while read line; do
  # Highlight decompression-related messages
  if echo "$line" | grep -qi "decompress"; then
    echo "[DECOMPRESS] $line"
  elif echo "$line" | grep -qi "snappy"; then
    echo "[SNAPPY] $line"
  elif echo "$line" | grep -qi "rlpx"; then
    echo "[RLPx] $line"
  elif echo "$line" | grep -qi "handshake"; then
    echo "[HANDSHAKE] $line"
  fi
done

Performance Baselines

Expected Sync Times (Gorgoroth 6-Node Network)

Blocks | Full Sync | Fast Sync | Improvement
1,000  | ~30 min   | ~5 min    | 83%
5,000  | ~150 min  | ~15 min   | 90%
10,000 | ~300 min  | ~25 min   | 92%

Note: Times are estimates and will vary based on hardware and network conditions.

Resource Usage Baselines

Metric           | Full Sync           | Fast Sync          | Delta
Peak Memory      | ~1.5 GB             | ~1.8 GB            | +20%
CPU Usage        | 40-60%              | 60-80%             | +25%
Network Download | ~500 MB/1000 blocks | ~50 MB/1000 blocks | -90%
Disk I/O         | High                | Medium             | -40%

Test Data Collection

Metrics to Collect

  1. Sync Performance
     • Time to complete fast sync
     • Blocks synced per minute
     • Pivot block selection time
     • State download time

  2. Network Metrics
     • Peer count during sync
     • Peer connection stability
     • Message send/receive rates
     • Bandwidth consumption

  3. Resource Utilization
     • Peak memory usage
     • Average CPU utilization
     • Disk I/O operations
     • Database size growth

  4. Error Tracking
     • Decompression error count
     • Peer disconnect reasons
     • Block validation failures
     • State verification errors
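
Most of the network and resource metrics above can be sampled on a schedule with the tooling already used in this plan; a minimal sketch that appends one CSV row per 10-second interval for node 6 (the column set and sample count are illustrative):

OUT="fast-sync-metrics.csv"
echo "timestamp,peers,block,mem_usage" > "$OUT"

# 60 samples at 10-second intervals (~10 minutes of coverage)
for i in $(seq 1 60); do
  PEERS=$(curl -s -X POST -H "Content-Type: application/json" \
    --data '{"jsonrpc":"2.0","method":"net_peerCount","params":[],"id":1}' \
    http://localhost:8556 | jq -r '.result')
  BLOCK=$(curl -s -X POST -H "Content-Type: application/json" \
    --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
    http://localhost:8556 | jq -r '.result')
  MEM=$(docker stats fukuii-node6 --no-stream --format "{{.MemUsage}}")
  echo "$(date +%s),$PEERS,$BLOCK,$MEM" >> "$OUT"
  sleep 10
done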

Log Collection

# Collect all logs after test completion
fukuii-cli collect-logs 6nodes ./test-results/fast-sync-$(date +%Y%m%d-%H%M%S)

Continuous Testing Integration

Nightly Fast Sync Tests

Add to .github/workflows/gorgoroth-nightly.yml (create the file if it does not already exist):

name: Gorgoroth Fast Sync Tests

on:
  schedule:
    - cron: '0 2 * * *'  # Run at 2 AM daily
  workflow_dispatch:

jobs:
  fast-sync-test:
    runs-on: ubuntu-latest
    timeout-minutes: 180

    steps:
      - uses: actions/checkout@v4

      - name: Set up Docker
        uses: docker/setup-buildx-action@v3

      - name: Run Fast Sync Test
        run: |
          cd ops/gorgoroth/test-scripts
          ./test-fast-sync.sh

      - name: Collect Logs
        if: always()
        run: |
          cd ops/gorgoroth
          fukuii-cli collect-logs 6nodes ./logs

      - name: Upload Logs
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: fast-sync-test-logs
          path: ops/gorgoroth/logs/
          retention-days: 30

Documentation Updates

After completing tests, update the following documentation:

  1. README.md - Add fast sync test results
  2. GORGOROTH_COMPATIBILITY_TESTING.md - Include fast sync scenarios
  3. TROUBLESHOOTING_REPORT.md - Document any new issues discovered
  4. KPI_BASELINES.md - Add fast sync performance baselines

Appendices

Appendix A: Configuration Files

Fast Sync Node Configuration Template

# ops/gorgoroth/conf/node6/gorgoroth.conf
include "base-gorgoroth.conf"

fukuii {
  mining {
    coinbase = "0x6000000000000000000000000000000000000006"
  }

  network {
    server-address {
      port = 30303
    }

    rpc {
      http {
        interface = "0.0.0.0"
        port = 8545
      }
      ws {
        interface = "0.0.0.0"
        port = 8546
      }
    }
  }

  sync {
    # Enable fast sync
    do-fast-sync = true

    # Fast sync configuration
    fast-sync-throttle = 100.milliseconds
    peers-scan-interval = 3.seconds

    # Pivot block offset (sync from N blocks before chain tip)
    pivot-block-offset = 100

    # Maximum pivot age (prevent syncing from stale pivots)
    max-pivot-age = 1000

    # State sync parameters
    do-state-sync = true
    state-sync-batch-size = 100

    # Disable SNAP sync for this test
    do-snap-sync = false
  }

  # Enable detailed logging for debugging
  logging {
    level = "DEBUG"

    # Log message protocol for decompression analysis
    network.protocol = "TRACE"
  }
}

Appendix B: RPC Commands Reference

Essential RPC Commands for Fast Sync Testing

# Check sync status
curl -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}' \
  http://localhost:8556

# Get current block
curl -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
  http://localhost:8556

# Get peer count
curl -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"net_peerCount","params":[],"id":1}' \
  http://localhost:8556

# Get peer info
curl -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"admin_peers","params":[],"id":1}' \
  http://localhost:8556

# Get node info
curl -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"admin_nodeInfo","params":[],"id":1}' \
  http://localhost:8556

# Check mining status
curl -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_mining","params":[],"id":1}' \
  http://localhost:8556

# Get block by number
curl -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["0x64",false],"id":1}' \
  http://localhost:8556

Appendix C: Expected Log Patterns

Successful Fast Sync Log Sequence

1. Node startup
   INFO  Starting Fukuii node...
   INFO  Network ID: 7, Chain ID: 0x3f

2. Peer discovery
   INFO  Connecting to static peers...
   DEBUG RLPx handshake initiated with <enode>
   DEBUG RLPx handshake completed with <enode>
   INFO  Connected to peer <enode>

3. Fast sync initiation
   INFO  Starting fast sync...
   INFO  Selecting pivot block...
   INFO  Pivot block selected: 950 (target: 1000, offset: 100)

4. Header download
   DEBUG Requesting headers from pivot block
   DEBUG Downloaded headers: 100/950
   DEBUG Downloaded headers: 500/950
   INFO  Header download completed: 950 headers

5. Block body download
   DEBUG Requesting block bodies
   DEBUG Downloaded bodies: 100/950
   DEBUG Downloaded bodies: 500/950
   INFO  Block body download completed: 950 blocks

6. Receipt download
   DEBUG Requesting receipts
   DEBUG Downloaded receipts: 100/950
   INFO  Receipt download completed: 950 blocks

7. State sync
   INFO  Starting state sync from block 950
   DEBUG Downloading state nodes...
   DEBUG State sync progress: 25%
   DEBUG State sync progress: 50%
   DEBUG State sync progress: 75%
   INFO  State sync completed

8. Verification
   INFO  Verifying state root...
   INFO  State verification passed
   INFO  Fast sync completed successfully

9. Switch to full sync
   INFO  Switching to full sync mode
   INFO  Following chain head

Appendix D: Troubleshooting Checklist

Before reporting issues, verify:

  • Seed nodes are running and synchronized
  • At least 1000 blocks have been mined on seed nodes
  • Fast sync is enabled in node configuration (do-fast-sync = true)
  • Node has at least 1 peer connection
  • Docker network is functioning correctly
  • Static nodes configuration is up to date
  • No port conflicts exist
  • Sufficient disk space is available
  • Container logs show node startup success
  • RPC endpoints are accessible
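
Several of these checks can be scripted before filing an issue; a minimal sketch covering the configuration flag, peer count, and disk space items (paths and ports follow the conventions used earlier in this plan; /var/lib/docker is the default Docker data directory and may differ on your host):

# Fast sync flag present in the node 6 config?
grep -q "do-fast-sync = true" ops/gorgoroth/conf/node6/gorgoroth.conf \
  && echo "✅ do-fast-sync enabled" || echo "❌ do-fast-sync not set"

# At least one peer on node 6?
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"net_peerCount","params":[],"id":1}' \
  http://localhost:8556 | jq -r '.result'

# Sufficient disk space on the Docker host?
df -h /var/lib/docker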

Revision History

Date       | Version | Changes                        | Author
2025-12-08 | 1.0     | Initial fast sync testing plan | GitHub Copilot

Maintained by: Chippr Robotics Engineering Team
Last Updated: December 8, 2025
Next Review: March 8, 2026 (Quarterly)