Two major pull requests merged this week dramatically reduce the computational overhead of Gonka's inference message processing and on-chain validation.

The Problem: Redundant Cryptographic Operations

Blockchain signature verification is inherently expensive. Profiling on the live production chain showed that EC signature recovery alone consumed over 50% of the processing time for each StartInference and FinishInference message, capping total throughput. Meanwhile, the MsgValidation and MsgClaimRewards handlers performed repeated state lookups on every call, adding further latency to the hot path.

Key Verification: Verify Once, Compare After

PR #779 introduces a first-message-verifies, second-message-compares policy for inference messages:

  • Whichever inference message arrives first (StartInference or FinishInference) pays the full cost of cryptographic signature verification.
  • The second message skips cryptographic verification entirely, performing lightweight O(1) equality checks against the already-verified state instead.
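The flow above can be sketched as follows. The record type, field names, and handler here are hypothetical stand-ins for Gonka's actual message-handling code, with the expensive EC-recovery step stubbed out:

```go
package main

import "fmt"

// InferenceRecord is an illustrative shape for the already-verified
// inference state; field names are hypothetical, not Gonka's schema.
type InferenceRecord struct {
	PromptHash    string
	Timestamp     int64
	TransferAgent string
	RequestedBy   string
	Verified      bool // set by whichever message arrived first
}

// verifySignature stands in for the expensive EC-recovery path.
func verifySignature(msg InferenceRecord) bool {
	// ...real code would recover the signer from the signature...
	return true
}

// handleMessage applies the first-verifies, second-compares policy.
func handleMessage(store map[string]*InferenceRecord, id string, msg InferenceRecord) error {
	existing, ok := store[id]
	if !ok || !existing.Verified {
		// First message: pay the full cryptographic cost once.
		if !verifySignature(msg) {
			return fmt.Errorf("signature verification failed")
		}
		msg.Verified = true
		store[id] = &msg
		return nil
	}
	// Second message: O(1) equality checks against verified state.
	if msg.PromptHash != existing.PromptHash ||
		msg.Timestamp != existing.Timestamp ||
		msg.TransferAgent != existing.TransferAgent ||
		msg.RequestedBy != existing.RequestedBy {
		return fmt.Errorf("mismatch with verified inference state")
	}
	return nil
}

func main() {
	store := map[string]*InferenceRecord{}
	msg := InferenceRecord{PromptHash: "abc", Timestamp: 100, TransferAgent: "ta1", RequestedBy: "dev1"}
	fmt.Println(handleMessage(store, "inf-1", msg)) // <nil>: verified cryptographically
	fmt.Println(handleMessage(store, "inf-1", msg)) // <nil>: passed by field comparison
}
```

The second call touches only field comparisons, which is where the hot-path savings come from.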

Specific optimizations include:

  • Dev Signature: Verified on the first message only. The second message compares prompt hash, request timestamp, transfer agent, and requested_by fields.
  • Transfer Agent Signature: Verified on the first message only if that message is a FinishInference. If StartInference arrives first, TA verification is deferred until the FinishInference arrives.
  • Executor Signature: Cryptographic verification removed entirely. All FinishInference data is already cryptographically signed by the executor, making separate verification redundant.
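Taken together, the arrival-order rules amount to a small dispatch table. This sketch encodes them with hypothetical names; in particular, the assumption that a deferred transfer-agent check runs when the FinishInference later arrives is our reading of the rule, not confirmed by the PR text:

```go
package main

import "fmt"

// signatureChecks returns which cryptographic verifications run for a
// message, given its type and whether it arrived first. Function name
// and labels are illustrative, not Gonka's actual API.
func signatureChecks(msgType string, isFirst bool) []string {
	var checks []string
	if isFirst {
		// Dev signature is verified once, on whichever message is first.
		checks = append(checks, "dev")
		if msgType == "FinishInference" {
			checks = append(checks, "transfer_agent")
		}
		// StartInference first: the transfer-agent check is deferred.
	} else if msgType == "FinishInference" {
		// Deferred transfer-agent verification is now due.
		checks = append(checks, "transfer_agent")
	}
	// Executor signature: never verified separately, since the
	// FinishInference payload is already executor-signed.
	return checks
}

func main() {
	fmt.Println(signatureChecks("FinishInference", true))  // [dev transfer_agent]
	fmt.Println(signatureChecks("StartInference", true))   // [dev]
	fmt.Println(signatureChecks("FinishInference", false)) // [transfer_agent]
}
```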

The result: block processing time for inference messages drops significantly, with ecrecover operations replaced by simple string and integer comparisons on the second message.

MsgValidation: Transient Caching and Storage Rework

PR #874 tackles the validation and reward-claiming pipeline:

  • BeginBlock cache: A per-block transient cache of epoch model and group metadata (threshold, total weight, participant weight, reputation) is now built once in BeginBlock, eliminating repeated store reads during MsgValidation processing.
  • Epoch boundary filtering: Validations arriving more than one epoch after the inference epoch are now ignored, reducing unnecessary work.
  • Storage migration: The legacy EpochGroupValidations aggregate map is replaced in hot paths by a new EpochGroupValidationEntry keyset, keyed by (epoch, participant, inferenceId). A v0.2.11 upgrade migration backfills current and previous epoch entries, then clears the old map.
  • Configurable claim validation: A new ValidationParams.claim_validation_enabled flag allows toggling expensive validateClaim checks in MsgClaimRewards, giving operators control over the performance-correctness tradeoff.
  • New pruning system: A dedicated pruner with PruningState tracking ensures old validation entries are cleaned up automatically.
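A minimal sketch of the BeginBlock cache and the epoch-boundary filter, assuming hypothetical type and function names (the real metadata lives in Gonka's epoch-group state):

```go
package main

import "fmt"

// EpochGroupMeta is an illustrative shape for the cached metadata.
type EpochGroupMeta struct {
	Threshold         uint64
	TotalWeight       uint64
	ParticipantWeight map[string]uint64
}

// ValidationCache holds per-block transient data built in BeginBlock.
type ValidationCache struct {
	byEpoch map[uint64]EpochGroupMeta
}

// buildCache performs one store read per epoch, once per block, so
// MsgValidation handlers read from memory instead of the KV store.
func buildCache(load func(epoch uint64) EpochGroupMeta, epochs ...uint64) *ValidationCache {
	c := &ValidationCache{byEpoch: map[uint64]EpochGroupMeta{}}
	for _, e := range epochs {
		c.byEpoch[e] = load(e)
	}
	return c
}

// acceptValidation applies the epoch-boundary filter: validations more
// than one epoch after the inference epoch are ignored.
func acceptValidation(currentEpoch, inferenceEpoch uint64) bool {
	return currentEpoch <= inferenceEpoch+1
}

func main() {
	cache := buildCache(func(e uint64) EpochGroupMeta {
		return EpochGroupMeta{Threshold: 2, TotalWeight: 300}
	}, 41, 42) // e.g. previous and current epochs
	fmt.Println(len(cache.byEpoch))       // 2
	fmt.Println(acceptValidation(43, 42)) // true: one epoch later, still accepted
	fmt.Println(acceptValidation(44, 42)) // false: more than one epoch late
}
```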
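The new keyset can be illustrated with a composite key builder. The byte layout here (big-endian epoch followed by length-prefixed strings) is an assumption for illustration, not the module's actual encoding:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// validationEntryKey builds a composite key in the spirit of the new
// EpochGroupValidationEntry keyset: (epoch, participant, inferenceId).
// Big-endian epoch first means entries sort by epoch, which makes
// per-epoch iteration and pruning a simple prefix scan.
func validationEntryKey(epoch uint64, participant, inferenceID string) []byte {
	key := make([]byte, 0, 8+1+len(participant)+1+len(inferenceID))
	var epochBytes [8]byte
	binary.BigEndian.PutUint64(epochBytes[:], epoch)
	key = append(key, epochBytes[:]...)
	key = append(key, byte(len(participant)))
	key = append(key, participant...)
	key = append(key, byte(len(inferenceID)))
	key = append(key, inferenceID...)
	return key
}

func main() {
	k := validationEntryKey(7, "alice", "inf-123")
	fmt.Println(len(k), k[:8]) // 22 [0 0 0 0 0 0 0 7]
}
```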

Why It Matters

These optimizations directly increase the number of inferences the network can process per block. By eliminating redundant cryptographic operations and replacing repeated state lookups with cached values, the chain can handle higher throughput without sacrificing validation integrity.

Both changes require an atomic upgrade as part of the v0.2.11 release. Comprehensive test suites, including cross-message comparison tests and updated Kotlin integration tests, ensure validation correctness is maintained under the new architecture.