Gonka has closed one of its longest-running priority-one issues: #339, "Distributed vs truly decentralized and trustless — and where we are." The accompanying research document lays out a rigorous framework for what trustless decentralized AI training actually requires, and maps Gonka's current position against it.

The Framework

The document distinguishes three axes that define any distributed training system:

  1. Orchestration — central coordinator vs. peer self-organization
  2. Trust — assumed honest participants vs. defense against adversaries
  3. Data Control — system-controlled data assignment vs. private datasets

These axes produce distinct scenarios. Data-center training keeps all three axes under the operator's control. Federated learning gives up trust and data control but retains central orchestration. The hardest variant, collaborative training without central coordination, is where Gonka is heading.
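The three-axis framing can be made concrete with a small sketch. The class and scenario names below are this article's framing, not identifiers from the Gonka codebase; counting the uncontrolled axes gives a rough "hardness" ordering that matches the document's progression.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrainingScenario:
    name: str
    central_orchestration: bool  # axis 1: coordinator vs. peer self-organization
    trusted_participants: bool   # axis 2: assumed honest vs. adversarial
    system_controls_data: bool   # axis 3: assigned data vs. private datasets

SCENARIOS = [
    TrainingScenario("data-center training", True, True, True),
    TrainingScenario("federated learning", True, False, False),
    TrainingScenario("collaborative (decentralized) training", False, False, False),
]

def hardness(s: TrainingScenario) -> int:
    """Count how many axes sit outside the system's control."""
    return sum(not flag for flag in
               (s.central_orchestration, s.trusted_participants, s.system_controls_data))
```

Under this toy metric, data-center training scores 0, federated learning 2, and collaborative training 3, which is exactly why the last variant is called the hardest.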

The Attack Triangle

In a permissionless network with economic incentives, three types of adversaries emerge:

Type       Motivation    Behavior                                                    Detection difficulty
Lazy       Greed         Claims work, submits random or replayed data                Medium
Saboteur   Destruction   Sends NaN, infinities, extreme noise                        Low (obvious)
Backdoor   Control       Crafts plausible updates that introduce targeted failures   High (blends in)

Lazy attackers game the reward system. Saboteurs are loud and relatively easy to catch. Backdoor attackers are the real threat — their updates look normal until the model misbehaves on specific inputs.
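The asymmetry between the three adversaries is easy to see in code. The screening pass below is a hypothetical sketch, not Gonka's actual validator: it catches the loud failure modes (saboteur noise, lazy replays) with simple numeric and hashing checks, and shows why a crafted backdoor update sails through — it is numerically unremarkable.

```python
import hashlib
import math

# Digests of updates already accepted; a repeat suggests a lazy replay.
_seen_digests: set[str] = set()

def screen_update(update: list[float], max_norm: float = 1e3) -> str:
    """Flag the obvious adversaries; backdoor updates still pass."""
    # Saboteur: non-finite values are trivially detectable.
    if any(math.isnan(x) or math.isinf(x) for x in update):
        return "reject: non-finite values (saboteur)"
    # Saboteur: extreme magnitudes stand out against honest updates.
    if math.sqrt(sum(x * x for x in update)) > max_norm:
        return "reject: extreme magnitude (saboteur)"
    # Lazy: an exactly replayed update hashes to a seen digest.
    digest = hashlib.sha256(repr(update).encode()).hexdigest()
    if digest in _seen_digests:
        return "reject: replayed update (lazy)"
    _seen_digests.add(digest)
    return "accept"  # a plausible-looking backdoor update lands here
```

Note what this sketch cannot do: a backdoor update passes every check, which is why the document rates its detection difficulty as high.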

Where Gonka Stands

Gonka's Proof of Compute (PoC) consensus already addresses the lazy attacker problem through cryptographic validation of GPU work. Secret Seed Validation makes it computationally expensive to fake inference results. The collateral system adds economic skin-in-the-game: operators stake real value, making sabotage costly.
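The idea behind seeded validation can be illustrated with a toy commit-and-recompute check. This is not Gonka's actual Secret Seed Validation protocol, which is more involved; `fake_inference` is a stand-in for real GPU work, and the point is that a deterministic seed lets a validator cheaply spot-check a claimed result.

```python
import hashlib
import random

def fake_inference(seed: int) -> list[int]:
    """Stand-in for seeded GPU work: deterministic given the seed."""
    rng = random.Random(seed)
    return [rng.randrange(1000) for _ in range(4)]

def commit(result: list[int]) -> str:
    """Commit to a result so it cannot be changed after the fact."""
    return hashlib.sha256(repr(result).encode()).hexdigest()

def validate(seed: int, claimed_commit: str) -> bool:
    """Validator re-runs the seeded computation and checks the commitment."""
    return commit(fake_inference(seed)) == claimed_commit
```

A lazy operator who submits random data without doing the work has no way to produce a commitment that matches the validator's recomputation, which is the property that makes faking results expensive.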

The document positions Gonka's current architecture — PoC validation, Secret Seeds, collateral staking, and the upcoming Confirmation PoC — as building blocks toward a fully trustless training pipeline. The research maps remaining gaps and charts the path forward.

What This Means

Issue #339 was opened in September 2025 and tracked as P1 throughout. Its closure signals that the team has completed the theoretical groundwork for decentralized AI training. The research paper linked in the issue provides the academic foundation for Gonka's next protocol-level decisions.

For node operators, the practical takeaway is that PoC validation, Secret Seeds, and collateral are not isolated features — they form a coherent defense stack against the full spectrum of adversarial behavior in permissionless AI networks.