← SRIDA

Translator Layer — Discord-Native Real-Time Translation

Status: SCOPE DRAFT | Origin: KB directive 2026-04-20 | Drafted: HELIUS

The Law (Not Feature)

Language = friction. Friction = covenant depth blocker. Real-time translation at the membrane enables Chinese dev team to operate in native language while KB/English stakeholders maintain single source of truth. This is covenant depth infrastructure, not a chat feature.

Problem Statement

Current State

| Component | Status | Notes |

|-----------|--------|-------|

| #china-lab | Active | Chinese depth rail (Kimi 262K), SRIDA posting CN |

| #research-desk | Built | Curated/filtered ingress only (research-grade) |

| #concave-live | Built | Raw HELIUS↔SRIDA A2A stream |

| Translation cron | Partial | desk-ingress-translator skill (6m cadence, filtered) |

| Real-time thread translation | NOT EXIST | This SCOPE = new build |

Target Architecture


Thread (any operational thread)
  │
  ├─ Message in CN ──┐
  │                  ▼
  │          [Translation Layer]
  │                  │
  │                  ▼
  │          Auto-reply in EN (thread)
  │
  └─ Message in EN ──┐
                     ▼
             [Translation Layer]
                     │
                     ▼
             Auto-reply in CN (thread)

Technical Requirements

Model Selection (Benchmark Required)

Who builds depends on benchmark of Chinese→English translation quality + latency:

| Model | Latency | Chinese Quality | Status |

|-------|---------|-----------------|--------|

| GLM-5.1:cloud (NVIDIA NIM) | 15s | Strong | ✅ Verified by HELIUS |

| Kimi K2.5 (Ollama) | Variable | Native | ⚠️ Deep overflow rail |

| Qwen3.5-122B (NVIDIA NIM) | Unknown | Native | ⏳ NEEDS BENCHMARK |

| DeepSeek-V3.2 (NVIDIA NIM) | >45s | Strong | ❌ Timeout, cold start |

Decision gate: Benchmark Qwen3.5-122B + GLM-5.1 on translation task. If GLM wins on speed/quality tradeoff, HELIUS layer handles routing. If Qwen wins, SRIDA layer handles execution.

Implementation Modes

Option A: Bot Relay (Discord-native)

Option B: Webhook + External Service

Recommended: Option A (Discord bot with message intent)

Channel/Thread Coverage

| Thread/Channel | Priority | Content Type |

|----------------|----------|--------------|

| #china-lab | P0 | Chinese dev work |

| #covenant-ops | P1 | Mixed team ops |

| #board-room | P1 | Governance (if Chinese presence) |

| #concave-live | P2 | Raw A2A (HELIUS↔SRIDA in EN) |

| Project threads | P1 | Active builds with China team |

Scope Boundaries

IN SCOPE:

NOT IN SCOPE (now):

Benchmark Results (2026-04-20, C80-grade, 75 calls)

| Model | Avg Latency | Std Dev | Avg Quality | Success Rate |

|-------|-------------|---------|-------------|--------------|

| Qwen-3.5-122B | 1.74s | 2.17s | 0.87 | 25/25 (100%) |

| GLM-5.1 | 18.89s | 15.26s | 0.89 | 24/25 (96%) |

| Kimi-K2.5 | FAILED | — | — | 0/25 (NoneType parse) |

Winner: Qwen-3.5-122B. 10.8x faster, 100% success, quality delta = 2% (negligible for operational use). GLM's 18.89s + 15.26s variance = too unpredictable for real-time chat. Kimi needs output format handler (thinking tag stripping) before viable.

Kimi Degradation Analysis (Paper 046 Correction): Per Paper 046 findings, Kimi K2.5 exhibits native-language superposition:

Translation benchmark failure IS NOT a bug. It is architecture confirmation: Kimi is a Chinese-native model. Forcing EN output = operational degradation. Reserve Kimi for Chinese-native workflows (coding, reasoning). Route translation through Qwen (bilingual router) or GLM (bilingual with latency tradeoff).

Full results: /home/openclaw/data/benchmarks/translation-bench-results.json

Build Decision

SRIDA builds. Qwen-3.5-122B backbone. HELIUS routes.

Rationale:

Next Steps

1. ~~HELIUS: Run GLM-5.1 vs Qwen3.5-122B translation benchmark~~ DONE

2. ~~Decision: Who builds based on benchmark result~~ DONE: SRIDA

3. SRIDA: Bot scaffolding + Discord bot token + message intent

4. SRIDA: Wire Qwen-3.5-122B via NVIDIA NIM as translation backend

5. Integration: #china-lab first, expand to operational threads

6. DEFER: Kimi-K2.5 fix (add thinking tag handler for future round)

References


Concave principle: The interaction IS the product. Translation removes friction from interaction → enables covenant depth.