Seahorse Docs

Cognitive Engine

Multi-agent analysis with self-improving perception.

The Cognitive Loop

Each cognitive cycle runs an 8-phase loop for up to 5 iterations. The loop stops early when health metrics converge, i.e. stabilize between passes.

1. Scanning

Delta detection: what changed since the last scan? Works across RAG connections and uploaded PDFs.

2. Analyzing (Structural + OmniQ Lenses)

Runs the orphans, obsolescence, contradictions, and gaps lenses. On Enterprise, 1-2 OmniQ perception lenses are also selected adaptively.

3. Auditing

The Auditor samples 30% of observations and validates them against source documents using adversarial prompts. Suspect observations are filtered from metrics. Lens trust scores are updated.

4. Evaluating

Compute health metrics: orphan ratio, linking density, contradiction count. Only audited observations count.
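As a rough sketch, the structural metrics in this phase can be computed from the link graph alone. The `Doc` shape and function below are illustrative, not Seahorse's internal representation:

```python
from dataclasses import dataclass, field

@dataclass
class Doc:
    id: str
    links: set = field(default_factory=set)  # ids this document links to

def health_metrics(docs):
    """Orphan ratio and linking density over a set of audited documents."""
    n = len(docs)
    if n == 0:
        return {"orphan_ratio": 0.0, "linking_density": 0.0}
    # Collect every id that some document links to.
    targets = set().union(*(d.links for d in docs))
    # An orphan has no outgoing links and is never linked to.
    orphans = [d for d in docs if not d.links and d.id not in targets]
    return {
        "orphan_ratio": len(orphans) / n,
        "linking_density": sum(len(d.links) for d in docs) / n,
    }
```

The contradiction count comes from the contradictions lens rather than the link graph, so it is omitted from this sketch.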

5. Programming

Generate or update perception protocols via Claude Sonnet.

6. Executing

Apply write-back actions (if authorized): link, archive, rewrite, merge.

7. SuperChunking

Build AI summaries of document clusters for Smart Query.

8. Cartographing

Generate knowledge protocol if evaluation delta exceeds 15%.

Convergence Check

If metrics are stable, stop. Otherwise, repeat from phase 1.

Single-document mode: When analyzing from the Document Editor, the cognitive engine runs the same 8-phase loop but scoped to a single document. Observations appear as inline annotations in the editor.
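The loop described above can be sketched as follows. The convergence threshold and phase callables here are placeholders, not the engine's actual internals:

```python
MAX_ITERATIONS = 5  # hard cap on the cognitive loop
EPSILON = 0.05      # assumed stability threshold; the real value is internal

def converged(prev, curr, eps=EPSILON):
    """Metrics are stable when every metric moved less than eps between passes."""
    if prev is None:
        return False
    return all(abs(curr[k] - prev[k]) < eps for k in curr)

def run_cycle(phases, evaluate):
    """Run the phase list each iteration until metrics stabilize or the cap hits."""
    prev = None
    for _ in range(MAX_ITERATIONS):
        for phase in phases:
            phase()
        curr = evaluate()
        if converged(prev, curr):
            break
        prev = curr
    return curr
```

With a constant `evaluate`, the loop stops after the second pass, since the second evaluation matches the first.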

The 9 Agents

Each agent has a specific role in the cognitive loop. Some use LLM calls (with associated cost), others are pure logic.

| Agent | Role | LLM Cost |
|---|---|---|
| Scanner | Reads RAG and uploaded PDFs, detects what changed since the last scan via `watch_changes()` | None |
| Analyst | Runs documents through perception lenses and persists observations | Per lens |
| Auditor | Samples 30% of observations and validates against source documents using adversarial prompts. Marks suspect observations and updates lens trust scores. | ~$0.02/cycle |
| Evaluator | Computes structural health metrics using only audited observations. Optional Opus-as-judge scoring. | None (or Opus) |
| Programmer | Generates perception-rule protocols via Sonnet (metaprogramming) | ~$0.01/cycle |
| Executor | Applies write-back actions (link, archive, rewrite, merge) with snapshot safety | None (or LLM for rewrites) |
| Cartographer | Generates knowledge protocol and maps coverage gaps (runs if delta > 15%) | ~$0.01/cycle |
| MetaAgent | Orchestrates the loop, checks convergence, selects OmniQ lenses, handles cancellation | None |

The 8 Lenses

Lenses are the perception layer of the cognitive engine. Each lens looks at your knowledge base from a different angle. They are divided into two categories: structural (logic-based or lightweight LLM) and OmniQ (deep perception, Enterprise only).

Structural Lenses

| Lens | Type | What It Detects | Tier |
|---|---|---|---|
| Orphans | Pure logic | Isolated documents with no links or relationships to other documents | Free+ |
| Obsolescence | Pure logic | Documents not updated in over 180 days | Free+ |
| Contradictions | LLM (Haiku) | Pairs of documents that say conflicting things. Capped at 50 pairs per cycle. | Pro+ |
| Gaps | LLM (Haiku) | Topics your knowledge base should cover but does not. Single LLM call per cycle. | Pro+ |
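The pure-logic lenses need no model calls. The obsolescence check, for instance, reduces to date arithmetic; this sketch assumes a simple dict shape for documents:

```python
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=180)  # default staleness window from the table above

def obsolescence_lens(docs, now=None, stale_after=STALE_AFTER):
    """Return ids of documents not updated within the staleness window."""
    now = now or datetime.now()
    return [d["id"] for d in docs if now - d["updated_at"] > stale_after]
```

A protocol can narrow the window for specific document classes, as the 30-day changelog example later on this page does.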

OmniQ Lenses (Enterprise)

OmniQ lenses use Claude Sonnet for deep perception analysis. The MetaAgent adaptively selects 1-2 lenses per iteration based on the current state of your RAG. They never all run at once.

| Lens | Perception | When Selected |
|---|---|---|
| Monade | Unity and fragmentation. Finds concepts artificially split across too many documents. | High orphan ratio (> 0.3) |
| Symbiote | Ecosystem health. Assesses whether document clusters genuinely work together. | Low linking density (< 0.5), few orphans |
| Architect | Structural patterns. Detects missing foundation documents and inverted hierarchies. | First iteration with no other selection |
| Empath | Tone and accessibility. Catches tone mismatches and inaccessible language. | System converging, no other triggers |

Each OmniQ observation includes a confidence score (0.0 to 1.0). The dashboard displays confidence as colored bars next to observation badges.
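Read literally, the trigger table above can be approximated as a selection function; this is an illustrative reading of the triggers, not the MetaAgent's shipped heuristic:

```python
def select_omniq_lenses(metrics, iteration, converging=False):
    """Pick 1-2 OmniQ lenses per iteration from the trigger table."""
    selected = []
    if metrics["orphan_ratio"] > 0.3:
        selected.append("monade")      # high fragmentation
    if metrics["linking_density"] < 0.5 and metrics["orphan_ratio"] <= 0.3:
        selected.append("symbiote")    # weak clustering, few orphans
    if not selected and iteration == 1:
        selected.append("architect")   # first pass, nothing else triggered
    if not selected and converging:
        selected.append("empath")      # stable system, polish tone
    return selected[:2]  # never all lenses at once
```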

Lens Trust Scores

Every lens accumulates a trust score based on its audit track record. When the Auditor validates observations and marks some as suspect, the originating lens's trust score adjusts. Lenses with consistently accurate observations earn higher trust; lenses that produce frequent false positives see their trust score decline. Trust scores are visible in the dashboard on the Intelligence page and returned in the API via /v1/cognitive/health.
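One way to realize this behavior is an exponential moving average toward the latest audit accuracy. The EMA form and the `alpha` weight are assumptions for illustration; the docs only state that trust rises with accuracy and falls with false positives:

```python
def update_trust(score, audited, suspect, alpha=0.2):
    """Nudge a lens's trust score toward its most recent audit accuracy."""
    if audited == 0:
        return score  # nothing sampled from this lens; trust unchanged
    accuracy = 1.0 - suspect / audited
    return (1 - alpha) * score + alpha * accuracy
```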

Protocols (Metaprogramming)

Protocols are perception rules, not actions. They are injected into lens prompts to modify how the lens interprets documents in subsequent iterations. This is metaprogramming: the system reprograms its own perception based on what it learns.

The Programmer agent generates protocols via Claude Sonnet with strict constraints:

  • Only valid lens names (orphans, contradictions, gaps, obsolescence, monade, symbiote, architect, empath)
  • No action verbs (protocols do not perform actions)
  • Each protocol targets one or more specific lenses
  • Protocols have version numbers and effectiveness scores
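A generated protocol might be checked against those constraints roughly as follows; the field names and the action-verb blocklist are assumptions for illustration:

```python
VALID_LENSES = {"orphans", "contradictions", "gaps", "obsolescence",
                "monade", "symbiote", "architect", "empath"}
ACTION_VERBS = {"delete", "merge", "archive", "rewrite", "link"}  # assumed blocklist

def validate_protocol(protocol):
    """Reject protocols that target unknown lenses, contain action verbs,
    or lack version/effectiveness fields."""
    if not protocol["lenses"] or not set(protocol["lenses"]) <= VALID_LENSES:
        return False
    words = (w.strip(".,") for w in protocol["instruction"].lower().split())
    if any(w in ACTION_VERBS for w in words):
        return False
    return "version" in protocol and "effectiveness" in protocol
```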

Example Protocol

Name: "Changelog sensitivity"

Applies to: obsolescence

Instruction: "Documents containing 'changelog' or 'release notes' in the title should be considered stale after 30 days instead of the default 180 days, as they are time-sensitive by nature."

Convergence

The MetaAgent orchestrates the cognitive loop and decides when to stop. After each iteration, it compares the current evaluation metrics with the previous iteration. When the delta between passes is small enough, the system has converged and the cycle ends.

Key convergence factors:

  • Health metrics stability: orphan ratio, linking density, and contradiction count stop changing significantly
  • Observation count plateau: new findings stop appearing
  • Maximum 5 iterations: hard cap to prevent runaway cycles
  • Cancellation: you can stop a running cycle at any time via the dashboard or API (DELETE /v1/cognitive/cycle/{job_id})

Knowledge Protocol (Cartographer)

The Cartographer agent runs after the Executor. It compares the current evaluation metrics with the previous cycle. If the delta exceeds 15%, it generates a knowledge protocol via Claude Sonnet: a structured map of what your RAG knows, what it is missing, and where the gaps are.
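The trigger can be sketched as a per-metric comparison. Reading "delta exceeds 15%" as a relative change per metric is an assumption; the docs do not specify the exact formula:

```python
def should_cartograph(prev_metrics, curr_metrics, threshold=0.15):
    """Fire the Cartographer when any metric moved more than 15% relative
    to the previous cycle."""
    if prev_metrics is None:
        return True  # no baseline yet: assume a map is needed
    for key, prev in prev_metrics.items():
        curr = curr_metrics.get(key, prev)
        base = abs(prev) or 1.0  # avoid division by zero on a zero baseline
        if abs(curr - prev) / base > threshold:
            return True
    return False
```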

The knowledge protocol is written into both the Seahorse metadata store and your RAG (marked with seahorse_managed=True so lenses skip it during analysis). Snapshots are created before each protocol overwrite.

You can view the knowledge protocol and its gaps in the dashboard under Intelligence (RAG Self-Assessment panel) and Home (What's Missing panel).