⚠️ This wiki was generated by Manifold on 11/22/2025 and may be obsolete.

Manifold – Architecture & Module Map

1. What this system does

2. High-level architecture

Manifold centers on a shared internal/agent.Engine that orchestrates LLM + tool loops. Each runtime (CLI, HTTP, Kafka) loads the same configuration, observability, LLM providers, and tool registry, but swaps how intent is injected and how output is surfaced. The big picture:

```mermaid
graph TD
    Config[config.Load + env files]
    Observ[observability.Init]
    LLM[internal/llm Provider]
    Tools[internal/tools Registry]
    Agent[internal/agent.Engine]
    Persistence[databases.Manager]
    Specialists[internal/specialists Registry]
    MCP[internal/mcpclient Manager]
    WARPP[internal/warpp Runner]
    RAG[internal/rag]
    Playground[internal/playground]
    Projects[internal/projects]
    CLI[cmd/agent]
    Agentd[cmd/agentd → internal/agentd]
    Orchestrator[cmd/orchestrator]
    Whisper[cmd/whisper-go]
    UI[web/agentd-ui]
    Kafka[(Kafka topics/commands)]
    Clients[(External LLM / MCP Servers)]
    DB[(Postgres, Redis, Vector stores)]

    Config --> Persistence
    Config --> Observ
    Config --> LLM
    Config --> Tools
    Persistence --> Tools
    Specials[Specialists DB/YAML] --> Specialists
    Specialists --> Tools
    MCP --> Tools
    Tools --> Agent
    LLM --> Agent
    WARPP --> Agent
    RAG --> Tools
    Playground --> Agentd
    Projects --> Agentd
    Agentd --> Agent
    CLI --> Agent
    Orchestrator --> WARPP
    Orchestrator --> Agent
    Orchestrator --> Kafka
    Kafka --> Orchestrator
    Whisper --> Whisper_CLI
    UI --> Agentd
    Agentd --> UI
    Clients --> LLM
    Clients --> MCP
    DB --> Persistence
```

Every major service eventually delegates to the same internal/agent.Engine and shared tool stack; they differ only in their intent sources (CLI args vs. HTTP requests vs. Kafka messages) and output sinks (stdout, HTTP responses, Kafka responses).
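The fan-in described above amounts to a small adapter pattern: each runtime supplies an intent source and an output sink around one shared engine. A minimal sketch, where the types are illustrative stand-ins and not the actual internal/agent API:

```go
package main

import "fmt"

// Engine stands in for internal/agent.Engine: one Run loop shared by all runtimes.
// The names here are illustrative; the real API lives in internal/agent.
type Engine struct{}

func (e *Engine) Run(intent string) string { return "handled: " + intent }

// Runtime adapters differ only in where intent comes from and where output goes.
type Runtime struct {
	Source func() string       // CLI args, HTTP body, or Kafka message
	Sink   func(result string) // stdout, HTTP response, or Kafka topic
}

func (r Runtime) Serve(e *Engine) {
	r.Sink(e.Run(r.Source()))
}

func main() {
	eng := &Engine{}
	cli := Runtime{
		Source: func() string { return "summarize README" },
		Sink:   func(s string) { fmt.Println(s) },
	}
	cli.Serve(eng) // prints "handled: summarize README"
}
```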

3. Modules and packages

```mermaid
graph TD
    AgentEngine[internal/agent Engine]
    ToolsRegistry[internal/tools Registry]
    LLM[internal/llm Providers]
    Specialists[internal/specialists Registry]
    MCP[internal/mcpclient]
    Persistence[internal/persistence]
    WARPP[internal/warpp Runner]
    RAG[internal/rag Services]
    Playground[internal/playground]
    Projects[internal/projects]
    Auth[internal/auth]
    API[internal/agentd Handlers]
    CLI[cmd/agent]
    Agentd[cmd/agentd]
    Orchestrator[cmd/orchestrator]
    Whisper[cmd/whisper-go]
    WebUI[web/agentd-ui]
    Docs[docs + examples + deploy]
    Config[internal/config]

    Config --> ToolsRegistry
    Config --> LLM
    Config --> Persistence
    Persistence --> Specialists
    Specialists --> ToolsRegistry
    ToolsRegistry --> AgentEngine
    LLM --> AgentEngine
    WARPP --> AgentEngine
    RAG --> ToolsRegistry
    Playground --> API
    Projects --> API
    Auth --> API
    API --> AgentEngine
    CLI --> AgentEngine
    Agentd --> API
    Orchestrator --> WARPP
    WebUI --> Agentd
    Docs --> WebUI
    Whisper --> STT[STT]
```

4. Entrypoints & startup flow

```mermaid
flowchart TB
    subgraph Common [Common Bootstrap]
        Cfg[Config Load]
        Obs[Observability/OTel]
        Prov[LLM Provider]
    end
    subgraph CLI [cmd/agent]
        Seed[Seed Specialists]
        RegTools1[Register Tools]
        WarppCheck{WARPP Mode?}
        Eng[Agent Engine]
    end
    subgraph Daemon [cmd/agentd]
        DB[DB Manager]
        Svcs[Services: Auth, RAG, Proj]
        Rtr[Routes]
        Serv[Listen :32180]
    end
    subgraph Orch [cmd/orchestrator]
        Infra[Redis & Kafka]
        Wrap[Warpp Adapter]
        Loop[Consumer Loop]
    end

    Cfg --> Obs --> Prov
    Prov --> Seed --> RegTools1 --> WarppCheck
    WarppCheck -- Yes --> Eng
    WarppCheck -- No --> Eng
    Prov --> DB --> Svcs --> Rtr --> Serv
    Prov --> Infra --> Wrap --> Loop
```

Manifold – Runtime & Environments

1. Runtime overview

Manifold's main runtime is a long-lived service (cmd/agentd) that exposes a web UI on port 32180 and orchestrates agent workflows (agents, orchestrated prompts, workflows). The platform ships supporting CLI utilities (e.g., scripts/dev.sh for fmt/vet/build checks, the scripts/pre-commit hook) plus additional command binaries under cmd/orchestrator, cmd/agent, etc. agentd runs as a Go server backed by the configuration loader (internal/config/loader.go), which merges .env, config.yaml, and defaults before wiring the OpenAI/Google/Anthropic providers, databases, and metrics exporters.

```mermaid
flowchart LR
    User((User)) -->|HTTP:32180| WebUI[Web UI / Node]
    WebUI -->|API Calls| AgentD[AgentD Runtime<br/>Go Server]
    subgraph Core [Core Components]
        AgentD -->|Load| Config[Config Loader]
        AgentD -->|Orchestrate| Workflows[Workflows & Agents]
        AgentD -->|Execute| CLI[CLI Utilities]
    end
    subgraph Providers [External Providers]
        Workflows -->|LLM API| OpenAI[OpenAI / Anthropic]
        Workflows -->|Tools| MCP[MCP Integrations]
    end
```

2. Environments

3. Configuration

```mermaid
flowchart TD
    subgraph Sources [Configuration Sources]
        Defaults[Hardcoded Defaults]
        YAML[config.yaml]
        EnvVars[.env / Environment Variables]
    end

    Defaults -->|Base| Loader[Config Loader]
    YAML -->|Overlay| Loader
    EnvVars -->|Override High Priority| Loader
    Loader -->|Resolve Paths & Secrets| Validation{Valid?}
    Validation -- Yes --> FinalConfig[Final Runtime Configuration]
    Validation -- No --> Panic[Panic / Exit]
    FinalConfig -->|Configures| Database
    FinalConfig -->|Configures| LLM_Provider
    FinalConfig -->|Configures| Feature_Flags
```
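The precedence implied by the diagram (hardcoded defaults < config.yaml < environment variables) is an overlay merge: later sources win key by key. A minimal sketch of that behavior; the helper and keys are hypothetical, not the actual internal/config API:

```go
package main

import "fmt"

// mergeConfig overlays maps left to right: later layers win.
// This mirrors the loader's defaults → config.yaml → .env/env precedence.
func mergeConfig(layers ...map[string]string) map[string]string {
	out := map[string]string{}
	for _, layer := range layers {
		for k, v := range layer {
			out[k] = v
		}
	}
	return out
}

func main() {
	defaults := map[string]string{"port": "32180", "provider": "openai"}
	yaml := map[string]string{"provider": "anthropic"} // overlay
	env := map[string]string{"port": "8080"}           // highest priority

	cfg := mergeConfig(defaults, yaml, env)
	fmt.Println(cfg["port"], cfg["provider"]) // env wins on port, yaml on provider
}
```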

4. How to run locally

Prerequisites

  1. Docker – required to build containers and start Postgres, Redis, and the optional ClickHouse/OTel services.
  2. Node 20 – required for the UI (web/agentd-ui). Use `nvm use 20`.
  3. pnpm – install globally or via your package manager to install frontend dependencies.
  4. Chrome/Chromium – needed for web tools (Playwright MCP).
  5. GNU Make + Go toolchain – scripts/dev.sh, make fmt, and go build run from the repo root.

Steps

```bash
cd ~/path/to/manifold                 # repo root
cp example.env .env                   # create env file (contains OPENAI_API_KEY placeholder)
cp config.yaml.example config.yaml    # copy default configuration
# edit .env and config.yaml to provide OPENAI_API_KEY, WORKDIR, and any overrides
# e.g., replace OPENAI_API_KEY in .env with a real key; set WORKDIR to the repo root

nvm use 20
cd web/agentd-ui
pnpm install
cd ../..                              # back to repo root

touch manifold.log                    # optional; the runtime writes to this log
docker compose up -d manifold pg-manifold
```
```mermaid
journey
    title Local Development Setup
    section Configuration
        Copy example.env: 5: Developer
        Copy config.yaml.example: 5: Developer
        Set API Keys & Workdir: 3: Developer
    section Dependencies
        Install Node (nvm): 5: System
        Install FrontEnd (pnpm): 4: System
    section Runtime
        Docker Compose Up: 5: Docker
        Access localhost:32180: 5: User
```

Manifold – Data & Storage

1. Data stores overview

```mermaid
flowchart TB
    subgraph Configuration
        Config[config.yaml] -->|Load| Loader[Config Loader]
        Loader -->|DBConfig| Factory[internal/persistence/databases/factory.go]
    end
    subgraph "Persistence Layer"
        Factory -->|Check DSN| Decision{Has DSN?}
        Decision -- Yes --> Postgres[Postgres Implementation]
        Decision -- No --> Memory[In-Memory Implementation]
    end
    subgraph "Domain Subsystems"
        Postgres -->|Main Store| StoreMgr[Manager]
        Memory -->|Fallback| StoreMgr
        StoreMgr --> Chat[Chat History]
        StoreMgr --> Auth[Auth & RBAC]
        StoreMgr --> Spec[Specialists]
        StoreMgr --> Search[Search/Vector/Graph]
    end

    classDef db fill:#e1f5fe,stroke:#01579b,stroke-width:2px;
    classDef mem fill:#fff3e0,stroke:#ef6c00,stroke-width:2px;
    class Postgres db;
    class Memory mem;
```
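The DSN-driven branch in the diagram is a small factory: a non-empty DSN selects the Postgres implementation, otherwise the in-memory fallback is used. A sketch under those assumptions; the types are illustrative stand-ins for the real internal/persistence interfaces:

```go
package main

import "fmt"

// Store is a stand-in for the persistence interface; names are illustrative.
type Store interface{ Kind() string }

type postgresStore struct{}

func (postgresStore) Kind() string { return "postgres" }

type memoryStore struct{}

func (memoryStore) Kind() string { return "memory" }

// newStore mirrors the factory decision above:
// a non-empty DSN selects Postgres, otherwise fall back to in-memory.
func newStore(dsn string) Store {
	if dsn != "" {
		return postgresStore{}
	}
	return memoryStore{}
}

func main() {
	fmt.Println(newStore("postgres://localhost/manifold").Kind()) // postgres
	fmt.Println(newStore("").Kind())                              // memory
}
```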

2. Core data models / entities

Authentication / RBAC (internal/auth/store.go)

| Model | Key fields | Relationships |
| --- | --- | --- |
| `users` | `id BIGSERIAL`, unique `email`, `provider`, `subject`, `name`, `picture`, `created_at`, `updated_at` | Linked from `user_roles.user_id` (RBAC) and `sessions.user_id` (session ownership). |
| `roles` | `id BIGSERIAL`, unique `name`, `description` | `user_roles.role_id` joins users to roles. |
| `user_roles` | Composite PK (`user_id`, `role_id`) | Many-to-many between users and roles, cascading on delete. |
| `sessions` | `id TEXT` PK, `user_id` FK, `expires_at`, `id_token`, `created_at` | Stores OIDC/OAuth sessions; `user_id` ties sessions to their owner and enables logout flows. |

Playgrounds (internal/persistence/databases/playground_store.go)

| Table | Notes | Relationships |
| --- | --- | --- |
| `playground_prompts` | Stores `registry.Prompt` JSON payloads, indexed by `user_id`. | Versions stored in `playground_prompt_versions`. |
| `playground_prompt_versions` | History per prompt with `created_at`, per-user filtering. | `prompt_id` references `playground_prompts.id` logically (no FK). |
| `playground_datasets` | Dataset metadata (JSONB) per `user_id`. | Used as the parent for snapshots. |
| `playground_snapshots` | Composite PK (`dataset_id`, `id`) with metadata payload. | Each snapshot owns rows (`playground_rows`). |
| `playground_rows` | Rows for each snapshot keyed by (`dataset_id`, `snapshot_id`, `row_id`) to guarantee deterministic uniqueness. | Deleted by code before a dataset is removed. |
| `playground_experiments` | Experiment specs stored as JSONB per user. | Linked to runs. |
| `playground_runs` | Run payloads (JSONB); the `experiment_id` link is enforced in code, not by an FK. | Run results reference `run_id`. |
| `playground_run_results` | Results persisted via batched inserts (uses `pgx.Batch`). | Owned by runs; deleted manually when experiments are removed. |

Entity Relationships

```mermaid
erDiagram
    USERS ||--o{ USER_ROLES : "assigned"
    ROLES ||--o{ USER_ROLES : "defines"
    USERS ||--o{ SESSIONS : "owns"
    USERS ||--o{ SPECIALISTS : "registers"
    USERS ||--o{ MCP_SERVERS : "configures"
    USERS ||--o{ WARPP_WORKFLOWS : "creates"
    USERS ||--o{ PLAYGROUND_PROMPTS : "experiments"
    CHAT_SESSIONS ||--o{ CHAT_MESSAGES : "log"
    PLAYGROUND_DATASETS ||--o{ PLAYGROUND_SNAPSHOTS : "version"
    PLAYGROUND_SNAPSHOTS ||--o{ PLAYGROUND_ROWS : "data"
    PLAYGROUND_EXPERIMENTS ||--o{ PLAYGROUND_RUNS : "execution"
    PLAYGROUND_RUNS ||--o{ PLAYGROUND_RUN_RESULTS : "result"

    USERS {
        bigint id PK
        string email UK
    }
    SPECIALISTS {
        bigint id PK
        bigint user_id FK
        string name
    }
    CHAT_SESSIONS {
        uuid id PK
        bigint user_id FK
    }
    CHAT_MESSAGES {
        uuid id PK
        uuid session_id FK
    }
```

3. Migrations and schema management

```mermaid
sequenceDiagram
    participant App as Application Main
    participant Factory as Database Factory
    participant Store as Postgres Store
    participant DB as PostgreSQL

    App->>Factory: NewManager(Config)
    Factory->>Store: Init() / InitSchema()
    rect rgb(240, 248, 255)
        Note over Store, DB: Self-Bootstrapping (Idempotent)
        Store->>DB: CREATE TABLE IF NOT EXISTS...
        Store->>DB: ALTER TABLE ... ADD COLUMN...
        Store->>DB: CREATE EXTENSION IF NOT EXISTS vector
        Store->>DB: CREATE INDEX IF NOT EXISTS...
    end
    Store-->>Factory: Ready
    Factory-->>App: Persistence Manager (Ready)
    App->>App: Start HTTP Listeners
```
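Because every bootstrap statement is guarded (`IF NOT EXISTS`), running the schema initialization on each startup is a no-op once the objects exist. A sketch of that pattern; the table and index names here are hypothetical, not copied from the real store:

```go
package main

import (
	"fmt"
	"strings"
)

// bootstrapSQL returns idempotent DDL: safe to execute on every startup
// because each statement does nothing once its object already exists.
// Table/index names are illustrative only.
func bootstrapSQL() []string {
	return []string{
		`CREATE EXTENSION IF NOT EXISTS vector`,
		`CREATE TABLE IF NOT EXISTS sessions (
			id TEXT PRIMARY KEY,
			user_id BIGINT NOT NULL,
			expires_at TIMESTAMPTZ NOT NULL
		)`,
		`CREATE INDEX IF NOT EXISTS sessions_user_idx ON sessions (user_id)`,
	}
}

func main() {
	// In the real store each statement would run against the Postgres pool;
	// here we only show that every statement carries the idempotency guard.
	for _, stmt := range bootstrapSQL() {
		fmt.Println(strings.Contains(stmt, "IF NOT EXISTS")) // true for each
	}
}
```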

Manifold – Interfaces & Workflows

1. Inbound interfaces overview

```mermaid
flowchart TB
    subgraph Entrypoints [Application Entrypoints]
        AgentD[cmd/agentd<br/>HTTP Server]
        Orch[cmd/orchestrator<br/>Kafka Consumer]
        CLI[cmd/agent<br/>CLI Tool]
    end
    subgraph Core [Internal Core Packages]
        Router[internal/agentd/router]
        Handlers[HTTP Handlers<br/>chat/projects/warpp]
        Engine[internal/agent.Engine]
        OrchLib[internal/orchestrator]
        Tools[internal/tools.Registry]
    end
    subgraph Infra [Infrastructure Services]
        Kafka((Kafka))
        Redis[(Redis Dedupe)]
        DB[(Persistence Stores)]
        LLM[LLM Providers]
    end

    AgentD --> Router
    Router --> Handlers
    Handlers --> Engine
    Handlers --> DB
    CLI --> Engine
    Orch --> Kafka
    Orch --> OrchLib
    OrchLib --> Redis
    OrchLib --> Tools
    Engine --> Tools
    Engine --> LLM
    Tools --> LLM
```

2. HTTP / RPC APIs

3. Key workflows

Agentd HTTP request → agent engine run

Triggered via POST /agent/run. The handler extracts the request context, builds the prompt/history attributes, and calls agent.Engine.Run, which drives the LLM reasoning loop.

```mermaid
sequenceDiagram
    autonumber
    actor User as Client/User
    participant Handler as HTTP Handler
    participant Auth as Auth/Session
    participant Engine as Agent Engine
    participant Registry as Tool Registry
    participant LLM as LLM Provider

    User->>Handler: POST /agent/run
    Handler->>Auth: Validate User & Session
    Auth-->>Handler: Session Context
    Handler->>Engine: Engine.Run(prompt, history)
    loop Reasoning Loop
        Engine->>LLM: Generate Completion
        LLM-->>Engine: Response (Content or ToolCall)
        opt Tool Invocation
            Engine->>Registry: Dispatch(ToolName, Args)
            Registry-->>Engine: JSON Result
        end
        Engine->>User: Stream SSE (Delta/ToolEvent)
    end
    Engine->>Handler: Final Conversation State
    Handler->>User: 200 OK (Complete)
```
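A stripped-down sketch of the handler-to-engine hand-off, with auth, SSE streaming, and the real request schema omitted; the field and function names below are hypothetical, not the actual internal/agentd API:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
	"strings"
)

// runRequest stands in for the POST /agent/run payload; fields are illustrative.
type runRequest struct {
	Prompt  string   `json:"prompt"`
	History []string `json:"history"`
}

// engineRun stands in for agent.Engine.Run's reasoning loop.
func engineRun(prompt string, history []string) string {
	return fmt.Sprintf("answered %q with %d history turns", prompt, len(history))
}

// handleAgentRun decodes the request and delegates to the engine,
// as in the sequence diagram (auth/session validation omitted).
func handleAgentRun(w http.ResponseWriter, r *http.Request) {
	var req runRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	fmt.Fprint(w, engineRun(req.Prompt, req.History))
}

func main() {
	// Exercise the handler without a real server.
	body := strings.NewReader(`{"prompt":"hello","history":["hi"]}`)
	req := httptest.NewRequest(http.MethodPost, "/agent/run", body)
	rec := httptest.NewRecorder()
	handleAgentRun(rec, req)
	fmt.Println(rec.Body.String()) // answered "hello" with 1 history turns
}
```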

Kafka command → WARPP workflow execution

Messages posted to the command topic are consumed by the orchestrator, deduped via Redis, and executed via the WARPP runner.

```mermaid
stateDiagram-v2
    direction LR
    state "Kafka Ingest" as Ingest
    state "Deduplication" as Dedup
    state "WARPP Execution" as WARPP {
        [*] --> Personalize
        Personalize --> Scheduler
        Scheduler --> ExecuteTool
        ExecuteTool --> MergeDelta
        MergeDelta --> CheckDone
        CheckDone --> Scheduler : Next Step
        CheckDone --> [*] : Complete
    }
    state "Result Handling" as Result

    [*] --> Ingest : Message Arrives
    Ingest --> Dedup : Check CorrelationID
    Dedup --> WARPP : ID New
    Dedup --> [*] : ID Exists (Ignore)
    WARPP --> Result : Workflow Done
    Result --> PublishSuccess : Success
    Result --> PublishDLQ : Retry Exhausted
    PublishSuccess --> [*]
    PublishDLQ --> [*]
```
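The dedupe step is essentially a set-if-absent check on the correlation ID (what Redis offers as `SET key value NX`, typically with a TTL). A minimal sketch using an in-memory stand-in for Redis; the type and method names are illustrative:

```go
package main

import (
	"fmt"
	"sync"
)

// dedup emulates the orchestrator's Redis check (SET correlationID NX).
// A real deployment would also attach a TTL so old IDs eventually expire.
type dedup struct {
	mu   sync.Mutex
	seen map[string]bool
}

// firstSeen reports true only the first time an ID is observed,
// matching the "ID New" vs "ID Exists (Ignore)" branch in the diagram.
func (d *dedup) firstSeen(id string) bool {
	d.mu.Lock()
	defer d.mu.Unlock()
	if d.seen[id] {
		return false
	}
	d.seen[id] = true
	return true
}

func main() {
	d := &dedup{seen: map[string]bool{}}
	fmt.Println(d.firstSeen("corr-42")) // true: run the workflow
	fmt.Println(d.firstSeen("corr-42")) // false: duplicate, ignore
}
```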