Prerequisites
- Node.js >= 22.12.0
- An LLM API key (Anthropic, OpenAI, or any OpenAI-compatible provider)
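You can confirm the Node.js requirement before installing. A minimal sketch, assuming a POSIX shell whose `sort` understands `-V` (GNU coreutils or busybox); the `version_ge` helper is illustrative, not part of Siclaw:

```shell
# True when version $1 is at least version $2 (version-aware comparison):
# sort -V puts the lowest version first, so if that lowest one is $2, then $1 >= $2.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Compare the local Node.js version against the documented minimum
required="22.12.0"
current="$(node --version 2>/dev/null | sed 's/^v//')"
if version_ge "$current" "$required"; then
  echo "Node.js $current OK (>= $required)"
else
  echo "Node.js ${current:-not found} is below the required $required" >&2
fi
```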
Install
Local Server (recommended for teams)
Start the server with `siclaw local`; the Portal Web UI is then served at http://localhost:3000.
On first launch, open the page and register the first user — that account becomes the admin. After that, new registrations require admin authentication.
CLI
From Source
Configure Your LLM
On first run, Siclaw launches a setup wizard to configure your LLM provider. If you started `siclaw local` first (recommended for teams), the wizard detects the running Portal and redirects provider setup to the Portal Web UI's Models page. One configuration in Portal serves every paired TUI, so there is no per-workstation settings.json to drift out of sync. You can also edit `.siclaw/config/settings.json` manually.
Siclaw supports any OpenAI-compatible API. See LLM Providers for Ollama, vLLM, Azure, and other setups.
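For a manual setup, an OpenAI-compatible entry in `.siclaw/config/settings.json` could look roughly like the sketch below. The guide does not show the actual schema, so treat every field name here as illustrative only; the base URL in the example points at a local Ollama instance, which exposes an OpenAI-compatible API under `/v1`:

```json
{
  "provider": "openai-compatible",
  "baseUrl": "http://localhost:11434/v1",
  "apiKey": "YOUR_API_KEY",
  "model": "llama3.1"
}
```

Check the LLM Providers page for the exact keys your Siclaw version expects before relying on this shape.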
Run Your First Investigation
Describe an issue, then start a structured investigation (/dp or Ctrl+I): the agent triages, proposes hypotheses for your review, validates them in parallel after your confirmation, and produces a structured report with root cause, confidence score, and remediation steps.
Investigation traces are saved to .siclaw/traces/ (relative to where Siclaw was launched).
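As a rough mental model of the output described above, a finished report carries the root cause, a confidence score, the evaluated hypotheses, and remediation steps. The JSON below is purely illustrative; neither the field names nor the trace file format are specified in this guide:

```json
{
  "rootCause": "kubelet disk pressure evicted the pod",
  "confidence": 0.85,
  "hypotheses": [
    { "claim": "Container was OOM-killed", "validated": false },
    { "claim": "Node disk pressure triggered eviction", "validated": true }
  ],
  "remediation": [
    "Raise the pod's ephemeral-storage limit",
    "Clean up the node's image cache"
  ]
}
```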
Add Cluster Access
To investigate Kubernetes issues, import a kubeconfig into Siclaw:
- Standalone TUI (no local Portal): run /setup inside the session
- Local Server, or a TUI paired with one: use Clusters / Hosts in the Web UI; paired TUIs pick up the imports on the next launch
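Whichever import path you use, Siclaw needs a kubeconfig for the target cluster. A minimal single-cluster kubeconfig follows the standard Kubernetes shape below (all values are placeholders). If your existing kubeconfig references external cert files or spans many clusters, `kubectl config view --minify --flatten` produces a self-contained file for just the active context:

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: prod
    cluster:
      server: https://203.0.113.10:6443
      certificate-authority-data: <base64-encoded CA cert>
users:
  - name: prod-admin
    user:
      token: <service-account token>
contexts:
  - name: prod
    context:
      cluster: prod
      user: prod-admin
current-context: prod
```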
What’s Next?
- Core Concepts — understand the investigation engine
- Your First Investigation — walk through a complete diagnosis
- LLM Providers — detailed provider configuration
- Deploy for your team — production multi-user deployment