Prerequisites

  • Node.js >= 22.12.0
  • kubectl configured with access to a Kubernetes cluster
  • An LLM API key (Anthropic, OpenAI, or any OpenAI-compatible provider)
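If you want to confirm the Node.js requirement programmatically, a small version check can be sketched as below. The `meetsMinimum` helper is illustrative and not part of Siclaw itself:

```typescript
// Check a semver string against Siclaw's Node.js minimum (22.12.0).
// `meetsMinimum` is a hypothetical helper, not a Siclaw API.
function meetsMinimum(version: string, minimum = "22.12.0"): boolean {
  const v = version.replace(/^v/, "").split(".").map(Number);
  const m = minimum.split(".").map(Number);
  for (let i = 0; i < 3; i++) {
    // First differing component decides the comparison.
    if ((v[i] ?? 0) !== (m[i] ?? 0)) return (v[i] ?? 0) > (m[i] ?? 0);
  }
  return true; // versions are equal, which satisfies the minimum
}

// process.version reports the running Node.js version, e.g. "v22.12.0".
console.log(meetsMinimum(process.version));
```

`node --version` at the shell gives the same answer without any code.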

Install

Option 1: npx (Fastest)

npx siclaw
This downloads and runs Siclaw in TUI (terminal) mode. No installation needed.

Option 2: npm global install

npm install -g siclaw
siclaw

Option 3: Clone and build

git clone https://github.com/scitix/siclaw.git
cd siclaw
npm ci && npm run build
npm run dev

Configure Your LLM

On first run, Siclaw will prompt you to configure an LLM provider. You can also supply the API key via an environment variable:
export SICLAW_LLM_API_KEY="sk-ant-..."
npx siclaw
Or configure manually in ~/.siclaw/config/settings.json:
{
  "providers": {
    "default": {
      "baseUrl": "https://api.anthropic.com/v1",
      "apiKey": "sk-ant-...",
      "api": "anthropic",
      "models": [{ "id": "claude-sonnet-4-20250514", "name": "Claude Sonnet 4" }]
    }
  }
}
Siclaw supports any OpenAI-compatible API. See LLM Providers for Ollama, vLLM, Azure, and other setups.

Run Your First Investigation

Start Siclaw and describe an issue:
$ npx siclaw

? What would you like to investigate?
> Pod CrashLoopBackOff in production cluster after deployment
Siclaw will:
  1. Gather context — cluster state, events, pod logs, recent deployments
  2. Generate hypotheses — ranked list of possible root causes
  3. Validate in parallel — up to 3 sub-agents independently test each hypothesis
  4. Conclude — structured report with root cause, confidence score, and remediation steps
The full report is saved to ~/.siclaw/reports/.
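The four steps above can be sketched as a small async pipeline. This is a hedged illustration of the flow, not Siclaw's actual implementation; the function names and types (`gather`, `hypothesize`, `validate`) are hypothetical:

```typescript
// Illustrative sketch of the investigation flow described above.
// All names here are assumptions; Siclaw's internal API may differ.
type Hypothesis = { cause: string; rank: number };
type Validation = { cause: string; confirmed: boolean };

async function investigate(
  issue: string,
  gather: (issue: string) => Promise<string>,
  hypothesize: (ctx: string) => Promise<Hypothesis[]>,
  validate: (h: Hypothesis) => Promise<Validation>,
): Promise<Validation[]> {
  const ctx = await gather(issue);           // 1. gather cluster context
  const hyps = await hypothesize(ctx);       // 2. ranked hypotheses
  const results: Validation[] = [];
  for (let i = 0; i < hyps.length; i += 3) { // 3. validate up to 3 in parallel
    results.push(...(await Promise.all(hyps.slice(i, i + 3).map(validate))));
  }
  return results;                            // 4. feeds the final report
}
```

Batching with `Promise.all` in groups of three mirrors the "up to 3 sub-agents" bound: each batch runs concurrently, and the next batch starts only after the previous one settles.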

What’s Next?