
Manual installation

Set up Claworc manually with Docker Compose or Helm without the installer script


Docker Compose

This option runs the Claworc control plane as a Docker container on a single machine. Agent instances are created as sibling containers via the Docker socket.

Prerequisites:

  • Docker Engine 20.10+ or Docker Desktop
  • Docker Compose v2
  1. Clone the repository

    ```sh
    git clone https://github.com/gluk-w/claworc.git
    cd claworc
    ```
  2. Create the data directory

    Claworc stores its SQLite database and SSH keys here. Use a path that persists across system restarts.

    ```sh
    mkdir -p ~/.claworc/data
    ```
  3. Start the services

    ```sh
    CLAWORC_DATA_DIR=~/.claworc/data docker compose up -d
    ```

    Or create a .env file in the repo root so you don’t need to pass the variable every time:

    ```sh
    echo "CLAWORC_DATA_DIR=$HOME/.claworc/data" > .env
    docker compose up -d
    ```
  4. Verify it’s running

    ```sh
    docker compose logs -f
    curl http://localhost:8000/health
    ```

    The dashboard is available at http://localhost:8000.

The docker-compose.yml reads these variables from the environment or .env:

| Variable | Description | Default |
| --- | --- | --- |
| `CLAWORC_DATA_DIR` | Host path for the database and SSH keys | (required) |

Additional Claworc settings (e.g., CLAWORC_AUTH_DISABLED, CLAWORC_RP_ID) can be passed as environment variables inside the docker-compose.yml environment: block. See Environment variables for the full list.
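For example, extra settings can be added under the service's `environment:` block. A sketch only — the service name `claworc` and the values shown are assumptions; check your docker-compose.yml for the actual service name:

```yaml
services:
  claworc:
    environment:
      CLAWORC_AUTH_DISABLED: "true"          # hypothetical values; see the
      CLAWORC_RP_ID: "claworc.example.com"   # Environment variables page
```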

Claworc needs access to the Docker socket to create and manage agent containers. The docker-compose.yml mounts it automatically:

```yaml
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
```

Verify the mount is present if agent creation fails:

```sh
docker inspect claworc-dashboard --format \
  '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{println}}{{end}}'
```

You should see /var/run/docker.sock -> /var/run/docker.sock.

Common operations:

```sh
docker compose logs -f                        # Stream logs
docker compose down                           # Stop
docker compose up -d                          # Start
docker compose pull && docker compose up -d   # Upgrade to latest image
docker compose down -v                        # Stop and delete volumes (destructive)
```
To uninstall:

```sh
docker compose down
# Remove agent containers
docker ps -a --filter "name=bot-" --format '{{.Names}}' | xargs -r docker rm -f
# Remove data (optional)
rm -rf ~/.claworc/data
```

Helm

This option deploys Claworc to a Kubernetes cluster using the Helm chart included in the repository.

Prerequisites:

  • Kubernetes 1.24+
  • kubectl configured with cluster access
  • Helm v3+
  • A StorageClass that supports ReadWriteOnce
  1. Clone the repository

    ```sh
    git clone https://github.com/gluk-w/claworc.git
    cd claworc
    ```
  2. Install the chart

    ```sh
    helm install claworc helm/ \
      --namespace claworc \
      --create-namespace
    ```

    If your kubeconfig is not at the default path:

    ```sh
    helm install claworc helm/ \
      --namespace claworc \
      --create-namespace \
      --kubeconfig /path/to/kubeconfig
    ```
  3. Verify the deployment

    ```sh
    kubectl get pods -n claworc
    kubectl logs -f deploy/claworc -n claworc
    ```

    Wait for the pod to reach Running state.

  4. Access the dashboard

    The chart exposes a NodePort service on port 30000 by default.

    ```sh
    # Get a node IP
    kubectl get nodes -o wide
    # Open http://<node-ip>:30000
    ```

    For local access without exposing a port:

    ```sh
    kubectl port-forward -n claworc svc/claworc 8000:8001
    # Open http://localhost:8000
    ```

Create a custom-values.yaml to override defaults:

```yaml
config:
  dataPath: /app/data      # Path inside the pod for DB and SSH keys
  k8sNamespace: claworc    # Namespace where agent instances are created

service:
  type: NodePort
  port: 8001
  nodePort: 30000          # External port for the dashboard

persistence:
  enabled: true
  size: 1Gi
  storageClass: ""         # Empty = use the cluster default StorageClass
  accessMode: ReadWriteOnce

resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 512Mi

rbac:
  create: true             # Creates ServiceAccount, Role, RoleBinding
```

Apply your values:

```sh
helm install claworc helm/ \
  --namespace claworc \
  --create-namespace \
  -f custom-values.yaml
```

The chart creates a ServiceAccount, Role, and RoleBinding scoped to the claworc namespace. These grant only the permissions needed to manage agent pods, services, PVCs, secrets, and configmaps.

Set rbac.create: false if you manage RBAC externally.

Verify RBAC resources exist:

```sh
kubectl get serviceaccount,role,rolebinding -n claworc
```
To upgrade:

```sh
# Pull latest chart changes
git pull
helm upgrade claworc helm/ \
  --namespace claworc \
  -f custom-values.yaml
```
To uninstall:

```sh
helm uninstall claworc -n claworc
kubectl delete namespace claworc
```

Troubleshooting

Docker: Check that the container is running:

```sh
docker ps --filter "name=claworc-dashboard"
```

Kubernetes: Check pod status and the service:

```sh
kubectl get pods -n claworc
kubectl get svc -n claworc
```

Check the control plane logs first:

```sh
# Docker Compose
docker compose logs -f
# Kubernetes
kubectl logs -f deploy/claworc -n claworc
```

Docker: Verify the Docker socket mount (see Docker socket access above).

Kubernetes: Verify RBAC:

```sh
kubectl get rolebinding -n claworc
```
To inspect an agent instance's logs:

```sh
# Docker
docker logs -f bot-<instance-name>
# Kubernetes
kubectl logs -f deploy/bot-<instance-name> -n claworc
```
To check the control plane health endpoint:

```sh
curl http://localhost:8000/health
```

Docker Compose:

```sh
docker compose down
rm -f ~/.claworc/data/claworc.db
docker compose up -d
```
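If you want to keep a copy of the database before deleting it, a minimal sketch (the paths match the Compose setup above; the timestamped backup name is just a convention, and the stand-in DB file is created so the demo runs anywhere):

```shell
# Paths from the Compose setup above; create a stand-in DB so this demo is self-contained
DATA_DIR="$HOME/.claworc/data"
mkdir -p "$DATA_DIR"
: > "$DATA_DIR/claworc.db"

# Copy the database aside with a timestamp before resetting it
BACKUP="$DATA_DIR/claworc.db.$(date +%Y%m%d-%H%M%S).bak"
cp "$DATA_DIR/claworc.db" "$BACKUP"
ls -l "$BACKUP"
```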

Kubernetes:

```sh
kubectl delete pvc claworc-data -n claworc
kubectl rollout restart deploy/claworc -n claworc
```

Windows: script fails with “invalid option” error


If you run install.sh on Windows directly (not via WSL), you may see:

```
: invalid option nameet: pipefail
```

This is caused by Windows line endings. Use install.ps1 instead, or convert line endings first:

```sh
dos2unix install.sh
bash install.sh
```
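If dos2unix is not installed, stripping the carriage returns with sed works as well. A self-contained sketch (demo.sh stands in for install.sh here):

```shell
# A script saved with Windows (CRLF) line endings — a \r before each \n
printf 'set -euo pipefail\r\necho ok\r\n' > demo.sh

# Strip the trailing carriage return from every line (GNU sed), like dos2unix
sed -i 's/\r$//' demo.sh
bash demo.sh   # prints "ok"
```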