Wazuh SIEM Deployment
Deploy Wazuh 4.14.3 on k3s for security information and event management (SIEM), extended detection and response (XDR), and compliance monitoring.
1. Overview
Wazuh is an open-source security platform providing:
| Feature | Description |
|---|---|
| SIEM | Log analysis, correlation, alerting |
| XDR | Endpoint detection and response |
| File Integrity | Monitor file changes on hosts |
| Vulnerability Detection | CVE scanning on endpoints |
| Compliance | PCI-DSS, HIPAA, GDPR reporting |
| Cloud Security | AWS, Azure, GCP monitoring |
2. Architecture
| Component | Replicas | Purpose |
|---|---|---|
| wazuh-indexer | 1 | OpenSearch for log storage and search |
| wazuh-manager-master | 1 | Cluster manager, API, agent registration |
| wazuh-manager-worker | 1 | Event processing from agents |
| wazuh-dashboard | 1 | Web UI for visualization |
3. Prerequisites
3.1. Resource Requirements
Wazuh is resource-intensive. The k3s VM requires at least 4 vCPUs and 8 GiB of RAM (see the Resource Usage section for per-component requests).
Verify k3s node resources:
kubectl top nodes
kubectl describe node | grep -A 10 "Allocated resources:"
If CPU requests exceed 80%, increase VM CPU before proceeding.
3.2. Increase VM Resources (if needed)
From workstation (VM must be shut down):
ssh kvm-01 "sudo virsh shutdown k3s-master-01"
Wait for shutdown, then:
ssh kvm-01 "sudo virsh setvcpus k3s-master-01 4 --config --maximum"
ssh kvm-01 "sudo virsh setvcpus k3s-master-01 4 --config"
ssh kvm-01 "sudo virsh setmaxmem k3s-master-01 8G --config"
ssh kvm-01 "sudo virsh setmem k3s-master-01 8G --config"
ssh kvm-01 "sudo virsh start k3s-master-01"
Verify:
ssh k3s-master-01 "nproc && free -h | awk 'NR==2 {print \$2}'"
Expected: 4 CPUs, 7.5Gi RAM
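The awk filter above picks field 2 (the "total" column) of the second line of `free -h` output. Demonstrated on canned output (values illustrative):

```shell
# awk 'NR==2 {print $2}' selects the total-memory field from free -h output
printf '               total        used        free\nMem:           7.5Gi       2.1Gi       3.0Gi\n' \
  | awk 'NR==2 {print $2}'
# prints: 7.5Gi
```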
4. Phase 1: NFS Provisioner Setup
The Wazuh StatefulSets use volumeClaimTemplates, which dynamically create PVCs. An NFS provisioner is needed to satisfy them.
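For reference, a volumeClaimTemplate in a StatefulSet looks roughly like this sketch (field values are illustrative, not Wazuh's actual manifest). Each pod stamped out by the StatefulSet gets its own PVC, which the provisioner backs with a dynamically created PV:

```yaml
volumeClaimTemplates:
  - metadata:
      name: data              # PVC name becomes data-<pod-name>, e.g. data-wazuh-indexer-0
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: nfs-client   # must match the provisioner's StorageClass
      resources:
        requests:
          storage: 500Mi
```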
4.1. 1.1 Create NAS Directory
From workstation:
ssh nas-01 "mkdir -p /volume1/k3s/wazuh && ls -la /volume1/k3s/"
4.2. 1.2 Install NFS Provisioner
On k3s node:
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo update
helm install nfs-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
--namespace kube-system \
--set nfs.server=10.50.1.70 \
--set nfs.path=/volume1/k3s/wazuh \
--set storageClass.name=nfs-client \
--set storageClass.reclaimPolicy=Retain
Verify:
kubectl get storageclass | grep nfs
kubectl get pods -n kube-system | grep nfs
Expected: StorageClass nfs-client exists, provisioner pod is Running.
5. Phase 2: Clone and Configure Repository
Wazuh 5.0 is not released yet. Use version 4.14.3 (latest stable as of 2026-02).
5.1. 2.1 Clone Repository
On k3s node:
cd /tmp
git clone https://github.com/wazuh/wazuh-kubernetes.git -b v4.14.3 --depth=1
cd wazuh-kubernetes
5.2. 2.2 Review Directory Structure
find . -name "kustomization.yml" 2>/dev/null
Expected:
./envs/eks/kustomization.yml
./envs/local-env/kustomization.yml
./wazuh/kustomization.yml
Key directories:
- wazuh/ - Base manifests (indexer, manager, dashboard)
- wazuh/certs/ - Certificate generation scripts
- envs/local-env/ - Local deployment overlay (reduced replicas)
5.3. 2.3 Generate Certificates
Wazuh requires TLS certificates for internal communication.
Generate indexer cluster certificates:
bash wazuh/certs/indexer_cluster/generate_certs.sh
Generate dashboard HTTPS certificates:
bash wazuh/certs/dashboard_http/generate_certs.sh
Verify certificates created:
ls wazuh/certs/indexer_cluster/*.pem | wc -l
ls wazuh/certs/dashboard_http/*.pem | wc -l
Expected: 8+ files in indexer_cluster, 2 files in dashboard_http.
5.4. 2.4 Configure Storage Class
Update the storage class to use our NFS provisioner:
cat > envs/local-env/storage-class.yaml << 'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: wazuh-storage
provisioner: cluster.local/nfs-provisioner-nfs-subdir-external-provisioner
reclaimPolicy: Retain
volumeBindingMode: Immediate
allowVolumeExpansion: true
EOF
Verify:
cat envs/local-env/storage-class.yaml | grep provisioner
6. Phase 3: Deploy Wazuh
6.1. 3.1 Apply Traefik CRDs
k3s includes Traefik, but we need the Wazuh-specific CRDs:
kubectl apply -f traefik/crd/
Expected: Multiple CRDs created/configured (warnings about missing annotations are normal).
6.2. 3.2 Deploy with Kustomize
The local-env overlay reduces replicas and resources for single-node deployment:
kubectl apply -k envs/local-env/
6.3. 3.3 Watch Deployment Progress
kubectl get pods -n wazuh -w
Expected progression (3-5 minutes):
- PVCs created and bound
- Init containers run (busybox for permissions)
- Main containers start pulling images (~800MB each)
- All pods reach Running state
Final expected state:
NAME                     READY   STATUS    RESTARTS   AGE
wazuh-dashboard-xxx      1/1     Running   0          5m
wazuh-indexer-0          1/1     Running   0          5m
wazuh-manager-master-0   1/1     Running   0          5m
wazuh-manager-worker-0   1/1     Running   0          5m
7. Phase 4: LoadBalancer Services and DNS
Wazuh services use MetalLB LoadBalancer for external access. Each service gets a VIP from the MetalLB pool.
7.1. 4.1 Verify LoadBalancer VIPs
ssh k3s-master-01 "kubectl get svc -n wazuh -o custom-columns='NAME:.metadata.name,TYPE:.spec.type,VIP:.status.loadBalancer.ingress[0].ip,PORTS:.spec.ports[*].port'"
NAME            TYPE           VIP           PORTS
dashboard       LoadBalancer   10.50.1.132   443
indexer         LoadBalancer   10.50.1.131   9200
wazuh           LoadBalancer   10.50.1.134   55000,1515,514
wazuh-cluster   ClusterIP      <none>        1516
wazuh-indexer   ClusterIP      <none>        9300
wazuh-workers   LoadBalancer   10.50.1.133   1514
Record these VIPs - you’ll need them for DNS records.
7.2. 4.2 Add DNS Records to BIND (Authoritative)
BIND (10.50.1.90) is authoritative for inside.domusdigitalis.dev. All hosts use BIND for DNS resolution.
SSH to bind-01:
ssh bind-01
Edit forward zone:
sudo vi /var/named/inside.domusdigitalis.dev.zone
Add these A records (use VIPs from Step 4.1):
; Wazuh SIEM (k3s LoadBalancer VIPs)
wazuh IN A 10.50.1.132 ; Dashboard
wazuh-indexer IN A 10.50.1.131 ; OpenSearch
wazuh-api IN A 10.50.1.134 ; API/Syslog
Increment SOA serial (format: YYYYMMDDNN):
# Find current serial
grep -E "^\s+[0-9]{10}" /var/named/inside.domusdigitalis.dev.zone
# Increment to today's date + sequence (e.g., 2026022401)
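The increment rule can be scripted. This is a hypothetical helper (not part of the runbook) that bumps the two-digit sequence when the serial's date prefix is today, and otherwise starts a fresh `<today>00` serial:

```shell
# next_serial: compute the next BIND SOA serial in YYYYMMDDNN format.
next_serial() {
  current="$1"
  today=$(date +%Y%m%d)
  if [ "${current%??}" = "$today" ]; then
    # Same day: bump the two-digit sequence (10# forces base-10 despite a leading zero).
    seq=$(( 10#${current#"$today"} + 1 ))
    printf '%s%02d\n' "$today" "$seq"
  else
    # New day: reset the sequence to 00.
    printf '%s00\n' "$today"
  fi
}
next_serial 2026022401   # on 2026-02-24 prints 2026022402; on any later day, <today>00
```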
Reload zone:
sudo rndc reload inside.domusdigitalis.dev
Verify on BIND:
dig @localhost wazuh.inside.domusdigitalis.dev +short
dig @localhost wazuh-indexer.inside.domusdigitalis.dev +short
dig @localhost wazuh-api.inside.domusdigitalis.dev +short
10.50.1.132
10.50.1.131
10.50.1.134
Exit bind-01:
exit
7.3. 4.3 Add PTR Records (Reverse Zone)
On bind-01:
sudo vi /var/named/10.50.1.rev
Add PTR records:
; Wazuh SIEM reverse records
131 IN PTR wazuh-indexer.inside.domusdigitalis.dev.
132 IN PTR wazuh.inside.domusdigitalis.dev.
134 IN PTR wazuh-api.inside.domusdigitalis.dev.
Increment serial and reload:
sudo rndc reload 1.50.10.in-addr.arpa
Verify:
dig @localhost -x 10.50.1.132 +short
wazuh.inside.domusdigitalis.dev.
7.4. 4.4 Verify DNS Resolution from Workstation
# Forward lookups
host wazuh.inside.domusdigitalis.dev
host wazuh-api.inside.domusdigitalis.dev
# Reverse lookups
host 10.50.1.132
host 10.50.1.134
wazuh.inside.domusdigitalis.dev has address 10.50.1.132
wazuh-api.inside.domusdigitalis.dev has address 10.50.1.134
132.1.50.10.in-addr.arpa domain name pointer wazuh.inside.domusdigitalis.dev.
134.1.50.10.in-addr.arpa domain name pointer wazuh-api.inside.domusdigitalis.dev.
7.5. 4.6 Port Forward (Development)
kubectl -n wazuh port-forward service/dashboard 8443:443 --address 0.0.0.0 &
The dashboard uses a self-signed certificate. The browser will show a warning - this is expected. Accept the certificate to proceed.
7.6. 4.7 Get Credentials
Credentials are stored in the indexer-cred secret:
kubectl get secret indexer-cred -n wazuh -o jsonpath='{.data.username}' | base64 -d && echo
kubectl get secret indexer-cred -n wazuh -o jsonpath='{.data.password}' | base64 -d && echo
Default credentials:
- Username: admin
- Password: SecretPassword
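Kubernetes stores Secret values base64-encoded; the jsonpath pipes above simply decode them. A quick local illustration of the round-trip, using the default password shown above:

```shell
# base64 round-trip, as kubectl stores and retrieves Secret data
encoded=$(printf '%s' 'SecretPassword' | base64)
echo "$encoded"                      # U2VjcmV0UGFzc3dvcmQ=
printf '%s' "$encoded" | base64 -d   # SecretPassword
echo
```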
7.7. 4.8 Secrets Management (gopass + dsec)
Credentials are stored in two locations for different use cases:
| System | Location | Use Case |
|---|---|---|
| gopass | v3/domains/d000/k3s/wazuh | Interactive retrieval, metadata, password managers |
| dsec | d000 dev/app (app.env.age) | Shell scripts, automation |
7.7.1. Step 1: Get Current Password
WAZUH_PASS=$(kubectl get secret indexer-cred -n wazuh -o jsonpath='{.data.password}' | base64 -d)
echo "Password: $WAZUH_PASS"
7.7.2. Step 2: Add to gopass
# Generate password and open editor for metadata
gopass generate -e v3/domains/d000/k3s/wazuh 32
Add metadata below the generated password line:
---
description: "Wazuh SIEM dashboard credentials"
url: "https://wazuh.inside.domusdigitalis.dev:8443"
username: "admin"
namespace: "wazuh"
secret: "indexer-cred"
helm_release: "wazuh"
gopass sync
7.7.3. Step 3: Add to dsec (app.env.age)
dsec edit d000 dev/app
# Add this section:
# === Wazuh SIEM ===
K3S_WAZUH_ADMIN_USER=admin
K3S_WAZUH_ADMIN_PASS=<paste password>
K3S_WAZUH_URL=https://wazuh.inside.domusdigitalis.dev:8443
7.7.4. Step 4: Commit and Push
# Push gopass
gopass sync
# Push dsec
cd ~/.secrets
git add environments/domains/d000/dev/app.env.age
git commit -m "feat(d000/dev): Add Wazuh SIEM credentials"
git push origin main
7.7.5. Retrieve Password
# Option A: From gopass (interactive)
gopass show -c v3/domains/d000/k3s/wazuh # copies to clipboard
# Option B: From dsec (automation)
eval "$(dsec source d000 dev/app)"
echo $K3S_WAZUH_ADMIN_PASS
7.7.6. Step 5: Update k8s Secrets from gopass (Secure Workflow)
This is the correct security workflow: generate a secure password in gopass first, then push it to k8s.
Understanding Wazuh Secrets Architecture
Wazuh uses multiple secrets for different components:
| Secret | Default User | Purpose |
|---|---|---|
| indexer-cred | admin | OpenSearch indexer authentication (web UI login) |
| dashboard-cred | kibanaserver | Dashboard → Indexer service account (DO NOT change) |
| wazuh-api-cred | wazuh-wui | Wazuh Manager API authentication |
| wazuh-authd-pass | - | Agent registration password |
Diagnostic Commands (awk/jq patterns)
List all credential secrets:
kubectl -n wazuh get secrets -o custom-columns='NAME:.metadata.name,TYPE:.type' | awk '/cred|password|auth/'
View all secrets with decoded values:
for secret in dashboard-cred indexer-cred wazuh-api-cred wazuh-authd-pass; do
echo "=== $secret ==="
kubectl -n wazuh get secret $secret -o json | jq -r '.data | to_entries[] | "\(.key): \(.value | @base64d)"'
done
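The jq filter in the loop can be tried offline against a sample Secret document - no cluster required:

```shell
# Decode all data fields of a sample Secret with jq's @base64d format string
cat <<'EOF' | jq -r '.data | to_entries[] | "\(.key): \(.value | @base64d)"'
{"data": {"username": "YWRtaW4=", "password": "U2VjcmV0UGFzc3dvcmQ="}}
EOF
# prints:
#   username: admin
#   password: SecretPassword
```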
Check which secrets a pod uses:
# Dashboard pod secrets and env vars
kubectl -n wazuh get pod -l app=wazuh-dashboard -o yaml | awk '/secretKeyRef/,/name:/' | head -30
# Indexer pod secrets
kubectl -n wazuh get pod -l app=wazuh-indexer -o yaml | awk '/indexer-cred|secretName|PASSWORD/'
Check mounted volumes and secrets:
kubectl -n wazuh get pod -l app=wazuh-dashboard -o yaml | awk '/volumes:/,/^[^ ]/' | head -30
Check environment variables in pod:
kubectl -n wazuh get pod -l app=wazuh-dashboard -o yaml | awk '/env:/,/^[^ ]/' | head -40
View running pod’s actual env vars:
kubectl -n wazuh exec deploy/wazuh-dashboard -- env | awk '/PASS|USER|CRED|URL/'
Change OpenSearch Internal Admin Password (Full Procedure)
The admin user is reserved in OpenSearch and cannot be changed via the API. You must update the ConfigMap and reload the security configuration.
Step 1: Generate bcrypt hash on workstation
# Get password from gopass
WAZUH_PW=$(gopass show -o v3/domains/d000/k3s/wazuh)
# Generate bcrypt hash (rounds=12) from the gopass password
python3 -c "import bcrypt; print(bcrypt.hashpw('$WAZUH_PW'.encode(), bcrypt.gensalt(rounds=12)).decode())"
$2b$12$<HASH_OUTPUT_HERE>
Step 2: Find and export the ConfigMap
# Find the configmap name
kubectl -n wazuh get pod wazuh-indexer-0 -o json | jq -r '.spec.volumes[] | select(.configMap) | .configMap.name' | sort -u
# Export to file
kubectl -n wazuh get configmap <CONFIGMAP_NAME> -o yaml > /tmp/internal-users-cm.yaml
Step 3: Update the admin hash
# Set your new hash
HASH='$2b$12$<YOUR_HASH_HERE>'
# Replace the old hash (find old hash first with grep)
sed -i "s|<OLD_HASH>|$HASH|" /tmp/internal-users-cm.yaml
# Verify the change
grep -A3 "admin:" /tmp/internal-users-cm.yaml
Step 4: Apply and restart
# Apply updated configmap
kubectl apply -f /tmp/internal-users-cm.yaml
# Restart indexer to pick up changes
kubectl -n wazuh rollout restart statefulset/wazuh-indexer
# Wait for rollout
kubectl -n wazuh rollout status statefulset/wazuh-indexer
Step 5: Initialize OpenSearch Security
After the ConfigMap update, the security configuration must be reloaded with securityadmin.sh:
# Find certificate paths
kubectl -n wazuh exec wazuh-indexer-0 -- find /usr/share/wazuh-indexer -name "*.pem" 2>/dev/null
/usr/share/wazuh-indexer/config/certs/admin-key.pem
/usr/share/wazuh-indexer/config/certs/admin.pem
/usr/share/wazuh-indexer/config/certs/root-ca.pem
...
# Reload security config (with JAVA_HOME set)
kubectl -n wazuh exec wazuh-indexer-0 -- env OPENSEARCH_JAVA_HOME=/usr/share/wazuh-indexer/jdk \
/usr/share/wazuh-indexer/plugins/opensearch-security/tools/securityadmin.sh \
-cd /usr/share/wazuh-indexer/config/opensearch-security/ \
-icl -nhnv \
-cacert /usr/share/wazuh-indexer/config/certs/root-ca.pem \
-cert /usr/share/wazuh-indexer/config/certs/admin.pem \
-key /usr/share/wazuh-indexer/config/certs/admin-key.pem
Security Admin v7
Will connect to localhost:9200 ... done
...
Will update '/internalusers' with .../internal_users.yml
SUCC: Configuration for 'internalusers' created or updated
...
Done with success
Step 6: Verify new password works
# Test indexer API
curl -k -u admin:<YOUR_PASSWORD> https://localhost:9200/_cluster/health
# Test from workstation
WAZUH_PW=$(gopass show -o v3/domains/d000/k3s/wazuh)
curl -k -u admin:$WAZUH_PW https://wazuh.inside.domusdigitalis.dev:9200/_cluster/health
{"cluster_name":"wazuh-cluster","status":"green",...}
Check current k8s secret structure
ssh k3s-master-01 "kubectl -n wazuh get secret indexer-cred -o yaml"
apiVersion: v1
data:
password: U2VjcmV0UGFzc3dvcmQ= # base64 encoded
username: YWRtaW4= # base64 encoded
kind: Secret
metadata:
name: indexer-cred
namespace: wazuh
type: Opaque
Patch secret with gopass password:
# Get password from gopass
WAZUH_PW=$(gopass show -o v3/domains/d000/k3s/wazuh)
# Patch k8s secret
ssh k3s-master-01 "kubectl -n wazuh patch secret indexer-cred -p '{\"data\":{\"password\":\"$(echo -n $WAZUH_PW | base64)\"}}'"
secret/indexer-cred patched
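Hand-escaping the base64 inside the ssh-quoted JSON is fragile. A sketch of building the same patch payload with jq instead (the password value here is illustrative; use the gopass value in practice):

```shell
# Build the patch JSON with jq; @base64 encodes the string value safely.
WAZUH_PW='S3cr3t!'   # example only
PATCH=$(jq -cn --arg pw "$WAZUH_PW" '{data: {password: ($pw | @base64)}}')
echo "$PATCH"        # {"data":{"password":"UzNjcjN0IQ=="}}
# Then: ssh k3s-master-01 "kubectl -n wazuh patch secret indexer-cred -p '$PATCH'"
```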
Verify the change:
ssh k3s-master-01 "kubectl -n wazuh get secret indexer-cred -o jsonpath='{.data.password}' | base64 -d && echo"
Should output the new password from gopass.
Restart services to pick up new password:
ssh k3s-master-01 "kubectl -n wazuh rollout restart deployment/wazuh-dashboard && kubectl -n wazuh rollout restart statefulset/wazuh-indexer"
Verify pods are running:
ssh k3s-master-01 "kubectl -n wazuh get pods -w"
Wait until all pods show Running status, then Ctrl+C.
Test login with new password:
curl -k -u admin:$WAZUH_PW https://wazuh.inside.domusdigitalis.dev:8443/api/security/user/authenticate
{"data":{"token":"..."},"error":0}
7.8. 4.9 Expose Agent Ports
For agents to connect:
# Agent registration (1515)
kubectl -n wazuh port-forward service/wazuh 1515:1515 --address 0.0.0.0 &
# Agent events (1514)
kubectl -n wazuh port-forward service/wazuh-workers 1514:1514 --address 0.0.0.0 &
Port forwards do NOT persist across VM reboots. See Appendix: Port Forward Persistence.
8. Phase 5: Firewall Configuration
On k3s-master-01:
sudo firewall-cmd --add-port=514/udp --permanent # Syslog
sudo firewall-cmd --add-port=1514/tcp --permanent # Agent events
sudo firewall-cmd --add-port=1515/tcp --permanent # Agent registration
sudo firewall-cmd --add-port=5601/tcp --permanent # Dashboard (if using NodePort)
sudo firewall-cmd --add-port=8443/tcp --permanent # Dashboard (port-forward)
sudo firewall-cmd --add-port=9200/tcp --permanent # Indexer API
sudo firewall-cmd --add-port=55000/tcp --permanent # Manager API
sudo firewall-cmd --reload
Verify:
sudo firewall-cmd --list-ports
8.1. 5.1 Enable Wazuh Syslog Receiver
Wazuh does NOT listen for syslog by default. The ossec.conf is managed by a Kubernetes ConfigMap - direct edits to the pod are lost on restart. You must update the ConfigMap.
8.1.1. Step 1: Get Wazuh LoadBalancer VIP
The Wazuh service uses MetalLB LoadBalancer. Get the external IP:
ssh k3s-master-01 "kubectl get svc wazuh -n wazuh -o jsonpath='{.status.loadBalancer.ingress[0].ip}'"
10.50.1.134
Record this IP - syslog sources must send to this VIP, NOT the k3s node IP.
8.1.2. Step 2: Extract Current ConfigMap
ssh k3s-master-01
# Find the ConfigMap name
CM_NAME=$(kubectl get statefulset wazuh-manager-master -n wazuh -o jsonpath='{.spec.template.spec.volumes[?(@.name=="config")].configMap.name}')
echo "ConfigMap: $CM_NAME"
# Extract master.conf
kubectl get configmap $CM_NAME -n wazuh -o jsonpath='{.data.master\.conf}' > /tmp/master.conf
8.1.3. Step 3: Find Insertion Point
# The syslog block goes after the existing </remote> (secure connection)
grep -n '</remote>' /tmp/master.conf
41: </remote>
8.1.4. Step 4: Insert Syslog Remote Block
# Insert syslog block after line 41 (adjust if your line number differs)
awk 'NR==41 {
print
print ""
print " <!-- Syslog remote receiver for network devices -->"
print " <remote>"
print " <connection>syslog</connection>"
print " <port>514</port>"
print " <protocol>udp</protocol>"
print " <allowed-ips>10.50.1.0/24</allowed-ips>"
print " </remote>"
next
}
{print}' /tmp/master.conf > /tmp/master-updated.conf
Verify:
awk 'NR>=40 && NR<=55' /tmp/master-updated.conf
<queue_size>131072</queue_size>
</remote>
<!-- Syslog remote receiver for network devices -->
<remote>
<connection>syslog</connection>
<port>514</port>
<protocol>udp</protocol>
<allowed-ips>10.50.1.0/24</allowed-ips>
</remote>
<!-- Policy monitoring -->
8.1.5. Step 4b: Enable Archives Logging and Indexing
By default Wazuh only logs alerts. To archive ALL syslog events and make them searchable via API:
# Enable logall (writes to archives.log file)
sed -i 's/<logall>no</<logall>yes</' /tmp/master-updated.conf
# Enable logall_json (indexes to OpenSearch - makes events searchable via API)
sed -i 's/<logall_json>no</<logall_json>yes</' /tmp/master-updated.conf
Verify:
grep -E '<logall>|<logall_json>' /tmp/master-updated.conf
<logall>yes</logall>
<logall_json>yes</logall_json>
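The sed expressions above use `/` both as the delimiter and inside the XML, which is easy to misread: the pattern is `<logall>no<` and the replacement is `<logall>yes<`, so the closing tag is untouched. A one-line demonstration:

```shell
# The substitution, shown on a sample line (pattern "<logall>no<", replacement "<logall>yes<")
echo '  <logall>no</logall>' | sed 's/<logall>no</<logall>yes</'
# prints:   <logall>yes</logall>
```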
8.1.6. Step 5: Create New ConfigMap
# Get existing worker.conf
kubectl get configmap $CM_NAME -n wazuh -o jsonpath='{.data.worker\.conf}' > /tmp/worker.conf
# Create new ConfigMap with syslog config
kubectl create configmap wazuh-conf-syslog \
--from-file=master.conf=/tmp/master-updated.conf \
--from-file=worker.conf=/tmp/worker.conf \
-n wazuh \
--dry-run=client -o yaml | kubectl apply -f -
configmap/wazuh-conf-syslog created
8.1.7. Step 6: Patch StatefulSet to Use New ConfigMap
kubectl patch statefulset wazuh-manager-master -n wazuh --type='json' \
-p='[{"op": "replace", "path": "/spec/template/spec/volumes/0/configMap/name", "value": "wazuh-conf-syslog"}]'
8.1.8. Step 7: Add UDP 514 to Service
kubectl patch svc wazuh -n wazuh --type='json' \
-p='[{"op": "add", "path": "/spec/ports/-", "value": {"name": "syslog", "port": 514, "protocol": "UDP", "targetPort": 514}}]'
Verify:
kubectl get svc wazuh -n wazuh -o jsonpath='{.spec.ports}' | jq '.[] | select(.name=="syslog")'
{
"name": "syslog",
"nodePort": 32358,
"port": 514,
"protocol": "UDP",
"targetPort": 514
}
8.1.9. Step 8: Add ContainerPort to StatefulSet
kubectl patch statefulset wazuh-manager-master -n wazuh --type='json' \
-p='[{"op": "add", "path": "/spec/template/spec/containers/0/ports/-", "value": {"name": "syslog", "containerPort": 514, "protocol": "UDP"}}]'
8.1.10. Step 9: Restart Pod
kubectl delete pod wazuh-manager-master-0 -n wazuh
Wait for pod to recreate:
kubectl get pods -n wazuh -w
8.1.11. Step 10: Verify Syslog Receiver is Active
kubectl exec -n wazuh wazuh-manager-master-0 -- grep -A6 '<connection>syslog' /var/ossec/etc/ossec.conf
<connection>syslog</connection>
<port>514</port>
<protocol>udp</protocol>
<allowed-ips>10.50.1.0/24</allowed-ips>
</remote>
<!-- Policy monitoring -->
kubectl exec -n wazuh wazuh-manager-master-0 -- cat /var/ossec/logs/ossec.log | grep -i 'syslog' | tail -5
2026/02/24 01:48:05 wazuh-remoted: INFO: Remote syslog allowed from: '10.50.1.0/24'
2026/02/24 01:48:05 wazuh-remoted: INFO: Started (pid: 743). Listening on port 514/UDP (syslog).
exit # Return to workstation
9. Phase 6: Configure Syslog Sources
9.1. 6.1 VyOS Syslog
Configure VyOS to send syslog to Wazuh:
# Get Wazuh VIP
WAZUH_VIP=$(ssh k3s-master-01 "kubectl get svc wazuh -n wazuh -o jsonpath='{.status.loadBalancer.ingress[0].ip}'")
echo "Wazuh VIP: $WAZUH_VIP"
# SSH to VyOS and configure syslog.
# NOTE: $WAZUH_VIP does not carry into the VyOS session - substitute
# the value printed above (10.50.1.134 here).
ssh vyos-01
configure
set system syslog host 10.50.1.134 facility all level info
set system syslog host 10.50.1.134 port 514
set system syslog host 10.50.1.134 protocol udp
commit
save
exit
9.2. 6.2 pfSense Syslog (Reference)
This section is kept for environments using pfSense. Domus infrastructure uses VyOS (see 6.1). Use the Wazuh LoadBalancer VIP (from Step 1 above), NOT the k3s node IP.
Via API (preferred):
dsource d000 dev/network
# Get Wazuh LoadBalancer VIP first
WAZUH_VIP=$(ssh k3s-master-01 "kubectl get svc wazuh -n wazuh -o jsonpath='{.status.loadBalancer.ingress[0].ip}'")
echo "Wazuh VIP: $WAZUH_VIP"
curl -ks "https://${PFSENSE_HOST}/api/v2/status/logs/settings" \
-X PATCH \
-H "X-API-Key: ${PFSENSE_API_SECRET}" \
-H "Content-Type: application/json" \
-d "{
\"enableremotelogging\": true,
\"sourceip\": \"\",
\"ipprotocol\": \"ipv4\",
\"remoteserver\": \"${WAZUH_VIP}:514\",
\"logall\": true
}" | jq .
{
"code": 200,
"status": "ok",
"response_id": "SUCCESS",
"data": {
"enableremotelogging": true,
"remoteserver": "10.50.1.134:514",
"logall": true
}
}
Via Web UI (alternative):
- pfSense UI → Status → System Logs → Settings
- Remote Logging Options:
  - Enable: Yes
  - Server 1: 10.50.1.134:514 (Wazuh LoadBalancer VIP)
  - Remote Syslog Contents: Everything
- Verify logs arriving:
ssh k3s-master-01 "kubectl exec -n wazuh wazuh-manager-master-0 -- tail -20 /var/ossec/logs/archives/archives.log | grep -i pfsense"
10. Phase 7: Deploy Wazuh Agents
10.1. 7.1 Linux Agent (Rocky/RHEL)
# Import GPG key
sudo rpm --import https://packages.wazuh.com/key/GPG-KEY-WAZUH
# Add repository
cat << 'EOF' | sudo tee /etc/yum.repos.d/wazuh.repo
[wazuh]
gpgcheck=1
gpgkey=https://packages.wazuh.com/key/GPG-KEY-WAZUH
enabled=1
name=EL-$releasever - Wazuh
baseurl=https://packages.wazuh.com/4.x/yum/
protect=1
EOF
# Install
sudo yum install wazuh-agent -y
# Configure manager address
sudo sed -i 's/MANAGER_IP/10.50.1.120/' /var/ossec/etc/ossec.conf
# Start
sudo systemctl daemon-reload
sudo systemctl enable --now wazuh-agent
10.2. 7.2 Linux Agent (Arch)
# From AUR
yay -S wazuh-agent
# Configure
sudo vim /var/ossec/etc/ossec.conf
# Set: <address>10.50.1.120</address>
# Start
sudo systemctl enable --now wazuh-agent
10.3. 7.3 Windows Agent
# Download installer
Invoke-WebRequest -Uri https://packages.wazuh.com/4.x/windows/wazuh-agent-4.14.3-1.msi -OutFile wazuh-agent.msi
# Install with manager address
msiexec.exe /i wazuh-agent.msi /q WAZUH_MANAGER="10.50.1.120"
# Start service
NET START WazuhSvc
10.4. 7.4 Verify Agent Registration
kubectl exec -n wazuh wazuh-manager-master-0 -- /var/ossec/bin/agent_control -l
Expected output:
Wazuh agent_control. List of available agents:
   ID: 000, Name: wazuh-manager-master-0 (server), IP: 127.0.0.1, Active/Local
   ID: 001, Name: vault-01, IP: 10.50.1.60, Active
   ID: 002, Name: kvm-01, IP: 10.50.1.99, Active
11. Troubleshooting
11.1. Pods Pending - Insufficient CPU
Symptom:
kubectl describe pod wazuh-indexer-0 -n wazuh | grep -A 5 "Events:"
# Shows: 1 Insufficient cpu
Fix: Increase VM CPU to 4 cores (see Prerequisites section).
11.2. Pods Pending - PVC Issues
Symptom:
pod has unbound immediate PersistentVolumeClaims
Diagnosis:
kubectl get pvc -n wazuh
kubectl logs -n kube-system -l app=nfs-subdir-external-provisioner --tail=20
Fix: Verify NFS provisioner is running and NAS path exists.
11.3. Indexer CrashLoopBackOff
Common cause: Insufficient memory or vm.max_map_count.
kubectl logs -n wazuh wazuh-indexer-0 | tail -50
Fix vm.max_map_count (if needed):
# On k3s node
sudo sysctl -w vm.max_map_count=262144
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
11.4. Dashboard Cannot Connect to Indexer
kubectl logs -n wazuh -l app=wazuh-dashboard --tail=20
Check indexer is reachable:
kubectl exec -n wazuh wazuh-dashboard-xxx -- curl -k https://wazuh-indexer:9200
11.5. Images Not Pulling
Symptom: ErrImagePull or ImagePullBackOff
Cause: Using wrong version tag (e.g., 5.0.0 doesn’t exist).
Fix: Ensure using v4.14.3 branch of wazuh-kubernetes.
11.6. Syslog Not Appearing in Archives
Symptom: Network device syslog (VyOS, pfSense, IOS, etc.) is configured, but /var/ossec/logs/archives/archives.log is empty.
Cause: Wazuh logall is disabled by default - only alerts are logged, not all incoming events.
Diagnosis:
kubectl exec -n wazuh wazuh-manager-master-0 -- grep '<logall>' /var/ossec/etc/ossec.conf
# Shows: <logall>no</logall>
Fix:
kubectl exec -n wazuh wazuh-manager-master-0 -- sed -i 's/<logall>no</<logall>yes</' /var/ossec/etc/ossec.conf
kubectl exec -n wazuh wazuh-manager-master-0 -- /var/ossec/bin/wazuh-control restart
Verify:
# Send test syslog
echo "<14>Test from workstation" | nc -u -w1 10.50.1.134 514
# Check archives
kubectl exec -n wazuh wazuh-manager-master-0 -- tail -5 /var/ossec/logs/archives/archives.log
This change is lost on pod restart. For persistence, update the ConfigMap (see the Syslog ConfigMap steps in Phase 5).
11.7. Archives Not Indexed to OpenSearch
Symptom: archives.log contains data, but no wazuh-archives-* index exists in OpenSearch.
# Archives has data
kubectl exec -n wazuh wazuh-manager-master-0 -- wc -l /var/ossec/logs/archives/archives.log
# Shows: 5000+ lines
# But no archives index
curl -sk -u admin:$INDEXER_PASSWORD https://10.50.1.131:9200/_cat/indices | grep archives
# No output
Cause 1: Filebeat archives disabled
kubectl exec -n wazuh wazuh-manager-master-0 -c wazuh-manager -- grep -A1 "archives:" /etc/filebeat/filebeat.yml
# Shows: archives:
# enabled: false
Cause 2: ILM not supported error
Check Filebeat logs:
kubectl exec -n wazuh wazuh-manager-master-0 -c wazuh-manager -- tail -50 /var/log/filebeat
# Shows: ILM is not supported by the Elasticsearch version in use
OpenSearch doesn’t support Elasticsearch ILM. The setup.ilm.enabled: true setting fails silently.
Fix - Enable archives in filebeat.yml:
The /etc/filebeat/ directory is mounted from NFS PVC, so changes persist. However, sed -i fails on NFS ("Device or resource busy"). Use copy pattern:
kubectl exec -n wazuh wazuh-manager-master-0 -c wazuh-manager -- sh -c "cat /etc/filebeat/filebeat.yml | sed '/archives:/{n;s/enabled: false/enabled: true/}' > /tmp/fb.yml && cat /tmp/fb.yml > /etc/filebeat/filebeat.yml"
Fix - Disable ILM:
kubectl exec -n wazuh wazuh-manager-master-0 -c wazuh-manager -- sh -c "cat /etc/filebeat/filebeat.yml | sed 's/setup.ilm.enabled: true/setup.ilm.enabled: false/' > /tmp/fb.yml && cat /tmp/fb.yml > /etc/filebeat/filebeat.yml"
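The copy pattern works because `cat > file` rewrites the existing file in place (same inode), whereas `sed -i` renames a temp file over the original, which the NFS mount refuses. A local demonstration of the mechanics (regular filesystem, same principle):

```shell
# Show that the cat-back pattern preserves the inode while changing content.
tmp=$(mktemp)
echo 'enabled: false' > "$tmp"
ino_before=$(ls -i "$tmp" | awk '{print $1}')
sed 's/false/true/' "$tmp" > "$tmp.new" && cat "$tmp.new" > "$tmp"
ino_after=$(ls -i "$tmp" | awk '{print $1}')
[ "$ino_before" = "$ino_after" ] && echo "same inode, content: $(cat "$tmp")"
rm -f "$tmp" "$tmp.new"
```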
Restart Filebeat:
kubectl exec -n wazuh wazuh-manager-master-0 -c wazuh-manager -- pkill filebeat
# s6 supervisor automatically restarts it
Verify:
# Check Filebeat config
kubectl exec -n wazuh wazuh-manager-master-0 -c wazuh-manager -- grep -E "(archives:|enabled:)" /etc/filebeat/filebeat.yml
# Check Filebeat logs for modules
kubectl exec -n wazuh wazuh-manager-master-0 -c wazuh-manager -- grep "Enabled modules" /var/log/filebeat | tail -1
# Should show: wazuh (alerts, archives)
# Check archives index exists (wait 1-2 minutes)
curl -sk -u admin:$INDEXER_PASSWORD https://10.50.1.131:9200/_cat/indices | grep archives
11.8. Init Script Fails on NFS (sed -i Error)
Symptom: Manager pod restarts repeatedly, init script exits with code 4.
kubectl logs -n wazuh wazuh-manager-master-0 -c wazuh-manager --previous | grep -E "(sed:|Error|exit)"
# Shows: sed: cannot rename /etc/filebeat/sedXXXX: Device or resource busy
Cause: Init script 1-config-filebeat uses sed -i which requires atomic rename. NFS doesn’t support this.
The init script runs sed when environment variables are set (INDEXER_URL, INDEXER_USERNAME, etc.). If these aren’t needed (Filebeat already configured in PVC), clear them.
Fix - Clear environment variables:
kubectl patch statefulset wazuh-manager-master -n wazuh --type=json -p='[
{"op": "replace", "path": "/spec/template/spec/containers/0/env/0/value", "value": ""},
{"op": "replace", "path": "/spec/template/spec/containers/0/env/1", "value": {"name": "INDEXER_USERNAME", "value": ""}},
{"op": "replace", "path": "/spec/template/spec/containers/0/env/2", "value": {"name": "INDEXER_PASSWORD", "value": ""}},
{"op": "replace", "path": "/spec/template/spec/containers/0/env/3", "value": {"name": "FILEBEAT_SSL_VERIFICATION_MODE", "value": ""}},
{"op": "replace", "path": "/spec/template/spec/containers/0/env/4", "value": {"name": "SSL_CERTIFICATE_AUTHORITIES", "value": ""}},
{"op": "replace", "path": "/spec/template/spec/containers/0/env/5", "value": {"name": "SSL_CERTIFICATE", "value": ""}},
{"op": "replace", "path": "/spec/template/spec/containers/0/env/6", "value": {"name": "SSL_KEY", "value": ""}}
]'
Pod will restart automatically. Verify init script succeeds:
kubectl logs -n wazuh wazuh-manager-master-0 -c wazuh-manager | grep "1-config-filebeat"
# Should show: 1-config-filebeat : Filebeat ... successfully
The filebeat.yml on the PVC already has the correct settings from the initial deployment. Clearing the env vars just prevents the init script from trying to modify it with sed -i.
12. Resource Usage
| Component | Memory Request | CPU Request | Storage |
|---|---|---|---|
| Indexer | 1Gi | 500m | 500Mi (dynamic) |
| Manager Master | 512Mi | 400m | 500Mi (dynamic) |
| Manager Worker | 512Mi | 400m | 500Mi (dynamic) |
| Dashboard | 512Mi | 500m | - |
| Total | ~2.5Gi | ~1800m | ~1.5Gi |
These are the reduced requests from the local-env overlay, not production sizing.
13. Appendix: Port Forward Persistence
Port forwards created with kubectl port-forward do not persist across:
- VM reboots
- Pod restarts
- SSH session termination
13.1. Option A: Background with nohup
nohup kubectl -n wazuh port-forward service/dashboard 8443:443 --address 0.0.0.0 > /tmp/wazuh-dashboard-pf.log 2>&1 &
13.2. Option B: Systemd Service (Recommended)
Create /etc/systemd/system/wazuh-dashboard-pf.service:
[Unit]
Description=Wazuh Dashboard Port Forward
After=k3s.service
[Service]
Type=simple
ExecStart=/usr/local/bin/kubectl -n wazuh port-forward service/dashboard 8443:443 --address 0.0.0.0
Restart=always
RestartSec=10
User=evanusmodestus
Environment=KUBECONFIG=/home/evanusmodestus/.kube/config
[Install]
WantedBy=multi-user.target
Enable:
sudo systemctl daemon-reload
sudo systemctl enable --now wazuh-dashboard-pf
Verify:
sudo systemctl status wazuh-dashboard-pf --no-pager
14. Appendix: Vault PKI Certificate
The default Wazuh dashboard uses a self-signed certificate. Replace it with a Vault-issued certificate to eliminate browser warnings.
14.1. Issue Certificate from Vault
From workstation:
vault write -format=json pki_int/issue/domus-client \
common_name="wazuh.inside.domusdigitalis.dev" \
ttl="8760h" > /tmp/wazuh-cert.json
14.2. Extract Certificate Components
jq -r '.data.certificate' /tmp/wazuh-cert.json > /tmp/wazuh.crt
jq -r '.data.private_key' /tmp/wazuh-cert.json > /tmp/wazuh.key
jq -r '.data.ca_chain[]' /tmp/wazuh-cert.json > /tmp/wazuh-ca.crt
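Before uploading, it is worth checking that the leaf certificate actually chains to the CA. A self-contained sketch of that check using a throwaway CA (all names here are illustrative; in practice you would run `openssl verify -CAfile /tmp/wazuh-ca.crt /tmp/wazuh.crt` against the files above):

```shell
# Generate a throwaway CA and leaf, then verify the chain with openssl verify.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$tmp/ca.key" -out "$tmp/ca.crt" \
  -subj "/CN=DEMO-CA" -days 1 2>/dev/null
openssl req -newkey rsa:2048 -nodes -keyout "$tmp/leaf.key" -out "$tmp/leaf.csr" \
  -subj "/CN=wazuh.example.test" 2>/dev/null
openssl x509 -req -in "$tmp/leaf.csr" -CA "$tmp/ca.crt" -CAkey "$tmp/ca.key" \
  -CAcreateserial -out "$tmp/leaf.crt" -days 1 2>/dev/null
openssl verify -CAfile "$tmp/ca.crt" "$tmp/leaf.crt"   # prints <path>/leaf.crt: OK
rm -rf "$tmp"
```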
14.3. Verify Certificate
openssl x509 -in /tmp/wazuh.crt -noout -subject -issuer -dates
Expected:
subject=CN=wazuh.inside.domusdigitalis.dev
issuer=CN=DOMUS-ISSUING-CA
notBefore=...
notAfter=... (1 year from now)
14.4. Update Kubernetes Secret
From workstation:
scp /tmp/wazuh.crt /tmp/wazuh.key /tmp/wazuh-ca.crt k3s-master-01:/tmp/
Expected:
wazuh.crt      100% 1842   340.1KB/s   00:00
wazuh.key      100% 1675   287.2KB/s   00:00
wazuh-ca.crt   100% 4248   573.9KB/s   00:00
On k3s-master-01:
kubectl -n wazuh create secret generic dashboard-certs-vault \
--from-file=cert.pem=/tmp/wazuh.crt \
--from-file=key.pem=/tmp/wazuh.key \
--from-file=root-ca.pem=/tmp/wazuh-ca.crt \
--dry-run=client -o yaml | kubectl apply -f -
Expected:
secret/dashboard-certs-vault created
14.5. Update Dashboard Deployment
Patch the dashboard to use the new secret (volume index 1 = dashboard-certs):
kubectl -n wazuh patch deployment wazuh-dashboard --type=json -p='[
{"op": "replace", "path": "/spec/template/spec/volumes/1/secret/secretName", "value": "dashboard-certs-vault"}
]'
Expected:
deployment.apps/wazuh-dashboard patched
14.6. Restart Dashboard
kubectl -n wazuh rollout restart deployment/wazuh-dashboard
kubectl -n wazuh rollout status deployment/wazuh-dashboard
14.7. Verify New Certificate
echo | openssl s_client -connect wazuh.inside.domusdigitalis.dev:8443 2>/dev/null | openssl x509 -noout -subject -issuer
Expected:
subject=CN=wazuh.inside.domusdigitalis.dev
issuer=CN=DOMUS-ISSUING-CA
14.7.1. curl TLS Validation (Quick)
curl -vI https://wazuh.inside.domusdigitalis.dev:8443 2>&1 | grep -E "subject:|issuer:|expire|SSL|CN"
* SSL Trust Anchors:
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 / x25519 / RSASSA-PSS
* subject: CN=wazuh.inside.domusdigitalis.dev
* expire date: Feb 23 08:20:55 2027 GMT
* issuer: CN=DOMUS-ISSUING-CA
* SSL certificate verified via OpenSSL.
14.7.2. openssl Full Details
openssl s_client -connect wazuh.inside.domusdigitalis.dev:8443 -servername wazuh.inside.domusdigitalis.dev </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates
subject=CN=wazuh.inside.domusdigitalis.dev
issuer=CN=DOMUS-ISSUING-CA
notBefore=Feb 23 08:20:25 2026 GMT
notAfter=Feb 23 08:20:55 2027 GMT
14.7.3. awk Patterns for Certificate Validation
Extract key TLS details:
curl -vI --silent https://wazuh.inside.domusdigitalis.dev:8443 2>&1 | awk '/subject:|issuer:|expire date|SSL connection/'
Extract certificate CN only:
curl -vI --silent https://wazuh.inside.domusdigitalis.dev:8443 2>&1 | awk -F'CN=' '/subject:/ {print $2}'
Calculate days until expiry:
EXPIRE=$(curl -vI --silent https://wazuh.inside.domusdigitalis.dev:8443 2>&1 | awk '/expire date:/ {print $4, $5, $6, $7}')
echo "Expires: $EXPIRE ($(( ($(date -d "$EXPIRE" +%s) - $(date +%s)) / 86400 )) days)"
Validate issuer is Vault PKI:
curl -vI --silent https://wazuh.inside.domusdigitalis.dev:8443 2>&1 | awk '/issuer:/ {print ($0 ~ /DOMUS-ISSUING-CA/) ? "✓ Vault PKI" : "✗ Unknown CA"}'
One-liner status check (all services):
for svc in wazuh:8443 grafana:3000 prometheus:9090 alertmanager:9093; do
HOST="${svc%:*}"; PORT="${svc#*:}"
RESULT=$(curl -vI --silent "https://${HOST}.inside.domusdigitalis.dev:${PORT}" 2>&1 | awk '/issuer:.*DOMUS/ {print "✓"} /SSL certificate problem/ {print "✗"}')
echo "${HOST}: ${RESULT:-?}"
done
The browser must trust the DOMUS-ROOT-CA for the certificate to show as valid. Import the root CA into your browser/OS trust store if not already done.
15. Cleanup
To remove Wazuh completely:
# Delete deployment
kubectl delete -k envs/local-env/
# Delete PVCs (data will remain on NAS due to Retain policy)
kubectl delete pvc -n wazuh --all
# Delete namespace
kubectl delete namespace wazuh
# Clean NAS data (optional - destructive!)
ssh nas-01 "rm -rf /volume1/k3s/wazuh/*"