Wazuh

Wazuh host-based intrusion detection — agent management, custom rules, and alert tuning.

Wazuh Manager and Agent Operations

Agent Management

List all registered agents with status — identify disconnected hosts
/var/ossec/bin/agent_control -l
Register a new agent — server-side (manager); manage_agents has no group flag, assign groups afterwards with agent_groups
/var/ossec/bin/manage_agents -a 10.50.1.100 -n web-server-01
Agent-side registration using agent-auth — authenticates against manager
/var/ossec/bin/agent-auth -m 10.50.1.10 -A web-server-01 -G default
Restart an agent remotely from the manager
/var/ossec/bin/agent_control -R -u 003
Remove an agent — revokes key and deletes from manager
/var/ossec/bin/manage_agents -r 003
Check agent connection info — verify agent is reporting
/var/ossec/bin/agent_control -i 003
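The listing above scripts well. A sketch that pulls the IDs of disconnected agents; the sample line stands in for live agent_control -l output (verify the line format against your version):

```shell
# Extract IDs of disconnected agents from agent_control -l style output.
# In production, replace the sample with:  /var/ossec/bin/agent_control -l
sample='   ID: 003, Name: web-01, IP: 10.50.1.100, Disconnected'
printf '%s\n' "$sample" \
  | awk -F', ' '/Disconnected/ {sub(/^ *ID: /, "", $1); print $1}'
```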

Agent Groups

List available agent groups
/var/ossec/bin/agent_groups -l
Create a new agent group
/var/ossec/bin/agent_groups -a -g linux-servers
Assign an agent to a group — agent inherits group configuration
/var/ossec/bin/agent_groups -a -i 003 -g linux-servers
Show which group an agent belongs to
/var/ossec/bin/agent_groups -s -i 003

Group configuration files live in /var/ossec/etc/shared/<group_name>/. When you assign an agent to a group, the manager pushes that group’s agent.conf to the agent. This is how you manage configuration at scale without touching individual agents.
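A minimal sketch of such a shared file. The group name, the monitored path /opt/deploy, and the temp location are placeholders; on a real manager the target is /var/ossec/etc/shared/linux-servers/agent.conf:

```shell
# Write a group-level agent.conf; every agent in the group inherits it.
# GROUP_DIR defaults to a temp path so the sketch runs anywhere; point it
# at /var/ossec/etc/shared/<group_name> on the manager.
dir="${GROUP_DIR:-/tmp/shared/linux-servers}"
mkdir -p "$dir"
cat > "$dir/agent.conf" <<'EOF'
<agent_config>
  <syscheck>
    <directories check_all="yes">/opt/deploy</directories>
  </syscheck>
</agent_config>
EOF
grep -c '<directories' "$dir/agent.conf"   # sanity check: prints 1
```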

Manager API

Authenticate and get a JWT token — all subsequent API calls use this
TOKEN=$(curl -s -k -u "wazuh-wui:$WAZUH_API_PASS" \
  -X POST "https://localhost:55000/security/user/authenticate" \
  | jq -r '.data.token')
List all agents via API with status filter
curl -s -k -H "Authorization: Bearer $TOKEN" \
  "https://localhost:55000/agents?status=active&limit=500" \
  | jq '.data.affected_items[] | {id, name, ip, status, os_name: .os.name}'
Get manager status and running processes
curl -s -k -H "Authorization: Bearer $TOKEN" \
  "https://localhost:55000/manager/status" \
  | jq '.data.affected_items[]'
Query recent alerts for an agent — the manager API does not serve alerts; query the Wazuh indexer instead (default port 9200)
curl -s -k -u "admin:$INDEXER_PASS" \
  "https://localhost:9200/wazuh-alerts-*/_search?size=50&sort=timestamp:desc&q=agent.id:003" \
  | jq '.hits.hits[]._source | {timestamp, rule: .rule.id, level: .rule.level, desc: .rule.description}'
Get cluster node status — verify distributed deployment health
curl -s -k -H "Authorization: Bearer $TOKEN" \
  "https://localhost:55000/cluster/status" \
  | jq '.'
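The jq filters above all rely on the API's data.affected_items envelope. An offline sketch with a canned response (field values are invented) shows the shape:

```shell
# Canned /agents-style response; jq pulls the same fields as the live calls.
cat <<'EOF' | jq -r '.data.affected_items[] | "\(.id) \(.name) \(.status)"'
{"data":{"affected_items":[{"id":"003","name":"web-01","ip":"10.50.1.100","status":"active"}],"total_affected_items":1}}
EOF
```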

Rule Customization

Rules live in /var/ossec/ruleset/rules/ (stock) and /var/ossec/etc/rules/ (custom). Never edit stock rules — they are overwritten on upgrade. Put custom rules in local_rules.xml using IDs 100000-120000, the range reserved for user rules; to change a stock rule, redefine it there with overwrite="yes".

Create a custom rule — detect SSH from unexpected subnet
<!-- /var/ossec/etc/rules/local_rules.xml -->
<group name="custom_ssh,">
  <rule id="100001" level="10">
    <if_sid>5715</if_sid>
    <srcip>!10.50.1.0/24</srcip>
    <description>SSH login from outside trusted subnet</description>
    <group>authentication_success,</group>
  </rule>
</group>
Test a rule against a log sample — verify before deploying
echo 'Apr 10 09:15:22 web-01 sshd[12345]: Accepted publickey for evan from 192.168.1.50 port 52341 ssh2' \
  | /var/ossec/bin/wazuh-logtest
Validate rule XML syntax before restarting
/var/ossec/bin/wazuh-analysisd -t
Restart manager to load rule changes
systemctl restart wazuh-manager
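To change a stock rule rather than add a new one, redefine it with overwrite="yes". A sketch that raises the level of rule 5715, written to a temp file so it runs anywhere; the real target is /var/ossec/etc/rules/local_rules.xml, followed by a manager restart. The restated conditions are copied from the stock rule — verify them against your ruleset version:

```shell
# Sketch: raise the level of stock rule 5715 via overwrite="yes".
# An overwritten rule must restate the stock rule's conditions.
cat > /tmp/local_rules_overwrite.xml <<'EOF'
<group name="syslog,sshd,">
  <rule id="5715" level="7" overwrite="yes">
    <if_sid>5700</if_sid>
    <match>^Accepted|authenticated.$</match>
    <description>sshd: authentication success.</description>
  </rule>
</group>
EOF
grep -c 'overwrite="yes"' /tmp/local_rules_overwrite.xml
```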

Decoder Editing

Decoders extract fields from raw log lines. Custom decoders go in /var/ossec/etc/decoders/local_decoder.xml.

Custom decoder — parse a custom application log format
<!-- /var/ossec/etc/decoders/local_decoder.xml -->
<decoder name="custom_app">
  <program_name>myapp</program_name>
</decoder>

<decoder name="custom_app_login">
  <parent>custom_app</parent>
  <regex>User (\S+) logged in from (\S+)</regex>
  <order>user, srcip</order>
</decoder>
Test decoder against a log line — verify field extraction
echo 'Apr 10 09:20:00 app-01 myapp[9999]: User evan logged in from 10.50.1.100' \
  | /var/ossec/bin/wazuh-logtest
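For a quick offline check before touching the manager, sed can approximate the extraction; OS_Regex syntax differs from sed's, so this only sanity-checks the pattern shape:

```shell
# Emulate the custom_app_login extraction with sed to eyeball user/srcip.
line='Apr 10 09:20:00 app-01 myapp[9999]: User evan logged in from 10.50.1.100'
printf '%s\n' "$line" \
  | sed -n 's/.*User \([^ ]*\) logged in from \([^ ]*\)$/user=\1 srcip=\2/p'
```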

Active Response

Active response executes scripts on the agent or manager when a rule fires. Use it with caution — a false positive will block a legitimate host.

Enable active response for brute force — block IP after rule 5712 fires
<!-- /var/ossec/etc/ossec.conf -->
<active-response>
  <command>firewall-drop</command>
  <location>local</location>
  <rules_id>5712</rules_id>
  <timeout>600</timeout>
</active-response>
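The firewall-drop command referenced above is predefined in the default ossec.conf. A sketch of wiring a custom script instead; custom-notify and rule 100001 are placeholders, the script itself must exist in /var/ossec/active-response/bin/ on the agents, and the snippet is written to a temp file here rather than merged into /var/ossec/etc/ossec.conf:

```shell
# Sketch: pair a custom script with an active-response trigger.
cat > /tmp/ar_snippet.xml <<'EOF'
<command>
  <name>custom-notify</name>
  <executable>custom-notify.sh</executable>
  <timeout_allowed>no</timeout_allowed>
</command>

<active-response>
  <command>custom-notify</command>
  <location>local</location>
  <rules_id>100001</rules_id>
</active-response>
EOF
grep -c '<command>' /tmp/ar_snippet.xml   # one definition, one reference
```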
Review active response activity — every executed response is logged
grep firewall-drop /var/ossec/logs/active-responses.log
# Or check iptables directly — firewall-drop inserts plain DROP rules
iptables -nL INPUT | awk '/DROP/ {print}'

File Integrity Monitoring (Syscheck)

Configure FIM for critical directories — detect unauthorized changes
<!-- /var/ossec/etc/ossec.conf - syscheck section -->
<syscheck>
  <frequency>600</frequency>
  <directories check_all="yes" realtime="yes">/etc,/usr/bin,/usr/sbin</directories>
  <directories check_all="yes" realtime="yes">/var/ossec/etc/rules</directories>
  <ignore>/etc/mtab</ignore>
  <ignore type="sregex">.log$|.tmp$</ignore>
</syscheck>
Query FIM events for an agent via API — what files changed
curl -s -k -H "Authorization: Bearer $TOKEN" \
  "https://localhost:55000/syscheck/003?limit=20&sort=-date" \
  | jq '.data.affected_items[] | {file, type, date, md5}'
Force an immediate syscheck scan on an agent
/var/ossec/bin/agent_control -r -u 003

Vulnerability Detection

Enable the vulnerability detector in ossec.conf (syntax for Wazuh 4.7 and earlier; 4.8+ replaces this with <vulnerability-detection> and stores results in the indexer)
<vulnerability-detector>
  <enabled>yes</enabled>
  <interval>5m</interval>
  <run_on_start>yes</run_on_start>
  <provider name="canonical">
    <enabled>yes</enabled>
    <os>focal</os>
    <os>jammy</os>
    <update_interval>1h</update_interval>
  </provider>
  <provider name="nvd">
    <enabled>yes</enabled>
    <update_interval>1h</update_interval>
  </provider>
</vulnerability-detector>
Query vulnerabilities for an agent via API (pre-4.8 endpoint)
curl -s -k -H "Authorization: Bearer $TOKEN" \
  "https://localhost:55000/vulnerability/003?limit=20&sort=-cvss3_score" \
  | jq '.data.affected_items[] | {cve, severity, package: .name, version}'
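The same triage works offline. A sketch ranking canned items with jq; field names mirror the pre-4.8 response shape, values are invented:

```shell
# Rank canned vulnerability items by CVSS3 score, highest first.
cat <<'EOF' | jq -r 'sort_by(-.cvss3_score)[] | "\(.cve) \(.severity) \(.name)"'
[{"cve":"CVE-2024-0001","severity":"High","name":"openssl","cvss3_score":8.1},
 {"cve":"CVE-2024-0002","severity":"Medium","name":"curl","cvss3_score":5.3}]
EOF
```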

Manager Health and Logs

Check all Wazuh manager service statuses
systemctl status wazuh-manager
Watch the manager log for errors in real time
tail -f /var/ossec/logs/ossec.log | awk '/ERROR|WARNING/ {print}'
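For a summary instead of a live tail, count severities across the whole log. A sketch with invented sample lines standing in for /var/ossec/logs/ossec.log:

```shell
# Count ERROR and WARNING lines; in production read ossec.log instead of
# the illustrative heredoc below (message text is made up).
awk '/ERROR/ {e++} /WARNING/ {w++} END {printf "errors=%d warnings=%d\n", e, w}' <<'EOF'
2025/04/10 09:15:00 wazuh-analysisd: ERROR: sample error line
2025/04/10 09:15:02 wazuh-remoted: WARNING: sample warning line
EOF
```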
Check disk usage of Wazuh data — alerts can fill disks fast
du -sh /var/ossec/logs/alerts/ /var/ossec/queue/ /var/ossec/stats/
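A cron-friendly sketch of that check; WAZUH_DIR and the 90% threshold are arbitrary choices for illustration:

```shell
# Print usage of the partition holding Wazuh data and warn above a threshold.
# WAZUH_DIR defaults to the current directory so the sketch runs anywhere;
# set it to /var/ossec on a real manager.
path="${WAZUH_DIR:-.}"
use=$(df -P "$path" | awk 'NR==2 {gsub(/%/, "", $5); print $5}')
echo "usage=${use}%"
if [ "${use:-0}" -ge 90 ]; then
  echo "WARN: prune old alert logs or expand the disk"
fi
```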
Verify cluster node connectivity — distributed deployments
/var/ossec/bin/cluster_control -l