Defensive Shell Patterns
Overview
Defensive shell scripting prevents cascading failures and provides meaningful error handling.
Key principles:
- Redirect errors to capture access denials
- Use || to handle failures gracefully
- Pipe stderr to files for analysis
- Check for existence before accessing
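Taken together, the four principles look like this in practice — a minimal sketch using throwaway temp files rather than real paths:

```shell
# Minimal sketch of the four principles; all paths are throwaway temp
# files created for the demo, not paths this guide defines.
config=$(mktemp)            # stand-in for a config file
errlog=$(mktemp)            # stderr will be redirected here for analysis
echo "setting=1" > "$config"

# Check for existence (and readability) before accessing
[ -r "$config" ] || echo "cannot read $config"

# Redirect errors to a file, and handle failure gracefully with ||
cat "$config" 2>>"$errlog" || echo "read failed; see $errlog"

rm -f "$config" "$errlog"
```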
Essential Operators
The OR Operator ||
cat /etc/shadow || echo "Access denied"
Exit code explanation:
- Exit 0 = success (the command worked)
- Exit non-zero = failure (the command failed)
- || runs the right side ONLY if the left side failed (non-zero exit)
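The special parameter $? holds the exit code of the most recent command, which makes these rules easy to verify at the prompt:

```shell
true
echo "true exited with: $?"    # → true exited with: 0

false
echo "false exited with: $?"   # → false exited with: 1

# A failing external command also reports non-zero (the exact value varies)
ls /nonexistent-demo-path 2>/dev/null
echo "ls exited with: $?"
```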
PRACTICAL EXAMPLES
Check if command exists:
which nmap || echo "nmap not installed"
command -v jq &>/dev/null || echo "jq not found"
Check if file exists and is readable:
cat /etc/ise/config.yaml || echo "ISE config not found"
Check service status:
systemctl is-active nginx || echo "nginx is not running"
Check network connectivity:
ping -c 1 192.168.1.1 &>/dev/null || echo "Gateway unreachable"
Fallback chain (try multiple options):
cat file.txt || cat file.bak || cat file.default || echo "No file found"
Installation fallback:
which tree || sudo dnf install -y tree
Service restart with notification:
systemctl restart nginx || {
echo "Failed to restart nginx"
systemctl status nginx
exit 1
}
The AND Operator &&
mkdir -p /tmp/work && cd /tmp/work
&& runs the right side ONLY if left side succeeded (exit 0)
PRACTICAL EXAMPLES
Chain dependent operations:
cd /var/log && grep "ERROR" syslog && echo "Errors found!"
Only proceed if directory exists:
[ -d "/etc/ise" ] && ls -la /etc/ise/
Only echo success if ping works:
ping -c 1 8.8.8.8 &>/dev/null && echo "Internet connectivity OK"
Create directory and move into it:
mkdir -p "$PROJECT_DIR" && cd "$PROJECT_DIR" && git init
Load secrets before using them:
load-secrets lab network && ssh admin@switch01
Verify before destructive operation:
[ -f "$backup_file" ] && rm "$original_file"
Database operations (only continue if connected):
psql "$DATABASE_URL" -c "SELECT 1" &>/dev/null && \
echo "Database connection verified"
Multi-step deployment:
git pull && npm install && npm run build && npm run deploy
Combining && and ||
ping -c 1 8.8.8.8 &>/dev/null && echo "✅ Online" || echo "❌ Offline"
ADVANCED PATTERNS
Service check with status reporting:
systemctl is-active nginx &>/dev/null && \
echo "✅ nginx is running" || \
echo "❌ nginx is stopped"
File existence check with action:
[ -f "/etc/ise/config.yaml" ] && \
echo "ISE config found" || \
echo "ISE config missing - creating default"
Command existence check with installation:
command -v yq &>/dev/null && \
echo "yq already installed" || \
sudo dnf install -y yq
Database connectivity test:
psql "$DATABASE_URL" -c "SELECT 1" &>/dev/null && \
echo "✅ Database connected" || \
{ echo "❌ Database connection failed"; exit 1; }
SSH connectivity test with timeout:
timeout 5 ssh -o ConnectTimeout=3 admin@switch01 "exit" 2>/dev/null && \
echo "✅ SSH to switch01 OK" || \
echo "❌ SSH to switch01 failed"
IMPORTANT: The && || Gotcha
WRONG - If success_action fails, failure_action also runs!
command && echo "success" || echo "failure"
If 'echo "success"' somehow fails, "failure" prints too!
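You can demonstrate the gotcha in one line — the command succeeds, but the success branch itself fails, so both messages print:

```shell
# 'true' succeeds, so the success block runs; the trailing 'false' makes
# the block exit non-zero -- so the || branch fires as well.
true && { echo "success"; false; } || echo "failure also runs!"
# → success
# → failure also runs!
```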
CORRECT - Use braces for multi-statement blocks:
command && {
echo "success"
do_something
} || {
echo "failure"
handle_error
}
EVEN BETTER - Use proper if/then for complex logic:
if command; then
echo "success"
do_something
else
echo "failure"
handle_error
fi
The Existence Check Pattern
CHECK BEFORE YOU ACT
Check if file exists:
[ -f "/path/to/file" ] && cat /path/to/file || echo "File not found"
Check if directory exists:
[ -d "/path/to/dir" ] && ls /path/to/dir || echo "Directory not found"
Check if command exists:
command -v docker &>/dev/null && docker ps || echo "Docker not installed"
Check if variable is set:
[ -n "$DATABASE_URL" ] && echo "DB configured" || echo "DB not configured"
Check if variable is empty:
[ -z "$UNSET_VAR" ] && echo "Variable is empty/unset"
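A close companion to the -n/-z checks is parameter expansion with a default, which supplies a fallback inline instead of branching:

```shell
# ${VAR:-fallback} expands to the fallback when VAR is unset or empty;
# VAR itself is not modified (${VAR:=fallback} would also assign it).
unset TIMEOUT
echo "timeout: ${TIMEOUT:-30}"   # → timeout: 30

TIMEOUT=5
echo "timeout: ${TIMEOUT:-30}"   # → timeout: 5
```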
FILE TEST OPERATORS REFERENCE
[ -f FILE ] - True if FILE exists and is a regular file
[ -d FILE ] - True if FILE exists and is a directory
[ -e FILE ] - True if FILE exists (any type)
[ -r FILE ] - True if FILE exists and is readable
[ -w FILE ] - True if FILE exists and is writable
[ -x FILE ] - True if FILE exists and is executable
[ -s FILE ] - True if FILE exists and has size > 0
[ -L FILE ] - True if FILE exists and is a symbolic link
[ -n STR ]  - True if STR has length > 0
[ -z STR ]  - True if STR has length = 0
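A quick self-contained exercise of several operators from the reference, run against a throwaway file:

```shell
f=$(mktemp)                  # regular file, mode 0600, initially empty
[ -e "$f" ] && echo "exists"
[ -f "$f" ] && echo "regular file"
[ -s "$f" ] || echo "empty (zero bytes)"

echo "data" > "$f"
[ -s "$f" ] && echo "now non-empty"
[ -x "$f" ] || echo "not executable"

rm -f "$f"
```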
Production example: ISE config validation
validate_ise_config() {
local config_file="$1"
[ -f "$config_file" ] || {
echo "❌ Config file not found: $config_file"
return 1
}
[ -r "$config_file" ] || {
echo "❌ Config file not readable: $config_file"
return 1
}
yq eval '.' "$config_file" &>/dev/null || {
echo "❌ Invalid YAML syntax: $config_file"
return 1
}
echo "✅ Config validated: $config_file"
return 0
}
The Fallback Chain Pattern
TRY MULTIPLE OPTIONS IN ORDER
Find an editor (try in order of preference):
EDITOR=$(command -v nvim || command -v vim || command -v vi || command -v nano)
echo "Using editor: $EDITOR"
Find a file in multiple locations:
config=$(
cat ~/.config/myapp/config.yaml 2>/dev/null || \
cat /etc/myapp/config.yaml 2>/dev/null || \
cat /usr/share/myapp/config.default.yaml 2>/dev/null || \
echo "No config found"
)
Try multiple package managers:
install_package() {
local pkg="$1"
command -v dnf &>/dev/null && sudo dnf install -y "$pkg" && return 0
command -v apt &>/dev/null && sudo apt install -y "$pkg" && return 0
command -v pacman &>/dev/null && sudo pacman -S --noconfirm "$pkg" && return 0
command -v brew &>/dev/null && brew install "$pkg" && return 0
echo "❌ No supported package manager found"
return 1
}
Try multiple connection methods (ISE example):
connect_to_ise() {
local ise_host="$1"
# Try HTTPS first
curl -sk "https://${ise_host}:9060/ers/config/endpoint" \
-H "Authorization: Basic $ISE_AUTH" &>/dev/null && {
echo "✅ Connected via HTTPS:9060"
return 0
}
# Try alternate port
curl -sk "https://${ise_host}:443/ers/config/endpoint" \
-H "Authorization: Basic $ISE_AUTH" &>/dev/null && {
echo "✅ Connected via HTTPS:443"
return 0
}
echo "❌ Cannot connect to ISE: $ise_host"
return 1
}
Stream Redirection Mastery
Understanding File Descriptors
LINUX I/O STREAMS DEEP DIVE

Every process has three standard streams:

    ┌──────────┐
    │ Keyboard │ ──▶ FD 0 (stdin) ──▶ ┌─────────┐
    └──────────┘                      │         │
                                      │ Process │ ──▶ FD 1 (stdout)
                                      │         │
                                      └─────────┘ ──▶ FD 2 (stderr)

    By default, FD 1 and FD 2 both go to the terminal.

File descriptor numbers:
- 0 = stdin (standard input) - where the process reads from
- 1 = stdout (standard output) - normal output
- 2 = stderr (standard error) - error messages
- 3+ = additional file descriptors (files, sockets, pipes)
Redirection Operators Complete Reference
STDOUT REDIRECTION (FD 1)
Redirect stdout to file (overwrite):
command > file.txt
command 1> file.txt # Explicit form (same thing)
Redirect stdout to file (append):
command >> file.txt
command 1>> file.txt # Explicit form
STDERR REDIRECTION (FD 2)
Redirect stderr to file (overwrite):
command 2> errors.txt
Redirect stderr to file (append):
command 2>> errors.txt
Redirect stderr to /dev/null (silence errors):
command 2>/dev/null
COMBINING STDOUT AND STDERR
Modern syntax: redirect both to same file
command &> output.txt # Bash 4+
command &>> output.txt # Append version
Traditional syntax: redirect stderr to stdout, then to file
command > output.txt 2>&1
WARNING: ORDER MATTERS! This is WRONG:
command 2>&1 > output.txt # stderr still goes to terminal!
Why? The shell processes redirections left to right:
- 2>&1 = "stderr goes where stdout currently points" (the terminal)
- > output.txt = "now stdout goes to the file" (but stderr was already redirected)
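The difference is observable: with the wrong order, stderr follows the ORIGINAL stdout (here, a command substitution) instead of the file:

```shell
tmp=$(mktemp)
# Wrong order: 2>&1 happens first, binding stderr to the capture pipe;
# only stdout is then redirected into the file.
leaked=$( { echo "out"; echo "err" >&2; } 2>&1 > "$tmp" )
echo "file got: $(cat "$tmp")"   # → file got: out
echo "leaked:   $leaked"         # → leaked:   err
rm -f "$tmp"
```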
SEPARATE FILES FOR STDOUT AND STDERR
Different files for output and errors:
command > output.txt 2> errors.txt
Append to different files:
command >> output.txt 2>> errors.txt
The 2>&1 Deep Dive
UNDERSTANDING 2>&1
2>&1 means: "redirect file descriptor 2 to wherever file descriptor 1 is going"
The & is CRITICAL - without it, 2>1 would create a file named "1"
WRONG - Creates a file named "1":
command 2>1
CORRECT - Redirects stderr to stdout:
command 2>&1
COMMON USE CASES
- Capture both stdout and stderr to a variable:
  output=$(command 2>&1)
- Search through both output streams:
  command 2>&1 | grep "ERROR\|WARNING"
- Log everything to a file while still seeing it:
  command 2>&1 | tee logfile.txt
- Count total lines of output (including errors):
  command 2>&1 | wc -l
- Capture errors for analysis (only capture stderr):
  errors=$(command 2>&1 >/dev/null)
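The last pattern is worth proving to yourself — stdout really is discarded and only stderr is captured:

```shell
# 2>&1 first points stderr at the capture pipe; >/dev/null then moves
# stdout (and only stdout) to the bit bucket.
errors=$(
  { echo "normal output"; echo "an error" >&2; } 2>&1 >/dev/null
)
echo "captured: $errors"   # → captured: an error
```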
Permission Denied Analysis
Find directories you can’t access, save the denials for analysis:
find / -type d 2>&1 | grep "Permission denied" > /tmp/noaccess.txt
Or more elegantly:
find / -type d 2>/tmp/noaccess.txt
Filter out permission denied messages:
find / -type f -name "*.conf" 2>&1 | grep -v "Permission denied"
Save accessible files, log denied ones separately:
find / -name "*.conf" > /tmp/found.txt 2> /tmp/denied.txt
Production Logging Patterns
TIMESTAMPED LOGGING
timestamp=$(date +%Y%m%d-%H%M%S)
Full logging with timestamps:
{
echo "=== Started at $(date) ==="
command
echo "=== Finished at $(date) ==="
} > "output-${timestamp}.log" 2>&1
Or use tee to see and log simultaneously:
command 2>&1 | while IFS= read -r line; do
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $line"
done | tee "output-${timestamp}.log"
PROFESSIONAL LOGGING FUNCTION
Define once, use everywhere:
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" | tee -a "$LOG_FILE"
}
error() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] ERROR: $*" | tee -a "$LOG_FILE" >&2
}
warn() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] WARN: $*" | tee -a "$LOG_FILE" >&2
}
debug() {
[ "${DEBUG:-0}" = "1" ] && \
echo "[$(date '+%Y-%m-%d %H:%M:%S')] DEBUG: $*" | tee -a "$LOG_FILE"
}
Usage:
LOG_FILE="/var/log/myapp-$(date +%Y%m%d).log"
log "Starting process..."
command 2>&1 | tee -a "$LOG_FILE" || error "Command failed"
log "Process complete"
System Reconnaissance
Methodical System Enumeration
Create a working directory for results:
recon_dir="/tmp/recon-$(date +%Y%m%d-%H%M%S)"
mkdir -p "$recon_dir"
1. FIND WHAT YOU CAN’T ACCESS (Map the boundaries)
Find all directories, log access denials:
find / -type d 2>&1 | \
tee >(grep -v "Permission denied" > "${recon_dir}/accessible-dirs.txt") | \
grep "Permission denied" > "${recon_dir}/denied-dirs.txt"
Simpler version:
find / -type d > "${recon_dir}/dirs.txt" 2> "${recon_dir}/denied.txt"
Count what you can vs can’t access:
accessible=$(wc -l < "${recon_dir}/dirs.txt")
denied=$(wc -l < "${recon_dir}/denied.txt")
echo "Accessible: $accessible | Denied: $denied"
2. FIND INTERESTING FILES
Configuration files:
find /etc -name "*.conf" -readable 2>/dev/null > "${recon_dir}/configs.txt"
Hidden files in common locations:
find /tmp /var/tmp /home -name ".*" -type f 2>/dev/null > "${recon_dir}/hidden.txt"
Recently modified files (potential backdoors/changes):
find / -type f -mtime -7 2>/dev/null | \
grep -v "/proc\|/sys\|/dev" > "${recon_dir}/recent.txt"
SUID/SGID binaries (privilege escalation vectors):
find / -type f \( -perm -4000 -o -perm -2000 \) 2>/dev/null > "${recon_dir}/suid.txt"
World-writable files:
find / -type f -perm -002 2>/dev/null | \
grep -v "/proc\|/sys" > "${recon_dir}/world-writable.txt"
3. NETWORK RECONNAISSANCE
Active connections:
ss -tupn > "${recon_dir}/connections.txt" 2>&1
netstat -tupln >> "${recon_dir}/connections.txt" 2>&1
Listening services:
ss -tlnp > "${recon_dir}/listeners.txt" 2>&1
ARP table (local network mapping):
ip neigh show > "${recon_dir}/arp.txt" 2>&1
arp -n >> "${recon_dir}/arp.txt" 2>&1
Routing table:
ip route show > "${recon_dir}/routes.txt" 2>&1
4. USER AND PROCESS ENUMERATION
Logged in users:
who > "${recon_dir}/users.txt" 2>&1
w >> "${recon_dir}/users.txt" 2>&1
All processes with full details:
ps auxwww > "${recon_dir}/processes.txt" 2>&1
Process tree:
pstree -p > "${recon_dir}/process-tree.txt" 2>&1
Cron jobs (persistence mechanism):
{
echo "=== System crontabs ==="
cat /etc/crontab 2>/dev/null
echo ""
echo "=== Cron.d ==="
ls -la /etc/cron.d/ 2>/dev/null
cat /etc/cron.d/* 2>/dev/null
echo ""
echo "=== User crontabs ==="
for user in $(cut -f1 -d: /etc/passwd); do
crontab -u "$user" -l 2>/dev/null && echo "--- $user ---"
done
} > "${recon_dir}/cron.txt"
5. PACKAGE AND GENERATE REPORT
echo "Reconnaissance complete. Results in: $recon_dir"
ls -lh "$recon_dir"
Create summary:
{
echo "=== RECON SUMMARY ==="
echo "Timestamp: $(date)"
echo "Hostname: $(hostname)"
echo "User: $(whoami)"
echo ""
echo "Files found:"
wc -l "${recon_dir}"/*.txt
} > "${recon_dir}/SUMMARY.txt"
Archive for exfiltration (if needed):
tar czf "${recon_dir}.tar.gz" "$recon_dir" 2>/dev/null
SSH Key and Credential Hunting
FIND SSH KEYS AND CREDENTIALS (Ethical use only!)
Find SSH private keys:
find / -name "id_rsa" -o -name "id_ed25519" -o -name "*.pem" 2>/dev/null | \
while read -r keyfile; do
[ -r "$keyfile" ] && echo "Readable: $keyfile"
done
Find SSH config files:
find / -name "ssh_config" -o -name "sshd_config" 2>/dev/null
Find authorized_keys (who has access?):
find / -name "authorized_keys" 2>/dev/null
Find .netrc files (FTP/HTTP credentials):
find / -name ".netrc" 2>/dev/null
Find password files:
find / -name "*.password" -o -name "*.passwd" -o -name "credentials*" 2>/dev/null
Find environment files with potential secrets:
find / -name ".env" -o -name ".env.*" 2>/dev/null | \
while read -r envfile; do
[ -r "$envfile" ] && echo "Readable: $envfile"
done
Search for hardcoded passwords in scripts:
grep -rl "password\|passwd\|secret\|api_key" /home /opt /var/www 2>/dev/null
ISE Automation Shell Integration
Shell Patterns for Your netapi-tui Project
ISE API INTERACTION WITH DEFENSIVE PATTERNS
Load ISE credentials safely:
load_ise_credentials() {
# Try encrypted secrets first
if command -v load-secrets &>/dev/null; then
load-secrets lab network || {
error "Failed to load secrets"
return 1
}
fi
# Verify required variables
[ -n "${ISE_HOST:-}" ] || { error "ISE_HOST not set"; return 1; }
[ -n "${ISE_USER:-}" ] || { error "ISE_USER not set"; return 1; }
[ -n "${ISE_PASS:-}" ] || { error "ISE_PASS not set"; return 1; }
# Create auth token
ISE_AUTH=$(echo -n "${ISE_USER}:${ISE_PASS}" | base64)
export ISE_AUTH
log "ISE credentials loaded for: $ISE_HOST"
return 0
}
Test ISE connectivity:
test_ise_connection() {
local response
local http_code
response=$(curl -sk -w "%{http_code}" -o /tmp/ise-test.json \
"https://${ISE_HOST}:9060/ers/config/endpoint?size=1" \
-H "Authorization: Basic ${ISE_AUTH}" \
-H "Accept: application/json" 2>&1)
http_code="${response: -3}"
case "$http_code" in
200) echo "✅ ISE connection OK"; return 0 ;;
401) error "Authentication failed"; return 1 ;;
403) error "Access forbidden"; return 1 ;;
*) error "ISE returned HTTP $http_code"; return 1 ;;
esac
}
Get endpoint by MAC address:
get_endpoint() {
local mac="$1"
local output_file="${2:-/dev/stdout}"
[ -n "$mac" ] || { error "MAC address required"; return 1; }
curl -sk \
"https://${ISE_HOST}:9060/ers/config/endpoint?filter=mac.EQ.${mac}" \
-H "Authorization: Basic ${ISE_AUTH}" \
-H "Accept: application/json" \
2>/dev/null | jq '.' > "$output_file" || {
error "Failed to get endpoint: $mac"
return 1
}
}
Bulk endpoint export:
export_all_endpoints() {
local output_dir="${1:-./ise-export-$(date +%Y%m%d)}"
local page_size=100
local page=1
local total=0
mkdir -p "$output_dir"
log "Exporting ISE endpoints to: $output_dir"
while true; do
local response_file="${output_dir}/page-${page}.json"
curl -sk \
"https://${ISE_HOST}:9060/ers/config/endpoint?size=${page_size}&page=${page}" \
-H "Authorization: Basic ${ISE_AUTH}" \
-H "Accept: application/json" \
2>/dev/null > "$response_file"
# Check if this page returned results. Note that .SearchResult.total is
# the collection-wide total and stays the same on every page, so test
# the length of this page's resources array instead.
local count
count=$(jq '.SearchResult.resources | length' "$response_file" 2>/dev/null || echo 0)
[ "${count:-0}" -eq 0 ] && break
total=$((total + count))
log "Page $page: Retrieved $total endpoints so far..."
page=$((page + 1))
# Rate limiting
sleep 0.5
done
log "Export complete: $total endpoints"
}
ISE SESSION MANAGEMENT
Clear auth session by MAC:
clear_session() {
local mac="$1"
local switch_ip="${2:-}"
[ -n "$mac" ] || { error "MAC address required"; return 1; }
if [ -n "$switch_ip" ]; then
# Clear via switch (CoA)
log "Sending CoA disconnect to $switch_ip for MAC $mac"
# Your netapi-tui would handle this
else
# Clear via ISE API
curl -sk -X DELETE \
"https://${ISE_HOST}:9060/admin/API/mnt/Session/MACAddress/${mac}" \
-H "Authorization: Basic ${ISE_AUTH}" \
2>&1 || {
error "Failed to clear session: $mac"
return 1
}
fi
log "Session cleared: $mac"
}
Reauth endpoint:
reauth_endpoint() {
local mac="$1"
[ -n "$mac" ] || { error "MAC address required"; return 1; }
# Clear then wait for re-authentication
clear_session "$mac" && {
log "Waiting for re-authentication..."
sleep 5
# Verify new session
local session
session=$(curl -sk \
"https://${ISE_HOST}:9060/admin/API/mnt/Session/MACAddress/${mac}" \
-H "Authorization: Basic ${ISE_AUTH}" 2>/dev/null)
echo "$session" | jq '.sessionState' 2>/dev/null && return 0
}
return 1
}
YAML Task Runner Integration
SHELL WRAPPER FOR YOUR PYTHON TASK RUNNER
Run ISE task with logging:
run_task() {
local task_file="$1"
local dry_run="${2:-false}"
[ -f "$task_file" ] || { error "Task file not found: $task_file"; return 1; }
local timestamp
timestamp=$(date +%Y%m%d-%H%M%S)
local log_file="logs/task-${timestamp}.log"
local error_file="logs/task-${timestamp}.errors"
mkdir -p logs
log "Running task: $task_file"
local cmd="uv run python3 runner.py \"$task_file\""
[ "$dry_run" = "true" ] && cmd="$cmd --dry-run"
if eval "$cmd" > "$log_file" 2> "$error_file"; then
log "✅ Task completed successfully"
cat "$log_file"
return 0
else
error "❌ Task failed"
cat "$error_file" >&2
return 1
fi
}
Run task with YAML output for further processing:
run_task_yaml() {
local task_file="$1"
[ -f "$task_file" ] || { error "Task file not found: $task_file"; return 1; }
uv run python3 runner.py "$task_file" --yaml 2>/dev/null | yq eval '.' -
}
Batch run multiple tasks:
run_tasks() {
local task_dir="${1:-tasks/deploy}"
local dry_run="${2:-true}"
[ -d "$task_dir" ] || { error "Task directory not found: $task_dir"; return 1; }
local success=0
local failed=0
for task in "$task_dir"/*.yaml; do
[ -f "$task" ] || continue
log "Running: $task"
if run_task "$task" "$dry_run"; then
((success++))
else
((failed++))
fi
done
log "Results: $success succeeded, $failed failed"
[ "$failed" -eq 0 ]
}
Validate all YAML tasks:
validate_tasks() {
local task_dir="${1:-tasks}"
local invalid=0
# Use process substitution rather than a pipe: a piped while loop runs
# in a subshell, so the invalid counter would never reach the parent.
while read -r task; do
if ! yq eval '.' "$task" &>/dev/null; then
error "Invalid YAML: $task"
((invalid++))
fi
done < <(find "$task_dir" -name "*.yaml" -o -name "*.yml")
[ "$invalid" -eq 0 ] && log "✅ All tasks validated" || error "❌ $invalid invalid tasks"
}
Fish & Zsh Compatibility
Cross-Shell Function Patterns
BASH/ZSH VERSION (put in .bashrc or .zshrc)
Common exclusion pattern for tree commands:
TREE_IGNORE='__pycache__|*.pyc|*.log|node_modules|target|.git|.venv|venv|env|.idea|.vscode|*.bak|tmp|temp|cache|credentials|secrets|certs|keys'
Tree with YAML output:
ytree() {
/usr/bin/tree -J -I "$TREE_IGNORE" "$@" | yq -P
}
Tree with JSON output (pretty):
jtree() {
/usr/bin/tree -J -I "$TREE_IGNORE" "$@" | jq .
}
Tree with JSON output (compact):
jtreec() {
/usr/bin/tree -J -I "$TREE_IGNORE" "$@" | jq -c .
}
Directories only:
dtree() {
/usr/bin/tree -d -I "$TREE_IGNORE" "$@"
}
Flat list of file names:
ftree() {
/usr/bin/tree -J -I "$TREE_IGNORE" "$@" | jq -r '.. | objects | select(.type == "file") | .name'
}
Flat list of full paths:
ptree() {
/usr/bin/tree -fI "$TREE_IGNORE" --noreport "$@"
}
Size-aware tree:
stree() {
/usr/bin/tree -shI "$TREE_IGNORE" "$@"
}
YAML files only:
tasktree() {
/usr/bin/tree -J -P '*.yaml|*.yml' --prune -I "$TREE_IGNORE" "$@" | yq -P
}
FISH VERSION (put in ~/.config/fish/config.fish)
Common exclusion pattern:
set -g TREE_IGNORE '__pycache__|*.pyc|*.log|node_modules|target|.git|.venv|venv|env|.idea|.vscode|*.bak|tmp|temp|cache|credentials|secrets|certs|keys'
Tree with YAML output:
function ytree
/usr/bin/tree -J -I "$TREE_IGNORE" $argv | yq -P
end
Tree with JSON output (pretty):
function jtree
/usr/bin/tree -J -I "$TREE_IGNORE" $argv | jq .
end
Tree with JSON output (compact):
function jtreec
/usr/bin/tree -J -I "$TREE_IGNORE" $argv | jq -c .
end
Directories only:
function dtree
/usr/bin/tree -d -I "$TREE_IGNORE" $argv
end
Flat list of file names:
function ftree
/usr/bin/tree -J -I "$TREE_IGNORE" $argv | jq -r '.. | objects | select(.type == "file") | .name'
end
Flat list of full paths:
function ptree
/usr/bin/tree -fI "$TREE_IGNORE" --noreport $argv
end
Size-aware tree:
function stree
/usr/bin/tree -shI "$TREE_IGNORE" $argv
end
YAML files only:
function tasktree
/usr/bin/tree -J -P '*.yaml|*.yml' --prune -I "$TREE_IGNORE" $argv | yq -P
end
Syntax Differences Reference
╔══════════════════════════════════════════════════════════════════════════════╗
║ BASH/ZSH vs FISH SYNTAX ║
╠══════════════════════════════════════════════════════════════════════════════╣
║ ║
║ Feature │ Bash/Zsh │ Fish ║
║ ─────────────────────────────────────────────────────────────────────────── ║
║ Export variable │ export FOO=bar │ set -gx FOO bar ║
║ Local variable │ local foo=bar │ set -l foo bar ║
║ Function │ func() { ... } │ function func; ...; end ║
║ Arguments │ $1 $2 $@ │ $argv[1] $argv ║
║ Logical AND │ cmd1 && cmd2 │ cmd1; and cmd2 ║
║ Logical OR │ cmd1 || cmd2 │ cmd1; or cmd2 ║
║ Command sub │ $(cmd) or `cmd` │ (cmd) ║
║ Test condition │ [ -f file ] │ test -f file ║
║ If statement │ if [ cond ]; then │ if test cond ║
║ │ ... │ ... ║
║ │ fi │ end ║
║ For loop │ for i in list; do │ for i in list ║
║ │ ... │ ... ║
║ │ done │ end ║
║ While loop │ while cond; do │ while cond ║
║ │ ... │ ... ║
║ │ done │ end ║
║ Escape alias │ \command │ command cmd ║
║ String concat │ "${a}${b}" │ "$a$b" or {$a}{$b} ║
║ Array │ arr=(a b c) │ set arr a b c ║
║ Array access │ ${arr[0]} │ $arr[1] (1-indexed!) ║
║ Array all │ "${arr[@]}" │ $arr ║
║ ║
╚══════════════════════════════════════════════════════════════════════════════╝
Fish-Specific Patterns
FISH EQUIVALENTS OF COMMON BASH PATTERNS
Bash: command && echo "success" || echo "failure"
Fish:
command; and echo "success"; or echo "failure"
Bash: [ -f file ] && cat file
Fish:
test -f file; and cat file
Bash: which cmd || echo "not found"
Fish:
which cmd; or echo "not found"
Bash: export PATH="$HOME/bin:$PATH"
Fish:
fish_add_path $HOME/bin
Bash: if [ -n "$var" ]; then echo "set"; fi
Fish:
if test -n "$var"
echo "set"
end
Bash: for file in *.txt; do echo "$file"; done
Fish:
for file in *.txt
echo $file
end
Bash: while read line; do echo "$line"; done < file.txt
Fish:
while read line
echo $line
end < file.txt
FISH DEFENSIVE PATTERNS
Check and install (Fish version):
function ensure_installed
set -l pkg $argv[1]
command -v $pkg &>/dev/null; or begin
echo "Installing $pkg..."
sudo dnf install -y $pkg
end
end
Fallback chain (Fish version):
function find_editor
command -v nvim; or command -v vim; or command -v vi; or command -v nano
end
Service check (Fish version):
function check_service
set -l svc $argv[1]
systemctl is-active $svc &>/dev/null
and echo "✅ $svc is running"
or echo "❌ $svc is stopped"
end
Home Lab Infrastructure
Lab Environment Shell Functions
HOME LAB MANAGEMENT FUNCTIONS
Load lab credentials:
lab() {
load-secrets lab network && {
echo "✅ Lab credentials loaded"
echo " ISE: $ISE_HOST"
echo " WLC: $WLC_HOST"
echo " Switches: $SWITCH_COUNT devices"
}
}
Quick SSH to lab devices:
switch() {
local device="${1:-switch01}"
load-secrets lab network 2>/dev/null || true
ssh "admin@${device}.lab.local" 2>&1
}
Check all lab device connectivity:
lab-status() {
local devices=(
"192.168.1.1:pfsense"
"192.168.1.10:ise"
"192.168.1.11:switch01"
"192.168.1.12:switch02"
"192.168.1.20:wlc"
"192.168.1.100:proxmox"
)
echo "=== Lab Status ==="
for entry in "${devices[@]}"; do
local ip="${entry%%:*}"
local name="${entry#*:}"
ping -c 1 -W 2 "$ip" &>/dev/null && \
echo "✅ $name ($ip)" || \
echo "❌ $name ($ip)"
done
}
Quick port check:
port-check() {
local host="$1"
local port="$2"
[ -n "$host" ] && [ -n "$port" ] || {
echo "Usage: port-check <host> <port>"
return 1
}
timeout 3 bash -c "cat < /dev/null > /dev/tcp/${host}/${port}" 2>/dev/null && \
echo "✅ ${host}:${port} is open" || \
echo "❌ ${host}:${port} is closed/filtered"
}
Multi-port scan:
scan-ports() {
local host="$1"
shift
# "${@:-22 80 443 9060 8443}" would collapse the defaults into ONE
# array element; build the default port list explicitly instead.
local ports=("$@")
[ ${#ports[@]} -gt 0 ] || ports=(22 80 443 9060 8443)
[ -n "$host" ] || { echo "Usage: scan-ports <host> [ports...]"; return 1; }
echo "Scanning $host..."
for port in "${ports[@]}"; do
port-check "$host" "$port"
done
}
PFSENSE MANAGEMENT
pfSense status:
pfsense-status() {
local pf_host="${PFSENSE_HOST:-192.168.1.1}"
echo "=== pfSense Status ==="
# Check web interface
curl -sk "https://${pf_host}" &>/dev/null && \
echo "✅ Web interface accessible" || \
echo "❌ Web interface down"
# Check DNS (if pfSense is DNS)
dig @"$pf_host" google.com +short &>/dev/null && \
echo "✅ DNS resolution working" || \
echo "❌ DNS not responding"
}
PROXMOX MANAGEMENT
List VMs:
proxmox-vms() {
local pve_host="${PROXMOX_HOST:-192.168.1.100}"
ssh "root@${pve_host}" "qm list" 2>/dev/null || \
echo "❌ Cannot connect to Proxmox"
}
Start VM:
proxmox-start() {
local vmid="$1"
local pve_host="${PROXMOX_HOST:-192.168.1.100}"
[ -n "$vmid" ] || { echo "Usage: proxmox-start <vmid>"; return 1; }
ssh "root@${pve_host}" "qm start $vmid" 2>&1 && \
echo "✅ VM $vmid started" || \
echo "❌ Failed to start VM $vmid"
}
Stop VM:
proxmox-stop() {
local vmid="$1"
local pve_host="${PROXMOX_HOST:-192.168.1.100}"
[ -n "$vmid" ] || { echo "Usage: proxmox-stop <vmid>"; return 1; }
ssh "root@${pve_host}" "qm shutdown $vmid" 2>&1 && \
echo "✅ VM $vmid shutting down" || \
echo "❌ Failed to stop VM $vmid"
}
Production Scenarios
Complete Deployment Script Template
#!/bin/bash
# PRODUCTION DEPLOYMENT TEMPLATE
# All defensive patterns applied!
set -euo pipefail
IFS=$'\n\t'
CONSTANTS
readonly SCRIPT_NAME="$(basename "${BASH_SOURCE[0]}")"
readonly SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
readonly TIMESTAMP="$(date +%Y%m%d-%H%M%S)"
readonly LOG_DIR="${HOME}/logs/${SCRIPT_NAME%.sh}"
readonly LOG_FILE="${LOG_DIR}/${TIMESTAMP}.log"
readonly ERROR_FILE="${LOG_DIR}/${TIMESTAMP}.errors"
LOGGING FUNCTIONS
log() {
local level="${1:-INFO}"
shift
echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$level] $*" | tee -a "$LOG_FILE"
}
info() { log "INFO" "$@"; }
warn() { log "WARN" "$@" >&2; }
error() { log "ERROR" "$@" | tee -a "$ERROR_FILE" >&2; }
success() { log "OK" "✅ $*"; }
fail() { log "FAIL" "❌ $*" | tee -a "$ERROR_FILE" >&2; }
debug() {
[ "${DEBUG:-0}" = "1" ] && log "DEBUG" "$@"
return 0
}
ERROR HANDLING
cleanup() {
local exit_code=$?
if [ $exit_code -ne 0 ]; then
fail "Script exited with code: $exit_code"
[ -s "$ERROR_FILE" ] && {
error "Errors logged to: $ERROR_FILE"
cat "$ERROR_FILE" >&2
}
else
success "Script completed successfully"
fi
# Cleanup temp files
rm -f /tmp/"${SCRIPT_NAME}".*
info "Log file: $LOG_FILE"
}
trap cleanup EXIT
handle_error() {
local line_no="$1"
local error_code="${2:-1}"
error "Error on line $line_no (exit code: $error_code)"
}
trap 'handle_error ${LINENO} $?' ERR
UTILITY FUNCTIONS
require_command() {
local cmd="$1"
command -v "$cmd" &>/dev/null || {
fail "Required command not found: $cmd"
return 1
}
debug "Command available: $cmd"
}
require_var() {
local var_name="$1"
local var_value="${!var_name:-}"
[ -n "$var_value" ] || {
fail "Required variable not set: $var_name"
return 1
}
debug "Variable set: $var_name"
}
require_file() {
local file="$1"
[ -f "$file" ] || {
fail "Required file not found: $file"
return 1
}
[ -r "$file" ] || {
fail "File not readable: $file"
return 1
}
debug "File accessible: $file"
}
confirm() {
local prompt="${1:-Continue?}"
read -r -p "$prompt [y/N] " response
[[ "$response" =~ ^[Yy]$ ]]
}
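For scripting or CI, confirm can be driven non-interactively with a here-string (read -p only displays its prompt when stdin is a terminal); the function is repeated here so the snippet stands alone:

```shell
confirm() {
  local prompt="${1:-Continue?}"
  read -r -p "$prompt [y/N] " response
  [[ "$response" =~ ^[Yy]$ ]]
}

confirm "Proceed?" <<< "y" && echo "confirmed"   # → confirmed
confirm "Proceed?" <<< "n" || echo "declined"    # → declined
```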
PREFLIGHT CHECKS
preflight() {
info "Running preflight checks..."
# Required commands
require_command curl || return 1
require_command jq || return 1
require_command yq || return 1
# Required environment
require_var "DATABASE_URL" || return 1
# Required files
require_file "${SCRIPT_DIR}/config.yaml" || return 1
success "Preflight checks passed"
}
Incident Response Script
#!/bin/bash
# INCIDENT RESPONSE DATA COLLECTION
# Run this immediately when you suspect compromise
set -euo pipefail
readonly TIMESTAMP="$(date +%Y%m%d-%H%M%S)"
readonly IR_DIR="/tmp/ir-${TIMESTAMP}"
readonly HOSTNAME="$(hostname)"
mkdir -p "$IR_DIR"
log() { echo "[$(date '+%H:%M:%S')] $*" | tee -a "${IR_DIR}/collection.log"; }
log "=== INCIDENT RESPONSE STARTED ==="
log "Host: $HOSTNAME"
log "User: $(whoami)"
log "Output: $IR_DIR"
VOLATILE DATA (Collect first - changes quickly)
log "Collecting volatile data..."
Current time (for timeline correlation):
date > "${IR_DIR}/timestamp.txt"
uptime >> "${IR_DIR}/timestamp.txt"
Network connections (CRITICAL - collect immediately):
log " Network connections..."
ss -tupna > "${IR_DIR}/network-ss.txt" 2>&1
netstat -tupna > "${IR_DIR}/network-netstat.txt" 2>&1
Running processes:
log " Running processes..."
ps auxwww > "${IR_DIR}/processes-full.txt" 2>&1
ps -eo pid,ppid,user,start,cmd --sort=-start > "${IR_DIR}/processes-timeline.txt" 2>&1
pstree -p > "${IR_DIR}/process-tree.txt" 2>&1
Logged in users:
log " Logged in users..."
who > "${IR_DIR}/users-who.txt" 2>&1
w > "${IR_DIR}/users-w.txt" 2>&1
last -F | head -100 > "${IR_DIR}/users-last.txt" 2>&1 || true
lastb -F | head -100 > "${IR_DIR}/users-lastb.txt" 2>&1 || true
Open files:
log " Open files..."
lsof > "${IR_DIR}/lsof-full.txt" 2>&1 || true
SYSTEM STATE
log "Collecting system state..."
System info:
log " System information..."
uname -a > "${IR_DIR}/system-info.txt"
cat /etc/os-release >> "${IR_DIR}/system-info.txt" 2>/dev/null || true
Memory info:
free -h > "${IR_DIR}/memory.txt"
cat /proc/meminfo >> "${IR_DIR}/memory.txt"
Disk usage:
df -h > "${IR_DIR}/disk.txt"
mount > "${IR_DIR}/mounts.txt"
Network config:
log " Network configuration..."
ip addr > "${IR_DIR}/network-interfaces.txt" 2>&1
ip route > "${IR_DIR}/network-routes.txt" 2>&1
ip neigh > "${IR_DIR}/network-arp.txt" 2>&1
cat /etc/resolv.conf > "${IR_DIR}/network-dns.txt" 2>/dev/null || true
iptables -L -n -v > "${IR_DIR}/network-firewall.txt" 2>&1 || true
PERSISTENCE MECHANISMS
log "Checking persistence mechanisms..."
Cron jobs:
log " Cron jobs..."
{
echo "=== /etc/crontab ==="
cat /etc/crontab 2>/dev/null || echo "Not readable"
echo ""
echo "=== /etc/cron.d/ ==="
ls -la /etc/cron.d/ 2>/dev/null || echo "Not accessible"
cat /etc/cron.d/* 2>/dev/null || true
echo ""
echo "=== User crontabs ==="
for user in $(cut -f1 -d: /etc/passwd); do
echo "--- $user ---"
crontab -u "$user" -l 2>&1 || echo "No crontab"
done
} > "${IR_DIR}/persistence-cron.txt"
Systemd services:
log " Systemd services..."
systemctl list-units --type=service --all > "${IR_DIR}/persistence-systemd.txt" 2>&1
systemctl list-unit-files --type=service > "${IR_DIR}/persistence-systemd-files.txt" 2>&1
Init scripts:
log " Init scripts..."
ls -la /etc/init.d/ > "${IR_DIR}/persistence-initd.txt" 2>/dev/null || true
User startup files:
log " User startup files..."
{
for user_home in /home/*; do
[ -d "$user_home" ] || continue
user=$(basename "$user_home")
echo "=== $user ==="
for rc in .bashrc .bash_profile .profile .zshrc; do
[ -f "${user_home}/${rc}" ] && {
echo "--- ${rc} ---"
cat "${user_home}/${rc}" 2>/dev/null | head -50
}
done
done
} > "${IR_DIR}/persistence-user-rc.txt"
SUSPICIOUS FILES
log "Searching for suspicious files..."
Recently modified files:
log " Recently modified files..."
find / -type f -mtime -7 2>/dev/null | \
grep -v "/proc\|/sys\|/dev\|/run" > "${IR_DIR}/files-recent.txt" || true
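`grep -v` works, but `find` can skip the pseudo-filesystems entirely with `-prune`, which never descends into them in the first place. A self-contained demo on a throwaway tree:

```shell
# Prune excl/ rather than filtering its paths out afterwards.
root=$(mktemp -d)
mkdir -p "$root/keep" "$root/excl"
touch "$root/keep/a.txt" "$root/excl/b.txt"
# -prune stops descent at the matched directory; -o -print emits the rest
find "$root" -path "$root/excl" -prune -o -type f -print   # -> only keep/a.txt
```

On a real root scan this avoids the cost of walking /proc and /sys before discarding them.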
Hidden files in unusual places:
log " Hidden files..."
find /tmp /var/tmp /dev/shm -name ".*" 2>/dev/null > "${IR_DIR}/files-hidden-tmp.txt" || true
World-writable files:
log " World-writable files..."
find / -type f -perm -002 2>/dev/null | \
grep -v "/proc\|/sys\|/dev" | head -1000 > "${IR_DIR}/files-world-writable.txt" || true
SUID/SGID binaries:
log " SUID/SGID binaries..."
find / -type f \( -perm -4000 -o -perm -2000 \) 2>/dev/null > "${IR_DIR}/files-suid-sgid.txt" || true
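A raw SUID list is most useful diffed against a known-good baseline captured from a trusted host. `comm` does this on sorted input; the file contents here are literal stand-ins for that baseline and the live scan:

```shell
# comm -13 prints lines unique to the second file: SUID binaries
# the baseline has never seen.
baseline=$(mktemp); current=$(mktemp)
printf '%s\n' /usr/bin/passwd /usr/bin/sudo | sort > "$baseline"
printf '%s\n' /usr/bin/passwd /usr/bin/sudo /tmp/.hidden/rootme | sort > "$current"
comm -13 "$baseline" "$current"   # -> /tmp/.hidden/rootme
```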
LOGS
log "Collecting logs..."
Auth logs:
log " Auth logs..."
[ -f /var/log/auth.log ] && tail -10000 /var/log/auth.log > "${IR_DIR}/logs-auth.txt" 2>/dev/null || true
[ -f /var/log/secure ] && tail -10000 /var/log/secure > "${IR_DIR}/logs-secure.txt" 2>/dev/null || true
System logs:
log " System logs..."
[ -f /var/log/syslog ] && tail -10000 /var/log/syslog > "${IR_DIR}/logs-syslog.txt" 2>/dev/null || true
[ -f /var/log/messages ] && tail -10000 /var/log/messages > "${IR_DIR}/logs-messages.txt" 2>/dev/null || true
Journalctl (if available):
command -v journalctl &>/dev/null && {
log " Journal logs..."
journalctl --since "7 days ago" > "${IR_DIR}/logs-journal.txt" 2>&1 || true
}
PACKAGE
log "Creating archive..."
Create manifest:
ls -la "$IR_DIR" > "${IR_DIR}/MANIFEST.txt"
Archive:
archive="${IR_DIR}.tar.gz"
tar czf "$archive" -C "$(dirname "$IR_DIR")" "$(basename "$IR_DIR")"
log "=== INCIDENT RESPONSE COMPLETE ==="
log "Data collected: $IR_DIR"
log "Archive: $archive"
log "Size: $(du -h "$archive" | cut -f1)"
Hash for integrity:
sha256sum "$archive" > "${archive}.sha256"
log "SHA256: $(cat "${archive}.sha256")"
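On the receiving side, `sha256sum -c` re-checks the archive against the recorded hash, so any modification in transit is caught. A round-trip demo on a stand-in file:

```shell
# Create, record, verify, tamper, verify again.
archive=$(mktemp)
echo "evidence" > "$archive"
sha256sum "$archive" > "${archive}.sha256"
sha256sum -c "${archive}.sha256" && echo "integrity OK"
echo "tampered" >> "$archive"
sha256sum -c "${archive}.sha256" 2>/dev/null || echo "hash mismatch detected"
```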
echo ""
echo "Next steps:"
echo "1. Secure the archive: scp $archive secure-server:/evidence/"
echo "2. Review network connections: less ${IR_DIR}/network-ss.txt"
echo "3. Review processes: less ${IR_DIR}/processes-timeline.txt"
echo "4. Check persistence: less ${IR_DIR}/persistence-*.txt"
Quick Reference Card
╔══════════════════════════════════════════════════════════════════════════════╗
║                     DEFENSIVE SHELL PATTERNS CHEATSHEET                      ║
╠══════════════════════════════════════════════════════════════════════════════╣
║                                                                              ║
║  OPERATORS                                                                   ║
║  ──────────────────────────────────────────────────────────────────────────  ║
║  cmd || fallback       Run fallback if cmd fails                             ║
║  cmd && next           Run next if cmd succeeds                              ║
║  cmd && ok || fail     Run ok if cmd succeeds, fail otherwise                ║
║                                                                              ║
║  REDIRECTION                                                                 ║
║  ──────────────────────────────────────────────────────────────────────────  ║
║  > file                Stdout to file (overwrite)                            ║
║  >> file               Stdout to file (append)                               ║
║  2> file               Stderr to file                                        ║
║  2>&1                  Stderr to stdout                                      ║
║  &> file               Both to file (bash 4+)                                ║
║  > file 2>&1           Both to file (traditional)                            ║
║  2>/dev/null           Silence errors                                        ║
║  &>/dev/null           Silence everything                                    ║
║  |& cmd                Pipe both streams (bash 4+)                           ║
║  2>&1 | cmd            Pipe both streams (traditional)                       ║
║                                                                              ║
║  EXISTENCE CHECKS                                                            ║
║  ──────────────────────────────────────────────────────────────────────────  ║
║  [ -f file ]           File exists                                           ║
║  [ -d dir ]            Directory exists                                      ║
║  [ -r file ]           File readable                                         ║
║  [ -n "$var" ]         Variable not empty                                    ║
║  [ -z "$var" ]         Variable empty                                        ║
║  command -v cmd        Command exists (portable)                             ║
║  which cmd             Command exists (less portable)                        ║
║                                                                              ║
║  COMMON PATTERNS                                                             ║
║  ──────────────────────────────────────────────────────────────────────────  ║
║  which cmd || install  Install if missing                                    ║
║  [ -f cfg ] && load    Load config if exists                                 ║
║  cmd &>/dev/null && ok Silent success check                                  ║
║  cmd > out 2> err      Separate output and errors                            ║
║  cmd 2>&1 | tee log    Show and log everything                               ║
║  $(cmd) not `cmd`      Modern command substitution                           ║
║  "$var" not $var       Always quote variables                                ║
║                                                                              ║
║  SCRIPT SAFETY                                                               ║
║  ──────────────────────────────────────────────────────────────────────────  ║
║  set -e                Exit on error                                         ║
║  set -u                Exit on undefined variable                            ║
║  set -o pipefail       Exit on pipe failure                                  ║
║  set -euo pipefail     All of the above (recommended)                        ║
║  trap cleanup EXIT     Run cleanup on exit                                   ║
║  readonly VAR=val      Constant variable                                     ║
║  local var=val         Function-local variable                               ║
║                                                                              ║
╚══════════════════════════════════════════════════════════════════════════════╝
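The safety flags in the card combine into a reusable script header. A minimal bash skeleton (the cleanup body is a placeholder):

```shell
#!/usr/bin/env bash
# Fail fast (-e), catch undefined variables (-u) and pipeline
# failures (pipefail), and guarantee cleanup on every exit path.
set -euo pipefail

workdir=$(mktemp -d)
cleanup() { rm -rf "$workdir"; }   # placeholder cleanup
trap cleanup EXIT                  # fires on normal exit, errors, Ctrl-C

echo "working in $workdir"
```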