# CLI Infrastructure Mastery

These tools are your hands. Master them and you can build anything.

## The Philosophy

> This is the Unix philosophy: Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface.
>
> — Doug McIlroy

Every infrastructure system speaks text. Logs, configs, APIs, certificates - all reducible to streams of bytes. The tools in this section transform those streams.
## The Toolkit

| Tool | Purpose | Mastery Level |
|---|---|---|
| `openssl` | PKI operations, certificate inspection, TLS debugging | Infrastructure security |
| `curl` | HTTP client, API automation, file transfer | API integration |
| `awk` | Column extraction, aggregation, pattern-action processing | Log analysis, data transformation |
| `sed` | Stream editing, substitution, in-place modification | Config automation |
| `xargs` | Parallel execution, argument building, pipeline multiplication | Scale and performance |
| Pipelines | Combining tools, process substitution, compound workflows | Orchestration |
## Learning Path

### Level 1: Foundation

Start here. These patterns appear daily:

```bash
# Extract columns from output (user and command)
ps aux | awk '{print $1, $11}'

# Filter and transform
grep ERROR app.log | awk '{print $1, $NF}'

# API call with response parsing
curl -s https://api.example.com/data | jq '.items[]'

# Certificate expiry check
echo | openssl s_client -connect host:443 2>/dev/null | openssl x509 -noout -dates
```
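The column-extraction pattern is easy to try without live `ps` output. A minimal sketch against canned text (the sample lines are invented):

```bash
# Two whitespace-delimited records standing in for `ps` output:
# field 1 = user, field 2 = pid, field 3 = command
sample='root 1 /sbin/init
alice 4242 /usr/bin/python3'

# awk splits on whitespace by default; print selected fields
echo "$sample" | awk '{print $1, $3}'
# → root /sbin/init
#   alice /usr/bin/python3
```

The same `$N` addressing works on any whitespace-separated stream, which is why this one-liner transfers directly from `ps` to `kubectl get`, `ls -l`, and friends.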
### Level 2: Composition

Combine tools into workflows:

```bash
# Check certs on multiple hosts, output JSON
for host in host1 host2 host3; do
  expire=$(echo | openssl s_client -connect "$host:443" 2>/dev/null | \
    openssl x509 -noout -enddate | cut -d= -f2)
  echo "{\"host\":\"$host\",\"expires\":\"$expire\"}"
done | jq -s '.'

# Parallel API calls, 4 at a time
xargs -P4 -I{} curl -s "{}" < endpoints.txt | jq -s 'add'

# Log aggregation: count ERROR lines by the fifth field
awk '/ERROR/ {errors[$5]++} END {for (e in errors) print errors[e], e}' *.log | sort -rn
```
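The log-aggregation one-liner can be exercised without real log files. A sketch against inline sample lines (the log format and service names are invented; the key assumption is that field 5 carries the service name):

```bash
# Count ERROR lines per service (field 5), most frequent first
printf '%s\n' \
  '2024-01-01 10:00:00 app ERROR auth timeout' \
  '2024-01-01 10:00:01 app ERROR auth timeout' \
  '2024-01-01 10:00:02 app ERROR billing refused' \
  '2024-01-01 10:00:03 app INFO auth ok' |
  awk '/ERROR/ {errors[$5]++} END {for (e in errors) print errors[e], e}' |
  sort -rn
# → 2 auth
#   1 billing
```

Note that awk's `for (e in errors)` iterates in unspecified order, which is why the trailing `sort -rn` is doing real work, not just cosmetics.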
### Level 3: Mastery

Build complex automation:

```bash
# Full certificate audit with alerting
audit_certs() {
  local threshold=${1:-30}
  while read -r hostport; do
    local host="${hostport%%:*}"
    local result subject issuer enddate days
    result=$(echo | timeout 5 openssl s_client -connect "$hostport" \
      -servername "$host" 2>/dev/null | \
      openssl x509 -noout -subject -issuer -enddate 2>/dev/null)
    [[ -z "$result" ]] && { echo "{\"host\":\"$hostport\",\"error\":\"unreachable\"}"; continue; }
    subject=$(echo "$result" | awk -F'CN=' '/subject/{print $2}' | cut -d',' -f1)
    issuer=$(echo "$result" | awk -F'CN=' '/issuer/{print $2}' | cut -d',' -f1)
    enddate=$(echo "$result" | grep notAfter | cut -d= -f2)
    # date -d is GNU date; on BSD/macOS use: date -j -f '%b %d %T %Y %Z' "$enddate" +%s
    days=$(( ($(date -d "$enddate" +%s) - $(date +%s)) / 86400 ))
    # Pass the threshold in with --argjson rather than splicing shell text into the jq program
    jq -n --arg h "$hostport" --arg s "$subject" --arg i "$issuer" \
      --arg e "$enddate" --argjson d "$days" --argjson t "$threshold" \
      '{host:$h, subject:$s, issuer:$i, expires:$e, days_left:$d,
        alert: ($d < $t)}'
  done
}

audit_certs 30 < hosts.txt | jq -s 'sort_by(.days_left)'
```
## The Mental Model

```
┌─────────────────────────────────────────────────────────────────┐
│                     DATA FLOW ARCHITECTURE                      │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│   SOURCE         TRANSFORM                    SINK              │
│   ──────         ─────────                    ────              │
│                                                                 │
│   curl ──────┐                                                  │
│   cat ───────┼──► awk ──► sed ──► sort ──► uniq ──► output      │
│   openssl ───┤     ↓                                            │
│   kubectl ───┘     jq                                           │
│                    │                                            │
│                    ▼                                            │
│                  xargs ──► parallel execution                   │
│                                                                 │
│   GENERATORS     FILTERS/TRANSFORMERS         CONSUMERS         │
│   (produce data) (shape data)                 (use data)        │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```
Every pipeline follows this pattern:

- **Generate** - Produce a stream (`curl`, `cat`, `find`, `kubectl`, `openssl`)
- **Transform** - Shape the stream (`awk`, `sed`, `jq`, `sort`, `uniq`, `grep`)
- **Consume** - Use the result (`xargs`, `tee`, redirect, variable capture)
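The three stages show up even in a toy pipeline: `printf` generates, `awk` transforms, and a `while read` loop consumes. A sketch with invented host/status data:

```bash
# Generate: emit host,status pairs (stand-in for curl/kubectl output)
printf 'web1,up\nweb2,down\ndb1,up\n' |
# Transform: keep only hosts that are down, extract the host column
awk -F, '$2 == "down" {print $1}' |
# Consume: act on each surviving record (here, just report it)
while read -r host; do
  echo "ALERT: $host is down"
done
# → ALERT: web2 is down
```

Swapping the generator (`kubectl get pods`, `curl -s .../health`) or the consumer (`xargs`, a notifier) changes the workflow without touching the transform in the middle.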
## Quick Reference Matrix

| Task | Primary Tool | Alternative | Example |
|---|---|---|---|
| Extract JSON field | `jq` | `grep` + `awk` | `jq -r '.name'` |
| Extract column | `awk` | `cut` | `awk '{print $2}'` |
| Find/replace in file | `sed` | `awk`, `perl` | `sed -i 's/old/new/g' file` |
| HTTP request | `curl` | `wget`, `httpie` | `curl -s https://host/api` |
| Certificate info | `openssl` | `certtool` | `openssl x509 -noout -text -in cert.pem` |
| Parallel execution | `xargs -P` | `parallel`, `&` | `xargs -P4 -I{} cmd {}` |
| Line filtering | `grep` | `awk`, `sed` | `grep -E 'pattern' file` |
| Sorting | `sort` | `awk` | `sort -rn` |
| Deduplication | `uniq` | `awk`, `sort -u` | `sort file \| uniq -c` |
| Field delimiter | `awk -F` | `cut -d` | `awk -F: '{print $1}'` |
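The last row of the matrix, `awk -F` versus `cut -d`, is easy to compare side by side on a passwd-style line (the line itself is invented):

```bash
line='alice:x:1000:1000:Alice:/home/alice:/bin/bash'

# Both extract the first colon-delimited field identically
echo "$line" | awk -F: '{print $1}'   # → alice
echo "$line" | cut -d: -f1            # → alice

# awk pulls ahead once you need to recombine or compute on fields
echo "$line" | awk -F: '{print $1 " uses " $7}'
# → alice uses /bin/bash
```

Rule of thumb: reach for `cut` when you only slice, for `awk` when you slice and then do anything else.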
## When to Use What

| Situation | Tool Choice |
|---|---|
| JSON from API | `jq` |
| Log file analysis | `awk`, `grep` |
| Config file editing | `sed` |
| Certificate operations | `openssl` |
| Bulk operations | `xargs` |
| Column data | `awk`, `cut` |
| Line-by-line substitution | `sed` |
| Nested/structured data | `jq` |
| Combining multiple sources | Process substitution |
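Process substitution, from the last row above, lets a command's output stand in for a filename, so file-oriented tools can compare live streams without temp files. A sketch with invented host lists:

```bash
# comm -12 prints only lines common to both inputs; both must be sorted.
# <(...) is bash/zsh syntax: each subshell's output appears as a readable file.
comm -12 \
  <(printf 'db1\nweb1\nweb2\n') \
  <(printf 'cache1\nweb1\nweb2\n')
# → web1
#   web2
```

The same shape works for `diff <(cmd1) <(cmd2)`, e.g. diffing configs pulled from two hosts over ssh in one line.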
## The Documents

Each deep-dive follows the same structure:

- **Core Concepts** - Mental model for the tool
- **Essential Patterns** - The 20% you'll use 80% of the time
- **Infrastructure Applications** - Real-world examples from your environment
- **Advanced Techniques** - Senior-level patterns
- **Debugging** - When things go wrong
- **Quick Reference** - Cheat sheet
## Related

- jq Mastery - JSON processing deep dive
- Shell Fundamentals - Bash/Zsh foundations