# CLI Mastery Training

Deliberate practice curriculum for terminal mastery.
## Training Philosophy

- **Type commands by hand** - no copy-paste during practice
- **Eyes-closed drills** - build muscle memory
- **Observation → immediate reproduction** - see a pattern, type it
- **Infrastructure context** - all exercises use real domus-* data
## Resources

| Resource | URL |
|---|---|
| regex101.com | regex101.com/ - Test regex with explanations |
| explainshell.com | explainshell.com/ - Break down complex commands |
| shellcheck.net | www.shellcheck.net/ - Lint bash scripts |
## Skill Levels

| Level | Indicators | Status |
|---|---|---|
| Beginner | GUI-dependent, copies commands, afraid of terminal | [x] PASSED |
| Intermediate | Terminal-first, pipelines, find+grep, heredocs | [x] CURRENT |
| Advanced | Process substitution, FIFOs, signals, awk/sed without docs | [ ] IN TRAINING |
| Expert | Designs workflows, portable scripts, teaches others | [ ] FUTURE |
## Module 1: Process Substitution

### Concept

`<(command)` creates a file descriptor (e.g. `/dev/fd/63`) containing the command's output.

### Pattern

```bash
diff <(command1) <(command2)
```
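A quick way to see the mechanism for yourself (bash/zsh only; the exact `/dev/fd` number varies):

```bash
# <(...) expands to a path that the outer command can open like a file:
echo <(true)        # prints something like /dev/fd/63

# The "file" contains the inner command's output:
cat <(echo hello)   # prints: hello
```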
### Exercises

#### Exercise 1.1: Compare attributes across repos

```bash
diff <(grep -E "^ [a-z]" ~/atelier/_bibliotheca/domus-infra-ops/docs/asciidoc/antora.yml | sort) \
     <(grep -E "^ [a-z]" ~/atelier/_bibliotheca/domus-captures/docs/antora.yml | sort)
```

Expected: no output (attributes synchronized).
#### Exercise 1.2: Compare DNS records from two servers

```bash
diff <(dig @10.50.1.90 inside.domusdigitalis.dev AXFR | sort) \
     <(dig @10.50.1.91 inside.domusdigitalis.dev AXFR | sort)
```

#### Exercise 1.3: Compare package lists

```bash
diff <(pacman -Qq | sort) <(sort ~/.config/packages/base.txt)
```

#### Exercise 1.4: Compare running processes across hosts

```bash
diff <(ssh kvm-01 "ps aux --no-headers | awk '{print \$11}' | sort -u") \
     <(ssh kvm-02 "ps aux --no-headers | awk '{print \$11}' | sort -u")
```
### Completion Criteria

- [ ] Type `diff <() <()` without reference
- [ ] Use it to compare remote vs. local files
- [ ] Use it to compare git branches
- [ ] Explain why it's better than temp files
## Module 2: comm - Set Operations

### Concept

Compares two sorted files and outputs three columns:

| Column | Indent | Meaning |
|--------|--------|---------|
| 1 | None | Only in file 1 |
| 2 | One tab | Only in file 2 |
| 3 | Two tabs | In both files |
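A minimal demo of the column layout, using inline input via process substitution (`a` is only in the left file, `c` only in the right, `b` in both):

```bash
comm <(printf 'a\nb\n') <(printf 'b\nc\n')
# Output (column 1 unindented, column 2 one tab, column 3 two tabs):
# a
# 		b
# 	c
```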
### Flags
| Flag | Effect |
|------|--------|
| -1 | Suppress column 1 |
| -2 | Suppress column 2 |
| -3 | Suppress column 3 (most common) |
| -12 | Show only common lines |
| -23 | Show only file1-unique |
| -13 | Show only file2-unique |
### Exercises

#### Exercise 2.1: Find files unique to each repo

```bash
comm -3 \
  <(find ~/atelier/_bibliotheca/domus-infra-ops -name "*.adoc" -printf "%f\n" | sort -u) \
  <(find ~/atelier/_bibliotheca/domus-captures -name "*.adoc" -printf "%f\n" | sort -u) \
  | head -20
```

#### Exercise 2.2: Count unique files with awk

```bash
comm -3 \
  <(find ~/atelier/_bibliotheca/domus-infra-ops -name "*.adoc" -printf "%f\n" | sort -u) \
  <(find ~/atelier/_bibliotheca/domus-captures -name "*.adoc" -printf "%f\n" | sort -u) \
  | awk '/^\t/{r++} /^[^\t]/{l++} END{print "infra-ops only:", l, "\ncaptures only:", r}'
```

Expected: `infra-ops only: 275`, `captures only: 437`

#### Exercise 2.3: Find common files only

```bash
comm -12 \
  <(find ~/atelier/_bibliotheca/domus-infra-ops -name "*.adoc" -printf "%f\n" | sort -u) \
  <(find ~/atelier/_bibliotheca/domus-captures -name "*.adoc" -printf "%f\n" | sort -u) \
  | wc -l
```
### awk Pattern Breakdown

| Pattern | Meaning |
|---------|---------|
| /^\t/ | Line starts with tab |
| /^[^\t]/ | Line starts with non-tab |
| {r++} | Increment counter r |
| END{print} | Print after all lines |
### Completion Criteria

- [ ] Explain the 3-column output
- [ ] Use the -3 flag correctly
- [ ] Combine with awk for counting
- [ ] Use -12 for intersection
## Module 3: Git Log Ranges

### Concept

`A..B` means "commits reachable from B but NOT from A".

### Patterns

```bash
# Last N commits
git log HEAD~1..HEAD   # 1 commit
git log HEAD~2..HEAD   # 2 commits
git log HEAD~5..HEAD   # 5 commits

# Branch comparison
git log main..feature  # Commits on feature not on main
git log feature..main  # Commits on main not on feature

# With formatting
git log --oneline HEAD~5..HEAD
```
### Exercises

#### Exercise 3.1: View the last 3 commits

```bash
git log --oneline HEAD~3..HEAD
```

#### Exercise 3.2: View commits with full messages

```bash
git log HEAD~2..HEAD | cat   # piping to cat disables the pager
```

#### Exercise 3.3: Compare branches

```bash
git log --oneline main..HEAD   # What's on the current branch but not on main
```
### Completion Criteria

- [ ] Explain A..B semantics
- [ ] Use HEAD~N notation fluently
- [ ] Compare branches with ranges
## Module 4: Git Reflog

### Concept

Every HEAD movement is recorded. A safety net for ~90 days (git's default reflog expiry).

### Patterns

```bash
# View recent movements
git reflog | head -10

# Entry format:
# 45163b7 HEAD@{0}: commit: message
# a4a04bf HEAD@{1}: commit: message

# Recovery after a bad reset
git reset --hard HEAD~5    # Oops!
git reflog                 # Find the lost commit
git reset --hard HEAD@{1}  # Recover
```
### Exercises

#### Exercise 4.1: View your reflog

```bash
git reflog | head -15
```

#### Exercise 4.2: Find a specific action

```bash
git reflog | grep -i "reset\|rebase\|checkout"
```
## Module 5: find Patterns

### Core Syntax

```bash
find <path> -type <f|d> -name "pattern" -exec <cmd> {} \;
```
### Exercises

#### Exercise 5.1: Find by name with an OR group

```bash
find ~/atelier/_bibliotheca/Principia/ -type f \( -name "*.adoc" -o -name "*.md" \) | head -10
```

#### Exercise 5.2: Find and grep

```bash
find ~/atelier/_bibliotheca -name "*.adoc" -exec grep -l "LUKS" {} \; 2>/dev/null
```

#### Exercise 5.3: Find with printf (for sort/comm)

```bash
find ~/atelier/_bibliotheca/domus-captures -name "*.adoc" -printf "%f\n" | sort -u | head -10
```

#### Exercise 5.4: Find by modification time

```bash
find ~/atelier/_bibliotheca -name "*.adoc" -mtime -1   # Modified in the last 24h
```
### Completion Criteria

- [ ] Use -type f vs -type d
- [ ] Use -name with globs
- [ ] Use -exec with {} and \;
- [ ] Use -printf for custom output
## Module 6: Regex (regex101.com)

### Flavors
| Context | Flavor | Notes |
|---------|--------|-------|
| grep | BRE | Escape +, ?, \|, () to make them special |
| grep -E | ERE | No escaping needed |
| grep -P | PCRE | Full Perl regex |
| awk | ERE | Extended by default |
| sed | BRE | Use -E for ERE |
### Exercises (use regex101.com)

#### Exercise 6.1: Match IPv4 addresses

Pattern: `\b([0-9]{1,3}\.){3}[0-9]{1,3}\b`

Test string: "Server at 10.50.1.90 responded"
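The pattern can be checked outside regex101 too. A sketch with GNU grep (note: `{1,3}` also accepts out-of-range octets like 999, which is usually acceptable for log scraping):

```bash
echo "Server at 10.50.1.90 responded" \
  | grep -oE '\b([0-9]{1,3}\.){3}[0-9]{1,3}\b'
# → 10.50.1.90
```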
#### Exercise 6.2: Match MAC addresses

Pattern: `([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}`

Test string: "MAC 14:F6:D8:7B:31:80 registered"
#### Exercise 6.3: BRE vs ERE escaping

```bash
# BRE (grouping requires escaped parens)
echo "test123" | grep 'test\(123\)'

# ERE (no escaping needed)
echo "test123" | grep -E 'test(123)'
```
### Practice Workflow

1. Open regex101.com/
2. Select a flavor (PCRE for testing, then adapt)
3. Enter the pattern and test strings
4. Read the explanation in the right panel
5. Port to grep/awk/sed
### Completion Criteria

- [ ] Explain BRE vs ERE vs PCRE
- [ ] Write an IP address regex
- [ ] Write a MAC address regex
- [ ] Use regex101.com effectively
## Module 7: awk Fundamentals

### Core Patterns

```bash
# Field extraction
awk '{print $1, $3}' file

# Pattern matching
awk '/pattern/ {print $0}' file

# Field separator
awk -F: '{print $1}' /etc/passwd

# Counters
awk '{count++} END {print count}' file

# Conditionals
awk '$3 > 100 {print $1}' file
```
### Exercises

#### Exercise 7.1: Extract usernames from passwd

```bash
awk -F: '{print $1}' /etc/passwd | head -10
```

#### Exercise 7.2: Count lines matching a pattern

```bash
# The ** glob requires bash with globstar enabled (shopt -s globstar)
awk '/LUKS/ {count++} END {print count}' ~/atelier/_bibliotheca/domus-captures/docs/modules/ROOT/**/*.adoc
```

#### Exercise 7.3: Tab-detection pattern (from comm)

```bash
# From Session 5
awk '/^\t/{r++} /^[^\t]/{l++} END{print "left:", l, "right:", r}'
```
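The same counter can be verified on synthetic input, no repos required:

```bash
# One unindented line, two tab-indented lines:
printf 'a\n\tb\n\tc\n' \
  | awk '/^\t/{r++} /^[^\t]/{l++} END{print "left:", l, "right:", r}'
# → left: 1 right: 2
```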
### Completion Criteria

- [ ] Use -F for the field separator
- [ ] Use pattern {action} syntax
- [ ] Use END{} for totals
- [ ] Write counters with ++
## Module 8: jq for JSON

### Core Patterns

```bash
# Pretty-print
jq . file.json

# Extract a field
jq '.name' file.json

# Array access
jq '.[0]' file.json
jq '.items[0].name' file.json

# Filter
jq '.[] | select(.status == "active")' file.json
```
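A self-contained sketch of the select() filter (assumes jq is installed; the JSON here is made up for illustration):

```bash
echo '[{"name":"a","status":"active"},{"name":"b","status":"down"}]' \
  | jq -r '.[] | select(.status == "active") | .name'
# → a
```

`-r` prints raw strings instead of JSON-quoted ones, which is usually what you want when feeding the result to other shell tools.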
### Exercises

#### Exercise 8.1: Parse netapi JSON output

```bash
netapi ise get-endpoints -f json | jq '.SearchResult.resources[].name' | head -5
```

#### Exercise 8.2: Extract specific fields

```bash
netapi monad pipelines -f json | jq '.[] | {name: .name, status: .status}'
```
### Completion Criteria

- [ ] Navigate nested objects with .
- [ ] Access arrays with []
- [ ] Use select() for filtering
- [ ] Use | for chaining
## Module 9: xargs Patterns

### Core Patterns

```bash
# Basic
find . -name "*.txt" | xargs wc -l

# With a placeholder
find . -name "*.txt" | xargs -I{} cp {} /backup/

# Null-delimited (safe for spaces in filenames)
find . -name "*.txt" -print0 | xargs -0 wc -l

# Parallel (4 jobs at once)
find . -name "*.txt" | xargs -P4 -I{} process {}
```
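Why `-print0`/`-0` matters - a sketch with a throwaway file whose name contains a space:

```bash
dir=$(mktemp -d)
touch "$dir/a file.txt"

# Whitespace splitting: ls receives two bogus arguments and fails
find "$dir" -name '*.txt' | xargs ls 2>/dev/null || true

# NUL delimiters: the filename survives intact
find "$dir" -name '*.txt' -print0 | xargs -0 ls
```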
### Exercises

#### Exercise 9.1: Count lines in all .adoc files

```bash
find ~/atelier/_bibliotheca/domus-captures -name "*.adoc" | xargs wc -l | tail -1
```

#### Exercise 9.2: Safe handling with -print0

```bash
find ~/atelier/_bibliotheca -name "*.adoc" -print0 | xargs -0 grep -l "LUKS"
```
### Completion Criteria

- [ ] Use -I{} for a placeholder
- [ ] Use -print0 and -0 together
- [ ] Use -P for parallel execution
## Module 10: sort - Ordering Data

### Core Patterns

```bash
# Alphabetic sort (default)
sort file.txt

# Numeric sort
sort -n file.txt

# Reverse
sort -r file.txt

# Unique only
sort -u file.txt

# By field (tab-delimited)
sort -t$'\t' -k2 file.txt

# By field (colon-delimited)
sort -t: -k3 -n /etc/passwd

# Human-readable numbers (1K, 2M, 3G)
du -h | sort -h
```
### Exercises

#### Exercise 10.1: Sort passwd by UID

```bash
sort -t: -k3 -n /etc/passwd | head -10
```

#### Exercise 10.2: Sort files by size

```bash
ls -lS | head -10    # ls sorts by size itself with -S
du -h * | sort -h    # sort human-readable sizes
```

#### Exercise 10.3: Unique sorted output for comm

```bash
find . -name "*.adoc" -printf "%f\n" | sort -u
```
### Completion Criteria

- [ ] Use -n for numeric sorting
- [ ] Use -t and -k for field sorting
- [ ] Use -u for unique
- [ ] Use -h for human-readable sizes
## Module 11: cut - Field Extraction

### Core Patterns

```bash
# By delimiter and field
cut -d: -f1 /etc/passwd

# Multiple fields
cut -d: -f1,3,7 /etc/passwd

# Field range
cut -d: -f1-3 /etc/passwd

# By character position
cut -c1-10 file.txt

# By byte position
cut -b1-10 file.txt
```
### Exercises

#### Exercise 11.1: Extract usernames

```bash
cut -d: -f1 /etc/passwd | head -10
```

#### Exercise 11.2: Extract username and shell

```bash
cut -d: -f1,7 /etc/passwd | head -10
```

#### Exercise 11.3: Extract the first 20 characters of each line

```bash
head /etc/passwd | cut -c1-20
```

#### Exercise 11.4: cut vs awk

```bash
# Equivalent operations
cut -d: -f1 /etc/passwd
awk -F: '{print $1}' /etc/passwd

# When to use each:
# cut: simple field extraction, faster
# awk: complex logic, calculations, patterns
```
### Completion Criteria

- [ ] Use -d for the delimiter
- [ ] Use -f for fields
- [ ] Use -c for characters
- [ ] Know when to use cut vs awk
## Module 12: tr - Translate Characters

### Core Patterns

```bash
# Replace characters
echo "hello" | tr 'a-z' 'A-Z'    # HELLO

# Delete characters
echo "hello123" | tr -d '0-9'    # hello

# Squeeze repeated characters
echo "helllo" | tr -s 'l'        # helo

# Complement (delete everything NOT in the set)
echo "hello123" | tr -cd '0-9'   # 123
```
### Exercises

#### Exercise 12.1: Uppercase

```bash
echo "infrastructure" | tr 'a-z' 'A-Z'
```

#### Exercise 12.2: Remove digits

```bash
echo "vault-01.inside.domusdigitalis.dev" | tr -d '0-9'
```

#### Exercise 12.3: Convert spaces to newlines

```bash
echo "one two three" | tr ' ' '\n'
```

#### Exercise 12.4: Squeeze multiple spaces

```bash
echo "too  many   spaces" | tr -s ' '
```
### Completion Criteria

- [ ] Use character ranges like a-z
- [ ] Use -d to delete
- [ ] Use -s to squeeze
- [ ] Use -c for the complement
## Module 13: head/tail - File Slicing

### Core Patterns

```bash
# First/last N lines
head -10 file.txt
tail -10 file.txt

# First N bytes
head -c 100 file.txt

# All BUT the last N lines
head -n -5 file.txt

# All FROM line N onward
tail -n +5 file.txt

# Follow a file (live)
tail -f /var/log/syslog

# Follow with retry (survives log rotation)
tail -F /var/log/syslog
```
### Exercises

#### Exercise 13.1: Lines 10-20 of a file

```bash
head -20 file.txt | tail -11
# OR
sed -n '10,20p' file.txt
# OR
awk 'NR>=10 && NR<=20' file.txt
```

#### Exercise 13.2: Skip the header line

```bash
tail -n +2 /etc/passwd | head -5
```

#### Exercise 13.3: Last 5 commits

```bash
git log --oneline | head -5
```
### Completion Criteria

- [ ] Use -n for lines
- [ ] Use the +N syntax with tail
- [ ] Use -f for following
- [ ] Combine head/tail for ranges
## Module 14: wc - Counting

### Core Patterns

```bash
# Lines, words, bytes
wc file.txt

# Lines only
wc -l file.txt

# Words only
wc -w file.txt

# Bytes (-m for characters)
wc -c file.txt

# Multiple files
wc -l *.txt
```
### Exercises

#### Exercise 14.1: Count .adoc files

```bash
find ~/atelier/_bibliotheca/domus-captures -name "*.adoc" | wc -l
```

#### Exercise 14.2: Total lines across all files

```bash
find ~/atelier/_bibliotheca/domus-captures -name "*.adoc" -exec cat {} + | wc -l
```

#### Exercise 14.3: Lines per file, sorted

```bash
find . -name "*.adoc" -exec wc -l {} + | sort -n | tail -10
```
### Completion Criteria

- [ ] Use -l for lines
- [ ] Combine with find/xargs
- [ ] Interpret multi-file output
## Module 15: uniq - Duplicate Handling

### Core Patterns

```bash
# Remove adjacent duplicates (MUST sort first!)
sort file.txt | uniq

# Count occurrences
sort file.txt | uniq -c

# Only duplicated lines
sort file.txt | uniq -d

# Only non-duplicated lines
sort file.txt | uniq -u

# Ignore case
sort file.txt | uniq -i
```
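The sort-first rule in action - uniq only collapses *adjacent* duplicates:

```bash
# Without sorting, the two a's are not adjacent, so both survive:
printf 'a\nb\na\n' | uniq          # prints: a b a (one per line)

# Sorted first, they collapse:
printf 'a\nb\na\n' | sort | uniq   # prints: a b (one per line)
```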
### Exercises

#### Exercise 15.1: Count file extensions

```bash
find ~/atelier/_bibliotheca -type f | sed 's/.*\.//' | sort | uniq -c | sort -rn | head -10
```

#### Exercise 15.2: Find duplicate filenames

```bash
find ~/atelier/_bibliotheca -name "*.adoc" -printf "%f\n" | sort | uniq -d
```
### Completion Criteria

- [ ] Always sort before uniq
- [ ] Use -c for counting
- [ ] Use -d for duplicates only
## Module 16: tee - Split Output

### Core Patterns

```bash
# Write to a file AND stdout
command | tee output.txt

# Append instead of overwrite
command | tee -a output.txt

# Write to multiple files
command | tee file1.txt file2.txt

# With sudo (common pattern for privileged writes)
echo "config" | sudo tee /etc/file.conf
```
### Exercises

#### Exercise 16.1: Save and display

```bash
git log --oneline HEAD~5..HEAD | tee /tmp/recent-commits.txt
```

#### Exercise 16.2: Write to a privileged file

```bash
echo "new line" | sudo tee -a /etc/hosts
```
### Completion Criteria

- [ ] Use for logging pipelines
- [ ] Use with sudo for privileged writes
- [ ] Use -a for append
## Module 17: Pipelines - Putting It Together

### Classic Patterns

```bash
# Find the most common words
tr ' ' '\n' < file.txt | sort | uniq -c | sort -rn | head -10

# Find large files
find . -type f -exec du -h {} + | sort -h | tail -10

# Summarize log entries
grep ERROR /var/log/syslog | cut -d' ' -f1-3 | sort | uniq -c

# Extract IPs from a log
grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' /var/log/auth.log | sort | uniq -c | sort -rn
```
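The word-frequency pipeline can be sanity-checked on a known input, confirming each stage does what you expect:

```bash
printf 'a b a c a b\n' | tr ' ' '\n' | sort | uniq -c | sort -rn | head -3
# → 3 a / 2 b / 1 c (counts are left-padded by uniq -c)
```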
### Infrastructure Drills

#### Drill 17.1: Find the most common file extensions in domus-*

```bash
find ~/atelier/_bibliotheca -type f | sed 's/.*\.//' | sort | uniq -c | sort -rn | head -10
```

#### Drill 17.2: Count lines per repo

```bash
for repo in domus-captures domus-infra-ops domus-ise-linux; do
  echo -n "$repo: "
  find ~/atelier/_bibliotheca/$repo -name "*.adoc" -exec cat {} + 2>/dev/null | wc -l
done
```

#### Drill 17.3: Compare repo sizes

```bash
du -sh ~/atelier/_bibliotheca/domus-* | sort -h
```
### Completion Criteria

- [ ] Chain 5+ commands in a pipeline
- [ ] Use tr, sort, uniq, cut together
- [ ] Build pipelines incrementally (test each step)
## Training Log

Track completed exercises here.

### 2026-03-12: Session 1

| Module | Exercise | Status |
|---|---|---|
| 1 | 1.1 | [x] Completed - attributes comparison |
| 2 | 2.1 | [x] Completed - unique files |
| 2 | 2.2 | [x] Completed - awk counting (275/437) |
| 3 | 3.1 | [x] Completed - git log ranges |
| 4 | 4.1 | [x] Completed - reflog viewing |
Notes:

- Eyes-closed typing drill attempted
- Typos caught: `/^t/` → `/^\t/`, `/{\t]/` → `/[\t]/`
- Process substitution syntax internalized
## Quick Reference Card

```bash
# Process substitution
diff <(cmd1) <(cmd2)

# comm (requires sorted input)
comm -3 <(sort file1) <(sort file2)

# Git ranges
git log HEAD~N..HEAD
git reflog | head

# find + grep
find . -name "*.adoc" -exec grep -l "pattern" {} \;

# awk counting
awk '/pattern/{c++} END{print c}'

# jq
jq '.field' file.json
jq '.[] | select(.x == "y")' file.json

# xargs safe
find . -print0 | xargs -0 cmd
```