Command Chaining & Flow Control
The power of Unix lies not in individual commands, but in the ability to combine them.
Overview
Master the art of combining commands to create powerful one-liners and production-ready scripts. Understanding command chaining separates casual users from true Linux administrators.
What you'll master:

- Logical operators (`&&`, `||`, `;`)
- Command grouping (subshells, groups)
- Background execution & job control
- Data stream redirection (stdin, stdout, stderr)
- Pipes and process substitution
- Real-world production patterns
- Incident response & recovery techniques
1. Logical Operators (Control Flow)
&& - Logical AND (Short-circuit)

Execute next command only if previous succeeded (exit code 0).

```bash
npm run build && npm run deploy
[ -d "apps/backend" ] && cd apps/backend
mkdir -p /tmp/test && cd /tmp/test && touch file.txt && echo "All succeeded"
```

Exit code logic:

```bash
true && echo "This runs"
false && echo "This doesn't"
command1 && command2   # command2 runs only when command1 exits 0
```
|| - Logical OR (Fallback)

Execute next command only if previous failed (exit code non-zero).

```bash
rm -rf node_modules 2>/dev/null || sudo rm -rf node_modules
[ -f "$CONFIG_FILE" ] || CONFIG_FILE="default.conf"   # fall back to a default
./deploy.sh || { echo "Deploy failed!"; exit 1; }
```

Exit code logic:

```bash
false || echo "This runs"
true || echo "This doesn't"
command1 || command2 || command3   # the first success stops the chain
```
; - Sequential (Unconditional)

Execute next command regardless of previous result.

```bash
echo "Starting..."; ./script.sh; echo "Done (maybe failed)"
./risky-operation.sh; cleanup.sh
```

Use case: When you need guaranteed execution (cleanup, logging).
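A minimal sketch of that cleanup use case (the scratch file and the `needle` search are invented for the demo): the `rm` on the next line runs whether or not the search succeeds, because `;` never short-circuits.

```bash
tmpfile=$(mktemp)                     # scratch file for the demo
grep "needle" "$tmpfile"; status=$?   # grep fails (the file is empty)...
rm -f "$tmpfile"                      # ...but cleanup still runs, thanks to ;
echo "main step exit code: $status"
```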
Combining && and || - The Ternary Pattern

```bash
[ -f "config.yml" ] && echo "Config exists" || echo "Config missing"
```

> **Warning:** This can backfire! If the true-branch command fails, the false branch runs too.

```bash
[ -f "file.txt" ] && false || echo "Oops"   # "Oops" prints in both cases
```

Safer pattern using if:

```bash
if [ -f "config.yml" ]; then
    echo "exists"
else
    echo "missing"
fi
```

Golden rule: Don't trust `&& ||` as a true ternary. Use if/else for reliability.
2. Command Grouping

( ) - Subshell (Isolated Environment)

Commands run in a child process. Changes don't affect the parent shell.

```bash
(cd apps/backend && npm install)
pwd                                    # still in the original directory
OUTPUT=$(cd /var/log && grep error syslog)
(cd apps/backend && npm install) &
(cd apps/frontend && npm install) &
wait
```
Why use subshells:

- Prevent directory pollution
- Isolate variable changes
- Safe parallel execution
- Contain errors

Exit code: A subshell returns the exit code of its last command.
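That rule is easy to verify directly; the `|| rc=$?` capture also keeps the failing subshell from tripping `set -e` in stricter scripts:

```bash
rc=0
( cd /tmp && false ) || rc=$?   # the subshell exits with the code of its last command
echo "subshell returned: $rc"   # → subshell returned: 1
```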
{ } - Command Group (Same Shell)

Commands run in the current shell. Changes persist.

```bash
{ cd apps/backend && npm install; }
pwd                                    # now inside apps/backend - the cd persisted
{ echo "Header"; cat data.txt; echo "Footer"; } > output.txt
{ cd /nonexistent && rm -rf *; } 2>/dev/null || echo "Block failed safely"
```

Syntax rules:

- Space after `{`
- Commands end with `;` or a newline
- Space before `}`
Comparison Table

| Feature | `( )` Subshell | `{ }` Group |
|---|---|---|
| Environment | New child process | Current shell |
| Variable changes | Lost after exit | Persist |
| Directory changes | Lost after exit | Persist |
| Syntax | `(cmd1; cmd2)` | `{ cmd1; cmd2; }` (spaces and final `;` required) |
| Performance | Slower (new process) | Faster (same shell) |
| Use case | Isolation, parallel runs | Grouping for redirection |

Quick decision:

- Need isolation? → Use `( )`
- Need speed/persistence? → Use `{ }`
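The persistence difference can be checked in a few lines; this sketch shows a variable change surviving a group but not a subshell:

```bash
x=1
( x=2 )                 # subshell: the change dies with the child process
after_subshell=$x
{ x=3; }                # group: runs in the current shell, so it persists
after_group=$x
echo "$after_subshell $after_group"   # → 1 3
```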
3. Background & Job Control
& - Background Execution

```bash
npm install &                      # run in the background, get the prompt back
echo "Started with PID: $!"

(cd apps/backend && npm install) &
(cd apps/frontend && npm install) &
wait
echo "Both installations finished"

npm install & PID1=$!
npm run build & PID2=$!
wait $PID1
echo "Install done, build still running..."
wait $PID2
```

Special variables:

- `$!` - PID of the last background job
- `$?` - Exit code of the last foreground command
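A short sketch tying the two variables together (the 0.1-second sleep is just a stand-in for real work):

```bash
sleep 0.1 &        # start a background job
bg_pid=$!          # $! = PID of that job
true               # some foreground command...
fg_status=$?       # ...$? = its exit code
wait "$bg_pid"     # block until the background job finishes
echo "waited on PID $bg_pid; foreground status was $fg_status"
```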
4. Data Stream Redirection
The Three Standard Streams

```
stdin (0) ─→ [COMMAND] ─→ stdout (1)
                 │
                 └──────→ stderr (2)
```

File descriptors:

- `0` = stdin (standard input)
- `1` = stdout (standard output)
- `2` = stderr (standard error)
Basic Redirection

```bash
ls > files.txt                        # overwrite
ls >> files.txt                       # append
command 2> errors.log                 # stderr only
sort < unsorted.txt                   # stdin from file
command > all.log 2>&1                # stdout and stderr to one file
command &> all.log                    # same, bash shorthand
command > output.log 2> errors.log    # split streams
```

Order matters:

```bash
# WRONG
command 2>&1 > file.txt

# RIGHT
command > file.txt 2>&1
```

Discarding Output

```bash
command > /dev/null        # discard stdout
command 2> /dev/null       # discard stderr
command > /dev/null 2>&1   # discard both
command &> /dev/null       # same, bash shorthand
```

Use case: Silent cron jobs, hide expected errors.
Here Documents (HEREDOC)

Multi-line input to commands.

```bash
# Quoted delimiter: no expansion
cat << 'EOF'
This is line 1
This is line 2
Variables like $HOME are NOT expanded
EOF

# Unquoted delimiter: variables expand
cat << EOF
Your home is: $HOME
Current user: $USER
EOF

psql << EOF
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    username VARCHAR(50)
);
INSERT INTO users (username) VALUES ('evan');
EOF

python3 << 'PYTHON'
import sys
print(f"Python version: {sys.version}")
PYTHON

ssh user@host << 'REMOTE'
cd /var/log
tail -100 syslog | grep error
REMOTE
```

See also: Heredoc Mastery for a deep dive.
5. Pipes (The Power of Unix Philosophy)
Basic Piping

```bash
ls -la | grep ".txt"
cat access.log | grep "404" | sort | uniq -c | sort -rn | head -10
```

Performance tip: In bash, each command in a pipeline runs in its own subshell. Minimize unnecessary stages.
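For example, letting grep open the file itself drops one process from the pipeline without changing the result (the sample file and its contents are invented for the demo):

```bash
printf 'alpha\nbeta\nalpha\n' > /tmp/demo.txt
count_piped=$(cat /tmp/demo.txt | grep -c alpha)   # extra cat process
count_direct=$(grep -c alpha /tmp/demo.txt)        # grep reads the file directly
echo "$count_piped $count_direct"                  # → 2 2
rm -f /tmp/demo.txt
```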
Named Pipes (FIFOs)

```bash
mkfifo /tmp/mypipe
echo "Hello" > /tmp/mypipe &   # writer blocks until a reader attaches
cat < /tmp/mypipe
rm /tmp/mypipe
```

Use case: Inter-process communication without temporary files.
Process Substitution (Advanced)

Treat command output as a file.

```bash
diff <(ls dir1) <(ls dir2)
diff <(sort file1.txt) <(sort file2.txt)
paste <(cut -f1 file.txt) <(cut -f3 file.txt)
join <(sort users1.txt) <(sort users2.txt)

while read line; do
    echo "Processing: $line"
done < <(find . -name "*.log")
```

Syntax:

- `<(command)` - Output readable as a file
- `>(command)` - Input writable as a file

Example:

```bash
echo "data" | tee >(gzip > file.gz) >(bzip2 > file.bz2) > file.txt
```
6. Practical Patterns
Pattern 1: Conditional Directory Operations

```bash
[ "$(basename "$PWD")" != "backend" ] && cd apps/backend
rm -rf node_modules && npm install
```

Problem: You're now stuck in apps/backend!

Pattern 2: Subshell Isolation (Recommended)

```bash
(cd apps/backend && rm -rf node_modules && npm install)
pwd   # still where you started
```

Always use subshells for directory operations.
Pattern 3: Error Fallback Chain

```bash
rm -rf node_modules 2>/dev/null || \
    sudo rm -rf node_modules || \
    echo "Cannot remove node_modules"
```

Pattern 4: Grouped Commands with Error Handling

```bash
{
    cd apps/backend && \
    rm -rf node_modules && \
    npm install
} || echo "Backend setup failed"
```

Pattern 5: Sequential Multi-Directory Operations

```bash
(cd apps/backend && rm -rf node_modules && npm install) && \
(cd apps/frontend && rm -rf node_modules && npm install)
```

Pattern 6: Parallel Execution with Wait

```bash
(cd apps/backend && npm install) &
(cd apps/frontend && npm install) &
wait
echo "Both complete!"
```
Advanced: Capture individual exit codes:

```bash
(cd apps/backend && npm install) & PID1=$!
(cd apps/frontend && npm install) & PID2=$!
wait $PID1; BACKEND_STATUS=$?
wait $PID2; FRONTEND_STATUS=$?

if [ $BACKEND_STATUS -eq 0 ] && [ $FRONTEND_STATUS -eq 0 ]; then
    echo "Both succeeded"
else
    echo "At least one failed"
    exit 1
fi
```
Pattern 7: Robust Script Pattern

```bash
#!/usr/bin/env bash
set -euo pipefail

cleanup() {
    echo "Cleaning up..."
    rm -f /tmp/lockfile
    jobs -p | xargs -r kill 2>/dev/null
}
trap cleanup EXIT INT TERM

main() {
    echo "Starting..."
    # Your commands
}

main "$@"
```

Flags explained:

- `set -e` - Exit on any error
- `set -u` - Exit on undefined variable
- `set -o pipefail` - Catch errors in pipes
- `trap cleanup EXIT` - Always run cleanup
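Of these, `pipefail` is the easiest to see in isolation. Without it, the pipeline below would report success because `true` (the last stage) succeeds:

```bash
set -o pipefail
rc=0
false | true || rc=$?            # with pipefail, the failed first stage wins
echo "pipeline exit code: $rc"   # → pipeline exit code: 1
set +o pipefail                  # restore the default for the rest of the demo
```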
7. Quick Reference Table
| Operator | Name | Behavior | Example |
|---|---|---|---|
| `&&` | AND | Next runs if previous succeeds (exit 0) | `make && make install` |
| `\|\|` | OR | Next runs if previous fails (exit non-0) | `cmd \|\| echo "failed"` |
| `;` | Sequence | Next always runs | `cmd1; cmd2` |
| `&` | Background | Run async, return immediately | `long_task &` |
| `( )` | Subshell | Isolated child process | `(cd dir && cmd)` |
| `{ }` | Group | Same shell, requires trailing `;` | `{ cmd1; cmd2; }` |
| `\|` | Pipe | stdout → stdin | `ls \| grep txt` |
| `>` | Redirect | stdout to file (overwrite) | `cmd > out.txt` |
| `>>` | Append | stdout to file (append) | `cmd >> out.txt` |
| `<` | Input | stdin from file | `sort < data.txt` |
| `2>` | Stderr | stderr to file | `cmd 2> err.log` |
| `2>&1` | Merge | stderr to stdout | `cmd > log 2>&1` |
| `&>` | All | stdout+stderr to file | `cmd &> all.log` |
| `<<` | Heredoc | Multi-line input | `cat << EOF` |
| `<<<` | Herestring | Single-line input | `grep x <<< "$var"` |
| `<( )` | Process Sub | Command output as file | `diff <(a) <(b)` |
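The herestring (`<<<`) is the one operator in the table not demonstrated elsewhere in this note; a quick sketch:

```bash
# Feed a single string to stdin without echo or a pipe
upper=$(tr 'a-z' 'A-Z' <<< "hello world")
echo "$upper"   # → HELLO WORLD
```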
8. Common Pitfalls
Pitfall 1: The False Ternary

```bash
# WRONG - if echo fails, "No" runs too!
[ -f file ] && echo "Yes" || echo "No"

# SAFE
if [ -f file ]; then
    echo "Yes"
else
    echo "No"
fi
```

Pitfall 2: Forgetting Subshell Isolation

```bash
# FRAGILE - if npm install fails, the trailing cd .. never runs
cd apps/backend && npm install && cd ..

# RIGHT - stay in the original directory no matter what
(cd apps/backend && npm install)
```

Pitfall 3: Losing stderr

```bash
# WRONG - only stdout goes to the file; errors hit the terminal
command > log.txt

# RIGHT - capture both
command > log.txt 2>&1
command &> log.txt
```

Pitfall 4: Order of Redirection

```bash
# WRONG - stderr goes to the original stdout (terminal), not the file
command 2>&1 > file.txt

# RIGHT - redirect stdout first, then point stderr at it
command > file.txt 2>&1
```
9. Real-World Case Study: Post-Backup Dev Environment Restore
Documented: 2025-11-29
Scenario: Restoring Domus Digitalis after a Fedora reinstall

The Problem

After restoring Linux from backup, the dev environment failed to start:

- `node_modules` directories owned by `root` (from backup/Docker artifacts)
- `.next` cache directory owned by `root`
- Database container running but empty (Docker volumes don't transfer with git)

Diagnosis Commands

```bash
ls -la apps/backend/ | grep node_modules
lsof -i :3000 -i :8000 2>/dev/null || ss -tlnp | grep -E '3000|8000'
tail -20 ~/backend.log ~/frontend.log
docker ps
docker logs domus_postgres
```
Fix 1: Root-Owned Directories

```bash
sudo rm -rf apps/{backend,frontend}/node_modules apps/frontend/.next
(cd apps/backend && npm install) &
(cd apps/frontend && npm install) &
wait
./scripts/setup-domus-dev.sh
```

Techniques used:

- Brace expansion: `apps/{backend,frontend}/node_modules` expands to both paths
- Parallel subshells: `( ) & ( ) & wait` runs installs simultaneously
- Sequential chaining: `&&` ensures each step succeeds
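The brace-expansion step is easy to check on its own: the shell expands the pattern into two words before `echo` (or `rm`) ever runs.

```bash
paths=$(echo apps/{backend,frontend}/node_modules)
echo "$paths"   # → apps/backend/node_modules apps/frontend/node_modules
```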
Fix 2: Empty Database (Docker Volume Not Transferred)

The PostgreSQL container was running but had no tables: Docker volumes are local, not in git.

```bash
head -5 apps/backend/production-seed.sql
sed '/^\\restrict\|^\\unrestrict/d' apps/backend/production-seed.sql > /tmp/clean-restore.sql
docker cp /tmp/clean-restore.sql domus_postgres:/tmp/restore.sql
docker exec domus_postgres psql -U domus_user -d domus_dev -f /tmp/restore.sql
docker exec domus_postgres psql -U domus_user -d domus_dev \
    -c "SELECT COUNT(*) FROM projects; SELECT COUNT(*) FROM books;"
```

Techniques used:

- sed filtering: Remove lines matching a pattern before restoring
- docker cp: Copy files into the container
- docker exec: Run commands inside the container
- Inline verification: Count rows to confirm the restore
Complete One-Liner Recovery

```bash
sudo rm -rf apps/{backend,frontend}/node_modules apps/frontend/.next && \
{ (cd apps/backend && npm install) & (cd apps/frontend && npm install) & wait; } && \
sed '/^\\restrict\|^\\unrestrict/d' apps/backend/production-seed.sql > /tmp/clean-restore.sql && \
docker cp /tmp/clean-restore.sql domus_postgres:/tmp/restore.sql && \
docker exec domus_postgres psql -U domus_user -d domus_dev -f /tmp/restore.sql && \
./scripts/setup-domus-dev.sh
```

Breakdown:

1. Remove root-owned files (requires sudo)
2. Reinstall dependencies in parallel
3. Wait for both to complete
4. Clean the SQL dump
5. Copy it to the container
6. Restore the database
7. Run the setup script

Time saved: The manual process would take 30+ minutes. This: about 5.
Lessons Learned

| Issue | Root Cause | Prevention |
|---|---|---|
| Root-owned `node_modules` | Docker volume mounts, backup artifacts | Add to backup exclusions |
| Empty database | Docker volumes are local, not in git | Include db restore in setup script |
| Railway `\restrict` lines in dump | Production dump from Railway PostgreSQL | Strip before local restore |
Key Takeaway

Docker volumes (domus_postgres_data, etc.) persist data locally only. After a system restore:

- Code comes from git
- Dependencies reinstall via `npm install`
- The database must be restored from a SQL dump or backup

Always maintain:

- Recent SQL dumps of production data
- Seed data scripts for development
- Automated restore procedures
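One possible shape for that automation, as a sketch: the container, user, and database names come from the case study above, while `BACKUP_DIR` and the helper functions are assumptions to adapt to your layout.

```bash
#!/usr/bin/env bash
# Nightly dump helper (sketch) - adjust BACKUP_DIR for your setup.
BACKUP_DIR=${BACKUP_DIR:-/tmp/db-backups}

dump_path() {
    # Timestamped target, e.g. /tmp/db-backups/domus_dev-2026-01-11.sql
    echo "$BACKUP_DIR/domus_dev-$(date +%F).sql"
}

backup() {
    mkdir -p "$BACKUP_DIR"
    docker exec domus_postgres pg_dump -U domus_user domus_dev > "$(dump_path)"
}
```

Run `backup` from cron or a systemd timer so a fresh dump is always on hand after a reinstall.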
Summary & Quick Wins
Muscle Memory Commands

Practice until automatic:

```bash
(cd target && command)                  # isolated directory change
cmd1 & cmd2 & wait                      # parallel execution, then wait
cmd1 || cmd2 || echo "Both failed"      # fallback chain
cmd > output.log 2>&1                   # capture stdout and stderr
{ cmd1; cmd2; } || echo "Block failed"  # grouped error handling
cmd > log.txt 2>&1 &                    # background job with logging
cmd &> /dev/null                        # silence all output
diff <(cmd1) <(cmd2)                    # compare two command outputs
```
The Production Checklist

Before deploying a script with command chains:

- Exit codes handled (`&&`, `||`, `set -e`)
- Directory changes in subshells
- Error output captured or redirected
- Background jobs have `wait` or proper monitoring
- Cleanup via `trap ... EXIT`
- Variables quoted in tests
- Tested on the target platform
Next Steps

Related topics to explore:

- Heredoc Mastery - Deep dive on heredocs
- Advanced Search - Advanced search techniques
- Process Management - Process control & signals
- Error Handling - Robust error handling patterns
Document Version: 2.0.0
Zettelkasten ID: 2026-LNX-024
Last Updated: 2026-01-11
Author: Evan Rosado (evanusmodestus)
Email: evan.rosado@outlook.com
License: CC-BY-SA-4.0
Location: ~/atelier/_bibliotheca/Principia/02_Assets/ARS-LINUX/
End of Document
Command Chaining Mastery Achieved.