CLI Nerdcore: Educational Tracks

Educational nerdcore tracks for memorizing Linux command-line tools. Inspired by YTCracker’s "Subnet Mask Off" - learning through rhythm and repetition.

Beat: "Mask Off" by Future

Track 1: Pipe It Off (Core Philosophy)

Core Unix philosophy: pipes, redirection, exit codes, and shell options.

Hook

Pipe it off, pipe it off
stdout flows to stdin, never stops
Pipe it off, pipe it off
One tool does one thing, Unix philosophy

Pipe the output, chain the commands
Small sharp tools, that’s how Unix ran
Pipe the output, chain the commands
Compose the programs, shell’s the glue, understand

Verse 1: Pipes & Redirection

Pipe is just a connector, vertical bar key
Output from the left side, input on the right, see?
"ls | grep txt" - list files then filter
Left produces data, right is the receiver

stdout is channel one, that’s your normal output
stderr is channel two, errors take that route
"command > file" - redirect, overwrite the target
"command >> file" - append it, don’t discard it

Two-greater-than-one: "2>&1" - combine the streams
Errors merge with output, unified it seems
"/dev/null" is the void, black hole for the data
"command 2>/dev/null" - silence errors, see you later

Subshell with the dollars: "$(command here)"
Captures the output, variable crystal clear
Process substitution: "<(command)" is a file
Feed it to a program, diff two outputs in style
What You Learned
  • | - pipe stdout to next command’s stdin

  • > - redirect stdout, overwrite file

  • >> - redirect stdout, append to file

  • 2>&1 - merge stderr into stdout

  • /dev/null - discard output

  • $(cmd) - command substitution

  • <(cmd) - process substitution
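
The verse in runnable form, a minimal sketch; the filenames (files.list, txt-only.out) are invented for the demo, and process substitution needs bash or zsh:

```shell
# Seed a small file to pipe around
printf 'alpha.txt\nbeta.log\ngamma.txt\n' > files.list

# Pipe: stdout of the left becomes stdin of the right
grep 'txt' files.list | sort > txt-only.out       # > overwrites the target
grep 'log' files.list >> txt-only.out             # >> appends instead

# Silence stderr by sending channel 2 to the void
ls /no/such/dir 2>/dev/null

# Command substitution captures output in a variable
count=$(wc -l < txt-only.out)
echo "captured $count lines"

# Process substitution: <(cmd) behaves like a file argument (bash/zsh)
diff <(sort files.list) <(sort -r files.list) >/dev/null || echo "orders differ"
```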

Verse 2: Exit Codes & Logic

Every command returns a number when it’s done
Zero means success, zero's the only one
Non-zero is failure, check it with "echo $?"
Exit status tells you if the last command was lit

Double ampersand "&&" means: if success, then run
"mkdir dir && cd dir" - second needs the first one done
Double pipe "||" means: if it fails, do this instead
"cd dir || mkdir dir" - fallback for the thread

Semicolon ";" don’t care, runs regardless of the fate
"false ; echo hi" - still prints, doesn’t wait
Chain them all together, build your logic flow
"cmd1 && cmd2 || cmd3" - now you’re in the know

Set -e in your scripts, exit on first failure
Set -u for unset vars, catches every error
Set -o pipefail - pipeline fails if ANY fail
"set -euo pipefail" - the holy trinity, never stale
What You Learned
  • Exit 0 = success, non-zero = failure

  • $? - last exit code

  • && - run next if previous succeeded

  • || - run next if previous failed

  • ; - run next regardless

  • set -e - exit on error

  • set -u - error on unset variables

  • set -o pipefail - pipeline fails if any command fails
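
A quick sketch of the exit-code logic; 'true' and 'false' are the classic always-succeed and always-fail commands, and the 'demo'/'missing' directory names are made up:

```shell
false
echo "last exit code: $?"                       # prints a non-zero code

mkdir -p demo && cd demo                        # && : cd runs only if mkdir succeeded
cd missing 2>/dev/null || echo "fallback ran"   # || : runs only on failure
false ; echo "still prints"                     # ; ignores the previous exit code

# Strict mode for scripts, set at the top before anything else:
# set -euo pipefail
```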

Track 2: AWK Talk (Field Processing)

AWK fundamentals: field extraction, patterns, variables, and aggregation.

Hook

AWK talk, field walk
Dollar-one dollar-two, split 'em by the awk
AWK talk, field walk
NR is the row, NF the column count

Print the fields, pattern match
Curly brace the action, that’s the AWK dispatch
Print the fields, pattern match
BEGIN sets it up, END wraps the batch

Verse 1: AWK Fundamentals

AWK reads line by line, automatic loop inside
No need to write a while, the iteration’s implied
Each line splits to fields, spaces are the knife
Dollar-one’s the first field, $0’s the whole line’s life

NR is "Number Record" - what line are we on?
NF is "Number Fields" - how many columns spawned?
"$NF" is the last field, "$(NF-1)" second-to-last
Dollar sign with number, field access unsurpassed

"awk '{print $1}'" - just the first column extracted
"awk '{print $1, $3}'" - first and third, compacted
"awk -F:" - colon now the separator
"/etc/passwd" parsing, username is first, player

"awk '/pattern/'" - only matching lines survive
"awk '!/pattern/'" - invert, non-matches thrive
"awk '/start/,/stop/'" - range between two matches
Print everything between them, pattern-pair dispatches
What You Learned
  • $0 - entire line

  • $1, $2, … - field 1, field 2, …

  • $NF - last field

  • NR - current line number (Number Record)

  • NF - number of fields in current line

  • -F: - set field separator to colon

  • /pattern/ - filter matching lines

  • !/pattern/ - filter non-matching lines

  • /start/,/stop/ - range pattern
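
The field moves from the verse, run on a tiny invented sample (people.txt with name, age, role columns):

```shell
printf 'alice 30 admin\nbob 25 dev\ncarol 35 dev\n' > people.txt

awk '{print $1}' people.txt        # first column: the names
awk '{print $1, $3}' people.txt    # first and third columns
awk '{print $NF}' people.txt       # last field of each line

# -F changes the separator: first field of /etc/passwd-style data
printf 'root:x:0:0\n' | awk -F: '{print $1}'

# Pattern filters: only lines matching (or not matching)
awk '/dev/' people.txt
awk '!/dev/' people.txt
```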

Verse 2: AWK Power Moves

"awk 'NR==5'" - print exactly line five
"awk 'NR>=10 && NR⇐20'" - range of lines alive
"awk 'NR>1'" - skip the header row
"awk 'END{print NR}'" - line count, there you go

Variables need no declare, just assign and use
"awk '{sum+=$1} END{print sum}'" - total, no excuse
Associative arrays: "count[$1]++" - tally each unique
"END { for (k in count) print k, count[k] }" - frequency technique

"awk '{print length}'" - character count per line
"awk 'length>80'" - find the lines too wide to shine
"gsub(/old/,"new")" - global substitute the string
"sub(/old/,"new")" - just the first, one-time thing

BEGIN block runs first, before any line is read
Set your variables there, FS separator said
END block runs last, after every line is through
Totals and summaries, that’s what END can do
What You Learned
  • NR==5 - match line 5 only

  • NR>1 - skip header

  • END{print NR} - count lines

  • {sum+=$1} - accumulate values

  • count[$1]++ - frequency counting

  • length - string length

  • gsub(/pat/,"rep") - global substitute

  • sub(/pat/,"rep") - substitute first

  • BEGIN{} - runs before processing

  • END{} - runs after processing
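
The aggregation idioms above, sketched on invented numbers (fruit.txt is just a demo file):

```shell
printf '10 apples\n20 pears\n30 apples\n' > fruit.txt

awk '{sum += $1} END {print sum}' fruit.txt         # total of column 1 -> 60
awk 'END {print NR}' fruit.txt                      # line count -> 3
awk 'NR > 1' fruit.txt                              # skip the header row
awk '{count[$2]++} END {for (k in count) print k, count[k]}' fruit.txt
awk '{gsub(/apples/, "oranges"); print}' fruit.txt  # global substitute
```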

Track 3: sed Said (Stream Editing)

sed fundamentals: substitution, line addressing, in-place editing.

Hook

sed said, stream ed
Edit as it flows, never touch the original bed
sed said, stream ed
Substitute, delete, in-place with the -i flag spread

s-slash-old-slash-new, that’s the substitution game
Add a g for global, or it’s just the first you claim

Verse 1: sed Substitution

"sed 's/old/new/'" - substitute first match per line
"sed 's/old/new/g'" - global, every match align
"sed 's/old/new/2'" - just the second occurrence hit
"sed 's/old/new/gi'" - global AND case-insensitive bit

Delimiter don’t gotta be slash, any char will do
"sed 's|/path/old|/path/new|'" - pipes when paths ensue
"sed 's#http:#https:#'" - hashes work the same
Pick a char not in your pattern, play the delimiter game

Ampersand in replacement, "&" means what you matched
"sed 's/[0-9]*/(&)/'" - wrap the numbers, attached
Capture groups with backslash-parens, "\(pattern\)"
"\1" in replacement, back-reference when you use 'em

"sed -i" for in-place, changes hit the file direct
"sed -i.bak" - backup first, original protect
"sed -E" for extended regex, plus and question work
No backslash escape needed, modern regex perk
What You Learned
  • s/old/new/ - substitute first occurrence

  • s/old/new/g - substitute all (global)

  • s/old/new/i - case-insensitive

  • s|old|new| - alternate delimiter

  • & - reference matched text

  • \(pat\) and \1 - capture groups

  • -i - in-place edit

  • -i.bak - in-place with backup

  • -E - extended regex (ERE)
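
The substitution patterns in one runnable sketch; the sample strings are throwaway:

```shell
echo 'the old cat and the old dog' | sed 's/old/new/'     # first match only
echo 'the old cat and the old dog' | sed 's/old/new/g'    # every match

# Alternate delimiter keeps paths readable
echo '/path/old/bin' | sed 's|/path/old|/path/new|'

# & echoes back whatever matched; \(..\) and \1 capture and reuse
echo 'item42' | sed 's/[0-9][0-9]*/(&)/'                  # -> item(42)
echo 'john smith' | sed 's/\([a-z]*\) \([a-z]*\)/\2 \1/'  # swap the words
```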

Verse 2: sed Line Control

"sed -n" suppresses output, silent by default
"sed -n 'p'" - print explicit, or nothing’s your result
"sed -n '5p'" - print line five, nothing more
"sed -n '10,20p'" - range of lines, that’s the core

"sed '5d'" - delete line five from the stream
"sed '/pattern/d'" - matching lines deleted clean
"sed '1d'" - delete header, skip the first line right
"sed '$d'" - delete last line, dollar means end in sight

"sed '/^$/d'" - delete empty lines away
Caret-dollar with nothing between, blank line we slay
"sed '/^#/d'" - delete comments, hash at start
Strip the noise from configs, that’s the sed art

"sed '5q'" - quit at line five, early exit stage
"sed '/pattern/q'" - quit when pattern hits the page
Efficient for big files, stop reading once you’re done
head replacement sometimes, sed’s the faster one
What You Learned
  • -n - suppress automatic printing

  • '5p' - print line 5

  • '10,20p' - print lines 10-20

  • '5d' - delete line 5

  • '/pattern/d' - delete matching lines

  • '$' - last line

  • '/^$/d' - delete empty lines

  • '/^#/d' - delete comment lines

  • '5q' - quit at line 5
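
Line addressing on a generated five-line file (lines.txt exists only for the demo):

```shell
printf 'one\ntwo\nthree\nfour\nfive\n' > lines.txt

sed -n '2p' lines.txt          # print only line 2
sed -n '2,4p' lines.txt        # print lines 2 through 4
sed '1d' lines.txt             # drop the header line
sed '$d' lines.txt             # drop the last line
sed '/^$/d' lines.txt          # drop blank lines (none here)
sed '3q' lines.txt             # stop reading after line 3
```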

Track 4: grep Drip (Pattern Matching)

grep fundamentals: searching, flags, regex, and context.

Hook

grep drip, pattern grip
Search the files, match the lines, let the regex rip
grep drip, pattern grip
-r recursive, -i ignore case, -v invert flip

Find the needle, haystack deep
grep the logs while ops asleep

Verse 1: grep Basics

"grep pattern file" - show me matching lines
"grep -i pattern" - case-insensitive finds
"grep -v pattern" - invert, show NON-matches
"grep -c pattern" - count only, line dispatches

"grep -n pattern" - line numbers on the left
"grep -l pattern " - filenames, that’s the theft
Just which files contain it, not the lines within
"grep -L pattern *" - files that DON’T, begin

"grep -r pattern dir" - recursive, search it deep
"grep -rn pattern ." - recursive with line numbers, neat
"grep --include='.py'" - only Python files, specify
"grep --exclude-dir='.git'" - skip the git, fly by

"grep -w pattern" - word boundary, whole word match
"grep -x pattern" - exact line, full line catch
"grep -o pattern" - only the match, not the line
Extract just what matched, data mining fine
What You Learned
  • -i - case-insensitive

  • -v - invert (non-matching)

  • -c - count matches

  • -n - show line numbers

  • -l - show filenames only

  • -L - show non-matching filenames

  • -r - recursive search

  • --include - filter files to search

  • --exclude-dir - skip directories

  • -w - whole word match

  • -o - only print matched part
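
The core flags, exercised on a small generated log (app.log is invented):

```shell
printf 'ERROR disk full\ninfo ok\nerror retry\n' > app.log

grep -i 'error' app.log        # case-insensitive: matches ERROR and error
grep -v 'error' app.log        # invert: lines WITHOUT lowercase error
grep -c 'error' app.log        # count of matching lines, not matches
grep -n 'ok' app.log           # prefix matches with line numbers
grep -o 'disk [a-z]*' app.log  # print only the matched part
```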

Verse 2: grep Regex & Context

"grep -E" for extended, ERE’s the mode
Plus, question mark, and pipe without the backslash load
"grep -E 'cat|dog'" - alternation, or-logic
"grep -E 'colou?r'" - optional u, ergonomic

"grep -P" is Perl regex, PCRE when you need it
"\d" for digits, "\s" for spaces, cleaner when you read it
"grep -P '(?⇐@)\w+'" - lookbehind extraction
Email domains pulled, zero-width satisfaction

"grep -A 3 pattern" - three lines After match
"grep -B 2 pattern" - two lines Before, attach
"grep -C 5 pattern" - Context both directions
Five above, five below, full-range inspections

"grep -f patterns.txt" - file full of patterns
"grep -e 'one' -e 'two'" - multiple patterns, alternate
Chain 'em: "grep this | grep that" - AND logic through the pipe
Both must match the line, filter types you like
What You Learned
  • -E - extended regex (ERE)

  • -P - Perl regex (PCRE)

  • pat1|pat2 - alternation (OR)

  • ? - optional (in ERE/PCRE)

  • \d, \s, \w - PCRE shortcuts

  • (?<=pat) - lookbehind

  • -A N - N lines after

  • -B N - N lines before

  • -C N - N lines context (both)

  • -f file - patterns from file

  • -e pat - multiple patterns
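
Extended regex and context flags on sample data; the word lists are made up for the demo:

```shell
printf 'cat\ndog\nbird\ncolor\ncolour\n' > words.txt

grep -E 'cat|dog' words.txt      # alternation without backslashes
grep -E 'colou?r' words.txt      # optional u matches both spellings

# Context around a hit: lines after (-A), before (-B), or both (-C)
printf 'a\nb\nMATCH\nc\nd\n' | grep -A 1 'MATCH'
printf 'a\nb\nMATCH\nc\nd\n' | grep -B 1 'MATCH'

# AND logic by chaining greps: both patterns must hit the same line
printf 'red fox\nred hen\nblue fox\n' | grep 'red' | grep 'fox'
```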

Track 5: Xargs Bars (Building Commands)

xargs fundamentals: building command lines from stdin.

Hook

Xargs bars, command cars
Take the input, build the args, execute the stars
Xargs bars, command cars
stdin to arguments, that’s the xargs memoirs

Batch it up, parallel run
-P for the processes, watch the work get done

Verse 1: xargs Fundamentals

xargs takes your input and builds command line args
"echo 'a b c' | xargs mkdir" - three dirs, no farce
stdin becomes arguments, space-delimited default
"find . -name '*.log' | xargs rm" - files in the vault

But spaces in filenames? That’s where things get broke
"xargs -0" null-delimited, that’s the safety cloak
Pair with "find -print0", null byte separates
"find . -print0 | xargs -0 rm" - safe deletes, first-rate

"xargs -n 1" - one argument per command run
"echo '1 2 3' | xargs -n 1 echo" - three echoes done
"xargs -L 1" - one LINE becomes one command call
For multi-word arguments, -L stands tall

"xargs -I {}" - placeholder, put args where you need 'em
"echo 'file' | xargs -I {} cp {} {}.bak" - custom freedom
The braces mark the spot where each arg gets placed
Complex command building, elegantly spaced
What You Learned
  • xargs - build args from stdin

  • -0 - null-delimited input

  • find -print0 | xargs -0 - safe for spaces

  • -n 1 - one arg per command

  • -L 1 - one line per command

  • -I {} - placeholder replacement
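
A sketch of the stdin-to-arguments patterns; the files and the 'demo' directory exist only for this run:

```shell
mkdir -p demo && cd demo
printf 'a.txt b.txt c.txt' | xargs touch          # three files, one command

# -n 1: one argument per invocation
printf '1 2 3' | xargs -n 1 echo

# -I {}: place the argument exactly where you want it
echo 'a.txt' | xargs -I {} cp {} {}.bak           # makes a.txt.bak

# The space-safe pairing: find -print0 into xargs -0
find . -name '*.bak' -print0 | xargs -0 rm
```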

Verse 2: xargs Power

"xargs -P 4" - four parallel processes spawn
CPU cores burning, parallel dawn
"find . -name '*.jpg' | xargs -P 8 -I {} convert {} {}.png"
Eight conversions parallel, speed is the thing

"xargs -p" for prompt, ask before each run
"xargs -t" for trace, show the command, stun
Debug your pipelines, see what’s executing
Safety and visibility, never muting

When there’s no input, xargs still might run
"xargs -r" or "--no-run-if-empty" - stops that, done
GNU xargs feature, empty stdin protection
No accidental runs, production-grade direction

Combine with awk for power: field extraction
"awk '{print $2}' | xargs" - second field in action
"ps aux | awk '/zombie/{print $2}' | xargs kill"
Pipeline of death, zombie processes killed
What You Learned
  • -P N - N parallel processes

  • -p - prompt before execution

  • -t - trace (print commands)

  • -r - no run if empty

  • Combine with awk for field extraction

  • ps | awk | xargs kill - process pipeline

Track 6: find Grind (File Discovery)

find fundamentals: searching, filtering, and executing.

Hook

find grind, file mind
Search the tree, match the type, exec on what you find
find grind, file mind
-name for the pattern, -type for the kind

Dot is current, slash is root
-exec runs the action, curly brace substitute

Verse 1: find Basics

"find . -name '.txt'" - current dir, name match
"find / -name 'passwd'" - root search, wider catch
"find . -iname '.JPG'" - case-insensitive naming
"find . -name '.log' -o -name '.txt'" - OR, combining

"find . -type f" - regular files only please
"find . -type d" - directories, if you freeze
"find . -type l" - symlinks, symbolic finds
"find . -type f -name '*.py'" - combine the binds

"find . -size +100M" - bigger than 100 megs
"find . -size -1k" - smaller than 1k, dregs
"find . -empty" - zero size, empty files or dirs
Clean up the clutter, that’s what this prefers

"find . -user root" - owned by root alone
"find . -perm 777" - world-writable, danger zone
"find . -perm /u+x" - user executable set
Any executable files, security check met
What You Learned
  • . - current directory

  • -name '*.txt' - glob pattern match

  • -iname - case-insensitive name

  • -o - OR operator

  • -type f - regular files

  • -type d - directories

  • -type l - symlinks

  • -size +100M - larger than

  • -size -1k - smaller than

  • -empty - empty files/dirs

  • -user - by owner

  • -perm - by permissions
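
Filtering by name, type, and size in a scratch tree (the 'tree/' layout is invented for the demo):

```shell
mkdir -p tree/sub
touch tree/a.txt tree/sub/b.txt tree/script.py
: > tree/empty.log                                  # zero-byte file

find tree -name '*.txt'                             # glob on the basename
find tree -type d                                   # directories only
find tree -type f -name '*.py'                      # combine predicates
find tree -name '*.log' -o -name '*.py'             # OR two patterns
find tree -empty -type f                            # zero-size files
```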

Verse 2: find Time & Actions

"find . -mtime -1" - modified within one day
"find . -mtime +30" - older than 30, decay
"find . -mmin -60" - sixty minutes recent
"find . -newer reference.txt" - newer than that, decent

"find . -exec cmd {} \;" - execute on each
Curly braces hold the filename, semicolon reach
Backslash-semicolon required, shell escape the end
One command per file found, that’s the pattern, friend

"find . -exec cmd {} +" - plus for batch mode
All files as arguments, single command load
Like xargs behavior, more efficient run
"find . -type f -exec chmod 644 {} +" - batch done

"find . -delete" - careful, dangerous power
Deletes what matches, devastation hour
"find . -type f -name '*.tmp' -delete" - clean the temp
Always test with "-print" first, before you attempt
What You Learned
  • -mtime -1 - modified < 1 day ago

  • -mtime +30 - modified > 30 days ago

  • -mmin - modified minutes

  • -newer file - newer than file

  • -exec cmd {} \; - execute per file

  • -exec cmd {} + - execute batch

  • -delete - delete matches

  • Always test with -print first!
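
The actions in a safe sandbox, with the -print dry run before the delete; the 'work/' directory and filenames are made up:

```shell
mkdir -p work && cd work
touch keep.txt junk1.tmp junk2.tmp

# Per-file exec: \; runs the command once per match
find . -name '*.tmp' -exec ls -l {} \;

# Batch exec: + passes all matches to one invocation (like xargs)
find . -type f -exec chmod 644 {} +

# ALWAYS preview with -print before -delete
find . -name '*.tmp' -print
find . -name '*.tmp' -delete
```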

Track 7: jq Flow (JSON Parsing)

jq fundamentals: navigating, filtering, and transforming JSON.

Hook

jq flow, JSON go
Dot notation drilling, data down below
jq flow, JSON go
Pipe inside the filter, watch the objects grow

Parse the API, extract the key
-r for raw strings, no quotes you see

Verse 1: jq Navigation

"jq '.'" - pretty print, that’s your starting place
"jq '.key'" - grab the key, object interface
"jq '.nested.deep'" - chain the dots to drill
"jq '.array[0]'" - first element, bracket skill

"jq '.array[]'" - iterate every item in the list
No index number needed, each one won’t be missed
"jq '.[]'" - root array, same iteration game
"jq '.[].name'" - grab the name from each, no shame

"jq '.key?'" - question mark, don’t error if it’s null
Optional access, handle missing data, never dull
"jq 'keys'" - list all keys in the object
"jq 'length'" - array length or string, inspect

"jq -r '.name'" - raw output, strips the quotes away
Essential for scripting, bash variables all day
"jq -c '.'" - compact, one line, no pretty spacing
Pipe to other tools, efficient data racing
What You Learned
  • . - identity (whole input)

  • .key - object key access

  • .nested.deep - chained access

  • .array[0] - array index

  • .array[] or .[] - iterate array

  • .key? - optional (no error if null)

  • keys - list object keys

  • length - array/string length

  • -r - raw output (no quotes)

  • -c - compact output
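
Navigation on an inline document; assumes jq is installed, and the JSON payload is invented for the demo:

```shell
json='{"name":"api","tags":["a","b"],"owner":{"team":"infra"}}'

echo "$json" | jq '.'                  # pretty-print everything
echo "$json" | jq -r '.name'           # -r drops the quotes for scripting
echo "$json" | jq '.owner.team'        # chained key access
echo "$json" | jq '.tags[0]'           # array index
echo "$json" | jq '.tags | length'     # length of the array
echo "$json" | jq '.missing?'          # optional: null, not an error
```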

Verse 2: jq Filtering & Transformation

"jq 'select(.status == \"active\")'" - filter where condition’s true
"jq 'select(.age > 21)'" - numeric comparison, passing through
"jq 'map(.name)'" - transform each element, extract what you need
"jq 'map(select(.price < 100))'" - filter then proceed

Pipe inside jq: "jq '.data | .users | .[0]'"
Left to right flow, just like shell, no show
"jq '.users | length'" - how many users exist?
"jq '.items | map(.price) | add'" - sum the list

"jq '{name: .title, id: .uuid}'" - construct new object shape
Pick the fields you want, restructure, escape
"jq '[.items[] | {n: .name}]'" - array of new objects made
Wrap in brackets for array, transformation upgrade

"jq 'to_entries'" - object becomes key-value pairs
"jq 'from_entries'" - back to object, if you care
"jq '.env | to_entries | map(\"\\(.key)=\\(.value)\")'"
Export format, env vars, shell script directive
What You Learned
  • select(.x == "y") - filter objects

  • select(.n > 5) - numeric filter

  • map(.field) - transform each element

  • | - pipe within jq

  • add - sum array

  • {new: .old} - construct objects

  • [expr] - wrap in array

  • to_entries - object to [{key, value}]

  • from_entries - [{key, value}] to object

  • \(expr) - string interpolation

Verse 3: jq Kubernetes & Vault Patterns

"kubectl get secret -o json | jq -r '.data.password | @base64d'"
Decode the base64, secret now displayed
"jq -r '.data | keys[]'" - list all keys in the secret
"jq -r '.data | to_entries[] | \"\\(.key)=\\(.value | @base64d)\"'"

"jq -r '.items[].metadata.name'" - list all resource names
"jq '.items | length'" - count the objects, claims
"jq 'select(.status.phase != \"Running\")'" - pods not running state
Filter for problems, troubleshoot, don’t wait

Vault certificate parsing, JSON from the write:
"jq -r '.data.certificate'" - extract the cert right
"jq -r '.data.private_key'" - key material obtained
"jq -r '.data.ca_chain[]'" - CA chain, each cert contained

"jq -s '.'" - slurp mode, multiple JSONs to array
Line by line objects, unified display
"jq -n '{a:1, b:2}'" - null input, create from scratch
Generate JSON structures, data dispatch
What You Learned
  • @base64d - decode base64

  • @base64 - encode base64

  • @uri - URI encode

  • .items[].metadata.name - k8s name extraction

  • -s - slurp (multiple inputs to array)

  • -n - null input (create JSON)

  • Vault pattern: .data.certificate

  • k8s pattern: .items[] | select()
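
The secret-decoding pattern, run here on a mocked 'kubectl get secret -o json' payload instead of a live cluster; assumes jq 1.6+ for @base64d, and the key/value pairs are invented:

```shell
# Stand-in for: kubectl get secret NAME -o json
mock='{"data":{"password":"c3VwZXJzZWNyZXQ=","user":"YWRtaW4="}}'

echo "$mock" | jq -r '.data.password | @base64d'   # decode one field
echo "$mock" | jq -r '.data | keys[]'              # list the key names
echo "$mock" | jq -r '.data | to_entries[] | "\(.key)=\(.value | @base64d)"'
```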

Track 8: kube Groove (Kubernetes)

kubectl fundamentals: get, describe, exec, logs, and secrets.

Hook

kube groove, container move
Pods and deployments, orchestration smooth
kube groove, container move
kubectl get, describe, exec into

-n for namespace, -A for all
-o yaml for full spec, answer the call

Verse 1: kubectl Basics

"kubectl get pods" - list the pods in current namespace zone
"kubectl get pods -A" - all namespaces, fully shown
"kubectl get pods -n kube-system" - specific namespace pick
"kubectl get pods -o wide" - extra columns, node info thick

"kubectl get pod NAME -o yaml" - full spec revealed
"kubectl get pod NAME -o json" - JSON for jq to wield
"kubectl describe pod NAME" - events at the bottom, debug gold
Why pod won’t start? Describe will unfold

"kubectl get deploy,svc,ing" - multiple resources, comma chain
"kubectl get all" - deployments, services, pods, the main
"kubectl api-resources" - what resources exist to get?
"kubectl explain pod.spec" - documentation, don’t forget

Custom columns for clean output, awk not required:
"kubectl get pods -o custom-columns=NAME:.metadata.name,STATUS:.status.phase"
One-liner table format, exactly what’s desired
What You Learned
  • get pods - list pods

  • -A - all namespaces

  • -n NS - specific namespace

  • -o wide - extra columns

  • -o yaml / -o json - full spec

  • describe - detailed info + events

  • get deploy,svc,ing - multiple types

  • api-resources - list resource types

  • explain - built-in docs

  • -o custom-columns= - custom output

Verse 2: kubectl Exec & Logs

"kubectl exec -it POD — bash" - shell into the container live
"kubectl exec -it POD -c CONTAINER — sh" - multi-container, specify, dive
Double dash separates kubectl args from command passed through
"kubectl exec POD — cat /etc/hosts" - one-off command, no TTY, true

"kubectl logs POD" - stdout from container displayed
"kubectl logs POD -f" - follow mode, live stream parade
"kubectl logs POD --tail=100" - last hundred lines, no more
"kubectl logs POD --since=1h" - last hour, time-based explore

"kubectl logs POD -c CONTAINER" - when multiple containers exist
"kubectl logs POD --previous" - crashed container, logs persist
"kubectl logs -l app=nginx" - all pods with that label match
"kubectl logs deploy/NAME" - from deployment, pods it’ll catch

"kubectl port-forward svc/NAME 8080:80" - localhost tunnel through
"kubectl cp POD:/path local" - copy files, bidirectional too
What You Learned
  • exec -it POD -- bash - interactive shell

  • -c CONTAINER - specify container

  • -- - separates kubectl/command args

  • logs POD - view logs

  • logs -f - follow logs

  • --tail=N - last N lines

  • --since=1h - time-based

  • --previous - crashed container logs

  • -l app=X - by label selector

  • port-forward - local tunnel

  • cp - copy files

Verse 3: kubectl Secrets & Config

"kubectl get secret NAME -o jsonpath='{.data.password}'" - extract single field
Jsonpath for precision, exact data yield
"kubectl get secret NAME -o jsonpath='{.data}' | jq 'map_values(@base64d)'"
Decode all values, secrets unmasked, displayed

"kubectl create secret generic NAME --from-literal=key=value" - imperative create
"kubectl create secret generic NAME --from-file=./secret.txt" - file to secret, great
"kubectl create configmap NAME --from-env-file=.env" - env file loaded in
"kubectl get cm NAME -o yaml" - inspect what’s within

Labels and selectors, the kubernetes glue:
"kubectl get pods -l app=web" - filter to matching crew
"kubectl label pod NAME env=prod" - add label to resource
"kubectl get pods --show-labels" - see them all, of course

"kubectl apply -f manifest.yaml" - declarative, desired state
"kubectl delete -f manifest.yaml" - remove what you create
"kubectl diff -f manifest.yaml" - preview before apply
See the changes first, no surprise
What You Learned
  • -o jsonpath='{.data.X}' - extract field

  • jq 'map_values(@base64d)' - decode all

  • create secret generic - imperative secret

  • --from-literal=k=v - inline value

  • --from-file= - file contents

  • create configmap - config data

  • -l app=X - label selector

  • label - add/modify labels

  • --show-labels - display labels

  • apply -f - declarative apply

  • diff -f - preview changes

Track 9: Vault Vault (Secrets Management)

HashiCorp Vault fundamentals: auth, secrets, PKI, and policies.

Hook

Vault vault, secrets halt
Encrypted storage, zero-trust, no fault
Vault vault, secrets halt
Seal and unseal, token auth, exalt

Paths like a filesystem, policies control
Who can read what secret, that’s the role

Verse 1: Vault Basics

"vault status" - is it sealed or unsealed, check the state
"vault operator unseal" - provide key shards, unseal fate
"vault login" - authenticate, token method default way
"vault login -method=ldap" - enterprise AD, credential play

"vault secrets list" - what engines are mounted, see
"vault secrets enable -path=secret kv-v2" - key-value, version 2, free
"vault kv put secret/myapp user=admin pass=secure" - write the data down
"vault kv get secret/myapp" - retrieve it, secrets found

"vault kv get -format=json secret/myapp" - JSON output for jq parse
"vault kv get -field=pass secret/myapp" - single field, not sparse
"vault kv metadata get secret/myapp" - versions and timestamps see
"vault kv rollback -version=2 secret/myapp" - restore from history

"vault kv list secret/" - list paths under mount like ls
"vault kv delete secret/myapp" - soft delete, can restore, yes
"vault kv destroy -versions=1,2 secret/myapp" - permanent gone
"vault kv undelete -versions=3 secret/myapp" - bring it back, respawn
What You Learned
  • status - check seal state

  • operator unseal - provide unseal key

  • login - authenticate

  • secrets list - list secret engines

  • secrets enable - mount engine

  • kv put path k=v - write secret

  • kv get path - read secret

  • -format=json - JSON output

  • -field=X - single field

  • kv list - list paths

  • kv delete/destroy/undelete - lifecycle

Verse 2: Vault PKI

"vault secrets enable pki" - mount the PKI engine, CA time
"vault write pki/root/generate/internal" - root CA, paradigm
Common name and TTL, your root certificate born
"vault write pki/config/urls" - CRL and issuing, AIA configured, sworn

"vault secrets enable -path=pki_int pki" - intermediate separate
"vault write pki_int/intermediate/generate/internal" - CSR, create
Sign with root: "vault write pki/root/sign-intermediate" - chain complete
"vault write pki_int/intermediate/set-signed" - import, elite

"vault write pki_int/roles/server-role" - define what certs can issue
Allowed domains, TTL, key usage, tissue
"vault write pki_int/issue/server-role common_name=host.domain.com"
Certificate issued, private key, CA chain, here they come

"jq -r '.data.certificate'" - extract the cert from JSON response
"jq -r '.data.private_key'" - the key, immense
"jq -r '.data.ca_chain[]'" - the CA chain for trust, complete
Bundle to file, deploy to endpoint, secure feat
What You Learned
  • pki/root/generate/internal - create root CA

  • pki/config/urls - set CRL/AIA

  • pki_int - intermediate CA path

  • intermediate/generate/internal - create intermediate CSR

  • root/sign-intermediate - sign with root

  • intermediate/set-signed - import signed

  • roles/NAME - define issuance policy

  • issue/ROLE common_name=X - issue cert

  • jq extracts: certificate, private_key, ca_chain

Verse 3: Vault Policies & Auth

Policy documents, HCL format, who can do what:
"path \"secret/data/myapp/*\" { capabilities = [\"read\"] }" - read that spot
"vault policy write myapp-ro ./policy.hcl" - upload the file
"vault policy list" - see what policies compile

"vault auth enable ldap" - LDAP auth method mount
"vault write auth/ldap/config url=ldap://dc.domain.com" - configure the fount
Bind credentials, user DN, group search too
"vault write auth/ldap/groups/admins policies=admin-policy" - group to policy glue

"vault token create -policy=myapp-ro" - token with limited scope
"vault token create -ttl=1h" - one hour lifetime rope
"vault token revoke TOKEN" - invalidate immediately, done
"vault token lookup" - inspect current token, run

Approle for automation, machines need secrets too:
"vault write auth/approle/role/jenkins" - define the role through
"vault read auth/approle/role/jenkins/role-id" - get the role ID
"vault write -f auth/approle/role/jenkins/secret-id" - secret ID, applied
What You Learned
  • Policy: path "X" { capabilities = [...] }

  • Capabilities: create, read, update, delete, list

  • policy write NAME FILE - upload policy

  • auth enable ldap - enable auth method

  • auth/ldap/config - configure LDAP

  • auth/ldap/groups/X policies=Y - map groups

  • token create -policy=X - scoped token

  • token create -ttl=1h - limited lifetime

  • token revoke - invalidate

  • AppRole: role-id + secret-id = token

Track 10: chmod Mod (Permissions)

chmod fundamentals: numeric and symbolic, ownership, special bits.

Hook

chmod mod, permission god
Read write execute, access to the prod
chmod mod, permission god
Seven is all, four is read, binary nod

User group other, three positions clear
Numeric or symbolic, no need to fear

Verse 1: Numeric Permissions

Three digits, three scopes: user, group, other - that’s the frame
Each digit 0-7, sum of r-w-x, the game
Read is 4, write is 2, execute is 1
Add them up per scope, permissions done

"chmod 755 file" - rwx for user, rx for group and other
Read and execute for all, write for owner, no bother
"chmod 644 file" - rw for user, read for rest, common for files
"chmod 700 dir" - only owner in, private styles

"chmod 600 secret.key" - owner read-write, no one else sees
Private keys and credentials, security with ease
"chmod 777 file" - everyone full access, danger zone complete
"chmod 000 file" - no access, total lockdown, feat

Binary makes it clear: rwx = 111 = 7, that’s the math
r-- = 100 = 4, read only path
rw- = 110 = 6, read and write combined
r-x = 101 = 5, read-execute aligned
What You Learned
  • r=4, w=2, x=1

  • Add for each position: user, group, other

  • 755 - rwxr-xr-x (executable)

  • 644 - rw-r--r-- (regular file)

  • 700 - rwx------ (private dir)

  • 600 - rw------- (private file)

  • 777 - rwxrwxrwx (danger!)

  • Binary: rwx=111=7, r--=100=4
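
The numeric modes, checked with the shell's own permission tests; demo.key is a throwaway file:

```shell
touch demo.key

chmod 600 demo.key        # rw------- : owner read/write only
[ -r demo.key ] && [ -w demo.key ] && [ ! -x demo.key ] && echo "600 ok"

chmod 755 demo.key        # rwxr-xr-x : executable for everyone
[ -x demo.key ] && echo "755 ok"

# The arithmetic: 7 = 4(r) + 2(w) + 1(x), 6 = 4 + 2, 5 = 4 + 1
chmod 644 demo.key        # rw-r--r-- : the usual mode for plain files
```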

Verse 2: Symbolic Permissions

Symbolic mode: who-operation-permission, easy to read
"chmod u+x file" - user plus execute, planting the seed
"chmod g-w file" - group minus write, taking away
"chmod o=r file" - other equals read only, that’s what they may

Who: u for user, g for group, o for other, a for all
"chmod a+r file" - everyone can read, answer the call
"chmod ug+rw file" - user and group get read-write powers combined
"chmod o-rwx file" - other gets nothing, access declined

Operations: plus adds, minus removes, equals sets exactly so
"chmod u=rwx file" - user gets all, the rest don’t grow
"chmod go=" - group and other get nothing, blank slate
"chmod +x script.sh" - add execute, shorthand, great

"chmod -R 755 dir/" - recursive, apply to all within
"chmod --reference=other file" - copy permissions, twin
"ls -l" shows permissions: drwxr-xr-x, decode
First char: d=dir, -=file, l=link, the mode
What You Learned
  • u=user, g=group, o=other, a=all

  • + add, - remove, = set exactly

  • u+x - user add execute

  • g-w - group remove write

  • o=r - other set read only

  • a+r - all add read

  • go= - group/other set nothing

  • -R - recursive

  • --reference=FILE - copy from file
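
Symbolic modes mutate the current bits instead of replacing them all; script.sh is invented for the demo:

```shell
touch script.sh
chmod 644 script.sh        # start from a known state

chmod u+x script.sh        # add execute for the owner only
[ -x script.sh ] && echo "owner can execute"

chmod go= script.sh        # strip everything from group and other
chmod a+r script.sh        # then give read back to all
chmod u=rwx script.sh      # = sets the owner bits exactly
```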

Verse 3: Special Bits & Ownership

Special bits: setuid, setgid, sticky - the fourth digit ahead
"chmod 4755 file" - setuid, run as owner instead
When user executes, they become the file’s owner temporarily
"passwd" works this way, root powers momentarily

"chmod 2755 dir" - setgid, new files inherit group, not creator’s own
Shared directories, team collaboration, commonly shown
"chmod 1777 /tmp" - sticky bit, delete only what you own
World-writable but protected, tmp directory known

Four digits: "chmod 4755" - special, user, group, other
"chmod 0755" same as "chmod 755" - zero means no other
In ls output: rwsr-xr-x - s means setuid or setgid set
Lowercase s = executable too, uppercase S = not exec, bet

"chown user:group file" - change ownership, who owns the thing
"chown -R user:group dir/" - recursive, everything
"chgrp group file" - just the group, shortcut way
"stat file" - see all permissions, numeric display
What You Learned
  • 4xxx - setuid (run as owner)

  • 2xxx - setgid (inherit group)

  • 1xxx - sticky (delete own only)

  • s in ls = setuid/setgid + execute

  • S = setuid/setgid without execute

  • t in ls = sticky + execute

  • chown user:group - change owner

  • chgrp group - change group

  • -R - recursive

  • stat - detailed file info
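The lowercase-s versus uppercase-S distinction shows up directly in `ls`-style mode strings. A small sketch on throwaway files (GNU `stat` assumed; the setuid bit here is inert since the file is not a real executable):

```shell
# Sketch: special bits and how they render in the mode string.
d=$(mktemp -d)
chmod 2775 "$d"       # setgid dir: new files inherit the directory's group
stat -c '%a' "$d"     # prints 2775
f="$d/prog"; touch "$f"
chmod 4755 "$f"       # setuid plus rwxr-xr-x
stat -c '%A' "$f"     # prints -rwsr-xr-x  (lowercase s: setuid AND execute)
chmod u-x "$f"
stat -c '%A' "$f"     # prints -rwSr-xr-x  (uppercase S: setuid, no execute)
rm -rf "$d"
```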

Track 11: Process Boss (Process Management)

Process fundamentals: ps, kill, jobs, fg, bg, and signals.

Hook

Process boss, never at a loss
ps shows the running, kill handles the toss
Process boss, never at a loss
Background, foreground, jobs across

PID identifies, signals communicate
SIGTERM asks nice, SIGKILL seals fate

Verse 1: ps & Viewing Processes

"ps aux" - all users, all processes, BSD style the way
"ps -ef" - System V format, full listing display
Both show everything, different column layout, pick your mode
"ps aux | grep nginx" - find specific process, common code

"ps aux" columns: USER PID %CPU %MEM VSZ RSS TTY STAT START TIME CMD
"ps -ef" columns: UID PID PPID C STIME TTY TIME CMD
PPID is parent PID, who spawned this process, family tree
"ps --forest" shows the hierarchy visually, see

"ps -p PID" - info on specific PID you know
"ps -u username" - processes for that user, their show
"ps -C nginx" - by command name, no grep required
"ps -o pid,ppid,cmd" - custom columns, as desired

"pgrep nginx" - just PIDs matching the pattern, clean
"pgrep -u root sshd" - by user AND name, filter routine
"pidof nginx" - PIDs of exact command name match
"top" or "htop" - real-time view, process dispatch
What You Learned
  • ps aux - BSD style, all processes

  • ps -ef - System V style

  • USER, PID, %CPU, %MEM - key columns

  • PPID - parent process ID

  • --forest - tree view

  • -p PID - specific process

  • -u USER - by user

  • -C NAME - by command name

  • -o cols - custom columns

  • pgrep - pattern match PIDs

  • pidof - exact name match
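The `-o` and `-p` options combine nicely for inspecting a known PID - for instance, the current shell itself. A minimal sketch (the trailing `=` after a column name is a standard ps trick to suppress the header):

```shell
# Sketch: custom ps columns, inspecting this very shell.
echo "shell PID: $$"
ps -o pid,ppid,comm -p $$     # three chosen columns for one PID
ppid=$(ps -o ppid= -p $$)     # trailing '=' blanks the header: raw value only
ps -o comm= -p "$ppid"        # the parent's command name, walking up the tree
```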

Verse 2: Signals & kill

Signals tell processes what to do, numbered or named they go
"kill PID" - sends SIGTERM, that’s signal 15, you know
SIGTERM asks nicely: "please shut down, clean up your state"
Process can catch it, handle gracefully, close the gate

"kill -9 PID" - SIGKILL, that’s the nuclear choice
Cannot be caught or ignored, immediate, no voice
Use only when SIGTERM fails, process won’t respond
Forceful termination, the ultimate wand

"kill -l" - list all signals, there are many to learn
SIGHUP (1), SIGINT (2), SIGQUIT (3), each has its turn
SIGINT is Ctrl+C, interrupt from the keyboard sent
SIGHUP is hangup, daemons reload config, that’s the intent

"killall nginx" - all processes with that name, gone
"pkill -f 'python script'" - full command line match, drawn
"pkill -u user" - all processes owned by that user, terminated
"kill -0 PID" - check if exists, don’t kill, just validated
What You Learned
  • kill PID - send SIGTERM (15)

  • kill -9 PID - SIGKILL (force)

  • kill -l - list signals

  • SIGTERM (15) - graceful shutdown

  • SIGKILL (9) - force kill

  • SIGHUP (1) - hangup/reload

  • SIGINT (2) - interrupt (Ctrl+C)

  • killall NAME - kill by name

  • pkill -f PATTERN - kill by pattern

  • kill -0 PID - check if alive
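The TERM-first, KILL-last escalation from the verse fits in a few lines. A sketch against a throwaway `sleep` standing in for a stuck process:

```shell
# Sketch: ask nicely with SIGTERM, escalate to SIGKILL only if needed.
sleep 300 &                          # stand-in for a misbehaving process
pid=$!
kill "$pid"                          # SIGTERM (15): ask nicely first
sleep 1
if kill -0 "$pid" 2>/dev/null; then  # -0 probes existence, sends no signal
    kill -9 "$pid"                   # SIGKILL (9): only when TERM is ignored
fi
wait "$pid" 2>/dev/null || true      # reap; wait reports the kill status
kill -0 "$pid" 2>/dev/null || echo "gone"   # prints gone
```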

Verse 3: Jobs, Background, Foreground

Run command with "&" at the end, background it goes
"sleep 100 &" - returns immediately, shell still flows
"jobs" - list background jobs in this shell session active
"fg" - bring last background job to foreground, interactive

"fg %1" - job number one specifically brought front
"bg %1" - continue job 1 in background after stunt
Ctrl+Z suspends foreground job, SIGTSTP sent
"bg" resumes it in background, that’s what’s meant

"nohup command &" - immune to hangup, survives logout
Session ends, process continues, no doubt
"disown %1" - remove job from shell’s job table, orphan it
Shell exits, job lives, disowned but legit

"command &>/dev/null &" - background, silence all output, daemon style
"setsid command" - new session, fully detached, versatile
"screen" or "tmux" - terminal multiplexer, persistent sessions reign
SSH disconnect, reattach, work remains
What You Learned
  • cmd & - run in background

  • jobs - list background jobs

  • fg - bring to foreground

  • fg %N - specific job N

  • bg - continue in background

  • Ctrl+Z - suspend (SIGTSTP)

  • nohup cmd & - survive logout

  • disown - remove from job table

  • &>/dev/null & - silent background

  • screen/tmux - persistent sessions
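The nohup-and-background pattern can be sketched with output captured to a temp log (in real use you would log out instead of `wait`; the `wait` just keeps the demo linear):

```shell
# Sketch: a background job that would survive logout, log to a temp file.
log=$(mktemp)
nohup sh -c 'echo started; sleep 1; echo finished' >"$log" 2>&1 &
pid=$!
wait "$pid"
cat "$log"             # prints started, then finished
rm -f "$log"
```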

Track 12: ssh Splash (Remote Access)

ssh fundamentals: connection, keys, tunneling, and config.

Hook

ssh splash, secure bash
Encrypted tunnel, remote shell flash
ssh splash, secure bash
Public key, private key, cryptographic cache

Port 22, TCP connect
Authenticate, encrypted, shell direct

Verse 1: ssh Connection Basics

"ssh user@host" - connect remote, simple and clean
"ssh host" - uses current username, routine
"ssh -p 2222 user@host" - non-standard port, specify
"ssh user@host command" - run one command, don’t stay, fly by

"ssh -i ~/.ssh/mykey user@host" - specific private key file
"ssh -v user@host" - verbose debugging, see the trial
"ssh -vvv" - triple verbose, maximum debug detail
Connection problems? This reveals the tale

"ssh-keygen" - generate key pair, public and private, two
"ssh-keygen -t ed25519" - modern algorithm, the new
"ssh-copy-id user@host" - install public key, passwordless after
~/.ssh/authorized_keys on remote, secure access hereafter

"ssh-add ~/.ssh/mykey" - add key to agent, no password prompt each time
"ssh-add -l" - list keys in agent, loaded prime
"ssh -A user@host" - agent forwarding, keys follow along
Chain through jumpbox, authentication strong
What You Learned
  • ssh user@host - basic connect

  • -p PORT - non-standard port

  • ssh host command - run and exit

  • -i KEYFILE - specify key

  • -v / -vvv - verbose debug

  • ssh-keygen - generate keys

  • -t ed25519 - key algorithm

  • ssh-copy-id - install public key

  • ssh-add - add key to agent

  • -A - agent forwarding
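Key generation runs fine without any remote host, so it makes a self-contained sketch (the empty passphrase via `-N ''` is for the demo only, not for real keys; the directory and comment are illustrative):

```shell
# Sketch: non-interactive ed25519 key pair into a throwaway directory.
dir=$(mktemp -d)
ssh-keygen -t ed25519 -N '' -C 'demo@example' -f "$dir/demo_key" >/dev/null
ls "$dir"                          # demo_key (private) and demo_key.pub (public)
cut -d' ' -f1 "$dir/demo_key.pub"  # prints ssh-ed25519
rm -rf "$dir"
```

The `.pub` file is what `ssh-copy-id` appends to the remote authorized_keys; the private half never leaves your machine.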

Verse 2: ssh Tunneling

"ssh -L 8080:localhost:80 user@host" - local port forward, traffic go
Local 8080 connects to remote’s localhost 80, through the tunnel flow
Access remote services on local ports, firewall bypass
"ssh -L 8080:internal:80 user@jumpbox" - reach internal, VPN pass

"ssh -R 9090:localhost:3000 user@host" - reverse tunnel, expose local out
Remote port 9090 reaches your localhost 3000, shout
Share local dev server, webhook testing, NAT traversal scene
Remote callbacks hit your local machine, unseen

"ssh -D 1080 user@host" - dynamic SOCKS proxy spawned
Browser through the tunnel, traffic bonded, all traffic conned
Configure SOCKS proxy localhost:1080, browse as if you’re there
Geographic bypass, secure public wifi, anywhere

"ssh -N -f -L 8080:localhost:80 user@host" - background, no shell
-N no command, -f fork background, tunnel swell
"ssh -J jumpbox user@target" - jump through, proxy command built-in
-J for jumphost, multi-hop, destination win
What You Learned
  • -L local:host:remote - local forward

  • Local port → tunnel → remote service

  • -R remote:host:local - reverse forward

  • Remote port → tunnel → local service

  • -D port - SOCKS proxy

  • -N - no command (tunnel only)

  • -f - background

  • -J jumphost - proxy jump

  • Chain: local → jump → target

Verse 3: ssh Config

~/.ssh/config - configure hosts, no long commands to type
"Host myserver" - alias definition, setup ripe
"    HostName 192.168.1.100" - the actual IP or domain
"    User admin" - default user, no more claimin'

"    Port 2222" - custom port remembered
"    IdentityFile ~/.ssh/special_key" - key for this host, rendered
Now just "ssh myserver" - all options applied
No more "ssh -p 2222 -i key admin@192.168.1.100", simplified

"Host *" - wildcard, apply to ALL connections
"    ServerAliveInterval 60" - keepalive, prevent disconnections
"    ServerAliveCountMax 3" - three failures before timeout
"    AddKeysToAgent yes" - auto-add keys, no doubt

"Host jump" - define the jumpbox
"Host internal" - define internal target, locks
"    ProxyJump jump" - automatically hop through jump first
Multi-hop configured, single command, versatile burst

"Host *.internal.domain" - wildcard patterns allowed
"    User ops" - ops user for all internal, crowd
Config is powerful, complex setups, one-word command
"ssh internal" expands to full connection, grand
What You Learned
  • ~/.ssh/config - client config

  • Host NAME - define alias

  • HostName - actual address

  • User - default username

  • Port - custom port

  • IdentityFile - key file

  • Host * - defaults for all

  • ServerAliveInterval - keepalive

  • ProxyJump - jump host

  • Wildcards: Host *.domain
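The options scattered through this verse combine into one file; hostnames, addresses, and key paths below are illustrative, not prescribed:

```
# ~/.ssh/config (illustrative values)
Host myserver
    HostName 192.168.1.100
    User admin
    Port 2222
    IdentityFile ~/.ssh/special_key

Host internal
    HostName internal.example
    ProxyJump jump

Host *
    ServerAliveInterval 60
    ServerAliveCountMax 3
    AddKeysToAgent yes
```

With this in place, "ssh myserver" and "ssh internal" carry every option implicitly; the `Host *` block supplies defaults for all connections.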

Quick Reference Card

| Concept | Mnemonic | Example |
| --- | --- | --- |
| Pipe | "Left makes, right takes" | `ls \| grep txt` |
| stdout | "Channel 1 is the one" | `cmd > file` |
| stderr | "Errors go to 2" | `cmd 2> errors.log` |
| Exit 0 | "Zero hero" | Success = 0 |
| `&&` | "And then, if win" | `mkdir d && cd d` |
| `\|\|` | "Or else, on fail" | `cd d \|\| mkdir d` |
| AWK `$1` | "Dollar gets the column" | `awk '{print $1}'` |
| AWK `NR` | "Number Record = row" | `awk 'NR==5'` |
| AWK `NF` | "Number Fields = cols" | `awk '{print $NF}'` |
| sed `s///` | "Substitute slash slash slash" | `sed 's/old/new/g'` |
| sed `-n` | "Silent until told" | `sed -n '5p'` |
| grep `-v` | "V for reverse" | `grep -v pattern` |
| grep `-r` | "R for recursive" | `grep -r pattern .` |
| xargs `-0` | "Zero for null bytes" | `find -print0 \| xargs -0` |
| xargs `-P` | "P for parallel" | `xargs -P 4` |
| find `-type f` | "F for file" | `find . -type f` |
| find `-exec` | "Exec with braces" | `find . -exec cmd {} \;` |
| jq `.key` | "Dot drills down" | `jq '.data.name'` |
| jq `-r` | "R for raw, no quotes" | `jq -r '.name'` |
| kubectl `-A` | "A for all namespaces" | `kubectl get pods -A` |
| kubectl `-o yaml` | "O for output format" | `kubectl get pod X -o yaml` |
| chmod 755 | "Seven-five-five: rwx-rx-rx" | `chmod 755 script.sh` |
| chmod 644 | "Six-four-four: rw-r-r" | `chmod 644 file.txt` |
| kill PID | "Kill asks nicely (TERM)" | `kill 1234` |
| kill -9 | "Nine is the nuclear option" | `kill -9 1234` |
| ssh -L | "L for Local forward" | `ssh -L 8080:localhost:80` |
| ssh -R | "R for Reverse forward" | `ssh -R 9090:localhost:3000` |


Attribution

Inspired by YTCracker (Bryce Case Jr.) - "Subnet Mask Off"

Style: Educational nerdcore for CLI memorization