Discrete Math

Functions as Mappings

Types of functions
Injection (one-to-one):
  f(a) = f(b) implies a = b
  No two inputs produce the same output
  |A| <= |B|

Surjection (onto):
  For every b in B, there exists a in A with f(a) = b
  Every output is hit
  |A| >= |B|

Bijection (one-to-one and onto):
  Both injective and surjective
  |A| = |B|
  Inverse exists

Example:
  f: Z → Z, f(x) = 2x         Injective, not surjective (misses odd numbers)
  f: Z → N, f(x) = |x|        Surjective (taking 0 ∈ N), not injective (f(3) = f(-3))
  f: Z → Z, f(x) = x + 1      Bijective
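These definitions can be checked mechanically on finite domains. A minimal sketch (the helper names are illustrative, and Z is truncated to a finite sample, so the checks are only suggestive for infinite domains):

```python
def is_injective(f, domain):
    """No two distinct inputs map to the same output."""
    seen = {}
    for a in domain:
        if f(a) in seen and seen[f(a)] != a:
            return False
        seen[f(a)] = a
    return True

def is_surjective(f, domain, codomain):
    """Every element of the codomain is hit by some input."""
    image = {f(a) for a in domain}
    return set(codomain) <= image

Z = range(-10, 11)                      # finite stand-in for Z
assert is_injective(lambda x: 2 * x, Z)              # f(x) = 2x: injective
assert not is_surjective(lambda x: 2 * x, Z, Z)      # ...but misses odd numbers
assert not is_injective(abs, Z)                      # |3| = |-3|
assert is_surjective(abs, Z, range(0, 11))           # every n in 0..10 is hit
```

Note that surjectivity of f(x) = x + 1 onto the same truncated range fails at the edge; that is an artifact of cutting Z off, not of the function.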

Floor and ceiling
Floor:    ⌊x⌋ = largest integer <= x
Ceiling:  ⌈x⌉ = smallest integer >= x

⌊3.7⌋ = 3       ⌈3.7⌉ = 4
⌊-2.3⌋ = -3     ⌈-2.3⌉ = -2
⌊5⌋ = 5         ⌈5⌉ = 5

Properties:
  ⌊x⌋ <= x < ⌊x⌋ + 1
  ⌈x⌉ - 1 < x <= ⌈x⌉
  ⌊n/2⌋ + ⌈n/2⌉ = n   (for any integer n)
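Python's math module implements both functions, so the table and properties above can be spot-checked directly — including the identity for negative integers:

```python
import math

# The worked examples from the table
assert math.floor(3.7) == 3 and math.ceil(3.7) == 4
assert math.floor(-2.3) == -3 and math.ceil(-2.3) == -2   # floor goes toward -inf
assert math.floor(5) == 5 and math.ceil(5) == 5           # integers are fixed points

# floor(n/2) + ceil(n/2) = n holds for every integer n
for n in range(-100, 101):
    assert math.floor(n / 2) + math.ceil(n / 2) == n
```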

In CS:  integer division truncates toward zero in most languages, which matches floor for non-negative operands
  7 / 2 = 3  (floor)
  (for negatives, languages differ: -7 / 2 is -3 in C, -4 in Python)
  Number of pages for n items, k per page: ⌈n/k⌉
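The page-count formula is usually computed in pure integer arithmetic: for positive k, ⌈n/k⌉ = (n + k − 1) // k, which avoids floating point entirely. A sketch (the function name is illustrative):

```python
import math

def pages(n, k):
    """Number of pages for n items, k per page: ceil(n / k) without floats."""
    return (n + k - 1) // k

assert pages(7, 2) == 4       # 7 items, 2 per page -> 4 pages
assert pages(10, 5) == 2      # exact fit: no extra page
assert pages(0, 5) == 0
# Agrees with math.ceil over a range of inputs
assert all(pages(n, k) == math.ceil(n / k) for n in range(0, 50) for k in range(1, 10))
```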

Mathematical Induction

Principle
To prove P(n) for all n >= n0:

1. BASE CASE:    Prove P(n0) is true
2. INDUCTIVE STEP: Assume P(k) is true (inductive hypothesis)
                   Prove P(k+1) is true

If both hold, then P(n) is true for all n >= n0.

Analogy: dominoes
  Base case = knock over the first domino
  Inductive step = each domino knocks over the next
  Result = all dominoes fall

Example: sum of first n integers
Claim: sum_{i=1}^{n} i = n(n+1)/2

Base case (n=1):
  Left side: 1
  Right side: 1(2)/2 = 1  ✓

Inductive step: Assume sum_{i=1}^{k} i = k(k+1)/2

  sum_{i=1}^{k+1} i = [sum_{i=1}^{k} i] + (k+1)
                     = k(k+1)/2 + (k+1)          (inductive hypothesis)
                     = k(k+1)/2 + 2(k+1)/2
                     = (k+1)(k+2)/2               ✓
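Induction proves the claim for all n; a brute-force check over a finite range is still a useful sanity test of the algebra:

```python
def sum_formula(n):
    """Closed form n(n+1)/2 (integer division is exact: n(n+1) is always even)."""
    return n * (n + 1) // 2

# Compare the closed form against the literal sum 1 + 2 + ... + n
for n in range(1, 1001):
    assert sum(range(1, n + 1)) == sum_formula(n)
```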

Strong induction
Instead of assuming only P(k), assume P(n0), P(n0+1), ..., P(k)
(all previous cases, not just the immediate predecessor)

Useful when the recurrence depends on more than one previous term
Example: Fibonacci, binary representation existence
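The binary-representation example maps directly onto strong recursion: the representation of n is built from the representation of ⌊n/2⌋, a strictly smaller case that is not the immediate predecessor n − 1. A sketch:

```python
def to_binary(n):
    """Binary string of n >= 1, mirroring the strong-induction proof:
    assume correctness for ALL smaller values, in particular n // 2."""
    if n == 1:                              # base case
        return "1"
    return to_binary(n // 2) + str(n % 2)   # inductive step uses n // 2, not n - 1

assert to_binary(1) == "1"
assert to_binary(6) == "110"
# Agrees with Python's built-in bin() (which prefixes "0b")
assert all(to_binary(n) == bin(n)[2:] for n in range(1, 200))
```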

Recurrence Relations

Common recurrences in CS
Linear search:    T(n) = T(n-1) + c           → O(n)
Binary search:    T(n) = T(n/2) + c           → O(log n)
Merge sort:       T(n) = 2T(n/2) + cn         → O(n log n)
Naive multiply:   T(n) = 4T(n/2) + cn         → O(n^2)
Strassen:         T(n) = 7T(n/2) + cn^2       → O(n^2.807)

Master theorem (for T(n) = aT(n/b) + O(n^d))
Compare a vs b^d:

Case 1: a < b^d  → T(n) = O(n^d)            work dominates
Case 2: a = b^d  → T(n) = O(n^d * log n)    balanced
Case 3: a > b^d  → T(n) = O(n^(log_b(a)))   recursion dominates

Examples:
  Merge sort: a=2, b=2, d=1 → 2 = 2^1 → Case 2 → O(n log n)
  Binary search: a=1, b=2, d=0 → 1 = 2^0 → Case 2 → O(log n)
  Strassen: a=7, b=2, d=2 → 7 > 4 → Case 3 → O(n^(log_2(7))) ≈ O(n^2.807)
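The case analysis is mechanical enough to code. A minimal sketch (the function name is illustrative) that classifies T(n) = aT(n/b) + O(n^d) by comparing a with b^d:

```python
import math

def master(a, b, d):
    """Classify T(n) = a*T(n/b) + O(n^d) by comparing a with b**d."""
    if a < b ** d:
        return f"O(n^{d})"                   # Case 1: work at the root dominates
    if a == b ** d:
        return f"O(n^{d} log n)"             # Case 2: balanced across levels
    return f"O(n^{math.log(a, b):.3f})"      # Case 3: leaves dominate, n^(log_b a)

assert master(2, 2, 1) == "O(n^1 log n)"     # merge sort
assert master(1, 2, 0) == "O(n^0 log n)"     # binary search
assert master(7, 2, 2) == "O(n^2.807)"       # Strassen
```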

Solving recurrences by unrolling
T(n) = 2T(n/2) + n

Unroll:
  T(n)   = 2T(n/2) + n
         = 2[2T(n/4) + n/2] + n = 4T(n/4) + 2n
         = 4[2T(n/8) + n/4] + 2n = 8T(n/8) + 3n
  ...
         = 2^k * T(n/2^k) + k*n

Stop when n/2^k = 1, so k = log_2(n):
  T(n) = n * T(1) + n * log_2(n) = O(n log n)
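For powers of two the unrolled closed form is exact, not just asymptotic: taking T(1) = 1, T(n) = n + n·log_2(n). A quick check against the recurrence itself:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """T(n) = 2T(n/2) + n with T(1) = 1, evaluated for n a power of two."""
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

# Closed form: T(2^k) = 2^k + 2^k * k, i.e. n*T(1) + n*log2(n)
for k in range(11):
    n = 2 ** k
    assert T(n) == n + n * k
```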

Modular Arithmetic

Basics
a mod n = remainder when a is divided by n
a ≡ b (mod n)  means  n | (a - b),  i.e. a and b leave the same remainder on division by n

Properties:
  (a + b) mod n = ((a mod n) + (b mod n)) mod n
  (a * b) mod n = ((a mod n) * (b mod n)) mod n
  (a - b) mod n = ((a mod n) - (b mod n) + n) mod n

Example: clock arithmetic (mod 12)
  10 + 5 = 15 ≡ 3 (mod 12)
  7 * 8 ≡ 56 ≡ 8 (mod 12)
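The three distribution properties can be spot-checked exhaustively for small moduli:

```python
# Exhaustive check of the mod properties for small moduli and signed operands.
# (Python's % already returns a result in [0, n) for n > 0, so the "+ n" in
# the subtraction rule is redundant in Python, but still correct.)
for n in range(1, 13):
    for a in range(-20, 21):
        for b in range(-20, 21):
            assert (a + b) % n == ((a % n) + (b % n)) % n
            assert (a * b) % n == ((a % n) * (b % n)) % n
            assert (a - b) % n == ((a % n) - (b % n) + n) % n
```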

Applications in CS
Hash functions:      h(k) = k mod m
Cryptography:        RSA uses modular exponentiation
Checksums:           ISBN, credit card (Luhn) use mod arithmetic
Circular buffers:    index = (head + offset) % size
Round-robin:         next = (current + 1) % n
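The RSA entry relies on fast modular exponentiation: computing (base^exp) mod n by repeated squaring, reducing mod n at every step so intermediate numbers stay small. Python's built-in pow(base, exp, mod) does this; a hand-rolled sketch of the same square-and-multiply idea:

```python
def mod_pow(base, exp, mod):
    """Square-and-multiply: O(log exp) multiplications, each reduced mod `mod`."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                     # low bit set: fold in this power of base
            result = (result * base) % mod
        base = (base * base) % mod      # square for the next bit
        exp >>= 1
    return result

assert mod_pow(2, 10, 1000) == 24              # 1024 mod 1000
assert mod_pow(7, 128, 13) == pow(7, 128, 13)  # matches the built-in
```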

Modular arithmetic in shell
# Modulo operator in bash
echo $(( 17 % 5 ))              # 2
echo $(( 100 % 7 ))             # 2

# Circular index
size=5
for i in {0..9}; do
    echo "i=$i → slot=$(( i % size ))"
done

Asymptotic Growth

Big-O family
f(n) = O(g(n))    f grows at most as fast as g       (upper bound)
f(n) = Ω(g(n))    f grows at least as fast as g      (lower bound)
f(n) = Θ(g(n))    f grows at the same rate as g      (tight bound)
f(n) = o(g(n))    f grows strictly slower than g      (strict upper)
f(n) = ω(g(n))    f grows strictly faster than g      (strict lower)

Formal: f(n) = O(g(n)) iff ∃ c > 0, n0 such that f(n) <= c*g(n) for all n >= n0
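The definition is existential: to show f = O(g), exhibit one concrete pair (c, n0). For f(n) = 3n + 10 and g(n) = n, the witnesses c = 4, n0 = 10 work, since 3n + 10 <= 4n exactly when n >= 10 (the witness values here are illustrative, checked over a finite range):

```python
def f(n):
    return 3 * n + 10   # example function: 3n + 10

def g(n):
    return n            # claimed bound: O(n)

c, n0 = 4, 10           # witnesses: 3n + 10 <= 4n  <=>  n >= 10
assert all(f(n) <= c * g(n) for n in range(n0, 10_000))
assert f(9) > c * g(9)  # below n0 the inequality may fail, and here it does
```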

Growth hierarchy
1 < log(log n) < log n < sqrt(n) < n < n log n < n^2 < n^3 < 2^n < n! < n^n

Practical scale (n = 1,000,000):
  O(1)        = 1 operation
  O(log n)    ≈ 20 operations
  O(n)        = 1,000,000 operations
  O(n log n)  ≈ 20,000,000 operations
  O(n^2)      = 10^12 operations (too slow)
  O(2^n)      = impossibly large