Pagination Patterns

Different APIs use different pagination strategies. Know them all.

Pagination Types

Type           How It Works                 Used By
------------------------------------------------------------------
Offset/Page    ?page=2&size=100             ISE ERS, most REST APIs
Cursor         ?cursor=abc123               Modern APIs, GraphQL
Link Header    Link: <url>; rel="next"      GitHub, some REST APIs
Oracle FETCH   FETCH FIRST N ROWS ONLY      ISE DataConnect

Offset/Page Pattern

# ISE ERS - page/size pattern (ERS caps size at 100 per page)
PAGE=1
SIZE=100
while true; do
  RESP=$(curl -ks -u "$ISE_USER:$ISE_PASS" -H 'Accept: application/json' \
    "https://$ISE_HOST:9060/ers/config/endpoint?page=$PAGE&size=$SIZE")

  COUNT=$(echo "$RESP" | jq '.SearchResult.resources | length')
  echo "$RESP" | jq -r '.SearchResult.resources[].name'

  [[ $COUNT -lt $SIZE ]] && break
  ((PAGE++))
done
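
ERS search responses also carry a SearchResult.total count, so you can compute the page count up front instead of probing until a short page arrives. A sketch, assuming the same $ISE_USER/$ISE_PASS/$ISE_HOST variables as above:

```shell
# ERS reports SearchResult.total - compute the page count up front
TOTAL=$(curl -ks -u "$ISE_USER:$ISE_PASS" -H 'Accept: application/json' \
  "https://$ISE_HOST:9060/ers/config/endpoint?size=1" \
  | jq '.SearchResult.total')
TOTAL=${TOTAL:-0}                 # guard against a failed request
PAGES=$(( (TOTAL + 99) / 100 ))   # ceil(TOTAL / 100)
for PAGE in $(seq 1 "$PAGES"); do
  curl -ks -u "$ISE_USER:$ISE_PASS" -H 'Accept: application/json' \
    "https://$ISE_HOST:9060/ers/config/endpoint?page=$PAGE&size=100" \
    | jq -r '.SearchResult.resources[].name'
done
```

This trades one extra request for a fixed iteration count, which also makes the loop trivially parallelizable.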

Cursor Pattern

# Generic cursor/token pattern
CURSOR=""
while true; do
  if [[ -z "$CURSOR" ]]; then
    RESP=$(curl -ks -H "Authorization: Bearer $TOKEN" \
      "https://api.example.com/items?limit=100")
  else
    RESP=$(curl -ks -H "Authorization: Bearer $TOKEN" \
      "https://api.example.com/items?limit=100&cursor=$CURSOR")
  fi

  echo "$RESP" | jq -r '.items[]'
  CURSOR=$(echo "$RESP" | jq -r '.next_cursor // empty')
  [[ -z "$CURSOR" ]] && break
done
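
Link Header Pattern

The table above also lists Link-header pagination (RFC 8288, GitHub style), which has no example in this section. A minimal sketch, assuming a Bearer token in $TOKEN; the endpoint URL and the .items[].id field are placeholders, and the point is parsing the Link: <url>; rel="next" header:

```shell
# Follow Link headers: keep requesting the rel="next" URL until absent
link_next() {
  # Print the rel="next" URL from a Link header line, or nothing
  sed -n -E 's/.*<([^>]+)>; *rel="next".*/\1/p' <<<"$1" | head -n1
}

URL="https://api.example.com/items?per_page=100"
while [[ -n "$URL" ]]; do
  HDRS=$(mktemp)
  curl -ksm 30 -D "$HDRS" -H "Authorization: Bearer $TOKEN" "$URL" \
    | jq -r '.items[].id'
  URL=$(link_next "$(grep -i '^link:' "$HDRS")")   # empty on the last page
  rm -f "$HDRS"
done
```

curl -D writes the response headers to a file so the body can still stream straight into jq.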

Oracle FETCH Pattern (DataConnect)

# ISE DataConnect (Oracle) - FETCH FIRST N ROWS
netapi ise dc query "
  SELECT USERNAME, CALLING_STATION_ID, NAS_IP_ADDRESS
  FROM RADIUS_AUTHENTICATIONS
  WHERE TIMESTAMP_TIMEZONE > SYSDATE - 1
  ORDER BY TIMESTAMP_TIMEZONE DESC
  FETCH FIRST 100 ROWS ONLY
"

Oracle OFFSET Pagination

# ISE DataConnect (Oracle) - OFFSET pagination
OFFSET=0
LIMIT=100
while true; do
  RESULT=$(netapi ise dc --format json query "
    SELECT USERNAME, CALLING_STATION_ID
    FROM RADIUS_AUTHENTICATIONS
    ORDER BY TIMESTAMP_TIMEZONE DESC
    OFFSET $OFFSET ROWS FETCH NEXT $LIMIT ROWS ONLY
  ")

  COUNT=$(echo "$RESULT" | jq 'length')
  echo "$RESULT" | jq -r '.[] | "\(.USERNAME) \(.CALLING_STATION_ID)"'

  [[ $COUNT -lt $LIMIT ]] && break
  ((OFFSET += LIMIT))
done
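
Note that OFFSET makes Oracle read and discard every skipped row, so deep pages get progressively slower. A keyset (seek) variant avoids this by filtering past the last value already seen; a sketch of the query shape, assuming the smallest TIMESTAMP_TIMEZONE from the previous page is bound as :last_ts (ties can drop rows unless the sort key is unique):

```sql
-- Keyset (seek) pagination: filter past the last row seen instead of
-- skipping rows, so every page costs the same regardless of depth.
SELECT USERNAME, CALLING_STATION_ID, TIMESTAMP_TIMEZONE
FROM RADIUS_AUTHENTICATIONS
WHERE TIMESTAMP_TIMEZONE < :last_ts
ORDER BY TIMESTAMP_TIMEZONE DESC
FETCH FIRST 100 ROWS ONLY
```

Omit the WHERE clause on the first page, then substitute each page's boundary value into the next query via the same netapi ise dc query wrapper shown above.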

Collect All Pages

# Collect all pages into single array
ALL_ITEMS='[]'
PAGE=1
SIZE=100
while true; do
  RESP=$(curl -ks -u "$USER:$PASS" \
    "https://api.example.com/items?page=$PAGE&size=$SIZE")

  ITEMS=$(echo "$RESP" | jq '.items')
  ALL_ITEMS=$(echo "$ALL_ITEMS" "$ITEMS" | jq -s 'add')

  COUNT=$(echo "$ITEMS" | jq 'length')
  [[ $COUNT -lt $SIZE ]] && break
  ((PAGE++))
done

echo "Total items: $(echo "$ALL_ITEMS" | jq 'length')"

Parallel Page Fetching

# Parallel page fetching (when total known)
TOTAL=$(curl -ks -u "$USER:$PASS" "https://api.example.com/items?page=1&size=1" \
  | jq '.total')
SIZE=100
PAGES=$(( (TOTAL + SIZE - 1) / SIZE ))

export USER PASS   # the bash -c children spawned by xargs need these
seq 1 "$PAGES" | xargs -P4 -I{} bash -c '
  curl -ks -u "$USER:$PASS" "https://api.example.com/items?page={}&size=100" \
    | jq -r ".items[].id"
' > all_ids.txt    # -P4 runs 4 fetches at once; page order in the file is not guaranteed

Best Practices

  1. Respect rate limits - Add sleep between pages if needed

  2. Use maximum page size - Fewer requests = faster

  3. Handle empty pages - Check count before processing

  4. Log progress - echo "Page $PAGE..." >&2 for visibility

  5. Timeout protection - Set max iterations to prevent infinite loops
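
The practices above combine into a defensive loop skeleton. A sketch against a hypothetical page/size endpoint; MAX_PAGES and the 0.2s delay are illustrative values, not API requirements:

```shell
# Defensive pagination loop: page cap, progress logging, polite delay
MAX_PAGES=1000       # hard stop so a misbehaving API can't loop forever
PAGE=1
SIZE=100
while (( PAGE <= MAX_PAGES )); do
  RESP=$(curl -ksm 30 -u "$USER:$PASS" \
    "https://api.example.com/items?page=$PAGE&size=$SIZE")
  COUNT=$(echo "$RESP" | jq '.items | length' 2>/dev/null)
  COUNT=${COUNT:-0}  # treat a failed or non-JSON response as an empty page
  echo "Page $PAGE: $COUNT items" >&2
  echo "$RESP" | jq -r '.items[].id' 2>/dev/null
  (( COUNT < SIZE )) && break
  (( PAGE++ ))
  sleep 0.2          # stay under the API's rate limit
done
(( PAGE > MAX_PAGES )) && echo "WARN: stopped at page cap" >&2
```

The empty-page default matters: if curl fails and COUNT were left blank, the arithmetic comparison would error and the loop would spin until the cap.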