You need a comprehensive network lab for hands-on work across the full Cisco portfolio (WLC, ISE, FTD/FMC, ASA, routers, switches), multi-vendor (Palo Alto, Fortinet, Arista, Juniper), Linux, and modern API/DevOps/SecOps workflows. You hold 4 CCNPs with 10 years of experience and active CCO/SmartNet access for image downloads. This is not a study lab — it’s a working engineer’s infrastructure that needs to keep pace with real-world deployments.
Split approach chosen: Lightweight topologies (multi-vendor, routing/switching, API dev) run on the Razer workstation (64GB). Heavy topologies (FMC, ISE, NX-OSv VXLAN fabric) run on kvm-01 (128GB). EVE-NG Community Edition on both.
Phase 0: Prerequisites & Planning (Documentation First)
Goal: Reserve IPs, plan resource allocation, create project documentation structure.
domus-captures (project tracking)
Create project following mandatory structure:
partials/projects/eve-ng-lab/
├── metadata.adoc                   # PRJ-2026-04-eve-ng-lab, P1, Active
├── summary.adoc                    # Status table with per-phase progress
├── assessment.adoc                 # Hardware assessment, split-approach rationale
├── appendix-issues.adoc            # Problems + resolutions
└── appendix-commands-learned.adoc  # EVE-NG CLI/API commands discovered
pages/projects/personal/eve-ng-lab/
├── index.adoc     # Shell → includes summary + assessment + metadata
├── phase-0.adoc   # Shell → includes phase-0 partial
├── phase-1-workstation.adoc
├── phase-2-kvm01.adoc
├── phase-3-images.adoc
├── phase-4-topologies.adoc
├── phase-5-devops.adoc
├── phase-6-advanced.adoc
├── phase-7-integration.adoc
├── appendix-issues.adoc
└── appendix-commands-learned.adoc
domus-infra-ops (infrastructure truth)
- Update antora.yml — add attributes: eve-ng-ws-ip, eve-ng-kvm01-ip, eve-ng-ws-hostname, eve-ng-kvm01-hostname, vlan-lab, subnet-lab
- Update partials/ip-allocation-table.adoc — reserve IPs in the 10.50.1.144-199 range (e.g., 10.50.1.150 for the kvm-01 instance; the workstation instance uses localhost)
- Update partials/system-inventory-planned.adoc — add EVE-NG entries
- Create pages/runbooks/eve-ng-deployment.adoc — installation runbook (user executes from this)
- Create pages/runbooks/eve-ng-operations.adoc — day-2 operations
Resource allocation decision
┌──────────────────────────┬──────────────────┬────────────────────────────────┬─────────────────────────────────────────────────────────┐
│ Environment              │ Role             │ RAM Budget for EVE-NG          │ What Runs Here                                          │
├──────────────────────────┼──────────────────┼────────────────────────────────┼─────────────────────────────────────────────────────────┤
│ Razer workstation (64GB) │ Lightweight labs │ ~40GB (leave 20GB for desktop) │ Multi-vendor peering, R&S, API/DevOps, Linux endpoints  │
│ kvm-01 (128GB)           │ Heavy labs       │ ~50-60GB (after existing VMs)  │ FMCv, FTDv, ISE, NX-OSv 9000, ASAv, full security stack │
└──────────────────────────┴──────────────────┴────────────────────────────────┴─────────────────────────────────────────────────────────┘
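Before committing to these budgets, sanity-check actual headroom on each host. A minimal sketch (the 40GB figure is the workstation budget from the table; adjust per host):

```shell
# Compare planned EVE-NG VM size against MemAvailable on this host.
planned_gb=40
avail_gb=$(awk '/MemAvailable/ {printf "%d", $2/1024/1024; exit}' /proc/meminfo)
if [ "$avail_gb" -ge "$planned_gb" ]; then
    echo "OK: ${avail_gb}GB available for a ${planned_gb}GB EVE-NG VM"
else
    echo "WARN: only ${avail_gb}GB available; shrink the VM or stop workloads"
fi
```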
Critical files to modify
- /home/evanusmodestus/atelier/_bibliotheca/domus-infra-ops/docs/asciidoc/antora.yml
- /home/evanusmodestus/atelier/_bibliotheca/domus-infra-ops/docs/asciidoc/modules/ROOT/partials/ip-allocation-table.adoc
- /home/evanusmodestus/atelier/_bibliotheca/domus-infra-ops/docs/asciidoc/modules/ROOT/partials/system-inventory-planned.adoc
- /home/evanusmodestus/atelier/_bibliotheca/domus-infra-ops/docs/asciidoc/modules/ROOT/nav.adoc
- /home/evanusmodestus/atelier/_bibliotheca/domus-captures/docs/modules/ROOT/nav.adoc
Phase 1: Workstation EVE-NG Deployment
Goal: EVE-NG CE running on the Razer workstation via KVM/libvirt (not bare metal — you keep Arch).
Approach: EVE-NG as a VM on Arch Linux
EVE-NG CE runs inside a KVM VM on your workstation. This avoids the bare-metal Ubuntu takeover and keeps your Arch/Hyprland desktop intact.
Steps (runbook deliverables — user executes)
- Download the EVE-NG CE ISO (Community Edition OVF/ISO from eve-ng.net)
- Create the KVM VM via virt-install:
  - 16 vCPU, 40GB RAM (adjustable), 200GB qcow2 thin-provisioned
  - Bridge to br0, or use a NAT network for management access
  - Enable nested virtualization: cpu mode='host-passthrough'
- Install EVE-NG CE from the ISO (Ubuntu 22.04 base)
- Configure the management IP (NAT: 192.168.122.x, or bridged: 10.50.1.x)
- Access the web UI at http://<eve-ng-ip>
- Configure the Cloud0 interface for lab-to-host connectivity
- Verify nested KVM works inside the VM: grep -Ec '(vmx|svm)' /proc/cpuinfo
- Test by booting a single IOSv node
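The VM-creation step can be sketched as follows. This is illustrative, not the runbook's final command: the VM name, disk/ISO paths, and sizes are placeholders, and `--os-variant` should match what `osinfo-query os` reports on your host.

```shell
# Illustrative virt-install invocation for the EVE-NG CE VM on the workstation.
virt-install \
  --name eve-ng-ce \
  --memory 40960 \
  --vcpus 16 \
  --cpu host-passthrough \
  --disk path=/var/lib/libvirt/images/eve-ng-ce.qcow2,size=200,format=qcow2 \
  --cdrom /var/lib/libvirt/images/eve-ng-ce.iso \
  --os-variant ubuntu22.04 \
  --network network=default,model=virtio \
  --graphics vnc,listen=127.0.0.1
```

`--cpu host-passthrough` is what exposes VMX to the guest so nested KVM works; swap `--network network=default` for `--network bridge=br0` if you bridge instead of NAT.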
Verification
- EVE-NG web UI accessible from a browser
- A single IOSv router boots and reaches its console
- virsh list shows the EVE-NG VM running
- Desktop remains responsive with EVE-NG idle (~2-3GB overhead)
Phase 2: kvm-01 EVE-NG Deployment
Goal: EVE-NG CE as a VM on kvm-01 for heavy topologies.
Steps (runbook deliverables)
- SSH to kvm-01 and verify available RAM: free -h
- Check that nested virtualization is enabled: cat /sys/module/kvm_intel/parameters/nested (enable if N)
- Transfer the EVE-NG CE ISO to kvm-01 storage
- Create the KVM VM:
  - 8 vCPU, 60GB RAM, 500GB qcow2 thin-provisioned
  - Bridge to br-mgmt for management (10.50.1.150)
  - Additional bridge for Cloud0 (lab-to-production integration)
  - cpu mode='host-passthrough' for nested KVM
- Install EVE-NG CE
- DNS record: eve-ng-01.inside.domusdigitalis.dev → 10.50.1.150 (BIND)
- Vault TLS cert for HTTPS access
- Wazuh agent for monitoring
- Test by booting an FTDv or ISE node (validates that nested KVM handles heavy images)
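The nested-virtualization check/enable in step 2 can be sketched as follows (Intel hosts; substitute kvm_amd on AMD). Reloading kvm_intel requires all KVM guests on kvm-01 to be stopped first, so schedule accordingly:

```shell
# 'Y' or '1' means nested virtualization is already enabled.
cat /sys/module/kvm_intel/parameters/nested

# Enable persistently if it reports 'N', then reload the module
# (or reboot) for the change to take effect.
echo 'options kvm_intel nested=1' | sudo tee /etc/modprobe.d/kvm-nested.conf
sudo modprobe -r kvm_intel && sudo modprobe kvm_intel
```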
Verification
- Web UI reachable at eve-ng-01.inside.domusdigitalis.dev
- FTDv boots successfully (nested KVM stress test)
- No performance degradation on existing kvm-01 VMs (VyOS, ISE-01, Vault, etc.)
Phase 3: Image Library
Goal: Full image catalog on both EVE-NG instances. Active CCO access = direct downloads.
Workstation images (lightweight)
┌────────────────────────────┬──────────┬───────────┬────────────────────┐
│ Image                      │ Version  │ RAM       │ Source             │
├────────────────────────────┼──────────┼───────────┼────────────────────┤
│ IOSv                       │ 15.9(3)M │ 512MB     │ CCO                │
│ IOSv-L2                    │ 15.2     │ 768MB     │ CCO                │
│ CSR1000v/Cat8000v (IOS-XE) │ 17.x     │ 4GB       │ CCO                │
│ ASAv                       │ 9.x      │ 2GB       │ CCO                │
│ vEOS (Arista)              │ 4.3x     │ 2GB       │ arista.com (free)  │
│ vJunos-switch              │ 23.x     │ 1GB       │ juniper.net (free) │
│ FortiGate-VM               │ 7.x      │ 2GB       │ Fortinet eval      │
│ Palo Alto VM-50            │ 11.x     │ 6.5GB     │ Eval/SE            │
│ Linux (Ubuntu/Rocky)       │ LTS      │ 512MB-2GB │ Public             │
└────────────────────────────┴──────────┴───────────┴────────────────────┘
kvm-01 images (heavy)
All of the above PLUS:
┌──────────────┬─────────┬──────┬────────┐
│ Image        │ Version │ RAM  │ Source │
├──────────────┼─────────┼──────┼────────┤
│ FTDv         │ 7.x     │ 8GB  │ CCO    │
│ FMCv         │ 7.x     │ 28GB │ CCO    │
│ ISE 3.x      │ 3.3/3.4 │ 16GB │ CCO    │
│ 9800-CL WLC  │ 17.x    │ 4GB  │ CCO    │
│ IOS-XRv 9000 │ 7.x     │ 8GB  │ CCO    │
│ NX-OSv 9000  │ 10.x    │ 8GB  │ CCO    │
└──────────────┴─────────┴──────┴────────┘
Image organization
/opt/unetlab/addons/qemu/
├── asav-9x/
├── csr1000v-17x/
├── ftdv-7x/        (kvm-01 only)
├── fmc-7x/         (kvm-01 only)
├── iosv-15x/
├── iosv-l2-15x/
├── iosxrv9k-7x/    (kvm-01 only)
├── nxosv9k-10x/    (kvm-01 only)
├── ise-3x/         (kvm-01 only)
├── c9800cl-17x/    (kvm-01 only)
├── paloalto-11x/
├── fortinet-7x/
├── veos-4x/
├── vjunos-23x/
├── linux-ubuntu/
└── linux-rocky/
Runbook deliverable
eve-ng-image-upload.adoc — step-by-step instructions for each image type: qcow2 conversion, directory naming, and permission fixes via /opt/unetlab/wrappers/unl_wrapper -a fixpermissions
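A hedged sketch of the per-image workflow the runbook will capture. The ASAv filename and version directory below are illustrative; the virtioa.qcow2 disk name and the fixpermissions call follow EVE-NG's conventions for this template:

```shell
# Stage an ASAv image on an EVE-NG instance (names illustrative).
scp asav.qcow2 root@eve-ng-01:/tmp/asav.qcow2
ssh root@eve-ng-01 '
  mkdir -p /opt/unetlab/addons/qemu/asav-9.20 &&
  mv /tmp/asav.qcow2 /opt/unetlab/addons/qemu/asav-9.20/virtioa.qcow2 &&
  /opt/unetlab/wrappers/unl_wrapper -a fixpermissions
'
# Images shipped as vmdk/ova need conversion first, e.g.:
#   qemu-img convert -f vmdk -O qcow2 disk.vmdk virtioa.qcow2
```

Note that the expected disk filename (virtioa, hda, sataa, ...) varies by template, which is exactly why the runbook documents each image type separately.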
Phase 4: Foundation Topologies (Multi-Vendor + API from Day 1)
Goal: Multi-vendor peering lab on workstation with API/DevOps baked in from the start. Every device is API-enabled from first boot.
Topology 1: Multi-Vendor Peering + API Lab (Workstation, ~20GB)
┌──────────┐
│ Ansible  │  (Linux VM - management station)
│ pyATS    │
│ gnmic    │
└────┬─────┘
     │ Cloud0 (management network)
     ├───────────┬───────────┬───────────┬───────────┐
     │           │           │           │           │
┌────┴────┐ ┌────┴────┐ ┌────┴────┐ ┌────┴────┐ ┌────┴────┐
│ CSR-R1  │ │ vEOS-1  │ │vJunos-1 │ │ PA-VM-1 │ │ Forti-1 │
│ IOS-XE  │ │ Arista  │ │ Juniper │ │PaloAlto │ │FortiGate│
└────┬────┘ └────┬────┘ └────┬────┘ └────┬────┘ └────┬────┘
     └───────────┴─────┬─────┴───────────┴───────────┘
                       │ eBGP mesh (point-to-point /30s)
Every node API-enabled from first config:
- IOS-XE: RESTCONF, NETCONF, ip http secure-server, YANG models
- Arista vEOS: eAPI (HTTPS JSON-RPC), NETCONF, gNMI
- Juniper vJunos: NETCONF (native), REST API
- Palo Alto: XML/REST API
- FortiGate: REST API
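For the IOS-XE node, "API-enabled from first config" comes down to a few lines. A minimal sketch, with placeholder credentials:

```text
! Minimal IOS-XE API enablement (admin/labpass are placeholders)
username admin privilege 15 secret labpass
ip http secure-server
restconf
netconf-yang
```

After this, RESTCONF answers on 443 (e.g. GET /restconf/data/ietf-interfaces:interfaces with Accept: application/yang-data+json) and NETCONF on 830.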
Management Linux VM pre-loaded with:
- Ansible + collections: cisco.ios, arista.eos, junipernetworks.junos, paloaltonetworks.panos, fortinet.fortios
- pyATS + Genie (Cisco testbed YAML)
- gnmic for gNMI/streaming telemetry
- ncclient for NETCONF
- Python requests + Postman collections for REST APIs
- Batfish for config analysis (optional, lightweight)
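Bootstrapping that toolset is mostly pip and ansible-galaxy. A sketch under stated assumptions: package and collection names are the upstream ones, versions are deliberately unpinned, and the gnmic install one-liner is the one from its docs (verify the URL before piping to bash):

```shell
# Management VM toolchain bootstrap (sketch; run inside a venv).
python3 -m venv ~/netauto && . ~/netauto/bin/activate
pip install ansible "pyats[full]" ncclient requests
ansible-galaxy collection install cisco.ios arista.eos junipernetworks.junos \
  paloaltonetworks.panos fortinet.fortios
# gnmic is a standalone Go binary, not a pip package:
bash -c "$(curl -sL https://get.gnmic.openconfig.net)"
```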
Topology 2: Routing Deep-Dive (Workstation, ~8GB)
- 4x IOSv routers + 2x IOSv-L2 switches
- OSPF multi-area, EIGRP named mode, BGP iBGP/eBGP with RR
- All RESTCONF/NETCONF-enabled — practice config changes via API alongside the CLI
Topology 3: Switching Foundation (Workstation, ~6GB)
- 4x IOSv-L2 + 2x IOSv
- STP (RPVST+, MST), EtherChannel, inter-VLAN routing
- DHCP snooping, DAI, IP Source Guard
Phase 5: API/DevOps Deep Integration
Goal: Formalize automation workflows against EVE-NG topologies. This runs in parallel with topology building — not a separate phase.
Automation stack (on management Linux VM or your workstation directly)
- RESTCONF/NETCONF against IOS-XE:
  - YANG model exploration (ncclient capabilities exchange)
  - ietf-interfaces, Cisco-IOS-XE-native, openconfig-interfaces
  - Postman collections → Python scripts → Ansible playbooks (progression)
- gNMI streaming telemetry:
  - gnmic subscribe against IOS-XE/Arista for interface counters and BGP state
  - Pipe to Prometheus on k3s (existing stack) via gnmic's Prometheus output
  - Grafana dashboards for lab telemetry
- Ansible automation:
  - Dynamic inventory from the EVE-NG REST API
  - Config backup, compliance checks, VLAN provisioning across all vendors
  - Playbook-per-vendor comparison (same task, different collections)
- pyATS/Genie:
  - Testbed YAML for EVE-NG topologies
  - learn, parse, diff workflows
  - Pre/post change validation
- EVE-NG REST API itself:
  - Automate lab start/stop/wipe via the API
  - Topology-as-code: export/import lab definitions
  - Script to spin up predefined topologies on demand
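The EVE-NG API items above all follow the same pattern: a cookie-based login, then plain JSON endpoints. A hedged curl sketch (host, credentials, lab name, and node ID are placeholders; check the endpoint paths against the EVE-NG API documentation for your release):

```shell
# Log in (EVE-NG issues a session cookie), then drive the lab via JSON endpoints.
EVE=https://eve-ng-01.inside.domusdigitalis.dev
curl -sk -c /tmp/eve.cookie -X POST "$EVE/api/auth/login" \
  -d '{"username":"admin","password":"CHANGEME","html5":"-1"}'
curl -sk -b /tmp/eve.cookie "$EVE/api/folders/"              # list labs/folders
curl -sk -b /tmp/eve.cookie "$EVE/api/labs/mylab.unl/nodes"  # node inventory
curl -sk -b /tmp/eve.cookie "$EVE/api/labs/mylab.unl/nodes/1/start"
```

The node-inventory response is what a dynamic Ansible inventory script would parse: group by template, address by the console/management details each node reports.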
Documentation deliverables
- domus-automation-ops: Ansible playbooks, inventory templates
- domus-netapi-docs: API endpoint patterns per vendor
- domus-captures: worklogs documenting API exploration sessions
Phase 6: Heavy Topologies on kvm-01
Goal: Security stack and data center fabric on kvm-01 EVE-NG instance.
Topology 4: Security Stack (kvm-01, ~50GB)
- FTDv + FMCv (IPS/IDS, URL filtering, malware policies)
- ASAv (NAT, ACLs, site-to-site IPsec, AnyConnect VPN)
- ISE 3.x (802.1X, MAB, posture, pxGrid)
- 2x IOSv-L2 (NAC-enabled switches)
- 2x Linux endpoints (supplicant testing)
- All API-enabled: FMC REST API, ISE ERS/OpenAPI, ASA REST API, ISE pxGrid (WebSocket)
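Of these, the FMC REST API has the least obvious auth flow: a token is generated with Basic auth and then passed as a request header. A sketch with placeholder host/credentials (the domain UUID comes back in the DOMAIN_UUID response header of the same call):

```shell
FMC=https://fmc.lab.example
# Generate a token; FMC returns it in the X-auth-access-token response header.
TOKEN=$(curl -sk -X POST -u admin:CHANGEME -D - -o /dev/null \
  "$FMC/api/fmc_platform/v1/auth/generatetoken" |
  awk -F': ' 'tolower($1)=="x-auth-access-token"{print $2}' | tr -d '\r')
# Use the token against the config API (<domain-uuid> from the login response).
curl -sk -H "X-auth-access-token: $TOKEN" \
  "$FMC/api/fmc_config/v1/domain/<domain-uuid>/object/networks"
```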
Topology 5: VXLAN/EVPN Data Center Fabric (kvm-01, ~48GB)
- 4x NX-OSv 9000 leaves + 2x NX-OSv 9000 spines
- BGP EVPN control plane, VXLAN data plane
- Multi-tenancy with VRFs
- NX-API (REST) enabled on all switches
Topology 6: MPLS Core (kvm-01, ~24GB)
- 4x IOS-XE + 2x IOS-XRv 9000
- LDP, RSVP-TE, L3VPN, L2VPN
- IOS-XR NETCONF/gNMI for automation
Topology 7: Wireless (kvm-01, ~22GB)
- 9800-CL WLC + ISE + IOSv-L2 + Linux RADIUS client
- FlexConnect, 802.1X with EAP-TLS
- WLC REST API for monitoring and config
Phase 7: Production Integration
Goal: Bridge EVE-NG labs to production infrastructure where it adds value.
Integration points (kvm-01 EVE-NG only — workstation stays isolated)
- DNS: lab devices in the lab.inside.domusdigitalis.dev subdomain via BIND
- ISE: lab endpoints authenticate against production ISE-02 for realistic NAC testing
- Vault PKI: lab devices request certs from Vault for EAP-TLS
- Syslog/SIEM: lab devices → Wazuh on k3s
- Telemetry: gNMI streams → Prometheus on k3s → Grafana
Safety controls
- Dedicated VLAN 50 (10.50.50.0/24) for lab traffic
- VyOS firewall: VLAN 50 only reaches DNS (53), NTP (123), RADIUS (1812/1813), Vault (8200), Syslog (514)
- No default route from the lab VLAN to the WAN
- EVE-NG Cloud interfaces default to shut — activated per topology
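The VyOS rule set can be sketched as follows (1.3-style syntax; the ruleset name, rule numbers, and interface/vif are illustrative, and the port list mirrors the controls above — 1.4 moves this under `firewall ipv4`):

```text
# Illustrative VyOS ruleset: VLAN 50 egress, default drop.
set firewall name LAB50-OUT default-action 'drop'
set firewall name LAB50-OUT rule 10 action 'accept'
set firewall name LAB50-OUT rule 10 protocol 'tcp_udp'
set firewall name LAB50-OUT rule 10 destination port '53,123,514,1812,1813,8200'
set interfaces ethernet eth1 vif 50 firewall in name 'LAB50-OUT'
```

With no accept rule toward 0.0.0.0/0 and no default route, lab traffic cannot leak to the WAN even if a topology is misconfigured.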
Deliverables
- Change request: CR-2026-04-DD-eve-ng-production-bridge.adoc
- VyOS firewall rules in the runbook
- BIND zone file additions
Verification Plan
Per-phase verification
- Phase 1: EVE-NG web UI accessible, IOSv boots, desktop stays responsive
- Phase 2: EVE-NG on kvm-01 accessible via HTTPS, FTDv boots (nested KVM stress test), existing VMs unaffected
- Phase 3: all images boot successfully, fixpermissions runs clean
- Phase 4: multi-vendor eBGP mesh converges; all API endpoints respond (RESTCONF, eAPI, NETCONF, PA REST, FortiGate REST)
- Phase 5: Ansible playbooks run against all vendors, the pyATS testbed connects, gNMI subscriptions stream data
- Phase 6: FMC manages FTD, ISE authenticates endpoints, the NX-OS VXLAN fabric passes traffic
- Phase 7: a lab device resolves DNS via BIND, authenticates via ISE, and sends syslog to Wazuh
Build verification
After each documentation phase
cd ~/atelier/_bibliotheca/domus-captures && make 2>&1 | grep -E "WARN|ERROR"
cd ~/atelier/_bibliotheca/domus-infra-ops && make 2>&1 | grep -E "WARN|ERROR"
awk '/^processor/{p++} /^model name/{m=$0} /^flags/{f=$0} END{print "Cores:", p; print m; if(f ~ /vmx/) print "VMX: YES (Intel VT-x)"; else if(f ~ /svm/) print "SVM: YES (AMD-V)"; else print "Hardware Virt: NOT DETECTED"}' /proc/cpuinfo
# output: Cores: 24
# model name : Intel(R) Core(TM) Ultra 9 275HX
# VMX: YES (Intel VT-x)
awk '/MemTotal/{printf "RAM: %.0f GB\n", $2/1024/1024}' /proc/meminfo
# output: RAM: 62 GB
awk '/MemAvailable/{printf "Available: %.0f GB\n", $2/1024/1024}' /proc/meminfo
# output: Available: 53 GB