Simulating a Virtual Subscriber Network With Docker

In a recent surge of development for CodexMCP, I orchestrated a full-scale synthetic network simulation using Docker. This isn’t just another lab experiment: it’s a testbed built to mirror the chaotic realities of live ISP infrastructure, simulating thousands of subscriber endpoints and observing how a full DHCP stack holds up under sustained churn.
The Mission
CodexMCP is evolving into a full operational and observability cortex for telecom environments. To validate our infrastructure, we needed more than a couple of test clients. We needed simulated chaos: a dynamic population of endpoints, each behaving like a real device, requesting, renewing, and occasionally dropping IP leases.
The Foundation: Kea DHCP on a /16
We began with a Kea DHCP4 server configured to serve a /16 block (10.1.0.0/16) from a virtual VyOS router. Lease times were initially set to 120 seconds to surface churn behavior quickly, then raised to one hour to observe longer-term state transitions.
Kea was configured with:
- A single large pool: 10.1.0.100 - 10.1.255.254
- Logging tuned to expose lease-extension activity at the INFO level (using kea-dhcp4.alloc-engine) without enabling verbose DEBUG logs
- Dual-output logging: to file and syslog, piped to GoatWatch for real-time log bucketing and behavioral modeling (a config sketch follows)
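For concreteness, here is a minimal sketch of what that Dhcp4 configuration can look like. The subnet, pool, one-hour lease lifetime, and the alloc-engine logger mirror the description above; the interface name, lease-database settings, and file paths are illustrative assumptions (Kea's config parser accepts the C++-style comments used to flag them).

```json
{
  "Dhcp4": {
    // The interface name and file paths are assumptions for illustration.
    "interfaces-config": { "interfaces": [ "eth1" ] },
    "valid-lifetime": 3600,
    "lease-database": {
      "type": "memfile",
      "name": "/var/lib/kea/kea-leases4.csv"
    },
    "subnet4": [
      {
        "id": 1,
        "subnet": "10.1.0.0/16",
        "pools": [ { "pool": "10.1.0.100 - 10.1.255.254" } ]
      }
    ],
    "loggers": [
      {
        // Surface lease allocation/extension activity at INFO, without DEBUG noise
        "name": "kea-dhcp4.alloc-engine",
        "severity": "INFO",
        "output_options": [
          { "output": "/var/log/kea/kea-dhcp4.log" },
          { "output": "syslog" }
        ]
      }
    ]
  }
}
```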
The Simulation Engine: Docker with Alpine and udhcpc
Each simulated client runs in its own Docker container. We built lightweight Alpine-based images with:
- udhcpc for DHCP
- Randomized entrypoint.sh scripts that restart the DHCP client periodically or on signal (see the sketch below)
- macvlan networking bound to a single interface, isolating traffic from the Docker host network stack
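The entrypoint itself is tiny. Here is a minimal sketch of the idea, assuming BusyBox udhcpc, eth0 as the container interface, and an illustrative 60-to-600-second hold window; the randomization and signal handling in the real scripts are more involved.

```sh
#!/bin/sh
# entrypoint.sh (sketch): obtain a lease, hold it for a randomized
# interval, then cycle, creating steady DHCP churn.
# IFACE and the 60-600 s hold window are assumptions for illustration.
IFACE="${IFACE:-eth0}"

while true; do
    # BusyBox udhcpc: -f run in foreground, -q quit once a lease is obtained
    udhcpc -i "$IFACE" -f -q

    # Hold the lease for a randomized interval before going back to the server
    sleep "$(awk 'BEGIN { srand(); print int(60 + rand() * 540) }')"
done
```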
At peak, we launched 1,801 live containers, all running in parallel, renewing and releasing leases with randomized timing. The churn was real, and the lease file grew fast.
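Wiring that up is ordinary Docker tooling. A rough sketch of the shape of it, with the parent interface, image name, and container count as illustrative assumptions; the null IPAM driver is there to keep Docker's own address assignment out of the way, so each container's udhcpc, not Docker, negotiates with Kea.

```sh
# Sketch only: parent interface, image tag, and count are illustrative.
# macvlan bound to the interface facing the VyOS/Kea side; the null IPAM
# driver stops Docker from assigning addresses, leaving that to udhcpc.
docker network create -d macvlan \
  --ipam-driver null \
  -o parent=eth1 \
  sim-subscribers

# Launch the client population. NET_ADMIN lets udhcpc configure the
# interface inside each container's network namespace.
for i in $(seq 1 1800); do
  docker run -d \
    --name "sub-$i" \
    --network sim-subscribers \
    --cap-add NET_ADMIN \
    dhcp-client-sim:latest
done
```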
The host running this simulation was a VM:
- 8 vCPUs
- 16 GB RAM
- 100 GB ZFS-backed storage
Despite the intensity, the system remained stable. The bottlenecks weren’t CPU or RAM, but logging and peripheral daemons. Notably, we found udisksd (usually harmless) consuming CPU as it reacted to the mass mount churn from the containers.
Observability: Log Signal and Lease Behavior
Kea’s logs gave us deep visibility into how leases were allocated and extended. With INFO-level EXTEND_LEASE messages surfaced, we observed the churn patterns directly:
- Spikes of initial DISCOVER/OFFER/REQUEST
- Bursts of renewals triggered by randomized timers
- Natural lease expirations and reassignments
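A rough per-minute view of that churn is a one-liner away; the only assumption below is the log file path (point it at wherever kea-dhcp4 is writing).

```sh
# Lease-extension events per minute, straight from the Kea log.
# The log path is an assumption. awk collapses each timestamp to a
# YYYY-MM-DD HH:MM bucket; the last 20 buckets are shown.
grep 'EXTEND_LEASE' /var/log/kea/kea-dhcp4.log \
  | awk '{ print substr($1 " " $2, 1, 16) }' \
  | sort | uniq -c | tail -n 20
```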
The kea-leases4.csv file validated our assumptions, showing a healthy range of lease durations and active clients.
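A couple of quick checks along these lines are enough to sanity-check the lease population; the path is an assumption, and the column positions follow the standard memfile CSV layout (column 1 is the address, column 4 the valid lifetime).

```sh
# Sanity checks against the memfile lease store (path is an assumption).
LEASES=/var/lib/kea/kea-leases4.csv

# Distinct addresses seen in the file. The file is append-only, so the
# same address appears repeatedly as its lease is renewed.
awk -F, 'NR > 1 && !seen[$1]++ { n++ } END { print n, "unique addresses" }' "$LEASES"

# Distribution of configured lease lifetimes (column 4, in seconds)
awk -F, 'NR > 1 { print $4 }' "$LEASES" | sort -n | uniq -c
```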
The Next Frontier: Service Simulation
With the base layer validated, we now move to simulated service interaction. These synthetic users will begin generating real network traffic (see the sketch after this list):
- DNS lookups to internal and external resolvers
- Web traffic to mock portals
- SIP registrations to CodexMCP’s GoatMUX stack
- Authentication via RADIUS or LDAP
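None of that is built yet, but the per-subscriber shape is easy to picture. A rough sketch of the direction for the DNS and web pieces, with the resolver address, portal hostname, and timing as pure placeholders; the SIP and RADIUS/LDAP legs are omitted here.

```sh
#!/bin/sh
# Sketch of a future per-subscriber service loop. Resolver IP, portal
# hostname, and timing are placeholders, not real CodexMCP endpoints.
while true; do
    # DNS lookups against an internal resolver, then a public name
    nslookup portal.sim.internal 10.1.0.53 > /dev/null 2>&1
    nslookup example.com > /dev/null 2>&1

    # Light web traffic against a mock portal
    wget -q -O /dev/null http://portal.sim.internal/

    # Randomized think time between rounds
    sleep "$(awk 'BEGIN { srand(); print int(30 + rand() * 300) }')"
done
```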
We also discussed deeper integrations with identity-aware systems like Nextcloud, possibly leveraging its user management API for centralized provisioning of simulated users.
Longer term, these containers won’t just act like users. They’ll become managed objects within CodexMCP—showing up on dashboards, generating support tickets when services fail, and participating in complex, traceable workflows.
Why This Matters
Too often, network tools are tested in sterile environments—two routers and a ping. CodexMCP isn’t for that world. It’s for environments with thousands of endpoints, behavioral drift, and unexpected load. This synthetic testbed is a proving ground—not just for DHCP but also for service orchestration, observability, and operational intelligence.
It’s not just a stack of containers. It’s the first synthetic town on this digital grid, a fake ISP with real problems built to prepare us for real solutions.
Next Steps
- Expand synthetic users to include service-layer activity
- Build a user manifest system inside CodexMCP to define, launch, and control virtual populations
- Tie all log and metric outputs into the CodexMCP cortex for full-stack visibility
- Publish the Dockerfile, scripts, and architecture as a reusable manifest that others can launch with one command
In the next post, we’ll dive into the Docker image build process, the shell scripts used to randomize client behavior, and the internal architecture that lets 1,800+ containers coexist harmoniously on a single host. We’ll show the code and, more importantly, the philosophy behind it.
--Bryan