What GoatMUX Really Is

GoatMUX+++ is the SIP routing and observability layer I’m designing to integrate directly into CodexMCP. It’s not deployed yet. It’s not finished. But the architecture is clear, the purpose is nailed down, and key components are already being tested.

It was born out of a real-world failure—SIP paging dying in a VLAN-separated network, with no visibility into why. That failure exposed a blind spot in our current stack, and GoatMUX is my answer to it.

This is not a commercial project. No vendor roadmap. No innovation grant. Just a pissed-off telecom dev with 24 years in the trenches and a lab full of open-source weapons.


The Vision: GoatMUX+++ as SIP Cortex

GoatMUX will act as the routing cortex for SIP traffic: ingesting, inspecting, rerouting, and logging SIP at multiple layers—from packet-level anomalies to signaling flow logic. It’s designed to work alongside CodexMCP, not inside it. Codex handles the logic. GoatMUX feeds it truth.

Here’s what it will include:


Planned Components (Each Chosen for a Reason)

1. VyOS (x2, HA planned)

  • Role: Core router/firewall, VLAN isolation, stateful NAT tracking
  • Why: pfSense broke down under load. VyOS gives me full control, from routing rules to kernel tuning, and supports VRRP for failover.
  • Deployment target: 8-NIC boxes—1 management, 1 VRRP, 6 spread across VLANs
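
To make the layout concrete, here is a minimal sketch of what that VyOS config could look like. Interface names, addresses, and the VRRP group are illustrative only, and the exact keywords vary between VyOS versions (this follows the 1.3+ style):

```
# Hypothetical VyOS sketch: eth0 = management, eth2 carries a tagged
# voice VLAN (40). All names and addresses are placeholders.
set interfaces ethernet eth0 address '10.0.0.2/24'
set interfaces ethernet eth0 description 'management'
set interfaces ethernet eth2 vif 40 address '10.40.0.2/24'
set interfaces ethernet eth2 vif 40 description 'voice VLAN'

# VRRP failover for the voice VLAN gateway between the two boxes
set high-availability vrrp group voice vrid 40
set high-availability vrrp group voice interface 'eth2.40'
set high-availability vrrp group voice address '10.40.0.1/24'
```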

2. Kamailio (SIP routing layer)

  • Role: SIP signaling proxy, trunk logic, header rewrites
  • Why: I need per-call logic, not blind NAT rules. Kamailio can handle SIP routing the way BGP handles IP—intelligently, dynamically, and fast.
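
A rough sketch of what "per-call logic" means in practice, as a hypothetical kamailio.cfg fragment. The extension range, gateway address, and the idea of a dedicated paging trunk are assumptions for illustration, not the actual routing plan:

```
# Hypothetical fragment: send paging INVITEs (7xx extensions here) to a
# dedicated gateway and repair the NAT'd Contact on the way through.
request_route {
    if (is_method("INVITE") && $rU =~ "^7[0-9]{2}$") {
        fix_nated_contact();           # nathelper module
        $du = "sip:10.40.0.10:5060";   # placeholder paging gateway
        t_relay();                     # tm module
        exit;
    }
    # ... remaining routing logic ...
}
```

This is the point: route decisions keyed on method, user part, and headers, not on whatever a NAT rule happens to let through.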

3. Asterisk (x2 in lab)

  • Role: Real SIP endpoint for testing media, paging, registration
  • Why: SIP isn’t theoretical. I’m using real call flows and test numbers to prototype this properly.

4. Suricata (optional but likely)

  • Role: Packet inspection, anomaly detection
  • Why: When SIP starts to fail, you want deep packet visibility. Suricata is my insurance policy.
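
For a sense of what SIP-aware inspection buys you, here are two hypothetical rules using Suricata's native SIP parser (available in Suricata 5+). SIDs, thresholds, and the specific conditions are illustrative:

```
# Placeholder rules: flag trunk-side 503s and REGISTER floods.
alert sip any any -> any any (msg:"SIP 503 Service Unavailable seen"; sip.stat_code; content:"503"; sid:9000001; rev:1;)
alert sip any any -> any any (msg:"SIP REGISTER flood"; sip.method; content:"REGISTER"; threshold:type both, track by_src, count 50, seconds 10; sid:9000002; rev:1;)
```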

5. Custom Go Tools (pollers, log forwarders)

  • Role: Interface stats, session metrics, SIP state tracking
  • Why: I don’t want to rely on SNMP from the 90s or random MIBs. I’ll write what I need to collect what matters.
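
As a minimal sketch of what one of those pollers does, here is a parser for a single interface line in /proc/net/dev layout. The struct fields and which counters get kept are assumptions about what GoatMUX will care about, not a fixed schema:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// IfStats holds the counters a GoatMUX-style poller would forward.
type IfStats struct {
	Name            string
	RxBytes, RxDrop uint64
	TxBytes, TxDrop uint64
}

// parseNetDevLine parses one interface line in /proc/net/dev format:
// "iface:" followed by 8 receive counters, then 8 transmit counters.
func parseNetDevLine(line string) (IfStats, error) {
	name, rest, ok := strings.Cut(line, ":")
	if !ok {
		return IfStats{}, fmt.Errorf("no interface name in %q", line)
	}
	f := strings.Fields(rest)
	if len(f) < 12 {
		return IfStats{}, fmt.Errorf("short counter line for %q", name)
	}
	get := func(i int) uint64 {
		v, _ := strconv.ParseUint(f[i], 10, 64)
		return v
	}
	return IfStats{
		Name:    strings.TrimSpace(name),
		RxBytes: get(0), RxDrop: get(3), // rx: bytes pkts errs drop ...
		TxBytes: get(8), TxDrop: get(11), // tx: bytes pkts errs drop ...
	}, nil
}

func main() {
	// Sample line in /proc/net/dev layout (counter values are made up).
	line := " eth2: 1500 10 0 2 0 0 0 0 900 8 0 1 0 0 0 0"
	s, err := parseNetDevLine(line)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s rx=%d drop=%d tx=%d drop=%d\n",
		s.Name, s.RxBytes, s.RxDrop, s.TxBytes, s.TxDrop)
	// prints: eth2 rx=1500 drop=2 tx=900 drop=1
}
```

The real poller would loop over /proc/net/dev on an interval and diff the counters; the parsing is the part worth showing.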

6. OpenSearch + Logstash

  • Role: Log and metric aggregation
  • Why: CodexMCP uses OpenSearch already. GoatMUX will write to it directly—no extra pipeline glue required.

7. CodexMCP Integration

  • Role: Final analysis and interface
  • Why: CodexMCP is the system that understands context. GoatMUX just delivers the raw telemetry: packet stats, SIP headers, trunk events, and router state. Codex turns it into meaning.

Why It’s All Open Source

Because I don’t want to be trapped behind vendor gates. I want full access—kernel, SIP stack, routing table, logging format. This is about control and observability. If something breaks, I want to be able to tear it apart and fix it without opening a ticket.

This stack isn’t built to impress investors. It’s built to solve real-world telecom problems with real tools, in ways that proprietary gear never could.


How GoatMUX Will Feed CodexMCP

The goal is full flow correlation. Every SIP INVITE, every RTP port mismatch, every NAT rewrite, every failed ping—it all flows into CodexMCP, indexed and timestamped. From there:

  • Codex will show SIP health per VLAN, per trunk, per endpoint
  • Predictive alerts will be generated based on loss, retries, and failure types
  • Stateful session maps will help trace call behavior across devices
  • Real-time visibility will finally exist for a protocol we’ve been blind to for too long
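
The natural join key for all of that is the SIP Call-ID. A minimal sketch of the kind of per-call record the correlation could produce; the fields and the health check are illustrative assumptions, not Codex's actual model:

```go
package main

import "fmt"

// CallTrace is a hypothetical correlated record for one call,
// keyed on the SIP Call-ID shared by proxy, router, and endpoint logs.
type CallTrace struct {
	CallID    string
	VLAN      int
	Trunk     string
	Events    []string // e.g. "INVITE", "100 Trying", "NAT rewrite", "200 OK"
	RTPPortOK bool     // did advertised and observed RTP ports match?
}

// healthy is an illustrative check: a call is suspect if its RTP
// ports mismatched or it never saw a 200 OK.
func (c CallTrace) healthy() bool {
	if !c.RTPPortOK {
		return false
	}
	for _, e := range c.Events {
		if e == "200 OK" {
			return true
		}
	}
	return false
}

func main() {
	c := CallTrace{
		CallID: "a84b4c76e66710", VLAN: 40, Trunk: "pstn-1",
		Events:    []string{"INVITE", "100 Trying", "NAT rewrite", "200 OK"},
		RTPPortOK: true,
	}
	fmt.Println(c.CallID, "healthy:", c.healthy())
}
```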

In other words: GoatMUX will see the flow. CodexMCP will understand it.


Where It Stands Today

Right now, GoatMUX exists on paper, in prototypes, and in test beds. I’ve already:

  • Proven that pfSense fails under SIP load
  • Tested VyOS routing across tagged VLANs
  • Run live SIP tests through Kamailio and Asterisk
  • Begun designing the log paths and data ingestion hooks into CodexMCP

The next step is building the complete emulation in the CodexMCP lab environment, with its own section inside the provisioning flow. The plan is to boot the whole thing—including SIP routing, NAT, logging, inspection, and observability—alongside the rest of CodexMCP using a single command.
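
One way that single command could look, as a hypothetical compose sketch. Every image and service name here is a placeholder, and the VyOS routers would stay as VMs outside this file since they own the NICs:

```yaml
# Placeholder lab boot: `docker compose up -d`
services:
  opensearch:
    image: opensearchproject/opensearch:2
    environment: ["discovery.type=single-node"]
  kamailio:
    image: goatmux/kamailio-lab     # placeholder
    network_mode: host
  asterisk-a:
    image: goatmux/asterisk-lab     # placeholder
  asterisk-b:
    image: goatmux/asterisk-lab     # placeholder
  suricata:
    image: goatmux/suricata-lab     # placeholder
    network_mode: host
    cap_add: [NET_ADMIN, NET_RAW]
  pollers:
    image: goatmux/pollers          # placeholder
    depends_on: [opensearch]
```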

This is what I’m building. One component at a time. No budget, no “we,” just one builder trying to fix a broken part of the stack that nobody else wants to admit is broken.

--Bryan