Optimizing CodexMCP Deployment with Local Caching

Why We Implemented a Local Caching Solution
One of the key challenges when deploying CodexMCP is ensuring that installation and configuration remain fast, stable, and repeatable. Initially, all package installations relied on external repositories, meaning that every build required downloading packages from the internet. This posed two major issues:
- Deployment Speed – Repeatedly downloading core system and application packages slowed down installs.
- Reliability – If an internet connection became unstable or repositories became unavailable, deployments could fail or be inconsistent.
To address these challenges, we implemented a local caching system that allows CodexMCP to install packages quickly and operate independently of external repositories.
The Two-Layer Caching Approach
We needed to handle two different types of packages:
- APT-based packages (from standard Debian/Ubuntu repositories)
- HTTPS-based packages (like OpenSearch, OpenSearch Dashboards, and Grafana, which do not work with traditional APT caching tools)
To handle these efficiently, we set up a caching architecture that combines Apt-Cacher-NG for APT packages and Nginx for HTTPS package distribution.
APT Package Caching with Apt-Cacher-NG
To speed up core system package installations, we deployed Apt-Cacher-NG on our caching server. This tool acts as a middleman between the servers requesting packages and the upstream repositories. When a package is requested for the first time, Apt-Cacher-NG downloads and caches it. Future requests are served from the local cache, significantly reducing installation time.
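For reference, the Apt-Cacher-NG settings that matter here live in its main config file; the values below are the Debian/Ubuntu package defaults (shown as a sketch, adjust paths and port if your install differs):

```
# /etc/apt-cacher-ng/acng.conf (package defaults shown)
Port: 3142
CacheDir: /var/cache/apt-cacher-ng
LogDir: /var/log/apt-cacher-ng
```

The default port 3142 is what the proxy configuration below points at, and CacheDir is where the downloaded packages accumulate, so it is worth watching disk usage there over time.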
Injecting the Proxy for APT Caching
To ensure that all package downloads use the cache, we configured the APT proxy settings:
echo 'Acquire::http::Proxy "http://10.0.1.100:3142";' | sudo tee /etc/apt/apt.conf.d/01proxy
This directs APT to route all HTTP-based package requests through our caching server at 10.0.1.100, which is running Apt-Cacher-NG on port 3142.
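Since Apt-Cacher-NG cannot cache HTTPS traffic (more on that below), it can also help to tell APT explicitly not to send HTTPS requests through the proxy. APT's standard `DIRECT` keyword does this; a possible extension of the same file:

```
# /etc/apt/apt.conf.d/01proxy
Acquire::http::Proxy "http://10.0.1.100:3142";
Acquire::https::Proxy "DIRECT";   # send HTTPS repos straight out, bypassing the cache
```

This keeps HTTPS repositories working without the proxy silently failing on them.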
Handling HTTPS-Based Packages with Nginx
Apt-Cacher-NG does not cache packages served over HTTPS, which presented a problem for OpenSearch, OpenSearch Dashboards, and Grafana. To solve this, we manually downloaded their .deb installation files and served them using Nginx on the same caching server.
Setting Up Nginx to Serve .deb Files
On our cache server (10.0.1.100), we configured Nginx to serve stored .deb packages:
sudo mkdir -p /var/www/packages
sudo chown -R www-data:www-data /var/www/packages
sudo chmod -R 755 /var/www/packages
Then, we placed the necessary .deb files in this directory and configured Nginx:
server {
    listen 8080;
    server_name apt-cache-core-1;
    root /var/www/packages;
    index index.html;

    location / {
        autoindex on;
        add_header Cache-Control "no-cache, no-store, must-revalidate";
    }
}
This setup allows all CodexMCP-managed servers to retrieve packages using simple wget commands instead of relying on external HTTPS repositories.
Installing HTTPS-Based Packages from the Local Cache
For packages like OpenSearch, OpenSearch Dashboards, and Grafana, we modified the installation process to fetch the files from the local Nginx server and install them manually using dpkg.
OpenSearch Installation
wget -q -O /tmp/opensearch_2.19.0_amd64.deb http://10.0.1.100:8080/opensearch_2.19.0_amd64.deb
sudo dpkg -i /tmp/opensearch_2.19.0_amd64.deb
sudo apt-get install -f -y
OpenSearch Dashboards Installation
wget -q -O /tmp/opensearch-dashboards_2.19.0_amd64.deb http://10.0.1.100:8080/opensearch-dashboards_2.19.0_amd64.deb
sudo dpkg -i /tmp/opensearch-dashboards_2.19.0_amd64.deb
sudo apt-get install -f -y
Grafana Installation
wget -q -O /tmp/grafana_11.5.2_amd64.deb http://10.0.1.100:8080/grafana_11.5.2_amd64.deb
sudo dpkg -i /tmp/grafana_11.5.2_amd64.deb
sudo apt-get install -f -y
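The three installs above follow an identical pattern, which could be factored into one small helper. A sketch of what that might look like (the function name and the DRY_RUN flag are our own additions for illustration, not part of the original setup):

```shell
# Fetch a .deb from the local cache and install it, mirroring the steps above.
# Set DRY_RUN=1 to print the commands instead of running them.
install_from_cache() {
  local deb="$1"
  local url="http://10.0.1.100:8080/${deb}"
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "wget -q -O /tmp/${deb} ${url}"
    echo "sudo dpkg -i /tmp/${deb}"
    echo "sudo apt-get install -f -y"
    return 0
  fi
  wget -q -O "/tmp/${deb}" "${url}" || return 1
  sudo dpkg -i "/tmp/${deb}"
  sudo apt-get install -f -y
}

# Example: preview the Grafana install without touching the system.
DRY_RUN=1 install_from_cache grafana_11.5.2_amd64.deb
```

The dry-run mode makes it easy to review exactly what will be fetched and installed before running the helper with root privileges.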
Results: Faster and More Reliable Deployments
After implementing this caching system, we saw significant improvements:
- Deployment speed increased dramatically, as packages were served from the local network instead of being downloaded repeatedly from the internet.
- Offline capability was achieved: CodexMCP-managed servers can be rebuilt even if the internet connection is unstable.
- Predictable package versions: no risk of a package being updated mid-deployment, causing inconsistencies or breakages.
Next Steps
While the current setup is working well, future improvements will include:
- Automating .deb file updates to ensure we can selectively pull new versions when needed.
- Replacing hardcoded IPs with dynamically assigned cache server addresses for greater flexibility.
- Exploring redundancy options to add a second caching server for failover scenarios.
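As a first step toward removing the hardcoded IPs, the cache address could be read from the environment with the current address as a fallback. A minimal sketch (the CODEXMCP_CACHE_HOST variable name is hypothetical):

```shell
# Build package URLs from an overridable cache host, defaulting to today's address.
cache_url() {
  local host="${CODEXMCP_CACHE_HOST:-10.0.1.100}"
  echo "http://${host}:8080/$1"
}

# Uses the default host unless CODEXMCP_CACHE_HOST is set in the environment.
cache_url grafana_11.5.2_amd64.deb
```

Deployment scripts would then call `cache_url` instead of embedding the address, so pointing a fleet at a new or secondary cache server becomes a one-variable change.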
With this caching system in place, CodexMCP is now faster, more stable, and independent of external package sources. This ensures smooth deployments and allows continued development, even in situations with limited or unreliable internet access.
-Always a Pivot
--Bryan Vest