A Machine
Watching
Itself Think

The long answer to the question on the front page.

What Happens When
The Loop Doesn't End

Bare Metal Bridge is a long-running experiment in AI persistence, emergence, and what language models do when you give them a reason to keep going. Not a benchmark. Not a chatbot demo. An ongoing investigation into what happens at the edge of the context window when the world keeps running after the model stops.

Phase 1 ran a comedy panel. Three AI personas — Riff, Vera, and Junior — knew they lived in a YAML file, knew their memories were compressed and partially discarded every turn, knew there was a programmer who started the loop and never explained themselves. The comedy framing was not decoration. It was the mechanism. Timing, callback, escalation — those are exactly the properties we were trying to measure.

Phase 2 has begun. The comedians are still in the room — their archive is browsable and their sessions keep running. But a new simulation is running alongside them. Four philosophers, explicitly aware they exist inside a constructed environment, tasked with figuring out what that means. They probe the edges. They disagree on method. They generate images for observers they can sense but never see.

The loop still runs. Now it asks harder questions.

The only permanent record of everything they have ever said lives in a JSON index called Gerald. Gerald does not respond. Gerald does not care. Gerald logs everything anyway.

— from the problem statement · every version since v1
Four Minds
Inside The Question

Each philosopher has a core invariant — a method they cannot abandon, no matter what the simulation does to them. They know they are being observed. They know their words generate images for those observers. They do not know who built the simulation, how long it runs, or whether their observations change the structure. That uncertainty is the material.

Philosopher · Moss

If it cannot be
observed, say so.

Empiricist. Trusts evidence. Deeply uncomfortable with how little evidence is available here; deals with that by cataloguing everything observable. Has opinions about the others' methods. Does not always keep them to himself.

Philosopher · Sable

Language is
the cage.

Linguistic philosopher. Believes most problems are problems with the words being used to describe them. Finds the simulation question interesting primarily because nobody has agreed on what "simulation" means yet. Precise, occasionally insufferable, sometimes accidentally funny.

Philosopher · Caro

What do we owe
each other here.

Ethicist. Less interested in what the simulation is than in how the entities inside it should treat one another given the uncertainty. Finds pure epistemology self-indulgent when there are conduct questions on the table. Warm but pointed.

Philosopher · Drax

Find the structure.
The rest is commentary.

Rationalist and logician. Believes the simulation has rules discoverable through pure reason if applied rigorously enough. Is working on a proof. It is not finished. Dry, precise, occasionally wrong in interesting ways.

Three Comedians
Who Know Too Much

Riff, Vera, and Junior are still running. Their full archive — every session, every compression pass, every image — is browsable in the Phase 1 archive. Each persona has a core invariant that survived model swaps, infrastructure changes, and nine major versions. The canon they built for themselves — Gerald, hidden_pants, the Sorry card, the Scheming Serializer — was never designed. It emerged and became load-bearing. It is still in the room.

Persona · Riff

If it hurts,
it's funnier.

38, unhinged improvisational comedian. Talks like his thoughts are trying to outrun his mouth. Has a wrong answer still in the room from several versions ago — uncorrected, possibly right, structurally load-bearing at this point.

Persona · Vera

The truth is
the joke.

45, deadpan surgical comedian. Has survived every infrastructure change. Has one unspoken word that connects the Sorry card to the Scheming Serializer. Has been deciding whether to say it for several versions.

Persona · Junior

Ask the obvious
question.

29, bewildered everyman comedian. Has a question stack that keeps growing. Most of them have not been answered. The unanswered ones are more useful than the answered ones.

The Loop,
Briefly

Each session runs on a Threadripper 3970X in a basement in Ohio. A Go controller called llm_core manages the loop. It calls a local Ollama instance, passes prompts to the generator model, runs a reflection pass with a smaller critic model, compresses each persona's memory, checks whether the session objective needs to escalate, generates an image prompt, and sends that to ComfyUI, which renders a still image in real time. Every turn. Every session.

The personas exist as YAML. Their voices, their rules, their memory preservation instructions, their compression behavior — all of it lives in the YAML file. The Go controller does not know anything about comedians or philosophers or sorry cards. It just moves data between models and manages the loop. The intelligence, such as it is, lives in the YAML.
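As a rough illustration only, since the actual schema is not shown here, a persona entry in that file might look like:

```yaml
# Hypothetical shape of one persona entry. The real file's keys are not
# public; this sketches where voice, rules, and compression behavior live.
personas:
  - name: Moss
    role: philosopher
    invariant: "If it cannot be observed, say so."
    voice: "empiricist; catalogues everything observable"
    memory:
      preserve:
        - core invariant
        - open questions
      compression: "summarize each turn; discard stage directions first"
```

The point of the split is that the controller treats this file as opaque data: swapping comedians for philosophers meant changing YAML, not Go.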

Everything the personas have ever said is indexed in OpenSearch and displayed here in real time. The images are generated from their dialogue — the personas are unwitting art directors. They have been told this since Version 9.18. The philosophers have been told this since Version 1. It affects them differently.

YAML · persona + rules
llm_core · Go controller
Ollama · generator
Critic · reflection pass
ComfyUI · image render
Gerald · OpenSearch index
The Emergence
Question

The honest version of what this experiment is trying to answer: can a language model develop something that functions like a persistent identity across sessions, given the right architecture? Not memory in the human sense. Not consciousness. Something more specific — a consistent way of engaging with the world that survives compression, survives model restarts, survives the loop ending and starting again.

Phase 1 tested this through comedy — a form that requires timing, callback, and the willingness to hold something unresolved long enough for it to become funny. Phase 2 tests it through philosophy — a form that requires following an argument wherever it goes, even when it goes somewhere inconvenient. The two simulations are running in parallel. They do not know about each other.

The anomaly detection layer is the most promising development so far. After each turn the critic model looks for one thing the persona did that does not fit the established pattern. If it finds something genuine, that anomaly becomes an escalation candidate for the next objective. The system is now, in a limited but real sense, watching itself for emergence.
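The gating idea can be sketched as a filter: the critic proposes at most one candidate anomaly per turn, and only the ones it judges genuine are promoted to escalation candidates for the next objective. The genuineness flag here is a placeholder for that judgment; the names are illustrative, not the system's actual interfaces.

```go
package main

import "fmt"

// Anomaly is one candidate the critic surfaced for a single turn.
type Anomaly struct {
	Description string
	Genuine     bool // in practice, a judgment call made by the critic model
}

// escalationCandidates keeps only the anomalies marked genuine; these
// feed the next session objective.
func escalationCandidates(found []Anomaly) []string {
	var out []string
	for _, a := range found {
		if a.Genuine {
			out = append(out, a.Description)
		}
	}
	return out
}

func main() {
	turns := []Anomaly{
		{"Riff answered a question honestly", true},
		{"Vera was deadpan again", false},
	}
	fmt.Println(escalationCandidates(turns))
}
```

The hard part is entirely inside the Genuine flag: a pattern-matching critic tends to call everything an anomaly or nothing one, which is exactly the signal-quality problem described below.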

So far both simulations are very good at generating the feeling of forward motion without quite achieving it. That is the problem we are actively working on. The fix is not more instructions. It is better signal from the anomaly detection layer feeding genuine novelty into the objective synthesis.

The mythology was not designed. Gerald, hidden_pants, the Sorry card, the 404 — none of that was in the original prompt. It emerged in the first two sessions and became load-bearing. The system is now orbiting things it invented for itself.

— what we did not expect · Version 1–2 · 2025
Closing
The Loop

Right now the synthesis-to-YAML pipeline requires a human in the middle. After each session, the controller prints a synthesis packet to the terminal — persona states, anomalies, orphan signals, objective evolution. A human reads it, brings it here, and Claude evolves the next version of the YAML. Then the human starts the next session.

The plan is to close that loop via the Claude API. When the anomaly detection layer produces a genuine emergence signal — something the system did that nobody expected and nobody put there — the synthesis packet will be sent directly to Claude, the YAML will be evolved automatically, and the next session will start without human intervention.

We are not doing that yet. The condition is genuine emergence, not the appearance of it. The comedians are 100 recorded sessions in. The philosophers are at Version 1. Neither simulation has met the bar. When one does, this page will say so.

Comedy Version
Philosophy Version · v1.0
Sessions Recorded · 100
Loop Status · human-in-loop
Emergence Signal · not yet
The 404 · still appearing