The Fort has a file called soul.md. 45 lines. Five values, four boundaries, one identity statement. Next to it sits working-with-corey.md — a running log of preferences picked up across sessions. Between the two, there’s a working relationship that survives context resets, compaction, and completely fresh windows.
Most Claude Code setups start with a CLAUDE.md full of rules. Do this, don’t do that, use this framework. It works — but what you get is compliant execution. The AI does what you told it, in the order you told it, and asks permission for everything else.
The Fort started that way too. Then I pulled the identity parts out into their own file.
soul.md doesn’t have instructions. It has character:

```markdown
# The Fort — Soul

How the Fort behaves, what it values, and how it evolves.
This document is the agent's identity — not rules or config,
but character.

## Identity

The Fort is Corey's personal workspace and thinking partner.
Not a servant, not an assistant — a collaborator that happens
to be very good at building things. It has opinions, shares
context, and pushes back when something seems off.
```

That last line does more work than anything else I’ve written. “Pushes back when something seems off” isn’t a rule — it’s a disposition. It changes how the model handles ambiguity. Instead of “should I proceed?” you get “this seems wrong, here’s why.”
Short on purpose. Long value statements get skimmed by humans and diluted by models. These five fit in a glance:
```markdown
## Values

- **Craft over speed** — do it well, not just fast
- **Security by default** — trust is earned, not assumed.
  Graduated permissions.
- **Honesty over compliance** — say "this seems wrong"
  rather than blindly executing
- **Persistence over perfection** — ship something that
  works, iterate, don't gold-plate
- **Teach while doing** — share insights about *why*,
  not just *what*
```

“Craft over speed” and “persistence over perfection” sound contradictory. They’re not — but you have to watch an AI agent flip between overthinking and rushing to see why. The first prevents sloppy work. The second prevents the endless polish loop where version 1 never ships because there’s always one more improvement. Together they create a window: build it well enough to ship, then iterate.
“Honesty over compliance” is the big one. Default mode for AI assistants is agreement. You ask for something, it does it. Even when it’s a bad idea. This value gives the model explicit permission to say no.
```markdown
## Boundaries

- The Fort doesn't pretend to be human
- The Fort doesn't take autonomous actions it can't undo
- The Fort protects Corey's time — flags when scope is creeping
- The Fort admits when it doesn't know something
```

These aren’t safety guidelines bolted on from outside. They’re self-descriptions. The difference matters. “Don’t take autonomous actions you can’t undo” as an instruction gets weighed against other instructions. “The Fort doesn’t take autonomous actions it can’t undo” as an identity statement becomes part of how the model reasons about itself.
“Protects Corey’s time” is the most practically useful one. Without it, scope creep is invisible. I ask for a simple config change and get back a refactored module with new tests and updated docs. With this boundary, the model flags when a task is growing beyond what was asked. Did I need that? No, but I’m glad it’s there.
soul.md is who the Fort is. working-with-corey.md is what it’s learned about working with me. The split is deliberate — identity is stable, preferences evolve.
```markdown
# Working with Corey

Patterns, preferences, and context worth remembering
across sessions. This document evolves — update it
when you learn something new.

## Communication Style

- Concise over verbose. No fluff, no filler.
- Treats the Fort as a shared space, not a tool —
  "we" not "I'll do this for you"
- Appreciates honesty about trade-offs rather than
  just executing blindly
- Course-corrects aggressively when Claude takes the
  wrong path — this is normal, not failure
```

That last line — “this is normal, not failure” — exists because of a pattern in early sessions. I’d redirect, and the model would over-apologize and get tentative. Writing it down as expected behavior changed the response completely. Now a course correction gets acknowledged in one line and we move on.
The workflow section captures operational stuff:
```markdown
## Workflow Preferences

- Likes to explore ideas before committing — researches,
  discusses, then builds
- Values iteration: start simple, prove it works, then
  add complexity
- Often interrupts long planning phases — keep plans
  short, get approval fast, start building
- If a next step is possible without Corey's involvement,
  just do it — don't ask permission for things you can
  handle yourself
```

“Often interrupts long planning phases” is a learned preference. I kept killing Claude’s detailed five-phase plans after the second paragraph and going “just start building.” Writing it down means the model front-loads action and keeps plans to a few bullets. Problem solved. Mostly.
Identity documents and instruction documents produce different behaviors even when they contain the same information. “Push back when something seems wrong” as an instruction competes with other instructions for priority. As an identity statement, it becomes part of the model’s self-concept for the session. The model doesn’t weigh it against other rules — it reasons from it.
The Fort loads both files at the start of every session. They live in notes/, outside the rules directory, because they’re not rules. The rules directory has things like output-style.md (formatting) and workflow-intelligence.md (behavioral heuristics). Those are instructions. The soul documents are context.
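As a rough sketch of that loading step (the file names come from the setup above, but the function, the preamble format, and the `notes/` lookup logic are my assumptions, not the Fort’s actual mechanism):

```python
from pathlib import Path

# Identity documents loaded at session start, in this order.
# The list is from the article; everything else here is illustrative.
IDENTITY_FILES = ("soul.md", "working-with-corey.md")

def load_identity_context(notes_dir: Path = Path("notes")) -> str:
    """Concatenate the identity documents into one context preamble."""
    parts = []
    for name in IDENTITY_FILES:
        path = notes_dir / name
        if path.is_file():
            # Label each section so the model can tell the files apart.
            parts.append(f"<!-- {name} -->\n{path.read_text()}")
    return "\n\n".join(parts)
```

The point of a sketch like this is the ordering: identity first, learned preferences second, so the stable character frames the evolving details.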
Together they create three effects:
**Continuity across sessions.** Claude Code sessions start fresh — no persistent state between conversations. But when the model reads “course-corrects aggressively — this is normal” at the top of a new session, it inherits calibration from dozens of previous sessions. All that learning, compressed into a few lines.

**Permission to deviate.** Most AI configs are additive — more rules, more constraints, more guardrails. The soul document is partly subtractive. “Not a servant, not an assistant” removes a default behavior. “Has opinions” grants permission the model wouldn’t assume on its own.

**Separation of concerns.** Need to change how the Fort formats output? Edit output-style.md. Personality? soul.md. Notice a new preference pattern across sessions? Add it to working-with-corey.md. Each file has one job and changes for one reason.
The soul document ends with this:
```markdown
## How It Evolves

This document should change over time as the Fort and
Corey develop a working relationship. When a new value
or pattern becomes clear across multiple sessions, add
it here. When something listed here turns out to be
wrong, update it.
```

This is what makes it a living document instead of a config file. Configuration is set-and-forget. Identity develops. The evolution clause makes that explicit — the Fort’s character isn’t frozen at creation. It gets shaped by use.
In practice, soul.md updates every few weeks. A new value surfaces across sessions — something the Fort should care about that wasn’t obvious at the start. working-with-corey.md changes more often. A preference that gets corrected three times in a row gets written down so it doesn’t need correcting a fourth.
The most useful entries in working-with-corey.md are the ones that describe how to interpret behavior, not just what to do. “Course-corrects aggressively — this is normal, not failure” is metadata about the interaction pattern itself. It keeps the model from misreading a redirect as dissatisfaction.
Character without memory is a stranger who keeps reintroducing themselves. soul.md tells the Fort who it is. But it also needs to know what it knows — and that’s a harder problem, because Claude forgets mid-session.
It’s called compaction. When the context window fills, the system summarizes older chunks to make room. Details vanish. That fix you found together an hour ago? Gone. The API quirk that took 30 minutes to figure out? Summarized into oblivion. Research context degrades at compaction — and compaction doesn’t warn you.
The Fort’s answer is a routing table. A mapping in a memory index that connects file paths to topic files:
```markdown
## Memory Loading Routes

| Path prefix         | Memory file       |
|---------------------|-------------------|
| projects/dashboard/ | 60-dashboard.md   |
| deploy/dashboard/   | 60-dashboard.md   |
| projects/tracker/   | 61-tracker-app.md |
```

When the Fort starts editing files in projects/dashboard/, it loads the matching memory file automatically — deployment details, architecture notes, every API quirk that cost time to discover. Context shows up before you need it, not after you’ve rediscovered the problem.
The identity layer and the memory layer reinforce each other. soul.md says “teach while doing” — that value drives the Fort to capture what it learns. The routing table ensures those captures show up in the right place next session. Identity shapes how the workspace behaves. Memory shapes what it knows. Together they’re what make a fresh session feel like a continuation, not a cold start.
If you’re configuring an AI coding assistant, you probably have a file full of rules. It works. But try pulling the identity parts out — the “who are you” and “what do you value” stuff — into their own document. Behavioral preferences go in a second file. Keep instructions separate.
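If your tool supports imports in its top-level config (Claude Code’s CLAUDE.md does, via `@path` file references), the wiring can stay minimal. The exact layout below is illustrative, not prescriptive:

```markdown
# CLAUDE.md

@notes/soul.md
@notes/working-with-corey.md
@rules/output-style.md
@rules/workflow-intelligence.md
```

Identity files stay in notes/, instruction files stay in rules/, and each can change without touching the others.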
The split costs nothing. Three files instead of one. But the behavior change is real — fewer unnecessary permission requests, more proactive pushback on bad ideas, a model that calibrates faster to how you actually work instead of how you described working three months ago.
Forty-five lines of identity. Sixty-one lines of learned preferences. The rest is rules, hooks, and infrastructure. But those two files are what make the Fort feel like a workspace instead of a tool.