
Modern AI coding assistants like Claude Code have a simple but powerful feature: a memory file.
In Claude Code, it's called CLAUDE.md. It sits in your project root and contains everything the assistant should know about your codebase: conventions, architecture decisions, common pitfalls, how to run tests.
The file persists between sessions. Every conversation starts with that context loaded. The assistant isn't starting from zero; it's starting from everything you've taught it.
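If you haven't seen one, CLAUDE.md is just a markdown file. A minimal example (the contents here are invented for illustration):

```markdown
# CLAUDE.md

## Conventions
- TypeScript strict mode; avoid `any`.
- API handlers live in src/api/, one file per route.

## Running tests
- `npm test` for unit tests; `npm run test:e2e` needs the staging VPN.

## Pitfalls
- The ORM silently truncates strings over 255 characters; validate before insert.
```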
Some teams take this further with explicit "compounding" workflows. Dan Shipper and Kieran Klaassen published a great post on this in December, and there's a Claude plugin called Compound (I've added both to the notes).
Once this is set up, after solving a tricky problem you tag it:
/compound
The assistant extracts the key insight and writes it to the memory file. Next time anyone on the team hits a similar problem, the assistant already knows the answer.
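The exact format depends on the tooling, but conceptually the appended entry looks something like this (details invented for illustration):

```markdown
## Lesson: flaky integration tests on CI
- Symptom: intermittent timeouts in the payment tests, CI only.
- Cause: parallel runners sharing one Redis instance.
- Fix: namespace test keys per runner (see test/helpers/redis.ts).
```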
This creates a new dynamic. The AI isn't just a tool you use—it's a tool that gets better the more you use it. Knowledge compounds instead of evaporating.
The same pattern applies beyond engineering. Every organisation has knowledge that's hard-won, undocumented, and living in people's heads.
Traditional solutions require deliberate documentation. Write it down. Update the wiki.
But documentation is a second job nobody signed up for. It requires context-switching from doing the work to describing the work. So it doesn't happen.
What if capture happened in the flow of work instead?
Imagine an organisational assistant that works like the engineering tools.
When someone discovers something worth remembering, they tag it:
"@assistant — a few things about client X: their procurement team restructured in March. Key decision maker is now Y, previous contact Z has moved on..."
Or consider compliance. A team member reads through new privacy legislation:
"@assistant — the updated Data Protection Act requires explicit consent for any automated profiling. 90-day implementation window from March 1st. Affects our customer scoring workflow."
Now everyone who asks about customer data handling gets that context automatically—not buried in a policy document nobody reads.
Or internal process discovery:
"@assistant — expenses over $500 need approval, but if you tag it as 'client reimbursable' it only needs manager sign-off. Learned this the hard way."
The knowledge compounds. Each insight makes the assistant more useful. Six months in, new hires benefit from hundreds of lessons without anyone writing a single wiki page.
Under the hood, this is straightforward:
Memory files: Structured storage for captured knowledge, organised by category or domain
Extraction: When tagged, the assistant pulls out the key insight from conversational context
Relevance matching: In future conversations, surface memories that match the current topic
Deduplication: Don't store the same thing twice; update existing entries with new information
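None of these pieces needs exotic infrastructure. Here's a minimal sketch of the storage, deduplication, and relevance-matching layers in Python. The file name, schema, and keyword-overlap matcher are my own choices for brevity; a real system would use an LLM for extraction and embeddings for relevance:

```python
import json
from datetime import date
from pathlib import Path

MEMORY_FILE = Path("memory.jsonl")  # one JSON entry per line

def load() -> list[dict]:
    """Read all captured entries from the memory file."""
    if not MEMORY_FILE.exists():
        return []
    return [json.loads(line)
            for line in MEMORY_FILE.read_text().splitlines() if line]

def capture(topic: str, insight: str) -> None:
    """Store an insight. If the topic already exists, update it in place
    (crude deduplication) rather than storing it twice."""
    entries = load()
    for entry in entries:
        if entry["topic"] == topic:
            entry["insight"] = insight  # newer information wins
            entry["updated"] = date.today().isoformat()
            break
    else:
        entries.append({"topic": topic, "insight": insight,
                        "updated": date.today().isoformat()})
    MEMORY_FILE.write_text("\n".join(json.dumps(e) for e in entries) + "\n")

def relevant(message: str, limit: int = 3) -> list[dict]:
    """Surface memories that share vocabulary with the current message.
    Keyword overlap stands in for what would really be embedding similarity."""
    words = set(message.lower().split())
    scored = []
    for entry in load():
        text = (entry["topic"] + " " + entry["insight"]).lower()
        overlap = len(words & set(text.split()))
        if overlap:
            scored.append((overlap, entry))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [entry for _, entry in scored[:limit]]

# Capture once; the insight surfaces in later conversations.
capture("expense approvals",
        "Over $500 needs approval unless tagged 'client reimbursable'.")
print(relevant("how do I get an expense approved"))
```

The extraction step, pulling the key insight out of a tagged message, is the LLM's job and isn't shown here; the rest is ordinary plumbing.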
The engineering tools have proven this works. The open question is how to apply it more broadly.
Not everything needs remembering. The system works best for:
Tribal knowledge: Undocumented quirks that experienced people just know. The staging environment that needs a VPN. The vendor contact who actually responds.
Decisions and context: Why Postgres over MySQL? Why is this timeout set to 30 seconds? Decisions without recorded context become mysterious constraints.
Real processes: How deployment actually works, not the outdated runbook.
Lessons from incidents: Outages and bugs contain valuable signal. Without capture, the same mistakes repeat.
Traditional knowledge bases have a scaling problem. More content means harder search, more contradictions, more staleness. Nobody knows what's current.
A learning assistant inverts this. More knowledge makes it more useful because relevance surfaces automatically in context. You don't search—the right knowledge appears when needed.
This creates a flywheel: more captured knowledge makes the assistant more useful, more usefulness drives more usage, and more usage captures more knowledge.
Engineers have seen this with their coding assistants. The more you teach them about your codebase, the more useful they become. The same dynamic can work for entire organisations.
The shift is subtle but significant.
An assistant that learns isn't just a tool—it's closer to a team member. It has institutional knowledge. It remembers what worked and what didn't. It gets better at its job over time.
This changes the ROI calculation. A stateless assistant provides the same value on day 300 as day 1. A learning assistant provides compounding returns. The value accumulates.
The technical implementation exists—engineering teams are using it today.
Extending it to broader knowledge management is less a technical problem than a cultural one: people need to build the habit of capturing what they learn. Culture matters more than technology.

The same lessons get discovered over and over. The same questions get answered repeatedly. The same mistakes get made by people who never heard about the last time.
AI engineering tools have stumbled onto a fix: persistent memory, explicit capture, compounding knowledge.
It's not just making engineers more productive: it's preserving what they learn.
The building blocks are ready. It's something we're working on at Cova, but it can be applied to any knowledge management product, and it's an approach that can level up any product where chat is the main interface.
What knowledge keeps getting lost in your organisation?
Acknowledgements: Kieran Klaassen (https://x.com/kieranklaassen)
Original article: https://every.to/chain-of-thought/compound-engineering-how-every-codes-with-agents
The X post from Kevin Rose that got me thinking about this: https://x.com/kevinrose/status/2013053880222031950