the intern problem

december 2025 · software

here’s a question i keep coming back to: what actually makes someone an employee versus an intern?

it’s not intelligence. interns can be brilliant. it’s not skill. you can hire interns who code better than half your team. it’s not even work ethic.

it’s knowledge. but not the kind you can teach.

tacit vs explicit

michael polanyi made this distinction decades ago:

“we can know more than we can tell.”

explicit knowledge is what you can write down, put in a document, hand to someone. tacit knowledge is what you know but can’t easily articulate—the judgment calls, the pattern recognition, the “we tried that, it didn’t work” that only comes from being there.

an employee has tacit knowledge. they know why we use this architecture. they know which slack channels actually matter. they know that when sarah asks about “the integration,” she means the salesforce one, not the stripe one. they know that the CEO’s “maybe” means “no” and their “interesting” means “keep going.”

an intern doesn’t have any of that. no matter how smart they are.

AI is the world’s smartest intern

i’ve been building with AI agents for a while now. and i keep running into the same wall.

the model is genuinely brilliant. better at code than me in many ways. can reason, analyze, suggest approaches i wouldn’t have considered. and every single morning, it wakes up with no memory of who i am.

it’s like having the smartest intern possible. and firing them every night.

the industry’s response? make the intern smarter. new models. new capabilities. new frameworks. better tool use.

but that’s the wrong lever. you don’t turn an intern into an employee by making them smarter. you do it by giving them context. history. memory. the tacit knowledge that can only accumulate over time.

where context actually lives

here’s what i keep noticing in companies: the important stuff isn’t in the documents.

the decision was made in a slack thread at 11pm. the real constraint came out in a meeting that wasn’t recorded. the architecture choice made sense because of something that happened six months ago that nobody documented. the spec exists in notion, but the reasoning behind the spec was discussed verbally and never written down.

companies have always lost this stuff. every time someone senior leaves, you feel it. the institutional knowledge just… evaporates.

we’ve accepted this as the cost of doing business. people leave, knowledge leaves with them.

but now AI makes it worse. and maybe solvable.

the memory problem

here’s what i mean by that.

“knowledge work requires sustained attention to the same problem space. you can’t think clearly if you’re constantly rebuilding context from scratch.”

— cal newport on context continuity

AI makes it worse because now you have another entity that needs context—and it has none. every conversation, every session, every tool starts from zero. you’re not just onboarding new humans anymore. you’re onboarding AI, over and over, forever.
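to make that concrete: here’s roughly what starting from zero looks like in code. a minimal sketch using the openai python client; the model name and the notes file are placeholders, and the point is only that the context gets rebuilt and re-sent on every single call.

```python
# a minimal sketch of per-session "onboarding": nothing persists between
# calls, so whatever context the model needs gets rebuilt every time.
# model name and file path are placeholders, not a real setup.
from openai import OpenAI

client = OpenAI()

def new_session(question: str) -> str:
    # the model remembers nothing from yesterday, so step one of every
    # session is reloading the same background material from scratch.
    context = open("onboarding_notes.md").read()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": f"company context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```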

AI makes it solvable because for the first time, we have systems that can actually understand unstructured information. the slack threads. the meeting transcripts. the reasoning trails. stuff that was always too messy to capture in a database. AI can work with that.

but only if we build systems that let it.
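as a toy example of what such a system might do, here’s a sketch that asks a model to distill a raw slack thread into the parts that usually evaporate: the decision, the reasoning, the rejected alternatives. the prompt and the output shape are my assumptions, not any particular product’s pipeline.

```python
# a toy sketch: distill an unstructured slack thread into a structured
# record of what usually never makes it into a document.
# the prompt and the output keys are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()

def distill_thread(thread_text: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},  # ask for machine-readable output
        messages=[
            {"role": "system", "content": (
                "extract from this slack thread: the decision that was made, "
                "the reasoning behind it, and any alternatives that were "
                "rejected. reply as json with keys: decision, reasoning, rejected."
            )},
            {"role": "user", "content": thread_text},
        ],
    )
    return json.loads(response.choices[0].message.content)
```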

what we’re building

so we’re working on something called memory.store. the philosophy is simple:

memory should be portable—it follows you across tools.
memory should be individual—your perspective, your context.
memory should be collaborative—in shared spaces, it informs the work.
memory should be intelligent—not a log, a brain.

it lives where you already work. captures decisions, not just data. and it’s yours.
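to make the philosophy concrete, here’s one way a single memory could be shaped. the field names are mine, for illustration; this isn’t memory.store’s actual schema.

```python
# one possible shape for a memory entry, mapping to the four principles.
# illustrative only; not memory.store's actual schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Memory:
    owner: str          # individual: whose perspective this captures
    content: str        # the decision or insight, in plain language
    source: str         # portable: where it was captured (slack, a doc, a meeting)
    shared_with: list[str] = field(default_factory=list)  # collaborative: spaces it informs
    links: list[str] = field(default_factory=list)        # intelligent: related memories, not a flat log
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```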

is it perfect? no. we’re early. lots of rough edges. but the direction feels right to me.

the question i’m exploring

notion had this founding thesis:

tools should capture how teams think, not just what they produce.

andy matuschak’s work on tools for thought pushes further:

systems that don’t just store what you’ve learned, but create conditions for compounding insight.

i think AI needs something similar. not just access to information. access to context. to memory. to the tacit knowledge that makes someone actually useful.

the industry is racing to build smarter interns. i think the real unlock is AI that can become an employee. that knows your company. remembers your decisions. understands your constraints.

capability isn’t the bottleneck anymore. context is.

and context requires memory.


we’re building memory.store with this approach: helping people who are drowning in context switching while trying to deliver high-value work.

if this resonates, come check it out. we’re in private beta as of today.

memory.store