TL;DR
The Bash toolchain has been around for decades. grep, find, directory structures—all battle-tested. When LLMs gained the ability to operate these tools, plain text + directories suddenly went from “the most primitive option” to “the most powerful knowledge base format.” No fancy apps, no risk of service shutdown. Your data stays yours.
If you’ve worked in software, you know this: Unix/Linux command line tools are the bedrock of the entire operating system.
grep searches text. find locates files. mkdir creates directories. mv moves things around. These commands have been around since the 1970s. Fifty years later, millions of servers are still running them every single day. Every line of code, every config file, every system log—all plain text.
Ever wondered why?
It’s not because nobody tried to replace it. XML had its shot. Binary formats had their shot. Proprietary formats of every flavor had their shot. And yet, everyone keeps coming back to plain text. Because it nails the properties that matter most in engineering: it’s durable—a .txt from fifty years ago still opens today. It’s searchable—one grep command can find what you need across tens of thousands of files. It’s programmable—any language can read and write it. It works with git for version control, so every change is tracked. And when you want to migrate? Just copy the folder.
These properties have been validated for half a century. Never overturned. The programming languages you use might change every few years, frameworks even faster, but the plain text underneath has never changed.
Here’s the thing, though. These powerful tools had one fatal limitation: only engineers used them, almost exclusively for code, and, more importantly, they were never easy to learn.
You’re not going to use grep to manage your reading notes. You’re not going to chain find and grep to hunt down “that paper about attention mechanisms I read last week.” You’re not going to write a shell script to auto-categorize your personal notes. Not because it can’t be done—it absolutely can—but because it’s too much hassle. The learning curve is steep, normal people won’t bother, and even engineers don’t want to stare at a terminal to manage notes after work.
So this powerful toolchain just sat there quietly, doing its thing with code and servers, completely disconnected from personal knowledge management.
Until LLMs showed up.
Here’s what I think most people are missing: when Claude, GPT, and other language models gained the ability to execute Bash, they didn’t just learn a new trick. They plugged into an entire mature toolchain that has been battle-tested for decades.
This distinction matters.
If LLMs were building file operation capabilities from scratch, we’d have plenty to worry about. Is the search reliable? Will it mess up file operations? Is the format handling mature enough? But none of these are real concerns, because under the hood, it’s all the same old tools that have been running for decades. grep doesn’t mis-search. mkdir doesn’t create the wrong directory. git doesn’t lose your version history. These things have been validated by billions of uses already.
More importantly, LLMs were trained on data that includes how these tools are used. What LLMs do is add a natural language interface on top of these tools.
You tell it “find that note I wrote about transformer attention mechanisms,” and it translates that into a grep command. You say “save this article to the AI research folder,” and it translates that into an mkdir to confirm the directory exists, then a write to put the file in. You say “summarize last month’s meeting notes,” and it uses find to locate the files, read to load them, its semantic understanding to extract key points, and write to save the summary.
Every step is a mature operation. The LLM just adds the layer of “understanding what you mean.”
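To make the translation concrete, here is roughly what those three requests might become at the command level. This is a sketch only: every path, filename, and file content below is invented for illustration, and a real assistant would pick its own.

```shell
# Hypothetical translations of the three requests above.
# All paths, filenames, and contents are invented for illustration.
set -e
BASE=/tmp/kb-translate-demo
mkdir -p "$BASE/Research/AI"

# Seed a note so the searches have something to find.
printf '# Transformer attention mechanisms\nNotes on scaled dot-product attention.\n' \
  > "$BASE/Research/AI/attention.md"

# "find that note about transformer attention mechanisms"
grep -ril 'attention mechanisms' "$BASE"

# "save this article to the AI research folder"
mkdir -p "$BASE/Research/AI"          # confirm the directory exists first
printf '# Saved article\n' > "$BASE/Research/AI/saved-article.md"

# "summarize last month's meeting notes" starts with locating the files
find "$BASE" -name '*.md' -type f -mtime -31
```

Each line is a tool that has existed for decades; the only new part is that a model chose the flags and paths from a plain-English sentence.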
Think about it from another angle: to use these powerful Bash tools before, you had to learn the command line, memorize flags and parameters, know how to pipe different commands together. That learning curve locked out 99% of people. Now LLMs have flattened that barrier—just speak in plain language, and it operates the tools for you.
This got me seriously thinking: if LLMs are inherently great at understanding and processing text, and they can now operate the filesystem—should we fundamentally rethink how we manage personal knowledge?
Before going further, let me share something I came across on HN a while back.
There’s a professor at Brown University named Jeff Huang who did something pretty interesting: he managed his productivity using a single .txt file for over 14 years. Every to-do, every meeting note, every random thought—all dumped into one plain text file, separated by dates. That’s it.
14 years. One file.
He’s not some tech bro showing off minimalism. Jeff Huang is a computer science professor—he knows better than most of us what tools are out there. He stuck with .txt because he’s watched too many things come and go.
There’s a line in his post that really resonated with me:
“I’ve been doing this for more than 14 years now. Let’s see your productivity app survive that long.”
Think about it. Evernote was all the rage 14 years ago—how many people around you still use it? Google Keep launched and nobody seemed to care. Bear, Notion, Obsidian, Roam Research—every few years there’s a new “note-taking revolution,” each one exciting, each one claiming to be the last note app you’ll ever need. And then what? Some are still around, some have fizzled out, some you’re still paying monthly for but haven’t opened in six months.
Meanwhile, the .txt file never let Jeff Huang down. Not once in 14 years. Because plain text doesn’t depend on any company, any platform, any software. It’s just a file sitting on your hard drive that any text editor can open.
This made me rethink something: maybe the problem isn’t that we’re not trying hard enough to learn new tools. Maybe we’ve been looking in the wrong direction the whole time. We keep searching for “better software,” but maybe what we actually need isn’t better software—it’s a better way to use the most basic format.
But Jeff Huang’s approach has an obvious limitation: his use case is a single chronological productivity log. One person, one timeline, one file.
If we need to handle the diverse kinds of knowledge that real life throws at us, that’s clearly not enough.
Your brain is juggling wildly different things at the same time. In the morning you might be reading a paper on LLM architecture, at lunch you’re in a project meeting jotting down decisions, in the afternoon you reply to some important emails and think some of it’s worth saving, and at night you suddenly want to track your spending because it feels out of control this month. These things have nothing in common, but they’re all your knowledge, your records.
Cram them all into one file, and three months later you’ll never find anything again.
What about organizing them into folders? You create a bunch of directories, but every time you save a note, you hesitate—“does this go under Work or Research?”—and by the time you’re done hesitating, you don’t feel like saving it anymore. Or you do save it, but with a messy filename, and three months later it’s as good as gone.
That’s why tools like Notion and Obsidian felt like saviors when they appeared. They offer tags, categories, search, database views, bidirectional links—they handle the “finding things” and “organizing things” problems for you. Just toss stuff in, and the software sorts it out.
Sounds perfect.
But what’s the cost?
Your data becomes proprietary. Notion stores everything on their servers in their block structure. Obsidian is better—Markdown at the core—but once you start using plugins, embedded queries, canvas, those features don’t travel outside Obsidian. Evernote? Don’t even get me started—the exported .enex format has basically zero native support anywhere else. And more importantly, organizing and categorizing all those notes still eats up a significant amount of your energy.
You spend three, five years building up a knowledge base, locked inside a commercial company’s product. One day they jack up the price, or push a redesign you can’t stand, or straight up shut down—and there you are, staring at a pile of half-broken exported files, contemplating your life choices.
Long-time Evernote users probably know this feeling all too well. The app once called “your second brain”—look at what it’s become.
This has always been a dilemma: if you want simplicity and freedom, you give up structure and intelligence. If you want structure and intelligence, you hand your data over to someone else. In the past, you could only pick one.
Not anymore.
Once LLMs could operate the filesystem, the bottleneck of plain text was broken. Not by more complex software—by an AI assistant that “understands plain language and knows how to operate Bash.”
You used to have too many notes to find anything, because grep was too hard for normal people. Now you don’t need to know grep. Just say “find that thing I wrote about context windows,” and the LLM translates it to grep for you.
You used to not know where to put a new note, hesitating over categories until you gave up and didn’t save it at all. Now you can write down your categorization rules, and the LLM reads them before every save, making the judgment itself. You say “save this,” it reads the content, decides it’s an AI research article, and puts it in the right folder. Doesn’t need to ask you.
You used to struggle with maintaining an index—you’d create a table of contents, but forget to update it every time you added or removed something, and three months later it was useless. Now the LLM updates the index automatically every time a file changes. You don’t have to think about it.
You used to end up with notes in all sorts of inconsistent formats—some have dates, some don’t, some have tags, some don’t—and by the time you want to standardize, it’s too late. Now the LLM reads your format spec before creating each file, and follows the rules.
And through all of this, your data is still .md files. Markdown. Plain text. You can open them in VS Code, in Notepad, or cat them in a terminal. Back them up with git push, migrate by copying the folder. You don’t depend on any company, any subscription, any service.
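Those portability claims are literal commands, not metaphors. The sketch below shows each one with invented paths, and only attempts the git step if git happens to be installed:

```shell
# The portability claims above, as literal commands. Paths are invented.
set -e
BASE=/tmp/kb-portable-demo
rm -rf "$BASE" /tmp/kb-portable-demo-copy
mkdir -p "$BASE/Research/AI"
printf '# Context windows\nA note in plain Markdown.\n' \
  > "$BASE/Research/AI/context-windows.md"

# Readable anywhere: any editor, or just cat in a terminal.
cat "$BASE/Research/AI/context-windows.md"

# Version control is plain git (skipped if git is not installed).
cd "$BASE"
if command -v git >/dev/null 2>&1; then
  git init -q
  git add -A
  git -c user.email=demo@example.com -c user.name=demo commit -qm 'snapshot'
fi

# Migration is a copy of the folder.
cp -r "$BASE" /tmp/kb-portable-demo-copy
```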
You get the freedom of plain text and the convenience of a smart note-taking app. At the same time.
I eventually built an actual system based on this idea and have been running it for a while. Going into every detail here would be too much, so let me just share the core design—because it’s genuinely simple. Simple enough that after reading this, you could tell Claude to set it up following this article’s design, and it would work.
Just three things.
First: directory structure is your knowledge taxonomy. No database, no tagging system, just folders. Research/AI/ for AI research notes, Work/ for work files, Personal/Finance/ for personal finances. Open your file manager and you instantly see what’s where. No need to learn any system’s UI logic.
You might think—folders? Isn’t that the most primitive way to organize things? Yes, exactly. But the point isn’t the folders themselves. It’s that when you use folders to categorize knowledge, and there’s an LLM that understands your categorization logic, this “most primitive method” becomes the most efficient one. The LLM doesn’t need to learn some API, doesn’t need to adapt to some block structure—it just needs to know what the directory is called and what goes in it, and it can start working for you.
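As a concrete sketch, the entire taxonomy is nothing more than mkdir. The directory names below are the examples from the text; the base path is invented:

```shell
# The example taxonomy, built with nothing but mkdir. BASE is an invented path.
set -e
BASE=/tmp/kb-taxonomy-demo
mkdir -p "$BASE/Research/AI" "$BASE/Work" "$BASE/Personal/Finance"
find "$BASE" -type d | sort
```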
Second: each directory can have a rule file. I call it RULE.md. It defines the rules of engagement for that directory—what operations are allowed? How should files be named? What metadata is required? Any special policies, like read-only or append-only?
Before the LLM does anything to a directory, it reads the rule file first, then follows the rules. You don’t need to remind it every time—“remember to add a date prefix,” “remember to write frontmatter,” “remember this directory doesn’t allow deletions.” Write the rules once, and it follows them every time.
This might sound like “teaching AI to behave,” but it’s really more like establishing a governance mechanism. You write down the management rules for your knowledge base in plain text, and the LLM becomes your librarian.
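A hypothetical RULE.md for a research directory might look like the one below. The specific fields are one possible scheme, not a fixed format; the point is that the governance rules are themselves just plain text in the directory they govern:

```shell
# Writing a hypothetical RULE.md. The rule wording is an example, not a spec.
set -e
BASE=/tmp/kb-rules-demo
mkdir -p "$BASE/Research/AI"
cat > "$BASE/Research/AI/RULE.md" <<'EOF'
# RULE.md for Research/AI/

- Allowed operations: create, read, update. Deletions require explicit confirmation.
- Filenames: YYYY-MM-DD-short-title.md
- Required frontmatter: date, tags, source link (if saved from the web)
- Special policy: files tagged `log` are append-only.
EOF
cat "$BASE/Research/AI/RULE.md"
```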
Third: each directory has an index, which is just README.md. It lists what files are in the directory, what each one is, and what’s been updated recently. Humans can read it, AI can read it. For humans, it’s a quick-reference table of contents. For AI, it’s a navigation map that lets it locate things without scanning from scratch.
Every time a file changes, the LLM updates the index automatically. You never have to maintain it by hand.
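A hypothetical README.md index, again in one possible layout (the table format and the sample filename are invented):

```shell
# Writing a hypothetical README.md index; the table is one reasonable convention.
set -e
BASE=/tmp/kb-index-demo
mkdir -p "$BASE/Research/AI"
printf '# Survey notes\n' > "$BASE/Research/AI/2024-12-10-attention-survey.md"
cat > "$BASE/Research/AI/README.md" <<'EOF'
# Index of Research/AI/

| File | What it is |
|------|------------|
| 2024-12-10-attention-survey.md | Summary of an attention-mechanism survey |

Last updated: 2024-12-10
EOF
# After any change, refreshing the file list is a one-liner the LLM can run:
ls "$BASE/Research/AI"
```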
That’s it. Three things: folders, rule files, indexes. All Markdown, all plain text, all openable with any text editor.
And because the rules travel with the folder, the whole structure is inherently recursive—you can move a subdirectory somewhere else, and its rules and index are still right there. No reconfiguration needed. This is fundamentally different from software that stores settings in some central database.
Day to day, it feels something like this—I tell the AI “save this article about AI Agents,” it checks the rule files across directories, decides it best fits under Research/AI/, creates the file following that directory’s format requirements, adds date, tags, source link, and updates the index. The whole thing takes under ten seconds. I don’t have to think about any of it.
Or I say “find that thing I read about context windows,” and it searches around, comes back with “found two—one’s a paper summary from last December, the other’s your own implementation notes. Which one do you want?”
It’s that mundane. No flashy UI, no monthly bill, no onboarding tutorial to sit through. But it’s managing your knowledge every single day.
Honestly though, if all this did was “make note-taking convenient,” I wouldn’t think it’s worth sharing.
What really makes this interesting is what it can do with completely different types of knowledge.
Think about how varied the information you deal with daily is: at work there are software project architecture docs, requirements specs, meeting minutes. Personal stuff includes financial records, credit card statement analysis, investment notes. Learning stuff includes paper summaries, technical article takeaways, reading notes. Life stuff includes travel plans, family schedules, account passwords.
These are wildly different in nature. In the past, you probably handled them like this: Notion for notes and to-dos, Excel for finances, Confluence for work docs, Trello for project tracking. Four or five platforms, data completely siloed. Want to find a decision from last week’s meeting notes and connect it to a project document? Good luck—you have to remember which platform it was on and which page it was under.
But in the plain text world, all of these live under the same directory tree. Software projects have their own rule files, finances have theirs, research has its own. They each have their own categorization and format requirements, but physically, they’re just different subdirectories inside the same folder on the same computer.
What does this mean?
It means the LLM can do truly cross-domain operations. It can run a single grep across all directories, find an insight in your research notes, and discover it’s relevant to the work project you’re currently on. It can extract action items from your meeting notes and create them directly in your to-do list. It can analyze your March credit card statement and compare it to February’s, telling you where you overspent. It can do all this because all the data is in the same format, in the same place—no format conversion issues, no barriers between platforms.
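The cross-domain search described above really does reduce to a single grep over one tree. Everything in this sketch is invented sample data:

```shell
# One grep across every domain at once. All files and contents are invented.
set -e
BASE=/tmp/kb-cross-demo
mkdir -p "$BASE/Research/AI" "$BASE/Work" "$BASE/Personal/Finance"
printf 'Insight: retrieval beats fine-tuning for small corpora.\n' \
  > "$BASE/Research/AI/retrieval-notes.md"
printf 'Meeting: evaluate retrieval for the Falcon search feature.\n' \
  > "$BASE/Work/falcon-minutes.md"
printf 'March dining: 412. February dining: 287.\n' \
  > "$BASE/Personal/Finance/2024-03-statement.md"

# The research note and the work note surface together in one pass.
grep -ril 'retrieval' "$BASE"
```

No export step, no API, no sync: the research note and the meeting minutes are findable in the same command because they are the same kind of thing, plain text in one tree.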
This is something no single note-taking app can do—no matter how powerful it is. Not because the technology isn’t there, but because every app inherently locks data inside its own world. Your Notion notes don’t automatically talk to your Excel spreadsheets. Plain text never had this problem to begin with.
In a way, this is also one of the most underrated capabilities of LLMs. Everyone’s talking about AI writing code, AI generating images, AI making videos. But the most fundamental ability of an LLM is understanding and operating on text—and the thing we produce the most of every day is text. Put an LLM on top of a pile of plain text files, let it understand, search, organize, and connect them—that’s the most natural and efficient way to use it.
Let’s circle back to Jeff Huang’s story.
His .txt has survived 14 years, and it’s still going. I fully believe plain text will continue to survive—this format has been around since the 1970s, and it’s never let anyone down. 14 years is nothing. It’s already been 50.
The difference is that plain text used to be a tradeoff. You chose freedom and durability, but gave up structure and intelligence—all the organizing work was on you. Making it 14 years took extraordinary discipline on Jeff Huang’s part.
Not anymore. LLMs have turned plain text from a one-person minimalist struggle into a full knowledge management system with an AI assistant working alongside you. You still get all the benefits of plain text—durable, free, no platform dependency. But you no longer have to do all the grunt work alone, because there’s an assistant that understands semantics and knows how to operate Bash.
What you need is surprisingly little:
A folder—that’s your knowledge base. Some Markdown files—a format both humans and AI can read. A few written rules—telling the AI how you want things done. And any LLM that can run Bash—Claude, GPT, a local open-source model, whatever. As long as it can read text and operate the filesystem, it can manage your knowledge.
No need to choose between “simple” and “powerful.” Plain text plus LLM—take both.
Less is more. Simplicity is the ultimate sophistication.