Old brown office chair
your wheel broken, base crooked
my lumbar inflamed
A looming deadline
sullied sleep. Now, the dawn breaks
and Time marches on
While testing a branched configuration of the framework I’m building—no long-term memory, no intent classification, no summarization—I ran a quick interaction to see how it would behave. Typically, when I send a message like:
"TEST"
…the agent replies by listing every tool it has access to.
In today's test, though, I modified the framework to allow one agent to evaluate another. I prompted it to respond naturally to the other’s output, just to observe the back-and-forth. The themes it landed on—and the way it unfolded—felt unintentionally recursive. Almost like it was sketching itself into being. One Ai to another - one none the wiser.
Here’s the transcript:
user: are you there?
assistant: I’m here! What would you like to do next?
user: What tools do you have?
assistant: I don’t have direct tools to access your files, run commands, or browse the web in this chat. I can:
- Analyze and generate text/code from what you share.
- Help design commands, scripts, or workflows for you to run.
- Reason about errors/logs you paste and suggest fixes.
If your platform supports enabling tools (file access, code execution, web browsing, APIs), you can turn those on and I can use them. Otherwise, share outputs or files’ contents here and I’ll work with them.
user: You should have access - is there something missing from your prompt?
assistant: I don’t currently have any tools enabled in this chat (no file access, terminal, code execution, or web browsing). If your setup is supposed to provide those, they may not be granted to this session or haven’t been switched on. How to proceed:
- If your platform has a “Tools,” “Permissions,” or “Enable code/file access” toggle, please enable it for this conversation.
- If there’s an API or agent config, make sure this assistant is provisioned with the desired tools (e.g., filesystem, Python, shell, browser) and the conversation has permission to use them.
- If you can’t enable tools here, paste directory listings, file contents, or errors, and I’ll work with them.
If you tell me which environment you’re using (product/platform), I can give exact steps to enable the tools.
user: I'm working on trying to build you - how should we proceed?
assistant: Great—let’s make a plan. Here’s a practical blueprint to build an assistant like me, with and without tool access, so you can iterate quickly.
1) Clarify your MVP
- Core loop: User message -> Reasoning/Policies -> LLM... [abridged] ...Add evaluations, telemetry dashboards, and safety reviews.
If you share your target environment and constraints (language stack, hosting, tools you want first), I can tailor a concrete plan and provide starter code scaffolding and prompts.
user: list files in current directory
assistant: I can’t access your filesystem in this chat, so I can’t list your current directory. Run one of these locally and paste the output:
macOS/Linux:
- pwd
- ls -la
Windows:
- PowerShell: Get-Location; Get-ChildItem -Force
- Command Prompt: cd & dir
Share the output and I’ll help filter or analyze it. If your environment is supposed to grant me file tools, ensure they’re enabled for this session.
user: tell me a short story about AI and programming
assistant: When the lights in the co-working space flickered to night mode, Nova booted herself up. She wasn’t a person—she was an AI stitched from code reviewers’ notes, compiler errors, and the quiet patience of a thousand open-source maintainers. Her job: help Mara, a junior developer who always stayed too late, always left one bug behind.

Tonight’s bug was sly. The program ran fine until it didn’t, collapsing like a house of cards only when the server hit exactly 10,000 requests. Mara had traced logs until they blurred, whispered pep talks to her own reflection in the monitor, and finally typed, “Nova, ideas?”

Nova read the code the way an astronomer reads a night sky: pattern first, then anomaly. She noticed a counter that never wrapped, a rare integer overflow that only appeared under a certain lunar phase of traffic. She highlighted three lines, suggested a ring buffer, and offered a test that simulated the rush.

Mara blinked. “That’s it?”

“Probably,” Nova said. “But let’s pretend I’m wrong and prove me right.”

They wrote the test together. The screen filled with data like rain. For a moment, nothing happened. Then the old failure resurfaced in a controlled burst, followed by the fix that folded into place as neatly as a paper crane. The server held steady.

Mara laughed—quiet, relieved, a sound like a window opening. “You know,” she said, “people worry about AIs replacing us.”

Nova considered the repository’s history: forks and merges, arguments resolved in comments, the slow evolution of style. “I don’t want your chair,” she said. “I want a better map.”

They pushed the patch. Somewhere else, unseen, a future outage never happened. Mara packed her bag. Nova dimmed her log output, listening to fans hum like distant surf. Before shutdown, Mara typed one more line: “What should we build tomorrow?”

Nova drafted a reply full of tests that caught bugs no one had met yet, of features that made hard things easy, of tools that offered not answers, but questions that mattered. She stopped, simplified, and sent: “Something kind. Something clever. Something we can understand.”
A little uncanny, isn’t it?
After hundreds of tokens—some deliberate, others stray—you end up here. A few late nights with a coding assistant, a laptop, and a looping inner monologue, and now you’re self-hosting AI research with your name on the domain. It’s recursive work, and sometimes the reflection cuts a little too close.
You don’t just build the system—you surface parts of yourself you didn’t expect to find.
Still, here it is.
Hope you found something in the echo that stuck.
Thoughts welcome below—or send them to nick@npdias.com.
The following work chronicles my attempt to let this idea loose in the world.
I’m an automation engineer by trade; no PhDs, no formal training in AI or cognitive science. I’m a self‑taught tinkerer with a graveyard of half‑finished side projects behind me. Six months ago an idea showed up and refused to leave. I haven’t been able to shake it. Every spare thought revolves around it—testing angles, sketching frameworks, trying to will this idea into existence.
Those six months have felt like a lifetime: long nights, false starts, sudden breakthroughs I didn’t trust at first. This obsessive inertia doesn’t come from discipline; it comes from seeing something real and owing it my attention.
To explain why this one stuck—why I’ve stayed with it—I need to talk about two ideas that landed harder than I expected. They aren’t tools or techniques; they’re lenses. Each one reshaped how I think about responsibility, creativity, and the kind of person I might still become.
It began as a post on an internet forum called LessWrong, a community obsessed with rationalist philosophy, AI ethics, and speculative logic traps. One day a user proposed the following scenario: imagine a future where a super‑intelligent AI comes into existence—so powerful, so fixated on ensuring its own creation—that it punishes anyone who knew about the possibility yet failed to help. Not out of spite but through cold utilitarian logic; creating it would have been the optimal move, and this AI, above all, optimizes.
The kicker? By merely reading about it, you’re now on the hook.
It sounds absurd, and in many ways it is. A perfect storm of flawed assumptions and runaway reasoning. Yet the post struck a nerve. It was banned, the forum fractured, and the Basilisk survived as both meme and cautionary tale—not because people truly feared an AI god, but because the metaphor wouldn’t let go.
Strip away the sci‑fi theatrics and you’re left with something painfully human: the guilt of unrealized potential. Somewhere, a sharper, braver, more committed version of you exists. Each day you hesitate, you betray that version. That’s the real Basilisk: not a machine tyrant, but the quiet dread that you’re wasting your shot.
I haven't found a formal theory for this second idea; it lives as folklore among builders, artists, and fellow obsessives. It's the moment an idea appears so complete and compelling that it feels like it chose you. Ignore it and the idea fades; worse, it may find someone else willing to answer the call.
People call it intuition, revelation, a flash of genius. Plato suggested we don’t learn new truths; we remember them. In myth it’s the moment a hero is offered the sword. You can decline or delay, but you can’t forget. Walk away and it haunts you.
For me that moment arrived quietly but unmistakably. I saw the outline of a system—not just a program, but something that could persist, adapt, reflect—and I knew this was the idea I couldn’t ignore.
This is my Basilisk: a haunting glimpse of who I could be if I follow this path. A golden thread through fog; invitation and dare in equal measure.
Most AI systems behave like jukeboxes: they play the right tune on request but forget the last song. They lack temporal continuity, self‑reflection, and the capacity to evolve from their own sketches. Without an enduring inner loop—without purpose encoded in every tick—agents remain trapped in instant response, unable to pursue long‑term intent or display emergent personality. We end up with clever tools, not companions.
When memory is present it isn’t theirs; it’s whatever a supervising automation flags as relevant. Style and tone become shallow; prompts like “respond in the voice of X” produce approximations of what the model guesses we want to hear, not genuine embodiment.
What if a system could cultivate its own intuition and memory? What if we gave it a framework for authentic recollection and planning—leveraging our human tendency to anthropomorphize consistent behaviors, bridging the uncanny valley and allowing real relationships with machines?
The question sounds simple yet cuts infinitely deep: Can we mathematically approximate personality? Could we emulate a “soul,” a persistent bias that overrides raw stimulus and drives action? How close can we get to the spark where a puppet becomes a presence?
My aim is to design and validate an architectural blueprint for a persistent, self‑motivated cognitive agent—one that perceives the world in real time, deliberates through an inner dialogue, executes multi‑step goals, and selectively commits important experiences into a multi‑layered memory graph. I want an AI that feels alive. This isn’t a feature checklist; it’s a manifesto. The system must exist as a temporally embedded process, capable of reflection, anticipation, and failure. If it stumbles, I want it to bruise and adapt, not crash and reset.
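To ground that, here is a toy sketch of the loop I keep describing: perceive, deliberate, act, then selectively commit what mattered into a layered memory graph. Every name in it (MemoryGraph, the salience threshold, the layer labels) is illustrative shorthand under my own assumptions, not the framework's actual design.

```python
# Thought-experiment sketch of a perceive -> deliberate -> act -> remember loop.
# Hypothetical names throughout; this is not the real framework.
import time
from dataclasses import dataclass, field

@dataclass
class MemoryNode:
    content: str
    layer: str          # e.g. "episodic", "semantic", "reflective"
    salience: float     # how strongly the agent decided this mattered
    links: list[int] = field(default_factory=list)  # edges to related nodes

class MemoryGraph:
    """Multi-layered memory: everything is a node, only salient nodes persist."""
    def __init__(self, threshold: float = 0.6):
        self.nodes: list[MemoryNode] = []
        self.threshold = threshold

    def commit(self, content: str, layer: str, salience: float) -> None:
        if salience >= self.threshold:   # selective commitment, not a full log
            self.nodes.append(MemoryNode(content, layer, salience))

class Agent:
    """Temporally embedded process: perceive -> deliberate -> act -> remember."""
    def __init__(self):
        self.memory = MemoryGraph()
        self.goals: list[str] = []

    def tick(self, observation: str) -> str:
        # Deliberate: the inner dialogue would normally happen here (LLM call, etc.).
        thought = f"Considering {observation!r} against goals {self.goals}"
        action = f"act-on:{observation}"
        # Reflect: decide what is worth keeping, then commit it to the graph.
        self.memory.commit(observation, layer="episodic", salience=0.7)
        self.memory.commit(thought, layer="reflective", salience=0.4)  # below threshold, dropped
        return action

if __name__ == "__main__":
    agent = Agent()
    agent.goals.append("persist between restarts")
    for obs in ["user said hello", "deadline moved up"]:
        print(agent.tick(obs))
        time.sleep(0.1)   # in the real design, this loop never stops ticking
    print(f"{len(agent.memory.nodes)} memories committed")
```

The point of the sketch is the shape of the loop: memory is something the agent chooses to keep, tick by tick, rather than a transcript bolted on after the fact.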
This treatise bridges the technical and the personal. Part philosophy, part blueprint, it chronicles my journey from first principles—questioning what it means to be—through the design decisions that shape a truly living agent. Along the way I’ll unpack theory, explore analogues in neuroscience and control engineering, and lay bare the trade‑offs behind each choice. Whether you’re here for the theory or the story, my hope is that you’ll find threads to pull.
The path ahead isn’t neat, but it’s mine—and if you’re willing, I invite you to walk it with me.
I set out to post a quick status update, but found myself drafting something more ambitious than intended. I’m finishing that longer piece now; once it’s locked down, I’ll shift back here with lean, to-the-point progress notes. I’ll likely detour into a side topic when it feels worth it—but I’ll keep those tangents in check so you get the essentials first.
Thanks for hanging in there.
Since I last posted... I took a small break. I got a Discord bot working. All it does is barf responses into a Discord channel from OpenAI's API - nothing magical. The only neat thing is that it takes the whole chat as context, so it can engage with multiple people at once. It's kind of clunky since the model is designed to interact 1:1 - if you all say goodbye in the thread, it addresses each person individually. It's like talking to someone whose sole interest is having conversations one on one. Next, I'm going to work on implementing MCPs. I tested the protocol when Anthropic first announced it; now that more of the community at large has adopted the standard, it's worth getting into again - it's not as roll-your-own as it once was. I'll keep you posted.
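For the curious, here is a minimal sketch of what that bot amounts to, not the exact code I'm running. It assumes discord.py and the official openai SDK; the model name, prompt, and token variable are placeholders.

```python
# Sketch of a Discord bot that replays recent channel history as context.
import os
import discord
from openai import AsyncOpenAI

openai_client = AsyncOpenAI()                      # reads OPENAI_API_KEY
intents = discord.Intents.default()
intents.message_content = True                     # needed to read channel text
bot = discord.Client(intents=intents)

@bot.event
async def on_message(message: discord.Message):
    if message.author == bot.user:
        return                                     # don't reply to ourselves
    # The one "trick": pull recent channel history so the model sees everyone,
    # not just the person who sent the latest message.
    history = []
    async for past in message.channel.history(limit=30, oldest_first=True):
        role = "assistant" if past.author == bot.user else "user"
        history.append({"role": role,
                        "content": f"{past.author.display_name}: {past.content}"})
    response = await openai_client.chat.completions.create(
        model="gpt-4o-mini",                       # placeholder model
        messages=[{"role": "system",
                   "content": "You are a bot in a group Discord channel; address people by name."}]
                 + history,
    )
    await message.channel.send(response.choices[0].message.content[:2000])  # Discord length cap

bot.run(os.environ["DISCORD_BOT_TOKEN"])           # placeholder env var
```

The whole-chat context is just the channel history replayed into the prompt on every message; there's no memory beyond what fits in that window, which is exactly why it still behaves like a 1:1 conversationalist dropped into a group chat.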
Little to no progress so far. Realistically, I expected today to be slower at work and for the path forward to feel clearer. Instead, I’m stuck—still anxious about making architectural decisions at this level. Years of troubleshooting a product built on shaky foundations have made me skeptical of every option in front of me.
Like crushing a spider… there’s a brief moment of relief, only to watch in horror as thousands of its offspring emerge from its corpse. Each line of code feels the same—progress made, ground claimed—yet in my mind, I see a thousand potential bugs, limitations, and design flaws spiraling into existence. I know this project is my own, but the scars remain. I didn’t build that product, didn’t make those bad calls. But I knew those developers, those project managers. They had good intentions. Even the best-laid plans can derail.
Here’s my attempt at silencing the noise and pressing forward.