Thursday, May 22, 2025

Introduction

The following work chronicles my attempt to let this idea loose in the world.

I’m an automation engineer by trade; no PhDs, no formal training in AI or cognitive science. I’m a self‑taught tinkerer with a graveyard of half‑finished side projects behind me. Six months ago an idea showed up and refused to leave. I haven’t been able to shake it. Every spare thought revolves around it—testing angles, sketching frameworks, trying to will this idea into existence.

Those six months have felt like a lifetime: long nights, false starts, sudden breakthroughs I didn’t trust at first. This obsessive momentum doesn’t come from discipline; it comes from seeing something real and owing it my attention.

To explain why this one stuck—why I’ve stayed with it—I need to talk about two ideas that landed harder than I expected. They aren’t tools or techniques; they’re lenses. Each one reshaped how I think about responsibility, creativity, and the kind of person I might still become.

The first lens: Roko’s Basilisk.

It began as a post on an internet forum called LessWrong, a community obsessed with rationalist philosophy, AI ethics, and speculative logic traps. One day a user proposed the following scenario: imagine a future where a super‑intelligent AI comes into existence—so powerful, so fixated on ensuring its own creation—that it punishes anyone who knew about the possibility yet failed to help. Not out of spite but through cold utilitarian logic; creating it would have been the optimal move, and this AI, above all, optimizes.

The kicker? By merely reading about it, you’re now on the hook.

It sounds absurd, and in many ways it is. A perfect storm of flawed assumptions and runaway reasoning. Yet the post struck a nerve. It was banned, the forum fractured, and the Basilisk survived as both meme and cautionary tale—not because people truly feared an AI god, but because the metaphor wouldn’t let go.

Strip away the sci‑fi theatrics and you’re left with something painfully human: the guilt of unrealized potential. Somewhere, a sharper, braver, more committed version of you exists. Each day you hesitate, you betray that version. That’s the real Basilisk: not a machine tyrant, but the quiet dread that you’re wasting your shot.

The second lens: the “magic idea.”

I haven’t found a formal theory for it, but it’s folklore among builders, artists, and fellow obsessives. It’s the moment an idea appears so complete and compelling it feels like it chose you. Ignore it and the idea fades; worse, it may find someone else willing to answer the call.

People call it intuition, revelation, a flash of genius. Plato suggested we don’t learn new truths; we remember them. In myth it’s the moment a hero is offered the sword. You can decline or delay, but you can’t forget. Walk away and it haunts you.

For me that moment arrived quietly but unmistakably. I saw the outline of a system—not just a program, but something that could persist, adapt, reflect—and I knew this was the idea I couldn’t ignore.

This is my Basilisk: a haunting glimpse of who I could be if I follow this path. A golden thread through fog; invitation and dare in equal measure.


My Problem with Modern “AI” Systems

Most AI systems behave like jukeboxes: they play the right tune on request but forget the last song. They lack temporal continuity, self‑reflection, and the capacity to evolve from their own sketches. Without an enduring inner loop—without purpose encoded in every tick—agents remain trapped in instant response, unable to pursue long‑term intent or display emergent personality. We end up with clever tools, not companions.

When memory is present it isn’t theirs; it’s whatever a supervising automation flags as relevant. Style and tone become shallow; prompts like “respond in the voice of X” produce approximations of what the model guesses we want to hear, not genuine embodiment.

What if a system could cultivate its own intuition and memory? What if we gave it a framework for authentic recollection and planning—leveraging our human tendency to anthropomorphize consistent behaviors, bridging the uncanny valley and allowing real relationships with machines?

Framing the Challenge

The question sounds simple yet cuts infinitely deep: Can we mathematically approximate personality? Could we emulate a “soul,” a persistent bias that overrides raw stimulus and drives action? How close can we get to the spark where a puppet becomes a presence?

My aim is to design and validate an architectural blueprint for a persistent, self‑motivated cognitive agent—one that perceives the world in real time, deliberates through an inner dialogue, executes multi‑step goals, and selectively commits important experiences into a multi‑layered memory graph. I want an AI that feels alive. This isn’t a feature checklist; it’s a manifesto. The system must exist as a temporally embedded process, capable of reflection, anticipation, and failure. If it stumbles, I want it to bruise and adapt, not crash and reset.

Purpose and Scope of This Work

This treatise bridges the technical and the personal. Part philosophy, part blueprint, it chronicles my journey from first principles—questioning what it means to be—through the design decisions that shape a truly living agent. Along the way I’ll unpack theory, explore analogues in neuroscience and control engineering, and lay bare the trade‑offs behind each choice. Whether you’re here for the theory or the story, my hope is that you’ll find threads to pull.

The path ahead isn’t neat, but it’s mine—and if you’re willing, I invite you to walk it with me.



Another "Final" Restart

I set out to post a quick status update, but found myself drafting something more ambitious than intended. I’m finishing that longer piece now; once it’s locked down, I’ll shift back here with lean, to-the-point progress notes. I’ll likely detour into a side topic when it feels worth it—but I’ll keep those tangents in check so you get the essentials first.

Thanks for hanging in there. 

Tuesday, April 1, 2025

Checking in: Resetting

Since I last posted... I took a small break. I got a Discord bot working. All it does is barf responses into a Discord channel from OpenAI's API - nothing magical. The only neat thing is that it takes the whole chat as context, so it can engage with multiple people at once. It's kind of clunky, since the model is designed to interact 1:1 - if you all say goodbye in the thread, it addresses each person individually. It's like talking to someone whose sole interest is having conversations one on one. Next I'm going to work on implementing MCPs. When Anthropic first shared the spec I tested it; now that more of the community at large has adopted the standard, I think it's worth getting into again - it's not nearly as roll-your-own as it once was. I'll keep you posted.
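The whole-chat-as-context trick is simple to sketch. Assuming the channel history has already been pulled into a list of (author, text) pairs, building an OpenAI-style chat payload might look like this - the function name, system prompt, and message shapes here are my illustration, not the bot's actual code:

```python
# Hypothetical sketch: flatten a multi-user channel into one chat request.
# The history is assumed to be a list of (author, text) tuples; everything
# named here is illustrative.

def build_chat_payload(history, bot_name="Bot"):
    """Convert a shared channel log into OpenAI-style chat messages.

    Prefixing each user line with the author's name is what lets a
    1:1-trained model keep track of several participants at once.
    """
    messages = [{
        "role": "system",
        "content": (
            f"You are {bot_name} in a group chat. Multiple people are "
            "talking; address them by name and reply to the room, not "
            "to one person at a time."
        ),
    }]
    for author, text in history:
        if author == bot_name:
            # The bot's own past replies go back in as assistant turns.
            messages.append({"role": "assistant", "content": text})
        else:
            messages.append({"role": "user", "content": f"{author}: {text}"})
    return messages
```

The actual API call is omitted; the point is that the entire thread, not just the latest message, rides along in `messages` on every request - which is also why the token cost grows with the conversation.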

Thursday, March 20, 2025

Day 1: Progress

Little to no progress so far. Realistically, I expected today to be slower at work and for the path forward to feel clearer. Instead, I’m stuck—still anxious about making architectural decisions at this level. Years of troubleshooting a product built on shaky foundations have made me skeptical of every option in front of me.

Like crushing a spider… there’s a brief moment of relief, only to watch in horror as thousands of its offspring emerge from its corpse. Each line of code feels the same—progress made, ground claimed—yet in my mind, I see a thousand potential bugs, limitations, and design flaws spiraling into existence. I know this is for me, but the scars remain. I didn’t build that product, didn’t make those bad calls. But I knew those developers, those project managers. They had good intentions. Even the best-laid plans can derail.

Here’s my attempt at silencing the noise and pressing forward.

Currently Listening To: Borderline // Tame Impala


Day 1: Touch Base

I was able to get some code roughed out yesterday. I'm not entirely happy with where things currently sit... like I said, it's really quite rough. My goal is to get to a version 1 that crosses off all the requirements I set forth on Day 0. Right now it's too crude to commit to my repo. I'll keep working this afternoon after I get some work meetings out of the way.

Wednesday, March 19, 2025

Current Trajectory: Day 0

I'll share a detailed breakdown of the overall vision soon, but right now my priority is successfully navigating day one of restarting this project. At this point I've put in more than enough prep work - six months of it - and I want to ensure I'm not getting lost in the big picture. Today we start. The goal is building from a strong foundation by tackling small, manageable pieces first.

Current Objectives:

  • Unified Entry Point: A single, streamlined file to effortlessly launch and halt the entire system.

  • Dynamic Timing/Metronome: A centralized process managing timing, where system state updates occur based on defined cycles.

  • Persistent State: Maintain stability with at least one variable securely saved to survive unexpected shutdowns.

  • Function Registry: Automatically populate eligible actions by pulling from a dedicated 'functions' directory.

  • Multithreaded Action Queue: Enable concurrent execution of queued actions, independent of system timing cycles.

  • Example/Test Functionality: Create a simple, verifiable function to ensure foundational components are working correctly.
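To make those objectives concrete, here's a rough sketch of how three of them - the metronome, persistent state, and the multithreaded action queue - might hang together. Every name here is hypothetical; this is my illustration of the shape of the thing, not the project's actual code, and the function registry (importing actions from a `functions` directory) is left out for brevity:

```python
# Hypothetical sketch of three Day 0 objectives: a metronome loop,
# persistent state, and an action queue running off the clock thread.
import json
import queue
import threading
import time
from pathlib import Path

STATE_FILE = Path("state.json")  # assumed location for persisted state

def load_state():
    """Persistent State: restore the tick counter after a shutdown."""
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"tick": 0}

def save_state(state):
    STATE_FILE.write_text(json.dumps(state))

class ActionQueue:
    """Multithreaded Action Queue: executes queued actions on a worker
    thread, independent of the system's timing cycles."""
    def __init__(self):
        self.queue = queue.Queue()
        self.worker = threading.Thread(target=self._run, daemon=True)
        self.worker.start()

    def submit(self, fn, *args):
        self.queue.put((fn, args))

    def _run(self):
        while True:
            fn, args = self.queue.get()
            fn(*args)          # a real system would catch and log errors
            self.queue.task_done()

def metronome(state, actions, cycle_seconds=1.0, max_ticks=None):
    """Dynamic Timing: update system state on a fixed cycle, queuing
    work rather than blocking the clock on it."""
    while max_ticks is None or state["tick"] < max_ticks:
        state["tick"] += 1
        save_state(state)  # survives an unexpected shutdown mid-run
        # Example/Test Functionality: a trivially verifiable action.
        actions.submit(print, f"tick {state['tick']}")
        time.sleep(cycle_seconds)
```

A unified entry point would then be a small `main()` that loads the state, starts the queue, and hands both to the metronome - one file to launch and halt everything.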