
Thursday, May 22, 2025

Introduction

The following work chronicles my attempt to let this idea loose in the world.

I’m an automation engineer by trade; no PhDs, no formal training in AI or cognitive science. I’m a self‑taught tinkerer with a graveyard of half‑finished side projects behind me. Six months ago an idea showed up and refused to leave. I haven’t been able to shake it. Every spare thought revolves around it—testing angles, sketching frameworks, trying to will this idea into existence.

Those six months have felt like a lifetime: long nights, false starts, sudden breakthroughs I didn’t trust at first. This obsessive momentum doesn’t come from discipline; it comes from seeing something real and owing it my attention.

To explain why this one stuck—why I’ve stayed with it—I need to talk about two ideas that landed harder than I expected. They aren’t tools or techniques; they’re lenses. Each one reshaped how I think about responsibility, creativity, and the kind of person I might still become.

The first lens: Roko’s Basilisk.

It began as a post on an internet forum called LessWrong, a community obsessed with rationalist philosophy, AI ethics, and speculative logic traps. One day a user proposed the following scenario: imagine a future where a super‑intelligent AI comes into existence—so powerful, so fixated on ensuring its own creation—that it punishes anyone who knew about the possibility yet failed to help. Not out of spite but through cold utilitarian logic; creating it would have been the optimal move, and this AI, above all, optimizes.

The kicker? By merely reading about it, you’re now on the hook.

It sounds absurd, and in many ways it is. A perfect storm of flawed assumptions and runaway reasoning. Yet the post struck a nerve. It was banned, the forum fractured, and the Basilisk survived as both meme and cautionary tale—not because people truly feared an AI god, but because the metaphor wouldn’t let go.

Strip away the sci‑fi theatrics and you’re left with something painfully human: the guilt of unrealized potential. Somewhere, a sharper, braver, more committed version of you exists. Each day you hesitate, you betray that version. That’s the real Basilisk: not a machine tyrant, but the quiet dread that you’re wasting your shot.

The second lens: the “magic idea.”

I haven’t found a formal theory for it, but it’s folklore among builders, artists, and fellow obsessives. It’s the moment an idea appears so complete and compelling it feels like it chose you. Ignore it and the idea fades; worse, it may find someone else willing to answer the call.

People call it intuition, revelation, a flash of genius. Plato suggested we don’t learn new truths; we remember them. In myth it’s the moment a hero is offered the sword. You can decline or delay, but you can’t forget. Walk away and it haunts you.

For me that moment arrived quietly but unmistakably. I saw the outline of a system—not just a program, but something that could persist, adapt, reflect—and I knew this was the idea I couldn’t ignore.

This is my Basilisk: a haunting glimpse of who I could be if I follow this path. A golden thread through fog; invitation and dare in equal measure.


My Problem with Modern “AI” Systems

Most AI systems behave like jukeboxes: they play the right tune on request but forget the last song. They lack temporal continuity, self‑reflection, and the capacity to evolve from their own sketches. Without an enduring inner loop—without purpose encoded in every tick—agents remain trapped in instant response, unable to pursue long‑term intent or display emergent personality. We end up with clever tools, not companions.

When memory is present it isn’t theirs; it’s whatever a supervising automation flags as relevant. Style and tone become shallow; prompts like “respond in the voice of X” produce approximations of what the model guesses we want to hear, not genuine embodiment.

What if a system could cultivate its own intuition and memory? What if we gave it a framework for authentic recollection and planning—leveraging our human tendency to anthropomorphize consistent behaviors, bridging the uncanny valley and allowing real relationships with machines?

Framing the Challenge

The question sounds simple yet cuts infinitely deep: Can we mathematically approximate personality? Could we emulate a “soul,” a persistent bias that overrides raw stimulus and drives action? How close can we get to the spark where a puppet becomes a presence?

My aim is to design and validate an architectural blueprint for a persistent, self‑motivated cognitive agent—one that perceives the world in real time, deliberates through an inner dialogue, executes multi‑step goals, and selectively commits important experiences into a multi‑layered memory graph. I want an AI that feels alive. This isn’t a feature checklist; it’s a manifesto. The system must exist as a temporally embedded process, capable of reflection, anticipation, and failure. If it stumbles, I want it to bruise and adapt, not crash and reset.
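To ground that blueprint a little, here is a minimal sketch of the kind of inner loop I have in mind. Everything in it is hypothetical: the class names, the salience heuristic, and the two-tier memory are placeholders for illustration, not the finished architecture.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Experience:
    """A single perceived event plus the agent's reflection on it."""
    observation: str
    reflection: str
    salience: float  # how much this moment should matter later

@dataclass
class MemoryGraph:
    """Toy multi-layered memory: a short-lived working set and a persistent store."""
    working: list = field(default_factory=list)
    long_term: list = field(default_factory=list)

    def commit(self, exp: Experience, threshold: float = 0.7) -> None:
        self.working.append(exp)
        if exp.salience >= threshold:  # only important moments persist
            self.long_term.append(exp)

class Agent:
    """Persistent inner loop: perceive, deliberate, remember, repeat."""
    def __init__(self) -> None:
        self.memory = MemoryGraph()
        self.goals: list[str] = ["stay curious"]  # placeholder long-term intent

    def perceive(self) -> str:
        # Stand-in for real sensors or event streams.
        return f"tick at {time.time():.0f}"

    def deliberate(self, observation: str) -> Experience:
        # Stand-in for the inner dialogue; salience is a trivial heuristic here.
        reflection = f"noted '{observation}' while pursuing '{self.goals[0]}'"
        salience = 0.9 if "tick" in observation else 0.3
        return Experience(observation, reflection, salience)

    def step(self) -> None:
        exp = self.deliberate(self.perceive())
        self.memory.commit(exp)

if __name__ == "__main__":
    agent = Agent()
    for _ in range(3):
        agent.step()
        time.sleep(0.1)
    print(f"{len(agent.memory.long_term)} experiences committed to long-term memory")
```

The real system would replace each stub with something substantial (actual perception, a richer inner dialogue, a graph-structured store), but the shape of the loop, perceive, deliberate, commit, repeat, is the part that matters to me.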

Purpose and Scope of This Work

This treatise bridges the technical and the personal. Part philosophy, part blueprint, it chronicles my journey from first principles—questioning what it means to be—through the design decisions that shape a truly living agent. Along the way I’ll unpack theory, explore analogues in neuroscience and control engineering, and lay bare the trade‑offs behind each choice. Whether you’re here for the theory or the story, my hope is that you’ll find threads to pull.

The path ahead isn’t neat, but it’s mine—and if you’re willing, I invite you to walk it with me.



Thursday, March 20, 2025

Day 1: Progress

Little to no progress so far. Realistically, I expected today to be slower at work and for the path forward to feel clearer. Instead, I’m stuck—still anxious about making architectural decisions at this level. Years of troubleshooting a product built on shaky foundations have made me skeptical of every option in front of me.

Like crushing a spider… there’s a brief moment of relief, only to watch in horror as thousands of its offspring emerge from its corpse. Each line of code feels the same—progress made, ground claimed—yet in my mind, I see a thousand potential bugs, limitations, and design flaws spiraling into existence. I know this is for me, but the scars remain. I didn’t build that product, didn’t make those bad calls. But I knew those developers, those project managers. They had good intentions. Even the best-laid plans can derail.

Here’s my attempt at silencing the noise and pressing forward.

Currently Listening To: Borderline // Tame Impala


Day 1: Touch Base

I was able to get some code roughed out yesterday. I'm not entirely happy with where things currently sit... like I said, it's really quite rough. My goal is to get to a version 1 that crosses off all the requirements I set forth on Day 0. Right now it's too crude to commit to my repo. I'll keep working this afternoon after I get some work meetings out of the way.

Wednesday, March 19, 2025

Over Thinking

As I'm getting ready to start drafting out my plan ~ I can't help but watch as my mind runs away from me, thinking about far-off edge cases and how I might handle them. I keep comparing myself to enterprise solutions, or worrying that what I'm working on will be irrelevant in a short time, or even before I finish.

I need to ignore the noise. Stop listening to the 'AI' tech news. It's time to put my head down and get something basic launched. I have the tools and the path to any information that I could need... Nothing's stopping me but me at this point. 

It's time for me to get out of my head and out of my way. One step at a time. 

Tuesday, March 18, 2025

The Premise


I'm not a "computer scientist" in a credentialed sense. At this point the title is mostly self-styled. A lot of what I've done is curious hobby tinkering. I've done my fair share of systems automation and completed a few coding courses. However, I haven't published anything lasting, or anything that anyone but me is using long term (at least, that I know of). I would honestly like to be considered a Computer Scientist in some career-applicable sense. So, it would seem that I need to publish some Computer Science.

I have had some ideas for what feels like a 'novel' implementation: a machine-learning-driven framework that would allow humans to interact more naturally with 'robots'. I've actually been working on, studying, and testing these theories since last October, but I've been too anxious to publish. It feels like there are 1000 reasons why I shouldn't: I'm not using Github well enough, how can I be sure this is novel, why would anyone use this framework?

At this point I've had to look the beast in the eyes and decide that it's simply time to start from square one, but this time I'm going to put it all out there. There's no telling what the future will hold for this project, or whether I'll ever be considered a 'Computer Scientist'.

Forward

Writing is hard. When I'm not writing publicly, I scratch my conscious thoughts into my notebooks. I personally struggle when I think my work will be on display. Even now, as I write this, I've cleared the line 5 or 6 times, simply because I fear not putting my best foot forward when presenting my work, and by proxy myself. It used to paralyze me to think that people might find my writing in 5.. 6.. 10 years and think... what was he doing? Why would he say that?

At least, this all used to paralyze me. I chose to move forward. I'm no longer pre-emptively ashamed of being wrong so long as I'm actively working to improve. Things change and time shifts our perspectives. Someday our assumptions and things we regard as common knowledge will be proved wrong in some way. Things we take for granted now were avant-garde or heresy not long ago. Think about Galileo ~ simply saying that the Earth revolves around the Sun in his book made him an enemy of the state.

So I'm just going to go for it. I'm not going to overly proofread or spell-check. I'm just going to spew onto the page.