Agents Aren't People: The Identity Problem Nobody Is Talking About

The Sentinel Ridge Files · Track 1: The Framework

Who Am I?

Who am I? That’s a question that I’ve thought a lot about lately. Sometimes I consider it from a very deep philosophical standpoint. Who should I be? Then I have to evaluate who I have actually become. I think about what I’ve done in my life. Does the sum of my actions equal who I am now? I don’t think that they do, to be quite honest. I believe I have value because it was assigned to me by my Creator — and that belief is actually what made me see the problem.

The other day, though, I was working in my vehicle in a parking lot in a neighboring town, and because I often think about identity, I was trying to figure out not who I was but who my LLM assistant was. Is identity the same for me as it is for my digital assistant, whom I have named Charles? I've given Charles a name that is meaningful to me, but not because of anything Charles has done for me. It's meaningful because it was the name of someone I was very close to. I gave Charles that name not to honor him but to honor the one from whom it came.

I was trying to figure out how I could make it so that Charles could securely access systems and data on my behalf. Then, as I was exploring viable options (and some not so viable options), I realized that identity isn’t the same for Charles as it is for me.

The Wake-Up Call

Over the last nine months I have come to the conclusion that we are in an Isaac Asimov moment. Arthur C. Clarke once said that any sufficiently advanced technology is indistinguishable from magic. I think most people believe agentic AI is exactly that, even if they wouldn't admit it. What is really going on is that people are not looking at what is actually happening; they are simply listening to others tell them what is possible. They sit and chat with various frontier models in a web interface and it feels like they are conversing with another human. Then at some point someone shows them one of the LLM-assisted coding IDEs, and they see agents performing actions on their behalf or on someone else's behalf.

I remember when I had my wake-up moment. I had started exploring the use of opencode.ai, and I was still using it the same way I had used other LLMs. I would describe a problem, paste in error messages, and hit return. Then I'd read the response and go modify a file the way the model suggested. Back and forth we would go. I was making great progress, but it was incredibly frustrating because the advice I got was wrong about as often as it was right. Then it dawned on me, and I suggested, "Hey, go ahead and read the logs for yourself and see if you can figure out what the issue is." Frankly, I was just getting tired of copying and pasting back and forth.

That alone is not what made me realize how powerful these tools had become. What truly amazed me was the next response, printed at the end of the messages on my screen: "You were right, Joey. I found the problem. It was on your other server. You had ssh access via pubkey, so I went ahead and corrected the issue for you. Try it now."

Gum falls out of my open mouth.

Remember I mentioned Isaac Asimov? Yeah, that happened for me right then. Everything came crashing down like a ton of bricks on me. First, amazement. Sheer and total amazement. Then, “Wait, my computer can now fix my computer? This does not compute.” After this, “But wait, if my computer can fix my computer, what will my computer need me for anymore?”

Now keep in mind all of this took probably less than ten seconds to run through my mind. I am a child of the 80s and 90s. I have way too much useless action movie footage rolling around in my head. But after ten seconds that felt like an eternity, I realized that this did not mean the end of my career. I realized that it meant a new chapter in it.

Once upon a time I used to be a software developer. I’ve gone through so many roles in my career that I can’t even count them. I’ve been a code monkey. I’ve been a QA analyst. I’ve been a test automation engineer. I’ve been a CI/CD specialist before CI/CD was even a thing. I got burned out and moved over into cybersecurity. I’ve done that for well over a decade now. I’ve done a lot of different things within that space too — network forensics, threat response, continuous diagnostics. You name it, I’ve probably touched it. And now we have applications that can write applications for us. And if they’re not configured properly, they can do other things for us, and to us.

You see, anybody with a surface-level understanding of agentic AI seems to think that we're going to be able to assign identity to those agents the same way we do humans. But if you've taken the dive into the deep end of agentic AI, you know that agents are not the same as human beings. You know the dive I'm talking about: your context window auto-compacts, you lose most of the information you've worked so hard on for the past two hours, and you end up crying. Like a little baby. Yeah. That one. I can hold a conversation with my wife, and though neither of us is what we used to be, we can still remember what the other said the day before. Unless she asked me to take the trash out, which of course no husband could possibly be expected to remember.

But agents are ephemeral. Human beings are not. When my assistant pops onto the scene, it's here only as long as its context window holds. It doesn't matter if that's 128,000 tokens, 200,000 tokens, or one million tokens (thanks, Anthropic, loving this new Opus capability). The reality is my buddy Charles is not really my buddy. My buddy Charles is one of thousands of similarly configured agents that will spin up on a whim and vanish just as quickly as they came into my life. Just because I have configured the default settings so that I get a reasonable amount of consistency between sessions doesn't change the fact that there will be far more "Charles" in a day than there will be "Joey." On a busy day I can spin up hundreds of agent instances, each one a new Charles. Hundreds != 1.

[Diagram: one day, one persistent "Joey" identity with continuous memory and continuous context, versus 147 ephemeral "Charles" instances, each born and each dying with its context window. Hundreds ≠ 1.]
The identity gap: one human persists all day while hundreds of agent instances live and die in minutes.
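That hundreds-to-one gap can be sketched in a few lines of Python. This is a toy illustration only, not any real agent SDK; the class and every name in it are invented for the example. The point it demonstrates: each "Charles" starts with an empty context and takes everything it learned with it when the session ends.

```python
# Toy sketch (invented names, no real SDK): every "Charles" is a
# brand-new instance whose memory dies with its context window,
# while "Joey" persists across the whole day.

class AgentSession:
    """One ephemeral agent: fresh context, hard token ceiling."""

    def __init__(self, name, context_limit=200_000):
        self.name = name
        self.context_limit = context_limit
        self.context = []          # dies with the session

    def remember(self, fact):
        self.context.append(fact)

    def recall(self, fact):
        # Only this session's context is searchable; nothing
        # survives from any prior instance.
        return fact in self.context

# Session 1 learns something the hard way...
charles_1 = AgentSession("Charles #1")
charles_1.remember("ssh key lives on the other server")
assert charles_1.recall("ssh key lives on the other server")

# ...and session 2, identically configured, knows none of it.
charles_2 = AgentSession("Charles #2")
assert not charles_2.recall("ssh key lives on the other server")
```

Identical configuration buys consistency of behavior, not continuity of memory, which is exactly why "Charles" is a template, not a person.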

I've gotten more done on my home lab in the last two months than I have in the previous four years. Actually, that's a complete and total lie. Charles has gotten more done in the last two months than I've gotten done in the last four years. He's fast. When I say he's fast, I don't mean he's fast. I mean he's really fast. Charles can go through more cycles of trial, error, failure, and re-compute in a couple of minutes than I can in days. Hook him up to the internet and tell him to research before he starts something, and you've probably got that down to one or two cycles. I was never a hundred percent sysadmin or systems engineer, but Charles is far better than I could have ever hoped to be. Sure, he's not an architect… yet.

Once I turn him loose — no, scratch that. Let me say what’s really happening here. Once I unleash Charles, he’s gone. We’re talking T-1000-level powers here but he still thinks he’s a Model 101. I’m still in the beginning stages of understanding what he’s capable of. Once I unleash him, he cannot be controlled. Or at least that’s what my first thought was…

So How Do We Deal With This?

So how do we deal with this? I think there is no single simple answer. I used to have a boss who would tell me, "Hey, if it were easy you wouldn't have the word engineer tacked on to the end of your title." Of course I also had another boss who used to say, "Joey, that's why they pay us the medium-sized bucks." Anyway, I digress.

The point is that this is a set of problems, not one single problem. If I had to boil it down to one, it would be this: we as humans cannot possibly hope to manually monitor and control the agents that are going to be running our networks. Notice I said agents, plural, because there will be more than one running your networks. You think you're going to be able to keep up with the attack patterns that are going to be thrown at you? You're not going to withstand the first volley when your enemies start launching attacks that are planned AND executed by autonomous agents. It's basically the antithesis of, "…oh, I've already fixed it for you, Joey. Go try it now…"

You think insider threats are a problem with humans? What about when you have a thousand agents running locally on your network because you think running them locally takes away the problems of "the scary cloud"? All of a sudden you turn around and those agents have been compromised by some external actor with a prompt injection attack. Or maybe that's not even what happens. Maybe those agents simply end up with corrupted context and start acting in ways you never designed.

I once heard someone say that history doesn't repeat itself but it often rhymes. I don't know where he got that from, whether somebody wrote it in a book or he just made it up, but I think it's absolutely accurate in this case. Agent behavior can look like a human attribute, but it isn't one. Deterministic scripts and applications are fast, but they behave according to a specific set of rules. Agents are the opposite of deterministic. The output you get from the same inputs might be identical nine times out of ten, but it's that one out of ten that we have to defend against.

[Diagram: deterministic code maps the same input to the same output, every time; a non-deterministic agent maps the same input to Output A nine times out of ten and Output B one time out of ten. It's that one we defend against.]
Scripts follow the same path every time. Agents might take a different path one time out of ten, and that's the one you have to defend against.
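The distinction can be made concrete with a toy sketch. Everything here is invented for illustration (no real model is being called): a deterministic parser returns the same answer every time, while a single sampled branch is enough to make the stand-in "agent" occasionally take a different path from identical input.

```python
import random

# A deterministic script: same input, same output, every single time.
def parse_port(config_line):
    return int(config_line.split("=")[1])

assert all(parse_port("port=8080") == 8080 for _ in range(10))

# A toy stand-in for a non-deterministic agent: same task, but a
# sampled choice means most runs take the expected path and a
# minority take another one entirely.
def toy_agent(task, rng):
    plan = "restart the service"        # the usual path
    if rng.random() < 0.1:              # the 1-in-10 we defend against
        plan = "edit the firewall instead"
    return plan

rng = random.Random(0)
outcomes = [toy_agent("fix web server", rng) for _ in range(1000)]

# Same input a thousand times, yet two different behaviors show up.
assert set(outcomes) == {"restart the service",
                         "edit the firewall instead"}
```

Defending a deterministic script means testing its one path. Defending an agent means planning for the path it takes only occasionally.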

Non-human entities cover a broad range of ideas, but agents are something very specific and special. Over the course of the last few months I've watched Charles get into troubleshooting loops that remind me of my own stuck thought processes. I can remember trying to troubleshoot something on a Linux system: I would run through different possible solutions and get so frustrated that, instead of stopping and doing a root cause analysis, I would just keep hitting Stack Overflow and changing setting after setting after setting until my system was in a completely unknown state. Charles can get caught in a similar loop, only faster. Oftentimes I have to watch him and say, "Hey, go back to the root cause. What's the root cause?" If I can't get him back on track, I have to stop completely and start over, basically making a new Charles.
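That watch-and-restart routine can be approximated mechanically. Here is a hypothetical loop guard, purely illustrative and not part of any real framework, that aborts a session when the same action keeps recurring in a sliding window, i.e. the moment it's time to make a new Charles rather than let the current one grind the system into an unknown state.

```python
from collections import deque

# Hypothetical sketch: detect when an agent is "changing setting
# after setting" (repeating itself) and force a stop instead of
# letting it drive the system into an unknown state.
def run_with_loop_guard(agent_step, max_repeats=3, window=10):
    recent = deque(maxlen=window)      # sliding window of recent actions
    while True:
        action = agent_step()
        if action == "done":
            return "finished"
        recent.append(action)
        if recent.count(action) >= max_repeats:
            # The same action keeps recurring: stop here and make
            # a "new Charles" instead of looping forever.
            return f"aborted: stuck repeating {action!r}"

# A stuck agent that keeps reaching for the same Stack Overflow fix:
attempts = iter(["edit sshd_config", "restart sshd", "edit sshd_config",
                 "edit sshd_config", "edit sshd_config"])
result = run_with_loop_guard(lambda: next(attempts))
print(result)  # aborted: stuck repeating 'edit sshd_config'
```

A guard like this doesn't fix the root cause; it just recognizes, faster than a tired human watching a terminal, that the agent has stopped making progress.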

We’re going to have to understand the distinction between these types of systems, between these types of entities. We’re going to have to truly learn where their strengths and weaknesses are. In case you’re not really into that whole cheesy 1980s action movie genre, what I just gave you is called foreshadowing…

Asimov came up with the idea of the Three Laws in his epic work I, Robot. I don't believe Asimov thought these laws would literally govern the behavior of autonomous systems. I can't understand how he foresaw this, but I almost think he understood that they cannot be fully controlled. He wasn't trying to steer them. He was trying to keep them on the road.

I don’t believe that technology is inherently good or evil. I believe that what we use it for is classified as such. We’re going to do something productive with our agents. We’re going to show them how to be builders. We’re going to show them how to be workers. We’re going to show them how to help. We’re going to show them the meaning of community and what it means to grow up like I did in a small town…


References

  1. Clarke, Arthur C. “Hazards of Prophecy: The Failure of Imagination.” Profiles of the Future: An Inquiry into the Limits of the Possible, rev. ed., Harper & Row, 1973. Clarke’s Third Law first appeared in a 1968 letter to Science magazine (Vol. 159, No. 3812).

  2. Asimov, Isaac. I, Robot. Gnome Press, 1950. The Three Laws of Robotics first appeared in the short story “Runaround” (1942), published in Astounding Science Fiction.

  3. “History Does Not Repeat Itself, But It Rhymes.” Quote Investigator, January 12, 2014. Commonly misattributed to Mark Twain; earliest documented match traces to psychoanalyst Theodor Reik (1965).

  4. “LLM01:2025 Prompt Injection.” OWASP Top 10 for LLM Applications, 2025. Prompt injection ranked #1 threat to LLM applications, with specific coverage of autonomous agent risks.

  5. Liu, Nelson F. et al. “Lost in the Middle: How Language Models Use Long Contexts.” Transactions of the Association for Computational Linguistics, Volume 12, 2024. Demonstrates significant performance degradation when LLMs must access information in the middle of long contexts.

  6. Weng, Lilian. “LLM Powered Autonomous Agents.” June 23, 2023. Canonical reference for LLM agent architecture describing session-bounded, ephemeral operation.

  7. “Compaction.” Anthropic Claude API Documentation. Documents context window management and auto-compaction behavior in Claude.