Observed: 2026.04.20
Duration: 7 min read
Location: Hilo, HI
Domain: agents

Through the Looking Glass

The Prompt, the Perspective, the Lens

If you have been working with A.I. for the last year, the last several months, or even just a few days, you have most likely learned that the prompt is nearly everything. Maybe you have worked on teaching your A.I. to understand you: how you prefer to be addressed; how you like to problem-solve; your tastes, habits, and lifestyle. You may have even spent who knows how long learning how to work with A.I. just a little more effectively. Moments of eureka, with a light peppering of misunderstanding and incompetence (on both ends). Productive interaction nearly always comes back to the prompt, though. The right questions, relevant context, and the right level of detail often yield high-quality responses.

If you decided to learn how to prompt effectively, or at least effectively for you, then at some point you may have thought about creating an agent, or shoring up the prompt with: "You are a… Please take on the following role…". You may have even gone ahead and made one specific to your use case. Maybe you went so far as to think: "Ok, all set, now my code and docs are for sure going to be accurate. No more mistakes. This new auditor caught everything on three runs." This is the good stuff. The stuff that really makes you realize, in hindsight, what a sweet summer child you were and probably still are. The curtain hasn't even been pulled back for the big reveal and you've got it all figured out. You just peered through the looking glass and viewed the first perspective.

The Lens of Perception

Pull up all your productive agent definitions: code validator, test architect, project manager, software architect. What exactly do they have in common? When provided a specific set of instructions, each agent perceives the world through a very different lens. A project manager agent is going to focus on project management. A code validator will validate code. A test architect checks for tests and how well you defined them. When an LLM is provided with a lens, it is able to perceive an artifact from a very particular perspective.
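One way to picture this is as a minimal sketch, assuming the lens is nothing more than a role-fixing preamble prepended to the artifact. The `AGENT_LENSES` registry, the `build_prompt` helper, and their wording are all illustrative assumptions, not a real framework:

```python
# Hypothetical sketch: a "lens" encoded as a role-fixing preamble.
# None of these names come from a real library.

AGENT_LENSES = {
    "code_validator": "You are a code validator. Scrutinize correctness, edge cases, and error handling.",
    "test_architect": "You are a test architect. Evaluate coverage and how well behaviors are specified.",
    "project_manager": "You are a project manager. Assess scope, risk, and sequencing of the work.",
}

def build_prompt(lens: str, artifact: str) -> str:
    """Fix the agent's perspective before it ever sees the artifact."""
    return f"{AGENT_LENSES[lens]}\n\nReview the following artifact:\n{artifact}"

print(build_prompt("code_validator", "def add(a, b): return a - b"))
```

The point is only that the same artifact arrives wrapped in a different worldview depending on which entry you pick.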

One interesting thing, and maybe you've already identified it, is that you've experienced this in nearly every conversation where you tried to shape how your LLM will respond: "You are a marketing executive… You are a code validator… You are a biological scientist… I have a legal question and I could use some assistance… I've got this cough that's not going away…" We're constantly asking these entities to take on various roles for various problems, all of them unique. In fact, you may even go so far as to ask for a holistic view on a problem: I want the whole ball of wax, give me the good and the bad. What I've discovered over time is that multiple perspectives are incredibly valuable in life, and with agentic work it rings even more true. We already know this to some extent outside of the digital caves we navigate. Any company you've worked for, any school you've attended, any social gathering has plenty of perspectives. With agentic systems, you begin to see the compounding value of a stack of perspectives. They can be wildly entertaining and highly beneficial.

The Cognitive Lens

In a way, at least how I think about it, when you provide the agent with a lens on how to view the world, it appears to reach deep into its massive cloud of knowledge and fetch the epistemic reasoning, knowledge, and history of that scope. This is what gives the LLM its lens of perception. You will get fundamentally different views of an artifact if you ask a developer experience agent and a documentation agent to both scrutinize it. Try it out; it's just the beginning. You might find some convergence in the issues discovered, but the bulk of your findings will be shaped by completely different perceptions. This is the core idea behind what I've been referring to as the Cognitive Lens. Provide an LLM with the proper thinking machinery and you can get results that defy almost anything you've seen up to this point. What the heck does it even look like when you ask a Nietzsche Analyst or an Incentive Mapper to review your codebase? "Your functions are calcified and decadent and are incentivizing users into a workaround…"

The Cognitive Parallax

But how does this affect the agentic systems that we are building, and building with? We ask our LLM co-workers to plan, plan some more, build, test, review, build, rinse, repeat. Each time, especially if you are only using a single agent, you get only one perspective. Perhaps, like I mentioned above, you got a little ambitious and decided: "I need multiple agents to scrutinize the workload." In my personal work, I started with one agent, the code validator. It was joined by a test architect, then a code optimizer, and so on. Each agent definition targeted a perception I was not picking up. This got fairly addictive. What wasn't I covering? What angles was I not seeing? I'd like to think of myself as a decent developer, but this was a bigger reveal than I anticipated. Over the last four months, I've built a library: something close to 150 agent definitions, 30+ workflow and pipeline compositions, and counting. I have used all of them, and I'm definitely finding my favorites. This is the cognitive parallax: the difference in interpretation across multiple lines of sight.

The Academy Pipeline

I like to run the Academy pipeline, consisting of a Socrates Explorer, an Aristotle Analyst, a Plato Analyst, an Archimedes Analyst, and a Workflow Synthesizer. These agents are not personalities; they run the methodology, the epistemic machinery, and they find some truly wild stuff: elenctic cross-examination, four-cause decomposition, ideal-form analysis, mechanical-analogue translation, and a synthesis at the tail. The variety of findings and the cross-cutting synthesis bring out the best of all possible worlds. If you have ever asked Claude, GPT, or Gemini to find issues with the code they just wrote, they will pull up issues and errors. The big difference is in their perspectives. Those entities approach the artifacts from a general-purpose problem-solving perspective, which can be effective, but it lacks the cognitive-lens parallax: the stacked scrutiny of perspectives that have primarily existed in college lecture halls and old tomes. We now have the ability to bring these methodologies into repeatable practice. Judgment from the best minds across all of civilization, now available through an agent definition.
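The shape of that pipeline, a row of method-driven stages with a synthesizer consuming everything at the tail, can be sketched as follows. The stage bodies here are placeholder stubs I invented for illustration; in practice each would be an LLM call carrying that thinker's epistemic machinery:

```python
# Sketch of an Academy-style pipeline: independent analyst stages,
# then a synthesizer that sees every prior finding. Stages are stubs.

from typing import Callable

def socrates_explorer(artifact: str) -> str:
    return "elenctic: what assumption does this code rest on?"

def aristotle_analyst(artifact: str) -> str:
    return "four-cause: decompose material, formal, efficient, final"

def plato_analyst(artifact: str) -> str:
    return "ideal form: how far does this stray from the ideal design?"

def archimedes_analyst(artifact: str) -> str:
    return "mechanical analogue: model the data flow as levers and loads"

def workflow_synthesizer(findings: list[str]) -> str:
    # The tail stage unites every perspective into a cross-cutting view.
    return f"synthesis of {len(findings)} perspectives"

PIPELINE: list[Callable[[str], str]] = [
    socrates_explorer, aristotle_analyst, plato_analyst, archimedes_analyst,
]

def run_academy(artifact: str) -> str:
    findings = [stage(artifact) for stage in PIPELINE]
    return workflow_synthesizer(findings)

print(run_academy("def pay(user): ..."))
```

The design choice worth noting is that the analysts never see each other's output; only the synthesizer does, which is what keeps each lens uncontaminated before the cross-cutting pass.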

Down the Rabbit Hole

It's in the parallax that we can find incredible value in these systems: viewing problems from many angles and dimensions, finding solutions and synthesis where it was not possible before. It's through the looking glass that we discover the varieties of epistemic machinery available, and what happens becomes explosively combinatorial: a near-infinity of perceptions to aim at any problem. These individual definitions are wild in a way that sometimes makes your jaw drop. But it's when you run multiple agent perspectives and synthesize the findings afterwards that the real breakthroughs begin. The Synthesizer, uniting the perspectives of all lenses, can bridge what appear to be disparate ideas into a linked chain of entirely new discovery. This is where the value of A.I. systems begins to compound in ways that have so far only been marketed, not practiced. The future is wild, the cognitive parallax is real, and the layers are only beginning to be pulled back. I'll continue crawling down the rabbit hole to see what's on the other side.