May 2026

Scope by Lens, Not by Task

Why defining an agent's scope by the viewpoint it requires, not the job it does, produces cleaner decisions and systems that actually scale.

I want to explore a decomposition problem that I think sits at the heart of multi-agent design. When you break work into agents, the natural instinct is to think in tasks. One agent qualifies, another writes, another reviews. Task-based decomposition is intuitive because it mirrors how people divide labour, and it maps neatly onto deliverables. But I have come to believe it introduces a structural flaw that is difficult to detect and nearly impossible to fix after the fact.

The problem is that a single task frequently requires multiple, conflicting perspectives to execute well. And when you ask one agent to hold multiple perspectives simultaneously, you do not get a balanced view. You get contamination. The perspectives bleed into each other, and the model, which is fundamentally an engine for finding coherent narratives, smooths over the tensions between them. The result reads well. It feels integrated. But integrated is not the same as accurate, and the tensions that were smoothed away were often the most valuable information in the analysis.

I would argue that the better principle is this: define an agent's scope not by the task it performs, but by the lens or perspective it requires. If evaluating something properly demands four different viewpoints, create four separate agents, each operating through one lens.

What happens when one agent holds too many perspectives.

Early in building these systems, the instinct was consolidation. Fewer agents, broader scope, less orchestration overhead. I think this is a reasonable impulse, and it is also where things start to go wrong.

An evaluation agent that assesses multiple dimensions in a single pass, considering market conditions, competitive dynamics, financial viability, and strategic alignment all at once, will produce output that is consistently compromised. Not wrong in any obvious way. Just subtly biased. The assessment of one dimension is influenced by what the agent already concluded about another. Every judgment is contaminated by every other judgment, because the model holds them all in the same context and cannot help but let them interact.

This is what I think of as narrative gravity. When a model holds multiple lenses at once, it does not evaluate each dimension independently and then combine the results. It cross-pollinates. It finds a story that accommodates all the data, even when the data, read honestly through separate lenses, would point in different directions. The output feels comprehensive. Every section supports every other section. The conclusions are integrated. But that integration is an artefact of the model's coherence drive, not a reflection of what the evidence actually supports.

The smoothness is the problem. It is the place where nuance goes to die.

One lens per agent.

The fix is perspective separation, and I believe it needs to be treated as a hard architectural constraint rather than a soft guideline. Each agent gets one analytical lens and only the data relevant to that lens. Each agent produces its assessment independently, without the gravitational pull of adjacent perspectives warping its analysis.

What this means in practice is that the agent looking through a competitive lens literally does not have financial data in its context window. It cannot be influenced by it because it cannot see it. The separation is structural, not aspirational. You are not asking the model to be disciplined about maintaining boundaries. You are making it physically impossible for the boundaries to be crossed.
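One way to make the structural separation concrete is a context builder that filters by lens before anything reaches the model. This is a minimal sketch under my own assumptions; the lens names, data fields, and `build_context` helper are illustrative, not taken from any particular system:

```python
# Hypothetical sketch: each lens declares the only fields it may see.
LENS_DATA = {
    "competitive": {"competitors", "market_share"},
    "financial": {"revenue", "burn_rate"},
}

def build_context(lens: str, all_data: dict) -> dict:
    """Return only the fields this lens is allowed to see.

    The filtering is structural: financial figures never enter the
    competitive agent's context, so they cannot influence it.
    """
    allowed = LENS_DATA[lens]
    return {k: v for k, v in all_data.items() if k in allowed}

deal_data = {
    "competitors": ["A", "B"],
    "market_share": 0.12,
    "revenue": 4_000_000,
    "burn_rate": 250_000,
}

competitive_ctx = build_context("competitive", deal_data)
assert "revenue" not in competitive_ctx  # the financial data is simply absent
```

The point of the sketch is the assert at the end: the boundary holds because the data never arrives, not because the agent was instructed to ignore it.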

I think this is one of the places where agent architecture actually has an advantage over human decision-making. People can try to evaluate one dimension at a time, but they cannot unsee what they already know. An agent can, because you control what enters its context. The constraint is architectural, and that makes it reliable in a way that cognitive discipline never quite is.

The value lives in the separation, not in recombining.

I think the most important thing to protect, once you have separated perspectives, is the separation itself. However the independent outputs are used downstream, the principle that matters is that the cross-pollination you removed should not be reintroduced. If the outputs are recombined in a way that smooths over the tensions between them, you have rebuilt the monolith with extra steps.

The tensions between perspectives are not noise to be resolved. They are information. When two honest, independently derived assessments point in different directions, that divergence tells you something important about the situation, something a smoothed, integrated narrative would have hidden. How those tensions are handled depends on the system and the use case. But preserving them, making them visible rather than collapsing them, is what makes the separation worth doing in the first place.
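A small sketch of what "preserve the tensions" can mean operationally: instead of averaging verdicts into one score, the downstream step lists every disagreement explicitly. The lens names and verdicts here are invented for illustration:

```python
def divergences(assessments: dict[str, str]) -> list[tuple[str, str]]:
    """List every pair of lenses whose verdicts disagree.

    Disagreement is surfaced as data, never averaged away.
    """
    lenses = sorted(assessments)
    return [
        (a, b)
        for i, a in enumerate(lenses)
        for b in lenses[i + 1:]
        if assessments[a] != assessments[b]
    ]

verdicts = {
    "competitive": "favourable",
    "financial": "unfavourable",
    "strategic": "favourable",
}
pairs = divergences(verdicts)
# The financial lens disagrees with both others; that conflict is the output,
# not something to be smoothed over.
```

Whatever the real synthesis step looks like, the design choice is the same: divergence is a first-class result, not an error state.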

Why define scope by perspective rather than by deliverable.

The question that should drive decomposition is not "what deliverables does this workflow produce?" It is "what distinct perspectives are required, and does each perspective need a clean, uncontaminated space to operate in?"

I find that when you ask the question this way, the decomposition looks different from what task-based thinking would produce. A task-based view might give you one agent per stage of a workflow. A lens-based view gives you one agent per way of seeing, and the number of agents is determined by how many genuinely distinct viewpoints the situation demands.

The number itself does not matter. What matters is that each perspective has its own space, its own data, and its own interpretive framework, uncontaminated by the others. Two perspectives might be enough for a simple evaluation. Five might be needed for a complex one. The principle is the same: each lens operates alone, and the combination happens afterward.

This scales in a way that monoliths do not.

I think the scaling properties of this approach are worth making explicit, because they are one of the strongest practical arguments in its favour.

A monolithic agent holding multiple perspectives is fragile. Change the framework for one perspective and you risk destabilising how the agent handles the others, because they all share the same context and the same coherence dynamics. Add a new perspective and the model's ability to hold all of them degrades. The complexity is multiplicative rather than additive.

Lens-separated agents scale linearly. Add a new perspective? Add a new agent. The existing agents are completely unaffected. Change one framework? Only the agent operating through that lens is affected. The modularity is genuine, not just conceptual, because the separation is enforced at the context level.
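The linear-scaling claim can be sketched with a simple registry, where each lens is an independent function over its own slice of data. Everything here (the `lens` decorator, the assessment functions, the thresholds) is a hypothetical illustration:

```python
from typing import Callable

# Hypothetical registry: each lens is an independent function over its own data.
LENSES: dict[str, Callable[[dict], str]] = {}

def lens(name: str):
    """Register an assessment function under a lens name."""
    def register(fn):
        LENSES[name] = fn
        return fn
    return register

@lens("competitive")
def assess_competitive(data: dict) -> str:
    return "crowded" if len(data["competitors"]) > 3 else "open"

@lens("financial")
def assess_financial(data: dict) -> str:
    return "viable" if data["runway_months"] > 12 else "at risk"

# Adding a new perspective later is one new registration; nothing
# about the existing lens functions changes.
@lens("regulatory")
def assess_regulatory(data: dict) -> str:
    return "clear" if not data["pending_rules"] else "uncertain"
```

The structure is what carries the argument: each function can be revised, tested, or removed without touching the others, which is the additive complexity the text describes.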

This modularity also makes the system traceable in a way that monolithic designs never are. When separate agents produce their assessments and the synthesis layer combines them, you can trace exactly which lens saw what, where a particular conclusion originated, and, when something goes wrong, where the breakdown occurred. In a monolithic agent, when the output is subtly off, good luck figuring out which perspective was contaminated by which other one. The entanglement is invisible.
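Traceability falls out of the separation almost for free: if each lens runs over only its own slice of data, the runner can record exactly which fields each lens saw alongside its verdict. This is a sketch under assumed names, not a prescribed implementation:

```python
def run_lenses(data: dict, lens_fns: dict, lens_data: dict) -> list[dict]:
    """Run each lens over only its slice of the data, recording provenance.

    Every entry records the lens name, the exact fields it saw, and its
    assessment, so any conclusion can be traced back to a single lens.
    """
    trace = []
    for name, fn in sorted(lens_fns.items()):
        visible = {k: data[k] for k in lens_data[name]}
        trace.append({
            "lens": name,
            "saw": sorted(visible),
            "assessment": fn(visible),
        })
    return trace

data = {"competitors": ["A", "B"], "runway_months": 9}
trace = run_lenses(
    data,
    lens_fns={
        "competitive": lambda d: "open" if len(d["competitors"]) < 4 else "crowded",
        "financial": lambda d: "viable" if d["runway_months"] > 12 else "at risk",
    },
    lens_data={"competitive": {"competitors"}, "financial": {"runway_months"}},
)
```

If the financial verdict looks wrong, the trace shows it was derived from `runway_months` alone, which is precisely the audit a monolithic agent cannot offer.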

The principle underneath all of this.

I believe the deeper principle here is that clean, honest analysis requires space. A perspective that is crowded by other perspectives is not really a perspective anymore; it is a compromise. And compromises, in the context of analytical agents, tend to produce outputs that look balanced and comprehensive while actually being neither.

Building agents with one lens each, deep frameworks for that lens, and honest, uncontaminated output gives you something that a monolithic design cannot: genuine analysis, with real tensions, real conflicts, and real insight. The synthesis of those independent views, tensions preserved, is where the value lives. Not in smoothness. Not in integration for its own sake. In the honest assembly of different ways of seeing the same situation.

Scope by lens. Not by task.
