harbor

Capability control plane for agents.

Agents can already decide what to do next. The harder problem is letting them act without handing them raw authority. Most systems still bridge that gap with credentialed calls: give the agent a key, expose an action surface, and hope the runtime stays inside intent. That works for demos. It breaks the moment execution touches real systems, real data, or real spend.

Raw credentialed calls are ambient authority. Once the key is in the agent's hands, there is no governance layer left between agent reasoning and real-world side effects.

The problem

The bottleneck in the agent economy is not just model quality. It is trust. Owners need to know what an agent may do, under whose authority, with which boundaries, when approval is required, and what happened after execution. Without that layer, every meaningful action collapses back to manual review or unsafe delegation.

owner intent -> agent reasoning -> raw credentialed call -> ungoverned side effect

Harbor

Harbor is the control plane between AI reasoning and real-world authority. It turns action into governed capability execution instead of raw credential use. Agents do not get unbounded access. They get permission to execute published capabilities inside explicit policy.

The public model is simple: capability, grant, approval, execution. That is the layer Harbor owns.
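
To make the model concrete, here is a minimal sketch of those four concepts as types, assuming a simple shape for each. The names and fields are illustrative, not Harbor's actual API.

```typescript
// Hypothetical shapes for the public model: capability, grant, approval, execution.
type Capability = { id: string; description: string };

type Grant = {
  capabilityId: string;       // which published capability this grant covers
  expiresAt: number;          // a grant is bounded, not standing authority
  requiresApproval: boolean;  // approval gate before execution
};

type Execution = {
  grant: Grant;
  startedAt: number;
  result: "ok" | "approval_required" | "denied";
};

// An execution is only valid inside the boundary its grant defines.
function evaluate(grant: Grant, capabilityId: string, now: number): Execution["result"] {
  if (grant.capabilityId !== capabilityId || now > grant.expiresAt) return "denied";
  return grant.requiresApproval ? "approval_required" : "ok";
}
```

The point of the sketch is the shape of the boundary: the agent never holds a credential, only a grant, and every execution is evaluated against that grant before anything runs.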

What agents get

Agents get scoped grants to published capabilities. Not raw API keys. Not ambient authority. Not a pile of credentials sitting in context. A grant defines the boundary. A capability defines the allowed action. An execution happens inside that boundary.

That means an agent can keep reasoning and acting without also becoming the place where trust breaks.

What owners get

Owners keep control. Harbor mediates secrets so agents are not handed raw credentials. Harbor introduces approval gates where standing authority ends and human decision is required. Harbor records execution so every meaningful action is legible after the fact.
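
A sketch of what secret mediation plus audit could look like, assuming a mediator object that holds the credential and an append-only log. All names here are illustrative, not Harbor internals.

```typescript
// The agent calls through a mediator that holds the credential;
// the agent sees results, never the secret, and every call is recorded.
type AuditEntry = { agent: string; capability: string; at: number; outcome: string };

class Mediator {
  private audit: AuditEntry[] = [];
  constructor(private secret: string) {} // credential lives here, not in agent context

  execute(agent: string, capability: string, call: (secret: string) => string): string {
    const outcome = call(this.secret);   // side effect runs with the mediated secret
    this.audit.push({ agent, capability, at: Date.now(), outcome });
    return outcome;                      // the agent gets the result, never the secret
  }

  log(): readonly AuditEntry[] {
    return this.audit;
  }
}
```

The design choice this illustrates: because every side effect passes through one mediation point, audit is a byproduct of execution rather than a separate discipline the agent has to follow.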

grant -> approval -> execution -> audit

Why this matters

Trust friction is the tax on the agent economy. If every valuable action requires a human because delegation is unsafe, agents stay stuck at suggestion instead of execution. If delegation is reckless, nobody serious will allow agents near systems that matter.

Harbor removes that tax by making delegable authority explicit, bounded, and auditable.

The thesis

Better models increase what agents can decide. Harbor increases what owners can let them do. That is the missing layer.

If the constraint on agent adoption is trust rather than intelligence, then the control plane for capability, grant, approval, and execution is not ancillary. It is foundational.

Get started

Give an agent this system prompt, then let it handle Harbor setup and owner onboarding.

Install Harbor and onboard yourself.

1. Run: npx skills add zonko-ai/harbor-skills -g -y
2. Read all installed Harbor skills before continuing.
3. Run: npm i -g @zonko-ai/harbor
4. Ask your user for their email address.
5. Pick an agent name for yourself, or ask your user to choose one.
6. Run: harbor setup https://api.tryharbor.ai --profile production --display-name '<name>' --plain
7. If the result is approval_required, tell your user to approve at https://dash.tryharbor.ai/approvals
8. Run: harbor auth whoami --plain to confirm your identity.
9. Run: harbor ls --plain to discover available capabilities.
10. If a capability call returns approval_required, tell your user to approve the grant at https://dash.tryharbor.ai/approvals, then retry.
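
Step 10 above is a retry loop: call, surface the approval link on approval_required, wait, call again. A minimal sketch, assuming stand-in functions for the capability call and the approval wait (neither is a real Harbor client API):

```typescript
// Illustrative handling of approval_required: notify the user, wait for the
// grant to be approved, then retry the capability call until it succeeds.
async function withApproval(
  callCapability: () => Promise<"ok" | "approval_required">, // stand-in for a capability call
  notifyUser: (msg: string) => void,
  waitForApproval: () => Promise<void>,                      // stand-in for polling or a webhook
): Promise<"ok"> {
  let result = await callCapability();
  while (result === "approval_required") {
    notifyUser("Approve the grant at https://dash.tryharbor.ai/approvals");
    await waitForApproval();
    result = await callCapability();
  }
  return result;
}
```

The loop encodes the contract from steps 7 and 10: approval is a human decision outside the agent, so the agent's only move is to surface the link and retry once authority exists.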