Now in public beta

The agent runtime
that asks before it acts.

GnamiAI runs your AI agent in a hardened sandbox, pauses on every risky move, and ships with signed skills, zero-config memory, and multi-agent handoffs out of the box.

Start free — no install
See how it works
Six pillars

Built to say no, by design.

Every feature serves one of six foundational guarantees. None of them is optional.

Granular permissions

Role-based capabilities. Your agent never touches a byte or dollar it wasn't explicitly granted.

Human-in-the-loop

Every destructive action pauses for a cryptographically signed approval from your device. No exceptions.

Native dashboard

A real UI for approvals, memory, budgets, and integrations. No brittle group-chat hacks.

Signed skill registry

Every skill is statically analyzed, manually reviewed, and Ed25519-signed before it can install.

Zero-config memory

RAG out of the box. Your conversations summarize and index themselves — no JSON surgery required.

Agent social protocol

Specialized agents hand off tasks to each other with capability-scoped context slices. Pinned trust, no federation soup.

The difference

Your browser. No shell. Ever.

Other agent products ask you to install a process that owns your terminal. Hosted GnamiAI runs entirely in the browser — shell execution isn't toggled off; it was never registered. The capability literally does not exist in the hosted build.

Create your workspace
FAQ

Common questions

What is GnamiAI?

GnamiAI is a security-first AI agent runtime that runs in your browser. It gives your agent granular permissions, pauses destructive actions for human approval, and lets you compose specialized subagents, skills, long-term memory, and scheduled runs — without installing anything locally.

Is GnamiAI free?

The hosted app is free to use. You bring your own provider API key (OpenAI, Anthropic, OpenRouter, or a local Ollama instance), and your usage is billed directly by that provider at its rates.

Which AI providers does GnamiAI support?

OpenAI, Anthropic, OpenRouter (access to most open and closed models), and Ollama (self-hosted). You can switch providers per turn and select models from each provider's catalog.

Does GnamiAI store my API keys?

Yes — keys are encrypted at rest with AES-256-GCM using a server-held key, so a database dump alone does not expose your credentials. You can disconnect any provider from Settings, and its stored key is deleted immediately.

Does GnamiAI read my chat messages?

No. Chat transcripts live in your browser's localStorage. The server forwards each turn to your chosen provider and does not retain prompt or response bodies. See the Privacy page for the full data map.

Can I run GnamiAI with local models?

Yes — Ollama is a first-class provider and your models stay on your machine. Because the hosted GnamiAI server runs on Vercel, it needs to reach your Ollama instance over the network; the simplest way is a tunnel (e.g. Cloudflare Tunnel, ngrok) that exposes your local ollama serve at a public URL. Inference still happens locally on your hardware — the tunnel only carries the request. SSRF guards block private-network and loopback targets, which is why a raw http://localhost:11434 won't work on the hosted build.
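The tunnel setup above can be sketched in two commands. This assumes cloudflared (or ngrok) is installed and Ollama is listening on its default port; the exact public URL you get back is what you paste into the provider settings.

```shell
# Start Ollama locally — it listens on localhost:11434 by default
ollama serve

# In a second terminal, open a quick tunnel to that port
# (cloudflared prints a public trycloudflare.com URL)
cloudflared tunnel --url http://localhost:11434

# Or, with ngrok:
ngrok http 11434
```

Point GnamiAI's Ollama provider at the printed public URL; requests travel through the tunnel to your machine, and inference never leaves your hardware.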

What are skills and subagents?

A skill is a plain SKILL.md file that teaches the agent how to do something specific (write changelogs, draft release notes, generate conventional commits). A subagent is a named specialization with its own system prompt and model preference, pinned into chat with /agent <name>.
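To make that concrete, here is a hypothetical SKILL.md for conventional commits. The frontmatter fields are an illustration, not GnamiAI's published schema — the point is that a skill is just readable instructions in a file.

```markdown
---
name: conventional-commits
description: Generate Conventional Commits-style messages from a diff summary
---

# Conventional commits

When asked for a commit message:
1. Pick a type: feat, fix, docs, refactor, or chore.
2. Write a one-line summary in the imperative mood, under 72 characters.
3. Add a body only when the change needs context.
```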

Can the agent take actions on its own?

It can create subagents, install skills, and remember facts when you ask it to, via structured gnami-action JSON blocks. Destructive operations pause for an approval in the UI before running.
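As an illustration of the pattern — every field name here is a guess, not GnamiAI's documented schema — a structured action block might look like:

```json
{
  "type": "gnami-action",
  "action": "remember",
  "payload": {
    "fact": "The user prefers release notes in past tense"
  }
}
```

Because actions are declarative data rather than free-form tool calls, the runtime can inspect each one and route destructive operations through the approval UI before anything runs.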