Software Interface for AI Agents

Keep the UI for humans.
Give agents state and actions.

SLOP lets apps publish semantic state and contextual actions directly, so agents can control programs without screenshots, scraping, or driving the human UI.

An agent reads semantic state and invokes actions directly, without operating the human UI. Run it yourself →

Provider
const slop = createSlop({ id: "my-app" });

slop.register("notes", () => ({
  type: "collection",
  items: notes.map(n => ({
    id: n.id,
    props: { title: n.title },
    actions: { delete: () => remove(n.id) },
  })),
}));
Agent
const consumer = new SlopConsumer(transport);
await consumer.connect();

const { snapshot } = await consumer.subscribe("/");
// { notes: { type: "collection", items: [...] } }

await consumer.invoke("/notes/note-1", "delete");
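The quickstart above can be condensed into a self-contained sketch of the round trip: register a descriptor, snapshot it, invoke an item action by path. Everything here (the `MiniSlop` class, the path format, the descriptor shape) is illustrative and not the real `@slop-ai` API.

```typescript
// Illustrative round trip: register a descriptor, snapshot it, invoke by path.
// MiniSlop is a hypothetical stand-in, NOT the real @slop-ai API.
type Item = {
  id: string;
  props?: Record<string, unknown>;
  actions?: Record<string, () => void>;
};
type Descriptor = { type: string; props?: Record<string, unknown>; items?: Item[] };

class MiniSlop {
  private registry = new Map<string, () => Descriptor>();

  register(key: string, describe: () => Descriptor): void {
    this.registry.set(key, describe);
  }

  // Descriptor functions are re-evaluated on every snapshot, so the
  // published state always reflects the app's current state.
  snapshot(): Record<string, Descriptor> {
    const out: Record<string, Descriptor> = {};
    for (const [key, describe] of this.registry) out[key] = describe();
    return out;
  }

  // Paths address items as "/<key>/<itemId>", e.g. "/notes/note-1".
  invoke(path: string, actionName: string): void {
    const [, key, itemId] = path.split("/");
    const item = this.registry.get(key)?.().items?.find(i => i.id === itemId);
    item?.actions?.[actionName]?.();
  }
}

// Same shape as the provider example: a notes collection with a delete action.
let notes = [{ id: "note-1", title: "Meeting notes" }];
const slop = new MiniSlop();
slop.register("notes", () => ({
  type: "collection",
  items: notes.map(n => ({
    id: n.id,
    props: { title: n.title },
    actions: { delete: () => { notes = notes.filter(x => x.id !== n.id); } },
  })),
}));

slop.invoke("/notes/note-1", "delete");
console.log(slop.snapshot().notes.items?.length); // 0
```

The point of the sketch: reads and writes share one tree, so the agent that deleted the note sees the deletion in its next snapshot without a bespoke read tool.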
The Mismatch

Today's agents still operate through interfaces made for humans

Vision (Screenshots)

The agent drives the human UI and reconstructs meaning from pixels. Expensive, lossy, fragile.

  • Tokens wasted on OCR
  • Layout changes break everything
  • Can't see what's off-screen

Tool Calls (MCP)

The agent can call functions, but it still lacks a live model of the program. Reads must be bolted on tool by tool.

  • Read access requires bespoke tools
  • Global tools lose local context
  • Validity is discovered at runtime

SLOP adds an agent-facing interface — apps publish semantic state and contextual actions so agents can operate the program directly.

Agent Interface

Three parts of an interface for agents

1. App publishes semantic state

Your app exposes a curated view of what it is right now — not DOM elements, not raw internals, but meaning.

useSlop(slop, "notes", () => ({
  type: "collection",
  props: { count: notes.length },
  items: notes.map(n => ({
    id: n.id,
    props: { title: n.title },
  })),
}));
2. Agent subscribes to the program

The agent subscribes to the state tree and receives patches as things change. No screenshots, polling, or repeated read tools.

[root] My App
  [collection] notes (count=3)
    [item] Meeting notes
    [item] Grocery list (pinned=true)
    [item] App ideas
  [status] stats (total=3, pinned=1)
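The push-first model above can be sketched as a diff between successive snapshots, emitting a patch per changed path. The `Patch` shape and `diff` function below are illustrative assumptions, not SLOP's actual wire format:

```typescript
// Sketch of push-first updates: diff two state snapshots into patches.
// The Patch shape here is illustrative, not SLOP's real wire format.
type Patch = { op: "add" | "remove" | "replace"; path: string; value?: unknown };

function diff(
  prev: Record<string, unknown>,
  next: Record<string, unknown>,
  base = "",
): Patch[] {
  const patches: Patch[] = [];
  for (const key of new Set([...Object.keys(prev), ...Object.keys(next)])) {
    const path = `${base}/${key}`;
    if (!(key in next)) {
      patches.push({ op: "remove", path });
    } else if (!(key in prev)) {
      patches.push({ op: "add", path, value: next[key] });
    } else if (
      typeof prev[key] === "object" && prev[key] !== null &&
      typeof next[key] === "object" && next[key] !== null
    ) {
      // Recurse into nested nodes so patches stay minimal.
      patches.push(...diff(prev[key] as any, next[key] as any, path));
    } else if (prev[key] !== next[key]) {
      patches.push({ op: "replace", path, value: next[key] });
    }
  }
  return patches;
}

// Deleting one note changes only the count, so only one patch is pushed.
const before = { notes: { count: 3 } };
const after = { notes: { count: 2 } };
console.log(JSON.stringify(diff(before, after)));
// [{"op":"replace","path":"/notes/count","value":2}]
```

Because the agent holds the previous tree, each update costs one small patch rather than a full re-read or screenshot.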
3. Agent invokes contextual actions

Actions live on the nodes they affect, so the agent sees what it can do in context and acts through the program directly.

actions: {
  toggle_pin: () => togglePin(note.id),
  edit: action({ title: "string" }, ({ title }) => updateNote(note.id, title)),
  delete: action(() => remove(note.id), { dangerous: true }),
}
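To make the schema-carrying `edit` action concrete, here is a minimal reimplementation of an `action()` helper, under the assumption that it validates the agent's arguments against the declared schema before running the handler. The real helper's behavior and signature may differ:

```typescript
// Hypothetical action() helper: checks agent-supplied arguments against a
// declared schema, then runs the handler. Illustrative, not the real API.
type Schema = Record<string, "string" | "number" | "boolean">;

function action<A>(schema: Schema, run: (args: A) => void) {
  return (args: Record<string, unknown>) => {
    for (const [key, kind] of Object.entries(schema)) {
      if (typeof args[key] !== kind) {
        // Invalid invocations are rejected before any state changes.
        throw new Error(`invalid argument "${key}": expected ${kind}`);
      }
    }
    run(args as A);
  };
}

// Mirrors the edit action above: a string-typed title parameter.
let title = "Meeting notes";
const edit = action<{ title: string }>({ title: "string" }, ({ title: t }) => {
  title = t;
});

edit({ title: "Standup notes" });
console.log(title); // "Standup notes"
```

Declaring the schema on the node means the agent can see an action's parameters before invoking it, instead of discovering validity at runtime.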
Try it in the playground →
Edit descriptors, inspect the tree, invoke actions, and explore the agent-facing interface live.
Why SLOP

Built for agents, not human navigation

|                | MCP / Tool calls            | Accessibility APIs      | SLOP                                  |
|----------------|-----------------------------|-------------------------|---------------------------------------|
| Purpose        | AI calls functions          | Assistive tech reads UI | App exposes an agent-facing interface |
| Data model     | Flat list of functions      | UI element tree         | Semantic state tree                   |
| Coupling to UI | Separate from UI, but blind | Tied to rendered UI     | Independent of the human UI           |
| Updates        | Pull (AI reads on demand)   | Pull (reader queries)   | Push-first patches                    |
| Actions        | Global tool registry        | Limited (click, type)   | Contextual affordances                |
| Designed for   | LLM function calling        | Sequential navigation   | Direct program control by agents      |
Ecosystem

Works with your stack

TypeScript

  • @slop-ai/core: Shared engine — tree assembly, diffing, types
  • @slop-ai/client: Browser provider — createSlop() + postMessage
  • @slop-ai/server: Server/native provider — WebSocket, Unix, stdio

Python

  • slop-ai: Full SDK — provider, consumer, state tree, transports

Rust

  • slop-ai: Full SDK — provider, consumer, state tree, transports

Go

  • slop-ai: Go SDK — provider, consumer, state tree, transports