Agent Ops: orchestration, APIs, wiki memory, and cheap GPU

Cursor’s $2 billion bet: The IDE is now a fallback, not the default. Cursor 3 ships an agent-first control plane with portable agent sessions that move between cloud and local environments, demoting the editor to a secondary surface. This reframes developer tooling as agent orchestration infrastructure—central for building reliable agentic systems and a practical step toward Principle 09.

research-llm-apis — 2026-04-04 release. Simon Willison publishes a catalog of raw JSON and curl patterns across LLM vendors to rethink LLM abstractions for server-side tool execution. Outcome engineers can use this as a blueprint for robust adapter layers and standardized tool invocation, reducing brittle integrations in multi-agent stacks.
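The adapter-layer idea can be sketched concretely. Below is a minimal, hypothetical normalizer (not from the catalog itself) that maps two common vendor tool-call payload shapes—an OpenAI-style `tool_calls` array with JSON-encoded arguments, and an Anthropic-style `tool_use` content block with a dict `input`—into one internal type, so the rest of an agent stack never touches raw vendor JSON:

```python
import json
from dataclasses import dataclass
from typing import Any, Callable, Dict

# Hypothetical sketch of an adapter layer: each vendor parser converts a raw
# response payload into one normalized ToolCall shape.

@dataclass
class ToolCall:
    name: str
    arguments: Dict[str, Any]

def parse_openai_style(payload: dict) -> ToolCall:
    # OpenAI-style payloads nest the call under "function",
    # with arguments as a JSON-encoded string.
    call = payload["tool_calls"][0]["function"]
    return ToolCall(name=call["name"], arguments=json.loads(call["arguments"]))

def parse_anthropic_style(payload: dict) -> ToolCall:
    # Anthropic-style payloads put tool use in a content block
    # of type "tool_use", with arguments as a dict under "input".
    block = next(b for b in payload["content"] if b["type"] == "tool_use")
    return ToolCall(name=block["name"], arguments=block["input"])

ADAPTERS: Dict[str, Callable[[dict], ToolCall]] = {
    "openai": parse_openai_style,
    "anthropic": parse_anthropic_style,
}

def extract_tool_call(vendor: str, payload: dict) -> ToolCall:
    """Normalize a vendor-specific response into a single internal shape."""
    return ADAPTERS[vendor](payload)
```

Downstream code then dispatches on `ToolCall.name` alone; adding a vendor means adding one parser to `ADAPTERS`, not touching the agent logic.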

sllm — Split a GPU node with other developers, unlimited tokens. sllm introduces GPU-slicing to let teams share a single node for low-cost, multi-tenant model access with effectively unlimited tokens. That lowers the barrier to iterative agent development and experimentation, making it easier to run continuous agent tests and build your island infrastructure (Principle 07).

LLM Wiki — example of an ‘idea file’. Karpathy demonstrates agents building and maintaining a persistent, interlinked wiki that preserves and evolves context instead of re-deriving it per query. Treating agent memory as a legible, versioned knowledge graph directly improves reproducibility, handoff, and the documentation practices outcome engineers rely on (Principles 11 and 13).
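To make "memory as a legible, versioned knowledge graph" concrete, here is a minimal sketch—my own illustration, not Karpathy's implementation—of a wiki memory where every write appends a revision rather than overwriting, and `[[links]]` are parsed so pages form an auditable graph:

```python
import re
from dataclasses import dataclass, field
from typing import Dict, List, Set

# Hypothetical sketch: a versioned wiki memory. Page names in [[brackets]]
# become graph edges; full revision history is retained per page.

LINK = re.compile(r"\[\[([^\]]+)\]\]")

@dataclass
class Page:
    title: str
    revisions: List[str] = field(default_factory=list)

    @property
    def text(self) -> str:
        # The current content is simply the latest revision.
        return self.revisions[-1] if self.revisions else ""

    @property
    def links(self) -> Set[str]:
        return set(LINK.findall(self.text))

class WikiMemory:
    def __init__(self) -> None:
        self.pages: Dict[str, Page] = {}

    def write(self, title: str, text: str) -> None:
        # Appending instead of overwriting keeps prior context
        # recoverable for audits, handoffs, and rollbacks.
        page = self.pages.setdefault(title, Page(title))
        page.revisions.append(text)

    def backlinks(self, title: str) -> Set[str]:
        # Reverse edges: which pages currently reference this one?
        return {p.title for p in self.pages.values() if title in p.links}
```

The point of the design is that an agent (or a human reviewer) can diff revisions and walk links/backlinks instead of re-deriving context per query.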

Eight years of wanting, three months of building with AI. The author shows how AI coding agents turned syntaqlite from an eight-year-old wish into an open-source release built in three months. This is a concrete example of multi-agent collaboration speeding delivery and shaping developer workflows—a useful reference for teams moving from single-player to agent-enabled engineering (Principle 03).