term-llm

Run LLM tasks from the terminal: chat, edit files, generate media, and automate agents.

Guides

Guides are for concrete tasks: use a feature, wire up an integration, or operate a workflow without turning the page into a theory seminar.

Core flow

Usage

Core usage of the `exec`, `ask`, and `chat` commands: flags, examples, and agent selection.

Troubleshooting

Debugging

Use provider debug output and debug logs to see what the runtime is actually doing.

Media

Image generation

Generate and edit images with Gemini, OpenAI, ChatGPT, xAI, Venice, Flux, or OpenRouter.

Embeddings

Text embeddings

Generate vector embeddings for search, RAG, clustering, semantic similarity, and retrieval workflows.

Media

Video generation

Generate videos with Venice AI using text-to-video or image-to-video models.

Media

Audio generation

Generate speech audio with Venice AI, Gemini, and ElevenLabs text-to-speech models.

Editing

File editing

Edit files with natural-language instructions, targeted ranges, and multiple diff formats.

Automation

Autonomous loops

Run an agent in a loop until a done condition is met.

Media

Music generation

Generate music and sound effects with Venice AI and ElevenLabs.

Web search

Search

Use web search in term-llm, choose external providers, and control native versus external search routing.

Operations

Job runner

Run the jobs server, define scheduled work, and manage job definitions and runs from the CLI or API.

Web runtime

Web UI and API

Run term-llm as a web server, use the browser UI, and call the HTTP API endpoints exposed by serve mode.

Direct connect

WebRTC direct routing

Bypass a relay server and connect the browser directly to a home-hosted term-llm instance over a WebRTC data channel.

Integrations

MCP servers

Add external tools via the Model Context Protocol (MCP) and use them from term-llm commands.

Remote tools

Serving tools via MCP

Run term-llm as an MCP server over HTTP, exposing file, search, shell, and web tools to any MCP client.

Messaging

Telegram bot

Run term-llm as a Telegram bot: create a bot, configure access control, and chat with your agent from any device.

Audio

Transcription

Transcribe audio files to text with OpenAI, Mistral Voxtral, Venice, ElevenLabs, a local Whisper server, or whisper.cpp CLI.

Workflow bundles

Agents

Use built-in agents or create your own workflow bundles with their own provider, model, tools, and instructions.

Long-term memory

Memory

Mine durable facts from completed sessions, search fragments, manage insights, and understand how memory differs from sessions.

Portable expertise

Skills

Use and manage portable instruction bundles that add task-specific context.

Notify

Notifications

Send notifications through Telegram or web push from the command line.

Ergonomics

Shell integration

Alias and shell-completion setup for using term-llm comfortably from the command line.

Deploy agents

Agent containers

Run independent term-llm agents in Docker: one container per agent, fully isolated, no image rebuild needed.