66 AI Models in Your Terminal, With Persistence and Model Mixing

gl0bal01 · Researcher · 4 min read

1min.ai gives you access to 66+ models through one API key. The web interface works, but every time you use it you're clicking through the same dropdowns, losing your context when you switch models, and there's no way to pipe output anywhere or set a preference that sticks. It's fine for occasional use. It's friction for actual work.

I wanted full control — CLI, persistent configuration, model mixing, conversation management. Not their UX. Mine.

llm-1minai is a plugin for Simon Willison's LLM CLI that plugs 1min.ai's model catalog into the terminal. One key, 66 models, all the LLM framework's tooling on top.

Install

llm install llm-1minai
llm keys set 1min # paste your 1min.ai API key

That's it. All models show up prefixed with 1min/:

llm models list | grep "1min.ai"
# 1min.ai: gpt-4o-mini
# 1min.ai: claude-4-sonnet
# 1min.ai: deepseek-r1
# ...

66 Models, One Key

66+ models across 9 providers — OpenAI, Anthropic, Google, xAI, DeepSeek, Mistral, Meta/LLaMA, Cohere, Perplexity:

llm -m 1min/gpt-4o "Design a system architecture"
llm -m 1min/claude-4-sonnet "Write a REST API with FastAPI"
llm -m 1min/deepseek-r1 "Solve this logic problem"
llm -m 1min/sonar "What's the latest in AI?" # web-aware
llm -m 1min/grok-4 "Creative writing task"

Code-focused models (claude-4-sonnet, deepseek-r1, grok-code-fast-1) automatically switch to CODE_GENERATOR mode. Web-aware models (sonar, sonar-reasoning) enable web search by default. No flags needed.

Persistence — Set It Once

This is what the web UI can't do. Set options once and they apply everywhere:

# Enable web search globally
llm 1min options set web_search true
llm 1min options set num_of_site 5

# Now every query is web-aware by default
llm -m 1min/gpt-4o "Latest developments in memory forensics?"

# Per-model overrides
llm 1min options set --model sonar num_of_site 10

# View what's configured
llm 1min options list

Priority order: CLI flags beat per-model config beat global defaults beat code defaults. Override exactly what you need, inherit the rest.
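
The layering behaves like shell parameter expansion with fallbacks: first non-empty layer wins. A self-contained sketch of how one option (num_of_site) would resolve, with illustrative values standing in for the plugin's actual stored config:

```shell
# Layered lookup for one option, mirroring the precedence:
# CLI flag > per-model config > global default > code default.
code_default=3
global_default=5
per_model_sonar=10
cli_flag=""          # no -o num_of_site passed on this invocation

num_of_site=${cli_flag:-${per_model_sonar:-${global_default:-$code_default}}}
echo "$num_of_site"  # per-model value wins: 10
```

Clear the per-model value and the global default takes over; pass a CLI flag and it beats everything.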

Model Mixing — The Actually Interesting Part

This is the feature you won't find in most AI interfaces: is_mixed shares conversation context across different models. Start a thread with GPT-4o, continue it with Claude, both models see the full history.

# Start a conversation with GPT-4o
llm -m 1min/gpt-4o -c "I'm analyzing a memory dump from a Windows 10 system. The process tree shows svchost.exe spawning powershell.exe."

# Switch to DeepSeek R1 for the reasoning step — same context
llm -m 1min/deepseek-r1 -c -o is_mixed true "What MITRE ATT&CK techniques does this match?"

# Cross-check with Claude
llm -m 1min/claude-4-opus -c -o is_mixed true "What artifacts should I look for in malfind output?"

One investigation thread, three models contributing, context preserved across all of them. Not possible in any web UI I've used.

Conversation Management

# Continue last conversation (model remembered)
llm -c "follow up question"

# View history
llm logs -n 5

# Clear per model or all at once
llm 1min clear --model gpt-4o
llm 1min clear --all

The LLM framework handles conversation persistence natively — every exchange is logged in a local SQLite database, queryable, exportable.

A Real Workflow

Scripting is where this earns its place:

# Pipe a file through a model
cat suspicious.ps1 | llm -m 1min/claude-4-sonnet "Analyze this PowerShell script for malicious behavior"

# Chain models
llm -m 1min/gpt-4o "Summarize the key IOCs from this report" < report.txt \
| llm -m 1min/sonar "Cross-reference these IOCs with recent threat intel"

# Batch processing
for domain in $(cat domains.txt); do
llm -m 1min/gpt-4o-mini "Is $domain suspicious? One line answer."
done
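
The loop above runs one query at a time. xargs -P fans it out in parallel (GNU/BSD xargs extension). Sketch below uses echo as a stand-in so it runs anywhere; in real use, swap it for the llm invocation from the loop:

```shell
# Parallel batch over a list of domains, 4 workers at a time.
# `echo "checked {}"` stands in for:
#   llm -m 1min/gpt-4o-mini "Is {} suspicious? One line answer."
out="$(printf '%s\n' evil.example good.example \
  | xargs -P 4 -I {} echo "checked {}" \
  | sort)"
echo "$out"
```

The sort at the end restores deterministic order, since parallel workers finish in whatever order the API answers.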

Pipe in, pipe out, chain models, script it. None of that exists in a web UI.

Setup

# Install
llm install llm-1minai

# Set key
llm keys set 1min

# Or via environment variable
export ONEMIN_API_KEY="your-api-key"

# See all models
llm 1min models

If you use 1min.ai and find yourself fighting the web interface, this is the alternative. Full control, persistent config, model mixing, terminal-native. github.com/gl0bal01/llm-1minai