Learn LocalGPT
Step-by-step tutorials to get you from zero to AI-powered in minutes.
Getting Started
LocalGPT CLI lets you run AI models locally or connect to cloud APIs from your terminal. It works alongside the LocalGPT desktop app and supports all the same models.
CLI First
Full-featured terminal interface
Privacy First
Local models stay on your machine
Hybrid
Local models + cloud APIs
Installation
macOS (Homebrew)
brew tap localgpt/tap
brew install localgpt
macOS / Linux (curl)
curl -fsSL https://raw.githubusercontent.com/localgpt/localgpt/main/cli/install.sh | bash
pip (Any Platform)
pip install localgpt
Verify Installation
localgpt version
# localgpt version 1.0.0
Note: For local models, you also need Ollama installed. The installer will prompt you. Or install manually: curl -fsSL https://ollama.com/install.sh | sh
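If you wrap LocalGPT in your own scripts, it helps to fail fast when the CLI (or Ollama) is missing. A minimal sketch using only the Python standard library; the tool names are the executables installed above:

```python
import shutil

def missing_tools(required=("localgpt", "ollama")):
    """Return the required executables that are not found on PATH."""
    return [tool for tool in required if shutil.which(tool) is None]

missing = missing_tools()
if missing:
    print("Please install:", ", ".join(missing))
```

`shutil.which` performs the same lookup your shell does, so this matches what `localgpt version` would find.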
Your First Chat
Pull a model and start chatting in two commands.
Step 1: Pull a model
localgpt pull llama3.2
# Pulling Llama 3.2 (2.0 GB)
# pulling manifest: [████████████████████████████████████████████████] 100%
# ✓ Model 'llama3.2' pulled successfully.
Step 2: Start chatting
localgpt run llama3.2
# ✓ Model: Llama 3.2 (local)
# Type /exit to quit, /clear to reset, /help for commands.
# >>> What is the meaning of life?
# The meaning of life is a philosophical question that has been
# debated for centuries...
Quick One-Shot Question
For a single question without entering interactive mode:
localgpt ask "Explain quantum computing in simple terms"
System Prompts
Customize the AI's behavior with a system prompt:
localgpt run llama3.2 -s "You are a Python expert. Give concise code examples."
# >>> How do I read a JSON file?
# ```python
# import json
# with open('data.json') as f:
# data = json.load(f)
# ```
Managing Models
Browse Available Models
# All models (local + cloud)
localgpt models
# Local models only
localgpt models --local
# Cloud models only
localgpt models --cloud
# Filter by category
localgpt models -c code
localgpt models -c image
# Filter by provider
localgpt models -p openai
localgpt models -p anthropic
Download & Remove
# Pull a model
localgpt pull mistral
localgpt pull deepseek-r1
localgpt pull codellama
# List downloaded models
localgpt list
# Show model details
localgpt show llama3.2
# Remove a model
localgpt rm mistral
# Check running models
localgpt ps
Set Default Model
# Set default model (used by 'localgpt chat')
localgpt config set-model llama3.2
# Chat with default model
localgpt chat
Cloud APIs
Bring your own API keys to use cloud models. No middleman, no markup.
Set API Keys
# OpenAI
localgpt config set-key openai sk-proj-...
# Anthropic
localgpt config set-key anthropic sk-ant-...
# Google
localgpt config set-key google AIza...
# Mistral
localgpt config set-key mistral ...
# DeepSeek
localgpt config set-key deepseek ...
# Check provider status
localgpt providers
Use Cloud Models
# OpenAI
localgpt run gpt-4o
localgpt run gpt-4o-mini
# Anthropic
localgpt run claude-sonnet-4-20250514
localgpt run claude-opus-4-20250514
# Google
localgpt run gemini-2.0-flash
localgpt run gemini-1.5-pro
# DeepSeek
localgpt run deepseek-chat
localgpt run deepseek-reasoner
OpenAI
GPT-4o, GPT-4o Mini, o1, DALL-E 3
Anthropic
Claude Sonnet 4, Opus 4, Haiku 3.5
Google
Gemini 2.0 Flash, 1.5 Pro, Imagen 3
DeepSeek
DeepSeek V3, DeepSeek R1
Image Generation
Generate images from text prompts using DALL-E 3 or other models.
# Generate with DALL-E 3 (requires OpenAI API key)
localgpt image "a sunset over mountains in watercolor style"
# Specify size
localgpt image "futuristic cityscape" --size 1792x1024
# Use DALL-E 2 (cheaper)
localgpt image "abstract geometric art" -m dall-e-2
# Output:
# ✓ Image generated!
# URL: https://oaidalleapiprodscus.blob.core.windows.net/...
# Revised prompt: A beautiful watercolor painting of...
Available Sizes
1024x1024
1792x1024
1024x1792
Configuration
# View all config
localgpt config show
# Set default model
localgpt config set-model gpt-4o
# Set temperature
localgpt config set temperature 0.9
# Set max tokens
localgpt config set max_tokens 4096
# Set local backend URL
localgpt config set local_backend_url http://localhost:11434
# Manage API keys
localgpt config set-key openai sk-...
localgpt config remove-key openai
Config is stored in ~/.localgpt/config.json
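Because the config is plain JSON, your own tools can read it directly. A minimal sketch: the exact schema and default values here are assumptions, but the keys mirror the settings shown above:

```python
import json
import os

# Illustrative defaults; the real schema and defaults may differ.
DEFAULTS = {"default_model": "llama3.2", "temperature": 0.7, "max_tokens": 4096}

def load_config(path="~/.localgpt/config.json"):
    """Read the JSON config, falling back to DEFAULTS for missing keys."""
    cfg = dict(DEFAULTS)
    try:
        with open(os.path.expanduser(path)) as f:
            cfg.update(json.load(f))
    except FileNotFoundError:
        pass  # no config file yet: defaults apply
    return cfg
```

Prefer `localgpt config set ...` for writes, so you never race the CLI's own updates to the file.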
Chat Commands
Inside an interactive chat session, use these slash commands:
/exit             Exit the chat session
/clear            Clear conversation history (keep system prompt)
/model            Show the current model
/system <prompt>  Set or change the system prompt
/help             Show available chat commands
Tips & Tricks
Use the right model for the task
For coding, use deepseek-coder-v2 or codellama. For math/logic, try deepseek-r1. For general chat, llama3.2 is a great lightweight pick.
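If a script routes prompts to different models, the recommendations above can be captured in a tiny helper. This mapping is purely illustrative, not part of the CLI:

```python
# Illustrative task-to-model mapping based on the tips above.
TASK_MODELS = {
    "code": "deepseek-coder-v2",
    "math": "deepseek-r1",
    "chat": "llama3.2",
}

def pick_model(task):
    """Suggest a model for a task, defaulting to the lightweight chat model."""
    return TASK_MODELS.get(task, "llama3.2")
```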
Save API costs
Use local models for everyday tasks and reserve cloud models for complex problems. gpt-4o-mini is 20x cheaper than GPT-4o for simple tasks.
System prompts are powerful
Use -s to set a system prompt that shapes the AI's behavior. E.g., localgpt run llama3.2 -s "Respond only in haiku"
Pipe and redirect
Use the ask command for scripting and piping:
localgpt ask "Summarize this" < article.txt
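The same pattern works from Python via the standard subprocess module. A sketch, assuming `localgpt` is on PATH; the `command` parameter is not a CLI feature, it exists only so the helper can be exercised against other executables:

```python
import subprocess

def ask(prompt, input_text=None, command=("localgpt", "ask")):
    """Run `localgpt ask <prompt>`, optionally feeding text on stdin
    (the equivalent of `< article.txt` in the shell)."""
    result = subprocess.run(
        [*command, prompt],
        input=input_text,
        capture_output=True,
        text=True,
        check=True,  # raise if the CLI exits non-zero
    )
    return result.stdout.strip()

# Equivalent of: localgpt ask "Summarize this" < article.txt
# summary = ask("Summarize this", input_text=open("article.txt").read())
```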