Documentation

Learn LocalGPT

Step-by-step tutorials to get you from zero to AI-powered in minutes.

Getting Started

LocalGPT CLI lets you run AI models locally or connect to cloud APIs, all from your terminal. It works alongside the LocalGPT desktop app and supports the same models.

CLI First

Full-featured terminal interface

Privacy First

Local models stay on your machine

Hybrid

Local models + cloud APIs

Installation

macOS (Homebrew)

terminal
brew tap localgpt/tap
brew install localgpt

macOS / Linux (curl)

terminal
curl -fsSL https://raw.githubusercontent.com/localgpt/localgpt/main/cli/install.sh | bash

pip (Any Platform)

terminal
pip install localgpt

Verify Installation

terminal
localgpt version
# localgpt version 1.0.0

Note: Local models require Ollama. The installer will prompt you to set it up, or you can install it manually: curl -fsSL https://ollama.com/install.sh | sh

Your First Chat

Pull a model and start chatting in two commands.

Step 1: Pull a model

terminal
localgpt pull llama3.2

# Pulling Llama 3.2 (2.0 GB)
#   pulling manifest: [████████████████████████████████████████████████] 100%
# ✓ Model 'llama3.2' pulled successfully.

Step 2: Start chatting

terminal
localgpt run llama3.2

# ✓ Model: Llama 3.2 (local)
# Type /exit to quit, /clear to reset, /help for commands.

# >>> What is the meaning of life?
# The meaning of life is a philosophical question that has been
# debated for centuries...

Quick One-Shot Question

For a single question without entering interactive mode:

terminal
localgpt ask "Explain quantum computing in simple terms"

System Prompts

Customize the AI's behavior with a system prompt:

terminal
localgpt run llama3.2 -s "You are a Python expert. Give concise code examples."

# >>> How do I read a JSON file?
# ```python
# import json
# with open('data.json') as f:
#     data = json.load(f)
# ```

Managing Models

Browse Available Models

terminal
# All models (local + cloud)
localgpt models

# Local models only
localgpt models --local

# Cloud models only
localgpt models --cloud

# Filter by category
localgpt models -c code
localgpt models -c image

# Filter by provider
localgpt models -p openai
localgpt models -p anthropic

Download & Remove

terminal
# Pull a model
localgpt pull mistral
localgpt pull deepseek-r1
localgpt pull codellama

# List downloaded models
localgpt list

# Show model details
localgpt show llama3.2

# Remove a model
localgpt rm mistral

# Check running models
localgpt ps

Set Default Model

terminal
# Set default model (used by 'localgpt chat')
localgpt config set-model llama3.2

# Chat with default model
localgpt chat

Cloud APIs

Bring your own API keys to use cloud models. No middleman, no markup.

Set API Keys

terminal
# OpenAI
localgpt config set-key openai sk-proj-...

# Anthropic
localgpt config set-key anthropic sk-ant-...

# Google
localgpt config set-key google AIza...

# Mistral
localgpt config set-key mistral ...

# DeepSeek
localgpt config set-key deepseek ...

# Check provider status
localgpt providers

Use Cloud Models

terminal
# OpenAI
localgpt run gpt-4o
localgpt run gpt-4o-mini

# Anthropic
localgpt run claude-sonnet-4-20250514
localgpt run claude-opus-4-20250514

# Google
localgpt run gemini-2.0-flash
localgpt run gemini-1.5-pro

# DeepSeek
localgpt run deepseek-chat
localgpt run deepseek-reasoner

OpenAI

GPT-4o, GPT-4o Mini, o1, DALL-E 3

Anthropic

Claude Sonnet 4, Opus 4, Haiku 3.5

Google

Gemini 2.0 Flash, 1.5 Pro, Imagen 3

DeepSeek

DeepSeek V3, DeepSeek R1

Image Generation

Generate images from text prompts using DALL-E 3 or other models.

terminal
# Generate with DALL-E 3 (requires OpenAI API key)
localgpt image "a sunset over mountains in watercolor style"

# Specify size
localgpt image "futuristic cityscape" --size 1792x1024

# Use DALL-E 2 (cheaper)
localgpt image "abstract geometric art" -m dall-e-2

# Output:
# ✓ Image generated!
# URL: https://oaidalleapiprodscus.blob.core.windows.net/...
# Revised prompt: A beautiful watercolor painting of...

Available Sizes

1024x1024, 1792x1024, 1024x1792

Configuration

terminal
# View all config
localgpt config show

# Set default model
localgpt config set-model gpt-4o

# Set temperature
localgpt config set temperature 0.9

# Set max tokens
localgpt config set max_tokens 4096

# Set local backend URL
localgpt config set local_backend_url http://localhost:11434

# Manage API keys
localgpt config set-key openai sk-...
localgpt config remove-key openai

Config is stored in ~/.localgpt/config.json
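The stored file is plain JSON. A sketch of what it might contain after the commands above — the exact field names are assumptions, not confirmed by the CLI:

terminal
# ~/.localgpt/config.json (illustrative layout)
# {
#   "default_model": "gpt-4o",
#   "temperature": 0.9,
#   "max_tokens": 4096,
#   "local_backend_url": "http://localhost:11434",
#   "api_keys": { "openai": "sk-..." }
# }

Prefer localgpt config show and localgpt config set over editing the file by hand, so the CLI can validate values.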

Chat Commands

Inside an interactive chat session, use these slash commands:

/exit              Exit the chat session
/clear             Clear conversation history (keep system prompt)
/model             Show the current model
/system <prompt>   Set or change the system prompt
/help              Show available chat commands
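For example, a session that changes the system prompt mid-conversation might look like this (the confirmation lines are illustrative, not exact CLI output):

terminal
localgpt run llama3.2

# >>> /model
# Llama 3.2 (local)
# >>> /system Respond only in haiku
# System prompt updated.
# >>> /clear
# Conversation history cleared.
# >>> /exit

/clear is useful when switching topics: it resets the context window while keeping the system prompt in place.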

Tips & Tricks

Use the right model for the task

For coding, use deepseek-coder-v2 or codellama. For math/logic, try deepseek-r1. For general chat, llama3.2 is a great lightweight pick.

Save API costs

Use local models for everyday tasks and reserve cloud models for complex problems. gpt-4o-mini is 20x cheaper than GPT-4o for simple tasks.

System prompts are powerful

Use -s to set a system prompt that shapes the AI's behavior. E.g., localgpt run llama3.2 -s "Respond only in haiku"

Pipe and redirect

Use the ask command for scripting and piping: localgpt ask "Summarize this" < article.txt
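Because ask reads stdin and writes to stdout, it composes with standard shell tools. A couple of sketches, assuming a default model is configured:

terminal
# Summarize a file and save the result
localgpt ask "Summarize this document" < report.txt > summary.txt

# Feed command output into a question
git log --oneline -20 | localgpt ask "Write release notes from these commits"

This makes it easy to wire LocalGPT into scripts, git hooks, or cron jobs without entering interactive mode.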

Need the desktop app?

Get the full visual experience with chat, image studio, agents, and more.