Edit ~/.openclaw/openclaw.json directly. There are two key points to getting this configuration to work:
1. Define the provider you want to add under models.providers.
2. Set agents.defaults.model.primary to that provider and model.
Example configurations follow. Adjust contextWindow and maxTokens to match the specs of the machine you run the model on and the model itself.
For local LLMs, I confirmed that gpt-oss:120b works on PCs with the following configurations. Note, however, that 120b has a small enough parameter count that a security warning is shown, so take care when using it.
Windows PC (Ryzen 7 9700X, GeForce RTX 5060 Ti 16GB, RAM 128GB): LMStudio + openai/gpt-oss-120b (context window 65536)
Windows PC (EVO-X2, Ryzen AI Max+ 395, RAM 128GB): LMStudio + openai/gpt-oss-120b (context window 65536)
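Before editing openclaw.json, it helps to confirm that the OpenAI-compatible endpoint actually responds. Below is a minimal sketch; the host address is a placeholder, and the port depends on the server (ollama listens on 11434 by default, LMStudio on 1234):

# Reachability check for an OpenAI-compatible endpoint. Replace the
# placeholder host; ollama serves /v1 on port 11434, LMStudio on 1234.
import json
import urllib.request

BASE_URL = "http://<HOST ADDRESS>:11434/v1"  # placeholder, adjust to your server

with urllib.request.urlopen(f"{BASE_URL}/models") as resp:
    data = json.load(resp)

# Each "id" printed here is what belongs in models.providers.<name>.models[].id
for model in data.get("data", []):
    print(model["id"])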
🔵 Example configuration for using ollama running on any PC
{
  ~
  "models": {
    "providers": {
      "ollamalocalpc": {
        "baseUrl": "http://<OLLAMA HOST ADDRESS>:11434/v1",
        "apiKey": "ollama-local",
        "api": "openai-completions",
        "models": [
          {
            "id": "gpt-oss:120b",
            "name": "gpt-oss:120b",
            "reasoning": true,
            "input": ["text"],
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
            "contextWindow": 65536,
            "maxTokens": 655360
          }
        ]
      }
    }
  },
  ~
  "agents": {
    "defaults": {
      "model": {
        "primary": "ollamalocalpc/gpt-oss:120b"
      },
      "workspace": "/home/~/.openclaw/workspace",
      "compaction": {
        "mode": "safeguard"
      },
      "timeoutSeconds": 1800,
      "maxConcurrent": 4,
      "subagents": {
        "maxConcurrent": 8
      }
    }
  }
}
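Before pointing openclaw at the provider above, you can smoke-test it with a single chat completion. A minimal sketch assuming the same host and model id as the example; ollama does not validate the API key, but the OpenAI-compatible route still expects the header:

# Minimal chat-completions smoke test against the ollama endpoint above.
# The Bearer value matches the dummy apiKey from the config; ollama ignores it.
import json
import urllib.request

payload = {
    "model": "gpt-oss:120b",
    "messages": [{"role": "user", "content": "Reply with the single word: ok"}],
    "max_tokens": 16,
}
req = urllib.request.Request(
    "http://<OLLAMA HOST ADDRESS>:11434/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer ollama-local",
    },
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])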
🔵 Example configuration for using LMStudio running on any PC
{
  ~
  "models": {
    "providers": {
      "lmstudiolocalpc": {
        "baseUrl": "http://<LMSTUDIO HOST ADDRESS>:1234/v1",
        "apiKey": "lmstudio",
        "api": "openai-completions",
        "models": [
          {
            "id": "openai/gpt-oss-120b",
            "name": "openai/gpt-oss-120b",
            "reasoning": true,
            "input": ["text"],
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
            "contextWindow": 65536,
            "maxTokens": 655360
          }
        ]
      }
    }
  },
  ~
  "agents": {
    "defaults": {
      "model": {
        "primary": "lmstudiolocalpc/openai/gpt-oss-120b"
      },
      "workspace": "/home/~/.openclaw/workspace",
      "compaction": {
        "mode": "safeguard"
      },
      "timeoutSeconds": 1800,
      "maxConcurrent": 4,
      "subagents": {
        "maxConcurrent": 8
      }
    }
  }
}
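One LMStudio-specific pitfall: the id in the config must exactly match what LMStudio reports from /v1/models (including the openai/ prefix), and depending on the server settings the model may need to be loaded first. A small check, where the host and expected id are assumptions taken from the example above:

# Verify that LMStudio actually serves the model id used in openclaw.json.
import json
import urllib.request

BASE_URL = "http://<LMSTUDIO HOST ADDRESS>:1234/v1"  # placeholder
EXPECTED_ID = "openai/gpt-oss-120b"

with urllib.request.urlopen(f"{BASE_URL}/models") as resp:
    served = {m["id"] for m in json.load(resp)["data"]}

if EXPECTED_ID not in served:
    raise SystemExit(f"{EXPECTED_ID} not available; server reports: {sorted(served)}")
print("ok:", EXPECTED_ID)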
🔵 Example configuration for using ollama-cloud (formerly ollama-turbo)
{
  ~
  "models": {
    "providers": {
      "ollamaturbo": {
        "baseUrl": "https://ollama.com/v1",
        "apiKey": "<OLLAMA CLOUD API KEY>",
        "api": "openai-completions",
        "models": [
          {
            "id": "gpt-oss:120b-cloud",
            "name": "gpt-oss:120b-cloud",
            "reasoning": true,
            "input": ["text"],
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
            "contextWindow": 131072,
            "maxTokens": 1310720
          },
          {
            "id": "glm-5:cloud",
            "name": "glm-5:cloud",
            "reasoning": true,
            "input": ["text"],
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
            "contextWindow": 131072,
            "maxTokens": 1310720
          },
          {
            "id": "glm-4.7:cloud",
            "name": "glm-4.7:cloud",
            "reasoning": true,
            "input": ["text"],
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
            "contextWindow": 131072,
            "maxTokens": 1310720
          },
          {
            "id": "kimi-k2.5:cloud",
            "name": "kimi-k2.5:cloud",
            "reasoning": true,
            "input": ["text"],
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
            "contextWindow": 131072,
            "maxTokens": 1310720
          },
          {
            "id": "gemini-3-flash-preview:cloud",
            "name": "gemini-3-flash-preview:cloud",
            "reasoning": true,
            "input": ["text"],
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
            "contextWindow": 131072,
            "maxTokens": 1310720
          }
        ]
      }
    }
  },
  ~
  "agents": {
    "defaults": {
      "model": {
        "primary": "ollamaturbo/gemini-3-flash-preview:cloud"
      },
      "workspace": "/home/~/.openclaw/workspace",
      "compaction": {
        "mode": "safeguard"
      },
      "timeoutSeconds": 1800,
      "maxConcurrent": 4,
      "subagents": {
        "maxConcurrent": 8
      }
    }
  }
}
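Unlike the local setups, ollama-cloud validates the API key, so a quick authenticated test is worth running before restarting openclaw. A minimal sketch; keeping the key in an OLLAMA_API_KEY environment variable is my own convention here, not something openclaw requires:

# Authenticated smoke test against ollama-cloud. The real API key must be
# sent as a Bearer token; here it is read from an environment variable.
import json
import os
import urllib.request

req = urllib.request.Request(
    "https://ollama.com/v1/chat/completions",
    data=json.dumps({
        "model": "gpt-oss:120b-cloud",
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 16,
    }).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ['OLLAMA_API_KEY']}",
    },
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])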