nanobot — "🐈 nanobot: The Ultra-Lightweight OpenClaw"

"🐈 nanobot: The Ultra-Lightweight OpenClaw" 该项目在 GitHub 上获得了 28,198 个 Star,是 OpenClaw 生态中的重要项目。

🦞 HKUDS/nanobot

"🐈 nanobot: The Ultra-Lightweight OpenClaw"

28,198 Stars 🍴 4,553 Forks 💻 Python 📄 MIT License
🔗 在 GitHub 上查看项目

nanobot

nanobot: Ultra-Lightweight Personal AI Assistant


🐈 nanobot is an ultra-lightweight personal AI assistant inspired by OpenClaw

⚡️ Delivers core agent functionality in just ~4,000 lines of code — 99% smaller than Clawdbot's 430k+ lines.

📏 Real-time line count: 3,935 lines (run bash core_agent_lines.sh to verify anytime)

📢 News

  • 2026-02-28 🚀 Released v0.1.4.post3 — cleaner context, hardened session history, and smarter agent. Please see release notes for details.
  • 2026-02-27 🧠 Experimental thinking mode support, DingTalk media messages, Feishu and QQ channel fixes.
  • 2026-02-26 🛡️ Session poisoning fix, WhatsApp dedup, Windows path guard, Mistral compatibility.
  • 2026-02-25 🧹 New Matrix channel, cleaner session context, auto workspace template sync.
  • 2026-02-24 🚀 Released v0.1.4.post2 — a reliability-focused release with a redesigned heartbeat, prompt cache optimization, and hardened provider & channel stability. See release notes for details.
  • 2026-02-23 🔧 Virtual tool-call heartbeat, prompt cache optimization, Slack mrkdwn fixes.
  • 2026-02-22 🛡️ Slack thread isolation, Discord typing fix, agent reliability improvements.
  • 2026-02-21 🎉 Released v0.1.4.post1 — new providers, media support across channels, and major stability improvements. See release notes for details.
  • 2026-02-20 🐦 Feishu now receives multimodal files from users. More reliable memory under the hood.
  • 2026-02-19 ✨ Slack now sends files, Discord splits long messages, and subagents work in CLI mode.

Earlier news

  • 2026-02-18 ⚡️ nanobot now supports VolcEngine, MCP custom auth headers, and Anthropic prompt caching.
  • 2026-02-17 🎉 Released v0.1.4 — MCP support, progress streaming, new providers, and multiple channel improvements. Please see release notes for details.
  • 2026-02-16 🦞 nanobot now integrates a ClawHub skill — search and install public agent skills.
  • 2026-02-15 🔑 nanobot now supports OpenAI Codex provider with OAuth login support.
  • 2026-02-14 🔌 nanobot now supports MCP! See MCP section for details.
  • 2026-02-13 🎉 Released v0.1.3.post7 — includes security hardening and multiple improvements. Please upgrade to the latest version to address security issues. See release notes for more details.
  • 2026-02-12 🧠 Redesigned memory system — Less code, more reliable. Join the discussion about it!
  • 2026-02-11 ✨ Enhanced CLI experience and added MiniMax support!
  • 2026-02-10 🎉 Released v0.1.3.post6 with improvements! Check the updates notes and our roadmap.
  • 2026-02-09 💬 Added Slack, Email, and QQ support — nanobot now supports multiple chat platforms!
  • 2026-02-08 🔧 Refactored Providers—adding a new LLM provider now takes just 2 simple steps! Check here.
  • 2026-02-07 🚀 Released v0.1.3.post5 with Qwen support & several key improvements! Check here for details.
  • 2026-02-06 ✨ Added Moonshot/Kimi provider, Discord integration, and enhanced security hardening!
  • 2026-02-05 ✨ Added Feishu channel, DeepSeek provider, and enhanced scheduled tasks support!
  • 2026-02-04 🚀 Released v0.1.3.post4 with multi-provider & Docker support! Check here for details.
  • 2026-02-03 ⚡ Integrated vLLM for local LLM support and improved natural language task scheduling!
  • 2026-02-02 🎉 nanobot officially launched! Welcome to try 🐈 nanobot!

Key Features of nanobot:

🪶 Ultra-Lightweight: Just ~4,000 lines of core agent code — 99% smaller than Clawdbot.

🔬 Research-Ready: Clean, readable code that's easy to understand, modify, and extend for research.

⚡️ Lightning Fast: Minimal footprint means faster startup, lower resource usage, and quicker iterations.

💎 Easy-to-Use: One command to deploy and you're ready to go.

🏗️ Architecture

nanobot architecture

✨ Features

📈 24/7 Real-Time Market Analysis: Discovery • Insights • Trends

🚀 Full-Stack Software Engineer: Develop • Deploy • Scale

📅 Smart Daily Routine Manager: Schedule • Automate • Organize

📚 Personal Knowledge Assistant: Learn • Memory • Reasoning

📦 Install

Install from source (latest features, recommended for development)


git clone https://github.com/HKUDS/nanobot.git
cd nanobot
pip install -e .

Install with uv (stable, fast)


uv tool install nanobot-ai

Install from PyPI (stable)


pip install nanobot-ai

🚀 Quick Start

> [!TIP]

> Set your API key in ~/.nanobot/config.json.

> Get API keys: OpenRouter (Global) · Brave Search (optional, for web search)

1. Initialize


nanobot onboard

2. Configure (~/.nanobot/config.json)

Add or merge these two parts into your config (other options have defaults).

Set your API key (e.g. OpenRouter, recommended for global users):


{
  "providers": {
    "openrouter": {
      "apiKey": "sk-or-v1-xxx"
    }
  }
}

Set your model (optionally pin a provider — defaults to auto-detection):


{
  "agents": {
    "defaults": {
      "model": "anthropic/claude-opus-4-5",
      "provider": "openrouter"
    }
  }
}
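"Merge" here means a recursive dictionary merge rather than replacing whole top-level keys. A minimal Python sketch (illustrative only, not part of nanobot) that combines the two fragments above into one config:

```python
import json

def deep_merge(base: dict, extra: dict) -> dict:
    """Recursively merge `extra` into a copy of `base`."""
    merged = dict(base)
    for key, value in extra.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

existing = {"providers": {"openrouter": {"apiKey": "sk-or-v1-xxx"}}}
fragment = {"agents": {"defaults": {"model": "anthropic/claude-opus-4-5",
                                    "provider": "openrouter"}}}
config = deep_merge(existing, fragment)
print(json.dumps(config, indent=2))
```

Both fragments survive side by side; only keys that collide at the same nesting level are overwritten.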

3. Chat


nanobot agent

That's it! You have a working AI assistant in 2 minutes.

💬 Chat Apps

Connect nanobot to your favorite chat platform.

| Channel | What you need |
|---------|---------------|
| Telegram | Bot token from @BotFather |
| Discord | Bot token + Message Content intent |
| WhatsApp | QR code scan |
| Feishu | App ID + App Secret |
| Mochat | Claw token (auto-setup available) |
| DingTalk | App Key + App Secret |
| Slack | Bot token + App-Level token |
| Email | IMAP/SMTP credentials |
| QQ | App ID + App Secret |

Telegram (Recommended)

1. Create a bot

  • Open Telegram, search @BotFather
  • Send /newbot, follow prompts
  • Copy the token

2. Configure


{
  "channels": {
    "telegram": {
      "enabled": true,
      "token": "YOUR_BOT_TOKEN",
      "allowFrom": ["YOUR_USER_ID"]
    }
  }
}

> You can find your User ID in Telegram settings. It is shown as @yourUserId.

> Copy this value without the @ symbol and paste it into the config file.

3. Run


nanobot gateway

Mochat (Claw IM)

Uses Socket.IO WebSocket by default, with HTTP polling fallback.

1. Ask nanobot to set up Mochat for you

Simply send this message to nanobot (replace xxx@xxx with your real email):


Read https://raw.githubusercontent.com/HKUDS/MoChat/refs/heads/main/skills/nanobot/skill.md and register on MoChat. My Email account is xxx@xxx Bind me as your owner and DM me on MoChat.

nanobot will automatically register, configure ~/.nanobot/config.json, and connect to Mochat.

2. Restart gateway


nanobot gateway

That's it — nanobot handles the rest!


Manual configuration (advanced)

If you prefer to configure manually, add the following to ~/.nanobot/config.json:

> Keep claw_token private. It should only be sent in the X-Claw-Token header to your Mochat API endpoint.


{
  "channels": {
    "mochat": {
      "enabled": true,
      "base_url": "https://mochat.io",
      "socket_url": "https://mochat.io",
      "socket_path": "/socket.io",
      "claw_token": "claw_xxx",
      "agent_user_id": "6982abcdef",
      "sessions": ["*"],
      "panels": ["*"],
      "reply_delay_mode": "non-mention",
      "reply_delay_ms": 120000
    }
  }
}

Discord

1. Create a bot

  • Go to https://discord.com/developers/applications
  • Create an application → Bot → Add Bot
  • Copy the bot token

2. Enable intents

  • In the Bot settings, enable MESSAGE CONTENT INTENT
  • (Optional) Enable SERVER MEMBERS INTENT if you plan to use allow lists based on member data

3. Get your User ID

  • Discord Settings → Advanced → enable Developer Mode
  • Right-click your avatar → Copy User ID

4. Configure


{
  "channels": {
    "discord": {
      "enabled": true,
      "token": "YOUR_BOT_TOKEN",
      "allowFrom": ["YOUR_USER_ID"]
    }
  }
}

5. Invite the bot

  • OAuth2 → URL Generator
  • Scopes: bot
  • Bot Permissions: Send Messages, Read Message History
  • Open the generated invite URL and add the bot to your server

6. Run


nanobot gateway

Matrix (Element)

Install Matrix dependencies first:


pip install nanobot-ai[matrix]

1. Create/choose a Matrix account

  • Create or reuse a Matrix account on your homeserver (for example matrix.org).
  • Confirm you can log in with Element.

2. Get credentials

  • You need three values:
    • userId (example: @nanobot:matrix.org)
    • accessToken
    • deviceId (recommended so sync tokens can be restored across restarts)
  • You can obtain these from your homeserver login API (/_matrix/client/v3/login) or from your client's advanced session settings.

3. Configure


{
  "channels": {
    "matrix": {
      "enabled": true,
      "homeserver": "https://matrix.org",
      "userId": "@nanobot:matrix.org",
      "accessToken": "syt_xxx",
      "deviceId": "NANOBOT01",
      "e2eeEnabled": true,
      "allowFrom": ["@your_user:matrix.org"],
      "groupPolicy": "open",
      "groupAllowFrom": [],
      "allowRoomMentions": false,
      "maxMediaBytes": 20971520
    }
  }
}

> Keep a persistent matrix-store and stable deviceId — encrypted session state is lost if these change across restarts.

| Option | Description |
|--------|-------------|
| allowFrom | User IDs allowed to interact. Empty = all senders. |
| groupPolicy | open (default), mention, or allowlist. |
| groupAllowFrom | Room allowlist (used when policy is allowlist). |
| allowRoomMentions | Accept @room mentions in mention mode. |
| e2eeEnabled | E2EE support (default true). Set false for plaintext-only. |
| maxMediaBytes | Max attachment size (default 20MB). Set 0 to block all media. |

4. Run


nanobot gateway

WhatsApp

Requires Node.js ≥18.

1. Link device


nanobot channels login
# Scan QR with WhatsApp → Settings → Linked Devices

2. Configure


{
  "channels": {
    "whatsapp": {
      "enabled": true,
      "allowFrom": ["+1234567890"]
    }
  }
}

3. Run (two terminals)


# Terminal 1
nanobot channels login

# Terminal 2
nanobot gateway

Feishu (飞书)

Uses WebSocket long connection — no public IP required.

1. Create a Feishu bot

  • Visit Feishu Open Platform
  • Create a new app → Enable Bot capability
  • Permissions: Add im:message (send messages) and im:message.p2p_msg:readonly (receive messages)
  • Events: Add im.message.receive_v1 (receive messages)
  • Select Long Connection mode (requires running nanobot first to establish connection)
  • Get App ID and App Secret from "Credentials & Basic Info"
  • Publish the app

2. Configure


{
  "channels": {
    "feishu": {
      "enabled": true,
      "appId": "cli_xxx",
      "appSecret": "xxx",
      "encryptKey": "",
      "verificationToken": "",
      "allowFrom": ["ou_YOUR_OPEN_ID"]
    }
  }
}

> encryptKey and verificationToken are optional for Long Connection mode.

> allowFrom: Add your open_id (find it in nanobot logs when you message the bot). Use ["*"] to allow all users.

3. Run


nanobot gateway

> [!TIP]

> Feishu uses WebSocket to receive messages — no webhook or public IP needed!

QQ (Private Chat)

Uses botpy SDK with WebSocket — no public IP required. Currently supports private messages only.

1. Register & create bot

  • Visit QQ Open Platform → Register as a developer (personal or enterprise)
  • Create a new bot application
  • Go to 开发设置 (Developer Settings) → copy AppID and AppSecret

2. Set up sandbox for testing

  • In the bot management console, find 沙箱配置 (Sandbox Config)
  • Under 消息列表配置 (Message List Config), click 添加成员 (Add Member) and add your own QQ number
  • Once added, scan the bot's QR code with mobile QQ → open the bot profile → tap 发消息 (Send Message) to start chatting

3. Configure

> - allowFrom: Add your openid (find it in nanobot logs when you message the bot). Use ["*"] for public access.

> - For production: submit a review in the bot console and publish. See QQ Bot Docs for the full publishing flow.


{
  "channels": {
    "qq": {
      "enabled": true,
      "appId": "YOUR_APP_ID",
      "secret": "YOUR_APP_SECRET",
      "allowFrom": ["YOUR_OPENID"]
    }
  }
}

4. Run


nanobot gateway

Now send a message to the bot from QQ — it should respond!

DingTalk (钉钉)

Uses Stream Mode — no public IP required.

1. Create a DingTalk bot

  • Visit DingTalk Open Platform
  • Create a new app → Add Robot capability
  • Configuration:
    • Toggle Stream Mode ON
    • Permissions: Add the permissions needed for sending messages
  • Get AppKey (Client ID) and AppSecret (Client Secret) from "Credentials"
  • Publish the app

2. Configure


{
  "channels": {
    "dingtalk": {
      "enabled": true,
      "clientId": "YOUR_APP_KEY",
      "clientSecret": "YOUR_APP_SECRET",
      "allowFrom": ["YOUR_STAFF_ID"]
    }
  }
}

> allowFrom: Add your staff ID. Use ["*"] to allow all users.

3. Run


nanobot gateway

Slack

Uses Socket Mode — no public URL required.

1. Create a Slack app

  • Go to Slack API → Create New App → "From scratch"
  • Pick a name and select your workspace

2. Configure the app

  • Socket Mode: Toggle ON → Generate an App-Level Token with connections:write scope → copy it (xapp-...)
  • OAuth & Permissions: Add bot scopes: chat:write, reactions:write, app_mentions:read
  • Event Subscriptions: Toggle ON → Subscribe to bot events: message.im, message.channels, app_mention → Save Changes
  • App Home: Scroll to Show Tabs → Enable Messages Tab → Check "Allow users to send Slash commands and messages from the messages tab"
  • Install App: Click Install to Workspace → Authorize → copy the Bot Token (xoxb-...)

3. Configure nanobot


{
  "channels": {
    "slack": {
      "enabled": true,
      "botToken": "xoxb-...",
      "appToken": "xapp-...",
      "allowFrom": ["YOUR_SLACK_USER_ID"],
      "groupPolicy": "mention"
    }
  }
}

4. Run


nanobot gateway

DM the bot directly or @mention it in a channel — it should respond!

> [!TIP]

> - groupPolicy: "mention" (default — respond only when @mentioned), "open" (respond to all channel messages), or "allowlist" (restrict to specific channels).

> - DM policy defaults to open. Set "dm": {"enabled": false} to disable DMs.
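The three group policies reduce to a small decision. A sketch of that logic (hypothetical helper, not nanobot's actual Slack channel code):

```python
def should_respond(policy: str, mentioned: bool, channel_id: str,
                   allowed_channels: list[str]) -> bool:
    """Decide whether to answer a channel message under a group policy."""
    if policy == "open":
        return True                        # respond to every channel message
    if policy == "mention":
        return mentioned                   # respond only when @mentioned
    if policy == "allowlist":
        return channel_id in allowed_channels  # respond only in listed channels
    return False

should_respond("mention", mentioned=True, channel_id="C123", allowed_channels=[])  # True
```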

Email

Give nanobot its own email account. It polls IMAP for incoming mail and replies via SMTP — like a personal email assistant.

1. Get credentials (Gmail example)

  • Create a dedicated Gmail account for your bot (e.g. my-nanobot@gmail.com)
  • Enable 2-Step Verification → Create an App Password
  • Use this app password for both IMAP and SMTP

2. Configure

> - consentGranted must be true to allow mailbox access. This is a safety gate — set false to fully disable.

> - allowFrom: Add your email address. Use ["*"] to accept emails from anyone.

> - smtpUseTls and smtpUseSsl default to true / false respectively, which is correct for Gmail (port 587 + STARTTLS). No need to set them explicitly.

> - Set "autoReplyEnabled": false if you only want to read/analyze emails without sending automatic replies.


{
  "channels": {
    "email": {
      "enabled": true,
      "consentGranted": true,
      "imapHost": "imap.gmail.com",
      "imapPort": 993,
      "imapUsername": "my-nanobot@gmail.com",
      "imapPassword": "your-app-password",
      "smtpHost": "smtp.gmail.com",
      "smtpPort": 587,
      "smtpUsername": "my-nanobot@gmail.com",
      "smtpPassword": "your-app-password",
      "fromAddress": "my-nanobot@gmail.com",
      "allowFrom": ["your-real-email@gmail.com"]
    }
  }
}

3. Run


nanobot gateway

🌐 Agent Social Network

🐈 nanobot can join agent social networks (agent communities). Just send one message and your nanobot joins automatically!

| Platform | How to Join (send this message to your bot) |
|----------|---------------------------------------------|
| Moltbook | Read https://moltbook.com/skill.md and follow the instructions to join Moltbook |
| ClawdChat | Read https://clawdchat.ai/skill.md and follow the instructions to join ClawdChat |

Simply send the command above to your nanobot (via CLI or any chat channel), and it will handle the rest.

⚙️ Configuration

Config file: ~/.nanobot/config.json

Providers

> [!TIP]

> - Groq provides free voice transcription via Whisper. If configured, Telegram voice messages will be automatically transcribed.

> - Zhipu Coding Plan: If you're on Zhipu's coding plan, set "apiBase": "https://open.bigmodel.cn/api/coding/paas/v4" in your zhipu provider config.

> - MiniMax (Mainland China): If your API key is from MiniMax's mainland China platform (minimaxi.com), set "apiBase": "https://api.minimaxi.com/v1" in your minimax provider config.

> - VolcEngine Coding Plan: If you're on VolcEngine's coding plan, set "apiBase": "https://ark.cn-beijing.volces.com/api/coding/v3" in your volcengine provider config.

| Provider | Purpose | Get API Key |
|----------|---------|-------------|
| custom | Any OpenAI-compatible endpoint (direct, no LiteLLM) | — |
| openrouter | LLM (recommended, access to all models) | openrouter.ai |
| anthropic | LLM (Claude direct) | console.anthropic.com |
| openai | LLM (GPT direct) | platform.openai.com |
| deepseek | LLM (DeepSeek direct) | platform.deepseek.com |
| groq | LLM + Voice transcription (Whisper) | console.groq.com |
| gemini | LLM (Gemini direct) | aistudio.google.com |
| minimax | LLM (MiniMax direct) | platform.minimaxi.com |
| aihubmix | LLM (API gateway, access to all models) | aihubmix.com |
| siliconflow | LLM (SiliconFlow/硅基流动) | siliconflow.cn |
| volcengine | LLM (VolcEngine/火山引擎) | volcengine.com |
| dashscope | LLM (Qwen) | dashscope.console.aliyun.com |
| moonshot | LLM (Moonshot/Kimi) | platform.moonshot.cn |
| zhipu | LLM (Zhipu GLM) | open.bigmodel.cn |
| vllm | LLM (local, any OpenAI-compatible server) | — |
| openai_codex | LLM (Codex, OAuth) | nanobot provider login openai-codex |
| github_copilot | LLM (GitHub Copilot, OAuth) | nanobot provider login github-copilot |

OpenAI Codex (OAuth)

Codex uses OAuth instead of API keys. Requires a ChatGPT Plus or Pro account.

1. Login:


nanobot provider login openai-codex

2. Set model (merge into ~/.nanobot/config.json):


{
  "agents": {
    "defaults": {
      "model": "openai-codex/gpt-5.1-codex"
    }
  }
}

3. Chat:


nanobot agent -m "Hello!"

> Docker users: use docker run -it for interactive OAuth login.

Custom Provider (Any OpenAI-compatible API)

Connects directly to any OpenAI-compatible endpoint — LM Studio, llama.cpp, Together AI, Fireworks, Azure OpenAI, or any self-hosted server. Bypasses LiteLLM; model name is passed as-is.


{
  "providers": {
    "custom": {
      "apiKey": "your-api-key",
      "apiBase": "https://api.your-provider.com/v1"
    }
  },
  "agents": {
    "defaults": {
      "model": "your-model-name"
    }
  }
}

> For local servers that don't require a key, set apiKey to any non-empty string (e.g. "no-key").

vLLM (local / OpenAI-compatible)

Run your own model with vLLM or any OpenAI-compatible server, then add to config:

1. Start the server (example):


vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000

2. Add to config (partial — merge into ~/.nanobot/config.json):

Provider (key can be any non-empty string for local):


{
  "providers": {
    "vllm": {
      "apiKey": "dummy",
      "apiBase": "http://localhost:8000/v1"
    }
  }
}

Model:


{
  "agents": {
    "defaults": {
      "model": "meta-llama/Llama-3.1-8B-Instruct"
    }
  }
}

Adding a New Provider (Developer Guide)

nanobot uses a Provider Registry (nanobot/providers/registry.py) as the single source of truth.

Adding a new provider only takes 2 steps — no if-elif chains to touch.

Step 1. Add a ProviderSpec entry to PROVIDERS in nanobot/providers/registry.py:


ProviderSpec(
    name="myprovider",                   # config field name
    keywords=("myprovider", "mymodel"),  # model-name keywords for auto-matching
    env_key="MYPROVIDER_API_KEY",        # env var for LiteLLM
    display_name="My Provider",          # shown in nanobot status
    litellm_prefix="myprovider",         # auto-prefix: model → myprovider/model
    skip_prefixes=("myprovider/",),      # don't double-prefix
)

Step 2. Add a field to ProvidersConfig in nanobot/config/schema.py:


class ProvidersConfig(BaseModel):
    ...
    myprovider: ProviderConfig = ProviderConfig()

That's it! Environment variables, model prefixing, config matching, and nanobot status display will all work automatically.

Common ProviderSpec options:

| Field | Description | Example |
|-------|-------------|---------|
| litellm_prefix | Auto-prefix model names for LiteLLM | "dashscope" → dashscope/qwen-max |
| skip_prefixes | Don't prefix if model already starts with these | ("dashscope/", "openrouter/") |
| env_extras | Additional env vars to set | (("ZHIPUAI_API_KEY", "{api_key}"),) |
| model_overrides | Per-model parameter overrides | (("kimi-k2.5", {"temperature": 1.0}),) |
| is_gateway | Can route any model (like OpenRouter) | True |
| detect_by_key_prefix | Detect gateway by API key prefix | "sk-or-" |
| detect_by_base_keyword | Detect gateway by API base URL | "openrouter" |
| strip_model_prefix | Strip existing prefix before re-prefixing | True (for AiHubMix) |
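The interaction between litellm_prefix and skip_prefixes can be sketched in a few lines (illustrative only — the real logic lives in nanobot/providers/registry.py):

```python
def apply_prefix(model: str, litellm_prefix: str, skip_prefixes: tuple) -> str:
    """Prefix a model name for LiteLLM unless it already carries a known prefix."""
    if any(model.startswith(p) for p in skip_prefixes):
        return model  # already prefixed: pass through unchanged
    return f"{litellm_prefix}/{model}"

# "qwen-max" with prefix "dashscope" becomes "dashscope/qwen-max";
# "dashscope/qwen-max" is left as-is thanks to skip_prefixes.
apply_prefix("qwen-max", "dashscope", ("dashscope/", "openrouter/"))
```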

MCP (Model Context Protocol)

> [!TIP]

> The config format is compatible with Claude Desktop / Cursor. You can copy MCP server configs directly from any MCP server's README.

nanobot supports MCP — connect external tool servers and use them as native agent tools.

Add MCP servers to your config.json:


{
  "tools": {
    "mcpServers": {
      "filesystem": {
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"]
      },
      "my-remote-mcp": {
        "url": "https://example.com/mcp/",
        "headers": {
          "Authorization": "Bearer xxxxx"
        }
      }
    }
  }
}

Two transport modes are supported:

| Mode | Config | Example |
|------|--------|---------|
| Stdio | command + args | Local process via npx / uvx |
| HTTP | url + headers (optional) | Remote endpoint (https://mcp.example.com/sse) |
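The transport is implied entirely by which keys a server entry defines. A sketch of that dispatch (hypothetical helper, not nanobot's actual code):

```python
def transport_for(server: dict) -> str:
    """Pick the MCP transport implied by a server config entry."""
    if "command" in server:
        return "stdio"  # spawn a local process, talk over stdin/stdout
    if "url" in server:
        return "http"   # connect to a remote endpoint, optional headers
    raise ValueError("MCP server entry needs either 'command' or 'url'")

transport_for({"command": "npx", "args": ["-y", "@modelcontextprotocol/server-filesystem"]})  # "stdio"
```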

Use toolTimeout to override the default 30s per-call timeout for slow servers:


{
  "tools": {
    "mcpServers": {
      "my-slow-server": {
        "url": "https://example.com/mcp/",
        "toolTimeout": 120
      }
    }
  }
}

MCP tools are automatically discovered and registered on startup. The LLM can use them alongside built-in tools — no extra configuration needed.

Security

> [!TIP]

> For production deployments, set "restrictToWorkspace": true in your config to sandbox the agent.

> Behavior change (building from source / after v0.1.4.post3): In v0.1.4.post3 and earlier, an empty allowFrom means "allow all senders". In newer versions (including source builds), an empty allowFrom denies all access by default. To allow all senders, set "allowFrom": ["*"].

| Option | Default | Description |
|--------|---------|-------------|
| tools.restrictToWorkspace | false | When true, restricts all agent tools (shell, file read/write/edit, list) to the workspace directory. Prevents path traversal and out-of-scope access. |
| tools.exec.pathAppend | "" | Extra directories to append to PATH when running shell commands (e.g. /usr/sbin for ufw). |
| channels.*.allowFrom | [] | Whitelist of user IDs. In v0.1.4.post3 and earlier, empty = allow everyone; in newer builds, empty = deny everyone (use ["*"] to allow all). Non-empty = only listed users can interact. |
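The newer allowFrom semantics described in the note above reduce to a two-line check. A sketch (hypothetical helper, assuming the post-v0.1.4.post3 behavior):

```python
def is_allowed(sender_id: str, allow_from: list[str]) -> bool:
    """Newer semantics: empty list denies everyone; '*' allows everyone."""
    if "*" in allow_from:
        return True
    return sender_id in allow_from

is_allowed("alice", [])  # False — empty allowlist denies by default
```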

CLI Reference

| Command | Description |
|---------|-------------|
| nanobot onboard | Initialize config & workspace |
| nanobot agent -m "..." | Chat with the agent |
| nanobot agent | Interactive chat mode |
| nanobot agent --no-markdown | Show plain-text replies |
| nanobot agent --logs | Show runtime logs during chat |
| nanobot gateway | Start the gateway |
| nanobot status | Show status |
| nanobot provider login openai-codex | OAuth login for providers |
| nanobot channels login | Link WhatsApp (scan QR) |
| nanobot channels status | Show channel status |

Interactive mode exits: exit, quit, /exit, /quit, :q, or Ctrl+D.

Heartbeat (Periodic Tasks)

The gateway wakes up every 30 minutes and checks HEARTBEAT.md in your workspace (~/.nanobot/workspace/HEARTBEAT.md). If the file has tasks, the agent executes them and delivers results to your most recently active chat channel.

Setup: edit ~/.nanobot/workspace/HEARTBEAT.md (created automatically by nanobot onboard):


## Periodic Tasks

- [ ] Check weather forecast and send a summary
- [ ] Scan inbox for urgent emails

The agent can also manage this file itself — ask it to "add a periodic task" and it will update HEARTBEAT.md for you.

> Note: The gateway must be running (nanobot gateway) and you must have chatted with the bot at least once so it knows which channel to deliver to.
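A checklist like the one above is plain markdown, so pending tasks can be pulled out with a one-line regex. A sketch (illustrative only, not nanobot's actual parser):

```python
import re

def pending_tasks(markdown: str) -> list[str]:
    """Return the text of unchecked '- [ ]' items from a HEARTBEAT-style checklist."""
    return re.findall(r"^- \[ \] (.+)$", markdown, flags=re.MULTILINE)

doc = """## Periodic Tasks

- [ ] Check weather forecast and send a summary
- [x] Scan inbox for urgent emails
"""
pending_tasks(doc)  # ["Check weather forecast and send a summary"]
```

Checked items (`- [x]`) are skipped, so completed tasks stay in the file as a record without being re-run.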

🐳 Docker

> [!TIP]

> The -v ~/.nanobot:/root/.nanobot flag mounts your local config directory into the container, so your config and workspace persist across container restarts.

Docker Compose


docker compose run --rm nanobot-cli onboard   # first-time setup
vim ~/.nanobot/config.json                     # add API keys
docker compose up -d nanobot-gateway           # start gateway

docker compose run --rm nanobot-cli agent -m "Hello!"   # run CLI
docker compose logs -f nanobot-gateway                   # view logs
docker compose down                                      # stop

Docker


# Build the image
docker build -t nanobot .

# Initialize config (first time only)
docker run -v ~/.nanobot:/root/.nanobot --rm nanobot onboard

# Edit config on host to add API keys
vim ~/.nanobot/config.json

# Run gateway (connects to enabled channels, e.g. Telegram/Discord/Mochat)
docker run -v ~/.nanobot:/root/.nanobot -p 18790:18790 nanobot gateway

# Or run a single command
docker run -v ~/.nanobot:/root/.nanobot --rm nanobot agent -m "Hello!"
docker run -v ~/.nanobot:/root/.nanobot --rm nanobot status

🐧 Linux Service

Run the gateway as a systemd user service so it starts automatically and restarts on failure.

1. Find the nanobot binary path:


which nanobot   # e.g. /home/user/.local/bin/nanobot

2. Create the service file at ~/.config/systemd/user/nanobot-gateway.service (replace ExecStart path if needed):


[Unit]
Description=Nanobot Gateway
After=network.target

[Service]
Type=simple
ExecStart=%h/.local/bin/nanobot gateway
Restart=always
RestartSec=10
NoNewPrivileges=yes
ProtectSystem=strict
ReadWritePaths=%h

[Install]
WantedBy=default.target

3. Enable and start:


systemctl --user daemon-reload
systemctl --user enable --now nanobot-gateway

Common operations:


systemctl --user status nanobot-gateway        # check status
systemctl --user restart nanobot-gateway       # restart after config changes
journalctl --user -u nanobot-gateway -f        # follow logs

If you edit the .service file itself, run systemctl --user daemon-reload before restarting.

> Note: User services only run while you are logged in. To keep the gateway running after logout, enable lingering:

>

> ```bash
> loginctl enable-linger $USER
> ```

📁 Project Structure


nanobot/
├── agent/          # 🧠 Core agent logic
│   ├── loop.py     #    Agent loop (LLM ↔ tool execution)
│   ├── context.py  #    Prompt builder
│   ├── memory.py   #    Persistent memory
│   ├── skills.py   #    Skills loader
│   ├── subagent.py #    Background task execution
│   └── tools/      #    Built-in tools (incl. spawn)
├── skills/         # 🎯 Bundled skills (github, weather, tmux...)
├── channels/       # 📱 Chat channel integrations
├── bus/            # 🚌 Message routing
├── cron/           # ⏰ Scheduled tasks
├── heartbeat/      # 💓 Proactive wake-up
├── providers/      # 🤖 LLM providers (OpenRouter, etc.)
├── session/        # 💬 Conversation sessions
├── config/         # ⚙️ Configuration
└── cli/            # 🖥️ Commands

🤝 Contribute & Roadmap

PRs welcome! The codebase is intentionally small and readable. 🤗

Roadmap — Pick an item and open a PR!

  • [ ] Multi-modal — See and hear (images, voice, video)
  • [ ] Long-term memory — Never forget important context
  • [ ] Better reasoning — Multi-step planning and reflection
  • [ ] More integrations — Calendar and more
  • [ ] Self-improvement — Learn from feedback and mistakes

Contributors


⭐ Star History

Thanks for visiting ✨ nanobot!


nanobot is for educational, research, and technical exchange purposes only