How to Set Up a Private AI Bot on Telegram
Telegram is a surprisingly good interface for an AI assistant. The app is available everywhere, supports long messages, and has a clean bot API that makes integration straightforward. The problem is that most Telegram AI bots send your messages straight to OpenAI or Anthropic — which defeats the privacy advantage of using a messaging app you trust.
This guide shows you two ways to build a Telegram AI bot that keeps your conversations on hardware you control. Part 1 is the full DIY path — every step, including simplified code. Part 2 is the managed option if you want the result without the work.
Prerequisites for Part 1: A computer or server running Linux/macOS (or WSL2 on Windows), Python 3.10+, and around 3–4 hours. You'll also need Ollama installed — see our self-hosted AI setup guide for that step.
Part 1: The DIY Path
Step 1: Create a Telegram bot via @BotFather
Telegram's bot creation flow runs entirely through a special bot called @BotFather. Here's the sequence:
- Open Telegram and search for @BotFather (verified with a blue checkmark)
- Send `/newbot`
- BotFather asks for a display name — this is what users see (e.g., "My Private Assistant")
- Then it asks for a username — it must end in `bot` (e.g., `my_private_ai_bot`)
- BotFather returns a token that looks like: `7123456789:AAHxxxxxxxxxxxxxxxxxxxxxxxxxxxx`
Save that token. It's your bot's API key — treat it like a password. Anyone with this token can control your bot.
While you're in BotFather, also run /setprivacy and set it to DISABLED. This allows your bot to read all messages in group chats, not just commands. If you only plan to use it in private chats, this doesn't matter.
Step 2: Get your Telegram user ID
For a private bot, you want to restrict access so only you (and authorized users) can send it messages. To do this, you need your Telegram user ID — not your username. The easiest way is to message @userinfobot and it will reply with your numeric ID (e.g., 123456789).
Step 3: Make sure Ollama is running
Your bot will send user messages to your local Ollama instance for processing. Verify it's running and responding:
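One way to check, assuming Ollama's default port (11434) and that you've already pulled the model you plan to use (the model name here is just an example):

```shell
curl -s http://localhost:11434/api/generate \
  -d '{"model": "llama3:8b-instruct", "prompt": "Reply with OK.", "stream": false}'
```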
If you get a response back, you're ready to wire up the bot.
Step 4: Install the Python library
Version 21.x uses the async API, which handles concurrent requests properly. Don't pin to an older version — the API changed significantly between v13 and v20.
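The install itself is a single pip command — the package on PyPI is `python-telegram-bot`, and pinning to the 21.x series matches the async API described above:

```shell
pip install "python-telegram-bot>=21,<22"
```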
Step 5: Write the bot
Here's a minimal but fully functional private AI Telegram bot. Save this as pai_bot.py:
Run it with your credentials:
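Assuming the script reads its settings from `BOT_TOKEN` and `ALLOWED_USER_ID` environment variables, that looks like this (values shown are placeholders):

```shell
BOT_TOKEN="7123456789:AAH..." ALLOWED_USER_ID="123456789" python pai_bot.py
```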
Open Telegram, find your bot, and send it a message. It should respond using your local Ollama model.
Privacy note: With this setup, the flow is: Telegram → your server → Ollama. Your messages leave your device over Telegram's network, reach your bot server, then get processed by Ollama locally. Nothing reaches OpenAI or any third-party AI provider. The only company with visibility into message content is Telegram — which is significantly better than OpenAI, but not zero-trust.
Step 6: Run it persistently
Running the script in a terminal means it dies when you close the window. For 24/7 operation, use systemd or Docker.
Quick systemd setup — create /etc/systemd/system/pai-bot.service:
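A minimal unit file sketch — the `/opt/pai-bot` path, the `pai-bot` service user, and the `ollama.service` dependency are assumptions; adjust them to your layout:

```ini
[Unit]
Description=Private AI Telegram bot
# Start after the network (and Ollama, if it runs as a service on this host)
After=network-online.target ollama.service
Wants=network-online.target

[Service]
WorkingDirectory=/opt/pai-bot
Environment=BOT_TOKEN=7123456789:AAH-your-token-here
Environment=ALLOWED_USER_ID=123456789
ExecStart=/usr/bin/python3 /opt/pai-bot/pai_bot.py
Restart=on-failure
RestartSec=5
User=pai-bot

[Install]
WantedBy=multi-user.target
```

Then `sudo systemctl daemon-reload && sudo systemctl enable --now pai-bot`, and check it with `systemctl status pai-bot`.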
Practical tips for local development
If you're running Ollama on your laptop and want to test the bot before moving it to a server, there's good news: the polling approach from Step 5 makes only outbound connections to Telegram's API, so Telegram never needs to reach your machine, and no port forwarding or tunnel is required. A tunnel such as ngrok becomes relevant only if you switch the bot to webhook mode, where Telegram pushes updates to a URL you expose.
Either way, a laptop isn't a production setup: it has to stay on and awake for the bot to respond. For a production bot, you want Ollama running on a cloud instance or a dedicated home server.
Model selection for chat bots
Not all models behave equally as chat assistants. Some recommendations from experience:
- llama3:8b-instruct — Best general-purpose choice for most users. Fast responses, good instruction following, handles conversation context well.
- mistral:7b-instruct — Slightly smaller, slightly faster, slightly worse at complex reasoning. Good for quick factual queries.
- qwen2.5:14b — Better than either above if you have the RAM. Noticeably better at coding and structured output.
- phi3:mini — Very fast, very small (3.8B). Useful if you're on constrained hardware. Quality drops off for complex tasks.
Avoid the non-instruct variants for chat — they're base models trained to complete text, not follow instructions, and they'll behave strangely as a chatbot.
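Whichever model you pick, fetching it is a single command, for example:

```shell
ollama pull llama3:8b-instruct
```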
Part 2: The Managed Path
If you read Part 1 and thought "I want the result but not the work" — that's what GetMyPersonalAI is built for.
The flow is: you sign up, provide your Telegram username, and within 60 seconds you receive a Telegram bot token connected to a private AI assistant running on a dedicated EC2 instance. No server setup, no Python code, no systemd configs.
The underlying architecture is similar to what Part 1 describes — an Ollama-backed assistant with conversation memory, accessible only to you — but it runs 24/7 on a server you don't have to manage.
| | DIY Bot | GetMyPersonalAI |
|---|---|---|
| Setup time | 3–5 hours (experienced) / full day (first time) | 60 seconds |
| Ongoing maintenance | Updates, monitoring, crash recovery — you | Handled automatically |
| Model quality | Depends on your hardware | Nemotron Super (large hosted model) |
| 24/7 availability | Only if server stays on | Yes, EC2-hosted |
| Cost | $0 (local) or $40–80/mo (cloud) | $19.99/mo |
| Data location | Your hardware | Your EC2 instance |
| Customization | Full control | Within platform parameters |
Both options keep your AI conversations off OpenAI's infrastructure. The choice is really about whether you want to own the operational work. DIY is a legitimate, excellent choice if you enjoy it. Managed makes sense if you just want the assistant to work.
Get your private Telegram AI bot in 60 seconds
No Ollama setup, no Python code, no server administration. Just a private AI assistant on your own EC2 instance, accessible over Telegram.
Start your $1 trial