How to Set Up a Private AI Bot on Telegram

Apr 2026  ·  15 min read  ·  By the GetMyPersonalAI team

Telegram is a surprisingly good interface for an AI assistant. The app is available everywhere, supports long messages, and has a clean bot API that makes integration straightforward. The problem is that most Telegram AI bots send your messages straight to OpenAI or Anthropic — which defeats the privacy advantage of using a messaging app you trust.

This guide shows you two ways to build a Telegram AI bot that keeps your conversations on hardware you control. Part 1 is the full DIY path — every step, including simplified code. Part 2 is the managed option if you want the result without the work.

Prerequisites for Part 1: A computer or server running Linux/macOS (or WSL2 on Windows), Python 3.10+, and around 3–4 hours. You'll also need Ollama installed — see our self-hosted AI setup guide for that step.

Part 1: The DIY Path

Step 1: Create a Telegram bot via @BotFather

Bot creation on Telegram is handled entirely through another bot, @BotFather. Here's the sequence:

  1. Open Telegram and search for @BotFather (verified with a blue checkmark)
  2. Send /newbot
  3. BotFather asks for a display name — this is what users see (e.g., "My Private Assistant")
  4. Then it asks for a username — must end in bot (e.g., my_private_ai_bot)
  5. BotFather returns a token that looks like: 7123456789:AAHxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Save that token. It's your bot's API key — treat it like a password. Anyone with this token can control your bot.
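The token has a predictable shape: a numeric bot ID, a colon, then an alphanumeric secret. That's enough to sanity-check it before wiring it into anything. A quick sketch (the pattern is an approximation based on observed tokens, not an official spec):

```python
import re

# Rough shape of a BotFather token: <numeric bot id>:<long alphanumeric secret>.
# This is a sanity check, not validation; Telegram doesn't publish a spec.
TOKEN_PATTERN = re.compile(r"^\d{8,12}:[A-Za-z0-9_-]{30,}$")

def looks_like_bot_token(token: str) -> bool:
    """Return True if the string has the general shape of a bot token."""
    return bool(TOKEN_PATTERN.match(token))

print(looks_like_bot_token("7123456789:AAHxIqzmCyIrDzBJmDmxrwM9HcJqCs0aBcD"))  # True
print(looks_like_bot_token("not-a-token"))  # False
```

A check like this catches the common failure of pasting a truncated token into an environment variable and only finding out at the first API call.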

While you're in BotFather, also run /setprivacy and set it to DISABLED. This allows your bot to read all messages in group chats, not just commands. If you only plan to use it in private chats, this doesn't matter.

Step 2: Get your Telegram user ID

For a private bot, you want to restrict access so only you (and authorized users) can send it messages. To do this, you need your Telegram user ID — not your username. The easiest way is to message @userinfobot and it will reply with your numeric ID (e.g., 123456789).
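If you'd rather not involve a third-party bot at all, you can get the ID from your own bot: message it once, then call the Bot API's getUpdates method with your token and read the sender ID out of the response. A sketch of the parsing step, against a trimmed example payload (real responses carry more fields):

```python
import json

# Trimmed example of a getUpdates response. In practice you'd fetch this with:
#   curl "https://api.telegram.org/bot<TOKEN>/getUpdates"
sample = json.loads("""
{
  "ok": true,
  "result": [
    {
      "update_id": 100000001,
      "message": {
        "message_id": 1,
        "from": {"id": 123456789, "is_bot": false, "first_name": "Alice"},
        "chat": {"id": 123456789, "type": "private"},
        "text": "hello"
      }
    }
  ]
}
""")

# The numeric user ID lives at result[n].message.from.id
sender_ids = {u["message"]["from"]["id"] for u in sample["result"] if "message" in u}
print(sender_ids)  # {123456789}
```

Whichever route you take, note the ID is numeric and permanent; usernames can change, which is why the allowlist in Step 5 uses IDs.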

Step 3: Make sure Ollama is running

Your bot will send user messages to your local Ollama instance for processing. Verify it's running and responding:

```shell
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Hello", "stream": false}' \
  | python3 -c "import sys, json; print(json.load(sys.stdin)['response'])"
```

If you get a response back, you're ready to wire up the bot.
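The bot itself talks to Ollama's /api/chat endpoint (the conversational variant of the API the curl above exercises), so it's worth knowing that response shape too: a JSON object whose message field holds the assistant's turn. A parsing sketch against a trimmed example response (real responses also include timing and token-count fields):

```python
import json

# Trimmed example of an Ollama /api/chat response with "stream": false
sample = json.loads("""
{
  "model": "llama3",
  "message": {"role": "assistant", "content": "Hello! How can I help?"},
  "done": true
}
""")

# The reply text lives at message.content
reply = sample["message"]["content"]
print(reply)
```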

Step 4: Install the Python library

```shell
pip install python-telegram-bot==21.6 requests
```

Version 21.x uses the async API, which handles concurrent requests properly. Don't pin to an older version — the API changed significantly between v13 and v20.

Step 5: Write the bot

Here's a minimal but fully functional private AI Telegram bot. Save this as pai_bot.py:

```python
#!/usr/bin/env python3
import asyncio
import os

import requests
from telegram import Update
from telegram.ext import (
    ApplicationBuilder, MessageHandler, filters, ContextTypes
)

# Configuration — set via environment variables
BOT_TOKEN = os.environ["TELEGRAM_BOT_TOKEN"]
ALLOWED_USERS = set(
    int(x) for x in os.environ.get("ALLOWED_USER_IDS", "").split(",") if x
)
OLLAMA_URL = os.environ.get("OLLAMA_URL", "http://localhost:11434/api/chat")
MODEL = os.environ.get("OLLAMA_MODEL", "llama3")
SYSTEM_PROMPT = os.environ.get(
    "SYSTEM_PROMPT", "You are a helpful personal assistant."
)

# Keep a rolling conversation history per user (last 20 messages)
conversation_history: dict[int, list[dict]] = {}


def query_ollama(user_id: int, message: str) -> str:
    """Send a message to Ollama's chat endpoint and return the reply."""
    history = conversation_history.setdefault(user_id, [])
    history.append({"role": "user", "content": message})
    if len(history) > 20:
        del history[:-20]  # trim in place so the stored reference stays current
    payload = {
        "model": MODEL,
        "messages": [{"role": "system", "content": SYSTEM_PROMPT}] + history,
        "stream": False,
    }
    resp = requests.post(OLLAMA_URL, json=payload, timeout=120)
    resp.raise_for_status()
    reply = resp.json()["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply


async def handle_message(update: Update, context: ContextTypes.DEFAULT_TYPE):
    """Handle incoming messages — check auth, query model, reply."""
    user_id = update.effective_user.id

    # Enforce the allowlist if one is configured
    if ALLOWED_USERS and user_id not in ALLOWED_USERS:
        await update.message.reply_text("Sorry, I'm not available to you.")
        return

    user_text = update.message.text
    if not user_text:
        return

    await context.bot.send_chat_action(
        chat_id=update.effective_chat.id, action="typing"
    )
    try:
        # Run the blocking HTTP call in a thread so the event loop stays free
        reply = await asyncio.to_thread(query_ollama, user_id, user_text)
    except Exception as e:
        reply = f"Error reaching the model: {e}"
    await update.message.reply_text(reply)


def main():
    app = ApplicationBuilder().token(BOT_TOKEN).build()
    app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, handle_message))
    print(f"Bot starting. Allowed users: {ALLOWED_USERS or 'all'}")
    app.run_polling(drop_pending_updates=True)


if __name__ == "__main__":
    main()
```

Run it with your credentials:

```shell
TELEGRAM_BOT_TOKEN="7123456789:AAHxxxxxxx" \
ALLOWED_USER_IDS="123456789" \
OLLAMA_MODEL="llama3" \
python3 pai_bot.py
```

Open Telegram, find your bot, and send it a message. It should respond using your local Ollama model.
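One practical wrinkle you'll hit with longer replies: Telegram rejects messages over 4,096 characters, and local models happily exceed that. A small helper (hypothetical, not part of the script above) that splits a reply into sendable chunks, preferring line boundaries:

```python
TELEGRAM_MAX = 4096  # Telegram's hard per-message limit

def chunk_reply(text: str, limit: int = TELEGRAM_MAX) -> list[str]:
    """Split text into chunks of at most `limit` chars, breaking at newlines when possible."""
    chunks = []
    while len(text) > limit:
        # Break at the last newline inside the window, if there is one
        cut = text.rfind("\n", 0, limit)
        if cut <= 0:
            cut = limit
        chunks.append(text[:cut])
        text = text[cut:].lstrip("\n")
    if text:
        chunks.append(text)
    return chunks

# Each chunk would then be sent with its own reply_text() call
for part in chunk_reply("x" * 10000):
    print(len(part))  # 4096, 4096, 1808
```

Wiring it in is a two-line change to `handle_message`: loop over `chunk_reply(reply)` instead of sending `reply` directly.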

Privacy note: With this setup, the flow is: Telegram → your server → Ollama. Your messages leave your device over Telegram's network, reach your bot server, then get processed by Ollama locally. Nothing reaches OpenAI or any third-party AI provider. The only company with visibility into message content is Telegram — which is significantly better than OpenAI, but not zero-trust.

Step 6: Run it persistently

Running the script in a terminal means it dies when you close the window. For 24/7 operation, use systemd or Docker.

Quick systemd setup — create /etc/systemd/system/pai-bot.service:

```ini
[Unit]
Description=PAI Telegram Bot
After=network.target

[Service]
User=ubuntu
Environment="TELEGRAM_BOT_TOKEN=YOUR_TOKEN"
Environment="ALLOWED_USER_IDS=YOUR_ID"
Environment="OLLAMA_MODEL=llama3"
ExecStart=/usr/bin/python3 /opt/pai-bot/pai_bot.py
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

Then enable and start it:

```shell
sudo systemctl enable pai-bot
sudo systemctl start pai-bot
sudo systemctl status pai-bot
```
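The Docker route is similar. A minimal sketch (the base image and paths are assumptions, adjust to your layout), not hardened for production:

```dockerfile
# Dockerfile: minimal image for the bot
FROM python:3.12-slim
WORKDIR /app
RUN pip install --no-cache-dir python-telegram-bot==21.6 requests
COPY pai_bot.py .
CMD ["python3", "pai_bot.py"]
```

Build and run with the same environment variables, e.g. `docker build -t pai-bot .` then `docker run -d --restart unless-stopped --network host -e TELEGRAM_BOT_TOKEN=... -e ALLOWED_USER_IDS=... pai-bot`. On Linux, `--network host` lets the default `localhost:11434` Ollama URL keep working from inside the container.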

Practical tips for local development

If you're running Ollama on your laptop and want to test the bot before moving to a server, good news: because the bot uses long polling (`run_polling`), it only makes outbound connections to Telegram. No public IP, port forwarding, or tunnel is needed, and it works fine behind NAT. A tunnel like ngrok only enters the picture if you split the setup, say, the bot on a cloud server while Ollama stays on your home machine:

```shell
# Install ngrok, then expose your local Ollama port to the remote bot
ngrok http 11434
```

This is useful for development but isn't a production setup — ngrok tunnels have bandwidth limits and your laptop needs to stay on. For a production bot, run both the bot process and Ollama on a cloud instance or a dedicated home server.

Model selection for chat bots

Not all models behave equally as chat assistants. One rule matters more than any specific model pick: avoid the non-instruct variants for chat. They're base models trained to complete text, not follow instructions, and they'll behave strangely as a chatbot.

Part 2: The Managed Path

If you read Part 1 and thought "I want the result but not the work" — that's what GetMyPersonalAI is built for.

The flow is: you sign up, provide your Telegram username, and within 60 seconds you receive a Telegram bot token connected to a private AI assistant running on a dedicated EC2 instance. No server setup, no Python code, no systemd configs.

The underlying architecture is similar to what Part 1 describes — an Ollama-backed assistant with conversation memory, accessible only to you — but it runs 24/7 on a server you don't have to manage.

|  | DIY Bot | GetMyPersonalAI |
|---|---|---|
| Setup time | 3–5 hours (experienced) / full day (first time) | 60 seconds |
| Ongoing maintenance | Updates, monitoring, crash recovery — you | Handled automatically |
| Model quality | Depends on your hardware | Nemotron Super (large hosted model) |
| 24/7 availability | Only if server stays on | Yes, EC2-hosted |
| Cost | $0 (local) or $40–80/mo (cloud) | $19.99/mo |
| Data location | Your hardware | Your EC2 instance |
| Customization | Full control | Within platform parameters |

Both options keep your AI conversations off OpenAI's infrastructure. The choice is really about whether you want to own the operational work. DIY is a legitimate, excellent choice if you enjoy it. Managed makes sense if you just want the assistant to work.

Get your private Telegram AI bot in 60 seconds

No Ollama setup, no Python code, no server administration. Just a private AI assistant on your own EC2 instance, accessible over Telegram.

Start your $1 trial

More from the blog

Self-Hosting
Self-Hosted AI Assistant: Complete Setup Guide
Privacy
Is ChatGPT Safe? What Actually Happens to Your Data