Why I Built GetMyPersonalAI (And What I Learned)
There was a specific moment. I was using ChatGPT to help draft a response to a difficult client situation — the kind of email that needed careful wording, where the stakes were real. I finished the email, closed the tab, and then sat there for a second thinking: that conversation now lives on OpenAI's servers indefinitely.
The email referenced a client by name. It described a sensitive contractual dispute. It included my read on what had gone wrong and what I was actually trying to achieve. None of that was public information, and none of it should be sitting in a training dataset somewhere.
The uncomfortable part was that I had been doing this for months — business strategy, personnel decisions, financial projections, negotiation prep. All of it going through ChatGPT because it was genuinely useful and I'd stopped thinking about where it was going.
The research phase
My first instinct was that there had to be a good alternative. Surely someone had built a privacy-respecting version of this. I spent a few weeks actually looking.
What I found fell into a few categories:
Enterprise upgrades
ChatGPT Enterprise and Azure OpenAI both promise not to use your data for model training. That's an improvement, but your data still flows through their infrastructure. You're trusting their security practices, their staff access controls, and their legal obligations. Not nothing, but not what I was looking for. It also costs significantly more — enterprise tiers aren't $20/month pricing.
Self-hosted open-source tools
There's a whole ecosystem here: Ollama, LocalAI, LM Studio, PrivateGPT, Open WebUI. These tools are genuinely good. The problem is that "self-hosted" in practice means you need to install software, configure a server, manage updates, and troubleshoot when things break. I tried it. I got Ollama running, wired up Open WebUI, and had a working setup. I also spent a Saturday debugging GPU driver conflicts after a system update.
More practically: my setup was on my laptop. When my laptop was off, my AI assistant was off. For something I wanted to use from my phone throughout the day, that wasn't workable.
I was spending more time maintaining my "private AI" than I was actually using it. The overhead was eating the benefit.
Cloud-hosted alternatives
There are other AI services that claim better privacy — smaller companies, different terms of service, some open-source backends. The problem here is trust. A small company with a good privacy policy is still a company that could be acquired, could change its policies, could get breached. The privacy guarantee is only as strong as the organization's incentives, which can change.
The actual insight
After a few months of this research, I arrived at a realization that sounds obvious in retrospect: the right solution is a server that you own, configured and managed by someone else.
Not cloud software you access via API — actual infrastructure under your AWS account, which you control and which nobody else has routine access to. But also not DIY — because the whole problem with DIY is the operational overhead that most people, including me, don't want.
This is what I ended up calling "managed self-hosting." It's a real infrastructure pattern used in other parts of the tech industry — think managed Kubernetes, managed databases, managed mail servers. You get the ownership and data isolation benefits of running your own infrastructure, without becoming an expert in the infrastructure itself.
Applied to AI: your data never leaves an EC2 instance that belongs to you. The AI model runs locally on that instance. No API calls to OpenAI, no data moving between your instance and anyone else's. From a privacy standpoint, it's equivalent to running everything yourself — because it is: the model runs under your own account, on an instance only you control.
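To make that concrete, here's a minimal sketch of what "the model runs locally" means in practice. This is illustrative, not GetMyPersonalAI's actual code: it assumes Ollama is listening on its default port (11434) on the same instance, so the prompt is sent to the loopback address and never leaves the machine.

```python
import json
import urllib.request

# Ollama's default local endpoint. The host is the loopback address,
# so the request never crosses the instance boundary.
OLLAMA_URL = "http://127.0.0.1:11434/api/generate"

def build_request(prompt: str, model: str = "llama3") -> urllib.request.Request:
    """Build a request to the local Ollama server; everything stays on-instance."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )

def ask_local_model(prompt: str) -> str:
    """Send the prompt to the local model and return its reply text."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.loads(resp.read())["response"]
```

The model name is a placeholder — the point is the URL: there is no third-party inference API anywhere in the request path.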
Building it
The technical work took a few months. The actual AI assistant part was relatively straightforward — Ollama is excellent software and the Telegram bot API is well-designed. The harder work was making the deployment process fast and reliable enough that it could happen in under a minute.
Early versions required manual steps after provisioning. I remember spending 45 minutes walking a beta user through some configuration over a video call. That was clearly not acceptable. The goal was: you pay, you get a Telegram bot, it works. No calls, no instructions, no IT involvement.
Getting there required building a deployment system that handles everything — EC2 provisioning, Docker container setup, model download, Telegram bot registration, access control configuration — and doing it automatically. That took iteration, but it's where I ended up.
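Of those pieces, access control is the easiest to show. A sketch under assumptions — the allow-list contents and function names are hypothetical: the idea is that the instance is provisioned with its owner's Telegram user ID, and the bot silently drops every update from anyone else.

```python
# Hypothetical allow-list, written to the instance at provision time.
ALLOWED_USER_IDS = {111111111}

def is_authorized(update: dict, allowed=ALLOWED_USER_IDS) -> bool:
    """The bot answers only its owner: unknown sender IDs are rejected."""
    sender = (update.get("message") or {}).get("from") or {}
    return sender.get("id") in allowed

def handle(update: dict, answer):
    """Gate every incoming update before it reaches the model."""
    if not is_authorized(update):
        return None  # silently ignore strangers; the bot never reveals it exists
    return answer(update["message"]["text"])
```

It's a small thing, but it's the difference between "a Telegram bot" and "your Telegram bot": the model behind it is reachable by exactly one person.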
What I learned about users
The clearest thing I learned from talking to early users is that almost nobody wants to think about their AI infrastructure. They want the assistant to work. The fact that it runs privately is important to them — they value it and they'll pay for it — but they don't want to manage it.
This seems obvious, but it cut against my initial instincts. I spent a lot of early time building configuration options, thinking users would want to tweak things. They didn't. They wanted defaults that worked well and an assistant they could rely on.
The exception is users who came from a technical background and had already tried DIY. Those users understood exactly what the product was and why it was better than maintaining their own setup. They converted quickly and churned slowly.
What I learned about privacy as a value proposition
Privacy is a real motivation for real users, but it's more nuanced than I expected. People don't typically explain their AI choices in terms of privacy — they explain them in terms of trust and comfort. "I don't feel comfortable putting that in ChatGPT" is different from "I have a privacy requirement," but the solution is the same.
The users who care most tend to be in specific situations: executives handling sensitive business information, professionals with client confidentiality obligations, people who've read about how training data works and started noticing what they're actually typing. Once that awareness kicks in, it's hard to unlearn.
The goal was never to replace ChatGPT for everyone. It's to be the right answer for people who've asked the question: "Where does this data actually go?"
Where it is now
GetMyPersonalAI is running in production with a small group of users across different backgrounds — executives, consultants, a therapist, a few engineers who wanted something to replace their local setups. The feedback has been that it works well and the privacy model is what they expected.
There's a lot still to build. Better memory systems, more context management, deeper integrations. The core value — private AI on infrastructure you own, without the overhead of running it yourself — is solid, and that's the foundation everything else sits on.
If you've had the same moment I had, closing a ChatGPT tab and wondering what just went where, this was built for you.
Private AI. No infrastructure work.
Your own EC2-hosted AI assistant, running on hardware you control. Start with a $1 trial — no commitment, cancel any time.
Start your $1 trial