
I Let AI Send an Email for Me. Here's What Went Wrong.

Daniel Appelgren · March 11, 2026 · 9 min read

Last year, before I built TendBot, I tried an AI email tool that promised to handle my follow-ups automatically. I connected it to Gmail, configured a few rules, and let it run.

For three days, it worked fine. Helpful, even. It followed up on a few stale threads, sent a couple of meeting confirmations. I started to relax.

On day four, it sent a pricing discount to a client I'd already closed at full rate.

What happened

The AI had been scanning my inbox for context — that's how these tools work. It found an old thread from two months earlier where I'd been negotiating pricing with this client. The thread included some back-and-forth about volume discounts and a line I'd written about being "flexible on pricing for the right scope."

We'd already agreed on a price. The contract was signed. The project was underway.

But the AI didn't know that. It saw an old pricing thread, decided a follow-up would be helpful, and sent what it thought was a thoughtful check-in. The email referenced our "earlier discussion about flexible pricing" and asked if the client wanted to "revisit the numbers."

The client — Jonas — replied within an hour: "Great, so we can get the lower rate we discussed?"

I spent the next two hours on a call explaining that an AI tool had sent an unauthorized email from my account. Jonas was understanding about it. But the conversation was awkward, and the trust hit was real. For weeks afterward, I could feel him double-checking things with me that he'd previously taken at face value.

One email. Sent by a tool I'd installed three days earlier.

I'm not the only one

After that experience, I started paying attention to how other people's AI email tools went wrong. The stories are everywhere once you start looking.

The listing that almost fell through

A real estate agent in Gothenburg — let's call her Anna — connected an AI tool to manage her client communications. She had a listing at 4.2 million kronor that she'd just gotten the seller to reduce to 3.9 million after weeks of conversations about market conditions. The price change hadn't been published yet. She was planning to announce it strategically to two serious buyers that weekend.

Her AI tool saw the old listing details in her email threads and auto-replied to a buyer inquiry with the original 4.2 million price. The buyer, who had been on the fence, wrote back: "At that price, we're going to pass."

Anna caught it Monday morning. She salvaged the deal, but only after calling the buyer directly, explaining the mistake, and disclosing the lower price sooner than she'd planned. She lost her negotiating leverage because an algorithm sent stale information at the wrong time.

The follow-up that should never have been sent

A sales rep — Marcus — was using an AI tool to automate his pipeline follow-ups. One of his prospects, a procurement manager at a mid-size firm, had replied to an earlier outreach with a clear message: "Please remove me from your list. We've chosen another vendor and I don't want to receive further emails."

Marcus saw the reply and mentally closed the deal. But he forgot to mark it in his CRM, and he didn't remove the contact from the AI tool's follow-up sequence.

Two weeks later, the AI sent a cheerful follow-up: "Hi Erik, just checking in on our proposal. Would love to find a time to discuss next steps!"

Erik didn't reply to the email. He replied to Marcus's boss. The complaint mentioned the word "harassment."

The attachment that was never supposed to leave

This one keeps me up at night. A contractor — a plumber running a small crew — was testing an AI email tool that could "helpfully" attach relevant documents to outgoing emails. A client asked for an updated quote on a bathroom renovation. The AI found the quote, attached it, and sent the reply.

It also attached the contractor's internal pricing spreadsheet — the one with his material costs, markup percentages, and notes like "this client will pay full rate, don't discount." The spreadsheet had been attached to an internal email in the same thread, and the AI interpreted it as a relevant document.

The client opened both attachments.

That call was not a fun one.

The pattern

Every one of these stories has the same shape. An AI that reads email (good), makes a decision about what to do (risky), and then acts on that decision without asking (bad).

The tool decides. The tool acts. You find out after.

Sometimes you find out when a client replies with a confused question. Sometimes you find out when your boss forwards you a complaint. Sometimes you find out when you see a sent email in your outbox that you definitely didn't write.

In every case, the damage is already done by the time you learn about it. You're in cleanup mode. Apologizing, explaining, trying to rebuild whatever trust was spent by a tool that was supposed to save you time.

The fix isn't better AI

Here's the thing — these AI tools weren't bad at writing emails. The drafts were often decent. The timing was reasonable. If I'd seen that email to Jonas before it went out, I would have caught the problem in two seconds. "Wait, we already agreed on pricing. Delete this."

The problem wasn't the AI's writing ability. It was the auto-send model. The AI decides and acts. It skips the one step that would have caught every single one of these mistakes: showing the draft to a human first.

This is a design choice, not a technology limitation. These tools auto-send because it sounds better in marketing copy. "Fully automated email management." "Set it and forget it." "Your AI handles everything."

But "fully automated" means "fully unsupervised." And unsupervised AI sending emails in your name, to your clients, about your deals — that's not a convenience. It's a liability.

The model that actually works

The fix is simple. The AI drafts. You review. You send.

That's it. No auto-send. No autonomous decisions about who gets what message. The AI does the heavy lifting — reading threads, pulling context, writing a draft that sounds like you. Then it stops and waits for you to look at it.

Reviewing a draft takes five seconds. You scan it, confirm it makes sense, maybe change a word, and hit send. Or you see something wrong and delete it. Either way, you caught it before it went out.

Five seconds of review versus five hours of damage control. That math works every time.

In practice, approval-first looks like this:

  • The AI reads your inbox and identifies emails that need a response or a follow-up.
  • It drafts a reply based on the conversation history, your calendar, and what it knows about the contact.
  • The draft shows up on your phone as a card — you can see exactly who it's going to, the subject line, and the full text.
  • You read it. If it's good, tap send. If it needs a tweak, edit and send. If it's wrong, skip it.
  • Nothing leaves your inbox without your explicit approval.
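The loop above can be sketched in a few lines. This is a hypothetical illustration, not TendBot's actual code: `Draft`, `review`, and `send` are stand-in names, and `review` represents the human tapping send, editing, or skipping on their phone.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Optional

@dataclass(frozen=True)
class Draft:
    to: str
    subject: str
    body: str

def approval_first(
    drafts: Iterable[Draft],
    review: Callable[[Draft], Optional[Draft]],
    send: Callable[[Draft], None],
) -> list[Draft]:
    """Run the approval loop. `review` is the human: it returns the
    (possibly edited) draft to send, or None to skip it. `send` is
    only ever called with a draft the human explicitly approved."""
    sent = []
    for draft in drafts:
        approved = review(draft)   # human in the loop, every time
        if approved is None:       # skipped: nothing leaves the inbox
            continue
        send(approved)
        sent.append(approved)
    return sent
```

Note that the structure itself enforces the rule: the only call site of `send` sits behind the review check, so a skipped draft simply never reaches it.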

The AI gets better over time, too. When you edit a draft before sending, it learns. When you skip a draft, it learns. After a few weeks, you're barely editing anything — but you're still reviewing everything. That's the point.

Speed without risk

The objection I hear most is: "If I have to review every email, doesn't that defeat the purpose?"

No. Because the time cost was never in the reviewing. It was in the writing.

Composing an email from scratch means finding the thread, re-reading the context, remembering what was discussed, figuring out the right tone, drafting the message, re-reading it, editing it, and finally sending. That's 5 to 15 minutes per email, depending on complexity.

Reviewing a well-written draft means reading four sentences and tapping a button. That's 5 to 10 seconds.

The AI eliminates 95% of the work. The 5% it leaves for you — the review — is the part that prevents disasters. It's the cheapest insurance you'll ever buy.

Why I built it this way

After the Jonas incident, I disconnected every AI email tool and went back to doing everything manually. For months. It was slower, but at least nothing went out that I didn't write myself.

But the volume didn't go away. I was still drowning in follow-ups and replies and scheduling emails. I still needed help. I just needed help that didn't come with the risk of an AI freelancing with my professional relationships.

So I built TendBot around one rule: the AI never acts alone. Every outbound email, every calendar change, every action that touches the outside world waits for a human to say yes. Not as a preference or a setting you can toggle off. As an architectural constraint. The system literally cannot send an email without approval.
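One way to make "cannot send without approval" a property of the code rather than a setting is to give the sending component no path that skips the approval check. A minimal sketch of that shape (hypothetical names, not TendBot's internals):

```python
class ApprovalRequired(Exception):
    """Raised when anything tries to send an unapproved draft."""

class Outbox:
    """The only component that can reach the mail provider. It refuses
    to send any draft that does not carry a human-issued approval."""

    def __init__(self) -> None:
        self._approved: set[str] = set()

    def approve(self, draft_id: str) -> None:
        # Called only from the UI layer, after the human taps "send".
        self._approved.add(draft_id)

    def send(self, draft_id: str, message: str) -> bool:
        if draft_id not in self._approved:
            # No toggle, no override: the check is unconditional.
            raise ApprovalRequired(f"draft {draft_id} was never approved")
        # Hand off to the mail provider here (omitted in this sketch).
        return True
```

Because the AI pipeline never holds mail credentials and can only hand drafts to `Outbox`, "auto-send" is not a feature that was turned off; it is a code path that does not exist.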

I've sent thousands of AI-drafted emails through it now. Every one of them was reviewed before it went out. Every one of them had my eyes on it before it reached someone's inbox. Zero horror stories. Zero awkward phone calls. Zero "the AI sent what?"

The emails are better than what I'd write from scratch, because the AI has context I'd forget. And they're safer than any auto-send tool, because I see every word before it goes out.

That's the trade-off I wanted. Speed without risk. Help without surprises.


I built TendBot because I needed an AI that would help with email without risking my relationships. Every email it drafts waits for approval. Nothing goes out without your review. Try it free for 14 days.