Batch writing tweets with GPT: the setting I missed that broke everything silently
I went down a rabbit hole trying to automate batch tweet generation with GPT, and I thought I was losing my mind. Everything looked like it was working — the outputs were clean, the Zap said SUCCESS, the timestamps logged in Airtable were correct — but nothing was posting. Not even drafts. Just 💨 into the void. This post is basically my scribbled panic log turned semi-coherent after coffee.
1. Setting temperature too low made GPT output unreadable
I was trying to control GPT to be more “on-brand,” so I set the temperature to 0.3 inside the OpenAI step in my Make scenario. Big mistake 😅
What I got back were tweets that read like IKEA manuals. Stuff like:
“Our product clarifies communication. Clarity improves results. Discover clarity.”
That’s not a tweet, that’s a whiteboard from a marketing off-site gone wrong. Worse, every tweet looked the same. It wasn’t violating any API limits — it was just silently failing the vibe check.
The key was bumping the temperature to 0.7 and using slightly more chaotic prompts. Specificity helps too. I had better luck when I described the tone more directly like:
You’re a snarky but helpful startup founder. Generate five tweets about automation stress that would resonate on Twitter.
At that point, the model started giving me content I’d actually copy-paste and not be embarrassed to attribute 👀
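For reference, here's roughly how I build the request body now. This is a sketch, not my exact Make step — the model name is a placeholder and the field names just follow the standard chat completions shape:

```javascript
// Hedged sketch of the request-builder I use in a JavaScript step.
// Model name is an assumption — swap in whatever your scenario uses.
function buildTweetRequest(persona, topic, count) {
  return {
    model: 'gpt-4o-mini',
    temperature: 0.7, // 0.3 gave me IKEA-manual tweets; 0.7 loosened things up
    messages: [
      {
        role: 'user',
        content: `You're a ${persona}. Generate ${count} tweets about ${topic} that would resonate on Twitter.`,
      },
    ],
  };
}
```

The persona going into the prompt string is doing most of the work here — the temperature just keeps the five tweets from collapsing into the same sentence.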
2. Character counts were off after I stripped newlines, even though the UI said they fit

I thought I was being clever by removing all newline joins before pushing outputs into the Twitter publish step. Then Twitter rejected every post in my queue without error. It just… didn’t post. Nothing hit notification logs. Not even the Drafts bin got hit.
Turns out, the character counter in GPT’s response includes “\n” as two characters in some configurations, but Twitter treats it as one — and Twitter’s endpoint fails with INVALID REQUEST when the input is over the limit. Zapier, annoyingly, did not throw an alert when this happened. Airtable just logged it as “completed” with a green check 😐
I only found the issue after sending the failed tweet body through twurl — yes, I opened my terminal like it was 2015 — and got this:
{"errors":[{"code":186,"message":"Tweet needs to be a bit shorter."}]}
One fix: I now truncate at 275 characters and append “…” if the tweet gets cut off. Not ideal, but better than failing silently. There’s probably a smarter regex-y solution out there, but I haven’t had the mental energy to make one that doesn’t eat half the sentence mid-word ¯\_(ツ)_/¯
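For what it's worth, a word-boundary version isn't actually that bad. This is a sketch of the truncation I should have written on day one (275 leaves a little headroom under Twitter's 280 limit):

```javascript
// Truncate to `max` characters, breaking on the last space so we
// don't chop a word in half; append "…" only when we actually cut.
function truncateTweet(text, max = 275) {
  if (text.length <= max) return text;
  const cut = text.slice(0, max);
  const lastSpace = cut.lastIndexOf(' ');
  // Fall back to a hard cut if there's no space to break on
  const safe = lastSpace > 0 ? cut.slice(0, lastSpace) : cut;
  return safe + '…';
}
```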
3. JavaScript date formatting broke tweet timings by hours

I queue tweets hourly and use an Airtable datetime field paired with a Make schedule. Days were fine, but time blocks were off by whole hours once daylight saving time kicked in.
The bug? I was using `toISOString()` in my HTML formatter instead of adjusting for local time. That converts to UTC, not Pacific. And Make runs on GMT or something mysterious. So things slotted for 9am were firing at 2am Pacific 🙃
To fix it, I stopped trusting Make’s internal time formatters and now inject this via JavaScript:
const date = new Date(triggerDate); // triggerDate comes from the Airtable field
// Render explicit Pacific time instead of toISOString()'s UTC
return date.toLocaleString('en-US', { timeZone: 'America/Los_Angeles' });
I also had to hard-enter the timezone in my Airtable formula like so (because its default was guessing GMT):
DATETIME_FORMAT(SET_TIMEZONE({Scheduled}, 'America/Los_Angeles'), 'M/D/YYYY h:mm A')
Yeah. Fun times. The output looks slightly janky in the Airtable UI, but the timing is finally accurate. I’ll take it.
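For posterity, here's the gotcha in miniature (the date is just an example I picked, mid-March so DST is in effect):

```javascript
// toISOString() always renders UTC; toLocaleString() with a timeZone
// renders the wall-clock time I actually scheduled.
const scheduled = new Date('2024-03-15T09:00:00-07:00'); // 9am PDT

const utc = scheduled.toISOString();
// "2024-03-15T16:00:00.000Z" — seven hours "later", which is how my
// morning tweets ended up slotted for the middle of the night

const local = scheduled.toLocaleString('en-US', {
  timeZone: 'America/Los_Angeles',
});
// wall-clock 9:00 AM Pacific — what I actually wanted
```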
4. GPT rate limits silently blocked my entire Zap schedule
This one felt personal lol. I was running about 40 tweet generations per day — 5 batches of 8 — all via OpenAI’s ChatGPT API in Make. For days, it was working fine. Then suddenly, none of the messages came back. The runs hit success, but the message field was empty.
No errors. Just blank output. Imagine watching your smart assistant smile and nod… and do absolutely nothing.
I finally hit GPT’s usage dashboard on openai.com, and yep: “Hard Rate Limit Reached.”
What I didn’t realize: a ChatGPT subscription doesn’t cover API usage at all (API calls are billed and rate-limited separately), and Make doesn’t propagate those OpenAI error codes directly unless you use a custom webhook.
To work around it, I:
- Split my Zap into two schedules: one early AM, one late PM
- Added a filter in Make to stop processing if “choices” array was missing
- Dumped GPT errors into a Notion page via webhook so I could at least review failures asynchronously
Also upgraded my OpenAI quota limit manually by emailing support, because auto-scaling seems like a myth for smaller accounts 👀
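The "choices missing" filter boils down to a guard like this. A sketch, not my literal Make module — in the real scenario the throw is a filter-plus-webhook branch, not an exception:

```javascript
// Bail out loudly when OpenAI hands back an empty body, instead of
// letting a blank tweet march on to the publish step.
function extractTweet(response) {
  const text = response?.choices?.[0]?.message?.content;
  if (!text || !text.trim()) {
    // In Make this branch posts the raw response to a webhook for review
    throw new Error('Empty GPT response — possibly rate limited');
  }
  return text.trim();
}
```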
5. Tweets auto posted to wrong Twitter account because of session cookies
Buckle up: I was testing manual tweet posts using Twitter’s web interface in the same browser where I had authorized Zapier’s Twitter integration. Thought nothing of it. But at some point, my browser session refreshed and linked to a test Twitter account I was using for fake data.
Zapier still used the same “Linked Account”, but apparently, the token was session-based and piggybacked stale cookies. The result? My real tweets went to @AutomationSandbox instead of my actual brand account.
No errors again. I only noticed because I saw one post get 1 view instead of my usual few hundred. I had posted 17 tweets into the void.
Fix:
- Logged out of all Twitter tabs
- Revoked token in Zapier
- Re-authenticated using Incognito to ensure clean cookie session
And yes, I manually deleted the sneak-tweets on the fake account. One of them was about how AI makes us more human. Just… no.
6. Dynamically generated hashtags sometimes reference wrong brand names
Don’t try to dynamically generate branded hashtags unless you enjoy regret. I used GPT prompts like:
Generate hashtags that include this brand’s name and mission
And thought I had the safety nets in place by feeding in the company name directly. But GPT tries to be helpful. Too helpful.
Like, I run automations for a SaaS tool called Fuseflow, and it generated:
#FlowifyYourWorkplace, #AutoZen, #ZapMonster
Except #ZapMonster is not a thing. It’s a competitor. GPT hallucinated a brand identity. One of the tweets literally said:
With ZapMonster you don’t need to think. Just automate.
NO 😭
The bigger issue was that I didn’t validate the string output — I just piped it straight into publishing. Gross mistake.
Now I parse GPT’s output through a tiny validation function:
function cleanHashtag(str) {
  // Strip anything that isn't a letter, digit, or '#', then cap the length
  return str.replace(/[^#a-zA-Z0-9]/g, '').substring(0, 20);
}
Plus a blocklist filter that red-flags prohibited patterns like competitor names or cusswords (yep, one came back with #FMLSoMuchWork).
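The blocklist pass is nothing fancy — roughly this shape (the patterns here are illustrative, not my real list):

```javascript
// Patterns to red-flag: competitor names, cusswords, etc.
// These two are just examples from the post, not a complete list.
const BLOCKLIST = [/zapmonster/i, /fml/i];

function isSafeHashtag(tag) {
  return !BLOCKLIST.some((pattern) => pattern.test(tag));
}
```

Anything that fails the check gets routed to a review queue instead of the publish step.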
Even then, I don’t trust it fully. Still copy-editing everything before it posts, but at least it’s semi-safe by default now.