I’ve been deep in code lately, and the LLM world is moving fast. Here is what has actually been working for me. I still love coding, and these tools have been a genuine force multiplier for my productivity.
Current subscriptions #
- ChatGPT Pro – daily driver for general use
- Claude Pro – mainly for opus-4.7-xhigh planning
- Google AI Pro – mainly for Gemini planning
- GitHub Copilot Pro+ – miscellaneous model usage and opus-4.7 backup usage (although it only supports med thinking… no high or xhigh)
Models #
gpt-5.5-xhigh has been fantastic for planning. Its engineering mindset just seems to work – good breakdowns, edge-case detection, and executable plans. It’s my favorite daily driver.
opus-4.7-max is good on the creativity front (UI, layouts, etc), but it doesn’t feel much different than opus-4.6-max. I’m curious why GitHub Copilot Pro+ charges 15x credit usage for it, considering it’s not even as intelligent as gpt-5.5-xhigh.
Workflow #
My favorite workflow at the moment is to throw the same problem at gemini-3.1-pro-preview, opus-4.7-max, gpt-5.5-xhigh, and gpt-5.3-codex-xhigh. After getting plans from each of those, I let gpt-5.5-xhigh synthesize everything into one final execution plan. It’s been really effective and seems to catch quite a few edge cases, since you have four generally quite intelligent models looking at the same problem set.
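For the curious, the fan-out-then-synthesize loop can be scripted. This is a minimal sketch: `call_model` is a hypothetical stand-in for whatever provider client you actually use (OpenAI SDK, Anthropic SDK, Gemini API, etc.), not a real library call.

```python
def call_model(model: str, prompt: str) -> str:
    # Hypothetical placeholder: swap in the real API call
    # for each provider (OpenAI, Anthropic, Google, ...).
    return f"[{model}] plan for: {prompt}"

# The four planners from the workflow above.
PLANNERS = [
    "gemini-3.1-pro-preview",
    "opus-4.7-max",
    "gpt-5.5-xhigh",
    "gpt-5.3-codex-xhigh",
]

def gather_plans(problem: str) -> dict[str, str]:
    # Step 1: ask every model for an independent plan.
    return {m: call_model(m, problem) for m in PLANNERS}

def synthesize(problem: str, plans: dict[str, str]) -> str:
    # Step 2: hand all candidate plans to one model to merge
    # into a single final execution plan.
    merged = "\n\n".join(f"## {m}\n{p}" for m, p in plans.items())
    prompt = (
        f"Problem:\n{problem}\n\n"
        f"Candidate plans:\n{merged}\n\n"
        "Synthesize these into one final execution plan."
    )
    return call_model("gpt-5.5-xhigh", prompt)
```

Running the independent-plan step in parallel (threads or async) keeps the wall-clock cost close to a single model call.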
Thoughts on providers #
Anthropic has been really irritating lately. Service reliability issues keep popping up, support tickets go unanswered, and Dario’s dire predictions about the future aren’t helping me feel that great about their direction. I still use Claude for creativity, but sometimes it just won’t respond, or will output error messages for no particular reason.
Google’s Gemini isn’t winning any awards right now. It feels like a bare-minimum effort to stay relevant. That said, I think they’ll win in the long term, thanks to their deep pockets, massive infrastructure, and willingness to take losses. Those collectively give them a huge advantage.
OpenAI’s ChatGPT/Codex continues to be the reliable workhorse in my book. No nonsense – just solid performance and intelligence.
Bigger picture #
This whole situation lines up strongly with what Theo discussed in his recent video covering a “hot take” video from Prime. Compute scarcity is real, pricing is adjusting, and the subsidized consumer era is maturing. It’s not doom and gloom – it’s just the industry maturing.
I’m still excited. LLMs haven’t replaced the joy of building – they seem to just be making me more productive. I don’t mind that.
Thanks for reading. Happy coding (and prompting)!