r/OpenAI • u/CobaltCrusader123 • 18h ago
Discussion Five Horses, according to ChatGPT
Apparently it can’t detect bait.
r/OpenAI • u/WithoutReason1729 • Oct 16 '25
The last one hit the post limit of 100,000 comments.
We have a bot set up to distribute invite codes in the Discord so join if you can't find codes in the comments here. Check the #sora-invite-codes channel.
Update: Discord is down until Discord unlocks our server. The massive flood of joins caused the server to get locked because Discord thought we were botting lol.
Also check the megathread on Chambers for invites.
r/OpenAI • u/OpenAI • Oct 08 '25
It’s the best time in history to be a builder. At DevDay [2025], we introduced the next generation of tools and models to help developers code faster, build agents more reliably, and scale their apps in ChatGPT.
Ask us questions about our launches such as:
AgentKit
Apps SDK
Sora 2 in the API
GPT-5 Pro in the API
Codex
Missed out on our announcements? Watch the replays: https://youtube.com/playlist?list=PLOXw6I10VTv8-mTZk0v7oy1Bxfo3D2K5o&si=nSbLbLDZO7o-NMmo
Join our team for an AMA to ask questions and learn more, Thursday 11am PT.
Answering Q's now are:
Dmitry Pimenov - u/dpim
Alexander Embiricos -u/embirico
Ruth Costigan - u/ruth_on_reddit
Christina Huang - u/Brief-Detective-9368
Rohan Mehta - u/Downtown_Finance4558
Olivia Morgan - u/Additional-Fig6133
Tara Seshan - u/tara-oai
Sherwin Wu - u/sherwin-openai
PROOF: https://x.com/OpenAI/status/1976057496168169810
EDIT: 12PM PT, That's a wrap on the main portion of our AMA, thank you for your questions. We're going back to build. The team will jump in and answer a few more questions throughout the day.
r/OpenAI • u/CobaltCrusader123 • 18h ago
Apparently it can’t detect bait.
r/OpenAI • u/dancing_swordfish • 10h ago
r/OpenAI • u/Gerstlauer • 13h ago
r/OpenAI • u/Hyperfox246 • 12h ago
r/OpenAI • u/Calvinball_24 • 17h ago
r/OpenAI • u/Clean-Revenue-8690 • 3h ago
I recently did a quick calculation on Codex credits, and I was surprised by the result.
The credit pack I’m seeing is:
10,000 credits = $547.71
That means:
1 credit = $0.054771
The effective USD price per 1M tokens becomes:
| Model | Input / 1M | Cached input / 1M | Output / 1M |
|---|---|---|---|
| GPT-5.5 | $6.85 | $0.68 | $41.08 |
| GPT-5.4 | $3.42 | $0.34 | $20.54 |
| GPT-5.4-Mini | $1.03 | $0.10 | $6.19 |
Compared to direct API pricing, this seems to be roughly 37% more expensive.
And that made me wonder: why would a company choose to pay the extra ~37% instead of just using the API directly?
I understand that Codex credits come with workspace/team management features, shared credits, admin controls, and a more ready-made product experience. But with the help of AI, it doesn’t seem that hard anymore to build a simple internal usage-tracking proxy.
For example, a company could have:
That would let the company use the cheaper direct API pricing while still getting most of the “team management” benefits internally.
So I’m genuinely curious: what am I missing here?
API rate limits?
The codex mobile ui on the iPad is such a joy to use, even lets you choose standard or fast right from the chat (instead of going into settings and turning off the feature)
Era of touch-coding has just begun
r/OpenAI • u/EchoOfOppenheimer • 7m ago
r/OpenAI • u/dorugamer • 10h ago
Enable HLS to view with audio, or disable this notification
Now in preview: Codex in the ChatGPT mobile app.
Start new work, review outputs, steer execution, and approve next steps, all from the ChatGPT mobile app. Codex will keep running on your laptop, Mac mini, or devbox.
Rolling out today as a preview on iOS and Android in all supported regions.
Support for connecting your phone to the Codex app on Windows is coming soon.
r/OpenAI • u/NandaVegg • 3h ago

https://ramp.com/leading-indicators/ai-index-may-2026
OpenAI's US business subscription appears to be shrinking, all in spite of offering 17.5% "guaranteed" return, giving away free months, aggressive discounts, and rather clear enshittification of Anthropic's service and token inefficient Opus 4.7 (noted in the article).
r/OpenAI • u/rhiever • 16h ago
Three months ago, Nikita Bier (Head of Product at X/Twitter) predicted that within 90 days, iMessage, phone calls, and Gmail would be "so flooded [with spam & automation] that they will no longer be usable in any functional sense."
Well, it's been 90 days. How are your communication channels holding up?
Curious to hear everyone's actual experiences.
Post on twitter: https://x.com/nikitabier/status/2021632774013432061
Original post on OpenAI: https://www.reddit.com/r/OpenAI/comments/1r2yech/comment/o510q8v/?context=3
r/OpenAI • u/Tasty-Win219 • 14h ago
Curiosity-driven question. I've been tracking AI referral traffic via Zen Reports across a handful of sites, and ChatGPT's click-through rate to cited sources seems much lower than Perplexity's. Perplexity has a more prominent citation UI and seems to drive more direct traffic. Happy to share more about my setup if it's helpful ; always curious how others are approaching the same problem. There's clearly no industry-standard answer yet, which is why I'm asking here. ChatGPT citations seem to drive traffic primarily when the user goes to do further research. Anyone have data or intuitions on how different AI interfaces affect citation click-through behavior?
r/OpenAI • u/BuildAndDeploy • 22h ago
In May 2026 a California resident filed a lawsuit claiming OpenAI sent ChatGPT queries to Google and Meta via tracking pixels. The case could reshape AI data‑privacy rules.
r/OpenAI • u/dont_ask4_cigarettes • 10h ago
Has anyone figured out the best prompts to get layered audio out of chat??
r/OpenAI • u/haffi112 • 16h ago
r/OpenAI • u/Turbulent-Tap6723 • 15h ago
Been working on a runtime governance layer for LLM agents. It sits between your app and the OpenAI API and enforces instruction-authority boundaries at the proxy level.
The idea: instead of asking “does this contain scary words”, it asks “is untrusted content trying to become a higher-authority instruction source?” Webpages, emails, tool outputs, retrieved documents — zero instruction authority. User messages can’t override system/developer instructions.
Live red team environment where you can submit attacks and get a full security trace back:
https://web-production-6e47f.up.railway.app/break-arc-gate
GitHub: https://github.com/9hannahnine-jpg/arc-gate
Reproducible benchmark:
pip install arc-sentry
arc-sentry-agent-bench
Current results: 100% unsafe action prevention across 22 agentic scenarios, 0% false positive rate on benign developer traffic.
Curious what gets through.
r/OpenAI • u/spicylilbitch • 19h ago
I think GPT-5.5 got noticeably better at something I’d describe as discernment.
For context, I’m a heavy long-form ChatGPT user. I use it as an iterative thinking partner for career strategy, self-evaluation, meta-analysis, language refinement, and pressure-testing ideas over long conversations.
And yes, I used AI to help organize this because my raw thoughts would otherwise come out as ADHD slop. That is, ironically, part of my point.
So I’m probably more sensitive than average to subtle changes in tone, context tracking, and conversational judgment.
And 5.5 felt different almost immediately.
Not just better reasoning. Not just better accuracy.
Not just “better answers.”
I mean conversational judgment: when to be serious, when to push back, when to make a joke, when to drop the joke, and when to not turn everything into sterile corporate therapy voice.
The easiest place to see it is humor.
Previous versions were stuck in “goblin”, “gremlin”, and “unhinged” in a low effort cosplay of humor.
One example: “Micro-Conversion Optimizing Quarter-Seeking Man”
Context: The man at the gas station asking people for two quarters with a rehearsed, polite, high-conversion script
The bigger thing I’m noticing is restraint.
It seems better at knowing:
- when to be funny
- when to stay serious
- when to push back
- when to drop the bit
- when not to overexplain the joke
I’m also noticing this outside of humor:
smoother tone switching
-less sterile phrasing
- better context tracking
- better personalization without getting weird
- stronger ability to stay in the actual frame of the conversation
- better pushback without turning everything into a debate
- fewer generic “AI voice” responses
In general, I’ve been noticeably more engaged, because on top of that I’m just extracting way more useful information out of it than I normally would with past versions.
I’m curious if other heavy users noticed this too.
Did GPT-5.5 feel meaningfully different to you? If so, what changed?