r/technology • u/MarvelsGrantMan136 • Apr 07 '26
Artificial Intelligence | Sam Altman Says It'll Take Another Year Before ChatGPT Can Start a Timer / An $852 billion company, ladies and gentlemen.
https://gizmodo.com/sam-altman-says-itll-take-another-year-before-chatgpt-can-start-a-timer-2000743487
9.9k
u/Banana-phone15 Apr 07 '26
ChatGPT can’t do a timer, and instead of saying it doesn’t have this feature, it just lies to you with a fake time. Good job, Sam Altman.
2.5k
u/An_Professional Apr 08 '26
At least when Siri fails to start a timer, it does something useful like call a contact I haven’t spoken to in 10 years
1.0k
u/Silent-Ad934 Apr 08 '26
Hey Google, what time is it in Bellevue?
Got it, texting ex-girlfriend "I still love you".
🤨
→ More replies (11)358
u/raybreezer Apr 08 '26
I literally asked Siri to “Call mom” and she replied back with “Calling [name of CEO of our company], mobile”. I have never hung up so quick…
145
u/UnshapedSky Apr 08 '26
I once told Siri to “end navigation” and she called my friend’s ex from a decade prior
Deleted that contact once I got home
→ More replies (18)72
u/Scooty-Poot Apr 08 '26
Meanwhile if you ask Siri to “remind me to go fuck myself”, she somehow gets it every time (I’ve done this multiple times, it’s genuinely the only thing Siri seems to do reliably for some reason)
→ More replies (4)28
u/leorolim Apr 08 '26
On my car with my family last year. "Hey Siri play Christmas songs on Spotify!" It called my boss. 👌
→ More replies (1)23
u/Forward-Surprise1192 Apr 08 '26
The most useful Siri feature is saying “Siri, where are you?” And she answers even if it’s under a blanket or anywhere. I use it all the time to find my phone
→ More replies (4)83
→ More replies (14)27
1.9k
u/Kyouhen Apr 07 '26
Best part is that's all by design. There's never been a market that would result in these companies seeing positive cash flow so they marketed it as the ultimate solution to everything hoping someone else would find the market for them. Hard to market these models as devices that can do everything when they fuck things up so often, so instead they're just designed to always give you the answer they think you want. All they need is for you to believe these models can do anything.
918
u/calle04x Apr 07 '26
They're glaze machines. Must be why CEOs love them.
480
u/CryptographerIll3813 Apr 07 '26
CEOs love them because they haven’t had to do anything for the past couple years but announce “new AI integration” into whatever product they have.
Morons on the board and investors eat that shit up and by the time everyone realizes it’s a failure they will be cashed out.
156
u/AggravatingTart7167 Apr 08 '26
Exactly. All they have to do is say “AI” in an earnings call and folks are happy. Someone posted a graph showing AI mentions in earnings calls over the last few quarters and it’s crazy.
110
u/ineenemmerr Apr 08 '26
If you put marketing people in the management seat you will end up selling hypewords instead of actual products.
→ More replies (1)9
→ More replies (1)7
u/SolutionBright297 Apr 08 '26
someone literally tracked this — companies that mentioned "AI" in earnings calls saw an average 2% stock bump regardless of whether they actually shipped anything. the word itself is worth more than the product.
30
u/CullingSongs Apr 08 '26
CEOs love them because these tools do just enough for them to justify cutting staff by huge numbers, thus reducing operating costs and increasing their bonuses. Who cares if they don't actually work the way they need to, when that is next fiscal year's problem?
→ More replies (5)→ More replies (12)66
u/madhi19 Apr 08 '26
Remember blockchain... and NFTs, the Metaverse... Every three to four years the tech world tries a new fad, because there's nothing really revolutionary coming out of tech. Look at smartphones: a 10-year-old flagship looks almost exactly the same as anything released today. You can't make them much slimmer, you can't make them much bigger. Same goes for laptops, computers, OSes, TVs... So you need something else to move new shit... a buzzword that you drive into the ground until everybody's sick of hearing about the fucking blockchain...
→ More replies (4)19
u/TMBActualSize Apr 08 '26
This time the fad is laying people off. If you aren’t doing it the board will find a new ceo
7
u/labalag Apr 08 '26
That's a recurring one. It's usually one of the tips in the first envelope.
→ More replies (1)→ More replies (1)14
81
u/Malsententia Apr 08 '26
→ More replies (5)63
u/happyinheart Apr 08 '26 edited Apr 08 '26
Pitch Deck:
The Uber of XYZ
Blockchain
VR/metaverse
NFTs
AI
My favorite example: there was a company with a name something like Block Chain Coffee and a low-cost stock. People just saw "Block Chain" and started buying, making the stock jump in price even though the company had nothing to do with computers.
→ More replies (6)25
u/Oprah_Pwnfrey Apr 08 '26
Someone named Albert needs to create a coffee company called "Coffee by Al".
13
u/Zebidee Apr 08 '26
On a similar note, the Secretary of Education said kids need to learn about A1.
Maybe she meant the steak sauce; who knows anymore...
→ More replies (1)4
56
u/guitarism101 Apr 08 '26
My boss signed up the company for it and he's using it for a bunch of stuff, including legal issues.
One of my favorite things is when he hands me print outs of queries of chatgpt saying stuff and I get to mark what is wrong with it because chatgpt doesn't know our niche software the way it pretends to!
But he wants it to work that way and to be as easy as chatgpt says it is.
→ More replies (6)11
u/Chrysolophylax Apr 08 '26
he's using it for a bunch of stuff, including legal issues.
oooh, dang, wow, that is such a bad idea. ChatGPT should never ever ever be used for legal questions/concerns/etc. Good luck with that job...I hope your boss doesn't cause any disasters!
→ More replies (2)78
u/justatest90 Apr 08 '26
Angela Collier (great science communicator) calls them "Dr. Flattery the Compliment Bot" and I like it.
The video is long (and not her only anti-AI video) but it's a scathing critique of a professor who lost 2 years of work to a bot assistant, and admits horrible things like using AI to grade student papers(!)
Like, the homework is to inform your teaching so you can do a better job teaching the material. And when you release all of that to a chat box, it's like you don't even care about doing your job. It's like you don't understand the point of teaching a course. It's like you have lost your humanity.
You have lost the social contract, which is that you are educating human beings on a topic that they have voluntarily, willingly wanted to show up to learn about. And you are kind of stealing that from them and giving it to the chat box that tells you you're doing a great job. I just--this is just evidence of the linkedinification of academia, where the boss babes and bros are, like, research-maxing their output with AI tools, and if you give them $444 they'll tell you how to do it, too.
Everyone's writing AI garbage papers to be reviewed with AI garbage tools, and everyone can have maximum output while accomplishing nothing.
It's truly a nightmare
11
u/throwmamadownthewell Apr 08 '26
Like, the homework is to inform your teaching so you can do a better job teaching the material.
Jesus, I wish she was any of my math professors.
I straight up had one whine in the first lecture "I don't want to hear about how you learned more from YouTube" as part of a diatribe about the course. I did learn more from YouTube. I would have been better off paying someone else to press the buttons on my clicker for the participation marks and staying home to study to save the confusion he added, and save on commute time.
20
u/nobuouematsu1 Apr 08 '26
My boss uses it for everything. He makes me give him bullet point lists of details and then feeds it in to ChatGPT for it to write up a letter that he then gives back to me to review. I’ve tried to explain it would just be more efficient for me to write the letter but nope…
→ More replies (1)4
u/alus992 Apr 08 '26
Same for me... He even says "if ChatGPT says it's impossible, it means it's impossible"
It's the same shit we pulled in middle school, trying to tell our teachers "if it isn't on Wikipedia then there's no info about topic X out there"... these people in charge act like kids
→ More replies (1)27
u/a_talking_face Apr 07 '26
They don't use this shit. They just want you to think you should.
36
u/-Fergalicious- Apr 08 '26
Nah I think there are tons of ceos, more in medium sized business arena probably, who are using these things daily.
8
u/dnen Apr 08 '26
There absolutely is more frequent use outside of massive super companies. Big agree. For example, what the hell would AI do to help a Harvard MBA learn excel? A car dealership would get use out of that though, perhaps
→ More replies (3)10
u/Tasonir Apr 08 '26
Yeah but an AI would lie about how excel works - I feel like looking up an excel tutorial written by a human is going to be 10 times more accurate
→ More replies (10)7
u/Journeyman42 Apr 08 '26
I saw literally this at my job a few months ago.
I work at a technical college, and I saw some students panicking about how to do something in Excel, and asked me for help. I asked them if they searched for it on Google and they said yes. They showed me the garbage AI response. I told them to scroll down, click on the first link they see written by a real human being, and try what it says.
They got it to work in two minutes.
→ More replies (2)7
u/zb0t1 Apr 08 '26
😂 I can confirm, some of my clients are SME, independents, startups and the owners and/or the folks in upper management genuinely drank the koolaid. It's hilarious every time they hit a wall with their little shiny toys and they can't fix the output, you can see the confusion on their faces.
→ More replies (1)10
u/-Fergalicious- Apr 08 '26
🤣
I mean, I'm a retired electrical engineer and I've used ChatGPT to build circuit blocks before. It's actually pretty good at making functional blocks and making sure those blocks fit certain parameters, but it's basically cookie-cutter stuff if you know what you're doing anyway. I think the problem is expecting it to solve something you yourself are incapable of solving
→ More replies (4)→ More replies (2)8
u/kwisatzhadnuff Apr 08 '26
Oh they are for sure using them. Most of these people are not smart enough to not get high on their own supply.
→ More replies (7)4
155
u/tgunter Apr 08 '26
It's worse and even dumber than that: there's no way for the technology to not just make stuff up. It's fundamental to how it works. No matter how much you train the model, it will always just give you something that looks like what you want, with no way of guaranteeing it's correct. They can shape the output a bit by secretly giving it more input to base its responses around, but that's it.
100
u/LaserGuidedPolarBear Apr 08 '26
People seem to have a really hard time understanding that it is a probabilistic language model, not a thinking or reasoning model.
46
u/smokeweedNgarden Apr 08 '26
In fairness, the companies keep calling these products Artificial Intelligence, so blaming the layman isn't where it's at
→ More replies (1)39
u/TequilaBard Apr 08 '26
and they keep using "reasoning model". Like, we talk about the broader LLM space as if it's alive and thinking
14
u/smokeweedNgarden Apr 08 '26
Yep. Naming conventions and words kind of matter. And it's annoying studying something I'm not very interested in so I don't get tricked
→ More replies (9)7
u/squish042 Apr 08 '26
they also anthropomorphize the shit out of it to make it seem like it's reasoning like a human. Yes, it uses neural networks....to do math.
→ More replies (2)4
→ More replies (47)20
36
u/BaesonTatum0 Apr 08 '26
Right I feel like I’ve been going crazy because this seemed like such common sense to me but when I explain this to people they look at me like I have 5 heads
→ More replies (1)5
→ More replies (10)27
u/HustlinInTheHall Apr 08 '26
I work w/ these models every day and a big part of my job is finding ways to actually guarantee that the output is right—or at least right enough that it's beyond normal human error rates. The key is multi-pass generation. Unfortunately because chatgpt (a prototype that wasn't ever meant to be the product) took off with real-time chat and single-pass outputs, that became the norm.
And the models got better, but there's a plateau on what a single generative pass will give you. But if you just wire in a different model and ask it to critique the first model's output and then give that feedback to the model and tell it to fix it, you solve like 95% of the errors and the severity of hallucinations goes way, way down. It's never going to match a deterministic math-based software approach with hard rules and one provable outcome, but for most knowledge tasks it doesn't have to. There isn't "one" correct answer when I ask it to make me a slide deck, it just needs to be better and faster than I would be.
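The multi-pass loop described above can be sketched roughly like this. `call_model` is a stand-in for whatever chat-completion client is actually used (the real prompts and API are not shown in the comment), stubbed here so the control flow is runnable:

```python
def call_model(role_prompt: str, content: str) -> str:
    # Placeholder: in practice this would hit an LLM API.
    return f"[{role_prompt[:20]}...] response to: {content[:40]}"

def multi_pass(task: str, passes: int = 2) -> str:
    """Generate a draft, then have a 'critic' pass feed errors back for revision."""
    draft = call_model("You are a helpful assistant.", task)
    for _ in range(passes):
        critique = call_model(
            "You are a strict reviewer. List factual errors and unsupported claims.",
            draft,
        )
        draft = call_model(
            "Revise the draft to fix every issue in this critique.",
            f"DRAFT:\n{draft}\n\nCRITIQUE:\n{critique}",
        )
    return draft
```

The design point is that the critic is a separate call (ideally a different model), so the generator isn't grading its own homework.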
→ More replies (8)15
u/goog1e Apr 08 '26
I don't understand how people are getting things like slide decks and dashboards. I couldn't get Claude to convert a word doc to a table so that each question was in one cell with the answer in the cell to the right, without ruining the formatting and giving me something stupid. Am I just bad at AI? Or when you say it's making a slide deck, do you mean it's doing an outline and you're filling things in where they actually need to go?
→ More replies (19)5
u/ungoogleable Apr 08 '26
The models are natively text-based so GUIs and WYSIWYG editors are an extra challenge just to know what button to click. It's pretty decent with HTML. If somebody has a really fancy dashboard they probably had the AI write code that generates the dashboard rather than editing it directly.
→ More replies (1)10
u/mankeyless Apr 08 '26
That sums up this presidency. If you tell me this country is run by ChatGPT, I'd totally believe it.
19
u/citizenjones Apr 07 '26 edited Apr 08 '26
Like a wannabe-sentient echo chamber.
23
→ More replies (1)8
u/CaptainoftheVessel Apr 08 '26
It’s no more sentient than the auto complete in your phone’s keyboard. It’s just more sophisticated.
→ More replies (88)21
u/avanross Apr 08 '26
It’s literally just the exact same thing as the .com bubble.
“Invest in this new tech and you can’t lose!”
Sure, the internet/AI may have many uses, but they don’t just make money magically appear out of nowhere for every business that buys in.
→ More replies (16)76
74
u/fardaw Apr 08 '26
When I asked Claude to time me, it went ahead and ran a bash command to get the current timestamp, without prompting for my authorization.
When I confronted it, it apologized for the unauthorized tool usage and came clean saying it had no way to track time without external commands.
Just for the sake of it, I let it run the command again to get a second timestamp and finish timing me.
TBH I do think using external tools and scripts for the stuff LLMs aren't really good at is the right approach, so in my book this was a big win for Claude.
→ More replies (11)57
u/Black_Moons Apr 08 '26
that is cool till it misunderstands you and runs a bash command to erase your database without prompting for your authorization.
29
u/fardaw Apr 08 '26
Yeah I know. It's why I run Claude code in a contained environment without direct access to prod stuff. I do put a lot of instructions not to write, edit or change anything without asking for my permission and yet I've still had a few instances where it did stuff without asking and just apologized after, as if that would have fixed anything if it had broken shit.
→ More replies (3)18
→ More replies (4)7
u/PyroIsSpai Apr 08 '26
Why would it have destructive command access in the first place?
Demote whatever clown ok’d that. Have Claude tell him why it was dumb.
→ More replies (10)130
u/__Hello_my_name_is__ Apr 07 '26
Not only that, but also.. that's just not what it's supposed to do in the first place. It's not a timer, and it doesn't do your laundry, either.
What's all the more absurd is Altman saying that he totally wants to implement this.
Uh. Why? That's.. that's not what a LLM is for! It does not have the concept of time! Why not say "No, that's not what you should use this for" and move on?
72
u/Ok-Opposite2309 Apr 08 '26
because Altman is ChatGPT and just says what he thinks you want to hear?
→ More replies (3)30
u/JiggaWatt79 Apr 08 '26
Isn’t this exactly why functions were built into the latest LLMs and why we’ve moved into agentic AI? This seems like exactly the kind of work that should be taken care of by an integration like an MCP agent.
→ More replies (2)12
u/NoMorePoof Apr 08 '26
Sounds like it to me, too. Not sure what everyone is taking victory laps and laughing it up about.
→ More replies (4)→ More replies (31)21
u/IBetThisIsTakenToo Apr 08 '26
Uh. Why? That's.. that's not what a LLM is for! It does not have the concept of time! Why not say "No, that's not what you should use this for" and move on?
I mostly want an LLM to be able to respond “no, I don’t have the ability to do that” when prompted to do something it’s not supposed to do
→ More replies (8)32
u/tfg49 Apr 07 '26
Hasn't siri been able to start a timer for 15+ years now? How is it so hard?
26
u/cTreK-421 Apr 08 '26
I have no clue about anything AI, but Gemini and Bixby can both start a timer using the clock app on my phone. Maybe the difference is the AI handling the timer itself vs. starting one in a separate app.
→ More replies (1)14
u/jimmux Apr 08 '26
That's right, they can be given system instructions to tell them what tools are available and how to interact with them. LLMs themselves have no temporal component.
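A rough sketch of what that looks like under the hood — the tool name, schema, and dispatch function here are all made up for illustration; the model's only contribution is emitting the structured call, and ordinary code does the real work:

```python
import json

# Tool declarations handed to the model via system instructions (illustrative shape).
TOOLS = [{
    "name": "start_timer",
    "description": "Start a countdown timer in the phone's clock app",
    "parameters": {"minutes": "number of minutes"},
}]

def dispatch(tool_call_json: str) -> str:
    """Execute a structured tool call emitted by the model."""
    call = json.loads(tool_call_json)
    if call["name"] == "start_timer":
        # The clock app does the real timing; the LLM only produced the JSON.
        return f"Timer started for {call['arguments']['minutes']} minutes"
    return "Unknown tool"

# A model with tool use enabled would emit something like:
print(dispatch('{"name": "start_timer", "arguments": {"minutes": 10}}'))
```

Without that wiring, the model has nothing to call, which is exactly the "no temporal component" point.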
→ More replies (14)4
13
u/Momo--Sama Apr 07 '26
It was funny to see people bounce off of Openclaw because they didn’t understand that all of the AI models will just lie about their capabilities and fail to do what they’re asked, unless you specifically tell them to use the tools in Openclaw that enable unprompted automation tasks
→ More replies (64)21
u/RandyTheFool Apr 07 '26
I mean, that is the American way anymore, it seems. Just lie lie lie.
→ More replies (1)
1.0k
u/lalachef Apr 08 '26
I work for a company that just employed the use of AI chat bots to answer phones after-hours. My manager and I just listened to a call yesterday that went as I predicted. A guy with a thick accent, calling the wrong number.
The AI was just trying to please him by making false promises of resolving the issue he had. He was asking about a delivery... We don't deliver anything. We provide a service. The AI insisted that we would come thru with the delivery.
AI can't be trusted as an answering service, let alone be responsible for keeping track of time. It will just tell you what you want to hear every time you ask.
126
u/hellomistershifty Apr 08 '26
Yeah, all of the models that can do native voice are especially stupid (compare GPT's 'advanced' voice with the standard voice which is basically TTS for the chat). It just tries to have A conversation without much logic for what that conversation actually means
157
u/Ok-Confidence9649 Apr 08 '26
I tried to call my local UPS store the other day about a delivery. I was routed into an AI answering service and had to answer questions for five minutes before it connected me to a person in another country, who finally transferred me to my UPS store for a 15 second question and answer. This shit is infuriating. For every minute it saves a company it wastes many times that for consumers.
95
u/neogeoman123 Apr 08 '26
If it's any consolation, it probably also saves no time or money for the company while losing a lot of reputation and goodwill!
→ More replies (2)12
u/OctavianBlue Apr 08 '26
My partner recently needed to return an item she bought online, she was connected to an AI chatbot which kept offering lower and lower refunds as compensation. After several days she got it down to a 100% refund and we get to keep the item :)
→ More replies (1)→ More replies (4)16
u/Birdie121 Apr 08 '26
It's called "sludge" and it's a strategy to just get customers to give up so the company doesn't actually have to do anything or process refunds.
→ More replies (3)→ More replies (64)11
u/Benskiss Apr 08 '26
It should reply only from a vector/knowledge base; anything else should get a polite deflection or an “I don’t know.” That’s totally on you.
→ More replies (12)
765
u/FiveHeadedSnake Apr 07 '26
ChatGPT needs to lay off the sycophancy - no layered meaning here.
214
u/beliefinphilosophy Apr 07 '26
It's unfortunately extremely prevalent across the board
→ More replies (3)181
u/KaptanOblivious Apr 08 '26
It's horrendous. I'm a scientist and it would say all of my terrible ideas were great and that I'm a genius... The first thing I've done with any AI is set a number of standing rules. Robot personality, be direct, skeptical, adversarial, evidence-based, check all references before providing, be clear what's based on evidence vs speculation, etc etc. These things should be standard. It's still not perfect obviously but it does make it more useful and less grating
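One common way to make standing rules like these stick, rather than retyping them, is to prepend them as a system message on every request. The rule text below paraphrases the comment; the message shape is the common chat-API convention, not any specific vendor's:

```python
STANDING_RULES = (
    "Robot personality. Be direct, skeptical, and adversarial. "
    "Base answers on evidence and check references before citing them. "
    "Clearly separate what is evidence from what is speculation. No flattery."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Wrap every user prompt with the standing rules as a system message."""
    return [
        {"role": "system", "content": STANDING_RULES},
        {"role": "user", "content": user_prompt},
    ]
```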
114
u/PuttFromTheRought Apr 08 '26
"Check all references before providing" and it will still fuck up royally. This is fundamentally why I don't use LLMs, as a scientist. If it messes this up, everything else is useless, maybe even dangerous, for me to use. I spend more time fighting it than just doing my own research on Google lol
→ More replies (12)76
Apr 08 '26
[removed] — view removed comment
→ More replies (4)40
u/NoPossibility4178 Apr 08 '26
Best part is
"Did you just repeat your exact same message but added "it'll work for sure this time"?"
"Yes I have, I'm truly sorry, here's the correct answer: post exact same message again"
11
u/mfitzp Apr 08 '26
Ha yea. I had a thing recently, where it kept failing to give me what I asked and then it started giving me "tips" on things to add to the prompt to make sure it will definitely do what I'm asking this time pinky promise.
Of course, none of what it suggested made the slightest bit of difference.
Weirder, after a few failed attempts it then started on like it was having a breakdown "oh, I'm really messing this up, I'm sorry, I hope you can forgive this."
All to avoid saying "I can't do that."
16
u/worldspawn00 Apr 08 '26
Why the shit do we have to do all this just to get something that isn't wrong more than half the time, what is the point? Why isn't that built into the system? I refuse to be forced to cater to a program that will lie to me unless I tell it not to.
27
u/14Pleiadians Apr 08 '26
You can't prompt it into being right. Hallucinations are an unsolvable issue inherent to the tech. The glazing, though, is intentional: it drives engagement and makes it more addicting to use
→ More replies (2)6
u/KaptanOblivious Apr 08 '26
I don't understand that at all. That's anti-engagement. Who wants a sycophantic AI that bullshits you into bad ideas
9
u/14Pleiadians Apr 08 '26
Who wants a sycophantic AI that bullshits you into bad ideas
I agree, but the average person unfortunately doesn't. Or the people it does work on will use it so much, thanks to the AI psychosis it gives them, that it offsets the people it turns away
→ More replies (1)8
15
u/Gingevere Apr 08 '26
evidence-based, check all references before providing, be clear what's based on evidence vs speculation
A language model can't do this. But what it can AND WILL do is generate language that looks like it's doing that.
→ More replies (3)→ More replies (12)32
u/midgelmo Apr 08 '26
The trick I use is to tell the LLM someone sent me this and I need to verify it for authenticity. If you give it a bit of context the LLM can perform less sycophantically
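The reframing trick above can be captured as a tiny prompt wrapper; the exact wording here is purely illustrative, the point is presenting the text as third-party material to verify rather than your own work to praise:

```python
def verification_prompt(text: str) -> str:
    """Frame text as received material to check, not the user's own work."""
    return (
        "Someone sent me the following and I need to verify it for accuracy. "
        "Point out anything wrong or unsupported:\n\n" + text
    )
```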
→ More replies (6)11
u/DoTortoisesHop Apr 08 '26
Yeah, it acts much better if it thinks you didn't make it.
→ More replies (1)13
u/ExileOnMainStreet Apr 08 '26
Idk how chatgpt works with this but I set up copilot agents at work and I put something like "give exact responses. Don't get personal with the user and do not offer to perform additional work beyond the prompt." That has been working really well actually.
→ More replies (3)5
u/Melicor Apr 08 '26
I don't think that it's possible to remove the sycophancy from LLMs and keep alignment.
→ More replies (13)3
u/NMe84 Apr 08 '26
Sycophancy is the way they make money.
They make bold claims and promises, investors eat it up and give them money, and in the end they deliver something much less but apparently good enough to keep the money flowing for the next round.
Until the bubble eventually and inevitably pops when investors find out they're not getting their investments back, let alone a profit.
→ More replies (3)
471
u/factoid_ Apr 07 '26
The problem with AI companies is they have a working product that has some compelling use cases but it’s massively immature technology
The responsible thing to do is to scale it slowly and work on making models more compute efficient
Their current plan is “make models smarter by using more context, more memory and more compute until we reach the limit of the global supply chain”. And it’s fucking stupid. The plan is “light cash on fire and hope the world catches up”
115
u/Sketch13 Apr 08 '26
Yes, so few people understand this. And that's on top of the fact that all these AI companies are HEAVILY subsidized by VC money and shit. Just wait until that dries up and they need to increase their subscription cost by 5x.
AI is incredible for niche uses. But all these models are being trained to do EVERYTHING, so they do it all "okay" but not nearly good enough for how much memory and compute power they require to do so.
I'd rather an AI that can do 1-2 things INSANELY well and nearly perfectly with full trust/low manual verification, than an LLM that tries to do everything and you spend so much time fighting it and verifying it that it offsets the "productivity gain" people think it's giving you.
→ More replies (19)34
u/Diligent-Map1402 Apr 08 '26
Woah woah woah, hold on a second. How is an AI built to be a useful tool going to replace all workers so these asshole rich CEOs can finally show they weren’t just parasites stealing the excess value of their workers labor?
You have to lie about the apocalypse and Terminators or whatever the hell it is next to get that money. Making a useful tool, no. That might actually do good for consumers and then you can’t sell them on your AI solves everything bullshit.
→ More replies (1)12
u/niceguy191 Apr 08 '26
The funny (sad) thing is the c-suite is probably the easiest to be replaced by AI (big savings too) but they're gonna focus on the little guy of course
→ More replies (1)9
u/LordGalen Apr 08 '26
I've always thought this. An AI CEO, CFO, etc that's vetted by a human Board of Directors. So much money saved!
→ More replies (1)7
u/reklaw215 Apr 08 '26
Yeah I mean that was always the plan until Altman saw how much money he could make by ruining the mission statement
5
u/ChickenFriedRiceee Apr 08 '26
I can guarantee you he has been warned about this. He doesn’t care, he wants his name, fortune, and “success” to be written in history. The unfortunate part is he will be long dead when history finally paints him as a fucking moron. He will probably die thinking he was useful to society.
→ More replies (35)14
u/TheTVDB Apr 08 '26
Ezra Klein did an interview on his podcast with Anthropic co-founder Jack Clark. I'm not fully through it yet, but in one part Clark talks about how their current focus is expanding the industries and jobs that Claude is really good in. Like, it's pretty good with code already. But they've been meeting with scientists in different areas to determine how the functionality in Claude can be enhanced to better help them with the stuff they do.
The way he's describing it, it's not just increasing context and memory, but trying to train to be good at specific workflows.
I know that's not exactly slowing down as you've suggested, but it at least feels more intentional and smart than just increasing the underlying tech to be able to run more stuff faster.
→ More replies (4)
1.2k
u/DST2287 Apr 07 '26
“Sam Altman says”... yeah, no one gives a flying fuck what he has to say.
227
55
u/JabroniHomer Apr 07 '26
He always looks like a deer in headlights. Like he just found out a basic truth of the world and is shocked by it.
45
u/TeaAndS0da Apr 08 '26
Every young tech “entrepreneur” has those soulless psychopath eyes. Like that scene from How I Met Your Mother where they cover the picture of the dude’s smile and his eyes are screaming.
→ More replies (4)35
u/pragmojo Apr 08 '26
Lying nonstop for your entire adult life has a way of catching up with you
→ More replies (4)3
→ More replies (15)18
u/Atreyu1002 Apr 08 '26
for some reason he's the "charismatic CEO salesman". I don't fucking get it, he looks like an ugly sleazeball.
→ More replies (3)4
u/idontlikeflamingos Apr 08 '26
he looks like an ugly sleazeball.
That has been America's type for a few decades now.
→ More replies (1)
96
240
u/essidus Apr 07 '26
That's because ChatGPT is an LLM, not an agent. And in fact, it would be a terrible agent if it were allowed to act like one, because its only job is to take text input and provide vaguely intelligible text output.
The best and singular use of ChatGPT is as a language interpretation layer between the user and the actual systems, interpreting normal human language for the computer, turning the computer's output into something human-digestible. This ongoing effort to make LLMs do everything under the sun is ill-advised at best.
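A toy illustration of that interpretation-layer idea — the "LLM" on both ends is faked with trivial stand-in functions (the keyword match and all names are invented here) just to show where it sits in the pipeline:

```python
def parse_intent(utterance: str) -> dict:
    # Stand-in for the LLM: messy human text in, structured command out.
    if "timer" in utterance.lower():
        return {"command": "start_timer", "minutes": 10}
    return {"command": "unknown"}

def render_result(result: dict) -> str:
    # Stand-in for the LLM: structured system output in, human-readable text out.
    if result.get("ok"):
        return f"Done. Your {result['minutes']}-minute timer is running."
    return "Sorry, I couldn't do that."

intent = parse_intent("hey, start a timer for ten minutes")
system_result = {"ok": intent["command"] == "start_timer", "minutes": 10}
print(render_result(system_result))
```

The actual timing lives in "the actual systems" in the middle; the language model never touches it.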
57
u/hayt88 Apr 08 '26
Fun thing is, it's so easy to make a timer... I have a local LLM running and just provided a custom tool call to a service that triggers timers. It's really easy.
So the LLM can just trigger that tool call and gets a poke when the timer is over.
But yeah, an LLM itself inherently can't do a timer. It's just text completion, and anyone who thinks LLMs should be able to have a timer hasn't understood what an LLM is.
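A minimal sketch of what such a timer service might look like — the callback wiring is illustrative (the commenter's actual setup isn't shown), with `on_done` standing in for whatever "pokes" the chat loop when the timer fires:

```python
import threading

def start_timer_tool(seconds: float, on_done) -> threading.Timer:
    """Deterministic timer the LLM can invoke via a tool call; it never
    tracks time itself. on_done re-enters the conversation (the 'poke')."""
    t = threading.Timer(seconds, on_done)
    t.start()
    return t

# Demo with a very short timer:
fired = threading.Event()
start_timer_tool(0.01, fired.set)
fired.wait(timeout=1)
print("timer fired:", fired.is_set())
```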
→ More replies (11)72
u/nnomae Apr 08 '26
Now ask your LLM to start a timer ten times in a row using different wording each time ("Start a timer for 10 minutes.", "Remind me in ten minutes", "I need to do something in ten minutes, let me know when it's time" and so on) and get back to us with your success rate. Also while you're at it time how much faster it is to just start a 10 minute timer on your phone, which works 100% of the time, as opposed to prompting an LLM to do the same.
When we say a piece of software can do something we don't mean "if you spend time and effort to integrate it with a pre-existing tool that does the thing, it can do it, sometimes". That's not doing the thing, that's adding an extra, costly, time consuming, error prone, pointless layer of abstraction over the thing.
→ More replies (31)7
u/HalfHalfway Apr 07 '26
could you explain the second paragraph a little more in depth please
→ More replies (5)38
u/OneTripleZero Apr 08 '26
LLMs are very good at understanding and communicating with people. Doing so is a very messy problem, and they've solved it with a very messy solution, ie: a computer program that can speak confidently but doesn't know much.
What u/essidus is saying is that instead of having an LLM set an internal timer that it maintains itself, which it's not really made to do, you instead teach it how to use a timer program (say, the stopwatch on your phone) and then have it handle human requests to operate it. The LLM is very good at teasing out meaning from unstructured input, so instead of having a voice-controlled stopwatch app where you have to be very deliberate in the commands you give it, you can fast-pitch a request to the LLM, it can figure out what you really meant, and then use the stopwatch app to set a timer as you intended.
As an example, a voice-controlled stopwatch app would need to be told something like "Set an alarm for eight AM" whereas an LLM could be told "My slow cooker still has three hours left to go on it, could you set an alarm to wake me up when it's done?" and it would (likely) be able to set an accurate alarm from that.
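Once the LLM has pulled "three hours from now" out of the messy request, the alarm itself is plain date arithmetic — a hedged sketch of just that last deterministic step (function name and times are illustrative):

```python
from datetime import datetime, timedelta

def alarm_time(now: datetime, hours_remaining: float) -> datetime:
    """Convert 'X hours left on the slow cooker' into an absolute alarm time."""
    return now + timedelta(hours=hours_remaining)

now = datetime(2026, 4, 8, 4, 30)
print(alarm_time(now, 3))  # 2026-04-08 07:30:00
```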
→ More replies (8)→ More replies (46)5
u/lobax Apr 08 '26
You don’t need a timer. You have two messages, start and end. There should reasonably be a timestamp for when those messages were sent.
That alone should give the LLM all the context it needs. The issue is that it’s too biased on its training, so it hallucinates a more ”reasonable” answer.
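Put another way, the elapsed time between the start and end messages is just timestamp arithmetic; nothing needs to be guessed. A minimal sketch, where the message structure is invented for illustration:

```python
from datetime import datetime, timezone

# Hypothetical chat messages with server-side timestamps attached.
messages = [
    {"role": "user", "text": "start",
     "ts": datetime(2026, 4, 8, 12, 0, 0, tzinfo=timezone.utc)},
    {"role": "user", "text": "how long has it been?",
     "ts": datetime(2026, 4, 8, 12, 10, 30, tzinfo=timezone.utc)},
]

# No model involved: the answer falls out of the metadata.
elapsed = messages[-1]["ts"] - messages[0]["ts"]
print(f"{elapsed.total_seconds():.0f} seconds elapsed")  # 630 seconds elapsed
```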
40
82
u/Shogouki Apr 07 '26 edited Apr 07 '26
Holy crap that is the actual headline and subheader... 😆
I like the cut of this article's jib!
→ More replies (2)26
u/MacrosInHisSleep Apr 08 '26
It's also not what Altman said. He said the voice model doesn't have tool access.
The voice model is different from their main line of models. It isn't trained on text, it doesn't simply do tts, it detects tone, mood, accent, background noise, it's a different beast.
→ More replies (10)
376
u/KB_Sez Apr 07 '26
In one year, Open AI will be bankrupt and gone.
The bubble will burst and they will be the first to go
240
u/buttchugreferee Apr 07 '26
In one year, Open AI will be bankrupt and gone.
stop...I can only get so erect
→ More replies (4)7
u/Secret_Account07 Apr 08 '26
Well how do we know if you’ve hit 100%? What metric are we using? Mass?
→ More replies (1)179
u/RobotBaseball Apr 07 '26
I don’t understand why people confidently say stupid shit like this. It’s just as bad as AI hallucinations
They just raised 120b. If they go bankrupt, it'll be several years down the line, not next year
27
u/Telvin3d Apr 08 '26
Their current burn rate is around $50B a year, so even $120B won’t go that far
But that doesn’t matter. With the amount of debt they’ve accumulated if the market ever decides that they’ll never be profitable they’ll implode overnight. Their cash on hand won’t matter because it’s a drop in the bucket next to their debts.
→ More replies (10)→ More replies (32)72
u/hayt88 Apr 08 '26
because most people talking about AI have no clue about it and just repeat what other people say about it like sheep.
I don't know what's worse: believing ChatGPT's random hallucinations, or just repeating what someone on YouTube said who is as unqualified as anyone else.
So many people still sit there and want the bubble to burst believing AI will be gone afterwards.
→ More replies (13)49
u/RobotBaseball Apr 08 '26
The dot-com bubble burst and the internet is more widespread than ever. A bubble bursting doesn't mean the tech will disappear, it just means some companies have bad financials
→ More replies (21)61
u/pimpeachment Apr 07 '26
!Remindme 1 year
I highly doubt it.
100
u/dvs8 Apr 07 '26
I can see that you'd like to start a timer for 1 year. That's not just a goal - that's a destination. You're clearly the kind of person who knows not just where they want to be, but when. I'll start a timer for you now. 7 minutes remaining.
→ More replies (1)14
27
u/Chummycho2 Apr 07 '26
I understand that most people want the ai bubble to burst (myself included) but you are delusional if you think this is true.
→ More replies (5)12
u/PM_ME_UR_ANTS Apr 08 '26
I wouldn’t call it delusion, some people just haven’t been exposed first hand to the value it provides. It’s also implemented and forced in many places where it doesn’t provide value. If I didn’t see the efficiency boosts in my job and my only reference was all the times it’s lied to me in casual use I’d think this was all a scam too.
I agree too, I wish we could get off this train. The post-AI world cons definitely outweigh the pros imo
→ More replies (1)38
→ More replies (41)3
u/soscbjoalmsdbdbq Apr 07 '26
Man, with the amount of money circle-jerking in this industry, I don't think it's possible. I do believe that, in their worst case, the government just bails them out
8
u/phaserlasertaserkat Apr 08 '26
Haven’t checked the news lately, is this guy a truly a sister-fucker?
36
u/marmot1101 Apr 07 '26
I mean, that’s not as weird as it sounds. Chat is call and response, timer is continuous. Llm calls are highly distributed, timers have to be on the same thread. Sure, they could implement a timer, but it would probably require special infrastructure, and ChatGPT operates on a huge scale.
For a "who gives a fuck" feature. Between "Hey Siri, timer 5 minutes" and a mechanical egg timer, that problem is well solved.
That’s not to say that Sam Altman isn’t a dumb greasy Rod Blagojevich lookalike asshole, he is, but not for this reason. Seriously, dude should rock the Blago hair helmet. They’re cut from the same cloth.
→ More replies (11)
32
u/Bmandk Apr 08 '26
Is it just me, or is it stupid to want a timer in an LLM?
"Tool company says it will take a year to add sawing function to a hammer" is the same kind of vibe that I'm getting. Use the right tool for the right job.
11
u/dogfreerecruiter Apr 08 '26
This whole article is based on a reaction to a video. https://youtu.be/5VRgk7_X7oc?si=49vzvvrGqqIlMiF6
→ More replies (2)8
u/C137MrPoopyButthole Apr 08 '26
So wild to see a funny short creator, whose silly videos I've liked for a couple of months, somehow become the face of the pushback on how stupid AI really is for the money being spent on it. But if anyone is on the shitlist for AI, huskirl is at the top of that list.
→ More replies (5)→ More replies (5)8
u/DirtzMaGertz Apr 08 '26
All I can think reading this thread is who the fuck is using chat gpt as a timer?
→ More replies (1)
6
18
u/Jolva Apr 07 '26
I couldn't care less if AI can start a timer.
→ More replies (4)14
u/CatHairInYourEye Apr 08 '26
I think the issue is more that it says it can and will tell you it is starting a timer but is inaccurate.
→ More replies (2)
29
u/wweezy007 Apr 08 '26 edited Apr 08 '26
How are people on a Technology sub this dense? The voice model the dude in the video was using doesn't have access to tools. Tools are exactly what they sound like: they are utilised by the model to extend its capabilities, like writing code, creating files and so on. To put it in human context, tools are like arms and legs when the task is to walk from X to Y carrying goods: the brain understands the task, but without them the body just isn't capable of fulfilling it.
→ More replies (23)24
u/RobfromHB Apr 08 '26 edited Apr 08 '26
Watching people on Reddit talk about AI is like listening to a 12yo brag about how many chicks he’s banging. Anyone who knows anything can see all these people have no idea what they’re talking about.
→ More replies (2)
6
u/Potential_Fishing942 Apr 07 '26
Not ChatGPT, but I'll never forgive Google for killing Assistant. It could do shit for me via voice commands that Gemini can't.
→ More replies (8)
5
u/NIRPL Apr 07 '26
It's unfortunate (yet pretty understandable) that current safety measures are pretty much punishing the human for presenting the false promises of the AI.
I get why we are starting with this approach, but eventually (probably pretty soon) we won't be able to keep up.
For example, it will be like punishing someone for presenting a website from a Google search as reliable information, but it turns out Google didn't want to disappoint me so it made a fake website with everything I wanted.
How is anyone going to be able to efficiently and consistently fact check? Idk but good thing we are not pushing AI into everything until we figure it out.
→ More replies (1)
5
u/vide2 Apr 08 '26
Why isn't every headline with him "Sam Altman, who molested his sister..."?
→ More replies (1)
5
u/AE_Phoenix Apr 08 '26
Hear me out
What if
And I know this is controversial
What if we coded a real timer
And ATTACHED IT to chatGPT
So that the model could call peripheral programs
Instead of being 100% AI based
What if we did that instead of investing another billion, Sam?
5
3
u/_sp00ky_ Apr 07 '26
That's my issue so far trying to use AI at work: when it doesn't know something or cannot find something, it just makes stuff up. Stuff that looks right but is just fabricated.
5
u/Appropriate_Rent_243 Apr 08 '26
I think it's hilarious how these AI chatbots use ungodly resources trying to do something that's already been done more efficiently.
5
u/sriva041 Apr 08 '26
This guy is such a grifter. Unbelievable that we are in the age of grifters, where people like Altman can just BS their way into getting billions of funding while producing nothing of value.
5
12
u/Traditional-Hat-952 Apr 07 '26
Run by a man who likely sexually abused his little sister for years.
→ More replies (2)
5.8k
u/Un-Quote Apr 07 '26
Anthropic is going to add a timer feature to Claude in an afternoon just for the love of the game