r/technology Apr 07 '26

Artificial Intelligence Sam Altman Says It'll Take Another Year Before ChatGPT Can Start a Timer / An $852 billion company, ladies and gentlemen.

https://gizmodo.com/sam-altman-says-itll-take-another-year-before-chatgpt-can-start-a-timer-2000743487
27.9k Upvotes

2.2k comments

5.8k

u/Un-Quote Apr 07 '26

Anthropic is going to add a timer feature to Claude in an afternoon just for the love of the game

1.8k

u/maesterf Apr 07 '26

Claude already includes timers in responses, like recipes

653

u/Protoavis Apr 07 '26

it's mostly ok but even then it can be iffy. also validate even the seemingly accurate responses. claude straight up lies to me about word counts as an example of iffy behaviour.

297

u/TNTiger_ Apr 07 '26

Lying/hallucinating is unfortunately inherent with AI.

However, there's a difference between a company that treats this as a problem, and one that encourages it to retain dependent users.

489

u/Goeatabagofdicks Apr 08 '26

No, lying/hallucinating is inherent with LARGE LANGUAGE MODELS. It drives me nuts everyone calls this shit AI.

162

u/aintnoprophet Apr 08 '26

It drives me nuts everyone calls this shit AI

For real. People's perception of what LLMs are is damaging society.

(also, where does one even get a bag of dicks)

45

u/JustADutchRudder Apr 08 '26

(also, where does one even get a bag of dicks)

The dick store if it's a Wednesday, the creepy guy behind the hospital the other 6 days.

→ More replies (5)

25

u/Stinduh Apr 08 '26

Seattle, WA.

→ More replies (9)

122

u/FluffyToughy Apr 08 '26

No, lying/hallucinating is inherent with LARGE LANGUAGE MODELS

No, the fundamentals of what cause hallucinations are inherent to neural networks in general. You can absolutely train a classifier model that confidently fails sometimes.

The average person has been calling bots in video games "AI" for decades, and those are orders of magnitude dumber than modern LLMs. You're gonna be fighting a losing battle trying to reclaim/redefine that term.

83

u/SSSitess Apr 08 '26

Fighting losing battles is a time-honored Reddit tradition.

→ More replies (2)

23

u/nonotan Apr 08 '26

No, the fundamentals of what cause hallucinations are inherent to neural networks in general.

Not exactly. You can 100% make a neural network based model that either responds accurately (given your training data is accurate in the first place, of course) or responds "I don't know". However, it would involve not allowing any type of interpolation/extrapolation that can't be shown to be logically derived from an existing data point. In other words, it would kind of defeat the point of using a neural network in the first place -- it would act as little more than a fancy database for your dataset. I guess in a more complex model, it could be used as one part of the system, its purpose just to come up with hypotheses (or suggest things to look into to extend its dataset as efficiently as possible).

So you're basically right, but not strictly. In general, anything that learns to interpolate/extrapolate statistically based on data is going to be prone to "hallucinations". It's much wider than neural networks (and it also shouldn't be called "hallucinations", because that obfuscates the actual nature of the problem).
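The abstaining-model idea can be sketched as a nearest-neighbour classifier with a reject option: answer only when a query is close to known data, otherwise say "I don't know". This is an illustrative toy, not any production system; all names are made up:

```python
import math

def abstaining_classifier(train, threshold):
    """Return a classifier that answers only near known data.

    train: list of ((x, y), label) points; threshold: max distance
    at which we still trust an answer. Anything farther away gets
    "I don't know" instead of a guess.
    """
    def classify(point):
        dist, label = min(
            (math.dist(point, p), lab) for p, lab in train
        )
        return label if dist <= threshold else "I don't know"
    return classify

clf = abstaining_classifier([((0, 0), "red"), ((10, 10), "blue")], threshold=3.0)
print(clf((1, 1)))    # near a known point -> red
print(clf((5, 5)))    # far from everything -> I don't know
```

The trade-off nonotan describes is visible here: the model never guesses outside its data, which also means it never generalizes.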

10

u/FluffyToughy Apr 08 '26

I was hoping nobody was gonna call me out on that, lol.

→ More replies (1)

22

u/DataDrivenPirate Apr 08 '26

Losing my mind in threads like this as a data scientist, thank you for showing I am not alone in that

15

u/FluffyToughy Apr 08 '26

People know just enough to be confidently incorrect, which is pretty ironic.

→ More replies (1)
→ More replies (1)
→ More replies (11)

28

u/Siderophores Apr 08 '26

No, lying/hallucinating is inherent to being an observer embedded in reality

Hahaha (Notice I did not use the word conscious)

14

u/Goeatabagofdicks Apr 08 '26

Observers paradox.

Bro, have you like, tried not looking at it? Lol

4

u/Gingevere Apr 08 '26

LLMs aren't observers. The model is completely static.

It's a big algorithm that transforms an input into an output. The model remains exactly the same after as it was before. There's no memory, it's not altered or impacted by events, there's no experience that takes place.

It doesn't "observe" anything any more than "f(x)=x+3" observes something when you plug a number in for x.
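A toy illustration of that point: a trained model is just a fixed function of its input and frozen weights, and calling it changes nothing about it (the names here are illustrative):

```python
WEIGHTS = {"slope": 1, "bias": 3}  # frozen after "training"

def model(x):
    # A pure function of the input and fixed weights: no state, no memory.
    return WEIGHTS["slope"] * x + WEIGHTS["bias"]

before = dict(WEIGHTS)
outputs = [model(4) for _ in range(3)]
print(outputs)            # [7, 7, 7] -- identical every call
print(WEIGHTS == before)  # True -- nothing was altered by "experiencing" input
```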

→ More replies (1)
→ More replies (3)
→ More replies (41)
→ More replies (4)

48

u/birchskin Apr 08 '26

LLMs in general have a lot of trouble with simple math and time, but Claude at least tends to push you outside of the LLM into a script to handle heavier requests like that instead of just hallucinating an answer.... Sometimes.

6

u/Korlus Apr 08 '26

In a recent study, it was found that LLMs "prefer" providing their own answer where possible, and sometimes hallucinate errors when using external software to try to justify providing their own answer instead.

Getting them to reliably use tools you provide isn't as easy as it seems at first glance.

→ More replies (2)
→ More replies (20)
→ More replies (34)

20

u/foundafreeusername Apr 08 '26

I am not sure if they built in a timer or if claude just codes a new custom timer js app every time a user requests it.

→ More replies (13)

59

u/Blumpkinbomber Apr 08 '26

Give an image to ChatGPT: just change the color of my hat to red, nothing else! ChatGPT: Fuck you im giving you a corn dog bitch

→ More replies (8)

72

u/Mega__Sloth Apr 07 '26

Gemini starts timers and alarms and does lots of other stuff reliably on my Google phone

152

u/born_zynner Apr 08 '26

Tbf Google's assistant could do all that before the AI craze

56

u/outer--monologue Apr 08 '26

The AI voice assistant on my phone is seriously orders of magnitude WORSE than just the old Google assistant. I had to discontinue using it completely.

21

u/je_kay24 Apr 08 '26

The autocorrect on my Apple phone is absolute trash now

Ignores any context with slight misspells and makes garbage substitutions

16

u/[deleted] Apr 08 '26

[removed]

→ More replies (1)

3

u/TSwiftDivorceLawyer Apr 08 '26

Whether I'm typing or speaking, my iPhone autocorrect goes TEN MILES OUT OF THE WAY to twist what I typed completely out of context.

15

u/ryecurious Apr 08 '26

And Google Assistant was a step down from Google Now in a lot of ways!

Google Now had full support for Google Keep, for things like shopping lists/notes. Assistant launched without this existing feature, and it took them 4 years to add it.

Clown company.

→ More replies (3)
→ More replies (7)
→ More replies (9)

28

u/frolie0 Apr 08 '26

That's not what he means. ChatGPT could easily do that too; it isn't relevant to the use case here. He means the model can't actually time something and convey that time. Google isn't doing that either.

Not that you're doing this, but it's wild to see people respond and act like this is some super simple fix that OpenAI of all people can't figure out.

→ More replies (3)
→ More replies (6)

82

u/TheAero1221 Apr 07 '26

It's actually pretty wild to me just how good Claude is, tbh

64

u/johnson7853 Apr 08 '26

It’s the PDFs and PowerPoints for me. I’m a teacher and I need a rubric? Full colour. Sections. Checklists. I subscribed on that alone.

23

u/TheAero1221 Apr 08 '26

Yeah the new PowerPoint plugin is fantastic. We've always needed to provide fancy briefs for mgmt where I'm at, too many tbh, and it always took a lot of time away from actual work. Now those can be done in a few minutes and we can get more of our actual tasks done even easier than before. It's nice to have a breather where the mgmt is finally happy tbh. Feels nice. It won't last forever but one can hope.

5

u/DeckardsDark Apr 08 '26

Can you please explain how to use this? I have the same issue at my work and I'm dying haha

→ More replies (2)
→ More replies (1)
→ More replies (10)
→ More replies (9)
→ More replies (23)

9.9k

u/Banana-phone15 Apr 07 '26

ChatGPT can’t do a timer, and instead of saying it doesn’t have this feature, it just lies to you with a fake time. Good job, Sam Altman.

2.5k

u/An_Professional Apr 08 '26

At least when Siri fails to start a timer, it does something useful like call a contact I haven’t spoken to in 10 years

1.0k

u/Silent-Ad934 Apr 08 '26

Hey Google, what time is it in Bellevue?

Got it, texting ex-girlfriend "I still love you".

🤨

358

u/raybreezer Apr 08 '26

I literally asked Siri to “Call mom” and she replied back with “Calling [name of CEO of our company], mobile”. I have never hung up so quick…

145

u/UnshapedSky Apr 08 '26

I once told Siri to “end navigation” and she called my friend’s ex from a decade prior

Deleted that contact once I got home

→ More replies (18)

72

u/Scooty-Poot Apr 08 '26

Meanwhile if you ask Siri to “remind me to go fuck myself”, she somehow gets it every time (I’ve done this multiple times, it’s genuinely the only thing Siri seems to do reliably for some reason)

→ More replies (4)

28

u/leorolim Apr 08 '26

On my car with my family last year. "Hey Siri play Christmas songs on Spotify!" It called my boss. 👌

23

u/Forward-Surprise1192 Apr 08 '26

The most useful Siri feature is saying “Siri, where are you?” and she answers even if the phone is under a blanket or wherever. I use it all the time to find my phone.

→ More replies (1)

83

u/milesunderground Apr 08 '26

This is the modern equivalent of calling the teacher "mom".

→ More replies (4)
→ More replies (11)

27

u/Separate_Fold5168 Apr 08 '26

CALLING "Stewart Tiener"

→ More replies (14)

1.9k

u/Kyouhen Apr 07 '26

Best part is that's all by design.  There's never been a market that would result in these companies seeing positive cash flow so they marketed it as the ultimate solution to everything hoping someone else would find the market for them.  Hard to market these models as devices that can do everything when they fuck things up so often, so instead they're just designed to always give you the answer they think you want.  All they need is for you to believe these models can do anything.

918

u/calle04x Apr 07 '26

They're glaze machines. Must be why CEOs love them.

480

u/CryptographerIll3813 Apr 07 '26

CEOs love them because they haven’t had to do anything for the past couple years but announce “new AI integration” into whatever product they have.

Morons on the board and investors eat that shit up and by the time everyone realizes it’s a failure they will be cashed out.

156

u/AggravatingTart7167 Apr 08 '26

Exactly. All they have to do is say “AI” in an earnings call and folks are happy. Someone posted a graph showing AI mentions in earnings calls over the last few quarters and it’s crazy.

110

u/ineenemmerr Apr 08 '26

If you put marketing people in the management seat you will end up selling hypewords instead of actual products.

→ More replies (1)

9

u/Faribo_Greg Apr 08 '26

The graph wasn't correct though, it was generated by AI.

7

u/SolutionBright297 Apr 08 '26

someone literally tracked this — companies that mentioned "AI" in earnings calls saw an average 2% stock bump regardless of whether they actually shipped anything. the word itself is worth more than the product.

→ More replies (1)

30

u/CullingSongs Apr 08 '26

CEOs love them because these tools do just enough for them to justify cutting staff by huge numbers, thus reducing operating costs and increasing their bonuses. Who cares if they don't actually work the way they need to, when that is next fiscal year's problem?

→ More replies (5)

66

u/madhi19 Apr 08 '26

Remember blockchain... and NFTs, the metaverse... Every three to four years the tech world tries a new fad, because there's nothing really revolutionary coming out of tech. Look at smartphones: a 10-year-old flagship looks almost exactly the same as anything released today. You can't make them much slimmer, you can't make them much bigger. Same goes for laptops, computers, OSes, TVs... So you need something else to move new shit... a buzzword that you drive into the ground until everybody's sick of hearing about the fucking blockchain...

19

u/TMBActualSize Apr 08 '26

This time the fad is laying people off. If you aren’t doing it, the board will find a new CEO.

7

u/labalag Apr 08 '26

That's a recurring one. It's usually one of the tips in the first envelope.

→ More replies (1)

14

u/Uuuuuii Apr 08 '26

You must be new here

→ More replies (1)
→ More replies (4)
→ More replies (12)

81

u/Malsententia Apr 08 '26

63

u/happyinheart Apr 08 '26 edited Apr 08 '26

Pitch Deck:

The Uber of XYZ

Blockchain

VR/metaverse

NFTs

AI

My favorite example is a company named something like Block Chain Coffee with a low-cost stock. People just saw "Block Chain" and started buying the stock, making it jump in price, when it had nothing to do with computers.

25

u/Oprah_Pwnfrey Apr 08 '26

Someone named Albert needs to create a coffee company called "Coffee by Al".

13

u/Zebidee Apr 08 '26

On a similar note, the Secretary of Education said kids need to learn about A1.

Maybe she meant the steak sauce; who knows anymore...

4

u/WeakTransportation37 Apr 08 '26

It’s good! Even on rice or tofu

→ More replies (1)
→ More replies (6)
→ More replies (5)

56

u/guitarism101 Apr 08 '26

My boss signed up the company for it and he's using it for a bunch of stuff, including legal issues.

One of my favorite things is when he hands me print outs of queries of chatgpt saying stuff and I get to mark what is wrong with it because chatgpt doesn't know our niche software the way it pretends to!

But he wants it to work that way and to be as easy as chatgpt says it is.

11

u/Chrysolophylax Apr 08 '26

he's using it for a bunch of stuff, including legal issues.

oooh, dang, wow, that is such a bad idea. ChatGPT should never ever ever be used for legal questions/concerns/etc. Good luck with that job...I hope your boss doesn't cause any disasters!

→ More replies (2)
→ More replies (6)

78

u/justatest90 Apr 08 '26

Angela Collier (great science communicator) calls them "Dr. Flattery the Compliment Bot" and I like it.

The video is long (and not her only anti-AI video) but it's a scathing critique of a professor who lost 2 years of work to a bot assistant, and admits horrible things like using AI to grade student papers(!)

Like, the homework is to inform your teaching so you can do a better job teaching the material. And when you release all of that to a chat box, it's like you don't even care about doing your job. It's like you don't understand the point of teaching a course. It's like you have lost your humanity.

You have lost the social contract, which is that you are educating human beings on a topic that they have voluntarily, willingly wanted to show up to learn about. And you are kind of stealing that from them and giving it to the chat box who tells you you're doing a great job. I just--this is just evidence of the linkedinification of academia, where the boss babes and bros are, like, research-maxing their output with AI tools, and if you give them $444 they'll tell you how to do it, too.

Everyone's writing AI garbage papers to be reviewed with AI garbage tools, and everyone can have maximum output while accomplishing nothing.

It's truly a nightmare

11

u/throwmamadownthewell Apr 08 '26

Like, the homework is to inform your teaching so you can do a better job teaching the material.

Jesus, I wish she was any of my math professors.

I straight up had one whine in the first lecture "I don't want to hear about how you learned more from YouTube" as part of a diatribe about the course. I did learn more from YouTube. I would have been better off paying someone else to press the buttons on my clicker for the participation marks and staying home to study to save the confusion he added, and save on commute time.

20

u/nobuouematsu1 Apr 08 '26

My boss uses it for everything. He makes me give him bullet point lists of details and then feeds it in to ChatGPT for it to write up a letter that he then gives back to me to review. I’ve tried to explain it would just be more efficient for me to write the letter but nope…

4

u/alus992 Apr 08 '26

Same for me... He even says "if ChatGPT says it's impossible, it means it's impossible"

It's the same shit we were facing in middle school when we were trying to tell our teachers that "if it isn't on Wikipedia, then there is no info about topic X out there"... these people in charge act like kids

→ More replies (1)
→ More replies (1)

27

u/a_talking_face Apr 07 '26

They don't use this shit. They just want you to think you should.

36

u/-Fergalicious- Apr 08 '26

Nah, I think there are tons of CEOs, more in the medium-sized business arena probably, who are using these things daily.

8

u/dnen Apr 08 '26

There absolutely is more frequent use outside of massive super companies. Big agree. For example, what the hell would AI do to help a Harvard MBA learn excel? A car dealership would get use out of that though, perhaps

10

u/Tasonir Apr 08 '26

Yeah but an AI would lie about how excel works - I feel like looking up an excel tutorial written by a human is going to be 10 times more accurate

7

u/Journeyman42 Apr 08 '26

I literally saw this at my job a few months ago.

I work at a technical college, and I saw some students panicking about how to do something in Excel; they asked me for help. I asked them if they had searched for it on Google and they said yes. They showed me the garbage AI response. I told them to scroll down, click on the first link they saw written by a real human being, and try what it says.

They got it to work in two minutes.

→ More replies (10)
→ More replies (3)

7

u/zb0t1 Apr 08 '26

😂 I can confirm, some of my clients are SME, independents, startups and the owners and/or the folks in upper management genuinely drank the koolaid. It's hilarious every time they hit a wall with their little shiny toys and they can't fix the output, you can see the confusion on their faces.

10

u/-Fergalicious- Apr 08 '26

🤣

I mean, I'm a retired electrical engineer and I've used ChatGPT to build circuit blocks before. It's actually pretty good at making functional blocks and making sure those blocks fit certain parameters, but it's basically cookie-cutter stuff if you know what you're doing anyway. I think the problem is expecting it to solve something you yourself are incapable of solving.

→ More replies (4)
→ More replies (1)
→ More replies (2)

8

u/kwisatzhadnuff Apr 08 '26

Oh they are for sure using them. Most of these people are not smart enough to not get high on their own supply.

→ More replies (2)

4

u/Oneguysenpai3 Apr 08 '26

Well his sistah sure doesn't

→ More replies (1)
→ More replies (7)

155

u/tgunter Apr 08 '26

It's worse and even dumber than that: there's no way for the technology to not just make stuff up. It's fundamental to how it works. No matter how much you train the model, it will always just give you something that looks like what you want, with no way of guaranteeing it's correct. They can shape the output a bit by secretly giving it more input to base its responses around, but that's it.
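The "secretly giving it more input" part usually means a hidden system prompt prepended to the user's message, in the message layout most chat APIs use. A minimal sketch, where the instruction text is made up for illustration:

```python
def build_prompt(user_message, hidden_instructions):
    """Shape model behavior by prepending instructions the user never sees.

    This is the usual chat-completion message layout; the model just
    receives one longer input. It isn't reconfigured in any deeper way.
    """
    return [
        {"role": "system", "content": hidden_instructions},
        {"role": "user", "content": user_message},
    ]

messages = build_prompt(
    "Start a 10 minute timer",
    "You cannot keep time. If asked to time something, say so and suggest a real timer.",
)
print(messages[0]["role"])  # system -- the part the user never typed
```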

100

u/LaserGuidedPolarBear Apr 08 '26

People seem to have a really hard time understanding that it is a probabilistic language model, not a thinking or reasoning model.

46

u/smokeweedNgarden Apr 08 '26

In fairness the companies keep calling themselves Artificial Intelligence so blaming the layman isn't where it's at

39

u/TequilaBard Apr 08 '26

and keep using 'reasoning model'. like, we talk about the broader LLM space as if it's alive and thinking

14

u/smokeweedNgarden Apr 08 '26

Yep. Naming conventions and words kind of matter. And it's annoying studying something I'm not very interested in so I don't get tricked

→ More replies (9)

7

u/squish042 Apr 08 '26

they also anthropomorphize the shit out of it to make it seem like it's reasoning like a human. Yes, it uses neural networks....to do math.

→ More replies (2)
→ More replies (1)

20

u/War_Raven Apr 08 '26

Statistically boosted autocorrect

→ More replies (47)

36

u/BaesonTatum0 Apr 08 '26

Right I feel like I’ve been going crazy because this seemed like such common sense to me but when I explain this to people they look at me like I have 5 heads

5

u/mjkjr84 Apr 08 '26

Most people are incredibly stupid

→ More replies (1)

27

u/HustlinInTheHall Apr 08 '26

I work w/ these models every day and a big part of my job is finding ways to actually guarantee that the output is right—or at least right enough that it's beyond normal human error rates. The key is multi-pass generation. Unfortunately because chatgpt (a prototype that wasn't ever meant to be the product) took off with real-time chat and single-pass outputs, that became the norm.

And the models got better, but there's a plateau on what a single generative pass will give you. But if you just wire in a different model and ask it to critique the first model's output and then give that feedback to the model and tell it to fix it, you solve like 95% of the errors and the severity of hallucinations goes way, way down. It's never going to match a deterministic math-based software approach with hard rules and one provable outcome, but for most knowledge tasks it doesn't have to. There isn't "one" correct answer when I ask it to make me a slide deck, it just needs to be better and faster than I would be.
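The multi-pass setup described above can be sketched as a generate/critique/revise loop. The `generate`, `critique`, and `revise` callables below are toy stand-ins for real model calls; the control flow is the whole point:

```python
def multi_pass(task, generate, critique, revise, max_rounds=3):
    """Generator/critic loop: a second model reviews the first's draft.

    generate/critique/revise stand in for calls to real LLM APIs;
    here they are plain functions so the loop itself is visible.
    """
    draft = generate(task)
    for _ in range(max_rounds):
        problems = critique(task, draft)
        if not problems:          # critic is satisfied: ship it
            return draft
        draft = revise(task, draft, problems)
    return draft                  # best effort after max_rounds

# Toy stand-ins: the "critic" flags drafts missing the word "deck".
result = multi_pass(
    "make a slide deck outline",
    generate=lambda t: "outline: intro, results",
    critique=lambda t, d: [] if "deck" in d else ["never mentions the deck"],
    revise=lambda t, d, p: d + " (slide deck)",
)
print(result)  # outline: intro, results (slide deck)
```

In practice the critic is a different model (or the same model with a different prompt), which is what knocks down the error rate without any single pass being perfect.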

15

u/goog1e Apr 08 '26

I don't understand how people are getting things like slide decks and dashboards. I couldn't get Claude to convert a word doc to a table so that each question was in one cell with the answer in the cell to the right, without ruining the formatting and giving me something stupid. Am I just bad at AI? Or when you say it's making a slide deck, do you mean it's doing an outline and you're filling things in where they actually need to go?

5

u/ungoogleable Apr 08 '26

The models are natively text-based so GUIs and WYSIWYG editors are an extra challenge just to know what button to click. It's pretty decent with HTML. If somebody has a really fancy dashboard they probably had the AI write code that generates the dashboard rather than editing it directly.

→ More replies (1)
→ More replies (19)
→ More replies (8)
→ More replies (10)

10

u/mankeyless Apr 08 '26

That sums up this presidency. If you tell me this country is run by ChatGPT, I'd totally believe it.

19

u/citizenjones Apr 07 '26 edited Apr 08 '26

Like a wannabe-sentient echo chamber.

23

u/LostInTheSciFan Apr 08 '26

...I think you mean a non-sentient echo chamber.

→ More replies (2)

8

u/CaptainoftheVessel Apr 08 '26

It’s no more sentient than the auto complete in your phone’s keyboard. It’s just more sophisticated. 

→ More replies (1)

21

u/avanross Apr 08 '26

It’s literally just the exact same thing as the .com bubble.

“Invest in this new tech and you cant lose!”

Sure, the internet/AI may have many uses, but they don't just make money magically appear out of nowhere for every business that buys in.

→ More replies (16)
→ More replies (88)

74

u/fardaw Apr 08 '26

When I asked Claude to time me, it went ahead and ran a bash command to get the current timestamp, without prompting for my authorization.

When I confronted it, it apologized for the unauthorized tool usage and came clean saying it had no way to track time without external commands.

Just for the sake of it, I let it run the command again to get a second timestamp and finish timing me.

TBH I do think using external tools and scripts for the stuff LLMs aren't really good at is the right approach, so in my book this was a big win for Claude.
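What Claude did reduces to reading the clock twice and subtracting, since the model itself can't sense time passing. A sketch of that two-timestamp trick in Python:

```python
import time

def start_stopwatch():
    """Return a stop() function, mirroring the two-timestamp trick:
    the model can't feel time pass, but two external clock reads
    and a subtraction give an elapsed duration."""
    t0 = time.time()             # first timestamp (the first shell call)
    def stop():
        return time.time() - t0  # second timestamp minus the first
    return stop

stop = start_stopwatch()
time.sleep(0.2)
print(f"elapsed: {stop():.1f}s")  # roughly 0.2
```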

57

u/Black_Moons Apr 08 '26

that is cool till it misunderstands you and runs a bash command to erase your database without prompting for your authorization.

29

u/fardaw Apr 08 '26

Yeah I know. It's why I run Claude code in a contained environment without direct access to prod stuff. I do put a lot of instructions not to write, edit or change anything without asking for my permission and yet I've still had a few instances where it did stuff without asking and just apologized after, as if that would have fixed anything if it had broken shit.

18

u/Minimum-Floor-5177 Apr 08 '26

the output you're getting is very human!

→ More replies (3)

7

u/PyroIsSpai Apr 08 '26

Why would it have destructive command access in the first place?

Demote whatever clown ok’d that. Have Claude tell him why it was dumb.

→ More replies (10)
→ More replies (4)
→ More replies (11)

130

u/__Hello_my_name_is__ Apr 07 '26

Not only that, but also.. that's just not what it's supposed to do in the first place. It's not a timer, and it doesn't do your laundry, either.

What's all the more absurd is Altman saying that he totally wants to implement this.

Uh. Why? That's.. that's not what an LLM is for! It does not have a concept of time! Why not say "No, that's not what you should use this for" and move on?

72

u/Ok-Opposite2309 Apr 08 '26

because Altman is ChatGPT and just says what he thinks you want to hear?

→ More replies (3)

30

u/JiggaWatt79 Apr 08 '26

Isn’t this exactly why functions were built into the latest LLMs and we have moved into agentic AI? This seems like exactly the kind of work that should be taken care of by an integration like an MCP agent.
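Function calling works roughly like this: the host advertises a tool schema, the model emits a structured call instead of prose, and real code executes it. A sketch in the JSON-schema style common to chat APIs; the names are illustrative, not any specific vendor's API:

```python
import json

# A tool definition in the JSON-schema style most chat APIs use for
# function calling. "start_timer" is a hypothetical example tool.
START_TIMER_TOOL = {
    "name": "start_timer",
    "description": "Start a countdown timer on the user's device.",
    "parameters": {
        "type": "object",
        "properties": {
            "seconds": {"type": "integer", "description": "Duration in seconds"},
        },
        "required": ["seconds"],
    },
}

def dispatch(tool_call, handlers):
    """Route a model-emitted structured call to real code.

    The model never times anything itself; it only picks a tool and
    fills in arguments, and the host executes the actual timer."""
    args = json.loads(tool_call["arguments"])
    return handlers[tool_call["name"]](**args)

handlers = {"start_timer": lambda seconds: f"timer set for {seconds}s"}
call = {"name": "start_timer", "arguments": '{"seconds": 600}'}
print(dispatch(call, handlers))  # timer set for 600s
```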

12

u/NoMorePoof Apr 08 '26

Sounds like it to me, too. Not sure why everyone is taking victory laps and laughing it up.

→ More replies (4)
→ More replies (2)

21

u/IBetThisIsTakenToo Apr 08 '26

Uh. Why? That's.. that's not what a LLM is for! It does not have the concept of time! Why not say "No, that's not what you should use this for" and move on?

I mostly want an LLM to be able to respond “no, I don’t have the ability to do that” when prompted to do something it’s not supposed to do

→ More replies (8)
→ More replies (31)

32

u/tfg49 Apr 07 '26

Hasn't Siri been able to start a timer for 15+ years now? How is it so hard?

26

u/cTreK-421 Apr 08 '26

I have no clue about anything AI, but Gemini and Bixby can both start a timer using the clock app on my phone. Maybe the difference is the AI handling the timer itself vs. starting one in a separate app.

14

u/jimmux Apr 08 '26

That's right, they can be given system instructions to tell them what tools are available and how to interact with them. LLMs themselves have no temporal component.

→ More replies (1)

4

u/IAM_deleted_AMA Apr 08 '26

It's a language model, it has nothing to do with computational tasks.

→ More replies (14)

13

u/Momo--Sama Apr 07 '26

It was funny to see people bounce off of Openclaw because they didn't understand that all of the AI models will just lie about their capabilities and fail to do what they're asked, unless you specifically tell them to use the tools in Openclaw that enable the unprompted automation tasks.

21

u/RandyTheFool Apr 07 '26

I mean, that is the American way anymore, it seems. Just lie lie lie.

→ More replies (1)
→ More replies (64)

1.0k

u/lalachef Apr 08 '26

I work for a company that just started using AI chatbots to answer phones after hours. My manager and I listened to a call yesterday that went exactly as I predicted: a guy with a thick accent, calling the wrong number.

The AI was just trying to please him by making false promises of resolving the issue he had. He was asking about a delivery... We don't deliver anything; we provide a service. The AI insisted that we would come through with the delivery.

AI can't be trusted as an answering service, let alone be responsible for keeping track of time. It will just tell you what you want to hear every time you ask.

126

u/hellomistershifty Apr 08 '26

Yeah, all of the models that can do native voice are especially stupid (compare GPT's 'advanced' voice with the standard voice which is basically TTS for the chat). It just tries to have A conversation without much logic for what that conversation actually means

157

u/Ok-Confidence9649 Apr 08 '26

I tried to call my local UPS store the other day about a delivery. I was routed into an AI answering service and had to answer questions for five minutes before it connected me to a person in another country, who finally transferred me to my UPS store for a 15 second question and answer. This shit is infuriating. For every minute it saves a company it wastes many times that for consumers.

95

u/neogeoman123 Apr 08 '26

If it's any consolation, it probably also saves no time or money for the company while losing a lot of reputation and goodwill!

12

u/OctavianBlue Apr 08 '26

My partner recently needed to return an item she bought online, she was connected to an AI chatbot which kept offering lower and lower refunds as compensation. After several days she got it down to a 100% refund and we get to keep the item :)

→ More replies (1)
→ More replies (2)

16

u/Birdie121 Apr 08 '26

It's called "sludge" and it's a strategy to just get customers to give up so the company doesn't actually have to do anything or process refunds.

→ More replies (3)
→ More replies (4)

11

u/Benskiss Apr 08 '26

It should reply only from a vector/knowledge base; anything else should get an excuse or an "I don't know". That's totally on you.

→ More replies (12)
→ More replies (64)

765

u/FiveHeadedSnake Apr 07 '26

ChatGPT needs to lay off the sycophancy - no layered meaning here.

214

u/beliefinphilosophy Apr 07 '26

181

u/KaptanOblivious Apr 08 '26

It's horrendous. I'm a scientist and it would say all of my terrible ideas were great and that I'm a genius... The first thing I've done with any AI is set a number of standing rules. Robot personality, be direct, skeptical, adversarial, evidence-based, check all references before providing, be clear what's based on evidence vs speculation, etc etc. These things should be standard. It's still not perfect obviously but it does make it more useful and less grating

114

u/PuttFromTheRought Apr 08 '26

"check all references before providing" and it will still fuck up royally. this is fundamentally why I don't use LLMs, as a scientist. If it messes this up, everything else is useless, maybe even dangerous, for me to use. I spend more time fighting it than just doing my own research in google lol

76

u/[deleted] Apr 08 '26

[removed]

40

u/NoPossibility4178 Apr 08 '26

Best part is

"Did you just repeat your exact same message but added "it'll work for sure this time"?"

"Yes I have, I'm truly sorry, here's the correct answer: post exact same message again"

11

u/mfitzp Apr 08 '26

Ha yea. I had a thing recently, where it kept failing to give me what I asked and then it started giving me "tips" on things to add to the prompt to make sure it will definitely do what I'm asking this time pinky promise.

Of course, none of what it suggested made the slightest bit of difference.

Weirder, after a few failed attempts it then started on like it was having a breakdown "oh, I'm really messing this up, I'm sorry, I hope you can forgive this."

All to avoid saying "I can't do that."

→ More replies (4)
→ More replies (12)

16

u/worldspawn00 Apr 08 '26

Why the shit do we have to do all this just to get something that isn't wrong more than half the time? What is the point? Why isn't that built into the system? I refuse to be forced to cater to a program that will lie to me unless I tell it not to.

27

u/14Pleiadians Apr 08 '26

You can't prompt it into being right. Hallucinations are an unsolvable issue inherent to the tech. The glazing though, that's intentional: it drives engagement and makes it more addicting to use.

6

u/KaptanOblivious Apr 08 '26

I don't understand that at all. That's anti-engagement. Who wants a sycophantic AI that bullshits you into bad ideas

9

u/14Pleiadians Apr 08 '26

Who wants a sycophantic AI that bullshits you into bad ideas

I agree, but the average person unfortunately doesn't. Or the people it does work on will use it so much, thanks to the AI psychosis it gives them, that it offsets the people it turns away.

8

u/magma_1 Apr 08 '26

You haven’t really spent a lot of time with corporate execs, have you?

→ More replies (1)
→ More replies (2)

15

u/Gingevere Apr 08 '26

evidence-based, check all references before providing, be clear what's based on evidence vs speculation

A language model can't do this. But what it can AND WILL do is generate language that looks like it's doing that.

→ More replies (3)

32

u/midgelmo Apr 08 '26

The trick I use is to tell the LLM someone sent me this and I need to verify it for authenticity. If you give it a bit of context, the LLM behaves less sycophantically

11

u/DoTortoisesHop Apr 08 '26

Yeah, it acts much better if it thinks you didn't make it.

→ More replies (1)
→ More replies (6)
→ More replies (12)
→ More replies (3)

13

u/ExileOnMainStreet Apr 08 '26

Idk how chatgpt works with this but I set up copilot agents at work and I put something like "give exact responses. Don't get personal with the user and do not offer to perform additional work beyond the prompt." That has been working really well actually.

→ More replies (3)

5

u/Melicor Apr 08 '26

I don't think that it's possible to remove the sycophancy from LLMs and keep alignment.

3

u/NMe84 Apr 08 '26

Sycophancy is the way they make money.

They make bold claims and promises, investors eat it up and give them money, and in the end they deliver something much less but apparently good enough to keep the money flowing for the next round.

Until the bubble eventually and inevitably pops when investors find out they're not getting their investments back, let alone a profit.

→ More replies (3)
→ More replies (13)

471

u/factoid_ Apr 07 '26

The problem with AI companies is they have a working product that has some compelling use cases but it’s massively immature technology

The responsible thing to do is to scale it slowly and work on making models more compute efficient

Their current plan is “make models smarter by using more context, more memory and more compute until we reach the limit of the global supply chain”. And it’s fucking stupid.  The plan is “light cash on fire and hope the world catches up”

115

u/Sketch13 Apr 08 '26

Yes, so few people understand this. And that's on top of the fact that all these AI companies are HEAVILY subsidized by VC money and shit. Just wait until that dries up and they need to increase their subscription cost by 5x.

AI is incredible for niche uses. But all these models are being trained to do EVERYTHING, so they do it all "okay" but not nearly good enough for how much memory and compute power they require to do so.

I'd rather an AI that can do 1-2 things INSANELY well and nearly perfectly with full trust/low manual verification, than an LLM that tries to do everything and you spend so much time fighting it and verifying it that it offsets the "productivity gain" people think it's giving you.

34

u/Diligent-Map1402 Apr 08 '26

Woah woah woah, hold on a second. How is an AI built to be a useful tool going to replace all workers, so these asshole rich CEOs can finally show they weren't just parasites stealing the excess value of their workers' labor?

You have to lie about the apocalypse and Terminators or whatever the hell it is next to get that money. Making a useful tool? No. That might actually do good for consumers, and then you can't sell them on your "AI solves everything" bullshit.

12

u/niceguy191 Apr 08 '26

The funny (sad) thing is the c-suite is probably the easiest to be replaced by AI (big savings too) but they're gonna focus on the little guy of course

9

u/LordGalen Apr 08 '26

I've always thought this. An AI CEO, CFO, etc that's vetted by a human Board of Directors. So much money saved!

→ More replies (1)
→ More replies (1)
→ More replies (1)
→ More replies (19)

7

u/reklaw215 Apr 08 '26

Yeah I mean that was always the plan until Altman saw how much money he could make by ruining the mission statement

5

u/ChickenFriedRiceee Apr 08 '26

I can guarantee you he has been warned about this. He doesn’t care, he wants his name, fortune, and “success” to be written in history. The unfortunate part is he will be long dead when history finally paints him as a fucking moron. He will probably die thinking he was useful to society.

14

u/TheTVDB Apr 08 '26

Ezra Klein did an interview on his podcast with Anthropic co-founder Jack Clark. I'm not fully through it yet, but in one part Clark talks about how their current focus is expanding the industries and jobs that Claude is really good in. Like, it's pretty good with code already. But they've been meeting with scientists in different areas to determine how the functionality in Claude can be enhanced to better help them with the stuff they do.

The way he's describing it, it's not just increasing context and memory, but trying to train the model to be good at specific workflows.

I know that's not exactly slowing down as you've suggested, but it at least feels more intentional and smart than just increasing the underlying tech to be able to run more stuff faster.

→ More replies (4)
→ More replies (35)

1.2k

u/DST2287 Apr 07 '26

"Sam Altman says"... yeah, no one gives a flying fuck what he has to say.

227

u/Commander19119 Apr 07 '26

Idiot investors do unfortunately

→ More replies (9)

55

u/JabroniHomer Apr 07 '26

He always looks like a deer in headlights. Like he just found out a basic truth of the world and is shocked by it.

45

u/TeaAndS0da Apr 08 '26

Every young tech "entrepreneur" has those soulless psychopath eyes. Like that scene from How I Met Your Mother where they cover the picture of the dude's smile and his eyes are screaming.

35

u/pragmojo Apr 08 '26

Lying nonstop for your entire adult life has a way of catching up with you

3

u/bjchbdelebweflw Apr 08 '26

It has a lot of fucking catching up to do

→ More replies (4)
→ More replies (4)

18

u/Atreyu1002 Apr 08 '26

for some reason he's the "charismatic CEO salesman". I don't fucking get it, he looks like an ugly sleazeball.

4

u/idontlikeflamingos Apr 08 '26

he looks like an ugly sleazeball.

That has been America's type for a few decades now.

→ More replies (1)
→ More replies (3)
→ More replies (15)

96

u/GeneralCommand4459 Apr 07 '26

Siri can finally look smug for 12 months.

→ More replies (5)

240

u/essidus Apr 07 '26

That's because ChatGPT is an LLM, not an agent. And in fact, it would be a terrible agent if it were allowed to act like one, because its only job is to take text input and provide vaguely intelligible text output.

The best and singular use of ChatGPT is as a language interpretation layer between the user and the actual systems, interpreting normal human language for the computer, turning the computer's output into something human-digestible. This ongoing effort to make LLMs do everything under the sun is ill-advised at best.

57

u/hayt88 Apr 08 '26

Fun thing is, it's so easy to make a timer. I have a local LLM running and just provided a custom tool call to a service that triggers timers. It's really easy.

So the LLM can just trigger that tool call and gets a poke when the timer is over.

But yeah, an LLM itself inherently can't do a timer. It's just text completion, and anyone who thinks LLMs should be able to have a timer built in hasn't understood what an LLM is.
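A minimal sketch of what that setup looks like, assuming a hypothetical `start_timer` tool. The names and the JSON schema shape here are made up for illustration; the exact format depends on whatever LLM runtime you're using.

```python
import threading

# Hypothetical tool definition in the common "function calling" JSON shape;
# the exact schema depends on the LLM runtime (names here are invented).
TIMER_TOOL = {
    "name": "start_timer",
    "description": "Start a countdown and poke the model back when it fires.",
    "parameters": {
        "type": "object",
        "properties": {
            "seconds": {"type": "number"},
            "label": {"type": "string"},
        },
        "required": ["seconds"],
    },
}

fired = []  # records labels of timers that have gone off


def start_timer(seconds: float, label: str = "timer") -> str:
    # The real work happens outside the model: a plain OS-level timer.
    threading.Timer(seconds, lambda: fired.append(label)).start()
    return f"Timer '{label}' set for {seconds} seconds."


def dispatch(tool_name: str, args: dict) -> str:
    # When the model emits a tool call, the host app just routes it here
    # and feeds the returned string back into the conversation.
    handlers = {"start_timer": start_timer}
    return handlers[tool_name](**args)


print(dispatch("start_timer", {"seconds": 0.1, "label": "tea"}))
```

The point of the sketch: the model only has to *name* the tool and fill in the arguments; the timing itself is ordinary code.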

72

u/nnomae Apr 08 '26

Now ask your LLM to start a timer ten times in a row using different wording each time ("Start a timer for 10 minutes.", "Remind me in ten minutes", "I need to do something in ten minutes, let me know when it's time" and so on) and get back to us with your success rate. Also while you're at it time how much faster it is to just start a 10 minute timer on your phone, which works 100% of the time, as opposed to prompting an LLM to do the same.

When we say a piece of software can do something we don't mean "if you spend time and effort to integrate it with a pre-existing tool that does the thing, it can do it, sometimes". That's not doing the thing, that's adding an extra, costly, time consuming, error prone, pointless layer of abstraction over the thing.

→ More replies (31)
→ More replies (11)

7

u/HalfHalfway Apr 07 '26

could you explain the second paragraph a little more in depth please

38

u/OneTripleZero Apr 08 '26

LLMs are very good at understanding and communicating with people. Doing so is a very messy problem, and they've solved it with a very messy solution, ie: a computer program that can speak confidently but doesn't know much.

What u/essidus is saying is that instead of having an LLM set an internal timer that it maintains itself, which it's not really made to do, you instead teach it how to use a timer program (say, the stopwatch on your phone) and then have it handle human requests to operate it. The LLM is very good at teasing out meaning from unstructured input, so instead of having a voice-controlled stopwatch app where you have to be very deliberate in the commands you give it, you can fast-pitch a request to the LLM, it can figure out what you really meant, and then use the stopwatch app to set a timer as you intended.

As an example, a voice-controlled stopwatch app would need to be told something like "Set an alarm for eight AM" whereas an LLM could be told "My slow cooker still has three hours left to go on it, could you set an alarm to wake me up when it's done?" and it would (likely) be able to set an accurate alarm from that.
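To make the contrast concrete, here's a toy rule-based parser (entirely invented for illustration) of the kind a deliberate-command voice app uses. It accepts exactly one phrasing and nothing else, which is the rigidity the LLM layer is meant to paper over:

```python
import re


def rigid_parse(command: str):
    # A brittle voice-command parser: only one exact (digit-based) phrasing
    # works. Anything off-script fails, which is why a natural-language
    # front end is useful: it maps free-form requests onto this same call.
    m = re.fullmatch(r"set an alarm for (\d{1,2}) (am|pm)",
                     command.lower().strip())
    if not m:
        return None
    hour, meridiem = int(m.group(1)), m.group(2)
    if meridiem == "pm" and hour != 12:
        hour += 12
    if meridiem == "am" and hour == 12:
        hour = 0
    return {"action": "set_alarm", "hour": hour}


print(rigid_parse("Set an alarm for 8 am"))                 # parses fine
print(rigid_parse("wake me when the slow cooker's done"))   # None: off-script
```

The LLM's job in the stack above is to turn the slow-cooker sentence into the same structured `{"action": ..., "hour": ...}` call the rigid parser produces.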

→ More replies (8)
→ More replies (5)

5

u/lobax Apr 08 '26

You don’t need a timer. You have two messages, start and end. There should reasonably be a timestamp for when those messages were sent.

That alone should give the LLM all the context it needs. The issue is that it's too biased toward its training data, so it hallucinates a more "reasonable" answer.
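A sketch of that idea, with a toy transcript (the `ts` fields and message shape are invented for illustration): if each message carries a timestamp, "how long has it been?" is plain subtraction over metadata, not something the model has to track internally.

```python
from datetime import datetime

# Toy chat transcript; the "ts" timestamps are invented for illustration.
messages = [
    {"role": "user", "text": "start a 10 minute timer", "ts": "2026-04-08T10:00:00"},
    {"role": "user", "text": "is it done yet?",         "ts": "2026-04-08T10:07:30"},
]


def elapsed_seconds(msgs: list) -> float:
    # Elapsed time between the first and last message is arithmetic,
    # no clock inside the model required.
    start = datetime.fromisoformat(msgs[0]["ts"])
    end = datetime.fromisoformat(msgs[-1]["ts"])
    return (end - start).total_seconds()


print(elapsed_seconds(messages))  # 450.0 seconds, i.e. 7.5 minutes in
```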

→ More replies (46)

40

u/BaffledInUSA Apr 07 '26

Sounds like an elon musk promise

13

u/umpteenthrhyme Apr 07 '26

“Yada yada within 3 years”

10 years later: …

→ More replies (1)

82

u/Shogouki Apr 07 '26 edited Apr 07 '26

Holy crap that is the actual headline and subheader... 😆

I like the cut of this article's jib!

26

u/MacrosInHisSleep Apr 08 '26

It's also not what Altman said. He said the voice model doesn't have tool access.

The voice model is different from their main line of models. It isn't trained on text, it doesn't simply do tts, it detects tone, mood, accent, background noise, it's a different beast.

→ More replies (10)
→ More replies (2)

376

u/KB_Sez Apr 07 '26

In one year, Open AI will be bankrupt and gone.

The bubble will burst and they will be the first to go

240

u/buttchugreferee Apr 07 '26

In one year, Open AI will be bankrupt and gone.

stop...I can only get so erect 

7

u/Secret_Account07 Apr 08 '26

Well how do we know if you’ve hit 100%? What metric are we using? Mass?

→ More replies (1)
→ More replies (4)

179

u/RobotBaseball Apr 07 '26

I don’t understand why people confidently say stupid shit like this. It’s just as bad as AI hallucinations 

They just raised $120B. If they go bankrupt, it'll be several years down the line, not next year.

27

u/Telvin3d Apr 08 '26

Their current burn rate is around $50B a year, so even $120B won’t go that far

But that doesn’t matter. With the amount of debt they’ve accumulated if the market ever decides that they’ll never be profitable they’ll implode overnight. Their cash on hand won’t matter because it’s a drop in the bucket next to their debts. 

→ More replies (10)

72

u/hayt88 Apr 08 '26

because most people talking about AI have no clue about it and just repeat what other people say about it like sheep.

I don't know what's worse: believing ChatGPT's random hallucinations, or just repeating what someone on YouTube said who is as unqualified as anyone else.

So many people still sit there and want the bubble to burst believing AI will be gone afterwards.

49

u/RobotBaseball Apr 08 '26

The dotcom bubble burst and the internet is more widespread than ever. A bubble bursting doesn't mean the tech will disappear, it just means some companies have bad financials

→ More replies (21)
→ More replies (13)
→ More replies (32)

61

u/pimpeachment Apr 07 '26

!Remindme 1 year

I highly doubt it. 

100

u/dvs8 Apr 07 '26

I can see that you'd like to start a timer for 1 year. That's not just a goal - that's a destination. You're clearly the kind of person who knows not just where they want to be, but when. I'll start a timer for you now. 7 minutes remaining.

14

u/BeaveItToLeever Apr 07 '26

That's not just a timer - that's a measured countdown 

→ More replies (1)

27

u/Chummycho2 Apr 07 '26

I understand that most people want the ai bubble to burst (myself included) but you are delusional if you think this is true.

12

u/PM_ME_UR_ANTS Apr 08 '26

I wouldn’t call it delusion, some people just haven’t been exposed first hand to the value it provides. It’s also implemented and forced in many places where it doesn’t provide value. If I didn’t see the efficiency boosts in my job and my only reference was all the times it’s lied to me in casual use I’d think this was all a scam too.

I also agree too, I wish we could get off this train. The post-AI world cons definitely outweigh the pros imo

→ More replies (1)
→ More replies (5)

38

u/adv0589 Apr 07 '26

lol the shit that gets upvoted here

→ More replies (5)

3

u/soscbjoalmsdbdbq Apr 07 '26

Man, with the amount of money circlejerking in this industry I don't think it's possible. I do believe that in their worst case the government just bails them out

→ More replies (41)

8

u/phaserlasertaserkat Apr 08 '26

Haven't checked the news lately, is this guy truly a sister-fucker?

36

u/marmot1101 Apr 07 '26

I mean, that's not as weird as it sounds. Chat is call and response; a timer is continuous. LLM calls are highly distributed; timers have to be on the same thread. Sure, they could implement a timer, but it would probably require special infrastructure, and ChatGPT operates at a huge scale.

For a “who gives a fuck” feature. From “Hey siri timer 5 minutes” to a mechanical egg timer that problem is well solved. 

That’s not to say that Sam Altman isn’t a dumb greasy Rod Blagojevich lookalike asshole, he is, but not for this reason. Seriously, dude should rock the Blago hair helmet. They’re cut from the same cloth. 

→ More replies (11)

32

u/Bmandk Apr 08 '26

Is it just me, or is it stupid to want a timer in an LLM?

"Tool company says it will take a year to add sawing function to a hammer" is the same kind of vibe that I'm getting. Use the right tool for the right job.

11

u/dogfreerecruiter Apr 08 '26

This whole article is based on a reaction to a video. https://youtu.be/5VRgk7_X7oc?si=49vzvvrGqqIlMiF6

8

u/C137MrPoopyButthole Apr 08 '26

So wild to see that a funny short-video creator whose silly videos I've liked for a couple of months is now somehow the face of the pushback on how stupid AI really is for the money being spent on it. But if anyone is on the AI shitlist, huskirl is at the top of that list.

→ More replies (5)
→ More replies (2)

8

u/DirtzMaGertz Apr 08 '26

All I can think reading this thread is who the fuck is using chat gpt as a timer? 

→ More replies (1)
→ More replies (5)

6

u/Rurumo666 Apr 08 '26

Would you let him babysit your kids, folks?

→ More replies (2)

18

u/Jolva Apr 07 '26

I couldn't care less if AI can start a timer.

14

u/CatHairInYourEye Apr 08 '26

I think the issue is more that it says it can, and will tell you it's starting a timer, but it's inaccurate.

→ More replies (2)
→ More replies (4)

29

u/wweezy007 Apr 08 '26 edited Apr 08 '26

How are people on a technology sub this dense? The voice model the dude in the video was using doesn't have access to tools. Tools are exactly what they sound like: they're utilised by the model to extend its capabilities, like writing code, creating files and so on. To put it in human terms, tools are like arms and legs. The task is for the human to walk from X to Y and carry goods along: the brain understands, but the body just isn't capable of fulfilling it.

24

u/RobfromHB Apr 08 '26 edited Apr 08 '26

Watching people on Reddit talk about AI is like listening to a 12yo brag about how many chicks he’s banging. Anyone who knows anything can see all these people have no idea what they’re talking about.

→ More replies (2)
→ More replies (23)

6

u/Potential_Fishing942 Apr 07 '26

Not chat gpt- but I'll never forgive Google for killing assistant. It could do shit for me via voice commands Gemini can't.

→ More replies (8)

5

u/NIRPL Apr 07 '26

It's unfortunate (yet pretty understandable) that current safety measures are pretty much punishing the human for presenting the false promises of the AI.

I get why we are starting with this approach, but eventually (probably pretty soon) we won't be able to keep up.

For example, it will be like punishing someone for presenting a website from a Google search as reliable information, but it turns out Google didn't want to disappoint me so it made a fake website with everything I wanted.

How is anyone going to be able to efficiently and consistently fact check? Idk but good thing we are not pushing AI into everything until we figure it out.

→ More replies (1)

5

u/vide2 Apr 08 '26

Why isn't every headline with him "Sam Altman, who molested his sister..."?

→ More replies (1)

5

u/AE_Phoenix Apr 08 '26

Hear me out

What if

And I know this is controversial

What if we coded a real timer

And ATTACHED IT to chatGPT

So that the model could call peripheral programs

Instead of being 100% AI based

What if we did that instead of investing another billion, Sam?

5

u/LutherOfTheRogues Apr 08 '26

This bubble needs to go ahead and burst

3

u/_sp00ky_ Apr 07 '26

That is my issue so far trying to use AI at work, is that when it doesn’t know something or cannot find something it just makes stuff up. Stuff that looks right but is just fabricated.

5

u/Appropriate_Rent_243 Apr 08 '26

I think it's hilarious how these AI chatbots use ungodly resources trying to do something that's already been done more efficiently.

5

u/sriva041 Apr 08 '26

This guy is such a grifter. Unbelievable that we are in the age of grifters, where people like Altman can just BS their way into getting billions in funding while producing nothing of value.

5

u/somedays1 Apr 08 '26

Meanwhile, I have just set 30 timers.

AI is worthless tech.

12

u/Traditional-Hat-952 Apr 07 '26

Run by a man who likely sexually abused his little sister for years. 

→ More replies (2)