@GossiTheDog "they stall in enterprise use since they don’t learn from or adapt to workflows" - can't wait for some genius to make user prompts persist in the model so the whole thing can get poisoned!

@GossiTheDog btw, that's a bullshit excuse:

"The core issue? Not the quality of the AI models, but the “learning gap” for both tools and organizations."

If you build a product that is great but people just don't know how to use it, that means you built a shit product. Also, those are freakin' chatbots; they're supposed to have no learning curve. The promise is that they ingest your data and then you "just talk" to them in plain language and they do stuff for you. Unless, maybe, they don't keep that promise and the marketers are lying to us. They couldn't be doing that, right? Right..?


@tymwol @GossiTheDog

Some good tools have a big learning curve. At the start of the personal computer revolution, most people couldn't type. It took a long time for that to be a normal skill. And that enabled a load of other things.

That said, there are two important things for LLMs in particular:

  • They are finicky. Two very similar prompts will give very different results. This translates to a steep learning curve for something that is marketed as simple.
  • That finickiness is model-specific. One of the big problems GPT-5 is having is that its most active users have worked out ways to work around the idiosyncrasies of GPT-4o. None of this is transferable to a new model. Some of it isn't even transferable to a slightly different version of the same model.

This translates to a tool that is hard to learn, and where the learnings are not transferable skills. That's very different from any of the things that have actually increased productivity, historically.


@GossiTheDog Generative AI pilots shouldn't be flying in the first place. Take back the skies!


@GossiTheDog This report seems particularly grim given that it comes from a group within MIT that seems to have drunk all the Kool-Aid on the idea.

Apparently 'agentic' is just so awesome that we need to remake DNS in its image (with web3 support, obviously).

This isn't the skeptics or the indifferent-but-ultimately-must-be-mollified-with-beans bean counters saying they don't work. That would explain the 'not failing; just being failed' tone.


@buherator "learning" and "adaptation" seem like things I'd expect from something that is claimed to have "intelligence".

@womble It can do that (kind of...), and you could e.g. append "successful" conversations to the initial prompt for adaptation. But doing that once on sanitized data vs. continuously on user input are radically different risks.
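
A minimal sketch of what that prompt-append approach looks like (all names here are hypothetical, not any real product's API); the point is the data flow, which makes the difference between the two risk profiles concrete:

```python
# Sketch of prompt-based "adaptation": past conversations judged
# successful are appended to the system prompt for future sessions.

def build_system_prompt(base_prompt: str, successful_convos: list[str]) -> str:
    """Concatenate curated example conversations onto the base prompt."""
    examples = "\n\n".join(successful_convos)
    return f"{base_prompt}\n\nExamples of good interactions:\n{examples}"

# Done once, on sanitized/curated data: a relatively contained risk.
curated = ["User: summarise the Q3 report\nAssistant: <good summary>"]
prompt_v1 = build_system_prompt("You are a helpful assistant.", curated)

# Done continuously, on raw user input: any user's text flows straight
# into the prompt that every later session sees, i.e. persistent
# prompt injection ("poisoning") of the shared prompt.
def adapt_continuously(base_prompt: str, log: list[str], new_convo: str) -> str:
    log.append(new_convo)  # unsanitized user text enters the prompt
    return build_system_prompt(base_prompt, log)
```

The second function is the dangerous one: nothing filters `new_convo`, so an attacker's "successful" conversation persists into everyone else's context.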

@david_chisnall @tymwol @GossiTheDog Typing (with speed) was never the bottleneck.


@sleepyfox @tymwol @GossiTheDog

It absolutely was. One of the biggest complaints from folks in the '80s was that typing was so much slower than other things (handwriting, dictating, and so on).


@sleepyfox @tymwol @GossiTheDog

And, of course, writing code is what 99% of people who saw productivity boosts from using computers were doing.


@GossiTheDog

“But…but AI is literally the future, and those who don’t use it will get left behind! Super-intelligent AGI is just around the corner, just five more years I promise!!”


@david_chisnall @tymwol @GossiTheDog There were no "productivity boosts from using computers".


@sleepyfox @tymwol @GossiTheDog

The degree to which that is nonsense means that there's no point engaging when a block will do.


@GossiTheDog they are failing because LLMs aren't delivering, and most likely can't deliver, on the over-hyped promises. Sounds like another MIT article that blames the user and not the product. I've read too many MIT articles that are just advertisements for AI slop.
