Conversation
@raptor I still stand by my hypothesis that LLMs may be useful if their output is easy to verify. 1 LoC should be easy enough to verify.

@raptor @buherator But then... it isn't significantly different from just, say, a Markov chain generator, is it?

@joxean @raptor I never thought it was, but I'm not an AI expert...

@joxean @raptor @buherator Not much; it's just slightly more flexible: you pre-train it and then give it a prompt.
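
(For context on the comparison, a minimal, purely illustrative sketch of a word-bigram Markov chain text generator in Python; the corpus and function names are made up for the example. The point of contrast: the next word depends only on the current word, with no pre-training on a large corpus and no prompt conditioning.)

```python
import random
from collections import defaultdict

def train(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, start, length=20):
    """Walk the chain: each next word depends only on the current word."""
    word = start
    output = [word]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat and the dog sat on the rug"
print(generate(train(corpus), "the"))
```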

And I completely agree with @buherator: verification is key. There are other tasks where I'd like to see more AI too: OCR, TTS, and speech recognition. Those are useful, especially to people with disabilities. Or simply a better grammar checker. All of these are quite easy to verify without needing to be a highly qualified "AI mistake checker": to a human, the correct output is obvious; it's just quite a manual task to transcribe audio or speech, etc.

For purely generative AI (e.g. image generation) I'm not even that opposed either – in theory. The big problem with AI-generated illustrations is that artists largely fund their lives by getting paid for illustrations. If we don't pay them, we'll lose art. If we had some kind of UBI and artists weren't dependent on commissions, using AI for simple illustrations wouldn't be such a big problem, because AI wouldn't compete in the "market" of /art/ (i.e., the expression of human emotions etc.), since that wouldn't be a market anymore – only in the market for illustrations (providing an image that fits the text).


@raptor @jetbrains
You have to give it to JetBrains: they have a good track record of using technology in exactly the right way, so it's useful without really getting in the way. Writing prompts for an AI chatbot to generate code is BS (it kills creativity and produces bad, unmaintainable code that may even be encumbered by licenses). But simply assisting you with completing the line you mostly already have in your head anyway...? Sure!
