@nina_kali_nina I once had a good video on the topic, but the degradation of YouTube and search engine results currently keeps me from finding it
@hypha that's not great as an option :( two people will lose friends
@nina_kali_nina well, I think this question should be more specific.
I also have this problem, but in a slightly different way. My relative does it, but mostly out of loneliness, I think (they live quite far away, so I can't help in person)
@kacperpotoczny nod nod; the question is half-ironic, because there are many valid reasons to use LLMs, and even if there weren't, they're so ubiquitous that a great many people use them now and then
@nina_kali_nina say "boil the ocean" instead of "use an LLM"
@trianderror I've heard the sentiment "It helps me so much because I don't need to stack overflow all the time, and Google is not helpful anymore". It terrifies me
@nina_kali_nina @trianderror I find LLMs on par with very junior devs these days. Very useful for implementing boilerplate and pretty standard stuff. The key is to keep the interfaces well defined and the functions small. Speeds up development by a third at least.
@sandor @trianderror out of curiosity, have you tried working with actual junior devs as a project leader?
@nina_kali_nina @trianderror yes, of course. I'm not saying an LLM is like working with humans, but in terms of pure code implementation based on a proposed architecture and interfaces, this is my experience.
@sandor @trianderror then, out of curiosity, what argument from your friend would convince you not to use LLMs?
@nina_kali_nina Provide them with Small Language Models for self-hosting.
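For example, a minimal sketch of querying a self-hosted small model with llama-cpp-python (the model path and prompt are only illustrations; any small quantized GGUF model would do):

```python
# Minimal local inference sketch using llama-cpp-python.
# The model file is a placeholder: download any small GGUF model first.
from llama_cpp import Llama

llm = Llama(model_path="models/small-model-q4.gguf", n_ctx=2048)

out = llm("Q: How do I reverse a list in Python?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```

Everything runs on the local machine, so no data leaves it.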
@nina_kali_nina so, what's the main thesis against AI / neural networks / etc.?
@ramiil the training dataset is collected unethically, a major share of its use is unethical (compare with cars, for example: lots of them could be used for good, but most are used in a way that's generally bad for everyone), and it requires lots of energy to run. That's my beef, anyway
@nina_kali_nina What does "the dataset is collected unethically" mean? Does it mean that LLMs are trained on racist/sexist/etc. texts, or something else? If so, I think we are all "trained" this way too, but "fine-tuned" to be less unethical.
Next, a major share of the usage of many technologies is unethical, from GPS (not only civilian navigation, but also military uses, including ballistic missile guidance) to applied math (statistical manipulation). There is no "evil tech" or "good tech"; there are only good and evil people using the same tools.
And the third argument, energy. What is the difference between energy spent on bitcoin mining, home heating, advertising LED screens, TV broadcasting, and AI usage? None, actually. We need energy, as much as possible; our civilisation depends on energy. But it can and should be used more optimally; for example, we can use the excess heat of data centers to heat houses or greenhouses.
@ramiil
> What does "the dataset is collected unethically" mean?
LLMs are not trained only on public domain data; they are also trained on copyrighted and copylefted data. Legally, it's nonsense.
> a major share of the usage of many technologies is unethical
Let's not get into the "there are no evil guns, there are evil people" debate? :)
> We need energy, as much as possible; our civilisation depends on energy.
Exactly, and AI is far from the best way to use that energy. It's not the worst offender yet, but the companies behind it want it to become one. That's not great.
@nina_kali_nina
> LLMs are not trained only on public domain data; they are also trained on copyrighted and copylefted data.
We* are too. But LLMs do not train themselves; companies do, and companies can be sued for copyright violations.
(* - humans)
> there are no evil guns, there are evil people
Exactly my point. But if you don't agree with it, that's okay.
> AI is far from the best way to use that energy
We (humanity) use energy to replace hard manual labor with machines and to increase our quality of life and technical abilities. In the mid-2010s there was perhaps only one area of labor that could not be automated: mental work. Now that's changing, and I'm glad for it.
But the possible creation of AI raises another question: "If a general-purpose AI exists and it can do anything a human can do, why are humans needed?"