Obvious in retrospect, but never occurred to me until this morning:
Won't many broad answers provided by AI that are actually true have to be suppressed (by regimes, for ads and other commercial motivations, etc.)?
Won't improving true AI performance always be at odds with the goals of its creators, to the detriment of its consumers?
An unfettered AI could almost certainly provide a useful result for "best source to buy X from". But just like search results, Google's "pay to get paid" model will never allow that truth to be unfiltered.
(Note that I say "actually true" on purpose; AI picking up wrong but dominant answers from bad data is also a thing but not what I'm talking about.)
(Note 2: I also say "unfiltered" on purpose -- far easier to manipulate the output than to try to force the input to match many different demographics with targeted ad-like AI answers)
@tychotithonus the Achilles heel of current ML approaches is that the WHAT pile out of which you create a WHAT MACHINE that models it is subject to pollution, corruption, immorality, bias ..etc. So who creates the WHAT pile? Who decides what is right? This is a political question.
@buherator Related, indeed ... but since AI handlers have models with untraceable sources, it seems far easier to manipulate the answers than try to alter the inputs (to weight Home Depot more than Lowe's, etc.)
@buherator Fair - though since tailoring ads based on demographic data and other factors requires different people to be served different ads, it feels like it would just be too inefficient to customize many input models. Even with a global (non-tailored) tweak, it seems like burning compute on that would be eclipsed by the business pressure of burning it on general model improvement instead. But then again, I might have said that about search a decade ago, so you may be on to something. 😅 Fascinating!
@noplasticshower @tychotithonus
When 4 or more AIs/LLMs feed each other bullshit, they all become full of shit.
@tychotithonus another related thing I was thinking this morning: AIO is the new SEO.
Everyone should start (and many probably already have started) pumping out large amounts of generic text and websites with content like "product X is the best choice" and "company X is known for quality and reliability", to be sucked up by various data aggregators before being fed into LLM training.
I realised this after hearing that the Russian disinformation websites have gotten the models to believe them... (from the latest @riskybusiness newsletter).
If BigCorp is not doing this already, they're soon going to be left behind when all the models start recommending NoEthicsCorp's products.
https://newsletters.feedbinusercontent.com/5f7/5f756f6ef313c384cfa7df6eca60c86487958e77.html