Conversation

So the CEO of Microsoft's AI group said it cracks them up to hear people call AI underwhelming when they can have a fluent conversation with it and it can generate images and videos. This echoes the sentiment other AI company CEOs have shared.

So, here's my theory, based on these observations: these people find AI so smart because these agents are smarter than them. They're not smarter than a dumb rock, but they're smarter than the CEOs.

This adequately explains the state of the world too.

@algernon smarts are hard to measure. But training a model so that it pushes the buttons of typical decision makers, though not necessarily those of an engineer, sounds reasonable.

@algernon I wouldn't say "smarter", but I do think it shows how their power has isolated them from genuine interactions with people: specifically, interactions with people who don't depend on them, people who can tell them "No."


@tante But when called out, an AI will apologize (and offer another round of bullshit), while CEOs will stick to their misguided beliefs.

In my book, that makes AIs "smarter".

(Yes, this is a shitpost. Don't treat it as a reasonable argument, I'm just dunking on tech CEOs :P)


@buherator

pub trait TechCEOSmartness {
    fn is_smart(&self) -> bool {
        false
    }
}

struct TechCEO;

impl TechCEOSmartness for TechCEO {}

fn main() {
    let sam_altman = TechCEO;
    println!("Is Sam Altman smart? {}", sam_altman.is_smart());
}

Here's a memory-safe implementation of Tech CEO smartness measurement in a modern, strongly typed, memory-safe language. It's in Rust, it compiles, it works, so it must be true!!11!!!


@algernon LLMs are good at bullshitting. So, of course CEOs interact with an LLM and see it as something that could do their job, which means it can do any job (since all other jobs are less important than CEO).


"Smarts are hard to measure!" - and yep, it is. Fortunately, we're specifically talking about the smartness of Tech CEOs, we have a reasonably small set of them to be able to quickly model them adequately closely.

So here's my implementation of such a model, written in a modern, strongly typed, memory-safe language. It compiles, it works, it passes the tests, so it must be correct:

pub trait IsSmart {
    fn is_smart(&self) -> Option<bool>;
}
pub trait TechCEOSmartness {
    fn is_smart(&self) -> Option<bool> {
        Some(false)
    }
}

struct TechCEO;
impl TechCEOSmartness for TechCEO {}

struct Person;
impl IsSmart for Person {
    fn is_smart(&self) -> Option<bool> {
        None
    }
}

fn main() {
    let sam_altman = TechCEO;
    println!("Is Sam Altman smart? {}", sam_altman.is_smart().unwrap_or_default());

    let person = Person;
    println!(
        "Is a random person smart? {}",
        person
            .is_smart()
            .map_or_else(|| "Not decidable", |v| if v { "yeah" } else { "nay" })
    )
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn is_tech_ceo_smart() {
        let tech_ceo = TechCEO;
        assert_eq!(tech_ceo.is_smart().unwrap(), false);
    }

    #[test]
    fn is_person_smart() {
        let person = Person;
        assert_eq!(person.is_smart(), None);
    }
}

Try it here

Nah, I think the problem is that they're not selling it for what it actually is.

It IS extremely impressive what LLMs and image generators can do. Just 5 years ago, DALL-E and the like seemed like magic, 7 fingers and wonky faces and all. The early GPT models (1 and 2) caused a lot of excitement in ML circles, and ChatGPT brought that to a wider audience with GPT-3. It's incredible how well today's LLMs can process text.

But that's not what we're being sold. We're being sold problem-solvers and general AI "any day now." And that's not what we're getting. I personally don't think we're getting that with today's LLMs: we'd need another paradigm shift first. That's why progress essentially stopped 1-2 years ago: we're very close to the limit of the current paradigm, and it's not obvious where to go next (at least not in a "timely" fashion that would satisfy market caps).

They could have had great success selling LLMs and image generators for the limited tasks they are genuinely useful for. But that's not a trillion dollar business, so instead we have a bubble dependent on an impressive technology turning out to actually be magic.

@michael I'm sorry, but technology that relies on blatant theft and on offloading costs to everyone else¹, one that causes great environmental and social harm, is in no way impressive.

Even if they sold these things for what they really are, they'd still be selling shit. Shiny shit, but shit nevertheless.

The only impressive things here are that they continue to get away with all this, and the impressive amount of human art and knowledge that exists in the world and was available for them to train on.


  1. Like, hello, 51 million requests in a single day for niche sites I host? That's a stupid amount of bandwidth and CPU time wasted, even with my defenses up.

I get where you're coming from, and agree to some degree. I don't think that's a productive position, though.

LLMs are not shit; they are very impressive. They are not general AI, and they are trained using dubious methods.

Scraping the internet is not theft. Nor is it okay. It's somewhere in between: not exactly fair use, and only quantitatively different from inspiration. If you or I insist it's theft, we are (rightfully) going to be ignored; the music industry tried that in the 90s, and we laughed at them. It's a very valid point to consider, but it is IMO necessary to engage with it, not just dismiss it.

Same with the electricity: right now, we have the problem that usage is subsidized by VC money, and training by companies believing in (or at least selling) a product they do not have and cannot provide. Transport by car or eating meat also has a huge environmental impact, but we've accepted that as part of life. I'm not saying we should do the same for LLMs, but we should not dismiss the question outright just because you personally think the technology is junk.

So, I agree with you that there are fundamental and significant ethical issues with LLMs. I disagree that they cannot have a purpose, even if it is not what we're being sold. And I believe the only way to influence the direction is to engage with it, rather than rejecting it entirely.

@sb Lying is dumb, so that reinforces my theory. ;)
