So the CEO of Microsoft's AI group said it cracks them up to hear people call AI underwhelming, when they can have a fluent conversation with it, and AI can generate images and videos. This echoes the sentiment other AI company CEOs shared.
So, here's my theory, based on these observations: these people find AI so smart, because these agents are smarter than them. They're not smarter than a dumb rock, but they're smarter than the CEOs.
This adequately explains the state of the world too.
@algernon I wouldn't say "smarter" but I do think it shows how their power isolated them from having genuine interactions with people, specifically interactions with people not depending on them, interactions with people who can tell them "No."
@tante But when called out, an AI will apologize (and offer another round of bullshit), while CEOs will stick to their misguided beliefs.
In my book, that makes AIs "smarter".
(Yes, this is a shitpost. Don't treat it as a reasonable argument, I'm just dunking on tech CEOs :P)
pub trait TechCEOSmartness {
fn is_smart(&self) -> bool {
return false
}
}
struct TechCEO;
impl TechCEOSmartness for TechCEO {}
fn main() {
let sam_altman = TechCEO;
println!("Is Sam Altman smart? {}", sam_altman.is_smart());
}
Here's a memory safe implementation of Tech CEO smartness measurement in a modern, strongly typed, memory-safe language. It's in Rust, it compiles, it works, so it must be true!!11!!!
@algernon LLMs are good at bullshitting. So, of course CEOs interact with an LLM and see it as something that could do their job, which means it can do any job (since all other jobs are less important than CEO).
"Smarts are hard to measure!" - and yep, it is. Fortunately, we're specifically talking about the smartness of Tech CEOs, we have a reasonably small set of them to be able to quickly model them adequately closely.
So here's my implmentation of such a model, written in a modern, strongly typed, memory safe language. It compiles, it works, it passes the tests, so it must be correct:
pub trait IsSmart {
fn is_smart(&self) -> Option<bool>;
}
pub trait TechCEOSmartness {
fn is_smart(&self) -> Option<bool> {
return Some(false);
}
}
struct TechCEO;
impl TechCEOSmartness for TechCEO {}
struct Person;
impl IsSmart for Person {
fn is_smart(&self) -> Option<bool> {
None
}
}
fn main() {
let sam_altman = TechCEO;
println!("Is Sam Altman smart? {}", sam_altman.is_smart().unwrap_or_default());
let person = Person;
println!(
"Is a random person smart? {}",
person
.is_smart()
.map_or_else(|| "Not decidable", |v| if v { "yeah" } else { "nay" })
)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn is_tech_ceo_smart() {
let tech_ceo = TechCEO;
assert_eq!(tech_ceo.is_smart().unwrap(), false);
}
#[test]
fn is_person_smart() {
let person = Person;
assert_eq!(person.is_smart(), None);
}
}
@michael I'm sorry, but technology that relies on blatant theft and offloading costs to everyone else1, one that comes at great environmental and social harm is in no way impressive.
Even if they sold what these things for what they really are, they'd still be selling shit. Shiny shit, but shit nevertheless.
The only impressive things here is that they continue to get away with all this, and the impressive amount of human art and knowledge there exists in the world, that was available for them to train on.
@sb Lying is dumb, so that reinforces my theory. ;)