Conversation

Ubuntu is now allowing users to disable security mitigations Intel has baked into its GPU components. People are claiming the setting provides up to a 20% boost in performance. I'm still trying to understand more about the mitigations, but they appear to involve defending against Spectre-based attacks. Is this wise? On the one hand, I'm not aware of a single Spectre-based attack in the wild. On the other hand, you're leaving yourself potentially exposed. Thoughts?

https://www.phoronix.com/news/Disable-Intel-Gfx-Security-20p

8
4
1

@dangoodin

i'm gonna need more coffee.

and probably some tylenol.

1
0
0

@neurovagrant @dangoodin some big questions:

- are there even spectre attacks that are not purely academic in nature?
- remember BEAST and CRIME? they were "a big deal", but turned out to be nothingburgers because "you had to have root on the boxes at each end", defeating the purpose of using them for malware
- isn't this sort of the same? you have to 'own the box already' to exploit this?

it would be another story if it was being exploited in the wild

3
0
0

@neurovagrant @dangoodin i figure this way:

any "exploit" that requires you to have root on the box already isn't an exploit.

1
0
0

@neurovagrant @dangoodin the people who should be chiefly worried about this are cloud providers.

if you abuse this on some mammoth system, at the hypervisor level, in like, aws or azure - okay sure, now we're talking.

but individual people with laptops and workstations?

meh

0
0
0

@darkuncle @neurovagrant @dangoodin

okay
lets trade then

you show me all the victims of spectre-based attacks

and ill show you a 20% bump in systems performance.

ready?

0
0
0

@dangoodin This is honestly without looking it up: I think Spectre was a multi-processing attack (or multi-threading?). So the fixes were really only relevant on multi-user machines and for hosting providers; for single-user home machines the mitigations were always superfluous.

2
0
0

@astifter @dangoodin the attacks can be implemented when any attacker-controlled code runs on a user's system, so there are scenarios outside multi-user sessions where it can be relevant, e.g. untrusted code running in a browser (JS, wasm, etc.)

in practice nobody's actually doing those attacks though. no point burning the engineering time for almost no chance of getting anything interesting back out of it.

0
0
0

@astifter

As @gsuberland notes, the attack is not strictly relevant only for multi-user machines. As long as I'm executing untrusted JavaScript or installed apps, I am at least theoretically vulnerable. Or at least that's my understanding.

0
1
0

@dangoodin nobody bothers attacking these vulns because it takes a lot of engineering time to implement attacks against them to any useful level of rigor, and getting any interesting data back outside very targeted scenarios is very unlikely (plus it's noisy due to the number of iterations you need to do on these types of side-channels). the economics just don't stack up for attackers, especially when there are so many lower-effort higher-reward attack approaches they can throw at stuff.

3
0
1

@dangoodin for most people it's just not a realistic part of their threat model so turning the mitigations off is fine.

you'd want it switched on in a cloud host of course, but for end-users... yeah, no biggie.

0
0
0

@dangoodin USB attacks were theoretical until they weren't too

If we have a lack of actual exploits, it's presumably because most people have the mitigations, making them not worth deploying

Do you run code you didn't personally authenticate? Do you run JavaScript in your browsers?

I'd leave them on unless you have a very special case system where all the software is vetted and externally provided code doesn't exist and you really need that performance boost

1
0
0

@igrok

I see your overall point, but when were USB attacks only theoretical? They have been a real world threat for, what, two decades now, right?

1
0
0

@dangoodin @gsuberland from the user perspective it's risk/reward too, and for e.g. your average gaming PC the risk is very low and the reward often pretty high. probably don't disable side-channel mitigations on multitenant servers, but on your gaming rig? you have a much higher threat from downloading malware that does literally anything else

2
0
0

@demize @gsuberland

I get your point, but just to clarify: my understanding is that gamers will not see any performance boost from disabling these mitigations.

1
1
0
@Viss @neurovagrant @dangoodin I think a better analogy would be Stagefright, where target diversity was a major factor blocking widespread abuse, IIRC: based on my recent experiments with side channels, target HW can have significant effects.

FTR, this is an example of targeting end-user applications:

https://www.youtube.com/watch?v=ugZzQvXUTIk

And don't forget: as SW mitigations (or even HW assisted ones) get better, attackers may turn to more "painful" alternatives...
0
1
1

Thanks for all your comments so far. Can any of you with Ubuntu familiarity tell me how the change will work? The mitigations will be disabled by default, yes? What if a user wants them on? Will that be easy to do?

1
1
0

@dangoodin @demize yeah this seems mostly confined to GPU compute, but it's the same overall point - you probably don't care about Spectre on an end-user machine where you're doing GPU compute tasks.

0
0
0

@igrok

Oh, by USB attack, you mean juicejacking, yes?

0
0
0

@dangoodin I wasn't sure what hardware this applies to, but it's listed here: https://github.com/intel/compute-runtime -- both integrated and discrete GPU generations over the last decade. I'm not sure if the change will be applied to the "legacy" platform code (gen 8 to gen 11).

Is there dev discussion of adopting this upstream?

0
1
0

@dangoodin another question I have is: how many mitigations can be avoided in the first place if we moved more code to memory-safe languages?

1
0
1

@josh

I'm pretty sure memory safe code is irrelevant to this class of attack.

1
1
1

@gsuberland @dangoodin i understand from https://bugs.launchpad.net/ubuntu/+source/intel-compute-runtime/+bug/2110131 that the security mitigations are no longer needed in the gpu compute runtime because the kernel takes care of it

1
0
0

@fanf @gsuberland @dangoodin yea, it looks like an obsolete mitigation in a userspace GPU library is being disabled. I suspect the biggest risk here would be if the user's kernel had Spectre mitigations disabled.

1
1
0

@tmaher @fanf @gsuberland

Interesting. How easy is it to disable Spectre mitigations in the user kernel?

1
1
0

@dangoodin @fanf @gsuberland my recollection is it's doable with kernel boot params, but I would defer to a hardware expert. https://ostechnix.com/how-to-make-linux-system-to-run-faster-on-intel-cpus/ looks like plausible instructions.

So not difficult (edit the GRUB configuration and reboot), but if a user is deliberately disabling Spectre mitigations at the kernel level… I am hard pressed to imagine why they'd want a userspace library to enforce the mitigations. Maybe if they know the only path to malicious code being executed is guaranteed to go through this userspace mitigation, AND the user wants the perf improvement from disabling Spectre mitigations in kernel space… but at that point the user is advanced enough to recompile the GPU library.

Maybe the user is worried that the Intel and Canonical security teams are wrong, and there is a theoretical exploit the kernel misses that this GPU mitigation catches?
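
For the curious, a minimal sketch of what "kernel boot params" means in practice on Ubuntu (this disables the mitigations system-wide; the filenames and the `mitigations=off` parameter are standard, but check your distro's docs before trying it):

```shell
# In /etc/default/grub, append "mitigations=off" to the kernel command line:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mitigations=off"

# Regenerate the GRUB config and reboot for it to take effect:
sudo update-grub
sudo reboot

# After reboot, see what the kernel reports per vulnerability:
grep . /sys/devices/system/cpu/vulnerabilities/*
```

`mitigations=off` is the blanket switch; individual knobs like `spectre_v2=off` also exist if you only want to relax one mitigation.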

0
1
0

Another question: if Spectre is a vulnerability affecting CPUs, why did Intel create mitigations for GPUs?

3
1
0

@dangoodin it's a family of vulnerabilities that generally affect certain optimizations. The main idea behind Spectre is that the CPU (or GPU) falsely predicts that a certain code path is likely to be executed (for example, assuming that an authentication check will succeed before e.g. a signature verification completes), and that the execution of this code itself, even if undone by later actions, already leaks the information the attacker wanted, usually through side channels.

2
2
0

@dangoodin this explains the performance gains observed, as the system can effectively parallelize a lot more actions without requiring expensive synchronization points between the cores. It also explains why the mitigations apply to GPUs: the general principle behind the vulnerability does not care about the type of processor. If anything, something massively parallel like a GPU wants to do branch prediction even more liberally than a CPU.

0
2
0

@dangoodin Meltdown was CPU-specific. Spectre impacted TONS of "general purpose processors", including GPUs. Nvidia, for example, released a driver update in 2018 that patched it in then-current Nvidia GPUs.
https://www.zdnet.com/article/spectre-mitigations-arrive-in-latest-nvidia-gpu-drivers/

Because at their heart, modern GPUs ARE "CPUs" in all but name.

0
1
0

@sophieschmieg

Interesting. I thought speculative execution was only a thing for CPUs. I didn't know GPUs behave and can be exploited the same way. So just as Intel restricts some of the ways CPUs do speculative execution, it also creates speculative-execution restrictions for GPUs, yes?

2
0
0

@dangoodin @sophieschmieg Remember, CPUs and GPUs are tightly linked, physically and architecturally, in most contemporary systems. This means exploits that technically target one can directly impact the other, so in many cases mitigations need to be considered (and potentially deployed) in both.

0
0
0

@dangoodin very out of my depth here for sure (I am not a systems programmer). it’s just something that always crosses my mind 🤷‍♂️

0
0
1

@dangoodin Being able to disable it sounds like a good idea: one use of GPUs is for compute-bound programs - simulations, data analysis, etc. - that run for quite a few hours and that might be run on dedicated computers in a lab. A 20% gain in performance can mean being able to start a new run just before leaving for the day, or just before going to bed if you log in from home.

1
0
1

@dangoodin For a concrete example, people I've met who do simulations of merging black holes have mentioned that they may have to make multiple tries to get one to work. The problem is handling the black holes' singularities: you have to treat that specially because the equations blow up. Getting results faster is important when it is "publish or perish".

0
0
0