While environmentalists are losing their shit about the carbon footprint of blockchains and LLMs, I have another question: has anyone tried to estimate the comparative carbon footprint of JS/JVM/.NET JIT compilers? I mean, like, seriously: instead of shipping pre-compiled binary artifacts, we're forcing millions of clients with nearly identical hardware and software configurations to perform the same set of unnecessary heavy computations every time someone visits a random web page or opens a random app
@d_olex I have wondered the same about CI/CD pipelines and containerisation.
@gsuberland Yeah, this leads to another interesting question: how much could you optimize your workload if you got rid of the “security” and “isolation” layers/capabilities that modern systems are full of for the sake of unification rather than actual necessity?
@gsuberland On my performance-first machines I have micro-architectural security mitigations disabled, but what if we shipped a hypothetical pseudo-POSIX system that doesn’t have any security/isolation at all?
@buherator JIT improves performance when the rival is a source-code interpreter or a byte-code VM; I’m talking about shipping already-compiled code for the target platform/architecture
@d_olex @buherator we do have that for many JITs, but there can be tradeoffs in things like crash telemetry. still, for cases where you don't care about crash telemetry, there's a strong case to be made for shipping native ahead-of-time JIT'd binaries (and multiarch where possible)
@gsuberland @buherator What is “crash telemetry” in this context? I’ve never heard this term before
@d_olex some gains to be made there for sure (the right compensating controls for the right threat models and all that), although the main thing I was thinking of was that most ephemeral containerised workflows (especially CI/CD) pull down and install packages each time they are invoked, which is a huge time and energy sink. and I suspect they even represent a poor cost and time tradeoff in aggregate vs. manual environment creation, since the more you use it, the more time you spend waiting.
@d_olex @buherator software with big user counts (think along the lines of browsers or stuff like Steam or Spotify) often ships with a crash handler program that gets invoked when the program crashes, and sends a telemetry ping back to the vendor, like a private version of what WerFault does. the telemetry usually includes metadata about the fault to help with debugging, and there are backends with clever triage features like stacktrace grouping to help prioritise bugs that hit many people.
@d_olex @buherator native AoT binaries often don't have the same metadata available so the crash data isn't as easy to work with. you don't get quite as rich a picture of what was going on when the crash occurred.
@gsuberland @buherator You don’t need JIT for that, I did exactly the same for regular C/C++ programs
@gsuberland @buherator A debug log + minidump is pretty much sufficient to troubleshoot any bug; the cases where it’s not are a typical “skill issue” problem
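(Editor's note: for readers unfamiliar with the technique being discussed, here is a minimal sketch of a native crash handler that writes a minidump on Windows, along the lines of what d_olex describes. It assumes MSVC and linking against dbghelp.lib; the filename, dump flags, and error handling are illustrative, and a real handler would also upload the dump.)

```cpp
// Minimal crash-to-minidump handler sketch (Windows, link with dbghelp.lib).
#include <windows.h>
#include <dbghelp.h>

static LONG WINAPI CrashHandler(EXCEPTION_POINTERS* info)
{
    // Open a file to receive the dump (path/name is an illustrative choice).
    HANDLE file = CreateFileW(L"crash.dmp", GENERIC_WRITE, 0, nullptr,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (file != INVALID_HANDLE_VALUE)
    {
        MINIDUMP_EXCEPTION_INFORMATION mei;
        mei.ThreadId = GetCurrentThreadId();
        mei.ExceptionPointers = info;
        mei.ClientPointers = FALSE;

        // MiniDumpNormal keeps the dump small; richer types
        // (e.g. MiniDumpWithFullMemory) trade size for detail.
        MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(), file,
                          MiniDumpNormal, &mei, nullptr, nullptr);
        CloseHandle(file);
        // A real handler would queue crash.dmp for upload to the vendor here.
    }
    return EXCEPTION_EXECUTE_HANDLER; // let the process terminate after dumping
}

int main()
{
    SetUnhandledExceptionFilter(CrashHandler);
    // ... application code; any unhandled SEH exception now produces crash.dmp
    return 0;
}
```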
@d_olex @buherator I know, but the issue is that if your developers are working in a non-native language then translating the native crash data back into something that they can understand in their original code flow can be more difficult. it's not impossible, but it's less straightforward.
(keep in mind that most devs aren't like us and don't RE the guts of stuff all the time lol)
@d_olex @buherator another issue is that AoT can restrict certain language features like reflection or dynamic typing, which is frequently relied upon for stuff like network API bindings. this can actually be a really annoying blocker. some languages are better at it than others.
@d_olex @buherator on the plus side, in .NET at least, the native image cache (populated by NGen) persistently caches the compiled binary, so you aren't re-JIT'ing on every program launch.
@gsuberland @buherator So it basically boils down to a “we have too many people who’ve already been taught to use the legacy, inefficient tech we already have” problem, which is more political/administrative than purely technical
@d_olex @buherator ehhhh, idk on that front. there's strong usability value in managed languages. I think this ultimately becomes a philosophical question. the pragmatist in me would prefer to pick easier battles that net bigger energy reductions at scale.
@gsuberland @buherator Good counter-example: OS kernel and computer game developers are somehow capable of shipping highly sophisticated and optimized native code to millions of customers without worrying about any of that
@d_olex @buherator yup, but there's a different mindset there and different tradeoffs. I wouldn't want a kernel dev doing UI/UX development work; I've seen that software and it's horrrribbble lmao
@gsuberland @buherator True, but at the same time, modern UI/UX development is incomparably simpler than low-level code. I think we all lost our minds when we started to assume that it’s somehow difficult to write mundane client-side stuff like a Spotify or e-mail app
@buherator I can’t say for sure about LLMs/AI because I’m kinda new to this tech, but cryptocurrencies (and therefore blockchain) are solving the “I want to transfer an arbitrary amount of funds to a random person on the other side of the world without hitting financial regulations and doing a shitload of paperwork” problem, which is… also more political than technical
@buherator Basically it’s about winning back your economic freedom from oppressive governments that have over-regulated the global banking system to the point where it became completely unusable
@d_olex @gsuberland @buherator i couldn't disagree more; i find FPGA development (both RTL and toolchain) nearly trivial compared to UI/frontend