Conversation

While environmentalists are losing their shit about the carbon footprint of blockchains and LLMs, I have another question: has anyone tried to estimate the comparative carbon footprint of JS/JVM/.NET JIT compilers? I mean, like, seriously: instead of shipping pre-compiled binary artifacts, we force millions of clients with nearly identical hardware and software configurations to perform the same set of unnecessary heavy computations every time someone visits a random web page or opens a random app

@d_olex I have wondered the same about CI/CD pipelines and containerisation.

@gsuberland Yeah, this leads to another interesting question: how much is it possible to optimize your workload if you get rid of the "security" and "isolation" layers/capabilities that modern systems are full of for the sake of unification rather than actual necessity

@gsuberland On my performance-first machines I have micro-architectural security mitigations disabled, but what if we shipped a hypothetical pseudo-POSIX system that doesn't have any security/isolation at all

@d_olex Good question, but I'd argue that bytecode solves existing problems, while in the case of LLMs/blockchain I mostly don't see that. Also, isn't JIT specifically a thing to improve performance, meaning less resource consumption? A related observation is that many use cases for LLMs can probably be solved much cheaper, today. E.g.: better IDE features; more QA for web search results; better education so people can write and understand an email.

@buherator JIT improves performance when the rival is a source code interpreter or a bytecode VM; I'm speaking about shipping already-compiled code for the target platform/architecture

@d_olex @buherator we do have that for many JITs, but there can be tradeoffs in things like crash telemetry. still, for cases where you don't care about crash telemetry, there's a strong case to be made for shipping native ahead-of-time JIT'd binaries (and multiarch where possible)

@d_olex Yeah, I get that. My point is (but I'm unsure about the history here) that when Java or the first browser JS engines were shipped, inefficient solutions were probably necessary, and now we're trying to reduce that debt, while in the case of your modern examples we probably have cheaper solutions that work better, but burning GPUs is sexier.

@gsuberland @buherator What is “crash telemetry” in this context? Never heard this term before

@d_olex some gains to be made there for sure (the right compensating controls for the right threat models and all that), although the main thing I was thinking was that most ephemeral containerised workflows (especially CI/CD) pull down and install packages each time they are invoked, which is a huge time and energy sink. and I suspect they even represent a poor cost and time tradeoff in aggregate vs. manual environment creation, since the more you use it the more time you have spent waiting.

@d_olex @buherator software with big user counts (think along the lines of browsers or stuff like Steam or Spotify) often ships with a crash handler program that gets invoked when the program crashes, and sends a telemetry ping back to the vendor, like a private version of what WerFault does. the telemetry usually includes metadata about the fault to help with debugging, and there are backends with clever triage features like stacktrace grouping to help prioritise bugs that hit many people.
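
For readers who want something concrete: a minimal sketch of what such an in-process crash handler might look like for a native POSIX program (hypothetical code, not taken from any product mentioned in this thread). It catches fatal signals, records the signal number and faulting address, which is the kind of metadata a telemetry ping would carry, and then lets the default handler take over.

```c
/* Hypothetical sketch: an in-process crash handler that records the
 * signal number and faulting address (the kind of metadata a crash
 * telemetry ping would carry), then re-raises the signal so the
 * default handler still produces the crash/core dump. */
#define _POSIX_C_SOURCE 200809L
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void crash_handler(int sig, siginfo_t *info, void *ctx)
{
    char msg[128];
    /* Strictly speaking only async-signal-safe calls belong here;
     * snprintf is used for brevity. */
    int n = snprintf(msg, sizeof(msg), "crash: signal=%d addr=%p\n",
                     sig, info->si_addr);
    if (n > 0)
        write(STDERR_FILENO, msg, (size_t)n);
    /* A real reporter would hand this data to a separate uploader
     * process here. Fall back to the default action. */
    signal(sig, SIG_DFL);
    raise(sig);
    (void)ctx;
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_sigaction = crash_handler;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);
    sigaction(SIGABRT, &sa, NULL);

    volatile int *p = NULL;
    *p = 42;    /* deliberate crash to exercise the handler */
    return 0;
}
```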

@d_olex @buherator native AoT binaries often don't have the same metadata available so the crash data isn't as easy to work with. you don't get quite as rich a picture of what was going on when the crash occurred.

@gsuberland @buherator You don't need JIT for that, I did exactly the same for regular C/C++ programs

@gsuberland @buherator Debug log + minidump is pretty much sufficient to troubleshoot any bugs; the cases where it isn't are a typical "skill issue" problem
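
As a rough illustration of the "debug log + minidump" approach for a regular native program, here is a hypothetical Windows-only sketch (assuming the dbghelp API and linking against dbghelp.lib): an unhandled-exception filter writes crash.dmp, which can later be opened in a debugger alongside the application's own log.

```c
/* Hypothetical sketch of "debug log + minidump" for a native Windows
 * program (link with dbghelp.lib). An unhandled exception filter writes
 * crash.dmp, which can be opened in WinDbg/Visual Studio together with
 * the application's debug log. */
#include <windows.h>
#include <dbghelp.h>
#include <stdio.h>

static LONG WINAPI write_minidump(EXCEPTION_POINTERS *ep)
{
    HANDLE file = CreateFileA("crash.dmp", GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file != INVALID_HANDLE_VALUE) {
        MINIDUMP_EXCEPTION_INFORMATION mei;
        mei.ThreadId = GetCurrentThreadId();
        mei.ExceptionPointers = ep;
        mei.ClientPointers = FALSE;

        /* MiniDumpWithIndirectlyReferencedMemory captures more context
         * than MiniDumpNormal while staying far smaller than a full dump. */
        MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(), file,
                          MiniDumpWithIndirectlyReferencedMemory,
                          &mei, NULL, NULL);
        CloseHandle(file);
    }
    return EXCEPTION_EXECUTE_HANDLER;   /* terminate after dumping */
}

int main(void)
{
    SetUnhandledExceptionFilter(write_minidump);

    printf("log: starting up\n");       /* stand-in for the debug log */
    volatile int *p = NULL;
    *p = 1;                             /* deliberate crash */
    return 0;
}
```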

@d_olex @buherator I know, but the issue is that if your developers are working in a non-native language then translating the native crash data back into something that they can understand in their original code flow can be more difficult. it's not impossible, but it's less straightforward.

(keep in mind that most devs aren't like us and don't RE the guts of stuff all the time lol)

@d_olex @buherator another issue is that AoT can restrict certain language features like reflection or dynamic typing, which is frequently relied upon for stuff like network API bindings. this can actually be a really annoying blocker. some languages are better at it than others.

@d_olex @buherator on the plus side, in .NET at least, the GAC persistently caches the JIT'd binary, so you aren't re-JIT'ing every program launch.

@gsuberland @buherator So, it basically boils down to a "we already have too many people who have already been taught to use the legacy inefficient tech we already have" problem, which is political/administrative rather than purely technical

@d_olex @buherator ehhhh, idk on that front. there's strong usability value in managed languages. I think this ultimately becomes a philosophical question. the pragmatist in me would prefer to pick easier battles that net bigger energy reductions at scale.

@gsuberland @buherator Good couter-example — OS kernel and computer game developers are somehow capable of shipping of highly sophisticated and optimized native code to millions of customers without bothering about that

@d_olex @buherator yup, but there's a different mindset there and different tradeoffs. I wouldn't want a kernel dev doing UI/UX development work; I've seen that software and it's horrrribbble lmao

@gsuberland @buherator True, but at the same time modern UI/UX development is incomparably simpler than low-level code. I think we all lost our minds when we started to assume that it's somehow difficult to write mundane client-side stuff like Spotify or an e-mail app

@buherator I can't say for sure about LLMs/AI because I'm kinda new to this tech, but crypto-currencies (and therefore blockchain) are solving the "I want to transfer an arbitrary amount of funds to a random person on the other side of the world without hitting financial regulations and committing a shitload of paperwork" problem, which is… also political rather than technical

@buherator Basically it's about fighting to take back your economic freedom from oppressive governments who have over-regulated the global banking system to the point where it became completely unusable

@d_olex @gsuberland @buherator i couldn't disagree more; i find FPGA development (both RTL and toolchain) nearly trivial compared to UI/frontend

@whitequark @gsuberland @buherator These are different types of complexity. Hardware has natural complexity because the subject area itself is difficult; UI has artificially introduced complexity because of horrible frameworks/toolkits written by horrible people

@whitequark @gsuberland @buherator There were also good frameworks/toolkits written by good people, like Borland's, that made UI development more or less trivial. The industry killed them because as a developer you can't ask for a >$200k salary for being good at C++Builder

@d_olex @gsuberland @buherator I started with C++ Builder and it was neither trivial (ever made a VCL Component? holy shit) nor made for very good UIs (no DPI scaling, not responsive to system fonts, not especially accessible because Delphi quietly reinvented a lot of system stuff and then people using it added on top...)

CLX was a disaster too

@d_olex @gsuberland @buherator Embarcadero is still around, you can download and use C++ Builder, it apparently makes iOS apps nowadays (no idea how), the idea that it was killed by SV careerists is delirious

@whitequark @gsuberland @buherator I'll have to try it, haven't used it for ages. To my memory it was better than Qt and GTK

@d_olex @gsuberland @buherator I started with C++ Builder 1 and transitioned from C++ Builder 6 to Qt 3 (you bringing it up is very much a "I was there when it was written" moment, ha) and haven't looked back. Qt had/has Qt Designer which was a very similar RAD tool with a _much_ better layout engine, though these days you'd be using Qt Quick which is a fair bit different

CLX was (I think I tried it out only once) a very bad port of VCL on top of Qt or something, I think?

@d_olex @gsuberland @buherator I think I've used C++ Builder 7 maybe once or twice and haven't used it since because it or the applications it generated didn't really run on Linux (you can actually get it to work on wine these days), plus it wasn't open source, plus the build system and reusability was atrocious. try making a VCL component and giving it for someone to use. it involved COM registration. what a fucking nightmare, I'm so glad I will never have to do that again

@d_olex @gsuberland @buherator separately from that, the complexity of hardware mostly lies in shit languages (Verilog wouldn't be considered a serious contender anywhere else) and shit toolchains (Vivado is the best of the pack for proprietary ones and that's saying something). the subject area is easier than comparable programming (i.e. anything highly parallel) because if A and B aren't connected A can't affect B, unlike in software. you're mostly fighting tools all day every day

@whitequark @gsuberland @buherator I'd say that software complexity is more or less constant, while hardware complexity tends to increase when things become too small or too fast: charge-leak bit flips, EM interference, and signal integrity & RF black magic don't fit well into the simplistic "A can't affect B" view

@whitequark @gsuberland @buherator But I guess it all depends on background and developed habits, anyway: for me it's much simpler to deal with high performance asynchronous network applications using epoll() than, let's say, high-speed serial interfaces using FPGA transceivers
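
For context, the epoll() pattern referred to here is roughly the following: one thread registers sockets with the kernel and reacts to readiness events as they arrive. A minimal hypothetical echo-server sketch (Linux-only, error handling omitted for brevity):

```c
/* Hypothetical sketch of an epoll()-based event loop: a single thread
 * multiplexes the listening socket and all client sockets, the pattern
 * used for high performance asynchronous network applications. */
#include <sys/epoll.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <string.h>
#include <unistd.h>

#define MAX_EVENTS 64

int main(void)
{
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(7777);
    bind(listener, (struct sockaddr *)&addr, sizeof(addr));
    listen(listener, SOMAXCONN);

    int ep = epoll_create1(0);
    struct epoll_event ev = {0}, events[MAX_EVENTS];
    ev.events = EPOLLIN;
    ev.data.fd = listener;
    epoll_ctl(ep, EPOLL_CTL_ADD, listener, &ev);

    for (;;) {
        int n = epoll_wait(ep, events, MAX_EVENTS, -1);
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == listener) {
                /* New connection: register the client socket. */
                int client = accept(listener, NULL, NULL);
                ev.events = EPOLLIN;
                ev.data.fd = client;
                epoll_ctl(ep, EPOLL_CTL_ADD, client, &ev);
            } else {
                /* Data (or hangup) on a client socket: echo it back. */
                char buf[4096];
                ssize_t len = read(fd, buf, sizeof(buf));
                if (len <= 0) {
                    epoll_ctl(ep, EPOLL_CTL_DEL, fd, NULL);
                    close(fd);
                } else {
                    write(fd, buf, (size_t)len);
                }
            }
        }
    }
}
```

A production loop would typically put the sockets into non-blocking mode and possibly use edge-triggered notification (EPOLLET), but the overall structure stays the same.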
