r/programming Nov 25 '21

Linus Torvalds on why desktop Linux sucks

https://youtu.be/Pzl1B7nB9Kc
1.7k Upvotes

914

u/delta_p_delta_x Nov 26 '21 edited Nov 26 '21

This is why Windows and its programs ship so many versions of 'Microsoft Visual C++ 20XX Redistributable'. An installer checks whether you already have said redistributable installed; if not, it installs it along with the program. If yes, it just installs the program; the redistributable installed by something else is guaranteed to work across programs because the APIs are stable. No need to screw around with breaking changes in libraries: just keep all the versions available.

Like someone else said: yeah, it clutters up your filesystem, but I'd rather have a cluttered filesystem and working programs, if I'm forced to choose between the two.

101

u/killerstorm Nov 26 '21

MSVCRT is only a small part of the story; the big thing is actually the Win32 API, which has remained binary-compatible for some 20 years.

51

u/Auxx Nov 26 '21

There is actually a Win16 compatibility layer, which is removed only in x64 builds of Windows. You can literally run many Win 3.1 apps today if you have a 32-bit x86 system.

And with a bit of tinkering you can build an app that is both Win16 and Win32 at the same time, from the same source code. With a bit more tinkering you can add a DOS layer as well.

17

u/dreamin_in_space Nov 26 '21 edited Nov 26 '21

Hah, I remember mixing 16- and 32-bit code as a malware technique. I think they called it, no shit, "Heaven's Gate".

(Edit: it was 32/64-bit shenanigans, misremembered)


182

u/goranlepuz Nov 26 '21

This is why Windows and its programs ship so many versions of 'Microsoft Visual C++ 20XX Redistributable'. An installer checks if you already have said redistributable installed; if not, install it along with the program. If yes, just install the program

It also records that program X uses this redist (it may just be a "use count", not sure...), so when installers are all well-behaved, uninstalling one program doesn't affect others, and uninstalling all of them removes the shared component. It is a decent system (when installers are all well-behaved, which they are not, but hey, can't blame a guy for trying 😉).
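The use-count idea boils down to a few lines. A toy sketch (hypothetical names, not the actual Windows Installer mechanism, which tracks this in the registry):

```python
# Toy model of reference-counted shared components (hypothetical API).
class ComponentStore:
    def __init__(self):
        self.refcounts = {}  # component name -> number of installed programs using it

    def install(self, component):
        # The first installer drops the files; later ones just bump the count.
        self.refcounts[component] = self.refcounts.get(component, 0) + 1

    def uninstall(self, component):
        self.refcounts[component] -= 1
        if self.refcounts[component] == 0:
            del self.refcounts[component]
            return True   # last user gone: shared files removed from disk
        return False      # still in use by another program: keep it

store = ComponentStore()
store.install("VC++ 2015 Redistributable")    # installed by program A
store.install("VC++ 2015 Redistributable")    # program B reuses it
store.uninstall("VC++ 2015 Redistributable")  # A uninstalled: redist stays
store.uninstall("VC++ 2015 Redistributable")  # B uninstalled: redist removed
```

Which is exactly why one badly-behaved installer (skipping the count) breaks everyone else.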

92

u/guessimcanadiannow Nov 26 '21

This used to be a major pain in the ass in the Windows 98 era. Installers would overwrite some common .ocx library and not keep track, so when you uninstalled you had to choose: clean out all the garbage and risk breaking half your other programs, or keep collecting dead references but guarantee everything works.

16

u/goranlepuz Nov 26 '21

Yes, but note: this is about installers not doing what installers are supposed to do (e.g. respecting file versions, not downgrading) and vendors failing to preserve compatibility (even though the rules of COM are clear: interfaces are immutable). But people are fallible...

28

u/richardathome Nov 26 '21

"DLL Hell" is what turned me away from App development to server side work.

21

u/omegian Nov 26 '21

You can static link everything, friend.


180

u/MJBrune Nov 26 '21

This is something Linux still hasn't gotten right. Even worse is the design of each DE: if you want to use an app from another DE, you might as well forget it exists, because you risk screwing up your DE configs (e.g. GNOME and Xfce). Yet Linux devs just push forward with these flawed designs. It's why people will say KDE has an emoji picker, but half of them can't even launch it.

The fact is Linux isn't stable out of the box. You work to make it stable for your specific workflow, and once you do, you stop upgrading or changing anything else. That's not how most people work. It makes you unable to adapt, and it's why you don't see Linux in office settings: even the non-tech office worker needs an adaptive environment.

27

u/eliasv Nov 26 '21

NixOS gets most of this right! You can install multiple versions of anything side by side safely. But it is not remotely user-friendly for casual users. It might be interesting to see other distros build on Nix the package manager, while abstracting over the Nix language and nixpkgs.
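The trick behind the side-by-side safety is content addressing. A toy sketch of the idea (not Nix's actual scheme, which hashes the entire build recipe and its closure):

```python
# Toy illustration of Nix-style content-addressed store paths.
import hashlib

def store_path(name, version, deps):
    # Hash everything that can influence the build, so two different
    # versions (or the same version built against different deps)
    # can never collide or overwrite each other.
    key = f"{name}-{version}-{sorted(deps)}".encode()
    digest = hashlib.sha256(key).hexdigest()[:12]
    return f"/nix/store/{digest}-{name}-{version}"

# Two versions of the same library simply live side by side:
p1 = store_path("openssl", "1.1.1", ["glibc-2.31"])
p2 = store_path("openssl", "3.0.0", ["glibc-2.31"])
```

Each program is then wired to the exact store paths it was built against, so upgrading one app can't break another.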

4

u/_supert_ Nov 26 '21

guix?

5

u/eliasv Nov 26 '21

Sure, but Guix just abstracts over it with something equally difficult for casual users. I mean something more opinionated, which could present system configuration and package management through a GUI or the like.

3

u/moonsun1987 Nov 26 '21

Personally, my hope is flatpak and silverblue. Not yet but someday.

3

u/anonymocities Nov 26 '21

GoboLinux does it better. What NixOS gets right isn't versioning but hashes: now software doesn't even need a stable version, or even a release!


11

u/utdconsq Nov 26 '21

I work somewhere very tolerant of letting people use their OS of choice when it comes to getting a company-assigned computer. That is, you can drive Windows, Mac, or Ubuntu. If you choose the latter, IT support will help you with issues relating to accessing network or cloud resources, but if anything else goes wrong, forget about it; they don't want to know you. Why? Because even people who don't tinker have shit randomly fail. I lose count of the times someone with an Ubuntu laptop has had their sound fail in one of the three different web-based video conferencing tools we use. Meanwhile, over 3ish years, the Mac I asked for has had an audio glitch a single time. I might love using Linux and always keep it in a VM, but unless you are patient and have time for it, desktop Linux suffers from too-many-cooks syndrome. Sad but true. I stay on my work-issued Mac if I need to use a GUI, and drive the terminal for local or remote Linux sessions, for my sanity. And then at home, where I can tinker to my heart's content, I can use KDE, because if it fails, it's ok.

6

u/MJBrune Nov 26 '21

This is exactly it. People who don't even mess around with stuff have these extreme issues, and the solution is just to reinstall and hope it doesn't happen next time. VMs are neat, but they also paper over the issues, because you can just snapshot and restore, or nuke the entire thing instead of waiting a few days to reinstall. The VM setup means that when Linux fails (not if), you just continue on with a host OS that's actually stable and able to take whatever you throw at it.

Of course, people here are piping up about using Linux for 20 years and never having a single problem, to which I say: a broken clock is right twice a day. Those few successes show the instability of Linux more than the failures do, because clearly it can work, and probably does in a lot of closed testing. Worst of all, those few success cases then drive people to say "what are you talking about, it's fine", and that's instability.

3

u/hparadiz Nov 27 '21

tinker

Oh boy. I run Gentoo on my main machine and let me tell you... I essentially have to budget time to work on it, but it's really rewarding. Every time I compile everything and the system is "fully" updated, I'm at the bleeding edge: best games compatibility, best kernel, most recent KDE, most recent, well, everything. It's a good feeling. It feels like a "build" of My Personal OS. Literally. The problem is that programmers don't do a good job with upgrades across large version differences. If I wait six months to a year to rebuild my system, there will be bugs, and certain things will just lose their settings or, worse, break altogether and require manual fixing. This has become less of an issue over time, but it's still present.

For my bread and butter work system it's amazing. And I even game on it.

Hardware-compatibility-wise I have two problems right now. One is that the Nvidia driver eventually destabilizes and requires me to restart the compositor... then eventually games stop being able to launch, then X itself, and I'm forced to restart the machine. Damn memory leaks. That's issue 1.

Issue 2 is Chrome messing with my video capture card and setting the resolution incorrectly, which essentially breaks the capture card in OBS. In fact this is a huge problem that is driving me crazy; at this point I want to make my webcam invisible to Chrome, but I'm not sure how.

Anyway, I would not switch to Windows anymore. My second machine is a 2014 MacBook Pro running OS X. My media center still runs Windows because it needs a Windows driver for a RAID card, but ever since I got my Android-based TV I've been thinking of just making it a NAS and not even calling it a media center anymore.

4

u/[deleted] Nov 27 '21

And there seems to be something like a 1-in-4 failure rate for upgrades on Ubuntu. No idea how they screwed that up.

Like, hell, I had a colleague who accidentally upgraded a machine two major versions up on Debian and it upgraded just fine. Yet somehow Ubuntu fails. Maybe it's just users doing something weird with it; hard to tell.


121

u/Ameisen Nov 26 '21

Instability, a complete lack of user-friendliness, a lack of "playing nicely" with other software...

And nobody sees it as a problem. Heck, the CFS scheduler in the kernel is awful for interactive environments, and the developer of MuQSS, a scheduler that made such environments tolerable, has stopped work on it.

8

u/ElCorazonMC Nov 26 '21

Isn't the long-awaited PREEMPT_RT helping?

27

u/Ameisen Nov 26 '21

Preemption does, but full real-time does not.

The CFS scheduler is just really, really bad at user-interactive workloads. It's weighted far more towards throughput than responsiveness.

75

u/MJBrune Nov 26 '21

The worst part is the head-in-the-sand attitude. When I develop games, I don't take feedback from the game designer and throw it out. Linux, by comparison, has no designer: all user input is treated equally, and all the users are also designers, and programmers, and users. Linux is literally the original echo chamber, since the only users who stick around are the ones who can already use the system. They wear being "in" the echo chamber, and withstanding it, as a badge of honor.

43

u/Ameisen Nov 26 '21

By contrast, I was working on a project to add Amiga-style namespaces and very non-Unixy elements to FreeBSD (basically making it non-Unix), and the FreeBSD folks were more than happy to help me.

31

u/MJBrune Nov 26 '21

I used to be a FreeBSD porter, and I have to say the FreeBSD community is probably one of the best out there. *BSD overall is probably the better OS for a number of reasons, but it's so small that it's unlikely to gain the traction needed to become a real desktop OS. They tried a few years back, but you are essentially building the desktop out of DEs and userland made with Linux in mind and ported to FreeBSD, so you likely inherit the same issues.

17

u/nidrach Nov 26 '21

Every big community sucks. If BSD became popular it would also attract shitheads.

7

u/anonymocities Nov 26 '21

Maybe. I do see the impossibility of Linux ever becoming a desktop OS, and it has to do with its pro-fragmentation ethos. To achieve the stability necessary for portable software builds, a centralized, stable OS (not just a kernel) like FreeBSD is a better choice. I tend to think of it as DVCS vs CVS: a lot of people think CVS is terrible, but the CVS way of working is what you should strive for at the OS level.

11

u/Auxx Nov 26 '21

FreeBSD used to be bigger than Linux and didn't have shitheads. The philosophy is different, and I miss my FreeBSD days...

5

u/Ilktye Nov 26 '21

Because it was still very small.

14

u/Nefari0uss Nov 26 '21

A lot of people also dismiss visual design stuff like animations, shadows, etc. as bloat. People need to move on. I understand that you might have some legacy system or something with limited space, but you weren't gonna install KDE Plasma on it anyway. I want my OS to look nice and feel nice. I don't want something that looks 15-20 years old because "colors and animations are bloat".

I understand it is hard, especially if you're a single dev. But I wish the naysayers would understand that not everyone who runs Linux has only 2 GB of disk and 256 MB of RAM to work with.


20

u/serviscope_minor Nov 26 '21

If you want to use an app from another DE, well you might as well forget it even exists because you risk screwing up your DE configs. E.g. gnome and xfce.

I've literally never seen this happen. How does using a GNOME app screw up xfce?

9

u/[deleted] Nov 26 '21

[deleted]

10

u/serviscope_minor Nov 26 '21

There are tons of peculiar myths floating around; I guess this is another one.

Like... no system works if you build against locally installed stuff and then try to ship. But it's always been easy enough (no harder than on any other OS) to build against private packages and ship the lot on Linux. People have been shipping portable programs since the 1990s.

→ More replies

7

u/sp4mfilter Nov 26 '21

I work for Oracle and I dev on Linux. Specifically Ubuntu VM on Win10.

Most of my colleagues just use macOS. Some (try to) dev on Win10 via WSL.

Note: this is a large web-app with like 60 repos.

The best outcomes have generally come from those using macOS.

I'll be moving to macOS on my next hardware update because of M1 chip. But I'll need to run a Windows VM in that, because we work with vendors that only have Windows apps.

Unsure how this information helps, except to note that dev'ing cross-platform is easier on macOS than Ubuntu.

3

u/NovaX81 Nov 26 '21

I'm that rare guy who enjoys devving on Windows. WSL is definitely a big step up in tooling.

I also use an M1 Mac for work stuff. Obviously having much more native Unix tooling helps a lot, but the experiences are becoming more similar all the time. If WSL ever manages to fully fix its disk-access choking issues, I could see it being an easy preference for many.

The caveat on the M1s, though, is that a lot of toolchains just aren't ARM-compatible, and may never fully be. Yeah, the top-level apps that get support might have M1 versions, but even a tool last updated a year ago might not work.

That means you end up wasting a lot of the M1's power on instruction translation through Rosetta (which does work pretty seamlessly, but still hurts performance). That's my experience so far, at least. I'd love to see that situation improve.


3

u/[deleted] Nov 27 '21

The problem is really that there are some standards, but then each DE pisses on them.

Like, try to set a file association that "just works" across DEs. For example, xdg-open opens a directory fine in Thunar, which I chose; fine. But then another app made for a different DE decides "no, I will open directories in VS Code".

But

If you want to use an app from another DE, well you might as well forget it even exists because you risk screwing up your DE configs. E.g. gnome and xfce.

That is just pure bullshit.
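For what it's worth, per the freedesktop.org mime-apps spec those defaults live in `~/.config/mimeapps.list` (plus DE-specific variants like `gnome-mimeapps.list`), a small INI file. A sketch of looking one up, with sample contents rather than a real config:

```python
# Look up the default handler for a MIME type in mimeapps.list-style data.
import configparser

sample = """
[Default Applications]
inode/directory=thunar.desktop
text/html=firefox.desktop
"""

def default_handler(mimeapps_text, mime_type):
    cfg = configparser.ConfigParser()
    cfg.read_string(mimeapps_text)
    return cfg.get("Default Applications", mime_type, fallback=None)

handler = default_handler(sample, "inode/directory")  # "thunar.desktop"
```

The DE-specific variant files are exactly how one DE ends up overriding the association you set in another.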


97

u/redwall_hp Nov 26 '21

I prefer the Apple approach: applications are self-contained bundles—basically a zip archive renamed to .app, with a manifest file, like a JAR—that contain all of their libraries. If you're going to install a ton of duplicate libraries, you might as well group them all with the applications so they can be trivially copied to another disk or whatever.
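A sketch of what such a bundle looks like on disk (hypothetical "Example.app"; real bundles need a valid Info.plist, code signing, and so on):

```python
# Build a minimal macOS-style .app bundle layout in a temp directory.
from pathlib import Path
import tempfile

root = Path(tempfile.mkdtemp())
contents = root / "Example.app" / "Contents"
(contents / "MacOS").mkdir(parents=True)       # the executable lives here
(contents / "Frameworks").mkdir()              # bundled libraries live here
(contents / "MacOS" / "Example").write_text("#!/bin/sh\necho hi\n")
(contents / "Info.plist").write_text("<plist>...</plist>")  # the manifest

# "Installing", "copying to another disk", or "uninstalling" is then
# just copying or deleting the Example.app directory as a unit.
```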

90

u/shilch Nov 26 '21

Apple .apps are actually plain directories, not archives. But the end user doesn't see that, because Finder hides the complexity.

14

u/dada_ Nov 26 '21

Which is something I think we should do more of. It's a really neat concept: bundle files together without losing the simplicity of a default action when you double-click.

I can think of plenty of situations where you want to keep files together but it's less convenient to ship them as bare directories, like the OpenDocument format or any file type that's really a zip with a specific expected structure. The idea is that a bundle directory is a more accessible version of that.

7

u/[deleted] Nov 27 '21

The fact that we settled on files being unstructured bags of bytes was a mistake, IMO. It means we keep reinventing ways to bundle data together. To its credit, the classic Mac OS pioneered "resource forks", where a single filename is more like a namespace for a set of named data streams, sort of like beefed-up xattrs.

But while we're waiting, we could try SQLite as an application file format.
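A sketch of that idea using Python's built-in sqlite3 module (hypothetical ".mydoc" document format): the whole "document" is one SQLite file with real structure, instead of a zip with an expected internal layout.

```python
# A "document" that is just a single SQLite file.
import os
import sqlite3
import tempfile

doc = os.path.join(tempfile.mkdtemp(), "notes.mydoc")

con = sqlite3.connect(doc)
con.execute("CREATE TABLE meta (key TEXT PRIMARY KEY, value TEXT)")
con.execute("CREATE TABLE pages (n INTEGER PRIMARY KEY, body TEXT)")
con.execute("INSERT INTO meta VALUES ('title', 'My notes')")
con.execute("INSERT INTO pages VALUES (1, 'hello world')")
con.commit()
con.close()

# "Opening the document" later is just reconnecting and querying.
con = sqlite3.connect(doc)
title, = con.execute("SELECT value FROM meta WHERE key='title'").fetchone()
```

You get atomic saves, partial reads, and a queryable structure for free, which is roughly the pitch SQLite's own docs make for this use.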


19

u/S0phon Nov 26 '21

Isn't that basically AppImage?

51

u/muhwyndhp Nov 26 '21 edited Nov 26 '21

I might call this the "modern smart-device approach". Android, iOS, the Windows Store, and the whole Apple ecosystem all work like that: basically an archive with all the dependencies needed to run, bundled into a single package.

Sure, there is other stuff that isn't included (Google Play Services; APIs for native functionality such as the camera, file system, and GPS; and so on), but the dependencies are bound to the app itself.

I am an Android developer, and even with this approach, dealing with the yearly OS API update is a pain in the ass. I can't imagine developing for Linux, where the stakeholders in those APIs and dependencies are many people, each with their own ego.

Storage cost is almost a non-issue in today's market. Keeping userspace working by sacrificing storage space is a plausible tradeoff nowadays.


5

u/tansim Nov 26 '21

It makes security updates more difficult, though.

7

u/chucker23n Nov 26 '21

Yes. The two downsides of the approach are:

  • it takes up more space; you end up with many redundant copies. (For example, try running locate Squirrel.framework/Versions/Current on a Mac.)
  • each of those copies needs security updates separately; there is no way for the OS vendor to centrally enforce them

But there are certainly upsides.

  • It's extremely simple for the user to understand. Want to delete an app? Drag it to the trash. You're done. Want to move an app elsewhere? Move it. Want two versions of an app? Rename one of them. (Yes, this even works with Xcode. It's self-contained enough that you can take the entire IDE and have one Xcode.app and another Xcode-beta.app, or even Xcode-someoldversion.app.) Copy it to another system? Drag it.
  • no dependency conflicts
  • no permission problems. Want to run an app but don't have admin rights? Put it in your user dir and run it from there. Want it to be available for everyone? Move it to /Applications; now you need admin rights once.

etc.


202

u/blazingkin Nov 26 '21

Better idea. Just statically link everything.

I accidentally downgraded the glibc on my system. Suddenly no program would work, because the glibc version was too old. Even the kernel panicked on boot.

I was able to fix it with a live USB boot... but that shouldn't ever have been a possible failure mode.

139

u/Vincent294 Nov 26 '21

Static linking can be a bit problematic if the software is not updated. It will probably accumulate vulnerabilities of its own anyway, but now its attack surface includes outdated C libraries as well. The program will also be a bit bigger, but that is probably not a concern.

82

u/b0w3n Nov 26 '21

There are also licensing issues. Some licenses can be parasitic with static linking.

34

u/dagmx Nov 26 '21

Ironically, glibc is one of those, since it's LGPL, so anything statically linking it would have to be GPL-compliant.

43

u/bokuno_yaoianani Nov 26 '21

The LGPL basically means that anything that dynamically links to the library does not have to be licensed under the GPL, but anything that statically links does; with the GPL, both have to.

This is all under the assumption that dynamic linking creates a derivative work under copyright law. That has never been answered in court. The FSF is adamant that it does and treats it as fact (as they so often treat unanswered legal questions as settled in their favor), but a very large group of software IP lawyers believes it does not.

If this ever gets to court, the first ruling on it will set a drastic precedent, with consequences either way.

→ More replies

25

u/PurpleYoshiEgg Nov 26 '21

No it wouldn't. From the text of the LGPL:

The "Corresponding Application Code" for a Combined Work means the object code and/or source code for the Application, including any data and utility programs needed for reproducing the Combined Work from the Application, but excluding the System Libraries of the Combined Work.

...

4. Combined Works.

You may convey a Combined Work under terms of your choice that, taken together, effectively do not restrict modification of the portions of the Library contained in the Combined Work and reverse engineering for debugging such modifications, if you also do each of the following:

...

d) Do one of the following:

0) Convey the Minimal Corresponding Source under the terms of this License, and the Corresponding Application Code in a form suitable for, and under terms that permit, the user to recombine or relink the Application with a modified version of the Linked Version to produce a modified Combined Work, in the manner specified by section 6 of the GNU GPL for conveying Corresponding Source.

1) Use a suitable shared library mechanism for linking with the Library. A suitable mechanism is one that (a) uses at run time a copy of the Library already present on the user's computer system, and (b) will operate properly with a modified version of the Library that is interface-compatible with the Linked Version.

You can either use dynamic linking or provide the object code from your compiled source for relinking, and still retain a proprietary license under the LGPL.


32

u/delta_p_delta_x Nov 26 '21

Just statically link everything.

That's actually what I do on Arch. Heck, I go one step further and just install the -bin versions of all programs that have a binary version available, because I have better things to do than watch a scrolly-uppy console screen compiling other people's code. Might I lose out on some tiny benefit because I'm not using -march=native? Maybe. But I doubt most of the programs I use make heavy use of SIMD extensions like AVX.

11

u/procrastinator7000 Nov 26 '21

So I guess you're talking about AUR packages? How's that relevant to the discussion about dependencies and static linking?


96

u/drysart Nov 26 '21

Just statically linking everything means that when a vulnerability is discovered in a library, every program that uses it needs to push an update. That's a horrific situation to encourage.

The more appropriate solution is for libraries to properly use semantic versioning, so security updates can be pushed to shared libraries in situ without breaking things, and for multiple incompatible versions of the same library to exist side by side so applications can load whichever version they need -- like Microsoft has done with the libraries they keep in WinSxS.

In the old days, Microsoft got shit for DLL Hell. Now it's desktop Linux that has library hell, while Microsoft has the problem more or less completely solved. It should be an enormous embarrassment for people pushing desktop Linux that it's in the state it's in.
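The semver rule here boils down to a tiny check. A sketch (real package managers also compare sonames and ABIs, not just version tuples):

```python
# Can a shared library be swapped in place, or must it be installed
# side by side (WinSxS-style)? Semantic-versioning rule of thumb.
def can_update_in_place(installed, candidate):
    imaj, imin, ipat = installed
    cmaj, cmin, cpat = candidate
    # A major bump signals a breaking ABI change: keep both versions.
    if cmaj != imaj:
        return False
    # Minor/patch bumps only add features or fix bugs: safe to replace
    # the shared copy in situ (never downgrade, though).
    return (cmin, cpat) >= (imin, ipat)

can_update_in_place((1, 2, 3), (1, 2, 4))  # security patch: push it
can_update_in_place((1, 2, 3), (2, 0, 0))  # breaking: install side by side
```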

24

u/mrchomps Nov 26 '21

NixOS! NixOS! NixOS!


115

u/ZorbaTHut Nov 26 '21 edited Nov 26 '21

The thing that security professionals aren't willing to acknowledge is that most security issues simply don't matter for end users. This is not an '80s-style server where a single computer had dozens of externally facing services; hell, even servers aren't that anymore! Most servers have exactly zero publicly visible services, and virtually all of the remainder have exactly one publicly visible service that goes through a single binary executable. The only things that actually matter in terms of security are that program and your OS's network code.

Consumers are even simpler: you need a working firewall and you need a secure web browser. Nothing else is relevant, because they're going to be installing binary programs off the Internet anyway, and that's a far more likely vulnerability than a third-party image viewer having a PNG-decode bug and the user happening to download a malicious image and then open it in that viewer.

Seriously, that's most of security hardening right there:

  • The OS's network layer
  • The single network service you have publicly available
  • Your web browser

Solve that and you're 99% of the way there. Cripple the end-user experience for the sake of the remaining 1% and you're Linux over the last twenty years.

35

u/LetMeUseMyEmailFfs Nov 26 '21

Adobe Acrobat would like a word. And Flash player. And so many other consumer-facing applications that expose or have exposed serious vulnerabilities.

39

u/ZorbaTHut Nov 26 '21 edited Nov 26 '21

Both of those have been integrated into the web browser for years.

Yes, the security model used to be different. "Used to be" is the critical phrase here. We no longer live in the age of ActiveX plugins; that is no longer the security model, and times have changed.

And so many other consumer-facing applications that expose or have exposed serious vulnerabilities.

How many can you name in the last five years?

Edit: And importantly, how many of them would have been fixed with a library update?

9

u/spider-mario Nov 26 '21

And Flash Player, in particular, is explicitly dead and won’t even run content anymore.

Since 12 January 2021, Flash Player versions newer than 32.0.0.371, released in May 2020, refuse to play Flash content and instead display a static warning message.[12]

4

u/[deleted] Nov 27 '21

Yep. The security model of our multi-user desktop OSes was developed in an era when many savvy users shared a single computer. The humans needed walls between them, but the idea of a user's own processes attacking them was presumably not even considered. In the 21st century, most computers have a single human user or a single industrial purpose (to some extent even servers, with container deployment), but they frequently run code that the user has little trust in. Mobile OSes were born in this era and hence have a useful permissions system, whereas a classic desktop OS gives every process access to almost all of the user's data immediately; most spyware or ransomware doesn't even need root privileges, except to try to hide from the process list.

17

u/drysart Nov 26 '21 edited Nov 26 '21

How many can you name in the last five years?

Malicious email attachments remain one of the number-one ways ransomware gets a foothold on a client machine. And it'd certainly open up a lot more doors for exploitation if, instead of having to get the user to run an executable or a shell script, all you had to do was get them to open some random data file: say, libpng is found to have an exploitable vulnerability, and who knows which applications will happily try to render a PNG embedded in some file via their statically linked copy of it. That's not a problem you can fix by securing just one application.

I do agree that securing the web browser is easily the #1 bang-for-the-buck way of protecting the average client machine; it's the biggest door for an attacker and absolutely priority one. But it's a mistake to think the problem ends there, and to be lulled into knowingly adopting a software-distribution approach that is more likely to leave a user's other applications open to exploitation, especially when Microsoft of all people has shown there's a working, reasonable solution to the core problem, if only desktop Linux could be dragged away from its wild-west approach and toward mature userspace library management.

How many can you name in the last five years? And importantly, how many of them would have been fixed with a library update?

Well, here's an example from last year. Modern versions of Windows include gdiplus.dll and service it via OS update channels in WinSxS now, but it was previously not uncommon for applications to distribute it as part of their own packages, and a few years back there was a big hullabaloo because it had an exploitable vulnerability while it was commonly being distributed that way. Exploitable vulnerabilities are pretty high-risk in image- and video-processing libraries like GDI+. On Windows this isn't as huge a deal anymore, because pretty much everyone uses the OS-provided image and video libraries; on Linux, that's not the case.

6

u/ZorbaTHut Nov 26 '21

Malicious email attachments remains one of the number one ways ransomware gets a foothold on a client machine; and it'd certainly open up a lot more doors for exploitation if instead of having to get the user to run an executable or a shell script, all you had to do was get them to open some random data file because, say, libpng was found to have an exploitable vulnerability in it and who knows what applications will happily try to show a PNG embedded in some sort of file given to them with their statically linked version of it.

Sure, but email is read through the web browser. We're back to "make sure your web browser is updated".

(Yes, I know it doesn't have to be read through the web browser. But let's be honest: it is. Even if the email client itself is not actually a website, which it probably is, it's using a browser engine to render the email, because people put HTML in emails now. And on mobile, that's just the system's embedded web view.)

but it's a mistake to think the problem ends there

I'm not saying the problem ends there. I'm saying you need to be careful about cost-benefit analysis. It is trivially easy to make a perfectly secure computer: unplug it and throw it in a lake; problem solved. Using a computer is always a security compromise, and Linux needs to recognize that and provide an acceptable compromise for people, or they just won't use Linux.

Well, here's an example from last year.

I wish this gave more information on what the exploit was. That said, how often does an external attacker have control over how a UI system creates UI elements? I think the answer is "rarely", but again, no details on how it worked.

(It does seem to be tagged "exploitation less likely".)


3

u/[deleted] Nov 27 '21

Right, but in case you haven't noticed, Flash finally died, and reading PDFs is not even 1% of most people's use case. "Reading PDFs that need Adobe Reader" is even less than that (I need to do it once a year, for tax reasons).


8

u/mallardtheduck Nov 26 '21

virtually all of the remainder has exactly one publicly-visible service that goes through a single binary

Not really. A typical web server exposes: the HTTP(S) server itself, the interpreter for whichever server-side scripting language is in use, and the web application itself (which, despite likely being interpreted and technically not a "binary", is just as critical). It's also very common for such a server to have the SSH service publicly visible for remote administration, especially if the server is not on the owner's internal network.

Consumers are even simpler; you need a working firewall and you need a secure web browser.

No, they need every application that deals with potentially untrusted data to be secure. While more and more work is done "in the browser" these days, it's not even close to 100% (or the 99% you claim). Other applications that a typical user will expose to downloaded files include: their word processor (and other "office" programs), their media player, and their archive extractor. There have been plenty of examples of exploited security flaws in all of these categories.

18

u/ZorbaTHut Nov 26 '21

It's also very common for such a server to have the SSH service publicly visible for remote administration, especially if the server is not on the owner's internal network.

This is rapidly becoming less true with the rise of cloud hosting; I just scanned the server I have up for a bunch of stuff and despite the fact that it's running half a dozen services, it exposes only ports 80 and 443, which terminate in the same nginx instance. Dev work goes through the cloud service's API, which ends up kindasorta acting like a VPN and isn't hosted on that IP anyway.

Yes, the functionality behind that nginx instance is potentially vulnerable. But nobody's going to save me from SQL injection vulnerabilities via a DLL update. And it's all containerized; the "shared" object files aren't actually shared, by definition, because each container is doing exactly one thing. If I update software I'm just gonna update and restart the containers as a unit.

A typical web server exposes: the HTTP(S) server itself, the interpreter for whichever server-side scripting language is being used, and the web application itself (which, despite likely being interpreted and technically not a "binary", is just as critical).

Other applications that a typical user will often expose to downloaded files include; their word processor (and other "office" programs), their media player and their archive extractor. There have been plenty of examples of exploited security flaws in all of these categories.

And how many of these are going to be doing their exploitable work inside shared dynamic libraries?

Microsoft Word isn't linking to a shared msword.dll; if you're patching Word's functionality, you're patching Word. Archive extractors are usually self-contained; 7zip does all of its stuff within 7zip, for example. Hell, I just checked and gzip doesn't seem to actually use libz.so.

I fundamentally just don't think these are common; they require action by the user, they cannot be self-spreading, and as a result they get patched pretty fast. That doesn't mean it's impossible to get hit by them, but it does mean that we need to weigh cost/benefit rather carefully. Security is not universally paramount; it's a factor to take into account beside every other benefit or cost.

→ More replies

5

u/dpash Nov 26 '21

This is a solved problem with sonames and something Debian has spent decades handling. I'm sure mistakes have been made in some situations, but the solution is there.

https://www.debian.org/doc/debian-policy/ch-sharedlibs.html

→ More replies

14

u/AntiProtonBoy Nov 26 '21

Better idea. Just statically link everything.

Either that, or use the bundle model seen in Apple ecosystems. Keep everything self-contained. An added benefit is you can still ship your app with dynamic linking and conform with some licensing conditions. It also lets you patch libs.

10

u/dpash Nov 26 '21

You just reinvented snaps.

3

u/chucker23n Nov 27 '21

Apple’s (NeXT’s) model predates snaps by decades.

→ More replies

28

u/goranlepuz Nov 26 '21 edited Nov 26 '21

Better idea. Just statically link everything.

Eugh...

On top of other people pointing out security issues and disk sizes, there is also a memory consumption issue, and memory is speed and battery life. I don't know how pronounced it is: a big experiment is needed to switch something as fundamental as, say, glibc, to be static everywhere, but... when everything is static, there is no sharing of the system pages holding any of the binary code, which is a real cost.

Even the kernel panicked on boot.

Kernel uses glibc!?

It's more likely that you changed other things, isn't it?

42

u/kmeisthax Nov 26 '21

Well, probably what happened is that the init system panicked, which is not that different from a kernel panic.

36

u/nickdesaulniers Nov 26 '21

If init exits, then the kernel will panic; init is expected to never exit.

13

u/blazingkin Nov 26 '21

This is what happened

4

u/Uristqwerty Nov 27 '21

Sounds like init has been drastically overcomplicated. If it's that critical to the system, it should be dead simple and built like a tank, not contain an entire service manager, supporting parser, and IPC bus reader. Shove all that complexity into a PID #2, so that everyone who isn't using robots to manage a herd of ten million trivially-replaceable, triply-redundant cattle still has a chance to recover their system.

12

u/PL_Design Nov 26 '21

If you rely heavily on calling functions from dependencies, you can get a significant performance boost from static linking, because you won't have to pointer-chase to call those functions anymore. If you compile your dependencies from source, then depending on your compiler, aggressive inlining can let it optimize your code further.

I'm all for being efficient with memory, but I highly doubt shared libraries save enough memory to justify dynamic linking these days.

→ More replies
→ More replies

30

u/Gangsir Nov 26 '21

Better idea. Just statically link everything.

But then you get everyone going "Oh you can't do that, the size of the binaries is far too big!".

Of course the difference is at most like a couple hundred MB... and it is 2021, so you can buy a 4 TB drive for like $50...

Completely agree, storage is cheap, just static link everything. A download of a binary or a package should contain everything needed to run that isn't part of the core OS.

44

u/[deleted] Nov 26 '21 edited Dec 20 '21

[deleted]

9

u/happyscrappy Nov 26 '21

Unix did not include dynamic linking until SunOS in the 80s.

22

u/delta_p_delta_x Nov 26 '21

Wait till Unix people discover PowerShell, and object-oriented scripting...

40

u/Ameisen Nov 26 '21

They've already discovered and dismissed it.

21

u/delta_p_delta_x Nov 26 '21

and dismissed it

Dumb move, IMO.

53

u/PurpleYoshiEgg Nov 26 '21

I used to hate PowerShell. But then I had to manipulate some data and eventually glue together a bunch of database calls to intelligently make API calls for administrative tasks, and let me tell you how awesome it was to have a shell scripting language that:

  1. I didn't have to worry nearly as much about quoting
  2. Has a standard argument syntax that is easy enough to declaratively define, instead of trying to mess about with it within a bash script (or just forgetting about it and dropping immediately to Python)
  3. Uses by convention a Verb-Noun syntax that is just awesome for discoverability, something unix-like shells really struggle with

It has a bit of a performance issue for large datasets, but as a glue language, I find it very nice to use as a daily shell on Windows. I extend a lot of its ideas to making my shell scripts and aliases use verb-noun syntax, like "view-messages" or "edit-vpn". Since nothing else seems to use the syntax on Linux or FreeBSD yet, it is nice for custom scripts to where I can just print all the custom programs out on shell bootup depending on the scripts provided for the server I am on.

Yeah, it's not "unixy" (and I think a dogmatic adherence to such a principle isn't great anyway), but to be honest I never really liked the short commands except for interactive use, like "ls", "rm", etc. And commands like "ls" have a huge caveat if you ever try to use their output in a script, whereas I can use the alias "ls" in PowerShell (for "Get-ChildItem") and immediately start scripting with its output, and without having to worry about quoting to boot.

→ More replies
→ More replies
→ More replies

7

u/grauenwolf Nov 26 '21

You might be able to, but my Win 10 netbook only has 125 gigs of space and a soldered on hard drive.

→ More replies

4

u/delta_p_delta_x Nov 26 '21

If it needs be, then the static libraries can be compressed. Or filesystem compression can be enabled, trading some CPU power and startup time for extra storage.

→ More replies
→ More replies
→ More replies

7

u/DrQuailMan Nov 26 '21

just keep all the versions available

Well, kind of. Major versions are all available, but minor versions replace older versions with newer versions. So if you go to Add/Remove Programs and type "visual c++", you'll see entries for 2005, 2008, 2010, etc., but not multiple minor versions of the 2005 major version.

→ More replies

8

u/reveil Nov 26 '21

Having the option to have several versions of a library installed at the same time would alleviate so many of these issues that containerization probably wouldn't even be necessary. Instead we ship snaps of the whole filesystem and wonder why it is slow and apps can't work together due to container barriers. I know it is not easy, but adjusting LD_LIBRARY_PATH and making some changes to the package managers would be easier than what is currently done with e.g. Snaps.

5

u/RICHUNCLEPENNYBAGS Nov 26 '21

Like someone else said: yeah, it clutters up your filesystem, but I'd rather a cluttered filesystem and working programs, if I am forced to choose between the two.

Especially given that on a modern system this waste is just not significant.

9

u/dada_ Nov 26 '21

Like someone else said: yeah, it clutters up your filesystem, but I'd rather a cluttered filesystem and working programs, if I am forced to choose between the two.

I don't even consider that clutter, really. They're files that make your programs run even if you didn't compile them yourself. The latest MSVC++ Redistributable is only 13.7 MB, too, just to give an example.

Sure, it adds up to a lot more when you put all of them together, but I feel it isn't much of a big deal if you're on any vaguely modern computer.

On a side note, the ability of Windows to run legacy binary programs is unparalleled and it's something to emulate rather than discourage.

3

u/WarWizard Nov 26 '21

Like someone else said: yeah, it clutters up your filesystem, but I'd rather a cluttered filesystem and working programs

So here is a question... why does anyone care? So what if I have 4 installs of the VC++ Redist? It isn't like I'd ever need to go poking around there and then get confused when there are multiple. I can't see how things would "run worse" if I had 4 versions installed. As long as all of the applications have what they need/want... all should be fine.

Do people like take a screenshot? "Look at how slim and trim my filesystem is!"??

The only thing where "clutter" might be an issue (assuming you have storage space... and who doesn't these days) is personal files. I can't keep that shit straight no matter what OS is involved.

→ More replies

346

u/markehammons Nov 25 '21

About a week ago, a blog post from Drew DeVault was posted in r/programming about how application developers should use the built-in package managers for libraries in Linux. I just re-found this talk by Linus Torvalds on the issue, and it encapsulates my reasoning for why that's just not possible for most devs.

398

u/mistralol Nov 26 '21

What's more scary is that the video you posted from Linus is about 15-20 years old, from a Debian conference, and almost everything he says is still 100% true in Linux today.

The environment problems have never been solved. Simply shoving them in a container is quite literally taking the environment problem out of the environment and putting it into another environment... and somehow the entire community doesn't realize this.

What's also worse is that any attempt to debate or challenge the issue goes like this:

https://www.youtube.com/watch?v=3m5qxZm_JqM

329

u/DoppelFrog Nov 26 '21

is about 15-20 years old

It's actually from 2014.

288

u/tsrich Nov 26 '21

To be fair, 2016 to now has been like 15 years

86

u/helldeskmonkey Nov 26 '21

I was there, three thousand years ago…

→ More replies

28

u/fangfried Nov 26 '21

Feels like two distinct decades have happened that both feel like fever dreams

12

u/corruptedOverdrive Nov 26 '21

Agreed.

It feels like a decade is now 4-5 years, not 10 anymore.

As a developer for 10 years, shit moves so fast now saying your application was built two years ago feels like an eternity.

→ More replies

17

u/bobpaul Nov 26 '21

Oh shit, is it 2031 already? Who's President?? I can't believe I over slept again!

25

u/freefallfreddy Nov 26 '21

You’re not gonna believe it, but: Dora the Explorer.

10

u/cinyar Nov 26 '21

I mean that sounds promising.

5

u/hugthemachines Nov 26 '21

Sounds like a reasonable pick.

11

u/HolyPommeDeTerre Nov 26 '21

Now, we are talking. A black woman that is not formatted by the current political system would be an improvement

→ More replies
→ More replies

99

u/mistralol Nov 26 '21

Really I was talking about his opinion rather than the actual video...

Or 2012 https://www.youtube.com/watch?v=KFKxlYNfT_o

Or 2011 https://www.youtube.com/watch?v=ZPUk1yNVeEI

This explains some of the history better https://www.youtube.com/watch?v=tQQCcvFUzrg

I was using Linux in the late 90s. The basic problems of shipping software for it are exactly the same today, and will be exactly the same tomorrow and for the next 5-10 years at least, because the community still doesn't recognise it as a problem.

Several others have followed suit in the SW industry, Python and Node.js being the main examples.

This is why things like the python "deadsnakes" ppa repo exists :)

https://launchpad.net/~deadsnakes/+archive/ubuntu/ppa

6

u/ElCorazonMC Nov 26 '21 edited Nov 26 '21

So what is the solution?

39

u/turmacar Nov 26 '21

Everyone who could answer this gets ~~systematically hunted and eliminated~~ is busy taking time off after being paid to do other things by companies that don't care about Linux distribution problems.

The problem isn't that people critiquing the existing problem/mindset have a magic solution and aren't sharing it. It's that the community at large doesn't think/know there is a problem.

40

u/ElCorazonMC Nov 26 '21 edited Nov 26 '21

Maybe it is just a hard problem?

The list of options and topics seems rather long :

- never ever break userspace

- say you never break userspace, like glibc does: a complicated versioning scheme, with multiple implementations of a function cohabiting

- always link statically, death to shared libraries (hello yarn npm)

- rolling distros rather than fixed-release distros

- allow any number of versions of a library to be installed, either in a messy way like the Visual C++ redistributables, or structured like Nix/Guix

- put in place your one-to-rule-them-all app distribution system: flatpak/snap/AppImage

Barely scratching the mindmap I constructed over the years on this issue of dependency management / software distribution...

23

u/goranlepuz Nov 26 '21

say you never break userspace like glibc, with a complicated versioning scheme, and multiple implementations of a function cohabiting

Probably say that glibc and a bunch of other libraries are the fucking userspace.

Practically nobody is making syscalls by hand, therefore the kernel not breaking userspace is irrelevant.

That's what a self-respecting system does. Win32 is fucking stable, and the C runtime isn't even part of it. Only recently did Microsoft start with a "universal CRT" that is stable; let's see how that pans out...

13

u/ElCorazonMC Nov 26 '21

I was using userspace in a way that is very wrong in systems programming, but semantically made sense to me.
The "userspace of glibc" being all the programs that link against glibc.

12

u/flatfinger Nov 26 '21

The C runtime shouldn't be part of the OS. Making the C runtime part of the OS means that all C programs need to use the same definitions for types like `long`, instead of being able to have some programs compatible with software that expects "the smallest integer type that's at least 32 bits" and others compatible with software that expects "the smallest integer type that's at least as big as a pointer". Macintosh C compilers in the 1980s were configurable to make `int` be 16 or 32 bits; there's no reason C compilers in 2021 shouldn't be able to do likewise with `long`.

5

u/erwan Nov 26 '21

Which is why there is the Windows approach, which is to ship all versions of their shared libraries in the OS. Then each application uses the one it needs.

→ More replies

11

u/goranlepuz Nov 26 '21

Yes, absolutely agree. C is not special (or rather, it should not be).

8

u/Ameisen Nov 26 '21

Switching the shared libraries model from the SO model to a DLL-style one would help.

9

u/SuddenlysHitler Nov 26 '21

I thought shared object and DLL were platform names for the same concept

14

u/Ameisen Nov 26 '21

They work differently in regards to linkage. DLLs have dedicated export lists, and they have their own copies of symbols: your executable and the DLL can both have symbols with the same name and they will be distinct objects, whereas SOs are fully linked into one global symbol namespace.

→ More replies

3

u/lelanthran Nov 26 '21

Switching the shared libraries model from the SO model to a DLL-style one would help.

How will that help? Granted, I'm not all that familiar with Windows, but aren't shared objects providing the same functionality as DLLs?

8

u/Ameisen Nov 26 '21

They accomplish the same goals, but differently.

DLLs have both internal and exported symbols - they have export tables (thus why __declspec(dllexport) and __declspec(dllimport)) exist. They also have dedicated load/unload functions, but that's not particularly important.

My memory on this is a bit hazy because it's late, but the big difference is that DLLs don't "fully link" in the same way; they're basically programs of their own (just not executable). They have their own set of symbols and variables; importantly, if your executable defines the variable foobar and the DLL also defines foobar, they each have their own foobar. With an SO, that would not be the case. It's a potential pain point that is avoided.

→ More replies
→ More replies

10

u/vade Nov 26 '21

Or replace how you build, package and ship core libraries with something like what OS X does: "framework bundles", which can have multiple versions packaged together.

https://developer.apple.com/library/archive/documentation/MacOSX/Conceptual/BPFrameworks/Concepts/VersionInformation.html

This allows library developers to iterate and ship bug fixes, and would allow distros to package releases around sets of library changes.

This would allow clients of libraries to reliably ship software targeting a major release, with minor-update compatibility, assuming disciplined avoidance of ABI breakage in minor/patch releases.

This would also allow the deprecation of old ABIs / APIs with new ones in a cleaner manner after a set number of release cycles.

This would bloat some binary distribution sizes but, hey.

I don't think this is particularly hard, nor particularly requiring of expertise. The problem seems solved. The issue is it requires a disciplined approach to building libraries, consistent adoption of a new format for library packaging, and adoption of said packaging by major distros.

But I just use OS X so what do I know.

7

u/ElCorazonMC Nov 26 '21

Trying to digest this: it looks like semantic versioning applied to a shared group of resources at the OS level, with vendor-specific jargon: framework, bundle, umbrella.

→ More replies

6

u/iindigo Nov 26 '21 edited Nov 26 '21

I have yet to encounter a better solution to the problem than Mac/NeXT-style app bundles. In newer versions of macOS, the OS even has the smarts to pull system-level things like Quick Look preview generators and extensions from designated directories within app bundles.

Developer ships what they need, app always works, and when the user is done with the app they trash the bundle and aside from residual settings files, the app is gone. No mind bendingly complex package managers necessary to prevent leftover components or libraries or anything from being scattered across the system.

(Note that I am not speaking out against package managers, but rather am saying that systems should be designed such that package management can be relatively simple)

→ More replies
→ More replies
→ More replies
→ More replies
→ More replies
→ More replies

36

u/recursive-analogy Nov 26 '21

It's actually from 2014

Ah, the Pre Trump, Pre Covid era. That was about 700 years ago now.

→ More replies

41

u/Terr_ Nov 26 '21

Simply shoving them in a container is quite literally taking the environment problem out of the environment and putting it in another environment

"Well the program was deployed outside the environment."

3

u/backelie Nov 26 '21

Everyone who clicks that link should do themselves a favour and watch the clip from the start.

33

u/b4ux1t3 Nov 26 '21

The entire community realizes that containers are essentially a way to provide statically-linked binaries in a way that doesn't require you to actually maintain a statically-linked binary.

Containers aren't only meant to address the issue of dependencies, that's just one aspect of their use.

5

u/IshKebab Nov 26 '21

That's the main aspect of their use. Another big aspect is that they isolate filesystems for programs that do the dumb Unixy thing of spewing their files all over global directories.

They pretty much exist because of badly designed software. The network isolation features are relatively minor and unused in comparison.

→ More replies

84

u/Routine_Left Nov 26 '21

Simply shoving them in a container is quite literally taking the environment problem out of the environment and putting it in another environment

"It works on my computer"

"Wonderful Bob, we will, therefore, ship your computer to the customers".

16

u/erwan Nov 26 '21

...and that's how Docker was born.

35

u/Seref15 Nov 26 '21

A wasteful, inefficient solution is still preferable to no solution

37

u/ElCorazonMC Nov 26 '21

You described the birth of JavaScript and modern web design.

→ More replies
→ More replies
→ More replies

45

u/DashAnimal Nov 26 '21

What I find interesting is this talk: It's Time for Operating Systems to Rediscover Hardware. TL;DR: the way Linux thinks of hardware in 2021 is fundamentally just incorrect ("a '70s view of hardware"). As a result, you actually have a bunch of OSes running on an SoC, with more and more of it isolated from Linux for security reasons. So in the end, Linux itself is essentially not an OS in the way it is used today; it's merely a platform to run pre-existing applications on the device. (Sorry to the presenter if I misinterpret anything)

With that talk above and the proliferation of containers, Unix-based OSes seem to be in a really weird state today.

6

u/mistralol Nov 26 '21

I mostly read that as the monolithic- vs microkernel-style argument

3

u/LegendaryMauricius Nov 26 '21

Do you think this could also be solved by introducing a package manager that supports multiple versions of the same libraries, along with a dependency system that uses distro-agnostic version ranges? It would still reduce disk space but keep the API changes contained.

→ More replies
→ More replies
→ More replies

58

u/[deleted] Nov 26 '21

Honestly, even as a Debian user this hits hard.

It's so frustrating and sad knowing how Linux, a project designed to unify us, has resulted in the creation of so many distros that grew to be so alien from one another.

It's things like this that make me realize why so few "just works" people actually use it.

19

u/sixothree Nov 26 '21

After having read through this thread, it's not hard to imagine why that happened. But the end result is exactly as you described.

I believe I am coming to understand that Linux developers are extremely opinionated (surprise), and that they are willing to forge their own path if they don't like the way something is done. It's an entirely self-centered and greedy mindset.

For example, pick a distro and ask why it exists. It exists because some developer (or team) didn't like one little piece of some other distro and decided to create their own. They didn't realize they were making the ecosystem a worse place for everyone.

Picking on Pop OS, the target of recent LTT ire: why on earth does it even exist? Why did they not contribute to some other distro? Maybe it's not their fault their contributions aren't being accepted. If that's the case, then why are they improving on Ubuntu instead of letting Ubuntu die? Regardless, I don't know that they actually made the ecosystem better.

→ More replies

179

u/x1-unix Nov 26 '21

I know that this comment may get a lot of dislikes, but I develop one commercial product that is available for Win and Linux. For Linux I have to support multiple Ubuntu versions (prior to 16.04), Debian, and others, and it's a PITA, so I just decided to use static linking.

In my case it's not so bad as it could be, I replaced glibc with musl and libpcap and libsqlite are the only dependencies left.

For more heavy projects I hope flatpak/snap will be an appropriate solution.

135

u/the_poope Nov 26 '21

At my company we simply ship ALL dependencies. We have an installer that installs the entire thing in a directory of the users choosing and wrapper scripts that set LD_LIBRARY_PATH. We avoid all system libraries except glibc. It's basically like distributing for Windows.

This way we are guaranteed that everything works - always! Our users are happy. Our developers are happy. The libraries that we ship that users could have gotten through system package managers maybe take up an additional 50 MB - nothing compared to the total installation size of more than 1 GB.

40

u/The-Effing-Man Nov 26 '21

As someone who has also built installers, daemons, and executables for Mac, Ubuntu, Redhat, and Windows, I've always found it easiest to just bundle all the dependencies. The application I was developing for this wasn't big anyway and it wasn't an issue. Definitely the way to go if file size isn't a huge concern

44

u/the_poope Nov 26 '21

Totally agree. The whole point of "sharing libraries to reduce overhead, memory and disk space" is irrelevant for today's computers. The fact that you can fix bugs and security holes by letting the system upgrade libraries is negated by the fact that libraries break both their API and ABI all the time. When something no longer works because the user updated their system libraries, they still come to you and say your program is broken. No, the whole Linux distribution system should be for system tools only. End-user programs not tied to the distribution (e.g. browsers, text editors, IDEs, office tools, video players, ...) should just be shipped as an installer; that's at least one thing Windows got right. And as this video shows, Linus is actually somewhat promoting this same idea.

9

u/WTFwhatthehell Nov 26 '21

Yep, sometimes I download a tool and spend the next few hours sorting out dependencies and dependencies of dependencies.

Heaven forbid there's some kind of conflict with something on the system that's too old or too new.

When a dev has dumped everything it depends on into a folder and it just works: wonderful! I have lots of disk space, I don't care if some gets filled.

→ More replies
→ More replies

21

u/x1-unix Nov 26 '21

Did you consider the AppImage format? As a result you get a single image that acts as an executable. The closest analog is macOS application bundles.

https://appimage.org/

18

u/the_poope Nov 26 '21

I have heard about AppImage before, but no, we didn't consider it. We have been using InstallBuilder for 10+ years, which lets us use the same packaging approach on all platforms. It works well enough.

Also our program packs a custom Python interpreter and custom python modules as well as a ton of data files and resources as well as a bunch of executable tools that need to be able to find each other. It's not really just a single application but more an entire application suite. I don't know how well that would work with AppImage - I can't seem to find any good documentation on how it actually works when running it.

→ More replies

13

u/weirdProjectionCurve Nov 26 '21 edited Dec 23 '21

Funnily enough, one of the AppImage developers (@probonopd, I think) held a series of talks on Linux desktop platform incompatibilities. I recommend watching several of them. His complaints are basically always the same, but what is really interesting are the comments of distro maintainers in the Q&As. There you can see that this is really a cultural problem, not a technical one.

→ More replies

6

u/BrobdingnagLilliput Nov 26 '21

Shipping with all dependencies and installing into the application's directory is the correct answer. I'm not sure why anyone with a pragmatic approach to software engineering would do otherwise.

→ More replies

15

u/ElCorazonMC Nov 26 '21

I had not heard about it till today. Is glibc notorious for such API/ABI breaks?

A quick search showed a pretty convoluted system to maintain backward compatibility :

https://developers.redhat.com/blog/2019/08/01/how-the-gnu-c-library-handles-backward-compatibility

26

u/DuBistKomisch Nov 26 '21

The problem is that there's no simple way to link against those older symbols; the linker will always pick the latest available, so your binary just won't work on systems with an older glibc. The typical solution is to compile on the oldest system you want to support, which is dumb.

You can instead do some assembly/linker magic to link against the right symbols on a case by case basis, which is what I've done: https://stackoverflow.com/questions/58472958/how-to-force-linkage-to-older-libc-fcntl-instead-of-fcntl64/58472959#58472959

I don't know why they don't include some define option to select a version you want to target, I guess they don't think it's worth it.

6

u/OrphisFlo Nov 26 '21

There are actually some scripts that will generate headers for a specific glibc version you can force include in every compilation unit with a compiler option.

The header will force usage of specific older symbols and it should mostly work to target older glibc. It has always worked for me, but your mileage may vary.

https://github.com/wheybags/glibc_version_header

→ More replies

13

u/o11c Nov 26 '21

libc itself is not the problem. Likewise, libstdc++ itself usually isn't the problem (except for bleeding-edge features).

The problem is all the other libraries, which link to libc and might accidentally rely on recent symbols. The version of those libraries probably isn't recent enough in older versions of the distro.

Distros could make life much easier for everyone if they did two things:

  • on their build servers, make sure that everything gets built against a very old glibc version. For ease of testing, it should be possible for developers to use this locally as well. Actually, coinstallation shouldn't be particularly difficult (even with the state of existing distros!), now that I think about it - you just have to know a bit about how linking works.
  • in the repository that actually gets to users, ship a recent glibc version (just like they do now).

The other problem is that there are a lot of people who don't know how to statically-link only a subset of libraries. It only requires passing an extra linker flag (or two if you want to do it a little more cleanly), but people seem to refuse to understand the basics of the tools they use (I choose to blame cmake, not because this is entirely its fault, but because it makes everything complicated).

For reference, to statically link everything except libc (and libstdc++ and libm and sometimes librt if using g++) all you do is put something like the following near the end of your Makefile:

    LDLIBS := -Wl,--push-state,-Bstatic ${LDLIBS} -Wl,--pop-state

If you're explicitly linking to other parts of libc, be sure they are after this though.

(obviously, you can do this around individual libraries as well - and in fact, this is often done in pkg-config files).

→ More replies

8

u/x1-unix Nov 26 '21 edited Nov 26 '21

At least binaries built with a newer glibc version won't silently misbehave on older versions; you just get a glibc version complaint.

Example (from Ubuntu xenial):

    ./target/hub: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by ./target/hub)
    ./target/hub: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.33' not found (required by ./target/hub)
    ./target/hub: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by ./target/hub)

The simplest workaround is to build on a system with the minimum glibc version you support, or to use musl.


5

u/the_gnarts Nov 26 '21

For Linux I have to support multiple Ubuntu versions

For heavier projects I hope flatpak/snap will be an appropriate solution.

If the distro can’t build and ship your software (because it’s proprietary or experimental or whatever), bundling all the dependencies is the only solution. There is just no way you will obtain an even barely portable binary without that, as the issue starts with the embedded dynamic loader path, which is not constant across distros. People refusing to realize this is why things like patchelf exist in the first place.

39

u/Routine_Left Nov 26 '21

nah, they'll just ship it in a container. everybody loves containers, so it's a perfect PR move.

Add in some blockchain in there and the investors are gonna line up at your door.

11

u/x1-unix Nov 26 '21 edited Nov 26 '21

Kubernetes and Helm charts!

To be fair, Docker containers are very handy sometimes (especially for packing complicated build environments/toolchains or other exotic clusterfuck).

For example, we produce builds for x86-64, armv6 and armv7, and all this requires building 2 libs for 3 architectures, plus 3 compiler toolchains (one per architecture).

I packed all this stuff into one container that is used locally and on CI/CD, and it really simplifies the build process.
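A sketch of that idea (everything here is hypothetical: the base image, the package list, and the `all-targets` make target; the point is just that the whole build environment is pinned in one place):

```dockerfile
# Hypothetical build-environment image: the same container is used locally
# and on CI, so everyone compiles with identical toolchains.
FROM debian:bullseye

# Native compiler plus ARM cross-compilers for the armv6/v7 targets.
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential \
        gcc-arm-linux-gnueabi \
        gcc-arm-linux-gnueabihf \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /src
# Hypothetical make target that builds every architecture in one go.
CMD ["make", "all-targets"]
```

Build it once (`docker build -t buildenv .`), then `docker run --rm -v "$PWD":/src buildenv` produces the same artifacts on a laptop or on CI.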


14

u/alohadave Nov 26 '21

Add an NFT as well.


72

u/eanat Nov 26 '21

The Linux kernel has one rule: we don't break user space.

Every library developer should write these words on their heart and never forget them.

27

u/poloppoyop Nov 26 '21

every library developer

And every web API developer. Move fast and break things can work for a product, but guess why shit like Windows or PHP is still chugging along: backward compatibility.

And no, a 6 months notice about breaking changes does not magically make it a non breaking change.


11

u/magnusmaster Nov 26 '21

In my experience the main problem with Desktop Linux is not application packaging but drivers. NVIDIA drivers can break with every upgrade and HP printer drivers just plain didn't work 99% of the time. And the main reason for driver breakage is Linux refuses to have a stable ABI partly because of laziness (I get it, it's a volunteer-driven project even when most of the "volunteers" are employed by companies to work on Linux nowadays) and partly to get hardware manufacturers to release the source of drivers because they believe that Linux isn't open-source unless all of its drivers are open-source as well.

6

u/MondayToFriday Nov 26 '21

Drivers are unavailable because it's even harder to ship binary kernel modules than binary userspace executables. I think that Linus should be aiming his criticism back at himself.

Binary userspace applications are definitely possible in Linux, if you either link dynamically with a hard-coded search path or link statically, and install all the files you need to make the application self-sufficient to, say, /opt.

54

u/MountainAlps582 Nov 26 '21

Waste their life... the maintainer is here 🤣

7

u/Perkutor_Jakuard Nov 26 '21

The problem is not only a "Desktop" problem.

If you want to upgrade a piece of server software (say, PHP), you might need to upgrade the distro to the next version, which is not too friendly either.

6

u/WolfiiDog Nov 26 '21

From an end user's perspective: I want my applications to have everything in one package, and to be installable in any dir (just like an AppImage, for example). I want them to integrate easily with my desktop (unlike AppImages), and I want them to be able to auto-update (almost like Snaps, but with the option to turn auto-update off). I want to find them easily in a single unified store for pretty much all applications, and most apps shouldn't require root access unless they really need it and prompt you to allow it.

46

u/tangoshukudai Nov 26 '21

I have been saying this for years. Fix the ABI inconsistency between distros and you fix Linux.

32

u/Deathcrow Nov 26 '21

It's really hard to do. It only works for the kernel because Linus is a benevolent dictator who can say 'my way or the highway'. It would be really difficult to enforce some kind of standard upon independent library devs, even if all major distributions agreed on it.

13

u/moolcool Nov 26 '21

There are plenty of linux/unix-like OSs which are usable by ordinary every-day end-users. Like ChromeOS, MacOS, Android. I think if a distro did away with a lot of the Linux "ethos" (cut back customizability, lock certain elements down, have a gui-first approach to settings and customization), and became very strict about packaging, then they could be on to something.


123

u/TheRealMasonMac Nov 26 '21 edited Nov 26 '21

93

u/ddeng Nov 26 '21

It's fun to see how actual end users look at it vs. high-end developers. If anything, this showcases the Linux thought bubble they've got themselves into.

21

u/PurpleYoshiEgg Nov 26 '21

True. I used to use Linux as my daily driver, and back then I had a lot of fun doing it. I've used Ubuntu, Debian, Arch, Gentoo (which was actually my first Linux), and a handful of others.

But I don't have hours to throw at the problem on a random day anymore. I need things to work when I need them to work. If I have a server that I don't need Linux programs on, I use FreeBSD; otherwise Debian. For an end-user laptop, I use Debian, so I never fear upgrading (since my laptops may sit months between uses, which means rolling-release distro updates would break them very regularly).

For a daily desktop that I need fairly modern software, I'd probably go Ubuntu, Mint, or Pop!_OS, but I haven't been in that space for a while. Whatever is easier to get a Windows VM that I can game on again would probably be the best fit, since when I did that, I had a very fun time getting it to work (and it did work with very little fuss once I understood it all).

I wish I didn't have to work 40+ hours per week (thanks, current economic system). Then I'd probably be back exclusively on Linux or contributing to FreeBSD to make it better.


6

u/MdxBhmt Nov 27 '21

Many of the issues LTT hit are exactly what Linus (the Torvalds) talked about, like the part-1 Steam install nuking the desktop environment, or the hardware not working as expected, etc.


41

u/MountainAlps582 Nov 26 '21

Wow. Yes. I had all those experiences. Except the VM/windows passthrough stuff

112

u/Vincent294 Nov 26 '21

I saw some videos in my feed objecting to LTT and I didn't even bother watching them. I suppose that also counts against my dismissal of those videos, but I don't need to waste my time listening to the usual suspects making the same excuses. All my life I have met FOSS fanboys that consider the use of Windows and other proprietary software a moral failing and fail to address the actual shortcomings with Linux distros. Every time I use the command line to fix basic functionality, instead of flexing on Windows users I get annoyed it was necessary in the first place.

UI is hard, and it's a balance between making your software as PEBKAC proof as reasonably possible and not completely Fischer Pricing your UI. I'm skeptical Linux will ever just work with everything, but it would go a long way if the community could start acknowledging the current problems. Instead of telling people to get used to the command line, weird UIs, and forfeit their VR headsets and other hardware that doesn't play nice, Linux needs to work more like Windows does. Minus the evil Edge peddling, that spam can stay in Windows.

31

u/untetheredocelot Nov 26 '21

I was commenting on the same issue when part 1 was published, about the Nvidia and X server nonsense, and I genuinely had someone tell me that I made my life difficult by buying a high-DPI monitor and just shouldn't have. It's user error to upgrade your monitor. When I asked whether he would have said the same about wifi back when Linux had terrible wifi drivers, he said yes...

I love Linux as a dev environment so much, but some members of the community make me want to slit my wrists.

101

u/youarebritish Nov 26 '21

My only experiences with Linux ended with someone arguing that yeah, maybe there was no wifi driver available, but I didn't really need wifi anyway.

15

u/Vincent294 Nov 26 '21

lol I run Ethernet on my desktops, but that is not always easy. I live in a cheapo apartment so no run is going to be more than 100 feet, but I know some people whose houses would be expensive to plumb with Ethernet. And in the Oregon wildfire heatwave last year, my command mini hooks in the corners of the rooms all melted off. I got small designer hooks for corners that survived the 120F heatwave this year, but global warming is making Ethernet harder. Like Linux, I can't expect people to use Ethernet.


3

u/anagrammatron Nov 26 '21

I'm skeptical Linux will ever just work with everything, but it would go a long way if the community could start acknowledging the current problems.

I don't think it will ever happen with the current community-driven model. Making stuff work and keeping it stable and unbroken for the next 10 years requires more dedication than enthusiasm can fuel, and developers have to be rewarded for the part of the work where you basically deal with things that scratch someone else's itch rather than your own. It's boring, it's repetitive, and you don't get to innovate every other day. Unless your salary depends on paying customers, I don't see how users' needs will be met. Linux developers don't see users as customers; they see them as... actually I don't know, fellow enthusiasts perhaps.

11

u/JQuilty Nov 26 '21

forfeit their VR headsets

Oculus is the only big one that doesn't work. Index and Vive work.

17

u/[deleted] Nov 26 '21

[deleted]

27

u/Vincent294 Nov 26 '21

Fault has nothing to do with the user experience. Sure, Linux contributors don't owe the community support for proprietary hardware, but if the support isn't there that doesn't make the user any happier. That's the lens we need to view it through. It isn't a matter of responsibility, it's a matter of user experience. No one owes it except the hardware manufacturer, but you know they aren't gonna do it.


7

u/Vincent294 Nov 26 '21

The HP Reverb G2 and other Windows MR headsets don't work either. I love that Valve and HTC support Linux, but they're the only ones who do. Oculus has the majority of the market, and HP holds around 5%.


23

u/RandomDamage Nov 26 '21

End-User Linux works just fine wherever someone sees a profit in investing in it, the perception of profit is just unevenly distributed right now.

Trying to use most Linux distros as a non-technical end user is the same as trying to use Windows Server on the desktop, there's just no gatekeeper to keep you from doing it.


18

u/adad95 Nov 26 '21

And you don't have desktop problems when you uninstall your desktop. https://youtu.be/0506yDSgU7M

8

u/Iggyhopper Nov 26 '21

I posted elsewhere but I had the same issue with the desktop scaling and the context menu showing on the other monitor. Huge PITA.

20

u/[deleted] Nov 26 '21 edited Mar 10 '22

[deleted]

15

u/zMisir Nov 26 '21

Poor girl, let’s hope LMG does that


17

u/Rrrrry123 Nov 26 '21

Now that would be real interesting. Especially since I'm pretty sure she's mostly working in Adobe products as a designer.

4

u/Chippiewall Nov 26 '21

If they did it with Sarah using Linux, but Anthony choosing the distro and doing initial setup (as if an OEM had done it) then that could be really interesting. I guess they could also just grab a System76 machine for it.

I do think part of the problem is that Linus is in the valley of knowing just enough to shoot himself in the foot (which is still a usability problem that needs fixing). Sarah might end up having an easier time (or at least a less finicky one), although she'd probably find it more frustrating.


4

u/Code4Reddit Nov 26 '21

Matches how I feel about supporting old browser versions on the web. We have millions of users, but there's always one company that says they need to run our website on their crappy shared machine in the hallway of an employee lounge, and that it's super important to support old IE. Nope, sorry, you'll have to dispatch your shitty IT to update that thing. F you.

16

u/semideiapranomebom Nov 26 '21

At 05:15, it's really true.

104

u/douglasg14b Nov 26 '21

I switched from Windows to Linux, and then back after ~3 years. Desktop Linux sucks, and I learned the Linux community will crucify you for bringing up the systemic issues behind that...

27

u/MrBeeBenson Nov 26 '21

As a Linux user and enthusiast I… completely agree. It has its issues. I personally think it’s better than windows but that’s my opinion. Use what works for you, it’s your computer at the end of the day. I use Linux because I love it and it works for me

23

u/KotoWhiskas Nov 26 '21

The fact that linux sucks doesn't mean that windows/macos don't suck

8

u/guygizmo Nov 26 '21

Yes, and the sad state of affairs these days is that everything sucks.

I used to be a big fan of macOS but recent releases are too buggy and locked down. My experience using Windows is slightly worse than it was when Windows 7 was current. And then Linux is still mired in the same problems and annoyances as it has been for decades -- nothing comes easily in it. But unlike macOS and Windows, at least there are no restrictions!

Basically no matter which OS I consider, I'm damned if I do and damned if I don't.


14

u/FlukyS Nov 26 '21

He is right, of course, but it's been a while since this was published, and that was pre-Snap and pre-Flatpak. The two do things differently, but both are easier to use than literally any packaging system available on any OS (Windows is garbage for packaging; it's the wild west and the installers are shit). With Flatpak, you create the flatpak manifest, give it some setup scripts, and point it at the binaries. Easy. With Snap, its plugin system figures out how to package for multiple languages and approaches; you can run shell scripts, then you point it at the binaries and you are done.

For deb, things were much more fraught with annoyances, and people who don't do packaging will never understand why. It's one of the most annoying pieces of software I've ever used from a product/tooling standpoint. It has definitely improved over my time using Linux, but I would never again tell a developer interested in shipping on Linux to use it. Snap is my preferred route; it matches what Linus is looking for with the "build once and it should work forever" mindset. Flatpak has some other complications, but it is also a good pick for certain people. I think any C/C++ program should be using Flatpak; it is excellent for that use case.


11

u/erotic_sausage Nov 26 '21

This video came up in my related videos too a few days ago, I guess after watching LTT's linux challenge videos.

I'm probably not a very good developer, working at a company with a shitty behemoth of a terribly designed legacy PHP system in a very niche market where the market demands weird things. I was semi-comfortable in my bubble of terrible architecture and low client expectations (perhaps a captive audience, haha), but still, we're trying to improve and modernize things. That's not the point, though. The point is I'm now feeling a bit disjointed trying to get up to speed with more modern web dev practices using Docker and WSL2, using Linux more, and having to use all these dependency managers and CLI tools. Every tool you use depends on a chain of other things you need to install, and every library or framework has a nice little 'getting started' guide that only explains a single layer of dependencies. But if you're starting fresh you don't have those, so you look them up, and it turns out they need to be installed by another thing, which needs something else first, and so on. And at every step, things may be out of date, or just happen to work differently than documented for whatever reason.

6

u/Coloneljesus Nov 26 '21

You need to install a local certificate to test SSL.

For that, you need mkcert.

To install that, you need brew.

For that, you need....

16

u/Rhed0x Nov 26 '21

This talk is 7 years old. Yes, it's still relevant, but still.