It takes work off both the distro maintainers and the developers of the software, because they only need to provide a single package that works anywhere instead of packages for every single distro.
Just a shot in the dark, but it sounds like you have never maintained software. If you have, then my apologies, but your experience must not have included much by way of dependency and version management. Dependencies and versions are a major source of headache for any software engineer and it’s a problem of our own making.
Having completely self-contained applications, while more bloated, makes maintenance and distribution exceedingly simpler and puts the burden of managing that solely on the developer of said application instead of burdening everyone down the line.
I remember when package managers didn’t exist. It was painful. This is the next step in the evolution of the Linux desktop that was mostly solved in Windows and effectively completely solved on macOS since some early 10.x version.
I would not call it a next step. Just another option. The big downsides include a much larger footprint on block storage and, worse yet, a much bigger memory footprint, since your app cannot benefit from shared library images. Worse system integration too.
In a world where block storage is huge and cheap, and memory is too, maybe that matters less. I would not say it is without issues though. Convenient, maybe, but not optimal in a lot of ways.
Flatpak does try to account for storage size by using shared base images. The main problem is that some Flatpak apps don't update to the latest base, and some use different base images altogether, meaning most of the time it needs to have several bases anyway.
From my understanding, Flatpak is built on top of OSTree, which will automatically deduplicate files across different packages. That said, I’m not sure if this extends to downloading packages. The site claims that it does do “delta updates,” which would hopefully mean that it doesn’t download files that are already on the system, even if they’re part of another package.
I’m just going off what I read in the docs. Someone with more understanding of the system can clarify.
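If anyone wants a rough sense of how much sharing is actually happening on their own machine, these commands (assuming a standard system-wide Flatpak install under /var/lib/flatpak) give a quick look:

flatpak list --runtime        # the shared runtimes/base images currently installed
flatpak uninstall --unused    # remove runtimes that no installed app still references
du -sh /var/lib/flatpak       # on-disk size of the deduplicated OSTree repo

The first command at least shows how many different runtimes your apps have pulled in, which is where most of the duplication people complain about comes from.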
Don't know much about Linux, but looking to convert my EOL Asus C302 Chromebook to Linux, so trying to learn anywhere I can 😊
After reading all the comments on this post, if you're right about Flatpak de-duping, that would make a good portion of the arguments against it moot.
Hopefully someone knowledgeable can step in and add some more info.
Incidentally, if anyone has a suggestion for a lightweight Linux distro that I can use on my Chromebook, any and all ideas are welcome 😁
If people want the Linux desktop to become more ubiquitous in homes, it better damn well be the next evolution. Someone’s grandmother isn’t going to get on the command line when apt inevitably decides to break.
The concept is not new, and Apple has had .app containers for a very long time that almost always just work. So clearly the concept has long been proven.
Perhaps flatpak, snap, appimage aren’t the final forms of this concept on Linux, but it’s a step toward making application packaging and distribution much more friendly for the common masses.
That Apple can get their own relatively narrow platforms to work really says very little. It would be shocking if they could not.
As far as home use goes, Linux works fine. Both my wife and father-in-law use it too and they are not technical. I cannot remember the last time I had an apt issue or any other issue.
As for snaps being better: not my experience. I will take a native package any day over this other stuff. Just more integrated and reliable.
I like Flatpaks and AppImages for application delivery and here's why:
-
Software doesn't just magically appear in various distros' repositories. There is a considerable amount of work (time/effort/energy/thought) that goes into including and maintaining any given program in a single distro's repo, and then very similar work must be done by the maintainers of other independent repositories. To make matters worse, some programs are not straightforward to compile and/or may use customized dependencies. In those cases, package maintainers for each distro will have to do even more work and pay close attention to deliver the application as intended, or risk shipping a version that works differently in subtle ways and possibly with rare bugs. (Needing to ship custom versions of deps for a certain program also totally eliminates a lot of the benefits of shared libraries; namely reduced storage space and shared functionality or security.) That's part of the reality of managing packages, and the fact is that there's a lot of wasted effort and repeated work that goes into putting this or that application into a distro repository. I have a ton of respect for distro package maintainers, but I would prefer that their talents and energy could be used on making the user experience and polish of their distro better, or on developing new/better software, rather than wrestling with every new version of every package over and over again multiple times per year.
-
As a developer it's very nice to know exactly what is being "shipped" to your users, and that most of your users are running the same code in a very similar environment. In my opinion, it's simply better for users and developers of a piece of software to have a more direct path, instead of running through a third party middle-man. Developers ship it, users use it, if there's a bug the users report it, developers fix it and add features and then ship again. It's simple, it's effective, and there's very little reason to add a bunch of extra steps to this process.
-
The more time I spend using immutable, atomic Linux distros like Silverblue, the more I value a strong separation between system and applications. I want my base system to be solid as a rock, and ideally pretty fucking hard to accidentally break (either on the user end or the distro end). At the same time I also want to be able to use the latest and greatest applications as soon as humanly possible. Well, Silverblue has shown that there's a viable model to do that in the form of an immutable and atomic base system combined with containerized applications and dev environment. What Silverblue does may not be the only way of achieving a separation between system and applications, but I've never been more certain that it's the right direction for creating a more stable and predictable Linux experience without many compromises. I don't necessarily want to update my whole system to get the newest version of an application, and I certainly don't want my system to break due to dependency hell in the process.
-
The advantages of the old way of distributing applications on Linux are way overblown compared to the advantages of Flatpak. Do flatpaks take up more drive space than traditionally packaged apps? Maybe, I don't even know. But even if they do, who the hell cares? Linux systems and applications are mostly pretty tiny, and a 1TB nvme ssd is like $50 these days. Does using shared libraries create less potential for security flaws going unfixed? Possibly, but again, sometimes it just isn't possible or practical for applications to share libraries, Flatpaks can technically share libraries too, and the containerized nature of Flatpaks means that security vulnerabilities in specific applications are mitigated somewhat. I'm not a security guy, but I'd guess that Flatpaks are generally pretty safe.
Well, that's all I can think of right now. I really like Flatpaks and to some extent AppImages too. I still think that most "system-level" stuff is fine to do with traditional packaging (or something like ostree), but for "application-level" stuff, I think Flatpaks are the current king. They're very up-to-date, sandboxed, often packaged by the developers themselves, consistent across many distros, save distro maintainers effort that could be better used elsewhere, easy for users to update, integrate with software centers, are very very unlikely to cause your system to break, and so on.
It would be really hard for me to want to switch back to a traditional distro using only repo packages.
I disagree with so much of this.
You might not care about the extra disk space, network bandwidth, and install time required by having each application package up duplicate copies of all the libraries they depend on, but I sure do. Memory use is also higher, because having separate copies of common libraries means that each copy needs to be loaded into memory separately, and that memory can't be shared across multiple processes. I also trust my distribution to be on top of security updates much more than I trust every random application developer shipping Flatpaks.
But tbh, even if you do want each application to bundle its own libraries, there was already a solution for that which has been around forever: static linking. I never understood why we're now trying to create systems that look like static linking, but using dynamic linking to do it.
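For anyone who hasn't compared the two, here's a minimal sketch of the difference, assuming gcc/glibc and a trivial hello.c:

# dynamic linking (the default): shared libraries are resolved at run time
cc -o hello hello.c
ldd hello                  # lists libc.so.6 and the dynamic loader
# static linking: the library code is copied into the binary itself
cc -static -o hello-static hello.c
ldd hello-static           # reports "not a dynamic executable"

Flatpak and friends end up somewhere in between: dynamically linked binaries, but against libraries that are bundled or pinned per application.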
I think it's convenient for developers to be able to know or control what gets shipped to users, but I think the freedom of users to decide what they will run on their own system is much more important.
I think the idea that it's not practical for different software to share the same libraries is overblown. Most common libraries are generally very good about maintaining backwards compatibility within a major version, and different major versions can be installed side-by-side. I run gentoo on my machines, and with the configurability the package manager exposes, I'd wager that no two gentoo installations are alike, either in the versions of packages installed or in the options those packages are built with. And for a lot of software that tries to vendor its own copies of libraries, gentoo packages often give the option of forcing them to use the system copy of the library instead. And you know what? It actually works almost all the time. If gentoo can make it work across the massive variability of their installs, a distribution which offers less configurability should have virtually no problem.
You are right that some applications are a pain to package, and that the traditional distribution model does have some duplication of effort. But I don't think it's as bad as it's made out to be. Distributions push a lot of patches upstream, where other distributions will get that work for free. And even for things that aren't ready to go upstream, there's still a lot of sharing across distributions. My system runs musl for its C library, instead of the more common glibc. There aren't that many musl-based distributions out there, and there's some software that needs to be patched to work -- though much less than used to be the case, thanks to the work of the distributions. But it's pretty common for other musl-based distributions to look at what Alpine or Void have done when packaging software and use it as a starting point.
In fact, I'd say that the most important role distributions play is when they find and fix bugs and get those fixes upstreamed. Different distributions will be on different versions of libraries at different times, and so will run into different bugs. You could make the argument that by using the software author's "blessed" version of each library, everybody can have a consistent experience with the software. I would argue that this means that bugs will be found and fixed more slowly. For example, a rolling release distro that's packaging libraries on the bleeding edge might find and fix bugs that would eventually get hit in the Flatpak version, but might do so far sooner.
The one thing I've heard about Flatpak/Snap/etc that sounds remotely interesting to me is the sandboxing.
One thing I like about Flatpak in particular is that it allows me to have newer applications on distributions with older package bases (Debian, for instance). I don't much care for rolling release distros, and I'm not a fan of having to hunt for a 3rd-party repository, so for that purpose I really love the option to just get a Flatpak.
Also Bottles. Bottles is great.
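If anyone hasn't set that up on Debian before, it's only a few commands; roughly this, where the app ID is what Bottles uses on Flathub as far as I know:

sudo apt install flatpak
flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo
flatpak install flathub com.usebottles.bottles    # a far newer Bottles than Debian stable ships

After that, updates come from Flathub via flatpak update, independent of the Debian release cycle.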
I'm on Debian because the software in the Debian repos is stable. So for mission-critical software, at least for my purposes, I'll pick the version in the Debian repo, especially if it requires detailed integration with the operating system, such as real-time audio. If the software does get updated, it is probably important and nearly guaranteed not to break. A great example has been KDE Plasma: I don't get the bleeding-edge features, but it's been a rock-solid, fast, still-modern desktop environment on every computer I installed it on, including an old laptop that is so underpowered that Windows 10 is a PowerPoint presentation upon a fresh restart. If Debian takes several months or longer to update its Plasma packages to Plasma 6 when it comes out next year, that would be fine for me because I don't desperately need any new features from Plasma.
However, for software that really benefits from being up-to-date and isn't a showstopper if it breaks, for example FreeTube, I prefer the Flatpak. I primarily use Discover for simple package management and upgrades, and it was trivial to install the Flatpak backend, so now my Flatpaks get updated like anything else. However, Librewolf (a browser, which I prefer to keep up-to-date) is installed from a non-Flatpak external repo because I had problems giving its Flatpak version webcam permissions (even if I enabled them in Flatseal).
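For reference, the command-line equivalent of what Flatseal toggles is a flatpak override. It didn't solve it in my case, but for anyone who wants to try, it looks something like this (the app ID below is my guess at Librewolf's Flathub ID, so check yours with the first command):

flatpak list --app --columns=application                              # find the exact app ID
flatpak override --user --device=all io.gitlab.librewolf-community    # grant device access, which includes the webcam
flatpak override --user --show io.gitlab.librewolf-community          # confirm which overrides are active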
AppImages have been great for working on new computers because I can (usually) just download them and go. Except for programs that I expect to be portable, I don't typically use them in the long haul. Still, they're super convenient to have around.
I don't touch Snaps because of the closed-source backend and their role in Canonical's transparent attempt to lock down Ubuntu, but if they open-source the backend I might consider trying them.
IMO part of why I've stuck with Linux is because there is (usually) a choice of how to compute. I.e., there are several ways to solve a problem where Windows or Mac would pigeonhole you into their workflow. Having multiple options is inherently a good thing as far as I'm concerned, even if I don't use all of them.
It is a portable binary distribution format. It helps developers provide a format that should run on any Linux system, and it helps users not have to build software from source. Keep in mind that building software for a specific distribution is painful. One has to build a version for every distribution and version out there. Not something a developer can really do. So without a common binary format a developer is stuck providing source archives and maybe a binary package for a few distributions they test against.
This is primarily useful from a user point of view when you want something not in your distribution's repo. Either newer or just not there. If you use a Debian based distribution this does not happen often, as their repo is huge, though often older. Not Debian based, then it is a bigger concern. I used Redhat Desktop 25 years ago ... did a lot of building from source back then. One reason I switched to Ubuntu (a Debian based distribution) at the time: huge repo.
These binary package formats also offer other features, like more isolation, but they are less integrated than native packages, so frankly I do not like them very much. They also increase your direct-facing supply chain and do not benefit from auditing and patching by your distribution's security team. So again, not great.
Every single distro maintaining their own version of every single Linux app is just a lot of work that wouldn't need to be done if there was a way of making a version that worked on every distro out of the box. Plus that way app devs don't have to worry about trying to hunt down every weird bug that only comes up occasionally while doing a specific thing using a specific version of a specific library that only one distro uses.
None of them are better than a well maintained native app from your distro. In fact, realistically they kinda have to be at least a little worse than an actually well maintained one. If you include all the time spent maintaining native apps, universal formats are potentially orders of magnitude less work to maintain if they become the default though, and that is valuable. Valuable enough that a lot of the people doing that work are pushing for them pretty hard.
- package it once, instead of many times by many different maintainers
- solves the dependency hell
- makes it easier to run multiple versions of the same program (or driver) or install a program without its complete desktop environment
- sandboxed, with better control of permissions (at least with Flatpak), and makes it easier to back up a program's whole version and state
- same package manager across distributions (at least with Flatpak)
- useful on LTS distributions, which do not get new packages, new programs, or beta software other than security fixes (think of Debian)
- useful for read-only (immutable) distributions such as SteamOS
- does not need sudo to install new programs (at least with Flatpak and AppImages)
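On that last point, a per-user install never needs root at all; for example (the app ID is just an illustration from Flathub):

flatpak install --user flathub org.mozilla.firefox   # lands under ~/.local/share/flatpak, no sudo needed
flatpak update --user                                # later updates are also unprivileged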
For simple applications this is probably not that wild. But the more complex the programs we talk about, the more helpful these formats are. Programs like OBS or Firefox, for example, are a lot of trouble to compile quickly. And imagine more of these programs. The package maintainers of your distro could use that time in a better way. Those who want to package it themselves (probably Arch) could still do so, but most who want to provide the newest Firefox could just use the Flatpak, coming directly from the developer on day 0.
One also does not need to wait until it's packaged by your distro maintainer; it comes directly from the developer instead (maybe). The original developers often do not support all distros and would like to have a known state and version of the program that they can rely on, like a Flatpak.
That being said, I don't use Flatpak. But I used it in the past and it was helpful in some cases. Even on an Arch based distribution. Currently I use an AppImage for a program that is not in the official Arch repos. The AUR has it, but the -bin is outdated and the -git version, building from source, takes too long and too much power. Even on my new modern machine it would take at least an hour for every new version. Or I just download the AppImage once (88 MB) and use its self-updating system (which downloads the newest version automatically and renames it to the current executable filename). I'm talking about the RPCS3 emulator.
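For anyone who hasn't used an AppImage: the whole "install" is just marking the download executable and running it. With my RPCS3 download that looks roughly like this (the filename is whatever the project calls its release):

chmod +x rpcs3-*.AppImage    # make the downloaded file executable
./rpcs3-*.AppImage           # run it in place; nothing is installed system-wide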
All of this is right, plus AppImage provides portability. I have all of my emulators in AppImage format in portable mode on a portable drive so I can move it from one PC to another and have all my games, configs, saves, etc.
Frankly I have found AppImage more useful because Flatpak and Snap seem to need updated infrastructure which may not be there on an older distribution. AppImage seems to not need much. So I have not found Flatpaks and Snaps to really run on just any system.
I thought that if one app needs one version of a library and another app needs a different one, you'll have a problem with normal package managers, and that sandboxing gets around this (and also has some security benefits?).
Also, sure, maybe I want to wait two years for my distro's maintainers to check and ship a Thunderbird update. But maybe I don't (and also don't want to use Arch), and Flatpaks are a (potentially unsafe?) way for me to get updated software faster.
Linux libraries do have versioning, so the system can sort that out... maybe. You also do not want the same app indirectly loading multiple versions of the same library. You do kind of want all apps on your system linking to the same shared image though. If they do, the system only needs to keep one copy in memory even if multiple running apps are using it. That is a big space and load-time savings. These separate binary formats, though handy, have their issues.
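You can actually see that versioning in the soname symlinks and in what a binary loads; for example (paths follow the Debian/Ubuntu layout, and curl is just a convenient test binary if it happens to be installed):

ls -l /usr/lib/x86_64-linux-gnu/libssl.so*   # versioned sonames such as libssl.so.3 can coexist with older ones
ldd /usr/bin/curl                            # shows which versioned shared libraries the binary maps at load time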
The Linux purist approach is to provide the source code: you download the (small) source files and compile them yourself.
Developers and gamers don't have storage issues, so the higher storage use of Flatpak and the lack of dependency issues (it bundles a copy of every library used) make Flatpak user-friendly enough for normies, aka gamers with Steam Decks.
Ahhh those were the days:
tar -xzf PACKAGE.tar.gz
cd PACKAGE
./configure
make
su
make install
Not sure many people do that any more. I rather prefer:
sudo apt-get install PACKAGE
Or just pick it from synaptic. If you didn't have to do manual integration, plopping down an AppImage has some attractions too.
Another kind of silly benefit is that distros without their own graphical package manager can use the gnome one with Flathub. I actually started installing NixOS on my family’s computers, because I can start from a common config and have everything up and running quickly. Plus it’s super stable. And with Flatpak, they can install software after I’m gone without editing the config. It’s kinda like my config is the base system, and then they can layer on top.
Simple example: the app that you want is outdated or misconfigured in the distro's package manager (e.g. OBS on Arch missing Wayland capture). If the app has a Flatpak version, it's likely maintained by the same people who make the app, so they can make sure it works fine through Flatpak, and since it's distro-independent it works everywhere. AppImages just bring all their dependencies with them, and Snaps, idk, never used them...
It's probably not of any benefit to you as a user if it's also available in the package manager.
It helps you if you want something that's not available in your distro, or a different or maybe multiple specific versions, or you want to contain some stuff and use the additional permissions system. But you don't have support by the distro maintainers this way and it's not tied into the rest of the system any more. I always use the packaged versions if available.
Other than that, software developers can use it to just do one build for their homepage that works on every distro.
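The developer-side workflow for that is flatpak-builder plus a manifest. A minimal sketch, assuming a hypothetical com.example.App manifest and using the freedesktop runtime (the version here is just illustrative):

flatpak install flathub org.freedesktop.Platform//23.08 org.freedesktop.Sdk//23.08   # runtime and SDK the manifest targets
flatpak-builder --user --install --force-clean build-dir com.example.App.yaml        # build and install locally for testing
flatpak run com.example.App                                                          # run the result like any other Flatpak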
They
- add a second source of package truth to the host, introducing uncertainty in content. So they...
- risk consistency in that you can't be sure quickly from where something came.
- add an out of band repo-like entity with no signed manifest of exact contents so you can't validate your install down to the file level
- encourage dependency hell
- break any sort of support agreements, as vendors can all refuse to support people running this oob spooge at the drop of a hat (happened to me with moreutils)
Really, from a build/release standpoint, from an OS security standpoint and from an escalation/support standpoint - three jobs I held on the OS/distro side - they're all just toxic and valueless.
But the kids think they're neat, so in a world where they rewarded systemd with more than ridicule, I guess #thisIsFine.
In that specific area, that's where compared to each BSD operating system, Linux will forever be trash garbage.
In what way does BSD solve this issue? I would not consider a limited number of distributions, a small user base, and a low package count exactly a solution.
That said, if BSD had been released as FOSS a decade earlier I imagine we would all be using BSD not Linux. Would have been an interesting twist of fate.
I think the comment refers to this:
Not sure. This is basically a container, not a distribution format. Not sure how it is different from Linux containers, though Linux has a bunch of options; not sure which is most similar.
Yeah, I think the BSDs lead the way with some things, like jails. But they're not distribution formats. But jailing is part of things like Flatpak. And we have chroots and systemd-nspawn. I think I misread the comment and it was more shitposting than anything of substance.
That is the thing about Linux. Linux is so huge and has so many ways of doing things that the days of knowing everything are gone. Same with many of the important tools too. Python has gotten really vast too, for example.
This is kind of old people behaviour. I'm still not 100% sure if I'm getting more conservative, having difficulty with things changing, or if things really used to be better... They're different, that's for sure. And I have some valid criticism for some things, too.
Frankly I loved my Commodore 64. My Linux box is better in every measurable way, but there was something to the simplicity, and to a time when just making a sound and drawing on the screen with a computer you could afford was quite a thing.
Same with Python. Started using it around 1998. It was simple enough that I learned the language in a day. Now there is so much more. Then add packages for everything these days... a lot of the work is understanding packages, venvs, and how to deploy, not just opening IDLE or pywin and writing stuff. Sure, Spyder or one of the other IDEs can do static checking, put docs at your fingertips, integrate a debugger, and offer a graphical shell where you can do all sorts of stuff. It changes the feel of programming though.
Hehe, that's called nostalgia. I can feel that, too. 😊 Things used to be simpler but that had some appeal to it. And a different vibe. And you had to work hard. That made your achievements more rewarding than spending your time fighting with complex buildchains.
BSD is FOSS, unless you are an ideologue.
BSD does not have distributions, those don't exist.
BSD totally has distributions. Some versions of BSD are separate operating systems from each other, not distros, but things like GhostBSD or MidnightBSD are absolutely FreeBSD based distros.
Wrong, guess again. Read the websites.
You mean like where immediately on the front page of the GhostBSD website it says that it's built on top of FreeBSD code? Just because they don't use the term distro doesn't mean they're anything different.
You're stuck in the cult of Linux and projecting your mentality onto other things without judging each on its technical merits. You look at all software in Linux terminology rather than making a distinction to articulate correct phrasing in a cohesive manner.
I call a spade a spade. If you can't handle two binary compatible versions of BSD being called distros just because it's a Linux term even though by every possible definition of that term that doesn't include the word "Linux" they absolutely are distros, that's your problem.
It is now. It took most of the 90s for that to sort out, plus a big lawsuit. The FSF started in the early 80s and Linux in the early 90s. Not sure BSD was available free to just anyone in 1991 when Linux became a thing.
Linux is a bad attempt at copying UNIX. BSD comes from the original UNIX of the 70s. BSD was a collection of the patches, fixes, and other developments that were applied to the UNIX codebase, and then, after the lawsuit, it took the original UNIX patches and started 4.4BSD-Lite.