Flatpak and Snaps aren't destined for graveyard of failed Linux tech yet

The world of Linux has long been divided into tribes, or distros, as we call them. But what actually makes a distro? The packages it uses? The people who put those packages together? The philosophy behind the choices those people make? The question of what makes a distro is actually very difficult on …

  1. Christian Berger

    You could of course have the same advantages _much_ simpler...

    ... by using static linking. That way you get a single binary you can just drop into a directory and execute.

    There is a reason for dynamic linking, and that is that you can easily fix bugs in libraries just by updating them. So if you find a bug in libpng, you can just recompile it, and all your programs will use the fixed version automatically. If you ship libraries with your code, you lose that advantage, and experience shows that you'll likely end up with old, outdated versions of your libraries.

    Static linking has the advantage of being nice and simple. There's just one file you need to take care of and nothing else. A container tries to achieve the same thing, but without the simplicity: you attempt to somehow cram your program and its libraries into one package, and then run that in some sort of jail. This is harder to do than static linking, yet brings no advantages.
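
    A minimal sketch of that trade-off, using zlib as a stand-in for the libpng example above (the file name and the use of zlibVersion() are just illustrative, and the static build assumes the static libc and libz archives are installed):

        /* zlib_version_check.c
         *
         * Dynamic build:  gcc zlib_version_check.c -lz -o check_dyn
         *   -> links against the system libz.so, so a fixed libz benefits
         *      this binary the next time it runs.
         *
         * Static build:   gcc -static zlib_version_check.c -lz -o check_static
         *   -> one self-contained file you can drop anywhere, but the zlib
         *      code inside it is frozen at whatever the build machine had.
         */
        #include <stdio.h>
        #include <zlib.h>

        int main(void)
        {
            /* zlibVersion() reports the zlib actually linked in, so the two
               builds can start reporting different versions over time. */
            printf("linked against zlib %s\n", zlibVersion());
            return 0;
        }

    The point is not the program itself, but which builds you have to redo when libz gets a security fix: none for the dynamic build, every single binary for the static one.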

    1. frank ly

      Re: You could of course have the same advantages _much_ simpler...

      It does bring security advantages due to the sandboxing - provided this is done well of course.

      At the moment, I have a few applications that are stand-alone, either as executables in a directory with their own library resources or as an AppImage file. For whatever reason, I trust the providers, and they haven't borked my computer yet or stolen my personal data (as far as I'm aware).

      1. Christian Berger

        Re: You could of course have the same advantages _much_ simpler...

        Well so far sandboxing has not worked, and it certainly isn't a way to protect yourself against malevolent code. After all, most of the code in a sandbox is about making it communicate with the rest of the system. Having a perfectly well sandboxed program, with no way to communicate with the rest of the system, might be secure, but is rather pointless.

        1. Anonymous Coward
          Anonymous Coward

          Re: You could of course have the same advantages _much_ simpler...

          Well so far sandboxing has not worked, and it certainly isn't a way to protect yourself against malevolent code.

          TBH I'm usually more concerned about protecting my system from fucking stupid code.

        2. DropBear

          Re: You could of course have the same advantages _much_ simpler...

          "Well so far sandboxing has not worked"

          Not that I'm disputing that, but AFAIK sandboxing is not supposed to be about isolating the sandbox from the keyboard, mouse and display (or its allotted amount of storage) - it's supposed to be about isolating it from all the other sandboxes in the system, and I see no inherent contradiction there as long as the sandboxed app doesn't specifically _require_ system-wide access to fulfil its purpose.

    2. Voland's right hand Silver badge

      Re: You could of course have the same advantages _much_ simpler...

      ... by using static linking.

      There is stuff around librt which behaves strangely if you link the executable statically. There is other stuff which will just stop working altogether.

      Once upon a time it used to be "everything linked statically just works; link dynamically and it may not". We are now in the exact opposite situation - a lot of software has never been tested to work correctly when linked statically. In addition to that, there are licensing issues - some stuff cannot be linked in statically without violating licenses. On top of that, most interpreted languages (Perl, Java, Python, and anything written in them, to name a few) will stop functioning if you do not have a working dynamic linker.

      So by statically linking everything you are opening a massive can of worms which you really do not need to open.
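
      The dynamic-linker point is the easy one to demonstrate. A minimal sketch, assuming a glibc-based system where libm.so.6 is present (the file name and the choice of libm are just illustrative): interpreters load their C extension modules at run time through dlopen(), which is exactly what stops working without the dynamic loader, however statically the interpreter itself was built.

          /* dlopen_sketch.c -- build with:  gcc dlopen_sketch.c -ldl -o dlopen_sketch
           * Building it with -static typically warns that dlopen still needs
           * the shared glibc at run time, which is the point. */
          #include <stdio.h>
          #include <dlfcn.h>

          int main(void)
          {
              /* Load a shared library the same way an interpreter would load
                 one of its extension modules. */
              void *handle = dlopen("libm.so.6", RTLD_NOW);
              if (!handle) {
                  fprintf(stderr, "dlopen failed: %s\n", dlerror());
                  return 1;
              }

              /* Look up a symbol and call it, as the interpreter would. */
              double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
              if (cosine)
                  printf("cos(0) = %f\n", cosine(0.0));

              dlclose(handle);
              return 0;
          }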

  2. Jay 2

    Given that most devs where I work don't understand that the latest bleeding-edge stuff they claim they need usually won't run on the old/boring/stable enterprise Linux we have (at least not without loads of dependency problems!), this sort of thing may be a good thing.

    1. bombastic bob Silver badge
      Devil

      'bleeding edge stuff' is often overrated

      yeah, 'bleeding edge stuff' is often overrated. In particular, when the interface suddenly changes to something you *HATE*, like 2D FLATSO or "hamburger menu" fat-finger-friendliness.

      Both of those *kinds* of things have happened somewhat recently. I like the look of Firefox without the hamburger. And I despise what's been done to Chrome.

      if "developers" would stop changing things around like that, and focus on security and useful features, maybe the existing Linux distros would track more closely to 'bleeding edge'.

      And yeah, static link the binaries, please. It would make cross-distro installs (and especially running on things like FreeBSD) a *lot* simpler. Maybe "just offer a binary built like that" and see how popular it becomes.

  3. Chewi
    Thumb Down

    They might seem nifty now but I don't see this ending well. Windows has been able to use this approach for years because it is a closed ecosystem with a relatively stable base. New Windows releases are few and far between and Windows software tends to run standalone with dependencies limited to invisible libraries. Free software, on the other hand, is subject to much more rapid change, and is designed to reuse as much as possible, be it libraries or other applications. This stuff doesn't always just slot together like magic and that's just one of the reasons why we have distributions. Of course, there's also the bug fixes, security concerns, and so on. I think distribution developers are very underappreciated, especially by upstreams who only care about their own software and don't think about the bigger picture. If they made things easier rather than harder for distributions then we could package their updates more quickly. Fortunately the new Meson build system is making it easy to support both approaches and it is starting to gain traction.

    I may be biased but I did try the OpenShot AppImage a couple of times. The first time, it crashed horribly. The second time, it was too slow to be usable, probably due to the video acceleration failing in some way.

    As for bundled libraries, I wrote a script for Gentoo called esteam to unbundle libraries from Steam games as much as possible. Without this, I don't benefit from a fix to SDL that allows me to run games on my second screen, which is a much bigger TV in front of a much comfier sofa.

    1. phuzz Silver badge
      Gimp

      This isn't the approach that Windows uses.

      Snaps are very similar to containers, so they come with all the dependencies they need, and are limited to their own sandbox.

      Windows programs don't need to worry about their dependencies (until they do) because Microsoft has taken care that new versions are backwards compatible with as many old versions of Windows as possible. This has some upsides (compatibility and, um, that's about it), and some downsides (cruft, security flaws, slowness, more different and overlapping APIs than a big bag of APIs).

      One could argue that most of the problems with Windows come from Microsoft's insistence on keeping compatibility so one person can still use the WIN16 program they wrote to do their taxes twenty years ago, on Win10.

      1. hplasm
        Devil

        Ha!

        "...keeping compatibility so one person can still use the WIN16 program they wrote to do their taxes twenty years ago, on Win10."

        You'd think MS execs could afford a Quickbooks license...

  4. tiggity Silver badge

    Dependency duplication

    Have they fixed duplication handling yet? *

    Last time I played with this was ages ago, when snaps were quite a bleeding-edge thing. A snap package could locally include the dependencies it needed, but you ended up with multiple snap packages each carrying their own local copies of the same dependencies.

    If this is not yet resolved and we get to the state where most software is delivered via snaps, then that's a lot of wasted space through dependency duplication.

    (At the time it just struck me as formalizing the approach of a "local" install and run of an application's dependencies, to get around the issue of the distro's "default" versions of the required dependencies being different.)

    * Yes, I could web search for the answer, but as the author has recently been trying all this out I assume they can give me an answer & other article readers may have the same question. I know de-dupe handling was a TODO thing for snaps, but lots of software has well-intentioned TODOs that tend to stay TODO (& unfixed) for ages.

    1. Orv Silver badge

      Re: Dependency duplication

      How much extra space are we really talking about? The binaries for most libraries are a couple dozen megabytes at most, which isn't much on modern disks. (If you're building a compact installation for an SD card or something you're not going to use containers to begin with.)

      In an ideal world I think deduplication would be handled at the filesystem level, but that doesn't seem to be a well-solved problem yet, unfortunately. Even ZFS dedup has issues.

  5. Anonymous Coward
    Anonymous Coward

    Just in case the author gets bored of Flatpak, one of the Arch developers has a personal repository with automatic daily builds of Firefox Dev, so you don't need to compile that.

  6. Mark 110
    Trollface

    Faff

    Sounds like a right faff. Why can't you just tick the 'Update automatically' box?

    1. Bronek Kozicki

      Re: Faff

      The answer is in the article - because distro maintainers won't bother to prepare a new package overnight for every application that happened to bump to a new minor version that afternoon. Application developers, on the other hand, just might. And if you really, really want to always run the most recent version of an application, then Flatpak allows you to rely on the developers, rather than the distro maintainers, to provide the new version for you overnight.

      1. Havin_it

        Re: Faff

        Most of the time the Firefox binary release is fine; at times I've used it in preference to whichever distro's package, and that meant getting updates straight away too. I wish LibreOffice had a standalone installer, as it's a nightmare trying to satisfy the dependencies for Gentoo's binary package of it. I might give the Snap/Flatpak option a look.

    2. Orv Silver badge

      Re: Faff

      "Why can't you just tick the 'Update automatically' box?"

      For permission reasons browser auto-update features rarely work in Linux. The browser can't rewrite its own binaries unless it's running as root, and nobody wants to contemplate the horror of a browser with root privileges.

      1. Electron Shepherd

        Re: Faff

        "For permission reasons browser auto-update features rarely work in Linux"

        But surely a daemon (running as root, but not doing any actual browsing) could do it? I don't see it as a Linux permissions problem - it sounds like a "the developers couldn't be bothered" problem.

        On Windows, Chrome and Acrobat Reader both manage it without requiring that the desktop application that the user actually interacts with is running with administrative privileges.

  7. Buzzword

    Obligatory XKCD reference

    Situation: There are 14 competing packaging systems.

    14?! Ridiculous! We need to develop one universal packaging system that covers everyone's use cases. Yeah!

    Situation: There are 15 competing packaging systems.

    https://xkcd.com/927/

  8. Gene Cash Silver badge

    Distro value-add

    My distro packaging folks (Debian) do quite a lot of work. They track and report bugs, they keep an eye on updates, and they handle security issues by informing people and packaging the fix as quickly as possible. They also write documentation, like manpages and READMEs, where these are missing from the application, and they aggregate user feedback for the developer.

    A lot of the application developers aren't so keen on supporting end users like that. They're usually focused on churning out code for bug fixes and new features.

    I don't see where cutting out the distro ecosystem gets me anything.

    1. frank ly

      Re: Distro value-add

      For about a year now, the GIMP 2.8 application in Debian and dependent distros, such as Ubuntu and Mint, has had a fault that causes it to freeze and lock up if you try to perform a cage transform. This is because the Debian people used the wrong version of the GEGL library when they compiled the GIMP source code. This has been pointed out in various bug reports for some time.

      I got around the problem by using the PPA of a main GIMP developer to install the 2.9 development version. A less easy way would have been to compile it myself using the correct library version. So, having a well-known and respected distro is no guarantee that things will be looked after properly.

      In case you're wondering, Debian 9 (Stretch RC) still has this problem with GIMP 2.8.

      Also, for about a year, the Mint distro has had a faulty version of IDJC, and I had to compile a later version from source code to get a working one. Debian 9 has the later, correctly working version, and this should eventually work its way into Ubuntu and Mint.

      RSS-Owl simply won't work any more because the standard libraries have advanced while it has stood still. I know how to 'fake it' using symbolic links, so I'm ok for now, until the standard libraries become totally unsuitable. A standalone version would be a nice thing to have.

    2. Anonymous Coward
      Anonymous Coward

      Re: Distro value-add

      "Faster updates and eliminating the distro middleman are just two advantages of Flatpaks"

      > I don't see where cutting out the distro ecosystem gets me anything.

      Exactly. Packaging is an art in itself, ensuring that software is going to work cleanly with whichever permutation of applications, libraries, configuration settings, permissions, and whatnot a user decides to use.

      (I have had experience both as a developer and as a packager, though not of my own code, so I got an inside perspective on the sort of problems one solves; plus, another set of eyes never hurts.)

      "Perhaps the even bigger advantage – especially with software like web browsers – is the security sandboxing."

      Sandboxing, if done properly (a big if), is quite handy, as would be a smartphone-like, per-feature permissions system, but those are quite orthogonal to the problem of how to deliver software from the developer to the end user in a safe, reliable, and convenient manner, which is what packaging addresses rather well.

  9. Trey Pattillo

    Plug & Play-----NOT !

    FTA - A single command that I put in a cron task updates my browser every night without me needing to ever think about it.

    ... and from the comments:

    I got around the problem by using the PPA of a main GIMP developer to install the 2.9 development version.

    What are you people smoking? 'Cause it is bad shit as in no good.

    Why is Windows at the top of the list?

    The user gets what they get [oem machine] and they don't have to dick around with it.

    Dell, HP, Lenovo and others will take care of auto-updates the same way Windows auto-updates do, and hopefully none of them shit in their diaper.

    All the talk of more eyes, doing it for humanity, etc. will NOT get Linux off the bottom, right above pond scum.

    As far as *buntu goes, I ascribe that to "if a company [which they are] gives you something for free, then you might highly consider that you are their product."

    Google anything, even in Windows, and find how much *buntu comes back with the same crap, and specifying Linux makes it worse.

    What I recommend for a Windows transition, and have been on for several years, is a "solid" distro based off Debian whose developers are ex-Debian programmers: SolydXK (the K, as in KDE, for me, but X is OK).

    True, Firefox and a few others take a few days to 2-3 weeks to get updated, but other than that.... IT IS SOLYD.

    [/rant]

  10. AdamWill

    False opposition

    I think the opposition quite a lot of the comments here are assuming is a false one. This isn't a case of 'app developers are pushing Flatpak / Snappy to cut out those distribution packagers'.

    Snappy is made by a distribution vendor (Canonical). Flatpak is supported by several distributions, notably Fedora (note: I work on Fedora). Distributions are actually quite interested in shipping stuff using these 'sandboxed blob' systems.

    Fedora Workstation folks, for instance, are currently working on building an OStree-based version of the product, for which you'd install additional applications as Flatpaks. They envision shipping at least some first-party Flatpaks as part of this effort.

    I believe Canonical is similarly interested in Snap as a distribution vector for software on Ubuntu IoT and cloud products.

  11. ckdizz

    As a long-time Ubuntu user who's embraced things that the Linux community usually hates (systemd, GNOME 3, Windows 10), I can honestly say that I haven't ignored anything as hard as I've ignored Snaps. I just haven't found a good use case for them yet. They haven't filled a niche that Docker isn't already consuming with its wide-load arse.

  12. itzman
    Paris Hilton

    I thought this was what PPAs did?

    Get the right PPA, install it and hey presto, you are downloading the app straight from the developer.

  13. Anonymous C0ward

    Is Arch really not bleeding edge enough already?
