I am arguing about this on the FreeBSD forums - from an end-user perspective. I think the benefits of saving disk space by sharing libraries do not justify the inconvenience we desktop BSD and Linux users suffer from being forced to disrupt and upgrade hundreds of installed packages just because one single desired upgrade needs to pull in its dependencies. The whole ecosystem collapses like a house of cards - once in a while I have to say "screw it" and auto-update more than a thousand packages, praying my Python and other projects survive, just because I want a browser update or a security fix.
I wonder whether others share the opinion that desktop Unix faces a very complicated future unless complex apps begin to bundle their own libraries.
I also think that large projects should just vendor their dependencies, including compilers, so you git clone the thing, type "bazel build", and have a working binary. (I am fine if the dependencies are not checked in to the repository, a bazel WORKSPACE file is fine with me. It records a checksum for all dependencies, so even if you reach out to the Internet to get it, you get the same bytes as the developers working on the code.)
Building and distributing software should be very simple. But it's not, because of an accumulation of legacy tools (shared libraries) and practices ("please install these 6000 dependencies to build our project").
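A minimal sketch of that flow, assuming a hypothetical project URL; the point is that every third-party dependency Bazel fetches is pinned by a checksum recorded in the WORKSPACE file, so you get the same bytes the developers got:

    # clone and build; bazel downloads the pinned toolchain/deps itself,
    # verifying each archive against the sha256 recorded in WORKSPACE
    git clone https://example.org/bigproject.git
    cd bigproject
    bazel build //...    # build every target in the workspace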
> auto-update more than a thousand packages, praying my Python and other projects survive, just because I want a browser update
You may want to check out Nix. It's a package manager which isolates each program's dependencies; so you can have multiple versions of the same package.
> or a security fix
Having a centralized repository like this helps when it's a library that needs a security fix, because a single update applies the fix to all applications. Applications that bundle their own deps (including Docker images) each need to publish their own update with the patched version of the lib.
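For the curious, a rough sketch of the per-program isolation Nix gives you (package attribute names are illustrative and depend on your nixpkgs channel):

    # throwaway environment with specific tool versions, without touching
    # anything installed system-wide
    nix-shell -p python27 python38 nodejs
    # install a package into your own per-user profile
    nix-env -iA nixpkgs.firefox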
Is this kind of like Ubuntu's "snap" thing? (I know snap involves cgroups, which aren't strictly necessary to give Firefox a different version of rust than everything else... but it seems widely deployed and easy to use, at least if you're on Ubuntu.)
> You may want to check out Nix. It's a package manager which isolates each program's dependencies; so you can have multiple versions of the same package.
In principle true, yes. But it comes with a cost. Even the nixpkgs Firefox maintainers are considering shipping only Firefox ESR starting with NixOS 20.03:
Nix is a fantastic concept, and I hope it takes over the world. But the NixOS packages are a mess. I tried it for a few months last year before giving up after several packages and even whole collections of packages became unusable even in the stable repository. The repository needs some serious reworking before Nix can really shine.
It's really trading one problem for a different one. The reason traditional distributions only package one version of a given library and make everything use that same version is so they only have to maintain one version of the library.
If you allow every application to choose its own version then they all choose a different one, which means someone then has to continue to maintain every different version of the library separately. Doing that isn't much if any of a reduction in workload compared with getting every application to use the version the distribution ships, and is much more likely to end up with a situation where dozens of versions of the same library exist and half of them are broken in some way.
The better solution is for libraries to only break compatibility between major versions. Then the packager can ship some suitably recent minor version for each major version of the library, at most two or three major versions are supported at any given time which keeps the maintenance overhead feasible, and every application can successfully link against one of those major versions or another.
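This is roughly what ELF sonames already encode; a sketch with a hypothetical libfoo:

    # the packager ships one recent build per supported major version; apps link
    # against the major-version soname and pick up ABI-compatible minor updates
    ln -s libfoo.so.2.5.1 /usr/lib/libfoo.so.2   # current major
    ln -s libfoo.so.1.9.3 /usr/lib/libfoo.so.1   # previous major, kept for older apps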
It doesn't entail any of that. Traditional distributions like Debian already allow any number of library combinations; they just support it badly, by requiring you to keep them on different installations updated at different times.
Supporting the presence of multiple library combinations doesn't mean "supporting" every possible combination of versions being used in the wild.
It just means when users upgrade they can do so gradually for some sets of packages at a time. You can emulate this on traditional distributions by having a single VM image you use for your browser, then cloning the image, updating packages, and using that one for your E-Mail etc.
Having a mechanism like this should mean less support is needed from the distribution, because system-wide breakages are less likely to occur.
You'll get the same bugs, but users can easily back out, say, an OpenSSL version with a security fix for the 5% of packages it breaks, while retaining the fix for their browser & other high-risk packages.
For example, I currently can't upgrade my firefox version on Debian testing because it ">=" depends on a version of a couple of libraries that some 100-200 other packages on my system don't like.
In Debian testing this sort of thing happens occasionally and gets fixed sooner rather than later, but that it happens at all is just an emergent property of how the versions are managed. If I were to run the latest firefox with those new versions and not touch the other packages until they're recompiled, both I and the package maintainers would be exposed to fewer bugs, not more.
I'd get a bugfixed firefox today, and they wouldn't need to deal with bug reports about a broken upgrade, or firefox bugs in an older version users like me can't upgrade past simply because firefox happens to share the use of a popular OS library.
> I think the benefits of saving disk space by sharing libraries do not justify the inconvenience we desktop BSD and Linux users suffer from being forced to disrupt and upgrade hundreds of installed packages just because one single desired upgrade needs to pull in its dependencies
It's funny, from your point of view having a centralized repository with a (usually) single (usually) latest version of a library is a bad thing that may be (and probably isn't) justified by the goal of saving space.
From my point of view having a centralized repository with a (usually) single (usually) latest version of a library is an awesome thing that I would leave any other ecosystem to get, and the space savings is just a bonus that doesn't much matter.
Most dependency maintainers don't provide updates for more than a few versions of their software. When one piece of software depends on -latest and another piece of software depends on -legacy, you can ship both with the central repository model. In the Linux distributions I've used this is a solved problem. Arch Linux has five different versions of the JRE that are separately installable.
That said, just because you had good luck doesn't mean that it's stable.
Here's the most trivial way I can think to explain this:
Check out Arch News.[0] Ctrl-F (Find) 'manual intervention'.
Six years of results on the first page; 13 instances of 'manual intervention required'.
Reliability != stability. Stability usually implies a platform one can use and develop for without expecting major changes to be common, if they come at all.
As for the anecdote of using the [Testing] repositories, a user's experience with such things really depends on their use of new and currently developing software.
A simple computer with simple peripherals that is used to run emacs all day isn't likely to be broken by the Testing repositories.
Personally, my anecdote: the second someone starts using Arch for something on the new fringe (hi-DPI, touch, tablets, S/PDIF, SLI, NVRAM, new window managers, new X replacements, prototype schedulers, filesystems, or kernels, PCI pass-through, exotic RAID configurations...), the [Testing] repository is an act of masochism. It's only a matter of time before something drops out.
I found that my best bet was chicken sacrifice and Opus Dei-style self-flagellation before each [Testing] 'pacman -Syu'. At least then I had a 50/50 chance of the next boot. But, granted, I use a lot of weird or fringe hardware.
All that said: other distributions don't have better [Testing] repositories. It's just that [Testing] is for... testing. It's unstable by its very nature.
> It is the user who is ultimately responsible for the stability of their own rolling release system. The user decides when to upgrade, and merges necessary changes when required. If the user reaches out to the community for help, it is often provided in a timely manner. The difference between Arch and other distributions in this regard is that Arch is truly a 'do-it-yourself' distribution; __complaints of breakage are misguided and unproductive, since upstream changes are not the responsibility of Arch devs__.
Even their wiki is very clear to inform you that you're at the whims of the package upgrades.
But I also want to point out that "it hasn't broken in a year" is cute; I ran Arch for probably 10+ years until I finally got tired of it and moved to Ubuntu. The last straw was me sitting down to get paying work done and spending half my day trying to recover my work environment. In the 2-3 years since I moved to Ubuntu I've never once sat down at my PC and had something stop working that was working before.
This defense of your favorite distro is especially silly when you think about it logically. Of course a rolling release system is going to have more breakages. The sensible response isn't "it never breaks!", but is instead "that's the nature of rolling release, you opt into it when you choose Arch".
This is funny; I could tell exactly the same story the other way around. I came to Archlinux as a 10+ year Debian user because Debian did too much automagic under the hood that broke and took a lot of time to fix. No breakage on Arch, because there's no automagic behind your back.
For a long time before using arch I thought too that rolling release might be more unstable, but I have come to the conclusion that quite the opposite might be true.
The first time I tried Arch, I installed it, did a system update, and libc completely broke and my system was unusable. I went back to Gentoo.
I went through some hell when I ran Gentoo unstable(~) but for the past several years I've been on Gentoo stable and have very few issues. I've even used it on work laptops at three different companies.
It is definitely more of a do-it-yourself distro than Arch for sure, but I enjoy working with it.
Obviously this is true for "Arch Linux users", but that is a self-selected set of individuals with correspondingly biased circumstances/behaviors etc... I wonder if your statement still holds when we change the domain to people who have used Arch Linux at some point in the past?
As I understand it, Arch Linux is like having a pet. It requires constant care and feeding to keep it alive but can be very rewarding (allegedly)
You misunderstand it, then. Arch is stable enough that (unless you're doing stupid things, like grabbing half of your system from the AUR) there will only be a breaking change every three years or so. Just set a cron job to update it for you and you have nothing to worry about. It was literally created for lazy system administrators, and it's only gotten better for them as time goes on.
I ran a stable arch system for 4 years with only one breaking change introduced. Everything else was minor, with clear instructions on how to solve it on either the main page or the forums.
Unless Arch Linux broke for you because they ship multiple versions of some packages, that's not really relevant. This is a thing all the popular Linux distributions do as far as I know.
Also, personally I've stuck with Arch Linux for 6-7 years now because it's the first distribution I found that consistently works for me, so I don't know that anecdotes will get us very far either way.
Is that what's going on? I wonder how the Gentoo maintainers are dealing. They frequently unbundle libraries, but I've also seen a lot of packages with USE flags like system-jpeg or system-sdl to force the ebuild to use a system library instead of the built-in one.
Firefox, LibreOffice and other big applications typically take a long time to compile because they have so much stuff built in instead of depending on the system libs. They gain a lot of stability and predictability, but you can end up with bundled libs carrying security issues too.
> I wonder how the Gentoo maintainers are dealing.
I spoke to the gentoo firefox package maintainer last night. He even has firefox building on aarch64 with musl, so things are pretty good. For more common targets, there's also the option of using the official binaries from Mozilla.
Gentoo policy is to attempt to provide a way to use system libraries where possible, except when those libraries are too heavily modified. If something breaks, they use the bundled package. You can see that's currently happening with harfbuzz in firefox until they bring up the patch: https://github.com/gentoo/gentoo/blob/master/www-client/fire...
From experience, it's usually fine to use system libs when available unless those system libs are unstable or development releases. Then all bets are off.
Complex apps can bundle their own dependencies - that's what the Flatpak and Snap package formats do. You can also run Firefox in a Docker or LXD container with its dependencies. By sharing the X11 or Wayland socket with the container, the apps can appear on your main desktop.
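A rough sketch of the X11-socket-sharing trick with Docker (image name hypothetical; Wayland needs a different socket and environment variables):

    # share the host display and X11 socket so the containerized browser
    # renders on the main desktop (may also need `xhost +local:` on the host)
    docker run --rm \
      -e DISPLAY=$DISPLAY \
      -v /tmp/.X11-unix:/tmp/.X11-unix \
      some-firefox-image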
Yes, and it sounds ideal but I've had a recurring problem where having snap installed on a system increases the boot time, and apps hang on load. They stop hanging when I install the native app, and use that instead.
RedHat’s “streams” model will certainly do a much better job of handling this than the other distributions do today. I hope that the need for having multiple parallel versions of a dependency coexist is incorporated into the other distros, because I’ve lost a lot of sanity this past two decades to the assumption that “one installed version should be enough for anybody” on Linux and BSD servers.
AppStream does not allow installation of multiple versions of the same app AFAIK. I believe this is what they refer to in the clumsy sentence "The one disadvantage of Application Streams from SCLs is that no two streams can be installed at the same time in to the same userspace. However, in our experience, this is not a common use case and is better served using containerization or virtualization to provide a second userspace." in the linked article in the sibling comment.
It's a good idea in general, and it would be cool to solve this problem, but modular "streams" as they currently exist have so many edge cases that I don't really suggest using them if you can avoid it.
You can check the status for the firefox package in Fedora at https://admin.fedoraproject.org/updates/ which shows firefox-72.0.1-1.fc30 as "testing" (that is, you can install through "yum --enablerepo=updates-testing update firefox"). The reason for it not being promoted to stable yet is, according to that page, "disabling automatic push to stable due to negative karma" - that is, the update was marked as broken by a couple of people.
It is not disk space you should be concerned with saving, but real memory usage. I wonder how much memory a statically linked Firefox uses under a heavy load. I checked with esr (what I use) and it is a bit more than 1.3G excluding shared. So as firefox creates threads, I would think memory could get tight with a statically linked FF.
I fully agree with what OpenBSD has decided, I think the only thing worse than compiling Firefox is Gnome 3 :)
Statically linked processes can’t share dependency pages with other, different processes. Multiple instances of the same process or multiple threads don’t have to incur that same penalty.
> praying my Python and other projects survive, just because I want a browser update or a security fix.
You mean you aren't familiar with Python's virtual environment system, which is intended exactly for isolating development dependencies from system ones, but you're blaming the distribution. Please.
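For reference, the minimal venv workflow that keeps a project's Python dependencies away from the system packages:

    python3 -m venv .venv              # create an isolated environment in ./.venv
    . .venv/bin/activate               # use its python/pip for this shell session
    pip install -r requirements.txt    # project deps land in .venv, not in /usr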
Me too. Especially combined with fat binaries, like in the OS X Rosetta years. It'd make it a lot easier to package something that would run on arm, arm64 and amd64 in a more accessible way for users.
Well, the reality is that some developers of this complex software complain that their software ported to alternative OSes is difficult to maintain, even when the BSD devs offer to maintain it, which is why the BSDs are always on their own here in terms of porting, testing, packaging and updating the software in question.
For example, Chromium. Any BSD developer would know that attempting to upstream their port there is dead in the water. AFAICT, the Chromium devs only care about Win, Mac and Linux, nothing else. Likewise for Firefox, which is why the BSDs don't get official releases.
I feel some sympathy for OSes like the BSDs, which for "some" software are limited by the arrogance of other open-source maintainers who would rather maintain and test against the convoluted distros with a huge, disintegrated software stack than test against a single unified OS that the BSDs maintain themselves.
> For example, Chromium. Any BSD developer would know that attempting to upstream their port there is dead in the water. AFAICT, the Chromium devs only care about Win, Mac and Linux, nothing else. Likewise for Firefox, which is why the BSDs don't get official releases.
The BSDs qualify as "Tier 3" in Mozilla's build terminology, which means that the onus is on external contributors to identify problems and propose fixes, as there is no continuous integration support for these architectures.
Is the shared library issue actually related to this? In the OP they are talking about a Rust upgrade which is presumably only a build-time dependency.
The whole "centralized, trusted repository that has all your apps" system is wrong at a fundamental level.
The way shared libraries are used in Linux is built upon the assumption that package managers and centralized repositories are the right way to do things.
This! Our company's OpenSUSE Tumbleweed machines were still vulnerable on Friday. On Arch, the fixed package had already been available for two days by then.
The thing is, the distro maintainers need to decide whether they want a fast-moving "latest and greatest" approach, which might break stuff by accident, or a "slow but always dependable" route, which you can depend on as a company, with painful losses if things go south (in which case there should be an express lane for important security fixes).
It's perhaps coincidental, but I don't remember ever having trouble with an update that I couldn't find a quick fix for on the Archlinux homepage.
In my opinion, the reason why Archlinux can be so dependable is that packaging is based on really simple ideas, like not starting or stopping services on installation the way Ubuntu would, and that most if not all system files belong to a package. The package manager can do a whole OS installation just by installing the basic packages into a directory different from /. The package format is also really simple. Getting into the guts of Archlinux packaging is really approachable, and that makes me feel at ease about whatever trouble I may find myself in.
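A sketch of that property as the standard Arch install flow uses it (pacstrap, from arch-install-scripts, is essentially a wrapper that runs pacman against an alternate root):

    # bootstrap a complete Arch system into an empty directory using nothing
    # but the package manager
    pacstrap /mnt base linux linux-firmware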
As I mentioned downthread, surprisingly little stuff breaks by accident nowadays on Arch (on my desktop, zero breakage in the last year with a fairly complex stack; on one server machine I had one strange interaction between a language package manager and pacman that ended up being my fault). The "slow but dependable" solution ends up breaking as soon as software becomes too complex for maintainers to be able to backport security fixes onto old versions, so they just ship the latest version of those packages, negating the dependability aspect (like Debian's treatment of Firefox and Chromium).
"brake" makes it sound like one would have to reinstall the whole thing. I don't think I've ever had that happen in the 10 years since I started using it for nearly everything.
I've had my drive fail, but that was the drive's fault, and even in that case I also didn't need to reinstall everything. I just copied the good files and reinstalled only the packages whose files got corrupted.
I see, when you said "break", I thought you meant because of a bug or other error, but now I guess the fact that it needs configuration and learning how to do that configuration is what you're calling broken.
Certainly, Arch is definitely not a "it just works" OS. It's a tinkerer's OS. Different distros favor different types of users. By what you said, probably something like Ubuntu or Mint is better, something that "just works" with minimal learning curve for someone not familiar with Linux in general and that does not wish to invest the time in learning it. (Not saying that you're not at least familiar with it, but that's what these distros optimize for, I believe.)
Arch is not an easy distro to use even if you know what you're doing. Every Arch user essentially creates their own distribution which can break at any point depending on their particular environment. You need to take care of all the little things yourself.
Personally I didn't experience all that much breakage, but eventually got frustrated by kernel updates breaking hotplug kernel module loading until reboot because Arch removes kernel modules for the running kernel. This breaks random things like plugging in a USB drive unless you happened to load the module previously, so you're essentially forced to reboot every time you upgrade the kernel or take care to manually exclude it.
I'm grateful for Arch because their wiki is beyond excellent, but I wouldn't recommend it to anyone unless they just want to tinker.
I believe they are suboptimal for the same reason that lexical binding won over dynamic binding in language design: it is easier to reason with immutable bindings, and to maintain as little global state as possible.
The work done in Nix and Guix is interesting in this regard.
I wouldn't say Nix is anti-centralization; I'd say it's policy-agnostic. At the end of the day, not everything composes, only things whose interfaces are in some way compatible.
Traditional package managers conflate having something with composing something (everything is ambiently available and interacting). Nix separates those, so you can have every version of everything, and only try to compose things that fit.
I agree, and I think that people are too emotionally invested in the package manager concept to back out now. I mean, for years Linux proponents have been touting it as the key advantage over software distribution on Macs/PCs.
I think the empirical evidence is against you here, though. If package managers weren't a good way to do things, brew wouldn't exist. Neither would the Mac app store, or whatever the equivalent on Windows is these days.
As a Debian user, I appreciate that the stuff I install has at least gone through some minimal vetting first. And if I have to add a third-party repository or download something myself to run, I'm much more likely to view that as the possibly-dangerous action that it is, and try to actively assess the reputation and trustworthiness of what I'm installing.
That's certainly not an average, non-technical-user thing, though. Vetted app stores are there to help average users avoid malware, assuming they're doing their job properly.
I agree with the OP a little. Central repository models have advantages but they've always seemed like short term benefits in exchange for long term costs.
Things like the app store, in for profit scenarios, seem like ways to slip in monopoly control. Brew is an attempt to circumvent it.
I don't want to come across as suggesting they're a bad idea or don't have advantages, just that on balance I've always had a sense there had to be a better way.
The existence and popularity of homebrew, macports and fink (on mac) and ninite, chocolatey, cygwin, mingw and nuget (on windows) would suggest that the concept appeals to mac and windows users also.
I mean, it is a key advantage, on ArchLinux. If you don't have a rolling release, it's pure pain: you're often just stuck with old versions of software which aren't any more stable than the current stable release.
Non-rolling releases might work with smaller software where your distro maintainers can backport security bugfixes easily, but as soon as your software gets too big and complicated the distro maintainers won't be able to backport fixes themselves and will have to just resort to packaging the latest version of certain packages (like Debian had to do with Chromium and Firefox).
Also, for the most part, the "Arch is unstable" thing hasn't been true for a long time. I find that it's far more stable than Debian or Ubuntu LTS on my machine because of newer kernels containing newer hardware drivers.
You can run backported kernels on Debian and Ubuntu LTS too. On the latter it's pretty much the default - they provide "hardware enablement" releases so that users can avoid issues on cutting-edge hardware.
Would it be possible to run Firefox in its own jail? All of my FreeBSD boxen are servers, so I don't have any experience with this, but... put your crustier apps in containers so they don't poop in your main sandbox. Works for servers.
Hm. For me this is a question of interface design (vs. implementation). IMHO the Unix programming philosophy is a lighthouse for how to solve this well. In an ideal world you should have to upgrade a base program and its dependent libraries only if their interfaces have changed.
EDIT: And of course for security reasons - but then you only upgrade the affected player and not huge parts of your installed base. What I want to say is: this is all about the ruling design philosophy.
Outside packages should NOT be disrupted, given semver. The whole point of using shared objects (dynamically-linked libraries) is that when a problem arises you can update the affected piece of code in a centralized, system-wide store and every single one of the projects you use benefits from the new, up-to-date version. Using the latest version is just the right thing to do.
Semver is not a given in practice. Many libraries haven't adopted it, some of those that claim to follow it break things in practice, and the very definition of a "breaking change" can mean different things - sometimes depending on the consumer (this is especially true for cross-language interactions).
Distributions can make their own choices about versioning if upstreams "break" semver. Sometimes this results in weird version strings and you might not want to depend on these relabeled system packages for your own projects - that's the one case where "vendoring" a dependency version might be worthwhile. But it ought to be quite rare indeed.
This works if you have a stable ABI, but breaks when libraries are written in languages that don't have a stable ABI (in Firefox's case, Rust). Then even a simple recompilation of one package breaks dynamic linking.
Doesn't Rust use static linking between crates as a rule, precisely because dynamic linking would break given reliance on anything but a pure C-like ABI? (BTW, the cbindgen tool mentioned in the linked article is meant to address precisely that - provide a clean interface to a Rust crate that won't break with internal ABI changes.)
Does it actually work, though? It seems like there is a significant difference between the expectation that semantic versioning should result in no disruption, and the experience of the parent to your comment. I'd be interested to know exactly what went wrong that led that poster to be pessimistic.
Yes it does actually work and has for decades. The poster is either confused or doing something different, I expect.
The specific notice mentions Rust dependencies. Rust crates are statically linked (there are no stable-ABI shared libraries), so a Rust [security] update means all Rust binaries must be completely rebuilt. That seems to be part of OpenBSD's concern, and perhaps this has "triggered" the poster.
A random forum post found from user blackhaz:
> mariourk, you're definitely not alone. The same breakage happens to FreeBSD desktops as well. I have been vocal around the forum a few times about this. Not every desktop user wants to upgrade all their packages every X months. The way the default repositories are structured, the upgrades are forced onto users. I am a huge proponent of using application bundles, like on Mac OS, as quite often those upgrades break complex desktop apps too. I don't mind upgrades but sometimes I need to stick with a specific version of software, or roll back after something went wrong, and if it came as a bundle with its own dependencies, it wouldn't have to depend on other stuff that is being "force-upgraded" periodically.
So his complaint is about the way FBSD handles updates; he is generally incorrect about shared libraries; and his comment is completely OT for this thread.
I have been interested in finding BSD users who are interested in https://nixos.org/nix/ or even a NixOS/kBSD. It solves all these problems and in my view is the continuation of the spirit of the port system.
(The problem with bundling isn't disk space, but composition. Individual applications can compose fine, but libraries can't if they link other libraries at different versions and use those libraries' types (ABI) in their own interface (ABI). To solve this problem you need to distinguish between public and private dependencies in your package manager.)
> being too complicated to package (thanks to cbindgen and rust dependencies)
Can anyone explain what's behind this? Is it symptomatic of any program with those dependencies? I'm especially curious about Rust because it seems to be hyped very much lately (I have almost zero Rust experience and even less bias about it, just being curious).
They don't want to need to update Rust in order to do a presumably small security patch on Firefox.
Which honestly sounds like a totally awesome and legit reason to use -esr. Keep -current current with upstream, stable branch gets patches from firefox-esr.
Keep in mind stable patches to the ports tree are pretty rare on OpenBSD. They didn't do them as binary packages until fairly recently, either.
In a stable branch you want changes to be small and targeted, no matter how severe the issue is. You don't want to take on new bugs from patches that aren't related to issues you want to see fixed.
If the change is so small, I don't see why they wouldn't be able to backport the fix? Don't upgrade firefox (and any dependencies), just fix the bug.
It would be a lot harder if the actual fix were more complicated and the diff much larger. It's possible that master has diverged sufficiently from their version that backporting the fix would be unreasonable.
> If the change is so small, I don't see why they wouldn't be able to backport the fix? Don't upgrade firefox (and any dependencies), just fix the bug.
As far as I understand, you cannot call it Firefox anymore if you deviate from upstream (which is understandable, because upstream doesn't want to get bug reports for custom changes):
That's a good point. Perhaps they don't have deep Firefox expertise and resources to test. Perhaps it's better to let Mozilla decide what gets backported to esr and how to do it.
The same has been proposed in NixOS [1]: basically updating nss, sqlite and other common dependencies for firefox requires recompiling tons of software, which in turn requires testing, especially in the case the update was a major one. NixOS is special in this regard, because the dependencies can be updated just for a specific package, by adding ad-hoc packages for multiple versions, and it's ultimately what has been done for Firefox.
This is the old packaging design from when disk space and bandwidth were expensive, so you tried to have one version of each library or package on disk. This design leads to cascading complexity and breakage when many packages depend on the same library and some need different versions of it.
Modern packaging has changed the approach to bundling dependencies - using more disk space and bandwidth but isolating apps from each other and allowing independent upgrade cycles. Flatpak and Snap work like that, and the tidal wave of interest in containers on servers is related, as server containers are also used to isolate dependency stacks.
Flatpak and Snap are Linux-specific, though. If the BSDs had a comparable solution to package GUI apps along with their dependencies, I presume that Firefox would be one of the first apps to get that treatment.
Shared libraries aren't just about reducing disk space and bandwidth consumption; it's also about fixing bugs in one place fixing it for all consumers. It requires discipline to only fix bugs and not break consumers, though, and therein lies the devil.
Shared libraries just don't work anymore. That's one of the reasons behind Go and containers.
That way, the users always have working applications. Some applications may take more time to be up to date with their dependencies, but at least some of them can be updated asap without worrying about the others.
For example, I use Debian 10 stable on my laptop, on which I installed snapd. I then installed Firefox with Snap, and I can be confident that Firefox will have the latest security updates while keeping the system stable (some parts of the system are probably insecure, but at least everything still works; they will be updated later).
Although, I would like to point out that, while the storage space issue is solved with current tech, bandwidth is still an issue when on mobile data.
On your personal system it probably works really well. In reality, tracking thousands of containers on top of operating systems that use different methodologies gets really complex and is rarely done well. At least in my experience. (Granted, Fortune 100/500 never does anything well :) )
What seems to be happening is that the developer's burden of maintaining dependencies is shifted to operations, who have to maintain the automation that manages the various versions of containers, because the complexity is too great to handle any other way.
What I see happening in the application of this model is very similar to a train wreck. You see the crash coming for a long time but can't do anything to stop it.
It wouldn't necessarily. You could just update Rust, build Firefox and test that, and leave the others alone (with the older Rust).
Now if the complaint is solely about Rust, not specifically FF, then that's a different story. It might indeed be that FF is the only package using Rust right now?
The BSD ports tree can be thought of as versioned as a whole, and packages are basically a snapshot of one particular state of it. If you update one port, and it's a dev dependency for other ports, their packages will reflect that.
Right, but you will also find versioned ports, e.g. gcc4, gcc5, etc. It's not so unreasonable to assume the maintainers could/would do something similar with Rust.
- Firefox is hard to package so they won't package it (on stable).
- OpenBSD-current users are not affected as firefox has already been committed.
- Firefox-ESR is still maintained.
> (thanks to cbindgen and rust dependencies) on the stable branch (as this would require testing all rust consumers)
Can someone explain what they mean by this? What do other, non-Firefox Rust consumers have to do with Firefox being packaged?
The best guess I have is that to package Firefox they need to package (update) cbindgen and Rust. And if they update cbindgen and Rust officially, all packages using them would now need to be tested, as now (potentially) their dependencies have been repackaged?? But this seems strange TBH.
EDIT: So I guess they will still package Firefox-ESR on stable? Which will have (or maybe already has) all those dependencies, too? So it's just about having to do that packaging less often?
It appears that they are unwilling to update to the latest Rust-stable release due to the testing burden on the OpenBSD-stable team; which then conflicts with the Firefox Rust update policy of, as I read it, ‘latest Firefox stable will use latest Rust stable’.
Ok thanks, I guess this is somewhat understandable.
Though I wonder a bit why they don't automate the tests (maybe the computation cost?). (E.g. Rust automatically runs the tests of all libraries/programs published on crates.io to find regressions, though that takes a day or so to complete.)
Okay, suppose someone backporting a Rust update runs a big batch of tests and finds, say, two dozen packages with regressions.
Now what?
Spend two weeks investigating all the test failures? Backporting updates to these packages as well, all while users are patiently waiting for their Firefox to have its zero‐day fixed? Are the tests even correct? Were they failing before and nobody noticed?
And all this to only get automated tests passing. Any regressions in behavior not tested are not noticed (or more likely, noticed by users much later, requiring further investigation at that point in time to narrow down the Rust backport as the cause).
In the meantime, this packager’s work on other OpenBSD packages in -current (what most OpenBSD developers actually use day‐to‐day) completely stops.
That’s some insight into the mindset of a software packager. Non‐security backports to language runtimes are a serious maintenance burden.
Discard the outdated assumption that a single installed dependency version is sufficient for all packages, and then improve the packaging system to permit multiple releases of Rust, Python, etc. to coexist as dependencies so that packages can migrate gradually over time rather than forcibly whenever one crosses the line.
Homebrew does a fine job of this. Installing python@2 doesn’t necessarily mean it’ll be made “the default”, but it does make it available for dependencies without interfering with the default python 3 package.
Python 2/3 is not a representative example, because it's actually designed to be installed side-by-side by the authors. Many libraries and apps on Unix are not.
NixOS changes a lot of things to make it all work. If you're willing to pay that tax for the sake of package management, great! People who use BSDs generally aren't.
> Python 2/3 is not a representative example, because it's actually designed to be installed side-by-side by the authors. Many libraries and apps on Unix are not.
Rust is designed to support multiple toolchains with rustup. I've got the following installed on my desktop box:
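As an illustrative sketch (version numbers hypothetical), rustup keeps toolchains side by side and lets you pin one per project:

    rustup toolchain install 1.38.0   # an older toolchain
    rustup toolchain install stable   # the current stable toolchain
    rustup override set 1.38.0        # pin the current project directory to 1.38.0
    rustup run stable cargo build     # or invoke a specific toolchain explicitly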
> Okay, suppose someone backporting a Rust update runs a big batch of tests and finds, say, two dozen packages with regressions.
> Now what?
> Spend two weeks investigating all the test failures? Backporting updates to these packages as well, all while users are patiently waiting for their Firefox to have its zero‐day fixed? Are the tests even correct? Were they failing before and nobody noticed?
Yes, people do exactly that. It's part of running a "rolling" distro, see e.g. the Debian Testing transition tracker at https://release.debian.org/transitions/ - These transitions are running essentially all the time; they're only "put on hold" as a first step in the process of making a new stable release. And even then, newer versions of packages such as rust can still enter stable as part of an "unrelated" security update.
> We expect esr releases will stay on the same minimum Rust version, so backporting security fixes may require Rust compatibility work too.
The policy suggests that ESR will update infrequently to the latest Rust-stable at each major .0 release, so as long as 6.7-stable and ESR end up having the same Rust stable version, that will work out. That’s a coincidence-based success, not a certain one, though.
The title on this story makes it sound like OpenBSD stopped updating Firefox altogether. That's not the case, it just stopped backporting non-ESR Firefox updates to its -stable branch.
If you don't know about Pledge and Unveil, you can think of them as similar to Firejail sandboxing on Linux, but on steroids. They dynamically limit the types of kernel calls a process can make and the filesystem paths it can access, so if a rogue thread causes a process to perform an illegal operation under the pledge rules, OpenBSD kills the process (with an uncatchable SIGABRT). What's more, Pledge and Unveil allow the process to execute whatever calls are needed while the program initializes itself, but it relinquishes the privilege to run those calls for the remainder of the program's runtime once it no longer needs them (after init).
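A minimal C sketch of that pattern (OpenBSD-specific; the promise strings and paths are illustrative):

    #include <err.h>
    #include <unistd.h>

    int main(void) {
        /* privileged setup happens first: open sockets, read config, ... */

        /* restrict filesystem visibility to what we actually need */
        if (unveil("/etc/myapp", "r") == -1)
            err(1, "unveil");
        if (unveil(NULL, NULL) == -1)          /* lock the unveil list */
            err(1, "unveil");

        /* from here on only stdio and read-only filesystem access are allowed;
           anything else kills the process (uncatchable SIGABRT) */
        if (pledge("stdio rpath", NULL) == -1)
            err(1, "pledge");

        /* ... rest of the program runs with reduced privileges ... */
        return 0;
    }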
For anyone else that wasn't aware, ESR[0] stands for Extended Support Release, so an older Firefox that still has security patches backported (like Ubuntu LTS, I guess).
The whole modality of a single package in a single configuration at a single version is the broken, dead dependency hell of yesteryear. Still, ports and packages collections mechanically and unthinkingly continue this failed and broken modality of wasted effort. Packaging multiple versions and multiple configurations side by side, similar to what Habitat (hab) and Nix do independently, is the only way to go. This approach supersedes vendored dependencies because it allows garbage collection and sharing of identical dependencies rather than duplicating them. Furthermore, it allows real choices without an either-or, and real flexibility that single recipes always trying to track a rolling version can never possibly achieve.
Also broken is maintaining multiple packages for multiple versions of the same software, combined with external, frequently changing platform packages (like rubygems) that are packaged manually. Multiple versions of the same package should share common recipe declarations as much as possible and be managed more cleanly. Native extensions should be built automatically in CI/CD, using polling or notifications, rather than by manual methods that too often lead to outdated, vulnerable dependencies and create too much pointless, repetitious busywork.
The problem seems to be one of command line interface.
In the C/C++ world, you specify the language version using a flag passed to the compiler. e.g. -std=c++98
In most other programming languages, you specify language version by installing multiple copies of the compiler/interpreter and running the corresponding version.
The C/C++ way works fine if your language spec is updated once every 3 years. It does not work fine for anything much more frequent than that. Until recently C and C++ were popular enough that the other approach didn't need to be accommodated. Now it does.
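To make the contrast concrete (file names hypothetical):

    # C/C++: one installed compiler, the language version is just a flag
    cc  -std=c99   app.c   -o app
    c++ -std=c++98 app.cpp -o app
    # most other ecosystems: pick a version by running a different installed binary
    python3.8 script.py
    rustup run 1.38.0 cargo build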
Is there a better alternative to Firefox? Of all the browsers it seems like the "least bad" choice (above Chromium, and other proprietary browsers) and I use it, but is there something safer, simpler, and more secure?
> is there something safer, simpler, and more secure?
No.
Building a good browser is hard. Typically you can get something "safer" (from a privacy/business model perspective) and/or "simpler" (from a development perspective) relatively easily - see KHTML and other niche efforts. But when it comes to "more secure" while supporting modern web features, you need a lot of skilled eyeballs on code and a lot of people trying to break things. You achieve that either with tons of visibility, or with tons of money. Those small projects have neither. Mozilla has both.
The downvotes in this section clearly show the real state of browser technologies and the few choices one will have if they use an alternative OS.
Brave and Vivaldi are still forks of Chromium, Waterfox is a fork of Firefox. Thus you are not going to find any updated alternatives like those on the BSDs anytime soon.
Meanwhile, WebKit-based browsers don't seem to suffer from the overuse of dependencies and multiple languages, nor do they have the packaging hell of Chromium and Firefox.
> Meanwhile, WebKit-based browsers don't seem to suffer from the overuse of dependencies and multiple languages, nor do they have the packaging hell of Chromium and Firefox.
I use Vivaldi. It provides a lot of customization for my taste, unlike the others.
My biggest problem with most browsers is how they handle tab management and how they lack basic features that should be there without installing an extension.
Examples: stacking, auto-closing, filtering, finding, etc. Not hiding tabs after a certain number of them.
My understanding is that there will never be a consumer-facing browser called "Servo." Instead, pieces of Servo will get merged into Firefox as they become production-ready (this has already begun). Presumably, eventually everything will be merged in and Servo will stop existing as a separate project.
I'd still be seriously concerned about how few eyeballs they get with regards to security---Google has entire fleets of machines dedicated to fuzzing every line of Chromium code and both Chrome and Firefox use advanced sandboxing to make it so that a single exploit in the JavaScript engine wouldn't be able to result in user-level remote code execution. I'm sure they're both probably fine (if nothing else from security by obscurity) but modern browsers are complicated pieces of software that require a lot of engineering to keep secure.
It's basically a lightweight interface to webkit. There's only so light one can go, however; a browser basically ships an entire rendering stack and big chunks of an OS, and with all the weird features and backwards compatibility issues that have accumulated, one can only go so small.
Yes. Firefox forks that split off before Mozilla jumped the shark (v37, then multiprocess, then Rust) and evolved into their own thing, without all the features/attack surfaces that aren't strictly required for a browser to just render HTML and execute JS.
Simpler, sure. Safer and more secure, how? There's been a lot of new security features in Firefox recently that you'd be missing out on if you used something that old. You can't put "just" in front of "execute JS" (or "render HTML" for that matter); that's a pretty complex task with a lot of security concerns. I can't imagine that the communities of these Firefox forks can keep up with backporting upstream security fixes, especially since the modern Firefox codebase has diverged so much.
If security is something you’re looking for, “Firefox plus some ancient, unmaintained legacy code and patches jammed in by random third parties” is not substantially more appealing than just Firefox by itself.
Your description does not apply to the two examples given. It's obvious you don't even know what they are even if you knew what they were 5 years ago. Look again.
Pale Moon and the like aren't just ancient Firefox code. And without all the 'features' Firefox keeps adding in that are irrelevant to a browser that renders html and executes JS they're just as secure.
If I'm to avoid Firefox, I'd like to avoid its forks as well - they don't really improve upon anything meaningful, and both of those have had more issues than Firefox in the past. I'm thinking smaller than Firefox.
From looking at the commit history of Pale Moon, it is maintained by essentially three people. Their maintenance strategy is to freeze at an old version of Firefox, and randomly backport patches purely to try to keep somewhat up-to-date on JS or DOM features. Given the sheer size of the codebase, its inherent complexity (a JIT compiler is going to be very ripe for potential security vulnerabilities), and the utter lack of any sign of trying to mitigate these problems (e.g., fuzzing, or even merely attempting to identify security fixes in Firefox that may warrant backporting), if you are worried about any security issues in Firefox, moving to such a fork should only worry you more.
>utter lack of any sign of trying to mitigate these problems (e.g., fuzzing, or even merely attempting to identify security fixes in Firefox that may warrant backporting),
Would you please leave personal swipes out of your HN comments, or edit them out if they make it in? They break the site guidelines, you've done it more than once in this thread, and I'm sure you can make your substantive points without them.
I don't read release notes, I read the commits and the patches themselves. Actually, I did check after posting, and they appear to do the bare minimum--port the posted CVEs, which won't even account for all the security bugs. There are definitely several commits I've seen them do where they specifically revert changes that rewrite functionality to be safer, but don't actually fix any specific known security flaw.
If I really wanted to scare you, I'd tell you about such security-friendly-sounding commits as "we don't want the newest release of the [crypto] library, let's freeze at an older version" or "re-enable support for RC4 in TLS." But that would be unfair, because you'd actually have to read the commit to judge for yourself if I'm quoting them in good context.
Can't the required rust version be (temporarily) treated as part of firefox then, and not bother building everything else using rust with that version?
It seems likely that OpenBSD would consider such a patch if it were presented to them, but they might also reject it due to ideals they hold that outsiders can't predict.
- it's much bigger and more resourceful than the OpenBSD maintainers.
- it decided to adopt this fancy update policy, and instead of making it easy/seamless, left it up to the whole open source community to play catch up.