Simple Dockerfile examples are often broken by default (pythonspeed.com)
270 points by itamarst 7 hours ago | 114 comments





I have a mixed opinion about his first point.

There are two basic approaches to take with dependency management.

The first version is to lock down every dependency as tightly as you can to avoid accidentally breaking something. Which inevitably leads down the road to everything being locked to something archaic that can't be upgraded easily, and is incompatible with everything else. But with no idea what will break, or how to upgrade. I currently work at a company that went down that path and is now suffering for it.

The second version is upgrade early, upgrade often. This will occasionally lead to problems, but they tend to be temporary and easily fixed. And in the long run, your system will age better. Google is an excellent example of a company that does this.

The post assumes that the first version should be your model. But having seen both up close and personal, my sympathies actually lie with the second.

This is not to say that I'm against reproducible builds. I'm not. But if you want to lock down version numbers for a specific release, have an automated tool supply the right ones for you. And make it trivial to upgrade early, and upgrade often.
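For Python that workflow can be as small as this (a rough sketch using pip-tools, which I gather is also what the article suggests; requirements.in is just the conventional input file name):

  # requirements.in holds the loose constraints, e.g. just "flask"
  pip-compile requirements.in            # writes a fully pinned requirements.txt
  pip-compile --upgrade requirements.in  # re-pins everything to the newest allowed versions
  pip install -r requirements.txt        # every build installs the exact same pins

The pins live in version control, so releases are reproducible, but "upgrade early, upgrade often" stays a one-command affair.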


> The first version is to lock down every dependency as tightly as you can to avoid accidentally breaking something...The second version is upgrade early, upgrade often...Google is an excellent example of a company that does this.

This is misleading. My understanding of Google's internal build systems is that they ruthlessly lock down the version of every single dependency, up to and including the compiler binary itself. They then provide tooling on top of that to make it easier to upgrade those locked down versions regularly.

The core problem is that when your codebase gets to the kind of scale that Google's has, if you can't reproduce the entire universe of your dependencies, there is no way any historical commit of anything will ever build. That makes it difficult to do basic things like maintain release branches or bisect bugs.

> if you want to lock down version numbers for a specific release, have an automated tool supply the right ones for you. And make it trivial to upgrade early, and upgrade often.

This part sounds like a more accurate description of what Google and others do, yes.


Yes they have a huge mono repository and tooling to update projects in it to specific versions. You don't get a choice really. You can go home one night with your project on say Java 7 and then wake up and find someone has migrated it to Java 8 because they've decided it's Java 8 now.

But that change only happened once all the tests for "your project" passed on Java 8.

This is the crucial difference. Library developers at Google know all their reverse dependencies, and can easily build, test, notify, or fix all of them.

You can't do that with external FOSS libraries. The closest thing we have is deprecation log messages and blog posts with migration guides.


Their external FOSS dependencies are imported into the monorepo and are built from there. So they get to use the same pattern there. Someone who updates the copy of the dependency in the monorepo will see the test failures of their reverse dependencies at that time, before the change is merged to master.

(Yeah they use different version control terminology since their monorepo doesn't use git, but I've translated.)


> The closest thing we have is deprecation log messages and blog posts with migration guides.

Rust has crater, which can at least build/test/notify over a large chunk of the rust FOSS ecosystem. It won't pick up every project, granted, and I haven't heard of anyone really using it outside of compiler/stdlib development itself, but it's an example of something a bit closer to what google has.


So now you're saying that I have to write tests?

/s


For an easy open source example of such tooling, see Pyup.

We use it to do exactly that: pin down every dependency to an exact version, but automatically build and test with newly released versions of each one. (And then merge the upgrade, after fixing any issue.)


Or the original ruby bundler, which locks down exact versions in a `Gemfile.lock`, but lets you easily update to latest version(s) with `bundle update`, which will update the `Gemfile.lock`.

Actually, it goes further: `bundle update` doesn't update just to the "latest version", but to the latest version allowed by your direct or transitive version restrictions.

I believe `yarn` ends up working similarly in JS?

To me, this is definitely the best practice pattern for dependency management. You definitely need to ruthlessly lock down the exact versions used, in a file that's checked into the repo -- so all builds will use the exact same versions, whether deployment builds or CI builds or whatever. But you also need tooling that lets you easily update the versions, and change the file recording the exact versions that's in the repo.

I'm not sure how/if you can do that reliably and easily with the sorts of dependencies discussed in the OP or in Dockerfiles in general... but it seems clear to me it's the goal.
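For reference, the day-to-day commands in those two ecosystems look roughly like this (a sketch; exact flags vary by version):

  bundle install                   # resolves Gemfile and records exact versions in Gemfile.lock
  bundle update                    # re-resolves to the newest versions the Gemfile constraints allow
  yarn install --frozen-lockfile   # CI: fail if yarn.lock is out of sync with package.json
  yarn upgrade                     # refresh yarn.lock within the ranges in package.json

Either way the lockfile is checked in, so every build agrees on versions until someone deliberately regenerates it.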


I’d imagine this is easier now with dependabot joining Github, being free for all, and implementing a proper CI test system for your repos.

Logically, the next step is supporting such infra for containers. Automate all the mundane regression/security/functionality testing while driving dependency upgrades forward.


> Google is an excellent example of a company that does this.

Which part of Google would that be? My impression is the complete opposite: dependencies are not only locked down but sometimes even maintained internally.


A larger problem is that Docker is nearly inherently unreproducible.

Downloading and installing system package lists, etc.

For this reason, Google doesn't use Docker at all.

It writes the OCI images more or less directly. https://github.com/bazelbuild/rules_docker


Well, also Docker's SHA hash for each layer is just a random SHA and not a SHA of the actual content. Also it includes a timestamp, thus Docker is not reproducible. But Google actually has Kaniko and Jib which correct that problem.

Your first point is incorrect. That was true of v1 docker images, but layers have been content-addressable for a while now.

Your second point is absolutely correct - we strip timestamps from everything which tends to confuse folks :)


Thou shalt be able to recognise the Unix Epoch 0

If I remember correctly Angular comes with unpinned dependencies.

The package.json file specifies unpinned dependencies.

The package-lock.json or yarn.lock or similar specifies the pinned dependencies.


Correct.

But neither the package-lock.json nor the yarn.lock file is part of what you get when you create an Angular project using the Angular CLI, meaning that the versions aren't pinned from Google's side.


Yeah, but only Google is Google. You are not Google, your code doesn't need to scale like Google's, and you don't need to go to their extremes to manage dependencies. They do it because they are forced to; that doesn't mean it is right, or that their way is how it should work for everyone.

I haven't worked at Google, but I have worked at Facebook, and I can say with some confidence that in this respect Facebook is Google too :)

For sure there are tradeoffs for big projects that don't make sense for small ones. But there are also times where big projects need a tool that's "just better" than what small projects need, and once that tool has been built it can make sense for everyone to use it. I think good, strong, convenient version pinning is an example of the latter, when the tools are available. That was the inspiration for the peru tool (https://github.com/buildinspace/peru).


I agree with this, but I think to the extent that such tools are lacking (or at least that the overhead is prohibitively high for smaller projects), the parent is correct. Thanks for tipping me off to peru; hadn't seen that before.

This isn't really a 'wow, look at the crazy stuff Google needs' thing.

Any tiny open source project benefits from a reproducible build (when you come back to it months later) and also new versions (with fixed vulnerabilities, and compatibility with the new thing you're trying to do).


I think this depends on your definition of "reproducible build." If you're talking about builds being bit for bit identical, that might not be worthwhile given the complexities of doing so with most build tools. But if you mean the same versions being used for dependencies, then absolutely.

Yes, I agree completely, as replied to sibling: https://news.ycombinator.com/item?id=20032980

No not really, any tiny open source project isn’t worth the hassle of making a reproducible build for.

Well, I look at reproducible as a scale (and incidentally, with an increase in effort as you slide along it, too).

A certain amount of reproducibility - a container, pinned dependencies - gives such a large reward for how easy it is to achieve that it absolutely is worth it for a tiny open source project.

Worrying about the possibility of unavailable package registries and revoked signing keys, on the other hand, probably isn't.

It's a trade-off. But you certainly don't need to be Google-scale for some of it to be very worth your while.


> The post assumes that the first version should be your model.

No, it doesn't. It just assumes that you want explicit control over when you upgrade. You can always change your Dockerfile or your requirements.txt and build again when you've tested your software against a new Python version or a new version of a package. You can do that as often as you like, so this is perfectly consistent with "upgrade early, upgrade often". But not specifying exact versions in those files means they can get upgraded automatically when you haven't tested your software with them, which can break something.
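Concretely, the two styles look something like this in a Dockerfile (a sketch; the package and version numbers are just placeholders):

  # floating: next month's rebuild may silently pull a different interpreter and packages
  FROM python:3.7
  RUN pip install flask

  # pinned: upgrades happen only when you edit these files and rebuild
  FROM python:3.7.3-slim
  COPY requirements.txt .
  RUN pip install -r requirements.txt   # requirements.txt pins e.g. flask==1.0.3

The second form is just as easy to upgrade; it only makes the upgrade an explicit, testable change.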


From what I've seen, explicit version control does not really work unless there's an organizational force toward timely upgrade. In every company, everyone's busy, nobody has time to look at a service that's been running fine for six months (and risk getting paged "Why did you change it? It suddenly stopped working for us!"). The path of least resistance is to not upgrade anything that's running "fine", and then old versions and their dependencies accumulate, and when you actually have to upgrade it becomes much more painful.

It might work if there's a dedicated team whose mission is to upgrade dependencies for everyone in time, but I haven't seen one in action so I'm not sure how well it might work out. (Well, unless you count Google as one such example. But Google does Google things.)


Totally agree with you. At my current company we've got a devpi repository where almost everything is ancient. Even trying to add a more modern version for your own service doesn't work because some people have pinned versions and some people haven't. It's not ideal.

At my last company (a smaller startup) we used to have a Jenkins job which would open a pull request with all of the requirements.txt updated to the latest available PyPI version. That worked pretty well: you always had a pull request open where you could review what was available, it would run the test suite, you could check it out and try it, hit merge if everything looked good and roll it back if it caused an issue somewhere. It made it easy to trace where things changed but not as 'cowboy' as accepting all changes without any review or traceability.


Updating pinned dependencies is a form of paying down tech debt. You want to do it as quickly as you can afford to, but not mandate doing it robotically. If a new python version comes out, great, but mitigating a site outage is not the right time to try it.

> From what I've seen, explicit version control does not really work unless there's an organizational force toward timely upgrade.

I agree. But that doesn't contradict what I was saying. I was not saying that explicit version control always works. I was only saying that it is perfectly compatible with "upgrade early, upgrade often", since the post I was responding to claimed the contrary.

Also, if an organization can't reliably accomplish timely explicit upgrades, I doubt it's going to deal very well with unexpected breakage resulting from an automatic upgrade either.


> and risk getting paged "Why did you change it? It suddenly stopped working for us!"

So, the alternative is that it suddenly stops working, but caused by the update being available instead of by any explicit action on your part. You'll have more time to react to the problem in this scenario than the other?


And then you've just moved the immutability boundary to include the whole Docker image (including the application itself).

Enter Dependabot and its nice integration with GitHub.

You get pinned versions that get updated when needed


Isn't this dichotomy the whole point of dependency locking? Sometimes, you want to specify that your code requires a specific version. Sometimes, you just want to keep track of the most recent version that the code has been tested with. They are two totally different needs

You have to have tests and you need a CI that will scan your requirements.txt regularly and throw a warning when they're out of date.

Tests are ESSENTIAL. You should be able to bump all your versions, run your tests and fix the errors. If something gets through broken, then you know where to add a test (before you fix it).

You should pin versions for your sanity. You should also have a process (a weekly process) to deal with updates to dependencies. Dependency Rot will catch up with you!
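That scan doesn't need to be fancy; a scheduled CI job along these lines is enough to start (a sketch, assuming pytest or whatever test runner you actually use):

  pip install -r requirements.txt
  pip list --outdated    # warn about anything with a newer release
  pytest                 # the tests that make bumping those pins safe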


I learned the hard way to lock down dependencies. Long ago, in a galaxy far far away, I was doing some Java/SQL Server stuff. We upgraded Java (which was badly needed), and immediately all the SQL stuff stopped working, which led to a few days of paralyzed bafflement.

Found out a few days later that the official release of the JVM broke the Microsoft SQL Server drivers, and Oracle had to ship a new version out asap. Meanwhile, we lost days of work.

Of course, that was also the bad old days of bad old configuration management. But I'd never do something like put an arbitrary version of a language driver in a Dockerfile, not for production.

edit: Of course, the main reason we get scared to upgrade is because we often can't easily back out the change. Docker fixes a lot of that.


> we often can't easily back out the change. Docker fixes a lot of that.

Software can be hard to downgrade. Sometimes dependencies change. Sometimes data models are migrated one way only. Nobody takes the time to properly test them. Among other things.

How Docker, or any other container packaging format for that matter, could possibly help with that I do not understand. It is not the first time I've heard something like this, but I have never been in a situation where the application packaging was part of this particular problem.

Surely starting an old version of some software is neither harder nor easier with Docker than any other way.


Having the old Docker image makes it easy to revert to it. Whether this is easier or harder depends on what other way people were doing deployment.

Sometimes people don't have good deployment processes that automatically back up whatever they deployed. They might even have installed stuff manually, so they don't know how they did it last time. In that case Docker helps. The build might not be reproducible, but at least you have the binary.
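As a sketch (registry and tag names made up), the rollback is just running whatever you ran last week:

  docker pull registry.example.com/app:2019-05-28   # today's bad deploy
  docker pull registry.example.com/app:2019-05-21   # last week's known-good image
  docker stop app && docker rm app
  docker run -d --name app registry.example.com/app:2019-05-21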


Yeah, seriously, having been in the nightmare that is trying to rollback on systems not designed for it, it is my number one (background) priority to get systems to a place where rolling forward isn't the only answer for trying to rollback. I tell you, virtualization really made my life easier; being able to at least take a snapshot prior to major upgrades was a game changer.

After that, finally getting chef to the point where we could rebuild production in a mostly repeatable way (dependency chains can only be tamed so far without increasing infrastructure costs) really made dev work easier. Using chef kitchen to trivially build a new VM, to know that you're close to production, really helped reduce dev time by a lot (even if it seemed like the chef recipes would break in subtle ways every month or so).

I've been watching Docker for years now, and am hoping it's hit the tipping point where the benefits outweigh the added complexity. I suspect at my next gig that's lacking reproducibility I'll start with Docker rather than chef and see how far that takes me.

I think of such virtualization as "rolling back by rolling forward". I can just deploy whatever version I want, when I want. If I don't like what's there, I can deploy a different version that happens to be an earlier version.

Docker by itself isn't enough. But Docker in concert with Kubernetes (or Openshift, in my world) is very, very powerful.


> The first version is to lock down every dependency as tightly as you can to avoid accidentally breaking something. Which inevitably leads down the road to everything being locked to something archaic that can't be upgraded easily, and is incompatible with everything else. But with no idea what will break, or how to upgrade. I currently work at a company that went down that path and is now suffering for it.

If you use a system like nix or guix, this concern is largely obviated.


There is a 3rd and IMO ideal approach. Instead of pinning individual dependencies, you pin an entire set of dependencies. You are guaranteed stability and that dependencies work together while you are pinned to one set. Non-breaking changes like security fixes can still happen but major updates don't. This is less reproducible than locking everything, but we get to reuse fixes among projects and backport them to the shared set. Updating your dependencies is just moving to a new package set. This approach is utilized by NixOS and Stack, but AFAICT nowhere else.

At my previous workplace we were using https://greenkeeper.io/ and locking dependencies, which i think may be the perfect compromise between those 2 systems of organization. You get pinned dependencies, resulting in stable builds. For every package update (automatically scanned), you get a branch spawned w/ tests run if you've set up CI. It makes staying up to date easy when it's an easy upgrade (just merge a green branch!), and you get isolated knowledge up-front when a dependency has upgraded and you're gonna need to budget some time on it.

GitHub just bought Dependabot, so something like this is now available in beta and eventual general availability for all GitHub users.

I used to be strongly in favor of using fixed versions everywhere, but now I also have a mixed opinion. I think a reasonable compromise is to continually update and to promote images. That way you can start with `dev --> prod` when you're small and add more QC layers as things grow.

Something that's even more difficult is dealing with upstream changes. What do you do when `ubuntu:18:04` updates? It's easiest if the upstream is released with a predictable cadence (ex: every Wednesday), but none are AFAIK. That way you could plan a routine where you regularly promote an update through QC.

I'm not sure what to think about event driven release engineering like auto-builds (repo links) on Docker Hub. I think that might be an ok solution for development builds or rebuilds of base containers, but it seems to be abused. I bet there are maintainers of popular images on Docker Hub that are effectively triggering new deployments for downstream projects every time they publish a new image.


I don’t think your Dockerfile should be downloading your dependencies and building from scratch. Let your build pipeline pull them in, run tests, and pass them verbatim into your container.

Vendor your dependencies if you have to, or maintain a cache, but don’t make your Dockerfile redo all of that work.
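A sketch of what that can look like for Python (paths and names are made up; the pipeline builds the wheels, the Dockerfile only installs them):

  # CI has already run: pip wheel -r requirements.txt -w wheelhouse/
  FROM python:3.7.3-slim
  COPY wheelhouse/ /wheelhouse/
  COPY requirements.txt .
  RUN pip install --no-index --find-links=/wheelhouse -r requirements.txt
  COPY . /app

With --no-index the image build can't quietly reach out to PyPI and pick up something the pipeline never tested.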


We have a Dockerfile for builds, and a Dockerfile that consumes the built artifacts for deployment.

Indeed. Don't people tag their images with version numbers? I'm finding this whole docker debate in this thread quite surreal.

Build and test a container. Pull the container into production. Build and test the next container. Once it passes, update the version tag and pull into production. I'm really struggling to understand the issues everyone is raising here.


I think the last option you mentioned is (effectively) the best of both worlds. Lock down dependencies explicitly for the sake of reproducibility, but make it very easy to upgrade (as automatically as possible).

You can always have both in parallel. The first one to test changes and deploy to production, and the second one to try to upgrade your dependencies. Should all the tests pass on the second one, you can then commit the new requirements.txt and other updated package versions.

You can then run the second continuously and warn when it fails and handle whatever happened manually, without having a broken prod.


We do that, but it's hard to CI a whole operating system. A few weeks ago we got bit by a weird ImageMagick bug that was triggered by very specific TIFFs that weren't in our test suite but that one of our clients used extensively for their product images.

Annoying, and it wouldn't have happened if we were running pinned versions; that said, getting stuck on old software would be worse. However, nothing can ever test something like that fully, there are just too many combinations :(


Yup software will always break. The key is whether you can fix it quickly (fix meaning land commits AND get it in prod) and test for the issue in an automated way in the future.

Not sure about Python, but I think it's language-specific. In the JS world, we have "yarn upgrade" which bumps all non-major versions of your dependencies to the latest. It then locks them in until the next time you upgrade something. There are other actions that may also upgrade them, but it's always through a dependency change in some way.

I still think the overall advice is good. We depend on node in our Dockerfile like this:

FROM node:11

If we pinned the version further, of course we'd probably be even better off, but there's a small point to make here. We don't build any docker images for deployments from dev to production. In fact the last time a docker build is run is for the development environment. After that it's just carrying the image from dev to qa to stg to prod, and we simply change the configuration file along the way.

This makes it so that we're not re-building again and possibly getting a different set of binaries that were not tested in any of those other environments.
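In other words, promotion is just re-tagging the same bytes (a sketch; registry and tag names are made up):

  docker tag registry.example.com/app:build-123 registry.example.com/app:qa
  docker push registry.example.com/app:qa
  # later, the identical image gets the prod tag; only the mounted config differs
  docker tag registry.example.com/app:build-123 registry.example.com/app:prod
  docker push registry.example.com/app:prod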


>FROM node:11

Node follows semver and rarely has breaking changes within major versions, so this makes sense to do. The article recommends pinning a minor version of Python because it doesn't follow semver and sometimes has breaking changes within minor versions.


Something we have built for our stuff: There's a private repository and all applications run with minimum versions for their dependencies, so if there's a new available version, everything will update.

Beyond that, we have a daily job that runs the integration tests of all applications with the upstream repository, and if all integration tests end up green, the current set of upstream dependencies gets pushed into the private repository.

It is work to get good enough integration tests working, and at times it can be annoying if a flaky new test in the integration test suite breaks fetching new versions. But on the other hand, it's a pretty safe way to go fast. Usually, this will pull in daily updates and they get distributed over time.

And yes, sometimes it is necessary to set a maximum version constraint due to breaking changes in upstream dependencies. Our workflow requires the creation of a priority ticket when doing that.


Your last option is exactly what the author recommends -- pip-tools.

> This will occasionally lead to problems, but they tend to be temporary and easily fixed. And in the long run, your system will age better.

This is the reason I love archlinux. Most of the time, updates are no big deal. Sometimes, they break the system. Rolling release distros force you to deal with each change as it happens, usually with a warning that breakage is about to happen, and a guide for how to quickly deal with it. Once the system is up and running, basic periodic maintenance will keep it that way. In the past, I've used arch machines continuously for 5+ years and they work great and stay up to date.

Compare to intermittent release distros like Ubuntu. Every time I need to update an Ubuntu machine, I end up reinstalling from scratch and configuring from the ground up. There are too many things that need tweaking or simply break when releases are 6-24 months apart. And I'm not convinced that locking down dependencies actually solves anything. Wait six months after an LTS release, when you need to get the latest version of some package. Suddenly, you are rummaging through random blog posts and repos trying to find the updated package. PPAs, Flatpaks, Snaps, oh my! Intermittent distros offload a lot of their responsibility onto users by pretending like package update problems don't exist.


Good points, but it's amusing that his solution to #1 didn't lock down the patch version, nor the distro around it. I think that also makes a decent point for Nix[0], which solves #1-#3 by default (since choosing a particular version of Nixpkgs locks down the whole environment, and considers the build as a DAG of dependencies rather than a linear history). It also supports exporting Docker images, while preserving Nix's richer build caching.[1]

[0]: https://nixos.org/nix/

[1]: https://grahamc.com/blog/nix-and-layered-docker-images


Good point, will go fix that. My soon-to-be-ready attempt at a production-ready template (https://pythonspeed.com/products/pythoncontainer/) covers the tradeoff between point releases vs. not-point-releases, and it does pin the OS.

And yes, Nix fixes some of the problems of building a production-ready image, but only a subset.


Could you elaborate on the remaining problems with Nix for building Python images?

Not an expert on Nix, but it's not so much that Nix has problems (though I'm sure it does, my initial research suggested it's not quite there yet for Python packages) but that there are other things you need to get right.

For example:

1. Signal handling (only one bit of https://hynek.me/articles/docker-signals/ is Dockerfile specific, the rest still applies.)

2. Configuring servers to run correctly in Docker environments (e.g. Gunicorn is broken by default, and some of these issues go beyond Gunicorn: https://pythonspeed.com/articles/gunicorn-in-docker/).

3. Not running as root, and dropping capabilities.

4. Building pinned dependencies for Python that you can feed to Nix.

5. Having processes (human and automated) in place to ensure security updates happen.

6. Knowing how to write shell scripts that aren't completely broken (either by not writing them at all and using a better language, or by using bash strict mode: http://redsymbol.net/articles/unofficial-bash-strict-mode/)

etc.
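To make a couple of those concrete (a rough sketch only, covering parts of items 1 and 3; the app filename is a placeholder):

  FROM python:3.7.3-slim
  COPY . /app
  RUN useradd --create-home appuser
  USER appuser                      # item 3 (partly): don't run the app as root
  CMD ["python", "/app/main.py"]
  # item 1 is mostly a runtime concern: `docker run --init ...` gives you a PID 1 that forwards signals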


> though I'm sure it does, my initial research suggested it's not quite there yet for Python packages)

Can you expand on what's missing? I've successfully used nix to cross-compile a pretty substantial python application (+ native extensions, hence the cross compilation), for embedded purposes, and it pretty much worked out of the box. Adding extra dependencies was straightforward.

I think you can use pypi2nix for pinned dependencies, and you can run it periodically for security updates.


Like I said, it was very preliminary research... I reached the bit where pypi2nix did "nix-env -if https://github.com/garbas/pypi2nix/tarball/master" and wasn't super happy about the implications of "just use master" for production readiness.

If it works, though, that's great!

The more general point though is that in my experience no tool is perfect, or completely done, or without problems. E.g. the cited https://grahamc.com/blog/nix-and-layered-docker-images suggests you need to spend some time manually thinking about how to create layers for caching? Again, very preliminary research—I know people are using it, I'm just skeptical it's a magic bullet because nothing tends to be a magic bullet.


> The more general point though is that in my experience no tool is perfect, or completely done, or without problems

I agree. I had to do a lot to cajole nix to cross-compile some python extensions.

However, I've done this before manually and using various build systems, and the advantage of Nix is that (1) equivalent builds are cached (reducing compile time), (2) the dependency graph is assured to be clean, (3) the entire state is pure (I can send my nix expressions to a hydra and be guaranteed a successful build), and (4) reuse -- once I modified the higher-level python combinators to build cross extensions, I can add new modules easily.


Regarding layering, it used to be a completely manual process (just like with Dockerfiles), but the point of the blog post was that you can now use `buildLayeredImage` and correct layering will Just Happen.

Ah, neat, hadn't realized that was an actual Nix feature now. The post made it sound like this was just something they were writing for themselves.

Nix does a pretty poor job of being able to specify the exact version that you need of something.

It's not great at mix-and-match pinning, but if you pin a version of Nixpkgs then you will always build from the same environment (down to the libc). There's some boilerplate, sadly, but it's still fairly simple to understand.[0]

[0]: https://github.com/Etimo/photo-garden/blob/f597b95c0c488abad...


I have spent a ridiculous amount of time building this, so I'll take the opportunity and share. It builds Python wheel packages in a build container and installs them in an app container. Works great for CPython and PyPy. It also allows building for Alpine and works for most other languages. We started to build basically everything that way.

https://gist.github.com/tuco86/67d84dfb27268b1faf05d2dbb1acb...

Ok, I kind of cheated and added the user just now. Sue me. Also posted this in the other Docker related news. Sue me again.


For reproducible builds, `python:3.7` isn't specific enough. python:3.7.3-alpine3.9 is more specific, for example. There aren't supposed to be breaking changes in the bugfix releases, but they'll happen anyway.

Ran into this recently. Docker container was running into issues until I changed "python:3.7-alpine" to "python:3.7.3-alpine3.9". It was because a package I was relying on from "apk add" changed between Alpine versions.

I could probably safely make it "python:3.7-alpine3.9" (instead of pinning to Python 3.7.3), since the issue was the Alpine version, but at this point I'm starting to really buy into the whole reproducible build thing.


And `python:3.7@sha256:35ff9f44818f8850f1d318aa69c2e7ba61d85e3b93283078c10e56e7d864c183` is even better.

overfitting here?

> A broken Docker image can lead to production outages, and building best-practices images is a lot harder than it seems. So don’t just copy the first example you find on the web: do your research, and spend some time reading about best practices.

While I may not agree with absolutely everything in the article, this final point is paramount. Please don't blindly use technology because you managed to find a copypasta config that runs. Running != good.


Definitely very true. I write more C++ than anything else, and the sheer number of online examples that start with

  using namespace std;
is just staggering. Sure, it works in a toy example posted to stackoverflow, but it will cause problems in larger projects. I think globally there needs to be better emphasis on using best-practices in tutorials and examples; I remember this particular pet-peeve of mine also being present in college textbooks. Especially for content aimed at newbies, it should be frowned upon to show the wrong way to do things, since then it gets harder to show how to do it the right way. I've had people who were surprised to find out that they could type:

  using std::chrono::duration;
  using std::cout;
instead of pulling in the entire std namespace; simply because they'd only ever seen examples that did it the lazy way.

edit: lack of semicolons strikes again!


While I agree with the general point of using best practices in code samples, the Cpp Core Guidelines actually encourage[0] using

  using namespace std;
for std specifically, giving the reasoning that:

> sometimes a namespace is so fundamental and prevalent in a code base, that consistent qualification would be verbose and distracting.

I also work mainly in C++, and personally I prefer using it, together with -Wshadow to catch possible issues.

0: https://github.com/isocpp/CppCoreGuidelines/blob/master/CppC...


That's... true in the specific example, because C++'s standard library is a huge, promiscuous mess of obvious-looking symbol names that are just asking for a collision with a user name.

But in general the notion that we want to isolate "everything" into a namespace is a net loss. Clear and simple abstractions have real value, and short unqualified names are an important part of being clear and simple.

The modern convention of separately importing every symbol you use gets really out of hand, when most of the time it really is appropriate that you just declare "my code is using this API" and expect things to work without having to link your program by hand with a giant shipping manifest of symbols at the top of your source files.


I strongly disagree. In Python, you can find exactly where every identifier comes from (unless you use `from foo import *`, but that's frowned upon) and it makes it extremely easy to navigate code and documentation.

I've had to look at some C# web service code recently, and the amount of magic it relied on made it impossible for me to find what I was looking for, even using grep.


But isn't something like

  from tkinter import *
essentially equivalent to my C++ example? They're bad practices in both languages. Thankfully python examples seem to be better in this regard, since I don't recall seeing wildcard imports in any of the tutorials or references I've used.

Yes, and it does make code more difficult to read. I've looked at a fair amount of Python code, and it is very rare to see import * in the wild.

On the other hand, import * is nice to have in a repl.


I am actually preparing my own article titled "Docker antipatterns" that will include many more points like this.

Why is running as root in Docker a problem? Isn't the whole point of containers to isolate the container? So what is the difference between a container running as root or as a user? If there is one, wouldn't that be more of a Docker bug?

If you are in a UID namespace, it likely is not a big deal. But if you were ever to have something escape from a container, it would be a much bigger problem if it was uid 0 in the root namespace.

So in other words it isn't really an issue.

If the server is compromised and paths from the host are mounted inside the container, the attacker could potentially do more damage.

It seems like the folks writing Dockerfiles could stand to learn package management with a mature package system before writing them.

I wonder how many use Docker after they learn?


I do not intend to play down the importance of using docker carefully.

But the reproducible build aspect of the critique seems unnecessary to me: isn't that more a concern of the packaging system? (I'm no Python scripter.)

If your packaging system supports version selection/locking, then use your packaging system right. If your packaging system cannot pin a version, how should Docker solve this?


Docker can't escape all the blame here - its layer caching mechanism is IMHO flawed. It's fine to say that a packaging system should offer reproducibility but Docker's layer caching design assumes that every RUN command produces reproducible results.

You could of course blame users for not making sure that all the commands they use in their Dockerfiles are actually reproducible, but many/most examples even in the official documentation are clearly not reproducible.

Therefore you end up with what is in my opinion a semi-broken system - building images seems to be reproducible (and fast) until you lose your layer cache or you spin up a new CI build agent or a new dev joins the team and tries to build the same image.

Not that I can think of a clean and performant solution to this problem.


This is a great point -- you could potentially solve this using --cache-from which makes the layer cache explicit, and not something that varies between dev / CI / new devs.
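Something like this on each CI agent, so the cache travels through the registry instead of living on one machine (a sketch; the image name is made up):

  docker pull registry.example.com/app:latest || true
  docker build --cache-from registry.example.com/app:latest -t registry.example.com/app:latest .
  docker push registry.example.com/app:latest

It still inherits the underlying reproducibility problem, but at least every builder shares the same cache.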

This issue isn't Docker; Docker itself obviously can't deal with this. The issue is that the examples and tutorials people provide on Docker packaging often don't talk about the requirements (pun intended) for reproducible builds at all.

Or they don't talk about the need to run as non-root.

Or they suggest base images that are often broken in subtle ways (Alpine Linux).

Or they talk about multi-stage builds for small images, and neglect to explain that you've just destroyed your caching (this is fixable, but you need to know to expect it and how to fix it.)

Etc.


The caching part is (mostly) fixable by making sure to always add your source code in two stages - dependency files first, then resolve dependencies, and the rest of the source code later.
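i.e. something like this ordering (Python flavour, as a sketch):

  COPY requirements.txt .
  RUN pip install -r requirements.txt   # this layer stays cached until requirements.txt changes
  COPY . /app                           # source edits no longer bust the dependency layer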

Rarely is this done though. And definitely alpine+musl doesn't always do what you might expect, and it's often language-dependent whether or not you'll encounter something strange (not to mention forfeiting bash).


>If your packaging systems supports version selection/locking, then use your packaging system right.

That's exactly what the article is recommending in point 2. The original Dockerfile author was using pip in a way that's only intended for development. Having a requirements.txt file is the correct way to use pip when distributing a project.


Most Python packaging systems don't include the Python VM itself, though they can specify which version is required. In the post, Docker is used to provide a specific version of the Python VM.

Docker is a packaging system.

Thank you for clarifying this.

Did you understand the point I tried to make nonetheless, or do I need to detail it?


This is an advertisement disguised as a technical post on container security.

Then I wish all advertising were like this. It's very informative and provides a solution rather than just pointing out the problem. I hope this page ends up ranking highly in search results because there are a lot of incomplete Dockerfiles employing questionable practices that sit at the top of search results and proliferate due to cargo culting.

It's a technical post on a company's blog. (which contains links to the company's products, but I didn't feel advertised to) Like thousands that have been posted on HN over the years.

It's an ad: https://pythonspeed.com/products/pythoncontainer/ "Production-Ready Python Containers is getting closer to release, but it isn’t quite ready. So sign up below to get notified when the template is available for purchase."

(1) and (2) aren't really broken, IMHO. For most cases always using the most up to date version is better than having 100% reproducible builds. After all, you have the docker image that you can distribute if you really need to. Better to pick up security and performance patches as they become available. If those updates break something then you can make the decision to fix on a known good version.

If you always pin, you have history to tell you which versions were good. If you mostly don't, you have to start disassembling a bunch of old images just to figure out what they were built from.

The final example in the article is broken. The Python interpreter as PID 1 can't handle Linux signals.

This is why I have a caveat at the top of the article as well as right after the last example. This particular issue is fixable with `docker run --init`, so not strictly necessary to fix in images.

In general, in Go, Java, and Python I've resorted to copying in the Gopkg files, pom.xml, and requirements.txt, then running the requisite dependency installer for the language (dep, pip, mvn, etc...), and then just copying in the rest of the repo, relying on the .dockerignore with a default-ignore for everything and specifying the individual files/directories you may want to add, and in some cases a rootfs folder when necessary.

This seems to be the happy medium for me. I don't have very strong opinions on requirements.txt always being the pinned output from a pip freeze, and it seems like pipenv may actually die in a few years, and poetry will evolve to take the mantle, but I do lots of things with conda anyway.


Not everyone is on an upgrade-daily churn, and shouldn't have to be! If you are externally exposed, sure, because security... but really, isn't there some room here for different life cycles?

Probably going to want to use tagged Docker repos so that updating certain packages, no matter the language, doesn't suddenly break your images.

Isn't it ironic that he isn't pinning down the Docker version?

"Broken" means "does not work". These examples do work. I'm annoyed by this incongruity. "Not sustainable"/"Not Forward Compatible", etc. would have been preferable.

Not locking it to a specific version is better for security updates. Do you want it to run stable with vulnerabilities or to run secure and broken?

> Not locking it to a specific version is better for security updates.

The idea is that you should take responsibility for your containers and verify fixes and test your application.

> Do you want it to run stable with vulnerabilities or to run secure and broken?

If these are your two choices, you have a staffing or a workflow problem.


The problems described here are called 'release engineering.' Dockerfiles don't solve release engineering, they provide an abstraction for building a release candidate, putting it through a pipeline, and then tagging a successful build as your release. In other words, the end-container is the immutable object that should be deployed, not the Dockerfile.

If you are building the container in each stage of your CI/CD pipeline, you are doing it wrong.


Very good clues!


