Hi, I created Docker. I have exactly 3 things to say:
1) Competition is always good. LXC brought competition to OpenVZ and VServer. Docker brought competition to LXC. And now tools like LXD, Rocket and nspawn are bringing competition to Docker. In response, Docker is forced to up its game and earn its right to be the dominant tool. This is a good thing.
2) "disappointed" doesn't even begin to describe how I feel about the behavior and language in this post and in the accompanying press campaign. If you're going to compete, just compete! Slinging mud accomplishes nothing and will backfire in the end.
3) If anyone's interested, here is a recent exchange where I highlight Docker's philosophy and goals. Ironically, the recipient of this exchange is the same person who posted this article. Spoiler alert: it tells a very different story from the above article.
I think you're reading too much - or too little - into this if you think they're "slinging mud". Any fork is going to list its reasons for the fork; if they didn't have issues with the direction Docker is heading, why would they be making the fork in the first place?
If they just quietly gave an ambiguous non-disparaging statement like "we're forking because we're unhappy with the direction Docker is taking", it would seem frivolous and ill-considered, and nobody would know on what points the fork would be aiming to distinguish itself.
This statement needs to be made, the way it was made, for the same reasons any project announcement is made: it needs to announce that it exists, and why it exists. It's the same as Docker's "debut" blog post(s).
Every schism needs its 95 Theses, and the odds favor the ones who can read them, understand them, and take them into consideration.
---
Disclaimer (re https://twitter.com/kenperkins/status/539528757711622145): I make edits to my comments after posting, usually posting a line or two then fleshing them out over time. If I make a change that conflicts with a statement in an earlier revision, I'll note it; otherwise I'm pretty much just composing live.
It's really bugging me that people are using the word "fork". This is not a fork, it's a competing container format; there isn't any Docker code in Rocket AFAIK. Even @shykes called it a fork in a comment, but it's not somebody taking your code and doing something different with it; they are doing their own implementation. Ideas aren't "forked", code is.
As to everything else, I manage CoreOS clusters with Docker for now, and while this came out of the blue (seemingly for the Docker folks as well), I'm happy to see what happens as a result. I'm not sure why there are hurt feelings over the announcement; I didn't find anything particularly in bad taste, and what exactly is wrong with promoting your new product?
The CoreOS team isn't under any obligation to Docker to contribute however anyone on the Docker team wants them to. Even if these issues have been discussed before, they've clearly taken a different path, and that's within their rights; I'm not sure where mud is being slung. Where this will lead, who knows, but hopefully there will still be good collaboration between different groups as they pursue their own goals that align with their needs.
EDIT: I haven't actually looked at the code, so if somebody wants to prove what I'm saying wrong please do. I'm basing what I know off the announcement.
IMO rewriting something from scratch is like forking but worse because it's impossible to merge later. And Rocket is definitely forking the Docker community.
If it can't be merged, it's not a fork, that's the key part of forks (well, not entirely, but the lack of shared code means it's not a fork by my definition).
That said, you're on point: this is forking the community. A hard fork, too.
I don't have a horse in this race, but from what I read this is the part that can be construed as "slinging mud". I've put some [read between the lines] comments in square brackets:
"Unfortunately, a simple re-usable component is not how things are playing
out. Docker [much to our dismay] now is building tools for launching cloud
servers, systems for clustering, and a wide range of functions: building
images, running images, uploading, downloading, and eventually even overlay
networking, all compiled into one [big and nasty] monolithic binary running
primarily as root [how insecure is that?] on your server. The standard
container manifesto was removed [those flip-floppers!]. We should stop
talking about Docker containers, and start talking about the Docker
Platform [since we can focus attention on our efforts that way]. It is not
becoming the simple composable building block we had envisioned [which puts
our offerings at a disadvantage]."
"We still believe in the original premise of containers that Docker
introduced, so [unlike those silly Docker people] we are doing something
about it."
Later on, they specifically say:
"the Docker process model ... is fundamentally flawed"
"We cannot in good faith continue to support Docker’s broken security model..."
All these may be valid criticisms, but even ignoring my potentially off-base annotations it's difficult to read their announcement as anything other than "Docker is broken and can't be fixed". It's reminiscent of political attack ads which focus on the shortcomings of your opponent rather than the strengths of your own platform.
Personally, I think the long-term value of Rocket is not about Rocket -- it's about the ACI specification for the formats of containers.
Right now I'm already taking a Dockerfile, exporting it to a tar, and then running systemd-nspawn -- I love Dockerfiles, I love being able to grab a postgres server and get it up quickly from Docker Hub, but I didn't need or want the rest of docker.
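For the curious, the workflow described above (Docker as a build tool, systemd-nspawn as the runtime) can be sketched roughly like this; the image name and binary path here are hypothetical:

```shell
# Build with Docker, run without the Docker daemon (sketch; "myapp" is made up).
cid=$(docker create myapp:latest)          # stopped container from the built image
docker export "$cid" -o myapp-rootfs.tar   # flatten all layers into one tarball
docker rm "$cid"

mkdir -p ./myapp-rootfs
tar -xf myapp-rootfs.tar -C ./myapp-rootfs # unpack into a plain rootfs directory

# systemd-nspawn runs the directory as a container, no daemon in between:
sudo systemd-nspawn -D ./myapp-rootfs /usr/local/bin/myapp
```

Note that `docker export` deliberately discards the layer metadata, which is exactly the "I just want the filesystem" trade-off being described.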
If both Docker and Rocket support ACI, then you have a composable image layer, and that means people aren't locked into either ecosystem just to build images of their applications.
ACI :: Docker-tar-format to me is like QCOW2 :: VMDK. Wouldn't it be cool if projects like Packer[1] didn't have to exist, because the image format of Virtual Machines was open and documented as an independent standard?
Now we're talking. Yes, I agree having a better spec for the underlying image format would be nice. In fact I also agree you should be able to use the Docker runtime without its packaging system, and vice-versa.
However I think it makes more sense to do this on the actual Docker format which everyone already uses... That way you get the benefit of increased openness without the drawback of fragmentation. I have the impression I've been pretty vocal in asking for help in making this happen, and wish these guys had stepped in to help instead of forking. I pretty distinctly remember pitching this to them in person.
So, I'll re-iterate my request for help here: I would like to improve the separation between the Docker runtime and packaging system, and am asking for help from the community. Ping me on irc if you are interested.
Looking back from the long-term future, though, what's the difference between the two approaches?
Whether the work on a standard container format happens inside or outside of Docker, it would result in a format presumably a bit different from how Docker containers are now (e.g. not overlay-layered by default, since most build tooling wants to just output an atomic fileset.) And either way, work would then occur to make Docker support that standard format.
The only real difference is that, in this approach, the ecosystem also gets a second viable runtime for these standard containers out of the deal, which doesn't seem like a bad thing. You can't have a "standard" that is useful in a generic sense without at least two major players pulling it in different directions; otherwise you get something like Microsoft's OOXML format.
In theory, OVF is the 'answer' for Virtual Machines -- but its failure has been in adoption -- if you can't get Amazon and OpenStack to adopt it, what's the point?
Before Rocket/ACI there wasn't even a contender for Containers. Now there is a published spec. Start there. Iterate.
The Docker team does this a lot, and it's part of their PR machine. They creep their way into and eventually try to steer every conversation regarding containers, especially when it can potentially be damaging to their "brand". (This is part of what has rubbed me the wrong way.)
~~~
Frankly, shykes and other Docker employees shouldn't be commenting here. It only serves to make them look petty with any attempt of a "rebuttal" and, as shykes put it, "sling mud". CoreOS made a grand announcement, and yes it competes with Docker... but just let it play out.
That said, there are a lot of things Rocket aims to do that are more appealing to me. Security is one of them, and a standardized container specification is another. If anything, it will make Docker compete better.
Actually, I appreciate that shykes and others take the time and try to explain their side of things and engage in a dialog. There's a lot of people confused right now about what's going on.
I think it's a little less scary than you think. The person who commented was a dev. Like a very devvy dev, who spends lots of time devving on Docker. He's free to express an opinion, but he probably should have mentioned who he was (I recognised his name because he devs a lot on Docker). But he's not part of the PR machine. He's a dev. A dev with a kinda ill-thought-out opinion, but a dev.
> The Docker team does this a lot, and it's part of their PR machine. They creep their way into and eventually try to steer every conversation regarding containers, especially when it can potentially be damaging to their "brand".
If you must know, the opposite just happened. Someone who happens to work at Docker just voiced their individual opinion. He was then reminded by "the PR machine" that it is better to take the high road and refrain from answering, and let the company make an official answer. This is pretty standard communication practice, and a good way to avoid feeding trolls like you. I know this, because I myself will get in trouble for replying to you :)
Interesting to see you resort to calling your users "trolls" simply because they feel it's not good for you, the head of Docker, to respond off-the-cuff and angrily to a PR announcement from a competitor.
> that it is better to take the high road and refrain from answering, and let the company make an official answer
Your company already released an official announcement 2+ hours ago (with much of the same rhetoric as your post here). Seems you didn't even follow your own advice.
I'm just calling you a troll, and it's for implying that a cabal of Docker employees somehow manipulates and suppresses the public conversation about containers for the profit of their employers.
> I'm just calling you a troll, and it's for implying that
> a cabal of Docker employees somehow manipulates and
> suppresses the public conversation about containers for
> the profit of their employers.
You came here with the explicit intent of disseminating your viewpoint that CoreOS is making a terrible decision and that your company and its ideals are better. Your company already made an official PR response; leave it at that. (And you call me a troll?)
For the first time in Docker's short history, its future and mission are being directly challenged. This is your response? (It won't be the last time Docker is directly challenged.)
Imagine if Microsoft went around rattling the cage every time Apple released some product -- it would make them look pretty petty pretty quickly. Just get out there and compete. Produce a superior product and the market will speak.
In all seriousness, you made a few blaming statements early on in this thread, which is most likely why you got the reaction you did from Solomon. I'm not opposed to people making observations, but speaking for others really has no place here!
Specifically talking about the "PR machine" comment. Say what you mean!
> Hi, I created Docker. I have exactly 3 things to say:
In the spirit of making lists of things to say, I've got 2.
1) Don't use Twitter for having long conversations and public fights. Just don't. No good will come out of it. Engaging in that is feeding the trolls and slinging mud, which you accuse the other party of doing.
2) Vis-a-vis "just compete!": how do you see this "competing" happening without an announcement like this? "We have created X container thingy"? OK, isn't it smart to compare to an existing container "thingy" right off the bat?
Imagine they didn't mention Docker. I can see you writing about "stealing of ideas", "lies", "not being straight-forward", "this is just a Docker clone by they don't mention Docker so they are being shady" and so on.
> 1) Don't use Twitter for having long conversations and public fights. Just don't. No good will come out of it.
I encourage you to read the twitter exchange I linked to. It predates all of this, and is not at all a fight. On the contrary, it is a constructive exchange, and I am using it to assert Docker's philosophy in a positive way.
Hey Solomon; honest question - skipping the tête-à-tête for a moment, the first tenet you outline:
> 1) interface to the app and developer should be standardized, and enforced ruthlessly to prevent fragmentation
Is one I've been pondering and asking myself about a bit - what does this mean?
Is the interface the API? The docker CLI? Interfaces to libcontainer?
Where does the line "enforced ruthlessly" fall exactly?
Does this mean wrapping the CLI or API in another convenience layer is a no-no if it doesn't expose the docker API directly?
I think the rest of the 13 make perfect sense, and I actually don't think the CoreOS guys were going against any of those in practice or philosophy; it's more that they wanted something small that did one thing very well.
Anyway, I love you guys and the coreos guys, so I'm only in it for the swag.
If you were trying to make sure as many people as possible paid attention to Rocket as a serious alternative to Docker, which is the current de facto standard Linux containerization scheme, well done.
You have to realize that commenting here, in this thread in particular, is not helping things... Instead of keeping your head down and letting the buzz blow over, you just made the PR that much stronger for the CoreOS POV. You should have considered posting an article in a few days/weeks that, while not directly refuting the CoreOS post, put the Docker vision front and center and made it seem like you were the leader of the market, not just a company blindly reacting.
I think it's safe to say that while your comments here made you feel better, they didn't help your position at all, regardless of how valid your points are.
Your comments on this post have done more to damage my faith in Docker's philosophy than the Rocket announcement did.
Somebody highlighted concerns they have with the direction of your product. You may not agree with their opinions, but that doesn't make them FUD. They have every right to ship a product that adheres to their vision, just as you do.
Docker and CoreOS are in a pre-monetization land grab for a single market.
They've so far been approaching it from opposing corners, but CoreOS just made the first play at the opponent's territory, and it apparently rattled Docker a bit.
Cloud Foundry also quietly forked Docker with Warden/Diego (edit: I meant Garden, thanks kapilvt), although in that case they remained compatible with Docker images.
Clearing up some facts: Warden predates Docker; it's a container implementation. Diego is something entirely different, more like Kubernetes or Mesosphere (scheduling, health, etc.). Garden, the Go implementation of Warden containers, does add filesystem compatibility for Docker.
My +1 goes to Kelsey Hightower. He posted on 7 Nov some worries which many Docker users and contributors have had since last year, when you dropped LXC containers instead of working together with the https://linuxcontainers.org/ project to get better code. That already seems like a strategic business decision to decouple your "product value" from its progenitor: LXC. IMO, Docker's 'new' direction completely ignored the tremendous amount of support they had from the sysadmin and devops communities.
Cutting & pasting (hacking :) is faster if you're not a native speaker, but believe me, in Italian it wouldn't sound so gentle & polite.
Moreover, we all hope that, like "plagiarism", it won't become a common feeling, a meme.
So what about the other 75% of my worry? That's not a cut & paste, it's my worry. What do you think about:
> ... Kelsey Hightower ... posted on 7 Nov some worries which many Docker users and contributors have had since last year, when you dropped LXC containers instead of working together with the https://linuxcontainers.org/ project to get better code. That already seems like a strategic business decision to decouple your "product value" from its progenitor: LXC.
In fact I just copied & pasted that ..., not least because I couldn't find better words to express that feeling, which I honestly hope will turn out to be wrong, because I've been on board with Docker since the early days and I'd like to see it more community-driven than private-business-driven.
Ops (particularly in Enterprise) doesn't want batteries included by default. Principles #3 and #5 are incompatible IMO. Do one thing and do it well...
Seems to me that post-Docker 1.2, the Docker team has taken Ops concerns much less seriously and is focused almost exclusively on iterating Dev-friendly features.
1. Competition? How can open source software be in competition with anything? It's free, its source code is there; if people want it they'll use it, if not they won't. Why would anyone care what other projects are doing or saying? Just build your tools how you want and go on with life. (Unless you're building your tools specifically to make money, in which case I guess PR and 'competition' does matter a lot)
> How can open source software be in competition with anything?
Market share is power. Popular open-source projects can, and do, shape the industry. If you believe your trajectory is the right one for the industry, competition matters a lot.
As an example, Mozilla's Firefox was created to compete with Internet Explorer. It succeeded, and now Mozilla is working to defend the open web, so market share is still crucial for Mozilla even today.
I'm sorry, but you're incorrect. Mozilla's Firefox was originally called Phoenix, and it was created because Mozilla the browser was a dog-slow, encumbered monstrosity born of Netscape's attempt to create an all-in-one solution for the web. Firefox was essentially competing with Mozilla Suite, but it wasn't so much "competing" as filling a necessary role: a browser that didn't suck.
Mozilla Suite was also not created to compete with Internet Explorer. In fact, Internet Explorer was created to compete with Netscape, which was the dominant browser for years until IE finally knocked it off its catbird seat. It never recovered, because IE offered a simple, fast browsing experience, even if it was terrible at actually rendering content.
In this vein, Phoenix was created in the model of Internet Explorer. So in a way you could say it competed, but in actual fact it was competing against its own progenitor.
Reflecting more on 'competition': the browser wars nearly destroyed the web as we know it, as each browser introduced incompatible proprietary extensions which were then picked up (badly) by the others over time. The lack of standards, or of good implementations of standards, severely hampered the adoption of more advanced technology. Firefox continues that tradition today by pushing more and more features that IE can't support; we're just lucky that Firefox is the dominant browser now, and that people are now used to upgrading their browser virtually every week.
You seem to narrow down (i.e. restrict) pretty heavily what competition can mean. Open source projects can compete even if no money is involved, e.g. on visibility and amount of help and traction they can get from the community. This is partly related to the concept of fragmentation (where some people argue that fragmentation dilutes efforts).
* Despite Brandon Philips (CoreOS CTO) serving on the Docker governance board, Docker has aggressively expanded their scope well beyond their original container manifesto.
* CoreOS believes the Docker runtime is now too unwieldy and "fundamentally flawed"; the unwritten word that really sprang to mind was that Docker was getting "greedy."
* CoreOS reaffirms their original operating model of being capable of running their infrastructure on and with Docker.
* Rocket is CoreOS's answer to stay true to the "simple composable building block" mantra.
This is great news, particularly for Enterprise customers adopting containers. IMO, Docker's 'new' direction completely ignored the tremendous amount of support they had from the sysadmin and devops communities.
But crucially, they also crossed the business models of many startups (including CoreOS, Weave, Flocker, etc.) that rely on Docker maintaining an Open Platform. So this is an entirely logical response.
I'll be surprised if Docker doesn't now respond by unveiling an 'enterprise' Docker version that basically just strips away the unnecessary features and has more security by default. The enterprise market is just too valuable to let it slip away like this. Your move...
Docker's 'new' direction is to direct its attention towards solving the orchestration and management problems involved in actually running infrastructure on Docker.
A number of third parties had begun work on various (sometimes proprietary) orchestration and management systems for creating a reliable/scalable/easily manageable cluster with Docker as a building block. CoreOS is one. But Docker is pushing towards an official, open-source orchestration/management system that threatens to make all of those companies irrelevant.
> Docker's 'new' direction is to direct its attention towards solving the orchestration and management problems involved in actually running infrastructure on Docker.
IME examining Docker, this is actually the hard problem.
I think it is a great stance for Docker. Very recently (IIRC in 1.3), the functionality of Fig was merged into Docker.
I think Docker orchestration and CoreOS can coexist - if I had to use CoreOS to get the goodness of Docker, then systemd-nspawn would come and eat Docker's lunch.
I wish Docker would bless one of Ansible/Chef as the official orchestration base and take it forward. I really don't want to learn something Docker-specific.
I attended Docker Global Hack Day #2 on Oct 30 from Austin. A talk was given on an active Docker project for host clustering and container management, which was non-pluggable, and made no reference to and used none of the code from CoreOS's etcd/fleet/flannel projects.
This was where I first started worrying about CoreOS and Docker divergence.
But since the hack day there has been a pretty reasonable (IMO) GitHub discussion about the tradeoffs between out-of-box ease of use and customizability.
I saw that same presentation at the same event, but came away with a very different impression: the container management they showed was implemented completely outside of docker itself, with no patches to the docker codebase needed. Also, IIRC it actually did use significant code from etcd for coordination.
It had no etcd in it and the POC was implemented as part of the Docker API/CLI, as best I recall. There were significant questions in the discussion about etcd not being there.
I believe the two (Docker & CoreOS) might have rather similar strategies and / or product roadmaps.
What seemingly gets mixed up by quite a few commentators on this topic:
Docker is an orchestration, deployment, management, etc. solution - the "container" is created by LXC, jails, libvirt or other OS features, and now also libcontainer.
This discussion also shows how far away / how early we are with "containerization", or containers that are exchangeable/movable between different (OS) environments - we are discussing the companies that are building cranes to load and unload the boxes before we even have an understanding of what the boxes will really look like.
> CoreOS believes the Docker runtime is now too unwieldy and "fundamentally flawed"; the unwritten word that really sprung to mind was that Docker was getting "greedy."
I wonder if that comes from the partnership with Microsoft.
They raised $55 million [1], so you have to believe their ambitions are to extract as much rent from the container ecosystem as possible. That's not a bad thing, but it's behind a lot of their moves.
Docker's MO is to become "that thing that is on all servers" so that when they flip the switch and start monetizing off support and tertiary services, people will be more-or-less locked in.
It has indeed surprised me how quickly a normally-slow-to-accept-new-things community has adopted Docker (even well before it was considered "stable").
> how quickly a normally-slow-to-accept-new-things community
I think you're referring to the sysadmin community - but I think the driver for this has been the search for deployment nirvana. Deployment is a much more fragmented field, so it makes sense that a good solution would find fertile ground.
Absolutely. Not only does it simplify deployment, you also get the ability to quickly spin up a new development environment. That means it's easy to dip a toe in and slowly increase how much you use it.
> You are not "locked in" by Docker Inc if you are using Docker just like you aren't locked in by Github if you are using git.
A much more accurate analogy would be you are not "locked in" by Oracle if you are using MySQL. It may be true today, but no guarantee that will always be the case.
Regardless of whether your comment was lightly sarcastic or not, I agree that the Docker VMWare[1] & Microsoft partnership announcements may have been conditional upon a committed Docker roadmap outlining some or all of the features that others (such as CoreOS) may feel should be broken out. Typically larger ecosystem players want to be assured that your offering will have a clearly defined role within their existing ecosystem that plays to your core brand and technical competency.
I have been concerned that Docker's scope was expanding too far for a while now, so I'm glad to see an alternative that might work appear on the horizon. That said, I am somewhat concerned that CoreOS has a suspiciously similar business model to where Docker would probably like to be.
It's in a business's best interest, and exceedingly common practice, to "land and expand" with something clear and compelling, and following that add features to compete with alternative solutions. I don't think there's anything inherently altruistic about CoreOS that would keep Rocket lean in the long-run, especially as they begin migrating their various tools away from Docker containers.
I had the same initial reaction, but I think there's good reason to trust the CoreOS folks to remain faithful to the project's goals. Containerization (although foundational) is one part of CoreOS's platform. It's easy to see where the boundaries fall, e.g. I expect systemd and fleetd to keep their respective functionality and not overlap with Rocket.
It became pretty clear once dotCloud became Docker Inc. that they intended to capitalize on the "Docker" brand to sell an integrated orchestration platform. CoreOS already has enterprise customers for their operating system and related components. They seem like the perfect team to take this challenge on.
I hope you can understand that it's frustrating when, after hard work pitching an API to dozens of ecosystem players, spending weeks trying to wrangle a working implementation which makes as many of them as happy as possible, without compromising integrity of design - after all that, in the end, all it takes is one unhappy camper to write a blog post and that immediately tramples everything else.
It's even more discouraging in this particular case, because after this blog post, Alexis and I have discussed this topic extensively, and as a result he has since joined the effort. In fact I will be hacking with him in person on integrating Weave as a native networking plugin in 2 days in Amsterdam.
So, sorry for the insta-snark. But it can be frustrating to see so much good will and hard work be crushed in a second.
Hey, regardless of the technical sides of anything, I'm sure this is not a fun day for you. I think you're handling the situation terribly, but still, not a fun day. Anyway, docker is awesome, and thanks for building it. I know it's made my devops life a lot more enjoyable of late.
Concerning Weave, that's quite good news; the Weave point of view on Docker networking is good and can be easily set up in many (though of course not all) infrastructures.
The difference here would be, IMO, that they have clearly made openness one of Rocket's goals: the formats should be well-specified and maintained separately so that other implementations can run them.
I hope Rocket will be more stability-oriented than Docker. After running a few hundred containers per machine for almost a year now, I would not choose Docker again. Docker has stability issues all the time, and it takes months to solve them.
I had just landed LXC container support in Velociraptor [1] when Docker was announced last year. It uses Supervisor to launch LXC containers and run your app inside. I thought long and hard about switching to Docker, but their decision to remove standalone mode [2] would have meant replacing all of Velociraptor's Supervisor integration with Docker integration instead. With Docker being such a moving target over that time span, it just seemed like a bad move.
Since then I've been mulling writing my own standalone 'drydock' utility that would just start a single container and then get out of the way (as opposed to the Docker daemon that insists on being the parent of everything). I'm optimistic that Rocket could be that thing.
Question though: Does Rocket have any concept of the image layering that Docker does? That still seems to me like a killer feature.
I'm still digesting the Go-like syntax for vanity URLs and how that works here. If a fileset manifest lets you specify the URLs where the layers can be fetched from, then I like it.
Does Rocket just 'cp' files on top of each other to implement layering? It'd be nice to not require a bunch of copies of the same files. I thought that the hard link implementation in Docker's new overlayfs support was a smart idea.
Yes, all of this was designed with overlayfs in mind. I am waiting anxiously for Linux Kernel 3.18 to land, this is a huge step forward for Linux and years in the making.
Cool! The idea of filesets is very nice - there are some very interesting workflow ideas buried in there. I've been looking forward to a mainstream unioning filesystem for a while too - and I hope rocket does some serious exploration (or enables it) of how to take advantage of them fully in both development and deployment. (And while I'm at it, testing too).
My personal wish-list down this path includes:
* options in the relevant manifests on which layer is writable ... if I'm doing development on libfoo which is used by several different apps, let me make that layer writable so I can rapidly iterate integration tests and (bad practice) live coding on testing/dev servers.
* tools to help me smash a dev layer or 3 into a single fileset (and similarly dissect a layer into a few new filesets during a refactoring)
* the ability to use filesets and overlays in a way similar to how package management works now, but with extended features similar to python's virtualenv.
One of the things I see as a boon of the filesets as described is: I can update parts of my system without having to rebuild the whole dang app silo from the get go. Combining this with some of the above features looks like it could be useful for making "thin" images - where I can build all my code in one place, and port only the binaries to the staging and deployment images, just by doing a few fileset/overlay tricks. (no more complicated scripts)
Please no json file, use yaml or have an option for yaml T___T.
The curly braces and brackets can get ugly when nested.
edit:
It may seem stupid, but when you're in the terminal with vim and a directory path is buried deep in some nested object/array, things are really hard to parse out with your eyes.
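The complaint is easier to see side by side. Here's a hypothetical container manifest rendered as JSON, with the hand-written YAML equivalent in a comment (PyYAML isn't stdlib, so the YAML is shown rather than generated):

```python
import json

# A hypothetical nested manifest, the kind of structure that's hard to
# scan in vim once paths get long:
manifest = {
    "app": {
        "exec": ["/usr/bin/webserver", "--port=8080"],
        "mounts": [
            {"volume": "data", "path": "/var/lib/app/data/uploads/incoming"},
        ],
    }
}
print(json.dumps(manifest, indent=2))

# The same thing in YAML drops the braces and trailing brackets:
#   app:
#     exec: [/usr/bin/webserver, --port=8080]
#     mounts:
#       - volume: data
#         path: /var/lib/app/data/uploads/incoming
```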
That's open source. The early implementation of an idea is broken. Someone creates an alternative which fixes the problems. The alternative often doesn't gain the same traction and the original continues as the broken dominant implementation. But the alternative is also broken, maybe just in different ways. As design decisions pile on, the brokenness spreads. In the end, we again learn that software sucks. It will always suck. If you don't like reinventing the wheel (or relearning the reinvention), stick with the "good enough" and focus on building cool stuff.
Great news. I'm not a fan of Docker's new monolithic approach to containerization. Things like orchestration and networking should not be included in docker, but rather pluggable.
I prefer the Unix model - many programs that work together. That might not be practical for networking (a natural plug-in, probably), but feels like it should be the way for orchestration.
The Docker image registry and image management should really be a separate program as well - that is a huge pain point that Rocket seems more likely to get right.
Interestingly enough, with flannel, docker's advanced networking capabilities become pretty trivial, and communication across hosts is also pretty trivial.
I think all in all, CoreOS has built out a ton of tools to make using Docker easier, and they're all very well defined and composable. I'd even say that a lot of docker's features could be completely removed by using some of these tools.
Links? Nah just use ips/dns + etcd for service discovery.
Networking? Need very basic bridged networking, and flannel will handle communication on a single host, or multihost.
Deployment? Use fleet.
Not that all these are 100% perfect like I've made them out to be, but any individual component could be swapped out if you want.
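The "ips/dns + etcd" pattern above boils down to each container announcing itself under a well-known key with a TTL, and consumers reading it back. A hedged sketch against etcd's v2 HTTP API (the address, key layout, and service names here are hypothetical; a caller would `urlopen()` the request):

```python
from urllib.request import Request

ETCD = "http://127.0.0.1:4001"  # hypothetical etcd endpoint

def register(service, addr, ttl=60):
    """Build the announce request: PUT the address under a well-known key
    with a TTL, so the entry expires if the container dies and stops
    refreshing it. The caller would urlopen() the returned request."""
    data = f"value={addr}&ttl={ttl}".encode()
    return Request(f"{ETCD}/v2/keys/services/{service}",
                   data=data, method="PUT")

req = register("mysql", "172.17.0.2:3306")
print(req.get_method(), req.full_url)
```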
The thing I like about the link model is that they hide your containers from other containers and only expose the connections you want (I think using iptables?)
I'd like a tool that makes this linking easier outside of Docker, but for now this is one of the features I like about it (although holy moly do Docker links have a lot of baggage you have to bring along for the ride, like giving everything names).
I think this is probably more indicative of the issue that Future Docker would like to be a CoreOS-competing platform, and has been edging towards that state. This is CoreOS' natural bounceback from that.
I think this was the original model proposed by Docker. What we have now is (as other posters have mentioned), a Docker organization reasonably bent towards creating value for their investors, which means they need to start building things that, you know, make money.
To clarify, I don't think there's anything inherently wrong with what Docker's doing, but it is at odds with an entirely open, pluggable system. It doesn't make any sense for their business model to truly make it easy to just use their containers and none of the revenue-generating offerings.
I've not been following the discussions but if it's such a critical piece of the whole puzzle and it's in everybody's interest that it remains open, wouldn't a foundation, rather than a single private company, be the best venue for leading the project forward?
Then how do you fund that foundation? Good developers cost a ton of money. Marketing, organizing events, organizing conferences etc also costs a ton of money. I think something like Docker, especially given its growth and adoption rate, never would have been possible without VC funding. VCs wouldn't invest in a non-profit foundation.
From one point of view, I'm thinking "why did coreos need to be so aggressive?", and "boy, what a gift Solomon Hykes gave coreos by mismanaging this thing so badly", and "man, all of these guys look sort of immature to me".
From the other point of view, I'm respecting docker and coreos even more, as open source projects and as companies, because it feels like there are real people behind them.
If this is the new wave of enterprise companies, I really like it. These are people like us, that engage with us and sometimes screw up, without hiding it. They are doing great things, and the fact that they are a bit immature is actually great.
I'm an entrepreneur myself, I've done enterprise software my whole life, and I always thought it's a shame that companies in this space are so distant from their users and have such little humanity.
It isn't tied to systemd. The stage1 in the current prototype uses systemd to monitor and fork processes, but we would love to see other stage1s that configure other process runners. For example, one that configures and runs qemu-kvm as the container.
Also, even though it is using systemd to monitor and fork processes, a design goal is to run on any Linux with a modern kernel.
What about non-Linux platforms (FreeBSD, Mac OS X with a kext)?
One thing that I believe Docker has failed at is in taking a purely declarative approach to image definition; rather than specifying the packages that are assembled/inserted to create the container, Docker ships around non-portable Linux binaries.
I second that. In the beginning the Docker people mentioned adding FreeBSD Jails support, which seemed to me an awesome thing: a platform-independent containerization middleware. But recently they seem to have forgotten about it and are doing only Linux-centric things - what a shame.
Yes, but the Docker Remote API allows for a great deal of implementation freedom -- including running on a different OS substrate. We're doing this with sdc-docker[1] to run Docker on top of SmartOS and in a SmartOS container, and the Docker folks have been incredibly supportive. Despite the rhetoric, Rocket appears to be much more bound to the OS platform than Docker -- and given @philips' comment that "part of the design difference is that rocket doesn't implement an API"[2], this binding appears to be deliberate.
The post mentions not having a daemon running as root, but then you have to run `rkt` as root anyway. Won't this just mean that instead of having a single implementation of a Rocket daemon running as root, there is now one custom one every time it needs to be automated?
It's great to see this problem broken up into reusable pieces though. It totally makes sense to function without a daemon, especially out of the box.
There actually is a significant difference between having 'rkt' as a setuid-root process that's invoked from the command line, and having a docker server always running waiting for commands. There are more ways for a potential attacker to get at the server. So, Rocket at least looks like they're trying to shrink the attack surface.
> There actually is a significant difference between having 'rkt' as a setuid-root process that's invoked from the command line, and having a docker server always running waiting for commands. There are more ways for a potential attacker to get at the server.
Wrong. With a server, the only thing an attacker has control over is its input. With a setuid-root binary, they still have control over its input, but they also have control over the entire environment under which it executes, including many things that developers generally assume an attacker can't control. Setuid binaries are incredibly scary from a security perspective and much harder to get right than servers.
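To make the environment point concrete: a setuid helper inherits attacker-controlled variables (`PATH`, `LD_PRELOAD`, locale settings, and more) unless it scrubs them before doing anything privileged. A deliberately partial allowlist sketch — real hardening involves far more than this:

```python
import os

SAFE = {"TERM", "LANG"}  # illustrative allowlist, not exhaustive guidance

def sanitized_env():
    """Keep only allowlisted variables and pin PATH; everything else
    from the caller is treated as hostile input."""
    env = {k: v for k, v in os.environ.items() if k in SAFE}
    env["PATH"] = "/usr/bin:/bin"   # never trust the caller's PATH
    return env

os.environ["LD_PRELOAD"] = "/tmp/evil.so"   # simulate a hostile caller
print("LD_PRELOAD" in sanitized_env(), sanitized_env()["PATH"])
```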
Hmm, I played around with CoreOS for the past few weeks; it was nice, and I'm getting the hang of it. What is constantly difficult, though, is that there is no cross-linking of containers (the mysql database is accessible from user@172.ip.add.r while the Nginx/PHP-fpm container is looking for a specific mysql ip addr). Restarting containers from images changes both IPs. Not handy. Why not always share a common /etc/hosts with all current containers (given name with current ip addr) in them?
I was also having some issues with php5-fpm in a Docker container; it doesn't seem designed for it (it gets file paths communicated from Nginx, not the files themselves, so containers need to sync files).
Somehow I thought CoreOS and Docker would be figuring this out together. I hope the knowledge I now have will remain relevant; I was planning a hosting service for sports clubs based on drupal8.
Ah well, we are at the beginning of an era, I should have expected this. I'm very curious, who knows, the container space is far from filled, we'll be seeing many distros. There will be Gentoo's, there will be Ubuntu's. It's going to be nice.
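The shared /etc/hosts idea above can be sketched as a tiny generator that a supervisor would re-run on every container (re)start — names and IPs here are hypothetical:

```python
def render_hosts(containers):
    """Render a shared hosts file from a {container-name: ip} map;
    a supervisor would rewrite this whenever a container (re)starts."""
    lines = ["127.0.0.1\tlocalhost"]
    for name, ip in sorted(containers.items()):
        lines.append(f"{ip}\t{name}")
    return "\n".join(lines) + "\n"

# The nginx container's config can then point at "mysql" instead of a
# hard-coded 172.x address that changes on every restart:
print(render_hosts({"mysql": "172.17.0.2", "nginx": "172.17.0.3"}))
```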
As a heavy user of CoreOS and docker, I'm interested to see how this plays out.
My problems with docker have been the security model, for which the only recourse I've had is to use the USER keyword in my Dockerfiles. Furthermore, networking has been a pain point, which I've had to resolve by using host networking to access interfaces.
Let's see how rocket deals with these issues and others. I pay for CoreOS support, so I'm glad to see that they're addressing this.
Docker's main focus is to "get people to agree on something". And they are doing great in getting traction and adoption. But if everyone starts to create their own flavor of containers, we still don't get portability across servers and clouds. It would be better IMHO if Rocket implemented the Docker API, or if they collaborated on creating a minimal standard. Then everyone would benefit. I'm really curious how Solomon will respond to this...
FWIW, part of the design difference is that rocket doesn't implement an API. When you do `rkt run` it is actually executing under that PID hierarchy; there is no rktd that forks the process.
This is a design goal so that you can launch a container under the control of your init system or other process management system.
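A rough illustration of the difference being described (not rkt's actual code): a daemonless runner fork/execs the app as its own child and waits, so whatever init or process manager supervises the runner supervises the app directly — no long-lived daemon sits in between the supervisor and the container.

```python
import os

def run(argv):
    """Fork, exec the app in the child, and wait. The runner stays in
    the caller's PID tree, unlike a model where a background daemon
    becomes the parent of every container."""
    pid = os.fork()
    if pid == 0:
        os.execvp(argv[0], argv)        # child becomes the app process
    _, status = os.waitpid(pid, 0)      # supervise the child directly
    return os.waitstatus_to_exitcode(status)

print(run(["/bin/echo", "hello from the container"]))
```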
it's a very exciting time for Linux Containers. it's been fun to watch the evolution from BSD jails to lxc to docker, but the rate of innovation and usefulness is certainly accelerating. it sure seems like rocket's approach will be much less of a black box than docker images/registry, which should make it much more approachable to people trying to understand what linux containers are all about.
How will App Container Images be built? I'm guessing that unlike Docker, the standard App Container build tool(s), if any, will be separate from Rocket.
Right now there is an `actool build` subcommand that will build an ACI given a root filesystem. That tool is used to build the validation ACIs and the etcd ACI. It is rough right now and we will make it simpler to use over time; and as rkt gets better, people can run the build tool from inside of a container given source code.
Nice. It occurs to me that since an ACI is just a tarball, the build process is decoupled from the runtime engine, unlike in Docker. I've found the Docker build process to be unsuitable for creating minimal images (though I've read that nested builds plus layer squashing will fix this). It'll be interesting to watch the exploration of different build tools and processes that Rocket's decoupled approach will enable, if it catches on.
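Since an ACI is "just a tarball", a minimal build step can be sketched with only the stdlib. This follows the spec's top-level `rootfs` directory plus `manifest` file layout, but the manifest contents here are a placeholder — `actool` adds validation and the real manifest schema:

```python
import json, os, tarfile, tempfile

def build_aci(rootfs, manifest, out):
    """Pack a root filesystem and a manifest into a gzipped tarball,
    following the rootfs/ + manifest layout an ACI uses."""
    with tarfile.open(out, "w:gz") as tar:
        tar.add(rootfs, arcname="rootfs")
        mpath = os.path.join(tempfile.mkdtemp(), "manifest")
        with open(mpath, "w") as f:
            json.dump(manifest, f)
        tar.add(mpath, arcname="manifest")

rootfs = tempfile.mkdtemp()
with open(os.path.join(rootfs, "hello"), "w") as f:
    f.write("hi\n")
out = os.path.join(tempfile.mkdtemp(), "demo.aci")
build_aci(rootfs, {"name": "example.com/demo"}, out)  # placeholder manifest
print(sorted(tarfile.open(out).getnames()))
```

Because the output is an ordinary archive, any tool that can produce this layout is a valid "build system", which is exactly the decoupling described above.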
Docker already supports alternative build systems via docker import.
Realistically, if the stack is broken into a dozen pieces then somebody will create a bundle with sensible defaults (let's call it "CoreOS") and then we'll be back in the same situation.
Forget the interpersonal back-and-forth. My suspicion is that this is largely because CoreOS (the company) does not want their product completely dependent on another for-profit company's platform (Docker). It's just smart business.
This looks very interesting - it'll be really useful to have something like Docker that isn't so monolithic - it should be much more composable in new ways.
Have a look at this container [1] I put together for accessing GPU instances on AWS via Docker. Runs various compute tasks including multiple containers against a single GPU without issue.
From the looks of your other comments in this tangent it might be exactly what you need or a starting point at least.
It's a base for these BOINC [2] and F@H [3] containers.
Thank you very much! this is really useful information. Aside from cuda, I also want to make EGL/opengl work with docker, hopefully I can find examples for that.
Certainly. The kernel can simply pass through the device, although you lose some of the security of containerization that way. There may be issues with multiple containers sharing the same GPU though.
I actually need gpu, not the ui. I need it to do scientific computation. Video streaming service is another case. gpu has better video encoding capabilities.
I previously heard that docker has trouble loading device drivers.
Not the parent poster, but needing GPU isn't necessarily the same as having UI. You can use GPU for a variety of general purpose math (Example: mining bitcoins, or doing stuff like Folding@Home), or for offline rendering.
yes, I understand offline rendering. I'm looking into egl off-screen rendering. But due to historical reasons, the current gpu drivers (NVIDIA) need an X server.
It's interesting what the CoreOS team is building. If the code becomes as neat as some of the main parts of CoreOS, then this alone merits attention; we cannot have too much security.
I'm all for a new container runtime if it lets me start containers as a non-root user. Allowing non-root users to start containers would open up a whole new level of applications, particularly on multi-tenant HPC-style clusters.
Interesting branding. "Rocket" is basically only one letter different from "Docker". That can't be coincidental. Also has opposite implications - taking off vs settling in.
Improving the security model of docker is mentioned. Docker is known to be currently unsafe to run untrusted containers. Does anyone know yet if Rocket plans to support running untrusted containers safely, ala sandstorm.io?
Yes, we have prototyped doing socket activation with rocket already but the patches haven't been merged. So, yes, the intention is to make socket activation work.
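For readers unfamiliar with the socket-activation contract being prototyped: the supervisor binds the sockets itself, passes them to the service starting at file descriptor 3, and sets `LISTEN_FDS`/`LISTEN_PID` so the service can find them without binding anything. A sketch of the receiving side (the environment simulation at the bottom stands in for a real activating supervisor):

```python
import os

SD_LISTEN_FDS_START = 3  # first inherited fd, per the systemd convention

def inherited_fds():
    """Return the fd numbers handed to us by an activating supervisor,
    or [] if the environment wasn't set up for this process."""
    if os.environ.get("LISTEN_PID") != str(os.getpid()):
        return []               # the fds were not meant for this process
    n = int(os.environ.get("LISTEN_FDS", "0"))
    return list(range(SD_LISTEN_FDS_START, SD_LISTEN_FDS_START + n))

# Simulate what an activating supervisor would set before exec:
os.environ.update(LISTEN_PID=str(os.getpid()), LISTEN_FDS="2")
print(inherited_fds())  # [3, 4]
```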
So here's my take on this. From the docs on github:
The first step of the process, stage 0, is the actual rkt binary itself. This binary is
in charge of doing a number of initial preparatory tasks:
* Generating a Container UUID
* Generating a Container Runtime Manifest
* Creating a filesystem for the container
* Setting up stage 1 and stage 2 directories in the filesystem
* Copying the stage1 binary into the container filesystem
* Fetching the specified ACIs
* Unpacking the ACIs and copying each app into the stage2 directories
Questions:
Don't all these steps seem like a lot of disk-, cpu- and system-dependency-intensive operations just to run an application?
Why is this thing written in Go when a shell script could do the same thing while being more portable and easier to hack on?
Why are they saying this thing is composable when they just keep shoving features (like compilation, bootstrapping, configuration management, deployment, service autodiscovery, etc) into a single tool?
Fragments? Certainly.
Dies? Linux has been fragmented from its inception. If you include the world's Android phones, Linux probably runs on more computers than any other kernel or OS. Rocket will not kill Linux, containers, or docker. In the worst case, it will kill CoreOS, and even that's unlikely.
Likely not even close. Just about every single washing machine, refrigerator, microwave, digital stove, etc, runs a variant of an open source operating system called TRON, or the more common ITRON variant.
I dunno if fragmentation is causing Linux to die... BSD has many fragments and they're doing fine.
I think competition is good, this will give us an option that's not monolithic.
I didn't realize Docker's direction was to encompass orchestration until this thread. That isn't something I want to use Docker for, and I'm also glad the competition is addressing security, where there is a real need for improvement.
And with a rival in the mix, I'm happy to choose Rocket as an option once it's stabilized, instead of there being no other options out there.
There were many VM engines, now there are a few. I imagine the same thing will happen with container technology. Generally 1 technology stands out, then things fragment, then things coalesce into a tiny handful of solid solutions.
Docker may or may not be the container engine that lasts a long time. There is a reason they raised a bunch of money. Clearly containers are going to be big, but is Docker the one that goes on to be dominant? Docker is trying through building features & biz dev, but it's far from over.
That may be a little overly dramatic. There have been failed fragments/forks/derivatives, but there have also been some sweeping successes.
It's too early to foretell the fate of Rocket. Containers are getting lots of attention, so I'm actually pretty happy to look at this as a potentially rewarding experiment. Worst case, it fails and we keep using Docker (or whatever else springs up).
These aren't really containers. They're giant statically linked binaries, more or less. The actual operating system is now just a VM host for running containerized giant WIMPs (weakly interacting massive programs). Fast-forward a few years and the host can wither and die and be replaced with a proprietary or custom/fragmented management layer. Linux survives only as an internal pseudo-OS within each mega-binary "container."
Edit: what I was really getting at was that these technologies are patches for the inadequacy of the OS. The fact that we need containers at all stems from the difficulty of managing software installations, configuration, etc on the actual operating system.
I'm excited for competition, but unfortunately the post seems a bit confused in its message.
On one hand it talks about the original Docker manifesto and later says it was removed, framing the removal as a "bad" thing. On the other, it criticizes Docker for not being simple, since there are plans to add more and more features to it.
Including a "wide range of functions: building images, running images, uploading, downloading, and eventually even overlay networking, all compiled into one monolithic binary running primarily as root on your server". However, in the original manifesto (that was removed), Docker announced/claimed those features would/should exist: https://github.com/docker/docker/commit/0db56e6c519b19ec16c6....
Competition is good but this was a bit weak in its first appearance.