CoreOS is building a container runtime, Rocket (coreos.com)
600 points by kelseyhightower 8 hours ago | 190 comments




shykes 5 hours ago | link

Hi, I created Docker. I have exactly 3 things to say:

1) Competition is always good. LXC brought competition to OpenVZ and VServer. Docker brought competition to LXC. And now tools like lxd, Rocket and nspawn are bringing competition to Docker. In response, Docker is forced to up its game and earn its right to be the dominant tool. This is a good thing.

2) "disappointed" doesn't even begin to describe how I feel about the behavior and language in this post and in the accompanying press campaign. If you're going to compete, just compete! Slinging mud accomplishes nothing and will backfire in the end.

3) If anyone's interested, here is a recent exchange where I highlight Docker's philosophy and goals. Ironically, the recipient of this exchange is the same person who posted this article. Spoiler alert: it tells a very different story from the above article.

https://twitter.com/solomonstre/status/530574130819923968 (this is principle 13/13, the rest should be visible via Twitter threading)

EDIT: here is the content of the above twitter thread:

1) interface to the app and developer should be standardized, and enforced ruthlessly to prevent fragmentation

2) infrastructure should be pluggable and composable to the extreme via drivers & plugins

3) batteries included but removable. Docker should ship a default, swappable implementation good enough for the 80% case

4) toolkit model. Whenever it doesn't hurt the user experience, allow using one piece of the platform without the others.

5) Developers and Ops are equally important users. It is possible and necessary to make both happy.

6) If you buy into Docker as a platform, we'll support and help you. If you don't, we'll support and help you :)

7) Protect the integrity of the project at all cost. No design decision in the project has EVER been driven by revenue.

8) Docker inc. in a nutshell: provide basic infrastructure, sell services which make the project more successful, not less.

9) Not everyone has a toaster, and not everyone gets power from a dam. But everyone has power outlets. Docker is the outlet

10) Docker follows the same hourglass architecture as the internet or unix. It's the opposite of "all things to all people"

11) Anyone is free to try "embrace, extend extinguish" on Docker. But incentives are designed to make that a stupid decision

12) Docker's scope and direction are constant. It's people's understanding of it, and execution speed, that are changing

13) If you USE Docker I should listen to your opinion on scope and design. If you SELL Docker, you should listen to mine.

reply

spb 5 hours ago | link

I think you're reading too much - or too little - into this if you think they're "slinging mud". Any fork is going to list its reasons for the fork; if they didn't have issues with how Docker is heading, why would they be making the fork in the first place?

If they just quietly gave an ambiguous non-disparaging statement like "we're forking because we're unhappy with the direction Docker is taking", it would seem frivolous and ill-considered, and nobody would know on what points the fork would be aiming to distinguish itself.

This statement needs to be made, the way it was made, for the same reasons any project announcement is made: it needs to announce that it exists, and why it exists. It's the same as Docker's "debut" blog post(s).

Every schism needs its 95 Theses, and the odds favor the ones who can read them, understand them, and take them into consideration.

---

Disclaimer (re https://twitter.com/kenperkins/status/539528757711622145): I make edits to my comments after posting, usually posting a line or two then fleshing them out over time. If I make a change that conflicts with a statement in an earlier revision, I'll note it: otherwise I'm pretty much just composing live.

reply

efuquen 3 hours ago | link

It's really bugging me that people are using the word "fork". This is not a fork; it's a competing container format - there isn't any Docker code in Rocket, AFAIK. Even @shykes called it a fork in a comment, but it's not somebody taking your code and doing something different with it; they are doing their own implementation. Ideas aren't "forked", code is.

As to everything else, I manage CoreOS clusters with Docker for now, and while this came out of the blue (seemingly for the Docker folks as well), I'm happy to see what happens as a result. I'm not sure why there are hurt feelings over the announcement; I didn't find anything particularly in bad taste, and what exactly is wrong with promoting your new product?

The CoreOS team isn't under any obligation to Docker to contribute however anyone on the Docker team wants them to. Even if these issues have been discussed before, they've clearly taken a different path, and that's within their rights; I'm not sure where mud is being slung. Where this will lead, who knows, but hopefully there will still be good collaboration between different groups as they pursue their own goals that align with their needs.

EDIT: I haven't actually looked at the code, so if somebody wants to prove what I'm saying wrong please do. I'm basing what I know off the announcement.

reply

wmf 2 hours ago | link

IMO rewriting something from scratch is like forking but worse because it's impossible to merge later. And Rocket is definitely forking the Docker community.

reply

girvo 1 hour ago | link

If it can't be merged, it's not a fork; that's the key part of forks (well, not entirely, but the lack of shared code means it's not a fork by my definition).

That said, you're on point: this is forking the community. A hard fork, too.

reply

Aeoxic 1 hour ago | link

That's great, except forks can be merged later.

reply

biot 3 hours ago | link

I don't have a horse in this race, but from what I read this is the part that can be construed as "slinging mud". I've put some [read between the lines] comments in square brackets:

  "Unfortunately, a simple re-usable component is not how things are playing
   out. Docker [much to our dismay] now is building tools for launching cloud
   servers, systems for clustering, and a wide range of functions: building
   images, running images, uploading, downloading, and eventually even overlay
   networking, all compiled into one [big and nasty] monolithic binary running
   primarily as root [how insecure is that?] on your server. The standard
   container manifesto was removed [those flip-floppers!]. We should stop
   talking about Docker containers, and start talking about the Docker
   Platform [since we can focus attention on our efforts that way]. It is not
   becoming the simple composable building block we had envisioned [which puts
   our offerings at a disadvantage]."

  "We still believe in the original premise of containers that Docker
   introduced, so [unlike those silly Docker people] we are doing something
   about it."
Later on, they specifically say:

  "the Docker process model ... is fundamentally flawed"
  "We cannot in good faith continue to support Docker’s broken security model..."
All these may be valid criticisms, but even ignoring my potentially off-base annotations it's difficult to read their announcement as anything other than "Docker is broken and can't be fixed". It's reminiscent of political attack ads which focus on the shortcomings of your opponent rather than the strengths of your own platform.

reply

[deleted]
pquerna 4 hours ago | link

Personally, I think the long-term value of Rocket is not about Rocket -- it's about the ACI specification for container formats.

Right now I'm already taking a Dockerfile, exporting it to a tar, and then running systemd-nspawn -- I love Dockerfiles, I love being able to grab a postgres server and get it up quickly from Docker Hub, but I didn't need or want the rest of docker.
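For anyone curious what that workflow looks like in practice, here is a rough sketch (the image name "myapp" and the target directory are placeholders, and this assumes docker and systemd-nspawn are installed on the host):

```shell
# Hypothetical sketch of the workflow above: build an image from a
# Dockerfile, export its filesystem as a tar, and boot the result
# with systemd-nspawn instead of the Docker runtime.
docker build -t myapp .                     # build from the Dockerfile
cid=$(docker create myapp)                  # create (but don't start) a container
mkdir -p /var/lib/machines/myapp
docker export "$cid" | tar -x -C /var/lib/machines/myapp   # flatten layers to a rootfs
docker rm "$cid"                            # discard the scratch container
systemd-nspawn -D /var/lib/machines/myapp   # run the extracted tree as a container
```

Note that `docker export` flattens the layered image into a single filesystem tree, which is exactly the "tar" step described above.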

If both Docker and Rocket support ACI, then you have a composable image layer, and that means people aren't locked into either ecosystem just to build images of their applications.

ACI :: Docker-tar-format to me is like QCOW2 :: VMDK. Wouldn't it be cool if projects like Packer[1] didn't have to exist, because the image format of Virtual Machines was open and documented as an independent standard?

[1] - https://www.packer.io/

reply

shykes 4 hours ago | link

Now we're talking. Yes, I agree having a better spec for the underlying image format would be nice. In fact I also agree you should be able to use the Docker runtime without its packaging system, and vice-versa.

However, I think it makes more sense to do this on the actual Docker format which everyone already uses... That way you get the benefit of increased openness without the drawback of fragmentation. I have the impression I've been pretty vocal in asking for help in making this happen, and I wish these guys had stepped in to help instead of forking. I pretty distinctly remember pitching this to them in person.

So, I'll re-iterate my request for help here: I would like to improve the separation between the Docker runtime and packaging system, and am asking for help from the community. Ping me on irc if you are interested.

reply

derefr 3 hours ago | link

Looking back from the long-term future, though, what's the difference between the two approaches?

Whether the work on a standard container format happens inside or outside of Docker, it would result in a format presumably a bit different from how Docker containers are now (e.g. not overlay-layered by default, since most build tooling wants to just output an atomic fileset.) And either way, work would then occur to make Docker support that standard format.

The only real difference is that, in this approach, the ecosystem also gets a second viable runtime for these standard containers out of the deal, which doesn't seem like a bad thing. You can't have a "standard" that is useful in a generic sense without at least two major players pulling it in different directions; otherwise you get something like Microsoft's OpenDocument format.

reply

losnggenration 1 hour ago | link

I think it's called OVF[1]. It's just not as widely supported as it probably should be.

[1] - http://en.wikipedia.org/wiki/Open_Virtualization_Format

reply

pquerna 1 hour ago | link

In theory, OVF is the 'answer' for Virtual Machines -- but its failure has been in adoption -- if you can't get Amazon and OpenStack to adopt it, what's the point?

Before Rocket/ACI there wasn't even a contender for Containers. Now there is a published spec. Start there. Iterate.

reply

losnggenration 52 minutes ago | link

I don't disagree, but OVF is an ANSI[1] & ISO[2] standard. Like you said, Amazon & OpenStack have chosen not to adopt it.

[1] http://webstore.ansi.org/RecordDetail.aspx?sku=INCITS+469-20...

[2] http://www.iso.org/iso/home/store/catalogue_tc/catalogue_det...

reply

lclarkmichalek 5 hours ago | link

Might be worth mentioning that you are employed by Docker, if you're going to engage in this discussion.

reply

Alupis 4 hours ago | link

The Docker team does this a lot, and it's part of their PR machine. They creep their way into and eventually try to steer every conversation regarding containers, especially when it can potentially be damaging to their "brand". (part of what has rubbed me the wrong way)

~~~

Frankly, shykes and other Docker employees shouldn't be commenting here. It only serves to make them look petty with any attempt at a "rebuttal" and, as shykes put it, "slinging mud". CoreOS made a grand announcement, and yes, it competes with Docker... but just let it play out.

Frankly, there are a lot of things Rocket aims to do that are more appealing to me. Security is one of them, and a standardized container specification is another. If anything, it will make Docker compete better.

reply

ridruejo 4 hours ago | link

Actually, I appreciate that shykes and others take the time to explain their side of things and engage in a dialog. There are a lot of people confused right now about what's going on.

reply

lclarkmichalek 4 hours ago | link

I think it's a little less scary than you think. The person who commented was a dev. Like a very devvy dev, who spends lots of time devving on Docker. He's free to express an opinion, but he probably should have mentioned who he was (I recognised his name because he devs a lot on Docker). But he's not part of the PR machine. He's a dev. A dev with a kinda ill-thought-out opinion, but a dev.

reply

chroma 4 hours ago | link

> The Docker team does this a lot, and it's part of their PR machine. They creep their way into and eventually try to steer every conversation regarding containers, especially when it can potentially be damaging to their "brand".

Can you give three examples of this happening?

reply

shykes 4 hours ago | link

If you must know, the opposite just happened. Someone who happens to work at Docker just voiced their individual opinion. He was then reminded by "the PR machine" that it is better to take the high road and refrain from answering, and let the company make an official answer. This is pretty standard communication practice, and a good way to avoid feeding trolls like you. I know this, because I myself will get in trouble for replying to you :)

reply

Alupis 4 hours ago | link

> avoid feeding trolls like you.

Interesting to see you resort to calling your users "trolls" simply because they feel it's not good for you, the head of Docker, to respond off-the-cuff and angry about a PR announcement from a competitor.

> that it is better to take the high road and refrain from answering, and let the company make an official answer

Your company already released an official announcement 2+ hours ago (with much of the same rhetoric as your post here). Seems you didn't even follow your own advice.

reply

shykes 4 hours ago | link

I'm just calling you a troll, and it's for implying that a cabal of Docker employees somehow manipulates and suppresses the public conversation about containers for the profit of their employers.

reply

sagichmal 4 hours ago | link

    > I'm just calling you a troll, and it's for implying that 
    > a cabal of Docker employees somehow manipulates and 
    > suppresses the public conversation about containers for 
    > the profit of their employers.
Really? This strikes you as a good idea?

reply

Alupis 3 hours ago | link

You came here with the explicit intent of disseminating your viewpoint that CoreOS is making a terrible decision and why your company and its ideals are better. Your company already made an official PR response; leave it at that. (And you call me a troll?)

For the first time in Docker's short history, its future and mission are being directly challenged. This is your response? (It won't be the last time Docker is directly challenged.)

Imagine if Microsoft went around rattling the cage every time Apple released some product -- it would make them look pretty petty pretty quickly. Just get out there and compete. Produce a superior product and the market will speak.

reply

kordless 2 hours ago | link

> Imagine if Microsoft went around rattling the cage every time Apple released some product

You mean like this? https://www.youtube.com/watch?v=eywi0h_Y5_U FIVE HUNDRED DOLLARS FOR A PHONE?

In all seriousness, you made a few blaming statements early on in this thread, which is most likely why you got the reaction you did from Solomon. I'm not opposed to people making observations, but speaking for others really has no place here!

Specifically talking about the "PR machine" comment. Say what you mean!

reply

ascendantlogic 3 hours ago | link

You're just digging a hole here. Better to take your own advice and take the high road.

reply

rdtsc 5 hours ago | link

> Hi, I created Docker. I have exactly 3 things to say:

In the spirit of making lists of things to say, I've got 2.

1) Don't use Twitter for having long conversations and public fights. Just don't. No good will come out of it. Engaging in that is feeding the trolls and slinging mud, which you accuse the other party of doing.

2) Vis-a-vis "just compete!": how do you see this "competing" happening without an announcement like this? "We have created X container thingy"? Ok, isn't it smart to compare it to an existing container "thingy" right off the bat?

Imagine they didn't mention Docker. I can see you writing about "stealing of ideas", "lies", "not being straight-forward", "this is just a Docker clone but they don't mention Docker, so they are being shady" and so on.

reply

shykes 4 hours ago | link

> 1) Don't use Twitter for having long conversations and public fights. Just don't. No good will come out of it.

I encourage you to read the twitter exchange I linked to. It predates all of this, and is not at all a fight. On the contrary, it is a constructive exchange and I am using it to assert Docker's philosophy in a positive way.

> Vis-a-vis "just compete!". How do you see this "competing" happening without an announcement like this. "We have created X container thingy"? Ok, isn't it smart to compare to an existing container "thingy" right of the bat?

Surely it's possible to launch a competing tool without resorting to a press campaign like this one: http://techcrunch.com/2014/12/01/coreos-calls-docker-fundame...

> Imagine they didn't mention Docker. I can see you writing about "stealing of ideas", "lies", "not being straight-forward", "this is just a Docker clone by they don't mention Docker so they are being shady" and so on.

No, I would definitely not say that.

reply

jnoller 1 hour ago | link

Hey Solomon; honest question - skipping the tête-à-tête for a moment, the first tenet you outline:

> 1) interface to the app and developer should be standardized, and enforced ruthlessly to prevent fragmentation

Is one I've been pondering and asking myself about a bit - what does this mean?

Is the interface the API? The docker CLI? Interfaces to libcontainer?

Where does the line "enforced ruthlessly" fall exactly?

Does this mean wrapping the CLI or API in another convenience layer is a no-no if it doesn't expose the docker API directly?

I think the rest of the 13 make perfect sense, and I actually don't think the CoreOS guys were going against any of those in practice or philosophy; more that they wanted something small that did one thing very well.

Anyway, I love you guys and the coreos guys, so I'm only in it for the swag.

reply

tptacek 3 hours ago | link

If you were trying to make sure as many people as possible paid attention to Rocket as a serious alternative to Docker, which is the current de facto standard Linux containerization scheme, well done.

reply

shykes 2 hours ago | link

An article spreading FUD about Docker's philosophy is at the top of HN. I added a comment describing the actual Docker philosophy.

reply

mbreese 1 hour ago | link

You have to realize that commenting here, in this thread in particular, is not helping things... Instead of keeping your head down and letting the buzz blow over, you just made the PR that much stronger for the CoreOS POV. You should have considered posting an article in a few days/weeks that, while not directly refuting the CoreOS post, put the Docker vision front and center and made you look like the leader of the market, not just a company blindly reacting.

I think it's safe to say that while your comments here made you feel better, they didn't help your position at all, regardless of how valid your points are.

reply

akerl_ 2 hours ago | link

Your comments on this post have done more to damage my faith in Docker's philosophy than the Rocket announcement did.

Somebody highlighted concerns they have with the direction of your product. You may not agree with their opinions, but that doesn't make them FUD. They have every right to ship a product that adheres to their vision, just as you do.

reply

xorcist 5 hours ago | link

What's with all the drama? Did we read the same announcement?

What is it that we end users don't know?

reply

burke 4 hours ago | link

Docker and CoreOS are in a pre-monetization land grab for a single market.

They've so far been approaching it from opposing corners, but CoreOS just made the first play at the opponent's territory, and it apparently rattled Docker a bit.

I am excited to have more viewpoints in play.

reply

MyDogHasFleas 4 hours ago | link

Yes and Pivotal (CloudFoundry) has posted a fairly supportive blog entry on Rocket. So it's not just CoreOS "making a play".

https://news.ycombinator.com/item?id=8683540

reply

wmf 4 hours ago | link

Cloud Foundry also quietly forked Docker with Warden/Diego (edit: I meant Garden, thanks kapilvt), although in that case they remained compatible with Docker images.

reply

kapilvt 4 hours ago | link

Clearing up some facts: Warden predates Docker; it's a container impl. Diego is something entirely different, more like Kubernetes or Mesosphere (scheduling & health, etc). Garden, the Go implementation of Warden containers, does add fs compatibility for Docker.

reply

burke 4 hours ago | link

Yeah, I'm not saying it's just a cheap shot; Rocket does a good job of addressing some real issues with Docker.

I'm optimistic that the ecosystem as a whole will benefit a lot from this, no matter how much or how little market share Rocket manages to capture.

reply

lgs 4 hours ago | link

My +1 goes to Kelsey Hightower. On 7 Nov he posted some worries that many Docker users and contributors have had since last year, when you dropped LXC containers instead of working together with the https://linuxcontainers.org/ project to get better code. That already seemed like a strategic business decision to decouple your "product value" from its parent and origin: LXC. IMO, Docker's 'new' direction completely ignored the tremendous amount of support they had from the sysadmin and devops communities.

reply

dminor 4 hours ago | link

> IMO, Docker's 'new' direction completely ignored the tremendous amount of support they had from the sysadmin and devops communities.

Kind of weird that this line from your comment is identical to a line in this comment from another user: https://news.ycombinator.com/item?id=8682864

reply

jsprogrammer 3 hours ago | link

Hacker plagiarism.

reply

lgs 1 hour ago | link

@jsprogrammer yes,

Cutting & pasting (hacking :) is faster if you're not a native speaker, but believe me, in Italian it wouldn't sound so gentle & polite.

Moreover, ... we all hope that "plagiarism" like this won't become a common feeling, a meme.

So what about the other 75% of my worry? That's not a cut & paste; it's my worry. What do you think about:

> ... Kelsey Hightower ... posted on 7 Nov some worries which many Docker users and contributors have had since last year, when you dropped LXC containers instead of working together with the https://linuxcontainers.org/ project to get better code. That already seemed like a strategic business decision to decouple your "product value" from its parent and origin: LXC.

reply

lgs 2 hours ago | link

In fact I just copied & pasted that... not least because I couldn't find better words to express that feeling, which I honestly hope will turn out to be wrong, because I've been on board with Docker since the early days and I'd like to see it more community-driven than private-business-driven.

reply

23david 2 hours ago | link

Ops (particularly in Enterprise) doesn't want batteries included by default. Principles #3 and #5 are incompatible IMO. Do one thing and do it well...

Seems to me that post-Docker 1.2, the Docker team has taken Ops concerns much less seriously and is focused almost exclusively on iterating Dev-friendly features.

Hope things change.

reply

jsprogrammer 3 hours ago | link

I like how 'exactly 3 things' turned into two lists, one with 13 items :)

reply

shykes 2 hours ago | link

Don't be unfair, clearly the 2nd list is nested :)

reply

peterwwillis 5 hours ago | link

Two thoughts:

1. Competition? How can open source software be in competition with anything? It's free, its source code is there; if people want it they'll use it, if not they won't. Why would anyone care what other projects are doing or saying? Just build your tools how you want and go on with life. (Unless you're building your tools specifically to make money, in which case I guess PR and 'competition' do matter a lot)

2. On Twitter you suggested things should be 'composable to the extreme' ..... using plugins and drivers. https://www.youtube.com/watch?v=G2y8Sx4B2Sk

reply

nsmartt 4 hours ago | link

> How can open source software be in competition with anything?

Market share is power. Popular open-source projects can, and do, shape the industry. If you believe your trajectory is the right one for the industry, competition matters a lot.

As an example, Mozilla's Firefox was created to compete with Internet Explorer. It succeeded, and now Mozilla is working to defend the open web, so market share is still crucial for Mozilla even today.

reply

peterwwillis 1 hour ago | link

I'm sorry but you're incorrect. Mozilla's Firefox was originally called Phoenix, and it was created because Mozilla the browser was a dog-slow encumbered monstrosity of Netscape's attempt to create an all-in-one solution for the web. Firefox was essentially competing with Mozilla Suite, but it wasn't so much "competing" as filling a necessary role: a browser that didn't suck.

Mozilla Suite was also not created to compete with Internet Explorer. In fact, Internet Explorer was created to compete with Netscape, which was the dominant browser for years until IE finally knocked it off its catbird seat. It never recovered because IE offered a simple, fast browsing experience, even if it sucked dick at actually rendering content.

In this vein, Phoenix was created in the model of Internet Explorer. So in a way you could say it competed, but in actual fact it was competing against its own progenitor.

Reflecting more on 'competition': the browser wars nearly destroyed the web as we know it as each browser introduced incompatible proprietary extensions which were then picked up (badly) by each other over time. The lack of standards, or good implementations of standards, severely hampered the adoption of more advanced technology. Firefox continues that tradition today by pushing more and more features that IE can't support; we're just lucky that Firefox is the dominant browser now, and that people are now used to upgrading their browser virtually every week.

reply

nsmartt 1 hour ago | link

Huh. I was basing my comment on the knowledge that Mozilla feared IE would become the way to browse the web. I should have double checked.

reply

thu 4 hours ago | link

You seem to narrow down (i.e. restrict) pretty heavily what competition can mean. Open source projects can compete even if no money is involved, e.g. on visibility and amount of help and traction they can get from the community. This is partly related to the concept of fragmentation (where some people argue that fragmentation dilutes efforts).

reply

ykumar6 4 hours ago | link

Can't let the ecosystem fracture. Docker creates a set of standards that are badly needed. This creates value for everyone

reply

otoburb 8 hours ago | link

Interesting takeaways from the post:

* Despite Brandon Philips (CoreOS CTO) serving on the Docker governance board, Docker has aggressively expanded their scope well beyond their original container manifesto.

* CoreOS believes the Docker runtime is now too unwieldy and "fundamentally flawed"; the unwritten word that really sprang to mind was that Docker was getting "greedy."

* CoreOS reaffirms their original operating model of being capable of running their infrastructure on and with Docker.

* Rocket is CoreOS's answer to stay true to the "simple composable building block" mantra.

reply

Rapzid 0 minutes ago | link

CoreOS doesn't mind using systemd...

reply

23david 7 hours ago | link

This is great news, particularly for Enterprise customers adopting containers. IMO, Docker's 'new' direction completely ignored the tremendous amount of support they had from the sysadmin and devops communities.

But crucially, they also crossed the business models of many startups (including CoreOS, Weave, Flocker, etc.) that rely on Docker maintaining an Open Platform. So this is an entirely logical response.

I'll be surprised if Docker doesn't now respond by unveiling an 'enterprise' Docker version that basically just strips away the unnecessary features and has more security by default. The enterprise market is just too valuable to let slip away like this. Your move...

reply

tveita 7 hours ago | link

What is Docker's 'new' direction? I don't see any related announcements on their blog besides adding support on new platforms.

reply

superuser2 6 hours ago | link

Docker's 'new' direction is to direct its attention towards solving the orchestration and management problems involved in actually running infrastructure on Docker.

A number of third parties had begun work on various (sometimes proprietary) orchestration and management systems for creating a reliable/scalable/easily manageable cluster with Docker as a building block. CoreOS is one. But Docker is pushing towards an official, open-source orchestration/management system that threatens to make all of those companies irrelevant.

reply

pnathan 5 hours ago | link

> Docker's 'new' direction is to direct its attention towards solving the orchestration and management problems involved in actually running infrastructure on Docker.

IME examining Docker, this is actually the hard problem.

reply

sandGorgon 5 hours ago | link

I think it is a great stand for Docker. Very recently (in 1.3, IIRC), it merged the functionality of Fig into Docker.

I think Docker orchestration and CoreOS can coexist - if I had to use CoreOS to get the goodness of Docker, then systemd-nspawn would come and eat Docker's lunch.

I wish Docker would bless one of Ansible/Chef as the official orchestration base and take it forward. I really don't want to learn something Docker-specific.

reply

monatron 2 hours ago | link

Fig functionality was not merged into Docker's 1.3 release.

Ansible/Chef orchestration IMHO solves a very different problem than container orchestration.

reply

MyDogHasFleas 6 hours ago | link

I attended Docker Global Hack Day #2 on Oct 30 from Austin. A talk was given on an active Docker project for host clustering and container management, which was non-pluggable, and made no reference to and used none of the code from CoreOS's etcd/fleet/flannel projects.

This was where I first started worrying about CoreOS and Docker divergence.

reply

wmf 6 hours ago | link

But since the hack day there has been a pretty reasonable (IMO) GitHub discussion about the tradeoffs between out-of-box ease of use and customizability.

https://github.com/docker/docker/pull/8859

reply

gabrielgrant 3 hours ago | link

I saw that same presentation at the same event, but came away with a very different impression: the container management they showed was implemented completely outside of docker itself, with no patches to the docker codebase needed. Also, IIRC it actually did use significant code from etcd for coordination.

reply

MyDogHasFleas 2 hours ago | link

Are we talking about the same thing? https://github.com/docker/docker/pull/8859

It had no etcd in it and the POC was implemented as part of the Docker API/CLI, as best I recall. There were significant questions in the discussion about etcd not being there.

reply

fpp 5 hours ago | link

I believe the two (Docker & CoreOS) might have rather similar strategies and / or product roadmaps.

What seemingly gets mixed up by quite a few commentators on this topic:

Docker is an orchestration, deployment, management, etc solution - the "container" is created by LXC, jails, libVirt or other OS features and now also libContainer.

This discussion also shows how far away we are from / how early we are in "containerization" - containers that are exchangeable and movable between different (OS) environments. We are discussing the companies building cranes to load and unload the boxes before we even have an understanding of what the boxes will really look like.

reply

higherpurpose 8 hours ago | link

> CoreOS believes the Docker runtime is now too unwieldy and "fundamentally flawed"; the unwritten word that really sprung to mind was that Docker was getting "greedy."

I wonder if that comes from the partnership with Microsoft.

reply

krschultz 7 hours ago | link

They raised $55 million [1], so you have to believe their ambitions are to extract as much rent from the container ecosystem as possible. That's not a bad thing, but it's behind a lot of their moves.

[1] http://www.crunchbase.com/organization/docker

reply

Alupis 7 hours ago | link

Docker's MO is to become "that thing that is on all servers" so that when they flip the switch and start monetizing off support and tertiary services, people will be more-or-less locked in.

It has indeed surprised me how quickly a normally-slow-to-accept-new-things community has adopted Docker (even well before it was considered "stable").

reply

simtel20 7 hours ago | link

> how quickly a normally-slow-to-accept-new-things community

I think you're referring to the sysadmin community - but I think the driver for this has been the search for deployment nirvana. Deployment is a much more fragmented field, so it makes sense that a good solution would find fertile ground.

reply

vcarl 7 hours ago | link

Absolutely. Not only does it simplify deployment, you also get the ability to quickly spin up a new development environment. That means it's easy to dip a toe in and slowly increase how much you use it.

reply

digitalzombie 3 hours ago | link

Like the MongoDB hype?

Docker seems decent, but I don't think I want them to do orchestration...

reply

hobofan 3 hours ago | link

There is no switch to flip.

You are not "locked in" by Docker Inc if you are using Docker just like you aren't locked in by Github if you are using git.

reply

frostmatthew 3 hours ago | link

> You are not "locked in" by Docker Inc if you are using Docker just like you aren't locked in by Github if you are using git.

A much more accurate analogy would be you are not "locked in" by Oracle if you are using MySQL. It may be true today, but no guarantee that will always be the case.

reply

otoburb 7 hours ago | link

Regardless of whether your comment was lightly sarcastic or not, I agree that the Docker VMWare[1] & Microsoft partnership announcements may have been conditional upon a committed Docker roadmap outlining some or all of the features that others (such as CoreOS) may feel should be broken out. Typically larger ecosystem players want to be assured that your offering will have a clearly defined role within their existing ecosystem that plays to your core brand and technical competency.

[1] http://www.forbes.com/sites/benkepes/2014/08/25/vmware-gets-...

reply

outside1234 7 hours ago | link

Wow, that is a random jump of logic. Answer: no.

reply

sentiental 8 hours ago | link

I have been concerned that Docker's scope was expanding too far for a while now, so I'm glad to see an alternative that might work appear on the horizon. That said, I am somewhat concerned that CoreOS has a suspiciously similar business model to where Docker would probably like to be.

It's in a business's best interest, and exceedingly common practice, to "land and expand" with something clear and compelling, and following that add features to compete with alternative solutions. I don't think there's anything inherently altruistic about CoreOS that would keep Rocket lean in the long-run, especially as they begin migrating their various tools away from Docker containers.

reply

rafikk 7 hours ago | link

I had the same initial reaction, but I think there's good reason to trust the CoreOS folks to remain faithful to the project's goals. Containerization (although foundational) is one part of CoreOS's platform. It's easy to see where the boundaries fall, e.g. I expect systemd and fleetd to keep their respective functionality and not overlap with Rocket.

It became pretty clear once dotCloud became Docker Inc. that they intended to capitalize on the "Docker" brand to sell an integrated orchestration platform. CoreOS already has enterprise customers for their operating system and related components. They seem like the perfect team to take this challenge on.

reply

Alupis 7 hours ago | link

I think it's also crucial users have more than one viable container option.

reply

altcognito 7 hours ago | link

> I have been concerned that Docker's scope was expanding too far for a while now

What features were recently introduced that increased Docker's scope?

reply

lclarkmichalek 5 hours ago | link

Talk of Docker clustering, which might include a network overlay layer a la Weave.

reply

shykes 5 hours ago | link

All of which will be fully pluggable with a "batteries included but removable" design, just like we did with sandboxing and storage.

reply

lclarkmichalek 5 hours ago | link

You may need to make that clearer to some of the people that are due to be building your plugins: reading http://weaveblog.com/2014/11/13/life-and-docker-networking/, I get the feeling that they're not thrilled about it.

reply

shykes 4 hours ago | link

maybe they're busy designing the interface and implementing a proof-of-concept with us as we speak, instead of blogging and twittering.

reply

lclarkmichalek 4 hours ago | link

Welp, I'm putting "been snarked by founder of Docker" on my CV.

reply

shykes 4 hours ago | link

I guess so - sorry ;)

I hope you can understand that it's frustrating when, after hard work pitching an API to dozens of ecosystem players, spending weeks trying to wrangle a working implementation which makes as many of them as happy as possible, without compromising integrity of design - after all that, in the end, all it takes is one unhappy camper to write a blog post and that immediately tramples everything else.

It's even more discouraging in this particular case, because after this blog post, Alexis and I have discussed this topic extensively, and as a result he has since joined the effort. In fact I will be hacking with him in person on integrating Weave as a native networking plugin in 2 days in Amsterdam.

So, sorry for the insta-snark. But it can be frustrating to see so much good will and hard work be crushed in a second.

reply

lclarkmichalek 4 hours ago | link

Hey, regardless of the technical sides of anything, I'm sure this is not a fun day for you. I think you're handling the situation terribly, but still, not a fun day. Anyway, docker is awesome, and thanks for building it. I know it's made my devops life a lot more enjoyable of late.

reply

shykes 4 hours ago | link

I guess I am. PR has never been my thing. I'll get back to hacking, after all it's the reason we do all this: building cool things.

reply

jsprogrammer 2 hours ago | link

Don't do PR, just build the better thing.

No malice, just a friendly tip :)

reply

jbaptiste 2 hours ago | link

Concerning Weave, that's quite good news; Weave's point of view on Docker networking is sound and can be easily set up on many (though of course not all) infrastructures.

reply

jmendeth 7 hours ago | link

The difference here would be, IMO, that they have clearly made openness one of Rocket's goals: the formats should be well-specified and maintained separately so that other implementations can run them.

reply

wmf 6 hours ago | link

They'll probably keep Rocket lean and introduce new features as "separate projects" that will all be bundled into CoreOS.

reply

_mikz 1 hour ago | link

I hope Rocket will be more stability-oriented than Docker. After running a few hundred containers per machine for almost a year now, I would not choose Docker again. Docker has stability issues all the time, and it takes months to solve them.

We offered strace logs to the developers without getting any feedback; in the end it was fixed by someone from outside the project. https://github.com/docker/docker/issues/7348

Port allocation breaks now and then, every odd Docker release: https://github.com/docker/docker/issues/8714

Even the simplest things, like allowing more than one Dockerfile in a folder: https://github.com/docker/docker/issues/2112

Docker has its own agenda, and it is becoming clearer and clearer.

reply

bjt 5 hours ago | link

I had just landed LXC container support in Velociraptor [1] when Docker was announced last year. It uses Supervisor to launch LXC containers and run your app inside. I thought long and hard about switching to Docker, but their decision to remove standalone mode [2] would have meant replacing all of Velociraptor's Supervisor integration with Docker integration instead. With Docker being such a moving target over that time span, it just seemed like a bad move.

Since then I've been mulling writing my own standalone 'drydock' utility that would just start a single container and then get out of the way (as opposed to the Docker daemon that insists on being the parent of everything). I'm optimistic that Rocket could be that thing.

Question though: Does Rocket have any concept of the image layering that Docker does? That still seems to me like a killer feature.

[1] https://bitbucket.org/yougov/velociraptor/ [2] https://github.com/docker/docker/issues/503

reply

philips 5 hours ago | link

Yes, the app-container spec has the concept of dependent filesets. See:

https://github.com/coreos/rocket/blob/master/app-container/S... https://github.com/coreos/rocket/blob/master/app-container/S...

What do you think of the filesets concept?
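
For illustration, a dependent fileset manifest in this spirit might look roughly like the sketch below. Every field name and value here is hypothetical, invented for illustration rather than quoted from the actual app-container spec; consult the spec documents linked above for the real structure.

```python
import json

# Hypothetical fileset manifest: a named, versioned filesystem layer
# that declares dependencies on other filesets. Field names are
# illustrative only.
manifest = {
    "name": "example.com/nginx",
    "version": "1.7.0",
    "os": "linux",
    "arch": "amd64",
    "dependencies": [
        {"name": "example.com/base-debian", "version": "7.7"},
    ],
}

print(json.dumps(manifest, indent=2))
```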

reply

bjt 5 hours ago | link

I'm still digesting the Go-like syntax for vanity URLs and how that works here. If a fileset manifest lets you specify the URLs where the layers can be fetched from, then I like it.

Does Rocket just 'cp' files on top of each other to implement layering? It'd be nice to not require a bunch of copies of the same files. I thought that the hard link implementation in Docker's new overlayfs support was a smart idea.

reply

philips 5 hours ago | link

Yes, all of this was designed with overlayfs in mind. I am waiting anxiously for Linux Kernel 3.18 to land, this is a huge step forward for Linux and years in the making.
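
As a toy model of what overlayfs gives you: entries in the writable upper layer shadow entries in the read-only lower layer, and everything else shows through without being copied. On a 3.18 kernel the real thing is mounted along the lines of `mount -t overlay -o lowerdir=...,upperdir=...,workdir=... overlay /merged`; the sketch below only models the lookup semantics, not the filesystem itself.

```python
# Toy model of overlay lookup semantics: the writable upper layer
# shadows the read-only lower layer; unchanged files show through.
def overlay(lower, upper):
    merged = dict(lower)
    merged.update(upper)  # upper entries win on conflicts
    return merged

lower = {"/bin/sh": "busybox", "/etc/os-release": "base image"}
upper = {"/etc/os-release": "app image", "/app/run": "app binary"}

merged = overlay(lower, upper)
print(merged["/etc/os-release"])  # prints "app image"
print(merged["/bin/sh"])          # prints "busybox"
```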

reply

sophacles 4 hours ago | link

Cool! The idea of filesets is very nice - there are some very interesting workflow ideas buried in there. I've been looking forward to a mainstream unioning filesystem for a while too - and I hope rocket does some serious exploration (or enables it) of how to take advantage of them fully in both development and deployment. (And while I'm at it, testing too).

My personal wish-list down this path includes:

* options in the relevant manifests on which layer is writable ... if I'm doing development on libfoo which is used by several different apps, let me make that layer writable so I can rapidly iterate integration tests and (bad practice) live coding on testing/dev servers.

* tools to help me smash a dev layer or 3 into a single fileset (and similarly dissect a layer into a few new filesets during a refactoring)

* the ability to use filesets and overlays in a way similar to package management is now, but with extended features that are similar to python's virtualenv.

One of the things I see as a boon of the filesets as described is: I can update parts of my system without having to rebuild the whole dang app silo from the get go. Combining this with some of the above features looks like it could be useful for making "thin" images - where I can build all my code in one place, and port only the binaries to the staging and deployment images, just by doing a few fileset/overlay tricks. (no more complicated scripts)

reply

bjt 4 hours ago | link

Whether Velociraptor uses Rocket or not, implementing the App Container Spec seems like a no-brainer. I've opened https://bitbucket.org/yougov/velociraptor/issue/136.

reply

digitalzombie 3 hours ago | link

Please, no JSON files - use YAML, or at least have a YAML option. T___T

The curly braces and brackets can get ugly when nested.

edit:

It may seem stupid, but when you're in a terminal with vim and the path into some nested object/array gets long, things are really hard to parse with your eyes.

reply

ecnahc515 2 hours ago | link

I think using JSON is a solid choice. You can easily write YAML-to-JSON translators for this purpose.

reply

ash 1 hour ago | link

JSON is okay, but I hope TOML will gain traction soon. Because comments. And trailing commas. Rust's Cargo is already using TOML for package metadata.

https://github.com/toml-lang/toml

http://doc.crates.io/manifest.html

reply

yannisp 6 hours ago | link

Docker just posted a blog response

http://blog.docker.com/2014/12/initial-thoughts-on-the-rocke...

reply

cheshire137 5 hours ago | link

Hacker News thread: https://news.ycombinator.com/item?id=8683276

reply

gexla 1 hour ago | link

That's open source. The early implementation of an idea is broken. Someone creates an alternative which fixes the problems. The alternative often doesn't gain the same traction, and the original continues as the broken dominant implementation. But the alternative is also broken, maybe in different ways. As design decisions pile on, the brokenness spreads. In the end, we again learn that software sucks. It will always suck. If you don't like reinventing the wheel (or relearning the reinvention), stick with the "good enough" and focus on building cool stuff.

reply

bketelsen 8 hours ago | link

Great news. I'm not a fan of Docker's new monolithic approach to containerization. Things like orchestration and networking should not be included in docker, but rather pluggable.

reply

justinsb 7 hours ago | link

I prefer the Unix model - many programs that work together. That might not be practical for networking (probably a natural plug-in), but it feels like the right way for orchestration.

The Docker image registry and image management should really be a separate program as well - that is a huge pain point that Rocket seems more likely to get right.

reply

ecnahc515 7 hours ago | link

Interestingly enough, with flannel, docker's advanced networking capabilities become pretty trivial, and communication across hosts is also pretty trivial.

I think, all in all, CoreOS has built a ton of tools that make using Docker easier, and they're all very well defined and composable. I'd even say that a lot of Docker's features could be removed entirely by using some of these tools.

Links? Nah just use ips/dns + etcd for service discovery.

Networking? Very basic bridged networking is all you need, and flannel will handle communication on a single host or across hosts.

Deployment? Use fleet.

Not that all of these are 100% as perfect as I've made them out to be, but any individual component can be swapped out if you want.
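
To make the "use fleet" step concrete: a fleet unit is an ordinary systemd unit plus an optional [X-Fleet] section that controls scheduling. The service and image names below are made up for illustration.

```ini
[Unit]
Description=Example web service
After=docker.service
Requires=docker.service

[Service]
# Remove any stale container before starting a fresh one.
ExecStartPre=-/usr/bin/docker rm -f web
ExecStart=/usr/bin/docker run --name web -p 80:80 example/web
ExecStop=/usr/bin/docker stop web

[X-Fleet]
# Never schedule two instances of this unit on the same machine.
Conflicts=web@*.service
```

Submitted with something like `fleetctl start web@1.service`, fleet picks a machine in the cluster and hands the unit to that machine's systemd.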

reply

spb 5 hours ago | link

The thing I like about the link model is that links hide your containers from other containers and only expose the connections you want (using iptables, I think).

I'd like a tool that makes this linking easier outside of Docker, but for now this is one of the features I like about it (although holy moly do Docker links have a lot of baggage you have to bring along for the ride, like giving everything names).

reply

vishvananda 2 hours ago | link

Shameless self plug, but not sure if you saw my project that does something along these lines:

https://github.com/vishvananda/wormhole

reply

bananaoomarang 7 hours ago | link

I think this is probably more indicative of the issue that Future Docker would like to be a CoreOS-competing platform, and has been edging towards that state. This is CoreOS' natural bounceback from that.

reply

digitalzombie 2 hours ago | link

That sounds awesome. I'm learning Docker, and I might wait for these issues to be resolved first.

I don't like the sound of locking into one vendor for everything.

reply

cpuguy83 8 hours ago | link

This is exactly the model being proposed in Docker.

reply

mmcclure 7 hours ago | link

I think this was the original model proposed by Docker. What we have now is (as other posters have mentioned), a Docker organization reasonably bent towards creating value for their investors, which means they need to start building things that, you know, make money.

To clarify, I don't think there's anything inherently wrong with what Docker's doing, but it is at odds with an entirely open, pluggable system. It doesn't make any sense for their business model to truly make it easy to just use their containers and none of the revenue-generating offerings.

reply

gtirloni 6 hours ago | link

I've not been following the discussions but if it's such a critical piece of the whole puzzle and it's in everybody's interest that it remains open, wouldn't a foundation, rather than a single private company, be the best venue for leading the project forward?

reply

FooBarWidget 6 hours ago | link

Then how do you fund that foundation? Good developers cost a ton of money. Marketing, organizing events, organizing conferences etc also costs a ton of money. I think something like Docker, especially given its growth and adoption rate, never would have been possible without VC funding. VCs wouldn't invest in a non-profit foundation.

reply

degio 1 hour ago | link

I found reading these comments very interesting.

From one point of view, I'm thinking "why did CoreOS need to be so aggressive?", and "boy, what a gift Solomon Hykes gave CoreOS by mismanaging this thing so badly", and "man, all of these guys look sort of immature to me".

From the other point of view, I'm respecting Docker and CoreOS even more, as open source projects and as companies, because it feels like there are real people behind them.

If this is the new wave of enterprise companies, I really like it. These are people like us, that engage with us and sometimes screw up, without hiding it. They are doing great things, and the fact that they are a bit immature is actually great.

I'm an entrepreneur myself, I've done enterprise software my whole life, and I always thought it's a shame that companies in this space are so distant from their users and have such little humanity.

Looks like things are changing.

reply

darren0 7 hours ago | link

Rocket is tied to systemd, that will definitely spawn some interesting discussions. https://github.com/coreos/rocket/blob/9b79880d915f63e7389108...

reply

philips 7 hours ago | link

It isn't tied to systemd. The stage1 that is in the current prototype uses systemd to monitor and fork processes but we would love to see other stage1's that configure other process runners. For example configure and run a qemu-kvm filesystem as the container.

Also, even though it uses systemd to monitor and fork processes, a design goal is to run on any Linux with a modern kernel.

reply

teacup50 6 hours ago | link

What about non-Linux platforms (FreeBSD, Mac OS X with a kext)?

One thing that I believe Docker has failed at is in taking a purely declarative approach to image definition; rather than specifying the packages that are assembled/inserted to create the container, Docker ships around non-portable Linux binaries.

reply

tachion 6 hours ago | link

I second that. In the beginning, Docker people mentioned adding FreeBSD jails support, which seemed to me an awesome thing - a platform-independent containerization middleware - but recently they seem to have forgotten about it and are doing only Linux-centric things. What a shame.

reply

bcantrill 6 hours ago | link

Yes, but the Docker Remote API allows for a great deal of implementation freedom -- including running on a different OS substrate. We're doing this with sdc-docker[1] to run Docker on top of SmartOS and in a SmartOS container, and the Docker folks have been incredibly supportive. Despite the rhetoric, Rocket appears to be much more bound to the OS platform than Docker -- and given @philips' comment that "part of the design difference is that rocket doesn't implement an API"[2], this binding appears to be deliberate.

[1] https://github.com/joyent/sdc-docker

[2] https://news.ycombinator.com/item?id=8682798

reply

robszumski 7 hours ago | link

The great part of having a spec separate from the runtime is that Rocket can use systemd, but other compatible tools won't have to.

reply

tknaup 6 hours ago | link

This part is super important for Rocket support in Mesos and other things that run containers.

reply

jtchang 4 hours ago | link

I don't see any mud slinging.

I've used Docker. And I am looking forward to Rocket. I will use both and I will compare without prejudice.

I personally like the idea of Rocket and am looking forward to more blog posts comparing the two!

reply

pron 6 hours ago | link

Looking at the code[1], this seems to be a simple wrapper around systemd-nspawn[2].

[1]: https://github.com/coreos/rocket/blob/9ae5a199cce878f35a3be4...

[2]: http://lwn.net/Articles/572957/

reply

vito 7 hours ago | link

The post mentions not having a daemon running as root, but then you have to run `rkt` as root anyway. Won't this just mean that instead of having a single implementation of a Rocket daemon running as root, there is now one custom one every time it needs to be automated?

It's great to see this problem broken up into reusable pieces though. It totally makes sense to function without a daemon, especially out of the box.

reply

rst 6 hours ago | link

There actually is a significant difference between having 'rkt' as a setuid-root process invoked from the command line and having a Docker server always running, waiting for commands: there are more ways for a potential attacker to get at the server. So Rocket at least looks like it's trying to shrink the attack surface.

reply

vito 5 hours ago | link

Yep, setuid would make sense. Hopefully that's how people end up using it. (i.e. Rocket should document or distribute it that way)

reply

makomk 2 hours ago | link

> There actually is a significant difference between having 'rkt' as a setuid-root process that's invoked from the command line, and having a docker server always running waiting for commands. There are more ways for a potential attacker to get at the server.

Wrong. With a server, the only thing an attacker has control over is its input. With a setuid-root binary, they still have control over its input, but they also have control over the entire environment under which it executes, including many things that developers generally assume an attacker can't control. Setuid binaries are incredibly scary from a security perspective and much harder to get right than servers.
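
A sketch of the point above: a setuid helper inherits an attacker-chosen environment, so a standard first defense is to discard that environment entirely and continue from a fixed whitelist. The variable names below are illustrative.

```python
# Why setuid binaries are scary: the caller controls PATH, LD_PRELOAD,
# IFS, locale variables, open file descriptors, resource limits, and
# more. A daemon started by init inherits none of that. A privileged
# helper's first defensive step is to reset the environment entirely.
SAFE_ENV = {"PATH": "/usr/sbin:/usr/bin:/sbin:/bin"}

def scrubbed_environment(env):
    # Ignore the inherited environment; start from a fixed whitelist.
    return dict(SAFE_ENV)

attacker_env = {
    "PATH": "/tmp/evil:/usr/bin",   # attacker-controlled search path
    "LD_PRELOAD": "/tmp/evil.so",   # attacker-controlled code injection
}
print(scrubbed_environment(attacker_env))  # prints only the whitelist
```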

reply

teekert 4 hours ago | link

Hmm, I played around with CoreOS for the past few weeks; it was nice, and I'm getting the hang of it. What is constantly difficult, though, is that there is no cross-linking of containers (the MySQL database is accessible at user@172.ip.add.r while the Nginx/PHP-FPM container is looking for a specific MySQL IP address). Restarting containers from images changes both IPs. Not handy. Why not always share a common /etc/hosts, listing all current containers (each name with its current IP address), among them?
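
A sketch of that shared-hosts idea: regenerate one hosts file from the current container-name-to-IP map whenever a container restarts, so services find each other by name rather than by a hard-coded address. The names and addresses below are invented.

```python
# Render /etc/hosts-style entries from a container name -> IP map, so
# "mysql" keeps resolving even after a restart changes its address.
containers = {
    "mysql": "172.17.0.5",
    "nginx": "172.17.0.7",
}

def render_hosts(containers):
    lines = ["127.0.0.1 localhost"]
    for name, ip in sorted(containers.items()):
        lines.append(f"{ip} {name}")
    return "\n".join(lines) + "\n"

print(render_hosts(containers))
```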

I was also having some issues with php5-fpm in a container; it doesn't seem designed for this (it gets file paths from Nginx, not the files themselves, so the containers need to keep their files in sync).

Somehow I thought CoreOS and Docker would be figuring this out together. I hope the knowledge I now have will remain relevant; I was planning a hosting service for sports clubs based on Drupal 8.

Ah well, we are at the beginning of an era; I should have expected this. I'm very curious - who knows, the container space is far from filled, and we'll be seeing many distros. There will be Gentoos, there will be Ubuntus. It's going to be nice.

reply

andruby 6 hours ago | link

Docker has responded on their blog. https://news.ycombinator.com/item?id=8683276

reply

smegel 4 hours ago | link

Every open source project starts off so well, then the "founders" decide they want to be gazillionaires, and it's all downhill from there.

Sad.

reply

HorizonXP 7 hours ago | link

As a heavy user of CoreOS and docker, I'm interested to see how this plays out.

My problems with docker have been the security model, for which the only recourse I've had is to use the USER keyword in my Dockerfiles. Furthermore, networking has been a pain point, which I've had to resolve by using host networking to access interfaces.

Let's see how rocket deals with these issues and others. I pay for CoreOS support, so I'm glad to see that they're addressing this.

reply

tedchs 6 hours ago | link

Has libcontainer[1] been considered as a minimal Docker alternative?

[1] https://github.com/docker/libcontainer

reply

bastichelaar 7 hours ago | link

Docker's main focus is to get people to agree on something, and they are doing great in getting traction and adoption. But if everyone starts creating their own flavor of containers, we still don't get portability across servers and clouds. It would be better, IMHO, if Rocket implemented the Docker API, or if the two collaborated on a minimal standard. Then everyone would benefit. I'm really curious how Solomon will respond to this...

reply

philips 7 hours ago | link

FWIW, part of the design difference is that rocket doesn't implement an API. When you do `rkt run` it is actually executing under that PID hierarchy; there is no rktd that forks the process.

This is a design goal so that you can launch a container under the control of your init system or other process management system.
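
Illustrating that design goal: since `rkt run` executes the container itself rather than asking a daemon to, a plain systemd unit can supervise it directly. The paths and image name below are hypothetical.

```ini
[Unit]
Description=etcd container via rkt

[Service]
# rkt run stays in the foreground, so systemd supervises the container
# directly; no intermediate daemon owns the process.
ExecStart=/usr/local/bin/rkt run /var/lib/containers/etcd.aci
Restart=on-failure
```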

reply

otoburb 6 hours ago | link

Docker's initial response: https://blog.docker.com/2014/12/initial-thoughts-on-the-rock...

reply

bastichelaar 6 hours ago | link

Thanks!

reply

jambay 8 hours ago | link

It's a very exciting time for Linux containers. It's been fun to watch the evolution from BSD jails to LXC to Docker, and the rate of innovation and usefulness is certainly accelerating. Rocket's approach seems like it will be much less of a black box than Docker images/registry, which should make it much more approachable for people trying to understand what Linux containers are all about.

reply

mwcampbell 7 hours ago | link

How will App Container Images be built? I'm guessing that unlike Docker, the standard App Container build tool(s), if any, will be separate from Rocket.

reply

philips 7 hours ago | link

Right now there is an `actool build` subcommand that will build an ACI given a root filesystem. That tool is used to build the validation ACIs and the etcd ACI. It is rough right now, and we will make it simpler to use over time; as rkt gets better, people will be able to run the build tool from inside a container, given source code.

reply

mwcampbell 7 hours ago | link

Nice. It occurs to me that since an ACI is just a tarball, the build process is decoupled from the runtime engine, unlike in Docker. I've found the Docker build process to be unsuitable for creating minimal images (though I've read that nested builds plus layer squashing will fix this). It'll be interesting to watch the exploration of different build tools and processes that Rocket's decoupled approach will enable, if it catches on.

reply

jzxcv 6 hours ago | link

> Nice. It occurs to me that since an ACI is just a tarball, the build process is decoupled from the runtime engine, unlike in Docker.

Yep, this is _exactly_ one of our design goals. ACIs are trivially buildable and inspectable with standard Unix tools.
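
A sketch of that property: the layout below (a manifest file plus a rootfs/ directory) is illustrative rather than the exact spec layout, but the archive it produces is equally inspectable with plain `tar -tzf`.

```python
import io
import json
import tarfile

# Build an ACI-like tarball with ordinary archive tooling: a JSON
# manifest plus a rootfs/ tree. Layout is illustrative, not the spec's.
def build_aci(manifest, files):
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        data = json.dumps(manifest).encode()
        info = tarfile.TarInfo("manifest")
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))
        for path, content in files.items():
            info = tarfile.TarInfo("rootfs/" + path)
            info.size = len(content)
            tar.addfile(info, io.BytesIO(content))
    return buf.getvalue()

aci = build_aci({"name": "example.com/hello"},
                {"bin/hello": b"#!/bin/sh\necho hi\n"})
with tarfile.open(fileobj=io.BytesIO(aci), mode="r:gz") as tar:
    print(tar.getnames())  # prints ['manifest', 'rootfs/bin/hello']
```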

reply

cpuguy83 5 hours ago | link

Docker can import any tarball as a rootfs for a container, essentially allowing you to use whatever build tool you want.

Dockerfiles/`docker build` is an implementation of a build system which uses the docker engine to make said rootfs.

reply

ash 3 hours ago | link

Yes, but the actual container image that is being distributed can only be created by Docker. The ability to import is nice, but irrelevant here.

reply

wmf 6 hours ago | link

Docker already supports alternative build systems via docker import.

Realistically, if the stack is broken into a dozen pieces then somebody will create a bundle with sensible defaults (let's call it "CoreOS") and then we'll be back in the same situation.

reply

bkeroack 5 hours ago | link

Forget the interpersonal back-and-forth. My suspicion is that this is largely because CoreOS (the company) does not want its product completely dependent on another for-profit company's platform (Docker). It's just smart business.

reply

justinsb 8 hours ago | link

This looks very interesting - it'll be really useful to have something like Docker that isn't so monolithic - it should be much more composable in new ways.

reply

billconan 6 hours ago | link

This may be a noob question,

I'm looking into using containers for UI applications. I need to access the GPU from within the application. Is this doable with Rocket or Docker?

Also, does Rocket have to be used with CoreOS?

reply

ozzyjohnson 4 hours ago | link

Have a look at this container [1] I put together for accessing GPU instances on AWS via Docker. It runs various compute tasks, including multiple containers against a single GPU, without issue.

From the looks of your other comments in this tangent it might be exactly what you need or a starting point at least.

It's a base for these BOINC [2] and F@H [3] containers.

1: https://registry.hub.docker.com/u/ozzyjohnson/cuda/

2: https://registry.hub.docker.com/u/ozzyjohnson/boinc-gpu/

3: https://registry.hub.docker.com/u/ozzyjohnson/cuda-fah/

reply

billconan 3 hours ago | link

Thank you very much! This is really useful information. Aside from CUDA, I also want to make EGL/OpenGL work with Docker; hopefully I can find examples for that.

reply

wmf 6 hours ago | link

GNOME is working on sandboxing + application packaging that's basically containers under a different name. http://blogs.gnome.org/aday/2014/07/10/sandboxed-application... http://blogs.gnome.org/uraeus/2014/07/10/desktop-containers-... http://www.superlectures.com/guadec2013/sandboxed-applicatio...

reply

billconan 5 hours ago | link

Thank you very much, I will take a look. But the fact that this is tied to GNOME worries me; I actually need a console application with GPU access.

reply

wmf 4 hours ago | link

Ah, in that case Docker may be a better choice. You can (probably?) use volumes to expose /dev/drm and such into the container.

reply

pipeep 6 hours ago | link

Certainly. The kernel can simply pass through the device, although you lose some of the security of containerization that way. There may be issues with multiple containers sharing the same GPU though.

reply

billconan 5 hours ago | link

I indeed need multiple containers to share the same gpu. :(

reply

jambay 6 hours ago | link

No, Rocket does not require CoreOS, just Linux. See: https://github.com/coreos/rocket#trying-out-rocket

reply

pron 6 hours ago | link

I'm interested: why do you need a container for a UI application? It would be better for your users if it could run as a simple process.

reply

billconan 5 hours ago | link

I actually need the GPU, not the UI. I need it for scientific computation. Video streaming is another use case; GPUs have better video encoding capabilities.

I previously heard that docker has trouble loading device drivers.

reply

tedreed 4 hours ago | link

Not the parent poster, but needing GPU isn't necessarily the same as having UI. You can use GPU for a variety of general purpose math (Example: mining bitcoins, or doing stuff like Folding@Home), or for offline rendering.

reply

billconan 3 hours ago | link

Yes, I understand offline rendering; I'm looking into EGL off-screen rendering. But for historical reasons, the current GPU drivers (NVIDIA's) need an X server.

reply

darren0 8 hours ago | link

I wonder if Ubuntu LXD will participate in this?

reply

bboreham 7 hours ago | link

LXD is another competitor to Docker, as I understand it, so it will participate in the fight, for sure.

reply

retrack 6 hours ago | link

Interesting to see what the CoreOS team is building. If the code becomes as neat as some of the main parts of CoreOS, that alone merits attention; we cannot have too much security.

reply

mbreese 8 hours ago | link

I'm all for a new container runtime if it lets me start containers as a non-root user. Allowing non-root users to start containers would open up a whole new level of applications, particularly on multi-tenant HPC-style clusters.

reply

carllerche 8 hours ago | link

This would only be possible on very new Linux kernels (those that provide user namespaces).

reply

craneca0 4 hours ago | link

Interesting branding. "Rocket" is basically only one letter different from "Docker". That can't be coincidental. Also has opposite implications - taking off vs settling in.

reply

pnathan 2 hours ago | link

Rocket and Docker are both 6 letter words.

Can't be coincidental.

reply

ash 3 hours ago | link

And if you read "Docker" backwards you'll get "Rocket". Somewhat.

reply

mrmondo 5 hours ago | link

Interesting that they're talking about security when CoreOS has always shipped with SELinux disabled.

reply

meesterdude 8 hours ago | link

Awesome! This sounds like a great philosophical fork of Docker; I'm excited to see it grow.

reply

jtolds 7 hours ago | link

Improving Docker's security model is mentioned. Docker is currently known to be unsafe for running untrusted containers. Does anyone know yet whether Rocket plans to support running untrusted containers safely, à la sandstorm.io?

reply

pierreozoux 8 hours ago | link

Will there be support for socket activation? (Something that is still missing in Docker...)

reply

philips 8 hours ago | link

Yes, we have already prototyped socket activation with Rocket, but the patches haven't been merged. So yes, the intention is to make socket activation work.
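For readers unfamiliar with it: under systemd, socket activation means systemd owns the listening socket and only spawns the service on the first connection, passing the already-open socket in as a file descriptor. A minimal unit pair looks roughly like the following (names, port, and path are invented for illustration; the actual Rocket patches philips mentions may differ):

```
# myapp.socket -- systemd listens here and starts myapp.service on demand
[Socket]
ListenStream=8080

[Install]
WantedBy=sockets.target

# myapp.service -- receives the open socket as a file descriptor
# (see sd_listen_fds(3); the first passed fd is number 3)
[Service]
ExecStart=/usr/bin/myapp
```

This lets a container (or any service) start lazily while the port is available from boot.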

reply

thebeardisred 8 hours ago | link

Absolutely!

reply

nstott 3 hours ago | link

Congrats on the release! I look forward to seeing what you guys do with this.

reply

preillyme 7 hours ago | link

Docker has a new competitor (wired.com) https://news.ycombinator.com/item?id=8682794

reply

peterwwillis 5 hours ago | link

So here's my take on this. From the docs on GitHub:

  The first step of the process, stage 0, is the actual rkt binary itself. This binary is
  in charge of doing a number of initial preparatory tasks:
  
    Generating a Container UUID
    Generating a Container Runtime Manifest
    Creating a filesystem for the container
    Setting up stage 1 and stage 2 directories in the filesystem
    Copying the stage1 binary into the container filesystem
    Fetching the specified ACIs
    Unpacking the ACIs and copying each app into the stage2 directories
Questions:

Don't all these steps seem like a lot of disk-, CPU-, and system-dependency-intensive operations just to run an application?

Why is this thing written in Go when a shell script could do the same thing while being more portable and easier to hack on?

Why are they saying this thing is composable when they just keep shoving features (like compilation, bootstrapping, configuration management, deployment, service autodiscovery, etc) into a single tool?
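For what it's worth, the quoted stage 0 steps do translate fairly directly into shell. Here's a rough, hypothetical sketch (the paths, manifest format, and commented-out ACI-fetch step are made up for illustration and are not rkt's actual behavior):

```shell
#!/bin/sh
# Hypothetical sketch of rkt's stage 0 preparation steps -- not the real tool.
set -eu

# Generate a container UUID (fall back to a timestamp off Linux).
UUID=$(cat /proc/sys/kernel/random/uuid 2>/dev/null || date +%s)
CDIR="${TMPDIR:-/tmp}/containers/$UUID"

# Create the container filesystem and the stage 1 / stage 2 directories.
mkdir -p "$CDIR/stage1" "$CDIR/stage2"

# Write a minimal container runtime manifest.
printf '{"uuid": "%s"}\n' "$UUID" > "$CDIR/container-runtime-manifest.json"

# The remaining steps (copying the stage1 binary, fetching and unpacking
# ACIs into stage2) would be more shell along the same lines, e.g.:
#   cp /usr/lib/rkt/stage1 "$CDIR/stage1/"
#   curl -L "$ACI_URL" | tar -xz -C "$CDIR/stage2/app"
echo "$CDIR"
```

Which arguably supports the "why not a shell script" question, though a real implementation also needs signature verification, image caching, and error handling, which shell handles poorly.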

reply

make3 3 hours ago | link

Core OS is also the name of Apple's operating-systems department. Feels weird to read it around here.

reply

gfunk911 4 hours ago | link

Any plans to "support" Dockerfiles in any way?

reply

lgas 8 hours ago | link

Thank god.

reply

api 8 hours ago | link

This is how Linux fragments, and ultimately dies as the Linux we know.

I'm not really making a value judgement, just an observation.

reply

loudmax 7 hours ago | link

Fragments? Certainly. Dies? Linux has been fragmented from its inception. If you include the world's Android phones, Linux probably runs on more computers than any other kernel or OS. Rocket will not kill Linux, containers, or docker. In the worst case, it will kill CoreOS, and even that's unlikely.

reply

SEJeff 5 hours ago | link

Likely not even close. Just about every washing machine, refrigerator, microwave, digital stove, etc., runs a variant of an open-source operating system called TRON, or the more common ITRON variant.

http://en.wikipedia.org/wiki/TRON_project

TRON has been around since the mid-'80s, I believe, while Linux was first released in the early '90s.

reply

digitalzombie 2 hours ago | link

I dunno if Linux fragmenting is causing Linux to die... the BSDs have many fragments and they're doing fine.

I think competition is good; this will give us an option that's not monolithic.

I didn't realize Docker's direction was to encompass orchestration until this thread. That isn't something I want to use Docker for, and I'm glad the competition is addressing security, where there is a real need.

And with a rival available, I'm happy to choose Rocket as an option once it's stabilized.

reply

krschultz 8 hours ago | link

There were many VM engines; now there are a few. I imagine the same thing will happen with container technology. Generally one technology stands out, then things fragment, then things coalesce into a tiny handful of solid solutions.

Docker may or may not be the container engine that lasts a long time. There is a reason they raised a bunch of money. Clearly containers are going to be big, but is Docker the one that goes on to be dominant? Docker is trying through building features & biz dev, but it's far from over.

reply

gtaylor 8 hours ago | link

That may be a little overly dramatic. There have been failed fragments/forks/derivatives, but there have also been some sweeping successes.

It's too early to foretell the fate of Rocket. Containers are getting lots of attention, so I'm actually pretty happy to look at this as a potentially rewarding experiment. Worst case, it fails and we keep using Docker (or whatever else springs up).

reply

tree_of_item 8 hours ago | link

How does additional container management software fragment GNU/Linux?

reply

api 6 hours ago | link

Think a few steps ahead.

These aren't really containers. They're giant statically linked binaries, more or less. The actual operating system is now just a VM host for running containerized giant WIMPs (weakly interacting massive programs). Fast-forward a few years and the host can wither and die and be replaced with a proprietary or custom/fragmented management layer. Linux survives only as an internal pseudo-OS within each mega-binary "container."

Edit: what I was really getting at was that these technologies are patches for the inadequacy of the OS. The fact that we need containers at all stems from the difficulty of managing software installations, configuration, etc on the actual operating system.

reply

codecraig 2 hours ago | link

I'm excited about the competition, but unfortunately the post seems a bit confused in its message.

On one hand, it talks about the original Docker manifesto and laments that it was later removed, treating the removal as a "bad" thing. On the other, it criticizes Docker for not being simple because of plans to add more and more features to it.

Including a "wide range of functions: building images, running images, uploading, downloading, and eventually even overlay networking, all compiled into one monolithic binary running primarily as root on your server". However, in the original (since-removed) manifesto, Docker announced/claimed those very features would/should exist: https://github.com/docker/docker/commit/0db56e6c519b19ec16c6....

Competition is good, but this was a bit of a weak first appearance.

reply



