
That looks very neat! Let me see if we can integrate it with Dagger :)

If I may offer a different perspective: Dagger has always been a general-purpose composition engine, built on container tech. Its most successful use case is CI - specifically taking complex build and test environments, and making them more portable and reproducible. But we never claimed to replace Jenkins or any other CI platform, and we've always been very open with our community about our desire to expand the use of Dagger beyond CI. We also never claimed to replace Docker, or to "be a shell" (note that the title of this HN page doesn't reflect the title of our post in that regard).

Every feature we ship is carefully designed for consistency with the overall design. For example, Dagger Shell is built on the same Dagger Engine that we've been steadily improving for years. It's just another client. Our goal is to build a platform that feels like Lego: each new piece makes all other pieces more useful, because they all can be composed together into a consistent system.


It's a fair point - my opinions and use case are my own, and I didn't mean to imply there were promises not kept. The Dagger team has been nothing but supportive, and I do think they've built a great community.

That said, in the early days it was definitely pitched for CI/CD - and this is how we've implemented it.

> What is it?
> Programmable: develop your CI/CD pipelines as code, in the same programming language as your application.

> Who is it for?
> A developer wishing your CI pipelines were code instead of YAML

https://github.com/dagger/dagger/blob/0620b658242fdf62c872c6...

Edit: This functionality/interaction with the Dagger Engine still exists today, and is what we rely on. The original comment is more of an observation on the new directions the project has taken since then.


Yes, it's a fair observation. In terms of use cases, we did focus exclusively on CI/CD, and only recently expanded our marketing to other use cases like AI agents. It's understandable that this expansion can be surprising; we're trying to explain it as clearly as possible, and it's a work in progress.

I just wanted to clarify that in terms of product design and engineering, there is unwavering focus and continuity. Everything we build is carefully designed to fit with the rest. We are emphatically not throwing unrelated products at the wall to see what sticks.

For example, I saw your comment elsewhere about the LLM type not belonging in the core. That's a legitimate concern that we debated ourselves. In the end we think there is a good design reason to make it core; we may be wrong, but the point is that we take those kinds of design decisions seriously and we take all use cases into account when we make them.


Someone already built a web UI for composing Dagger Shell scripts in a notebook format: http://docs.runme.dev/guide/dagger

It's really neat, I recommend checking it out.


Woah, that actually looks really nifty.

I am imagining this with a simple Cloudflare tunnel and self-hosted GitLab, and I am really seeing an open source way that developers can REALLY scale.

I mean, Docker is really great, but Dagger in a notebook format just seems really cool, ngl.


Love the enthusiasm. Co-creator of Runme here.

The best part of the Dagger + Runme combo is that it runs entirely locally. This isn't just a huge win for portability; it also cuts down development cycle times significantly.


It's the same product. Dagger Shell is a new client for the same Dagger Engine and its API. It allows you to do more from the command-line, before you have to switch to code. But the SDKs and "pipeline as code" are still there, unchanged.
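
To make the "just another client" point concrete, here's a minimal sketch (assuming the CLI exposes a `-c` flag for one-off shell commands; the exact invocation may differ). The pipeline below is translated into the same engine API calls the SDKs use from code:

  # a one-off pipeline sent to the engine; the SDKs drive the same API from code
  dagger -c 'container | from alpine | with-exec echo "same engine, different client" | stdout'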

Yes you can.

Dagger is built on the same underlying tech as docker build (buildkit). So the compatibility bridge is not a re-implementation of Dockerfile, it's literally the official upstream implementation.

Here's an example that 1) fetches a random git repo, 2) builds from its Dockerfile, 3) opens an interactive terminal to look inside, and 4) publishes to a registry once you exit the terminal:

  git https://github.com/goreleaser/goreleaser |
  head |
  tree |
  docker-build |
  terminal |
  publish ttl.sh/goreleaser-example-image

> 3) opens an interactive terminal to look inside 4) publish to a registry once you exit the terminal

It seems like it would be good to be able to prevent the pipeline from publishing the image, if the inspection with 'terminal' shows there's something wrong (with e.g. 'exit 1'). I looked a little bit into the help system, and it doesn't seem that there's a way from inside the 'terminal' function to signal that the pipeline should stop. Semantics like bash's "set -e -o pipefail" might help here.

with-exec lets you specify that you want a command to succeed, e.g.:

  container | from alpine | with-exec --expect SUCCESS /bin/false | publish ...

If you try that, the pipeline will stop before publishing the image.

Huh, you're right, I would expect `exit 1` in the terminal to abort the rest of the pipeline.

By the way, in your example: `--expect SUCCESS` is the default behavior of with-exec, so you can simplify your pipeline to:

  container | from alpine | with-exec /bin/false | publish ...

Thank you! Would you be willing to open an issue on our GitHub repo? If not, I will take care of it.

If that docker-build command fails, will the interactive debugger do something sensible?

Assuming you use `dagger --interactive`, the debugger will kick in so you can inspect the failed state.

In the specific case of Dockerfile compatibility, I don't actually know if it will be smart enough to drop you into the exact intermediate state that the Docker build failed in, or if it atomically reverts the whole 'docker build' operation.
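
For illustration, a hedged sketch reusing the goreleaser example from above (assuming the global `--interactive` flag combines with a one-off `-c` invocation):

  # if docker-build fails, the debugger drops you into a terminal on the failed state
  dagger --interactive -c 'git https://github.com/goreleaser/goreleaser | head | tree | docker-build'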


You're right, not sure what's going on there. Escalating internally.

Dagger is fully declarative. It's just built on a dynamic declarative API, instead of a static declarative DSL.

So, if you took Nix, replaced the static Scheme-like DSL with a proper API, built SDKs in 5 languages for that API, and then also built a bash-like shell for easy scripting, you would start to have something that approximates Dagger.
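
To make the "dynamic declarative" part concrete, here's a small sketch using the same core functions shown elsewhere in this thread: each step declares a node in a build graph, nothing executes until a concrete value like stdout is requested, and identical sub-graphs are cached by the engine.

  # declares a graph of operations; evaluation is lazy, cached, and runs in the engine
  container | from alpine | with-exec apk add git | with-exec git version | stdout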


I stand corrected! I shall investigate more closely with this additional context. Thanks.

Thanks for giving it another chance! We need to get better at explaining all this, it's a lot to unpack, although prior experience with declarative systems like Nix or Bazel does help a lot :)

We have a very active discord, feel free to come by and ask all the tough questions!


In theory you could replace much of Docker's low-level tooling with Dagger. In practice, that's not what we're trying to do. There's a lot of inertia to ripping out existing tools, and even if the replacement is better, it may still not be worth the effort.

What we're focusing on is green field application of container tech. Things that should be containerized, but aren't. For example:

- Cross-platform builds

- Complex integration testing environments

- Data processing pipelines

- AI agent workflows (where you give a LLM the ability to perform tasks in a controlled environment)

In those kinds of workflows, there is no dominant tool to replace - Docker or otherwise. Everyone builds their own unique snowflake of a monolith, by gluing together a dozen tools. Dagger aims to replace that glue with a modular, composable system.
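
Taking the cross-platform build item as an example, here's a sketch of what some of that glue could collapse into (assuming the shell exposes the API's platform parameter as a flag, and that the host has emulation set up for foreign architectures):

  # request an arm64 build environment, regardless of the host architecture
  container --platform=linux/arm64 | from golang:1.22 | with-exec go version | stdout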


this is helpful. and i appreciate the intellectual/technological humility.

i think there's a decent chance we end up giving Dagger a spin this year.


Does it replace Kabuki in non-privileged CI builds? Can one exchange lower / independent layers like with nix container builds?

> Does it replace Kabuki in non-privileged CI builds?

I have never heard of Kabuki, and couldn't find it in a quick web search. Did you mean Kaniko?

> Can one exchange lower / independent layers like with nix container builds?

Yes.


How does that work? Are you building special adaptors that understand how package manager metadata files work? A deb package's postinstall is basically just a blob of bash running as root with access to the whole filesystem. If I tell dagger to install three packages in three different steps, those are not trivially reorderable operations. Nor are they reproducible at all, without Dagger having additional insight into the state of the local apt cache and what is offered by the remote apt repo at the moment of execution.

Yes, Dagger provides the core system API that allows creating and combining filesystem layers in any order you want. However it doesn't provide compatibility with existing package managers: that's something that you, or someone else in the Dagger community, would implement. You can extend the Dagger API with your own types and functions, then share them as reusable modules.

For example, here's a Dagger module for building Alpine Linux containers in an order-independent way. Under the hood it's derived from the 'apko' build tool created by the folks at Chainguard. https://daggerverse.dev/mod/github.com/dagger/dagger/modules...

And here's a Dagger Shell command for building a container using that module:

  github.com/dagger/dagger/modules/alpine | container --packages=git,openssh,curl | terminal

You mentioned deb packages. Your intuition is correct, Dagger doesn't magically make .deb or .rpm packages reorderable, since those packaging systems are designed in a way that makes it difficult. But it does provide the primitives, and a cross-language programming environment for creating the "adaptors" you described in a way that maximizes reuse. If someone does for deb what Chainguard did for apk, you can trivially integrate that into a Dagger module.
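
To give a feel for those primitives, here's a sketch using core API functions (the directory contents are made up): filesystem state is just a value you can construct separately and overlay in whatever order you choose.

  # build two independent layers, then overlay them onto a base image in either order
  container | from alpine |
  with-directory /opt/tool-a $(directory | with-new-file VERSION "1.0.0") |
  with-directory /opt/tool-b $(directory | with-new-file VERSION "2.0.0") |
  with-exec cat /opt/tool-a/VERSION /opt/tool-b/VERSION |
  stdout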

Neat! Okay, yes, I can definitely see the value here, particularly if all or most of an image is otherwise able to be expressed in terms of things that are deterministic, such as language ecosystem package managers with lockfiles.

It seems like you're cutting an interesting track on providing a better way than Dockerfiles with their imperative layers, but pragmatic enough to not insist on taking over the whole world like Bazel or Nix.


Yes, Kaniko (my mobile keyboard did this weird autocorrect, sorry)

OK, just making sure :)

To answer your original question then: no, Dagger cannot fully replace Kaniko, because Dagger requires container execution privileges.


Hi Ben! You should definitely not use Dagger Shell as your default shell. It's meant to complement it rather than replace it.

From the post:

> Dagger Shell isn’t meant to replace your system shell, but to complement it. When a workflow is too complex to fit in a regular shell, the next available option is often a brittle monolith: not as simple as a shell script, not as robust as full-blown software. The Dagger Shell aspires to help you replace that monolith with a collection of simple modules, composed with standard interfaces.
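
As a toy example of that kind of composition, here's a sketch reusing the Alpine module mentioned elsewhere in this thread: a published module, core API calls, and an interactive step chained into one pipeline instead of a bespoke script.

  # module from the Daggerverse + core API calls, composed in one pipeline
  github.com/dagger/dagger/modules/alpine | container --packages=git,curl |
  with-exec git version |
  terminal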


So part of the idea would be to keep my secrets out of my default shell? I’ve been looking for ways to do that but I would want specific commands to have access to them. For instance I would just want git and gh to have access to my GitHub credentials and for another program not to be able to spawn git or gh with these credentials. It would also be important not to be able to accidentally run one of these by copy and pasting something. It seems that it would need to at least be partly taken care of by my default shell for it to be usable though.

Hmm. Sounds like an interesting problem.

But how can you really differentiate between a user opening git and some other program running git?

I think we would need friction there, some sort of manual intervention.

The best I could think of was something like a Bitwarden/KeePassXC-style CLI that requires a password and would just straight up copy that token into GitHub.

If you have the source code and you want end-to-end security, you could theoretically also compile git with the implementation of whatever encrypted password manager you use built directly into git/gh, but I know that can be overkill for this problem.


What I’d want is to be able to run top-level commands differently from other commands, and to have git and gh be wrappers that inject the permissions. They could also filter the arguments and environment variables, which I know is hard to get right. Subshells and other programs would be able to run git and gh, but not with those permissions.

I could even run git and gh in a container that has a volume to be able to access the directory.

I think I have an idea of what this could look like, and I might try to prototype it with fish and see what code paths it goes down to gauge how secure it’s likely to be.


Docker introduced an ambiguity in the meaning of the word "container". The word existed before Docker, and it was about sandboxing. Docker introduced the analogy of the shipping container, which as ranger207 says, is about sandboxing at the service of distribution.

The two meanings - sandboxing and distribution - have coexisted ever since, sometimes causing misunderstandings and frustration.

