Hacker News | mitchellh's comments

It works on both macOS and Linux.

<3 This has been a work of passion for the past two years of my life (off and on). I hope anyone who uses this can feel the love and care I put into this, and subsequently the amazing private beta community (all ~5,000 strong!) that helped improve and polish this into a better release than I ever could alone.

Ghostty got a lot of hype (I cover this in my reflection below), but I want to make sure I call out that there is a good group of EXCELLENT terminals out there, and I'm not claiming Ghostty is strictly better than any of them. Ghostty has different design goals and tradeoffs and if it's right for you great, but if not, you have so many good choices.

Shout out to Kitty, WezTerm, Foot in particular. iTerm2 gets some hate for being relatively slow but nothing comes close to touching it in terms of feature count. Rio is a super cool newer terminal, too. The world of terminals is great.

I’ve posted a personal reflection here, which has a bit more history on why I started this, what’s next, and some of the takeaways from the past two years. https://mitchellh.com/writing/ghostty-1-0-reflection


Looks really awesome. I'm going to sound like I don't belong in the hipster terminal club, but the reason I shied away from some of the other terminals is the lack of tabs, which it looks like yours has, from a quick Google search. (If WezTerm and the like have them, I must have missed it, or it wasn't apparent in the settings how to achieve them.)

I know everyone will say tmux and/or native multiplexing, but I'm kind of old school and only use screen remotely if needed, and I just like a lot of terminal tabs in my workflow, with a quick mod+left/right arrow to navigate between them (and if native multiplexing in Ghostty is simple and easy, I'd probably do some of that too). Perhaps this is why I've never left iTerm2.


Wezterm does have tabs, and their related keyboard shortcuts are configurable.

See https://wezfurlong.org/wezterm/config/lua/keyassignment/Spaw... for a starting point in the config.


Thanks!

I also use tmux, but I love the native tabs of Konsole in KDE. I have Shift-Arrow configured to move between them; it's far more comfortable than the dual shortcuts tmux needs: Ctrl-B to get tmux's attention, then l (if I remember correctly) to get to the last window.
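For what it's worth, that single-chord style can be had in tmux itself. A sketch for `~/.tmux.conf` (the `-n` flag makes a binding work without the Ctrl-B prefix):

```
# Switch windows with bare Shift-Arrow, no prefix chord needed
bind -n S-Left  previous-window
bind -n S-Right next-window
```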

Konsole also has easy resizing of text and supports images in the console, you might like it.


We've got native tabs and splits on both macOS and Linux. :)

WezTerm has tabs but they're not native UI elements.


Right, it appears I was admittedly too lazy to dig far enough into wezterm. I was looking for the button to click, I guess.

Quick correction: I currently use Wezterm on Linux and it has tabs. Alacritty does not for developer philosophical reasons.

Looking forward to checking out Ghostty.


Wezterm has tabs right out of the box and they are fully customizable, though I prefer tmux since I'd rather not have my session extinguished if I accidentally close the terminal :D

WezTerm shines in ease and breadth of configurability due to using lua, so it's simple to have the theme change between light/dark depending on host OS theme.
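That light/dark switching is a documented WezTerm pattern built on `wezterm.gui.get_appearance()`; a minimal sketch of it (the scheme names are just placeholders):

```lua
local wezterm = require 'wezterm'

-- wezterm.gui is nil when the config is loaded headless, so guard it
local function appearance()
  if wezterm.gui then
    return wezterm.gui.get_appearance()
  end
  return 'Dark'
end

local config = {}
if appearance():find 'Dark' then
  config.color_scheme = 'Builtin Dark'
else
  config.color_scheme = 'Builtin Light'
end
return config
```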


Interestingly there’s another comment ITT complaining that they need to use a programming language for configuring wezterm :)

As a wezterm user I’ll admit that configuring it was mildly annoying to start, but ended up feeling like an accomplishment. A few years in, it’s just another annoying program I have to re-remember how to use when I update twice a year.


Yeah. I could get by with the default Linux terminal and tmux really. Tmux is just the best. Second to vim it’s the single most useful thing I’ve ever used.

Just make sure not to get caught in the pitfall of optimizing only for maximum render speed, which can mean giving up efficiency on slow and partial renders.

Missing damage tracking, always painting everything (even when the window covers a full 4K monitor), etc. kills performance and input latency when dealing with realistic redraw workloads like text editing, blinking cursors, and progress bars. Much too often, terminals worry only about the performance of `cat /dev/urandom`...
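As a sketch of what damage tracking means in practice (simplified here to per-row dirty flags; real renderers track finer-grained regions):

```c
#include <stdbool.h>
#include <stddef.h>

#define GRID_ROWS 4

// Per-row damage flags: repaint only what changed since the last frame
// instead of redrawing the whole surface every vsync.
typedef struct {
    bool dirty[GRID_ROWS];
} grid_t;

// Mark a row as needing repaint (e.g. the cursor blinked on this row).
void grid_damage(grid_t *g, size_t row) {
    g->dirty[row] = true;
}

// Repaint damaged rows only; returns how many rows were actually drawn.
size_t grid_render(grid_t *g) {
    size_t drawn = 0;
    for (size_t row = 0; row < GRID_ROWS; row++) {
        if (g->dirty[row]) {
            // draw_row(g, row) would rasterize here in a real renderer
            g->dirty[row] = false;
            drawn++;
        }
    }
    return drawn;
}
```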


> kills performance

And battery.

I gave up on alacritty because it was always using the dedicated graphics card of my MacBook and there was no way to use the integrated graphics card because it was “low performance”.


- Ghostty does vsync by default and supports variable refresh rates (DisplayLink). If you're on battery and macOS wants to slow Ghostty down, it can and we respect it.

- Ghostty picks your integrated GPU over dedicated/external

- Ghostty sets non-focused rendering threads as QoS background to go onto E-cores

- Ghostty slows down rendering significantly if the window is obscured completely (not visible)

No idea if Alacritty does this, I'm not commenting about that. They might! I'm just talking from the Ghostty side.


That's a great approach.

Not sure on the current state of Alacritty, but a few years back the suggested solution for users interested in battery performance was to switch to a different terminal emulator: https://github.com/alacritty/alacritty/issues/3473#issuecomm...


Right, input latency is what matters for me. I'm not seeing whether they've measured that in the docs/on Github.

He mentions input latency[1] as one of four aspects of being fast that were considered during development. I’m not aware of how that was tested, but would trust that it outperforms iTerm2 in that regard.

[1] https://www.youtube.com/watch?v=cPaGkEesw20&t=3015s


For those interested, the link below from Mitchell's blog explains the different goals he's trying to reach, compared to other terminal emulators.

https://mitchellh.com/writing/ghostty-is-coming


> Ghostty has different design goals and tradeoffs and if it's right for you great, but if not, you have so many good choices.

I was looking on the website to try and understand this more but couldn’t find any information. Perhaps I missed it?



I think you've done an excellent job running the community for Ghostty and it is a prime example of how to do it right. From the Discord to Github repos you've been a class act all the way through and have pushed folks to be good, civil internet denizens. Much respect.

If anyone cares to search through Github, they will see loads and loads of Issues and PRs created by Mitchell in many of the related Open Source projects that Ghostty uses/references. From zig to kitty to supporting libraries, Mitchell has been trying to get the terminal community working together and have some sort of standards. A lot of them are like "X does this, Y does that, why are you doing it this way? Can we all do it this way?" and then having Ghostty follow the most reasonable solution (or supporting several!).


Congrats Mitchell! It has been really cool to see Ghostty progress as a project, and I've enjoyed having it as my daily driver these past few years :)

Thanks so much for this. I really enjoy using it and I also refer to the source code quite a bit as I'm trying to get more familiar with Zig :)

Alacritty user on macOS and Linux here (Windows Terminal on Windows, since it makes different shells easily available; formerly iTerm2 on macOS). I make up for the lack of tabs with zellij locally (tmux remotely), which also lets me relog or close/update Alacritty. I will give Ghostty a whirl, but why no shout out to Alacritty? Which features am I missing out on?

I'm a fan of your work! I'm curious about how you decided to work on building a terminal for your next project among your other ideas. If you have time later, could you share your main motivation with us or link to an existing post if you already mentioned it elsewhere?

I've been a beta tester from very early on. I came for the performance but stayed for the stability. I've only had a rare few crashes and all but one was a duplicate in the bug tracker.

I thought I needed search, but as Mitchell put it, it's not a 1.0 feature. Ripgrep was always the answer.

Very happy to share the ghostty experience with the world!


Looks cool man.

Does it have a way to do tabs, and split the terminal vertically and horizontally? Those are the only features keeping me on Terminator.

I've tried tmux, but it isn't the same, so please, commenters, refrain from suggesting it.


Yes we have both as native UI elements.

Thank you for building this! I’ve loved using this over the last two months or so and really appreciate the work you’ve put into it.

I’ve been a very happy iTerm2 user and support the dev on GitHub Sponsors (and I’ll continue to do that), but I love your commitment to making a fast, native app (and cross platform, no less) and really appreciate this very obvious labor of love that has also been really interesting to watch from afar as the development has progressed!


Shoutout to you, sir, for shouting out the other terminals. It’d be easy for someone of your fame, talent, and history to ride the hype to the GOAT of all terminals. But you stayed humble. Props.

Thank you for making this! I've been waiting to use it for quite some time. Really happy to take it for a spin.

No mention of Cool Retro Term!?!? Typical elitist behavior... /s

I'm just having a bit of fun, but it is a fun terminal every once in a while. https://github.com/Swordfish90/cool-retro-term


I LOVE Cool Retro Term! I mean, every once in a while. But if I’m sharing it in Zoom, I’m damn sure re-opening my tmux session in CRT.

Wow that's great

When people find out I use jj (Jujutsu), I often get asked some version of "how's it better than Git?" And while I can list a number of reasons why I think it's better, and you could argue whether each of those reasons is contrived, I think that's all missing the point.

I think it's better -- in the most pessimistic case -- to look at jj as reframing how you think about branches and commits in the same way that learning a type of Lisp reframes your thinking even if you're a full time Python developer and have zero intention of ever using a Lisp.

The idea of shuffling commits around without fear, changing your working train of thought mid-branch, etc. is natural... mindless, even. It's one command away and you get so much muscle memory executing that command you just do it automatically. (There's no fear because `jj undo` undoes any operation you did if you regret it. Of course, there are ways to undo N operations back and so on too.)
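Concretely, that mindless loop looks something like this (a sketch using jj's actual subcommands; the message and destination are made up):

```
$ jj describe -m "wip: parser refactor"  # reword the current change in place
$ jj rebase -d main                      # move it; conflicts never block you
$ jj undo                                # regret any of the above? one command
```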

I use jj full time now, but even when I periodically go back to using git (for older projects I don't have a jj clone out for), it has altered the way I look at my stream of work. I think there's value in that.

That's the pessimistic case. The optimistic case is you should be using jj because it's better and there's almost zero downside to doing it (your coworkers don't even need to know).

(This blog post was great; I just expect, and already see, some people focusing on the minutiae of how to Git-golf your way to achieving the same thing easily, when that doesn't invalidate that jj is good, in my opinion.)


> I think it's better -- in the most pessimistic case -- to look at jj as reframing how you think about branches and commits in the same way that learning a type of Lisp reframes your thinking even if you're a full time Python developer and have zero intention of ever using a Lisp.

For some reason this actually had me pause in fear. I've been using git deeply for years and find it very natural and mindless to use at this point. The thought that I might again be trapped in Plato's cave like I found I had been prior to learning Common Lisp is actually disturbing...

Now I don't have a choice but to give jj a shot


Bruce Lee has that quote: “I fear not the man who has practiced 10,000 kicks once, but I fear the man who has practiced one kick 10,000 times.”

If Lisp’s “one kick” is the cons list, then jj’s is the commit: by using them for everything (they replace, at least, the git stash, index, and working copy) you actually get really fluent in manipulating them and they become more powerful than special-cased tools.


I loved git for a long time. I never understood the folks that said git’s UX was too hard.

I’ll never go back after jj. Be warned, haha.


Can’t wait to hear your experience. I felt the same way and I’m a convert now.


I didn’t know you were a jj fan! I’ve been a convert for a long time now. I fully agree with what you’ve said here.


I am! I think I heard about it through you somehow, actually. And then I kept getting PRs from people I respected with weird branch names and thought "what the hell is going on" and both of these things pushed me to look into it.

I switched cold turkey in one afternoon after reading your tutorial in about 30 minutes and never touched Git ever again (except in the very rare cases noted in my previous post). And also... bisect.


That’s awesome, it’s such a small world.


This subthread has convinced me to try it, I'm going through your tutorial now, thanks!


Nice! I’m working on a second version that reads very differently, it’s taking me a while though. Here’s the opening: https://gist.github.com/steveklabnik/53b51724920dac76fc623d9...


Thanks for your efforts. Tutorials like this are a must-have on this new frontier (for many of us).


The crazy thing to me, having made the change, is how utterly fearless I am doing long chains of operations that I would have double- or triple-checked with git. Reordering commits, fixing an earlier commit, even doing these things with multiple unmerged ancestor branches… are all trivial. I don't even have to think about it. There's no case where I'm dumped into a conflict state that must be resolved right here and right now (and where I don't even get to use any of the tools in my VCS until I fix it).

It's so fucking freeing.


What do you mean by "jj clone"? I assumed it was possible to start using jj on an existing Git repository, and continue using Git (and jj) afterwards. Isn't this the case?


It is. I assume they just mean an existing repo that hasn't had jj bootstrapped in it. Which is trivial, but maybe you just don't want to do it for whatever reason.


This is my project.

I use it as the core cross-platform event loop layer for my terminal (https://mitchellh.com/ghostty). I still consider libxev an early, unstable project, but the terminal has been in daily use by hundreds (now over a thousand) of beta testers for over a year, so at least for that use case it's very stable. :) I know of others using it in production shipped software, but use it at your own risk.

As background, my terminal previously used libuv (the Node.js core event loop library), and I think libuv is a great project! I still have those Zig bindings available (archived) if anyone is interested: https://github.com/mitchellh/zig-libuv

The main issue I had personally with libuv was that I was noticing performance jitter due to heap allocations. libxev's main design goal was to be allocation-free, and it is. The caller is responsible for allocating all the memory libxev needs (however it decides to do that!) and passing it to libxev. There were some additional things I wanted: more direct access to mach ports on macOS, io_uring on Linux (although I think libuv can use io_uring now), etc. But more carefully controlling memory allocation was the big one.
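To illustrate the caller-allocates idea (a hypothetical sketch of the pattern, not libxev's actual API): the application owns each operation's memory, and submitting it just splices a node into an intrusive list, so the loop itself never touches the heap.

```c
#include <stddef.h>

// The caller allocates each completion however it likes (stack, arena,
// pool...) and hands it to the loop; the loop only links it in.
typedef struct completion {
    struct completion *next;               // intrusive link owned by the loop
    int fd;                                // the fd this operation waits on
    void (*callback)(struct completion *c);
} completion_t;

typedef struct {
    completion_t *pending;                 // head of submitted operations
} loop_t;

// O(1) submit with zero allocations: splice the caller's node onto the list.
void loop_submit(loop_t *loop, completion_t *c) {
    c->next = loop->pending;
    loop->pending = c;
}
```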

And it worked! Under heavy IO load in my terminal project, p90 performance roughly matched libuv but my p99 performance was much, much better. Like, 10x or more better. I don't have those numbers in front of me anymore to back that up and my terminal project hasn't built with libuv in a very long time. But I consider the project a success for my use case.

You're probably better off using libuv (i.e. the Node loop, not my project) for your own project. But the main takeaway I'd give people is: don't be afraid to reimplement this kind of stuff yourself. A purpose-built event loop isn't that complicated, and if your software isn't even cross-platform, it's really not complicated.


> libxev's main design goal was to be allocation-free

Maybe "allocation-free" should be in the GitHub project description instead of or in addition to "high performance".


It is

> Zero runtime allocations. This helps make runtime performance more predictable and makes libxev well suited for embedded environments.


To be clear, I am discussing the text under "About" in the top right, labeled as "Description" when edited, which currently states:

> libxev is a cross-platform, high-performance event loop that provides abstractions for non-blocking IO, timers, events, and more and works on Linux (io_uring or epoll), macOS (kqueue), and Wasm + WASI. Available as both a Zig and C API.

... with no mention of zero-allocation though yes it is mentioned later as a feature in the README.


Very nice! TBH, libuv sometimes felt like it was popular because it's popular, rather than for sheer technical prowess. I was never comfortable with how much allocation it does, and I don't always find how it deals with platform primitives as useful as I'd like.

> don't be afraid to reimplement this kind of stuff yourself. A purpose-built event loop isn't that complicated,

Amen. There's no need to view the event loop as mysterious. It's just a while loop that is constantly coordinating IO.
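To make that concrete, here is a minimal single-watcher turn of such a loop, sketched around poll(2) (illustrative only, not libuv's or libxev's API):

```c
#include <poll.h>
#include <stdbool.h>
#include <unistd.h>

// One registered watcher: an fd plus the callback to run when it's readable.
typedef struct {
    int fd;
    void (*on_readable)(int fd, void *userdata);
    void *userdata;
} watcher_t;

// Example callback: read one byte and store it through userdata.
static void read_one(int fd, void *userdata) {
    char c;
    if (read(fd, &c, 1) == 1)
        *(char *)userdata = c;
}

// A single turn of the loop: block until the fd is ready, then dispatch.
// A real loop wraps this in a `while`, tracking many watchers, timers,
// and a stop flag -- exactly the "coordinating IO" described above.
bool loop_run_once(watcher_t *w, int timeout_ms) {
    struct pollfd pfd = { .fd = w->fd, .events = POLLIN };
    if (poll(&pfd, 1, timeout_ms) > 0 && (pfd.revents & POLLIN)) {
        w->on_readable(w->fd, w->userdata);
        return true;
    }
    return false;
}
```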


Thank you for sharing.

What do you think are the next steps for a next generation event loop?

I've been experimenting with barriers/phasers, LMAX Disruptors and my own lock free algorithms.

I think some form of multithreaded structured concurrency with coroutines and io_uring.

I've been experimenting with making sends and receives independently parallel with multiple io_urings ("split parallel IO"), so you can process incoming traffic separately from the stream that generates data to send. Generating sends is not blocked by receive parsing, and vice versa.

Interested in seastar and reactors.



Is DPDK still needed after io_uring? io_uring can also do zero-copy packet processing.

edit: there is this thesis https://liu.diva-portal.org/smash/get/diva2:1789103/FULLTEXT...

On 5.1.5 Summary of Benchmarking Results (page 44)

> Of the three different applications and frameworks, DPDK performs best in all aspects concerning throughput, packet loss, packet rate, and latency. The fastest throughput of DPDK was measured at about 25 Gbit/s and the highest packet rate was measured at about 9 million. The packet loss for DPDK stays under 10% most of the time, but for packet sizes 64 bytes and 128 bytes, and for transmission rates of 32% and over, the packet loss reaches a maximum of 60%. Latency stays at around 12 μs for all sizes and transmission rates under 32% and reaches a maximum latency of 1 ms for packets of size 1518 bytes with transmission rates of 64% and above.

> Based on these results, it was determined that DPDK can optimally handle transmission rates up to around 64 bytes, above rate 64% performance increases are non-existent while packet loss and latency increase.

> io_uring had a maximum throughput of 5.0 Gbit/s and was achieved at a transmission rate of 16% or higher when the packet size was 1518 bytes. The packet loss was significant, especially for transmission rates over 16%, and when packet size was below 1280 bytes. Generally, the packet loss decreased when packet sizes increased for all different transmission rates. The packet rate reached a maximum of approximately 460,000 packets per second. For higher transmission rates and for larger packet sizes, the packet rate decreased. This reached a minimum of around 40,000 packets per second for a transmission rate of 1%. The latency of io_uring is highest at size 1518 and transmission rate 100% with a latency of around 1.3 ms. For lower transmission rates under 64%, the latency decreases when packet size increase, reaching a minimum of around 20 to 30 μs.

> The results of running io_uring at different transmission rates show that io_uring reaches its best performance on our system at around transmission rate 16%. Above rate 16% there are no improvements in performance and latency and packet loss increase.

Ok, 25 Gbit/s vs 5 Gbit/s seems like a huge difference, especially since io_uring was seeing higher packet loss as well.



Thank you for that. Would be interesting to see benchmarks.


Three, I wanted an event loop library that could build to WebAssembly (both WASI and freestanding), and that didn't really fit well into the goals or API style of existing libraries without bringing in something super heavy like Emscripten.

This is a cool motivation!

Could you drop this into Node to make Nodeex? A kind of experimental allocation-free Node that somehow carves out the allocations into another layer (admittedly still within the Node C code)?


I saw ghostty and thought, “isn’t that the terminal written by the guy who cofounded hashicorp?”. I really enjoy your ghostty blog posts and will be checking out libxev!


(Off topic) but any chance you might include me in the ghostty private testers? (adonese@nil.sd)


Hi, this is my project.

The README only states that I use kqueue on macOS; I don't claim it is specific to macOS or originated there. I've read the README over a few times and can't find where you'd get the feeling that it's a macOS-only thing. If I can edit it in any way to make that clearer, let me know.

libxev is not currently compatible with BSD because macOS's kqueue API differs very slightly from the BSDs', enough to make it incompatible (i.e. I use mach ports a lot on macOS, but other parts of the syscall interface also vary slightly).


If you depend heavily on Mach ports I don't think "kqueue (macOS)" is an accurate description. That makes it sound like it has more of a chance to work on BSD than it does.


It is an accurate description. The mach ports are waited on through kqueue, and I use kqueue for all other waiters with "standard" fds (i.e. files). But my usage of mach ports (even for a partial use case) makes it incompatible with BSD, and even if I didn't use mach ports, the kqueue structures used by macOS are slightly different and incompatible anyway, and I don't claim BSD support anywhere.

It's splitting hairs and being a bit pedantic, but you also reordered my descriptions: in the README I always say "macOS (kqueue)" and not the reverse which you incorrectly quoted. I think that makes a small but tangible difference.


I did misread and misquote that. But when a remark is parenthetical, I guess I consider them equivalent. macOS and kqueue are not equivalent. Maybe "macOS (using kqueue and Mach ports)" would make it clearer?


No one but you is saying macOS and kqueue are equivalent. OP’s phrasing is perfectly fine.


This pattern is effectively how Vagrant (for anyone who remembers that) always worked, also in Ruby! I even gave a talk on it at MountainWest RubyConf back somewhere around 2013, although I compared it more to a "middleware" pattern. Even the API/DSL is almost identical.

The middleware pattern had a lot of the same concepts present in this post: we called context "state" and you could use special exceptions to halt or pause a middleware chain in the middle.

This was a really great way for over a decade (to this day!) to represent a long-running series of steps that individually may branch, fail, accumulate values, etc. I don't recall the exact count, but an old `vagrant up` used to execute something like 40 "actions" (as we called each step).

I'm not trying to disregard this blog post in any way, I'm only pointing out this pattern is indeed very useful and has been proven in production environments for a very long time!


Thanks Mitchell (big fan, btw!).

Indeed! https://github.com/hashicorp/vagrant/blob/main/lib/vagrant/a...

Yes, it's not a new pattern by any means, and there are many ways to "halt" the pipeline, as you say. For example, ActiveRecord stops callback chains if any callback throws ":halt".

Other examples are Redis.rb's pipelining API https://github.com/redis/redis-rb?tab=readme-ov-file#pipelin...

Or more generally any builder-style pattern that composes a set of operations for later execution, including again ActiveRecord's query chaining.

In my article I tried to show a specific implementation using the Railway pattern (where the result must only respond to "#continue?() => Boolean")


I've used Nix with Nix-Darwin on my macOS machines for a couple years now. The primary benefit is that when I get a new macOS machine, it's only three steps to having ALL my apps, configurations, etc. exactly as they were before:

1. Download Nix installer (I prefer the one by Determinate Systems)

2. Clone my Nix configs (public, so I don't even need secrets like keys yet)

3. Run `make`

I then wait around 15 minutes or so (mostly network time) and I'm good to go.

ONE BIG RECOMMENDATION: I don't like using nix-darwin to manage graphical apps directly; it's a bit awkward. But you can use nix-darwin to declaratively manage `homebrew`, so I still get my graphical apps from Homebrew. The linked article seems to suggest ditching Homebrew entirely, but I found it's best as a mix of both worlds, with Nix being the source of truth that manages everything else, including brew.
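For anyone curious what that looks like, here is a rough sketch of nix-darwin's homebrew module (option names from the nix-darwin manual; the cask list is a made-up example):

```nix
{
  homebrew = {
    enable = true;
    casks = [ "firefox" "rectangle" ];  # GUI apps still installed via brew
    onActivation.cleanup = "zap";       # uninstall anything not declared here
  };
}
```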

You can find my configurations on GitHub but note that if you're new to Nix you would do better finding a simpler starting point: https://github.com/mitchellh/nixos-config (these are shared Nix configurations between macOS, Linux, and WSL on Windows).


I've just migrated from a four-year-old MacBook to a top-of-the-line M3 Max. My previous MacBook had multiple kernel extensions, lots of customisation, homebrew apps, and such. I was convinced that I'd spend a significant amount of time setting it up again, so I'd made some bash scripts in preparation to reinstall my homebrew stuff etc.

So upon receiving my MacBook I used the migration assistant and ticked everything, and to my surprise I've spent less than an hour getting this new machine up to speed. It moved everything, including all my settings, apps, and the configurations for those apps. The only issues I've had are that docker didn't move across, and I've had to manually download universal binaries of a few Intel apps. Otherwise it has been completely flawless.

I don't see the value add for the use-case you describe of machine migrations when Apple's tool for this works so well.


I agree. I don't get a new computer more than once a year, at most. Setup time doesn't seem like something I should optimize for.


I would argue the benefit is also that it’s declarative, done forever, and your machine becomes relatively bulletproof.

Dev environment issues are a thing of the past, once you’ve defined your configuration.

If something is broken with a package, I don’t have to figure it out myself; I just roll back, wait for someone to fix it upstream in nixpkgs, and pull down the patch later.


At least in my opinion, the leverage here isn't about directly saving time in some hypothetical universe where you set up new devices every week.

It's about confidence in your ability to quickly bootstrap a productive system and the relative freedom/security that flow from knowing it.

When you know you can be productive this quickly without access to a backup or a working device, you have relative freedom and security from a decent spectrum of manufacturing defects, hardware failures, disasters, accidents, thieves, and so on.


> The primary benefit is that when I get a new macOS machine, it's only three steps to having ALL my apps, configurations, etc. exactly as they were before

This is the same for macOS? I’ve done that twice in the last couple of years and it was just a matter of letting the migration assistant run and letting Homebrew install the list of packages exported from the old system.

This is not to say anything negative about Nix, only that this particular point doesn’t seem like a big selling point for something I do every few years.


For single-user single-device scenarios, you probably won't find Nix much better than brew or apt. Declarative infrastructure isn't very necessary when you're the only person running your software. With multiple devices, you at least want a way to automate your build process. Bash scripts and Makefiles work fine, but they both fail silently and won't work consistently across devices, OSes or even architectures.

Nix smooths that out, which is great for a single user with multiple devices but even better on teams. Instead of coordinating devices individually, you can update Nix environments and push them out to everyone at once. The build environment is self-contained, isolated and updated silently alongside the rest of your repo. On larger teams, that saves a lot of configuration headache.


> these are shared Nix configurations between macOS, Linux, and WSL on Windows

This is the "killer app" of Nix for some people. I've got a lot of machines, spread out between a variety of usecases. With Nix, I can have individual module files for my desktop apps, my gaming software and my terminal configuration, then link them into the various machines that need it. Once it's fully set up, you can essentially manage the environment of dozens of different machines in a single Git repo.

Nix will frustrate people because it doesn't offer a lot of imperative or immediate solutions to things. If you can handle the learning curve though, you'll be hard-pressed to find a more comprehensive package management solution on any OS.


Despite the learning curve keeping the Nix community relatively small (it's actually grown a lot since 2021), nixpkgs has assimilated like 90% of OSS. Learning Nix is hard, but once you know enough Nix to be productive, it's a huge enabler.


How does Nix manage Homebrew? Haven’t heard of that. Does it make Homebrew declarative somehow?


Unclear at what level you're asking, but Mitchell's comment identified it as a nix-darwin feature, and you can see how he's using it in the config he linked at https://github.com/mitchellh/nixos-config/blob/b73a16fc918c5...


nix-darwin supports managing your Brewfile[1].

The docs[2] are very helpful!

[1]: https://github.com/Homebrew/homebrew-bundle

[2]: https://daiderd.com/nix-darwin/manual/index.html#opt-homebre...



What's the benefit vs a restore from backup step which also restores your data in addition to apps and configs (and avoids any network)?


In exchange for having taken the time to understand and declare exactly what you depend on to be productive (and nothing else), you get to start ~productively-fresh instead of living in a cargo cult of whatever unexamined arbitrary state and executables were present at your last backup.

It also means you aren't completely out of luck if you get lazy and don't test your backups for 6 months only to find out they stopped working after you ran some `blah blah update/upgrade` command.


This isn't veggies, so what's the actual benefit of freshness?

Also not clear on the backup risk - why would a disk snapshot stop working because you've updated with blah?


I guess having your config as a text file in a public repo instead of a multi GB private file can be nice.


A small nitpick: the other aircraft were doing _visual_ approaches, not VFR approaches. A visual approach is conducted under IFR regulations. Practically, this has no effect on your comment; just pointing it out in case it's interesting to you or others (if you didn't know this already).


Does this mean controllers still have a responsibility of separating aircraft under a visual approach? (A comment in a sibling thread mentioned that Lufthansa pilots are allowed visual approaches, but are not allowed to be responsible for visual separation at night.)

Edit: sounds like visual approach means ATC do not have responsibility for separation. I thought the entire point of IFR (which – according to you – visual approach falls under) was that ATC is responsible for separation!


ooo thanks. Too late to edit now but appreciate it! Not a pilot currently; just very interested and will probably get one in my lifetime.


We've gone full circle! I originally launched Vagrant here on HN in 2010, which was at the top of HN very briefly for the day. Now here I am 14 years later witnessing my departure post in that very same spot. A strange experience! Thanks for the support over the years. A lot of the initial community for the projects I helped start came from here.


Just the Link. It all started here. https://news.ycombinator.com/item?id=1175901

Vagrant: A tool for building and distributing virtual development environments (vagrantup.com) 129 points by mitchellh on March 8, 2010 | hide | past | favorite | 28 comments

I still remember reading that post on HN. And subsequently Vagrant took off. Can't believe it is nearly 14 years! Thank you Mitchell for everything, as I am (still) using Vagrant. First child is always going to be a hectic job beyond comprehension. Hopefully you will have more free time to play with Zig and maybe even Crystal once your child grows a little more. Best of luck.

Edit: I guess HN momentarily went down due to this announcement on front page.


Is there somewhere a list of the big stuff that launched on HN, e.g. this and Dropbox https://news.ycombinator.com/item?id=8863 .



Another big one off the top of my head is Coinbase

https://news.ycombinator.com/item?id=3754664


This is great to read now and reflect on where they eventually went in the next 13 years. The basic questions "why Chef" and "why Virtual Box" and so on with requests to support other hypervisors and provisioning tools. Now Packer and Terraform provision and deploy machines to damn near any platform using damn near any provisioning tool, but neither Mitchell himself nor the HashiCorp team in general had to learn all of those tools and platforms. Instead, they provided orchestration systems that allow for a common configuration language and execution model but delegate the logic of how to use the APIs of specific platforms and provisioners to plugins. Seed the ecosystem with plugins representing the most common platforms and toolchains your company is already familiar with and extend from there to stuff you either figure out later or let the community contribute.

This feels like open source and community creation at its best. It's why GNU/Linux systems did what they did. A bunch of professors and hackers tried to clean-room recreate Unix but never finished the kernel. Meanwhile, some grad student made a kernel but no userspace. Then some entirely different teams put these together along with a package manager, installer, and remote filesystems users could fetch ISOs and packages from, and finally you've got a usable system that didn't require a beast the size of Microsoft to do everything in-house. None of them could have done it alone.

It also makes the events of the past year kind of poignant. I see a lot of commenters talking about HashiCorp needing the license change to capture the value of what they created and not allow other companies to siphon it off. But isn't that the point of civilization? We all stand on the shoulders of giants. We generally don't want the first person to come up with an idea and their direct descendants to be the only people who ever profit off of that idea. That's aristocracy. Mitchell is a billionaire, isn't he? None of Linus Torvalds, Richard Stallman, or Ian Murdock ever became billionaires, but they weren't exactly starving in the street, either. Exactly how much value does a single person need to capture? It's the community we want to see thrive, isn't it? Not just our single company. Every employee of that company can work elsewhere if they need to and every investor has other investments. They aren't going to starve either if the company someday stops growing.

I get not wanting the Amazons and Googles of the world to take open source inputs and put them into proprietary sinkholes where further innovation gets stuck inside of a single company. But isn't that the point of the GPL? Anything they add they also have to give back. You don't need BUSL for that.


Congrats on what you've accomplished here. Building an industry-standard company and then carefully planning your exit on your own terms is a huge win.

I (selfishly) hope whatever is next is still hacker adjacent, bc your work has been a big inspiration to a lot of us. Best of luck to you!


> I (selfishly) hope whatever is next is still hacker adjacent, bc your work has been a big inspiration to a lot of us. Best of luck to you!

You should check out the terminal he's been working on codenamed Ghostty [0].

[0]: https://mitchellh.com/ghostty


Note: Ghostty is still a private project. I plan to open source it one day and share it with more people but for now this is a private personal project. If you are really interested in helping with the project, please feel free to email me, but no promises!


I remember getting my first commit into Packer, you review, approved and merged it. It was one of the best days of my (fairly early) coding life because I think your work is amazing and I was so happy to contribute something back.

Thanks for all the amazing work thus far!


You're one of the very few technical people that made it big that I continue to look up to. Congratulations on your achievements, and looking forward to whatever is in the pipeline.


Congrats on all your accomplishments Mitchell and looking forward to what you'll create next.


Well deserved, Mitchell! Thank you for Vagrant, Packer, Consul, Vault and Terraform, all of which I used back in my DevOps days.


I remember hacking on the same Ruby projects as you, then running into you the same way in the Erlang world. Man, that was almost 15 years ago! You’ve built some awesome stuff along the way… congrats, and keep hackin’!!


Congratulations on a new page in your journey! And thank you for documenting the story - you’ve inspired many great developers and founders along the way!

Being on almost the opposite end of the software design, I haven’t yet had a good place to apply the tools you’ve built, but I’ve heard many nice things about them from practically everyone, including direct competitors. That says it all.


Congrats! You've Terraformed the industry!


I built a career following in your vision. Thank you 1000x


Thanks for everything! I can't wait to see what you'll do next!

You and Armon have truly shaped the world of infrastructure with your tools and ideas. Although we never met in person (I only had the pleasure to meet Armon so far) - we've interacted a couple of times through some PRs and I really like you as an engineer. It's incredible the value that you created over these 11 years - not only on the product side of things (Terraform and Vault are incredible!), but also with all of your Go packages. The amount of time your name pops up in my go.mod files is just impressive :)

You and Armon are incredible engineers and I'm so happy you built something as cool as Hashicorp! All the best with the next chapter of your life!


I once (March 2014) emailed hashicorp to retrieve a lost vagrant licence key. I got a direct reply from you Mitchell with instructions on what to do. Blew my tiny mind, those were rare bygone days in our industry.

Thank you for all you've created.


In 1992 I replied to an email from Steve Jobs that was shipped in the default email client of the NeXT workstation I was using. I checked the 'read receipt' box in the client. He replied, ignoring my question and berating me for violating his privacy by using the read receipt feature.


> He replied, ignoring my question and berating me for violating his privacy by using the read receipt feature.

Sounds totally legit!


My first major FOSS project was heavily based around Vagrant (PuPHPet). It was a joy building on top of your tooling to make web engineers' lives easier.

Thank you for your work, it was great while it lasted!


Some of my first OSS work was also based on Vagrant (https://github.com/ezekg/tj). I eventually turned that into a commercial desktop app, built on top of that CLI project. Ultimately, the project didn't work out, but it was a big step in my open source and entrepreneurial journey.

ty, mitchellh!


PuPHPet was very useful, I started many projects with it.

I actually still have some projects with a Vagrant VM based on it that I have to move to docker compose or something.


Big fan of your products (Vagrant, Terraform, Vault, Consul). Thank you for the amazing contributions to the community. Best wishes.


I remember meeting you at a DC Ruby conference after a talk you gave on middleware as a design concept. This was years ago, in the early days of Vagrant I think. Amazing to see how much you’ve accomplished since then. Congrats on your achievements and best of luck with your future hacking!


Thank you for all the great products that you have created! Your vision on infrastructure has always been inspiring to me and you are one of the few engineers who has truly moved the field as a whole forward.


Congratulations Mitchell! You’ve inspired and influenced the careers of many, many developers, including me.

I always look to HashiCorp first when searching for tooling. Always something interesting coming out of that shop.


Incredible run! Thanks for all the tools you’ve given us. But more importantly all the relentless passion for automation has been very inspiring! Chapeau, sir! All the best in whats next for you.


Back to hacking Neopets?

-Iridium


Good luck! Consul, Terraform and Nomad were pivotal for my career. Went from a server unboxer to a container wizard during the last decade. Thanks a lot!


Good luck and thanks for everything! It is very hard to make such a big impact on millions of developers and companies around the world like you did.


Just want to say thank you, from an engineer who in 2010 kickstarted a career by using Vagrant to create reproducible dev environments.


Followed your work since that launch - thanks for everything! Enjoy your family and find some new fun things to explore.


Congratulations, Mitchell!

Many accomplishments and the ability to change an entire industry. One of a kind!

Can't wait to see what else you get up to.


Congrats Mitchell! Thanks for all your great work, and best of luck for what's ahead.


Man you're a rocket ship. Such an inspiration and congratulations on your success.


Congratulations! I was an early Vagrant user, it helped immensely, thank you!


I've been using Vagrant for ~10 years, on a daily basis!

Thank you, @mitchellh !


Thanks! You made quite an impact establishing all those projects.


Congrats Mitchell, you've been a huge inspiration!


Just wanna say thank you for all your hard work!


Thank you for all your hard work, man!


I would like to thank you for the amazing products, especially Consul, Nomad, Vagrant, Vault and Terraform, which I started to promote in my company to improve their tech stack.

But at the same time I don't really like that some of the features are guarded behind paywalls...like Vault/Consul namespaces or multi-master deployment...and the pricing for those useful features is hostile for startups like us.


let's go flying


Now you get to work on your terminal full-time ;)


Congrats!


I left the board in 2021 (link to that announcement is in the post).


Less than 2 years ago, Hashicorp was still claiming that their software would be OSS forever: https://web.archive.org/web/20220703202305/https://www.hashi...

I noticed you mentioned leaving the leadership team in addition to the Board of Directors in 2021 (which was before Hashicorp changed their projects' license to Business Source License). Were you one of the champions for OSS at Hashicorp during your time in leadership there?

And can you speak to whether the leadership's new direction was a factor in your decision to leave the company?

