Hacker News | alexgartrell's comments

For the peanut gallery: I worked with both of these guys at Meta on this.

The "servers are only on for a few hours" thing was basically never true, so I have no idea where that claim is coming from. The web performance test alone took more than a few hours to run, and we had far more aggressive soaks for other workloads.

My recollection is that "write zeroes" just became a cheaper operation between '12 and '14.

A fun fact to distract from the awkwardness: a lot of the kernel work done in the early days was exceedingly scrappy. The port-mapping stuff for memcached UDP before SO_REUSEPORT, for example. FB binaries often couldn't even run on vanilla Linux. Over the next several years we put a TON of effort into getting as close to mainline as possible, and now Meta is one of the biggest drivers of Linux development.
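For context on why that custom port-mapping work became unnecessary: since Linux 3.9, SO_REUSEPORT lets multiple sockets bind the same UDP port and has the kernel load-balance between them. A minimal sketch (illustrative only, not the actual memcached code):

```python
import socket

def make_worker_socket(port: int) -> socket.socket:
    """UDP socket that can share a port with sibling workers (Linux 3.9+)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Must be set before bind(); every sharer must set it.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind(("127.0.0.1", port))
    return s

# Two "workers" bound to the same UDP port; the kernel spreads
# incoming datagrams across them.
a = make_worker_socket(0)            # port 0 = pick a free port
port = a.getsockname()[1]
b = make_worker_socket(port)         # would raise EADDRINUSE without SO_REUSEPORT
```

Before SO_REUSEPORT existed, getting this fan-out behavior for UDP meant carrying kernel patches, which is exactly the kind of divergence from mainline described above.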


[ Edit: "servers" in this context meant the HHVM server processes, not the physical server which of course had a longer uptime ]

People got promoted for continuous deployment

https://engineering.fb.com/2017/08/31/web/rapid-release-at-m...

I think it's fair to say the hardware changed, the deployment strategy changed and the patches were no longer relevant, so we stopped applying them.

When I showed up, there were 100+ patches on top of a 2009 kernel tree. I reduced that to about 10 critical patches, rebased them on a six-month cadence over 2-3 years, and upstreamed a few.

I didn't go around saying those old patches were bad ideas and that I got rid of them. How you say it matters.


The linked article says they decided to do CD in 2016, FWIW, so that's not inconsistent with what I said.

You reduced the number of patches a lot and also pushed very hard to get us to 3.0 after we sat on 2.6.38 ~forever. Which was very appreciated, btw. We built the whole plan going forward based on this work.

I'm not arguing that anyone should be nice to anyone or not (it's a waste of breath when it comes to Linux). I'm just saying that the benchmarking was thorough and that contemporary 2014 hardware could zero pages fast.


I use Facebook and Instagram and think you all suck. Slagging each other in public. Grow tf up.

I did something similar a long time ago: https://github.com/facebookresearch/py2bpf

It was definitely a toy: I transliterated Python bytecode (a stack-based VM) into BPF, and I wrote the full code-gen stack myself (BPF was simpler back then).
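The stack-based bytecode being transliterated is easy to see with the standard `dis` module. A quick sketch (the `add` function is just an example, not from py2bpf):

```python
import dis

# A trivial function to inspect.
def add(a, b):
    return a + b

# CPython compiles this to stack-machine code: push a, push b, add, return.
dis.dis(add)

# The instruction stream is what a bytecode-to-BPF transliterator would walk:
ops = [i.opname for i in dis.get_instructions(add)]
print(ops)
```

Each stack operation has to be mapped onto BPF's register machine, which is part of why going through LLVM instead is attractive.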

But using LLVM and not marrying things to the CPython implementation makes this approach way better.


Thank you! Ours is a toy for now as well, but I think the idea is pretty good, so we'll continue to work on it. (This was actually a hackathon project, so the code is pretty messy and not something I am proud of)


Not sure it’s relevant in the cloths these guys take


The cloud business model is to use scale and customer ownership to crush hardware margins to dust. They’re also building their own accelerators to try to cut Nvidia out altogether.


I've always felt that the business model is nickel-and-diming for things like storage/bandwidth and locking customers in with value-add black-box services that can't easily be replaced with open-source solutions.

Just took a random server: https://instances.vantage.sh/aws/ec2/m5d.8xlarge?duration=mo... To get a decent price on it you need to commit to three years at $570 per month (no storage or bandwidth included). Over the course of 3 years that's $20,520 for a server that's ~$10K to buy outright, and even with colo costs over the same time frame you'll spend a lot less. So not exactly crushing those margins to dust.
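The arithmetic behind that claim, using the comment's numbers (the colo rate is a hypothetical filler value, not from the comment):

```python
# 3-year reserved pricing from the comment above.
months = 36
reserved_monthly = 570                      # $/month, 3-yr commitment
cloud_total = months * reserved_monthly
print(cloud_total)                          # 20520

# Buy-it-yourself alternative.
hardware = 10_000                           # ~outright purchase price
colo_monthly = 150                          # HYPOTHETICAL colo rate
colo_total = hardware + months * colo_monthly
print(colo_total)
```

Even before storage and bandwidth charges, the reserved instance costs roughly twice the hardware's sticker price over its own commitment period.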


Cloud is propped up by the tax laws.

Cloud bills can be written off in the month in which they are paid, while purchased hardware has to be depreciated over years.


Section 179 allows immediate expensing of equipment including computers, but is limited to $1.25M/yr. That’s enough for many small and medium businesses.


I’d imagine that these clouds are probably being incentivized to participate


I don't think pivot_root is necessary for something like this, but a new mount namespace will definitely help avoid making a mess by accident.


More low effort posts please!


I can't tell you how much I love that this post has generated 156 comments and I have nothing to rebut about any of them. I'm sure they're all right!


Emacs is better than vi and spaces are better than tabs!!! (I just wanted to get that in as long as you're agreeing with all the comments.)


I second both of these


(dry gags)



Tabs for indenting, spaces for aligning.

Por que no los dos? GIVE ME BOTH! Both is good. Get you a dev who can do both.


FWIW, Emacs defaults to mixed for indentations that aren't a multiple of the tabstop.


Honestly, it's a great post, and I love the idea of low-effort posts, so I hope you keep doing it.


Sharing a queue itself is not new; https://www.kernel.org/doc/html/v5.8/networking/packet_mmap.... and https://docs.kernel.org/next/userspace-api/perf_ring_buffer.... are two examples.

io_uring's security issues mostly stemmed from its original architecture and the sheer amount of attack surface.
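The common idea in packet_mmap, the perf ring buffer, and io_uring is a ring shared between producer and consumer over mapped memory. A toy single-producer/single-consumer sketch of that idea (not the kernel ABI, which additionally needs atomic head/tail updates and memory barriers):

```python
import mmap

RING_SIZE = 4096
buf = mmap.mmap(-1, RING_SIZE)   # anonymous mapping standing in for shared memory
head = 0                          # producer's write index
tail = 0                          # consumer's read index

def push(data: bytes) -> None:
    """Producer: copy bytes into the ring, wrapping at RING_SIZE."""
    global head
    for b in data:
        pos = head % RING_SIZE
        buf[pos:pos + 1] = bytes([b])
        head += 1

def pop(n: int) -> bytes:
    """Consumer: read n bytes out of the ring, advancing tail."""
    global tail
    out = bytes(buf[(tail + i) % RING_SIZE] for i in range(n))
    tail += n
    return out

push(b"hello, ring")
data = pop(11)
print(data)
```

The kernel variants differ mainly in who owns which index and how completion events are signaled, not in the basic shape.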


The thing that we need in order for your dream to become a reality is excellent user space frameworks, so I encourage you (and anyone else) to go build one or (better) find one you like and contribute.


> File IO is perhaps the best example of this (at least on Linux). To handle such cases, languages must provide some sort of alternative strategy such as performing the work in a dedicated pool of OS threads.

AIO has existed for a long time. A lot longer than io_uring.

I think the thing the author misses here is that the majority of IO is actually interrupt-driven in the first place, so async IO is always going to be the more efficient approach.

The author also misses that scheduling threads efficiently from a kernel context is really hard. Async IO also confers a benefit in terms of "data scheduling," which is more relevant for workloads like memcached.
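The "dedicated pool of OS threads" fallback the quoted passage describes is exactly what `asyncio.to_thread` does: blocking file IO runs on a worker thread while the event loop stays free. A minimal sketch (file name and helper are made up for illustration):

```python
import asyncio
import os
import tempfile

def blocking_read(path: str) -> bytes:
    """Ordinary blocking file IO; would stall the event loop if called directly."""
    with open(path, "rb") as f:
        return f.read()

async def main() -> bytes:
    fd, path = tempfile.mkstemp()
    os.write(fd, b"hello")
    os.close(fd)
    try:
        # Offload the blocking call to the default thread pool.
        return await asyncio.to_thread(blocking_read, path)
    finally:
        os.unlink(path)

data = asyncio.run(main())
print(data)
```

io_uring-based runtimes can skip the thread pool entirely for file IO, which is the gap the author is pointing at.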


