You Are Not Google (2017) (bradfieldcs.com)
907 points by gerbilly 9 hours ago | 372 comments





The big issue I think we miss when people say "why are you using Dynamo, just use SQL" or "why are you using hadoop, a bash shell would be faster" or "why are you using containers and kubernetes, just host a raspberry pi in your closet":

The former examples are all managed! That's amazing for scaling teams.

(SQL can be managed with, say, RDS. Sure. But it's not the same level of managed as Dynamo (or Firebase or something like that). It still requires maintenance and tuning and upkeep. Maybe that's fine for you (remember: the point of this article was to tell you to THINK, not to just ignore any big tech products that come out). But don't discount the advantage of true serverless.)

My goal is to be totally unable to SSH into everything that powers my app. I'm not saying that I want a stack where I don't have to. I'm saying that I literally cannot, even if I wanted to real bad. That's why serverless is the future; not because of the massive scale it enables, but because fuck maintenance, fuck operations, fuck worrying about buffer overflow bugs in OpenSSL, I'll pay Amazon $N/month to do that for me, all that matters is the product.


> My goal is to be totally unable to SSH into everything that powers my app.

If the 0day in your familiar pastures dwindles, despair not! Rather, bestir yourself to where programmers are led astray from the sacred Assembly, neither understanding what their programming languages compile to, nor asking to see how their data is stored or transmitted in the true bits of the wire. For those who follow their computation through the layers shall gain 0day and pwn, and those who say “we trust in our APIs, in our proofs, and in our memory models and need not burden ourselves with confusing engineering detail that has no value anyhow” shall surely provide an abundance of 0day and pwnage sufficient for all of us.

Thus preacheth Pastor M. Laphroaig.


For anyone who's not used these "managed" services before, I want to add that it's still a fuck ton of work. The work shifts from "keeping X server running" to "how do I begin to configure and tune this service". You will run into performance issues, config gotchas, voodoo tuning, and maintenance concerns with any of AWS's managed databases or k8s.

> I'll pay Amazon $N/month to do that for me

Until you pay Amazon $N/month to provide the service, and then another $M/month to a human to manage it for you.


>You will run into performance issues, config gotchas, voodoo tuning, and maintenance concerns with any of AWS's managed databases or k8s.

The default config for a LAMP stack will easily handle 100 requests per second. 10 if your app isn't optimized.

Run apt upgrade once a month and enable automatic security updates on Ubuntu.

That is neither hard nor "voodoo".

I've used managed services and I don't see the point until you hit massive scale, at which point you can afford to hire your own engineers to do it.


Exactly. There's no silver bullet, only trade offs.

In this case you're only shifting the complexity from "maintaining" to "orchestrating". "Maintaining" means you build (in a semi-automated way) once and most of your work is spent keeping the services running. With "orchestrating", you spend most of your time building the orchestration and little time maintaining.

If your product is still small, it makes sense to keep most of your infrastructure in "maintaining" since the number of services is small. As the product grows (and your company starts hiring ops people), you can slowly migrate to "orchestrating".


Funny enough, I've experienced the largest benefits from "scaling down" with Amazon's managed databases.

For instance I made an email newsletter system which handles subscriptions, verifications, unsubscribes, removing bounces, etc. based on Lambda, DynamoDB, and SES. What's nice about it is that I don't need to have a whole VM running all the time when I just have to process a few subscriptions a day and an occasional burst of work when the newsletter goes out.
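
A minimal sketch of what one of those handlers can look like (not the actual code; the table name, addresses, field names, and event shape here are hypothetical):

    import boto3

    dynamodb = boto3.resource("dynamodb")
    ses = boto3.client("ses")
    table = dynamodb.Table("newsletter-subscribers")  # hypothetical table name

    def subscribe(event, context):
        """Runs per request; no always-on VM sitting idle between newsletters."""
        email = event["email"]  # assumes the caller already validated the address
        # Conditional put: a duplicate subscribe can't clobber an existing record.
        table.put_item(
            Item={"email": email, "status": "pending"},
            ConditionExpression="attribute_not_exists(email)",
        )
        # Send the verification mail through SES.
        ses.send_email(
            Source="newsletter@example.com",
            Destination={"ToAddresses": [email]},
            Message={
                "Subject": {"Data": "Confirm your subscription"},
                "Body": {"Text": {"Data": "Click the link in this mail to confirm."}},
            },
        )
        return {"status": "pending"}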


I have a db.example.com $10 a month VM on digital ocean.

It is strictly for dev and staging. Not actual production use because prod doesn't exist yet anyways.

My question is: what kind of maintenance should I be doing? I don't see any maintenance. I run apt upgrade about once a month. I'm sure you'd probably want to go with something like Google Cloud or Amazon or ElephantSQL for production purely to CYA, but other than that, if you don't anticipate heavy load, why not just get a cheap VM and run PostgreSQL yourself? I mean, SSH login is pretty safe if you disable password login, right? What maintenance am I missing? Assuming you don't care about losing the data and are willing to do some work to recreate your databases when something goes wrong, a virtual machine with Linode or DigitalOcean, or even your own data center, is not bad?


Amazon are pushing very hard to convince the next generation of devs who grew up writing front-end JS that databases, servers etc. are some kind of technical wizardry best outsourced, when a few weeks of reading and playing around would be enough to get them up to speed.

Hate to be super paranoid, but isn't it rather convenient the top comment on this section expresses exactly this sentiment? If anything, it proves that this perspective is working, or at least, a huge host of devs online really are front-end JS'ers who have this opinion already.

Very much this. For most use cases, the out-of-the-box configuration is fine until you hit ridiculous scale, and it's not really all that complicated to keep a service running if you take time to read the docs.

Jason Fried's Getting Real chapter titled "Scale Later" (page 44) just came to mind.

"For example, we ran Basecamp on a single server for the first year. Because we went with such a simple setup, it only took a week to implement. We didn’t start with a cluster of 15 boxes or spend months worrying about scaling. Did we experience any problems? A few. But we also realized that most of the problems we feared, like a brief slowdown, really weren’t that big of a deal to customers. As long as you keep people in the loop, and are honest about the situation, they’ll understand."


It’s not about “getting up to speed”. It’s about not having to manage it on an ongoing basis.

I wouldn’t work for a company that expects devs to both develop and manage resources that could be managed by a cloud provider.

How well can you “manage” a MySQL database with storage redundancy across three availability zones and synchronous autoscaling read replicas?

How well can you manage an autoscaling database that costs basically nothing when you’re not using it but scales to handle spiky traffic when you do?


Right.. so now your developers don't need to understand how to configure and tune open source directory services and RDBMS's and in-memory caches... they just need to understand how to configure and tune a cloud-provider's implementation of a directory service and RDBMS and in-memory cache..... ?

If you think using a cloud "service" out-of-the-box will "just work" your scale is probably small enough that a single server with the equivalent package installed with the default settings is going to "just work" too.


You did just read our use case, didn’t you? Yes, we could overprovision a single server with 5x the resources for the once-a-week indexing.

We could also have 4 other servers running all of the time even when we weren’t demoing anything in our UAT environment.

We could also not have any redundancy and separate out the reads and writes.

No one said the developers didn’t need to understand how to do it. I said we didn’t have to worry about maintaining infrastructure and overprovisioning.

We also have bulk processors that run messages at a trickle based on incoming instances during the day, but at night and especially at the end of the week, we need 8 times the resources to meet our SLAs. Should we also overprovision that and run 8 servers all of the time?


The question is how often is that necessary? Once again the point goes back to the article title. You are not Google. Unless your product is actually large, you probably don't need all of that and even if you do, you can probably just do part of it in the cloud for significantly cheaper and get close to the same result.

This obsession with making something completely bulletproof and scalable is the exact problem they are discussing. You probably don't need it in most cases but just want it. I am guilty of this as well and it is very difficult to avoid doing.


You think only Google needs to protect against data loss?

We have a process that reads a lot of data from the database on a periodic basis and sends it to ElasticSearch. We would either have to spend more and overprovision it to handle peak load or we can just turn on autoscaling for read replicas. Since the read replicas use the same storage as the reader/writer it’s much faster.

Yes, we need “bulletproof” and scalability or the clients we have six- and seven-figure contracts with won’t be happy and will be up in arms.


Did you read the article? You are not Google. If you ever do really need that kind of redundancy and scale you will have the team to support it, with all the benefits of doing it in-house. No uptime guarantee or web dashboard will ever substitute for simply having people who know what they're doing on staff.

How a company which seems entirely driven by vertical integration is able to convince other companies that outsourcing is the way to go is an absolute mystery to me.


No, we are not Google. We do need to be able to handle spiky loads - see the other reply. No, we don’t “need a team” to support it.

Yes “the people in the know” are at AWS. They handle failover, autoscaling, etc.

We also use Serverless Aurora/MySQL for non production environments with production like size of data. When we don’t need to access the database, we only pay for storage. When we do need it, it’s there.


I agree for production because you want to be able to blame someone when autoscaling fails but we trust developers to run applications locally, right? Then why can't we trust them with dev and staging?

By the way what is autoscaling and why are we autoscaling databases? I'm guessing the only resource that autoscales is the bandwidth? Why can't we all get shared access to a fat pipe in production? I was under the impression that products like Google Cloud Spanner have this figured out. What exactly needs to auto scale? Isn't there just one database server in production?

In dev (which is the use case I'm talking about) you should be able to just reset the vm whenever you want, no?


The big innovation of aurora is they decoupled the compute from the storage [1]. Both compute and storage need to auto scale, but compute is the really important one that is hardest. Aurora serverless scales the compute automatically by keeping warm pools of db capacity around [2]. This is great for spiky traffic without degraded performance.

1. https://www.allthingsdistributed.com/files/p1041-verbitski.p... 2. https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide...


Cloud Spanner seems awesome but has a base price of $740/mo, doesn’t seem to autoscale, and doesn’t support a MySQL or Postgres interface.

No, it’s not just the bandwidth. It’s also the CPU for read and writes in the case of Serverless Aurora.

For regular autoscaling of read replicas, it brings additional servers online using the same storage array. It isn’t just “one database”.

That’s just it. There isn’t “just one database”.


I have _more_ maintenance, _because_ of AWS. Most libraries/services reach some sort of a stable version where they are backwards compatible. AWS (and other big providers) changes libraries all the time and things break, then you have to figure out what they did instead of having a stable interface to work with.

Curious about where you would put your DO data. A cheap VPS from DO doesn't come with much storage. You can either buy more storage for $$ or use DO Spaces for fewer $, but do Spaces talk to PostgreSQL? I apologize for my ignorance; I'm just beginning to explore this stuff.


You have a database that only needs to be accessible part of the time. Why not go all the way and use an embedded database?

The costs associated with "maintaining" usually involve the possibility of a 3am call for whoever is in charge of maintaining. Orchestrating can be done ahead of time, during your 9-5, and that's super valuable. It's still a lot of work, but it's work that can be done on my time, at my pace.

Managed services still have plenty of unexplained goings-on, and 3AM pages.

We moved some stuff from AWS back on prem because it broke less often and in more obvious ways.

Said like a person who hasn't actually ever converted. It's nothing like that at all.

12 outages since 2011, and none of them are anything like what you're describing: https://en.wikipedia.org/wiki/Timeline_of_Amazon_Web_Service...


We've moved from on-prem to AWS fully and we see random issues all the time while their status page shows all green, so I feel you probably have a small amount of resources in use with them or something, because what you're saying doesn't jibe with what we see daily. I see you've also copy-pasted your response to other comments too.

Can you actually quantify any of this or are you asking me to trust you? What I gave is an objective standard, what you've given so far is "trust me I'm right".

Are you just ignoring that the cloud isn't the savior for everyone because it furthers your own agenda, either publicly or personally? We spend the GDP of some nations with AWS yearly, so I guarantee you're not at our scale to see these sorts of issues that most definitely are not caused by us and are indeed issues with AWS as confirmed by our account rep.

There are nations that have very tiny GDPs (and AWS is very expensive) so that's not saying much, and I didn't say AWS was flawless, I said that AWS was a hell of a lot better than anything you could come up with unless you're literally at Amazon scale (and we both know you aren't), and nearly none of your software's problems are AWS's fault.

Using AWS has been a legitimate excuse for an outage or service delivery issue a dozen times since 2011. The end. Write your apps to be more resilient if you've had more than that many outages at those times (and honestly even those outages weren't complete).


> AWS was a hell of a lot better than anything you could come up with unless you're literally at Amazon scale

AWS is a hell of a lot better than most people here can come up with for solving AWS's specific set of problems and priorities. But those problems and priorities generally do not match the problem space of the people using it exactly.

For example, AWS has a lot of complexity for dealing with services and deployment because it needs to be versatile enough to meet wildly different scenarios. In designing a solution for one company and their needs over the next few years, you wouldn't attempt to match what AWS does, because the majority of companies use only a small fraction of the available services, so that wouldn't be a good use of time and resources.

AWS is good enough for very many companies so they appeal to large chunks of the market, but let's not act like that makes them a perfect or "best" solution. They're just a very scalable and versatile solution so it's unlikely they can't handle your problem. You'll often pay more for that scalability and versatility though, and if you knew perfectly (or within good bounds) what to expect when designing a system, it's really not that hard to beat AWS on a cost and growth level with a well designed plan.


AWS status page is notorious for not indicating there is an issue either at all, or until well after the event began.

Without breaking my employer confidentiality agreement? No. But you basing off reported outages is you trusting Amazon in the same way you don't trust me, that is, on their word.

I'd rather trust a corporation providing some information over an individual providing absolutely nothing, especially when that corporation's information matches with my own internal information.

The reality is, if you're having problems with AWS, it's you, and not AWS, for 99.9999999% of your problems. Continuing to pretend it's AWS is a face-saving, ego protecting activity that no rational person plays part in.


Which is funny because I've had an instance go down. I don't have ridiculously high volume, and I'm not distributed:

- One instance
- No changes to the config
- I'm the only person with access
- No major outages at the time

It went down, and the AWS monitor didn't actually see it go down until 45 minutes later - and it needs to be down for a certain amount of time before their techs will take a look.

It was my first time using AWS, and I didn't want to risk waiting for tech so I rebooted the instance and it started back up again. I have no idea why, but it failed without reason, and on reboot worked like it always did.

My point is that AWS has been solid, but they are like anything else, there are tradeoffs in using their service, and they aren't perfect.


I don't know the last time I had an instance go down. Not because it doesn't happen, but because it's sufficiently unimportant that we don't alert on it. Our ASG just brings up another to replace it.

Many applications won't be as resilient. That's the trade-off. We don't have a single stateful application. RDS/Redis/Dynamo/SQS are all managed by someone else. We had to build things differently to accommodate that constraint, but as a result we have ~35M active users and only 2 ops engineers, who spend their time building automation rather than babysitting systems.

If you lean in completely, it's a great ecosystem. Otherwise it's a terrible co-lo experience.
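
For concreteness, the ASG piece is roughly the sketch below - the launch template name, subnet IDs, and sizes are placeholders, not our actual setup:

    import boto3

    autoscaling = boto3.client("autoscaling")

    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-asg",
        LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
        MinSize=2,
        MaxSize=6,
        DesiredCapacity=2,
        VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
        HealthCheckType="EC2",       # failed instances get terminated and replaced
        HealthCheckGracePeriod=300,  # seconds before health checks start counting
    )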


Funny enough, that exact scenario is covered in the certification exams too, and the correct answer is to do what you did. An ASG will fix it too, like another poster said.

Yeah, you just demonstrated why AWS is keen on you having backups to your services on their platform. You failed to do that (follow their guidance) and suffered an outage because of it. How exactly is that AWS's fault?

MY point is that AWS is very solid, and while there are plenty of trade offs, to be sure, the tradeoff is "operational" vs. "orchestration", and operational doesn't let you decide when to work on it whereas orchestration does.


While I want to avoid getting into this argument, what you are saying is the same as "well it works on my machine" and "there can't be anything wrong with Oracle Database because Oracle says there are no bugs."

No, what I'm saying is, "None of your problems are consistent across use cases, therefore they're your problems not the system being used."

I haven't actually said anything about my own experience, so it's funny you claim I have...


It's bad enough that a Chrome extension exists to tease out real info from the lies that is the AWS status page:

https://chrome.google.com/webstore/detail/real-aws-status/ka...


How about network partitions across availability zones? Happens all the time for us, so much in fact that we had to build a tool to test connectivity across AZs just to correlate outages and find the smoking gun.

"For anyone who's not used these "managed" services before, I want to add that it's still a fuck ton of work. The work shifts from "keeping X server running" to "how do I begin to configure and tune this service"."

I have noticed that too. With some managed services you are trading a set of generally understood problems with a lot of quirky behavior of the service that's very hard to debug.


So I've worked with AWS and with our internal clusters as a dev. My experience has been that I have to make work-arounds for both, but at least with AWS, I don't have to spell out commands explicitly to the junior PEs.

EDIT: I should be clear, our PEs are generally pretty good, but because their product isn't seen by upper management as the thing which makes money, they're perpetually understaffed.


Also, Amazon documents their stuff on a nice public website; internal teams documented the n-2 iteration of the system and have change notes hidden in a Google Drive somewhere that, if you ask the right person on the other side of the world, they might be able to share a link to.

You were able to find documentation? Where do you work?

Yes, but I think very broadly speaking the quirky behavior is stuff you bump into, learn about, fix, and then can walk away from.

The daily/monthly maintenance cycle on a self hosted SQL server is “generally understood” but you still have to wake up, check your security patches, and monitor your redeployments.

You can do some of that in an automated fashion with public security updates for your containers and such. But if monitoring detects an anomaly, it’s YOU, not Heroku who gets paged.

It’s a little like owning a house vs renting. Yes if you rent you have to work around the existing building, and getting small things fixed is a process. But if the pipes explode, you make a phone call and it’s someone else’s problem. You didn’t eliminate your workload, but you shrunk the domain you need to personally be on call for.


The problem is that if I run my own servers I can fix problems (maybe with a lot of effort but at least it can be done) but with managed services I may not be able to do so. There is a lot of value in managed services but you have to be careful not to allow them to eat up your project with their bugs/quirks.

So what “problems” were you unable to fix with AWS?

Exactly this

The point is with a managed service, none of your problems will be with the service. That's what the managed service is selling.

I just finished a 2+ week support ticket w/ AWS. We were unable to connect over TLS to several of our instances, because the instance's hostname was not listed on the certificate. This is a niche bug that's trivially fixable if you own the service, but with AWS, it's a lot harder: you're going to need a technical rep who understands x509 — and nobody understands x509.

I've found & reported a bug in RDS whereby spatial indexes just didn't work; merely hinting the server to not use the spatial index would return results, but hinting it to use the spatial index would get nothing. (Spatial indexes were, admittedly, brand new at the time.)

I've had bugs w/ S3: technically the service is up, but trivial GETs from the bucket take 140 seconds to complete, rendering our service effectively down.

I've found & worked w/ AWS to fix a bug in ELB's HTTP handling.

All of these were problems with the service, since in each case it's failing to correctly implement some well-understood protocol. AWS is not perfect. (Still, it is worth it, IMO. But the parent is right: you are trading one set of issues for another, and it's worth knowing that and thinking about it and what is right for you.)


Okay, I'm sorry you thought I said AWS was perfect and bug free. I didn't, however, say that. I said (implied, really) it's better than anything you could possibly home brew. Nothing you've said here changes that.

Further, didn't I say that it's trading one set of issues for another? Or at least, I explicitly agreed with that.

I feel like you didn't read what I wrote honestly, and kind of came in with your own agenda. All I ever said was that the issues you trade off are orchestration issues vs. operational issues, and operational issues are 10x harder than orchestration issues because you don't get to decide when to work on operational issues, you tend to have to deal with them when they happen.


You wrote “The point is with a managed service, none of your problems will be with the service.”

What deathanatos wrote sounds awfully like problems with the service to me.

I don’t think S3 taking 100+ seconds to respond to a GET request can be solved by orchestration alone.


It definitely can. Reasonable timeouts and redundant systems.
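
A minimal sketch of the timeout half of that, assuming boto3 and illustrative values - the point is that a 140-second GET turns into a fast failure that a redundant path can pick up:

    import boto3
    from botocore.config import Config

    s3 = boto3.client(
        "s3",
        config=Config(
            connect_timeout=2,      # seconds to establish the connection
            read_timeout=5,         # seconds to wait on response data
            retries={"max_attempts": 3},
        ),
    )

    # A GET that hangs now raises a timeout quickly instead of stalling for minutes,
    # so the caller can fall back to a replica, a cache, or a degraded response.
    body = s3.get_object(Bucket="example-bucket", Key="example-key")["Body"].read()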


That's the promise but in reality every software has bugs, including managed services.

Not really, not anything like what you're describing.

12 outages since 2011, and none of them are anything like what you're describing: https://en.wikipedia.org/wiki/Timeline_of_Amazon_Web_Service...


We've moved from on-prem to AWS fully and we see random issues all the time while their status page shows all green, so I feel you probably have a small amount of resources in use with them or something, because what you're saying doesn't jive with what we see daily. I see you've also copy-pasted your response to other comments too, so I'll do the same with my response.

I don't feel like copy/pasting all of our comments to each other, so I'd appreciate it if you didn't do that, thanks.

Then don't do it yourself. You're dead set on ignoring people whose experience is different than yours, wrapping yourself in an echo chamber of sorts and telling others they are wrong.

I'm not dead set on anything, I'm trying to have conversations with multiple people, not create an immutable record.

And I don't think you know what an echo chamber is if you think one person can create one alone...


This is not about outages. There are many more things that can go wrong besides outages.

Sorry but if you don't know how to use/manage your AWS stack, that's not on AWS, that's on you.

A bad workman blames his tools.


How long have you been working in tech? Just curious.

You sound like someone who hasn't had much real world experience and thinks AWS or whatever is the best thing because it's the only thing you know.


You may want to ask the OP how much time she/he has been working at Amazon instead.

Long enough to know that some dinosaurs refuse to learn anything new (read: AWS) and will bend over backwards to try and keep themselves relevant.

Bro, what're you so upset about in this thread? That people had different experiences than you with AWS..?

I'm not upset, I'm simply pointing out that AWS isn't the problem in any of these examples, it's the various commenter's lack of understanding about how to work in AWS that's caused these problems.

I don't think anyone is actually upset, do you? I certainly hope I haven't upset anyone... :/


It's someone else's problem unless it prevents you from living here, in which case it's still your problem too. I think the analogy works quite well :)

But you’re talking about a reduction in the number of types of specialized people to the number of specializations per type of person. That makes this more scalable.

That's no joke. I have a decent software background and it was far from trivial to get going with AWS services. Their documentation doesn't always quite tell you everything you need to know and half the time there are conflicting docs both saying to do something that's wrong. Still, it has been less work than a production server at my last engineering job, but then again that project had a lot of issues related to age and shitty code bases. Hard to say which would have been less work honestly.

If your "managed services" are a ton of work, then they're not really managed.

I built a system selling and fulfilling 15k tshirts/day on Google App Engine using (what is now called) the Cloud Datastore. The programming limitations are annoying (it's useless for analytics) but it was rock solid reliable, autoscaled, and completely automated. Nobody wore a pager, and sometimes the entire team would go camping. You simply cannot get that kind of stress-free life out of a traditional RDBMS stack.


The general fact of reality is that if you are building anything technical, then knowing and managing the details, whatever the details are, will get you a lot more bang for your buck. Reality isn't just a garden variety one-size fits all kind of thing, so creating something usually isn't either. If you just want a blog like everyone else's, then that comes packaged, but if you want something special, you will always have to put in the expertise.

> will get you a lot more bang for your buck

And it _really_ is a lot. A company I work with switched from running two servers with failover to AWS, bills went from ~€120/m to ~€2.2k/m for a similar work load. Granted, nobody has to manage those servers any more, but if that price tag continues to rise that way, it's going to be much cheaper to have somebody dedicated to manage those servers vs use AWS.

Also, maybe that's just me, but I prefer to have the knowledge in my team. If everything runs on AWS, I'm at the mercy of Amazon.


Proponents of the cloud love to ignore that for 'small' places, it's often perfectly fine to have a 'part-time +emergencies' ops/sysadmin person/team to manage infra.

Yes, some places will need a full time team of multiple people, but a lot of places don't, and can get a tailored solution to suit their needs perfectly, rather than just trying to fit into however it seems to work best using the soup of Rube Goldberg-like interdependent AWS services.


Holy cow, that's nuts. A pair of perfectly good webservers (m5.xlarge) comes in at ~€250/m. And cheaper if you get reserved instances. ~€2.2k/m for a pair of instances and maybe a load balancer is incredible!

Bang on. Realistically the value of Amazon's managed side is in the early stages. At later stages, with people, it's significantly lower cost to tune real resources, and you get added performance benefits.

We make a decent business out of doing just this, at scale for clients today.


Agree. AWS and the likes is an awesome tool to get access to a lot of compute power quickly which is great for unexpectedly large workloads or experimenting during early stages. For established businesses/processes the cost of running on premises is often significantly lower.

We manage about 150T of data for a company on a relatively inexpensive RAID array + one replica + offline backup. It is accessible on our 10Gbps intranet for the processing cluster, for users to pull to their workstations, etc. The whole thing has been running on stock HP servers for many years and never had outages beyond disks (which are in RAID) predicting failure and maybe one redundant PSU failure. We did the math of replicating what we have with Amazon or Google and it would cost a fortune.


Would love to hear more about that business — do you help people go from cloud back to on-prem?

If that were universally true, then why did Netflix go all in on AWS?

hype maybe? Got a backroom deal for the exposure? Oh, and you probably aren't Netflix either. But they have a pretty severe case of vendor lock-in now, will be interesting to see how it plays out. As of 2018 they spend $40 million on cloud services, $23 million of that on AWS.

Do you think you are gonna get AWS's attention with that $10 s3 instance when something goes wrong?!? You will have negative leverage after signing up for vendor lock in.

I'll take linux servers any day, thanks.


So you think Netflix was suckered into using AWS and if they had just listened to random people on HN they would have made different choices?

I’m sure with all of your developers using the repository pattern to “abstract their database access”, you can change your database at the drop of a hat.

Companies rarely change infrastructure wholesale no matter how many levels of abstraction you put in front of your resources.

While at the same time you’re spending money maintaining hardware instead of focusing on your business’s competitive advantage.


so, you might be dealing with some half-truths here.

Netflix does NOT use AWS for their meat-and-potatoes streaming, just the housekeeping. They use their own Open Connect CDN for the heavy lifting.

https://www.networkworld.com/article/3037428/netflix-is-not-...

But re: maintaining hardware, I only maintain my dev box these days. We are fine with hosted linux services, but backup/monitoring/updating is just so trivial, and the hosting so affordable (and predictable) w/linux, it would have to be a hype/marketing/nepotistic decision to switch to aws in our case. The internet is still built on a backbone and run by linux, any programmer would be foolhardy to ignore that bit of reality for very long.


Please do not spread falsehoods.

Yeah, I think it's a trade off. Certain services can be a no-brainer, but others will cause pain if your particular use case doesn't align precisely with the service's strengths and limitations.

DynamoDB vs RDS is a perfect example. Most of that boils down to the typical document-store and lack-of-transactions challenges. God forbid you start with DynamoDB and then discover you really need a transactional unit of work, or you got your indexes wrong the first time around. If you didn't need the benefits of DynamoDB in the first place, you will be wishing you just went with a traditional RDBMS on RDS to start with.

Lambda can be another mixed bag. It can be a PITA to troubleshoot, and there are a lot of gotchas like execution time limits, storage limits, cold start latency, and so on. But once you've invested all the time getting everything set up, wired up...

In for a penny, in for a pound.
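
For what it's worth, the execution time limit is the one gotcha you can guard against fairly cheaply. A rough sketch (the process_item/requeue helpers below are hypothetical placeholders):

    SAFETY_MARGIN_MS = 10_000  # bail out when ~10s of the time limit remains

    def process_item(item):
        pass  # placeholder for the real per-item work

    def requeue(remaining):
        pass  # placeholder: e.g. push the leftover items back onto a queue

    def handler(event, context):
        items = event["items"]
        for i, item in enumerate(items):
            if context.get_remaining_time_in_millis() < SAFETY_MARGIN_MS:
                requeue(items[i:])  # stop cleanly instead of being killed mid-batch
                return {"processed": i, "requeued": len(items) - i}
            process_item(item)
        return {"processed": len(items), "requeued": 0}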



When you buy a service from a big company, and it doesn't work, you get to debug the service.

Which is exactly why everything runs Linux instead of Windows.

I guess that's his point, you are not doing less work, all you did was move your debugging from one place to another.

And let's not forget! Got a support contract so we can all BLAME someone not in the room and feel good.

That doesn't sound like a shift of work. It sounds like work I already would have done - performance tuning doesn't go away by bringing things in house.

Now that person I pay to operate the service can focus on tuning, not backups and other work that's been automated away.

Sounds like a massive win to me.


It does make performance tuning harder since you likely don't have access to the codebase of the managed service, requiring more trial-and-error or just asking someone on the support team ($$)

Then you pay $O/month for AWS Enterprise Support (who are actually quite good and helpful) to help augment your $M/month employees and $N/month direct spend.

Support - in the long run - is pretty cheap (I think I pay around 300) and 100 percent worth the tradeoff. The web chat offers the fastest resolution times in my experience once you are out of the queue

To be fair, this is only a valid shift for folks migrating. If you are creating something new, you have both "how do I configure it?" and "how do I keep it running?"

That is, the shift to managed services does remove a large portion of the work, and just changes another portion.


Yes, but it's still pretty much a straight cost offset. If you hold your own metal, you have to do all of that and still administer the database. Sure, there could be a little overlap in storage design, but most of the managed systems have typical operational concerns at a button click: backup, restore, HA... Unless your fleet is huge and your workload is special, you're going to win with managed services.

define huge?

Bigger than Netflix, assuming that you don't know something that Netflix doesn't.

> If you hold your own metal

That's going too much to the other extreme, ec2, droplets, etc.. are fine.


And a lot of the time keeping X running is simpler than configuring and tuning this service.

He specifically mentioned Dynamo. There is nothing to configure except indexes, and read and write capacity units.
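
A sketch of what that configuration surface looks like in practice - the table, index, and capacity numbers below are made up for illustration:

    import boto3

    dynamodb = boto3.client("dynamodb")

    dynamodb.create_table(
        TableName="orders",
        AttributeDefinitions=[
            {"AttributeName": "order_id", "AttributeType": "S"},
            {"AttributeName": "customer_id", "AttributeType": "S"},
        ],
        KeySchema=[{"AttributeName": "order_id", "KeyType": "HASH"}],
        # Read and write capacity units: the main knobs besides the indexes.
        ProvisionedThroughput={"ReadCapacityUnits": 10, "WriteCapacityUnits": 5},
        GlobalSecondaryIndexes=[
            {
                "IndexName": "by-customer",
                "KeySchema": [{"AttributeName": "customer_id", "KeyType": "HASH"}],
                "Projection": {"ProjectionType": "ALL"},
                "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
            }
        ],
    )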

> but because fuck maintenance, fuck operations, fuck worrying about buffer overflow bugs in OpenSSL, I'll pay Amazon $N/month to do that for me, all that matters is the product.

That's nice in theory, but my experience with paying big companies for promises like that is I still end up having to debug their broken software; and it's a lot easier to do that if I can ssh into the box. I've got patches into OpenSSL that fixed DHE negotiation from certain versions of Windows (especially Windows Mobile), that I really don't think anyone's support team would have been able to fix for me / my users [1], unless I had some really detailed explanation -- at that point, I may as well be running it myself.

[1] And as proof that nobody would fix it, I offer that nobody fixed this, even though there were public posts about it for a year before my patches went in; so some people had noticed it was broken.


This.

I've been informally tracking time spent on systems my team manages and managed tools we use. (I manage infra where I work.)

There is very little difference.

And workarounds and troubleshooting spent on someone else's system mean we only learn about some proprietary system. That's bad for institutional knowledge and flexibility, and for individual careers.

Our needs don't mesh well with typical cloud offerings, so we don't use them for much. When we have, there has yet to be a cost savings - thus far they've always cost us more than we pay owning everything.

I mean, I personally like not dealing with hardware or having to spend time in data centers. But I can't justify it from a cost or a service perspective.


After Amazon switched to per-second EC2 billing in 2017, I would say cloud is pretty good for prototyping and running CI. As for anything else, I would say it depends.

absolutely. I am _right_now_ fighting ELB because we see traffic going to it but not coming out of it. If it were MY load balancer, I would just log in and tcpdump.

And you couldn’t use VPC logs?

Your experience with companies or your specific experience with AWS?

With companies. I don't have a lot of specific experience with AWS, except for the time when nobody mentioned that the default firewall rules are stateful, and there's a connection cap that's kind of low but scaled to instance size, and there's no indication about it, and the sales droid they assigned to my account because they smelled money didn't mention it either; but I'm not bitter. :)

Based on all the articles I see from time to time, I fully expect if I worked for a company that was built on AWS, I would spend a lot of time debugging AWS, because I end up debugging all the stuff that nobody else can, and it's a lot easier to do when I have access to everything.


Everything you mentioned is covered in a Pluralsight video.

But you criticize AWS based on a few articles and hardly any real world experience?


I didn't start off criticizing AWS; just large companies in general, based on my experience with them, and extrapolating it to AWS based on my experience with the one issue, combined with reading things that sound like the same pattern of people having to debug big company's software, and it's hard when you can't see anything.

On this specific issue; I worked with sales support, and also looked on their forums; and found things like this: https://forums.aws.amazon.com/message.jspa?messageID=721247

Which says 'yes, we know there's a limit; no we won't tell you what it is' and ignores the elephant in the room, that apparently if you reconfigure the network rules, the problem evaporates.

Excuse me for not banging my head against the wall or watching random videos from third parties. Actually, I only know there's a way to avoid this because I complained about it for a long time in HN threads, and finally, some lovely person told me the answer. In the mean time, I had continued my life in sensible hosting environments where there wasn't an invisible, unknowable connection limit hovering over my head.


So you should be excused for not doing research on something your entire infrastructure is based on?

And just “extrapolating”?


Three worries for me:

1. I know the risk of only being able to peer through the fence at the distant piece of software that is running your business, unable to gain any insight while your production application is limping badly and customers are running away like water. On top of that, thrashing it out with Mr Clippy is better than average support. If my business depends on it I want experts at hand who have the tools and the access they need to do the job they do.

2. The insane pace of serverless is entirely fad driven and lacks the quality engineering which is required for critical pieces of software. The tools are universally poor quality, unstable, unreliable, poorly documented and built on gaining mindshare, making IPO and selling conference seats. Best practices never materialise as the rate of change does not allow an ecosystem to settle and work the bugs out. The friction is absolutely terrible but no one speaks of this for fear of their cloud project being labelled a failure. Every person I have spoken to for months is hiding little pockets of pain under a banner of success. Some people clearly will never deliver and burn mountains of cash hiding this.

3. Once you enter an ecosystem, you are at the mercy of the ecosystem entirely, be that a service provider or a tool. Portability is always valuable. It has cost, scalability, redundancy and risk benefits far beyond the short term gain of a single vendor decision. I'm currently laughing at an AWS quote for a SQL Server instance with half the grunt, no redundancy, no insight possible for only 2x the capex and opex combined of dedicated hardware including staff. But can't move to Azure because everything is tied into SQS.

I can never be behind anything but IaaS myself. This is contrarian, especially in this arena, but I will put my 25 years of experience on the line every time and say that it is the right thing to do. IaaS is choice and flexibility; it allows you to gain deep insights, protects you from serfdom and fad technology, and lets you pick and choose mature products rather than what the vendor sees fit.

This is just another rehash of buying a mainframe. It's just bigger and you pay hourly to write COBOL.


1. You have to pay for those but the thinking goes that if you are at the complexity and scale of those problems you reach out to a TAM to sort them out for you (ie. $$$).

2. Give it time; this will evolve and it is still in its infancy. Like all things it is buggy in the beginning but as adoption hockey-sticks so too will the stability, documentation, etc.

3. Portability is traded for breadth of services and depth is gained through vendor lock-in and the one-size-fits-all package. Of your concerns I would say this one will be around for a long time or at least until a "conversion kit" is built to shoehorn all of your stuff into another provider allowing you to jump ship or test the waters elsewhere.

I am a veteran like you (20 years) and while you choose IaaS because you like the control, most want to punt their problem to someone else and pay for that.

If our goal is to get from LAX to JFK we could fly our own plane, charting our own course, looking up weather, doing engine checks, refueling, dealing with air traffic control, and we'll get there and be in full control the whole way. However most will pay to be shuttled in a commercial airplane. Then of course there are some that are willing to pay more to have their own personal pilot get them there in a chartered aircraft where they are afforded a more tailored experience.

There are some really great pilots you can hire out there to do it all for you (or if you are in fact one yourself), but if the goal is to simply go from point A to point B in as little time as possible, we don't have time to find, train, and rely on a single pilot or ourselves to get there. We will take the hit and pay others to get us there.

That is what all this serverless nonsense is about if you ask me. The tradeoffs of simplicity and handing the busywork off to someone else are more enticing than the control we have in the process. Also, isn't it nice to be able to say, when you arrive late, that it was the airline's fault? :)


I work on a team where our products are all in Fargate containers. I understand the appeal of serverless -- you never need to exec into the container, but half the time when we're debugging an issue in prod that we can't reproduce locally, we'll say, "wouldn't this be easier if we could just exec into the container and find out exactly what's going on?"

Removing SSH should be the goal though. If you follow the old Visible Ops book you also "Electrify the Fence" and introduce accountability, etc. If your goal is to see what a process is doing, introduce tracing. If you need to add a "println()" then push that out as a change, because the environment is changing from your altering of it. Just because the tool you need doesn't exist yet, so you SSH into a box, doesn't mean it shouldn't exist - you have to build the tooling that prevents you from needing this ad hoc ability. Admittedly it scares me still, but ideally the end game is to never need to do so, or even have the ability to, and instead go through a tool which has all the things you are looking for without allowing a human to be too human and miss a semicolon.

Not being able to ssh into a container sounds like a missing feature of that particular container solution? I would expect that I can ssh into a docker container hosted in a kubernetes cluster. Hmm, pretty sure I must've done this dozens of times.

> My goal is to be totally unable to SSH into everything that powers my app

That sounds like a nightmare to me, and I'm not even a server guy, I'm a backend guy only managing my personal projects.

> I'll pay Amazon $N/month to do that for me, all that matters is the product.

I don't want to pay Amazon anything, I want to pay Linode/Digital Ocean 5 or 10 or 20 dollars per month and I can do the rest myself pretty well. My personal projects are never ever going to reach Google's scale, and seeing that they don't bring me anything (they're not intended to), I'm not that eager to pay an order of magnitude more to a company like Amazon or Google in order to host those projects.


it's astounding how much a $20 linode can handle (as a web host) with a proper software stack and performance-minded coding.

but companies throw money, hardware and vendor lock-in at problems way before committing to sound engineering practices.

i have a friend who works at an online sub-prime lender, says their ruby backend does hundreds of queries per request and takes seconds to load. but they have money to burn so they just throw hardware at it. they spend mid-five figures per month on their infrastructure and third party devops stuff.

meanwhile, we run a 20k/day pageview ecommerce site on a $40/mo linode at 25% peak load. it's bonkers how much waste there is.

i think offloading this stuff to "someone else" and micro-services just makes you care even less about how big a pile of shit your code is.


And we just thought the frontend of websites and services was bad.

Seems the insanely bad coding and ignorance of speed and performance goes all the way to the backend too!

Disappointing.


One of the things I'm most grateful for as a programmer was the ability to learn a lot of performance-minded backend design principles within the Salesforce ecosystem. Salesforce was always brutally unforgiving in terms of the amount of I/O it would let you do per transaction, which led me to become much more aware of how those things worked under the hood, and had the effect of teaching me how to better design my code to fit within the limits.

When you can't use more than a couple seconds of CPU time per transaction, or you have a fixed and spartan heap size for a big algorithm, you learn good habits real quick.

I imagine a lot of engineers today haven't had the experience of working within hardware limits, hence all the waste everywhere.


> When you can't use more than a couple seconds of CPU time per transaction

hmm, i guess that explains why salesforce is universally known for being slow; 2 seconds is an eternity.

we deliver every full page on the site within 250ms from initial request to page fully loaded (with js and css fully executed and settled). images/videos finish later depending on sizes and qtys.


I don’t disagree that Salesforce is slow but this is a different workload, think db/business logic to perform a task, processing records, etc, not loading a typical page/assets.

i just signed up for a trial of salesforce and at least when there's not much data in the account it doesn't seem too slow. i was going off of what some coworkers related to me from past experience.

i think it's expected that if you're doing some editing/reporting/exporting en masse, then a couple seconds could be okay (assuming it's thousands or tens of thousands of records, not a hundred). but not for single-record actions / interactions.


We process up to 15 million events per second on a $15K HP server.

a pretty meaningless metric without knowing what an "event" is.

The event is a market data update. On one hand it is in fact pretty simple. On the other hand, it is not much simpler than something like “user X liked Y” or even “product Z added to cart”. I think it shows what well designed, optimized code can do on normal hardware. Remember, a modern CPU pumps out 3-4 billion instructions per second.

One large bank I used to work for had a payment processing system doing about 40 thousand transactions per day. It ran on about $8 million worth of hardware - mainly IBM mainframe. I was impressed until I found out each transaction was essentially a 60-100 byte text message. I can confidently say a well designed system can do 1000 times that load on an iPhone.


Cloud providers have generous free tiers, so many personal projects pay $0. However, I'm in the same boat. I'd rather pay $20/month to DO and risk my site going down in a DDoS attack than having a REST endpoint basically tied to my credit card and hoping Amazon blocks all the malicious traffic.

Yep, keep your stuff portable and you can always migrate to AWS/Azure/etc. later if your personal project should turn into something bigger.

As someone tasked with protecting infrastructure, devs not being able to SSH into production is a godsend.

There's a difference between devs not being able to ssh in due to access restrictions and devs not being able to ssh in because it's not a feature of the service. I actually agree that not having unilateral access to the production system is a good thing. But not being able to have anyone able to be granted access, even for a limited time, is neither productive nor safe.

so, what is to stop them from running code from the app?!? Who do you call when something breaks? Do you expect the dev to not be able to log in and investigate at 3am or whenever?

I mean if you can't trust your dev team, or the procedures surrounding them, you are kinda screwed.


To be clear, the issue isn't about trusting developers in prod. It's about limiting access to prod as an avenue for attackers who will be looking to get in there and exfil sensitive data.

Nothing to do with trust of devs.


I use managed services a lot. And fortunately my data size is big enough for it to actually make sense to be using these big data tools.

The issue is the quirks. Every managed service has at least a dozen quirks that you're not going to know about when you visit the flashy sales page on the cloud provider's website. And for the vast majority of users, they're not going to have access to the source code to understand how these quirks work on the backend. So you end up in a situation where yes, it does take way less time to get 95% of the functionality done, but getting that last 5% can still take a considerable amount of work.

As an example, I am using Azure Event Hubs lately. It is supposed to provide something like a simple consumer api like Kafka does, but with consumer group load balancing across partitions. Awesome, there is a system that automatically handles leasing across partitions in a way that abstracts this all away from the client! Except, well actually the load balancing is accomplished via "stealing leases" (meaning, they are not true leases) so if you use the api you are meant to use, you will get double reads - potentially very many if you want to commit reads after doing more-than-light processing which can take time. So you need to use the much more poorly documented, barebones low level api and probably still end up writing a bunch of logic to dedupe.

Except, you use this kind of tool to begin with because you want to set up a distributed consumer group to read from a stream... so now you have a non-trivial engineering problem figuring out a way to get a distributed system to manage deduping in a light-weight way across hundreds of processes and machines...
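
For what it's worth, the dedupe gate itself can stay simple: a conditional insert keyed on the event id decides whether a consumer processes the event. In the sketch below SQLite stands in for whatever shared store (a unique constraint in a database, Redis SETNX, a conditional write) the real distributed consumers would share - the hard part is choosing and operating that store across hundreds of processes, not the code:

    import sqlite3

    conn = sqlite3.connect("processed.db")
    conn.execute("CREATE TABLE IF NOT EXISTS processed (event_id TEXT PRIMARY KEY)")

    def handle(payload):
        pass  # placeholder for the actual processing work

    def process_once(event_id, payload):
        # INSERT OR IGNORE acts as the claim: rowcount is 0 if the id was already seen.
        cur = conn.execute(
            "INSERT OR IGNORE INTO processed (event_id) VALUES (?)", (event_id,)
        )
        conn.commit()
        if cur.rowcount == 0:
            return False  # duplicate delivery (e.g. after a lease steal): skip it
        handle(payload)
        return True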


Enjoy the many (many, many) connection reset by peer, TCP idle timeouts, and 500 Internal Server Errors on Dynamo's end.

Between that and the dogshit documentation, it's truly thrilling to be paying them a ridiculous amount of money for the privilege of having a managed service go undebuggably AWOL on the regular, instead of being able to resolve the issues locally.


Yeah and if you’re on gcp you also get dogshit support whose main strategy is to sandbag you with endless (and useless) logs collection until you give up and move on or take too long so they can close the ticket.

I can see my code handling all the exceptions corresponding to the points you raised, with exponential backoff. Which part of Dynamo's documentation is dogshit?

To me this comment is not a response to the article.

The article isn’t asking why use Dynamo vs SQL in the context of the traditional NoSQL vs SQL way

It’s why Dynamo vs QLDB.

Or why Cloud Firestore vs Cloud Spanner.

Or Firehose + some lambda functions vs EMR

It’s wholly separate of the managed angle and focuses on the actual complexity of the products involved and the trade offs therein.

On the surface the two sound similar, but there’s much more nuance to this article's point (take the example of Dynamo, the author isn’t opposed to it for being NoSQL, they’re opposed to it for low level technical design choices that are a poor fit for the problem their client had)


Yeah, "not my problem" sounds like a winning strategy, until you realize that it also means "nothing I can do about it" when things inevitably go wrong.

Not to mention that you're the lowest priority when something does go wrong.

It is important to understand the following:

Depending on the problem, choose the right tools. For example so far everything I've seen about React and GraphQL tells me that maybe GQL isn't necessarily the best solution for a 5-person team, but a 100 person team may have a hard time living without it.

Kubernetes / Docker is significant work. And when we had 4 developers, making that effort was wasteful and we literally didn't have the time. Now at 20 we're much more staffed and the things Kubernetes / Docker solve are useful.

Meanwhile we have a single PostgreSQL server. No NoSQL caching layer. We're at a point where we aren't sure how far we have to go before we hit the limits of a PSQL server, but it's at least 3-4 years before we hit them.

Point is look at the tools. See where these tools win it for you. Don't just blindly pick because it can scale super well. Pick it because it gives you things you need and a simplicity you need for the current and near-term context, and have a long-term strategy how to move away from it when it is no longer useful.


Security and privacy yet again traded for convenience. We need strict data privacy laws and then I would have a little more trust in large centralized entities that would otherwise have every incentive to abuse people's data.

Such centralized systems also become significant points of failure. What happens to your small business if one of the many services you rely on disappears or changes their API?

"All that matters is the product" sounds very much like "all that matters is the bottom line" and we've seen the travesties that occur when profit is put above all else.


It's not clear that having your own datacenter is more secure than using AWS/GCP/Azure services. In both cases there are good monitoring solutions, however I'd say that cloud-based solutions have easier immediate access to most things because they just integrate with the provider's API whereas on prem you're installing agents and whatnot.

Also having granular IAM for services and data is very helpful for security. You have a single source of truth for all your devs, and extremely granular permissions that can be changed in a central location. Contrast to building out all of that auditing and automation on your own. Granted, IAM permissions give us tons of headaches on the regular, but on balance I still think it's better when done well.

If you're concerned about AWS/GCP/Azure looking at your private business's data, I think 1.) that's explicitly not allowed in the contracts you sign 2.) many services are HIPAA compliant, hence again by law they can't 3.) They'd for sure suffer massively both legally and through loss of business if they ever did that.


Which Dynamo are you talking about? The Dynamo paper, or Dynamo, the AWS managed database product? Dynamo-the-product makes sense for all sorts of random tasks. Full-on implementations of Dynamo-the-paper (for instance: Cassandra) fall into the bucket of "understand the problem first" that the author is talking about.

A SQL database, bash, and a Raspberry Pi in your closet all are managed too. They're built on top of software that's been battle-tested over decades to keep itself running. MySQL/Postgres/whatever will manage running multiple queries for you and keeping the data consistent as well as Dynamo will. bash will manage running multiple processes for you as well as Hadoop will. A Raspberry Pi with cron will manage starting a service for you every time it powers on as well as Lambda will. The reasons to use Dynamo/Hadoop/Lambda are when you've got different problems from the ones that MySQL/bash/cron solve.

If you don't believe that this counts as "managed," then for all your words at the end, you ultimately believe that making a solution more complex, so that failures are expected and humans are required, is superior to doing a simple and all-in-code thing. For all that you claim not to like operations, you think that a system requiring operational work makes it better. You are living the philosophy of the sysadmins who run everything out of .bash_history and Perl in their homedir and think "infrastructure as code" is a fad.


If everyone did this, in the limiting case, no one has the ability to investigate and repair actual bugs, to the point where the application does nothing but crash every 10 ms and get continually restarted.

I'm not sure I follow, how does managed services help with choosing the right database technology for the problem set?

He doesn't care about that. JUST THE PRODUCT. That's how amazing his fantasies are.

Yeah, I can't quite square what the OP is saying with how to deal with something like Amazon Dynamo's lack of large scale transactions. (Even basic ACID was only introduced late last year, if I read correctly.)

If you choose Dynamo, then you get to blame Dynamo for not having ACID when you lose data!

This is an incredibly common fallacy in our industry. Remember, you are responsible for the appropriateness of the dependencies you choose. If you build your fancy product on a bad foundation, and the foundation crumbles, your product collapses with it and it's unprofessional to say "Well, I didn't pour the foundation, so I can't be blamed." Maybe not, but you decided to build on top of it.


Here I am still using SQLite (had to add an index yesterday!) Maybe one day I will grow up :(

SQLite is good quality software, and very heavily tested. There's nothing wrong with using it so long as you don't outright misuse it, and they are clear about when you shouldn't be using it.

I very rarely run into a situation where I need more than SQLite. The page on the subject is great: https://www.sqlite.org/whentouse.html

N gets to be a big number, my friend. I love AWS, but I can't get over the number of startups I've seen with $100k MRR and $35k MRC allocated to AWS.

If the main reason for your switch from SQL to Dynamo is that it is managed, why not use AWS Aurora, Aurora Serverless, or any other more managed SQL solution?

Now that many SQL databases support JSON, I don't quite understand why you would use a document DB for anything other than actual unstructured data, or key-value pairs that need specific performance.
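
For example, a rough psycopg2 sketch of what that looks like in Postgres (the table, columns, and connection details are made up):

  import psycopg2
  from psycopg2.extras import Json

  conn = psycopg2.connect("dbname=app")
  cur = conn.cursor()

  # A normal relational table with one jsonb column for the unstructured part.
  cur.execute("""
      CREATE TABLE IF NOT EXISTS events (
          id      serial PRIMARY KEY,
          user_id integer NOT NULL,
          payload jsonb NOT NULL
      )
  """)

  cur.execute(
      "INSERT INTO events (user_id, payload) VALUES (%s, %s)",
      (42, Json({"type": "click", "target": "signup"})),
  )

  # Query inside the JSON with the jsonb containment operator.
  cur.execute(
      "SELECT id, payload->>'target' FROM events WHERE payload @> %s::jsonb",
      (Json({"type": "click"}),),
  )
  print(cur.fetchall())
  conn.commit()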

I used MongoDB for a while; now I'm 100% back in SQL. Document DBs really increased my love for SQL. But I'm also far from being a NoSQL expert, and I never have to deal with data above a TB.


Before we make a technology choice, we should be clear what those choices are. SQL is a query language and DynamoDB is a database. "NoSQL" technologies can be fronted with SQL interfaces. Relational databases can vary a lot too in their ACID compliance ("I" being the component with maximum variance).

The choice of technology should first be based on the problem and not whether it is serverless. Choosing DynamoDB when your application demands strict serializability would be totally wrong. I hope more people think about the semantic specifications of their applications (especially under concurrency).


To your point, when I see the "shell script" solution it's often missing basic things you'd want, like version control or any kind of breadcrumb trail (I find myself SSHing in and running "grep -r" to find things). That works if you're cobbling something together, but it doesn't scale beyond 1 person.

It's tough to get the level of tooling right, because you can go ansible > docker > install an RPM, or just change the file in place. Both have their place, and "hacky" solutions can work just fine for many years.


The issue is, this works only for very simple systems. With complexity comes maintenance and operations. You also cannot build anything innovative for heavy-load systems, because you cannot embed your own highly performant solutions (the cloud provider simply does not support them out of the box).

Serverless sounds great for some kinds of systems, but in other cases it's a complete misfire. Our job is to know which tool to use.


>SQL can be managed with, say, RDS. Sure. But it's not the same level of managed as Dynamo (or Firebase or something like that).

It's managed at every shared hosting in the world.


Not at the same level, hardly anywhere. Replication, auto-scaling, failover and point-in-time backups are typically not managed for you. AWS and GCP have decent solutions now, but they're not nearly as hands-off in terms of management.

What specific managed services do you use, if I may inquire?

> That's why serverless is the future;

Future of what exactly?


The future of the pit they will dump money into.

The future of making nouns out of adjectives, apparently.

Future of server ?

Future of server is serverless? :P

Serverless is just somebody else's server.

No, "cloud" is somebody else's server. Serverless is somebody else's server that scales your services seamlessly (and usually opaguely).

I was riffing off of the phrase, "The cloud is just someone else's computer".

Serverless is the cloud?

Oh, I didn't realize we were doing trick questions.

What's the safest way to go skiing? Don't ski.


If you don't care about cost and don't care about someone else having all of your data, that is probably a fine route.

>> My goal is to be totally unable to SSH

Great goal. I guess you've never had to log in to a production server to investigate a business-critical issue.


I have had to SSH into prod boxes many times to debug emergent issues. Now I'm using all serverless, and I get a happy fuzzy feeling whenever I think about how I never ever have to do that again.

"The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong goes wrong it usually turns out to be impossible to get at and repair." -- Douglas Adams

It's more likely that parent has had to do that a lot of times, or perhaps has managed people who have. That would be a sign of organizational dysfunction, and fixing that is a good goal to have.

Good example for which I’d highlight a particular thing to note: The kinds of systems that minimize failure probability at a large scale are often not the kinds of systems that minimize failure at a small scale.

At a large scale (e.g. hundreds or especially millions of hardware nodes) the most common faults will be due to individual nodes / services / whatever failing, so you want a complex fault tolerant system to deal with those faults.

At a small scale (e.g. stuff that can fit on one or several servers) the most common faults are from the system itself, not from individual nodes. Here, using a complex system will drive up the likelihood of failures, especially when you don’t have a large team to manage the system.
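
To put rough numbers on the first point (the per-node failure rate here is made up): if each node independently has a 0.1% chance of failing on a given day, the chance that at least one node fails grows very quickly with fleet size.

  # P(at least one failure) = 1 - (1 - p)^n, with a made-up p = 0.001 per node per day.
  p = 0.001
  for n in (1, 10, 1_000, 100_000):
      print(n, round(1 - (1 - p) ** n, 4))
  # 1 -> 0.001, 10 -> ~0.01, 1,000 -> ~0.63, 100,000 -> ~1.0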


Our longest outage was our cloud provider (one of the big guns) turning off our entire account for 10 hours due to suspicious activity.

If I'm not mistaken, Azure recently went down in one region for a whopping 24 hours.

That's pretty scary for anyone committed to at least a three-nines SLA.


Grandparent speaks about a different scenario. In a similar vein: imagine your credit card blocks AWS payments, you don't notice, and then AWS payment reminders land in your spam folder. Boom, services out.

> As of 2016, Stack Exchange served 200 million requests per day, backed by just four SQL servers: a primary for Stack Overflow, a primary for everything else, and two replicas.

This was the most enlightening piece of the article for me. Their Alexa rank today is 48 globally (38 in the U.S.), so whatever your site is, you are probably not dealing with as heavy a load as they are. What techniques do you have to employ to serve this many requests from a single database server?


https://stackexchange.com/performance

https://nickcraver.com/blog/2016/02/17/stack-overflow-the-ar...

Lots of caching (redis, CloudFlare) and trying not to use the database unless absolutely necessary, I would expect.
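
The classic cache-aside pattern covers most of it. A rough redis-py sketch (the key scheme, TTL, and DB helper are all made up):

  import json
  import redis

  r = redis.Redis()

  def fetch_question_from_db(question_id):
      """Hypothetical stand-in for the real database query."""
      return {"id": question_id, "title": "..."}

  def get_question(question_id, ttl=300):
      # Serve from Redis when we can; otherwise hit the DB and cache the result.
      key = f"question:{question_id}"
      cached = r.get(key)
      if cached is not None:
          return json.loads(cached)
      row = fetch_question_from_db(question_id)
      r.setex(key, ttl, json.dumps(row))
      return row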


That's the beauty of serving mostly read-only data - you can just cache the most frequented pages. I can't think of a time that I clicked on a link for Stack Exchange that wasn't served via a search engine. I've never purposefully clicked a link from within Stack Exchange, and I've never posted any answers. I can thank those who do for enabling my success in my career, but I'm willing to bet that most Stack Exchange users are just like me.

"I've never purposefully clicked a link from within Stack Exchange..."

How do you resist those crazy unrelated questions in the sidebar? Currently shown for me:

  Did Shadowfax go to Valinor?
  Is it legal for company to use my work email to pretend I still work there?
  Why can't we play rap on piano?
  Can a virus destroy the BIOS of a modern computer?

These became so distracting that I applied an Adblock filter to hide the “Hot Network Questions” div, lol!

That Shadowfax one... Quora is horrible that way too. You're interested in one bit of canon trivia one time, and the site immediately assumes that particular story is all you want to read about, every day for months.

I bet Gwaihir carried Shadowfax to Valinor.


I don't think hot questions are tailored at all, are they? I often get questions from boards I never visited, so I assumed it's a global collection.

Also, caching at the correct granularity. I've seen too many systems that cache at a fine-grained level, and performance drops because there are so many cache lookups, even with an in-memory cache.
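
Roughly the difference between the following (redis-py; the keys are hypothetical):

  import redis

  r = redis.Redis()
  item_ids = range(50)  # hypothetical

  # Fine-grained: one round trip per item; the per-lookup overhead adds up fast.
  items = [r.get(f"item:{i}") for i in item_ids]

  # Coarser: one round trip for the whole batch,
  items = r.mget([f"item:{i}" for i in item_ids])

  # or cache the fully assembled page fragment under a single key.
  page = r.get("page:questions:1")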

They have details here: https://nickcraver.com/blog/2016/02/17/stack-overflow-the-ar...

Lots of caching, lots of RAM, SSD storage, and a low-level ORM for SQL.


well the "orm" is more like a database driver that can serialize to models than an orm.

But considering their stack, there are better "starting" alternatives (which they also used before Dapper), like Linq2Sql, EF Core, etc...

Also, compared to other languages, LINQ and EF Core are way better than most alternatives (aside from some things that just don't work, like lateral joins).


Caching and a whole lot of it. Most of their pages could be served by a CDN.

How much do you think stack overflow is reading vs writing? How often do you even click a link from a stack overflow page once there?


Upvotes are clicked a lot, but I guess they are just an API call and can be tuned for high performance.

I rarely click anything on stack overflow and neither do most people I know. The only ones I know who do that are stack overflow evangelists. I'd guess very few people have an account and use it.

I've taken to using upvotes for bookmarking, that way when I inevitably look the same question up again six months from now I'll know which of the answers I thought was most useful last time.

I'm in between. I have an SO score of just 1000, but I vote a lot, mainly because I'm logged in via OAuth anyway, so it's no hassle to do so.

it's also not super critical to show correct numbers in real time for that so there's that.

Upvotes are actually ideal for an in-memory service like Redis that fsyncs less frequently. Even if you lose the last few writes (highly unlikely, with some care), it's not the end of the world.
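
Something like the following is about all it takes (a redis-py sketch; the key scheme is made up, and it assumes a relaxed durability setting such as AOF with appendfsync everysec):

  import redis

  r = redis.Redis()

  def upvote(question_id):
      # Atomic in-memory increment; with relaxed fsync a crash can lose
      # at most the last second or so of votes, which is acceptable here.
      return r.incr(f"votes:question:{question_id}")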

Except that Stack Exchange is a very static site, so the caching and read ratio is probably very high. I'd guess 99% of people just type a question into Google, land on SE, and that's it.

Well, you have to remember that outside of the cloud's unknown black-box vCPUs, generic networking setups, and extremely underpowered storage options, there exists a different world where you can have a box with 200+ physical cores, a few TB of RAM, and extremely high-performance storage.

Simple: don't hit the database with every request. SO is mostly hosting static text content that updates infrequently and can be very heavily cached. You don't need to read the database to generate every pageview, quite the opposite.

It's the outcome of good architecture, application code, and the fact that single row lookups are already just as fast as a cache, especially in SQL Server running on highend hardware.

By the way, 200M requests per day is less than 2,500 requests/second (200,000,000 ÷ 86,400 ≈ 2,300), which is not much at all. In terms of actual throughput, they are not that big.


> By the way, 200M is less than 2500 requests/second which is easily doable.

They are probably not spread uniformly throughout the day...


They're also not all full page loads either. Their stats show only 66M of those 200M are pageviews.

and they're using that very un-hip Microsoft .NET

Which goes to show that the answer to "what technology should I use to start my startup in [current year]" is always "the one you know best", since Joel was a Microsoft employee before starting Stack Overflow and his other projects.

C# is great. It's a high-level programming language, and it's also super fast, handling ~7,000,000 requests per second on a single server according to the following benchmark:

https://www.ageofascent.com/2019/02/04/asp-net-core-saturati...


C#/.NET can be super fun and productive. Use what you know.

C# was great in 2008, then it got kind of enterprisey and closed - now it's ridiculous and gaining steam in all avenues of software engineering. It's arguably the most powerful language in the world.

How is it closed? It's now better than ever with the new cross-platform .NET Core with all kinds of runtime and language advancements.


Well, regardless of caching and a special ORM, doing 11K peak SQL queries per second at 15% CPU load is remarkable. They did a great job optimizing the database and its access patterns.

They probably also save a lot on the ops team - running a single SQL server with a failover is well understood, error-resistant, and covered in the vendor's guides and best practices.


Well, it's easier than you think (if you know what you are doing). I am serving around 90-100 million requests per day from one 4-core Xeon machine with most of the CPU idle, with around 2500 queries/s to Redis and ~2k queries per second to Postgres. The app is written mostly in Go. A lot of those requests are updates.

Where did you look for tuning Postgres? I am interested.

I would love to know as well.

What this article, and the comments on it at the moment, are missing is that developers choose many of these technologies because they are sexy and will help the developer get their next job.

Which of the following two developers has the better chance of breaking six figures next year:

"Used Hadoop, MapReduce, and GCP for fraud detection on..."

...or...

"Used MySQL and some straightforward statistics for fraud detection on..."

This is a big part of why all these things exist in places where they shouldn't. As a dev who always goes for the simplest solution first and has yet to break a hundred k at 40 ... I'm now spending my evenings trying to figure out how to deploy the latest technologies where they're totally not needed.


Maybe I’m just swimming against the tide, but I work at a “big N” company and I am more impressed by “saved X dollars”, “made process Y faster” or “built feature Z” than I am with a specific set of technologies.

I’ve interviewed a lot of incredibly bright people that didn’t know any technologies more modern than C++.


That's surely because C++ is the language To Rule Them All!

I jest partly, because every time I write something in C# or PHP (even with the half-baked "strong" typing they are introducing), I constantly curse and think how easy it'd be in C++


I do a lot of interviewing at a big company. It's easy to detect when people are cramming technologies on their resume, and it's a moderate negative signal. They aren't thinking of the customer, or of the problem; they're thinking about the tech. That gets in the way of good design and is a red flag, in my book. I'm not at all alone in this, so I'm not sure it's as clear as you make it out to be.

Same here, but I can tell you from experience with group interviews that while we're definitely not alone, we're probably in the minority.

The competent software teams and companies don't hire by keyword though, they look at actual output and results delivered.

I have this silent debate with my engineers from time to time when one of them gets an itch they feel the need to scratch with an industrial strength back-scratcher. I usually go "lawyer mode" and ask them question after question to justify their choices. They either forget their itch or realize that rubbing themselves against a wall will fix it. I understand their desire to put en vogue frameworks on their resume, but I can't have someone's flight of fancy fucking up our tech stack.

I actually think it's incredibly insulting when people assume that engineers are choosing frameworks just for their resume. Most, in fact, are looking to tools like React, Spark, and Kafka because so many other engineers are using them with success, and so they think they will have the same success.

But then they didn't have the context for why those tools were chosen, and so often the tools aren't suited. But I've never met anyone in the last 20+ years, across thousands of engineers, who was doing it for their resume. In fact, the best thing for your resume is for the project to be a success anyway.


I've followed people managing tech teams and I think, like anyone else, engineers spend more time doing things they like and are interested in and slow-walk things they aren't. Trying to align those incentives with the bigger project or the company is what managers do.

Looking at new tech is way more interesting than doing the same thing you've been doing for 10 years. Its good to have some mechanism in place to vet that since I know I don't trust myself to always make a good choice (and in my opinion I have excellent taste).


A perverse incentive I've noticed at almost every company I've worked for is that the guy who spends every day slugging it out in a bash terminal to keep a ten-year-old service running won't ever get promoted. But the guy who tries to rewrite it in Apache Flamboozle and AWS Fuzzworks will get promoted long before his house of cards tumbles down.

The end result is that not only is the routine stuff boring but it's also career-limiting.


I think any support department (tech or not) has what I think I've heard others call the "janitor problem." If you're doing your job and the trash is taken out, nobody notices.

I got advice when switching jobs that I had to be my own advocate and nobody was going to do it for me. I think the advice was meant specifically for that company, but I think it applies to most behind-the-scenes roles.

Similarly, everyone is excited about a new UI while a bad backend can only drive you away. Massive improvements and scaling can only really be measured by new dollars brought in (but that was due to sales and marketing, right?)


> I actually think it's incredibly insulting when people assume that engineers are choosing frameworks just for their resume

I don't find it insulting at all. Reality is that many, if not most, recruiters are looking for candidates that have experience in whatever technologies are hot at the time. Expertise in a sought after technology can be worth tens of thousands of dollars extra in annual compensation. It's natural that some people will try to use that technology to further their career if possible. And I think to a degree, a lot of people base their job decisions on what they'll get to work on. 'Is it a stack that is in demand and growing?' 'Is it a stack that might be difficult to learn but pays really well?'

Now the sensible thing to do is to not shoehorn in some tech if it doesn't make sense for a given application. And beyond that, a big reason people are tempted to chase the shiny new object is that the IT recruiting process is broken.

I just see it as people trying to do the best they can for their career, usually there's no malice involved.


> I actually think it's incredibly insulting when people assume that engineers are choosing frameworks just for their resume.

Maybe, or maybe it's just realistic in a world where common advice is not to stay at any company for more than 2 years.


My experience is engineers get heavily judged for not having the latest bro tech on their resume.

Mostly younger people are choosing to jump around jobs.

The common advice and wisdom is actually not to do this.


> The common advice and wisdom is actually not to do this.

This is a long thread unto itself, but in an environment where many companies have zero loyalty to their employees and would rather hire more experienced people than train and promote their existing employees who already know the ins and outs of the company...

Job hopping is often the fastest way to more money and more crucially, more responsibility which means more personal growth.


The old fashioned advice was to do this, at a company that paid you for long service.

Those days are mostly dead.

While you might pause for fear of burning your bridges by leaving a project in the lurch, staying at a company for over 2-3 years is not the way forward in a career anymore - and that is very much the common advice and wisdom nowadays.

This isn't because young people want to, this is because you tend to get huge jumps in wage jumping between companies, while simultaneously HR seems to be allergic to giving good raises.


The common advice and wisdom from what I've been hearing is that it's perfectly acceptable (especially if you are paid under the market rate), but not to do it too frequently or too fast. There was some evidence of within-company promotions being poorer than just switching jobs.

It's not just for their resumes, it's for the pleasure and challenge of learning something new, the feeling of confidence and with-it-ness when you get to say "yeah, we use that, I set it up" when people are talking about the latest and greatest thing, the fear of missing something transformative and getting left behind. It all boils down to nurturing a person's confidence that they are good at what they do and their ability to sound that way to other people, so "resume" for short.

And even if it is just for the resume, it's smart and understandable. Consider the difference between, "Eh, I evaluated some NoSQL options here and there, but it seemed safer to stick with PostgreSQL since we didn't have a compelling reason to experiment. PostgreSQL works fine at the scale of our product and we're very familiar with it," versus, "Yeah, we used Mongo for a project and it was a shit show, switched to DynamoDB which has been solid. The product is still 80% on PostgreSQL, but we use Cassandra for analytics, and of course we run InfluxDB for Grafana, which we're going to replace with Prometheus for better scalability." These could be two people facing exactly the same series of engineering choices, with the first guy making the better decision every single time, but the second guy sounds like the kind of curious and hardworking person you'd want to hire, while the first guy sounds maybe... stuck? Counting the days to retirement? Maybe boring SQL guy has saved his company a ton of engineering work that they were then able to invest in product work instead of engineering, but when he and NoSQL dilettante guy are both interviewing at a new company that wants people who are "curious," "passionate," "dedicated to learning and growing their skills," and "ready to meet new challenges head-on," he has to be worried about sounding like a dud.

> But I've never met anyone in the last 20+ years, across thousands of engineers, who was doing it for their resume.

It's not something you can distinguish from being eager to learn and overenthusiastic about new tech, which we all are to some extent.


Unfortunately, people can and do indulge in exactly this behaviour, and there does need to be some pushback against it. Individual self-interest must be counterbalanced by what is in the interest of the business when the two are not aligned.

I have seen this happen before, and when the interests and ambitions of an individual are counterproductive to the direct needs of the company, that can be deeply problematic.


I doubt it's usually for the resume, per se. But I do think peer pressure is a very strong force, as are aspirations. Most of us would rather be able to tell our friends, "We're using this huge and complicated but massively scalable Kafka cluster", than, "Yeah, our message queue is a database table. No, it isn't web scale. No, we don't anticipate ever growing large enough for that to become a problem."

Also, we like playing with new toys. Solving the same old problems the same old way we were doing it 20 years ago just isn't sexy.


Oh come on, it's not just developers. Non-technical higher-ups also read some articles, and now I have to come up with something that has "microservices, Docker, blockchain, AI" in the name to get budget for a simple web API. Hell, it doesn't even matter if we actually do it that way; just fill in the buzzword bingo correctly and you win budget money.

>Most in fact are looking to these tools e.g. React, Spark, Kafka etc because so many other engineers are using them with success.

This is not a real criterion for choosing anything. You can say the exact same thing about ASP.NET WebForms, MySQL, MSMQ, or pretty much anything that was popular at any point in time. If you're only looking for "success" stories, you're not doing due diligence as an engineer. Who defines "success" anyway? The same person who chose React, Spark, and Kafka in the first place, and whose salary is directly proportional to how hyped-up those technologies currently are?


"ask them question after question to justify their choices"

Asking questions is not assuming. I think you're letting yourself be triggered by a flippant remark at the end of an otherwise solid comment.


It's frankly bizarre to have a knee-jerk reaction that new technology is worse. New technology is probably better, given that it was created by people who had more context about both problems and solutions than was available when older technology was created. When I hear people talking about doing the same thing for 10 years I wonder what problem they are working on that could be so complex that in 10 years they have not managed lights-out automation for it. My suspicion is that attitudes like yours are not only wrong but also massively destructive of value.

Honestly this is why I have a homelab and side projects. If there's an interesting tech that I think may be applicable (or is just interesting), I'll spin it up at home and give it a run.

Keeps me sharp and gives me an opportunity to explore tech that isn't in my day-to-day wheelhouse.


Bad news: pretty much anybody smart and talented wants to work on new, exciting, bleeding-edge stuff.

If you are still running on Tomcat, good luck getting the market's genius-level talent to work for you.

You would also do well to learn what happens to workplaces where smart people don't work, or don't even want to work.


I used to agree with this, but then I tried to find a new job without knowing the en-vogue frameworks. Now I make sure I include some fashionable tech; it's important for people's careers.

Or maybe you discover that it's an industrial-size itch and the chosen back-scratcher is needed.

Almost never.

Often there's an aspiration to reach Google-level operation. Invariably some director/VP insists on building for that future. Then it turns out sales can't break into the market and adoption is low.

Now you have a neglected minimum viable product because you're scaling that skeleton instead of adding features your existing customers want. Or you're delivering those features at a slow pace because you're integrating them into two versions of the software: the working one and the castle-in-the-sky one. Then there are all sorts of other things that throw a monkey wrench into your barely moving gears: new regulations, competitors nipping at your heels, pivots, demands from management for "transparency and visibility" into why it's taking you forever to deliver their desired magnum opus.

Products that reach Google scale probably get there without even noticing it because they're correctly iterating for the marginal growth.


yes and no. there is a time for building and a time for learning. I actually encourage people to think how they would build something “google scale”. it’s not meant to be what pays the bills - but it can have dramatic effects on motivation, productivity and making the thing that pays the bills better. you see, once really smart people are free to dream they understand the cage they’re in better. timebox it and share what you’ve learned.

I'm having flashbacks to when, at a startup in 2000, our CIO insisted we needed an EMC Storage Solution. "We're going to be _huge_!" (Over my written objections, mind you.) I spent precious time and huge amounts of money to build out the colo cages to fit his folly and then the company promptly went belly up. Good times.

You should optimize for the more likely scenario.

Could not agree more with this. Whether it is data warehousing, a maze of micro-services, or machine learning for your basic CRUD app, you should look into whether you actually need it and whether it helps solve a real problem you have. A basic stack with Rails/Django and Postgres can get you quite far. This is often as much as most companies/startups ever need.

I also personally love the callout to Joe, who, despite being a professor, is consistently practical about when approaches do or don't make sense and at what scale.


Not sure what reality people are living in.

But a basic stack can only be used for basic systems and basic problems. And there just aren't many of those going around anymore, as they've all been solved. Or, more commonly, nobody (including the business) is interested in delivering something basic. They want to innovate for their users.


I'd argue the complete opposite: every company and their mother is online these days, and nearly always they're a) doing something that's been done before and b) nowhere near the amount of throughput that would require some kind of overcomplicated setup.

No, your company isn't special, and "innovating for your users" is probably the worst thing you could do, compared to delivering a good product using established practices that are tried and tested to deliver results.


I think I'm on a different page here. If a business wants to sell an innovative product, that doesn't mean they have to have an innovative database for purchase orders.
