Google Kubernetes Engine's third consecutive day of service disruption (cloud.google.com)
414 points by rlancer 7 hours ago | 257 comments





I am currently evaluating GCP for two separate projects. I want to see if I understand this correctly:

1) For three whole days, it was questionable whether or not a user would be able to launch a node pool (according to the official blog statement). It was also questionable whether a user would be able to launch a simple compute instance (according to statements here on HN).

2) This issue was global in scope, affecting all of Google's regions. Therefore, in consideration of item 1 above, it was questionable/unpredictable whether or not a user could launch a node pool or even a simple node anywhere in GCP at all.

3) The sum total of information about this incident is a few one- or two-sentence blurbs on Google's blog. No explanation or outline of the scope of affected regions and services has been provided.

4) Some users here are reporting that other GCP services not mentioned by Google's blog are experiencing problems.

5) Some users here are reporting that they have received no response from GCP support, even over a time span of 40+ hours since the support request was submitted.

6) Google says they'll provide some information when the next business day rolls around, roughly 4 days after the start of the problem.

I really do want to make sure I'm understanding this situation. Please do correct me if I got something wrong in this summary.


We had an issue a few weeks back where all nodes in west1-a could not pull Docker images. Google support was pinballing the P1 issue around the globe and across multiple teams for a few days until I root-caused it for them - it turned out to be a GCE service account issue affecting the entire zone. It took 2 days to roll back (no status page update). I know nobody gives a fuck, but I can't help but feel vindicated as an ex-Google SRE.

I think a lot of people give a fuck here; I do, at least. Thanks for outlining it, these things are fascinating (to me anyway, who has never worked in IT/ops).

We have been GCP customers for the last couple of years. We use other cloud platforms (AWS, IBM, Oracle, OrionVM) too. We don't use GKE, but we run a Rancher/Kubernetes combo on their standard platform.

So far GCP is the best, hands down, in terms of stability. We have never had a single outage or maintenance downtime notification until now. We are power users, but our monitoring didn't pick up any anomaly, so I don't think this issue had a rampant impact on other services.

But I find it concerning that they provided very little information on what went wrong. I also think it's better to expect zero support out of any big cloud provider if you don't have paid support. Funny how all these big cloud providers treat you as ineligible for support by default. Sigh.


I use the AWS free tier and get customer support through email, but that's not the case with GCP. Do they provide free email support?

If you are an early-stage startup, can you afford their $200/month support when your entire GCP bill is under $1? That said, not paying for support shouldn't mean they don't have to support you at all.


I agree with this. Compared to AWS, when Google says it's down, it's down, and that's rare. When they say it's up, it's up.

I don't understand why someone would choose to deploy anything mission critical without having a support contract with the ISP, the manufacturer of the software, etc.

Simple, the cost of an outage is less than the cost of a support contract. Very few things are really mission critical as in they can never go down. Rather they simply have a cost to going down and you can choose to pay that one way or another.

I transitioned from colocation to a self-managed remote server farm and then on to self-managed remote VMs. All those providers offered de facto support whether we opted for a contract or not. You can go to their portal and raise a ticket.

I am not saying that's feasible at their scale, but the big cloud providers don't even give you the opportunity to raise a ticket when the fault is theirs. There is an extra price you pay when you opt for any one of them that many don't realize. Having said that, almost all the time our own expertise is better than their first two levels of support staff. We realized that early, so we handle it by going through the documentation and making our code resilient, since all cloud platforms have some limit or another - overselling in a region is something they can't avoid. Handling those exceptions and spreading across multiple regions is the only way through.


You're doing me a scare. I'm in the evaluation phase with them. Maybe I'm missing something here, but this is not at all what the linked post says.

"We are investigating an issue with Google Kubernetes Engine node pool creation through Cloud Console UI."

So it's a Cloud Console UI issue; it appears you can still manage node pools via the CLI:

"Affected customers can use gcloud command [1] in order to create new Node Pools. [1]"

Similarly, it was actually resolved on Friday, but they forgot to mark it as such.

"The issue with Google Kubernetes Engine Node Pool creation through the Cloud Console UI had been resolved as of Friday, 2018-11-09 14:30 US/Pacific."


We've had no issues deleting and creating node pools this weekend (on asia-east1-a). No other problems noticed either.

I've been failing all weekend to create nodes in a GKE cluster through either the UI console or gcloud. Even right now I can't get any nodes to spin up.

I can't comment regarding GKE as we don't use that particular service, however we are very heavy users of many other GCP services, including Compute, Datastore, BigQuery, Pub/Sub, Storage, Functions, Speech, and others. Zero issues this weekend, everything is running 100% as any normal day.

> For three whole days, it was questionable whether or not a user would be able to launch a node pool (according to the official blog statement)

What blog statement are you referring to? I don't see any such statement. Can you provide a link?

The OP incident status issue says "We are investigating an issue with Google Kubernetes Engine node pool creation through Cloud Console UI". It also says "Affected customers can use gcloud command in order to create new Node Pools."

So it sounds like a web interface problem, not a severely limiting, backend systems problem with global scope.

Also, the report says "The issue with Google Kubernetes Engine Node Pool creation through the Cloud Console UI had been resolved as of Friday, 2018-11-09 14:30 US/Pacific". So the whole issue lasted about 10 hours, not three whole days.

> Some users here are reporting that other GCP services not mentioned by Google's blog are experiencing problems

I don't see much of that.


Right now we don't know. It's one of two possibilities from what I can tell:

a) Google has had a global service disruption that impacted Kubernetes node pool creation and possibly other services since Friday. They had a largely separate issue for a web UI disruption (what this thread links to), which they forgot to close on Friday. They still have not provided any issue tracker for the service disruption, and it's possible they only learned about it from this Hacker News thread.

b) People are having various unrelated issues with services that they're mis-attributing to a global service disruption.


This is why GCP has no hope of ever taking significant market share from AWS. Google thinks they can treat their cloud customers like they treat users of their free services. Customer support and communication are essential.

As if something like this has never happened to AWS?

I'm not sure about the market share, but I agree with the last two sentences.

...and I'm a happy GCP customer.


“2) This issue was global in scope, affecting all of Google's regions. Therefore, in consideration of item 1 above, it was questionable/unpredictable whether or not a user could launch a node pool or even a simple node anywhere in GCP at all.”

OK. So on AWS we were paying to put systems across regions, but honestly I don't get the point. When an entire region is down, what I have noticed is that all things are fucked globally on AWS. Feel free to pay double - but if you are paying that much, it seems you might as well pay for an additional cloud provider. Looks like it's the same deal on GCP.


> When an entire region is down what I have noticed is that all things are fucked globally on aws.

Do you have an example on this?


On 17 October, there was a multi-AZ network failure at us-east-1. It only lasted 3m35s, but it was enough that our customers were calling about our site being down.

Just grabbed the first article I found. Example: in this case Capital One went down. I don't work at Capital One - but I imagine they had their data copied across every region 30 times.

https://www.geekwire.com/2018/widespread-outage-amazon-web-s...


I think you're much too optimistic about Capital One. They probably had a single point of failure, possibly one they didn't realize they had.

We had an issue a few weeks ago where the Google front-end servers were mangling responses from Pub/Sub and returning 502 responses, making the service completely unusable and knocking over a number of things we have running in production. Despite paying for enterprise support and having a P1 ticket in, we had to spend Friday to Sunday gathering evidence to prove to the support staff that there was indeed a problem, because their monitoring wasn't detecting it. Right now I'm doing something similar (and since Friday!) but for TLS issues they're having. Again, because their support reps don't believe there's a problem. There are so many more problems than they ever show on their status page...

They work for Google, so obviously they are much smarter than you. If there's a problem, it's probably the customer's fault. /sarcasm

I was so mad to read that until you said /sarcasm :p

That being said, I really do think there is a difference between who is working at Google today and the Google we all fell in love with pre-2008.

I am sure there are amazing people still working at Google, but nowhere near as many as there were.

The way I like to think about Google is that some amazing people made an awesome train that builds tracks in front of itself -- you can call them gods, maybe -- but those people are gone, or at least the critical mass required to build such a train has dwindled to dust. What we have left is an awesome train full of people pulling the many levers left behind.

To make things even worse, my last interview as an SRE left me wondering whether the people who are there know this as well, and are actually working hard to keep out those who might shine a light on it. I don't say that because I did not get the job -- I am actually happy I was not extended an offer.

I say this with one exception: the old-timer who was my last interviewer. I could tell he was dripping in knowledge and eager to share it with anyone who would listen. I came out of his 45-minute session having learned many things -- I would actually pay to work with a guy like that.

I would also like to point out that the work ethic was not what I expected. I was told that when on call, my duty was to figure out whether the root cause was in the segment I was responsible for. I don't know about you, but if my phone rings at night, I am going to see things through to a resolution and understand the problem in full -- even if it is not in the segment I was assigned.

/end rant


The work ethic is intact. It is not fair to load people with stress and ask them to drop everything. You're conflating poor resource allocation with "work ethic". Burning the midnight oil when it can be avoided is not work ethic. The correct way is to load-balance outage resolution.

I really don't know how to reply to you, as you have set up a bunch of windmills based on assumptions about my previous post. Who said anything about poor resource allocation? Who said we need to load people with stress?

That being said -- when you are on call -- dropping everything is exactly what is expected.


Many, many AWS people have left citing on-call as the worst part of their job. Also, it is a really far-fetched allegation that interviewers try to fail interviewees to hide their own incompetence. I get that you may not think much of Google SREs, but to allege that is just in bad spirit. I hope you do get to see that the people inside Google are one of the best perks of working here. They are smart and motivated and willing to help each other.

But limiting responsibilities for on call employees is a way of limiting the workload they have during that time - ultimately benefiting the employees. They are on call, not working. I don't see the windmills here.

Yes this was my point.

Many of the tier 1 GCP support reps work for external vendors nowadays, which is probably part of the problem.

During my time on the GCE team (note I don't work at Google now) I knew multiple full-time Google employee support reps, including some still at the company. They have the good attitude and deep knowledge you'd hope for.

The problem is simply about how Google scales their GCP support org. To be completely clear, AWS support is by and large not great either.

If you're a big or strategically important customer, of course, you can get a good response from either company.


80% of my support experiences are laughably bad.

20% of my support experiences are amazing.

Fortunately, I don't require decent support to keep my service running. My sales rep tells me that he's aware of the problem.

I speculate it's simply the result of GCP trying to grow the org very quickly.


I totally heard that when trying to get engineer attention to YouTube Premium's frequent "download errors" due to their transcoding-on-the-fly (or something). I was telling a support rep I had evidence that if I switched the setting from standard to high def (or vice versa) the error would go away, but that I could reproduce it with the same video, and that I thought it was a CDN/transcode issue. They kept marking the ticket as "unable to reproduce" and I had to wonder -- as a paying customer, don't they have analytics on my phone that would show exactly the request I was making which was failing in their logs? And if they saw it succeed, why not tell me the problem was my ISP? I'd have been happy to follow up... but nothing's ever wrong in Google-land. :/

In general, GCP has quota limits, so it's expected that customers catch 5xx responses and do exponential back-off. But this info is not explicitly stated.
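A minimal sketch of that kind of retry-with-back-off (illustrative only; the function name, attempt count, and delays are my own choices, not GCP guidance):

  import random, time
  import requests

  def get_with_backoff(url, max_attempts=5):
      for attempt in range(max_attempts):
          resp = requests.get(url)
          if resp.status_code < 500:
              return resp  # success or client error: don't retry
          if attempt < max_attempts - 1:
              # exponential back-off with jitter: ~1s, 2s, 4s, ... plus up to 1s of noise
              time.sleep(2 ** attempt + random.random())
      return resp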

From my personal experience, I think all the big cloud providers' first two levels of support staff are no good unless the issue is an obvious dumb mistake on your part. I always prefer to forgo support and go through every bit of their documentation to figure things out on our own. This saves a huge amount of time. But if you have developer support, it can help expedite things a little faster.


Did they ask you for a screenshot?

That's my favorite.


"The data says engagement is down 46%, I think its time we drop the product."

- Someone at Google right now, probably.


I can assure you that's not the case! Also, while people like to repeat this meme, Google Cloud does have a formal deprecation policy (https://cloud.google.com/terms/), whose intent is to give you some assurances.

(I work at Google, on GKE, though I am not a lawyer and thus don't work on the deprecation policy)


> Google may discontinue any Services or any portion or feature for any reason at any time without liability to Customer

for any reason

at any time


Nice job cherry picking text.

> 7.1 Discontinuance of Services. Subject to Section 7.2, Google may discontinue any Services or any portion or feature for any reason at any time without liability to Customer.

Let's take a look at Section 7.2:

> 7.2 Deprecation Policy. Google will announce if it intends to discontinue or make backwards incompatible changes to the Services specified at the URL in the next sentence. Google will use commercially reasonable efforts to continue to operate those Services versions and features identified at https://cloud.google.com/terms/deprecation without these changes for at least one year after that announcement, unless (as Google determines in its reasonable good faith judgment):
>
> (i) required by law or third party relationship (including if there is a change in applicable law or relationship), or
>
> (ii) doing so could create a security risk or substantial economic or material technical burden.
>
> The above policy is the "Deprecation Policy."

To me that looks like a reasonable deprecation policy.


> To me that looks like a reasonable deprecation policy.

It might be, until they jack up the prices 15X with limited notice (looking at you, Google Maps [1]). No deprecation needed, just force users off the platform unless they're willing to pay a massive premium.

[1] https://www.google.com/search?q=google+maps+price+increase


Google Maps has never been subjected to that policy, unlike GCP services. These org chart divisions are real but only clear to Googlers, Xooglers (I'm in this category), and people who pay extremely close attention.

The fact that they're all Google makes reputation damage bleed across meaningfully different parts of what's in truth now a conglomerate under the umbrella name Google.


The maps price gouge is yet another reason I will not use google services for anything but ancillary services.

It's OK, I guess, but it still lets them turn it off if, in their judgement, it's an economic burden, i.e. costing them money.

If they ever do deprecate something people have built on though they're gonna get absolutely crucified. That's probably better protection than any terms of service.


> If they ever do deprecate something people have built on though they're gonna get absolutely crucified.

They do this all the time, and they get crucified every time. I built a Google Hangouts app and a Chrome App; both platforms were eventually shut down.

This is where the meme came from, and it's why I personally stopped building on top of Google products. A 1-year deprecation policy is no assurance to me if I plan for my app to live longer than that.


Their approach to things like GCP is very different than their approach to those other areas of Google. But they don't separate their branding or unify their deprecation attitudes enough to avoid cross-org-chart reputation damage like this.

> It's OK, I guess, but it still lets them turn it off if, in their judgement, it's an economic burden, i.e. costing them money.

If a service Google runs is losing money, what reason would they have to not shut it down?


With this terms of service, none. Which is why people don't trust them.

If I pay you for a service that would take time to migrate off of, and you are making money off me now, I am going to be ripshit if you decide to just turn it off because it's suddenly not making money for you in the short term. Google's done this a lot, and the fact that they don't provide concrete timelines in their contract gives even less reason to trust them.


It's not about the contract. AWS doesn't even have a deprecation policy in the contract - seriously, GCP provides more legally binding guarantees than AWS. It's about trust.

People look at AWS's track record and trust it. People look at Google's track record, overlook organizational boundaries and product lifecycle definitions that look dramatically significant from an inside-the-company Googler perspective but are very poorly communicated outside the company, mentally apply reputational damage from one part of Google (or from a preview-stage GCP product) to a different part of the company (or to a generally available GCP product), and don't trust it.

Google has always been worse at externally facing PR than at the internal reality, even when I worked there (2011-2015). Major company weakness.

But the internal reality inside GCP, perceptions aside, is pretty good even now.


This is the subtle, but important, difference between SaaS and PaaS/IaaS. Services are there to use; platforms are built upon. Flickr is a service: if they shut down, I'll just move to another one. GCP is a platform: if they shut down, I have to re-architect the entire thing from scratch.

If it's costing them money, they haven't yet figured out a model that works in their favour.


Customers won't pay money in the first place to use a service that may vanish out from under them. I expect a cloud service provider not to offer a service unless they think it is going to be profitable, and I expect them to continue to offer it even if it turns out not to be profitable, because otherwise I will take my business to a cloud service provider that will give me that guarantee.

Let's not forget that Google can change the terms as they please with 90 days' notice per "Section 1.7 Modifications" of the Terms. So any promise longer than 90 days, even without an escape hatch like Section 7.2, would be legally weak and subject to change at any time without much recourse.

>Subject to Section 7.2

Which is the deprecation policy. (I mean I share your frustration with Google's what-appears-to-be-at-least haphazard policy of shutting down services instead of trying to gain traction. But, let's not misrepresent what they say).


I thought that 'subject to 7.2' meant that they can use the escape hatch there - 'substantial economic or material technical burden'? They can list anything under that.

I don't think it's wrong - they can deprecate any service they want whenever they want, unless people have paid for and signed a contract that says otherwise, which I guess people aren't doing.

But the policy doesn't really guarantee anything at all, does it, given the escape hatches it references? It might as well not exist.


The preceding line is the important part, which is essentially:

"Subject to the deprecation policy [which says that Google will give at least 1 year notice before cancelling services], Google may discontinue..."

In other words, at any time, Google can give you a year's notice.

(I work at Google, but am not a lawyer and this isn't official in any capacity).

Please don't selectively quote things out of context to give a misleading impression.


But what do these things mean?

> commercially reasonable

> substantial economic or material technical burden

Is one engineer working on an old service to keep it alive commercially reasonable or a substantial burden? I don't know. Do you?

In practice this policy lets them shut off anything they want, any time they want. Again, it's their playground; they can do what they want unless they signed a contract saying they'd do something else for you, so I don't have a problem with it.


I think you're ascribing an unreasonable amount of bad faith here, and, to rephrase what I had here before, you're approaching this from an engineering perspective, not a legal one. And that's not how those things work.

To be clear, that policy is a contract. And those things would be decided by a jury. And if my understanding is correct, the reasonable person standard applies. So you can answer this yourself, do you think a reasonable person would believe that your interpretation is valid?

If not, why mention it?


>If not, why mention it?

Because it makes more people feel comfortable enough to use your services and pay you, without actually binding you towards any sort of behavior that would cost you money. There's a direct financial incentive here to use legalese to give the semblance of reliability without having to deliver on it


Google lost our faith after shutting down countless services.

But hey, it's all spelled out in the policy, so don't say we didn't warn you!

Caveat emptor, folks.


What happens when they suddenly deprecate the deprecation policy?

Nothing, sort of. Subject to "Section 1.7 Modifications" of Terms:

b. To the Agreement:

Google may make changes to this Agreement, including pricing (and any linked documents) from time to time. .... Google will provide at least 90 days’ advance notice for materially adverse changes to any SLAs by either: (i) sending an email to Customer’s primary point of contact; (ii) posting a notice in the Admin Console; or (iii) posting a notice to the applicable SLA webpage. If Customer does not agree to the revised Agreement, please stop using the Services. Google will post any modification to this Agreement to the Terms URL.


I'm pretty sure he just forgot the /s (sarcasm) on his post, but this was pretty cool information anyway, so thanks!

I think it’s telling of Google’s culture that the corporate arm felt the need to formalize this in law. I won’t pretend to know what it’s telling. Just suggest that you listen for yourself. Look at rule of law versus the ideas of liberty if you’d like a stronger nudge.

AWS' Customer Agreement[1] essentially has the same language. I wouldn't be surprised to see similar language from other cloud providers as well. Seems rather prudent on their part.

[1] https://aws.amazon.com/agreement/


I would suggest the inference I was alluding to would also apply to Amazon.

Note how I never stated the inference. This is because I wanted to share a way of thinking without feeling the responsibility to reply to people attempting to force me to prove some prescriptive, arbitrary inference rule by exhaustion. I do not participate in such practices casually. I also consider it rude to subject people to such practices without consent. I also believe it is a practice that kills online discussion platforms. See this community’s thought provoking guidelines :)

> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.


Hi - I work at Google on GKE - sorry about the problems you're experiencing. There's a lot of people inside Google looking into this right now!

It looks like the UI issue was actually fixed, and that we just didn't update the status dashboard correctly. But we're double checking that and looking into some of the additional things you all have reported here.


The status dashboard is inaccurate and/or a lie. It only mentions the GKE incident, while in fact the problem also impacts Google Compute Engine users. I was unable to create any Compute Engine instance today, not even a basic 1-vCPU one, in NA or europe-west.

As another comment pointed out, what's the point of having so many zones and redundancy around the globe if such a global failure can still happen? I thought the "cloud" was supposed to make this kind of failure impossible.


> I thought the "cloud" was supposed to make this kind of failure impossible

You have to remember that you're trying to have access to backend platforms and infrastructure at all times, which almost no public utility does (assuming "the cloud" is "public utility computing"). Power plants go into partial shutdown, water treatment plants stop processing, etc. Utilities are only designed to provide constant reliability for the last mile.

If there's a problem with your power company, they can redirect power from another part of the grid to service customers. But some part of your power company is just... down. Luckily you have no need to operate on all parts of the grid at all times, so you don't notice it's down. But failure will still happen.

Your main concern should be the reliability of the last mile. Getting away from managing infrastructure yourself is the first step in that equation. AppEngine and FaaS should be the only computing resources you use, and only object storage and databases for managing data. This will get you closer to public utility-like computing.

But there's no way to get truly reliable computing today. We would all need to use edge computing, and that means leaning heavily on ISPs and content provider networks. Every cloud computing provider is looking into this right now, but considering who actually owns the last mile, I don't think we're going to see edge computing "take over" for at least a decade.


This is unfortunately the norm. Like when AWS S3 went down (but couldn't update its own status images because they're in S3 and we all laughed) and along with it went Alexa, lambda, and every other service dependent on S3.

S3 is really one of the few services on AWS that can do that, unfortunately. It has no concept of zone/region; it's truly global. To me it seems like a serious design flaw, as everything else in AWS is striped by region, but I'm not sure why exactly it was built like that.

edit:

Never mind, S3 has regions; it's the bucket names that are global.


Buckets are globally addressable because they planned for each S3 bucket + object key to have an associated URL (actually several), and URLs are a global namespace.

http(s)://<bucket>.s3.amazonaws.com/<object>
http(s)://s3.amazonaws.com/<bucket>/<object>


URLs would have to be global, but why the buckets themselves? It seems like a many-to-one relationship would easily be possible.

Given the historical context (S3 launched 12 years ago, 5 months before EC2 launched with the us-east-1 region), it's reasonable that S3 buckets were global because regions didn't really exist yet as a concept.

If you look at the docs now[1], new buckets are regionalized and the region is in the URL for non-us-east-1 regions.

[1] https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_...
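For what it's worth, creating a bucket outside us-east-1 today looks roughly like this (bucket name and region are placeholders); the name still has to be globally unique even though the bucket itself lives in one region:

  aws s3api create-bucket --bucket my-example-bucket --region eu-west-1 --create-bucket-configuration LocationConstraint=eu-west-1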


Got it, makes sense. So the only way you'd have a global bucket is if you created it before a certain date (whenever they were regional-ized, I assume). Thanks!

> I was unable to create any google compute instance today, not even a basic 1vcpu, on NA and Europe-west.

I've been creating GCP instances in us-central1-a and us-central1-c today without issue. Which zone were you using in NA?

I have been noticing unusual restarts, but I haven't been able to pin down the cause yet (may be my software and not GCP itself).


Tried on us-east, us-north, europe-west, also tried asia, with different instance sizes and with both UI and CLI. None worked for me.

Have not seen any restarts this weekend, and we have several hundred instances on GCE.

Thanks! I'm running Skylake 96-core instances, but I haven't yet gotten around to trying the 64-core instances for comparison. If I get another restart, I'll do a 96 vs. 64 test to try to narrow down the cause. Most likely, of course, this is a software issue on my end, not Google's.

I’ll suggest considering whether entities enamored with centralizing ideals are more likely to fail to properly realize the robustness of a distributed system.

> I thought the "cloud" was supposed to make this kind of failure impossible

If set up properly and utilized correctly, yeah. But it's not a perfect world.


We have created GCE instances in several US regions without any issue today. Last one was 10 minutes ago in west2.

I appreciate all the effort you're putting in, and I understand such situations can be stressful, but users having to depend on someone responding on Hacker News for status updates seems really amateurish for an organization the size of Google.

The default is : https://status.cloud.google.com/incident/container-engine/18...

People who respond here may well be Google employees who care about it and respond here because they know about it.

What he can say (that a lot of people are working on it) is what you would already suspect when something is going down. All the other cloud providers do the same.


The default you linked to has not been updated in 2 days... which is my whole point regarding having to rely on Hacker News for any status updates.

edit: The default is also only about the UI issue, and there's no issue tracker for the broader non-UI disruptions going on since Friday.


Even an update of "no change" is tremendously valuable.

Strangely and sadly, with Gmail account blocking and other such issues, HN and Twitter are often a better way to get Google's attention than contacting support.

>really amateur for an organization the size of google.

There is a reason why Google has been having a hard time making inroads in the enterprise cloud. There's a kind of impedance mismatch between enterprise and the Google style. That two-story-high "We heart API" sign on the Google Enterprise building facing 237 just screams it :)


As much as I love bashing big corps, I see HN as a supplementary communication channel for products like GCP - it's a luxury we get to access alongside the normal customer support channels in the GCP console, Twitter, etc.

Let me put it this way: Hacker News, or in fact any news outlet, is not official. Customers should be getting emails from Google and should be informed on its official web page about what's going on. You don't want your neighbor telling you that you owe taxes; you want the government to send you a notice.

A critical service is failing, with minimal information about why, but we should be so happy that someone says a few sentences on here? For all of the engineering elitism coming out of Google, Amazon is way more on their game across a number of products.

Creating clusters via the UI is still not working for me.

UPDATE: Created a Cluster successfully in Australia... Still not able to do so in the US.

Have you tried via the gcloud command?

So, given that I filed this months ago via official support and it's still not fixed, can you look into the misleading container memory reporting UI bug? It reports memory_total but should report working_set.

Thanks for jumping in here on your own time. The following question is not meant to be hostile, it is merely curiosity. Isn’t this supposed to be the kind of thing that monitoring and diagnostics software should find automatically? Serious question, not meant to embarrass you.

Question to Google employees:

Why do you guys suffer global outages? This is your 2nd major global outage in less than 5 years. I’m sorry to say this, but it is the equivalent of going bankrupt from a trust perspective. I need to see some blog posts about how you guys are rethinking whatever design can lead to this - twice - or you are never getting a cent of money under my control. You have the most feature rich cloud (particularly your networking products), but down time like this is unacceptable.


2 outages in 5 years sounds pretty low, to be honest.

Disclaimer: Google employee in Ads, who has worked on many, many fires throughout the years, but speaking from my personal perspective and not for my employer. I am sure we are striving for zero, but realistically, I have seen enough to say that things happen. Learn, and improve.


The issue people have with it is that it's global, not regional, indicating that there are dependencies in the overall architecture that people do not expect to be there.

There are many other possible causes for global outages, that specific one is not high on my list of likely culprits.

Yes, hello, canaries anyone

Plenty of bugs happen despite canaries.

Just like YouTube a few weeks ago?

YouTube, Ads, a bunch of services. Sure.

5 years? I remember a major outage maybe in the past year.

I believe there was a multi-hour global YouTube/Bigtable/Cloud SQL/Datastore outage in October.

Then there was the global load balancer outage in July.

Looking though the incident history, there were essentially monthly multi-region or global service disruptions of various services.


Most feature rich cloud? I think that title belongs to AWS.

You're right in terms of breadth officially covered. But if you look at the features where they both officially have support, there are many examples where the GCP version is more reliable and usable than the AWS version. Even GKE is an example of this, despite the outage in node pool creation that we're discussing here. Way better than EKS.

(Disclosure: I worked for Google, including GCP, for a few years ending in 2015. I don't work or speak for them now and have no inside info on this outage.)


Yeah. Perhaps feature-rich was an overstatement. I meant that when GCP does do a product, it works like I'd expect it to work and has the features I need. Not always the case with AWS, particularly around ELBs and VPCs.

It is a natural effect of building massive yet flat, homogeneous systems: failures tend to be greatly amplified.

Most of what you can read of Google's approach will teach you their ideal computing environment is a single planetary resource, pushing any natural segmentation and partitioning out of view.


I’d be curious to know what alternatives are you considering at this point?

Azure and AWS.

Not to minimize here (well, yes, a little), but this was a UI-only outage, from what I can tell. You could still create the pools from the command-line. It doesn't seem unreasonable to have a single, global UI server, as long as the API gateway is distributed and not subject to global outages.

It was certainly not UI-only.

OK. Perhaps I misunderstood. In the status page, it says:

Affected customers can use gcloud command [1] in order to create new Node Pools. [1] https://cloud.google.com/sdk/gcloud/reference/container/node...

That led me to believe that only the web UI was affected.


I believe this is a fair question. I’d really like to understand what Google thinks about this.

2 outages in 5 years.

5. Years.

Nothing to see here, move along.


> I’m sorry to say this, but it is the equivalent of going bankrupt from a trust perspective.

It's the opposite really: the expectation that service providers have no unexpected downtime is unrealistic, and it's strange this idea persists.


(disclaimer: I work for another cloud provider)

I agree, in general, outages are almost inevitable, but global outages shouldn't occur. It suggests at least a couple of things:

1) Bad software deployments, without proper validation. A message elsewhere in this post on HN suggests that problems have been occurring for at least 5 days, which makes me think this is the most likely situation. If this is the case, presumably given this is multiple days into the issue, rolling back isn't an option. That doesn't say good things about their testing or deployment stories, and possibly their monitoring of the product. Even if the deployment validation processes failed to catch it, you'd really hope alarming would have caught it.

or:

2) Regions aren't isolated from each other. Cross-region dependencies are bad, for all sorts of obvious reasons.


They shouldn't, but they do. S3 goes down [1]. The AWS global console goes down, right after Prime Day outages [2]. Lots of Google Cloud services go down [3, current thread]. Tens of Azure services go down hard [4].

Are software development and release processes improving to mitigate these outages? We don't know. You have to trust the marketing. Will regions ever be fully isolated? We don't know. Will AWS IAM and console ever not be global services? We don't know.

Blah blah blah "We'll do better in the future". Right. Sure. Some service credits will get handed out and everyone will forget until the next outage.

Disclaimer: Not a software engineer, but have worked in ops most of my career. You will have downtime, I assure you. It is unavoidable, even at global scale. You will never abstract and silo everything per region.

[1] https://www.theregister.co.uk/2017/03/01/aws_s3_outage/

[2] https://www.cnbc.com/2018/07/16/aws-hits-snag-after-amazon-p...

[3] https://www.cnet.com/news/google-cloud-issues-causes-outages...

[4] https://www.datacenterknowledge.com/uptime/microsoft-blames-...


Can't speak for Google, but Facebook and Salesforce chose Cells for HA.

http://highscalability.com/blog/2012/5/9/cell-architectures....


Look how frequent and detailed Amazon's update logs are in that first Register article. Multiple updates throughout the day going into some detail.

The major issue is that outages are global instead of regional, effectively making it impossible to design around using the typical region/zone redundancy.

Because they sell themselves as being far more reliable than internal IT. If they weren't selling on uptime, people probably wouldn't be quite so critical of downtime.

As a technology practitioner, it is your failing if you believe them.

Let me know the next time you hear about the CIO of a fortune 500 asking his technology practitioners to validate what he read in Gartner and heard from Diane Greene.

My advice would be to find opportunities to get paid to tell people the right answer, not to implement the wrong answer against your better judgement. Hot job market right now and all that jazz.

The pitch from cloud vendors always includes the idea that the cloud is more reliable than any in-house shop can achieve. So the expectation is set by the vendors.

This has been going on longer than three days. We have been dealing with this exact issue since at least Monday (11/5) morning in us-central1.

same here. using gcloud, not web console

Say I were a CTO (I’m nowhere near it), why would I choose GCP over AWS or Azure? Even if after doing a technical assessment and I thought that GCP was technically slightly better, if something happened, the first question I would be asked is “why did you choose GCP over AWS?”

No one would ever ask why you chose AWS. The old “no one ever got fired for buying IBM”.

Even if you chose Azure because you're a Microsoft shop, no one would question your choice of MS. Besides, MS is known for their enterprise support.

From a developer/architect standpoint, I’ve been focused the last year on learning everything I could about AWS and chose a company that fully embraced it. AWS experience is much more marketable than GCP. It’s more popular than Azure too, but there are plenty of MS shops around that are using Azure.


- Native integration with G-Suite as an identity provider. Unified permissions modeling from the IDP, to work apps like email/Drive, to cloud resources, all the way into Kubernetes IAM.

- Security posture. Project Zero is class leading, and there's absolutely a "fear-based" component there, with the open question of when Project Zero discovers a new exploit, who will they share it with before going public? The upcoming Security Command Center product looks miles ahead of the disparate and poorly integrated solutions AWS or Azure offers.

- Cost. Apples to apples, GCP is cheaper than any other cloud platform. Combine that with easy-to-use models like preemptible instances which can reduce costs further; deploying a similar strategy to AWS takes substantially more engineering effort.

- Class leading software talent. Google is proven to be on the forefront of new CS research, then pivoting that into products that software companies depend on; you can look all the way back to BigQuery, their AI work, or more recently in Spanner or Kubernetes.

- GKE. It's miles ahead of the competition. If you're on Kubernetes and it's not on GKE, then you've got legacy reasons for being where you're at.

Plenty of great reasons. Reliability is just one factor in the equation, and GCP definitely isn't that far behind AWS. We have really short memories as humans, but too soon we seem to forget Azure's global outage just a couple months ago due to a weather issue at one datacenter, or AWS's massive us-east-1 S3 outage caused by a human incorrectly entering a command. Shit happens, and it's alright. As humans, we're all learning, and as long as we learn from this and we get better then that's what matters.


Your response is from a geek's viewpoint. No insult intended; I'm first and foremost a 30-year computer geek myself - I started programming in 65C02 assembly in 6th grade and am still mostly hands-on.

But, whether it is right or not, as an architect/manager, etc., you have to think about more than just what's best technically. You also have to manage your reputational risk if things go south and, less selfishly, how quickly you can find someone with the relevant experience.

From a reputation standpoint, even if AWS and GCP have the same reliability, no one will blame you if AWS goes down, provided you followed best practices. If a global AWS resource goes down, you're in the same boat as a ton of other people. If everyone else was up and running fine but you weren't because you were on the distant third cloud provider, you don't have as much cover.

From a staffing standpoint, you can throw a brick and hit someone who at least thinks they know something about AWS or Azure; GCP, not so much.

It’s not about which company is technically better, but I didn’t want to ignore your technical arguments...

Native integration with G-Suite as an identity provider. Unified permissions modeling from the IDP, to work apps like email/Drive, to cloud resources, all the way into Kubernetes IAM.

You can also do this with AWS - use a third-party identity provider and map it to native IAM users and roles.

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_cr...

Cost. Apples to apples, GCP is cheaper than any other cloud platform. Combine that with easy-to-use models like preemptible instances which can reduce costs further; deploying a similar strategy to AWS takes substantially more engineering effort.

The equivalent would be spot instances on AWS.

From what (little) I know about preemptible instances, it seems kind of random when they get reassigned, but Google tries to be fair about it. The analogous thing on AWS would be spot instances, where you set the amount you want to pay.

Class leading software talent. Google is proven to be on the forefront of new CS research, then pivoting that into products that software companies depend on; you can look all the way back to BigQuery, their AI work, or more recently in Spanner or Kubernetes.

All of the cloud providers have managed Kubernetes.

As far as BigQuery. The equivalent would be Redshift.

https://blog.panoply.io/a-full-comparison-of-redshift-and-bi...

Reliability is just one factor in the equation, and GCP definitely isn't that far behind AWS

Things happen. I never made an argument about reliability.


GCP has a few features that set it apart from other cloud providers. GKE is head and shoulders above the other offerings from AWS and Azure.

GCP can be a fair bit cheaper than AWS and Azure for certain workloads. Raw compute/memory is about the same. Storage can make a big difference: GCP persistent SSD costs a bit more than AWS GP2 with much better performance, and is way cheaper than IO2. Local SSD is also way, way cheaper than I2 instances.

Most folks deploying distributed data stores that need guaranteed performance are using local disk, so this can be a really big deal.


Do not use GCP without paying for support. We have had resource allocation errors for weeks, as have a lot of other people. Check out the posts in their forum where folk on basic support get zero love. https://groups.google.com/forum/?utm_medium=email&utm_source...

>Nov 09, 2018 05:59

>We will provide more information by Monday, 2018-11-12 11:00 US/Pacific.

Wait, did the people tasked with fixing this just take the weekend off?


The incident with the UI (where we suggested using gcloud temporarily) was opened in https://status.cloud.google.com/incident/container-engine/18..., but then what sure looks to me like the same incident was closed in https://status.cloud.google.com/incident/container-engine/18....

My working assumption is that 18006 should have closed out 18005. But now it sounds like there's a different issue, which we're working to get to the bottom of.


The people tasked with fixing this aren't the ones providing the updates.

Understandable but in my experience the incident manager assigned is still supposed to keep track of progress during weekends when you have a major incident.

And this is likely a major incident with significant customer impact.

The way Google is handling all this gives a pretty poor impression. It seems like this Kubernetes offering is just a PoC.


Incident manager isn't public comms person.

The person updating that status dashboard may or may not be an engineer, the IM certainly is.


They should take a few mins out of their weekend to update the dashboard regardless. If my small 25 employee company can do it Google can do it.

See my other comment, to say what, exactly?

That yes, it's still being investigated?


Yeah, why not? A billion-dollar cloud provider can't spare one person to communicate with customers facing a multi-day outage? That not updating is even an option over that time range is absurd.

Yes precisely. At least with a new timestamp we will then know the status dashboard isn't also broken.

At a smaller, less corporate company, the engineer/public-comms dividing line would not be so ossified that the divide couldn't be bridged when the situation called for it.

I think the broader point here is that they (whoever "they" is) don't think the situation calls for it. That's why, on Friday, they said the next update would be on Monday.

You may feel that's a bad decision, but I doubt that people are in a panic because they can't push out an update that would not be noticably different from the last one.

Just to clarify, what should this update contain?


>Just to clarify, what should this update contain?

"We're working on it", possibly an ETA for a fix or some details. Technically it's fluff but people are not machines and the update is for people. We like the feeling that people are working on a fix, that people care and that the end is in sight. It makes the situation less stressful and, as for why Google should care, less stressed engineers won't bad mouth Google as much after the fact.


Compare Amazon's responses during their S3 outage: https://www.theregister.co.uk/2017/03/01/aws_s3_outage/ (toward the bottom)

What's the better communication plan: detailed, hourly updates or terse, one-line blog posts scattered across several days?


I think this comes down to the two outages being not at all similar. An S3 outage affects practically every Amazon customer. This affects a relatively small number of GKE customers.

And I'm confused about what was good about that response. That article is about how the S3 outage caused so many issues that Amazon couldn't update their status dashboard to inform users at all.


Fair point but still seems odd that the people providing updates took the weekend off during a large scale customer impacting issue. I'm sure all the people spending the weekend trying to mitigate the impact of this on their infrastructure would love to have timely updates.

This is a problem with the web UI when creating additional node pools; there is a very simple workaround of using gcloud. What major impact are you referring to?

It's likely the fix is checked in and will start rolling out on Monday.

Disclaimer: I work on Google Cloud, and while I believe we could use more words here, this doesn't seem like a huge problem. It's embarrassing that the issue with the UI was shipped, and I'm sure this will be addressed in the postmortem, as will whether it could have been mitigated quicker than by rolling forward.


>This is a problem with the web UI when creating additional node pools; there is a very simple workaround of using gcloud. What major impact are you referring to?

Based on comments in this thread, even gcloud is failing, and so are other non-Kubernetes services. That may be inaccurate, but a lot of people are saying the same thing, so maybe it isn't.

You're right, however, that the linked issue is only about the UI. So Google isn't even tracking the service disruption in its issue tracker, much less updating people on it. I personally think that's even worse...


It's not just the UI though. Resources have been exhausted, or that's the error I'm getting, in a lot of regions for both my work and personal accounts (GKE and GCE). Someone on the GCP slack also said they were getting similar issues from Dataflow so it seems to be widespread across products.

It's the weekend; why wouldn't you take it off? It's just silly software.

A quick search suggests that silly software is used by dozens of companies classified as hospitals or healthcare.

Let's hope you don't have a life threatening medical emergency that can't wait near an affected healthcare facility while that silly software is down.

https://idatalabs.com/tech/products/kubernetes


> Let's hope you don't have a life threatening medical emergency that can't wait near an affected healthcare facility while that silly software is down.

If your ability to operate an ER is dependent on a remote data center, you have no business being a public health provider.


A lot of people's businesses, reputations and livelihoods depend on "silly software," not to mention that they are paying customers themselves.

I know sarcasm doesn't translate well on the internet, but how the heck did you manage to miss it in OPs post?!

Ah, I guess I missed it. My fault.

More to the point, why would you depend on Google for any critical infrastructure after this?

Because you tried to run your own infrastructure and it was so much worse.

They have a whole book describing who fixes things, who provides updates, etc. Fun meditative reading while waiting for the outage to get fixed.

https://landing.google.com/sre/sre-book/chapters/managing-in...

Looks like this time Mary took the whole week off without telling Josephine :)


The status page is inaccurate, as the issue doesn't only affect the web UI; the same operations are not working via the CLI either.

It's kinda strange that HN seems to be the most effective way to give feedback to Google Cloud :/

Yeah almost all regions and zones for any compute instance have been exhausted since about 1pm PST on Friday. I finally got one up last night on us-east1, but my older cluster is basically SOL until it's fixed on us-west1. It went down for an upgrade and never came back up because of the same resource issue.

I just tried turning up my 1-node test cluster via terraform, and it worked fine. I would have thought the gcloud CLI would be using the same API.

I did this in the australia-southeast1-a zone.


What operations? Status just shows node pool creation.

Cannot create new clusters or node pools, and cannot resize existing node pools; as far as users are reporting, it's happening in all regions too.

Error message when creating a new Cluster:

Deploy error: Not all instances running in IGM after 35m7.509000994s. Expect 1. Current errors: [ZONE_RESOURCE_POOL_EXHAUSTED]: Instance 'gke-cluster-3-pool-1-41b0abf8-73d7' creation failed: The zone 'projects/url-shortner-218503/zones/us-west2-b' does not have enough resources available to fulfill the request. Try a different zone, or try again later. - ; .


Hi, just curious: why are you in us-west2?

Just the latest region I tried, been trying to spin up Clusters around the country all day.

UPDATE: Got some clarity - these issues are caused by "resource exhaustion", meaning there are no resources left to be allocated.

I'm curious to see if this is true.

I faced some pretty serious resource allocation issues earlier in the year. The us-west1-a zone was oversubscribed. I was unable to get any real information from support with regard to capacity. Eventually my rep gave me some qualitative information that I was able to act on.


A generic question: Our company is completely dependent on AWS. Sure we have taken all of the standard precautions for redundancy, but what happened here could just as easily happen with AWS - a needed resource is down globally.

What would a small business do as a contingency plan?


This might be an unpopular opinion but,

Going multi region on AWS should be safe enough.

If a multi region, multi service meltdown happens on AWS, it will feel like most of the internet has gone down to a lot of users. Being such a catastrophic failure, I bet the service will be restored pretty fast, not in 3 days.

You could go multi cloud though. But when half of the internet struggles to work correctly, I’d not feel too bad about my small business’ downtime.


> If a multi region, multi service meltdown happens on AWS, it will feel like most of the internet has gone down to a lot of users. Being such a catastrophic failure, I bet the service will be restored pretty fast, not in 3 days.

Additionally, from a "nobody ever got fired for buying IBM" perspective, you're unlikely to catch much blame from your users for going down when everyone else was down too.


Yeah, AWS has never had a global outage, Google has had 2 now.

I'm not sure where you get this idea, but AWS has definitely had global outages. 1.5 years ago there were massive global issues with S3, causing even their own status dashboard to malfunction.

edit: I stand corrected. Apparently the S3 outage wasn't global, though its effects were.

Meanwhile, this outage has only really been noticeable to ops teams, since it doesn't affect existing nodes or anything outside GKE. It's definitely concerning and the fix is taking far too long, but as far as global outages go the impact is relatively minor.


On February 28th, 2017, S3 had issues in us-east-1, not globally. It just happens that most customers create their buckets in us-east-1. (It’s the default bucket creation region.)

Bucket creation/deletion is a global operation; the namespace is global. When us-east-1 goes down, so does bucket creation. All other operations proceed merrily, but yeah, as we see over and over again, whenever us-east-1 has an outage everything seems to shit the bed because so many people have built everything out in us-east-1. Hopefully us-east-2 is starting to eat into that.

This is correct (or at least was a couple years ago).

Correct, but quite a few AWS services are built on S3, and some apparently weren't using buckets per region.

That only affected buckets on us-east-1.

> You could go multi cloud though.

Multi cloud is almost always more pain than gain. You’d spend time and effort abstracting away the value that a cloud provider brings in canned services.

Hell, multi region is often more than many workloads need.


My entire infrastructure is on k8s which should make multi-cloud easy...

Nope.


First, determine your tolerance. How much does downtime per min cost you? How long can you be down? A lot of the time it may be cheaper to apologize to your customers than build a truly reliable system.

Then start looking at points of failure and sort them based on severity and probability. Is your own software deployment going to generate more downtime per year than a regional aws outage?

There are formal academic ways to determine what your overall availability is, but I don't have those on hand. Suffice it to say, it takes significant research, planning, execution, and testing to ensure a target availability. (See Netflix https://medium.com/netflix-techblog/the-netflix-simian-army-... ) If someone says they have 99.9% or better uptime, they had better have proof in my mind (or a fat SLA violation payout).

People outsource to cloud providers not because they are cheap, but because managing infra in house is hard. Also move fast and break things.

Read the AWS docs about availability: there are availability zones within a region; spread across those to minimize impact. Then test when something goes down. Fix/repeat.
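
For example, a rough Terraform sketch of spreading instances across AZs in one region (the AMI and zone names are placeholders):

    variable "azs" {
      default = ["us-east-1a", "us-east-1b", "us-east-1c"]
    }

    # One instance per AZ; losing a single AZ leaves the other two running.
    resource "aws_instance" "web" {
      count             = 3
      ami               = "ami-12345678"                      # placeholder AMI
      instance_type     = "t2.micro"
      availability_zone = "${element(var.azs, count.index)}"
    }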


People outsource to cloud providers because building/hiring/maintaining a team of decent engineers that provides a baseline industry bar of SLAs and SLOs is much more expensive than the eye-watering costs of most cloud providers, even at an IaaS level. Opex is tough.

Most companies I’ve been at don’t offer multi-region support for their services because it’s too expensive for the service provided, even in so-called “price insensitive” enterprises (you can’t just make up a price that’s huge; they do have budgets still), and most of their customers are unwilling or unable to pay more for the extra availability. If your software is designed well from the start, multi-region failover should be fairly inexpensive though. But all the bolted-on “multi region” software I’ve seen has been hideously expensive and oftentimes less reliable, because the design simply can’t tolerate failures well.


That does bring up an interesting point. In hindsight, we already have duplicate infrastructure - a dev account and a production account. Why in the world was it decided to put both accounts in the same region?

The separate account was set up partly at my insistence, but it was set up in the same region.

If needed, we could have done VPC peerings across regions. (https://aws.amazon.com/about-aws/whats-new/2017/11/announcin...)


There are some significant differences region to region in AWS. Different numbers of availability zones, different latencies based on datacenter location, different types and availabilities of EC2 instances, etc. I think it makes sense to develop in the same region that your production service runs in just so you don't shoot yourself in the foot by deploying something that runs fine in your development region but doesn't run as well in your production region.

The baseline is that it takes 12 dedicated people across the world to run a 24/7 support operation.

Considering that even tech companies hardly manage to have a pair of DevOps engineers or sysadmins, running one's own infrastructure is completely out of the question.


What's the math/logic to get to 12 people?

Timezone coverage, vacations, training, sick days, etc.

Two people per 6-hour timezone chunk, with one overlapping person splitting each chunk?

2 at UTC 0-8

1 at UTC 4-12

2 at UTC 8-16

1 at UTC 12-20

2 at UTC 16-24

1 at UTC 20-04

(Repeat)


Most small companies on AWS with revenue outsource support to an MSP.

If all your competitors are down too, the equation changes, no?

Yep - If all my competitors are down the equation changes such that I _definitely_ want to be up.

Any trade comes to me if it's urgent, and I appear more professional as I've got a functioning system.

I might be a chancer running my entire system off a shoestring, but being up when everyone else has taken a dive looks good.


Assuming you have architected for multi-region, I'm not sure how realistic that scenario is. AWS regions are mostly standalone; I have seen services go down in a region but never globally.

The most recent example that I can think of when something went down globally was Route 53 - the one service that AWS promises 100% up time for.

Are you thinking of the event in April? That was not a Route 53 outage, that was a BGP hijack.

https://blog.thousandeyes.com/amazon-route-53-dns-and-bgp-hi...


Citation needed?

Basically, you have three options. You can go full multi-cloud, with all the expenses and overhead that entails. You can have everything run in one place but have a plan to switch to a backup system. Or you can look at the overhead and cost associated with those, and decide that it's not worth it. If the business can handle the costs and risks, then any of them can be a valid option.

My entire production infrastructure is in GCP. What happened here has caused approximately zero impact to the availability of my service.

You need to have a secondary location for backups, outside AWS. Store copies of customer data, orders, accounts, balances, and anything that is critical to the business there.

If AWS ever screws up, you will be able to continue running the business even if it might take weeks to start over.

For live redundancy, you should have a secondary datacenter on another provider, but realistically it's hard to do and most businesses never achieve that. Instead, just stick with AWS, and if there is a problem the strategy is to sip coffee while waiting for them to resolve it. Much better this way than having to fix it yourself.


> What would a small business do as a contingency plan?

Depends on your definition of small. If it's small enough not to have a dedicated infrastructure team designing multicloud solution, then the contingency plan may be: switch DNS to a static site saying "we're down until AWS fixes the issue, check back later".

Otherwise it depends on your specific scenario, your support contracts, and lots of other things. You need to decide what matters, how much the mitigation costs vs downtime, and go from there.


Unpopular opinion: very little, perhaps. You have to make sure all of your 3rd party dependencies have the same contingency plan as you do, but I guess it is going to be difficult to even figure that out...

Very little, unless that small business has a big enough budget to design something that spans multiple clouds.

Ultimately it’s a risk/return decision.

“Is going exclusively with AWS/azure/GCP etc a better decision in reliability, financial and maintainability terms than complicating the design to improve resiliency? And will this more complex solution actually improve reliability?”


Multi-Cloud-Service-as-a-Service application redundancy wrappers.

I wish I was only being tongue-in-cheek.


Roll the dice? What are the consequences for you, especially if you can shift the blame? What are the odds of you having better uptime rolling your own tooling? Can you afford the complexity of multi cloud? Is the added complication worth it?

Using Kubernetes is a good start. It should be easy to migrate your services between EKS and GKE. However, data is trickier to move around, so you won't be immune from all global outages.

We invested early in being multi-region on GCP as well as multi-cloud with AWS as a fully redundant option if it ever became necessary to fail over to them.

Infrastructure as code.

Terraform using AMIs plus Chef recipes that work in the cloud and on bare metal. Don't use AWS-specific services.

This would allow you to spin over to another cloud provider, vSphere, or bare metal with minimal work.


Totally impractical for any small business and of questionable usefulness for large ones. You'd be giving up the largest benefit of platforms like AWS--ready to use services for common tasks--to avoid the infinitesimally small chance of AWS having some doomsday global outage.

To answer the original question: It looks like this issue was just a UI bug that affected the console; the service itself wasn't impacted. Events that do impact the service will be contained to a region, meaning you can mitigate it with proper redundancy across regions, no zany multi-cloud solution required.


Evidently you don't know how to do this well

That’s not how Terraform works. Each provider has its own resource syntax. The template for AWS wouldn’t work on GCP or Azure.

Correct. However, written properly, you get 90% of the way there.

Disaster recovery by switching to another provider is simple when minimal centos/rhel images are used.
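
To illustrate, here's a hedged sketch of the same minimal CentOS box expressed for two providers (the image names, machine types, and zone are placeholders); the shape is parallel even though the arguments differ:

    resource "aws_instance" "app" {
      ami           = "ami-0abcdef1234567890"   # placeholder minimal CentOS AMI
      instance_type = "t2.micro"
    }

    resource "google_compute_instance" "app" {
      name         = "app"
      machine_type = "f1-micro"
      zone         = "us-central1-a"

      boot_disk {
        initialize_params {
          image = "centos-7"                     # placeholder image family
        }
      }

      network_interface {
        network = "default"
      }
    }

The resources aren't interchangeable, but keeping the OS image minimal keeps the divergent part small.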


Assume I’m not fully informed here. What does “written properly” mean? Sure, I can move Route 53 over to Cloud DNS easily, but Firehose to Pub/Sub? Lambda to Cloud Functions? DynamoDB to Bigtable, and moving the data?

The syntax for provisioning these doesn’t lend itself to a simple find-and-replace. Are you using a templater to generate cloud-specific HCL from a template or something? Sounds like a pretty big problem to solve to me, and not just something where you can win via discipline.


Then struggle like hell for the other 90%

I think you are downplaying minimal

I don't think OP was intending "minimal" to mean it would be easy to get to the stage where it's possible, just that once you've got all your infrastructure-as-code stuff set up correctly, you ought to be able to just be pressing buttons / running scripts and have your infrastructure up and running in another cloud provider.

Even when working in small companies with small infrastructure, I've kept recreation of infrastructure as one of my high priorities (one reason it really bugged me in one job to have to depend on Oracle Databases that I couldn't automate to the same degree.)

In my mind, it's not different from the importance of having, and testing restoration of, backups. If your infrastructure gets compromised somehow, or you find yourself up the creek with your provider, you've got to be able to rebuild everything from scratch.


No, with over 20 years' experience and having done exactly this for several different companies and startups, I pretty much have this process down.

I honestly don't mind if providers have outages - we can't expect 100.00% uptime; I know the systems I manage certainly don't achieve that.

One thing I do care about though, is root cause analysis. I love reading a good RCA, it restores my faith in the company and makes me trust them more.

(I'm not affected by the GKE outage so opinions may differ right now!)


There is no magic: public clouds have incredibly complex control planes, and marketing fluff aside, you would very likely experience much better uptime at a single top-tier DC than at a cloud provider.

I have a question. At what point does k8s make sense?

I have a feeling that a microservice architecture is overkill for 99% of businesses. You can serve a lot of customers on a single node with the hardware available today. Often times, sharding on customers is rather trivial as well.

Monolith for the win! Opinions?


I hate the word Microservice, so I'm just going to use the word Service.

Most monoliths software companies build aren't actually monoliths, conceptually. Let's say you integrate with the Facebook API to pull some user data. Facebook is, within the conceptual model of your application, a service. Hell, you even have to worry "a little bit" about maintaining it: provisioning and rotating API keys, possibly paying for it, keeping up to date on deprecations, writing code to wire it up, worrying about network faults and uptime... That sounds like a service to me; we're three steps short of a true in-house service, as you don't have to worry about writing its code and actually running it, but conceptually it's strikingly similar.

Facebook is a bad example here. Let's talk Authentication. It's a natural "first demonolithized service" that many companies will reach to build. Auth0, Okta, etc. will sell you a SaaS product, or you can build your own with many freely available libraries. Conceptually they fill the same role in your application.

Let's say you use Postgres. That's pretty much a service in your application. A-ha; that's a cool monolith you've got there, already communicating over a network ain't it. Got a redis cache? Elasticsearch? Nginx proxy? Load balancer? Central logging and monitoring? Uh oh, this isn't really looking like a monolith anymore is it? You wanted it to be a monolith, but you've already got a few networked services. Whoops.

"Service-oriented" isn't first-and-foremost a way of building your application. It's a way of thinking about your architecture. It means things like decoupling, gracefully handling network failures, scaling out instead of up, etc. All of these concepts apply whether you're building a dozen services or you're buying a dozen services.

Monolithic architectures are old news because of this recognition; no one builds monoliths anymore. It's arguable if anyone ever did, truly. We all depend on networked services, many that other people provide. The sooner you think in terms of networked services, the sooner your application will be more reliable and offer a superior experience to customers.

And then, it's a natural step to building some in-house. I am staunchly in the camp of "'monolith' first, with the intention of going into services" because it forces you to start thinking about these big networking problems early. You can't avoid it.


K8s is nice even without microservices. Yeah you don't get nearly the benefits you would in a microservice architecture, but I consider it a control plane for the infrastructure, with an active ecosystem and focus on ergonomics. If you have a really simple infrastructure, you will still need to script spinning up the VMs, setting up the load balancing, etc. but K8s gives you a homogenous layer upon which to put your containers. It's not too much of an overkill, especially with a hosted K8s from e.g. Google, AWS, and soon Digital Ocean and Scaleway.

Things like throwing another node into the cluster, or rolling updates are free, which you would otherwise need to develop yourself. All of that is totally doable, of course, but I like being able to lean on tooling that is not custom, when possible.

When your infrastructure does need to become more complicated, you're already ready for it. Even if I were only serving a single language, starting with a K8s stack makes a lot of sense, to me, from a tooling perspective. Yeah normal VMs might be simpler, conceptually, but I don't consider K8s terribly complicated from a user perspective, when you're staying around the lanes they intend you to stay in. Part of this may also be my having worked with pretty poor ops teams in the past, but I think K8s gives you a really good baseline that gives pretty good defaults about a lot of your infrastructure, without a lot of investment on your part.

That said, if you're managing it on a bare metal server, then VMs may be much easier for you. K8s The Hard Way and similar guides go into how that would work, but managing high availability etcd servers and the like is a bit outside my comfort zone. YMMV.


There's a huge range between monolith and microservice approach, and even a monolith will have dependent services. A simple web stack these days might include nginx, a database, a caching layer, some sort of task broker and then the 'monolith' web app itself. All of that can be sanely managed in k8s.

Right... IMO monolith is better understood as a reference to the data model than deployment topology. If you only have a single source of truth, then your application is naturally going to trend towards doing most of its business logic in one place. This still doesn't displace the need for other services like caching, async tasks, etc., that you identify.

I definitely wouldn’t be managing my own database or caching layer without a very good reason. I would use a managed service if I were using a cloud provider.

As someone whose daily work happens on k8s, I'd say you better be paining a lot before you move to k8s. I take great care to avoid this, but if you aren't careful, you can end up "feeling" productive on k8s without actually being productive. K8s gives a lot of room for one to tweak workflows, discuss deployment strategies, security, "best practices", etc. And you can get things done reasonably fast. But that's like a developer working all day on fine tuning their editor and comparing and writing plugins and claiming that they are getting productive.

The key issue here is that k8s was written with very large goals in mind. That a small business can easily spin it up quickly and run a few microservices or even a monolith + some workers is just coincidental. It is NOT the design goal. And the result of that is that a lot of the tooling and writing around k8s reflects that. A lot of the advice around practices like observability and service meshes comes from people who've worked in the top 1% (or less) of companies in terms of computing complexity. What I'm personally seeing is that this advice is starting to trickle down into the mainstream as gospel. Which strangely makes sense. No one else has the ability to preach with such assurance because not many people in small companies have actually been in the scenarios of the big guns. The only problem is that it's gospel without considering context.

So at what point does k8s make sense? Only when you have answers to the following:

* Getting started is easy; maintaining and keeping up with the goings-on is a full-time job - Do you have at least one engineer you can spare to work on maintaining k8s as their primary job? It doesn't mean full time. But if they have to drop everything else to go work on k8s and investigate strange I/O performance issues, are you ready to allow that?

* The k8s ecosystem is like the JS framework ecosystem right now - There are no set ways of doing anything. You want to do CI/CD? Should you use Helm charts? Helm charts inherited from a chart folder? Or are you fine using the PATCH API/kubectl patch commands to upgrade deployments? Who's going to maintain the pipeline? Who's going to write the custom code for your GitHub deployments or your Brigade scripts or your custom in-house tool? Who's going to think about securing this stuff and the UX around it? That's just CI/CD, mind you. We aren't anywhere close to the weeds of deciding whether you want to use Ingresses vs load balancers and how you are going to run into service provider limits on certain resources. Are you ready to have at minimum one developer working on this stuff and taking time to talk to the team about it?

* Speaking about the team, k8s and Docker in general are a shift in thinking - This might sound surprising, but the fact that Jessie Frazelle (y'all should all follow her btw) is occasionally seen reiterating the point that containers are NOT VMs is a decent indicator that people don't understand k8s or Docker at a conceptual level. When you adopt k8s, you are going to pass that complexity to your developers at some point. Either that or your devops team takes on that full complexity, and that's a fair amount to abstract away from the developers, which will likely increase the workload of devops and/or their team size. Are you prepared for either path?

* Oh also, what do your development environments start to look like? This is partly related to microservices, but are you dockerizing your applications to work in the local dev environment? Who's responsible for that transition? As much as one tries to resist it, once you are on k8s you'll want to take advantage of it. Someone will build a small thing as a microservice or a worker that the monolith or other services depend on. How are you going to set that up locally? And again, who's going to help the devs accumulate that knowledge while they are busy trying to build the product? (Please don't put your hopes on devs wanting to learn that after hours. That's just cruel.)

I can't write everything else I have in mind on this topic. It'd go on for a long long time. But the common theme here is that the choice around adopting k8s is generally put on a table of technical pros and cons. I'd argue that there's a significant hidden cost of human impact as well. Not all these decisions are upfront but it is the pain that you will adopt and have to decide on at some point.

Again, at what point does k8s make sense? Like I said, you ideally should be paining before you start to consider k8s because for nearly every feature of k8s, there is a well documented, well established, well secured parallel that already exists in the myriad of service providers. It's a matter of taking careful stock of how much upfront pain you are trading away for pain that you WILL accumulate later.

PS - If anyone claims that adopting a newer technology is going to make things outright less painful, that's a good sign of immaturity. I've been there and I picture myself smashing my head into a table every now and then when I think of how immature I used to be. Apologies to people I've worked with at past jobs.

PPS - From the k8s site, "Designed on the same principles that allows Google to run billions of containers a week, Kubernetes can scale without increasing your ops team." <-- is the kind of claim that we need to take flamethrowers to. On paper, 1 dev with the kubectl+kops CLI can scale services to run with 1000's of nodes and millions of containers. But realistically, you don't get there without having incurred significantly more complex use cases. So no, nothing scales independently.


Very nicely written. While not a direct response to the OP, you articulated some great points on k8s. k8s will naturally succeed as the future of data center orchestration as VMs give way to containers. But it is questionable whether everyone needs it.

This outage really doesn’t have much to do with K8s.

Maybe so, but you won't be affected by this outage if you never decided to deploy k8s in the first place.

Even if you deploy k8s privately, or over at Amazon, I think there's enough horror stories to make you think twice about the technology.

Then, if it isn't going to be k8s for microservices, what's a more reliable alternative?


I agree with you on major points.

Also, migrating to microservices for existing services might not be worth it, especially if you don't operate at a massive scale.

Keep it simple stupid is still a solid design decision, despite all the microservice/container hype.

Most businesses only need a couple of servers that provide the service, spread redundantly with HA capability.


Cloud providers have all of the potential in the world to make each region truly isolated. I shouldn't have to architect my application to be multi-cloud, at least for stability reasons.

Yet, somehow every major cloud provider experiences global outages.

That old AWS S3 outage in us-east-1 was an interesting one; when it went down, many services which rely on S3 also went down, in regions besides us-east-1, because they were using us-east-1 buckets. I have a feeling this is more common than you'd think: globally-redundant services which rely on some single point of geographical failure for some small part.


AWS regions are very much isolated from each other.

We know because we are still waiting here in ap-southeast-2 for services such as EKS to be made available. Pretty sure that any reliance within their backend services on us-east-1 was just a temporary bug and nothing systemic.


Seems to be some weird underlying issue going on at GCP at the moment. Had cloud build webhooks returning a 500 error. Noticed we were at 255 images and deleting some fixed the issue. Created a P2 ticket about the issue before we managed to solve it and haven't had a response in 40+ hours.

The timeline of this disruption matches when we started experiencing cloud build errors.


Outsider here, but I believe Cloud Build runs on GKE Jobs, so if they’re having trouble, it does indeed sound related.

Doesn’t GKE “just” run an independent Kubernetes cluster on customer VMs? How is a widespread outage like this possible?

GKE handles creating the VMs and setting them up, joining them to the cluster and applying labels, for example.

The specific issue appears to be about creating new "node pools". Creating standard VMs in GCP works fine however, so this is specific to GKE and their internal tooling that integrates with the rest of GCP.

GKE doesn't (at least to my knowledge) allow you to create VMs separately and join them to the cluster in any kind of easy fashion.
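
For context, a node pool is roughly this in Terraform terms (a hedged sketch; the cluster name and sizes are placeholders), and it's this kind of create/resize call that has been failing:

    resource "google_container_node_pool" "workers" {
      name       = "workers"
      cluster    = "my-existing-cluster"     # placeholder: an existing GKE cluster
      zone       = "us-west2-b"
      node_count = 3

      node_config {
        machine_type = "n1-standard-1"
      }
    }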


It's actually not just GKE; there have been issues creating normal VMs since late Friday night. It seems anything that required creating VMs gave back resource exhaustion errors. I finally got a cluster set up in us-east1 last night, so it looks like the resource issues are clearing up though.

Nope, GKE = master/control plane owned by Google. Customers are just tenants, who can schedule workloads.

GKE gives you a fully managed Master Node.

I use preemptible machines with autoscaling, and for the first time did not have any machines available for multiple hours yesterday. I am wondering whether this falls under normal preemptible behaviour or this service degradation.

Our company is dependent on this as well and the way customer service has been handling this has been abysmal thus far.

Is this just about creating new pools? I haven't noticed an issue with our existing pools scaling.

You were able to add more Nodes to your pool? Are you using any autoscaling?

Has it affected all regions or just some?

Is there another status page, Google? Coz the last update I'm looking at... is dated the 9th.


The general page is at https://status.cloud.google.com/; you can scroll down to see GKE, and my (unofficial) belief is that https://status.cloud.google.com/incident/container-engine/18... should have closed out https://status.cloud.google.com/incident/container-engine/18...

_If_ that's the case, something else is causing the error messages other people are seeing


Why do cloud providers have more global outages than major flagship websites like google.com?

They don't run on the same infra. Amazon.com doesn't run on AWS.

When guerilla marketing backfires