Google Cloud Is Down
1218 points by markoa 11 hours ago | 531 comments
https://status.cloud.google.com

https://status.cloud.google.com/incident/compute/19003

The status page reports all green; however, the outage is affecting YouTube, Snapchat, and thousands of other users.






Disclosure: I work on Google Cloud (but disclaimer, I'm on vacation and so not much use to you!).

We're having what appears to be a serious networking outage. It's disrupting everything, including unfortunately the tooling we usually use to communicate across the company about outages.

There are backup plans, of course, but I wanted to at least come here to say: you're not crazy, nothing is lost (to those concerns downthread), but there is serious packet loss at the least. You'll have to wait for someone actually involved in the incident to say more.


To clarify something: this outage doesn’t appear to be global, but it is hitting us particularly hard in parts of the US. So for the folks with working VMs in Mumbai, you’re not crazy. But for everyone with sadness in us-central1, the team is on it.

It seems global to me. This is really strange compared to AWS. I don't remember an outage there (other than S3) impacting instances or networking globally.

You obviously don't recall the early years of AWS. Half of the internet would go down for hours.

Back when S3 failures would take down Reddit and parts of Twitter... Netflix survived because they had additional availability zones. I can remember when the bigger names started moving more stuff to their own data centers.

AWS tries to lock people into specific services now, which makes it really difficult to migrate. It also takes a while before you get to the tipping point where hosting your own is more financially viable... and then if you try migrating, you're stuck using so many of their services that you can't even do cost comparisons.


Netflix actually added the additional AZs because of a prior outage that did take them down.

"After a 2012 storm-related power outage at Amazon during which Netflix suffered through three hours of downtime, a Netflix engineer noted that the company had begun to work with Amazon to eliminate “single points of failure that cause region-wide outages.” They understood it was the company’s responsibility to ensure Netflix was available to entertain their customers no matter what. It would not suffice to blame their cloud provider when someone could not relax and watch a movie at the end of a long day."

https://www.networkworld.com/article/3178076/why-netflix-did...


We went multi-region as a result of the 2012 incident. Source: I now manage the team responsible for performing regional evacuations (shifting traffic and scaling the savior regions).

That sounds fascinating! How often does your team have to leap into action?

We don’t usually discuss the frequency of unplanned failovers, but I will tell you that we do a planned failover at least every two weeks. The team also uses traffic shaping to perform whole system load tests with production traffic, which happens quarterly.

Do you do any chaos testing? Seems like it would slot right in, there.

I think some Google engineers published a free MEAP book on service reliability and uptime guarantees. Though it seems counterintuitive, scheduling downtime without other teams’ prior knowledge encourages teams to handle outages properly and reduce single points of failure, among other things.

I think you’re misremembering about Twitter, which still doesn’t use AWS except for data analytics and cold storage last I heard (2 months ago).

Avatars were hosted on S3 for a long time, IIRC.

I am not sure a single S3 outage pushed any big names into their own "datacenter". S3 still holds the record for reliability, which you cannot match with in-house solutions. Prove me otherwise: I would love to hear about a solution that has the same durability, availability, and scalability as S3.

For the downvoters, please just link here the proof if you disagree.

Here are the S3 numbers: https://aws.amazon.com/s3/sla/


> For the downvoters, please just link here the proof if you disagree.

> Here are the S3 numbers: https://aws.amazon.com/s3/sla/

99.9%

https://azure.microsoft.com/en-au/support/legal/sla/storage/...

99.99%


It's not so much AWS vs. in-house as AWS (or GCP/DO/etc.) vs. multi/hybrid solutions, the latter of which would presumably have lower downtime.

I don't see why multi/hybrid would have lower downtime. All cloud providers as far as I know (though I know mostly AWS) already have their services in multiple data centers and their endpoints in multiple regions. So if you make yourself use more than one of their AZs and regions, you would be just as multi as with your own data center.

Using a single cloud provider with a multiple region setup won't protect you from some issues in their networking infrastructure, as the subject of this thread supposedly shows.

Although I guess, depending on how your own infrastructure is set up, even a multi-cloud-provider setup won't save you from a network outage like the current Google Cloud one.


Why would you think that self-managed has lower downtime than AWS using multiple datacenters/regions?

Actually, I imagine that if you could go multi-regional then your self-managed solution may be directly competitive in terms of uptime. The idea that in-house can't be multi-regional is a bit old fashioned in 2019.

Multi/hybrid means you use both self managed and AWS datacenters.

> For the downvoters, please just link here the proof if you disagree.

https://wasabi.com/


How can they possibly guarantee eleven nines? Considering I’ve never heard of this company and they offer such crazy-sounding improvements over the big three, it feels like there should be a catch.

11 9s isn't uncommon. AWS S3 does 11 9s (up to 16 9s with cross-region replication?) for data durability, too. AFAIK, AWS published papers about their use of formal methods to ensure bugs from other parts of the system didn't creep in to affect durability/availability guarantees: https://blog.acolyer.org/2014/11/24/use-of-formal-methods-at...

This is a pretty neat and concise read on ObjectStorage in-use at BigTech, in case you're interested: https://maisonbisson.com/post/object-storage-prior-art-and-l...


You have to be kidding me. 14 9's is already microseconds a year. Surely below anybody's error bar for whether a service is down or not.

16 9's and AWS should easily last as long as the Great Pyramids without a second's worth of outage.

What a joke.


This is for data loss. 11 9s is like 1 byte lost per terabyte-year or something, which isn't an unreasonable number.

The 16 9's are for durability, not availability. AWS is not saying S3 will never go down; they're saying it will rarely lose your data.
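
To put that in perspective, here's a quick back-of-envelope sketch (assuming durability is quoted as the annual per-object loss probability, which is how S3's marketing frames it):

    # Expected object losses per year at "11 nines" of durability.
    durability = 0.99999999999            # 11 nines, per object per year
    annual_loss_prob = 1 - durability     # ~1e-11

    objects_stored = 10_000_000
    expected_losses = objects_stored * annual_loss_prob
    print(expected_losses)       # ~0.0001 objects lost per year
    print(1 / expected_losses)   # i.e. roughly one lost object every ~10,000 years

That lines up with the usual framing: store ten million objects and you'd expect to lose about one every 10,000 years on average.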

This number is still total bullshit. They could lose a few KB and be above that for centuries.

It's not about losing a few kb here and there.

It's about losing entire data centers to massive natural disasters once in a century.


None of the big cloud providers have unrecoverably lost hosted data yet, despite storing vast volumes, so this doesn't seem like BS to me.

AWS lost data in Australia a few years ago due to a power outage, I believe.

Not losing any data yet doesn't justify such absurd numbers.

For data durability? I believe some AWS offerings also have an SLA of eleven 9's of data durability.

Always in Virginia, because US-east has always been cheaper.

I know a consultant who calls that region us-tirefire-1.

I and some previous coworkers call it the YOLO region.

The only regions that are more expensive than us-east-1 in the States are GovCloud and us-west-1 (Bay Area). Both us-west-2 (Oregon) and us-east-2 (Ohio) are priced the same as us-east-1.

I would probably go with us-east-2 just because it's isolated from anything except perhaps a freak tornado and better situated in the eastern US. Latency to/from there should be near optimal for most of the eastern US/Canada population.

And for those of us in GST/HST/VAT land, hosting in USA saves us some tax expenditures.

AWS is registered for Australian GST; they therefore charge GST on all(ish) services [0].

IBM/SoftLayer, Rackspace, Google Cloud, Microsoft, and I imagine everyone else large enough to count do too.

For Australian businesses, at least, being charged GST isn't a problem - they can claim it as an input and get a tax credit[1].

[0] https://aws.amazon.com/tax-help/australia/

[1] https://www.ato.gov.au/Business/GST/Claiming-GST-credits/


How?

At least in the EU, services bought from overseas are subject to reverse charge, i.e. self-assessment of VAT (Article 196 of https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:02... ).

Though note that if you are an EU AWS customer, you are not buying from outside the EU; you are buying from Amazon's EU branches regardless of AWS region. If Amazon has a local branch in your country, they charge you VAT as any local company does. Otherwise you buy from an Amazon branch in another EU country, and you again need to self-assess VAT (reverse charge) per Article 196.


My experience is with Canadian HST.

Since AWS built a DC in Canada, I’m paying HST on my Route53 expenses, but not on my S3 charges in non-Canadian DCs.

I’m not an HST registrant (small supplier, or if you’re just using services personally), so there’s nothing to self-assess.

Even if self-assessment was required, you get some deferral on paying (unless you have to remit at time of invoice?).


You know, normally you still have to pay that tax, just through a reverse charge process.

Not the case in Canada if you’re not an HST registrant (non-business or a small enough business where you’re exempt).

Even if you did have to self-assess, better to pay later than right away.


Mostly because those sites were never architected to work across multiple availability zones.

Years ago, when I was playing with AWS in a course on building cloud-hosted services, it was well-known that all the AWS management was hosted out of a single zone, and there were several days we had to cancel class because us-east-1 had an outage; so while technically all our VMs hosted out of other AZs were extant, all our attempts to manage our VMs via the web UI or API were timing out or erroring out.

I understand this is long-since resolved (I haven't tried building a service on Amazon in a couple years, so this isn't personal experience), but centralized failure modes in decentralized systems can persist longer than you might expect.

(Work for Google, not on Cloud or anything related to this outage that I'm aware of, I have no knowledge other than reading the linked outage page.)


> it was well-known that all the AWS management was hosted out of a single zone, and there were several days we had to cancel class because us-east-1 had an outage

Maybe you mean region, because there is no way that AWS tools were ever hosted out of a single zone (of which there are 4 in us-east-1). In fact, as of a few years ago, the web interface wasn’t even a single tool, so it’s unlikely that there was a global outage for all the tools.

And if this was later than 2012, even more unlikely, since Amazon retail was running on EC2 among other services at that point. Any outage would be for a few hours, at most.


Quoting https://docs.aws.amazon.com/general/latest/gr/rande.html

"Some services, such as IAM, do not support Regions; therefore, their endpoints do not include a Region."

There was a partial outage maybe a month and a half ago where our typical AWS Console links didn't work but another region did. My understanding is that if that outage were in us-east-1 then making changes to IAM roles wouldn't have worked.


The original poster said that none of AWS's services are in a single AZ; the quote you referenced says that IAM does not support regions.

Your quote could mean two things:

- that IAM services are hosted in one region (not one AZ)

and/or

- that IAM is for the entire account, not per region like other services (which is true)


Quite possibly, it has been a number of years at this point, and I didn't dig out the conversations about it for primary sourcing.

Just this year an issue in us-east-1 caused the console to fail pretty globally.

Where are you based? If you’re in the US (or route through the US) and trying to reach our APIs (like storage.googleapis.com), you’ll be having a hard time. Perhaps even if the service you’re trying to reach is say a VM in Mumbai.

I am in Brazil, with servers in southamerica. Right now it seems back to normal.

EUW doesn't seem to be affected.

My instance in Belgium works fine

I have an instance in us-west-1 (Oregon) which is up, but an instance in us-west-2 (Los Angeles) which is down. Not sure if that means Oregon is unaffected though.

us-west-1 is Northern California (Bay area). us-west-2 is Oregon (Boardman).

Incorrect. GCE us-west1 is The Dalles, Oregon, and us-west2 is Los Angeles.

What I said is correct for AWS. In retrospect I guess the context was a bit ambiguous.

(I will note that I was technically more right in the most obnoxiously pedantic sense since the hyphenation style you used is unique to AWS - `us-west-1` is AWS-style while `us-west1` is GCE-style :P)


Some services are still impacted globally. Gmail over IMAP is unreachable for me. (Edit: gmail web is fine)

+1 - IMAP Gmail is down for me in Australia.

Yes, same here in UK (for some hours now).

Quick update from Germany: both YouTube and Gmail appear to work fine.

I’m from the US and in Australia right now. Both my friends in the US and I are experiencing outages across Google properties and Snapchat, so it’s pretty global.

Fiber cut? SDN bug that causes traffic to be misdirected? One or more core routers swallowing or corrupting packets?

It seemed to be congestion in the northeastern US.

> including unfortunately the tooling we usually use to communicate across the company about outages.

There's some irony in that.


Edit: and I agree!

I’m not in SRE so I don’t bother with all the backup modes (direct IRC channel, phone lines, “pagers” with backup numbers). I don’t think the networking SRE folks are as impacted in their direct communication, but they are (obviously) not able to get the word out as easily.

Still, it seems reasonable to me to use tooling for most outages that relies on “the network is fine overall”, to optimize for the common case.

Note: the status dashboard now correctly highlights (Edit: with a banner at the top) that multiple things are impacted because Networking. The Networking outage is the root cause.


> the status dashboard now correctly highlights that multiple things are impacted because Networking.

this column of green checkmarks begs to differ: https://i.imgur.com/2TPD9e9.png


This is a person who's trying to help out while on vacation...can we try being more thankful, and not nitpick everything they say?

Thanks! I’ll leave this here as evidence that I should rightfully reduce my days off by 1 :).

The banner at the top. Sorry if that wasn’t clear.

While not exactly google cloud, G suite dashboard seems accurate: https://www.google.com/appsstatus#hl=en&v=status

For me, at least, that was showing as all green for at least 30 minutes.

AWS experienced a major outage a few years ago that couldn't be communicated to customers because it took out the components needed to update the status board. One of those obvious-in-hindsight situations.

Not long after that incident, they migrated it to something that couldn't be affected by any outage. I imagine Google will probably do the same thing after this :)


Was just reading it; they made their status page multi-region.

> Not long after that incident, they migrated it to something that couldn't be affected by any outage.

Like the black box on an airplane, if it has 100% uptime why don’t they build the whole thing out of that? ;)


Even more irony: Google+ shown as working fine: https://i.imgur.com/52ACuiY.png

G+ is alive and well for G Suite subscribers, just not for general users.

> including unfortunately the tooling we usually use to communicate across the company about outages.

So memegen is down?


I'm guessing this will be part of the next DiRT exercise :-) (DiRT being the disaster recovery exercises that Google runs internally to prepare for this sort of thing)

Well, lots of revenue is lost, that's for sure.

We are also hosted on GCP but nothing is down for us. We are using 3 regions in the US and 2 in the EU.

>nothing is lost

except time


Can't use my Nest lock to let guests into my house. I'm pretty sure their infrastructure is hosted in Google Cloud. So yeah... definitely some stuff lost.

You have my honest sympathy because of the difficulties you now suffer through, but it bears emphasizing: this is what you get when you replace what should be a physical product under your control with an Internet-connected service running on third-party servers. IoT as seen on the consumer market is a Bad Idea.

It's a trade-off of risks. Leaving a key under the mat could lead to a security breach.

I am pretty sure there are smart locks that don't rely on an active connection to the cloud. The lock downloads keys when it has a connection, and a smartphone can download keys too. This means they work even with no active internet connection at the time the person tries to open the door. Only if the connection was dead the entire time between creating the new key and the person trying to use the lock would it not work.

If there are no locks that work this way, it sure seems like there should be. Using cloud services to enable cool features is great. But if those services are not designed from the beginning with a fallback for when the internet/cloud isn't live, that is a weakness that is often unwise to leave in place, IMO.


It may not be worth the complexity to give users the choice. If I were to issue keys to guests this way I would want my revocations to be immediately effective no matter what. Guest keys requiring a working network is a fine trade-off.

You can have this without user intervention: have the lock download an expiration time with the list of allowed guest keys, or have the guest keys public-key signed with metadata like an expiration time.

If the cloud is down, revocations aren't going to happen instantly anyway. (Although you might be able to hack up a local WiFi or Bluetooth fallback.)
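
A minimal sketch of the signed-guest-key idea above, in Python, using a symmetric HMAC instead of public-key signatures for brevity (the secret, field names, and expiry policy are all hypothetical; a real lock would also want anti-replay measures):

    import base64, hashlib, hmac, json, time

    SHARED_SECRET = b"provisioned-into-the-lock-at-setup"  # hypothetical

    def issue_guest_key(guest_id, valid_seconds):
        # Server side: sign a payload that embeds the guest id and an expiry time.
        payload = json.dumps({"guest": guest_id,
                              "exp": int(time.time()) + valid_seconds}).encode()
        tag = hmac.new(SHARED_SECRET, payload, hashlib.sha256).digest()
        return (base64.urlsafe_b64encode(payload).decode() + "." +
                base64.urlsafe_b64encode(tag).decode())

    def lock_accepts(token):
        # Lock side: verify entirely offline; no cloud round trip at the door.
        payload_b64, tag_b64 = token.split(".")
        payload = base64.urlsafe_b64decode(payload_b64)
        expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, base64.urlsafe_b64decode(tag_b64)):
            return False
        return time.time() < json.loads(payload)["exp"]

With something like this, revocation degrades gracefully: keys simply expire on their own if the lock can't reach the cloud to fetch an updated list.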


So can a compromise of a "smart" lock.

It's a fake trade-off, because you're choosing between a low-tech solution and bad engineering. IoT would work better if you made the "I" part stand for "Intranet" and kept the whole thing a product instead of a service. Alas, this wouldn't support user exploitation.


Yeah, my dream device would be some standard app architecture that could run on consumer routers. You buy the router and it's your family file and print server, and also is the public portal to manage your IoT devices like cameras, locks, thermostats, and lights.

You can get a fair amount of this with a Synology box. Granted, a tool for the reasonably technically savvy and probably not grandma.

I love my Synology; I wish they would expand more into being the controller of the various home IoT devices.

Don't be ridiculous. Real alternatives would include P2P between your smart lock and your phone app, or a locally hosted hub device which controls all home automation/IoT, instead of a cloud. If the Internet can still route an "unlock" message from your phone to your lock, why do you require a cloud for it to work?

Or use one of the boxes with a combination lock that you can screw onto your wall for holding a physical key. Some are even recommended by insurance companies.

At least you can isolate your security risk to something you have more control over than a random network outage.

Any key commands they have already set up will still work. Nest is pretty good at having network failures fail to a working state. The only change is that they might not be able to actively open the lock over the network.

One of the reasons why I personally wanted a smart-lock that had BLE support along with a keypad for backup in addition to HomeKit connectivity.

You should have foreseen this when you bought stuff that relies on "the cloud".

Too bad we don't have Google cars yet.

"Cloud Automotive Collision Avoidance and Cloud Automotive Braking services are currently unavailable. Cloud Automotive Acceleration is currently accepting unauthenticated PUT requests. We apologise for any inconvenience caused."

Our algorithms have detected unusual patterns and we have terminated your account as per clause 404 in Terms And Conditions. The vehicle will now stop and you are requested to exit.

Phoenix, Arizona residents think otherwise.

Sure you can, but you'll need to give them your code or the master code. Unless you've enabled Privacy Mode, in which case... I don't know if even the master code would work.

Everyone talking about security and not replacing locks with smart locks seems to forget that you can just kick the fucking door down or jimmy a window open.

Except kicking the door down is not particularly scalable or clandestine.

Or just sawzall a hole in the side of the house...

After you've cut the power, just to be safe? ;)

I wonder if in the future products will advertise that they work independently (decoupling as a feature).

holy shit lmao. I'm sorry that sucks.

And reputation. With this outage the global media socket is going to be in gCloud nine.

and a nice Sunday afternoon

And lots of sales on my case

And the illusion of superiority over non-cloud offerings.

I keep trying to explain to people that our customers don’t care that there is someone to blame; they just want their shit to work. There are advantages to having autonomy when things break.

There’s a fine line, or at least some subtlety, here though. This leads to some interesting conversations when people notice how hard I push back against NIH. You don’t have to be the author to understand and be able to fiddle with tool internals. In a pinch you can tinker with things you run yourself.


> I keep trying to explain to people that our customers don’t care that there is someone to blame; they just want their shit to work. There are advantages to having autonomy when things break.

There are also advantages to being part of the herd.

When you are hosted at some non-cloud data center, and they have a problem that takes them offline, your customers notice.

When you are hosted at a giant cloud provider, and they have a problem that takes them offline, your customers might not even notice because your business is just one of dozens of businesses and services they use that aren't working for them.


Of course customers don't care about the root cause. The point of the cloud isn't to have a convenient scapegoat to punt blame to when your business is affected. It's a calculated risk that uptime will be superior compared to running and maintaining your own infrastructure, thus allowing your business to offer an overall better customer experience. Even when big outages like this one are taken into account, it's often a pretty good bet to take.

How come?

The small bare metal hosting company I use for some projects hardly goes down, and when there is an issue, I can actually get a human being on the phone in 2 minutes. Plus, a bare metal server with tons of RAM costs less than a small VM on the big cloud providers.

> a bare metal server with tons of RAM costs less than a small VM on the big cloud providers

Who are you getting this steal of a deal from?


Hetzner is an example. Been using them for years and it's been a solid experience so far. OVH should be able to match them, and there are others, I'm sure.

Hetzner is a pretty excellent quality service overall. OVH is a very low-quality service, especially with the networking and admin panel.

Anywhere. Really.

Cloud costs roughly 4x as much as bare metal for sustained usage (of my workload). Even with the heavy discounts we get for being a large customer it’s still much more expensive. But I guess opex > capex.


I've had pretty good luck with Green House Data's Colo Service and their Cloud offerings. A couple of RUs in the data center can host thousands of VMs across multiple regions with great connectivity between them.

hetzner.de, online.net, ovh.com, netcup.de for the EU-market.

Care to name names? I've been looking for a small, cheap failover for a moderately low traffic app.

In the US I use Hivelocity. If you want cheapest possible, Hetzner/OVH have deals you can get for _cheap._

I've a question that always stopped me going that route: what happens when a disk or other hardware fails on these servers? Beyond data loss, I mean; physically, what happens? Who carries out the repair, and how long does it take?

Most bare metal providers nowadays contact you just like AWS and say "hey, your hardware is failing, get a new box." Unless it's something exotic, it's usually not long for setup time, and in some cases, just like a VM, it's online in a minute or two.


Thanks a million. Those prices look similar to what I've used in the past; it's just been a long time since I've gone shopping for small-scale dedicated hosting.

You weren't kidding: a 1:10 ratio to what we pay for a similar VPS. And guaranteed worldwide lowest price on one of them. Except we get free bandwidth with ours.

Solutions based on third-party butts have essentially two modes: the usual, where everything is smooth, and the bad one, where nothing works and you're shit out of luck - you can't get to your data anymore, because it's in my butt, accessible only through that butt, and arguably not even your data.

With on-prem solutions, you can at least access the physical servers and get your data out to carry on with your day while the infrastructure gets fixed.


Any solution would be based on third parties; the robust solution is either to run your own country with fuel sources for electricity and an army to defend the datacenters, or to rely on multiple independent infrastructures. I think the latter is less complex.

This is a ridiculous statement. Surely you realise that there is a sliding scale.

You can run your own hardware and pull in multiple power lines without establishing your own country.

I’ve run my own hardware; maybe people have genuinely forgotten what it’s like, and granted, it takes preparation and planning and it’s harder than clicking “go” in a dashboard. But it’s not the same as establishing a country, sourcing your own fuel, and feeding an army. This is absurd.


Correct. Most CFOs I've run into as of late would rather spend $100 on a cloud VM than deal with capex, depreciation, and management of the infrastructure. Even though doing it yourself with the right people can go a lot further.

Assuming you have data that is tiny enough to fit anywhere other than the cluster you were using. Assuming you can afford to have a second instance with enough compute just sitting around. Assuming it's not the HDDs, RAID controller, SAN, etc which is causing the outage. Assuming it's not a fire/flood/earthquake in your datacenter causing the outage.

...etc.


Ah, yes, I will never forget running a site in New Orleans, and the disaster preparedness plan included "When a named storm enters or appears in the Gulf of Mexico, transfer all services to offsite hosting outside the Gulf Coast". We weren't allowed to use Heroku in steady state, but we could in an emergency. But then we figured out they were in St. Louis, so we had to have a separate plan for flooding in the Mississippi River Valley.

Took me a second.

I didn’t know the cloud-to-butt translator worked on comments too. I forgot that was even a thing.


I keep forgetting that I have it on, my brain treats the two words as identical at this point. The translator has this property, which I also tend to forget about, that it will substitute words in your HN comment if you edit it.

But yeah, it's still a thing, and the message behind it isn't any less current.


There is a cloud I've developed that is secure and isn't a butt :P

https://hackaday.io/project/12985-multisite-homeofficehacker...

I made an IoT setup using cheap parts (Arduino, nRF24L01+, sensors/actuators) for local device telemetry, with MQTT, Node-RED, and Tor for connecting clouds of endpoints that aren't local.

Long story short, it's an IoT setup that is secure, consisting of a cloud of devices only you own.

Oh yeah, and GPL3 to boot.


Oh that’s weird, because it totally worked for me with “butts” as a euphemism for “people”, as in “butt-in-seat time” — relying on a third-party service is essentially relying on third party butts (i.e. people), and your data is only accessible through those people, whom you don’t control.

And then “your data is in my butt” was just a play on that.


There are some who argue that the resiliency of cloud providers beats on-prem or self-hosted, and yet they’re down just as much or more (GCP, Azure, and AWS all the same). Don’t take my word for it; search HN for “$provider is down” and observe the frequency of occurrences.

You want velocity for your dev team? You get that. You want better uptime? Your expectations are gonna have a bad time. No need for rapid dev or bursty workloads? You’re lighting money on fire.

Disclaimer: I get paid to move clients to or from the cloud, everyone’s money is green. Opinion above is my own.


and reputation.

Seems to be the private network. The public network looks fine to us from all over the world?

Not on my end. Public access in us-west2 (Los Angeles) is down for me.

Hmmm... why is our monitoring network not showing that?

Edit: ah, looks like the LB is sending LA traffic to Oregon.


Our Oregon VMs are up.

You're brave to jump on here when on holiday!

Shouldn't that outage system be aware when service heartbeats stop?

Could this be a solar flare?


This happened to Amazon S3 as well once. The "X" image they use to indicate a service outage was served by... yup, S3, which was down obviously.

One of the projects I worked on was using data URIs for critical images, and I wouldn’t trust that particular team to babysit my goldfish.

Sounds like Google and Amazon are hiring way too many optimists. I kinda blame the war on QA for part of this, but damn that’s some Pollyanna bullshit.
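
For anyone unfamiliar, a data URI inlines the image bytes into the page itself, so the "we're down" graphic can't be taken out by the very object store it's reporting on. A tiny sketch in Python (the file name is hypothetical):

    import base64

    # Inline the outage graphic so the status page has no dependency on S3/GCS.
    with open("outage-x.png", "rb") as f:   # hypothetical local file
        encoded = base64.b64encode(f.read()).decode()

    img_tag = '<img alt="service outage" src="data:image/png;base64,%s">' % encoded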


Now is a good time to point out that the SLA of Google Cloud Storage only covers HTTP 500 errors: https://cloud.google.com/storage/sla. So if the servers are not responding at all then it's not covered by the SLA. I've brought this to their attention and they basically responded that their network is never down.

Ironically I can't read that page because, since it's Google-hosted, I'm getting an HTTP 500 error... which at least means that failure is SLA-covered...

Cloud services live and die by their reputation, so I'd be shocked if Google ever tried to get out of following an SLA contract based on a technicality like that. It would be business suicide, so it doesn't seem like something to be too worried about?


There goes 3 nines for June and for Q2. I guess everyone gets a 10% discount for the month? https://cloud.google.com/compute/sla
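
For anyone doing the math at home, the downtime budgets behind those uptime levels over a 30-day month are just arithmetic (independent of the exact tiers in Google's SLA):

    # Allowed downtime per 30-day month at various uptime levels.
    minutes_in_month = 30 * 24 * 60   # 43,200 minutes
    for uptime in (0.9999, 0.999, 0.99):
        budget = minutes_in_month * (1 - uptime)
        print("%.2f%% uptime -> %.1f minutes of downtime allowed" % (uptime * 100, budget))
    # 99.99% -> ~4.3 min, 99.9% -> ~43.2 min, 99% -> 432 min (7.2 hours)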

Remember to request the credit!

From that linked page:

"Customer Must Request Financial Credit

In order to receive any of the Financial Credits described above, Customer must notify Google technical support within thirty days from the time Customer becomes eligible to receive a Financial Credit. Customer must also provide Google with server log files showing loss of external connectivity errors and the date and time those errors occurred. If Customer does not comply with these requirements, Customer will forfeit its right to receive a Financial Credit. If a dispute arises with respect to this SLA, Google will make a determination in good faith based on its system logs, monitoring reports, configuration records, and other available information, which Google will make available for auditing by Customer at Customer’s request."


A couple more hours and everyone will get 25% off for June.

Does that apply to the rest of June?

Might be a good month to rebuild all your models ;)


The vultures are circling.

Ironically, the SLA page returns a 502 error.

The discount seems way too small.

I would pay a premium for a cloud provider happy to give a 100 percent discount for the month for 10 minutes of downtime, and a 100 percent discount for the year for an hour's downtime.


Any cloud provider offering those terms would go out of business VERY quickly. Outages happen, all providers are incentivized to minimize the frequency and severity of disruptions - not just from the financial hit of breaching SLA (which for something like this will be significant), but for the reputational damage which can be even more impactful.

How often does Amazon or Google go down for ten minutes?

But let's work backwards from the goal instead.

If you charge twice as much, and then 20-30% of months are refunded by the SLA, you make more money and you have a much stronger motivation to spend some of that cash on luxurious safety margins and double-extra redundancy.

So what thresholds would get us to that level of refunding?


> Any cloud provider offering those terms would go out of business VERY quickly

Minimum spends and a 50,000% markup based on adding that term to your contract.


I think you're proving the parent comment's point. The number of businesses willing to pay a 500x markup is exceedingly small (potentially less than 1), and at that point the cost is high enough that it's probably cheaper to just build the redundancy yourself using multiple cloud providers (and, to emphasize, that option tends to be horribly expensive).

Just take the premium that you'd be willing to pay and put it in the bank -- the premium would be priced such that its expected payout would be less than or equal to what you'd be paying.

Besides, a provider credit is the least of most companies' concerns after an extended outage; it's a small fraction of their remediation costs and loss of customer goodwill.


You know, this reminds me of the bad taste the Google sales team left when I asked for some of my billing that I was unaware of running after following a quickstart guide.

AWS refunded me in the first reply on the same day!

The GCP sales rep just copy-pasted a link to a self-support survey that essentially told me, after a series of YES or NO questions, that they can't refund me.

So why not just tell your customers how it is? Google Cloud is super strict when it comes to billing. I have called my bank to do a chargeback and put a hold on all future billing with GCP.

I'm now back on AWS and still on the Free Tier. Apparently the $300 trial with Google Cloud did not include some critical products; the AWS Free Tier makes it super clear, and even still I sometimes leave something running and discover it in my invoice...

I've yet to receive a reply from Google and it's been a week now.

I do appreciate other products such as Firebase, but honestly, for infrastructure and for future integration with enterprise customers, I feel AWS is more appropriate and mature.


Are you seriously complaining about having to pay for using their resources? I understand that you're surprised some things aren't covered in the free trial or free credit or whatever, but getting $300 free already sounded a little too good to be true (I heard about it from a friend and was dubious; at least in Europe, consumers are told not to enter deals that are too good to be true). You could at least have checked what you're actually getting.

I think it's weird to say you get credit in dollars and then not be able to spend it on everything. That's not how money works. But that's the way hosting providers work, and AFAIK it's quite well known. Especially with a large sum of "free money", even if it's not well known, it was on you to check any small print.


The thing that worries me most about Google Cloud and these billing stories is that I’m assuming if you do a chargeback or block them at your bank, then they’ll ban all your Google accounts, and they’re obviously going to be able to make the link between an account made just for Google Cloud and my real account.

They WILL absolutely block and suspend all accounts indefinitely. They have terminated accounts for failed credit card transactions.

I really wanted to try out their new AutoML, but I was paranoid about entering my credit card and getting banned from Google.


oh man... so ALL of my Gmail gets banned?

this is FUCKED. It's akin to holding my YouTube and Google Play accounts hostage.


all your bases belong to us.

Google is well known for not caring about small shops; only if you are a multi-million-dollar customer with a dedicated account manager can you expect reasonable support. That's been the case forever with them.

Does Amazon treat smaller customers any better? I am genuinely asking, as I have no clue.

Absolutely. I've seen them wipe a number of bills away for companies that have screwed up something. They definitely take a longer view on customer happiness than GCP. Azure also tends to be pretty good in this regard.

Depends.

AWS is mostly easy going.

Only some experiences with the partner program can vary.

I had a guy who wanted to help me out even though I was just a one-person shop. After he left I got a woman who threw me out of the program faster than I could look.


Yes. 100%. We don’t pay AWS much but their help is top notch. We accidentally bought physical instances instead of reserved instances. AWS resolved the issue and credited us. I’ll prob never touch GCE. Google just isn’t a good company at any level.

I've got a personal account with an approximately $1/mo bill (just a couple things in S3) and a work account with ~$1500/mo AWS bill (not a large shop by any means) and I've always felt very positive about my interactions with AWS support

Definitely. A previous small company I worked in had some S3 Snafu and AWS Support was super helpful.

Their ecommerce and AWS both have fantastic help and followups (and also aggressive marketing).

Yes

If you buy their support (which isn't that expensive). Holy fuck it's good. You literally have an infrastructure support engineer on the phone for hours with you. They will literally show you how to spend less money for your hosting while using more AWS services.

>I asked for some of my billing that I was unaware of running

>I have called my bank to do a chargeback

You're issuing a chargeback because you made a mistake and spent someone else's resources? And you're admitting to this on HN? I'm not a lawyer, but that sounds like fraud and / or theft to me.


What was the quickstart guide?

Anything created in-house at Google (GCP) is typically created by technically proficient devs; those devs then leave the project to start something new, and maintenance is left to interns and new hires. Google customer service basically doesn't care and also has no tools at their disposal to fix any issues anyway.

The infinite money spout that is Google Ads has created a situation in which devs are at Google just to have fun; there really is no incentive to maintain anything, because the money will flow regardless of quality.

Source: I interned at Google.


From what I’ve been told, the issue is that the people with political capital (managers, PMs, etc.) are quick to move on after successful launches and milestones. No matter how many competent engineers hang around, the product/team becomes resource- and attention-starved.

Isn't it also that promotions at Google are based on creating new products/projects rather than maintaining existing ones? So engineers have a negative incentive to maintain things since it costs them promotions.

That explains the proliferation of chat services and why they all get actively worse over time until they're replaced.

I'm not sure why you are downvoted; this seems like a reasonable insight and explanation for the drop in quality and the weird decisions Google has been making recently.

It’s not insightful at all. Just one intern’s very brief observations of something way more complicated and nuanced than is deserving of such a dismissive comment.

I'll take brief comments that shed partial light on something over no comments at all and no insight at all.

I have mentioned this multiple times: any criticism of Google is met with a barrage of downvotes. I guess all the Googlers hang around here and they are usually commenting with throwaways.

Google seems to be sort of like a sect of narcissists.


This should be voted higher up.

According to https://twitter.com/bgp4_table, we have just exceeded 768k Border Gateway Protocol routing entries, which may be causing some routers to malfunction.


Isn't it weird that it's happening now even though that number was surpassed nearly a month and a half ago?

Different locations see different counts because of aggregation/de-aggregation.

The GCP status page is worthless, as it's always happy and green when production systems are down; then they might acknowledge something an hour later.

Just like AWS, then. "Some users are experiencing increased error rates" = "Everything has been down for hours"

"Everything is fine, unless you're Carl. There's a massage outage, but only at Carl's house. Sorry, Carl."

I'm also experiencing a massage outage. Please send masseuse.

Goddammit some (most?) days I can't type. "Massive"

I got the missive. Thanks.

I remember when S3 was down and the status was green because the updates for the status page were pushed via S3.

That's not just ironic, that's stupid. How do you count on S3 to update S3 status? Isn't that a huge design fault?

Yes and they fixed that. :)

Azure too. During the most recent outage a couple weeks ago their Twitter account acknowledged the incident an hour before the status page did.

So no matter where you go for your cloud services, you're guaranteed a useless status page. Yippee.


AWS is no better. Something from 2015 I remember: https://twitter.com/SIGKILL/status/630684777813684224?s=19

I swear most status pages are run by folks who aren't "there".

I haven’t written a status page in a while, but the rest of my infrastructure starts freaking out if it hasn’t heard from a service in a while. Why doesn’t their status page have at least a warning about things not looking good?
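
That kind of check is small enough to sketch. A minimal example (hypothetical service names and threshold), assuming each service records a last-heartbeat timestamp somewhere the status page can read:

    import time

    STALE_AFTER_SECONDS = 120  # hypothetical threshold

    # service name -> unix timestamp of its last successful check-in
    last_heartbeat = {"api": time.time() - 30, "storage": time.time() - 600}

    def status(service):
        age = time.time() - last_heartbeat[service]
        if age < STALE_AFTER_SECONDS:
            return "ok"
        return "warning: no heartbeat for %d seconds" % int(age)

    for name in sorted(last_heartbeat):
        print(name, status(name))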

In my experience public status pages are "political" and, no matter how they start, tend to trend towards higher management control in some way... that leads to people who don't know, aren't in the thick of it, don't understand it, and/or are cautious to the point that it stops being useful.

Not only political, but with SLAs on the line they have significant financial and legal consequences as well. Most managers are probably happier keeping the ‘incident declaring power’ in as few hands as possible to make sure those penalty clauses aren’t ever unnecessarily triggered.

That’s fraud in other industries.

Same with most corporate Twitter feeds. I’d like to follow my public transit/airport/highway authority, but it’ll be 10 posts about Kelly’s great work in accounting for every service disruption.

And No, I don’t want to install a separate app to get push notifications about service disruptions for every service I use.


A good Twitter account is a wonderful thing... the bad ones hurt so bad.

Ugh. I guess that just goes to show that any metric can be politicized.

It's just Goodhart's law in effect: if a status page is used as a target metric in an SLA, the status page ceases to be a useful measure.

Status pages are the progress bars of the cloud.

I worked on the networking side for years.

Now I'm on the web development side and I'm all, "Wait a minute... are there any progress bars that are based on anything real!?"

I should have known...


I was noticing massive issues earlier and thought that maybe my account was blocked due to breaching the TOS, as I was heavily playing with Cloud Run. Then I noticed GitLab was also acting up, but my Chinese internet was still surprisingly responsive. I tried the status page, which said everything was fine, and searched Twitter for "google cloud" and also found nobody talking about it. Typically Twitter is the single source of truth for service outages, as people start talking about them.

I think this might be a static page they are hosting on Akamai?


DigitalOcean has the same issue: status pages are actually manually updated and no live data is fed into them.

They update the page manually.

Google Cloud is the number 4 most monitored status page on StatusGator and Google Apps is number 12. In addition, at least 20 other services we monitor seemingly depend on Google Cloud because they all posted issues as soon as Google went down.

It's always interesting to see these outages at large cloud providers spider out across the rest of the internet, a lot of the world depends on Google to stay up.


This feels like the '80s.

When the mainframe is down, terminals are useless.


Yep. The cloud is just a lot of cheap hardware acting together as a shitty mainframe.

Server hardware is actually quite expensive. End users' "smart" phones are cheap hardware, running dumb software which renders them as terminals for the cloud. That's sad, because smartphone hardware is quite capable of doing useful work.

(For instance, I have a 500GB MicroSD card in my phone which contains a copy of my OwnCloud)


"a lot of the world depends on Google to stay up."

Yup, I'm trying to check the Associated Press News right now and it's having trouble connecting to "storage.googleapis.com".


I guess we know what Steam uses (the store at least).

I don't know about Steam, but I know Apple must use Google Cloud: https://www.apple.com/support/systemstatus/

Less than 1% of users are affected

Is there any reason to presume these statuses are correlated?

Apple's issue is

> Users may be experiencing slower than normal performance with this service.


Could be that the only users who were affected were ones caught right in the failover between redundant clouds.

I'm just assuming they are because it's been previously reported that Apple uses GCP (and also AWS).

https://techcrunch.com/2018/02/27/apple-now-relies-on-google...


Guess they don't eat their own dog food; no racks of proprietary Apple servers anywhere (unless they somehow run Darwin images in Google Cloud).

Can't tell if you know the answer to your own question and just can't talk about it due to NDA...

Apple runs Linux on the vast majority of the servers behind their cloud offerings.

No issues for me. Maybe they have a failover mechanism?

yeah, maybe it was coincidence. seems to be back up for me as well: https://steamstat.us

...and only the paranoid survive?

Just because you're paranoid, it doesn't mean they're not out to get you.

And thus were ruined hundreds or thousands of pleasant Sunday afternoons.

I don’t miss being on pager duty one bit. I see it looming in my headlights, sadly.


Spare a thought for the pleasant Australian early Monday mornings too! Always a rude awakening...

It's the Queen's birthday, a Monday off here in New Zealand...

... but not for everybody now.


So what happens when the crown changes? They change the holiday? Immediately? For the next year? Sounds like a bit of a nightmare.

The holiday is on the official birthday. The sovereign's actual birthday has been separate from the official birthday for centuries, so the holiday does not need to change.

If you think that's a hassle, in Japan the calendar changes with the emperor:

https://www.theguardian.com/technology/2018/jul/25/big-tech-...


Nah, it's not even her actual birthday. Different countries with the same queen even celebrate it on different days. Presumably it'll be renamed to "king's birthday" but the day kept the same when the monarch changes. Or it will be done away with/repurposed; there's a general feeling in Australia at least that once the queen dies there will be less support for the monarchy.

Australia celebrates the Queen's Birthday public holiday on different dates in different states already.

It’s not actually the queen's birthday.

In Australia, many states have different dates for the queen's birthday.

So not a nightmare at all.


The only response is to wait for Google to fix it.

Nothing you or I or the pager can do will speed that up.

I am aware some bosses won't believe that and I am not trying to make light of it. But there really isn't much else to do except wait.


Either you wait for Google or you are frantically trying to move everything you've got to AWS.

If you wait, you get back to 100% with no effort or stress on your part.

If you try to be heroic, you get back to 100% with a bunch of wasted effort and stress on your part.

Because it will be fixed by Google, regardless of what you do or don't do.

After the incident is over would be the time to consider alternatives.


So, for some companies, failing over between providers is actually viable and planned for in advance. But it is known in advance that it is time consuming and requires human effort.

The other case is really soft failures for multi-region companies. We degrade gracefully, but once that happens, the question becomes what other stuff can you bring back online. For example, this outage did not impact our infrastructure in GCP Frankfurt, however, it prevented internal traffic in GCP from reaching AWS in Virginia because we peer with GCP there. Also couldn't access the Google cloud API to fall back to VPN over public internet. In other cases, you might realize that your failover works, but timeouts are tuned poorly under the specific circumstances, or that disabling some feature brings the remainder of the product back online.

Additionally, you have people on standby to get everything back in order as soon as possible when the provider recovers. Also, you may need to bring more of your support team online to deal with increased support calls during the outage.


Multi-cloud for those times when you really need that level of availability and can afford it.

It's not even about being able to afford it. Some things just don't lend themselves to hot failover. If your data throughput is high, it may not be feasible or possible to stream a redundant copy to a data center outside the network.

All parts of the system should be copied (if you decide to build a multi-cloud system), not just some of them.

Do you work at G?

Nope. I was more thinking of everyone else.
