All this talk about redundancy, real-time apps, and scalable architecture, and a "simple" DDoS against DNS infrastructure brings half of the internet down.
Honestly, did nobody think about having a spare DNS at some other company? Or even a backup DNS server, exactly for a situation like this?
Not really. I can't access our production servers, which are in US East. Can't access Intercom, which we use for customer support. Our clients are emailing us that the payment provider doesn't work either. So we're losing money despite being in central EU.
The TTL on the NS/glue records that the .com TLD servers hand out is 48 hours, so even if you have Route53 set up and ready to go, it takes a long time to move the zone away from Dyn.
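You can check that yourself. Here's a rough sketch using the dnspython library (my choice of library, nothing Dyn- or AWS-specific) that asks one of the .com TLD servers directly for a zone's delegation and prints the TTL it serves; "example.com" is a placeholder for your own zone:

    # Sketch: query a .com TLD server directly for a zone's delegation
    # and print the TTL on the NS records it hands out.
    # Assumes the third-party dnspython library (pip install dnspython).
    import dns.message
    import dns.query

    DOMAIN = "example.com."    # placeholder zone, substitute your own
    TLD_SERVER = "192.5.6.30"  # a.gtld-servers.net, one of the .com servers

    query = dns.message.make_query(DOMAIN, "NS")
    response = dns.query.udp(query, TLD_SERVER, timeout=5.0)

    # The TLD server is not authoritative for the zone itself, so the
    # delegation NS records come back in the AUTHORITY section.
    for rrset in response.authority:
        print(rrset.name, rrset.ttl, [str(r) for r in rrset])

For a .com zone you should see a TTL of 172800 seconds, i.e. 48 hours.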
We switched from Dyn to Route53 a few weeks ago. It took about 12 hours before half of the traffic had shifted over.
That's the reason to have your DNS hosted with at least two different companies, working in tandem. If one goes down, your Unicorn Corp doesn't go down with it.
Exactly: there's nothing wrong with using only one provider if you're not willing to pay for two services, but if you can't afford downtime you really need active diversity all the way down.
Route53 uses a bunch of different top-level domains for its nameservers for the same reason – if someone does manage to take the .com servers offline, you'll be glad .co.uk is run by a separate organization.
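You can see how diversified any zone's delegation is with a quick sketch (dnspython again; "example.com" is just a stand-in):

    # Sketch: list a zone's NS hosts and group them by the TLD of the
    # nameserver hostname, to see how spread out the delegation is.
    import dns.resolver

    answer = dns.resolver.resolve("example.com", "NS")
    by_tld = {}
    for ns in answer:
        host = str(ns.target).rstrip(".")
        tld = host.rsplit(".", 1)[-1]  # crude: last label only, so .co.uk shows as .uk
        by_tld.setdefault(tld, []).append(host)

    for tld, hosts in sorted(by_tld.items()):
        print(f".{tld}: {hosts}")

A Route53-hosted zone typically shows nameservers spread across several TLDs; a zone delegated to a single provider under a single TLD usually won't.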
How does that work in practice? Even if I set NS records pointing to two different DNS providers, I don't think a DNS client would automatically switch and retry if one is too slow to respond or times out.
Most DNS resolvers will automatically try each NS record until they get a response. That might be your ISP's resolver rather than your iPhone, but it's an old practice going back to when the internet was even less reliable and some random Sun box under someone's desk would fail.
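Roughly, the fallback looks like this (a sketch with dnspython; the two nameserver IPs are documentation placeholders, not real servers):

    # Sketch of what a resolver does: try each authoritative server in
    # turn and return the first answer, skipping servers that time out.
    import dns.exception
    import dns.message
    import dns.query

    NAMESERVERS = ["192.0.2.1", "198.51.100.1"]  # provider A, provider B

    def resolve_with_fallback(qname, rdtype="A", timeout=2.0):
        query = dns.message.make_query(qname, rdtype)
        for server in NAMESERVERS:
            try:
                return dns.query.udp(query, server, timeout=timeout)
            except dns.exception.Timeout:
                continue  # dead or slow server: move on to the next NS
        raise RuntimeError("all nameservers failed")

    print(resolve_with_fallback("example.com."))

So if provider A is getting DDoSed off the net, queries still resolve via provider B, just a bit slower.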
Modern web browsers will do the same thing: if a query returns multiple A records and one address gives a connection error, they'll try the next one.
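Plain socket clients can do the same. Here's a minimal Python sketch that walks every address getaddrinfo returns and connects to the first one that works, which is roughly what Python's own socket.create_connection does internally:

    # Sketch: try every address a hostname resolves to and connect to
    # the first that accepts, like browsers do on connection errors.
    import socket

    def connect_any(host, port, timeout=3.0):
        last_err = None
        for family, socktype, proto, _, addr in socket.getaddrinfo(
                host, port, type=socket.SOCK_STREAM):
            try:
                sock = socket.socket(family, socktype, proto)
                sock.settimeout(timeout)
                sock.connect(addr)
                return sock  # first working address wins
            except OSError as err:
                last_err = err  # refused or timed out: try the next address
        raise last_err or OSError("no addresses for %s" % host)

    conn = connect_any("example.com", 80)
    print("connected to", conn.getpeername())
    conn.close()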
Sadly, we're too interconnected. Every company that relies on that DNS provider should do what you suggest, but the control is definitely not in our (the users') hands.
Yeah, users are screwed. Unless they have a little more experience with how unreliable the cloud can be, and they made a local copy of everything* their work depends on, just in case.
I'm a GitHub employee and want to let everyone know we're aware of the problems this incident is causing and are actively working to mitigate the impact.
"A global event is affecting an upstream DNS provider. GitHub services may be intermittently available at this time." is the content from our latest status update on Twitter (https://twitter.com/githubstatus/status/789452827269664769). Reposted here since some people are having problems resolving Twitter domains as well.
My Uber partner app crashed at 8am while I was trying to complete a trip, and it froze my phone. It took about 5 minutes before I could get back in, but then it asked me for my SSN and permission to do a background check, which is standard for Uber, except I had already done that. Should I be concerned that my personal data has been compromised? I contacted Uber, but their support people don't seem to have a clue, and their fix is super basic, like restart your phone or turn airplane mode or data on and off.
bigcommerce, volusion, new relic, optimizely, wistia, aweber, cnn, campaign monitor, all down for me. The biggest thing is seeing that ALL shopify stores are offline, so much $$$ being lost right now.
Is this another IP webcam etc. attack? Does anyone know of a write-up from a researcher in possession of one of these currently exploited bits of kit?