May 30, 2008 4:00 AM PDT

Google spotlights data center inner workings

SAN FRANCISCO--The inner workings of Google just became a little less secret.

The search colossus has shed only occasional light on its data center operations, but on Wednesday, Google fellow Jeff Dean turned a spotlight on some parts of the operation. Speaking to an overflowing crowd at the Google I/O conference here, Dean managed simultaneously to demystify Google a little while also showing just how exotic the company's infrastructure really is.

Google fellow Jeff Dean

(Credit: Stephen Shankland/CNET News.com)

On the one hand, Google uses more-or-less ordinary servers. Processors, hard drives, memory--you know the drill.

On the other hand, Dean seemingly thinks clusters of 1,800 servers are pretty routine, if not exactly ho-hum. And the software the company runs on top of that hardware, which enables a sub-half-second response to an ordinary Google search query involving 700 to 1,000 servers, is another matter altogether.

Google doesn't reveal exactly how many servers it has, but I'd estimate it's easily in the hundreds of thousands. It puts 40 servers in each rack, Dean said, and by one reckoning, Google has 36 data centers across the globe. With 150 racks per data center, that would mean Google has more than 200,000 servers, and I'd guess it's far beyond that and growing every day.
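
Here's that back-of-the-envelope arithmetic spelled out, with the caveat that every input is a rough, publicly reported figure rather than anything Google has confirmed:

# A back-of-the-envelope server estimate from the figures cited above.
# Every input is a rough, reported number, not a Google-confirmed figure.
SERVERS_PER_RACK = 40          # per Dean
RACKS_PER_DATA_CENTER = 150    # this article's rough assumption
DATA_CENTERS = 36              # one outside reckoning of Google's footprint

total_servers = SERVERS_PER_RACK * RACKS_PER_DATA_CENTER * DATA_CENTERS
print(f"Estimated servers: {total_servers:,}")   # 216,000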

Regardless of the true numbers, it's fascinating what Google has accomplished, in part by largely ignoring much of the conventional computing industry. Where even massive data centers such as those behind the New York Stock Exchange or airline reservation systems use a lot of mainstream servers and software, Google largely builds its own technology.

I'm sure a number of server companies are sour about it, but Google clearly believes its technological destiny is best left in its own hands. Co-founder Larry Page encourages a "healthy disrespect for the impossible" at Google, Marissa Mayer, vice president of search products and user experience, said in a speech Thursday.

To operate on Google's scale requires the company to treat each machine as expendable. Server makers pride themselves on their high-end machines' ability to withstand failures, but Google prefers to invest its money in fault-tolerant software.

"Our view is it's better to have twice as much hardware that's not as reliable than half as much that's more reliable," Dean said. "You have to provide reliability on a software level. If you're running 10,000 machines, something is going to die every day."

Breaking in is hard to do
Bringing a new cluster online shows just how fallible hardware is, Dean said.

In each cluster's first year, it's typical that 1,000 individual machine failures will occur; thousands of hard drive failures will occur; one power distribution unit will fail, bringing down 500 to 1,000 machines for about 6 hours; 20 racks will fail, each time causing 40 to 80 machines to vanish from the network; 5 racks will "go wonky," with half their network packets missing in action; and the cluster will have to be rewired once, affecting 5 percent of the machines at any given moment over a 2-day span, Dean said. And there's about a 50 percent chance that the cluster will overheat, taking down most of the servers in less than 5 minutes and taking 1 to 2 days to recover.
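
To put those first-year figures in day-to-day terms, here's a rough conversion into daily rates. The yearly totals come straight from Dean's talk; spreading them evenly across 365 days is my simplification, since real failures tend to arrive in bunches.

# Rough daily rates for one new cluster, from the first-year figures Dean cited.
# Assumes events arrive uniformly over the year, which they don't in practice.
FIRST_YEAR_EVENTS = {
    "individual machine failures": 1000,
    "rack failures": 20,
    "wonky racks (heavy packet loss)": 5,
}

for event, per_year in FIRST_YEAR_EVENTS.items():
    print(f"{event}: roughly {per_year / 365:.2f} per day")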

A look at a custom-made Google rack with 40 servers from a modern data center. Infrastructure guru Jeff Dean showed the snapshot at the Google I/O conference.

(Credit: Stephen Shankland-CNET News.com/Jeff Dean-Google)

While Google uses ordinary hardware components for its servers, it doesn't use conventional packaging. Google required Intel to create custom circuit boards. And, Dean said, the company currently uses an in-house design that puts a case around each 40-server rack rather than a conventional case around each individual server.

The company has a small number of server configurations, some with a lot of hard drives and some with few, Dean said. And there are some differences at the larger scale, too: "We have heterogeneity across different data centers but not within data centers," he said.

As to the servers themselves, Google likes multicore chips, those with many processing engines on each slice of silicon. Many software companies, accustomed to better performance from ever-faster chip clock speeds, are struggling to adapt to the multicore approach, but it suits Google just fine. The company already had to adapt its technology to an architecture that spans thousands of computers, so it has already made the jump to parallelism.

"We really, really like multicore machines," Dean said. "To us, multicore machines look like lots of little machines with really good interconnects. They're relatively easy for us to use."

Although Google requires a fast response for search and other services, its parallelism can deliver that speed even if a single sequence of instructions, called a thread, is relatively slow. That's music to the ears of processor designers focusing on multicore and multithreaded models.

"Single-thread performance doesn't matter to us really at all," Dean said. "We have lots of parallelizable problems."

The secret sauce
So how does Google get around all these earthly hardware concerns? With software--and this is where you might think about dusting off your computer science degree.

A Google data center, circa 2000. Note the fan on the floor to cool servers.

(Credit: Stephen Shankland-CNET News.com/Jeff Dean-Google)

Dean described three core elements of Google's software: GFS, the Google File System; BigTable; and the MapReduce algorithm. And although Google contributes to many of the open-source projects that helped the company get its start, these three packages remain proprietary, described publicly only in general terms.

GFS, at the lowest level of the three, stores data across many servers and runs on almost all machines, Dean said. Some incarnations of GFS are file systems "many petabytes in size"--a petabyte being a million gigabytes. There are more than 200 clusters running GFS, and many of these clusters consist of thousands of machines.

GFS stores each chunk of data, typically 64MB in size, on at least three machines called chunkservers; master servers are responsible for backing up data to a new area if a chunkserver failure occurs. "Machine failures are handled entirely by the GFS system, at least at the storage level," Dean said.
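
Dean's description suggests a simple mental model: a master tracks which chunkservers hold each chunk and restores the replication factor when one of them drops out. The sketch below is only that mental model, with names I've invented for illustration; it is not GFS's actual design or interface.

# A toy model of the replication behavior described above: each chunk lives on
# at least three chunkservers, and a master re-replicates chunks whenever a
# chunkserver fails. All names here are invented for illustration.
import random

REPLICATION_FACTOR = 3

class ToyMaster:
    def __init__(self, chunkservers):
        self.chunkservers = set(chunkservers)
        self.locations = {}                      # chunk id -> set of chunkservers

    def place_chunk(self, chunk_id):
        self.locations[chunk_id] = set(
            random.sample(sorted(self.chunkservers), REPLICATION_FACTOR))

    def handle_failure(self, dead_server):
        self.chunkservers.discard(dead_server)
        for servers in self.locations.values():
            servers.discard(dead_server)
            # Copy the chunk to healthy servers until it has three replicas again.
            while len(servers) < REPLICATION_FACTOR and self.chunkservers - servers:
                servers.add(random.choice(sorted(self.chunkservers - servers)))

master = ToyMaster([f"chunkserver-{i}" for i in range(10)])
master.place_chunk("chunk-0001")
master.handle_failure("chunkserver-3")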

To provide some structure to all that data, Google uses BigTable. Commercial databases from companies such as Oracle and IBM don't cut the mustard here. For one thing, they don't operate at the scale Google demands, and if they did, they'd be too expensive, Dean said.

BigTable, which Google began designing in 2004, is used in more than 70 Google projects, including Google Maps, Google Earth, Blogger, Google Print, Orkut, and the core search index. The largest BigTable instance manages about 6 petabytes of data spread across thousands of machines, Dean said.
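
Dean didn't dwell on BigTable's data model on stage, but Google's published BigTable paper describes it as a sparse map from a row key, column, and timestamp to an uninterpreted string of bytes. The toy sketch below shows only that shape and nothing of the real implementation.

# A toy illustration of a BigTable-style data model: a sparse map from
# (row key, column, timestamp) to a value, keeping multiple timestamped
# versions per cell. Purely illustrative; not how BigTable is built.
import time
from collections import defaultdict

class ToyTable:
    def __init__(self):
        # row key -> column -> list of (timestamp, value), newest first
        self.rows = defaultdict(lambda: defaultdict(list))

    def put(self, row_key, column, value, timestamp=None):
        ts = timestamp if timestamp is not None else time.time()
        cells = self.rows[row_key][column]
        cells.append((ts, value))
        cells.sort(reverse=True)                 # newest version first

    def get(self, row_key, column):
        cells = self.rows[row_key][column]
        return cells[0][1] if cells else None    # latest version, if any

table = ToyTable()
table.put("com.cnn.www", "contents:", "<html>...</html>")
print(table.get("com.cnn.www", "contents:"))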

MapReduce, the first version of which Google wrote in 2003, gives the company a way to actually make something useful of its data. For example, MapReduce can find how many times a particular word appears in Google's search index, produce a list of the Web pages on which a word appears, and compile the list of all Web sites that link to a particular Web site.

With MapReduce, Google can build an index that shows which Web pages all have the terms "new," "york," and "restaurants"--relatively quickly. "You need to be able to run across thousands of machines in order for it to complete in a reasonable amount of time," Dean said.
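
Google's MapReduce itself is proprietary, but the programming model Dean described boils down to a map function that emits key/value pairs and a reduce function that aggregates everything emitted under the same key. Here is a single-process sketch of that model counting which pages contain each word; the whole point of the real system is that the same two functions run across thousands of machines.

# A single-process sketch of the MapReduce programming model: for each word,
# count how many pages it appears on. The real system distributes the same
# map and reduce functions across thousands of machines.
from collections import defaultdict

def map_phase(doc_id, text):
    """Emit a (word, doc_id) pair for every word in one document."""
    for word in text.lower().split():
        yield word, doc_id

def reduce_phase(word, doc_ids):
    """Collapse a word's emitted values into a page count and page list."""
    pages = sorted(set(doc_ids))
    return word, (len(pages), pages)

docs = {
    "page1": "new york restaurants",
    "page2": "restaurants in new york new york",
}

intermediate = defaultdict(list)
for doc_id, text in docs.items():
    for word, value in map_phase(doc_id, text):
        intermediate[word].append(value)          # the "shuffle" step

results = dict(reduce_phase(w, ids) for w, ids in intermediate.items())
print(results["restaurants"])                     # (2, ['page1', 'page2'])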

The MapReduce software is seeing increasing use within Google. It ran 29,000 jobs in August 2004 and 2.2 million in September 2007. Over that period, the average time to complete a job has dropped from 634 seconds to 395 seconds, while the output of MapReduce tasks has risen from 193 terabytes to 14,018 terabytes, Dean said.

On any given day, Google runs about 100,000 MapReduce jobs; each occupies about 400 servers and takes about 5 to 10 minutes to finish, Dean said.

That's a basis for some interesting math. Assuming the servers do nothing but MapReduce, that each server works on only one job at a time, and that they work around the clock, that means MapReduce occupies about 139,000 servers if the jobs take 5 minutes each. For 7.5-minute jobs, the number increases to 208,000 servers; if the jobs take 10 minutes, it's 278,000 servers.
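
For anyone who wants to check that math, here it is spelled out under the same simplifying assumptions: the servers run nothing but MapReduce, one job at a time, around the clock.

# The back-of-the-envelope MapReduce math above, spelled out. Assumes servers
# run MapReduce exclusively, one job at a time, 24 hours a day.
JOBS_PER_DAY = 100_000
SERVERS_PER_JOB = 400
MINUTES_PER_DAY = 24 * 60

for job_minutes in (5, 7.5, 10):
    server_minutes = JOBS_PER_DAY * SERVERS_PER_JOB * job_minutes
    print(f"{job_minutes}-minute jobs: ~{server_minutes / MINUTES_PER_DAY:,.0f} servers")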

My calculations could be off base, but even qualitatively, that's enough computing horsepower to make the mind boggle.

Fault-tolerant software
MapReduce, like GFS, is explicitly designed to sidestep server problems.

"When a machine fails, the master knows what task that machine was assigned and will direct the other machines to take up the map task," Dean said. "You can end up losing 100 map tasks, but can have 100 machines pick up those tasks."

The MapReduce reliability was severely tested once during a maintenance operation on one cluster with 1,800 servers. Workers unplugged groups of 80 machines at a time, during which the other 1,720 machines would pick up the slack. "It ran a little slowly, but it all completed," Dean said.

And in a 2004 presentation, Dean said, one system withstood a failure of 1,600 servers in a 1,800-unit cluster.

Next-generation data center to-do list
So all is going swimmingly at Google, right? Perhaps, but the company isn't satisfied and has a long to-do list.

Most companies are trying to figure out how to move jobs gracefully from one server to another, but Google is a few orders of magnitude above that challenge. It wants to be able to move jobs from one data center to another--automatically, at that.

"We want our next-generation infrastructure to be a system that runs across a large fraction of our machines rather than separate instances," Dean said.

Right now some massive file systems have different names--GFS/Oregon and GFS/Atlanta, for example--but they're meant to be copies of each other. "We want a single namespace," he said.

These are tough challenges indeed considering Google's scale. No doubt many smaller companies look enviously upon them.

19 comments
by divisionbyzero May 30, 2008 4:49 AM PDT
"It wants to be able to move jobs from one data center to another--automatically, at that."

Maybe that's what they are doing with all of that dark fiber? They'd need a high speed, low latency, dedicated link to ensure the data traveled quickly enough to be useful.
by mukindoggy May 30, 2008 8:20 AM PDT
Latest news:

Google has 36 data centers across the globe.

52 in Mars

12 in Moon

True universal linking. Phoenix over to you...
by Jamie_Foster May 30, 2008 8:36 AM PDT
How come this geezer dresses in cheap $5 polyester shirts? He probably thinks he's cool.
by johnsin May 30, 2008 12:05 PM PDT
I'm too sexy for your Blade, too sexy for your Blade. You won't see no Blade Servers here ba-by.. Yah!
by chettai May 31, 2008 5:09 AM PDT
Wow, Google... our dreams... We've adopted your technology in our systems. Thanks for being a role model. We the angels.
angelsvista.com
by notthe600 May 31, 2008 6:26 AM PDT
Phoenix this is Aurora. Google has created 7 new experimental data centers, and is now broadcasting them throughout the universe piggybacked on EM pulses. Over.
by hackertarget May 31, 2008 8:13 PM PDT
Blades are likely too expensive--those cut-down, no-frills servers are really interesting--they probably source them directly from China/Taiwan.

So how do they cool those custom racks? In-row coolers or massive air-con units blasting away?

Interesting article. :)
by thommym June 1, 2008 2:27 AM PDT
Looks like they need to go to Sun for some T5140 and they'll get 128 threads/RU. That's a _lot_ more than Intel can provide...
by Fil0403 June 3, 2008 10:10 AM PDT
"Our view is it's better to have twice as much hardware that's not as reliable than half as much that's more reliable," Dean said.

IMHO this would be widely considered completely ignorant, stupid, and wrong if it came from any regular person (or from a Microsoft employee, of course).
by irvine_k June 4, 2008 2:21 AM PDT
"...a petabyte being a million gigabytes."

O_O did I miss something?
by jjpaul1234 June 8, 2008 3:56 PM PDT
Their Reston location is actually on Business Center Dr., Reston, VA (http://tinyurl.com/6f6gwp). They have all their employees park around back to make the building appear empty.
by sugumar_1975 June 18, 2008 5:29 AM PDT
My guess is they have more than four hundred thousand systems. I have yet to completely understand their MapReduce; a buddy told me about it some time ago. Very nice article, with so much data about system and component failures. It is useful for me.

One interesting thing about Google is that they grow at the same scale as the Internet grows. It is their requirement! And they can do things that are Internet-wide. Search text, images, and now 'share anything' is another example. Google rocks!
by benjaminstraight July 25, 2008 6:21 PM PDT
Unbelievable and inspirational, at the same time!