We have 75 (and growing) servers that need to share data via Redis. Ideally, all 75 servers would write to two fields in Redis with INCRBYFLOAT operations. We anticipate eventually having millions of daily write operations and billions of daily reads on these two fields. This data must be persistent.
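
Roughly, each server's write path would look like this (a sketch in Python with redis-py; the key names spend:daily and spend:total are placeholders):

    import redis

    r = redis.Redis(host='localhost', port=6379, decode_responses=True)

    def record_win(cost):
        # After winning an auction, add its cost to both running totals.
        # Each INCRBYFLOAT is a single server-side command; the pipeline
        # just batches the two increments into one round trip.
        pipe = r.pipeline()
        pipe.incrbyfloat('spend:daily', cost)   # placeholder key names
        pipe.incrbyfloat('spend:total', cost)
        pipe.execute()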

We're concerned that Redis locking might cause write operations to be repeatedly retried with many simultaneous attempts to increment the same field.

Questions:

  • Are multiple simultaneous INCRBYFLOAT operations on a single field a bad idea under very heavy load?
  • Should we have an external process "summarize" separate fields and write the two fields instead? (this introduces another failure point)
  • Will reads on those two fields block while writing?

Redis does not lock. It is also single threaded, so there are no race conditions, and neither reads nor writes block.

You can run millions of INCRBYFLOAT operations on the same key without any problems, so there is no need for an external summarizing process. Reading those fields does not pose any problems either.
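
For illustration, here is a minimal sketch (using redis-py and a throwaway key) of why concurrent increments are never lost: the server applies each INCRBYFLOAT atomically, so the final value is always the exact sum, with no retries needed.

    import redis
    from concurrent.futures import ThreadPoolExecutor

    r = redis.Redis(decode_responses=True)  # redis-py clients are thread-safe
    r.delete('counter')

    def bump(_):
        # One atomic server-side operation: there is no client-side
        # read-modify-write cycle, so nothing can race or need a retry.
        r.incrbyfloat('counter', 0.5)

    with ThreadPoolExecutor(max_workers=20) as pool:
        list(pool.map(bump, range(1000)))

    print(r.get('counter'))  # always '500', never less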

That said, millions of updates to two keys sounds strange. If you can explain your use case, there might be a better way to handle it within Redis.

    
75 servers will be bidding on potentially billions of auctions per day. For auctions we win (potentially millions per day), we need to write daily and total amounts spent. We must not exceed our daily or total spending limits. This means that for each of those billions of auctions, we need to read the daily and total amounts and stop bidding if they are exceeded. Thus, billions of daily reads and potentially millions of daily writes. – Ovid May 18 '12 at 10:01
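
A sketch of that read-side check (budget values and key names are placeholders):

    import redis

    r = redis.Redis(decode_responses=True)
    DAILY_LIMIT = 50_000.0      # placeholder budgets
    TOTAL_LIMIT = 2_000_000.0

    def may_bid():
        # Fetch both running totals in one round trip; stop bidding
        # as soon as either budget is exceeded.
        daily, total = r.mget('spend:daily', 'spend:total')
        return (float(daily or 0) < DAILY_LIMIT and
                float(total or 0) < TOTAL_LIMIT)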
    
You claim that reads and writes do not block. blinkov claims that writes will generally block reads. How do I verify which of you is correct? – Ovid May 18 '12 at 10:14
Sri is correct - a Redis instance is single-threaded so everything is serialized (read or write operations). No lock is required. See redis.io/topics/benchmarks and run your own benchmark to evaluate if performance will suit you. – Didier Spezia May 18 '12 at 10:23

Since Redis is single threaded, you will probably want to use master-slave replication to separate writes from reads, because yes, writes will generally block reads.
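
For example, with redis-py the split might look like this (host names are placeholders):

    import redis

    master = redis.Redis(host='redis-master', decode_responses=True)    # writes
    replica = redis.Redis(host='redis-replica', decode_responses=True)  # reads

    master.incrbyfloat('spend:daily', 12.34)            # increments go to the master
    print(replica.mget('spend:daily', 'spend:total'))   # reads served by the replica

Keep in mind that Redis replication is asynchronous, so replica reads can be slightly stale; for a spending cap that typically means briefly overshooting by a few in-flight bids.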

Alternatively, you could consider Apache ZooKeeper for this; it provides reliable cluster coordination without a single point of failure (unlike a single Redis instance).

    
Now I'm confused. Below, Sripathi claims that reads and writes do not block. You claim that writes will generally block reads. How do I verify which of you is correct? – Ovid May 18 '12 at 10:13
    
Actually, write operations do not really block read operations; everything is serialized. Using Zookeeper to avoid the single point of failure is not a bad suggestion, though. But the throughput will not be the same (Redis is much more efficient). – Didier Spezia May 18 '12 at 10:26
I meant that because of its single-threaded model, a Redis instance cannot at any moment process two or more requests simultaneously, regardless of whether they are reads or writes. And since the question stated that there will be far more reads than writes, it is logical to distribute the read load to slaves. – Ivan Blinkov May 18 '12 at 14:14
