Introduction
Memcached has been the go-to caching solution for years. It's fast, simple, and works. But it's limited – just key-value storage with expiration. No data structures, no persistence, no pub/sub.
Redis is different. It's an in-memory data structure server that supports strings, lists, sets, sorted sets, and hashes. It has persistence, pub/sub, transactions, and Lua scripting. It's incredibly fast and surprisingly versatile.
I've been using Redis for several months, and it's become indispensable. Let me show you why Redis is more than just a cache.
What is Redis?
Redis (REmote DIctionary Server) is an open-source, in-memory data structure store. Think of it as a data structure server accessible over the network.
Key features:
- In-memory (extremely fast)
- Rich data types (not just strings)
- Persistence (optional)
- Pub/sub messaging
- Transactions
- Lua scripting
- Replication
- Simple protocol
Created by Salvatore Sanfilippo, Redis is used by GitHub, Instagram, Stack Overflow, and many others.
Installing Redis
From source (Linux/Mac):
wget http://redis.googlecode.com/files/redis-2.4.2.tar.gz
tar xzf redis-2.4.2.tar.gz
cd redis-2.4.2
make
Start server:
./src/redis-server
Test:
./src/redis-cli ping
# PONG
Redis is now running on port 6379.
Basic Operations
Connect with redis-cli:
redis-cli
Set and get:
SET name "John"
GET name
# "John"
Increment:
SET counter 0
INCR counter
# 1
INCR counter
# 2
INCRBY counter 10
# 12
Expiration:
SET session "abc123"
EXPIRE session 3600
# Key expires in 1 hour
TTL session
# Seconds remaining
Delete:
DEL name
Simple and fast.
Data Types
Redis isn't just key-value. It has rich data structures:
Strings
Basic key-value:
SET user:1000:name "John Smith"
GET user:1000:name
APPEND user:1000:name " Jr."
GET user:1000:name
# "John Smith Jr."
STRLEN user:1000:name
# 14
Lists
Ordered collections:
LPUSH tasks "Write code"
LPUSH tasks "Review PR"
LPUSH tasks "Deploy"
LRANGE tasks 0 -1
# 1) "Deploy"
# 2) "Review PR"
# 3) "Write code"
RPOP tasks
# "Write code"
LLEN tasks
# 2
Lists are perfect for queues.
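To see why LPUSH plus RPOP behaves as a queue, here's the same sequence modeled with a plain Ruby Array – no server needed; unshift and pop stand in for LPUSH and RPOP:

```ruby
# LPUSH prepends to the head, RPOP removes from the tail,
# so together they form a FIFO queue.
tasks = []
tasks.unshift('Write code')   # LPUSH tasks "Write code"
tasks.unshift('Review PR')    # LPUSH tasks "Review PR"
tasks.unshift('Deploy')       # LPUSH tasks "Deploy"

tasks         # LRANGE tasks 0 -1 => ["Deploy", "Review PR", "Write code"]
tasks.pop     # RPOP tasks        => "Write code"
tasks.length  # LLEN tasks        => 2
```

The oldest item comes out first, which is exactly what you want from a job queue.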
Sets
Unordered collections of unique values:
SADD tags:1 "ruby"
SADD tags:1 "rails"
SADD tags:1 "web"
SMEMBERS tags:1
# 1) "ruby"
# 2) "rails"
# 3) "web"
SISMEMBER tags:1 "ruby"
# 1 (true)
SCARD tags:1
# 3 (count)
SADD tags:2 "ruby"
SADD tags:2 "sinatra"
SINTER tags:1 tags:2
# "ruby" (intersection)
Great for tags, unique visitors, etc.
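If you want to play with the semantics without a server, Ruby's Set behaves the same way – SINTER is just set intersection:

```ruby
require 'set'

# Plain-Ruby stand-in for the tag sets above.
tags1 = Set['ruby', 'rails', 'web']   # SMEMBERS tags:1
tags2 = Set['ruby', 'sinatra']        # SMEMBERS tags:2

shared = tags1 & tags2                # SINTER tags:1 tags:2
shared.to_a  # => ["ruby"]
```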
Sorted Sets
Sets with scores for ordering:
ZADD scores 100 "Alice"
ZADD scores 95 "Bob"
ZADD scores 98 "Charlie"
ZRANGE scores 0 -1 WITHSCORES
# 1) "Bob"
# 2) "95"
# 3) "Charlie"
# 4) "98"
# 5) "Alice"
# 6) "100"
ZREVRANGE scores 0 2
# Top 3 scores (highest first)
ZINCRBY scores 5 "Bob"
# Bob now has 100
Perfect for leaderboards, priority queues.
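To make the ordering concrete, here's the same leaderboard in plain Ruby (no server needed): ZRANGE returns members by ascending score, ZREVRANGE by descending.

```ruby
# The scores from above, modeled as a Ruby hash.
scores = { 'Alice' => 100, 'Bob' => 95, 'Charlie' => 98 }

ascending = scores.sort_by { |_, score| score }.map(&:first)
# ZRANGE scores 0 -1   => ["Bob", "Charlie", "Alice"]

top_three = ascending.reverse.first(3)
# ZREVRANGE scores 0 2 => ["Alice", "Charlie", "Bob"]
```

Redis keeps this ordering up to date on every ZADD, so reads like "top 10" are cheap.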
Hashes
Objects with fields:
HSET user:1000 name "John"
HSET user:1000 email "john@example.com"
HSET user:1000 age 30
HGET user:1000 name
# "John"
HGETALL user:1000
# 1) "name"
# 2) "John"
# 3) "email"
# 4) "john@example.com"
# 5) "age"
# 6) "30"
HINCRBY user:1000 age 1
# 31
Store objects efficiently.
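Over the wire, HGETALL returns a flat field/value list, as the CLI output above shows. Client libraries like redis-rb pair it back into a hash for you; here's that pairing in plain Ruby (the sample values are illustrative):

```ruby
# Flat reply from HGETALL, paired back into a Hash.
flat = ['name', 'John', 'email', 'john@example.com', 'age', '30']
user = Hash[flat.each_slice(2).to_a]

user['name']       # => "John"
user['age'].to_i   # values always come back as strings, so cast => 30
```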
Use Cases
Caching
Replace Memcached:
# Rails example
def get_user(id)
  cached = $redis.get("user:#{id}")
  return JSON.parse(cached) if cached

  user = User.find(id)
  $redis.setex("user:#{id}", 3600, user.to_json)
  user
end
Redis caching is fast and has richer features than Memcached.
Session Storage
Store sessions in Redis:
# Sinatra example
use Rack::Session::Redis, redis: Redis.new
get '/' do
  session[:user_id] = 123
  session[:name] = "John"
end
Sessions survive application restarts.
Queues
Implement job queues:
# Producer
$redis.lpush("jobs", job.to_json)
# Consumer
loop do
  job_data = $redis.brpop("jobs", timeout: 5)
  if job_data
    job = JSON.parse(job_data[1])
    process(job)
  end
end
BRPOP blocks until work is available.
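The blocking consumer pattern can be sketched with Ruby's thread-safe Queue standing in for the Redis list – Queue#pop blocks until an item arrives, just as BRPOP does, so this runs without a server:

```ruby
require 'json'

jobs = Queue.new
jobs << { 'task' => 'send_email' }.to_json   # producer side: LPUSH

processed = []
worker = Thread.new do
  job = JSON.parse(jobs.pop)                 # consumer side: BRPOP (blocks)
  processed << job['task']
end
worker.join

processed  # => ["send_email"]
```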
Counters
Track statistics:
# Page views
$redis.incr("page:#{page_id}:views")
# Unique visitors (using sets)
$redis.sadd("page:#{page_id}:visitors", user_id)
$redis.scard("page:#{page_id}:visitors") # Count
# Rate limiting
key = "rate:#{user_id}:#{Time.now.to_i / 60}"
count = $redis.incr(key)
$redis.expire(key, 59)
if count > 100
  raise "Rate limit exceeded"
end
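Here's a runnable sketch of that fixed-window limiter. The CounterStore class is mine, a tiny in-memory stand-in for Redis INCR so the example runs without a server; with redis-rb you'd swap in $redis.incr plus an EXPIRE for cleanup:

```ruby
# Stand-in for Redis INCR: returns the new count for a key.
class CounterStore
  def initialize
    @counts = Hash.new(0)
  end

  def incr(key)
    @counts[key] += 1
  end
end

class RateLimiter
  LIMIT = 100

  # The clock is injectable so the window can be pinned in examples.
  def initialize(store, clock = -> { Time.now })
    @store = store
    @clock = clock
  end

  def allow?(user_id)
    window = @clock.call.to_i / 60              # one counter per minute
    @store.incr("rate:#{user_id}:#{window}") <= LIMIT
  end
end

# Pin the clock so all 101 requests land in the same window.
limiter = RateLimiter.new(CounterStore.new, -> { Time.at(0) })
results = (1..101).map { limiter.allow?(42) }
results.count(true)  # => 100 (the 101st request is rejected)
```

Because the key embeds the minute, a new window starts automatically when the minute rolls over; EXPIRE in the Redis version just garbage-collects old counters.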
Leaderboards
Using sorted sets:
# Add score
$redis.zadd("leaderboard", score, user_id)
# Get rank
rank = $redis.zrevrank("leaderboard", user_id)
# Top 10
top_10 = $redis.zrevrange("leaderboard", 0, 9, with_scores: true)
Pub/Sub
Real-time messaging:
Publisher:
$redis.publish("chat:room1", message.to_json)
Subscriber:
$redis.subscribe("chat:room1") do |on|
  on.message do |channel, message|
    puts "Received: #{message}"
  end
end
Build chat, notifications, live updates.
Persistence
Redis is in-memory but can persist data:
RDB (Snapshots)
Periodic snapshots:
# redis.conf
save 900 1 # Save after 900 sec if >= 1 key changed
save 300 10 # Save after 300 sec if >= 10 keys changed
save 60 10000 # Save after 60 sec if >= 10000 keys changed
Manual save:
SAVE # Blocking
BGSAVE # Background
AOF (Append-Only File)
Log every write:
# redis.conf
appendonly yes
appendfsync everysec # or always/no
AOF is more durable, but the log file grows larger than an RDB snapshot.
Transactions
Group commands:
MULTI
SET user:1000:name "John"
SET user:1000:email "john@example.com"
INCR user:count
EXEC
The queued commands execute as one atomic block. Note that Redis doesn't roll back: if a queued command fails at runtime, the remaining commands still execute.
With WATCH (optimistic locking):
WATCH user:1000:balance
balance = GET user:1000:balance   # read happens client-side (pseudocode)
MULTI
SET user:1000:balance (balance - 100)
EXEC
# EXEC returns nil if balance changed after WATCH – retry the sequence
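The WATCH pattern is optimistic concurrency: read, compute, commit only if the watched value is unchanged, retry otherwise. Here's a plain-Ruby model of that retry loop; CasStore is a stand-in I wrote (not redis-rb), with compare_and_set returning nil to mimic EXEC aborting:

```ruby
# Stand-in store: per-key versions play the role of WATCH.
class CasStore
  def initialize
    @data = {}
    @versions = Hash.new(0)
  end

  def get(key)
    @data[key]
  end

  def set(key, value)
    @data[key] = value
    @versions[key] += 1
  end

  def version(key)
    @versions[key]
  end

  # Commit only if the key is unchanged since `expected`
  # (EXEC under WATCH); nil signals an aborted transaction.
  def compare_and_set(key, expected, value)
    return nil unless @versions[key] == expected
    set(key, value)
    true
  end
end

# Debit with retry: re-read and retry whenever the commit aborts.
def debit(store, key, amount)
  loop do
    watched = store.version(key)      # WATCH user:1000:balance
    balance = store.get(key).to_i     # GET
    return false if balance < amount
    return true if store.compare_and_set(key, watched, balance - amount)
  end
end

store = CasStore.new
store.set('user:1000:balance', 500)
debit(store, 'user:1000:balance', 100)   # => true
store.get('user:1000:balance')           # => 400
```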
Using Redis from Ruby
Install gem:
gem install redis
Connect:
require 'redis'
$redis = Redis.new(host: 'localhost', port: 6379)
Basic operations:
$redis.set('name', 'John')
$redis.get('name') # "John"
$redis.incr('counter')
$redis.decr('counter')
$redis.expire('session', 3600)
$redis.ttl('session')
Lists:
$redis.lpush('tasks', 'task1')
$redis.rpush('tasks', 'task2')
$redis.lrange('tasks', 0, -1)
$redis.lpop('tasks')
Sets:
$redis.sadd('tags', 'ruby')
$redis.smembers('tags')
$redis.sismember('tags', 'ruby')
Sorted sets:
$redis.zadd('scores', 100, 'Alice')
$redis.zrange('scores', 0, -1, with_scores: true)
$redis.zrevrank('scores', 'Alice')
Hashes:
$redis.hset('user:1', 'name', 'John')
$redis.hget('user:1', 'name')
$redis.hgetall('user:1')
Using Redis from Node.js
Install:
npm install redis
Code:
var redis = require('redis');
var client = redis.createClient();

client.set('name', 'John', function(err, reply) {
  console.log(reply); // OK
});

client.get('name', function(err, reply) {
  console.log(reply); // John
});

client.lpush('tasks', 'task1');
client.lpush('tasks', 'task2');

client.lrange('tasks', 0, -1, function(err, tasks) {
  console.log(tasks);
});
Practical Example: Simple Analytics
Track page views and unique visitors:
class Analytics
  def initialize(redis = $redis)
    @redis = redis
  end

  def track_visit(page_id, user_id)
    date = Time.now.strftime('%Y-%m-%d')

    # Total views
    @redis.incr("views:#{page_id}")

    # Daily views
    @redis.incr("views:#{page_id}:#{date}")

    # Unique visitors
    @redis.sadd("visitors:#{page_id}", user_id)

    # Daily unique visitors
    @redis.sadd("visitors:#{page_id}:#{date}", user_id)
  end

  def get_stats(page_id)
    date = Time.now.strftime('%Y-%m-%d')

    {
      total_views: @redis.get("views:#{page_id}").to_i,
      today_views: @redis.get("views:#{page_id}:#{date}").to_i,
      unique_visitors: @redis.scard("visitors:#{page_id}"),
      today_visitors: @redis.scard("visitors:#{page_id}:#{date}")
    }
  end
end

analytics = Analytics.new
analytics.track_visit(123, 'user456')
puts analytics.get_stats(123)
Fast, simple analytics.
Performance
Redis is fast. Really fast.
Benchmarks:
redis-benchmark -q -n 100000
Typical results:
- SET: ~80,000 ops/sec
- GET: ~100,000 ops/sec
- INCR: ~90,000 ops/sec
- LPUSH: ~90,000 ops/sec
Most single-key operations are O(1); range commands like LRANGE and ZRANGE are O(N) in the number of elements returned.
Redis vs Memcached
| Feature | Redis | Memcached |
|---|---|---|
| Data types | Many | Strings only |
| Persistence | Yes | No |
| Pub/sub | Yes | No |
| Replication | Yes | No |
| Transactions | Yes | No |
| Speed | Very fast | Very fast |
Redis does more; Memcached is simpler.
Redis vs MongoDB
Different use cases:
Redis:
- In-memory
- Simple data structures
- Extremely fast
- Limited query capabilities
MongoDB:
- Disk-based (with caching)
- Complex documents
- Rich queries
- Better for large datasets
Use Redis for caching and simple data. Use MongoDB for complex data and queries.
Common Patterns
Cache-aside:
def get_user(id)
  cached = $redis.get("user:#{id}")
  return JSON.parse(cached) if cached

  user = User.find(id)
  $redis.setex("user:#{id}", 3600, user.to_json)
  user
end
Write-through cache:
def update_user(id, attributes)
  user = User.find(id)
  user.update_attributes(attributes)
  $redis.setex("user:#{id}", 3600, user.to_json)
  user
end
Counting:
$redis.incr("downloads:#{file_id}")
$redis.incr("downloads:total")
Recent items (lists):
$redis.lpush("recent:items", item.to_json)
$redis.ltrim("recent:items", 0, 99) # Keep 100 items
Monitoring
INFO command:
INFO
INFO stats
INFO memory
Shows server stats, memory usage, key counts.
Monitor commands:
MONITOR
Shows all commands in real-time (debugging only – slow!).
redis-cli:
redis-cli info | grep used_memory_human
redis-cli dbsize
Security
By default, Redis has no authentication. For production:
Set password:
# redis.conf
requirepass yourpassword
Connect with password:
redis-cli -a yourpassword
Bind to localhost:
bind 127.0.0.1
Disable dangerous commands:
rename-command CONFIG ""
rename-command FLUSHALL ""
Never expose Redis to the internet without protection.
Scaling
Replication:
Master-slave replication is easy:
# On slave
slaveof master-ip 6379
Slaves are read-only copies.
Partitioning:
Split data across multiple Redis instances. Client-side hashing or use Redis Cluster (experimental).
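Client-side hashing can be as simple as routing each key by a stable hash of its name. A minimal sketch (node addresses are placeholders; Zlib.crc32 just gives a cheap, stable hash):

```ruby
require 'zlib'

NODES = ['redis-a:6379', 'redis-b:6379', 'redis-c:6379'].freeze

# Route a key to one of the instances by hashing its name.
def node_for(key)
  NODES[Zlib.crc32(key) % NODES.length]
end

node_for('user:1000') == node_for('user:1000')  # => true (routing is stable)
```

The catch with plain modulo hashing: adding or removing a node remaps most keys. Consistent hashing reduces that churn, which is why client libraries tend to use it.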
Limitations
Memory: All data must fit in RAM. Use persistence and eviction policies.
Single-threaded: One CPU core per instance. Run multiple instances for more cores.
No joins: Not a relational database. Denormalize data.
Network overhead: Many small operations can saturate network.
Know the limitations and design accordingly.
Tools
redis-cli: Command-line interface
redis-benchmark: Performance testing
redis-stat: Monitor Redis stats
redis-commander: Web UI for Redis
redis-dump: Backup/restore tool
Wrapping Things Up
Redis is versatile. It's a cache, a database, a message broker, a queue. The rich data structures enable solutions that are awkward with simple key-value stores.
Performance is excellent. Operations are fast, and because Redis executes commands on a single thread, each command runs atomically – there are no locks to reason about.
Persistence options provide durability when needed. Pub/sub enables real-time features. Transactions ensure data consistency.
Is Redis perfect? No. It's in-memory, so datasets must fit in RAM. It's not a replacement for traditional databases in all cases.
But for caching, sessions, queues, counters, real-time features, and many other uses, Redis excels. It's become an essential tool in modern web stacks.
Try Redis on your next project. Start with caching, then explore lists for queues, sorted sets for leaderboards, pub/sub for real-time updates. You'll find Redis has a place in nearly every application.
Fast, simple, powerful. That's Redis.