Valkey vs Redis vs KeyDB: Which In-Memory Store After the License Change?
Look, if you've been in the Redis ecosystem for any length of time, the last two years have felt like a soap opera. License changes, community forks, corporate drama — the whole deal. And now you're staring at three options that all look suspiciously similar: Valkey, Redis, and KeyDB.
Let me cut through the noise and help you figure out which one actually fits your stack.
What Happened (The Short Version)
In March 2024, Redis Ltd. switched from the permissive BSD license to a dual RSALv2/SSPL model. The community responded predictably — the Linux Foundation launched Valkey as a BSD-licensed fork, and adoption took off fast. Meanwhile, KeyDB (the multithreaded fork started by EQ Alpha in 2019 and later acquired by Snap) kept doing its own thing.
Then in 2025, Redis changed course again — Redis 8 adds AGPLv3, an OSI-approved open-source license, alongside the existing RSALv2/SSPL options. It's a significant pivot: Redis Stack features (JSON, Search, TimeSeries) are now baked into core Redis, and AGPLv3 is far friendlier than SSPL. But it still has teeth — if you modify Redis and offer it as a network service, you must release your changes.
So now you've got three in-memory stores that share DNA but diverge in meaningful ways. Here's how they stack up.
Performance: The Numbers That Matter
This is where things get interesting. Valkey 8.0 introduced I/O threading that fundamentally changed the single-threaded bottleneck story:
| Benchmark | Valkey 8.0 | Redis 7.4 | KeyDB |
|---|---|---|---|
| GET ops/sec (single node) | ~999K | ~729K | ~850K |
| Threading model | I/O multithreaded | Single-threaded + I/O threads | Fully multithreaded |
| Connection handling | Async I/O threads | Main thread + helpers | Multi-master threads |
Valkey's I/O threading isn't just a marketing number — it distributes network I/O across threads while keeping the command execution single-threaded (preserving atomicity). KeyDB takes the opposite approach with full multithreading, which gives it raw throughput but adds complexity around thread safety.
Redis 7.4 remains single-threaded at its core with optional I/O thread helpers. It's the most battle-tested approach, but you're leaving performance on the table for high-throughput workloads.
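To make the threading comparison concrete, here's roughly what enabling threading looks like in each engine's config file. The directive names come from each project's documentation; the thread counts are illustrative, not tuned recommendations — a common rule of thumb is to keep the thread count below your physical core count:

```ini
# valkey.conf — Valkey 8 async I/O threading
# (command execution stays single-threaded, preserving atomicity)
io-threads 8

# redis.conf — Redis optional I/O thread helpers
# (main thread still executes all commands)
io-threads 4
io-threads-do-reads yes

# keydb.conf — KeyDB full multithreading
# (each thread runs its own event loop)
server-threads 4
```

Restart the server after changing these; none of them can be applied to a running instance via CONFIG SET in all versions, so check your engine's docs before relying on live reconfiguration.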
Licensing: The Elephant in the Room
This is probably why you're reading this article:
| | Valkey | Redis 8+ | KeyDB |
|---|---|---|---|
| License | BSD 3-Clause | AGPLv3 | BSD 3-Clause |
| OSI-approved | ✅ Yes | ✅ Yes | ✅ Yes |
| Cloud hosting | ✅ Unrestricted | ⚠️ Must share modifications | ✅ Unrestricted |
| Commercial use | ✅ Full | ✅ Full (with source disclosure) | ✅ Full |
| Fork-friendly | ✅ Yes | ⚠️ Copyleft obligations | ✅ Yes |
All three are now OSI-approved open source — a big deal after the SSPL controversy. The difference is copyleft: AGPLv3 requires you to share source code if you modify Redis and offer it as a network service. For most teams running Redis internally, that's a non-issue. But if you're building a managed database product on top of Redis, the copyleft clause matters.
Valkey and KeyDB both use BSD 3-Clause, which imposes essentially no restrictions on how you use, modify, or distribute them beyond preserving the attribution notice.
Unique Features Worth Knowing
Valkey isn't just a copy-paste fork. It's already shipping features Redis doesn't have:
- RDMA support — kernel-bypass networking for sub-millisecond latency in data center deployments
- Dual-channel replication — separates replication backlog from snapshot delivery, reducing failover time
- 300%+ GitHub star growth since launch, backed by AWS, Google Cloud, Oracle, and Ericsson
Redis reunified its ecosystem with Redis 8:
- Vector sets (Redis 8.0) — native vector similarity search built into core
- Unified core — JSON, Search, TimeSeries, and probabilistic data types all integrated (no separate Redis Stack needed)
- Up to 87% faster on some commands and up to 2x throughput improvements in Redis 8
- Redis Copilot — AI-assisted query building in Redis Insight
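If vector search is what draws you to Redis 8, the new vector set commands look roughly like this. The syntax below follows the initial vector sets documentation — the data type shipped as beta in 8.0, so verify the exact command forms against your build, and note that the key and element names here are made up:

```
# Add 3-dimensional embeddings to a vector set
VADD products VALUES 3 0.1 0.9 0.3 item:1
VADD products VALUES 3 0.15 0.88 0.31 item:2

# Return the elements most similar to a query vector
VSIM products VALUES 3 0.1 0.9 0.3 COUNT 2
```

Under the hood, vector sets use approximate nearest-neighbor search (HNSW), so results are fast but not guaranteed exact.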
KeyDB carved out a specific niche:
- Active-active multi-master replication — write to any node, changes propagate everywhere
- FLASH storage tiering — hot data in RAM, warm data on NVMe, massive cost savings for large datasets
- Subkey expires — set TTLs on individual hash fields (something Redis only recently added)
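The subkey-expiry gap has mostly closed, but the commands differ between engines. A quick sketch, run against a live instance (the key and field names are made up for illustration):

```
# KeyDB: per-subkey TTL via EXPIREMEMBER (seconds)
EXPIREMEMBER session:42 auth_token 300

# Redis 7.4+: HEXPIRE sets TTLs on individual hash fields
HEXPIRE session:42 300 FIELDS 1 auth_token
```

The two aren't wire-compatible with each other, so if per-field TTLs matter to you, this is one place where a migration needs application changes, not just a config swap.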
When to Pick What
Choose Valkey if:
- You need a true open-source, community-driven project
- You want maximum single-node throughput (I/O threading)
- You're on AWS, GCP, or any cloud that's standardizing on Valkey
- You care about long-term license freedom
Choose Redis if:
- You want the unified Redis 8 experience (JSON, Search, TimeSeries built-in)
- You're already invested in Redis Stack and Redis Insight tooling
- Vector search is a core requirement and you want it built-in
- AGPLv3 copyleft obligations aren't an issue for your use case
Choose KeyDB if:
- Multi-master active-active replication is non-negotiable
- You have datasets larger than RAM and need FLASH tiering
- You want multithreaded performance without clustering overhead
- You're running workloads where write availability matters more than strict consistency
Deploy on Elestio
All three are available as managed services on Elestio. Skip the infrastructure headaches — automated backups, SSL, monitoring, and updates handled for you. Starting at ~$16/month on a 2-CPU / 4 GB RAM instance (Netcup).
Pick your engine, select a provider and region (2 CPU / 4 GB RAM minimum), and click Deploy — you'll have a running instance in under 3 minutes.
Custom domain and SSL? Follow the official Elestio docs for automated setup.
Migration: It's Easier Than You Think
Switching between these three is straightforward since they all speak the Redis protocol. Your existing client libraries, redis-cli commands, and RDB snapshots work across all three. The main gotchas:
- Redis → Valkey: Near drop-in. If you use Redis 8 built-in features (JSON, Search, TimeSeries), check for Valkey equivalents like valkey-search
- Redis → KeyDB: Compatible at the protocol level. Multi-master config needs explicit setup
- Valkey/KeyDB → Redis: Works, but new features (RDMA, FLASH tiering) won't carry over
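For a live cutover, the standard replication trick works between protocol-compatible engines: point the new node at the old one, wait for sync, then promote. A minimal sketch, assuming the existing Redis runs at old-host:6379 and the new Valkey node is local (hostnames are placeholders, and replication across forks generally works for core data types — verify RDB version compatibility for your specific versions first):

```shell
# On the new Valkey node: start replicating from the existing Redis instance
valkey-cli REPLICAOF old-host 6379

# Watch master_link_status until it reports "up" and the offsets converge
valkey-cli INFO replication

# Cut over: promote the new node to primary, then repoint your clients
valkey-cli REPLICAOF NO ONE
```

If downtime is acceptable, the even simpler path is copying the RDB snapshot: shut down the source cleanly, move dump.rdb to the new node's data directory, and start it up.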
Troubleshooting Common Issues
"My app broke after switching" — 99% of the time it's a module dependency. Check if you're using RediSearch, RedisJSON, or RedisTimeSeries and find the equivalent for your target engine.
"Performance dropped after migration" — Tune your threading config. Valkey needs io-threads set explicitly; KeyDB needs server-threads. Default configs are conservative.
"Replication lag is high" — If you switched to KeyDB's multi-master mode, check network latency between nodes. Active-active replication is latency-sensitive.
The Bottom Line
The Redis licensing saga — from BSD to SSPL to AGPLv3 — reshaped the in-memory database landscape. Valkey is winning the momentum war, backed by major cloud providers and already outperforming Redis on raw throughput. Redis 8 fired back with a unified core, AGPLv3 licensing, and major performance gains. KeyDB fills a specific niche for multi-master and FLASH storage workloads.
Pick the one that matches your constraints. And if you'd rather skip the ops work entirely, deploy Valkey, Redis, or KeyDB on Elestio and get back to building.
Thanks for reading! See you in the next one.