One of our founders, Peter Zaitsev, took a look at Redis (REmote DIctionary Server) when it first emerged in 2009 (https://www.percona.com/blog/looking-at-redis/), which reminded me how far the project has come in sixteen years, evolving from a simple key-value store into a multi-model platform that includes vector search. This article covers that evolution in four distinct eras.
Timeline: v1.0 (2010) · v2.6 (2012) · v3.0 (2015) · v5.0 (2018) · v6.0 (2020) · v7.0 (2022) · v8.4 (2025)
Note: In 2024, Redis changed to a source-available license (and to AGPL in 2025), prompting the creation of Valkey—an open-source fork of v7.2, maintaining the BSD license. This article focuses on Redis’s technical evolution; both projects remain largely compatible at the time of writing.
Foundation: The Data Structure Server (v1.0 – v2.8)
Core Primitives (v1.0 – v1.2)
Redis 1.0 (2010) introduced the foundational data structures: Strings, Lists, and Sets. Unlike Memcached’s opaque blob storage, Redis let clients manipulate these structures directly on the server. Strings were binary-safe, holding up to 512MB of arbitrary data. Lists were backed by doubly linked lists, giving O(1) push/pop at either end and making Redis ideal for task queues.
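As a sketch of the queue pattern (assuming the redis-py client and a local server on the default port; queue and job names are invented):

```python
import redis

# Connect to a local Redis server; decode_responses returns str instead of bytes.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Producer: push jobs onto the left of the list in O(1).
r.lpush("jobs", "send-email:42", "resize-image:7")

# Worker: block until a job is available, popping from the right in O(1).
item = r.brpop("jobs", timeout=5)
if item:
    queue, job = item
    print(f"processing {job} from {queue}")
```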
Persistence came in two forms: RDB snapshots for point-in-time backups, and AOF (Append-Only File) introduced in v1.1, which logged every write operation. This dual model allowed users to balance performance against durability.
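A minimal sketch of adjusting those settings at runtime with redis-py (production deployments would normally configure this in redis.conf):

```python
import redis

r = redis.Redis(decode_responses=True)

r.config_set("appendonly", "yes")        # enable the AOF log
r.config_set("appendfsync", "everysec")  # fsync once per second: a durability/performance middle ground
r.bgsave()                               # request an RDB snapshot in the background
print(r.config_get("appendonly"), r.lastsave())
```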
Expanded Functionality (v2.0 – v2.8)
Redis 2.0 added Hashes (field-value pairs within a key), joining the Sorted Set introduced in v1.2. The Sorted Set is arguably the most innovative structure, combining set uniqueness with per-member numerical scores for O(log N) insertions and range queries. This enabled real-time leaderboards and sliding-window rate limiters.
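For example, a leaderboard sketch with redis-py (key and member names are illustrative):

```python
import redis

r = redis.Redis(decode_responses=True)

r.zadd("leaderboard", {"alice": 3120, "bob": 2890, "carol": 3475})
r.zincrby("leaderboard", 150, "bob")  # bob earns more points

# Top three players, highest score first.
print(r.zrevrange("leaderboard", 0, 2, withscores=True))
```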
Version 2.2 introduced memory-efficient encodings. Small hashes, lists, and sorted sets were stored as “ziplists”—compact blocks of contiguous memory—instead of pointer-heavy hash tables and linked lists, significantly reducing overhead.
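A quick way to see this in practice is the OBJECT ENCODING command; note that modern servers report “listpack”, the successor to the original ziplist encoding (sketch assumes redis-py):

```python
import redis

r = redis.Redis(decode_responses=True)

r.hset("user:1", mapping={"name": "alice", "plan": "free"})
print(r.object("encoding", "user:1"))  # compact encoding, e.g. "listpack" ("ziplist" on older servers)

r.hset("big", mapping={f"field{i}": i for i in range(1000)})
print(r.object("encoding", "big"))     # converts to "hashtable" once size limits are exceeded
```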
A major change came with v2.6 (2012), which added Lua scripting. Developers could now execute complex operations atomically on the server, eliminating network round-trips for get-check-set patterns. This version also standardized RESP2, the REdis Serialization Protocol, designed to be human-readable yet efficient.
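A sketch of the pattern as a simple fixed-window rate limiter (the script, key names, and limits are invented for illustration; assumes redis-py):

```python
import redis

r = redis.Redis(decode_responses=True)

# Runs atomically on the server: read the counter, check the limit, then update.
LUA = """
local current = tonumber(redis.call('GET', KEYS[1]) or '0')
if current >= tonumber(ARGV[1]) then
  return 0                              -- over the limit, reject
end
redis.call('INCR', KEYS[1])
redis.call('EXPIRE', KEYS[1], ARGV[2])
return 1                                -- request allowed
"""

allow_request = r.register_script(LUA)  # uses EVAL/EVALSHA under the hood
print(allow_request(keys=["ratelimit:client42"], args=[100, 60]))  # limit: 100 requests per 60s window
```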
Scaling: Distributed Systems (v3.0 – v5.0)
Redis Cluster (v3.0)
Redis 3.0 (2015) delivered horizontal scaling through Redis Cluster. Instead of consistent hashing, it used 16,384 “hash slots” with each key assigned via: slot = CRC16(key) mod 16384. Nodes/shards owned subsets of these slots, allowing Redis to partition datasets across multiple machines and continue operating if some nodes failed.
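A sketch of the slot calculation in plain Python (CRC16, XMODEM variant, as used by Redis Cluster; hash-tag handling of {...} substrings is omitted for brevity):

```python
# Compute a key's cluster hash slot: CRC16 (poly 0x1021, init 0) mod 16384.

def crc16_xmodem(data: bytes) -> int:
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    return crc16_xmodem(key.encode()) % 16384

print(hash_slot("user:1000"))  # the node that owns this slot serves the key
```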
Version 3.2 added “Protected Mode” as a security measure to address the problem of exposed Redis instances. If Redis was started with the default configuration and without a password, it would only respond to localhost queries. This version also introduced Geospatial indexes using Sorted Sets and Geohashing.
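A geospatial sketch using the v3.2-era commands (coordinates and member names are made up; assumes redis-py):

```python
import redis

r = redis.Redis(decode_responses=True)

# Store driver positions as longitude/latitude pairs.
r.execute_command("GEOADD", "drivers", 13.361389, 38.115556, "driver:1")
r.execute_command("GEOADD", "drivers", 13.583333, 37.316667, "driver:2")

# Drivers within 200 km of a point, with distances, nearest first.
print(r.execute_command("GEORADIUS", "drivers", 15, 37, 200, "km", "WITHDIST", "ASC"))
```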
Extensibility and Streaming (v4.0 – v5.0)
Redis 4.0 (2017) introduced the Module API, enabling extensions like RediSearch, RedisJSON, and RedisGraph. These modules could implement new data types and commands with native performance. This version also brought “lazy freeing” (the UNLINK command), which deletes large keys in a background thread so the operation no longer blocks the event loop.
Redis 5.0 (2018) added Streams—append-only logs modeled after Kafka. The defining feature was “Consumer Groups,” which let multiple clients collaboratively process event data, with acknowledgment mechanisms and the ability to reclaim messages from failed consumers.
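A consumer-group sketch (stream, group, and consumer names are invented; assumes redis-py):

```python
import redis

r = redis.Redis(decode_responses=True)

# Producer appends an entry to the stream.
r.xadd("orders", {"sku": "A-100", "qty": "2"})

# Create the consumer group once; mkstream creates the stream if it is missing.
try:
    r.xgroup_create("orders", "billing", id="0", mkstream=True)
except redis.ResponseError:
    pass  # group already exists

# Each worker reads entries assigned to it, processes them, then acknowledges.
for stream, messages in r.xreadgroup("billing", "worker-1", {"orders": ">"}, count=10, block=1000):
    for msg_id, fields in messages:
        print("processing", msg_id, fields)
        r.xack("orders", "billing", msg_id)
```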
Enterprise Needs: Security and Performance (v6.0 – v7.4)
Access Control and Multi-threading (v6.0)
Redis 6.0 (2020) moved beyond the single AUTH password to Access Control Lists (ACLs), allowing administrators to define users with granular permissions on specific commands or key patterns. Native TLS support brought encryption for all traffic.
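A sketch of defining a restricted user (the user name, password, and key pattern are invented; assumes redis-py):

```python
import redis

r = redis.Redis(decode_responses=True)

# A service account limited to GET/SET/DEL on keys matching cache:*.
r.execute_command(
    "ACL", "SETUSER", "cache-svc",
    "on", ">s3cret",         # enable the user and set a password
    "~cache:*",              # restrict to keys matching cache:*
    "+get", "+set", "+del",  # allow only these commands
)
print(r.execute_command("ACL", "GETUSER", "cache-svc"))
```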
Version 6.0 also increased performance through multi-threaded I/O. While command execution remained single-threaded (preserving atomicity), reading from sockets and formatting responses moved to background threads. This addressed the I/O bottleneck that emerged in high-concurrency environments.
RESP3 Protocol
RESP3 improved client capabilities by introducing native types for Maps, Sets, and Doubles. In RESP2, complex types are returned as simple arrays, which means clients need to interpret results based on command context. RESP3 also added “Push” types for out-of-band notifications, enabling client-side caching where Redis notifies clients when cached keys are modified.
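A hedged sketch: recent redis-py releases accept a protocol parameter to negotiate RESP3, and CLIENT TRACKING asks the server to push invalidation messages for client-side caching:

```python
import redis

# protocol=3 negotiates RESP3 via the HELLO command (supported in recent redis-py releases).
r = redis.Redis(protocol=3, decode_responses=True)

r.execute_command("CLIENT", "TRACKING", "ON")  # server will push invalidations for keys we read
r.set("config:flag", "on")
print(r.get("config:flag"))  # if another client changes this key, Redis notifies us out-of-band
```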
Architectural Improvements (v7.0)
Redis 7.0 (2022) introduced “Redis Functions,” evolving Lua scripting into first-class database elements loaded once and callable by any client. This decoupled server-side logic from application code.
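A sketch of loading and calling a function (library name, function name, and logic are invented; assumes redis-py):

```python
import redis

r = redis.Redis(decode_responses=True)

# A tiny Lua library: increment a counter but cap it at a maximum value.
LIB = """#!lua name=mylib
redis.register_function('incr_and_cap', function(keys, args)
  local v = redis.call('INCR', keys[1])
  if v > tonumber(args[1]) then
    redis.call('SET', keys[1], args[1])
    return tonumber(args[1])
  end
  return v
end)
"""

r.execute_command("FUNCTION", "LOAD", "REPLACE", LIB)

# Any client can now invoke the function by name, without shipping the script itself.
print(r.execute_command("FCALL", "incr_and_cap", 1, "counter:visits", 100))
```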
This version fundamentally changed AOF persistence with Multi-part AOF (MP-AOF). Previously, AOF rewriting required a rewrite buffer to capture concurrent writes, causing memory spikes. MP-AOF split the AOF into a “base” file (snapshot) and multiple “incremental” files tracked by a manifest, eliminating the rewrite buffer.
| Feature | Version | Impact |
|---------|---------|--------|
| Redis Cluster | 3.0 | Horizontal scaling via hash slots |
| Lua Scripting | 2.6 | Atomic server-side operations |
| Modules API | 4.0 | Extensible data types |
| ACLs | 6.0 | Granular security controls |
| Multi-threaded I/O | 6.0 | Background I/O processing |
| MP-AOF | 7.0 | Eliminated rewrite buffer overhead |
AI: The Multi-Model Platform (v8.0 – v8.4)
The Converged Platform (v8.0)
Redis 8.0 integrated the previously separate “Redis Stack” modules into the core, transforming Redis into a multi-model database. The “Redis Query Engine” – evolved from RediSearch – enabled secondary indexing, full-text search, and vector similarity search in one system (a combined sketch follows the list below).
New integrated data structures:
- JSON: Native document storage with JSONPath
- TimeSeries: Optimized timestamped data storage
- Vector Set: High-dimensional data for AI semantic search
- Probabilistic Structures: Bloom Filters, Count-min sketch, Top-K
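A combined sketch of JSON storage plus vector search through the Redis Query Engine (the index name, field names, and tiny 4-dimensional embeddings are invented; real embeddings come from a model and have far more dimensions):

```python
import struct
import redis

r = redis.Redis()  # keep binary responses; the query vector is a raw FLOAT32 blob

# Store a JSON document whose array field will be indexed as a vector.
r.execute_command("JSON.SET", "doc:1", "$",
                  '{"title": "intro to redis", "embedding": [0.1, 0.2, 0.3, 0.4]}')

# Index doc:* keys: full-text on the title, FLAT vector index on the embedding.
r.execute_command(
    "FT.CREATE", "idx:docs", "ON", "JSON", "PREFIX", "1", "doc:",
    "SCHEMA",
    "$.title", "AS", "title", "TEXT",
    "$.embedding", "AS", "embedding", "VECTOR", "FLAT", "6",
    "TYPE", "FLOAT32", "DIM", "4", "DISTANCE_METRIC", "COSINE",
)

# K-nearest-neighbour query: the three documents closest to the query vector.
query_vec = struct.pack("4f", 0.1, 0.2, 0.3, 0.35)
print(r.execute_command(
    "FT.SEARCH", "idx:docs", "*=>[KNN 3 @embedding $vec AS score]",
    "PARAMS", "2", "vec", query_vec, "DIALECT", "2",
))
```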
Performance improvements reached up to 87% lower command latency and 2x higher throughput via optimized command paths and asynchronous I/O threading. Replication became roughly 18% faster through simultaneous streaming of the base snapshot and incremental changes.
Threading Evolution
The journey from single-threaded to multi-threaded represents careful architectural evolution:
v1.0 – v5.0: main thread handled everything—socket reads, protocol parsing, command execution, response formatting, and socket writes.
v6.0: I/O threads handled socket operations and protocol formatting, while the main thread executed commands atomically.
v8.0: incremental improvements to I/O threading.
v8.4: I/O threads assigned to specific clients handle entire read/parse cycles. Main thread processes batches of parsed queries and generates replies, which I/O threads write back. This delivers up to 112% throughput improvement on 8-core systems.
Recent Innovations (v8.2 – v8.4)
Redis 8.2 introduced vector compression (BF16 and FP16 types), reducing memory footprint for AI embeddings. The CLUSTER SLOT-STATS command provided per-slot metrics for CPU, network, and key count.
Redis 8.4 added the FT.HYBRID command for “hybrid search”—combining full-text keywords with semantic vector similarity in a single query. JSON array memory efficiency improved up to 92% through “inlining” numeric values and short strings.
Key Trends and Implications
Beyond caching: Redis’s evolution shows that high-performance in-memory systems are expected to do far more than simple caching. As RAM became cheaper, users demanded sophisticated query capabilities.
Developer experience as a strategy: Redis’s success came from mapping programming language data structures (Lists, Sets, Hashes) directly to the database. The integration of JSON and Vector Sets continues this pattern for web development and AI applications.
Single-threading limitation: while single-threading simplified Redis and provided deterministic behavior, it eventually hit performance limits on modern multi-core CPUs. The careful threading evolution—offloading everything except memory mutation—shows how to modernize architecture while preserving fundamental guarantees like atomicity.
Conclusion
Redis evolved from a 2009 prototype to the world’s most popular in-memory data platform through the pursuit of “data locality” – storing data in structures that fit the use case and executing logic alongside that data. Sixteen years of development have brought steady gains in performance, durability, and scalability. The addition of Vector Sets and hybrid search should preserve its much-loved-by-developers status and keep it relevant for new use cases.