Scaling Your Cache: A Step-by-Step Guide to Setting Up Valkey Replication

April 23, 2026
Author
Arunjith Aravindan

In today's open-source data landscape, Valkey has emerged as a prominent player. Born as a Linux Foundation-backed, fully open-source fork of Redis (following Redis's licensing changes), Valkey is a high-performance, in-memory key-value data store.

Whether Valkey is deployed as a primary database, an ephemeral cache, or a rapid message broker, a single node is rarely sufficient for production workloads, as it creates a single point of failure. Ensuring high availability and scaling out read operations requires replication.

This comprehensive guide explores how to configure a Primary-Replica (Master-Slave) replication topology in Valkey, detailing its underlying mechanics and the verification process.

How Valkey Replication Works

Valkey's replication is asynchronous and non-blocking. When a replica connects to a primary node, it initiates a synchronization process.

Initially, the primary creates a snapshot of its entire dataset (an RDB file) and sends it to the replica. Once this initial full sync is complete, the primary continuously streams a log of all new write operations to the replica. Because this happens asynchronously, the primary node does not wait for the replica to acknowledge writes, so applications see virtually no added write latency from the replication process.

Why Use Replication?

Before diving into the configuration commands, it is important to understand the concrete benefits of this architecture:

  • Data Redundancy: Replication maintains a near real-time copy of the data on replicas. However, because replication is asynchronous, there may be a small delay, and recent writes might not be fully replicated at the moment of a primary failure. For applications requiring stronger durability guarantees, the WAIT command can be used to ensure that writes are acknowledged by one or more replicas.
  • Read Scaling: Heavy read operations (like GET or LRANGE commands) can be offloaded to replicas. This frees up the primary node to dedicate its CPU and network bandwidth to handling write operations efficiently.
  • High Availability: When paired with Valkey Sentinel or a cluster manager, replication forms the foundational layer for automatic failover.

Prerequisites

The following components are required:

  • Two servers, virtual machines, or containers with Valkey installed.
  • Network connectivity: The replica must be able to reach the primary on its Valkey port. Ensure firewalls (UFW, iptables) or cloud security groups (AWS, GCP) allow TCP traffic on port 6379 between the specified IPs.

This tutorial uses the following hypothetical IP addresses:

  • Primary Node: 172.31.32.27
  • Replica Node: 172.31.37.55
  • Valkey Port: 6379 (the default)

Step 1: Configure the Primary Node

By default, Valkey binds only to localhost (127.0.0.1), meaning it rejects connections from outside servers. This must be adjusted to allow replica connections.

  1. Open the Valkey configuration file on the Primary Node (typically located at /etc/valkey/valkey.conf, depending on the installation method).

  2. Find the bind directive. Update it to listen on both localhost and the internal network IP. Note: While * can be used to listen on all interfaces, specifying the exact IP provides better security.

bind 127.0.0.1 172.31.32.27

  3. Configure ACLs (Access Control Lists) for Security and Replication:

While older versions of Redis relied on the requirepass directive, Valkey supports modern ACLs. Securing the default user and creating a dedicated, restricted user specifically for replication is highly recommended; this ensures the replica has only the permissions necessary to sync data. Add the following ACL rules:
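For example, the following lines in valkey.conf secure the default user and create a restricted replication user. The usernames and passwords here are placeholders — substitute strong secrets of your own:

```shell
# In /etc/valkey/valkey.conf on the primary:
# Protect the default user with a password (placeholder shown)
user default on >aStrongDefaultPassword ~* &* +@all
# Dedicated replication user limited to the commands replication requires
user repl_user on >aStrongReplPassword +psync +replconf +ping
```

Restricting repl_user to PSYNC, REPLCONF, and PING means that even if its credentials leak, the account cannot read or modify application data.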

  4. Restart the Valkey service to apply the changes:
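On most systemd-based installations the service is named valkey or valkey-server (it varies by packaging; check with systemctl list-units if unsure):

```shell
sudo systemctl restart valkey
```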

Step 2: Configure the Replica Node

Now, let’s move over to the Replica Node (172.31.37.55). There are two ways to configure a replica: dynamically via the CLI using the REPLICAOF command (which resets upon reboot) or permanently via the configuration file. For production, we will use the permanent method.

  1. Open the Valkey configuration file on the Replica.
  2. Locate the replicaof directive (in older versions or Redis compat mode, this might be slaveof). Uncomment it and add the Primary’s IP and port:
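Using the hypothetical addresses from this tutorial:

```shell
replicaof 172.31.32.27 6379
```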

  3. Authenticate using the Replication ACL: Because a dedicated repl_user was created on the primary node, the replica must be configured to use those specific credentials. Find the masteruser and masterauth directives and update them:
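Assuming the repl_user and placeholder password created on the primary in Step 1:

```shell
masteruser repl_user
masterauth aStrongReplPassword
```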

  4. (Optional but recommended) Ensure the replica is strictly read-only by checking this directive:
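This is the default setting, but it is worth confirming:

```shell
replica-read-only yes
```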

  5. Restart the Valkey service on the replica:
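As on the primary (again, the service name may differ depending on how Valkey was installed):

```shell
sudo systemctl restart valkey
```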

Step 3: Verify the Replication

The replica should now be connecting to the primary and synchronizing the dataset. Let’s verify the connection and test the data flow.

Checking Replication Status

Log in to the Primary Node using the Valkey CLI.
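For example, assuming the default user's placeholder password from Step 1:

```shell
valkey-cli -h 172.31.32.27 -p 6379 -a 'aStrongDefaultPassword'
```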

Pro tip: The CLI generates a warning when passing a password with -a, as it appears in bash history. For production systems, consider utilizing the VALKEYCLI_AUTH environment variable instead. Note for Valkey 7.2.x users: As the first release following the Redis fork, the system still uses the legacy REDISCLI_AUTH environment variable name.

Once inside the prompt, type the INFO replication command:
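The output should look similar to the following (the offset values here are illustrative):

```shell
172.31.32.27:6379> INFO replication
# Replication
role:master
connected_slaves:1
slave0:ip=172.31.37.55,port=6379,state=online,offset=5898,lag=0
master_repl_offset:5898
```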

Warning: Using a password with ‘-a’ or ‘-u’ option on the command line interface may not be safe.

Key Metrics to Watch:

  • role: Confirms this node is the master/primary.
  • state=online: The replica is fully synced and streaming.
  • lag=0: The replica is up to date. If this number climbs, the replica is struggling to keep up with the primary’s write volume.
  • offset: Matches the master_repl_offset, confirming data parity.

The Write Test

To double-check the data flow, set a key on the primary:
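Any key will do; replication_test here is just an example name:

```shell
172.31.32.27:6379> SET replication_test "Hello from the primary"
OK
```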

Then, hop over to the Replica’s CLI and read the key:
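If replication is healthy, the key set on the primary (replication_test in this sketch) appears on the replica almost instantly:

```shell
172.31.37.55:6379> GET replication_test
"Hello from the primary"
```

As a bonus check, attempting a SET on the replica should fail with a READONLY error, confirming the replica-read-only setting.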

Wrapping Up

With just a few configuration changes, you’ve transformed a single-node Valkey setup into a scalable, production-ready replication topology. You now have data redundancy, improved read performance, and a solid foundation for growth.

But there’s one missing piece—automatic failover. If the primary node goes down, the replicas won’t take over automatically. That’s where the next evolution comes in.

In the upcoming guide (Achieving High Availability with Valkey Sentinel), I will dive into Valkey Sentinel and show how to turn this replicated setup into a fully self-healing, highly available system.

MySQL, PostgreSQL, InnoDB, MariaDB, MongoDB and Kubernetes are trademarks for their respective owners.
© 2026 Percona All Rights Reserved