Modern applications often rely on multiple services to provide fast, reliable, and scalable responses. A common and highly effective architecture involves an application, a persistent database (like MySQL), and a high-speed cache service (like Valkey).
In this guide, we’ll explore how to integrate these components effectively using Python to dramatically improve your application’s performance.
In our example, the setup looks like this:
```
Application (172.31.68.72)
   |
   |-- MySQL client  --> Percona Server (172.31.67.228)
   |
   |-- Valkey client --> Valkey Server (172.31.22.118)
```
- **The Application:** The brain of the system. It decides when to query the database and when to rely on cached data.
- **Valkey:** An in-memory key-value store optimized for extremely fast reads and writes.
- **MySQL:** The durable storage layer. Queries here are slower but permanent.
The Mental Model: Application → Cache (Fast) → DB (Slow, Fallback). This approach reduces database load and dramatically improves response time.
Before diving into code, imagine the flow: the application checks Valkey first; on a hit it returns the cached value immediately, and on a miss it queries MySQL and writes the result back to the cache for subsequent requests.

Key point: this pattern is commonly called Cache-Aside, or Lazy Loading.
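The pattern above can be sketched in a few lines. This is a minimal, self-contained illustration only: a plain dict stands in for Valkey and a hard-coded table stands in for MySQL, so none of the names here belong to the real demo that follows.

```python
# Cache-aside sketch: a dict stands in for Valkey, a hard-coded
# table stands in for MySQL. All names here are illustrative.
cache = {}                                   # fake in-memory cache
db = {101: {"id": 101, "name": "Arun"}}      # fake database table

def get_user(user_id):
    key = f"user:{user_id}"
    if key in cache:                         # 1. try the cache first
        return cache[key], "HIT"
    row = db.get(user_id)                    # 2. cache miss: query the "DB"
    cache[key] = row                         # 3. populate the cache for next time
    return row, "MISS"

print(get_user(101))  # first call: MISS (reads the DB)
print(get_user(101))  # second call: HIT (served from cache)
```

The same three steps reappear in the full `cache_test.py` script later in this guide, just with real MySQL and Valkey clients behind them.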
Note on Client Library Selection: Valkey is fully Redis-compatible. In this demo, I am using the standard Python redis client to demonstrate portability and ease of migration—existing Redis-based applications can work with Valkey without code changes.
Alternatively, a native Valkey Python client library is also available for teams that prefer a Valkey-first ecosystem. Using the native library may provide closer alignment with Valkey-specific features and future enhancements.
⚠️ Demo Note: This example is intentionally simplified. It does not include production-grade features such as connection pooling, comprehensive error handling, or fallback strategies if Valkey is unavailable.
Fetching user information directly from the database for every request is inefficient. By introducing Valkey, we can cache these results to speed up subsequent requests.
First, we create a test database and a user table.
```sql
CREATE DATABASE testdb;
USE testdb;

CREATE TABLE users (
    id INT PRIMARY KEY,
    name VARCHAR(100),
    email VARCHAR(100)
);

CREATE USER 'appuser'@'%' IDENTIFIED BY '***';
GRANT SELECT, INSERT, UPDATE, DELETE ON testdb.* TO 'appuser'@'%';

INSERT INTO users VALUES (101, 'Arun', '[email protected]');
SELECT * FROM users WHERE id = 101;
```
On the Valkey server: Confirms that the valkey-cli can communicate with the valkey-server; the PONG response indicates the server is running and reachable.
```
ubuntu@ArunValkey:~$ valkey-cli ping
PONG
ubuntu@ArunValkey:~$
```
Shows that the valkey-server process is running under user valkey with PID 26075, listening on all interfaces (0.0.0.0) at port 6379.
```
$ ps -ef | grep valkey-server
valkey  26075  1  0 17:57 ?  00:00:00 /usr/bin/valkey-server 0.0.0.0:6379
```
On the application (remote client):
Verifies that the Python client can successfully connect to the valkey server at 172.31.22.118:6379; r.ping() returning True confirms the server is reachable and responding.
```
(venv) root@App:/home/ubuntu# python3 -c "import redis; r=redis.Redis(host='172.31.22.118', port=6379); print(r.ping())"
True
(venv) root@App:/home/ubuntu#
```
This snippet demonstrates a Python caching workflow using a virtual environment (venv). After activating the venv, the cache_test.py script establishes a connection to a MySQL database and a Valkey-based caching service. The get_user function first attempts to retrieve user data from the cache; if the data is missing (cache miss), it queries the database, then stores the result in the cache with a 5-minute expiration. The script includes test calls that fetch the same user three times, illustrating cache hits and misses, along with timing information to show the performance benefit of caching. Overall, it showcases an efficient pattern for reducing database load by leveraging a fast in-memory cache.
```
root@App:/home/ubuntu# source venv/bin/activate
(venv) root@App:/home/ubuntu# cat cache_test.py
import mysql.connector
import redis
import json
import time

# ---- MySQL connection ----
db = mysql.connector.connect(
    host="172.31.67.228",
    user="appuser",
    password="***",
    database="testdb"
)
cursor = db.cursor(dictionary=True)

# ---- Valkey connection ----
cache = redis.Redis(
    host="172.31.22.118",
    port=6379,
    decode_responses=True
)

def get_user(user_id):
    key = f"user:{user_id}"

    # 1️⃣ Try cache
    cached_data = cache.get(key)
    if cached_data:
        print("✅ VALKEY HIT")
        return json.loads(cached_data)

    # 2️⃣ Cache miss → query DB
    print("❌ DB HIT")
    cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))
    data = cursor.fetchone()

    # 3️⃣ Store in Valkey (5 minutes)
    cache.setex(key, 300, json.dumps(data))
    return data

# ---- Test calls ----
start = time.time()
print(get_user(101))
print("Time:", time.time() - start)

start = time.time()
print(get_user(101))
print("Time:", time.time() - start)

start = time.time()
print(get_user(101))
print("Time:", time.time() - start)
(venv) root@App:/home/ubuntu#
```
In production systems, proper exception handling and resilience patterns should be implemented.
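One such pattern is to degrade gracefully to the database when the cache is unreachable, so a Valkey outage slows the application down instead of taking it down. The sketch below is stdlib-only: `FailingCache` is a stub standing in for a redis/valkey client whose server is down, and `get_user_resilient` is an illustrative helper, not part of the demo script above (real code would catch `redis.ConnectionError` the same way).

```python
# Sketch: fall back to the database when the cache is unreachable.

class FailingCache:
    """Stub that behaves like a cache client with an unreachable server."""
    def get(self, key):
        raise ConnectionError("cache unreachable")
    def setex(self, key, ttl, value):
        raise ConnectionError("cache unreachable")

def get_user_resilient(cache, db_lookup, user_id):
    key = f"user:{user_id}"
    try:
        cached = cache.get(key)
        if cached is not None:
            return cached
    except ConnectionError:
        pass                       # cache down: fall through to the DB
    row = db_lookup(user_id)       # the DB remains the authoritative source
    try:
        cache.setex(key, 300, row)
    except ConnectionError:
        pass                       # best-effort write-back; DB result is still served
    return row

row = get_user_resilient(FailingCache(), lambda uid: {"id": uid, "name": "Arun"}, 101)
print(row)  # the request still succeeds even though the cache is down
```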
Running the script clearly shows the performance benefit. The first call fetches from the database, while subsequent calls hit the Valkey cache, making responses nearly instantaneous.
```
root@App:/home/ubuntu# source venv/bin/activate
(venv) root@App:/home/ubuntu# python cache_test.py
❌ DB HIT
{'id': 101, 'name': 'Arun', 'email': '[email protected]'}
Time: 0.00559544563293457
✅ VALKEY HIT
{'id': 101, 'name': 'Arun', 'email': '[email protected]'}
Time: 0.0006554126739501953
✅ VALKEY HIT
{'id': 101, 'name': 'Arun', 'email': '[email protected]'}
Time: 0.0008788108825683594
(venv) root@App:/home/ubuntu#
```
Observation:
Caching frequently-accessed data in memory drastically reduces latency and database load, making your application faster and more scalable.
Time-To-Live (TTL) in Valkey is essentially a self-destruct timer for your data. When you set a key with a TTL (like the 300 seconds used in the `cache.setex` call above), Valkey guarantees that the key will be automatically deleted from memory once that time elapses. This mechanism is critical for two reasons: it prevents the cache from filling up indefinitely with old data, and it ensures eventual consistency by forcing the application to re-fetch fresh data from the database after the timer expires.
The following script demonstrates the "window of staleness" inherent in this strategy. When the MySQL database is updated directly in Step 2, the cache is unaware of the change and keeps serving the old email address in Step 4, because the TTL hasn't expired yet (a cache hit). Only after the 5-minute timer runs out in Step 6 is the key evicted, forcing a cache miss that finally pulls the updated email from the database. This illustrates that while TTL ensures data doesn't stay stale forever, it does not guarantee immediate freshness.
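The staleness window can also be reproduced in a few seconds without any servers. In this stdlib-only sketch, a dict of `(value, expires_at)` pairs stands in for Valkey's TTL mechanism, and a 0.2-second TTL replaces the 300-second one so the demo runs instantly; `get_email` and the fake `db` are illustrative names.

```python
import time

# Sketch of the TTL staleness window: a dict with expiry timestamps
# stands in for Valkey. TTL shortened from 300s to 0.2s for the demo.
TTL = 0.2
cache = {}                                   # key -> (value, expires_at)
db = {101: "[email protected]"}

def get_email(user_id):
    key = f"user:{user_id}"
    entry = cache.get(key)
    if entry and entry[1] > time.time():     # unexpired key: cache hit
        return entry[0]
    value = db[user_id]                      # miss or expired: re-fetch from DB
    cache[key] = (value, time.time() + TTL)
    return value

first = get_email(101)                       # DB hit, populates the cache
db[101] = "[email protected]"             # direct DB update, cache unaware
stale = get_email(101)                       # still the OLD email: TTL not expired
time.sleep(TTL + 0.05)
fresh = get_email(101)                       # key expired: fresh value from the DB
print(first, stale, fresh)
```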
```
root@App:/home/ubuntu# cat valkey_ttl_demo.sh
#!/bin/bash
source /home/ubuntu/venv/bin/activate

echo "==================== STEP 1: Initial Run ===================="
python3 ./cache_test.py
echo

echo "==================== STEP 2: Update MySQL ===================="
echo "Updating testdb.users → setting email='[email protected]' where id=101 (Direct DB update, bypassing cache)"
/usr/bin/mysql -h172.31.67.228 \
  -e "UPDATE testdb.users
      SET email='[email protected]'
      WHERE id=101;"
echo

echo "==================== STEP 3: Monitor TTL (2 Minutes) ===================="
for i in {1..2}; do
  echo "Checking TTL (minute $i)..."
  redis-cli -h 172.31.22.118 TTL user:101
  sleep 60
done
echo

echo "==================== STEP 4: Run After 2 Minutes ===================="
python3 ./cache_test.py
echo

echo "==================== STEP 5: Checking TTL after second run ===================="
echo "Checking TTL after second run..."
for i in {3..5}; do
  echo "Checking TTL (minute $i)..."
  redis-cli -h 172.31.22.118 TTL user:101
  sleep 60
done
echo

echo "==================== STEP 6: Final Run ===================="
python3 ./cache_test.py
echo
echo "==================== DEMO COMPLETE ===================="
root@App:/home/ubuntu#
```
```
root@App:/home/ubuntu# ./valkey_ttl_demo.sh
==================== STEP 1: Initial Run ====================
❌ DB HIT
{'id': 101, 'name': 'Arun', 'email': '[email protected]'}
Time: 0.0047206878662109375
✅ VALKEY HIT
{'id': 101, 'name': 'Arun', 'email': '[email protected]'}
Time: 0.0005364418029785156
✅ VALKEY HIT
{'id': 101, 'name': 'Arun', 'email': '[email protected]'}
Time: 0.0005519390106201172

==================== STEP 2: Update MySQL ====================
Updating testdb.users → setting email='[email protected]' where id=101 (Direct DB update, bypassing cache)

==================== STEP 3: Monitor TTL (2 Minutes) ====================
Checking TTL (minute 1)...
(integer) 300
Checking TTL (minute 2)...
(integer) 240
==================== STEP 4: Run After 2 Minutes ====================
✅ VALKEY HIT
{'id': 101, 'name': 'Arun', 'email': '[email protected]'}
Time: 0.0034325122833251953
✅ VALKEY HIT
{'id': 101, 'name': 'Arun', 'email': '[email protected]'}
Time: 0.0006437301635742188
✅ VALKEY HIT
{'id': 101, 'name': 'Arun', 'email': '[email protected]'}
Time: 0.0006194114685058594

==================== STEP 5: Checking TTL after second run ====================
Checking TTL after second run...
Checking TTL (minute 3)...
(integer) 180
Checking TTL (minute 4)...
(integer) 120
Checking TTL (minute 5)...
(integer) 60
==================== STEP 6: Final Run ====================
❌ DB HIT
{'id': 101, 'name': 'Arun', 'email': '[email protected]'}
Time: 0.00496220588684082
✅ VALKEY HIT
{'id': 101, 'name': 'Arun', 'email': '[email protected]'}
Time: 0.0009002685546875
✅ VALKEY HIT
{'id': 101, 'name': 'Arun', 'email': '[email protected]'}
Time: 0.0006501674652099609

==================== DEMO COMPLETE ====================
root@App:/home/ubuntu#
```
Best Practice: In production systems, any data modification should be immediately followed by cache invalidation or refresh of the affected keys. Relying solely on TTL provides eventual consistency but allows a window of stale data. To maintain data correctness, the application layer that modifies the database must also handle cache invalidation.
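A minimal sketch of that write path, again using a plain dict in place of Valkey: the write function updates the database first, then evicts the cached key so the very next read repopulates it with fresh data. `update_user_email` and its fake `db` are illustrative names, not part of the demo scripts above.

```python
# Sketch: invalidate the cached key immediately after a DB write,
# instead of waiting for the TTL to expire.
cache = {}
db = {101: {"id": 101, "email": "[email protected]"}}

def get_user(user_id):
    key = f"user:{user_id}"
    if key not in cache:                              # cache-aside read, as before
        cache[key] = db[user_id]
    return cache[key]

def update_user_email(user_id, email):
    db[user_id] = {**db[user_id], "email": email}     # 1. write to the DB
    cache.pop(f"user:{user_id}", None)                # 2. invalidate the cached copy

get_user(101)                             # warms the cache with the old email
update_user_email(101, "[email protected]")
print(get_user(101)["email"])             # fresh value, no staleness window
```

With a real Valkey client, step 2 would be a `DEL` on the key (`cache.delete(key)` in redis-py); some teams instead overwrite the key with the new value (write-through) to avoid the extra miss.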
Valkey is also ideal for storing session information, where data needs fast retrieval and auto-expiry.
This Python script demonstrates session management using Valkey as a fast in-memory store. It opens a Valkey connection and defines two functions: login generates a unique session ID with uuid, stores the user's ID and role in Valkey with a one-hour expiration, and prints a confirmation; get_session retrieves the session, handling expired or missing sessions gracefully. The test flow creates a session, retrieves it immediately and again after a short delay, then sets a quick-expiring test session to confirm automatic expiration, illustrating efficient temporary storage in a cache.
```
(venv) root@App:/home/ubuntu# cat session_test.py
import redis
import json
import uuid
import time

# ---- Connect to Valkey (remote server) ----
cache = redis.Redis(
    host="172.31.22.118",
    port=6379,
    decode_responses=True
)

# ---- Function to simulate login and create session ----
def login(user_id, role):
    session_id = str(uuid.uuid4())
    session_key = f"session:{session_id}"

    session_data = {
        "user_id": user_id,
        "role": role
    }

    # Store in Valkey for 1 hour.
    # Note: an alternative is ValkeyJSON, which enables native JSON storage and
    # provides benefits such as field-level updates, greater flexibility, and
    # simplified data handling without manual serialization.
    cache.setex(session_key, 3600, json.dumps(session_data))
    print(f"✅ Session created: {session_id}")
    return session_id

# ---- Function to get session data ----
def get_session(session_id):
    session_key = f"session:{session_id}"
    data = cache.get(session_key)

    if not data:
        print("❌ Session expired or not found")
        return None

    session = json.loads(data)
    print("✅ Session found:", session)
    return session

# ---- Test the flow ----
if __name__ == "__main__":
    # Step 1: Create a session
    session_id = login(101, "admin")

    # Step 2: Read it immediately
    get_session(session_id)

    # Step 3: Simulate another read after 2 seconds
    time.sleep(2)
    get_session(session_id)

    # Step 4: Quick expiry demo
    print("\n⏳ Testing short expiry...")
    cache.setex("session:test", 5, json.dumps({"user_id": 999}))
    print("Session 'test' set for 5 seconds")
    time.sleep(6)
    if cache.get("session:test") is None:
        print("✅ 'test' session expired as expected")
    else:
        print("❌ 'test' session still exists")
(venv) root@App:/home/ubuntu#
```
Sessions are fast to read/write, and expiry ensures memory is not wasted on stale data.
Session caching reduces database hits for authentication and user-specific data, improving both speed and scalability.
Observing Session Caching Results
Running the session caching script demonstrates how Valkey efficiently handles session storage and expiry:
```
(venv) root@App:/home/ubuntu# python3 session_test.py
✅ Session created: c4bd8341-698a-4ea2-82a3-95370cd3011a
✅ Session found: {'user_id': 101, 'role': 'admin'}
✅ Session found: {'user_id': 101, 'role': 'admin'}

⏳ Testing short expiry...
Session 'test' set for 5 seconds
✅ 'test' session expired as expected
(venv) root@App:/home/ubuntu#
```
The output shows that sessions are quickly stored and retrieved from Valkey, and short-lived sessions expire automatically, preventing stale data accumulation. This highlights Valkey’s effectiveness for fast, temporary storage like authentication sessions.
Using Valkey as a caching layer significantly improves application performance by reducing direct database reads and accelerating response times. By implementing the Cache-Aside pattern, the application intelligently determines when to retrieve data from cache and when to fall back to MySQL, ensuring both performance and correctness. Session caching further demonstrates how short-lived, temporary data can be efficiently managed with automatic expiration (TTL), reducing unnecessary persistence overhead.
This architecture highlights a clear separation of responsibilities: MySQL remains the durable system of record, Valkey serves hot data from memory, and the application layer decides which to consult for each request.
Discover more: A Practical Guide to Valkey: Configuration, Best Practices, and Production Tuning