This blog was first authored by Ibrar Ahmed in 2020. We’ve updated it in 2025 for clarity and relevance, reflecting current practices while honoring the original author’s perspective.
If you’re running PostgreSQL and suddenly hit an error you don’t recognize, you’re in good company. Most issues aren’t actually bugs; they’re things like permissions problems, connection failures, or a full disk that catches you off guard. The tricky part is figuring out what the message actually means and how to fix it fast.
This post walks through 10 of the most common PostgreSQL errors, what causes them, and how to solve them. Whether it’s your first time seeing one of these or your fiftieth, you’ll get straight answers to help you move forward.
Error 1: Connection refused (‘Is the server running?’)
This one usually shows up when PostgreSQL isn’t running at all. But it can also happen when the server is running, just not where or how your connection expects.
If you’re seeing something like this:
$ psql postgres
psql: error: could not connect to server: could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
…it typically means one of two things: the server is down or your client is looking in the wrong place. Start by checking whether the server process is running. The exact method depends on your OS and installation, but if you’re using systemd:
sudo service postgresql-12 status
Redirecting to /bin/systemctl status postgresql-12.service
postgresql-12.service - PostgreSQL 12 database server
   Loaded: loaded (/usr/lib/systemd/system/postgresql-12.service; disabled; vendor preset: disabled)
   Active: active (running) since Sun 2020-05-31 23:55:39 UTC; 8s ago
     Docs: https://www.postgresql.org/docs/12/static/
  Process: 32204 ExecStartPre=/usr/pgsql-12/bin/postgresql-12-check-db-dir ${PGDATA} (code=exited, status=0/SUCCESS)
 Main PID: 32209 (postmaster)
   CGroup: /system.slice/postgresql-12.service
           ├─32209 /usr/pgsql-12/bin/postmaster -D /var/lib/pgsql/12/data/
           ├─32211 postgres: logger
           ├─32213 postgres: checkpointer
           ├─32214 postgres: background writer
           ├─32215 postgres: walwriter
           ├─32216 postgres: autovacuum launcher
           ├─32217 postgres: stats collector
           └─32218 postgres: logical replication launcher
If the server is running, the next thing to check is the port.
💡 Tip: PostgreSQL defaults to 5432, but if your config uses a different one, your connection will fail unless you specify it.
You can check the active port like this:
sudo service postgresql-12 status | grep port
Redirecting to /bin/systemctl status postgresql-12.service
LOG:  listening on IPv6 address "::1", port 5432
LOG:  listening on IPv4 address "127.0.0.1", port 5432
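If the server is up but listening somewhere other than the default, tell the client explicitly where to connect. A minimal sketch, assuming a hypothetical server listening on port 5433 (adjust host, port, and user to your setup):

# connect over TCP to a non-default port; host, port, and user are illustrative
psql -h 127.0.0.1 -p 5433 -U postgres -d postgres

If you rely on the Unix socket instead, make sure the socket directory your client uses matches the server’s unix_socket_directories setting.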
Error 2: Initdb cannot be run as root
If you try to run initdb as the root user, PostgreSQL will shut you down immediately, and for good reason. The database cluster has to be owned and managed by a specific non-root user (usually postgres) for security and stability.
Here’s what you’ll see if you try it the wrong way:
$ initdb -D data
initdb: error: cannot be run as root
Please log in (using, e.g., "su") as the (unprivileged) user that will own the server process.
To fix it, just switch to the correct user before running the command. On many systems, that means:
sudo ./postgresql-12-setup initdb
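If you’re not using the bundled setup script, the same idea looks roughly like this; the binary and data directory paths are the PGDG defaults for version 12 and may differ on your system:

# become the unprivileged postgres user, then initialize the cluster
sudo -u postgres /usr/pgsql-12/bin/initdb -D /var/lib/pgsql/12/data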
Error 3: Initdb fails with ‘Invalid Permissions’
This error means PostgreSQL can’t use the target data directory because the permissions aren’t set correctly. The directory needs to be owned by the PostgreSQL user and locked down, typically 0700 (owner access only) or sometimes 0750. You can fix it by adjusting the permissions (for example, using chmod 0700 data and chown postgres:postgres data if you created it as root), or by starting with a directory the postgres user can set up with the right access.
$ initdb -D data
running bootstrap script ... FATAL:  data directory "data" has invalid permissions
[1885] DETAIL:  Permissions should be u=rwx (0700) or u=rwx,g=rx (0750).
child process exited with exit code 1
initdb: removing data directory "data"
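A minimal sketch of the ownership and permission fix, assuming the directory is called data and the server runs as the postgres user:

# hand the directory to the postgres user and restrict access to the owner
sudo chown postgres:postgres data
sudo chmod 0700 data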
Error 4: Permission denied for table/object
This one’s straightforward: the user trying to run the query doesn’t have the right privileges on the object in question. It often comes up when one user creates a table (like admin) and another user (like app_user) tries to access it without the correct permissions.
To fix it, make sure the proper privileges have been granted for that object, whether it’s a table, view, sequence, or something else.
$ SELECT * FROM TEST;
ERROR:  permission denied for table test
[1788] STATEMENT:  select * from test;
ERROR:  permission denied for table test
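For example, using the role names mentioned above, a grant along these lines (run as the table owner or a superuser) resolves the error:

-- allow app_user to read the table; add other privileges as needed
GRANT SELECT ON TABLE test TO app_user;

If the table lives in a schema the user can’t access at all, a schema-level GRANT USAGE may be needed as well.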
Error 5: Out of disk space (‘No space left’)
This is one of the more serious errors. It means the disk holding your PostgreSQL data directory ($PGDATA)—or possibly a specific tablespace—has run out of space. PostgreSQL can’t write new data or grow existing files until something changes.
🚨 Warning: When the disk is full, PostgreSQL can’t write WAL files. This can lead to service failure or even data loss if not handled quickly.
You’ll need to either free up space (by archiving old data, clearing logs, or removing unneeded files), move the WAL directory (pg_wal) to a different disk using a symbolic link, or create new tablespaces on other disks and move large tables or indexes there.
ERROR:  could not extend file "base/30122/331821": No space left on device
HINT:  Check free disk space.
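Before deleting anything, confirm which filesystem is actually full, and if you relocate pg_wal, do it with the server stopped. A rough sketch, with paths that are only illustrative:

# see how full the data directory's filesystem is
df -h /var/lib/pgsql/12/data

# relocate WAL to a larger disk via a symbolic link (server must be stopped)
sudo systemctl stop postgresql-12
sudo mv /var/lib/pgsql/12/data/pg_wal /bigdisk/pg_wal
sudo ln -s /bigdisk/pg_wal /var/lib/pgsql/12/data/pg_wal
sudo systemctl start postgresql-12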
Error 6: Replication standby identifier mismatch
This error means the standby server you’re trying to start wasn’t created from the correct base backup of the primary server. For replication to work, both systems need to share the same system identifier.
To fix it, take a fresh base backup from the current primary using pg_basebackup, and use that to initialize the standby server.
FATAL:  database system identifier differs between the primary and standby
[20595] DETAIL:  The primary's identifier is 6832398539310516067, the standby's identifier is 6832397799517112074.
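A minimal sketch of rebuilding the standby from a fresh base backup; the host, replication user, and data directory are placeholders, and the standby’s old data directory must be stopped and emptied first:

# on the standby: copy the primary's cluster and write basic replication settings (-R)
pg_basebackup -h primary.example.com -U replication_user \
  -D /var/lib/pgsql/12/data -P -R

On PostgreSQL 12, the -R option creates standby.signal and appends the connection settings, so the standby starts up in recovery and streams from the primary.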
Error 7: Server closed connection unexpectedly
This message usually means the PostgreSQL server process crashed or was killed while handling a request. It’s a serious issue and often points to a deeper problem.
To figure out what happened, check the PostgreSQL logs right away. Look for signs of things like segmentation faults, assertion failures, or the system’s OOM killer stepping in. If a crash did occur, generating a stack trace or core dump might be needed for further debugging.
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: Failed
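A quick way to look for crash evidence; the log paths vary by distribution and configuration, so treat these as examples rather than the exact locations on your system:

# search the server log for crash-related messages
grep -iE 'terminated by signal|segmentation fault|was terminated' /var/lib/pgsql/12/data/log/*.log

# check whether the kernel's OOM killer was involved
dmesg -T | grep -i 'out of memory'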
Error 8: Out of memory (Allocation failed)
This error shows up when PostgreSQL tries to allocate memory—usually for sorting, hashing, or executing a large query—but the operating system can’t provide it. That often means the server is low on available memory or your PostgreSQL settings are too aggressive.
Check your configuration: work_mem, shared_buffers, and total connection count are good starting points. In some cases, you may just need more physical RAM.
ERROR:  out of memory
2020-05-08 DETAIL:  Failed on request of size 1880.
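A short sketch of the kind of review that usually follows; the values shown are illustrative, not recommendations:

-- see what the server is currently using
SHOW work_mem;
SHOW shared_buffers;
SHOW max_connections;

-- lower per-operation memory if many concurrent sorts or hashes are running
ALTER SYSTEM SET work_mem = '16MB';
SELECT pg_reload_conf();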
Error 9: OOM killer invoked
This isn’t a PostgreSQL error exactly; it’s a message from the Linux kernel. It means the Out-Of-Memory (OOM) killer stepped in and terminated a PostgreSQL process, often the main server or a large backend, to free up memory.
It’s a sign that your system is under serious memory pressure. Disabling the OOM killer isn’t usually recommended, since it can lead to full system lockups. Instead, review your memory settings and resource limits to prevent this from happening again.
Out of memory: Kill process 1766 (postmaster) score 890 or sacrifice child
Killed process 1766, UID 26, (postmaster) total-vm:24384508kB, anon-rss:14376288kB, file-rss:138616kB
🚨 Caution: If the OOM killer targets PostgreSQL repeatedly, it’s a sign that memory limits are too low or other processes are starving the system.
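On a dedicated database server, the PostgreSQL documentation suggests turning off memory overcommit so allocation failures are reported back to PostgreSQL as a normal out-of-memory error instead of triggering the OOM killer. A sketch, to be tested for your workload and made permanent in /etc/sysctl.conf if you keep it:

# report allocation failures to the application rather than overcommitting
sudo sysctl -w vm.overcommit_memory=2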
Error 10: Archive command failed (WAL destination issue)
This error means the archive_command used for continuous archiving or point-in-time recovery failed. In the example below, the cp command couldn’t copy the WAL segment, most likely because the destination directory doesn’t exist or PostgreSQL doesn’t have permission to write to it.
To resolve it, make sure the target directory exists and that the PostgreSQL server user has the right permissions to write there.
cp: cannot create regular file '/usr/local/wal/000000010000000000000001': No such file or directory
LOG:  archive command failed with exit code 1
DETAIL:  The failed archive command was: cp pg_wal/000000010000000000000001 /usr/local/pgsql-logical/wal/000000010000000000000001
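A minimal sketch of making the destination usable, using the path from the error above as an example:

# create the archive destination and let the postgres user write to it
sudo mkdir -p /usr/local/wal
sudo chown postgres:postgres /usr/local/wal

A slightly safer archive_command also refuses to overwrite a segment that already exists, along the lines of the example in the PostgreSQL documentation:

# postgresql.conf (illustrative destination path)
archive_mode = on
archive_command = 'test ! -f /usr/local/wal/%f && cp %p /usr/local/wal/%f'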
Where to go from here
If you’ve run into any of these errors, you know how quickly a small issue can turn into lost time, growing alerts, or late-night troubleshooting. Fixing them isn’t always hard, but figuring out what’s actually wrong can be. These are the kinds of things that don’t show up in the getting-started guides. They show up in production.
That’s why we put together a central resource with everything you need to build, tune, and run PostgreSQL without the usual trade-offs. Explore our resources to learn how to cut costs, build smarter, and stay free from vendor lock-in.
Frequently asked questions about common PostgreSQL errors
Q1: Where can I find PostgreSQL error logs for troubleshooting?
A: The location of PostgreSQL error logs depends on your operating system and configuration. Common locations include /var/log/postgresql/, /var/lib/pgsql/&lt;version&gt;/data/log/, or within the data directory specified by $PGDATA. Check your postgresql.conf file for settings like log_destination and logging_collector to be certain. On systemd systems, journalctl -u postgresql-&lt;version&gt;.service is also common.
Q2: What are some of the most frequent PostgreSQL errors developers encounter?
A: Some of the most common PostgreSQL errors include connection failures (“could not connect to server”), permission denials (“permission denied for table…”), disk space issues (“no space left on device”), memory problems (“out of memory”), and errors during setup like initdb permission issues or running as root.
Q3: How do I fix the ‘could not connect to server’ PostgreSQL error?
A: This common PostgreSQL connection error usually means either the server process isn’t running (check its status) or you’re trying to connect to the wrong port or host. Verify the server is active and check the port setting in postgresql.conf. Firewall rules blocking connections can also cause this error.
Q4: What’s the usual fix for a ‘permission denied’ error in PostgreSQL?
A: A ‘permission denied’ PostgreSQL error typically means the user trying to perform an action lacks the necessary privileges on the target object (like a table or schema). The fix usually involves using the GRANT command (e.g., GRANT SELECT ON my_table TO my_user;) executed by a user with sufficient permissions (like the object owner or a superuser).
Q5: What typically causes ‘out of memory’ errors in PostgreSQL?
A: PostgreSQL ‘out of memory’ errors occur when a query requires more memory than available (defined by settings like work_mem for operations like sorting/hashing) or when the server overall is under-resourced. Tuning memory-related parameters in postgresql.conf or increasing server RAM can help resolve these errors. The OS OOM killer might also terminate processes if system memory is exhausted.