How To Back Up a PostgreSQL Database
Knowing how to back up a PostgreSQL database is a fundamental skill. In this article, I will walk you through everything you need to know to secure your data using industry-standard tools and strategies.
Understanding PostgreSQL Backup Methodologies
Before we dive into the commands, you need to understand the two main “philosophies” of PostgreSQL backups: Logical and Physical.
Logical Backups (SQL Dumps)
A logical backup extracts the database structure and data into a script file (usually SQL). When you restore it, PostgreSQL essentially “re-plays” every command to recreate your tables, indexes, and rows.
- Best for: Small to medium databases, moving data between different versions of Postgres, or backing up specific tables.
Physical Backups (Binary Copies)
Physical backups involve copying the actual data files on disk that PostgreSQL uses to store your cluster.
- Best for: Massive databases (multi-terabyte), disaster recovery, and setting up replication.
Method 1: The Logical Approach with pg_dump
If you are an application developer, pg_dump is likely your best friend. It is the most common utility for backing up a single database. The beauty of pg_dump is that it doesn’t lock out other users; your application stays online while the backup runs.
Basic Syntax for pg_dump
To create a standard SQL script backup of a database named marketing_db, I use the following command:
```bash
pg_dump -U postgres -d marketing_db > marketing_backup.sql
```
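To restore a plain-text dump like this, you replay the script through psql into an existing database. A minimal sketch (the database name marketing_db_restored is just an example):

```bash
# Create an empty target database first (name is an example)
createdb -U postgres marketing_db_restored

# Replay the dumped SQL script into it
psql -U postgres -d marketing_db_restored -f marketing_backup.sql
```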
Advanced Formats for Power Users
I rarely use plain SQL for production. Instead, I use the Custom Format (-F c). This format is compressed by default and allows you to use pg_restore for parallel processing, which significantly speeds up the recovery time.
| Format | Flag | Description | Recommendation |
| --- | --- | --- | --- |
| Plain | -F p | Plain text SQL script. | Good for small edits. |
| Custom | -F c | Compressed binary format. | Industry Standard. |
| Directory | -F d | Parallel-capable directory. | Best for massive datasets. |
| Tar | -F t | Standard tar archive. | Good for portability. |
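Because the custom format is an archive rather than a plain script, you restore it with pg_restore, which can run several jobs in parallel. A sketch (database and file names are examples):

```bash
# Restore a custom-format dump using 4 parallel jobs
pg_restore -U postgres -d marketing_db -j 4 marketing_backup.dump
```

The -j flag is where the speedup from the custom and directory formats comes from: independent tables and indexes are rebuilt concurrently.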
My Go-To Production Command
When I’m backing up a critical database, I use a combination of flags to ensure the highest reliability:
```bash
pg_dump -U admin_user -h localhost -F c -b -v -f "/backups/daily/db_snapshot.dump" sales_db
```
- -b: Includes large objects (blobs).
- -v: Verbose mode (I like to see what’s happening).
- -f: Specifies the output file path.
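In production I don’t run this by hand; a small wrapper script with a date-stamped filename makes it cron-friendly. A sketch, reusing the user, host, and database names from the command above (all of them examples):

```shell
#!/bin/sh
# Nightly backup sketch: writes a date-stamped custom-format dump.
# The paths, the admin_user role, and sales_db are examples, not fixed names.
STAMP=$(date +%F)                                # e.g. 2024-05-01
OUTFILE="/backups/daily/sales_db_${STAMP}.dump"

# Guarded so the sketch degrades gracefully where PostgreSQL is not installed
if command -v pg_dump >/dev/null 2>&1; then
    pg_dump -U admin_user -h localhost -F c -b -f "$OUTFILE" sales_db
else
    echo "pg_dump not found; would have written $OUTFILE"
fi
```

Schedule it with cron (for example, `0 2 * * * /usr/local/bin/pg_backup.sh`) and pair it with a retention job that prunes old dumps.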
Method 2: Backing Up the Entire Cluster with pg_dumpall
Sometimes, a single database isn’t enough. In a large enterprise environment, you might have dozens of databases in a single PostgreSQL cluster, along with global roles (users) and permissions.
pg_dumpall handles this by extracting every database and the global metadata.
```bash
pg_dumpall -U postgres > full_cluster_backup.sql
```
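A pg_dumpall script is restored with psql while connected to any existing database (the postgres database is the conventional choice), because the script itself issues the CREATE DATABASE commands:

```bash
# Replay the full-cluster script; it recreates databases, roles, and data
psql -U postgres -f full_cluster_backup.sql postgres
```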
Pro-Tip: If you only need to back up your users and roles (without the data), use the --globals-only flag. This is incredibly helpful when migrating to a new server.
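For example, to capture only roles and other global objects into their own file:

```bash
# Dump roles, memberships, and tablespace definitions only (no table data)
pg_dumpall -U postgres --globals-only > globals_backup.sql
```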
Method 3: Physical Backups with pg_basebackup
For those of you managing mission-critical systems where every second of downtime costs thousands of dollars, pg_dump might be too slow to restore. This is where pg_basebackup comes in.
It creates a “binary copy” of the entire database cluster. It is the foundation for Point-in-Time Recovery (PITR): the ability to restore your database to a specific moment in the past.
How to Run a Base Backup
I typically run this from a standby server or a dedicated backup machine:
```bash
pg_basebackup -h 127.0.0.1 -D /var/lib/postgresql/backups/physical_copy -U replication_user -v -P --wal-method=stream
```
- -D: The destination directory.
- -P: Shows a progress bar (essential for large files).
- --wal-method=stream: Ensures the transaction logs are included so the backup is consistent.
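Recovering from a base backup is essentially “stop the server, swap in the copied data directory, start it again.” A rough sketch; the service name, version number, and data directory path are assumptions for a Debian-style layout and will differ on your system:

```shell
# Stop PostgreSQL, set aside the damaged data directory, and replace it
# with the base backup. Paths and the service name are assumptions.
sudo systemctl stop postgresql
sudo mv /var/lib/postgresql/16/main /var/lib/postgresql/16/main.broken
sudo cp -a /var/lib/postgresql/backups/physical_copy /var/lib/postgresql/16/main

# PostgreSQL refuses to start unless the data directory is private
sudo chown -R postgres:postgres /var/lib/postgresql/16/main
sudo chmod 700 /var/lib/postgresql/16/main

sudo systemctl start postgresql
```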
The 3-2-1 Backup Strategy
- 3 Copies of Data: Keep your original data and at least two backups.
- 2 Different Media: Store backups on different storage types (e.g., Local SSD and Cloud Object Storage).
- 1 Offsite Location: Keep at least one copy in a different geographic region (e.g., US-East-1 and US-West-2) to protect against regional disasters.
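Applying the offsite rule can be as simple as shipping each dump to object storage in another region right after it is written. A sketch using the AWS CLI; the bucket name and paths are examples:

```bash
# Copy the latest dump to an S3 bucket in a different region
aws s3 cp /backups/daily/db_snapshot.dump \
    s3://my-db-backups/daily/db_snapshot.dump --region us-west-2
```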
Conclusion
Backing up your PostgreSQL database isn’t just about running a command; it’s about building a resilient system that can save you from human error, hardware failure, or cyber threats. Whether you choose the flexibility of pg_dump or the raw speed of pg_basebackup, the most important step is to test your restores.
I am Bijay, and I have more than 15 years of experience in the software industry. I am also a Microsoft MVP.