Sync data from one Postgres database to another (like pg_dump/pg_restore). Designed for:
- speed - tables are transferred in parallel
- security - built-in methods to prevent sensitive data from ever leaving the server
- flexibility - gracefully handles schema differences, like missing columns and extra columns
- convenience - sync partial tables, groups of tables, and related records
🍊 Battle-tested at Instacart
pgsync is a command line tool. To install, run:

```sh
gem install pgsync
```

This will give you the pgsync command. If installation fails, you may need to install dependencies.

You can also install it with Homebrew:

```sh
brew install pgsync
```

Or Docker.
In your project directory, run:

```sh
pgsync --init
```

This creates .pgsync.yml for you to customize. We recommend checking this into your version control (assuming it doesn’t contain sensitive information). pgsync commands can be run from this directory or any subdirectory.
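At minimum, .pgsync.yml specifies the source and destination connections. A minimal sketch (the connection URLs below are placeholders, not values pgsync generates):

```yml
# source database (placeholder URL - adjust for your environment)
from: postgres://user:[email protected]:5432/myapp_production
# destination database (placeholder URL)
to: postgres://localhost:5432/myapp_development
```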
First, make sure your schema is set up in both databases. We recommend using a schema migration tool for this, but pgsync also provides a few convenience methods. Once that’s done, you’re ready to sync data.
Sync tables

```sh
pgsync
```

Sync specific tables

```sh
pgsync table1,table2
```

Works with wildcards as well

```sh
pgsync "table*"
```

Sync specific rows (existing rows are overwritten)

```sh
pgsync products "where store_id = 1"
```

You can also preserve existing rows

```sh
pgsync products "where store_id = 1" --preserve
```

Or truncate them

```sh
pgsync products "where store_id = 1" --truncate
```

Exclude specific tables

```sh
pgsync --exclude table1,table2
```

Add to .pgsync.yml to exclude by default

```yml
exclude:
  - table1
  - table2
```

Sync tables from all schemas or specific schemas (by default, only the search path is synced)
```sh
pgsync --all-schemas
# or
pgsync --schemas public,other
# or
pgsync public.table1,other.table2
```

Define groups in .pgsync.yml:
```yml
groups:
  group1:
    - table1
    - table2
```

And run:

```sh
pgsync group1
```

You can also use groups to sync a specific record and associated records in other tables.
To get product 123 with its reviews, last 10 coupons, and store, use:
```yml
groups:
  product:
    products: "where id = {1}"
    reviews: "where product_id = {1}"
    coupons: "where product_id = {1} order by created_at desc limit 10"
    stores: "where id in (select store_id from products where id = {1})"
```

And run:

```sh
pgsync product:123
```

Sync the schema before the data (this wipes out existing data)
```sh
pgsync --schema-first
```

Specify tables

```sh
pgsync table1,table2 --schema-first
```

Sync the schema without data (this wipes out existing data)

```sh
pgsync --schema-only
```

pgsync does not try to sync Postgres extensions.
Prevent sensitive data like email addresses from leaving the remote server.
Define rules in .pgsync.yml:
```yml
data_rules:
  email: unique_email
  last_name: random_letter
  birthday: random_date
  users.auth_token:
    value: secret
  visits_count:
    statement: "(RANDOM() * 10)::int"
  encrypted_*: null
```

last_name matches all columns named last_name and users.last_name matches only the users table. Wildcards are supported, and the first matching rule is applied.
Options for replacement are:

- unique_email
- unique_phone
- unique_secret
- random_letter
- random_int
- random_date
- random_time
- random_ip
- value
- statement
- null
- untouched
Rules starting with unique_ require the table to have a single column primary key. unique_phone requires a numeric primary key.
Foreign keys can make it difficult to sync data. Three options are:
- Defer constraints (recommended)
- Manually specify the order of tables
- Disable foreign key triggers, which can silently break referential integrity (not recommended)
To defer constraints, use:

```sh
pgsync --defer-constraints
```

To manually specify the order of tables, use --jobs 1 so tables are synced one-at-a-time.

```sh
pgsync table1,table2,table3 --jobs 1
```

To disable foreign key triggers and potentially break referential integrity, use:

```sh
pgsync --disable-integrity
```

This requires superuser privileges on the to database. If syncing to (not from) Amazon RDS, use the rds_superuser role. If syncing to (not from) Heroku, there doesn’t appear to be a way to disable integrity.
Disable user triggers with:

```sh
pgsync --disable-user-triggers
```

Skip syncing sequences with:

```sh
pgsync --no-sequences
```

For extremely large, append-only tables, sync in batches.

```sh
pgsync large_table --in-batches
```

Note: This requires the table to have a numeric, increasing primary key.
The script will resume where it left off when run again, making it great for backfills.
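Because each run resumes automatically, one way to drive a backfill is a scheduled job. A hypothetical crontab entry (the project path is a placeholder):

```sh
# run nightly at 2am; each run picks up where the previous one stopped
# /path/to/project is a placeholder for the directory containing .pgsync.yml
0 2 * * * cd /path/to/project && pgsync large_table --in-batches
```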
Always make sure your connection is secure when connecting to a database over a network you don’t fully trust. Your best option is to connect over SSH or a VPN. Another option is to use sslmode=verify-full. If you don’t do this, your database credentials can be compromised.
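For example, sslmode can be set as a query parameter on the connection URL in .pgsync.yml (the host and credentials below are placeholders):

```yml
# verify-full encrypts the connection and verifies the server certificate and hostname
from: postgres://user:[email protected]:5432/myapp_production?sslmode=verify-full
```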
To keep you from accidentally overwriting production, the destination is limited to localhost or 127.0.0.1 by default.
To use another host, add to_safe: true to your .pgsync.yml.
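A sketch of what this looks like (the to URL is a placeholder):

```yml
to: postgres://user:[email protected]:5432/myapp_staging
# acknowledge that the destination is intentionally not localhost
to_safe: true
```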
To use with multiple databases, run:

```sh
pgsync --init db2
```

This creates .pgsync-db2.yml for you to edit. Specify a database in commands with:

```sh
pgsync --db db2
```

If you run pgsync --init in a Django project, migrations will be excluded in .pgsync.yml.
```yml
exclude:
  - django_migrations
```

If you run pgsync --init in a Heroku project, the from database will be set in .pgsync.yml.

```yml
from: $(heroku config:get DATABASE_URL)?sslmode=require
```

If you run pgsync --init in a Laravel project, migrations will be excluded in .pgsync.yml.

```yml
exclude:
  - migrations
```

If you run pgsync --init in a Rails project, Active Record metadata and schema migrations will be excluded in .pgsync.yml.

```yml
exclude:
  - ar_internal_metadata
  - schema_migrations
```

To view the SQL that’s run, use:
```sh
pgsync --debug
```

Help

```sh
pgsync --help
```

Version

```sh
pgsync --version
```

List tables

```sh
pgsync --list
```

Use groups when possible to take advantage of parallelism.
For Ruby scripts, you may need to do:
```ruby
Bundler.with_unbundled_env do
  system "pgsync ..."
end
```

Get the Docker image with:
```sh
docker pull ankane/pgsync
alias pgsync="docker run -ti --rm -v .:/conf -w /conf ankane/pgsync"
```

This will give you the pgsync command.
For databases on the host machine, use host.docker.internal as the hostname (on Linux, this requires --add-host=host.docker.internal:host-gateway).
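On Linux, the alias above can be extended with that flag; a sketch (same image and mounts as before):

```sh
# map host.docker.internal to the host gateway so the container
# can reach databases running on the host machine
alias pgsync="docker run -ti --rm -v .:/conf -w /conf --add-host=host.docker.internal:host-gateway ankane/pgsync"
```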
If installation fails, your system may be missing Ruby or libpq.
On Mac, run:
```sh
brew install libpq
```

On Ubuntu, run:

```sh
sudo apt-get install ruby-dev libpq-dev build-essential
```

Run:

```sh
gem install pgsync
```

To use master, run:

```sh
gem install specific_install
gem specific_install https://github.com/ankane/pgsync.git
```

With Homebrew, run:

```sh
brew upgrade pgsync
```

With Docker, run:

```sh
docker pull ankane/pgsync
```

Also check out:
- Dexter - The automatic indexer for Postgres
- PgHero - A performance dashboard for Postgres
- pgslice - Postgres partitioning as easy as pie
Inspired by heroku-pg-transfer.
View the changelog
Everyone is encouraged to help improve this project. Here are a few ways you can help:
- Report bugs
- Fix bugs and submit pull requests
- Write, clarify, or fix documentation
- Suggest or add new features
To get started with development:

```sh
git clone https://github.com/ankane/pgsync.git
cd pgsync
bundle install
createdb pgsync_test1
createdb pgsync_test2
createdb pgsync_test3
bundle exec rake test
```