`TEXT` as the default data type. For a data type mapping reference, please refer to the Data Type Mapping page.
Create Mirror Screen
Click on the Mirrors tab and then open the new Mirror screen.
- Mirror Type: choose the type of mirror you want to create.
- Mirror Name: give the mirror a descriptive, unique name, e.g. `prod_pg_to_snowflake_v1` or `dev_pg_to_snowflake`.
Select Source and Destination Peers
Choose the source and destination peers from the source and destination drop-downs.
Mirror Configuration
- Sync interval: defaults to `60` seconds. This has implications for warehouse activity; for cost-sensitive users we recommend keeping this at a higher value (over `3600` seconds).
- Publication name: if you have created a publication (e.g. `peerdb_publication`), don't forget to add it in this field.
- Rows per partition for the initial snapshot: defaults to `500000`. This is useful when your tables have a large number of rows and you want to control the number of rows fetched in each partition.
- Parallel workers for the initial snapshot: defaults to `1`. This is useful when you have large tables and want to control the number of parallel workers used to fetch the initial snapshot. This setting is per table.
- Tables fetched in parallel during the initial snapshot: defaults to `4`. This is useful when you have a large number of tables and want to control how many tables are fetched in parallel.
- Soft delete: `false` by default. If you want to capture soft deletes, set this to `true`. This adds a `_peerdb_is_deleted` column to the destination table and sets it to `true` when a row is deleted in the source table, without actually deleting the row in the destination table.
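With soft delete enabled, downstream queries on the destination typically need to filter out tombstoned rows using the `_peerdb_is_deleted` column described above. A minimal sketch, assuming a replicated table named `public.orders` (the table name is illustrative):

```sql
-- Read only live rows, skipping soft-deleted ones
SELECT *
FROM public.orders
WHERE _peerdb_is_deleted = FALSE;

-- Or count rows that were deleted on the source but retained here
SELECT COUNT(*) AS deleted_rows
FROM public.orders
WHERE _peerdb_is_deleted = TRUE;
```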
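If you plan to supply a publication name in the configuration above, that publication must already exist on the source PostgreSQL database. A minimal sketch of creating one, assuming illustrative table names:

```sql
-- On the source PostgreSQL database: create a publication covering
-- the tables you plan to mirror (table names are examples)
CREATE PUBLICATION peerdb_publication
    FOR TABLE public.orders, public.customers;

-- Or, to include every table in the database:
-- CREATE PUBLICATION peerdb_publication FOR ALL TABLES;
```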
Table Selection
Review and Create
Click on the Validate Mirror button. If there are any errors, you will see them on the screen. If there are no errors, you will see a success message; once you see it, click on the Create Mirror button to create the mirror ✨.

`VARCHAR` and `VARIANT` column values exceeding 16MB are truncated and stored as NULLs, because Snowflake doesn't support `VARCHAR` or `VARIANT` values over 16MB.

PeerDB uses `ON_ERROR=CONTINUE` in the COPY command while loading data into Snowflake, for both the initial load and Change Data Capture (CDC). This setting ensures that any row that cannot be loaded into Snowflake due to an unsupported feature (e.g., dates out of the supported range) is skipped. Such scenarios are rare, but they can occur. To check whether a load had any issues, run:

```sql
SELECT * FROM information_schema.load_history WHERE STATUS = 'PARTIALLY_LOADED';
```
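To dig into a partially loaded batch, the same view exposes error details. A sketch using columns from Snowflake's `INFORMATION_SCHEMA.LOAD_HISTORY` view:

```sql
-- Which tables were affected, how many rows failed, and the first error seen
SELECT table_name,
       last_load_time,
       error_count,
       first_error_message
FROM information_schema.load_history
WHERE status = 'PARTIALLY_LOADED'
ORDER BY last_load_time DESC;
```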