The tables below show how PeerDB maps data types from a PostgreSQL source (first column) to PostgreSQL, BigQuery, Snowflake, ClickHouse and Elasticsearch destinations (remaining columns).

Primitive Data Types


| PostgreSQL (source) | PostgreSQL | BigQuery | Snowflake | ClickHouse | Elasticsearch |
|---|---|---|---|---|---|
| smallint | smallint | INTEGER | INTEGER | Int16 | long |
| integer | integer | INTEGER | INTEGER | Int32 | long |
| bigint | bigint | INTEGER | INTEGER | Int64 | long |
| float4 | float4 | FLOAT | FLOAT | Float32 | float |
| double precision | double precision | FLOAT | FLOAT | Float64 | float |
| boolean | bool | BOOLEAN | BOOLEAN | Bool | boolean |
| "char" | CHAR | STRING | STRING | FixedString(1) | text |
| varchar | varchar | STRING | STRING | String | text |
| date | date | DATE | DATE | Date | date |
| json | json | JSON | VARIANT | String | unnested subdocument |
| jsonb | jsonb | JSON | VARIANT | String | unnested subdocument |
| numeric | numeric | BIGNUMERIC | NUMBER | Decimal | text |
| text | text | STRING | STRING | String | text |
| timestamp | timestamp | TIMESTAMP | TIMESTAMP_NTZ | DateTime64(6) | date |
| timestamp with time zone | timestamp with time zone | TIMESTAMP | TIMESTAMP_TZ | DateTime64(6) | date |
| time | time | TIME | TIME | String | date |
| bit | bit | BYTES | BINARY | String | text |
| bit varying | varbit | BYTES | BINARY | String | text |
| bytea | bytea | BYTES | BINARY | String | text |
| geography | geography | GEOGRAPHY | GEOGRAPHY | String | Coming soon! |
| geometry | geometry | GEOGRAPHY | GEOMETRY | String | Coming soon! |
| inet | inet | STRING | STRING | String | text |
| macaddr | macaddr | STRING | STRING | String | text |
| cidr | cidr | STRING | STRING | String | text |
| hstore | hstore | JSON | VARIANT | String | Coming soon! |
| uuid | uuid | STRING | STRING | uuid | Coming soon! |
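
For illustration, a source table declared as below would, per the table above, land with the commented column types on BigQuery and Snowflake. The table and column names are hypothetical.

```sql
-- Hypothetical PostgreSQL source table (names are illustrative only).
CREATE TABLE public.orders (
    id         bigint,                   -- BigQuery: INTEGER,   Snowflake: INTEGER
    note       text,                     -- BigQuery: STRING,    Snowflake: STRING
    paid       boolean,                  -- BigQuery: BOOLEAN,   Snowflake: BOOLEAN
    metadata   jsonb,                    -- BigQuery: JSON,      Snowflake: VARIANT
    created_at timestamp with time zone  -- BigQuery: TIMESTAMP, Snowflake: TIMESTAMP_TZ
);
```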

Array Data Types


| PostgreSQL (source) | PostgreSQL | BigQuery | Snowflake | ClickHouse |
|---|---|---|---|---|
| ARRAY<INT2> | ARRAY<INT2> | ARRAY<INT> | VARIANT | Array<Int16> |
| ARRAY<INT4> | ARRAY<INT4> | ARRAY<INT> | VARIANT | Array<Int32> |
| ARRAY<INT8> | ARRAY<INT8> | ARRAY<INT> | VARIANT | Array<Int64> |
| ARRAY<FLOAT4> | ARRAY<FLOAT4> | ARRAY<FLOAT> | VARIANT | Array<Float32> |
| ARRAY<DOUBLE PRECISION> | ARRAY<DOUBLE PRECISION> | ARRAY<DOUBLE PRECISION> | VARIANT | Array<Float64> |
| ARRAY<BOOL> | ARRAY<BOOL> | ARRAY<BOOL> | VARIANT | Array<Bool> |
| ARRAY<VARCHAR> | ARRAY<VARCHAR> | ARRAY<STRING> | VARIANT | Array<String> |
| ARRAY<TEXT> | ARRAY<TEXT> | ARRAY<STRING> | VARIANT | Array<String> |
| ARRAY<DATE> | ARRAY<DATE> | ARRAY<DATE> | VARIANT | String |
| ARRAY<TIMESTAMP> | ARRAY<TIMESTAMP> | ARRAY<TIMESTAMP> | VARIANT | String |
| ARRAY<TIMESTAMPTZ> | ARRAY<TIMESTAMPTZ> | ARRAY<TIMESTAMP> | VARIANT | String |
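
As a sketch (hypothetical table and column names), array columns on a source table would map as follows:

```sql
-- Hypothetical PostgreSQL source table with array columns (illustrative only).
CREATE TABLE public.events (
    tag_ids integer[],     -- BigQuery: ARRAY<INT>,       Snowflake: VARIANT, ClickHouse: Array<Int32>
    labels  text[],        -- BigQuery: ARRAY<STRING>,    Snowflake: VARIANT, ClickHouse: Array<String>
    seen_at timestamptz[]  -- BigQuery: ARRAY<TIMESTAMP>, Snowflake: VARIANT, ClickHouse: String
);
```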

Design Choices

We recognise that there are various approaches to handling certain data types. Here are some decisions we’ve taken for PeerDB.

Numeric Type

For Snowflake, we map PostgreSQL’s numeric type as follows:

  • numeric with no specified precision and scale is mapped to NUMBER(38,20).
  • numeric with a precision above 38 or a scale above 37 is mapped to NUMBER(38,20).
  • numeric with both precision and scale within the above limits is mapped to NUMBER(precision, scale).
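
For illustration, assuming a hypothetical source table with three numeric columns, these rules give the following Snowflake types:

```sql
-- Hypothetical PostgreSQL numeric columns and the Snowflake types they map to.
CREATE TABLE public.prices (
    amount_default numeric,          -- no precision/scale      -> NUMBER(38,20)
    amount_wide    numeric(50, 40),  -- precision 50 exceeds 38 -> NUMBER(38,20)
    amount_exact   numeric(20, 4)    -- within limits           -> NUMBER(20,4)
);
```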

For BigQuery, we map PostgreSQL’s numeric type as follows:

  • numeric with no specified precision and scale is mapped to BIGNUMERIC(38,20).
  • numeric with a precision above 38 or a scale above 37 is mapped to BIGNUMERIC(38,20).
  • numeric with both precision and scale within the above limits is mapped to BIGNUMERIC(precision, scale).
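
For the same hypothetical columns, the BigQuery side would look roughly like this (a sketch, not the exact DDL PeerDB issues):

```sql
-- Sketch of the resulting BigQuery column types for the columns above.
CREATE TABLE dataset.prices (
    amount_default BIGNUMERIC(38, 20),  -- numeric with no precision/scale
    amount_wide    BIGNUMERIC(38, 20),  -- numeric(50, 40): precision exceeds 38
    amount_exact   BIGNUMERIC(20, 4)    -- numeric(20, 4): within limits
);
```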

For ClickHouse, we map PostgreSQL’s numeric type as follows:

  • numeric with no specified precision and scale is mapped to Decimal(76,38).
  • numeric with a precision above 76 or a scale above 38 is mapped to Decimal(76,38).
  • numeric with both precision and scale within the above limits is mapped to Decimal(precision, scale).
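
And on ClickHouse, again as a sketch rather than the exact DDL PeerDB issues:

```sql
-- Sketch of the resulting ClickHouse column types for the same columns.
CREATE TABLE prices (
    amount_default Decimal(76, 38),  -- numeric with no precision/scale
    amount_wide    Decimal(76, 38),  -- numeric(50, 40): scale 40 exceeds 38
    amount_exact   Decimal(20, 4)    -- numeric(20, 4): within limits
) ENGINE = MergeTree ORDER BY tuple();
```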

Geospatial Data

PeerDB detects invalid shapes (for example, a linestring with only one point) among the PostGIS values it pulls and writes them as null to the destination. We keep a log of this data, which can be retrieved if needed.

Valid geospatial data is written to BigQuery and Snowflake in Well-Known Text (WKT) format, while it is written to PostgreSQL destinations as received.
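
As an illustration (hypothetical table and values), a valid PostGIS value ends up as a WKT string on BigQuery and Snowflake:

```sql
-- Hypothetical PostGIS source table and row.
CREATE TABLE public.places (
    id    integer,
    shape geometry
);
INSERT INTO public.places VALUES (1, ST_GeomFromText('POINT(77.59 12.97)'));
-- On BigQuery/Snowflake the shape arrives as the WKT string 'POINT(77.59 12.97)';
-- an invalid shape in the same column (e.g. a one-point linestring) would arrive as NULL and be logged.
```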

HStore Data

PeerDB writes HSTORE data as JSON on BigQuery. All intricacies of the HSTORE data type are preserved, such as:

  • NULL values. Example: '"a"=>NULL' will be written as {"a":null}
  • Empty keys. Example: '""=>1' will be written as {"":1}
  • Overriding duplicate key values. Example: '"a"=>"1", "a"=>"2"' will be written as {"a":2}

To Snowflake, it is written as a VARIANT data type; it is still formatted as JSON and can be queried as such: snowflake_hstore_column:key.
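
For example, a query on Snowflake over the synced column might look like this (table and column names are hypothetical):

```sql
-- Querying the HSTORE-derived VARIANT column on Snowflake (hypothetical names).
SELECT
    hstore_col:a::string AS value_of_a  -- extract key "a" from the JSON-formatted VARIANT
FROM synced_table
WHERE hstore_col:a IS NOT NULL;
```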

Nulls in BigQuery Arrays

PeerDB removes null values from BigQuery arrays, because BigQuery does not allow null elements in arrays at insertion time.
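
For example (hypothetical table and values):

```sql
-- Hypothetical PostgreSQL source table and row containing a NULL array element.
CREATE TABLE public.scores (vals integer[]);
INSERT INTO public.scores (vals) VALUES (ARRAY[1, NULL, 3]);
-- On BigQuery, the synced value is [1, 3]: the NULL element is dropped,
-- since BigQuery rejects NULL elements inside arrays on insertion.
```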

Elasticsearch

We currently rely on Elasticsearch dynamic mapping for data type mappings, so this mapping may not be accurate for all cases. We are working on enabling explicit mappings for Elasticsearch.