Loading data from PostgreSQL to a ClickHouse® data mart
You can migrate a database from PostgreSQL to ClickHouse® using Yandex Data Transfer. To do this:

- Set up your infrastructure.
- Set up and activate the transfer.
- Test the replication process.
- Query the data in ClickHouse®.

If you no longer need the resources you created, delete them.
Required paid resources
The support cost for this solution includes:
- Managed Service for PostgreSQL cluster fee: Covers the use of computational resources allocated to hosts and disk storage (see Managed Service for PostgreSQL pricing).
- Managed Service for ClickHouse® cluster fee: Covers the use of computational resources allocated to hosts (including ZooKeeper hosts) and disk storage (see Managed Service for ClickHouse® pricing).
- Fee for public IP addresses assigned to cluster hosts (see Virtual Private Cloud pricing).
- Transfer fee: Based on computational resource consumption and the total number of data rows transferred (see Data Transfer pricing).
Getting started
In our example, we will create all required resources in Yandex Cloud. Set up the infrastructure:
Manually:

- Create a source Managed Service for PostgreSQL cluster using any suitable configuration with publicly accessible hosts. Specify the following settings:

  - DB name: db1.
  - Username: pg-user.
  - Password: <source_password>.
- Create a Managed Service for ClickHouse® target cluster using any suitable configuration with publicly accessible hosts. Specify the following settings:

  - Number of ClickHouse® hosts: Minimum of 2, to enable replication within the cluster.
  - DB name: db1.
  - Username: ch-user.
  - Password: <target_password>.
- If using security groups, make sure they are configured correctly and allow inbound connections to the clusters.

- Grant the mdb_replication role to pg-user in the Managed Service for PostgreSQL cluster.
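Optionally, you can verify the grant from a SQL session on the source cluster, for example when connected to db1 as pg-user. This is a sketch of such a check and assumes pg-user can read the pg_roles catalog (readable by default); mdb_replication should appear in the output:

  -- List every role pg-user is a member of; mdb_replication should be among them.
  SELECT r.rolname
  FROM pg_roles AS r
  WHERE pg_has_role('pg-user', r.oid, 'member');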
Terraform:

- If you do not have Terraform yet, install it.

- Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

- Configure and initialize a provider. There is no need to create a provider configuration file manually: you can download it.

- Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.
- Download the postgresql-to-clickhouse.tf configuration file to your current working directory. This file describes:
- Networks.
- Subnets.
- Security groups for cluster connectivity.
- Managed Service for PostgreSQL source cluster.
- Managed Service for ClickHouse® target cluster.
- Source endpoint.
- Target endpoint.
- Transfer.
- In the postgresql-to-clickhouse.tf file, specify the admin passwords for PostgreSQL and ClickHouse®.

- Validate your Terraform configuration files using this command:

  terraform validate

  Terraform will display any configuration errors detected in your files.
- Create the required infrastructure:

  - Run this command to view the planned changes:

    terraform plan

    If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

  - If everything looks correct, apply the changes:

    - Run this command:

      terraform apply

    - Confirm updating the resources.

    - Wait for the operation to complete.

  All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.
Set up and activate the transfer
- In the db1 database, create a table named x_tab and populate it with data:

  CREATE TABLE x_tab
  (
      id NUMERIC PRIMARY KEY,
      name CHAR(5)
  );

  CREATE INDEX ON x_tab (id);

  INSERT INTO x_tab (id, name) VALUES
      (40, 'User1'),
      (41, 'User2'),
      (42, 'User3'),
      (43, 'User4'),
      (44, 'User5');

- Create a transfer:
Manually:

- Create a PostgreSQL-type source endpoint and configure it using the following settings:

  - Installation type: Managed Service for PostgreSQL cluster.
  - Managed Service for PostgreSQL cluster: Select <source_PostgreSQL_cluster_name> from the drop-down list.
  - Database: db1.
  - User: pg-user.
  - Password: <user_password>.
- Create a ClickHouse-type target endpoint and specify its cluster connection settings:

  - Connection type: Managed cluster.
  - Managed cluster: Select <target_ClickHouse®_cluster_name> from the drop-down list.
  - Database: db1.
  - User: ch-user.
  - Password: <user_password>.
  - Cleanup policy: DROP.
- Create a Snapshot and replication-type transfer, configure it to use the previously created endpoints, then activate it.

Terraform:

- In the postgresql-to-clickhouse.tf file, set the transfer_enabled variable to 1.

- Validate your Terraform configuration files using this command:

  terraform validate

  Terraform will display any configuration errors detected in your files.
- Create the required infrastructure:

  - Run this command to view the planned changes:

    terraform plan

    If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

  - If everything looks correct, apply the changes:

    - Run this command:

      terraform apply

    - Confirm updating the resources.

    - Wait for the operation to complete.

  All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console. The transfer will activate automatically upon creation.
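As a quick sanity check once the initial snapshot has been copied to the target, you can count the rows in the replicated table on the ClickHouse® cluster; with the data inserted above, the expected result is 5:

  SELECT count() FROM db1.x_tab;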
Test the replication process
- Wait for the transfer status to change to Replicating.

- To verify that the data has been replicated to the target, connect to the target Yandex Managed Service for ClickHouse® cluster. Make sure the x_tab table in db1 contains the same columns as the x_tab table in the source database, plus the __data_transfer_commit_time and __data_transfer_delete_time timestamp columns:

  SELECT * FROM db1.x_tab WHERE id = 41;

  ┌─id─┬─name──┬─__data_transfer_commit_time─┬─__data_transfer_delete_time─┐
  │ 41 │ User2 │         1633417594957267000 │                           0 │
  └────┴───────┴─────────────────────────────┴─────────────────────────────┘

- Connect to the source cluster.
- In the x_tab table of the source PostgreSQL database, delete the row with ID 41 and update the row with ID 42:

  DELETE FROM db1.public.x_tab WHERE id = 41;
  UPDATE db1.public.x_tab SET name = 'Key3' WHERE id = 42;

- Make sure the changes have been applied to the x_tab table on the ClickHouse® target:

  SELECT * FROM db1.x_tab WHERE (id >= 41) AND (id <= 42);

  ┌─id─┬─name──┬─__data_transfer_commit_time─┬─__data_transfer_delete_time─┐
  │ 41 │ User2 │         1633417594957267000 │         1675417594957267000 │
  │ 42 │ Key3  │         1675417594957267000 │                           0 │
  │ 42 │ User3 │         1633417594957268000 │         1675417594957267000 │
  └────┴───────┴─────────────────────────────┴─────────────────────────────┘
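While the transfer remains in the Replicating status, you can extend this test with an insert on the source and confirm the new rows reach the target. The IDs and names below are arbitrary values chosen for this check:

  -- On the PostgreSQL source: add two more rows.
  INSERT INTO db1.public.x_tab (id, name) VALUES (45, 'User6'), (46, 'User7');

  -- On the ClickHouse® target, after a short delay: both rows should appear as active.
  SELECT * FROM db1.x_tab WHERE id IN (45, 46) AND __data_transfer_delete_time = 0;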
Query data in ClickHouse®
For table recovery, ClickHouse® targets with replication enabled use the ReplicatedReplacingMergeTree engine. Data Transfer automatically adds the following columns to each transferred table:
- __data_transfer_commit_time: Time, in TIMESTAMP format, when this row was last updated.

- __data_transfer_delete_time: Time, in TIMESTAMP format, when this row was deleted from the source table. A value of 0 indicates that the row is still active.

The __data_transfer_commit_time column is essential for the ReplicatedReplacingMergeTree engine, which tracks changes by inserting a new version of a row upon any update or deletion, timestamped with the operation's commit time. Consequently, a query by a primary key may return multiple row versions with different __data_transfer_commit_time values.
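To see the engine and the added columns for yourself, you can inspect the target table definition; on a replicated target, the output is expected to show the ReplicatedReplacingMergeTree engine together with the service columns described above:

  SHOW CREATE TABLE db1.x_tab;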
The source data can be added or deleted while the transfer is in the Replicating status. To ensure an SQL query by a primary key returns a single record, always filter on __data_transfer_delete_time when querying tables transferred to ClickHouse®. For example, to query the x_tab table, use the following syntax:
SELECT * FROM x_tab FINAL
WHERE __data_transfer_delete_time = 0;
To simplify your SELECT queries, create a view that filters rows by __data_transfer_delete_time and use it for all your queries. For example, for the x_tab table, create the view as follows:
CREATE VIEW x_tab_view AS SELECT * FROM x_tab FINAL
WHERE __data_transfer_delete_time == 0;
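Once the view exists, queries no longer need to repeat the FINAL modifier and the __data_transfer_delete_time filter. For example:

  SELECT * FROM x_tab_view WHERE id = 42;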
Note
Using the FINAL keyword reduces query performance, so avoid it whenever possible, especially on large tables.
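If FINAL is too slow on a large table, one possible alternative is to pick the latest version of each row with argMax and keep only the rows whose latest version is not a delete marker. The query below is a minimal sketch of this approach for the x_tab table; adapt the column list to your own schema:

  -- For each id, take the most recent name and delete marker
  -- (ordered by __data_transfer_commit_time), then drop deleted rows.
  SELECT id, name
  FROM
  (
      SELECT
          id,
          argMax(name, __data_transfer_commit_time) AS name,
          argMax(__data_transfer_delete_time, __data_transfer_commit_time) AS delete_time
      FROM x_tab
      GROUP BY id
  )
  WHERE delete_time = 0;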
Delete the resources you created
Some resources are not free of charge. To avoid paying for them, delete the resources you no longer need:
- Make sure the transfer is in the Completed status, then delete it.

- Delete your endpoints and clusters.

Terraform:

- In the terminal window, go to the directory containing the infrastructure plan.
Warning
Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.
- Delete the resources:

  - Run this command:

    terraform destroy

  - Confirm deleting the resources and wait for the operation to complete.

  All the resources described in the Terraform manifests will be deleted.
ClickHouse® is a registered trademark of ClickHouse, Inc.