---
title: "SQLServer CDC"
weight: 2
type: docs
aliases:
- /cdc-ingestion/sqlserver-cdc.html
---
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->

# SQLServer CDC

Paimon supports synchronizing changes from different databases using change data capture (CDC). This feature requires Flink and its [CDC connectors](https://ververica.github.io/flink-cdc-connectors/).

## Prepare CDC Bundled Jar

```
flink-sql-connector-sqlserver-cdc-*.jar
```
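One common way to make the connector available is to place the bundled jar on Flink's classpath under `<FLINK_HOME>/lib`. The sketch below is runnable because the installation and download locations are mocked with temp directories; the jar version shown is only an example, not one pinned by Paimon — substitute your real paths and version.

```bash
# Sketch: stage the SQLServer CDC connector jar into Flink's lib/ directory so
# the `flink run` actions below can find it. FLINK_HOME and the download dir
# are mocked with temp dirs here; replace them with your real locations.
FLINK_HOME="$(mktemp -d)"          # stand-in for your Flink installation
mkdir -p "${FLINK_HOME}/lib"
DOWNLOAD_DIR="$(mktemp -d)"        # stand-in for where you downloaded the jar
JAR="flink-sql-connector-sqlserver-cdc-2.4.1.jar"   # example version only
touch "${DOWNLOAD_DIR}/${JAR}"     # stand-in for the real downloaded jar

cp "${DOWNLOAD_DIR}/${JAR}" "${FLINK_HOME}/lib/"
ls "${FLINK_HOME}/lib"             # the jar must be on Flink's classpath
```

After copying the jar, restart the Flink cluster (or session) so the new jar is picked up.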

## Synchronizing Tables

By using [SqlServerSyncTableAction](/docs/{{< param Branch >}}/api/java/org/apache/paimon/flink/action/cdc/sqlserver/SqlServerSyncTableAction) in a Flink DataStream job or directly through `flink run`, users can synchronize one or multiple tables from SQLServer into one Paimon table.

To use this feature through `flink run`, run the following shell command.

```bash
<FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-{{< version >}}.jar \
    sqlserver-sync-table \
    --warehouse <warehouse-path> \
    --database <database-name> \
    --table <table-name> \
    [--partition-keys <partition-keys>] \
    [--primary-keys <primary-keys>] \
    [--type-mapping <option1,option2...>] \
    [--computed-column <'column-name=expr-name(args[, ...])'> [--computed-column ...]] \
    [--metadata-column <metadata-column>] \
    [--sqlserver-conf <sqlserver-cdc-source-conf> [--sqlserver-conf <sqlserver-cdc-source-conf> ...]] \
    [--catalog-conf <paimon-catalog-conf> [--catalog-conf <paimon-catalog-conf> ...]] \
    [--table-conf <paimon-table-sink-conf> [--table-conf <paimon-table-sink-conf> ...]]
```

{{< generated/sqlserver_sync_table >}}

Currently, only one database is supported for synchronization, and regular expression matching of the database name is not supported.

If the Paimon table you specify does not exist, this action will automatically create it. Its schema will be derived from all specified SQLServer tables. If the Paimon table already exists, its schema will be compared against the schema of all specified SQLServer tables.

Example 1: synchronize tables into one Paimon table

```bash
<FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-{{< version >}}.jar \
    sqlserver-sync-table \
    --warehouse hdfs:///path/to/warehouse \
    --database test_db \
    --table test_table \
    --partition-keys pt \
    --primary-keys pt,uid \
    --computed-column '_year=year(age)' \
    --sqlserver-conf hostname=127.0.0.1 \
    --sqlserver-conf username=root \
    --sqlserver-conf password=123456 \
    --sqlserver-conf database-name='source_db' \
    --sqlserver-conf schema-name='dbo' \
    --sqlserver-conf table-name='dbo.source_table1|dbo.source_table2' \
    --catalog-conf metastore=hive \
    --catalog-conf uri=thrift://hive-metastore:9083 \
    --table-conf bucket=4 \
    --table-conf changelog-producer=input \
    --table-conf sink.parallelism=4
```

As the example shows, `table-name` in `--sqlserver-conf` supports regular expressions, so a single job can monitor every table that matches the expression. The schemas of all matched tables will be merged into one Paimon table schema.
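A quick way to sanity-check which names a `table-name` pattern would capture, before submitting the job, is to match it against a list of candidate names with `grep -E`. The candidate table list below is made up for illustration; the pattern is the one from Example 1 (note that `.` in a regular expression matches any character).

```bash
# Illustrative check of which schema.table names a table-name pattern matches.
# The candidate list is made up; the pattern is taken from Example 1 above.
pattern='dbo.source_table1|dbo.source_table2'
printf '%s\n' dbo.source_table1 dbo.source_table2 dbo.other_table \
  | grep -E "^(${pattern})$"
```

Only `dbo.source_table1` and `dbo.source_table2` survive the filter; `dbo.other_table` would not be synchronized.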

Example 2: synchronize shards into one Paimon table

You can also set `schema-name` to a regular expression to capture multiple schemas. A typical scenario is a table `source_table` that is split across schemas `source_dbo1`, `source_dbo2`, and so on; the data of all the `source_table` shards can then be synchronized into one Paimon table.

```bash
<FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-{{< version >}}.jar \
    sqlserver-sync-table \
    --warehouse hdfs:///path/to/warehouse \
    --database test_db \
    --table test_table \
    --partition-keys pt \
    --primary-keys pt,uid \
    --computed-column '_year=year(age)' \
    --sqlserver-conf hostname=127.0.0.1 \
    --sqlserver-conf username=root \
    --sqlserver-conf password=123456 \
    --sqlserver-conf database-name='source_db' \
    --sqlserver-conf schema-name='source_dbo.+' \
    --sqlserver-conf table-name='source_table' \
    --catalog-conf metastore=hive \
    --catalog-conf uri=thrift://hive-metastore:9083 \
    --table-conf bucket=4 \
    --table-conf changelog-producer=input \
    --table-conf sink.parallelism=4
```

## Synchronizing Databases

By using [SqlServerSyncDatabaseAction](/docs/{{< param Branch >}}/api/java/org/apache/paimon/flink/action/cdc/sqlserver/SqlServerSyncDatabaseAction) in a Flink DataStream job or directly through `flink run`, users can synchronize the whole SQLServer database into one Paimon database.

To use this feature through `flink run`, run the following shell command.

```bash
<FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-{{< version >}}.jar \
    sqlserver-sync-database \
    --warehouse <warehouse-path> \
    --database <database-name> \
    [--ignore-incompatible <true/false>] \
    [--merge-shards <true/false>] \
    [--table-prefix <paimon-table-prefix>] \
    [--table-suffix <paimon-table-suffix>] \
    [--including-tables <sqlserver-table-name|name-regular-expr>] \
    [--excluding-tables <sqlserver-table-name|name-regular-expr>] \
    [--mode <sync-mode>] \
    [--metadata-column <metadata-column>] \
    [--type-mapping <option1,option2...>] \
    [--sqlserver-conf <sqlserver-cdc-source-conf> [--sqlserver-conf <sqlserver-cdc-source-conf> ...]] \
    [--catalog-conf <paimon-catalog-conf> [--catalog-conf <paimon-catalog-conf> ...]] \
    [--table-conf <paimon-table-sink-conf> [--table-conf <paimon-table-sink-conf> ...]]
```

{{< generated/sqlserver_sync_database >}}

Currently, only one database is supported for synchronization, and regular expression matching of the database name is not supported.

Only tables with primary keys will be synchronized.

For each SQLServer table to be synchronized, if the corresponding Paimon table does not exist, this action will automatically create it. Its schema will be derived from all specified SQLServer tables. If the Paimon table already exists, its schema will be compared against the schema of all specified SQLServer tables.

Example 1: synchronize entire database

```bash
<FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-{{< version >}}.jar \
    sqlserver-sync-database \
    --warehouse hdfs:///path/to/warehouse \
    --database test_db \
    --sqlserver-conf hostname=127.0.0.1 \
    --sqlserver-conf username=root \
    --sqlserver-conf password=123456 \
    --sqlserver-conf database-name=source_db \
    --sqlserver-conf schema-name=dbo \
    --catalog-conf metastore=hive \
    --catalog-conf uri=thrift://hive-metastore:9083 \
    --table-conf bucket=4 \
    --table-conf changelog-producer=input \
    --table-conf sink.parallelism=4
```

Example 2: synchronize and merge multiple shards

Let's say you have multiple schema shards `schema1`, `schema2`, ... and each schema has the tables `tbl1`, `tbl2`, .... You can synchronize all of the `schema.+.tbl.+` tables into the tables `test_db.tbl1`, `test_db.tbl2`, ... with the following command:

```bash
<FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-{{< version >}}.jar \
    sqlserver-sync-database \
    --warehouse hdfs:///path/to/warehouse \
    --database test_db \
    --sqlserver-conf hostname=127.0.0.1 \
    --sqlserver-conf username=root \
    --sqlserver-conf password=123456 \
    --sqlserver-conf database-name='source_db' \
    --sqlserver-conf schema-name='db.+' \
    --catalog-conf metastore=hive \
    --catalog-conf uri=thrift://hive-metastore:9083 \
    --table-conf bucket=4 \
    --table-conf changelog-producer=input \
    --table-conf sink.parallelism=4 \
    --including-tables 'tbl.+'
```

By setting `schema-name` to a regular expression, the synchronization job captures all tables under the matched schemas and merges tables of the same name into one table.

{{< hint info >}}
You can set `--merge-shards false` to prevent merging shards. The synchronized tables will be named 'databaseName_tableName' to avoid potential name conflicts.
{{< /hint >}}
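With merging disabled, each shard keeps its own Paimon table under the 'databaseName_tableName' scheme described in the hint. A tiny sketch of the resulting names, using made-up shard and table names:

```bash
# Sketch: Paimon table names produced with --merge-shards false, following
# the 'databaseName_tableName' scheme. Shard and table names are made up.
for shard in db1 db2; do
  for tbl in tbl1 tbl2; do
    echo "${shard}_${tbl}"
  done
done
```

So instead of two merged tables `tbl1` and `tbl2`, you would get four separate tables, one per shard.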

## FAQ

1. Chinese characters in records ingested from SQLServer are garbled.
* Try to set `env.java.opts: -Dfile.encoding=UTF-8` in `flink-conf.yaml`
(the option was changed to `env.java.opts.all` since Flink 1.17).
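The corresponding `flink-conf.yaml` fragment would look like this; which key applies depends on your Flink version:

```yaml
# Flink < 1.17
env.java.opts: -Dfile.encoding=UTF-8
# Flink >= 1.17
env.java.opts.all: -Dfile.encoding=UTF-8
```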