# Resources

- **etl-engine download**

- `Current version, last built 20240628`

[Download](https://github.com/hw2499/etl-engine/releases)
# Features

- Cross-platform (Windows, Linux). Only one executable and one configuration file are needed to run it, with no other dependencies; a lightweight engine.
- Supported input/output data sources: influxdb v1, clickhouse, prometheus, elasticsearch, hadoop (hive, hbase), postgresql (Greenplum-compatible), mysql (Doris- and OceanBase-compatible), oracle, sqlite, rocketmq, kafka, redis, excel.
- Any input node can be combined with any output node, following the pipeline model (a minimal sketch follows this list).
- Supports data fusion queries across multiple types of databases.
- Supports fusion computation and queries between streaming message data and multiple types of databases while the data is in transit.
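To make the pipeline model concrete, below is a minimal sketch of a graph that pairs a MySQL input node with a ClickHouse output node. The element and attribute names (`Graph`, `Node`, `Line`, `type`, `dbConnection`) and the node types are assumptions chosen for illustration, not definitions taken from this section; the connection ids refer to the Connection examples shown later.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative sketch only: node types and attribute names are assumed, not taken from this README section -->
<Graph>
  <!-- Input node: reads rows from MySQL via the (assumed) connection id CONNECT_03 -->
  <Node id="DB_INPUT_01" type="DB_INPUT_TABLE" dbConnection="CONNECT_03">
    <Script name="sqlScript">select id, name, ts from t_demo</Script>
  </Node>

  <!-- Output node: writes the same rows to ClickHouse via CONNECT_02 -->
  <Node id="DB_OUTPUT_01" type="DB_OUTPUT_TABLE" dbConnection="CONNECT_02"/>

  <!-- Any input node can be wired to any output node -->
  <Line from="DB_INPUT_01" to="DB_OUTPUT_01"/>
</Graph>
```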
| Attribute | Description | Applies to |
| --- | --- | --- |
| id | Unique identifier | |
| type | Data source type | INFLUXDB_V1, MYSQL (Doris- and OceanBase-compatible), CLICKHOUSE, SQLITE, POSTGRES (Greenplum-compatible), ORACLE, ELASTIC, HIVE, HBASE |
| dbURL | Connection address | ck, mysql, influx, postgre, oracle, elastic, hive |
| database | Database name | ck, mysql, influx, sqlite, postgre, oracle, elastic, hive |
| username | User name | ck, mysql, influx, postgre, oracle, elastic, hive |
| org | Organization name | influx 2x |
| rp | Retention policy name | influx 1x |
- Common data source connections

```shell
<Connection id="CONNECT_01" dbURL="http://127.0.0.1:58086" database="db1" username="user1" password="******" token="" org="hw" type="INFLUXDB_V1"/>
<Connection id="CONNECT_02" dbURL="127.0.0.1:19000" database="db1" username="user1" password="******" batchSize="1000" type="CLICKHOUSE"/>
<Connection id="CONNECT_03" dbURL="127.0.0.1:3306" database="db1" username="user1" password="******" batchSize="1000" type="MYSQL"/>
<Connection id="CONNECT_04" database="d:/sqlite_db1.db" batchSize="10000" type="SQLITE"/>
<Connection id="CONNECT_05" dbURL="127.0.0.1:10000" database="db1" username="root" password="b" batchSize="1000" type="HIVE"/>
<Connection id="CONNECT_06" dbURL="127.0.0.1:5432" database="db_1" username="u1" password="******" batchSize="1000" type="POSTGRES"/>
<Connection id="CONNECT_07" dbURL="http://127.0.0.1:9200" database="db1" username="elastic" password="******" batchSize="1000" type="ELASTIC"/>
<Connection id="CONNECT_08" dbURL="127.0.0.1:1521" database="orcl" username="c##u1" password="******" batchSize="1000" type="ORACLE"/>
<Connection id="CONNECT_09" dbURL="127.0.0.1:9090" database="" username="" password="" batchSize="1000" type="HBASE"/>
```
## Graph
```go
import (
    "errors"
    "fmt"
    "strconv"

    "github.com/tidwall/gjson"
    "github.com/tidwall/sjson"
)

func RunScript(dataValue string) (result string, topErr error) {
    newRows := ""
```
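The fragment above is only the top of the script; the gjson and sjson imports are what the script body uses to read and rewrite the JSON rows it receives. Below is a self-contained sketch of that idea, not the README's implementation: the `rows`/`amount` field names and the JSON shape of `dataValue` are assumptions made for illustration, and the `package main`/`main` wrapper exists only so the sketch runs on its own.

```go
package main

import (
    "errors"
    "fmt"
    "strconv"

    "github.com/tidwall/gjson"
    "github.com/tidwall/sjson"
)

// RunScript is an illustrative sketch. It assumes dataValue is a JSON document
// whose "rows" field is an array of objects carrying a numeric "amount"
// encoded as a string.
func RunScript(dataValue string) (result string, topErr error) {
    if !gjson.Valid(dataValue) {
        return "", errors.New("dataValue is not valid JSON")
    }
    newRows := dataValue
    for i, row := range gjson.Get(dataValue, "rows").Array() {
        // Convert the string field with the standard library ...
        amount, err := strconv.ParseFloat(row.Get("amount").String(), 64)
        if err != nil {
            return "", fmt.Errorf("row %d: bad amount: %w", i, err)
        }
        // ... and write a derived field back into the document with sjson.
        newRows, err = sjson.Set(newRows, fmt.Sprintf("rows.%d.amount_x2", i), amount*2)
        if err != nil {
            return "", err
        }
    }
    return newRows, nil
}

func main() {
    out, err := RunScript(`{"rows":[{"amount":"1.5"},{"amount":"3"}]}`)
    fmt.Println(out, err)
}
```

Running it prints the original document with an added `amount_x2` field on each row, which is the kind of per-row JSON rewriting gjson and sjson are typically paired for.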