Replies: 2 comments 2 replies
-
Hi, the PostgreSQL protocol is not well optimized currently, so I think we can start by supporting the CSV format in COPY FROM.
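For context, a minimal sketch of what CSV import through COPY FROM could look like; the WITH (FORMAT = 'csv') option and the file path are assumptions for illustration, not confirmed syntax:

```sql
-- Hypothetical CSV import via COPY FROM; option name and path are placeholders.
COPY demo FROM '/path/to/data.csv' WITH (FORMAT = 'csv');
```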
2 replies
-
I believe the latest build now fixes the problem; you can try importing large parquet files like this:

    mysql> copy demo from '/Users/lei/parquet/tz.parquet';
    Query OK, 50000000 rows affected (5 min 3.49 sec)

Feel free to reopen this discussion if you encounter any problem.
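As a quick sanity check after such an import (assuming the same demo table), the row count can be compared against the reported "rows affected":

```sql
-- Confirm the import landed the expected number of rows.
SELECT count(*) FROM demo;
-- Should return 50000000 to match the output above.
```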
0 replies
-
I have a pretty big .csv.gz file that I would like to COPY into a table. To my surprise, the documentation does not mention any format other than parquet for loading. What is the best way to get this big file into GreptimeDB? I have tried to convert the file to parquet using duckdb (fails with a memory error) and vaex (does not support time-zone-aware timestamps). However, I have the feeling that converting such a huge file might not be the best approach anyway.
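For the DuckDB route specifically, here is a minimal sketch that caps memory and lets DuckDB spill to disk, which may get around the out-of-memory failure; the file names, memory limit, and spill directory are placeholders:

```sql
-- Run in the DuckDB CLI; paths and limits are placeholders.
SET memory_limit = '4GB';            -- cap DuckDB's working memory
SET temp_directory = '/tmp/duckdb';  -- let large intermediates spill to disk
COPY (SELECT * FROM read_csv_auto('big.csv.gz'))
  TO 'big.parquet' (FORMAT PARQUET); -- gzip is detected from the file extension
```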
Should I just stream plain INSERT statements through the psql console? It looks like transactions are not implemented, so I guess doing record-wise auto-commits won't be fast either.
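If plain INSERTs end up being the only option, batching many rows into a single statement usually amortizes the per-statement overhead; the schema below is made up purely for illustration:

```sql
-- Hypothetical columns (ts, host, val): send many rows per INSERT
-- rather than one statement per row.
INSERT INTO demo (ts, host, val) VALUES
  ('2023-01-01 00:00:00', 'a', 1.0),
  ('2023-01-01 00:00:01', 'a', 2.0),
  ('2023-01-01 00:00:02', 'b', 3.0);
```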
PS: I have tried to trick the database by using the TSDB protocol, but since my "tags" would be numbers, that doesn't work either.