v0.4.1
Changelog for v0.4.1
Features:
[Feature]: Self-Optimizing scans files from metadata instead of the file info cache #1093
[Subtask][Flink]: Support Pulsar read and write without consistency in Flink 1.12 #1007
[Feature]: A new design of resolving data conflicts without relying on AMS to generate TransactionId #994
[Feature]: Introduce a mechanism for concurrency control between Writing and Optimizing #985
[Feature][Spark]: Support drop partition #918
[Feature][Spark]: Support truncate table #540
[Feature][Spark]: Support Merge Into for Spark3.x #395
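The new Spark SQL capabilities above can be exercised roughly as follows (a sketch only; the table, column, and partition names are hypothetical, not taken from the release):

```sql
-- Hypothetical tables for illustration.

-- Truncate table (#540):
TRUNCATE TABLE db.sample;

-- Drop partition (#918):
ALTER TABLE db.sample DROP PARTITION (dt = '2023-01-01');

-- MERGE INTO on Spark 3.x (#395):
MERGE INTO db.target t
USING db.updates s
ON t.id = s.id
WHEN MATCHED THEN UPDATE SET t.value = s.value
WHEN NOT MATCHED THEN INSERT *;
```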
Improvements:
[Improvement]: Self-Optimizing for Mixed Format tables should limit the file count for each Optimizing #1213
[Improvement][AMS]: Automatic retry optimizing if commit failed #1103
[Improvement]: Replace the keyword base to express the meaning of the lowest part of something #1082
[Improvement][AMS]: Support set login user and login password in config yaml file #1081
[Improvement][AMS]: Optimize settings page and terminal page #1005
[Improvement][Flink]: Refactor Log-store Source to the Flink FLIP-27 API #969
[Improvement][Flink]: Support all logstore configuration items to be configured in table properties #933
[Improvement]: More elegant display of error messages in terminal #913
Bug Fixes:
[ARCTIC-1025][Flink]: Fix data duplication for primary key tables with upsert enabled #1180
[Bug][Spark]: The orphan files cleanup of insert overwrite doesn't take effect for un-partitioned tables #1174
[Bug]: When reading partial fields from Logstore, the number of fields does not match #1171
[Bug]: Address the deserializing exception of the array type in the logstore #1111
[Bug][Core]: The PartitionPropertiesUpdate can't remove partition properties key #1107
[Bug]: Expire snapshot and orphan file clean should scan related files from metadata #1105
[Bug]: Browser tab does not display Arctic's icon #1091
[Bug]: When the Metastore type is Hadoop, Terminal executes the spark sql without loading the configuration file #1090
[Bug][Spark]: The read and write authentication user are different in Spark when using mixed Iceberg format #1069
[Bug][Spark]: Insert overwrite select from view will throw exception #1066
[Bug]: When a Flink job reads the Arctic table for a while, the job fails #1063
[Bug]: Spark reads the timestamp field eight hours ahead of what the Flink engine actually wrote #1062
[Bug]: KeyedTableScanTask confuses files from BaseStore and ChangeStore #1045
[Bug][Spark]: Create Table As Select should write to the base store #1026
[Bug]: Querying timestamp type columns with Spark failed #978
[Bug]: Flink sets watermark on Arctic table fields, but ArrayIndexOutOfBoundsException occurs when reading data #957
[Bug][Spark]: The data are written repeatedly after the Spark Executor failover #917
[Bug]: Spark refresh table error #620
[Bug]: Spark batch write failed with "Already closed files for partition" #613
[Bug][Flink]: Reverse message order when retracting messages from the message queue #482