From 155470c1cf894cc9e9d5e457640306c082fe1809 Mon Sep 17 00:00:00 2001
From: kbuilder
Date: Tue, 31 Oct 2023 11:02:41 -0700
Subject: [PATCH] Release 0.34.0.

---
 CHANGES.md |  2 +-
 README.md  | 70 +++++++++++++++++++++++++++++++++++-------------------
 2 files changed, 46 insertions(+), 26 deletions(-)

diff --git a/CHANGES.md b/CHANGES.md
index 28d6cac59..68852bedc 100644
--- a/CHANGES.md
+++ b/CHANGES.md
@@ -1,6 +1,6 @@
 # Release Notes
 
-## Next
+## 0.34.0 - 2023-10-31
 
 * PR #1057: Enable async writes for greater throughput
 * PR #1094: CVE-2023-5072: Upgrading the org.json:json dependency
diff --git a/README.md b/README.md
index 3c18eb332..4e802b5ff 100644
--- a/README.md
+++ b/README.md
@@ -57,13 +57,13 @@ The latest version of the connector is publicly available in the following links
 
 | version    | Link                                                                                                                                                                                          |
 |------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| Spark 3.4  | `gs://spark-lib/bigquery/spark-3.4-bigquery-0.33.0.jar`([HTTP link](https://storage.googleapis.com/spark-lib/bigquery/spark-3.4-bigquery-0.33.0.jar)) |
-| Spark 3.3  | `gs://spark-lib/bigquery/spark-3.3-bigquery-0.33.0.jar`([HTTP link](https://storage.googleapis.com/spark-lib/bigquery/spark-3.3-bigquery-0.33.0.jar)) |
-| Spark 3.2  | `gs://spark-lib/bigquery/spark-3.2-bigquery-0.33.0.jar`([HTTP link](https://storage.googleapis.com/spark-lib/bigquery/spark-3.2-bigquery-0.33.0.jar)) |
-| Spark 3.1  | `gs://spark-lib/bigquery/spark-3.1-bigquery-0.33.0.jar`([HTTP link](https://storage.googleapis.com/spark-lib/bigquery/spark-3.1-bigquery-0.33.0.jar)) |
-| Spark 2.4  | `gs://spark-lib/bigquery/spark-2.4-bigquery-0.33.0.jar`([HTTP link](https://storage.googleapis.com/spark-lib/bigquery/spark-2.4-bigquery-0.33.0.jar)) |
-| Scala 2.13 | `gs://spark-lib/bigquery/spark-bigquery-with-dependencies_2.13-0.33.0.jar` ([HTTP link](https://storage.googleapis.com/spark-lib/bigquery/spark-bigquery-with-dependencies_2.13-0.33.0.jar)) |
-| Scala 2.12 | `gs://spark-lib/bigquery/spark-bigquery-with-dependencies_2.12-0.33.0.jar` ([HTTP link](https://storage.googleapis.com/spark-lib/bigquery/spark-bigquery-with-dependencies_2.12-0.33.0.jar)) |
+| Spark 3.4  | `gs://spark-lib/bigquery/spark-3.4-bigquery-0.34.0.jar`([HTTP link](https://storage.googleapis.com/spark-lib/bigquery/spark-3.4-bigquery-0.34.0.jar)) |
+| Spark 3.3  | `gs://spark-lib/bigquery/spark-3.3-bigquery-0.34.0.jar`([HTTP link](https://storage.googleapis.com/spark-lib/bigquery/spark-3.3-bigquery-0.34.0.jar)) |
+| Spark 3.2  | `gs://spark-lib/bigquery/spark-3.2-bigquery-0.34.0.jar`([HTTP link](https://storage.googleapis.com/spark-lib/bigquery/spark-3.2-bigquery-0.34.0.jar)) |
+| Spark 3.1  | `gs://spark-lib/bigquery/spark-3.1-bigquery-0.34.0.jar`([HTTP link](https://storage.googleapis.com/spark-lib/bigquery/spark-3.1-bigquery-0.34.0.jar)) |
+| Spark 2.4  | `gs://spark-lib/bigquery/spark-2.4-bigquery-0.34.0.jar`([HTTP link](https://storage.googleapis.com/spark-lib/bigquery/spark-2.4-bigquery-0.34.0.jar)) |
+| Scala 2.13 | `gs://spark-lib/bigquery/spark-bigquery-with-dependencies_2.13-0.34.0.jar` ([HTTP link](https://storage.googleapis.com/spark-lib/bigquery/spark-bigquery-with-dependencies_2.13-0.34.0.jar)) |
+| Scala 2.12 | `gs://spark-lib/bigquery/spark-bigquery-with-dependencies_2.12-0.34.0.jar` ([HTTP link](https://storage.googleapis.com/spark-lib/bigquery/spark-bigquery-with-dependencies_2.12-0.34.0.jar)) |
 | Scala 2.11 | `gs://spark-lib/bigquery/spark-bigquery-with-dependencies_2.11-0.29.0.jar` ([HTTP link](https://storage.googleapis.com/spark-lib/bigquery/spark-bigquery-with-dependencies_2.11-0.29.0.jar)) |
 
 The first five versions are Java based connectors targeting Spark 2.4/3.1/3.2/3.3/3.4 of all Scala versions built on the new
@@ -104,13 +104,13 @@ repository. It can be used with the `--packages` option or the
 
 | version    | Connector Artifact                                                     |
 |------------|------------------------------------------------------------------------|
-| Spark 3.4  | `com.google.cloud.spark:spark-3.4-bigquery:0.33.0`                     |
-| Spark 3.3  | `com.google.cloud.spark:spark-3.3-bigquery:0.33.0`                     |
-| Spark 3.2  | `com.google.cloud.spark:spark-3.2-bigquery:0.33.0`                     |
-| Spark 3.1  | `com.google.cloud.spark:spark-3.1-bigquery:0.33.0`                     |
-| Spark 2.4  | `com.google.cloud.spark:spark-2.4-bigquery:0.33.0`                     |
-| Scala 2.13 | `com.google.cloud.spark:spark-bigquery-with-dependencies_2.13:0.33.0`  |
-| Scala 2.12 | `com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:0.33.0`  |
+| Spark 3.4  | `com.google.cloud.spark:spark-3.4-bigquery:0.34.0`                     |
+| Spark 3.3  | `com.google.cloud.spark:spark-3.3-bigquery:0.34.0`                     |
+| Spark 3.2  | `com.google.cloud.spark:spark-3.2-bigquery:0.34.0`                     |
+| Spark 3.1  | `com.google.cloud.spark:spark-3.1-bigquery:0.34.0`                     |
+| Spark 2.4  | `com.google.cloud.spark:spark-2.4-bigquery:0.34.0`                     |
+| Scala 2.13 | `com.google.cloud.spark:spark-bigquery-with-dependencies_2.13:0.34.0`  |
+| Scala 2.12 | `com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:0.34.0`  |
 | Scala 2.11 | `com.google.cloud.spark:spark-bigquery-with-dependencies_2.11:0.29.0`  |
 
 ### Specifying the Spark BigQuery connector version in a Dataproc cluster
@@ -120,8 +120,8 @@ Using the standard `--jars` or `--packages` (or alternatively, the `spark.jars`/
 
 To use another version than the built-in one, please do one of the following:
 
-* For Dataproc clusters, using image 2.1 and above, add the following flag on cluster creation to upgrade the version `--metadata SPARK_BQ_CONNECTOR_VERSION=0.33.0`, or `--metadata SPARK_BQ_CONNECTOR_URL=gs://spark-lib/bigquery/spark-3.3-bigquery-0.33.0.jar` to create the cluster with a different jar. The URL can point to any valid connector JAR for the cluster's Spark version.
-* For Dataproc serverless batches, add the following property on batch creation to upgrade the version: `--properties dataproc.sparkBqConnector.version=0.33.0`, or `--properties dataproc.sparkBqConnector.uri=gs://spark-lib/bigquery/spark-3.3-bigquery-0.33.0.jar` to create the batch with a different jar. The URL can point to any valid connector JAR for the runtime's Spark version.
+* For Dataproc clusters using image 2.1 and above, add the following flag on cluster creation to upgrade the version: `--metadata SPARK_BQ_CONNECTOR_VERSION=0.34.0`, or `--metadata SPARK_BQ_CONNECTOR_URL=gs://spark-lib/bigquery/spark-3.3-bigquery-0.34.0.jar` to create the cluster with a different jar (see the sketch below). The URL can point to any valid connector JAR for the cluster's Spark version.
+* For Dataproc serverless batches, add the following property on batch creation to upgrade the version: `--properties dataproc.sparkBqConnector.version=0.34.0`, or `--properties dataproc.sparkBqConnector.uri=gs://spark-lib/bigquery/spark-3.3-bigquery-0.34.0.jar` to create the batch with a different jar. The URL can point to any valid connector JAR for the runtime's Spark version.
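For illustration only, a minimal sketch of the two override mechanisms described above; the cluster name, region, image version and job file are placeholders rather than values taken from this patch:

```
# Dataproc cluster: pin the connector version at cluster creation time.
gcloud dataproc clusters create my-cluster \
    --region=us-central1 \
    --image-version=2.1-debian11 \
    --metadata SPARK_BQ_CONNECTOR_VERSION=0.34.0

# Dataproc serverless batch: pin the connector version per batch submission.
gcloud dataproc batches submit pyspark my_job.py \
    --region=us-central1 \
    --properties dataproc.sparkBqConnector.version=0.34.0
```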
 ## Hello World Example
@@ -131,7 +131,7 @@ You can run a simple PySpark wordcount against the API without compilation by running
 
 ```
 gcloud dataproc jobs submit pyspark --cluster "$MY_CLUSTER" \
-  --jars gs://spark-lib/bigquery/spark-bigquery-with-dependencies_2.12-0.33.0.jar \
+  --jars gs://spark-lib/bigquery/spark-bigquery-with-dependencies_2.12-0.34.0.jar \
   examples/python/shakespeare.py
 ```
 
@@ -281,8 +281,8 @@ df.write \
 ```
 
 Writing to existing partitioned tables (date partitioned, ingestion time partitioned and range
-partitioned) in APPEND save mode is fully supported by the connector and the BigQuery Storage Write
-API. Partition overwrite and the use of `datePartition`, `partitionField`, `partitionType`, `partitionRangeStart`, `partitionRangeEnd`, `partitionRangeInterval` as
+partitioned) in APPEND save mode and OVERWRITE mode (only date and range partitioned) is fully supported by the connector and the BigQuery Storage Write
+API. The use of `datePartition`, `partitionField`, `partitionType`, `partitionRangeStart`, `partitionRangeEnd`, `partitionRangeInterval` as
 described below is not supported at this moment by the direct write method.
 
 **Important:** Please refer to the [data ingestion pricing](https://cloud.google.com/bigquery/pricing#data_ingestion_pricing)
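A hedged sketch of the append path described above. The dataset and table names are placeholders, and setting `writeMethod` to `direct` selects the Storage Write API mentioned in the paragraph:

```python
# Minimal sketch: append to an existing partitioned BigQuery table through
# the Storage Write API. "mydataset.partitioned_table" is a placeholder name.
df.write \
  .format("bigquery") \
  .option("writeMethod", "direct") \
  .mode("append") \
  .save("mydataset.partitioned_table")
```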
@@ -860,6 +860,26 @@ word-break:break-word
   </td>
   <td>Read</td>
  </tr>
+ <tr>
+  <td><code>spark.sql.sources.partitionOverwriteMode</code></td>
+  <td>
+    Config to specify the overwrite mode on write when the table is range/time partitioned.
+    Two modes are currently supported: static and dynamic. In static mode,
+    the entire table is overwritten. In dynamic mode, only the partitions for which the write contains data are overwritten.
+    The default value is static.
+    <br/>(Optional)
+  </td>
+  <td>Write</td>
+ </tr>
+ <tr>
+  <td><code>enableReadSessionCaching</code></td>
+  <td>
+    Boolean config to enable or disable read session caching. Caching BigQuery read sessions allows for faster Spark query planning.
+    Default value is true.
+    <br/>(Optional)
+  </td>
+  <td>Read</td>
+ </tr>
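A hedged usage sketch for the two properties added above. The table names are placeholders, and `spark.sql.sources.partitionOverwriteMode` is the standard Spark SQL setting that the connector reads:

```python
# Dynamic partition overwrite: only the partitions present in the DataFrame
# are replaced; with the default "static" mode the whole table is overwritten.
spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")
df.write \
  .format("bigquery") \
  .mode("overwrite") \
  .save("mydataset.partitioned_table")

# Read session caching is on by default; it can be switched off per read.
df = spark.read.format("bigquery") \
  .option("enableReadSessionCaching", "false") \
  .load("mydataset.some_table")
```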
@@ -1128,7 +1148,7 @@ using the following code:
 ```python
 from pyspark.sql import SparkSession
 spark = SparkSession.builder \
-  .config("spark.jars.packages", "com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:0.33.0") \
+  .config("spark.jars.packages", "com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:0.34.0") \
   .getOrCreate()
 df = spark.read.format("bigquery") \
   .load("dataset.table")
@@ -1137,7 +1157,7 @@ df = spark.read.format("bigquery") \
 **Scala:**
 ```scala
 val spark = SparkSession.builder
-.config("spark.jars.packages", "com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:0.33.0")
+.config("spark.jars.packages", "com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:0.34.0")
 .getOrCreate()
 val df = spark.read.format("bigquery")
 .load("dataset.table")
@@ -1145,7 +1165,7 @@ val df = spark.read.format("bigquery")
 
 In case the Spark cluster is using Scala 2.12 (it's optional for Spark 2.4.x, mandatory in
 3.0.x), then the relevant package is
-com.google.cloud.spark:spark-bigquery-with-dependencies_**2.12**:0.33.0. In
+com.google.cloud.spark:spark-bigquery-with-dependencies_**2.12**:0.34.0. In
 order to know which Scala version is used, please run the following code:
 
 **Python:**
@@ -1169,14 +1189,14 @@ To include the connector in your project:
 <dependency>
   <groupId>com.google.cloud.spark</groupId>
   <artifactId>spark-bigquery-with-dependencies_${scala.version}</artifactId>
-  <version>0.33.0</version>
+  <version>0.34.0</version>
 </dependency>
 ```
 
 ### SBT
 
 ```sbt
-libraryDependencies += "com.google.cloud.spark" %% "spark-bigquery-with-dependencies" % "0.33.0"
+libraryDependencies += "com.google.cloud.spark" %% "spark-bigquery-with-dependencies" % "0.34.0"
 ```
 
 ### Connector metrics and how to view them
@@ -1221,7 +1241,7 @@ word-break:break-word
 
-**Note:** To use the metrics in the Spark UI page, you need to make sure the `spark-bigquery-metrics-0.33.0.jar` is the class path before starting the history-server and the connector version is `spark-3.2` or above.
+**Note:** To use the metrics in the Spark UI page, you need to make sure the `spark-bigquery-metrics-0.34.0.jar` is in the class path before starting the history-server and that the connector version is `spark-3.2` or above.
 
 ## FAQ