Preparing 0.13.1-beta release
davidrabinowitz committed Feb 14, 2020
1 parent dc14a63 commit d4dabd5
Showing 2 changed files with 15 additions and 7 deletions.
8 changes: 8 additions & 0 deletions CHANGES.md
@@ -1,6 +1,14 @@
# Release Notes

## 0.13.1-beta - 2020-02-14
* The BigQuery Storage API was reverted to v1beta1. The v1beta2 API has not been
fully integrated with custom IAM roles yet, which can cause issues for customers
using such roles. The v1beta1 API doesn't have this problem. Once the integration
is complete, the API will be upgraded again.

## 0.13.0-beta - 2020-02-12
**Please don't use this version, use 0.13.1-beta instead**

* Moved to use BigQuery Storage API v1beta2
* Changed the `parallelism` parameter to `maxParallelism` in order to reflect the
change in the underlying API (the old parameter has been deprecated)
14 changes: 7 additions & 7 deletions README.md
@@ -76,8 +76,8 @@ repository. It can be used with the `--packages` option or the

| Scala version | Connector Artifact |
| --- | --- |
| Scala 2.11 | `com.google.cloud.spark:spark-bigquery-with-dependencies_2.11:0.13.0-beta` |
| Scala 2.12 | `com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:0.13.0-beta` |
| Scala 2.11 | `com.google.cloud.spark:spark-bigquery-with-dependencies_2.11:0.13.1-beta` |
| Scala 2.12 | `com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:0.13.1-beta` |
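
As a sketch of the `--packages` usage mentioned above, the coordinate can be assembled from the cluster's Scala binary version and passed to `spark-submit` (the `SCALA_BIN` variable and `my_job.py` script name here are hypothetical placeholders, not part of the connector):

```shell
# Scala binary version of the cluster; must match the artifact suffix.
SCALA_BIN=2.11
PKG="com.google.cloud.spark:spark-bigquery-with-dependencies_${SCALA_BIN}:0.13.1-beta"
echo "$PKG"
# Illustrative launch command (my_job.py is a placeholder):
# spark-submit --packages "$PKG" my_job.py
```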

## Hello World Example

@@ -497,7 +497,7 @@ using the following code:
```python
from pyspark.sql import SparkSession
spark = SparkSession.builder\
.config("spark.jars.packages", "com.google.cloud.spark:spark-bigquery-with-dependencies_2.11:0.13.0-beta")\
.config("spark.jars.packages", "com.google.cloud.spark:spark-bigquery-with-dependencies_2.11:0.13.1-beta")\
.getOrCreate()
df = spark.read.format("bigquery")\
.option("table","dataset.table")\
@@ -507,7 +507,7 @@ df = spark.read.format("bigquery")\
**Scala:**
```scala
val spark = SparkSession.builder
.config("spark.jars.packages", "com.google.cloud.spark:spark-bigquery-with-dependencies_2.11:0.13.0-beta")
.config("spark.jars.packages", "com.google.cloud.spark:spark-bigquery-with-dependencies_2.11:0.13.1-beta")
.getOrCreate()
val df = spark.read.format("bigquery")
.option("table","dataset.table")
@@ -516,7 +516,7 @@ val df = spark.read.format("bigquery")

If the Spark cluster uses Scala 2.12 (it is optional for Spark 2.4.x,
mandatory in 3.0.x), then the relevant package is
com.google.cloud.spark:spark-bigquery-with-dependencies_**2.12**:0.13.0-beta. In
com.google.cloud.spark:spark-bigquery-with-dependencies_**2.12**:0.13.1-beta. In
order to know which Scala version is used, please run the following code:

**Python:**
@@ -540,14 +540,14 @@ To include the connector in your project:
<dependency>
<groupId>com.google.cloud.spark</groupId>
<artifactId>spark-bigquery-with-dependencies_${scala.version}</artifactId>
<version>0.13.0-beta</version>
<version>0.13.1-beta</version>
</dependency>
```

### SBT

```sbt
libraryDependencies += "com.google.cloud.spark" %% "spark-bigquery-with-dependencies" % "0.13.0-beta"
libraryDependencies += "com.google.cloud.spark" %% "spark-bigquery-with-dependencies" % "0.13.1-beta"
```
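
The artifact-suffix rule running through the sections above (the `_2.11`/`_2.12` suffix must match the cluster's Scala binary version) can be sketched as a small helper; the function name is hypothetical and for illustration only, not part of the connector:

```python
def connector_coordinate(scala_version, connector_version="0.13.1-beta"):
    """Build the Maven coordinate for the connector artifact.

    scala_version is a full version string such as "2.11.12"; only the
    binary version (major.minor) appears in the artifact name.
    """
    binary = ".".join(scala_version.split(".")[:2])
    if binary not in ("2.11", "2.12"):
        raise ValueError("No connector artifact published for Scala " + binary)
    return ("com.google.cloud.spark:"
            "spark-bigquery-with-dependencies_%s:%s" % (binary, connector_version))

print(connector_coordinate("2.12.10"))
# com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:0.13.1-beta
```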

## Building the Connector
