
Commit 3a0886a

Revert "Rebase to 302"
1 parent 2091240 commit 3a0886a


64 files changed (+9404 / -21249 lines)

CHANGELOG.md

Lines changed: 2 additions & 42 deletions
@@ -1,64 +1,24 @@
 # Release History
 
-# 3.0.2 (2024-01-25)
-
-- SQLAlchemy dialect now supports table and column comments (thanks @cbornet!)
-- Fix: SQLAlchemy dialect now correctly reflects TINYINT types (thanks @TimTheinAtTabs!)
-- Fix: `server_hostname` URIs that included `https://` would raise an exception
-- Other: pinned to `pandas<=2.1` and `urllib3>=1.26` to avoid runtime errors in dbt-databricks (#330)
-
-## 3.0.1 (2023-12-01)
-
-- Other: updated docstring comment about default parameterization approach (#287)
-- Other: added tests for reading complex types and revised docstrings and type hints (#293)
-- Fix: SQLAlchemy dialect raised DeprecationWarning due to `dbapi` classmethod (#294)
-- Fix: SQLAlchemy dialect could not reflect TIMESTAMP_NTZ columns (#296)
-
-## 3.0.0 (2023-11-17)
-
-- Remove support for Python 3.7
-- Add support for native parameterized SQL queries. Requires DBR 14.2 and above. See docs/parameters.md for more info.
-- Completely rewritten SQLAlchemy dialect
-  - Adds support for SQLAlchemy >= 2.0 and drops support for SQLAlchemy 1.x
-  - Full e2e test coverage of all supported features
-  - Detailed usage notes in `README.sqlalchemy.md`
-  - Adds support for:
-    - New types: `TIME`, `TIMESTAMP`, `TIMESTAMP_NTZ`, `TINYINT`
-    - `Numeric` type scale and precision, like `Numeric(10,2)`
-    - Reading and writing `PrimaryKeyConstraint` and `ForeignKeyConstraint`
-    - Reading and writing composite keys
-    - Reading and writing from views
-    - Writing `Identity` to tables (i.e. autoincrementing primary keys)
-    - `LIMIT` and `OFFSET` for paging through results
-    - Caching metadata calls
-- Enable cloud fetch by default. To disable, set `use_cloud_fetch=False` when building `databricks.sql.client`.
-- Add integration tests for Databricks UC Volumes ingestion queries
-- Retries:
-  - Add `_retry_max_redirects` config
-  - Set `_enable_v3_retries=True` and warn if users override it
-- Security: bump minimum pyarrow version to 14.0.1 (CVE-2023-47248)
+## 2.9.4 (Unreleased)
 
 ## 2.9.3 (2023-08-24)
 
 - Fix: Connections failed when urllib3~=1.0.0 is installed (#206)
 
 ## 2.9.2 (2023-08-17)
 
-__Note: this release was yanked from Pypi on 13 September 2023 due to compatibility issues with environments where `urllib3<=2.0.0` were installed. The log changes are incorporated into version 2.9.3 and greater.__
-
 - Other: Add `examples/v3_retries_query_execute.py` (#199)
 - Other: suppress log message when `_enable_v3_retries` is not `True` (#199)
 - Other: make this connector backwards compatible with `urllib3>=1.0.0` (#197)
 
 ## 2.9.1 (2023-08-11)
 
-__Note: this release was yanked from Pypi on 13 September 2023 due to compatibility issues with environments where `urllib3<=2.0.0` were installed.__
-
 - Other: Explicitly pin urllib3 to ^2.0.0 (#191)
 
 ## 2.9.0 (2023-08-10)
 
-- Replace retry handling with DatabricksRetryPolicy. This is disabled by default. To enable, set `_enable_v3_retries=True` when creating `databricks.sql.client` (#182)
+- Replace retry handling with DatabricksRetryPolicy. This is disabled by default. To enable, set `enable_v3_retries=True` when creating `databricks.sql.client` (#182)
 - Other: Fix typo in README quick start example (#186)
 - Other: Add autospec to Client mocks and tidy up `make_request` (#188)
 
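For context on the 2.9.0 retry entry above: the policy is opt-in via a keyword argument at connect time. A minimal sketch, assuming placeholder credentials and the underscore-prefixed spelling used elsewhere in this changelog:

```python
from databricks import sql

# Placeholder values for illustration only; substitute your workspace details.
connection = sql.connect(
    server_hostname="example-workspace.cloud.databricks.com",
    http_path="/sql/1.0/warehouses/example",
    access_token="example-token",
    _enable_v3_retries=True,  # opt in to DatabricksRetryPolicy; disabled by default
)
```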

CONTRIBUTING.md

Lines changed: 0 additions & 7 deletions
@@ -107,8 +107,6 @@ End-to-end tests require a Databricks account. Before you can run them, you must
 export host=""
 export http_path=""
 export access_token=""
-export catalog=""
-export schema=""
 ```
 
 Or you can write these into a file called `test.env` in the root of the repository:
@@ -143,11 +141,6 @@ The `PySQLLargeQueriesSuite` namespace contains long-running query tests and is
 The `PySQLStagingIngestionTestSuite` namespace requires a cluster running DBR version > 12.x which supports staging ingestion commands.
 
 The suites marked `[not documented]` require additional configuration which will be documented at a later time.
-
-#### SQLAlchemy dialect tests
-
-See README.tests.md for details.
-
 ### Code formatting
 
 This project uses [Black](https://pypi.org/project/black/).
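After this change the e2e suites require only the three remaining environment variables. A sketch of consuming them from Python, assuming the connection pattern from the README quick start:

```python
import os

from databricks import sql

# Only `host`, `http_path`, and `access_token` remain required;
# `catalog` and `schema` are dropped by this commit.
connection = sql.connect(
    server_hostname=os.environ["host"],
    http_path=os.environ["http_path"],
    access_token=os.environ["access_token"],
)
```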

README.md

Lines changed: 4 additions & 3 deletions
@@ -3,15 +3,15 @@
 [![PyPI](https://img.shields.io/pypi/v/databricks-sql-connector?style=flat-square)](https://pypi.org/project/databricks-sql-connector/)
 [![Downloads](https://pepy.tech/badge/databricks-sql-connector)](https://pepy.tech/project/databricks-sql-connector)
 
-The Databricks SQL Connector for Python allows you to develop Python applications that connect to Databricks clusters and SQL warehouses. It is a Thrift-based client with no dependencies on ODBC or JDBC. It conforms to the [Python DB API 2.0 specification](https://www.python.org/dev/peps/pep-0249/) and exposes a [SQLAlchemy](https://www.sqlalchemy.org/) dialect for use with tools like `pandas` and `alembic` which use SQLAlchemy to execute DDL. Use `pip install databricks-sql-connector[sqlalchemy]` to install with SQLAlchemy's dependencies. `pip install databricks-sql-connector[alembic]` will install alembic's dependencies.
+The Databricks SQL Connector for Python allows you to develop Python applications that connect to Databricks clusters and SQL warehouses. It is a Thrift-based client with no dependencies on ODBC or JDBC. It conforms to the [Python DB API 2.0 specification](https://www.python.org/dev/peps/pep-0249/) and exposes a [SQLAlchemy](https://www.sqlalchemy.org/) dialect for use with tools like `pandas` and `alembic` which use SQLAlchemy to execute DDL.
 
 This connector uses Arrow as the data-exchange format, and supports APIs to directly fetch Arrow tables. Arrow tables are wrapped in the `ArrowQueue` class to provide a natural API to get several rows at a time.
 
 You are welcome to file an issue here for general use cases. You can also contact Databricks Support [here](help.databricks.com).
 
 ## Requirements
 
-Python 3.8 or above is required.
+Python 3.7 or above is required.
 
 ## Documentation
 
@@ -47,7 +47,8 @@ connection = sql.connect(
   access_token=access_token)
 
 cursor = connection.cursor()
-cursor.execute('SELECT :param `p`, * FROM RANGE(10)', {"param": "foo"})
+
+cursor.execute('SELECT * FROM RANGE(10)')
 result = cursor.fetchall()
 for row in result:
   print(row)
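Assembled from the context and added lines above, the reverted quick start reads as follows; the hostname, HTTP path, and token values here are hypothetical placeholders:

```python
from databricks import sql

# Hypothetical placeholder credentials.
connection = sql.connect(
    server_hostname="example-workspace.cloud.databricks.com",
    http_path="/sql/1.0/warehouses/example",
    access_token="example-token",
)

cursor = connection.cursor()

cursor.execute('SELECT * FROM RANGE(10)')
result = cursor.fetchall()
for row in result:
    print(row)
```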

docs/parameters.md

Lines changed: 0 additions & 255 deletions
This file was deleted.
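Per the reverted 3.0.0 changelog entry, this file documented native parameterized queries (DBR 14.2 and above). The syntax being removed is the same one the README diff above drops:

```python
# Native parameter syntax removed by this revert (3.x only):
cursor.execute('SELECT :param `p`, * FROM RANGE(10)', {"param": "foo"})
```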
