Added input Spark options and schema for reading from storage, and an example of a uniqueness check for a composite key #773
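The PR title refers to input Spark options and a schema for reading from storage, plus an example of a uniqueness check over a composite key. The following is a minimal sketch in plain PySpark of what such a check can look like; the storage path, format, options, and column names are hypothetical and not taken from PR #773.

# Minimal PySpark sketch (hypothetical path, options, and column names;
# not the code introduced by PR #773).
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
from pyspark.sql.types import StructType, StructField, LongType

spark = SparkSession.builder.getOrCreate()

# Read input from storage with explicit options and a schema instead of inference.
schema = StructType([
    StructField("customer_id", LongType(), nullable=False),
    StructField("order_id", LongType(), nullable=False),
])
df = (
    spark.read.format("csv")
    .option("header", "true")
    .schema(schema)
    .load("/tmp/example/orders")  # hypothetical input location
)

# Uniqueness check for a composite key: any (customer_id, order_id) pair
# that occurs more than once violates the constraint.
key_cols = ["customer_id", "order_id"]
violations = (
    df.groupBy(*key_cols)
    .agg(F.count("*").alias("cnt"))
    .filter(F.col("cnt") > 1)
)
assert violations.count() == 0, "composite key is not unique"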

Workflow file for this run

name: downstreams
on:
  pull_request:
    types: [opened, synchronize]
  merge_group:
    types: [checks_requested]
  push:
    # Always run on push to main. The build cache can only be reused
    # if it was saved by a run from the repository's default branch.
    # The run result will be identical to that from the merge queue
    # because the commit is identical, yet we need to perform it to
    # seed the build cache.
    branches:
      - main

permissions:
  id-token: write
  contents: read
  pull-requests: write

jobs:
  compatibility:
    strategy:
      fail-fast: false
      matrix:
        downstream:
          #- name: ucx
          - name: remorph
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4.2.2
        with:
          fetch-depth: 0
      - name: Install Python
        uses: actions/setup-python@v5
        with:
          cache: 'pip'
          cache-dependency-path: '**/pyproject.toml'
          python-version: '3.10'
      - name: Install toolchain
        run: |
          pip install hatch==1.9.4
      - name: Check downstream compatibility
        uses: databrickslabs/sandbox/downstreams@downstreams/v0.0.1
        with:
          repo: ${{ matrix.downstream.name }}
          org: databrickslabs
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}