[MINOR] Properly Cache Artifacts in CI #46

Closed · wants to merge 13 commits
5 changes: 5 additions & 0 deletions .asf.yaml
@@ -22,6 +22,11 @@ github:
     merge: false
     squash: true
     rebase: true
+  features:
+    # Enable the "Issues" tab
+    issues: true
+    # Enable the "Projects" tab
+    projects: true
 
 notifications:
   pullrequests: reviews@spark.apache.org
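For context, `.asf.yaml` is the file through which Apache projects manage their GitHub repository settings declaratively. After this change the `github:` section would read roughly as below; this is a sketch assembled from the hunk above, and the exact nesting of the merge-button keys is an assumption:

    github:
      merge: false
      squash: true
      rebase: true
      features:
        # Enable the "Issues" tab
        issues: true
        # Enable the "Projects" tab
        projects: true

    notifications:
      pullrequests: reviews@spark.apache.org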
23 changes: 14 additions & 9 deletions .github/workflows/build.yml
@@ -68,27 +68,30 @@ jobs:

     - name: Cache Spark Installation
       uses: actions/cache@v2
+      id: cache
       with:
-        key: spark-${{ env.SPARK_VERSION }}-bin-hadoop${{ env.HADOOP_VERSION }}
+        key: v2-spark-${{ env.SPARK_VERSION }}-bin-hadoop${{ env.HADOOP_VERSION }}
         path: |
-          $GITHUB_WORKSPACE/deps/spark-${{ env.SPARK_VERSION }}-bin-hadoop${{ env.HADOOP_VERSION }}
+          /home/runner/deps/spark-${{ env.SPARK_VERSION }}-bin-hadoop${{ env.HADOOP_VERSION }}
 
     - name: Setup Apache Spark
       if: steps.cache.outputs.cache-hit != 'true'
       run: |
         set -x
         echo "Apache Spark is not installed"
-        # Access the directory.
-        mkdir -p $GITHUB_WORKSPACE/deps/
-        cd $GITHUB_WORKSPACE/deps/
+        mkdir -p ~/deps/
         wget -q https://dlcdn.apache.org/spark/spark-${{ env.SPARK_VERSION }}/spark-${{ env.SPARK_VERSION }}-bin-hadoop${{ env.HADOOP_VERSION }}.tgz
-        tar -xzf spark-${{ env.SPARK_VERSION }}-bin-hadoop${{ env.HADOOP_VERSION }}.tgz
-        # Delete the old file
-        rm spark-${{ env.SPARK_VERSION }}-bin-hadoop${{ env.HADOOP_VERSION }}.tgz
+        tar -xzf spark-${{ env.SPARK_VERSION }}-bin-hadoop${{ env.HADOOP_VERSION }}.tgz -C ~/deps/
+        rm spark-${{ env.SPARK_VERSION }}-bin-hadoop${{ env.HADOOP_VERSION }}.tgz
+
+        ls -lah ~/deps/spark-${{ env.SPARK_VERSION }}-bin-hadoop${{ env.HADOOP_VERSION }}
+        du -hs ~/deps/spark-${{ env.SPARK_VERSION }}-bin-hadoop${{ env.HADOOP_VERSION }}
 
         # Setup the Environment Variables
         echo "Apache Spark is ready to use"
-        echo "SPARK_HOME=${GITHUB_WORKSPACE}/deps/spark-${{ env.SPARK_VERSION }}-bin-hadoop${{ env.HADOOP_VERSION }}" >> "$GITHUB_ENV"
-        cd $GITHUB_WORKSPACE
+        echo "SPARK_HOME=/home/runner/deps/spark-${{ env.SPARK_VERSION }}-bin-hadoop${{ env.HADOOP_VERSION }}" >> "$GITHUB_ENV"
     - name: Run Build & Test
       run: |
         go mod download -x
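A note on the `path:` change in the cache step above: the `path` input of `actions/cache` is read by the action itself, not by a shell, so a literal `$GITHUB_WORKSPACE` is never expanded at runtime. That is why this PR switches to the absolute `/home/runner/deps/...` path matching the `~/deps/` directory created in the install script. An alternative that avoids hard-coding the runner's home directory is an expression, since `${{ ... }}` is substituted before the action runs; a minimal sketch under the same workflow-level env variables (not what this PR does, and the install step would then have to extract into that same directory):

    - name: Cache Spark Installation
      uses: actions/cache@v2
      id: cache
      with:
        key: v2-spark-${{ env.SPARK_VERSION }}-bin-hadoop${{ env.HADOOP_VERSION }}
        path: |
          ${{ github.workspace }}/deps/spark-${{ env.SPARK_VERSION }}-bin-hadoop${{ env.HADOOP_VERSION }}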
@@ -97,10 +100,12 @@ jobs:
         make test
     - name: Run Integration Test
       run: |
+        export SPARK_HOME=/home/runner/deps/spark-${{ env.SPARK_VERSION }}-bin-hadoop${{ env.HADOOP_VERSION }}
         # make integration
         make gen && make && make integration
     - name: Run Code Coverage
       run: |
+        export SPARK_HOME=/home/runner/deps/spark-${{ env.SPARK_VERSION }}-bin-hadoop${{ env.HADOOP_VERSION }}
         make coverage
     - uses: PaloAltoNetworks/cov@3.0.0
       with:
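Why the repeated `export SPARK_HOME=...` lines in this hunk: the `Setup Apache Spark` step only runs on a cache miss (`if: steps.cache.outputs.cache-hit != 'true'`), so on a cache hit its `echo ... >> "$GITHUB_ENV"` line never executes and later steps would see no `SPARK_HOME`. Exporting the variable inside each `run:` block closes that gap. An alternative sketch, assuming the same install path (not part of this PR), is one unconditional step writing to `$GITHUB_ENV`, which GitHub Actions propagates to every subsequent step regardless of cache state:

    - name: Export SPARK_HOME
      run: |
        echo "SPARK_HOME=/home/runner/deps/spark-${{ env.SPARK_VERSION }}-bin-hadoop${{ env.HADOOP_VERSION }}" >> "$GITHUB_ENV"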