We are trying to build Uniffle for Spark 3.3.0; however, when starting the coordinator, we get the following error.

I tried building the distribution with both

and

It isn't mentioned anywhere in the docs whether Hadoop needs to be a provided dependency. Is that the case? Do we need to provide the Hadoop jars when starting the coordinator? Sorry, but I don't have a lot of experience with Hadoop.
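For reference, a minimal sketch of what a Spark 3 build of Uniffle usually looks like. The `spark3` profile name comes from the project's Maven profiles and `build_distribution.sh` from the repository root; both may differ in your version, and these are not necessarily the commands tried above:

```bash
# Sketch: build Uniffle for Spark 3 (profile name assumed; check pom.xml).
mvn clean package -DskipTests -Pspark3

# Or build the deployable distribution tarball with the bundled script
# (options vary by version):
./build_distribution.sh
```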
Replies: 1 comment

-

Yes, we need the Hadoop dependency: you should set the HADOOP_HOME environment variable. Maybe we should update the documentation to help more people deploy the shuffle server easily.
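A minimal sketch of that setup, assuming the layout of the unpacked Uniffle distribution; the paths here are hypothetical, and the `rss-env.sh` and `start-coordinator.sh` names should be double-checked against the scripts shipped with your version:

```bash
# rss-env.sh (shipped with the distribution; its location may be bin/ or
# conf/ depending on the version). Point both variables at local installs.
JAVA_HOME=/usr/lib/jvm/java-8-openjdk   # hypothetical path
HADOOP_HOME=/opt/hadoop                 # the coordinator picks up Hadoop jars from here
```

```bash
# Then start the coordinator from the distribution root:
bin/start-coordinator.sh
```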