
Provide AWS Glue as an option #267

Open
klescosia opened this issue Nov 22, 2023 · 8 comments
Labels
new-feature New feature

@klescosia

Provide AWS Glue as a processing layer

@vgkowski
Contributor

Thanks for providing feedback! Can you give us more details on what you would like to see in this construct? Think about your user experience and how this construct could help you as a data engineer, given your preferences.

@vgkowski vgkowski added the new-feature New feature label Dec 1, 2023
@dashmug

dashmug commented Mar 3, 2024

A few ideas:

  1. Glue is non-trivial to replicate locally, so engineers end up iterating on their scripts in the cloud, which makes the development cycle slow.
  2. Glue's CDK constructs are still L1, which is too low-level, and the development experience is not great.
  3. Glue's CFN deployment only deploys a single script per job. If you develop multiple scripts and share common utility functions (to stay DRY), you have to package them into a Python package, upload it to S3, and then reference it in your Glue job (see the sketch after this list). Again, all of this is not-so-friendly to developers.
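
To make point 3 concrete, here is a minimal sketch of that manual flow using the L1 CfnJob construct. The file paths, job name, and role ARN are hypothetical; the key piece is the --extra-py-files default argument, which is how Glue picks up additional Python modules from S3.

```python
# Minimal sketch of the manual packaging flow from point 3 (hypothetical
# paths, names, and role ARN; IAM grants for reading the assets are omitted).
from aws_cdk import Stack, aws_glue as glue, aws_s3_assets as assets
from constructs import Construct

class GlueJobStack(Stack):
    def __init__(self, scope: Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        # Common utility functions, pre-zipped into a Python package.
        utils = assets.Asset(self, "SharedUtils", path="dist/shared_utils.zip")
        script = assets.Asset(self, "JobScript", path="jobs/etl_script.py")

        glue.CfnJob(
            self, "EtlJob",
            name="etl-job",
            role="arn:aws:iam::123456789012:role/glue-job-role",
            command=glue.CfnJob.JobCommandProperty(
                name="glueetl",
                script_location=script.s3_object_url,
                python_version="3",
            ),
            # --extra-py-files tells Glue where to find the shared package.
            default_arguments={"--extra-py-files": utils.s3_object_url},
            glue_version="4.0",
        )
```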

I am in the process of building my own solutions to the above, as I hadn't heard of data-solutions-framework-on-aws before. I've also looked at aws-ddk, but it did not help with Glue development either. This is my project: glue-pyspark-dev-tools.

If there is alignment, I'll be happy to help add my planned features to this project.

@klescosia
Author

Bouncing off your ideas:

  1. Yes, we end up iterating/running/testing scripts in the cloud. We also use Athena to test our transformation logic, since I mostly advocated using Spark SQL scripts for our transformations instead of PySpark.

Our jobs are structured as follows:

  • Ingestion
  • Staging
  • Transformation
  • Loading (to Redshift)

What I did for our deployment was to have two config files. One is a CSV file that contains the JobName, Classification (default/custom), Category (Ingestion, etc.), and ConnectionName (since our jobs run in a private network). The CDK loops through this CSV file to deploy the Glue jobs. The other config file manages the custom jobs (by Classification) tagged in the CSV file.

@lmouhib
Contributor

lmouhib commented Mar 5, 2024

One more point to consider for the feature: provide a way to run unit tests by inferring the arguments from the job construct and running them against the Glue runtime Docker container.
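
For illustration, a minimal pytest sketch of that idea, assumed to run inside the amazon/aws-glue-libs Docker image (where awsglue and PySpark are preinstalled); the inline expression stands in for a call to a real job transform.

```python
# test_job.py — minimal sketch; assumes execution inside the
# amazon/aws-glue-libs container, where awsglue and pyspark are available.
import pytest
from awsglue.context import GlueContext
from pyspark.context import SparkContext

@pytest.fixture(scope="session")
def glue_context():
    # One shared Spark/Glue context for the whole test session.
    return GlueContext(SparkContext.getOrCreate())

def test_uppercase_transform(glue_context):
    spark = glue_context.spark_session
    source = spark.createDataFrame([("alice",), ("bob",)], ["name"])
    # Stands in for a call to the job's actual transformation function.
    result = source.selectExpr("upper(name) AS name")
    assert [row.name for row in result.collect()] == ["ALICE", "BOB"]
```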

@vgkowski
Contributor

vgkowski commented Mar 5, 2024

What I did for our deployment was to have two config files. One is a CSV file that contains the JobName, Classification (default/custom), Category (Ingestion, etc.), and ConnectionName (since our jobs run in a private network). The CDK loops through this CSV file to deploy the Glue jobs. The other config file manages the custom jobs (by Classification) tagged in the CSV file.

@klescosia Do I understand correctly that you have implemented a config-file-based approach on top of CDK and Glue to create Glue jobs in a simpler way than the CDK L1 construct?

@vgkowski
Contributor

vgkowski commented Mar 5, 2024

I am in the process of building my own solutions to the above, as I hadn't heard of data-solutions-framework-on-aws before. I've also looked at aws-ddk, but it did not help with Glue development either. This is my project: glue-pyspark-dev-tools.
If there is alignment, I'll be happy to help add my planned features to this project.

@dashmug I see your tool as an equivalent of the EMR toolkit, but for Glue: a packaged solution based on this blog post. Am I correct?
If yes, your solution would tackle the local dev and unit testing parts, which is great! I think DSF would be complementary and can add value by packaging this local dev work to make it deployable in a Glue job. We just need to ensure neither solution is mandatory for the other.

What I am thinking of now is to provide as part of DSF:

  1. An abstracted construct for the Glue job with smart defaults and best practices, similar to the SparkEmrServerlessJob construct.
  2. A Glue job packager construct that takes your local environment and makes it available/consumable by Glue, similar to the PySparkApplicationPackage but adapted to Glue's specificities (see the sketch after this list).
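
Purely as illustration of how these two pieces might fit together, a hypothetical sketch; none of the construct names, properties, or the import path below exist in DSF today, they only mirror the spirit of SparkEmrServerlessJob and PySparkApplicationPackage.

```python
# Hypothetical sketch only — GlueApplicationPackage and GlueJob do NOT exist
# in DSF; names, properties, and the import path are invented to illustrate
# the proposal. Assumed to run inside a Stack's __init__.
from cdklabs import aws_data_solutions_framework as dsf  # import path assumed

package = dsf.processing.GlueApplicationPackage(  # hypothetical construct
    self, "JobPackage",
    entrypoint_path="./src/entrypoint.py",    # local entrypoint to package
    dependencies_folder="./src",              # shared utils bundled for Glue
)

dsf.processing.GlueJob(  # hypothetical construct with smart defaults
    self, "Job",
    name="nightly-etl",
    script=package.entrypoint_uri,             # S3 location produced above
    extra_python_files=package.artifacts_uri,  # wired into --extra-py-files
)
```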

@klescosia
Author

What I did for our deployment was to have two config files. One is a CSV file that contains the JobName, Classification (default/custom), Category (Ingestion, etc.), and ConnectionName (since our jobs run in a private network). The CDK loops through this CSV file to deploy the Glue jobs. The other config file manages the custom jobs (by Classification) tagged in the CSV file.

@klescosia Do I understand correctly that you have implemented a config-file-based approach on top of CDK and Glue to create Glue jobs in a simpler way than the CDK L1 construct?

Yes, that is correct. We have many Glue jobs, each with different functionality and configurations. So I loop through the CSV file and call glue.CfnJob (I'm using the Python CDK). I also have a YAML file that stores the configurations (number of workers, worker types, S3 paths, etc.) for both default and custom jobs.
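
For illustration, a minimal sketch of that loop (column names and the YAML layout are hypothetical; the real files may differ).

```python
# Sketch of the CSV + YAML driven deployment loop described above.
import csv
import yaml  # PyYAML
from aws_cdk import Stack, aws_glue as glue
from constructs import Construct

class GlueJobsStack(Stack):
    def __init__(self, scope: Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        with open("jobs.csv") as f:
            # Columns: JobName, Classification, Category, ConnectionName
            rows = list(csv.DictReader(f))
        with open("job_config.yaml") as f:
            # Worker counts, worker types, S3 paths for default/custom jobs
            config = yaml.safe_load(f)

        for row in rows:
            # Custom jobs get their own profile; the rest use the defaults.
            profile = (config["custom"].get(row["JobName"], config["default"])
                       if row["Classification"] == "custom"
                       else config["default"])
            glue.CfnJob(
                self, row["JobName"],
                name=row["JobName"],
                role=profile["role_arn"],
                command=glue.CfnJob.JobCommandProperty(
                    name="glueetl",
                    script_location=profile["script_location"],
                ),
                connections=glue.CfnJob.ConnectionsListProperty(
                    connections=[row["ConnectionName"]],
                ),
                worker_type=profile["worker_type"],
                number_of_workers=profile["number_of_workers"],
            )
```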

@vgkowski vgkowski self-assigned this Mar 11, 2024
@vgkowski vgkowski assigned lmouhib and unassigned shalaka-k Jul 23, 2024
@lmouhib
Contributor

lmouhib commented Aug 7, 2024

There is already an alpha L2 construct for Glue; we will wait to see its final form before we work on this. In the meantime, we will deliver a construct to package dependencies for Glue jobs, similar to the one we offer for the EMR Spark runtime constructs.
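
For reference, a minimal sketch of that alpha module (@aws-cdk/aws-glue-alpha) as its Python API looked at the time of writing; the script path is a placeholder.

```python
# Minimal sketch using the alpha L2 Glue construct referenced above
# (alpha API as of this writing; the script path is hypothetical).
import aws_cdk.aws_glue_alpha as glue_alpha
from aws_cdk import Stack
from constructs import Construct

class AlphaGlueStack(Stack):
    def __init__(self, scope: Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)
        glue_alpha.Job(
            self, "EtlJob",
            executable=glue_alpha.JobExecutable.python_etl(
                glue_version=glue_alpha.GlueVersion.V4_0,
                python_version=glue_alpha.PythonVersion.THREE,
                script=glue_alpha.Code.from_asset("jobs/etl_script.py"),
            ),
            worker_type=glue_alpha.WorkerType.G_1X,
            worker_count=2,
        )
```

Unlike the L1 CfnJob, the L2 construct creates the IAM role and grants read access to the script asset automatically.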
