Add Loadfilename_as_column_gcs #305

Open · wants to merge 2 commits into master
60 changes: 60 additions & 0 deletions scenarios/Loadfilename_as_column_gcs/README.md
@@ -0,0 +1,60 @@
# Workflow: Scenario (Load file name as a column from GCS)

## Scenario

The purpose of this scenario is to load files from Google Cloud Storage (GCS) on Google Cloud Platform (GCP) and capture each loaded file's name as a column in the resulting table.


*Steps*
1. Obtain the file names and file contents from multiple files in a specific GCS bucket.
<br>Sample:
<br>file name: files_aaa_20210401
<br>contents:
```
aaa
```
The file name includes a date, like a daily file, and the file content is simply a single value. Prepare several similar files.


2. Register the file names and the file contents in a TD table.

In this scenario, a custom script is used. Please refer to the Treasure Data documentation on Custom Scripts.

- [Custom Scripts](https://docs.treasuredata.com/display/public/PD/Custom+Scripts)

This scenario is just one example of how to get the file name and file contents with a custom script. Your files do not have to match this exact format; use this code as a reference to create your own.
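
For reference, the intermediate file "data.tmp" that the custom script builds is a plain CSV with one row per object: the object name with the "files" prefix stripped, a comma, and the file content. Assuming a second, hypothetical file files_bbb_20210402 containing bbb, and that each sample file ends with a newline, it would look roughly like this:

```
/files_aaa_20210401,aaa
/files_bbb_20210402,bbb
```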

# How to Run for Server/Client Mode
Here's how to run this sample to try it out.

As preparation, do the following.
- Create a GCP service account with read/write permissions for GCS and obtain a credential (JSON key) file.
- Save the credential file as "credential.json" in the local folder. (This file does not need to be uploaded; do not leak it.)
- Since GCS bucket names are globally unique, change the bucket name from "td_test_bucket" to an appropriate one.
- Create a GCS bucket with the folders "files" and "with_filename" (see the example commands after this list).
- Upload the files in the files folder of this sample to the files folder of the bucket you created.
- Change "database_name" in gcs_filename_add.dig to an appropriate name.
- Files in the local folder should be removed before running.
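
For example, assuming the gsutil CLI is configured for your GCP project (the bucket name below is a placeholder), the bucket and the sample files can be prepared like this:

    # Create the bucket (bucket names are globally unique)
    $ gsutil mb gs://your_unique_bucket_name

    # "Folders" in GCS are just object name prefixes
    $ gsutil cp files/* gs://your_unique_bucket_name/files/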


First, please upload the workflow.

    # Upload
    $ td wf push gcs_filename_add

Then set the GCP service account credential as a workflow secret:

    $ td wf secrets --project gcs_filename_add --set gcp.json_key=@credential.json
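
The registered secret can be confirmed by listing the project secrets; only the key name (gcp.json_key) should be shown, never its value.

    $ td wf secrets --project gcs_filename_add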


You can trigger the session manually to watch it execute.

    # Run
    $ td wf start gcs_filename_add gcs_filename_add --session now

After the run completes, "data.tmp" is created in the "with_filename" folder on GCS and the table "master_with_filename" is created in TD.
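
As a quick check, you can query the loaded table with the td CLI, for example (replace database_name with the database set in gcs_filename_add.dig):

    $ td query -w -d database_name "SELECT filename, data, time FROM master_with_filename LIMIT 10"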


# Next Step

If you have any questions, please contact support@treasuredata.com.
26 changes: 26 additions & 0 deletions scenarios/Loadfilename_as_column_gcs/gcs_filename_add.dig
@@ -0,0 +1,26 @@
timezone: Asia/Tokyo
_export:
  gcs:
    bucket: td_test_bucket
    path_prefix: files
    upload_path_prefix: with_filename
    datafile: data.tmp
  td:
    database: database_name
    table: master_with_filename
    endpoint: api.treasuredata.com

+read_gcsfiles:
  docker:
    image: "digdag/digdag-python:3.9"
Member commented: 👍

Author commented: I checked that it can run without any problems.

  py>: gcs_filename_add.main_proc
  _env:
    gcp_json_key: ${secret:gcp.json_key}
    gcs_bucket: ${gcs.bucket}
    gcs_path_prefix: ${gcs.path_prefix}
    gcs_upload_path_prefix: ${gcs.upload_path_prefix}
    gcs_datafile: ${gcs.datafile}

+load_from_gcs:
  td_load>: load_gcs.yml
60 changes: 60 additions & 0 deletions scenarios/Loadfilename_as_column_gcs/gcs_filename_add.py
@@ -0,0 +1,60 @@
import os

# GCS / file settings passed in from the workflow via _env
BUCKET_NAME = os.environ.get('gcs_bucket')
PATH_PREFIX = os.environ.get('gcs_path_prefix')
UPLOAD_PATH_PREFIX = os.environ.get('gcs_upload_path_prefix')
DATAFILE = os.environ.get('gcs_datafile')

# Service account key passed in from the workflow secret
GCP_JSON_KEY = os.environ.get('gcp_json_key')
HOMEPATH = os.environ.get("HOME")
CONFIGDIR = HOMEPATH + "/"
CONFIGFILE = "credential.json"


def set_gcp_credential():
    # Write the service account key to a local credential file for the GCS client.
    with open(CONFIGDIR + CONFIGFILE, "w") as f:
        f.write(GCP_JSON_KEY)
    print("created credential.json")


def get_all_keys(bucket: str = BUCKET_NAME, prefix: str = PATH_PREFIX) -> list:
    # List every object under the prefix and return their object names (keys).
    from google.cloud import storage

    storage_client = storage.Client.from_service_account_json(CONFIGDIR + CONFIGFILE)

    response = storage_client.list_blobs(bucket, prefix=prefix, delimiter='')

    keys = []
    for blob in response:
        print(blob.name)
        # Skip the prefix itself and any folder placeholder objects.
        if blob.name != PATH_PREFIX and not blob.name.endswith('/'):
            keys.append(blob.name)
    print(keys)
    return keys


def read_files(keys):
    from google.cloud import storage
    storage_client = storage.Client.from_service_account_json(CONFIGDIR + CONFIGFILE)

    # Build the local data file: one "filename,content" record per object.
    if os.path.isfile(DATAFILE):
        os.remove(DATAFILE)
    bucket = storage_client.get_bucket(BUCKET_NAME)
    with open(DATAFILE, 'a') as f:
        for key in keys:
            blob = bucket.blob(key)
            body = blob.download_as_string()
            f.write(key.replace(PATH_PREFIX, '', 1) + ',' + body.decode('utf-8'))

    # Upload the combined file under the upload prefix (e.g. with_filename/data.tmp),
    # which is the path that load_gcs.yml reads from.
    blob = bucket.blob(UPLOAD_PATH_PREFIX + '/' + DATAFILE)
    blob.upload_from_filename(DATAFILE)


def main_proc():
    # Install the GCS client library inside the custom script container at runtime.
    os.system("pip install google-cloud-storage")
    set_gcp_credential()
    read_files(get_all_keys())
    os.system('cat data.tmp')
24 changes: 24 additions & 0 deletions scenarios/Loadfilename_as_column_gcs/load_gcs.yml
@@ -0,0 +1,24 @@
in:
  type: gcs
  auth_method: json_key
  json_keyfile: { content: "${secret:gcp.json_key}" }
  bucket: ${gcs.bucket}
  path_prefix: ${gcs.upload_path_prefix}/${gcs.datafile}
  parser:
    charset: UTF-8
    newline: LF
    type: csv
    delimiter: ","
    quote: "\""
    escape: "\""
    trim_if_not_quoted: false
    skip_header_lines: 0
    allow_extra_columns: false
    allow_optional_columns: false
    columns:
    - {name: filename, type: string}
    - {name: data, type: string}
filters:
- type: add_time
  to_column: {name: time}
  from_value: {mode: upload_time}