- Upload avro files to bucket in AWS.
- None
- None
- None
- 1.0.0:
  - Use of APF.
  - Use of boto3.
- APF
- boto3
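Since the step's job is to store incoming avro alerts in S3 via boto3, a minimal sketch of that upload may help. The helper name, key scheme, and `candid` identifier below are assumptions for illustration, not the step's actual code:

```python
# Hedged sketch: how an avro alert could be uploaded to S3 with boto3.
# The helper name, key scheme, and `candid` identifier are hypothetical.
import io

import boto3


def upload_avro(content: bytes, candid: str, bucket_name: str) -> None:
    """Upload the raw bytes of a single avro alert to an S3 bucket."""
    s3 = boto3.client("s3")
    # Store the alert under a key derived from its identifier.
    s3.upload_fileobj(io.BytesIO(content), bucket_name, f"{candid}.avro")
```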
- `CONSUMER_TOPICS`: Topics to consume, as a comma-separated string, e.g., `topic_one` or `topic_two,topic_three`.
- `CONSUMER_SERVER`: Kafka host with port, e.g., `localhost:9092`.
- `CONSUMER_GROUP_ID`: Name of the consumer group, e.g., `correction`.
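As an illustration, these variables could be read into a consumer configuration like this (a hedged sketch; the exact configuration keys APF expects are not spelled out here):

```python
# Hedged sketch: reading the consumer environment variables.
import os

# Kafka-style connection settings built from the environment.
consumer_config = {
    "bootstrap.servers": os.environ["CONSUMER_SERVER"],  # e.g. localhost:9092
    "group.id": os.environ["CONSUMER_GROUP_ID"],         # e.g. correction
}
# CONSUMER_TOPICS is a comma-separated string, e.g. "topic_two,topic_three".
topics = os.environ["CONSUMER_TOPICS"].split(",")
```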
- `ES_PREFIX`: Enables the indexing of term prefixes to speed up prefix searches, e.g., `ztf_pipeline`.
- `ES_NETWORK_HOST`: Elasticsearch host.
- `ES_NETWORK_PORT`: Elasticsearch port.
- `BUCKET_NAME`: Mapping of bucket name(s) to topic prefix, e.g., `bucket1:topic1,bucket2:topic2`. This example sends the inputs from topics whose names start with `topic1` to `bucket1`, and analogously for `topic2` and `bucket2` (see the sketch below).
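A hedged sketch of how this mapping could be parsed and a topic resolved to its bucket (the helper names are hypothetical, not the step's actual code):

```python
# Hedged sketch: parse BUCKET_NAME="bucket1:topic1,bucket2:topic2"
# and resolve a topic name to its bucket by prefix.
import os
from typing import Dict, Optional


def parse_bucket_mapping(raw: str) -> Dict[str, str]:
    """Parse 'bucket:prefix' pairs into a {prefix: bucket} dict."""
    mapping = {}
    for pair in raw.split(","):
        bucket, prefix = pair.split(":")
        mapping[prefix] = bucket
    return mapping


def bucket_for_topic(topic: str, mapping: Dict[str, str]) -> Optional[str]:
    """Return the bucket whose prefix the topic name starts with."""
    for prefix, bucket in mapping.items():
        if topic.startswith(prefix):
            return bucket
    return None


mapping = parse_bucket_mapping(os.environ["BUCKET_NAME"])
# A topic named "topic1_20240101" would resolve to "bucket1".
```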
- `STEP_VERSION`: Current version of the step, e.g., `1.0.0`.
- `STEP_ID`: Unique identifier for the step, e.g., `S3`.
- `STEP_NAME`: Name of the step, e.g., `S3`.
- `STEP_COMMENTS`: Comments for the specific version.
To subscribe to specific topics, set the following variable:
- `CONSUMER_TOPICS`: Topics to consume, as a comma-separated string, e.g., `topic_one` or `topic_two,topic_three`, or a regular expression like `^topic_.*`.
Another option is to set a topic strategy, for cases where the topic name changes over time. For example, ZTF topic names follow the pattern `ztf_<yyyymmdd>_programid1`. To set it up, define:
- `TOPIC_STRATEGY_FORMAT`: The topic name expression, where `%s` is replaced by the date string, e.g., `ztf_%s_programid1`.
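For example, the daily topic name could be built from this format string as follows (a sketch; APF's actual topic strategy implementation may differ):

```python
# Hedged sketch: building a date-based topic name, assuming
# TOPIC_STRATEGY_FORMAT="ztf_%s_programid1".
import os
from datetime import datetime, timezone

date = datetime.now(timezone.utc).strftime("%Y%m%d")  # e.g. "20240101"
topic = os.environ["TOPIC_STRATEGY_FORMAT"] % date    # e.g. "ztf_20240101_programid1"
```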
- `METRICS_HOST`: Kafka host for storing metrics.
- `METRICS_TOPIC`: Name of the topic for storing metrics.
This step requires only a consumer.
- None
You can use the `docker run` command, setting all the required environment variables:
```
docker run --name my_s3_step -e BUCKET_NAME=myhost -e [... all env ...] -d s3_step:version
```

Alternatively, you can edit the environment variables in the `docker-compose.yml` file and then use the `docker-compose up` command. This runs only one container:
```
docker-compose up -d
```

If you want to scale this container, set the number of containers to run:
```
docker-compose up -d --scale s3_step=32
```

**Note:** Use `docker-compose down` to stop all containers.
For each release, an image is uploaded to ghcr.io that you can use instead of building your own. To do so, replace the image in `docker-compose.yml` or in the `docker run` command:
```
docker pull ghcr.io/alercebroker/s3_step:latest
```