
Commit 9a39090

Merge pull request #108 from airavata-courses/project_3
Project 3 integration
2 parents bf04a5d + 995c5ef commit 9a39090

311 files changed: +79623 / -468 lines changed


.DS_Store

-6 KB (binary file not shown)
Lines changed: 33 additions & 0 deletions
@@ -0,0 +1,33 @@
+name: Garuda_CD
+on:
+  push:
+    branches: [main, release]
+jobs:
+  build:
+    name: Build
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v2
+      - name: executing remote ssh commands using key file
+        uses: appleboy/ssh-action@master
+        with:
+          # host for jetstream server
+          host: 149.165.154.67
+          # username for jetstream server
+          username: exouser
+          # jetstream userkey, added to server and github secrets
+          key: ${{ secrets.JETSTREAM_PVT_KEY }}
+          port: 22
+          script: |
+            cd $HOME
+            rm -rf garuda/
+            git clone https://github.com/airavata-courses/garuda.git
+            sh env_replacer.sh
+            echo "Building docker images"
+            cd garuda/
+            docker-compose build
+            cd kubernetes/
+            echo "Pushing created images into DockerHub"
+            sh docker_push.sh
+            echo "Deploy resources into kubernetes cluster"
+            sh kubernetes_init.sh

.github/workflows/garuda__github_actions_CI.yml

Lines changed: 2 additions & 2 deletions
@@ -6,9 +6,9 @@ name: Garuda_CI
 # events but only for the development branch and master branch
 on:
   push:
-    branches: [data_extractor, queue_worker, main]
+    branches: [main, release]
   pull_request:
-    branches: [data_extractor, queue_worker, main]
+    branches: [main, release]
 
 # A workflow run is made up of one or more jobs that can run sequentially or in parallel
 jobs:

README.md

Lines changed: 38 additions & 7 deletions
@@ -14,6 +14,11 @@ Softwares/prerequisites needed to run garuda: [Docker](https://docs.docker.com/e
 
 #### Start Application
 
+Export ENV variables
+```sh
+export $PROJECT=PROJECT3
+```
+
 Pull the latest images from DockerHub
 
 ```sh
@@ -25,9 +30,14 @@ Start application services
 ```sh
 docker-compose up
 ```
-
 Run the above command on your terminal from the root of project folder to create all the resources to run the project.
 
+
+### Adding hostnames in /etc/hosts
+```sh
+sudo sh scipts/host.sh
+```
+
 > Note: The above command creates 6 containers for the running the application.
 
 > Note: The services run in non-detached mode. On exiting the process from terminal all the containers stop.
@@ -36,7 +46,7 @@ Run the above command on your terminal from the root of project folder to create
 
 #### Access Web-Application
 
-URL for the web-application: http://localhost:3000
+URL for the web-application: http://garuda.org:3000
 
 #### Stop Application
 
@@ -59,6 +69,12 @@ Build resource again if needed
 ```sh
 docker-compose build
 ```
+> Note: Before building make sure you have these env variables exported in terminal
+> 1. NASA_USERNAME - NASA MERRA2 dashboard username
+> 2. NASA_PASSWORD - NASA MERRA2 dashboard password
+> 3. AWS_ACCESS_KEY_ID - JetStream Object Store access key ID
+> 4. AWS_SECRET_ACCESS_KEY - JetStream Object Store access secret
+> 5. PROJECT - Version of project you want to build
 
 ## Run on Windows based systems
 
@@ -70,6 +86,11 @@ Softwares/prerequisites needed to run garuda: [Docker](https://docs.docker.com/d
 
 #### Start Application
 
+Export ENV variables
+```sh
+export $PROJECT=PROJECT3
+```
+
 Pull the latest images from DockerHub
 
 ```sh
@@ -82,25 +103,23 @@ Start application services
 docker-compose up
 ```
 
+Run the above command on your cmd from the root of project folder to create all the resources to run the project.
+
 ### Adding hostnames in /etc/hosts
 ```sh
 sudo sh scipts/host.sh
 ```
 
-Run the above command on your cmd from the root of project folder to create all the resources to run the project.
-
 > Note: The above command creates 6 containers for the running the application.
 
 > Note: The services run in non-detached mode. On exiting the process from terminal all the containers stop.
 
 > Note: This command might take some time to run. It's spinning up all the containers required to run the project. After all the resources are done loading, logs won't be printing on the terminal. You can use the application now !
 
 
-
-
 #### Access Web-Application
 
-URL for the web-application: http://localhost:3000
+URL for the web-application: http://garuda.org:3000
 
 #### Stop Application
 
@@ -131,6 +150,18 @@ docker compose build
 > 4. AWS_SECRET_ACCESS_KEY - JetStream Object Store access secret
 > 5. PROJECT - Version of project you want to build
 
+## Access JetStream production deployment
+
+Add changes in etc/hosts/ for production url
+```sh
+sudo sh scipts/prod_host.sh
+```
+
+Install CORS plugin in browser to enable cors headers since application is using jetstream object strore
+[sample cors plugin](https://chrome.google.com/webstore/detail/allow-cors-access-control/lhobafahddgcelffkeicbaginigeejlf?hl=en)
+
+Access application at http://garuda.org
+
 ## Modules
 
 1. [Data Extractor](./data_extractor/README.md) : Apache Maven project to build a utility JAR file which extracts requested NEXRAD data from S3.

ansible/steps.txt

Lines changed: 18 additions & 0 deletions
@@ -0,0 +1,18 @@
+# Run the following lines to install ansible
+python3 -m pip install --user ansible
+
+# Create hosts file and paste the following lines in it
+[control_plane]
+control1 ansible_host=149.165.153.132 ansible_user=garuda
+
+[workers]
+worker1 ansible_host=149.165.153.213 ansible_user=garuda
+worker2 ansible_host=149.165.155.35 ansible_user=garuda
+
+[all:vars]
+ansible_python_interpreter=/usr/bin/python3
+
+
+
+# move the hosts file to the ansible directory
+sudo mv hosts /etc/ansible/hosts

apigateway/.gitignore

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
+__pycache__/*

apigateway/constants.py

Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ def getConstants():
     constants["RABBITMQ_HOST"] = os.environ.get('RABBITMQ_HOST')
 
     constants["RABBITMQ_PORT"] = '5672'
-    if os.environ.get('RABBITMQ_HOST') is not None:
+    if os.environ.get('RABBITMQ_PORT') is not None:
         constants["RABBITMQ_PORT"] = os.environ.get('RABBITMQ_PORT')
 
     return constants
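
The one-line fix above makes the port override check RABBITMQ_PORT rather than RABBITMQ_HOST before replacing the default. A minimal standalone sketch of that environment-variable-with-default pattern (illustrative only; the helper name is not part of the project):

```python
import os

def get_rabbitmq_settings():
    """Read RabbitMQ connection settings from the environment."""
    return {
        # no default for the host; None signals missing configuration
        "RABBITMQ_HOST": os.environ.get("RABBITMQ_HOST"),
        # the corrected check: fall back to 5672 only when RABBITMQ_PORT is unset
        "RABBITMQ_PORT": os.environ.get("RABBITMQ_PORT", "5672"),
    }
```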

apigateway/server.py
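
The main change in the diff below routes an incoming request either to the existing NEXRAD pipeline or to the new NASA MERRA-2 pipeline, each backed by its own RabbitMQ queue, and push_to_rabbitmq now takes the queue name as a parameter. A condensed sketch of that routing, assuming a local RabbitMQ with guest credentials (queue names are taken from the diff; everything else is illustrative):

```python
import json
import pika

# queue names used in the diff: NEXRAD jobs vs. NASA MERRA-2 jobs
QUEUES = {"nexrad": "offload_request", "nasa": "offload_queue_nasa"}

def push_to_rabbitmq(data, queue_name, host="localhost", port=5672):
    """Publish a JSON job message to the named queue."""
    creds = pika.PlainCredentials("guest", "guest")
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host=host, port=port, virtual_host="/", credentials=creds)
    )
    channel = connection.channel()
    channel.queue_declare(queue=queue_name)
    channel.basic_publish(exchange="", routing_key=queue_name, body=json.dumps(data))
    connection.close()

# example: a NASA-type job is published to 'offload_queue_nasa'
# push_to_rabbitmq({"requestID": "1234", "property": "T"}, QUEUES["nasa"])
```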

Lines changed: 75 additions & 27 deletions
@@ -25,16 +25,16 @@ def log_error(err_code):
     elif err_code == -3:
         print("Status value not valid")
 
-def push_to_rabbitmq(data):
+def push_to_rabbitmq(data, queue_name):
     creds = pika.PlainCredentials('guest', 'guest')
     connection = pika.BlockingConnection(
         pika.ConnectionParameters(host = constants['RABBITMQ_HOST'], port = constants['RABBITMQ_PORT'], virtual_host = "/", credentials = creds)
     )
     channel = connection.channel()
-    channel.queue_declare(queue='offload_request')
+    channel.queue_declare(queue=queue_name)
     message = json.dumps(data)
 
-    channel.basic_publish(exchange='', routing_key='offload_request', body=message)
+    channel.basic_publish(exchange='', routing_key=queue_name, body=message)
     connection.close()
 '''
 Queue Request example:
@@ -121,23 +121,48 @@ def generate_new_request():
         }
         return jsonify(response)
     try:
-        station_key = request_data['station_name']
-        db_date_absolute = str(request_data['date'])
-        db_date = db_date_absolute.replace(" ", "").replace("-", "")
-        time_start, time_end = str(request_data['time']).replace(" ","").replace(":", "").split('-')
-        property = request_data['property']
-        user_email = request_data['user_email']
+        # request type - nexrad/nasa
+        data_type = 'nexrad'
+        if 'type' in request_data:
+            data_type = request_data['type']
+
+        # nexrad data
+        if data_type == 'nexrad':
+            station_key = request_data['station_name']
+            db_date_absolute = str(request_data['date'])
+            db_date = db_date_absolute.replace(" ", "").replace("-", "")
+            time_start, time_end = str(request_data['time']).replace(" ","").replace(":", "").split('-')
+            property = request_data['property']
+            user_email = request_data['user_email']
+        # nasa data
+        else:
+            minlon = int(request_data["minlon"])
+            maxlon = int(request_data["maxlon"])
+            minlat = int(request_data["minlat"])
+            maxlat = int(request_data["maxlat"])
+            begTime = request_data["begTime"]
+            endTime = request_data["endTime"]
+            begHour = request_data["begHour"]
+            endHour = request_data["endHour"]
+            property = request_data["property"]
+            user_email = request_data['user_email']
     except:
         response = {
             "response_code" : "3",
             "response_message" : "Incorrect keys passed in the json request"
         }
         return jsonify(response)
-    db_key = f"{station_key}_{db_date}_{time_start}_{time_end}_{property}"
+
+    if data_type == 'nexrad':
+        db_key = f"{station_key}_{db_date}_{time_start}_{time_end}_{property}_{data_type}"
+    else:
+        # nasa data request ID
+        db_key = f"{property}_{minlon}_{maxlon}_{minlat}_{maxlat}_{begTime}_{endTime}_{begHour}_{endHour}_{data_type}"
 
     # Make call to check db for this key
     URL = "http://" + constants["DB_MIDDLEWARE_READER_HOST"] + ":" + constants["DB_MIDDLEWARE_READER_PORT"] + "/" + "postCheckRequest"
     METHOD = 'POST'
+    # TODO: remove property
     PAYLOAD = json.dumps({
         "request_id" : db_key,
         "property" : str(property)
@@ -182,28 +207,51 @@ def generate_new_request():
             response['data_dump'] = ""
         if db_response['data_status'] == "false":
             # Make a call to rabbitmq
-            '''
-            Queue Request example:
-            {"requestID":"1234","stationID":"KABR","year":"2007","month":"01","date":"01","start_time":"000000","end_time":"003000","property":"Reflectivity"}
-            '''
-            rmq_month, rmq_date, rmq_year = db_date_absolute.split('-')
-            rabbitmq_data = {
-                "requestID" : str(db_key),
-                "stationID" : str(station_key),
-                "year" : rmq_year,
-                "month" : rmq_month,
-                "date" : rmq_date,
-                "start_time" : time_start,
-                "end_time" : time_end,
-                "property" : str(property)
-            }
+            rabbitmq_data = {}
+            if data_type == 'nexrad':
+                '''
+                Queue Request example:
+                {"requestID":"1234","stationID":"KABR","year":"2007","month":"01","date":"01","start_time":"000000","end_time":"003000","property":"Reflectivity"}
+                '''
+                rmq_month, rmq_date, rmq_year = db_date_absolute.split('-')
+                rabbitmq_data = {
+                    "requestID" : str(db_key),
+                    "stationID" : str(station_key),
+                    "year" : rmq_year,
+                    "month" : rmq_month,
+                    "date" : rmq_date,
+                    "start_time" : time_start,
+                    "end_time" : time_end,
+                    "property" : str(property)
+                }
+            else:
+                '''
+                Queue Request example:
+                {"minlon":-180,"maxlon":180,"minlat":-90,"maxlat":-45,"begTime":"2021-01-01","endTime":"2021-01-02","begHour":"00:00","endHour":"00:00","requestID":"1234","property":"T"}
+                '''
+                rabbitmq_data = {
+                    "minlon":minlon,
+                    "maxlon":maxlon,
+                    "minlat":minlat,
+                    "maxlat":maxlat,
+                    "begTime":begTime,
+                    "endTime":endTime,
+                    "begHour":begHour,
+                    "endHour":endHour,
+                    "requestID":str(db_key),
+                    "property":str(property)
+                }
             try:
-                push_to_rabbitmq(data=rabbitmq_data)
+                queue_name = ''
+                if data_type == 'nexrad':
+                    queue_name = 'offload_request'
+                else:
+                    queue_name = 'offload_queue_nasa'
+                push_to_rabbitmq(data=rabbitmq_data, queue_name=queue_name)
             except:
                 response['response_code'] = "3"
                 response['response_message'] = "Failed to add new job to rabbitmq"
                 response['data_dump'] = ""
-
         else:
             response['response_code'] = "1"
             response['response_message'] = "Fail"

db_middleware/Dockerfile

Lines changed: 3 additions & 0 deletions
@@ -1,6 +1,9 @@
 FROM node:14-alpine
 
+
 WORKDIR /db_middleware
+
+
 COPY package.json .
 RUN npm install
 COPY . .