@@ -18,11 +18,11 @@ The goal is to help to resolve doubts or issues related to scalability or perfor
* [Test your 3scale services](#test-your-3scale-services)
* [Setup traffic profiles](#setup-traffic-profiles)
* [Run tests](#run-tests)
+ * [Sustained load](#sustained-load)
* [Troubleshooting](#troubleshooting)
* [Check apicast gateway configuration](#check-apicast-gateway-configuration)
* [Check backend listener traffic](#check-backend-listener-traffic)
* [Check upstream service traffic](#check-upstream-service-traffic)
- * [Sustained load](#sustained-load)

Generated using [github-markdown-toc](https://github.com/ekalinin/github-markdown-toc)
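For context, the TOC anchors above follow GitHub's heading-to-anchor convention: lowercase the heading, strip punctuation, and turn spaces into hyphens. A rough sketch of that rule (an approximation for illustration, not github-markdown-toc's actual implementation):

```shell
# Approximate GitHub heading-to-anchor slugging: lowercase, drop punctuation,
# spaces become hyphens.
slug() { printf '%s\n' "$1" | tr '[:upper:]' '[:lower:]' | tr -d '[:punct:]' | tr ' ' '-'; }

slug "Sustained load"   # prints: sustained-load
slug "Run tests"        # prints: run-tests
```

This is why moving the "Sustained load" section does not change its `#sustained-load` anchor, only its position in the list.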
@@ -102,6 +102,18 @@ threescale_services: ""

**3.** Execute the playbook `injector.yml` to deploy the injector.

+ To avoid SSH issues when running Ansible playbooks, add keepalive and pipelining settings to `~/.ansible.cfg`:
+
+ ```bash
+ $ cat ~/.ansible.cfg
+ [ssh_connection]
+ ssh_args = -o ServerAliveInterval=30
+ pipelining = True
+ ```
+
+ Then start the playbook:
+
```bash
ansible-playbook -i hosts injector.yml
```
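The keepalive snippet above can also be written non-interactively before launching the playbook. A minimal sketch, using a temporary path so a real `~/.ansible.cfg` is not overwritten:

```shell
# Write the [ssh_connection] settings shown above to a config file.
# A temp path stands in for ~/.ansible.cfg in this sketch.
CFG="$(mktemp -d)/ansible.cfg"
cat > "$CFG" <<'EOF'
[ssh_connection]
ssh_args = -o ServerAliveInterval=30
pipelining = True
EOF
echo "wrote $CFG"
```

In practice, check for an existing `~/.ansible.cfg` first (e.g. `[ -f ~/.ansible.cfg ]`) and merge rather than overwrite, since Ansible reads only one user-level config file.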
@@ -146,6 +158,17 @@ private_base_url: <PRIVATE_BASE_URL>

**3.** Execute the playbook `profiled-injector.yml` to deploy the injector.

+ To avoid SSH issues when running Ansible playbooks, add keepalive and pipelining settings to `~/.ansible.cfg`:
+
+ ```bash
+ $ cat ~/.ansible.cfg
+ [ssh_connection]
+ ssh_args = -o ServerAliveInterval=30
+ pipelining = True
+ ```
+
+ Then start the playbook:
+
```bash
ansible-playbook -i hosts profiled-injector.yml
```
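Both playbook invocations take the same `-i hosts` inventory. As a reminder of the general shape of such a file (the group name, host, and user below are invented for illustration; the real inventory ships with this repository's playbooks):

```shell
# Hypothetical minimal Ansible inventory, for illustration only.
cat > hosts <<'EOF'
[injector]
injector1.example.com ansible_user=centos
EOF
cat hosts
```

The group name referenced by the playbooks must match the one defined in the inventory, so check the repository's sample `hosts` file rather than this sketch.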
@@ -171,8 +194,32 @@ ansible-playbook -i hosts run.yml

The test results of the last execution are automatically stored in **/opt/3scale-perftest/reports**.
This directory can be fetched and then the **report/index.html** can be opened to view the results.

+ ## Sustained load
+
+ Some performance tests look for maximum performance under both *peak* and *sustained* traffic.
+ *Sustained* traffic is defined as a load at which the *Job Queue* size always stays low, or even empty.
+ For a *sustained* traffic performance benchmark, the *Job Queue* must therefore be monitored.
+
+ This is a short guideline for monitoring the *Job Queue* size:
+
+ - Get the backend redis pod
+
+ ```bash
+ $ oc get pods | grep redis
+ backend-redis-2-nkrkk    1/1    Running    0    14d
+ ```
+
+ - Get the Job Queue size
+
+ ```bash
+ $ oc rsh backend-redis-2-nkrkk /bin/sh -i -c 'redis-cli -n 1 llen resque:queue:priority'
+ (integer) 0
+ ```
+
## Troubleshooting

Sometimes, even though all deployment commands run successfully, performance traffic may be broken.
This might be due to a misconfiguration in any stage of the deployment process.
When performance HTTP traffic response codes are not as expected, i.e. **200 OK**,
@@ -235,24 +282,3 @@ the last usual suspect is upstream or upstream configuration.

Check *upstream* uri is correctly configured in your 3scale configuration

- ## Sustained load
-
- Some performance test are looking for *peak* and *sustained* traffic maximum performance.
- *Sustained* traffic is defined as traffic load where *Job Queue* size is always at low levels, or even empty.
- For *sustained* traffic performance benchmark, *Job Queue* must be monitorized.
-
- This is a small guideline to monitor *Job Queue* size:
-
- - Get backend redis pod
-
- ```bash
- $ oc get pods | grep redis
- backend-redis-2-nkrkk    1/1    Running    0    14d
- ```
-
- - Get Job Queue size
-
- ```bash
- $ oc rsh backend-redis-2-nkrkk /bin/sh -i -c 'redis-cli -n 1 llen resque:queue:priority'
- (integer) 0
- ```
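The two `oc` commands from the Sustained load section can be combined into a small polling loop that watches the *Job Queue* during a benchmark. A sketch under stated assumptions: the real probe would be the `oc rsh ... redis-cli llen` call shown above, but it is stubbed with a fixed answer here so the skeleton runs anywhere, and the threshold of 10 is an arbitrary illustration:

```shell
# Poll the Job Queue length a few times and report whether the load stayed
# "sustained" (queue near empty). In a real run, set QUEUE_CMD to the probe:
#   QUEUE_CMD="oc rsh backend-redis-2-nkrkk /bin/sh -c 'redis-cli -n 1 llen resque:queue:priority'"
# The stub below always answers "(integer) 0" so this sketch is runnable offline.
QUEUE_CMD=${QUEUE_CMD:-'echo "(integer) 0"'}
MAX=10          # arbitrary "still sustained" threshold for illustration
sustained=yes
for i in 1 2 3; do
  # redis-cli prints "(integer) N"; keep only the digits.
  len=$(eval "$QUEUE_CMD" | tr -dc '0-9')
  if [ "${len:-0}" -gt "$MAX" ]; then
    sustained=no
  fi
done
echo "sustained=$sustained"
```

If the loop reports `sustained=no`, the queue is backing up and the traffic level is beyond what the deployment can sustain, so the benchmark result should be treated as peak rather than sustained throughput.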