search.json
[
{
"title": "About OneOps",
"url": "/overview/about.html",
"content": "Cloud Application Lifecycle ManagementOneOps is a cloud management and application lifecycle management platform thatdevelopers use to both develop and launch new products faster, and to moreeasily maintain them throughout their entire lifecycle. OneOps enablesdevelopers to code their products in a hybrid, multi-cloud environment.This means developers can switch between different cloud providers to takeadvantage of better pricing, technology or scalability without being lockedinto one cloud provider.MissionOur mission is to give our customers the most agile, cost-effective, flexibleapplication lifecycle management solution for enterprise class workloads in thecloud.HistoryOneOps was founded in the spring of 2011 by three industry veterans experiencedin running some of the largest web environments. It is through this experiencethat they re-invented how applications should be provisioned and managed in thecloud computing era. OneOps was acquired by @WalmartLabs in May 2013 toaccelerate the adoption of cloud in the context of the Global eCommercedivisions Pangaea initiative. OneOps manages several eCommerce propertieswithin the Walmart portfolio including walmart.com, and SamsClub. In January 2016, @WalmartLabs released OneOps as open source softwareproject under the Apache 2.0 license."
},
{
"title": "Add a Group to an Organization",
"url": "/user/account/add-a-group-to-an-organization.html",
"content": "Add a Group to an Organization Log into your OneOps server. From the top navigation bar, select the organization. Select Settings in the left navigation bar. Select teams. On the left, select the team to be added to the group. Scroll to the bottom of the page to the Group Members section and click Add group. Find the group and click Save."
},
{
"title": "Add a New Azure Cloud",
"url": "/user/account/add-a-new-azure-cloud.html",
"content": "Add a New Azure CloudSolutionTo add a new cloud for Azure, follow these steps: Log into OneOps. Select the organization. Click Add new Cloud. Enter the name. Enter the description. Select Custom and enter the auth Key. Go to the box. Add the inductor with the authorization key. Log into OneOps. Select your cloud. Add the cloud services needed for Azure Compute Select the Service: ex azure-eastus2. Enter the tenant, client id, subscription id, and client secret. These values will come from setting up your organization in the Azure portal. This should be done prior to configuring an Azure cloud. Express Routes are only used if you need a private connection from your network and Azure. By default, do not select it. With Express Routes NOT selected: Enter a Network address range: ex 10.0.0.5/16 Enter a Subnet address range: ex 10.0.0.5/24 Enter a DNS server(ip): ex 8.8.8.8 You can leave the rest with default values. Click save. Edit the service. Click status. Validate the status check is successful. Add the remaining services: Azure DNS Azure LB Azure Traffic Manager Mirrors Add the variable: cloudName: name of the cloud If Express Routes is something you plan on using you will be expected to provide the following. Enter the Resource Group in Azure that has the VNET which is connected to the express route. Enter the VNET name The Resource Group and VNETs are things that need to be setup prior to configuring this cloud. If you are using Express Routes you already know you will be using a specific address space and that will need to be configured on the VNET/Subnets in this resource group."
},
{
"title": "Add a New Cloud",
"url": "/user/account/add-a-new-cloud.html",
"content": "Add a New CloudSolutionA cloud can be defined as a group of services which enables resource allocation/usage. It contains services such as Rackspace/AWS used for the compute creation, Nexus service is used to store the artifacts or repos, f5/citrix netscaler service is used for the load balancing and many more. OneOps creates multiple clouds per organization. Log into OneOps. Select the Organization. Click Add new Cloud. Enter the cloud name. Enter the description. Select Custom and enter the auth Key. Log into OneOps. Select your cloud. Add the Openstack Service Select the Service: ex openstack:cloudName. Enter the Name, tenant, username and password. Click save. Click on the service. Click status. Check whether the Quota is populated or not. Add the remaining appropriate services: Nexus Service Load balancer Nexus Mirrors Add any variables as needed."
},
{
"title": "Add a New Component",
"url": "/developer/content-development/add-a-new-component.html",
"content": "Add a New ComponentTo add a new Component Class to a model so it can be used byPlatforms, follow these steps:In this example, we use Jboss as an example of a new component. Clone the initial packer directory from git@github.com:kloopz/packer.gitcd packer; util/new_component.rb Jbosshis generates the dirs and files with common values. See the sample output in the first Note below.cd cookbooks/Jboss Update metadata.rb. You can reuse existing Jboss chef recipes and attributes by using the open source recipe, http://community.opscode.com/cookbooks/Jboss. For a list of these see the second Note below.cd recipes Copy recipes from the download on the page above, or use the Git chef-Jboss. The cookbook does not have a recipe named add.rb. Their default.rb does the same function. Create an add.rb and update.rb that only has: include_recipe Jboss::default. There is no delete recipe, that must be added. Update relationships metadata for: depends_on, deployed_to, escorted_by,managed_via, realized_as, requires, and watched_by. cd ../<relationship> and edit metadata.rb adding Jboss accordingly. Sync to CMS. Add the Jboss image for the UI: kloopz-app repo public/images/cms/Jboss.png new_component.rb Jboss output:#### Creating new component: /Users/mike/oo/packer/cookbooks/Jboss##DIR: /Users/mike/oo/packer/cookbooks/Jbossupdating metadata.rbupdating README.mdDIR: /Users/mike/oo/packer/cookbooks/Jboss/recipesupdating add.rbupdating delete.rbupdating repair.rbupdating restart.rbupdating start.rbupdating status.rbupdating stop.rbupdating update.rbdone.Jboss Attributes from open source cookbook:* Jboss_home - location for Jboss* version - version to download* dl_url - download url ...we can derive from the version tho, omitting* Jboss_user - default Jboss userSee Also Platforms in Key Concepts Add a PlatformNew Component Screensmetadata.rb - 4 parts: name/desc grouping attrs actions (additional to add, update, delete)"
},
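A minimal sketch of the add.rb described in the entry above, which simply delegates to the open source cookbook's default recipe (update.rb gets the same one-line body); the lowercase recipe name is an assumption based on Chef naming conventions:

```ruby
# cookbooks/Jboss/recipes/add.rb
# Delegate the OneOps add action to the open source cookbook's
# default recipe, which performs the actual install/configure work.
include_recipe "jboss::default"
```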
{
"title": "Add a Platform to a Design",
"url": "/user/design/add-a-platform-to-a-design.html",
"content": "Add a Platform to a DesignSolution Your browser daoes not implement HTML5 video.To add a platform to a design, follow these steps: In the Design navigation box, click create platform. If you did not create a Design as part of creating an Assembly, the Design page appears after saving. Click New Platform or [ + Add ]. Provide a unique name. Use letters, numbers, and a dash. No other characters are allowed. You are notified to match the requested format if you use any invalid characters. <The format only says at least one character. No mention of whats not allowed.> Provide a brief description. Select a pack source from the drop-down list. Option Description main Open Source packs. Select a pack name from the drop-down list. Select a pack version from the drop-down list. The Pack you select may only have one version available. Click Save. Review your configuration. If you decide you do not want the configuration, click Delete. Optionally modify the Components or any individual component attributes. When you are done and ready to save the Design, click Commit.See Also Deploy an Application for the First Time Deploy Multiple Clouds in Parallel"
},
{
"title": "Add a Platform",
"url": "/developer/content-development/add-a-platform.html",
"content": "Add a PlatformTo create a new Platform type so it can be used via the UI, follow the steps listed below.For this example, JBoss is an example of a new platform. Before a pack can use JBoss, acomponent for JBoss must be added. Since Tomcat is an application server like JBoss, we can use that as a template. cd packer/packs/platform ; cp tomcat.rb JBoss.rb Edit the JBoss.rb by changing the names and monitors accordingly. Sync to the CMS.The following is a sample platform pack file for Tomcat:# extends genericlb pack# genericlb extends base - where compute, storage, user, etc are modeled.include_pack "genericlb"name "tomcat"description "Tomcat"type "Platform"category "Web Application"environment "single", {}environment "redundant", {}environment "ha", {}resource "tomcat", :cookbook => "tomcat", :design => true, :requires => { "constraint" => "1..1" }, :monitors => { 'JvmInfo' => { :description => 'JvmInfo', :source => '', :chart => {'min'=>0, 'unit'=>''}, :cmd => 'check_tomcat_jvm', :cmd_line => '/opt/nagios/libexec/check_tomcat.rb JvmInfo', :metrics => { 'max' => metric( :unit => 'B', :description => 'Max Allowed', :dstype => 'GAUGE'), 'free' => metric( :unit => 'B', :description => 'Free', :dstype => 'GAUGE'), 'total' => metric( :unit => 'B', :description => 'Allocated', :dstype => 'GAUGE'), 'percentUsed' => metric( :unit => 'Percent', :description => 'Percent Memory Used', :dstype => 'GAUGE'), }, :thresholds => { 'HighMemUse' => threshold('5m','avg','percentUsed',trigger('>',98,15,1),reset('<',98,5,1)), } }, 'ThreadInfo' => { :description => 'ThreadInfo', :source => '', :chart => {'min'=>0, 'unit'=>''}, :cmd => 'check_tomcat_thread', :cmd_line => '/opt/nagios/libexec/check_tomcat.rb ThreadInfo', :metrics => { 'currentThreadsBusy' => metric( :unit => '', :description => 'Busy Threads', :dstype => 'GAUGE'), 'maxThreads' => metric( :unit => '', :description => 'Maximum Threads', :dstype => 'GAUGE'), 'currentThreadCount' => metric( :unit => '', :description => 'Ready Threads', :dstype => 'GAUGE'), 'percentBusy' => metric( :unit => 'Percent', :description => 'Percent Busy Threads', :dstype => 'GAUGE'), }, :thresholds => { 'HighThreadUse' => threshold('5m','avg','percentBusy',trigger('>',90,5,1),reset('<',90,5,1)), } }, 'RequestInfo' => { :description => 'RequestInfo', :source => '', :chart => {'min'=>0, 'unit'=>''}, :cmd => 'check_tomcat_request', :cmd_line => '/opt/nagios/libexec/check_tomcat.rb RequestInfo', :metrics => { 'bytesSent' => metric( :unit => 'B/sec', :description => 'Traffic Out /sec', :dstype => 'DERIVE'), 'bytesReceived' => metric( :unit => 'B/sec', :description => 'Traffic In /sec', :dstype => 'DERIVE'), 'requestCount' => metric( :unit => 'reqs /sec', :description => 'Requests /sec', :dstype => 'DERIVE'), 'errorCount' => metric( :unit => 'errors /sec', :description => 'Errors /sec', :dstype => 'DERIVE'), 'maxTime' => metric( :unit => 'ms', :description => 'Max Time', :dstype => 'GAUGE'), 'processingTime' => metric( :unit => 'ms', :description => 'Processing Time /sec', :dstype => 'DERIVE') }, :thresholds => { } } }resource "build", :cookbook => "build", :design => true, :requires => { "constraint" => "0..*" }, :attributes => { "install_dir" => '/usr/local/build', "repository" => "", "remote" => 'origin', "revision" => 'HEAD', "depth" => 1, "submodules" => 'false', "environment" => '{}', "persist" => '[]', "migration_command" => '', "restart_command" => '' }resource "java", :cookbook => "java", :design => true, :requires => { :constraint => "0..1", :help => "java 
programming language environment" }, :attributes => { }resource "vservice", :design => false, :attributes => { "protocol" => "http", "vport" => 8080, "iport" => 8080 }# depends_on[ { :from => 'tomcat', :to => 'compute' }, { :from => 'build', :to => 'library' }, { :from => 'tomcat', :to => 'user' }, { :from => 'tomcat', :to => 'java' }, { :from => 'build', :to => 'tomcat' }, { :from => 'daemon', :to => 'build' }, { :from => 'build', :to => 'download'}, { :from => 'java', :to => 'compute' }, { :from => 'java', :to => 'download'}, ].each do |link| relation "#{link[:from]}::depends_on::#{link[:to]}", :relation_name => 'DependsOn', :from_resource => link[:from], :to_resource => link[:to], :attributes => { "flex" => false, "min" => 1, "max" => 1 }end# managed_via[ 'tomcat', 'build', 'java' ].each do |from| relation "#{from}::managed_via::compute", :except => [ '_default' ], :relation_name => 'ManagedVia', :from_resource => from, :to_resource => 'compute', :attributes => { }end"
},
{
"title": "Add a User To a Group",
"url": "/user/account/add-a-user-to-a-group.html",
"content": "Add a User to a Group Log into your OneOps server. Select your user name at the top of the page. Click Groups. Select the appropriate User Group from the left menu. Click Add. Select the user and click Save."
},
{
"title": "Add a User",
"url": "/user/account/add-a-user.html",
"content": "Add a UserSolutionTo add a user, follow these steps: In the header, click organization. Select the users tab. Click Invite New User"
},
{
"title": "Add a Variable",
"url": "/user/design/add-a-variable.html",
"content": "Add a VariableVariables are a way to set key-value pair for easy referencing. They allow you to set one value and have it used across many Components. An example use for Variables would be to create a revision tag across your builds. Your browser does not implement HTML5 video.To add a variable, follow these steps: In Design, select an Assembly. Select the variables tab. Click New Variable. Enter a unique name. Letters, numbers, and dashes are allowed. No other characters are allowed. If you use invalid characters, you are notified to match the requested format. Enter a value for the variable.The three areas to store variables are: Cloud, Global and Local. Cloud variables are defined for a particular Cloud and referenced as $OO_CLOUD{variable-name} Global variables are those set in a particular Assembly. Such variables are available across all platforms in an Assembly. These can be referred as $OO_GLOBAL{variable-name} Local variables are those set in a particular Platform. The variable is available only to the components within the platform. Usage $OO_LOCAL{variable-name}At the time of deployment, these variables are substituted with the actual values.There are 3 implicit variables available which can be directly used in any component attribute $OO_GLOBAL{env_name} : resolves to environment name $OO_CLOUD{cloud_name} : resolves to cloud name $OO_LOCAL{platform_name} : resolves to platform nameSee Also Variable Override"
},
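To illustrate the substitution described above, a component attribute value can mix all three scopes plus the implicit variables; the attribute names and variable names in this sketch are hypothetical:

```ruby
# Hypothetical component attribute values showing variable references;
# OneOps substitutes these with actual values at deployment time.
attributes = {
  # implicit cloud-scoped variable
  "log_endpoint" => "syslog.$OO_CLOUD{cloud_name}.example.com",
  # a global (assembly-scoped) variable plus the implicit env_name
  "app_version"  => "$OO_GLOBAL{revision}-$OO_GLOBAL{env_name}",
  # a local (platform-scoped) variable
  "install_dir"  => "/opt/$OO_LOCAL{app_dir}"
}
```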
{
"title": "Add CNAME in Azure DNS",
"url": "/user/transition/add-cname-to-azure-dns.html",
"content": "Add CNAME in Azure DNSSolutionThe hostname, by default, is provided by the OneOps system and follows a pattern that is described in theCompute documentation.You can create your own endpoints by adding short or full CNAMEs.Add a Short CNAMEThe first approach is to add a short CNAME with the following steps: In the Transition view of your platform, go to the FQDN component. Click edit. Enter a single word for a short CNAME alias. Save the change. Commit and deploy.The shortname will be added to Azure DNS in a DNS zone created by the system using the name from the ZONE given in the DNS cloud service.</br>The zone is created in a Resource Group.</br>The shortnames will be added to Azure DNS zone with the name: <NEW-SHORT-NAME><ENVIRONMENT-NAME>.<ASSEMBLY-NAME>.<ORGANIZATION-NAME></br>It is possible to add multiple short CNAMEs to have additional hostnames.In addition, the shortname will be used as the DNS Label Name to create the FQDN in the Azure public domain.</br>The result will be <SHORT-NAME>.<AZURE-REGION>.cloudapp.azure.comAdd a Full CNAMEA second approach is to add a full CNAME with the following steps: In the Transition view of your platform, go to the FQDN component. Click edit. Enter a Full CNAME alias Save the change. Commit and deploy.The fullname is added as a CNAME record in the same DNS Zone the short names are.</br>The fullname is not added to the Azure public domain.</br>If you have your own domain and want to delegate to Azure DNS follow these instrutions in Delegate Domain to Azure DNS"
},
{
"title": "Add CNAME",
"url": "/user/transition/add-cname.html",
"content": "Add CNAMESolutionThe hostname, by default, is generated by OneOps and follows a pattern that is described in theCompute documentation.You can create your own endpoints by adding short or full CNAMEs.Add a Short CNAMEThe first approach is to add a short CNAME with the following steps: In the Transition view of your platform, go to the fqdn component. Click edit. Enter a single word for a short CNAME alias. Save the change. Commit and deploy.The additional hostname where the platform can be reached is <NEW-SHORT-NAME><ENVIRONMENT-NAME>.<ASSEMBLY-NAME>.<ORGANIZATION-NAME>.DOMAINIt is possible to add multiple short CNAMEs to have additional hostnames.Add a Full CNAMEA second approach is to add a full CNAME with the following steps: In the Transition view of your platform, go to the fqdn component. Click edit. Enter a Full CNAME alias, for example, test.qa.<your server>. Before using this feature, know the correct domain name. The following domains dev|qa|prod.<your server> are supported. Save the change. Commit and deploy.The new Full CNAME aliases are the additional hostnames where the platform can be reached."
},
{
"title": "Add or Edit Primary and Secondary Clouds",
"url": "/user/transition/add-edit-primary-secondary-clouds.html",
"content": "Add or Edit Primary and Secondary CloudsChange a Secondary Cloud to a Primary Cloud in an Existing EnvironmentTo flip an existing platforms secondary cloud to a primary cloud within an environment, follow these steps: In the transition phase, select the environment. Select the platform. Locate the appropriate cloud and click Make Primary. To edit the environment, in the cloud section, select the appropriate checkbox for primary or secondary clouds. This edit only impacts new platforms that are pulled after the environment is edited.Change a Primary Cloud to a Secondary Cloud in an Existing EnvironmentTo flip an existing platforms primary cloud to secondary within a given environment, follow these steps: In the transition phase, select the environment. Select the platform. Locate the appropriate cloud and click Make Secondary. To edit the environment, in the cloud section, select the appropriate checkbox for primary or secondary clouds. This edit only impacts new platforms that are pulled after the environment is edited.Add a Primary or Secondary Cloud to an Existing EnvironmentTo add a primary or secondary cloud to an existing environment, follow these steps: In the transition phase, select the environment. Select the platform. Locate the appropriate cloud. To edit the environment, in the cloud section, select the right checkbox for primary or secondary clouds. (The default setting is not used.) This edit will add newly selected clouds to all the platforms within the environment."
},
{
"title": "Add ELK Stack to an Application",
"url": "/user/design/add-elk-stack-to-an-application.html",
"content": "Add ELK Stack to an ApplicationThis page details how to add Elasticsearch,Logstash and Kibana the Elastic stack or ELK stack - to an application.Elasticsearch Setup Add a new platform for Elasticsearch using the Elasticsearch with LB pack. If required, edit the configuration under the Elasticsearch component. For example, change the number of shards orreplicas used or other parameters as desired. Commit the design changes and deploy the new platform. Once Elasticsearch deployed successfully, you can access the user interface at http://ipaddress:9200 Kibana Setup Add a new platform for Kibana and a dependency to the Elasticsearch platform. Configure the Kibana component pointing to the Elasticsearch component deployed above. Commit the design changes and deploy Kibana. Verify Kibana by accessing the user interface at http://ipaddress:5601/app/kibana Logstash SetupThe following steps are an example on how to configure Logstash to collect the Tomcat access log. Add a Logstash component under the Tomcat platform. Edit the inputs, filters and outputs options as required. Here is an input example: Inputs : file {path => "/opt/tomcat7/logs/access*.log" sincedb_path => "/opt/logstash/sincedb-access" } Deploy the Logstash component. Verify Logstash started successfully without errors by inspecting the log on the VM running Tomcat and Logstash. Validation After the Logstash deployment, verify that indices are created on Elasticsearch at http://ipaddress:9200/_cat/indicesand that the status is green. Now that logs are parsed and stored in Elasticsearch, you can configure Kibana to generate reports as requiredand detailed in the Kibana documentation. "
},
{
"title": "Add Monitors",
"url": "/developer/content-development/add-monitors.html",
"content": "Add MonitorsAdding a monitor is specific to each component. These steps are very generic and only give an overview of what to do if a new monitor needs to be added to a component:Go to the pack where you want to add the monitor.Add a Monitors using existing script Let say we need to add the log monitor on resource. Since we already have check_logfiles script. We can use it.:monitors => { 'Log' => {:description => 'Log', :source => '', :chart => {'min' => 0, 'unit' => ''}, :cmd => 'check_logfiles!logtomcat!#{cmd_options[:logfile]}!#{cmd_options[:warningpattern]}!#{cmd_options[:criticalpattern]}', :cmd_line => '/opt/nagios/libexec/check_logfiles --noprotocol --tag=$ARG1$ --logfile=$ARG2$ --warningpattern="$ARG3$" --criticalpattern="$ARG4$"' :cmd_options => { 'logfile' => '/var/log/tomcat6/catalina.out', 'warningpattern' => 'WARNING', 'criticalpattern' => 'CRITICAL'}, :metrics => { 'logtomcat_lines' => metric(:unit => 'lines', :description => 'Scanned Lines', :dstype => 'GAUGE'), 'logtomcat_warnings' => metric(:unit => 'warnings', :description => 'Warnings', :dstype => 'GAUGE'), 'logtomcat_criticals' => metric(:unit => 'criticals', :description => 'Criticals', :dstype => 'GAUGE'), 'logtomcat_unknowns' => metric(:unit => 'unknowns', :description => 'Unknowns', :dstype => 'GAUGE')}, :thresholds => { 'CriticalLogException' => threshold('15m', 'avg', 'logtomcat_criticals', trigger('>=', 1, 15, 1), reset('<', 1, 15, 1)), } }, }Add a Monitors , Creating a new script. To create a new monitor, a new script needs to be created. This script can be placed in the monitor cookbook, or if its specific to the component in question, it can be placed under the components own cookbook. refer Monitoring ComponentFor adding to existing directories : Create Directories template/default in Zookeeper Oneops pack in the zookeeper component and add your script file with the extension of filename.extn.erb Add the following code to your add.rb "packer/components/cookbooks/zookeeper/recipes/add.rb". This is used to copy your .erb file in the /opt/nagios/libexec and nagios will read from there.template "/opt/nagios/libexec/check_zk.py" do source "check_zk.py.erb" mode 0755 owner "oneops" group "oneops"endcheck_zk.py is the name of the script. Add the monitor in the pack "packs/platform/zookeeper.rb"'zookeepernode' => {:description => 'ZookeeperNode', :source => '', :chart => {'min' => '0', 'max' => '100', 'unit' => 'Percent'}, :cmd => 'check_zk', :cmd_line => '/opt/nagios/libexec/check_zk.py', :metrics => { 'up' => metric(:unit => '%', :description => 'Percent Up'), }, :thresholds => { 'ZookeeperProcessDown' => threshold('1m', 'avg', 'up', trigger('<', 90, 1, 1), reset('>', 90, 1, 1)) } }LOG Format:if [ $ec != 0 ]; thenecho "$1 down |up=0"elseecho "$1 up |up=100"fiSee Also Add a New Chef Cookbook and Pack to OneOps"
},
{
"title": "Add a New Chef Cookbook and Pack to OneOps",
"url": "/developer/content-development/add-new-chef-cookbook-pack.html",
"content": "Add a New Chef Cookbook and Pack to OneOpsDesign ConsiderationsPlan how you want your component to work. The following are some questions regarding design. Although there are others, hopefully these help. Always refer to existing examples like Tomcat. What actions will you provide? stop, start, restart, attach-debugger, others? If the approach is to download and install a binary, or if it is to download source and compile it, you need to decide where the install tarball is going to live. So far, OneOps has stored things in Nexus, but we are exploring other choices. What input will the user give, and what are the defaults for fields (for example, the download URL location of an install tarball)? Decide on the dependency. For example, it could be a scenario in which the user depends on the file system, which in turn, depends on the compute. See if you need any other features in the recipe. For example, you do if the MD5 sum should be checked after downloading a tarball. Will your component be managed by /etc/init.d or an upstart alternative? This will be in the recipe. Where are the log files going to be configured to go?Add a New Component (Cookbook) and a PackThe existing Tomcat cookbook is used as an example here.Create the Component and Pack Edit/Create a metadata.rb file under your cookbook home. In the metadata.rb file: Define the metadata for your cookbooks attributes. Refer to the file above, as an example. Define the recipe list at the bottom. (Refer to the Tomcat metadata.rb.) These recipes are shown as action buttons on the OneOps GUI when you click that component in the Operations phase. When clicked, OneOps calls that recipe from that cookbook. For example, start/stop/restart of Tomcat. The attribute metadata is OneOps specific here. Checkout the attributes in the Tomcat cookbook metadata.rb when the cookbook will be parsed later by the rake install command. This metadata is fed to the CMS DB as a model that tells the OneOps GUI how to render these attributes on the component (cookbook) configuration page. If this cookbook is not part of an existing pack, create a new OneOps pack. A pack is an application platform type definition. Example packs (or platforms) are Tomcat, MySQL, etc. For additional details, refer to the Tomcat pack. Add the following important details to this pack: Variables These are variables that can be used while providing values for the cookbook attributes (next step). These variables are shown in the GUI and their values can be edited by the end user Resource Define resources. There should be one resource per cookbook. You can set values for the attributes here by substituting the variables. Another synonym for resource is component. These resources are shown as a list on the platform details page. Users can click them and edit attributes if needed. Dependency Define the dependency among the resources you defined above. This is called depends_on. This enables OneOps to create the deployment sequence plan. ManagedVia Define the manage-via relation. See the Tomcat pack for an example. Test the Cookbook and PackNow you are ready to test your cookbook and pack. You need to push your model (cookbook and pack metadata) to the CMS DB. This is done by calling a knife plugin developed by OneOps. This plugin can parse the cookbook metadata and pack and push it to CMS. Follow these steps: Export the CMSAPI env variable to point to the CMS instance that you want to sync to. For example: export CMSAPI=http://cms.<your-server>:8080/. 
This pairs well with the shared UI on https://web.dev.<your-server>.com/ cd to the packer directory. Do a Git pull to make sure you have pulled all the latest changes. Invoke the knife plugin by executing: $ circuit install. BE CARFEUL when you use this because you load everything and that impacts others that use the same dev server. This takes some time and pushes all the metadata as CI (configuration item) objects to the CMS. In general, if you are only working on a single cookbook/pack, use the individual commands, not ALL. Remember that if you are using a shared dev-packer CMS backend, that modifying model/packs from one dev environment will affect anybody using that dev-packer environment as a backend. Install a single platform and its cookbook cd ': To load a single cookbook as a model definition: $ bundle exec knife model sync <cookbook-name> To load a platform: $ bundle exec knife pack sync platform/<platform-name> --reloadOR Install all cookbooks and packs in separate commands. BE CAREFUL when you use this because you impact others that use the same dev server! To reload ALL cookbooks from the packer repo: $ circuit model To load ALL packs: $ circuit packs We cache the metadata model in the UI so any metadata model changes must be followed by cache clear:$ curl http://cms.<your-server>.com:8080/transistor/rest/cache/md/clear Test the component configurations in the OneOps GUI https://web.dev..com/' To make sure that your platform and cookbook are working. Set up a local inductor. Do a deployment. Commit the pack code, the cookbook code, and the icons files.See AlsoNeed a jump start on Ruby coding? Check out the Chef Ruby reference page."
},
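A minimal sketch of the metadata.rb attribute and recipe definitions described above, loosely modeled on the Tomcat cookbook; the component name, attribute, defaults, and format fields here are illustrative assumptions, not the actual Tomcat values:

```ruby
# metadata.rb (illustrative excerpt for a hypothetical component)
name        "mycomponent"
description "Installs and configures mycomponent"
version     "0.1"

# OneOps-specific attribute metadata: after the model is synced to the
# CMS, the GUI uses this to render the component configuration page.
attribute 'install_dir',
  :description => "Installation directory",
  :default     => "/opt/mycomponent",
  :format      => {
    :help     => 'Filesystem path where the component is installed',
    :category => '1.General',
    :order    => 1
  }

# Recipes listed here appear as action buttons in the Operations phase.
recipe "start",   "Start mycomponent"
recipe "stop",    "Stop mycomponent"
recipe "restart", "Restart mycomponent"
```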
{
"title": "Add a New Platform Pack",
"url": "/developer/content-development/add-new-platform-pack.html",
"content": "Add a New Platform Pack Include_pack (inherit) Name, desc, category Resources: named Components Relations depends_on managed_via "
},
{
"title": "Add or Delete a Security Group to Open or Close an Additional Port",
"url": "/user/design/add-or-delete-a-security-group-to-open-or-close-an-additional-port.html",
"content": "Add or Delete a Security Group to Open or Close an Additional PortSecurity group is a mandatory component in all packs. It can be used to open up additional ports as required by the application.SolutionTo add a Security Group, follow these steps: Go to the design of the platform where you want to add the security group. Select the secgroup component. In the secgroup details page, specify the multiple inbound rules as required in the form:min max protocol cidr Min/Max: Indicates the port range. For a single port, select both as the same value. protocol: tcp/udp/icmp cidr: IP range in CIDR format. Select 0.0.0.0/0 to apply to all IPs"
},
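Following the min/max/protocol/cidr form described above, a set of inbound rules might look like this sketch, written as a Ruby array as it could appear in a component attribute; the attribute name and rule values are assumptions for illustration:

```ruby
# Hypothetical secgroup inbound rules, one "min max protocol cidr" per entry.
"inbound" => [
  "22 22 tcp 0.0.0.0/0",      # SSH on a single port (min == max)
  "80 80 tcp 0.0.0.0/0",      # HTTP open to all IPs
  "8000 8100 tcp 10.0.0.0/8"  # a port range restricted to one CIDR
]
```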
{
"title": "Add or Reduce Capacity",
"url": "/user/transition/add-reduce-capacity.html",
"content": "Add or Reduce CapacitySolutionCapacity equals deployment. 100% capacity is deployment of all instances in the cloud. Partial deployments are usually used to test new code or components. After the partial deployment is recognized as successful, a full deployment can be executed.Two processes are required to change capacity: Transition provides visibility into the variable set and allows you to execute a partial or full deployment. Operate allows you to control capacity and execute a full deployment.TransitionTransition > Environment > Platform > Summary > Scaling Configuration To view the environments page, click Transition. Select a platform.The Summary page displays the Scaling Configuration panel.OperateYou can only complete a full deployment in Operate.Operate > Summary tab > Force DeploymentTo execute a full deployment, complete the following steps: Select the appropriate assembly. Click Operate. Select an environment (for example, prod2). The summary tab displays status information. Click Force Deploy. Force Deploy gathers all changes made to that environment, regardless of how small or large, up to the moment you click Force Deploy."
},
{
"title": "Add a Team to an Assembly",
"url": "/user/design/add-team-to-assembly.html",
"content": "Add a Team to an AssemblySolution Your browser does not implement HTML5 video.To add a team to an assembly, an admin can follow these steps: Log into the OneOps environment. If you are the admin for the organization, create a new assembly or edit an existing one. Select the team to be edited. Click Edit. Select associate the Team/s with assembly."
},
{
"title": "Add a User to a Team",
"url": "/user/account/add-user-to-team.html",
"content": "Add a User to a TeamSolution Your browser does not implement HTML5 video.UserIf you are user without the required admin permissions to add a user to team, follow these steps to add a user to a team: Log into the OneOps environment. Accept the terms and conditions. Ask your Manager or Team admin to add you in the organization (correct team).AdminIf you are a user with the appropriate admin permissions, follow these steps to add a user to a team. Log into the OneOps portal. Select the appropriate organization. Click the Users tab. Click Add. Find the user and select the team. Click Add User."
},
{
"title": "Advanced Search and Search API",
"url": "/developer/integration-development/advanced-search.html",
"content": "By default, search runs against Elasticsearch indices that are generated based on the actualdata in a content management system CMS. Advanced search can be activated by appending ?source=cms to the URL andaccess the CMS directly.Search CriteriaThe advanced search mode adds further controls to the search criteria beyond the criteria fromstandard search.ScopeScope sets the phase of the OneOps lifecycle or narrows the search to a specific core entity such as account.For example, to search realized instances of a compute, select operations(bom*). To go to the compute configurationthat is specific to an environment, choose transition(manifest*).ClassThe class input allows you to define a specific entity class to be searched and supports auto-completion.AttributeOnce a class is selected, the attribute control aloows you to compose a entity-specific condition for the search withthe attribute name, value and a match with equal, not equal or contains.For example, the compute class has various attributes like public_ip, public_dns, osname and many others, which can beused to create a search condition to find a specific compute."
},
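A small sketch of the ?source=cms activation described above; the host, organization, and query parameter shown are placeholders, not a documented URL scheme:

```ruby
require 'uri'

# Standard search URL (Elasticsearch-backed); host and path are placeholders.
standard = URI("https://oneops.example.com/myorg/search?query=mycompute")

# Appending source=cms switches to advanced search against the CMS directly.
advanced = URI("#{standard}&source=cms")
puts advanced  # => https://oneops.example.com/myorg/search?query=mycompute&source=cms
```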
{
"title": "Apache HTTP Server Component",
"url": "/user/design/apache-http-server-component.html",
"content": "Apache HTTP Server ComponentThe apache component represent the Apache HTTP Server application itself andis a major component of the Apache HTTP Server pack.AttributesThe attributes allow you to completely control the server installation and configuration:NameInstallation TypeServer AdminListen PortsUserRequest TimeoutKeep AliveServer SignatureServer TokensEnable TRACE HTTP MethodEnable ETagsEnable TLSv1Enable TLSv1.1Enable TLSv1.2Enable PHP Info IndexModulesDSO ModulesPrefork ParametersWorker ParametersCustom Server Configuration"
},
{
"title": "Apache HTTP Server Pack",
"url": "/user/design/apache-http-server-pack.html",
"content": "Apache HTTP Server PackThe Apache pack provides the user with the ability to use theApache HTTP Server as a platform in their assembly.The main components involved are: the website component for the actual content the Apache HTTP Server component for the server configuration the compute componentExamplesSimple WebsiteRunning a website with Apache HTTP server can be implemented with a few simple steps: Create a platform with the apache pack. Inspect and optionally configure the apache component of the platform. Configure the website component Use the attachments tab: Provide a Source URL that points at an archive file with the contents of your website The Destination Path determines the temporary location of the archive. Use Execute Command to configure how to extract the archive file. The content is downloaded with root ownership and 600 permissions. Add commands to ensure the webserver user (typically apache) can access the files with chown and chmod.5 Configure the Run on Events to determine, when the content should be downloaded. Recommended values are After Add,After Replace, After Update, On Demand.6.Commit the design changes and proceed with deployment as usual. Apache HTTP Server does not automatically restart if you make additional changes & deployments after the initialdeployment. Ensure to restart the webserver in operations to load any content changes.Enable HTTPSThere are two options for configuring HTTPS. Options 1 terminates SSL at the load balancer and the traffic is onlyencrypted to the load balancer and is clear text from load balancer to web server.The more advanced Option 2 encrypts the traffic all the way to the web server and its configuration follows below.General tips about SSL certificate usage can be found in thecertificate component documentation.After configuring the Apache HTTP Server platform you need to obtain a valid certificate with the followingcharacteristics: Include Private Key: Enabled Include Root Chain: Enabled Chain Order: End-entity first Format: Base64 (OpenSSL) Password: create a password Change the the Listen Port on the website component to 443 and enable SSL. Turn Enable TLSv1 the configuration of the apache component off and remove 80 from the Listen Ports. Add the certificate details in design to use the same certificate for all environment, or in transition for eachenvironment separate as desired. Add a lb-certificate and certificate component and configure the certificate.6.Commit the design changes and proceed with deployment as usual."
},
{
"title": "Apache Tomcat Pack",
"url": "/user/design/apache-tomcat-pack.html",
"content": "Apache Tomcat PackThe Tomcat pack provides the user with the ability to useApache Tomcat as a platform in their assembly.Example - Log file catalina.outThe default log file for Tomcat is catalina.out and both System.out andSystem.err are redirected to it. The location of the file is configured viaLogFilesPath and defaults to /log/apache-tomcat. The system logrotate isused to control the rotation and retention on the basis of eight days or 2GB percompute.Keep in mind that compute storage is ephemeral and log as therefore notkept. For all critical application logging and statistics gathering usage oflogmon is recommended.Example - SSL Termination for TomcatSSL configuration for Tomcat is similar to the usage with theapache pack relying on thecertificate component. As a Javaapplication, Tomcat also requires configuration of thekeystore component.At Load BalancerIn this method communication from client to the load balancer is encrypted(HTTPS), but the communication from load balancer to Tomcat is server is inclear text (HTTP). Add a new lb-certificate component to the tomcat platform design andconfigure the certificate details. Disable the Enable TLSv1 configuration on the tomcat component. Add a load balancer lb component and set the Listeners to https 443 http8080. If you are using a software loadbalancer such as Octavia, set the Listenersto terminated_https 443 http 8080. Commit the design changes and proceed withdeployment as usual.Directly at TomcatIn this method communication is encrypted from client to load balancer (HTTPS)and from load balancer to Tomcat (HTTPS). Add a new certficate component to the tomcat platform in design andconfigure the certificate. Add a keystore component and configure it. Configure the SSL Port in your tomcat component as needed. The default is8443. If desired, disable the HTTP Connector in the tomcat component. Add a load balancer lb component and set the Listeners to ssl_bridge 443ssl_bridge 8443. If you are using a software loadbalancer such as Octavia,set the Listeners to https 443 https 8443. Commit the design changes and proceed withdeployment as usual.Example - Configure Tomcat HttpConnector AttributesTo add attributes to a connector element or change the default value of aconnector attribute, follow the steps below. For additional details, refer tothe Tomcat Connection documentation. Go to the Tomcat configuration in your design. Change the protocol to what is appropriate for your application or port -Normally done in design. The default value is the same as the Tomcat defaultwhich is 'HTTP/1.1'. Key HTTP/1.1 Value HTTP/1.1 Blocking Java connector org.apache.coyote.http11.Http11Protocol Non blocking Java connector org.apache.coyote.http11.Http11NioProtocol The APR/native connector org.apache.coyote.http11.Http11AprProtocol Change the attributes that require tuning by using Additional attributesneeded for connector config.* <!-- A "Connector" represents an endpoint by which requests are received and responses are returned. 
Documentation at : Java HTTP Connector: /docs/config/http.html (blocking & non-blocking) Java AJP Connector: /docs/config/ajp.html APR (HTTP/AJP) Connector: /docs/apr.html Define a non-SSL HTTP/1.1 Connector on port 8080 --> <!-- A "Connector" using the shared thread pool--> <Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" maxKeepAliveRequests="100" <!-- All additional Attributes go here eg below --> /> <!-- Define a SSL HTTP/1.1 Connector on port 8443 This connector uses the JSSE configuration, when using APR, the connector should be using the OpenSSL style configuration described in the APR documentation --><!-- opted in to ssl activation w/ keystore --><Connector port="8443"protocol="HTTP/1.1" SSLEnabled="true"maxThreads="50"keystoreFile="/app/certs/keystore.jks"keystorePass="changeit"scheme="https" secure="true"clientAuth="false" sslProtocol="TLSv1" sslEnabledProtocols="TLSv1,TLSv1.1,TLSv1.2" <!-- All additional Attributes go here eg below --> />Currently you can not add multiple connectors to Tomcat. It is important to testperformance on these settings in the lower environment before you do it inproduction.The SSL connector is only configured, if you have KeyStore and certificateoptional component. For instructions on how to enable SSL, refer to SSL Certificate Component"
},
{
"title": "Artifact Component",
"url": "/user/design/artifact-component.html",
"content": "Artifact ComponentThe artifact component is of core importance andavailable for all platforms. It allows the download of files from a remoterepository managers or other URLs, subsequent extraction of archive files andexecution of scripts.Typically use cases are: download of an application to deploy on an application server download of an application installer to subsequently run the installer toinstall the application and potentially also start it retrieval of required binary resources in archives or as plain files and theirextraction and usage"
},
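An illustrative sketch of the download/extract/execute flow the artifact component provides; the hash keys and values below are assumptions for the example, not the component's documented attribute schema:

```ruby
# Hypothetical artifact component configuration: download an archive,
# extract it, and run an installer script. Keys are illustrative only.
artifact = {
  # remote repository manager or plain URL to fetch from
  "url"     => "http://repo.example.com/releases/myapp-1.2.3.tar.gz",
  # where the archive is placed and extracted on the compute
  "path"    => "/opt/myapp",
  # optional post-extraction step, e.g. running an installer
  "install" => "cd /opt/myapp && ./install.sh"
}
```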
{
"title": "Assemblies API",
"url": "/developer/integration-development/assemblies-api.html",
"content": "To add, update and delete assemblies in your organization, use the Assemblies API.ListGet a list of assemblies in your organization.GET /assembliesResponse<%= headers 200 %> <%= json(:assembly) { |h| [h] } %>CreateCreate a new assembly in your organization. The authenticated user must be a user in the organization.POST /assembliesInputcms_ci : Required HashciName: _Required_ **String**comments: _Optional_ **String**ciAttributes: _Required_ **Hash** description : _Optional_ **String**In Ruby:<%= json %5C :cms_ci => { :ciName => "myassembly", :comments => "These are your comments", :ciAttributes => { :description => "This is your assembly description" } } %>Response<%= headers 200 %> <%= json :assembly %>GetRetrieve the requested assembly.GET /assemblies/:assemblyResponse<%= headers 200 %> <%= json :assembly %>UpdateUpdate the specified assembly with new data.PUT /assemblies/:assemblyInputcms_ci : Required Hashcomments: _Optional_ **String**ciAttributes: _Required_ **Hash** description : _Optional_ **String**Ruby:<%= json %5C :cms_ci => { :comments => "These are your comments", :ciAttributes => { :description => "This is your assembly description" } } %>Response<%= headers 200 %> <%= json :assembly %>DeleteRemove the specified assembly.DELETE /assemblies/:assemblyResponse<%= headers 200 %>CloneTo create a clone with another name, copy the assembly.CatalogSave the assembly into the organization catalog."
},
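A sketch of the Create call documented above (POST /assemblies with a cms_ci payload). The endpoint and payload shape follow the API entry; the host and basic-auth credentials are placeholders:

```ruby
require 'net/http'
require 'json'
require 'uri'

# Create an assembly: POST /assemblies with the cms_ci hash from the docs.
# Host, organization path segment, and credentials are placeholders.
uri = URI("https://oneops.example.com/myorg/assemblies")
req = Net::HTTP::Post.new(uri, "Content-Type" => "application/json")
req.basic_auth("user", "token")
req.body = {
  :cms_ci => {
    :ciName       => "myassembly",
    :comments     => "These are your comments",
    :ciAttributes => { :description => "This is your assembly description" }
  }
}.to_json

res = Net::HTTP.start(uri.host, uri.port, :use_ssl => true) { |http| http.request(req) }
puts res.code, res.body
```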
{
"title": "Assess the Health of Applications, Platforms and Clouds",
"url": "/user/operation/assess-health-applications-platforms-clouds.html",
"content": "Assess the Health of Applications, Platforms and CloudsSolution Your browser does not implement HTML5 video.To monitor the health of applications, platforms and clouds, follow these steps: In the OneOps Dashboard, select the appropriate organization and assembly. Click Operate. The Environments page displays. Select the environment. The Environments page defaults to the summary tab that displays a status for the following conditions: Platforms Deployment Health Auto repair The health is displayed, including the number of instances in the environment. Good = green Disabled = red Click the graph tab to display an overall view of the health of the application. From the graph, view the overall health of the environment. Each component is represented by a circle icon. The color of the icon indicates the component condition: Green: The environment is healthy and no action needs to be taken. Orange: A change is being executed. This could mean a deployment to that instance or that the instance is undergoing an auto repair. Red: Click on a red-colored component to investigate. Drill down to see the status of the component. See Also Control Your Environment Through Operations Video"
},
{
"title": "Attachments",
"url": "/user/design/attachments.html",
"content": "AttachmentsAttachments create files or perform custom actions with your code or command lines. An Attachment can be assigned to execute any combination of the following Component actions: Before or after Add, update, deleteIn addition, you can use Attachments to set up custom actions against components that you can execute by using the operations page."
},
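A sketch of the kind of attachment definition this describes, reusing the fields shown in the Apache pack example elsewhere in these docs (Source URL, Destination Path, Execute Command, Run on Events); the hash keys and event names are illustrative, not the exact schema:

```ruby
# Illustrative attachment: download an archive around component actions
# and run a command afterwards. Keys and event names are assumptions.
attachment = {
  "source_url"   => "http://repo.example.com/site-content.tar.gz",
  "dest_path"    => "/tmp/site-content.tar.gz",
  # extract, then make the files readable by the webserver user
  "exec_command" => "cd /var/www && tar xzf /tmp/site-content.tar.gz && chown -R apache:apache /var/www",
  # when to run: any combination of before/after add, update, delete
  "run_on"       => ["after-add", "after-update", "on-demand"]
}
```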
{
"title": "Auto Repair",
"url": "/user/operation/auto-repair.html",
"content": "Auto RepairUse auto repair to automatically heal instances which are marked unhealthy due to someThreshold violation or missing Heartbeat. Notifications are sent when an auto repair action isexecuted. Event component defined in a platform has an associated repair action specific to the component. Therecipe for healing a component differs from one another. There are different set of instructions executed forcompute repair then for tomcat repairFor example: if a Tomcat instance has become unhealthy, then a Tomcat repair action is triggered which eventuallytries to restart the Tomcat service. Similarly if a compute has become unhealthy, it first tries to SSH to theinstance and checks whether the agent process is running. If for some reason the unhealthy compute instance notSSHable, then the next recipe tries to reboot the compute.The user should understand the path of restoration for any unhealthy instance. It makes no sense to defineunhealthy state for diskfull threshold definition. As reboot of compute or restart of some process is not going tofix the disk space issue. Such threshold should be created with notify-only state."
},
{
"title": "Auto Replace",
"url": "/user/operation/auto-replace.html",
"content": "Auto ReplaceUse auto replace to automatically replace unhealthy instances. Notifications are sent to the application owners at an auto replace action event. Auto replacement of unhealthy instances is an extension to auto repair.Unhealthy threshold conditions cause instances to become unhealthy. When an instance becomes unhealthy, an auto-repair action is triggered. Unhealthy instances are replaced based on the definition of auto replace for that instance.Auto replacement can be enabled for a platform under the Automation Status section on the operations summary page of that platform.Following two attributes dictate the replacement of the instance: Max Unhealthy duration (mins): Wait time before auto replacement is triggered. The default value is set to 9999999 which means ~19 years. (In other words, auto replace is turned off for the platform.) Min # of repair attempts: Auto replace is not initiated until the # of auto-repair attempts is greater than or equal to this value. The default value is 9999999. Auto replacement logic counts the number of unhealthy events for different instances (of different components). Be careful when selecting this value for the minimum # of repair attempts.Without exception, all components are auto replaceable. Replacement of instances happens one at a time. Replacement of an unhealthy instance also generates a replace work order for its dependent instances. On completion of the replacement deployment, if other unhealthy instances become healthy, no further replacement is done. If other unhealthy instances remain unhealthy, the appropriate action is performed.SolutionTo enable auto scale for a platform, follow these steps: Go to Operations. Select your environment. Select the platform under the environment. If the platform did not have auto replace enabled, click the button to enable it. Set the below auto replace configuration fields to an appropriate value: Replace unhealthy after minutes: Auto-Replace will intiate for an unhealthy instance only if it stays unhealthy for this much duration Replace unhealthy after repairs: Auto-Replace process will intiate only if auto-repair action executes these many times and if the component is still unhealthy. "
},
{
"title": "Auto Scale",
"url": "/user/operation/auto-scale.html",
"content": "Auto ScaleUse auto scale to automatically flex up or down computes based on some Threshold violation. Notifications are sent to the application owners at an auto scale action event trigger and recovery. Auto scaling is used to balance the load on computes for maximum utilization. The decision to flex up or down is completely at the discretion of application owner.The scaling configuration definition provides the details on the step size for flexing along with boundary limits Min: Minimum number of computes to be present in the platform at all times. Flexing down will stop once the minimum number of instances has reached its limit Max: Maximum number of computes that can be added to the platform. Flexing up will stop once the maximum number of instances has reached its limit Step Up: Number of instances to be added per cloud while flexing up for every over-utilized violation in one deployment Step Down: Number of instances to be removed per cloud while flexing down for every under-utilized violation in one deploymentTo enable auto scale for a platform, follow these steps: Go to Operations. Select your environment. Select the platform and look for the Automation Status section under the summary tab. If the platform did not have auto scale enabled, click the button to enable it. Go to Transition and add or edit the monitor threshold for all those components which have metrics that can indicate if a resource is over or under-utilized. There are four states that you can assign to a Threshold to define when the trigger condition is met: Over-utilized: Used to scale up Under-utilized: Used to scale down Unhealthy: Used to repair and replace Notify only: Notifies only via your notifications With each of these states, you receive notifications as long as you have notifications set up."
},
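The monitor entries elsewhere in these docs use a threshold(bucket, stat, metric, trigger, reset) DSL; a sketch of thresholds that could drive the over-/under-utilized states described above, where the metric name and limits are example values (assigning a threshold to a state is done in Transition per the steps above):

```ruby
# Illustrative scaling thresholds in the pack monitor DSL used elsewhere
# in these docs. 'CpuIdle' and the numeric limits are example values.
:thresholds => {
  # candidate for the Over-utilized state: 5-minute average idle CPU
  # stays below 20% (i.e. utilization above 80%) -> flex up
  'CpuOverUtilized'  => threshold('5m', 'avg', 'CpuIdle',
                          trigger('<', 20, 5, 1), reset('>', 30, 5, 1)),
  # candidate for the Under-utilized state: CPU mostly idle -> flex down
  'CpuUnderUtilized' => threshold('5m', 'avg', 'CpuIdle',
                          trigger('>', 90, 5, 1), reset('<', 80, 5, 1))
}
```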
{
"title": "Availability Modes",
"url": "/user/transition/availability-modes.html",
"content": "Availability ModesAvailability modes are set when you create an environment. They can be set globally or by Platform.There are two availability modes: Single: Usually creates one compute per platform. Redundant: Adds load balancers, clusters, rings (whatever is the Best Practice for the Platform). Allows configuration of minimum and maximum scale options."
},
{
"title": "Variables Override Prevention",
"url": "/user/design/avoid-override-variables.html",
"content": "Variables Override PreventionVariables can be edited anytime in design or transition phase. A new variable can only be added in design and then can be pulled into the environmentOn design pull, any variable which were edited in environment (transition phase) are overridden by design values.To prevent any accidental transitional updates, a locking feature is available in the Transition phases.SolutionThe following are steps to retain transitional values: Edit the component or variable to be retained in the Transition phase. Click the unlock icon. The lock icon changes to locked. Save your changes. Commit and deploy."
},
{
"title": "Azure Setup",
"url": "/user/account/azure-setup.html",
"content": "Azure SetupHow to Set Up Azure for OneOpsRoughly follow these directions, but you dont need all of it. This is the abridged version.PrerequisitesThis assumes you have the Azure CLI installed and in ARM mode.Create an Application in Azure Active DirectoryCreate some application in Azures Active Directory (AD). This will be the client that OneOps uses to control Azure. The name and URL values are not important, but should not clash with anything else in AD. azure ad app create --name "someapp" --password "dontusethis" \ --home-page "https://someapp.azure.example.com" \ --identifier-uris "https://someapp.azure.example.com/someapp"This will return a few things: results info: Executing command ad app create + Creating application someapp data: AppId: {CLIENT_ID} data: ObjectId: abcd1234-bbbb-cccc-dddd-654321fedcba data: DisplayName: someapp data: IdentifierUris: 0=https://someapp.azure.example.com/someapp <https://someapp.azure.example.com/cliapp> data: ReplyUrls: data: AvailableToOtherTenants: False info: ad app create command OKApp ID is your client ID. The password you supplied is your client secret. The application can be found in Azures AD now.Create a Service PrincipalUse the App ID (Client ID) to create a service principal.azure ad sp create {CLIENT_ID}Again, this returns the following: results info: Executing command ad sp create + Creating service principal for application {CLIENT_ID} data: Object Id: {SP_ID} data: Display Name: someapp data: Service Principal Names: data: {CLIENT_ID} data: https://someapp.azure.example.com/someapp <https://someapp.azure.example.com/someapp> info: ad sp create command OKThis has added a key to the application in Azure AD. You wont be able to see its value.Grant the Service Principal PermissionsUse the Object ID {SP_ID} returned above to assign the service principal Contributor permission to the scope of your subscription using your subscription ID (this may be too loose for real usage).azure role assignment create --objectId {SP_ID} -o Contributor -c /subscriptions/{YOUR_SUBSCRIPTION_ID}/Setup OneOpsYou can now use the Client ID (App ID) and Client Secret (password) when configuring your Azure cloud."
},
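For convenience, the three commands in the Azure Setup entry above can be run as one short sequence; a minimal sketch, assuming the placeholder names, IDs and password are replaced with your own values:
# create the AD application that OneOps will use as its client
azure ad app create --name "someapp" --password "{CLIENT_SECRET}" \
  --home-page "https://someapp.azure.example.com" \
  --identifier-uris "https://someapp.azure.example.com/someapp"
# create a service principal for the returned AppId
azure ad sp create {CLIENT_ID}
# grant the service principal Contributor rights on the subscription scope
azure role assignment create --objectId {SP_ID} -o Contributor -c /subscriptions/{YOUR_SUBSCRIPTION_ID}/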
{
"title": "Azure",
"url": "/user/account/azure.html",
"content": "AzureOnly Azure Resource Manager (ARM) is supported in OneOps. Currently only Linux workloads are supported. Windows will come at a later date.If you are not using Express Routes, everything is dynamically created apart from the 4 pieces of information needed to communicate with Azure; Tenant Id, Subscription Id, Client Id, and Client Secret.When you deploy into an Azure cloud, it will generate a Resource Group and Availability set in the first step of deployment. The resource group and availability set will be named: {org}-{assembly}-{platform}-{env}-{region}During creation of the computes the NICs, OS disk, VNETs, Subnets, and public IPs will be created. If you are using the Express Route option, a public ip, VNET and Subnet will not be created, instead it will use what was configured when the cloud service was setup.After the compute is created the rest of the deployment is the same as it would be for the other cloud providers, except DNS, LB, and GDNS.Azure DNSOneOps creates DNS on Microsoft Azure as a resource by creating A DNS Zone resource in the Resource Group Followed by creating DNS Record-Set(s) in that zoneRecord-Sets are assembled together in a single zone and will use any of two values (Type A, Type CNAME). Record type A maps a hostname to an IP address. Record type CNAME creates an Alias of another domain nameAs a result the deployment has a DNS created with hostname mapping and domain alias if configured.For more details on DNS see: Azure DNSLoad BalancerOneOps creates and configures following resources in resource group to create a load balancer: Front end IP configuration - has public IP addresses for incoming network traffic. Back end address pool - has network interfaces (NICs) for the virtual machines to receive network traffic from the load balancer. Load balancing rules - has rules mapping a public port on the load balancer to port in the back end address pool. Inbound NAT rules - has rules mapping a public port on the load balancer to a port for a specific virtual machine in the back end address pool. Probes - has health probes used to check availability of virtual machines instances in the back end address pool. Servers - your server machines (virtual machines) to entertain your requestsBefore creating a load balancer following steps are performed on Microsoft Azure A Virtual Network is created with the specified subnet pool (for later use in backend IP pool) A Public IP is devised that will be your internet facing IP for your servers An Availability-Set is generated and all your back-end servers belong to that availability-set And finally a Load-Balancer is set-up Next n Virtual Machines are provisioned to run your servers (where n is the number of servers you want to Set-Up) and for each machine a NIC (network interface card) is built.Traffic ManagerBefore a traffic manager is created on Azure, it requires Deployed Azure cloud services, Azure websites, or other endpoints to production environment. A name decided for Traffic Manager domain.This Traffic Manager domain name will also be used as a unique prefix to create the FQDN in the Azure public domain.The result will be <traffic-manager-domain-name>.trafficmanager.net OneOps configured monitoring configuration. 
Traffic routing method i.e Performance, Weighted or Priority.Based on above information OneOps creates Traffic Manager profile resource on Azure by: Creating a Traffic Manager profile Configuring traffic routing method settings Configuring endpoints Configuring monitoring settingsAs a last step when the Traffic Manager Profile is created on Azure, OneOps points companys domain name DNS resource record to the created profile. Traffic Manager is live after this last step.Note: Please do not make any changes to the Traffic Manager configurations from the portal."
},
{
"title": "Benefits",
"url": "/overview/benefits.html",
"content": "OneOps shortens time to market and fosters a true devops culture with benefitsfor Business Developer OperationBusiness BenefitsApplication Lifecycle Management (ALM) products seek to alleviate the friction between business,technology and operations groups in organizations entering the Digital Economy and adopting cloudcomputing. OneOps redefines PaaS 2.0 capabilities by being the most comprehensive open-source ALM productavailable, backed by @WalmartLabs.OneOps is an application lifecycle management platform that developers can use to develop and launch new productsfaster, forklift legacy applications to the cloud easily and maintain them throughout their entire lifecycle withadvanced auto-healing and auto-scaling capabilities. OneOps enables developers to code their products once and runthem in a hybrid, multi-cloud environment. This means they can test and switch between different cloud providersto take advantage of better pricing, technology and scalability without being locked into one cloud provider.OneOps enables rapid innovation with no barriers meaning developers can quickly spin up infrastructure in amatter of minutes, in public or private clouds, enabling nearly instantaneous time to market. Thats businessagility!OneOps provides continuous lifecycle management Once a developer launches their application through OneOps,it can run that app on auto-pilot. OneOps automatically scales, heals/repairs and even replaces infrastructurewhen needed if unforeseen things go awry in the cloud.OneOps delivers cloud portability enabling developers to move applications, databases or even entireenvironments freely from one cloud or provider to another. Developers are able to cloud shop and take advantageof better technology, capacity, scalability, security, customer service or lower costs on demand.OneOps delivers control of cloud environments back to the developers and IT operations teams. Instead of cloudproviders dictating what proprietary tools and technologies have to be used, suffering with expensive vendorintegrations, or cobbling together multiple solutions, OneOps is the one-stop shop to bring the applicationsrequirements to the cloud.Developer Benefits One design Any Cloud Works out of the box with multiple public and private cloud infrastructure providers and technologies like OpenStack. One design All Environments OneOps enables repeatable deployments across all environments. Multiple test environments? A-B testing? Spinning up additional production clusters? Easy! Cloud Technology Agnostic Future-proof your automation and play nice with infrastructure! Works with bare metal, virtualized, and containerized infrastructure. Manage Deployments Flexible deployment approaches with cancellation capabilities for when you need to roll back. Phase by percent, by cloud. Do some instances in parallel, some sequentially. High Availability So long as you have load balancing services defined for your cloud, creating High Availability environments is as simple as setting how many instances should be in the cluster. Design Catalog Create and share designs between teams or with others in the open source community using the Design Catalog. Auto-healing If a monitor indicates a problem, healing operations are triggered targeting the specific deployed software platform. Auto-replace When auto-healing isnt enough, since OneOps is the system of record for the configuration, it simply discards the old instance and creates a new one. 
Many supported products OneOps comes out of the box supporting a long list of ISV products. Auto-scaling When a monitor indicates a resource is being over- or under- utilized, OneOps will adjust the size of the cluster automatically, keeping costs optimized. Application Programming Interface Every feature in the web interface is backed by a RESTful service. Monitoring Each integrated ISV product retains instrumentation enabling continuous tracking of metrics against thresholds. Auto-trigger healing, replacement, scaling, or escalation events. Best practices are the default OneOps keeps it safe by coming with best practices automation and configuration settings out of the box. OneOps was Built to Scale Backed by @WalmartLabs, you can trust that OneOps will manage your workload up to whatever size infrastructure your product requires. Continuous Delivery OneOps integrates with Maven/Jenkins to provide your teams software development lifecycle (SDLC) with Continuous Delivery capabilities. Environment Profiles Do you configure your production environments with High Availability and Disaster Recovery as defaults? Are your staging environments always in the same cloud? Use profiles to so configure new environments with one click. IT Operations BenefitsGreater control of cloud environment means that instead of cloud providers dictating what proprietary toolsand technologies we have to use, or how much bandwidth we can have, OneOps puts the control back into the hands ofdevelopers.Cloud portability enables users to move applications, databases or even entire cloud environments freely fromone cloud provider to another. Users are able to cloud shop and take advantage of better technology, capacity,scalability, security, customer service or lower costs. Monitoring All automated software is setup to feed critical metrics to the OneOps monitoring sub-system. Best practices metric thresholds determine when its time to heal, replace, scale, or notify a human to step in. Alerting OneOps has an extensible alerting framework. Configure deployment notifications so the operations center can see whats changing, or send an email or SMS when a monitor crosses a critical threshold. LDAP/AD Identity integration Configure OneOps to use your corporate LDAP/AD service to make new developer on-boarding trivial. Configuration Management System OneOps is the system of record for all aspects of deployed applications system, environment and cloud infrastructure configuration. Gain visibility into cloud utilization, and should a data center disappear, re-create your environments in minutes. Naturally Compliance-Ready OneOps has the necessary features, such as role-based authorization, for managing PCI workloads within compliant infrastructure. Cross-Cloud Showback Understanding cloud usage and cost is a daunting problem the more complex cloud-specific cost models are in play. OneOps delivers great insights by estimating the cost of utilized cloud resources across heterogeneous cost models/clouds by dimensions such as org, team, product, provider, software type, etc. Cross-Cloud Quota Budgets are impossible to manage in a hybrid cloud architecture without controls existing at the next higher level in the stack. OneOps is that layer and will soon use show data to ensure cloud costs are not just visible, but manageable. Configuration Policy Governance Proactively avoid cloud technical debt! 
As your company's cloud workload configuration management system, OneOps helps you easily govern all application design, software configuration, environments, clouds, and resource utilization settings. Custom-Defined Clouds Mix and match technologies to define your own cloud. Leverage fully IaaS compliant infrastructure, or mix in some enterprise hardware. As long as theres a way to programmatically manage the device, OneOps can automate it! Cloud Migration Made Easy OneOps is a pure automation solution and remains outside of the applications codebase. Applications needn't know they're being managed by OneOps, so moving legacy or proprietary applications to the cloud is easy. Extensive Cloud Usage Reporting Leveraging the ELK stack, its possible to visualize many aspects of your companys cloud operations. OneOps managing OneOps OneOps contains a design for itself that allows an administrator to easily grow and manage the OneOps infrastructure up to extreme scale, simply and easily the same way your DevOps teams will manage theirs. Automation Technology Agnostic OneOps application lifecycle management operates on top of lower level automation that can be implemented in any of the popular frameworks such as Chef, Puppet or Ansible. SaaS-Ready at Birth OneOps was designed from the ground up as a SaaS product. You can run it as a publicly available product, or setup departments as independent organizations with their own cloud definitions, policies, and isolated workspaces. Designed and Proven to Scale Backed by @WalmartLabs, you can trust that OneOps can scale to handle the largest data centers, application clusters, and do it with resiliency and availability. Commitment to Open Source Very soon, all @WalmartLabs development will be done in our GitHub repositories and released regularly. The only aspects of OneOps to remain proprietary will be integrations to proprietary components, products or systems and therefore of no value to the community. This small portion of the codebase will continue to shrink. OneOps has a Bright Future OneOps is the way to the cloud at Walmart and is supported by @WalmartLabs. We are looking to build a vibrant community to help extend the core feature set, add support for more cloud providers, and automate lifecycle for more ISV products. "
},
{
"title": "Build, Install, and Configure an Inductor",
"url": "/admin/operate/build-install-configure-an-inductor.html",
"content": "Build, Install, and Configure an Inductor Build the jar file. It is in target/inductor-VERSION.jar.mvn clean package Build the gem.gem build inductor.gemspecIt creates an inductor-VERSION.gem in the root directory. Install the gem from the root directory.gem install inductor-VERSION.gemConfigureThe inductor gem creates configuration files and directories when the inductor add command is run.For a private zone, the authorization key is specified when you create the zone from the UI.For a public zone, the key is specified in the packer/services/provider file.For reference only:A conf.dir argument is passed to the inductor at runtime and contains an inductor.properties.This is generated via the inductor gem when inductor add is run.The following are sample contents:# usually set by inductor gem or inductor_config_gen and based on zoneamq.user =amq.pass =amq.in_queue = public.packer.providers.aws-ec2.ec2.us-east-1a.controller.workordersamq.out_queue = controller.responseamq.connect_string = failover:(ssl://kloopzmq:61617?keepAlive=true)?initialReconnectDelay=1000&startupMaxReconnectAttempts=2packer_home = /opt/gw-packer/currentretry_count = 2ip_attribute = public_ipscan_path = /opt/oneops/inductor/retryscan_period = 5data_dir = /opt/oneops/tmpmgmt_domain = changeme.oneops.com"
},
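Taken together, the build and install steps above reduce to the following shell sequence; a sketch, assuming it is run from the inductor source root and that VERSION matches the built artifacts:
mvn clean package                 # builds target/inductor-VERSION.jar
gem build inductor.gemspec        # creates inductor-VERSION.gem in the root directory
gem install inductor-VERSION.gem  # installs the inductor CLI
inductor add                      # generates inductor.properties and the related directories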
{
"title": "Case Studies",
"url": "/overview/case-studies.html",
"content": "Case Study: Walmart.comChallengeFour years ago, the Walmart Global eCommerce system was a monolithicapplication, deployed once every 2 months. This represented an unsustainablerate of innovation given the competitive climate. Walmart recognizedthe need to fundamentally transform technology delivery to compete in the DigitalEconomy.Solution@WalmartLabs was founded in an effort to re-invent how products are designed anddelivered within the eCommerce division. A project code named Pangaea paved theway. Walmarts eCommerce site was re-built following a service orientedarchitecture, while adopting a DevOps culture, and migrating to cloud basedinfrastructure. Knowing that providing developers cloud infrastructure aloneonly reveals the next layer of friction, managing application life cycle, OneOpswas acquired early in 2013 and has been under active internal development since.ResultsToday the Walmart eCommerce platform is hosted in some of the largest OpenStackclouds and is managed exclusively via OneOps. On a typical day there are now over1,000 deployments, executed on-demand by development teams, each taking only minuteson average to complete.The three necessary ingredients for success were: A service oriented architecture for the site, without which the complexities of coordinating integration of themonolithic release dominated the schedule. Localized and empowered ownership of and accountability for each service through building a DevOps culture. Access to infrastructure at the fingertips of the developers, what they needed, when they needed it. This was accomplished through OneOps application lifecycle management backed by cloud infrastructure, enabling teams to focus on the most valuable aspect of their job the code. Jeremy King CTO & Head of @WalmartLabs OneOps was pivotal in Walmarts Global eCommerce technology transformation, where we reinvigorated the culture following the DevOps model, updated the tech stack in line with modern best practices, and adopted cloud based infrastructure. OneOps pulls these dimensions together seamlessly. "
},
{
"title": "Catalogs",
"url": "/user/account/catalogs.html",
"content": "CatalogsOneOps provides design catalogs for common commercial and open source applications. In addition, you can create custom designs and then save them in a catalog enabling you to share those designs to drive architectural consistency within your organizationCatalogs are application templates. Catalogs allow you to design your Assembly more quickly or use them for examples.Catalogs are used at Add Assembly where user can select the design from the catalog to bootstrap their assembly with a popular design blueprintSteps to create Catalogs: Login to the OneOps environment. Goto the assembly whose design could be saved as catalog Click on save to catalog button Provide a valid name to catalog alogn with breif description Click saveTo view list of all available catalogs click on catalog menu from the top menu bar. If catalog menu is not visibile, goto your profile and enable catalog flag"
},
{
"title": "Certificate Component",
"url": "/user/design/certificate-component.html",
"content": "Certificate ComponentThe certificate component is part of every platform andcan be used to add SSL support to the platform e.g. Tomcat, Apache orElasticsearch. The lb-certificate component is part of all platforms thatprovide redundancy via load balancing and adds SSL support for these scenarios.Both components share the setup and allow you to configure a number of detailsabout your SSL certificates. Locate the platform to which you want to add SSLcertificate support and press the + button beside the certificate or thelb-certificate component as desired and provide the necessary details:AttributesName: name for the certificateAuto Generate: flag to enable automatic certificate generationKey: certificate key, .key file contentCertificate: certificate content, .crt file contentSSL CA Certificate Key: certificate of the certificate authorityPass Phrase: pass phrase for the certificateConvert to PKCS12: flag to determine if the certificate should be converted to the PKCS12 archive formatTime remaining to expiry: the time remaining until the certificate expires and needs renewal, supports y (year),m (month) and d (day) vaules such as 3m, this data is taken into account for monitoring and notifications so usersare alerted about upcoming certificate expiries.Directory path: path where the certificate file is savedThese tips will help determining the correct certificate when receiving thecertificate as a pem file: Certificate is the first section from the certificate pem file. SSL CA Certificate Key is comprised of section 2 and 3 from the certificatepem file. Key is the 4th section from your certificate pem file starting with-----BEGIN CERTIFICATE----- and ending with -----END CERTIFICATE-----inclusive. Use openSSL rsa -in filename.pem -out filename.key to create a key file fromthe pem file to determine the SSL Certificate Key field value.Automatic Certificate GenerationAutomatic generation and provisioning of certificates can be enabled with theAuto Generate flag. It relies on the integration with a certificate managementweb service as a cloud service as part of the OneOps deployment modeled.Common Name: Full common name of the certificate to be provisioned. Maximum length is 64 charactersSubject Alternative Name: allows you to insert values into the certificates assubject alternative names. This is an optional attribute and acceptsmultiple SANsExternal (Internet Facing) and Domain Name: enable the setting and add adomain name and the value is passed to the service so that it can be insertedinto the certificate. An example domain attribute value: walmart.com</br>Pass Phrase: certificate download password. Must be minimum 12 andmaximum 20 characters, At least 1 upper case and 1 lower case letter, specialcharacter and a numberOnce generated, the certificate is downloaded and its data is used for thevalues of the attributes Key, Certificate, SSL CA Certificate Key andTime remaining to expiry.MonitoringA Nagios monitoring script is generated for the time remaining until the expiryin each environment for certificates. The created monitoring data is availableon the monitors tab of the certificate component in the platform deployed inan environment.The monitoring triggers notifications when the expiry date is within the nextmonth and alerts are raised about the expiry. 
If you change the monitorthresholds State from Notify Only to Defunct, the certificate expirytriggers an automatic replacement of the certificate with a new auto-provisionedcertificate.Monitoring and automatic replacement is not supported for non-managedcertificates like the lb-certificates."
},
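To derive the Key value from a pem file as described above, and to check the expiry date that feeds the Time remaining to expiry attribute, standard openssl commands can help; a sketch, and note the x509 expiry check is a general openssl technique, not something this page prescribes:
openssl rsa -in filename.pem -out filename.key   # extract the key from the pem file
openssl x509 -enddate -noout -in filename.pem    # print the expiry date of the first certificate in the file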
{
"title": "Chocolatey Package Component",
"url": "/user/design/chocolatey-package-component.html",
"content": "Chocolatey Package ComponentThe chocolatey-package component is used to install additional software artifacts oncomputes from Chocolatey Repos. Chocolatey is a machine level package manager that is built ontop of nuget command line and the nuget infrastructure.Besides the global configuration available for any component such as Name, you can configure thefollowing attributes:Repository:Repository URL: is used to specify the repo used to store artifacts.Repository: is used to specify the specific repo name where the artifacts are located that you want to install.Authentication:Username and Password allow you to authenticate into your above defined repository.Artifact:Identifier: is used to specify the specific artifact name to install. The identifier can be a URL, S3 Path, local pathor Maven identier of the artifact package to download.Version: is used to specify the version of the artifact to install. Can be a specific version or latest to pull the mostrecent version.Checksum: is used to specify the SHA256 checksum of the artifact package.Path: is used to specify the repository path prefix.Destination:Install Directory: is used to specify the directory path where the artifact will be deployed to and versions kept.Variables are typically used here to manage commonly used information in a central place.Deploy as user: is used to specify the system user used to deploy the application. Deploy as group: is used to specify the system group to run the deployment as.Environment Variables: is used to specify any variables to be present during the deployment.Persistent Directories: is used to list directories to be persisted across code updates (ex. logs, tmp, etc.)Expand: is used to expand compressed files such as .tgz, tar.gz, .zip, .tar, .jar and .war.Stages:Configure: is used to specify any commands to be executed to configure the artifact package.Migrate: is used to specify any commands to be executed during the migration stage.Restart: is used to specify any commands to be executed during a restart."
},
{
"title": "CI Notification Format",
"url": "/developer/integration-development/ci-notifications-format.html",
"content": "OneOps broadcasts the CI notifications to all configured sinks as well as to email recipients configured.A CI notification json has a format like below sample:{ ts: "2016-01-01T15:51:42.183", cmsId: 1234, cloudName: "abc", severity: "warning", type: "ci", source: "ops", subject: "webapp-tomcat-compute-ssh:SSH Up is violated.", text: "compute-11031075-2 is in unhealthy state; Starting repair", nsPath: "/OneOps/webapp/prod/bom/tomcat/1", payload: { total: "6", oldState: "good", unhealthy: "5", eventName: "webapp-tomcat-compute-ssh", className: "bom.main.2.Compute", threshold: "SSH Up", state: "open", metrics: "{"avg":{"up":0.0}}", ciName: "compute-11031075-2", good: "1", newState: "unhealthy", status: "new" }, timestamp: 1460404258633, environmentProfileName: "PROD", adminStatus: "active", manifestCiId: 58108355}The new state of a ci (payload.newState) could be any of below : notify unhealthy underutilized overutilized goodAn open type of notification event (payload.state) is created in case a threshold/monitor is violated for a CI (component instance) - for example if cpu idle goes below 20A matching close event notification is created in case a reset condition is met for a CI. - For example if cpu idle moves to above 60A unique notification can be identified by this combination - {cmsId + payload.eventName + payload.threshold}. Here the cmsId identifies a unique ci object - like one particular compute (or tomcat) instance. The ( payload.eventName + payload.threshold) identifies the monitor-threshold that got violated.There will be a matching close notification event with the same {ciId + payload.eventName + payload.threshold} but with payload.state = closeHere is the sample for a matching close event for above open event:{ ts: "2016-01-01T19:51:42.183", cmsId: 1234, cloudName: "abc", severity: "info", type: "ci", source: "ops", subject: "webapp-tomcat-compute-ssh:SSH Up recovered.", text: "compute-11031075-2 is in good state.", nsPath: "/OneOps/webapp/prod/bom/tomcat/1", payload: { total: "6", oldState: "unhealthy", unhealthy: "1", eventName: "webapp-tomcat-compute-ssh", className: "bom.main.2.Compute", threshold: "SSH Up", state: "close", metrics: "{"avg":{"up":100.0}}", ciName: "compute-11031075-2", good: "5", newState: "good", status: "new" }, timestamp: 146040231302183, environmentProfileName: "PROD", adminStatus: "active", manifestCiId: 2345}"
},
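Because a notification is uniquely identified by {cmsId + payload.eventName + payload.threshold}, a sink can correlate open and close events by building that key; a minimal sketch, assuming the notification JSON is stored in notification.json and jq is available:
jq -r '[(.cmsId|tostring), .payload.eventName, .payload.threshold] | join(":")' notification.json
# => 1234:webapp-tomcat-compute-ssh:SSH Up  (the same key for the matching open and close events above)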
{
"title": "Client-Side Aggregations",
"url": "/admin/operate/client-side-aggregations.html",
"content": "Client-Side AggregationsTo minimize complexity and maximize scalability metrics are aggregated and time-aligned on the source.It uses the nagios service_perfdata_file_processing_command on a 30sec interval (0.02s real/wall time usage).Details Monitor ci -> /etc/nagios/conf.d -> nagios interleaves and executes -> /var/log/nagios/perf.log Nagios service_perfdata_file_processing_command /opt/nagios/libexec/calc_perf_buckets.rb (~200 lines) debug log snippet flush 1m-avg CpuIdle - 1433451960 - 1433452020time:val:delta:weight 1433451992:96.39:32:0.53time:val:delta:weight 1433452052:96.18:28:0.47aggregate: 96.292 Outputs to service.perflog which logstash transportsformat:<epoc> <pretty time> <ci_id>:<ci_name>:<bucket-stat (1m-avg)> <perf blob (key=value space delimited)>"
},
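The aggregate in the debug snippet above is a time-delta weighted average of the samples falling in the one-minute bucket; a quick arithmetic check, assuming each sample is weighted by its delta out of the 60-second bucket:
echo "scale=3; (96.39*32 + 96.18*28) / 60" | bc   # => 96.292, matching the logged aggregate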
{
"title": "Cloud Offerings and Services API",
"url": "/developer/integration-development/cloud-offerings-api.html",
"content": "The Cloud Offerings and Cloud Services APIs allow you to access a list of the different offerings and services of theconfigured clouds.Offerings includes information about DNS, storage and computes and others. Services includes information about NTP,mirrors, computes, load balancing, storage, filestores, logstreams, DNS and othersThe data is available viaGET /cloud/offeringsGET /cloud/servicesfor the public name space.It can be narrowed down to a specific organization named ORG with/ORG/clouds/offerings/ORG/clouds/servicesor a cloud CLOUD within an organizationGET /ORG/clouds/offerings.json?ns_path=/ORG/_clouds/CLOUDGET /ORG/clouds/services.json?ns_path=/ORG/_clouds/CLOUDThe results consists of numerous specific attributes for the various offerings and services as well as cost. An examplefor a compute offering iscompute: [{ ciId: 47034473, ciName: "xxxlarge", ciClassName: "cloud.Offering", impl: "oo::chef-11.4.0", nsPath: "/ORG/_clouds/labs-snv4/cloud.service.Openstack/snv4", ciGoid: "47134391-28934-47035473", comments: "", ciState: "default", lastAppliedRfcId: 0, createdBy: "oneops", updatedBy: "auser", created: 1455113490801, updated: 1476466478082, nsId: 47034391, ciAttributes: { criteria: "(ciClassName matches 'bom.[a-zA-Z0-9.]*.Compute' OR ciClassName=='bom.Compute') AND ciAttributes['size']=='3XL'", specification: "{}", description: "Average 3xlarge Linux vCPU cost per Hour", cost_rate: "1.23", cost_unit: "USD" },attrProps: { }}"
},
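The endpoints above can be exercised with any HTTP client; a sketch using curl, where the host name and any authentication are placeholders for whatever your OneOps installation requires:
curl https://oneops.example.com/cloud/offerings   # all offerings, public name space
curl https://oneops.example.com/cloud/services    # all services, public name space
curl "https://oneops.example.com/ORG/clouds/offerings.json?ns_path=/ORG/_clouds/CLOUD"   # one cloud in one org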
{
"title": "Clouds",
"url": "/user/account/clouds.html",
"content": "CloudsClouds are a collection of services including compute nodes, DNS, load balancing and others. Your cloud applications aredefined as assemblies and can be deployed to one or many clouds.Learn more about clouds and their benefits with this video:Supported cloud providers areAmazon AWS, Google Cloud Platform, Microsoft Azure, OpenStack and Rackspace."
},
{
"title": "CMS Sync",
"url": "/developer/content-development/cms-sync.html",
"content": "CMS SyncTo update the CMS database with new component metadata and/or platform management packs, we extended Chefs knife to load (model sync) the files in the directory to the database.This sync is shown in the Preload Configuration section below:# need to be in the root packer directory# full sync components, platforms, services, etcbundle exec rake install# model only - components and relationshipsutil/reload_model-or-knife model sync -aknife model sync -a -r# components onlyknife model sync -a# relations onlyknife model sync -a -r# single packutil/reload_pack <packname>-or-bundle exec knife pack upload platform/<packname> --reload# providers onlybundle exec rake providers"
},
{
"title": "Components",
"url": "/user/design/components.html",
"content": "ComponentsComponents are the building blocks used to assemble packs and therefore the platforms, that define yourassembly. Components in a pack can be required or optional and can depend on each other. Examples for components arethe compute, the operating system, the operating system user and many others.You can see the components in the design view of your assembly in the list on the right.Any component additions or configuration changes need to be deployed to take effect at runtime: Add and/or change the component. Press Save. Create a assembly design release by pressing Commit Design. Select the environment in transition. Retrieve the design changes with Pull Design. Implement the changes in transition by pressing Commit and Deploy Observe the changes in operation of your chosen environment.A number of base components are available in all platforms: Artifact Component Certificate Component Compute Component Daemon Component Download Component File Component Filebeat Component Firewall Component Fully Qualified Domain Name (fqdn) Component Hostname Component Job Component Library Component Logstash Component Objectstore Component Operating System (os) Component Secrets Client Component Security Group (secgroup) Component Sensu Client (sensuclient) Component Share Component SSHKeys Component Storage Component Telegraf Component User Component Volume ComponentIn addition specific components are available in various platforms anddocumentation is available for a limited subset: Apache HTTP Server Component Chocolatey Package Component DotNet Framework Component IIS Website Component Java Component Keystore Component Load Balancer (lb) Component NuGet Package Component Website Component"
},
{
"title": "Compute Component",
"url": "/user/design/compute-component.html",
"content": "Compute ComponentThe compute component is of core importance and all platforms since itrepresents the virtual machine and operating system on which the platform runs.You can configure the compute component as part of your platform in design phase and specific to an environment in thetransition phase.Once your assembly is deployed in an environment you can access the computes in operation.ConfigurationBesides the global configuration available for any component such as Name and Description, you can configure thefollowing attributes:Instance Size: The instance size determines characteristics of the virtual machine created for operation in terms ofprocessing power, memory size, networking bandwidth and operating system. The size values use clothing sizing valuesof from extra small to extra large and beyond - XS, S, M, L, XL, XXL, 3XL, 4XL. Instance sizes optimized for computeperformance, network performance, storage and memory are available. The generic values are mapped tocloud specific sizes.Networking - PAT ports: Configure the Port Address Translation PAT from internal ports (key) to external ports(value). Networking - Require public IP: Check if a public IP is required. Setting is used when the compute cloud servicepublic networking type is interface or floating.The Cloud Services configuration displays the services required by the component and provided by the cloud. Typicallycompute and dns are required, while others such as mirrors or ntp are optional and can be enabled or disabledas desired.The Compute Depends On and Depend On Compute sections contain lists of related components.The attachements tab allows the configuration of attachments associated to the compute.The monitors tab can be used to configure compute-related monitors.Example Use CasesUpdate the Size or OS of a ComputeChanging a compute in design, like any other design change, requires you to: Save the change and commit the overall design. Pull the design to the environment. Deploy the environment. If you are changing a compute configuration like size or a relatedsetting all deployed instances have to be flagged to be replaced.To roll out a change you need to either disable and re-enable the whole platform perform a rolling replacment.A platform wide approach means that the application will be unavailable during the procedure. Change the configuration of the compute in design. Set the action to replace all the computes in operation. Disable the entire platform. Commit and deploy. Enable the platform to commit and deploy.Alternatively you can roll the change out via replacing computes: Change the configuration of the compute in design. Set the action to replace all the computes in operation. Choose a step size of less than 100% for a rolling upgrade. Pull the design changes to the environment. Deploy to the environment."
},
{
"title": "Computes in Operation",
"url": "/user/operation/compute.html",
"content": "Computes in OperationThe compute component represents the virtual machine (VM) and operating system onwhich a platform runs in operation. This section explains all the available data and features and explains some commonuse cases: Overview Example Use Cases Find IP Number of a Compute Fix Unresponsive Computes Replace a Bad Compute Upgrade OS Packages on a Compute Connect to Compute via SSH Update the Size or OS of a Compute OverviewYou can locate computes by navigating to a platform of your assembly within an environment in the operation phase: Assemblies item in the left navigation bar Click on the name of your assembly Click on the name of the desired environment Click on the name of the platform that contains the compute Click on the compute componentA list of computes is displayed with specific information about the compute including: Hostname - dynamically composed from --- Instance Name Instance Id Hypervisor Availability Zone OS Name Server Image Name Number of CPU Cores Ram in MB Private IP Public IPAlternatively you can use [search](../general/search.html] to access one or a list of computes, find a compute via akeyboard short cut or access a compute via a favorite.You can select one or multiple computes and perform _Action_s: reboot: performs a software-based restart of the compute. repair: attempts restart the monitoring of the compute that caused it to report as unhealthy. If unsuccessful,proceed with a reboot automatically. powercycle: perform a hard restart of the compute. apply-security-compliance: upgrade-os-security: apply security-related operating system package upgrades upgrade-os-package: upgrade a specific operating system package. upgrade-os-all: upgrade a specific operating system package. status: display the status in a dialog. list of IPs: show a list of the IP numbers of the selected computes in a dialog. replace: mark the compute for replacement. The actual replacement needs to be forced via a new deployment. undo replace: remove the replacement mark.Clicking on the name of a specific compute allows you to navigate to the details view. It contains tabs related to summary: summary information about the compute including sections for Status, Actions, Availability , Important Attributes_ and Action History. configuration: detailed view of the compute configuration attributes including a replace feature. monitors: list of monitors notifications: charts about the compute availability and notifications. procedures: list of procedures such as status actions, that were performed on the compute logs: access to the logs.Example Use CasesFind IP Number of a ComputeLocate the compute in operation and look at the public ip value.Note that a computess IP Address may change. Avoid building any reliance on an IP address in your application oroperations. Consider an IP Address transparent and changing like a process ID number PID. Whenever a compute goesthrough reboot, repair or replace activities, the compute may receive a new IP Address.Fix Unresponsive Computes with Reboot Locate the computes of the desired platform in operation. Select the check boxes beside the names. Click on the Action button on the right side on top of the list. Select reboot in the dialog. Leave step size to 100% to reboot all selected computes at once, smaller values result the action performed in batchese.g. a step size of 50% and a selection of 10 computes causes 5 computes to be rebooted first and when they are rebooted and healthy the next 5 are rebooted. 
Press confirm to start.For a single compute you can: Locate the compute. Go to the summary tab. Press on the Choose Action to Execute button under the Actions header. Select Confirm in the dialog. If the compute does not respond after the reboot, try with a repair action.Replace a Bad ComputeReplacing a compute results in the loss of any data available with that VM, for example, log files etc. The new computehas new identifiers and attributes such as IP numbers are changed as well. Ensure that there is no active or pending deployment for the environment. Follow the steps to fix unresponsive computes from above using the action replace. Navigate to the environments summary tab. Press the Force Deploy button Review the deploy plan in the dialog and press the Deploy button.Upgrade OS Packages on a Compute Locate the compute in the list and select it Select the action upgrade-os-package to upgrade a specific package. Set the argument using the package name. Press the Start now button.Alternatively select the upgrade-os-all action to apply all upgrades and install any new required packages orupgrade-os-security to apply security-related upgrades only.All Kernel-related patch updates require a compute reboot. After the packages are installed, do a rolling reboot ofcomputes.Connect to Compute via SSHYou can ssh into a compute VM once you have ensured that your certificate is trusted. This allows you to inspect thecurrent state of the compute and investigate problems and other aspects of the compute configuration at runtime: Ensure that the platform for the compute you want to connect to includes auser component with the desired Username and Authorized Keys. If necessary, add the user component, pull the design in your environment and deploy. Determine the Public IP of the compute. Connect with ssh username@public_ip.Update the Size or OS of a ComputeAn update to the compute configuration including size, OS and others needs to be donein the design phase and then deployed."
},
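Putting the SSH steps above together; a sketch, assuming the user component was deployed with your public key and the Public IP was looked up on the compute list:
ssh myuser@203.0.113.10   # myuser and the IP are placeholders; use your configured Username and the compute's Public IP
# the IP can change after reboot, repair or replace, so never hard-code it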
{
"title": "Configure ECV Check URL on OneOps",
"url": "/user/transition/configure-ecv-check-url-on-oneops.html",
"content": "Configure ECV Check URL on OneOpsSolutionTo configure ECV check URL follow these steps: In Transition, go to the assembly. Select the environment that contains the platforms with redundant availability. Select the platform and then click the LB component within this platform. The default ECV attribute value is GET /. Edit the component and provide an appropriate contextual GET URL. For example: GET /myapp/checkitsup. Save the changes. To reflect this change, go to the environment and perform a commit and deploy. For LB to understand that a service is reachable, the URL used for the ECV check should return in less than two seconds."
},
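Before committing the change above, you can verify the ECV URL responds the way the LB expects, i.e. successfully and in under two seconds; a sketch using curl against a hypothetical host with the example path from above:
curl -s -o /dev/null -w "%{http_code} %{time_total}\n" --max-time 2 http://myapp.example.com/myapp/checkitsup
# expect a success status code and a total time below 2 seconds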
{
"title": "Contact",
"url": "/overview/contact.html",
"content": " GitHubSource of OneOps and various related tools and the website itself can be foundon in the oneops organization on GitHuband we welcome contributions. OneOps SlackWe use our OneOps Slack as chat platform. Youcan request an invite or sign up yourself. TwitterWe spread project news and more via our blog and our One_Ops on Twitter. EmailIf you are a cloud provider, infrastructure hardware vendor, software vendor, represent an open source product or are a consultancy, wed enjoy hearing fromyou. Please reach out to us via email atpartner@oneops.com."
},
{
"title": "Contribute",
"url": "/overview/contribute.html",
"content": "Our code is publicly available in several repositories on GitHub athttps://github.com/oneopsWe utilize GitHub for issue tracking and contributions. You can contribute in two ways: Report an issue or file a feature request as an issue. Add features, fix bugs yourself, or contribute your code with a pull request.Contributing Code or DocumentationContributions are accepted as pull requests on the relevant repository.Contributor are required to sign our by Contributor License Agreement(the CLA). Each pull request is automatically verified, against the listof contributors that have signed the CLA. If not, you are required to sign theCLA for the contribution to be merged.Further details are available in theWalmart CLA repository.The website is the main documentation for OneOps and we welcome issues and pullrequests for it as well. If you want to help, check out ourdocumentation guideline.Code Review ProcessEach GitHub pull request will go through 3 step before merge: We will execute our automated test cases against the pull request. If thetests failed the pull request will be rejected with comments provided on thefailures. If tests pass, the OneOps engineering team member will do the review of thechanges. Technical communication possible via github.com pull request page. Whenready, your pull request will be tagged with label Ready For Merge. Your patch will be merged into master including necessary documentationupdates.Apache 2.0 LicenseOneOps uses the Apache 2.0 license and any changes orenhancements have to use the same license.OneOps Issue Tracking in GitHubIf you are familiar with OneOps and know the repository that is causing you a problem or if you have a feature requeston a specific component, you can file an issue in the corresponding GitHub project. All of our Open Source Softwarecan be found in our GitHub organization.Otherwise you can file your issue in the OneOps project and we will make sureit gets filed against the appropriate project.To decrease the back and forth in issues, and to help us get to the bottom of them quickly, we use the issue templatebelow. You can copy/paste this template into the issue you are opening and edit it accordingly.### Version:[Version of the project installed]### Environment:[Details about the environment such as the Operating System, cookbook details, etc.]### Scenario:[What you are trying to achieve and you can't?]### Steps to Reproduce:[If you are filing an issue, what are the things we need to do to reproduce your problem?]### Expected Result:[What are you expecting to happen as the consequence of the reproduction steps above?]### Actual Result:[What actually happens after the reproduction steps?]Provide Feedback or Contact UsYou can provide feedback or contact us by sending email to support@oneops.com or by using one of the correspondingOneOps Slack channels : #admin, #devel, or #user."
},
{
"title": "Control Environment",
"url": "/user/operation/control-environment.html",
"content": "Control EnvironmentSolutionUse Operations to perform operations on a component. Each component has specific controls. Your browser does not implement HTML5 video.To work with the controls for an environment, follow these steps: Go to Operations. Select the environment. Select the component. Summary: Compute: Status, Reboot, upgrade-os-security, powercycle, repair, upgrade-os-all Tomcat/Jboss: Status, Stop, Start, Restart, Repair, Debug Artifact: Repair, Redeploy, Custom User Attachment And more Configuration: Displays the latest configuration deployed on the compute. It is possible to Replace or Cancel the Active deployment here. Monitors: Displays the graphs for the monitors that are associated with the component. Read more details atMonitors. Logs: Procedures: Displays the procedures that are called from Operations."
},
{
"title": "Cost Management",
"url": "/user/account/cost-management.html",
"content": "Cost ManagementTo provide an overview of cost management capabilities in OneOps product. The cost management feature allows organization administrators to: Define a cost structure for all cloud services offerings Set limits/quotas on cloud service allocations (actual + reserved) of cloud services offerings per assembly Reporting on current cloud service allocation (actual + reserved)DetailsCost management feature will add the following: Cloud service offerings information in CMS with index in ES to capture the unit resource costs associated with consuming cloud resources (to be used both for trackingCost as well as for active quota limits) Relationships between assembly environments and cloud offerings in CMS with index in ES to capture the current allocations, actual + reserved. (to be used for current allocationReports and for real-time quota limits) Deployment time cost capture in ES as part of the workorders to capture the actual utilization of resources over time (to be used as historical event stream for showback and chargeback reporting) Cost Management features can be categorized primarily into the following three categories: Cost Tracking - modeling and maintenance of offerings and cost tracking during deployments with reporting Cost Reporting - implement cost index in the backend and a cost explorer in the UI Cost Management - limits, budgets, projections, monthly billing/chargeback"
},
{
"title": "Create a Custom Payload",
"url": "/developer/content-development/create-a-custom-payload.html",
"content": "Create a Custom PayloadTo get configuration data from other parts of your assembly, you can add a custom payload definition to the resource in a circuit.Prerequisite: You must understand the bom, manifest, and base models.Lets start with an sample payload that gets all the computes in an environment for the Cassandra component. Go to a bom.oneops.1.Cassandra component. Go up to the manifest component (manifest.oneops.1.Cassandra). Use the DependsOn relation to get to the manifest compute. To get the bom instances, use the base.RealizedAs relation.'computes' => { 'description' => 'computes', 'definition' => '{ "returnObject": false, "returnRelation": false, "relationName": "base.RealizedAs", "direction": "to", "targetClassName": "manifest.oneops.1.Cassandra", "relations": [ { "returnObject": false, "returnRelation": false, "relationName": "manifest.DependsOn", "direction": "from", "targetClassName": "manifest.oneops.1.Compute", "relations": [ { "returnObject": true, "returnRelation": false, "relationName": "base.RealizedAs", "direction": "from", "targetClassName": "bom.oneops.1.Compute" } ] } ] }' } Use the cms-admin tool which is part of the CMS to visualize / verify the relation names, directions, and class names. You can browse cms-admin using a / nspath starting point:http://localhost:8080/cms-admin/ci.do?nspath=%2F&classname=&ciname=&Search=Search You can use the instance /ci id in the url of OneOps ui to go directly to the ci:http://localhost:8080/cms-admin/ci.do?id=482717There are many examples of payloads in the circuits. Most likely there is an existing payload you can reuse by changing a few classes."
},
{
"title": "Create a Team in an Organization",
"url": "/user/account/create-a-team-in-an-organization.html",
"content": "Create a Team in an OrganizationSolution Your browser does not implement HTML5 video.Creating a team is the cleaner way to assign roles in OneOps. Login to the OneOps environment. If you are an admin of a organization, you can see the tab, Team. Create a new team and assign the appropriate roles to it. Edit the assembly. Click Team. Click edit and select the newly created Team.About Team FieldsThe following is some information about team fields: Name: Name of team Description: A brief description about the purpose of the teamAccess Management Manage Access: When checked and this team is added to a cloud or an assembly, manage access allows team members to manage other team assignments. Additionally, when checked, manage access allows teams members to create new clouds and assemblies. Organization Scope: When checked, this team automatically has access to all assemblies and clouds in the organization (without the need to assign the team explicitly and governed by the permissions specified below) Cloud Permissions: Allows members to have a complete view of the cloud with limited or complete write/execute privileges depending upon the selected permissions listed below Services: When checked and this team is added to a cloud or has Organization Scope, it allows team members to manage cloud services, for example, add/update/delete cloud services. Compliance: When checked and this team is added to a cloud, it allows team members to approve deployment, involving corresponding cloud. For example, if a team has this permission and there is one cloud named qa-azure1 with compliance object added, then all deployments involving the qa-azure1 cloud would require an approval. This approval can be granted by admin or members of this teams. Support: When checked and this team is added to a cloud, it allows team members to approve deployment, involving the corresponding cloud. For example, suppose a team has this permission and there is one cloud named stg-openstack1 with a support object added, then all deployments involving the stg-openstack1 cloud would require an approval. This approval can be granted by admin or members of this teams. Assembly Permissions: Allows members to have a complete view of the assembly with limited or complete write/execute privileges depending upon the selected permissions listed below. The permissions are also called DTO (design, transition and operations) permissions. Design: When checked and this team is added to an assembly, it allows the team members to manage the design including add/update/delete platform, components and variables within the assembly Transition: When checked and this team is added to an assembly, it allows team members to manage the transition phase which allows members access to: Add/update/delete environment Add/update/delete component monitor thresholds Update components and variables within the environment Pull design Commit open releases Perform deployments Operations: When checked and this team is added to an assembly, it allows team members to manage the operate phase allowing members access to execute any actions/procedures and mark any instance for replacement. User Members: Add one or more individual users Group Members: Add one or more groups"
},
{
"title": "Create an Environment",
"url": "/user/transition/create-an-environment.html",
"content": "Create an EnvironmentAn Environment captures your operational requirements for an instance of your Assembly. Examples of Environments are dev, qa, or prod environments. Because all environments use the same base application model, the overall effort to maintain all of the environments is minimized.Solution Your browser does not implement HTML5 video.To create an environment, follow these steps: Go to the assembly. In the Transition phase, click create environment. Select the appropriate environment profile Click New Environment. Enter the properties for the new environment. Include the following: Environment name that is unique to your Assembly Description Administrative Status from the options listed below. Provision: Environment is under provisioning state Active: Environment is up and serving production traffic Inactive: Environment is up however doesnt serve production traffic Decommission: Environment is no longer in use and can be removed. Continuous Deployment: Not in use DNS DNS Subdomain: Makes the environment unique Global DNS: If checked, creates the GLB For a new environment, Provision is the correct status value. Other values should be used after the environment is created and functioning. This status value is used only for administrative purpose by NOC or OneOps admins to understand the state of the environment. Configure the availability. Availability mode: Single: Generates an environment without load balancers, clusters, etc. Redundant: Inserts and configures clusters, load balancers, rings, etc., depending on what the best practice is for each platform. High-Availability: Adds multi-provider or multi-region to a redundant environment. Availability can only be changed when creating the environment. Select one or more primary and secondary clouds from the list of available options. Debug should not be used by users. It is used to debug OpenStack issues. To save the environment, click Save. A new Environment manifest is generated and the Environment details page is shown. Review the changes, which are now a set of add Actions. Click Commit and Deploy. The deployment plan is generated. Review the deployment steps and then click Deploy."
},
{
"title": "Create Assemblies to Design Applications",
"url": "/user/design/create-assemblies-to-design-application.html",
"content": "Create Assemblies to Design ApplicationsAssembly design is the high-level representation of your business application. This is your Golden configuration which serves as a source for the realization of your applications in new or existing environments. Your browser does not implement HTML5 video.To add an assembly, follow these steps: In the Account navigation box, click create assembly. Enter a unique name. You can use letters, numbers, and a dash. No other characters are allowed. You are notified to match the requested format if you use any invalid characters. Enter a brief description. Enter a single email address to which notifications are sent. This can be a distribution list, but it can only be a single email address. Optionally, select a Design from the drop-down list. To find a design, navigate the catalogs by clicking the arrows. The available designs here in the list is also called as Catalog Click Save.See Also Add a Platform to a Design More on the lifecycle of an Assembly"
},
{
"title": "Create Environment Dependency with Environment Profiles",
"url": "/user/design/create-environment-dependency-environment-profiles.html",
"content": "Create Environment Dependency with Environment ProfilesConsider the following when creating environment dependencies with environment profiles: When a user attempts to create a new environment, a drop-down menu is shown to enable (and require) the user to choose from a list of predefined environment profiles. The selected profile is used to pre-fill the environment attribute values (including cloud association). Select the correct Administrative Status. The user has the ability to keep the default configuration settings as is (typical scenario) or edit some or all of the values, if required, before saving this new environment. The environment profiles are not meant to restrict the user while selecting the choices available to create environment. Instead the profiles are meant to pre-fill the most commonly used values for a typical environment setup. After a new environment is saved, it has an attribute (profile) that points to the underlying environment profile by referencing the profiles name. The profile association can be updated at any time by editing the environment. Changing the existing environment profile association does not mean that any of the newly associated profile attributes or settings overwrite any of the existing environment configuration values. The environment profile attribute bootstrapping is only applied during initial environment creation. It is not intended to be maintained as an active attribute value mirroring during any life-cycle changes to either a concrete environment or its underlying profile.See Also Environment Profiles View, Add, or Edit Environment Profiles"
},
{
"title": "Create Parameterized Component Actions",
"url": "/developer/content-development/create-parameterized-component-actions.html",
"content": "Create a Parameterized Component ActionsSummaryAs a cookbook or circuit developer, if you want to accept user inputs before executing any component actions and use those input values inside the action recipe, you need to specify those details in the metadata.rb of that cookbook as mentioned in the details below.DetailsLets say you need to add an action (or modify an existing action) to accept a text input called path and some more inputs from the user. You need to modify the metadata.rb of that cookbook and add args metadata to the action as shown below:metadata.rbrecipe "restart", "restart application"recipe "stop", "stop application"recipe 'run-script', :description => 'Run a script', :args => { "path" => { "name": "path", "description": "Path to a file", "defaultValue": "", "required": true, "dataType": "string" }} Right now, only string (text field) is supported on gui. Rest of the types will be supported soon and this document will be updated then The content of the args can be either Ruby hash or a plain JSON. After you sync this new metadata.rb using the knife plugin, the end user sees the GUI dialog box to enter those inputs before starting the procedure execution. In your recipe code, you can use the arglist field from the json node payload to use the input values. Example take a look at volume component and look for log-grep-count "
},
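A minimal sketch of the same args metadata expressed as a Ruby hash (the page notes that either a Ruby hash or plain JSON is accepted). The run-script action name and path argument mirror the example above; the commented arglist parsing line is an assumption about the workorder payload shape, not a confirmed API:

```ruby
# metadata.rb -- args metadata as a Ruby hash; action name and argument
# are the illustrative ones from the example above.
recipe 'run-script',
       :description => 'Run a script',
       :args => {
         'path' => {
           'name'         => 'path',
           'description'  => 'Path to a file',
           'defaultValue' => '',
           'required'     => true,
           'dataType'     => 'string'
         }
       }

# Inside the action recipe (sketch; the exact payload shape is an assumption):
# the user-entered values arrive in the arglist field of the node payload.
#   args = JSON.parse(node['workorder']['arglist'])
#   script_path = args['path']
```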
{
"title": "Daemon Component",
"url": "/user/design/daemon-component.html",
"content": "Daemon ComponentThe daemon component is available on all platforms. Itcan be used to create and configure an operating-system level daemon toguarantee service startup upon reboots."
},
{
"title": "Default Monitor Thresholds",
"url": "/developer/content-development/default-monitor-thresholds.html",
"content": "Default Monitor ThresholdsThe tables below list all of the default monitor thresholds implicitly added in all environments. As an app owner, you should review and update these thresholds to what is best suited for your app. Monitor Type Resource Name Threshold Definition Description Action CPU Load Heartbeat compute If collection for any of the load metrics (load1, load5 or load15) is missed, raises a missing heartbeat pulse event which makes the compute instance unhealthy. Unhealthy notification is raised. Repair action is executed on the affected instance. CPU Load compute 'HighLoad' => threshold('1m','avg','load5',trigger('>=',30,3,1),reset('<',15,1,1)) Compute is heavily loaded if the load5 average value goes above 30. Then set the trigger. Notify only. No action. CPU Usage compute 'HighCpuUsage' =>threshold('5m','avg','CpuIdle',trigger('<=',10,15,2),reset('>',15,15,1)) Compute utilization is very high if cpuidle goes below 10% which means that more than 90% is utilized. Notify only. No action. Socket Connection compute No default threshold is defined. Monitor can be set up with different State: TIME_OUT, ESTABLISHED, CLOSE_WAIT, etc. Network compute No default threshold is defined. Filesystem root volume / 'LowDiskSpace' => threshold('1m', 'avg', 'space_used', trigger('>=', 90, 5, 2), reset('<', 85, 5, 1)) Compute has low disk space when space_used is more than 90% at root disk. /'LowDiskInode' => threshold('1m', 'avg', 'inode_used', trigger('>=', 90, 5, 2), reset('<', 85, 5, 1)) Compute has low inode when inode_used is more than 90% at root disk / Notify only. No action. System messages file /var/log/messages Memory Compute 'HighMemUse' => threshold('1m', 'avg', 'free', trigger('<', 50000, 5, 4), reset('>', 80000, 5, 4)) Compute is using too much memory when available (free) memory goes lower than 50MB. Notify only. No action. Process cron crond process 'CrondProcessLow' => threshold('1m', 'avg', 'count', trigger('<', 1, 1, 1), reset('>=', 1, 1, 1)) crond process should be running. If not, the process count goes below 1 and raises the alert. 'CrondProcessHigh' => threshold('1m', 'avg', 'count', trigger('>=', 200, 1, 1), reset('<', 200, 1, 1)) crond process count should not be above 200. If found, raises the alert. Notify only. No action. Process sendmail postfix process 'PostfixProcessLow' => threshold('1m', 'avg', 'count', trigger('<', 1, 1, 1), reset('>=', 1, 1, 1)) postfix process should be running. If not, the process count goes below 1 and raises the alert. 'PostfixProcessHigh' => threshold('1m', 'avg', 'count', trigger('>=', 200, 1, 1), reset('<', 200, 1, 1)) postfix process count should not be above 200. If found, raised the alert. Notify only. No action. Process SSH Daemon sshd process 'SshdProcessLow' => threshold('1m', 'avg', 'count', trigger('<', 1, 1, 1), reset('>=', 1, 1, 1)) sshd process should be running. If not, the process count goes below 1 and raises the alert. 'SshdProcessHigh' => threshold('1m', 'avg', 'count', trigger('>=', 200, 1, 1), reset('<', 200, 1, 1)) sshd process count should not be above 200. If found raises the alert. Notify only. No action. 
Volume /app Thresholds Monitor Type Resource Name Threshold Definition Description Action Filesystem /app volume 'LowDiskSpaceCritical' => threshold('1m', 'avg', 'space_used', trigger('>=', 90, 5, 2), reset('<', 85, 5, 1)) Volume has low disk space when space_used is more than 90% at root disk /app 'LowDiskInodeCritical' => threshold('1m', 'avg', 'inode_used',trigger('>=', 90, 5, 2), reset('<', 85, 5, 1)), Volume has low inode space when inode_used is more than 90% at root disk /app Notify only. No action. Tomcat Thresholds Monitor Type Resource Name Threshold Definition Description Action Tomcat process tomcat-daemon 'TomcatDaemonProcessDown' => threshold('1m', 'avg', 'up', trigger('<=', 98, 1, 1), reset('>', 95, 1, 1)) tomcat daemon process is considered down if its process availability goes below 90%. Even though the threshold says below 90%, in reality the process no longer exists. Do not change the average values to 100%. Notify only. No action. JvmInfo tomcat 'HighMemUse' => threshold('1m','avg', 'percentUsed',trigger('>=',90,5,1),reset('<',85,5,1)) Note: Values are calculated from http://localhost:#{port}/manager/status?XML=true ThreadInfo tomcat 'HighThreadUse' => threshold('5m','avg','percentBusy',trigger('>=',90,5,1),reset('<',85,5,1)) Note: Values are calculated from http://localhost:#{port}/manager/status?XML=true RequestInfo tomcat No Threshold defined. Note: Values are calculated from http://localhost:#{port}/manager/status?XML=true Log tomcat 'CriticalLogException' => threshold('15m', 'avg', 'logtomcat_criticals', trigger('>=', 1, 15, 1), reset('<', 1, 15, 1)) AppVersion tomcat Artifact App App-Specific Thresholds Monitor Type Resource Name Threshold Definition Description Action Exception Monitoring artifact Level * Log Path: * /log/logmon/logmon.log * Pattern to look for: Exception * thresholds: 1 (Alert on every occurrence ) * Severity: Major * If more than 2 Critical 'CriticalLogException' => threshold('15m', 'avg', 'logtomcat_criticals', trigger('>=', 1, 15, 1), reset('<', 1, 15, 1)), 'logfile' => '/log/apache-tomcat/catalina.out', 'warningpattern' => 'WARNING', 'criticalpattern' => 'CRITICAL' The three parameters above define the file to be monitored for warning and critical patterns. Notify only. No action. Apache Server Thresholds Monitor Type Resource Name Threshold Definition Description Action ServerStatus Apache 'TooBusy' => threshold('5m','avg','idle_workers',trigger('<',5,5,5),reset('>',5,5,5)), 'HighUserCpu' => threshold('5m','avg','cpu_user',trigger('>',60,5,1),reset('<',60,5,1)), 'HighSysCpu' => threshold('5m','avg','cpu_sys',trigger('>',30,5,1),reset('<',30,5,1)) Note: All the metrics are calculated using http://localhost:#{port}/server-status Notify only. No action. ActiveMQ Thresholds Monitor Type Resource Name Threshold Definition Description Action BrokerStatus activemq Note: Metrics values are calculated using queues: <protocol>://<host>:<port>/admin/xml/queues.jsp topics: <protocol>://<host>:<port>/admin/xml/topics.jsp Log activemq 'CriticalLogException' => threshold('15m', 'avg', 'logtomcat_criticals', trigger('>=', 1, 15, 1), reset('<', 1, 15, 1)), 'logfile' => '/opt/apache-activemq-5.5.1/data/wrapper.log', 'warningpattern' => 'OutOfMemory', 'criticalpattern' => 'OutOfMemory' The three parameters above define the file to be monitored for warning and critical patterns. Log Path: /log/logmon/logmon.log Pattern to look for: Exception. Notify only. No action. 
Memory activemq No threshold defined 'protocol' => 'http', 'port' => '8161', 'path' => '/admin/index.jsp?printable=true' Note: Metrics values are calculated using <protocol>://<host>:<port>/admin/index.jsp?printable=true Notify only. No action. Process Daemon 'ActiveMQDaemonProcessDown' => threshold('1m', 'avg', 'up', trigger('<=', 98, 1, 1), reset('>', 95, 1, 1)) Notify only. No action. "
},
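All of the threshold definitions above use the same Ruby monitor DSL. Below is a minimal sketch of a custom override; the 'LowDiskSpaceWarn' name and the 85/80 values are chosen purely for illustration, and the parameter reading in the comments is inferred from the defaults listed on this page:

```ruby
# Reading of the DSL as used throughout this page (a fragment that lives
# inside a monitor's thresholds hash, not a standalone script):
#   threshold(<bucket>, <stat>, <metric>, trigger(...), reset(...))
#   trigger / reset take (<operator>, <value>, <duration>, <occurrences>)
# Illustrative override: warn earlier than the default 90% disk usage.
'LowDiskSpaceWarn' => threshold('1m', 'avg', 'space_used',
                                trigger('>=', 85, 5, 2),  # fire at >= 85% used
                                reset('<', 80, 5, 1))     # clear below 80% used
```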
{
"title": "Delete an Environment",
"url": "/user/transition/delete-an-environment.html",
"content": "Delete an EnvironmentSolution Your browser does not implement HTML5 video. Go to the Transition phase. Select the environment name that you want to delete. Select the summary tab. Click Disable Environment. In the Warning pop-up window, click Ok. Commit and Deploy This disables all the Platforms and deletes the environment."
},
{
"title": "Delete a Platform",
"url": "/user/account/delete-platform.html",
"content": "Delete a PlatformSolutionTo delete a platform, follow these steps: Log into the OneOps portal. Select your organization from the left or top navigation. Select your assembly from the right navigation. Select the Platform to be deleted. Click delete. Commit.See Also Manage Assemblies Video Edit or Delete a Platform Video"
},
{
"title": "Deploy an Organization",
"url": "/user/account/deploy-an-organization.html",
"content": "Deploy an OrganizationSolutionTo deploy an organization, follow these steps: Click the Organization name on the top, left side of the screen. Select the Deployment tab. The Deployment tab displays all the pending deployment for the organizations. Deployment displays the status and progress for the deployment. "
},
{
"title": "Deploy Application after Design Changes",
"url": "/user/transition/deploy-application-after-design-changes.html",
"content": "Deploy Application after Design ChangesSolution Your browser does not implement HTML5 video.To deploy an application after making design changes, follow these steps: Go to the assembly. Edit the design. Remember to commit the changes. Go to the Transition phase. Click Pull Design. Verify the changes. Click Commit & Deploy."
},
{
"title": "Deploy an Application for the First Time",
"url": "/user/transition/deploy-application-for-first-time.html",
"content": "Deploy an Application for the First TimeSolution Go to the assembly. Create the environment. If the cloud is not created, OneOps redirects you to the Create Cloud screen. On the create environment screen, select the primary and secondary cloud. Click Commit & Deploy."
},
{
"title": "Deploy Application With Database",
"url": "/user/transition/deploy-application-with-database.html",
"content": "Deploy Application With DatabasePurposeThe purpose of this document is to briefly explain how to configure and deploy a simple web application with a MySqldatabase.Highlevel Overview Setup MySQL database on OneOps. Create a schema on MySQL. Configure the web application. Deploy the web application on the server. Step by Step Select the organization where the web application needs to be deployed. Create a new assembly. Go to design tab and add a new platform for the database from list of available packs e.g. use the MySQL pack. Configure the database schema either using Additional DB statements or the Attachment option. Once all the required components under the database platform are configured, you have to commit the changes. Go to the transition tab and create a new environment. The newly created environment will pull all the designchanges. Once the design pull is completed, click on the commit and deploy button. It shows the deployment plan. If no changesare required, click on the Deploy button. This action acquires a new VM, installs MySQL, configures the database andstarts MySQL. Once deployment is completed successfully, perform a simple test for database connection. Now, go back to design tab and another platform for the server where the web application will be deployed. In thisarticle we will be using a Tomcat pack. Configure the web app to be deployed either using the artifact component or attachment option. Commit the changes. Go to the transition tab and pull the new design changes into the environment. Commit and Deploy generates the deployment plan for the new server. After successful deployment go to the Tomcat server fqdn component and find the DNS details to access theapplication. Best Practices Add users SSH keys under the user component for each platform in design. Add the tomcat-daemon component as part of tomcat platform design. For the attachment option, always use a public URL for remote file which do not require authentication. For all environment specific configuration like database add new VM arguments for Tomcat to load them accordingly. "
},
{
"title": "Deploy an Environment",
"url": "/user/transition/deploy-environment.html",
"content": "Deploy an EnvironmentSolution Your browser does not implement HTML5 video.To deploy an environment, follow these steps: Click deploy. The deployment plan is created and the event is published. The Controller reacts on the new deployment event, changes the deployment state to in progress, and dispatches Workorders from step 1 to Inductors. The inductors consume the Workorders. Call provider API instantiates the Compute, Storage, etc. After the compute is up, there is an ssh remote execution of Chef recipes to deploy software, configuration, monitors, etc. The inductor sends the Workorder result back to the Controller to update the CMS. After all Workorders from step 1 are successfully processed, move to the next step. Metrics and Logs are collected (via flume) to Cassandra. "
},
{
"title": "Deploy Multiple Clouds in Parallel",
"url": "/user/transition/deploy-multiple-clouds-in-parallel.html",
"content": "Deploy Multiple Clouds in ParallelSolutionParallel Cloud Cloud priority is used during a deployment plan generation to determine the deployment step order. Ordering of steps happens based on an ascending order of priority. Cloud priority can be defined per platform on the transition platform page. By default, all cloud priority is set to: 1 for cloud names ending with odd number 2 for cloud names ending with even number. Use discretion when updating the priority before deployment. All primary clouds are executed before secondary ones. The cloud priority field allows any numeric value. If all primary clouds have the same priority, then all the changes in that platform will be deployed in parallel. For example: if the Tomcat platform across all the primary clouds has the priority of 1, then all the Tomcats get updated at the same time.Edit Cloud Priority In the Transition view, go to the environment. Select the platform. From the list of clouds for the platform, click the list icon on top, right side of the cloud and select Cloud deployment order. To set the ordering of cloud for deployment, enter a numeric value. All primary clouds are executed before secondary ones."
},
{
"title": "Deploy and Provision an Application and Environment for the First Time",
"url": "/user/transition/deploy-provision-application-environment-first-time.html",
"content": "Deploy and Provision an Application and Environment for the First TimeSolutionUser Setup Login to OneOps. At first, you are not part of any organization in OneOps so the home screen displays like this: Find out whether your Team already has an organization created in OneOps, and if so find out which person is your administrator. If you are instead a new organization, ask your administrator to create an organization for you. Find your organization in OneOps. After you identify your organization, log into the OneOps system with your Active Directory username and password. This creates your account after you agree to the Terms and Conditions. Find your administrator and ask the administrator to add your new account into your Team.The administrator completes the following steps to add your account to the team: In the top header, click Organization. Select the tab, users. Click Add User. Enter the login name of the user to be added in the text box. Save.DevOps or Support UsersYou may want to add people outside of your team into your OneOps organization so that you can get support help or just to share your work. You can assign this type of user the permissions of operations to keep them from altering assemblies. To do this, follow these steps: Create a new Team in your Organization. Name it something like Devops. Add individual users into that Team.Granular PermissionsGranular permissions are set at the Team level. The choices are design, transition, and operations.Tenant Setup To get a tenant for your organization in the OpenStack infrastructure, contact the OneOps team. This is to allocate a quota of VMs (virtual machines) against your organization. Indicate how many VMs you need in each of your environments and in each data center. Specify how much memory your application needs to run on each VM. The OneOps team creates a tenant for you and sets up the Cloud configuration for your organization and then notifies you.Then you can create an Assembly.AssemblyThink of an assembly as the blueprint of your application. It contains the design of your application architecture and all its components including infrastructure.To create an assembly follow these steps: Click assemblies. Click new assembly. Add the name and description on the next form and leave the catalog as none. Click save. Now you are in the Design phase. PlatformA Platform is an instance of a pre-defined template design called a pack. Examples of packs include: Tomcat, MySQL, etc. To add a platform in your design, click New Platform. Enter the following: Platform name (something like tomcat-webapp for example) Pack Source Pack: You can select this in the Pack Name drop-down. If you dont see the platform you need, contact your administrator. Latest Pack Version. Click Save. You should now see a component diagram. These are the components that will be installed on your VM in the appropriate order. This sequence comes from the pack definition. Also, on the right hand side, you should see the list of the same components. To modify the size of your compute, if this is required, select the compute. The default setting for the compute size is medium. For the VM size details, refer to your administrator. Select the variables tab. On the variables page, select the variables to be modified. Click edit on the right side. Modify and save the variables as required: appVersion: Maven artifact version artifactId: Maven artifactId. For example, for a Tomcat platform, the artifactId is the artifact id of the war artifact. 
groupId: Maven artifact group id for your artifact repository: shaVersion: You can leave this empty deployContext: Name of the Tomcat context when the app is deployed. (http://<server vip>/). If your packs have multiple added components of the same type on top of the default design, make sure to select each component of the same type and change the attributes to override the variables substitution. For example, if you have more than 1 war in the same Tomcat, make sure you select each artifact (war) component and edit all the attributes where a variables value is different from what you set above For example, the artifactId would be different for each war and it is necessary to change all the attributes wherever it is referred to. Remove the ${OO_LOCAL* and put the actual value carefully. Also remove the checksum attribute value. When you are finished editing the variables values, you are ready to mark your design as complete. Click the green Commit button."
},
{
"title": "Deployment Approval Process",
"url": "/user/account/deployment-approval-process.html",
"content": "Deployment Approval ProcessDeployment approval is a mechanism by which a team can govern deployment of environment attributed with approval flagKey information about approval process Create a new support object to the specific clouds who require approval The support object can be added/removed from cloud by users with support permissions Only users with Support permission to the cloud can approve the deployment record associated with the cloud Multiple approval records will be generated for deployment corresponding to the number of support objects per clouds. Few or all approval records can be approved or rejected with single API call.Add Team with Support permissions Log into your OneOps server. Make sure you have Admin permission before going to next step From the top navigation bar, select the Account link. Select teams tab On the left, click on +Add Team Give relevant name to the support object. Make sure to enable Support cloud permissions. Make sure this team has access to environment to be approved or has admin/organization scope access Add users/groups to the team, to provide this support access click Save.Add Support object to cloud Log into your OneOps server. Make sure you have Admin/Support permission before going to next step From the left navigation bar, select the clouds link. Select the cloud which you want to add approval process on Select support tab On the left, click on +Add Support Give relevant name to the support object. Make sure its state is enabled and deployment approval flag is turned on click Save.Any deployment to the environment using clouds with support object would require approval from support team member. There is no GUI support available for approving the deployment for such environment. Use below 2 API to approve the deploymentGet list of deployment records to be approved for a given deploymentMethod: GET<OneOps-URL>/<ORG-NAME>/assemblies/<ASSEMBLIY-NAME>/transition/environments/<ENVIRONMENT-NAME>/deployments/<DEPLOYMENT-ID>/approvalsCollect list of approvalIds to be approved, to form the body of next API callApprove list of deployment recordsMethod: PUTURL: //assemblies//transition/environments//deployments//approvals/settleBody:{ "approvals": [{ "state": "approved", "expiresIn": 5, "approvalId": "<APPROVAL-ID-1>", "comments": "approved by luke" }, { "state": "approved", "expiresIn": 5, "approvalId": "<APPROVAL-ID-2>", "comments": "approved by luke" }]}"
},
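A Ruby sketch of the two-call approval flow, assuming basic-auth API access. The host, organization, assembly, environment, deployment id, and the shape of the list response (an array of records carrying approvalId) are assumptions; the settle body mirrors the documented format above:

```ruby
require 'net/http'
require 'json'
require 'uri'

# Placeholders throughout; substitute your own OneOps URL and path parts.
base = URI('https://oneops.example.com/myorg/assemblies/webapp' \
           '/transition/environments/prod/deployments/12345/approvals')

http = Net::HTTP.new(base.host, base.port)
http.use_ssl = true

# 1. GET the approval records for the deployment.
list_req = Net::HTTP::Get.new(base)
list_req.basic_auth('username', 'api-token')
records = JSON.parse(http.request(list_req).body)  # assumed: array of records

# 2. Settle (approve) every record with a single PUT.
settle_req = Net::HTTP::Put.new(URI("#{base}/settle"),
                                'Content-Type' => 'application/json')
settle_req.basic_auth('username', 'api-token')
settle_req.body = {
  'approvals' => records.map do |r|
    { 'state' => 'approved', 'expiresIn' => 5,
      'approvalId' => r['approvalId'], 'comments' => 'approved by luke' }
  end
}.to_json
puts http.request(settle_req).code  # expect 200 on success
```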
{
"title": "How to Deprecate a Pack",
"url": "/developer/content-development/deprecate-a-pack.html",
"content": "How to Deprecate a PackThere are two flags available that you use in pack.rb to deprecate a specific pack version:ignore true|falseenabled true|falseignoreThe ignore flag is used before loading of the pack. If it is set to true, the pack will not be re-loaded andtherefore updated in OneOps. However a currently loaded pack will not be removed and therefore remain dormant inplace.enabledThe enabled flag defines the visibility of pack during the creation of a platform. If it is set tofalse, the pack will not be visible to a user when creating a platform.The pack can however still be used to create a platform with the API even when this flag is set to false.Note: If an existing environment uses a deprecated pack, it will not be affected and continue to function."
},
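A minimal pack.rb sketch showing the two deprecation flags in context; the name line and the flag values are illustrative assumptions about the rest of the file:

```ruby
# pack.rb -- deprecating a specific pack version (values illustrative).
name    'tomcat'    # hypothetical pack name
ignore  false       # true: pack is not (re)loaded on sync, so never updated
enabled false       # false: hidden in the platform-creation UI, but the API
                    # can still create platforms from it
```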
{
"title": "Design Attachments API",
"url": "/developer/integration-development/design-attachments-api.html",
"content": "Use the Attachment API to store additional configuration entities for any component. You can useattachments to add configuration files, certificates, custom scripts etc. Attachmentcontent can be specified directly in the request or from a specified remote URL location.Each attachment has a :runs_on attribute that allows for optional callback executions before or after component lifecycle events of add, update and delete. In addition, you can specify the on-demand option in the runs_on list which makes the attachment with the associated execution command to be available as an on-demand component action to be invoked at any time during operations.ListGet a list of attachments for a design component.GET /assemblies/:assembly/design/platforms/:platform/components/:component/attachmentsResponse<%= headers 200 %> <%= json(:design_attachment) { |h| [h] } %>CreateAdd a new attachment to this design component.POST /assemblies/:assembly/design/platforms/:platform/components/:component/attachmentsRestrictionsAttachment names must be unique within a given platform namespace.Inputcms_dj_ci : Required HashciName: _Required_ **String**comments: _Optional_ **String**ciAttributes: _Required_ **Hash** content : _Optional_ **String** - Attachment content. source : _Optional_ **String** - Source URL where the content to be downloaded is located. The location must be a file, binary or text. headers : _Optional_ **Hash** - Optional HTTP headers to be used to force MIME types or for other customizations to the download request. basic_auth_user : _Optional_ **String** - Username for protected URL location. basic_auth_password : _Optional_ **String** - Password for protected URL location. checksum : _Optional_ **String** - Checksum to compare after the download. path : _Optional_ **String** - Destination filename path where the content should be saved or the remote source URL content should be downloaded to. exec_cmd : _Optional_ **String** - Command-line to execute after the content is saved. runs_on : _Optional_ **String** - Comma-separated list of lifecycle events for automatic callbacks. The list can contain only the following events: **before-add, after-add, before-update, after-update, before-delete, after-delete, on-demand.** priority : _Optional_ **String** - Specify priority of executions in case there are multiple attachments in the same callback event.Ruby:<%= json %5C :cms_dj_ci => { :ciName => "myattachment", :comments => "These are your comments", :ciAttributes => { "content" => "Some file or script content here", "source" => "", "headers" => "", "basic_auth_user" => "", "basic_auth_password" => "", "checksum" => "", "path" => "/tmp/myattachment.sh","exec_cmd" => "/tmp/myattachment.sh", "run_on" => "before-add,on-demand", "priority" => "1" } } %>Response<%= headers 200 %> <%= json :design_attachment %>GetRetrieve the requested attachment in this design component.GET /assemblies/:assembly/design/platforms/:platform/components/:component/attachments/:attachmentResponse<%= headers 200 %> <%= json :design_attachment %>UpdateUpdate the specified attachment in this design component with new data.PUT /assemblies/:assembly/design/platforms/:platform/components/:component/attachments/:attachmentInputcms_dj_ci : Required Hashcomments: _Optional_ **String**ciAttributes: _Required_ **Hash** content : _Optional_ **String** - Attachment content. source : _Optional_ **String** - Source URL where the content to be downloaded is located. The location must be a file, binary or text. 
headers : _Optional_ **Hash** - Optional HTTP headers to be used to force MIME types or for other customizations to the download request. basic_auth_user : _Optional_ **String** - Username for protected URL location. basic_auth_password : _Optional_ **String** - Password for protected URL location. checksum : _Optional_ **String** - Checksum to compare after the download. path : _Optional_ **String** - Destination filename path where the content should be saved or the remote source URL content should be downloaded to. exec_cmd : _Optional_ **String** - Command-line to execute after the content is saved. runs_on : _Optional_ **String** - Comma-separated list of lifecycle events for automatic callbacks. The list can contain only the following events: **before-add, after-add, before-update, after-update, before-delete, after-delete, on-demand.** priority : _Optional_ **String** - Specify priority of executions in case there are multiple attachments in the same callback event.Ruby:<%= json %5C :cms_dj_ci => { :comments => "These are your comments", :ciAttributes => { "content" => "Some file or script content here", "source" => "", "headers" => "", "basic_auth_user" => "", "basic_auth_password" => "", "checksum" => "", "path" => "/tmp/myattachment.sh","exec_cmd" => "/tmp/myattachment.sh", "run_on" => "before-add,on-demand", "priority" => "1" } } %>Response<%= headers 200 %> <%= json :design_attachment %>DeleteRemove the specified attachment in this design component.DELETE /assemblies/:assembly/design/platforms/:platform/components/:component/attachments/:attachmentResponse<%= headers 200 %>"
},
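A concrete instance of the create request body, sketched as a Ruby hash; the attachment name, script content, and paths are illustrative placeholders, and the runs_on value uses only the documented event list:

```ruby
require 'json'

# Illustrative body for POST .../components/:component/attachments;
# every value here is a placeholder.
body = {
  cms_dj_ci: {
    ciName:   'install-cert',
    comments: 'installs the service certificate',
    ciAttributes: {
      content:  "#!/bin/sh\ncp /tmp/service.pem /etc/pki/tls/certs/",
      path:     '/tmp/install-cert.sh',
      exec_cmd: '/tmp/install-cert.sh',
      runs_on:  'after-add,after-update,on-demand',
      priority: '1'
    }
  }
}

puts JSON.pretty_generate(body)
```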
{
"title": "Design Best Practices",
"url": "/user/design/design-best-practices.html",
"content": "Design Best PracticesIn the Design phase, follow these best practices: Follow the naming conventions for assembly names Add the owners email address to each assembly Be a watcher for your assembly Add a description for each platform to help other users to understand the purpose of the platform Make sure that VM (instance size on compute) and JVM sizes (max heap size on Tomcat) are consistent Edit the default values of the variables of each platform, as needed Review the default values of each component If an additional port support is required, add the secgroup component To SSH into the compute, add the SSH key to the user component. To ensure that the platform dependency is correct, review the design diagram After creating or making changes to a design, remember to commit the design"
},
{
"title": "Development Environment Setup",
"url": "/developer/core-development/development-environment-setup.html",
"content": "Development Environment SetupThis document contains some older notes for the development environment setupand build. Refer to theoverview for building from source and running OneOps.Environment SetupConfigure the required environment variables as per your local setup.export OO_HOME=~/work/projects/walmartexport CASSANDRA_HOME=~/install/apache-cassandra-1.2.6/export AMQ_HOME=~/install/apache-activemq-5.11.1export AMQ_PLUGIN_HOME=$OO_HOME/amq-pluginexport PG_HOME=/Library/PostgreSQL/9.2export KLOOPZDB_HOME=$OO_HOME/db-schema/dbOptionally configure these variables in a script or even in your shell startup in ~/.bash_profile or~/.profile.Add a number of host names for OneOps in your etc/hosts file in addition to localhost:#Before127.0.0.1 localhost#After127.0.0.1 localhost api antenna opsmq daq kloopzappdb kloopzcmsdb cmsapi sensor activitidb kloopzmq kloopzapp search searchmq opsdb activemqdbDatabase SchemaCreate the database schema: Navigate to $OO_HOME/db-schema/db Connect to the local postgres database via command line with - $sudo -u postgres psql postgres Execute the scriptspostgres=# \i single_db_schemas.sqlpostgres=# /q./install-db.sh./install-activitidb.shValidate database setup by connecting to all 3 databases - user, cms and activity.| Database | Jdbc URL | Credentials || ----------- |:------------- | ----- || User DB | jdbc:postgresql://127.0.0.1:5432/kloopzapp | kloopz / kloopz || CMS DB | jdbc:postgresql://127.0.0.1:5432/kloopzdb | kloopzcm / kloopzcm || Activiti DB | jdbc:postgresql://127.0.0.1:5432/activitidb | activiti / activiti |ActiveMQ SetupCopy the file amqplugin-1.0.0-fat.jar from ~/.m2/repository/com/oneops/amqplugin/1.0.0/ to ActiveMQs libfolder.Copy activemq.xml to the ActiveMP conf folder.$ cd $AMQ_HOME/conf$ curl -o activemq.xml https://raw.githubusercontent.com/oneops/amq-plugin/master/src/main/resources/conf/amq_local_kahadb.xmlSet environment variable KLOOPZ_AMQ_PASS with export KLOOPZ_AMQ_PASS=kloopzamqpassNow start ActivemqMQ server with$ cd $AMQ_HOME/bin$ cd /macosx$ ./activemq restart && tail -100f ../../data/wrapper.logEnsure to use the OS specific folder i.e macosx or linux-x86-64 or linux-x86-32.Once the server started successfully, check the user interface at http://localhost:8161/admin and log in with the default credentials admin/admin.Inductor SetupSetup the stub for inductor:$ cd $OO_HOME/dev-tools/inductor-stub$ mvn clean installPrepare and install the Inductor gem:$ cd $OO_HOME/oneops-admin$ mkdir target$ cp $OO_HOME/inductor/target/inductor-1.1.0.jar target/$ gem build oneops-admin.gemspec$ gem install oneops-admin-1.0.0.gemThis step might take 2-3 mins.You can validate a successful install with the command inductor help. In case of any errors, it can be helpfulto provide complete permissions to rvm or rubies folder.Create one inductor for each cloud like aws, azure, openstack, etc..$ cd ~/install$ inductor create$ cd inductor$ inductor addQueue? /public/oneops/clouds/awsWhat is the authorization key? 
awssecretkeyEdit cloud related information in~/install/inductor/clouds-enabled/public.oneops.clouds.aws/conf/inductor.properties as shown belowmax_consumers = 10local_max_consumers = 10amq.authkey = awssecretkeyamq.zone = /public/oneops/clouds:aws# Following needs to be uncommented in case if we want to stub the cloud#stub.clouds=dev-dfwstg2 #This is the cloud we create through OneOps display UI.#stub.responseTimeInSeconds=1#stubResultCode=0Provide trustStore as JVM startup argument for proper activeMQ connection in~/install/inductor/clouds-enabled/public.oneops.clouds.aws/conf/vmargs.-Djavax.net.ssl.trustStore=$AMQ_HOME/conf/client.tsLink circuit-oneops-1 inside inductor.$ cd ~/install/inductor$ ln -s $OO_HOME/circuit-oneops-1 circuit-oneops-1Start the inductor.$ inductor startYou can check the status of Inductor with inductor status (or) ps ef | grep inductorRunning the Applications on TomcatStart Cassandra.$ cd $CASSANDRA_HOME/bin$ sudo -S ./cassandra -fYou can stop Cassandra with sudo -S pgrep -f cassandra | xargs -n 1 sudo kill -9Add the following projects to Tomcat server.Add the additional JVM arguments to Tomcat server startup parameters-Doneops.url="http://localhost:3000" -Dcom.oneops.sensor.chdowntime=315360000 -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled -Xms512M -Xmx1024M -XX:MaxPermSize=512m -Dcom.sensor.cistates.cl=ONECreate a file with 24 byte random string in $OO_HOME:$ cd $OO_HOME$ dd if=/dev/random count=24 bs=1 | xxd -ps > oo.desAdd environment variables to Tomcat server.ACTIVITI_DB_HOST=kloopzcmsdbACTIVITI_DB_USER=activitiACTIVITI_DB_PASS=activitiAMQ_USER=superuserAPI_ACESS_CONTROL=permitAllAPI_SECRET=oneops@ce$$API_USER=oneops-apiCMS_DB_HOST=kloopzcmsdbCMS_DB_USER=kloopzcmCMS_DB_PASS=kloopzcmCMS_DES_PEM=$OO_HOME/oo.descom.oneops.sensor.channel.uptimedelay=100000000000com.sensor.cistates.cl=ONECONTROLLER_SEARCH_PUBLISH_POOLSIZE=20ECV_SECRET=switch@ECV_USER=oneopsIS_SEARCH_ENABLED=trueJAVA_OPTS=-XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled -Xms512M -Xmx1024M -XX:MaxPermSize=512mKLOOPZ_AMQ_PASS=kloopzamqpassKLOOPZ_NOTIFY_PASS=secretMAX_HB_SEED_EVENT_DELAY=600NOTIFICATION_SYSTEM_USER=adminOPAMP_SEARCH_PUBLISH_POOLSIZE=40ORPHAN_HANDLER_DELAY=1ORPHAN_HANDLER_INIT_DELAY=2SEARCH_PUBLISH_ASYNC=falseSEARCHMQ_USER=superuserSEARCHMQ_PASS=kloopzamqpassTRANSMITTER_SEARCH_PUBLISH_POOLSIZE=30 Replace $OO_HOME value as the expression is not evaluated here.We provided default credentials. Relace those according to yours.Create missing retry directories.$ mkdir -p /opt/oneops/controller/antenna/retry$ mkdir -p /opt/oneops/opamp/antenna/retry$ mkdir -p /opt/oneops/cms-publisher/antenna/retry$ mkdir -p /opt/oneops/transmitter/antenna/retry$ mkdir -p /opt/oneops/transmitter/search/retry$ mkdir -p /opt/oneops/controller/search/retry$ mkdir -p /opt/oneops/opamp/search/retryStart the Tomcat server. 
All applications should be deployed without any error in console.Circuit SetupRun below command to install the Circuit component after installing Inductor.----Optional start-------------------$ export CMS_API=http://localhost:8080$ export CMSAPI=http://localhost:8080$ mkdir -p $OO_HOME/circuit-install$ cd $OO_HOME/circuit-install$ circuit create$ cd circuit$ circuit init$ cd $OO_HOME/circuit-oneops-1$ bundle install$ circuit installIn case if you face any errors try bundle exec circuit create and ensure that Tomcat is runningRunning OneOpsAdd below environment variables to ~/.bash_profile.export OODB_HOST=localhostexport OODB_USERNAME=********export OODB_PASSWORD=********Install bundler$ cd $OO_HOME/display$ gem install bundler$ bundle installSet up the databases DDL & DML$ cd $OO_HOME/display$ bundle exec rake db:schema:load$ bundle exec rake db:migrateStart the Ruby on Rails server$ cd $OO_HOME/display$ rails server If the above gives error, try with bundle exec rails serverNow, the OneOps UI is available at http://localhost:3000ElasticSearchAs part of development environment setup Elasticsearch is already downloaded at ~/install/elasticsearch1.7.1.Change cluster name to oneops in ~/install/elasticsearch1.7.1/config/elasticsearch.ymlcluster.name: oneopsStart Elasticsearch and access the UI at http://localhost:9200/$ cd ~/install/elasticsearch1.7.1/bin$ ./elasticsearchSetup OneOps related templates & data. Refer to README.$ cd $OO_HOME/search/src/main/resources/es/templates$ curl -d @./cms_template.json -X PUT http://localhost:9200/_template/cms_template$ curl -d @./event_template.json -X PUT http://localhost:9200/_template/event_template$ curl -d @./cost_template.json -X PUT http://localhost:9200/_template/cost_template$ cd $OO_HOME/searchRun SearchListener$ java -jar -Dnodes=localhost:9300 -Dindex.name=cms-all -Damq.user=system -Damq.pass=abcd -Dcluster.name=oneops target/search.jar -Dsun.net.spi.nameservice.provider.1=dns,sun -Dsun.net.spi.nameservice.provider.2=default"
},
{
"title": "Documentation Guideline",
"url": "/overview/doc-guideline.html",
"content": "The OneOps documentation is part of the publicly available web site at http://www.oneops.com.Contributions are welcome and managed in the same manner as code contributions.Technical DetailsThe web site, including the documentation, is managed in the repository: https://github.com/oneops/oneops.github.ioIt is a static web sitegenerated from source using Jekyll. All content is written using Markdown, specificallyGitHub Flavored Markdown parsed bykramdown. More information about the site and documentation is availablein the README.The master branch contains the current deployment and any changes in this branch are automatically deployed. Thereforesubmit your changes as pull requests, if possible to allow us to run a verification.Writing GuidelineWhen editing existing, or writing new documentation, please try to adhere to the following guidelines. Ideal line width is 120 characters or less. Do not use TAB and other characters that invalidates JSON formatted content. Use a spell checker. Observe the usage of title case in section and page titles. Use consistent naming in text e.g. always OneOps and not oneops, SSH and not ssh, URL and not url.For image inclusion there are some specific requests: Do not add too many images. Keep the image size suitable for the web site. Images should concentrate on the relevant content. PNG format is preferred.The better you follow these guidelines, the faster your changes will be merged.Pull Requests and ReviewBefore merging a pull request we will perform some validation. Review of content regarding guidelines above. Provide feedback beyond the guidelines as applicable. Perform a local build with Java Jekyll and Ruby Jekyll. Visual inspection of new content to ensure rendering works as intended in markdown source. Link check run."
},
{
"title": "Documentation",
"url": "/overview/documentation.html",
"content": "Written and maintained so you can get the most out of OneOps. The documentation is arranged by user type. To find the answers you seek, please select a role: User > Admin > Developer > "
},
{
"title": "Download Component",
"url": "/user/design/download-component.html",
"content": "Download ComponentThe download component is available for all platformsand allows the download of files from a remote service and execution of scripts."
},
{
"title": "Edit a Platform",
"url": "/user/design/edit-a-platform.html",
"content": "Edit a PlatformSolution Your browser does not implement HTML5 video.To edit a platform, follow these steps: Log into OneOps. Select your organization from left or top navigation. Select your assembly from the right navigation. Select Design. Select the component you want to edit from the right navigation. View the properties associated with the component. Click Edit at the top of the screen. Modify the values as per the requirement. Click Save."
},
{
"title": "Edit an Environment",
"url": "/user/transition/edit-environment.html",
"content": "Edit an EnvironmentSolution Your browser does not implement HTML5 video.To edit an environment, follow these steps: Go to the assembly. Select the environment to edit and click edit. Use the following guidelines to edit the environment: Properties Environment name cannot changed (It is static and once created cannot be changed.) Description can be updated Continuous Deployment: Not in use DNS DNS Subdomain: cannot updated Global DNS: Can be updated Availability Monitoring: Cannot changed Availability Mode: Cannot changed Others Debug: This is not used by users. It is used to debug openstack issues. Primary Cloud: Can be updated Secondary Cloud: Can be updated To save the environment, click Save. Commit and deploy."
},
{
"title": "Email Notification Relay",
"url": "/user/transition/email-notification-relay.html",
"content": "Email Notification RelayOneOps introduced notification relays at environment level. A notification relay allows notification filtering and routing configuration for a given environment. All notification for a given environment will be matched against existing environment relays for filtering based on source, severity, subject/text and ns path to specified list destination addresses.Relay Locationedit environment->relay tabAdditional Information on Relay There is a default relay available to all environments, which route alerts to assembly owner. Users have the privilege to delete default relay Environment can have multiple relays. Relay can be enabled/disabled.Relay Attributes Name: Destination Email: comma list of email addresses to be alerted **Severity Filters: **Source Filters: NS Path: blank nspath defaults to all notifications for the environment. Additional nspath can be added for platforms specific notification e.g ////manifest// -> bulk action notification ////bom// -> single operation notification Message Pattern: Regexp to match against subject or text of notification. If blank notifications will not be filtered Component Correlation: If this flag is turned on, email notifications will be sent only at the state change of component. Suppose tomcat component has 20 instances and 5 instances are flipped to unhealthy state. One email will be sent: When state of first few instances changes to unhealthy All instances get recovers back to good state No email will be sent: If one or more instances under this component go unhealthy If one or more instances recover from unhealthy state however some continues to remain unhealthy For repair action execution Relay configuration management is available as relays tab on the environment transition edit page."
},
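A conceptual Ruby sketch of how a relay's Message Pattern filter behaves. This is not the OneOps implementation, only an illustration of the matching rule described above; the pattern and the notification values are made up:

```ruby
# Illustrative only: a relay forwards a notification when its pattern
# matches the notification's subject or text.
pattern = Regexp.new('OutOfMemory|CriticalLogException')  # hypothetical Message Pattern

notification = {
  subject: 'tomcat CriticalLogException threshold triggered',
  text:    'threshold violated on instance compute-1234'
}

relay_it = [notification[:subject], notification[:text]].any? { |s| pattern.match?(s) }
puts relay_it  # => true
```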
{
"title": "Enable Access to an Assembly for a User on a Team",
"url": "/user/account/enable-access-to-an-assembly-for-a-user-on-a-team.html",
"content": "Enable Access to an Assembly for a User on a TeamSolutionAfter a user is added to a team within an organization, that team needs to be added to the respective assemblies to access them. Select the Assembly name. Click Edit. Select the Team tab. Click Edit. For all the teams that you want to give access to, check the appropriate checkbox.See Also Restrict Access with Teams"
},
{
"title": "Enable https for a Service (LB Certs)",
"url": "/user/design/enable-https-for-service-lbcerts.html",
"content": "Enable https for a Service (LB Certs)This assumes that you have an SSL certificate.SolutionDesign In the Design phase, add an optional LB certificate. Enter the certificate details: Key Contents. Cert Contents. Passphrase Directory Transition Edit the LoadBalancer component. Enable https in the LB component. Change the Virtual Server port to 443. Set the SSL termination at the NetScaler. Change the Instance (for your server) port to 8080 Set the SSL termination at the server or Tomcat. Add the Certificate into the Design. Add the KeyStore. Click commit and then deploy. If you are modifying the existing environment, touch the FQDN component and then commit. "
},
{
"title": "Enable Platform to Auto Replace Unhealthy Components",
"url": "/user/operation/enable-platform-auto-replace-unhealthy-components.html",
"content": "Enable Platform to Auto Replace Unhealthy ComponentsTo enable your platform to auto replace unhealthy components, follow these steps: On the Platform Operations page, find the Automation Status section. There is a button to enable or disable Auto Replace which is disabled by default. Click it in turn the auto-replace ON. Configure the Auto Replace feature with the following parameters: Replace unhealthy after minutes: OneOps replaces your unhealthy components only after they are in an Unhealthy state for more than or equal to this many minutes. The default value is so large that it keeps the auto replace disabled by default. 1 hour could be a good value. Replace unhealthy after repairs #: If the time above is elapsed, OneOps checks the number of repairs done on this component after it was reported Unhealthy. If it is greater than the value of this attribute, OneOps triggers the Auto Replace provided that there is no open release or open deployment for your environment. The default value is so large that it keeps the auto replace disabled by default. 3 could be a good value. To enable your platform to auto replace, it is necessary to complete both steps. Also, if there is any open release or a cancelled/ongoing deployment, Auto Replace is postponed."
},
{
"title": "Ensure that Alerts for Production Environment are Sent to NOC",
"url": "/user/operation/ensure-that-alerts-for-production-environment-are-sent-to-noc.html",
"content": "Ensure that Alerts for Production Environment are Sent to NOCSolutionIf you want to enable NOC alerts for your production environment so that NOC can see them and take action, follow these steps: Create CEN page(s) for your application. After the CENs are created, send the link(s) to your team to get them reviewed and approved by the NOC. Go to the environment in OneOps (Transition section) for which you want to enable NOC alerts. Select each platform and then each component inside that platform and follow next steps for all of the components. Select the monitors tab. Select each monitor that you see on the page. At the bottom on the monitor details page, in the Documentation text field, enter the link to your CEN page Save it. If you have approval from NOC for your CENs and if you have completed the Documentation fields as mentioned above, NOC will configure your environment to send alerts to NOC.See AlsoWatching an Assembly"
},
{
"title": "Environment Profiles",
"url": "/user/account/environment-profiles.html",
"content": "Environment ProfilesEnvironment profiles are templates that are used to derive concrete environments based on pre-defined templates. Essentially, they are abstract environment definitions that allow environments to be categorized or classified by associating a given environment with an underlying environment profile. Typical examples of profiles are prod, QA, Test, etc.To add a new Environment profiles, follow these steps: Goto Account. Select the environments tab. Click Add button Enter a unique name. Letters, numbers, and dashes are allowed. No other characters are allowed. If you use invalid characters, you are notified to match the requested format. Fill in appropriate details for the enviornment template SaveUsage Environment profiles enable support, operations, and DevOps teams to easily categorize an environment to determine its support level and/or its critical problem resolution parameters (for example, SLA levels). Environment profiles are intended to streamline new environment creation by bootstrapping an environment with a set of default attribute values. In other words, environment profiles are abstract best practice environment definitions from which to derive real environments. A set of environment profiles is defined and managed at the organization level and realized in the assembly environment. OneOps does not allow the creation of a new environment without first providing a profile and it is important to select the best suited profile before creating a new environment. Profiles are available within the organization. For backward compatibility reasons, the profile association is not enforced: In situations where no profiles are defined for an organization For environments that existed prior to when the initial environment profile was added to an Organization Environment Profile Association When appropriate, the concrete environment profile association tag is indicated by an explicit name label while navigating through environment, transition and operations pages. For example, this occurs in breadcrumb sections, page header sections and environment lists. Environment profiles help DevOps teams to quickly categorize an environment with a given support level, based on the defined line-up of environment profiles."
},
{
"title": "Environment Releases",
"url": "/user/transition/environment-releases.html",
"content": "Environment ReleasesSolution Your browser does not implement HTML5 video.To view environment release activities, follow these steps: Go to Transition and select the environment. Select the Releases tab. View the release activities. Closed: Displays the releases deployed on the environment Open: Displays the open releases Cancelled: Deployment is cancelled in between "
},
{
"title": "Environment",
"url": "/user/transition/environment.html",
"content": "EnvironmentAn Environment is a combination of your assembly (base design) with operational attributes to match the associated business requirements, such as a new dev, qa, or prod Environment. Changes from the base design can be pulled on demand, via UI or API. Based on the availability modes, in the transition you could see platform components defined for that mode . For eg for redundant mode you would see load balancer. Mix Platform Availability Mode: For example, a load-balanced web with a single db (with backups) DNS Domain: Use custom DNS providers (route53, rackspace, dyndns) Automation: Continuous Delivery"
},
{
"title": "Favorites",
"url": "/user/general/favorites.html",
"content": "FavoritesThe favorites feature allows you to mark entities in the user interface. Subsequent navigation to them is cut short toa simple click on the item in the favourites drop down.Favorites in ActionUsing FavoritesFavorites are available in the navigation in the header on the right or in the navigation on the left.The Manage Favorites link directs to a dedicated tab in the user profile. This view includes all favorites of the useracross all organizations in a list that includes a filter feature on the top of the list.Entities can be marked as favorites with the bookmark icon to the right of the entity name in its main displayOr by selecting Mark favourite from the more functions button on the right side of any entity in a list."
},
{
"title": "File Component",
"url": "/user/design/file-component.html",
"content": "File ComponentThe file component is available on all platformsand allows the configuration of the content of a text or script file includingsubsequent execution."
},
{
"title": "Filebeat Component",
"url": "/user/design/filebeat-component.html",
"content": "Filebeat ComponentThe filebeat component is available on all platforms.and allows log forwarding with Filebeat."
},
{
"title": "Find a Platform VIP Name",
"url": "/user/operation/find-platform-vip-name.html",
"content": "Find a Platform VIP NameSolutionTo find a platform VIP name, follow these steps: Click Operate. Select the environment. Select the platform. Click the FQDN component from the list. To see the hostname entry, select the configuration tab. For single availability environment, FQDN/hostname is assigned to compute. For redundant availability environment, FQDN/hostname is assigned to lb (load balancer). "
},
{
"title": "Firewall Component",
"url": "/user/design/firewall-component.html",
"content": "Firewall ComponentThe firewall component is available on all platforms. Itcan be used to configure firewall rules and run a local firewall."
},
{
"title": "Fully Qualified Domain Name FQDN",
"url": "/user/design/fqdn-component.html",
"content": "Fully Qualified Domain Name FQDNThe fqdn component is of core importance and part of mostplatforms since it defines the fully qualified domain that configuration used toaccess the platform.You can configure the fqdn component as part of your platform in design phaseand specific to an environment in the transition phase.Once your assembly is deployed in an environment you can access the fqdncomponent in operation to see all generated values to access your runningassembly in a specific environment using a cloud-specific domain name or theglobal domain name."
},
{
"title": "User Getting Started",
"url": "/user/general/getting-started.html",
"content": "User Getting Started 1 Account create clouds create assembly 2 Design create platform commit design 3 Transition create environment deploy to cloud 4 Operate view status control environment AccountThis section describes how to set up your account, organization, assemblies.To set up your user account, follow these steps: Login/Register in OneOps to access/create your Organization. If necessary, ask an administrative user e.g. your manager, whether your team already has an organization created in OneOps and request to be added to the appropriate organization. If you are on-boarding as part of a new organization, ask your manager to add you to the appropriate team. Not seeing your Organization? Look up the Organization you are interested in and ask an administrator to grant you access . Accept the Terms and Conditions, if it is your first time using OneOps.Create Organization Click profile (your user name) on the left nav bar. Select Organization tab. Select new Organization from the right side. Enter Organization name and click save. This will bootstrap an Organization with reasonable defaults which can be changed later.Create Cloud Normally administrator of the organization creates a cloud, it may require creating aninductor or adding cloud to an exiting inductor. Create Organization if one does not exist. Click clouds link on left nav bar . Click Add CloudNext Create AssemblyCreate an AssemblyAssembly is workspace for you application. To create an assembly Click Assemblies from left nav or top nav . In the assemblies listing page click New Assembly on the right hand sideor towards the end of listing. Enter assembly details, Click save. Next Create a PlatformTo learn about additional Account activities for assemblies, teams, users and notifications refer to: Teams in Organization Add a User to a Team Manage Assemblies Adding a Team to an Assembly Watching an Assembly Create an Assembly Design an Application OneOps Design PhaseIn the design phase, one can create platforms (building blocks) from existing available packs.In this example, we will create simple application (Tomcat) which talks to back end database.Create a Platform Click design (icon) from left nav or top nav or wizard. Click New Platform, choose from existing packs in pack name Modify any attributes of component to suit your application design.Commit a design Click review to your changes, all changes are buffered in a release and are not applied unless you commit. Once satisfied click commit to commit the changes . Next [Create Environment][] Change attributes of component which are common across environments. Add optional components /attachments in design phase.See also: Platforms Add a platform Platform Dependency Packs View Design ReleasesTransition PhaseTransition is where you define environment specific attributes as needed. The dev or qa environments may differ in terms of availability , resources needed and can be defined at environment level.Create an Environment Click environment (icon) from left nav or top nav or wizard. Click New Environment . Select availability mode for your environment either at platform. 
Modify any attributes of components which may differ from design.For example qa environment compute size requirements may differ from development/test or production environment, in such scenario you may choose default compute size based on what matches most of the environment requirements.Its not uncommon to choose development environment compute size as default for design which allows you to create multiple test environments without changing design.This helps in creating environments faster without changing too many attributes at design level. As a best practice tryto have most used configuration in design. Also see variablesLock any environment specific attributes to prevent the environment changesto be override from design pulls. Click review and commit and deploy your changes. Next Deploy.See also: Environment Upgrade an Application Version in an Environment Environment Releases Environment-profilesDeploy an Application Click commit and deploy Review the deployment plan generated by OneOps. Click on a particular step to know what change is going to be deployed. Want to change plan, discard and no changes would be deployed. If satisfied click green DeployIt can take few minutes to deploy the application to cloud infrastructure selected during environment creation.At this time One Ops is executing actual work orders on the cloud of your choice, switching clouds is then matter of adding clouds and shutting down clouds which may not be needed.See also: Multi Cloud deployment Check Deployment Status Deploy and Provision an Application and Environment for the First Time Doing regular releasesOperate PhaseThe successful deployment will create actual instances of components (computes,tomcats) on to cloud(s) chosen. Once you have running environment you would need to operate the environment which typically involvesView Operations View the status of your overall application. View notifications, alerts, filter instances based on their state.Control Environment Perform operational activities on components level, like restart of all tomcats.Some of the commonly used operations but not limited to these Replace of compute in case of hardware failures. Restart of services.(tomcat, Cassandra, elastic search) Some of attachments can be exposed as operations. Some of the popular one used are taking nodes out of traffic. Redeploy artifacts. Log Searches on volume components. Control auto-repair / auto-scale. Perform repairs at cloud level.See also: Assess the Health of Applications, Platforms and Clouds Operations Summary Control EnvironmentMonitoring OneOps by default will send emails (default notification mechanism) if any of components are in unhealthy or notifystate. see Monitors If auto-repair is enabled, OneOps will auto-repair the instance. The actions taken to recover an instance are prescribed by repair recipe of the component. For example, if Compute is alerting for missing heartbeat by default Computes repair action involves the following Check the ssh port If not able to connect after timeout, it will attempt reboot. These might recover the compute, if not then auto-replace would be triggered. OneOps Documentation for UsersBefore You Start with OneOpsBefore you start with OneOps, it is recommended that you read the following documentation. It is the most essential information you need to begin well. 
Overview: Business-level description of OneOps' main benefits versus alternative solutions Key Concepts: Conceptual description and diagrams of how OneOps works Getting Started: How to start using OneOps (this section) Best Practices: How you should use OneOps for best results"
},
{
"title": "Grep or Search Text in Files on Computes",
"url": "/user/operation/grep-or-search-text-in-files-on-computes.html",
"content": "Grep or Search Text in Files on ComputesSolutionOneOps has a couple of actions available on Volume components to help text-search one or multiple files on a compute. log-grep: You can search a regex text in one or multiple files. The result of the grep search is printed on the action execution log. log-grep-count: Same as above but the result printed is the total number of lines matched in each files and not the actual matched text content.Action Location Go to the operations of any volume component and find the two actions in the actions drop-down menu as shown below. Alternatively, go one level up and select multiple volume component objects and execute the actions on all of them at the same time.Action Parameters Window It is necessary to enter two mandatory arguments: Files (Use the absolute file path in which you want to search a regex. If you have multiple files to search in, separate their full paths with a space) Search Regex Pattern There are also two optional arguments to specify at what line # the search should start and at what line it should end. By default, it searches the whole file or files.log-grep Action ResultAs shown below, the result of the log-grep action is printed on the output of the action execution. For each match, it prints the name of the file followed by the line # matched and then the actual line.log-grep-count Action ResultAs shown below, the result of the log-grep-count action is printed on the output of the action execution. It prints the filename followed by the total number of lines matched."
},
{
"title": "Hostname Component",
"url": "/user/design/hostname-component.html",
"content": "Hostname ComponentThe hostname component is available on all platforms. Itcan be used to create a persistently hostname for each compute."
},
{
"title": "How Cost Tracking and Reporting Works",
"url": "/user/account/how-cost-tracking-and-reporting-works.html",
"content": "How Cost Tracking and Reporting WorksKey concepts involved here are: Offering An instance of cloud.Offering class.Has the per hr cost rate and matching criteria among other info. mgmt.cloud.Offering instances are defined in the cloud template file. cloud.Offering instances can be added either via API or directly from the UI. Offerings are associated with cloud services. For instance an example of an offering associated with the Compute cloud service could be as follows. "large" => { "description" => "Average large Linux vCPU cost per Hour", "cost_unit" => "USD", "cost_rate" => "0.12", "criteria"=> "(ciClassName:bom.*.[1 TO *].Compute OR ciClassName:bom.Compute) AND ciAttributes.size:L", "specification" => "{}" } WorkOrder The deployment payload for a component. Percolation Is like reverse operation of indexing and then searching (using ElasticSearch). Instead of sending docs, indexing them,and then running queries. One sends queries, registers them, and then sends docs and finds out which queries match that doc. In this case the offerings are the queries registered as percolators and CIs are the docs matched against the queries. During the WorkOrder generation of a resource (for example when a Compute, Storage or DNS service is getting deployed), the resource CI doc is percolated against the registered offerings. This returns the matching offerings out of which the lowest cost offering is selected. The lowest cost offering is then added to the the WorkOrder document of the resource and then indexed in ElasticSearch. This is the mechanism by which the deployment time cost of a resource is tracked in the WorkOrder index .Cost ReportingCost Tracking results in the workorder index in ES having history with cost change events (workorders). However constructing a query/report for a specific time range aggregated over multiple cis will be too complex as it will require on-the-fly processing of the events timestamps against the requested range. To improve access to the cost information have a daily batch job that uses the workorder index as input, read the cost changes from the workorder payload and construct a daily cost index with the exact cost for each CI for the given day. This will allow for simple ES queries that can retrieve the exact cost for a given time range using simple aggregations. 
The daily cost job is basically a Ruby script based on the following cost calculation strategy:
for a given ci
  if the first WO is add
    if the add is AFTER the target day, then cost is 0
    if the add is DURING the target day, then calculate cost
  if the first WO is update
    if the update is AFTER the target day, then look up the last known WO prior to the target day and use that cost for the full day
    if the update is DURING the target day, then look up the last known WO prior to the target day and calculate cost
  if the first WO is delete
    if the delete is AFTER the target day, then look up the last known WO prior to the target day and use that cost for the full day
    if the delete is DURING the target day, then look up the last known WO prior to the target day and calculate cost
end
A cost document in the daily cost index looks like this:
date: "2016-10-01T00:00:00Z",
packName: "<pack-name>",
unit: "USD",
nsPath: "<path to app>",
envProfile: "<env-profile>",
cloud: "<cloud-name>",
packVersion: "<version>",
manifestId: <manifestId>,
packSource: "oneops",
ciClassName: "bom.Compute",
organization: "<org-name>",
serviceType: "compute",
ts: "2016-10-02T09:00:28Z",
servicensPath: "<path to app>",
ciId: <ciId>,
cost: 1.44
Using this data in the daily cost index, the Cost Explorer widget in the UI gives the user a single bar-chart graph with dynamic form fields allowing selection of daily time ranges and filters for nspath, cloud and cloud service type (a data query against the new cost index in ES).
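To illustrate how such a daily cost index can be queried, here is a minimal sketch of an Elasticsearch request summing cost per cloud over a date range. The field names come from the document above; the host, port and index name are assumptions for your installation:
curl -XPOST 'http://localhost:9200/cost/_search' -d '{
  "size": 0,
  "query": { "range": { "date": { "gte": "2016-10-01", "lte": "2016-10-07" } } },
  "aggs": { "by_cloud": { "terms": { "field": "cloud" }, "aggs": { "total_cost": { "sum": { "field": "cost" } } } } }
}'
Each terms bucket then carries a total_cost value for the selected range."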
},
{
"title": "Import and Export Catalog",
"url": "/user/account/import-and-export-catalog.html",
"content": "Import and Export CatalogSolutionProvide the Catalog Log into OneOps. To edit the Organization, select the Organization in the top navigation bar. Select Provides catalogs. Save the changes.Export the Catalog In the top Navigation, select Catalog. Select Manage Catalog. Select the Catalog you want to export. Click Export. Save the file on file system.Import the Catalog In the top Navigation, select the Catalog. Select Manage Catalog. In the right navigation, click Import. Import the exported catalog. Save the Catalog."
},
{
"title": "In The Press",
"url": "/overview/in-the-press.html",
"content": "Whether OneOps was mentioned in the press, another blog or a conference site - here is where we try to collect those references. You can also find audio, video and slide presentation related resources and links here.If you have a relevant link to be added - let us know or send a pull request!And of course you should not forget to check out our own OneOps Team Blog.2017January 13, 2017 - Code Monkey Talks (Podcast): WalmartLabs with Tim OBrienJanuary 9th, 2017 - Light Reading: How Walmart Builds Open Source Culture2016December 29th, 2016 - Tech Better (Podcast): Open Source Cloud at Walmart: A Story of ScalabilityDecember 25th, 2016 - eWeek: How Walmart is Embracing the Open Source OpenStack ModelDecember 20th, 2016 - Tech Better: How Walmart Technologys Big Bet on OpenStack Is Paying OffDecember 22nd, 2016 - Tech Better: Open Source Innovation Delivers an Unforgettable Christmas to 100+ ChildrenDecember 9, 2016 - WalmartLabs (Blog) Kafka Ecosystem on Walmarts CloudDecember 5, 2016 - DevOps in Walmart Meetup (Slides): How We Do DevOps at Walmart: OneOps OSS Application Lifecycle Management PlatformNovember 30th, 2016 - Tech Better: Four Things You Didnt Know About Black Friday at Walmart TechNovember 1, 2016 - WalmartLabs (Blog) Tech Transformation: Real-time Messaging at WalmartOctober 26th, 2016 - All Things Open 2016 (Slides): Avoiding the Pitfalls of Being Locked into One Cloud ProviderOctober 25th, 2016 - OpenStack Summit 2016 (Video): How Walmart is Building a Successful Open Source Culture with Megan Rossetti and Andrew MitryOctober 25th, 2016 2016, OpenStack Days Barcelona, 2016 (Video): Building Large Scale Private Clouds with OpenStack and Ceph with Andrew Mitry and Anton ThakerOctober 14th, 2016 2016, OpenStack Days Seattle 2016 (Video): Transforming Walmart with Sean RobertsSeptember 28th, 2016 2016, OpenStack Days East 2016 (Video): Case Study: OpenStack at Walmart with Kire Filipovski and Andrew MitrySeptember 22, 2016 - WalmartLabs (Blog) Application Deployment on OpenStack via OneOpsAugust 17th, 2016, OpenStack Days Silicon Valley 2016 (Video): Whats Next for OpenStack at WalmartAugust 9th, 2016 2016, OpenStack Days Silicon Valley 2016 (Video): The Cube Interview with Sean RobertsFebruary 24th, 2016 - InfoQ: @WalmartLabs Open-Sources OneOps PaaSFebruary 17th, 2016 - Microsoft Azure:Announcing more open options and choice on Microsoft AzureFebruary 15th, 2016 - AppDeveloperMagazine: Forget Groceries at Walmart - You can Now Get Open Source Cloud InfrastructureFebruary 5th, 2016 - Turbonomic: Walmart Hops into the Cloud Deployment Biz with OneOpsFebruary 4th, 2016 - Release Management: Walmarts OneOps: A Closer LookJanuary 29th, 2016 - DevX.com: Cloud and Hybrid Application Lifecycle Management with OneOpsJanuary 29th, 2016 - The New Stack: DevOps the Walmart Way, with the Newly-Released OneOps Cloud PlatformJanuary 28th, 2016 - TechNewsWorld: Walmart Opens OneOps Cloud Management to the MassesJanuary 28th, 2016 - CloudPro: Walmarts OneOps cloud management code comes to GitHubJanuary 28th, 2016 - TechTarge: Walmart shakes up application lifecycle managementJanuary 26th, 2016 arsTechnica: A new open source cloud management tool from WalmartJanuary 26th, 2016 - FierceDevOps: Walmart goes open source with development and lifecycle management offeringJanuary 26th, 2016 - TechCrunch: Walmart Launches OneOps, An Open-Source Cloud And Application Lifecycle Management PlatformJanuary 26th, 2016 - Forbes: Walmart Makes It Easier To Switch Clouds With OneOpsdJanuary 26th, 
2016 - BusinessWire: Couchbase Supports @WalmartLabs Release of OneOps Cloud Management to Open Source CommunityJanuary 26th, 2016 - Fortune: Walmart Says it Can Cut Your Cloud CostsJanuary 25th, 2016 - WalmartLabs: OneOps Open-Source PaaS now available!2015 and EarlierOctober 16th, 2015 - GeekWire: Walmart plans to release OneOps as open source, targeting AWS and AzureOctober 15th, 2015 - ITProPortal: Walmart OneOps cloud platform goes open sourceOctober 15th, 2015 - The Stack: Walmart open-sources portable cloud code OneOpsOctober 14th, 2015 - NYTimes: Walmart takes aim at Cloud Lock-inOctober 14th, 2015 - Fortune: Walmart wants to help unchain companies from their cloudOctober 14th, 2015 - CIO: Walmart decries cloud lock-in, plans to open-source OneOpsOctober 14th, 2015 - WalmartLabs: OneOps - Were Going Open SourceMay 14th, 2013 - WalmartLabs: OneOps is joining WalmartLabs"
},
{
"title": "Developer Overview",
"url": "/developer/index.html",
"content": "Developer OverviewA OneOps developer carries out modifications of existing aspects of OneOps or adds new aspects. Typicaldeveloper activities are: Core development: Development on any component of the OneOps application stack itself Content development: Creation and maintenance of OneOps packs and circuits. Integrations development: Usage of the OneOps API to create integrations with other applications or workflowsAt a minimum a developer needs to understand the basic concepts of OneOps itself andfor OneOps users. Depending on the development task, OneOps administratorknowledge is potentially required as well.Available resources for developers include: All source code on GitHub Overview Key Concepts Core Development: Development Environment Setup Getting Started with Core Development Relations Content Development: Add a New Component Add a Platform Add Monitors Add a New Chef Cookbook and Pack to OneOps Add a New Platform Pack CMS Sync Create a Custom Payload Create Parameterized Component Actions Default Monitor Thresholds How to Deprecate a Pack Content Development Content Development Introduction Metadata Modify existing component Monitor Override Platform Attributes Pack Development Pack policy Platform Management Pack Relationships Integration Development: Advanced Search and Search API Assemblies API CI Notification Format Cloud Offerings and Services API Design Attachments API Integration Development Inspecting OneOps with the OneOps API OneOps API Documentation "
},
{
"title": "User Overview",
"url": "/user/index.html",
"content": "User OverviewA OneOps user typically desires to manage applications deployed tovirtual environments, called cloud applications.The user interacts with OneOps web application via the user interface and potentially via the API.Activities performed depend on the security access level and are typically focussed around application lifecyclemanagement with OneOps including design, deployment and operation of cloud-based applications.Administrative users perform tasks in OneOps itself that enable other users to manage their applications byconfiguring security, clouds and other aspects of OneOps, potentially workingwith administrators.Available documentation resources for users include: Key Concepts Getting Started General Aspects for Users Account Configuration Design Phase-Related Documentation Transition Phase-Related Documentation Operation Phase-Related DocumentationOneOps enables continuous lifecycle management of complex, business-critical application workloads on anycloud-based infrastructure. You can expect: Agility and speed Faster SDLC due to consistency between environments ,see Design Phase Improved end-to-end process, not just individual steps,see lifecycle Operational Efficiency, see Operations Phase Platform re-usability via best practices, see Packs Real-time resource utilization via auto-scale, see auto-scaleand Monitors Application-driven access control policies, see Teams in Organization Abstraction and dynamic modeling of the demand and supply, see clouds If you dont have access to a OneOps installation you can get started with the installation instructions and more inthe administrator documentation."
},
{
"title": "Integration Development",
"url": "/developer/integration-development/index.html",
"content": "Integration DevelopmentIntegrations development is all about using the various OneOps APIs to create integrations with otherapplications or workflows. These could be simple scripts to automate commonly performed actions or integrationwith other applications to support your specific use cases. Advanced Search REST API API Usage Examples API Reference Advanced SearchOne aspect of integration work is to find the correct resources and entities to work with. The usage ofadvanced search is crucial as it works directly of the raw data rather than secondary indices.REST APIOneOps includes a very comprehensive REST-based API. Most URLs in OneOps that result in a rendered user interface can bemodified to return JSON data for the same entity.For example the configured clouds in your organization example in your OneOps instance are available as user interfaceURL via http://server/example/clouds. On the other hand you can retrieve this list of clouds in JSON format with aHTTP GET call of http://server/example/clouds.json and it would look similar to:[ { ciId: 12346769, ciName: "admin", ciClassName: "account.Cloud", impl: "oo::chef-11.4.0", nsPath: "/example/_clouds", ciGoid: "78267-1249-6754169", comments: "", ciState: "default", lastAppliedRfcId: 0, createdBy: "admin", updatedBy: null, created: 1393611787803, updated: 1404278533896, nsId: 78467, ciAttributes: { adminstatus: "active", auth: "", description: "Admin network", location: "/public/example/clouds/admin" }, attrProps: { } },...The same pattern of a JSON equivalent for a user interface URL applies to specific entities. E.g. the edit URL for acomponent could be https://server/oneops/assemblies/31561093/design/platforms/65971847/components/65971878/edit. Appending.json to the URL so it ends in .../edit.json displays the data in raw JSON.API Usage Examples Inspecting OneOps with the OneOps APIAPI Reference OneOps API Overview Assemblies API CI Notifications API Cloud Offerings API Design Attachments API"
},
{
"title": "Content Development",
"url": "/developer/content-development/index.html",
"content": "Content DevelopmentContent development is all about the creation and maintenance of OneOps packs and circuits and otherconfiguration data in OneOps. The elements implement support for different cloud providers, operating system, applicationplatforms, application servers and other components in OneOps.Read the introduction to get started and take advantage of the other available resources:"
},
{
"title": "Developer - General Aspects",
"url": "/developer/general/index.html",
"content": "Developer - General AspectsBefore you go about developing on OneOps itself - Core Development, on configurationfor OneOps - Content Development, or integrating OneOps with other tools via APIs Integration Development, you want to learn more about general aspects ofOneOps-related development.Here are a bunch of resources to do just that:"
},
{
"title": "Operation",
"url": "/user/operation/index.html",
"content": "OperationOperations is where you monitor andcontrol your environments. On the summary tab, you can drill down by using the right navigation bar.From the top level, with the graph tab, you can visualize the entire health of an environment. On the graph, you candrill down to a component instance. Each component instance has configuration, monitors, logs, and actions.Documentation about the Operation phase in OneOps includes:"
},
{
"title": "Getting Started with Core Development",
"url": "/developer/core-development/index.html",
"content": "Getting Started with Core DevelopmentCore development is all about development on any component of theOneOps application stack itself.All source code for the various components is available on GitHub.The OneOps build relies on Unix scripts and should work on OSX and Linuxoperating systems versions. Prerequisites Building Vagrant Setup Database Schema Versioning and Releasing Common Issues and TipsPrerequisitesThe following tools are required to build and run OneOps on a local developermachine.Must have: Java Development Kit 8 from Oracle Apache Maven 3.5.0 (unless included Maven wrapper is used) Git Ruby 2.0.0, potentially via using RVM Gems Packer Bundler graphviz PostgreSQL 9.2 development file (libpg) Virtual Box VagrantNice to have: Favorite IDE like EclipseIDE or STS Some Git UI And so on BuildingFork and clone the oneops source repositoryand run a build with the Maven wrapper in the created oneops directory:cd oneops./mvnw clean installor directly with Maven, if you have it installed already.cd oneopsmvn clean installThe build compiles, tests and builds all packages.If you want to run OneOps after a build, you can use the vagrant profileduring a build. It creates all necessary configuration for Vagrant to spin upthe newly built OneOps in a VirtualBox virtual machine.mvn install -P vagrantAfter a successful build with the profile you can find the necessary files forstarting a VM with OneOps running in the ~/.oneops/vagrant directory and startthe VM from there.cd ~/.oneops/vagrant/vagrant upOnce the VM is up and running, you can access the OneOps user interface athttp://localhost:9090.Subsequently you can suspend or halt the VM with vagrant or use theVirtualBox user interface as desired.Suspend the VMvagrant suspendand subsequently start it again withvagrant resumeAlternatively you can use halt and up for a clean shutdown:vagrant haltand later a reboot:vagrant upIn order to inspect the VM content itself, you can connect via SSH with vagrant.Find further information about the vagrant command with vagrant help as wellas in the Vagrant documentation.The VirtualBox user interface can be used alternatively.Vagrant SetupBy default the Vagrant instance automatically includes all configuration ofpacks and more fromcircuit-oneops-1.If you want to make this Oneops instance use a modified circuit code on yourhost machine then you need to create shared folders and set up the inductorcomponent to use it instead. You also need to install the circuit to update theCMS database. Below is a modified vagrant file to setup inductor to use ourcircuit-oneops-1 code and to install the circuit. 
A similar approach can be taken with other, potentially additional circuits.
$script = <<SCRIPT
  echo "configuring inductor to use circuit-oneops-1"
  cd /opt/oneops/inductor
  echo "removing existing circuit-oneops-1 and shared symlinks"
  sudo unlink circuit-oneops-1
  sudo rm -Rf shared
  echo "creating symlinks to shared folders"
  sudo ln -s /Some/Path/On/Vagrant/circuit-oneops-1 circuit-oneops-1
  echo "circuit-oneops-1: circuit install"
  cd /opt/oneops/inductor/circuit-oneops-1
  circuit install
  echo "script completed successfully"
SCRIPT
Vagrant.configure(2) do |config|
  config.vm.box = "oneops"
  # Use the vagrant-cachier plugin, if installed, to cache downloaded packages
  if Vagrant.has_plugin?("vagrant-cachier")
    config.cache.scope = :box
  end
  config.vm.network "forwarded_port", guest: 3001, host: 3003
  config.vm.network "forwarded_port", guest: 3000, host: 9090
  config.vm.network "forwarded_port", guest: 8080, host: 9091
  config.vm.network "forwarded_port", guest: 8161, host: 8166
  config.vm.provider "virtualbox" do |vb|
    vb.gui = false
    vb.memory = 6144
    vb.customize ["modifyvm", :id, "--cpuexecutioncap", "70"]
  end
  config.vm.synced_folder "/Some/Path/On/Host/circuit-oneops-1", "/Some/Path/On/Vagrant/circuit-oneops-1", owner: "root", group: "root"
  config.vm.provision "shell", inline: $script
end
Database Schema OneOps uses a PostgreSQL database for model storage. Some information about the model is available in the relations documentation. Versioning And Releasing The OneOps project uses a version scheme of yy.mm.dd-enumerator, e.g. 17.08.02-01 and in development 17.08.02-01-SNAPSHOT. The version can be manually updated to a new value such as 17.08.09-01-SNAPSHOT with
mvn versions:set -DgenerateBackupPoms=false -DnewVersion="17.08.09-01-SNAPSHOT"
Automated CI builds increase the enumerator and are used to create releases with the Maven release plugin via
mvn release:prepare release:perform
Common Issues and Tips If you encounter problems installing postgresql on OSX you may need to use brew:
brew update
brew install postgresql
gem install pg -v '0.17.0'
If the mvn commands above give you any trouble, make sure you are in the top-level folder of the oneops source repository clone and run ./mvnw clean package -Pvagrant"
},
{
"title": "Administrator Overview",
"url": "/admin/index.html",
"content": "Administrator OverviewA OneOps administrator is responsible forinstalling, updating and operating the OneOps application and itscomponents.Once OneOps is running the administrator or administrative users can configure clouds, organizations and othercomponents with the OneOps application. This enables the creation of assemblies that contain applications and allowtheir deployment and management by other users.All these activities and concepts and the relevant terminology are described in the user documentation.Resources for OneOps administrators: Overview Key Concepts Installation Operate: Build, Install, and Configure an Inductor Client-Side Aggregations Administrator Operation Inductor Load Content Metric Data Source Type Metrics Collection oneops-admin gem OneOps Manages OneOps Running OneOps in Production Sensors Set Up Log Forwarding to ES with Logstash Administrator Testing and Debugging Upgrading OneOps "
},
{
"title": "Inductor",
"url": "/admin/operate/inductor.html",
"content": "InductorThe Inductor consumes WorkOrders (rfc / configuration change) or ActionOrders(start, stop, etc) from a queue by zone, executes them and posts a resultmessage back to the controller.A WorkOrder is a request for change (RFC) of a configuration item (CI). An ActionOrder is request to perform an action thatis typically not associated to a configuration item. Example action are reboot, repair, snapshot, restore, etc.The account.Cloud.location is used by the controller to publish the order into a queue. The inductors consume,does the work and publish the result back to the controller.response queue.Here is the logical flow from CMS:Inductor DetailsThe inductor is Java core with Ruby for control. The Java side is standard maven + spring,basically a Spring DefaultMessageListenerContainer using Apache Commons DefaultExecutorto spawn either local chef-solo (for IaaS or non-managed via orders) or a remotevia SSH chef-solo execution.The image below shows a logical view of the classes in com.oneops.inductor.There is a ruby gem to simplify setup and control.Source / Downloads Repo GemControlAn Inductor runs using Java jar with a several arguments. There is a gem or bash script to make easier.Inductor gem: inductor help, start, stop, restart, statusorInductor control bash script located in the root dir of the repo: ./inductor start,stop,restart,statusLogs / Inductor Log Agent and SinkThe Inductor will put logs where the conf.dirs log4j.xml specifies. The gem redirects to the relative log dir.The inductor logs are shipped using logstash forwarder to backe end elastic search cluster.The UI uses the daq api (Spring based) PerfController to get data.Inductor Directory StructureThe directory structure after you have created inductor successfully will look like this,cd /opt/oneops/inductor circuit-oneops-1 -> /home/oneops/build/circuit-oneops-1 from (https://github.com/oneops/circuit-oneops-1) clouds-available # All inductor which are created will go in this public.oneops.clouds.aws clouds-enabled public.oneops.clouds.aws -> ../clouds-available/public.oneops.clouds.aws Gemfile Gemfile.lock init.d inductor lib client.ts log shared ## Refer (https://github.com/oneops/oneops-admin/tree/master/lib/shared) cookbooks exec-gems.yaml exec-order.rb hiera.yaml"
},
{
"title": "Inspecting OneOps with the OneOps API",
"url": "/developer/integration-development/inspect-oneops-with-api.html",
"content": "The typical way of running OneOps is by having it managed by OneOps itself.The following example assumes that it is running in the organization oneops and the open source circuit of packscircuit-oneops-1.Listing PacksThe user interface URL for showing the packs is https://server/oneops/catalog#packs. The equivalent JSON data isavailable at https://server/oneops/catalog/packs.json:{ packs: { oneops: { docker: [ "1" ], cassandra: [ "1" ] } }}Inspecting a PackAs a next steps we investigate the cassandra pack in the user interface athttps://server/oneops/catalog/packs/oneops/cassandra/1/platforms/cassandra.The components in the pack are available as JSON data athttps://server/oneops/catalog/packs/oneops/cassandra/1/platforms/cassandra/components.json. The information in theresponse includes required as well as optional components in the pack:[ { rfcId: 0, releaseId: 0, ciId: 1592134537, nsPath: "/public/oneops/packs/cassandra/1", ciClassName: "mgmt.catalog.oneops.1.Storage", impl: null, ciName: "storage", ciGoid: "1592134537-31977-159212537", ciState: "default", rfcAction: null, releaseType: null, createdBy: null, updatedBy: null, rfcCreatedBy: null, rfcUpdatedBy: null, execOrder: 0, lastAppliedRfcId: 0, comments: "root:/usr/local/rvm/gems/ruby-1.9.3-p547/bin/knife", isActiveInRelease: false, rfcCreated: null, rfcUpdated: null, created: 1473894033684, updated: 1473894033684, ciAttributes: { volume_type: "GENERAL", size: "20G", slice_count: "1" }, ciBaseAttributes: { }, ciAttrProps: { owner: { } }},The map in ciAttributes contains the actual attributes of the component, based on its meta data definition and packoverrides. You can get more information about a specific component by accessing the JSON formatted component data usingthe component name. E.g. for the volume component usehttps://server/oneops/catalog/packs/oneops/cassandra/1/platforms/cassandra/components/volume.json. The returned data includesdefined relationships for the component and other metadata such as ciAttributes, requires, dependents and dependsOn.This data includes constraints such as only allowing one component per platform (1..1) and the requirement for anothercomponent like a compute:requires": { relationAttributes": { "constraint": "1..1", "services": "compute", }}Class MetadataThe API includes a method to access the metadata for a given class with the help of its name - the ciClassName oftenreturned as part of other API calls for entities. The metadata API endpoints supports the class_name argument for arequest to a URL such as https://server/metadata?class_name=catalog.oneops.1.Volume. The metadata API returns all known dataabout the specific class such as its default values, relations and other attributes:{ classId: 32172, className: "catalog.oneops.1.Volume", shortClassName: "Volume", superClassId: 32155, superClassName: "base.oneops.1.Volume", accessLevel: "global", impl: "oo::chef-11.18.12", isNamespace: false, flags: null, useClassNameNS: false, description: "Volume", extFormat: null, created: 1452727128309, mdAttributes: [ ... ], fromRelations: [ ... ], toRelations: [ ... ], actions: []}"
},
{
"title": "Installation",
"url": "/admin/installation.html",
"content": "InstallationOneOps is a complex application with many components and interactions as you can see in thekey concepts documentation. As a result installation is potentially a complicatedprocess.The installation process can be performed by:Building OneOps from source including creation of VM via VagrantFrom that initial install you can then to proceed to successfully operatingOneOps for your users including Manage OneOps with OneOps Upgrade OneOps Running OneOps in Productionand others."
},
{
"title": "Integrations",
"url": "/overview/integrations.html",
"content": "OneOps is a marketplace of cloud providers and software products designed for DevOps developers.Supported Clouds Supported By Notes @WalmartLabs Private Cloud Nova, Neutron, Swift, Cinder, Glance Liberty, Kilo and Juno Releases Rackspace Public & Private Cloud Cloud Servers, Cloud Images, Cloud Load Balancers, Cloud DNS, Cloud Network @WalmartLabs & Microsoft Public & Private Cloud Virtual Machines, Storage, ExpressRoute, Azure DNS, Virtual Network, Load Balancer @WalmartLabs Public Cloud EC2, ELB, EBS, Route 53 @WalmartLabs & Google Public Cloud If your company is interested in integrating your cloud in to the OneOps product, then please Email UsSupported Products@WalmartLabs has many products integrated with OneOps which will be released within the first few months after having made OneOps open source. Weve launched OneOps with a verified capability to manage Node, Java and LAMP based applications on all the supported cloud providers. The other products included in this initial release are required for OneOps to be deployed and to manage itself. Theyre part of the building blocks of the OneOps technology. The source of the circuit is the authoritative reference for available integrations. Supported Versions >Supported By 2.2.0, 2.5.2, 4.0 Couchbase 0.10.17, 0.10.26, 0.10.33, 0.10.35 @WalmartLabs 1.9 @WalmartLabs 1.2, 2.0, 2.1, 2.2 @WalmartLabs 9.1, 9.4.* @WalmartLabs 6, 7 @WalmartLabs 14.04 @WalmartLabs 7 @WalmartLabs 1.8.7, 1.9.3, 2.0.0 2.2.* with Rails 4.2.* @WalmartLabs 5.6.* @WalmartLabs 2.2.21, 2.4.* @WalmartLabs 1.1.1, 1.3.2, 1.4.1, 1.4.4 @WalmartLabs 2.1.0 @WalmartLabs Apache ZooKeeper 3.4.5 @WalmartLabs 5.1.73, 5.7.* @WalmartLabs 2.6.16, 3.0.1 @WalmartLabs 2.6.16, 3.0.1 @WalmartLabs 2.8.5, 3.4.2, 3.4.2 @WalmartLabs 5.5.1, 5.9.1, 5.10.0, 5.12.* @WalmartLabs 6, 7, 8 @WalmartLabs 5.1.2 @WalmartLabs 4.10.3 @WalmartLabs 1.1.2 @WalmartLabs @WalmartLabs If you or your company are interested in contributing support for a software product or open source project to OneOps, then please Email UsGetting HelpConsulting and Professional Services: If youre interested in providing consulting or professional services to companies interested in using OneOps, then please Email Us"
},
{
"title": "Content Development Introduction",
"url": "/developer/content-development/introduction.html",
"content": "Content Development IntroductionUse this section to set up your environment for OneOps pack (circuits)/cookbook development on a Mac OS X or aLinux system and to create a new component (cookbook) with its accompanying pack.PrerequisiteFor both API and Circuit development you need access to a running OneOps deployment. To build your own deployment,start with the Installation section.Circuit DeveloperFor Circuit development, follow the instructions below.InstallationInstallation of OneOps is described in theadministration section.Setup and ConfigurationDevelopmentThe easiest way to start is to copy an existing cookbook and pack (for example, Tomcat) and develop on that. To doit from scratch, follow the procedure described below: Select the circuit repo. To create a new component (cookbook), use the knife command.Create a Cookbook To create a cookbook, enter the following:Change to repo$ cd circuit-oneops-1Create new component called mycomp$ bundle exec knife cookbook create mycomp** Creating cookbook mycomp** Creating README for cookbook: mycomp** Creating CHANGELOG for cookbook: mycomp** Creating metadata for cookbook: mycompStart defining the attributes$ vi components/cookbooks/mycomp/metadata.rb Develop the recipes for your component (for example, add, delete, update, replace, repair lifecycle actions). After you complete the cookbook design, create the pack under the /packs directory. Define its resources (components) and relationship between them. For more details, refer to an existing pack(for example, Tomcat).Create a PackTo create a pack, follow these steps:$ cd circuit-oneops-1/packs$ cp tomcat.rb mypack.rb$ vi mypack.rbAs in the case of chef recipes, a OneOps pack is also defined using a custom Ruby DSL with syntax like variable,resource, relation, etc. Because the Pack DSL is a Ruby DSL, anything that can be done using Ruby can also be donein a Pack, including if and case statements, using the include? Ruby method, etc.For detailed information on how to develop a pack, see Add a Platform.Create a CircuitTo create a circuit, refer to: Add a New ComponentConfirming it WorksTesting This section is useful if you are using a shared dev instance in your organization .Before you start the testing, make sure you have access to your OneOps dev instance and have added your SSH keysto the Inductor.# Sync the metadata and packs.# This is only required if there are any changes in the metadata or pack:# Pack sync commands.cd circuit-oneops-1/Export the OneOps CMS API.export CMSAPI=http://<your OneOps instance>:8080/Sync the mycomp metadata.bundle exec knife model sync mycompSync mypackbundle exec knife pack sync packs/mypack --reloadClear the Cachecurl http://<your OneOps instance>:8080/transistor/rest/cache/md/clearSelect the Repo.cd circuit-oneops-1Copy the Cookbook to the Corresponding Repo in the Inductor (/opt/oneops)scp -r components/cookbooks/mycomp ooadmin@<your OneOps instance inductor>:/opt/oneops/circuit-oneops-1/current/components/cookbooks/Copy the Pack File to the Corresponding Repo in the Inductor (/opt/oneops) scp packs/mypack.rb ooadmin@<your OneOps instance inductor>:/opt/oneops/circuit-oneops-1/current/packs/Add an icon image for the pack Component Icon : 128x128 PNG graphic - Add to circuit-oneops-1/components/cookbooks//doc (ex. apache_cassandra) Pack(Platform) Icon : 128x128 PNG graphic - Add to circuit-oneops-1/packs/doc/ see https://github.com/oneops/circuit-oneops-1/tree/master/packs/docTesting via GUI Create a new Assembly and environment. 
Now you are ready to test the pack by creating a new assembly and env in https://<your OneOps instance inductor>/. Do not store any of your source code in the oneops/inductor dev env. This env is upgraded every Wednesday as part of the regular OneOps release cycle. Create a pull request. After you make sure that everything is working fine in the dev env, commit the code and create a pull request from your forked repo."
},
{
"title": "Java Component",
"url": "/user/design/java-component.html",
"content": "Java ComponentThe java component represent theJava environment available for platform runtime usage.AttributesFlavor: Configure usage of a Java distribution from Oracle or the open sourcereference OpenJDK.Package Type: Defines whether a Java Development Kit JDK, a Java RuntimeEnvironment JRE or a server-optimized runtime Server JRE is used.Version: The major version of Java to use. Update: The optional update version of Java to use. An empty value defaults tothe version configure in the Java component. To use, for example, Java 8u144,the update value has to be set to 144. Binary Package: Optional path for downloading the package.Installation Directory System Default Package TypesServer JREA server JRE should be used when deploying Java applications on servers. Itincludes tools for JVM monitoring and tools commonly required for serverapplications, but does not include browser integration (the Java plug-in),auto-update, an installer or development tools such as a compiler.JDKA Java Development Kit JDK is suitable for Java development tasks. It includesthe JRE and server JRE components as well as development tools such as compiler,debugger, monitoring tools and others.JREA Java Runtime Environment JRE is suitable for end users running Java on adesktop and is suitable for most end-user needs."
},
{
"title": "Job Component",
"url": "/user/design/job-component.html",
"content": "Job ComponentThe job component is available in a number of platforms and can be used to execute specificcron tasks on a regular schedule. Examples are cassandra, es and many others.Typically it is configured as an optional component and can simply be activated by adding it with the + button in thecomponent list.You can locate defined jobs with a [search](../general/search.html] using a Class value of manifest.Job forconfigured jobs in the design phase or bom.Job for deployed jobs in the operation phase.AttributesBesides the global configuration available for any component such as Name and Description, you can configure thefollowing attributes:Schedule: You define the schedule by specifying a values for Minute, Hour, Day, Month and Weekday. Numbersas applicable for the range as well as * as placeholder for any value are valid.Command: The command line command or script to execute.Options - User: Specify the operating system user to use for the command execution.Options - Variables: You can define environment variable values to set prior to the job execution."
},
{
"title": "Developer Key Concepts",
"url": "/developer/general/key-concepts.html",
"content": "Key ConceptsOneOps System ArchitectureAs a pack developer, you dont need to know the details of system architecture, but if you want to learn morerefer System Architecture.Model Overview Components: are the lowest level building blocks Platforms: consist of components and relationships for dependencies and management Assemblies: consist of platforms with interdependencies Environments: consist of assemblies plus availability mode components inserted for youThe UI allows customization of components, platforms, assemblies, and environments. To add new components or platforms to the system, it is currently necessary to add files to the packer directory structure.Model and Schema Classes All of the metadata in the OneOps system is class-like Class metadata is loaded into the Configuration Management System (CMS) by the Packer Class methods are backed by Recipes Classes are identified by names and attributes Classes belong to packages Packages are related to workflow Catalog.* = Design Phase Manifest.* = Transition Phase Bill of Materials (BOM.*) = Operational PhaseCatalog Design RelationsIn the Design aspect, we model the base application with: No environmental No operational componentsFor a Design relations diagram, see Relations in the Reference section.Transition RelationsIn the Transition aspect, we model two additional objects: IaaS components: Can be load balancers (haproxy) or DNS (djbdns). Can also use provider-based ones like route-53, dyndns, elb, etc. Monitors: Use to customize monitors for each environmentFor a Transition relations diagram, see Relations in the Reference section.Bill of Materials Operational RelationsIn the Operations aspect, we create bom components for the manifest components with relation to the Binding(cloud provider). For a operational relations diagram, see Relations in theReference section.ComponentA Component is a cookbook directory and the lowest level building block that is modeled. There are three aspects of aComponent: Component Class: Attribute and idempotent add, update, delete, start, stop logical operators Component Resource: Used in a Platform Management Pack tomap the Component class to a component in a platform that is available in the UI Component Instance: Component instance in an Environment.The Relational Model shows how a Component is modeled in Design, Transition, and Operations with regard to aspects of the OneOps UI.A Component is a basic building block of a OneOps platform. A OneOps component is a chef-solo cookbook with its UIcomponents and actions defined in a cookbooks metadata.rb. 
For example: Compute, Secgroup, Volume, User, Java, Tomcat, Artifact, etc. Model Directory Structure OneOps extends Chef's Ruby-based DSL and reuses its directory structure. OneOps adds packs, an extensible object-oriented management model, that contains: Relationships with flex/scale attributes Monitors and attribute defaults Metrics and thresholds/triggers to repair, scale, notify only Providers for compute, DNS, etc. CMS Sync To update the CMS database with new component metadata and/or platform management packs, we extended Chef's knife to load (model sync) the files in the directory to the database. Component Class Component Class is the lowest-level configuration entity of OneOps metadata and: Is defined by name, attributes and methods Includes the notion of simple inheritance Is backed by a Chef cookbook Has attribute metadata (such as type, default values, format, etc.) defined in the cookbook's metadata.rb file Has all related methods (such as add, update, delete, etc.) defined as Chef recipes Has attributes and idempotent control code (Chef recipes) to manage its lifecycle Has control/recipes: add, update, delete, start, stop, restart, status, repair, custom Example components are: cassandra, compute, rabbitmq, storage, php. A Component Class must have: Cookbook and Packer: See Model Directory Structure. Metadata: Model that describes attributes, help, default values Recipes: add, update, delete, start, stop, restart, repair For instructions on how to add a new component, see Add a New Component. Component Resource Component resource mappings are covered in the sections Platforms and Platform Management Packs. Metadata Metadata models a component. The metadata.rb file contains information that is primarily used for the OneOps UI to provide information to the user and collect configuration information from the user. The file has several parts: base/required Attributes (name, description, etc.) grouping Sub-groups of attributes CMS models. For an example, see Metadata attributes Defaults, format: UI metadata recipes Default actions. Add, update and delete are assumed and do not need to be added. Actions can also be added via the UI as on-demand attachments. Relationships We also model relationships. A relationship describes a dependency order between components. For additional detail, refer to Relations. Relation Class The Relation Class defines which Component types it can establish relationships with. Like the Component Class, the Relation Class is identified by names and attributes. Platform A Platform is a collection of components that are grouped for reusability. These are building blocks to create applications via the UI by adding platforms to an assembly. Circuit Circuit is a Ruby application (packaged as a gem) that loads Component Classes, Relation Classes and Platform Templates defined in pack files into the Configuration Management System (CMS) database. Platform Pack A Pack (circuit) is a collection of components with dependencies defined between them. The dependency relation is used to define the execution order (for example, when and where to execute these components). Essentially, packs are how you wire components together. 
Packs include: Pack structure Inheritance Pack upload Packs also contain configuration for: Cloud service dependency Component cardinality constraints Default attribute values Monitor and threshold definitions Custom payload details Flex relations Entry point information and different relationships like: depends_on managed_via secured_by Packs are used to define OneOps platforms. For example: Tomcat, Apache, NodeJS, Couchbase, Postgres, etc. Platform Management Pack A Platform Template is added to the system by creating a Platform Management Pack (Pack) file and loading it into the CMS. A Pack is a Ruby DSL file that models a platform with respect to each availability mode. It exists in the model directory structure. The file contains three parts: Component Resources: Named resources with a type (cookbook attribute). See the Component Class name in the sample pack below. Relationships/dependencies with flexing/scaling attributes (Optional) Metrics and Thresholds The Platform Management Pack defines how corresponding Platforms should look, how they should be deployed, and how they function in different operational Environments. It defines required and optional Component Classes for the Platform based on SLA requirements. It can also define the default values for Component attributes. In OneOps terms, the Platform Management Pack is analogous to the Platform Factory definition. Management Packs are defined using OneOps' proprietary Ruby-based DSL. Pack Creation Inheritance A platform can inherit one or more additional packs by using include_pack. For example: include_pack 'generic_ring' Properties A platform contains properties that are used in a variety of ways, including in the OneOps UI. Examples include: Name: couchbase Description: CouchBase Type: Platform Category: Database NoSQL Resource A resource is a component that may be used in a pack. Uploading Packs The OneOps Packer is a Ruby-based application that loads Component Classes, Relation Classes and Platform templates from Platform Management Packs into the Configuration Management System (CMS) database. Component and Pack Validations Chef node Basic payload Dynamic payload Cloud Service Types Mandatory: Packs that depend on such services cannot be enabled (deployed) unless the required service is present in the cloud. For example: a compute service is required by most packs to provision computes. Optional: Packs that depend on such services can be enabled even when the associated service is absent in the cloud. For example: the NTP service is optional in compute deployment. If the cloud service is present and the associated component has this service enabled, then the deployment makes use of this service. OneOps Clouds A OneOps cloud can be defined as a logical group of cloud services that enable resource allocation/usage. Typically a cloud contains: Compute provisioning service: OpenStack, Rackspace, etc. DNS provisioning service: Infoblox, etc. Repository with packages/repositories/application-artifacts: Nexus, etc. OneOps creates multiple clouds per organization. Currently clouds are based on the availability zone that is created by the elastic cloud team per data center. OneOps and Chef Chef is a Ruby-based DSL for systems administration of Unix or Windows systems by Opscode. As a configuration management tool, Chef uses a pure-Ruby, domain-specific language (DSL) for writing system configuration recipes or cookbooks. Chef is released as open source under the Apache License 2.0. Cookbook Cookbook is the fundamental unit of configuration and policy distribution. 
A cookbook defines a particular scenario, for example, everything needed to install and configure an Apache or Tomcat server and the resources that support that. Recipe A recipe describes configuration conditions. A recipe is stored in a cookbook and declares everything that is required to configure part of a system. For example, a recipe can: Install and configure software components Manage files Deploy applications Run other recipes Perform component lifecycle actions like add, update, and delete Build Most platforms have an optional Build Component. This allows you to add your code via Git or SVN. It extends the Chef deploy resource and adds many features."
},
{
"title": "Administrator Key Concepts",
"url": "/admin/key-concepts.html",
"content": "Key Concepts OneOps is a multi-cloud application orchestrator. It lets you design your application in a cloud agnostic way (byabstracting multiple cloud providers) and manages your applications design, deployments, operations &monitoring. Check out the list of supported cloud providers.Architecture OverviewOneOps includes a self service portal for users to administer the applications, has a back end automationengine to support complete application life cycle management.OneOps has a back end loop to monitor resources and can trigger auto-repairs, auto-scales ornotificationsSystem ArchitectureThe diagram below depicts a detailed system architecture .Web App aka display Self service portal for managing applications, clouds, organization,services. Rest based APIs to do almost anything which can be done on UI. Can be integrated with sign on from AD sourceCLICommand line ruby gem for managing almost all aspects of OneOps. sourceUser DBUser schema to manage users, organization.Packer/CircuitIts a ruby based gem which is responsible for loading packs. sourceCMS API aka adapterJava based rest api to manage model, assemblies, environment. sourceTransistorTransistor is core web application responsible for creating design, deployment plan, comparing whats deployed towhats intended conforming to pack, user changes to configuration on design or Transistor.. sourceDAQDAQ provides rest apis to get data collected via collectors. Used for graphing monitor details in UI. sourceAntennaAntenna is responsible for persisiting/serving OneOps notifications into Cassandra db and distribute them to theconfigured Notification Sinks. sourceConfiguration Management Database System of records for all assemblies,enviroments,deployments. source Transmitter (Publisher)This component tracks the CMS changes and post the events on the messaging bus. sourcePerfdataWe store metrics collected from back end into CassandraElastic SearchElastic search is used to store notifications generated from OneOps and deployment logs are stored.SearchAll cms,controller events and notifications are fedinto elastic search which helps in implementing Policy Cost Deployment/Release histories. sourceMessage BusOneOps uses apache active mq as messaging layer for internal internal communication between components.SensorSensor consumes metrics coming from collector and generate events if thresholds violations are detected andgenerate Ops events. Esper based CEP to detect monitor thresholds violations sourceOpampIts an OneOps event processor to trigger auto-healing, auto-replace,or generate notifications. sourceCollectorIts a Logstash collector which collect metrics from managed instances in OneOps. sourceControllerIts an activiti based workflow engine responsible for distributing OneOps work orders and action orders. sourceInductorThe Inductor consumes WorkOrders or ActionOrders from a queue by zone, executes them and posts a resultmessage back to the controller. It is written in Java and uses a Spring Listener Container and Apache Commons Exec for process execution.Inductor can be installed via oneops-admin gem.WorkordersA WorkOrder is a collection of managed objects that are used to add, update or delete a component.ActionOrdersAn ActionOrder is almost identical to a workorder, but instead of an rfcCi, it has only a CI. An ActionOrder isdispatched by the controller to run some action such as: reboot, repair, snapshot, restore, etc."
},
{
"title": "User Key Concepts",
"url": "/user/general/key-concepts.html",
"content": "Key ConceptsThe OneOps process uses the following phases: Design: Where an applications architecture is described Transition: Where an application design is realized in an environment Operations: Where instances are managed and monitoredOneOps Continuous LifecycleWith OneOps, your design becomes much more than a simple template. Its a continually maintained dataset where thenotion of change is always recognized.In fact, OneOps was created from the ground up to manage the issues that arise with continuous change. Inaddition, OneOps automatically scales and repairs your application to ensure high availability and optimalutilization of your cloud infrastructure. Your browser does not implement HTML5 video. The following diagram describes how Platforms, Components, and Instances relate to each other in the differentlifecycle phases: Design, Transition, and Operations.OneOps streamlines the three phases of the lifecycle design, transition and operations.Design in OneOpsDesign is an area within an assembly where the application architecture is described. The applicationcomprises of platforms containing optional or required components. Applications can be: Designed from scratch by adding platforms Bootstrapped from predefined Application templates called catalogs Multiple environments of application can share common design configuration (eg OS version would be common indev, qa, prod environments). Configuration changes are buffered in a Release and are not applied until the release is committed, makingthem trackable and audit able. (Releases apply to transition too)Define your application workload based on your architectural and application requirements. Visually assemble your application Select from a library of platform packs Fine-tune components inside each platform Modify your design with version control Your browser does not implement HTML5 video.See also Add a platform Add a componentTransition in OneOpsTransition is an area within an Assembly where the application Design is realized in an Environment. You canhave multiple Environments derived from a single Design.Provision environments by mapping the design output against operational requirements. Create and customize multiple environments Specify availability requirements Bind to your cloud provider of choice Deploy with effortless automation Your browser does not implement HTML5 video.EnvironmentAn environment is a realization of the application Design after its operational requirements (e.g. single for dev/redundant for qa and prod) are applied. It is an abstract layer of configuration, no real instances exist untilan Environment is deployed.See also Environment Profiles Availability Modes Transition Environment MonitorsOperations in OneOpsOperations is an area within an Assembly where instances are managed and monitored. Each Environment isrepresented in Operations.Monitor and control your environments to maintain the required operational levels. Monitor the health of your application View configuration, metrics and logs Enable auto repair, auto scale and auto replace Perform manual control actions Your browser does not implement HTML5 video.CloudsClouds in OneOps is logical collection of supply side services which satisfy business requirements ofOrganization. Some examples of cloud services Compute service (supplied by private openstack , Azure, AWS, Rackspace) Storage Load Balancing DNSAssemblyAn Assembly is an independent workspace where Applications are managed. One Organization can have multipleAssemblies. 
Each Assembly has corresponding subspaces for Design, Transition and Operations. Platforms, Platform Links and Components Platform A Platform is a building block of an application. Each Platform type is backed by metadata which defines its required and optional Component sets, along with its operational behavior. Platform Links Within an Assembly, the end user can set links to dependencies between Platforms. These dependencies are used to generate a proper deployment sequence for the Platforms. For example, when you link a web Platform to a database Platform, the database deploys first. Then, when the web Platform comes up, the database Platform is ready. Component Each Platform is comprised of Components, which are the lowest-level building blocks in the OneOps system. Some Component examples are compute, storage, Cassandra and PHP (see available components). When you add a Platform to your application design, a set of required Components with default attribute values is automatically created within the Platform. You can modify the attribute values for required Components and you can add optional Components. A Component has model and control logic to manage its lifecycle, such as add, update, repair, and more. Component Dependency There are Dependencies between Components within a platform. These Dependencies are used to generate a proper sequence of deployment steps. The end user can add lateral Dependencies between neighboring optional Components. For example: If you have a Build that depends on another Build, you can set a Component Dependency between them. Catalog You can bootstrap your design using pre-loaded application templates called Catalogs. OneOps provides Catalogs for common commercial and open-source applications. There are different categories of Catalogs, such as content management (e.g. WordPress). You can also create custom Designs and save them in a private Catalog. This enables you to share Designs, which helps to drive architectural consistency within your organization. Organization The OneOps Software as a Service (SaaS) solution is a multi-tenant application. An Organization is an isolated entity within which all related Operations are performed. Environment Profiles Environment profiles are pre-defined templates that are used to derive concrete environments. They are abstract environment definitions that allow environments to be categorized or classified by associating a given environment with an underlying environment profile. Typical examples of profiles include prod, QA, etc."
},
{
"title": "Keystore Component",
"url": "/user/design/keystore-component.html",
"content": "Keystore ComponentThe keystore component works together with the certificate component of a platform tomanage SSL certificates and uses the certificates in the Keystore used in the Javaplatform and therefore platforms such as Tomcat, Kafka and others.AttributesKeyStore Filename: the filename for the keystore, typically with a .jks extension.KeyStore password: the password to access the keystore. It should be the same passphrase as used in the certificatecomponentExampleSee a full example usage in the certificate component documentation"
},
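For orientation, here is a minimal sketch of how the keystore component might be declared in the design YAML format documented under Load/Extract. The component class name keystore/oneops.1.Keystore and the attribute keys keystore_filename and keystore_password are illustrative assumptions, not confirmed names; check the pack and component metadata for the real ones.

```yaml
# Hedged sketch: class and attribute names are assumptions, not confirmed.
platforms:
  app:
    pack: oneops/tomcat:1
    components:
      keystore/oneops.1.Keystore:                    # hypothetical class name
        keystore:
          keystore_filename: /app/certs/app.jks      # typically a .jks file
          keystore_password: $OO_LOCAL{KEYSTORE_PASS}  # same passphrase as the certificate component
```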
{
"title": "Load Balancer Component",
"url": "/user/design/lb-component.html",
"content": "Load Balancer ComponentThe lb component is of core importance and part of mostplatforms since it defines the load balancing of request being received by theplatform.You can configure the lb component as part of your platform in design phaseand specific to an environment in the transition phase.Once your assembly is deployed in an environment you can access the lb inoperation.ConfigurationBesides the global configuration available for any component such as Name, youcan configure the following attributes:LB Service Type: Defines type of LoadBalancing service to use. Two optionsavailable: lb: for physical loadbalancer service e.g. Citrix Netscaler. slb: for software loadbalancer service e.g. Octavia from the openstack project. Switching the service type after an initial deployment is not advised and canlead to inconsistent state of the deployed system.Listeners: Defines the ports that the LB will be listing on for incomingtraffic.LB Method: Defines the protocol used to forward traffic to balance the load tothe servers. Two methods are available described below: Least Connection Method: The default method, when a virtual server isconfigured to use the least connection, it selects the service with the fewestactive connections. Round Robin Method: This method continuously rotates a list of services thatare attached to it. When the virtual server receives a request, it assigns theconnection to the first service in the list, and then moves that service to thebottom of the list.Session Persistence: Directs the LB to send related requests to the sameserver. Additional attributes described below if session persistence is checked.Persistence Type, Cookie Domain and LB Custom Attributes are shown when SessionPersistence is checked. Persistence type: Defines the method the LB will provide thepersistence. Two available are described below.v SourceIP: The LB caches the IP of the server to send related traffic to. Cookieinsert: The LB caches a cookie for a period of time which directs the traffic. Cookie Domain: Defines the domain where the cookie is valid.Create Cloud vips: Vips at the cloud level are available by default. Thischeckbox will allow the vips to be avaialble to all cloud regions in the datacenter.Enable LB Group: Used to enable the LB group for persistence across alllisteners.LB Custom Attributes: Used to define and utilize additional attributesavaialble on the LBs. Examples of this are: Enable Access Logs and ConfigureConnection Draining. (ex. Key/Value - ConnectionDraining/Enabled: True,Timeout:300). Please check for the available attributes for your lbs.Required Availability Zone: Used to horizontally scale physical LBDevices.Connection Limit: Applicable only for software loadbalancers. The maximumnumber of connections per second allowed for the vip. Valid values: a positiveinteger or -1 for unlimited (default). Compute Related AttributesECV: Used to define service monitors. (ex. 80 => GET/someservice/node). Port-available monitors are used for tcp(s) and udp.Note: Currently ECV checks will use TCP on Azure. If your app server(ex. Tomcat) is listeneing on port HTTP 8443 and runs out of listeners, the lbwill think the server is still listening and direct traffic to it. Please beaware.ServiceGroup Custom Attrs: Used to define additional attributes available tothe Server Groups defined. Examples of these are: servicegroupname, servicetype,maxclient, cipheader. 
Please check for the available attributes for yourlbs.Octavia Software Load BalancerOctavia is an open source scalable software load balancer that is part of theOpenStack ecosystem. Compared to using physical load balancers such asNetscaler, it can increase operational efficiency and reduces outages.Octavia and barbican service need to be enabled in the OpenStack cloud to usethis feature.The connection limit parameters is only applicable for software loadbalancerand set the maximum number of connections per second allowed for the virtual IP.Valid values are a positive integer or -1 for unlimited (default).SLB listener options available are explained below.Octavia SLB offers all three types of LB connections as available in Physical Netscalars.Listener configurations for each type is explained in the following.Non-Secure Load balancerAn unencrypted, non-secure LB connection uses HTTP with a listener configuration such as:http 80 http 8080This configuration forwards any incoming requests on port 80 to the internal port 8080.Non-Terminated HTTPS Load BalancerThis type of connection enables end to end SSL encryption with HTTPS.https 443 https 8443Any incoming requests on the default HTTPS port 443 are forwarded to theinternal computes at 8443. It is necessary to configure the needed certificatein the certificate component.The certificate is copied to the backend servers and certificate verification isdone at the backend.Terminated HTTPS Load BalancerThis type of connection allows SSL encryption between client and the loadbalancer. Connectivity from the load balancer to the backend servers isunencrypted:https 443 http 8080This configuration takes incoming HTTPS requests on port 443 and forwards themas HTTP requests to port 8080. The advantage of this configuration is that itrequires no certificate configuration on the computes.It is necessary to configure the certificate in thecertificate component. The certificateis copied to the load balancer only and certificate verification is done at theload balancer.Usage of the terminated HTTPS load balancer configuration requires the barbicanservice in the openstack cloud. Certificate details entered in lb-certificatecomponent are converted to barbican secret and containers for octaviaload balancer to use with the TLS_Termination option."
},
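A hedged sketch of a terminated-HTTPS lb configuration in the design YAML format documented under Load/Extract, using the "protocol port protocol port" listener syntax from the entry above. The class name lb/oneops.1.Lb and the attribute keys listeners and ecv_map are assumptions for illustration; consult your pack's component metadata for the actual names.

```yaml
# Hedged sketch: class and attribute names are assumptions, not confirmed.
platforms:
  app:
    pack: oneops/tomcat:1
    components:
      lb/oneops.1.Lb:                            # hypothetical class name
        lb:
          listeners: '["https 443 http 8080"]'   # terminated HTTPS: TLS ends at the LB
          ecv_map: '{"8080":"GET /health"}'      # hypothetical service monitor mapping
```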
{
"title": "Library Component",
"url": "/user/design/library-component.html",
"content": "Library ComponentThe library component is available on all platforms. Itcan be used to install operating system packages."
},
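Since the entry above is terse, here is a hedged sketch of what using the library component in a design YAML file could look like; the class name library/oneops.1.Library and the packages attribute key are assumptions, so verify them against the component metadata.

```yaml
# Hedged sketch: class and attribute names are assumptions, not confirmed.
platforms:
  app:
    pack: oneops/tomcat:1
    components:
      library/oneops.1.Library:           # hypothetical class name
        library:
          packages: '["htop","strace"]'   # OS packages to install on the compute
```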
{
"title": "Load Content",
"url": "/admin/operate/load-content-model-images.html",
"content": "Load ContentCircuit OverviewA circuit is a chef ruby-dsl based model of some application or service. It contains the model of what resources / components are required, optional and how they relate to each other. Two common relations are depends_on and managed_via. An architect would usually design a pack to capture best practices for the different availablity modes / operational modes.An example would be a tomcat circuit. In the circuit file it would contain a compute, java, tomcat, and artifacts and/or build components and describe the dependencies and where they run (managed_via relation).A prior name of a circuit was a pack. Whenever you see the term pack in oneops, its the same thing as a circuit.Circuit repoThe basic circuit repo is: https://github.com/oneops/circuit-oneops-1It contains 3 primary directories for the models: /components - which can be implmented in chef cookbooks, puppet modules, and soon ansible playbooks. Each component has a doc dir which has the image the ui will use for the component. /clouds - default / templates for clouds and cloud services. /packs - the directory with all the packs/circuitsLoad contentThere is a circuit command which is part of the oneops-admin gem. This circuit command is used to load content / the model.Download and install the latest gem from the build server or build yourself from https://github.com/oneops/oneops-adminadvanced option to use object store backed images and docs, is to modify the circuit repos .chef/knife.rb with some object store config. Example lines to add to circuit-oneops-1/.chef/knife.rb:object_store_provider 'OpenStack'object_store_user 'oneops'object_store_pass 'redacted'object_store_endpointenvironment_name 'int-1503'Then to perform the content upload, aka model sync:cd /opt/oneopscircuit createcd circuit# this will load the base model / classes and relationships.circuit initcd /opt/oneops/circuit-oneops-1# this runs knife model sync, knife pack sync and knife cloud synccircuit install"
},
{
"title": "Load/Extract",
"url": "/user/design/load.html",
"content": "Load/ExtractThe main purpose of Design Extract/Load feature is two-fold: to provide an ability to save and source control assembly design in some external tool (i.e. github). to enable easy and seemless way to share designs with other assemblies/organization/users.Other benefits include: making it easy to apply a large number of changes without having to click-thru UI pages and automation enablement (i.e. external tool integration via API).Extract can be used to save a design created in OneOps into a YAML or JSON file. The generated file (YAML or JSON) is not meant to store a full snapshot of current design but rather contain a delta style description capturing the list of assembly platforms and only the difference between their configuration and the default configuration defined in the platform packs. When any attributes is changed in design a user should lock those changed attributes in order to explicitly have them included in design yaml/json file as well as to preserve their changed values from overwriting (to pack defaults) during Pack Pull operation. A lock button is available next to every component attribute and variables. Enable locking to extract any specific attributes into the yaml file.Therefore the design yaml/json file generated by Extract will explicitly include: all the design variables; all platforms; all platform variables; platform dependencies (links); optional components (explicitly) added by user to platforms; additional component dependencies (explicitly) added by user; attachments; locked (only) component and attachment attributes regardless of whether they different from default values (from pack or metadata).Unlocked attributes/variable has icon as below:The lock icon changes to locked.NOTE: User MUST mark attributes as locked to indicate that such attribute is to be included in the extract design file. Changed attribute values are NOT included in the design file (by default) unless they are locked.Load feature can be used to bulk-update an assembly design using a YAML/JSON file typically generated by Extract. Load will generate design changes to bring it in alignment with design configuration specified in the design YAML/JSON file by adding them to the current open design release (when possible if there are no changes for the same platforms in current release) or by opening a new design release. The resulting release is not automatically committed and stays open for user to review the changes before committing or discarding it.Design YAML/JSON file can be composed of smaller files (i.e. one design file per platform) by referencing them via special import directive. The same file can also include explicit content further in the file. For example:import:- https://raw.githubusercontent.com/oneops/setup/master/design/adapter-design.yaml- https://raw.githubusercontent.com/oneops/setup/master/design/admin-design.yaml- https://raw.githubusercontent.com/oneops/setup/master/design/wire-design.yaml- https://raw.githubusercontent.com/oneops/setup/master/design/antenna-design.yaml...variables: VAR1: "foor" VAR2: "bar"...platforms: db: pack: oneops/postgresql:1 variables: db-user: $OO_GLOBAL{VAR1} db-pass: $OO_GLOBAL{VAR2} components: compute/oneops.1.Compute: compute: size: L database/oneops.1.Database: database: dbname: cool_app username: $OO_LOCAL{db-user} password: $OO_LOCAL{db-pass}...The file can be loaded using any of the following: UI page for design load by uploading the yaml file or posting the yaml content directly in the text area. 
To getto the design load page in the UI go to the assembly design and click on the Load button in the header. CLI command oneops design load. The defaults path for the Design file is ./oneops-design.yaml. For additional information see CLI section. API request for design load. Some examples using cURL:GET request to extract design in YAML format:curl -u API_TOKEN: .../assemblies/:assembly/design/extract.yamlGET request to extract design in JSON format:curl -u API_TOKEN: .../assemblies/:assembly/design/extract.jsonPUT request to load design from YAML file:curl -u API_TOKEN: .../assemblies/:assembly/design/load.json -X PUT -F "data_file=@design.yaml"PUT requst to validate design load (no design release will be created) from JSON string:curl -u API_TOKEN: .../assemblies/:assembly/design/load.json -X PUT -d "preview=true&data=..."Global VariablesVariables that can be used anywhere in the design and are referenced via $OO_GLOBAL{...} syntax. For additional information on global variables see variables reference page.variables: MYGLOBALVAR1: "foo" MYGLOBALVAR2: "bar" MYENCRYPTEDVAR1: "::ENCRYPTED::<foobar>"Encrypted variables can either be provided into yaml in plain text using above format(e.g. MYENCRYPTEDVAR1) OR load of yaml with encrypted variable/attribute would require a manual override on load.PlatformsDefinition of platforms to be loaded inside the assembly design. Multiple design files can be loaded with separate platform definitions in each. Load operation performs an upsert for each platform found in the file, but does not do any platform deletions. Deletes must be done directly via the UI/CLI/API per platform, not via load.This section contains a list of all configuration options supported by a platform definition.packA string in the form of <source>/<name>:<version> declaring the pack to be used for this platform. For additional information on packs see packs.pack: oneops/tomcat:1major_versionMajor version of the platform. For new design this will usually be set to 1 and increased when a platform version upgrade is needed.major_version: '1'linksLinks are used to describe dependencies between platforms. For additional information on links between platforms see platform links.links: - db - mqplatform variablesPlatform variables that can be used inside the specified platform in design and are referenced via $OO_LOCAL{...} syntax. For additional information on platform variables see variables reference page.variables: MYLOCALVAR1: "foo" MYLOCALVAR2: "bar"ComponentsDefinition of components inside the platform. Only optional components or components with custom attribute values need to be specified in this section. All other components declared in the packs are inherited using the default pack values.components is a three-level structure with the 1st key a string in the format <resource template>/<class name> matching the corresponding entities in the pack. The 2nd key is the unique name of the component and the 3rd key is any of the attributes supported in the metadata for that component class. 
See the corresponding pack and component documentation for possible values.artifact/Artifact: artifact: install_dir: /app/artifact version: '1.0'Attachmentsattachments can be used in the same level as the component attributes to declare component attachments.attachments: myscript: path: /tmp/myscript.sh exec_cmd: /tmp/myscript.sh priority: '1' content: |- #!/bin/sh echo "hello" run_on: before-add,before-replace,before-updateExample yaml filevariables: MYGLOBALVAR1: "foo" MYGLOBALVAR2: "bar"platforms: app: pack: oneops/tomcat:1 major_version: '1' variables: MYLOCALVAR1: "foo" MYLOCALVAR2: "bar" links: - db - mq components: artifact/Artifact: artifact: install_dir: /app/artifact version: '1.0' tomcat/Tomcat: tomcat: tomcat_install_dir: '/opt' attachments: myscript: path: /tmp/myscript.sh exec_cmd: /tmp/myscript.sh priority: '1' content: |- #!/bin/sh echo "hello" run_on: before-add,before-replace,before-update"
},
{
"title": "Logstash Component",
"url": "/user/design/logstash-component.html",
"content": "Logstash ComponentThe logstash component is available on all platforms. Itcan be used to configure usage ofLogstash."
},
{
"title": "Manage Assemblies",
"url": "/user/design/manage-assemblies.html",
"content": "Manage AssembliesSolution Your browser does not implement HTML5 video.To manage assemblies, follow these steps: In the navigation bar, click Assemblies. Optionally, to view by ID, Name, or Created (date), click Sort. Optionally, to filter the list of Assemblies, enter filter information in the text box. Select the checkbox next to an Assembly. You can select multiple assemblies by clicking the checkbox next to each one. You can also click the checkbox button to automatically toggle selecting all or no listed assemblies. Optionally, to select Watch or Ignore, click Action. Watching an Assembly means you will receive email notifications about deployment and operation events for this assembly. Watched Assemblies display an eye icon above the Assembly name. Optionally click the Configure icon and select from the following: Edit: displays the Assembly Attributes page. Delete: removes the Assembly, after you click OK in the confirmation dialog. Copy: prompts you to provide a new Assembly Name and Description for a new copy of the Assembly you selected. Save to Catalog: prompts you to provide a Catalog Name and Description. See Also Create Assemblies to Design Applications Add a Team to an Assembly"
},
{
"title": "Manage OneOps User Accounts",
"url": "/user/account/manage-oneops-user-accounts.html",
"content": "Manage OneOps User accountsAccount Access OverviewOneOps administrator accounts have powerful controls that can help you customize team and individual access. By following these best practices, you can easily set up OneOps to suit your business needs.Make customizing access within OneOps simply by creating teams within your organization. You can add users to a team at any time, and assign privileges to the team as a whole. That way you can give groups of users the same access quickly and easily.Definitions User: An individual registered user to OneOps. It can be any user or a Service Account Group: List of users grouped together under one name. Team: A way to manage user access by allowing or restricting accesses and permissionsRegister New UserA new user has to log into the OneOps site and accept the terms and conditions. When the user is added to the system, the user can be granted access to one or more organizations by adding the user to an existing group or team or simply adding the user without any specific team access. User once added to an organization, without team specific priviledge has read-only access to all clouds within the organization, but no access to assemblies.About GroupsUsers can also be managed using groups. A group has one or more users. Such groups can then be added to one or more teams. Groups can be created under user profile section /account/profile#groups Any number of users can be added to a group Add either a user or a group to teams inside organizations Groups can span organizations, hence the group name is unique in the system. Groups are self-governed by group admins regardless of organization associations.About TeamsTeam is the way to manage access control for the users. Teams are unique per organization New teams can be added to an account under an organization //organization/edit#teamsSee Create TeamFrequently used team access scenariosComplete Control within an Organization. The Admin is the most powerful control and should be limited to very few associates in an organization. Admin has priviledge to add/update/delete any resource within an organizationAn admin team pre-exists in all of the organizations. Add the user or group to this admin team.Read-Only Access to all resources within an OrganizationCreate a team with both Manage Access and Organization Scope checked and uncheck all Cloud/Assembly permissions. Members of this team have read-only access to the complete organization.Ability to Create Assembly and Cloud without any DTO PermissionsCreate a team with Manage Access checked only. Members of this team are only able to create a new assembly or cloud without the ability to add platforms/environments to the assembly or cloud services to the cloud.Ability to Manage an AssemblyCreate a team with Assembly Permissions checked (all DTO). Add this team to the assembly where members are required management accessAbility to Create and Manage an AssemblyCreate a team with Manage Access checked along with Assembly permissions. Members of this team have rights to create and manage their assemblies.Ability to Approve DeploymentsCreate a team with Organization Scope checked along with compliance or support permission as required. Add a Support/Compliance object to the clouds which would require deployment approval from this team members to proceed."
},
{
"title": "Metadata",
"url": "/developer/content-development/metadata.html",
"content": "MetadataMetadata files model components and have several parts: base/required Attributes (name, desc, etc) grouping Sub-groups of attributes CMS models (For an advanced example, see the token metadata.) attributes Defaults, format: UI metadata recipes Default actions. Add, update and delete are assumed and do not need to be added. Actions can also be added using the UI as on-demand attachments.The following is an example of a metadata file:1: base/required attributesname "Apache"description "Installs/Configures Apache"long_description IO.read(File.join(File.dirname(__FILE__), 'README.md'))version "0.1"maintainer "Kloopz, Inc."maintainer_email "dev@kloopz.com"license "Copyright OneOps, All rights reserved."2: grouping - sub-groups of attributes cms modelsUsually dont need to change this. Its for when different types can have different attributes. See the token metadata for example.grouping 'default', :access => "global", :packages => [ 'base', .. 'manifest', 'bom' ]3: attributesattribute 'install_type', :description => "Installation Type", :required => "required", :default => "repository", :format => { :category => '1.Source', :help => 'Select the type of installation - standard OS '+ 'repository package or custom build from source code', :order => 1, :form => { 'field' => 'select', 'options_for_select' => [['Repository package','repository'], ['Build from source','build']] } }4. recipes - default actions.Actions can also be added via UI in design mode as on-demand Attachments.recipe "status", "Apache Status"recipe "start", "Start Apache"recipe "stop", "Stop Apache"recipe "restart", "Restart Apache"recipe "repair", "Repair Apache""
},
{
"title": "Metric Data Source Type",
"url": "/admin/operate/metric-data-source-type.html",
"content": "Metric Data Source TypeA dstype (Data Source Type) defines how the values are aggregated. We re-wrote RRD logic for a cassandra data store.The dstype can be COUNTER, DERIVE, or GAUGE.COUNTER will save the rate of change of the value over a step period. This assumes that the value is always increasing (the difference between the current and the previous value is greater than 0). Traffic counters on a router are an ideal candidate for using COUNTER as DST.DERIVE is the same as COUNTER, but it allows negative values as well. If you want to see the rate of change in free disk space on your server, then you might want to use the DERIVE data type.GAUGE does not save the rate of change. It saves the actual value itself. There are no divisions or calculations. Memory consumption in a server is a typical example of gauge."
},
{
"title": "Metrics Collection",
"url": "/admin/operate/metrics-collection.html",
"content": "Metrics CollectionOverviewOneOps uses Logstash and Logstash-Forwarder to collect performance metrics (like CPU, Memory, jvm metrics)Logstash Log/Event processing engine written in jruby and runs as a jvm application The log lines flow through 3 different stages: Input Filters and Outputs There are many standard inputs, filters and output plugins available We have custom output plugin for calling using collector java code. Logstash needs a simple config file in json format specify input, filters and outputs Logstash coexist on collector machine.Logstash-Forwarder It is a go binary which tails log files and forwards the lines to downstream Logstash servers over lumberjack protocol Main goals of this tool are: Minimize resource usage where possible (CPU, memory, network). Secure transmission of logs. Easy to deploy with minimal moving parts. Runs on user VMs. Gets installed as part of compute cookbookHow it all Works TogetherSetup/Installation DetailsThe Lumberjack protocol between the Logstash-Forwarder (Perf Agent running on each compute) and Logstash (Perf collector on server side) communicates over ssl and needs cert.This is how it is set up on OneOps Core and Gateway assemblies: You need to generate a ssl cert for the domain name of the daq platform. You can check the fqdn component on the daq platform for the cname. Add that cert in the Perf Collector Certificate attribute of the inductor component under inductor platform (gateway assembly).Inductor uses it to configure the perf-agents (logstash-forwarder) on each computes provisioned. Add the same cert on the logstash-cert File component of daq platform (core assembly). There are 2 attributes : Content - this should have the cert. Destination Path - the path where the cert should be created (/opt/.certs/logstash.crt). Add the ssl key for the cert on the logstash-key file component: Content should have the key Destination Path - the path where the cert key should be created. (/opt/.certs/logstash.key)"
},
{
"title": "Modify existing component",
"url": "/developer/content-development/modify-a-component.html",
"content": "Modify Existing ComponentScenario ContextAdding a new attribute to component is done routinely. For example you want to add support for pre shut down command in apache webserver. You can do by the following Modify the components metadata.rb file to add the attribute details. Modify the recipes to use prestart attribute to execute the pre shut down command. To upload the metadata and test follow the instructions on getting-started"
},
{
"title": "Monitor",
"url": "/developer/content-development/monitor.html",
"content": "MonitorAn optional monitor within a Pack Component Resource contains: Name, desc, optional source Charting defaults: For example, min/max y-axis, unit Nagios command name and command line to execute: cmd and cmd_line Metrics: name, unit, desc, dstype Thresholds: When to trigger eventsFor additional information about dstype, see Metric Data Source Type in the OneOps Admin Documentation.The following is a sample monitor definition from Tomcats pack:resource "tomcat", :cookbook => "tomcat", :design => true, :requires => { "constraint" => "1..1" }, :monitors => { 'JvmInfo' => { :description => 'JvmInfo', :source => '', :chart => {'min'=>0, 'unit'=>''}, :cmd => 'check_tomcat_jvm', :cmd_line => '/opt/nagios/libexec/check_tomcat.rb JvmInfo', :metrics => { 'max' => metric( :unit => 'B', :description => 'Max Allowed', :dstype => 'GAUGE'), 'free' => metric( :unit => 'B', :description => 'Free', :dstype => 'GAUGE'), 'total' => metric( :unit => 'B', :description => 'Allocated', :dstype => 'GAUGE'), 'percentUsed' => metric( :unit => 'Percent', :description => 'Percent Memory Used', :dstype => 'GAUGE'), }, :thresholds => { 'HighMemUse' => threshold('5m','avg','percentUsed',trigger('>',98,15,1),reset('<',98,5,1)), } }, ..."
},
{
"title": "Navigation to Monitors",
"url": "/user/operation/monitors-nav-video.html",
"content": "Navigation to MonitorsThis video shows how to navigate to monitors. Your browser does not implement HTML5 video."
},
{
"title": "Monitors",
"url": "/user/operation/monitors.html",
"content": "MonitorsMonitoring of numerous metrics about components is a powerful feature available to users. It includes support foraspects such as thresholds and notification via alerts tracking of metrics over long time ranges usage for compute heartbeat signal extensive charting for visual and interactive inspectionsMetrics can be collected for numerous aspects for various levels of behavior of your assembly in operations such as memory usage CPU utilization network processes IO metrics like open files process specific aspects e.g. JVM or database-specific aspectsAny component can be monitored and most components included a number of monitors by default. Monitoring in OneOps scalesfor tracking thousands of metrics for long periods of time.Under the cover OneOps facilitates the industry standard open source solution Nagios and thenumerous checks supplied by it. Configuration Default Monitors Custom Monitors Attributes Alerting with Thresholds and Heartbeats Heartbeats Thresholds Usage in Operation Charts Charts in Action ExamplesConfigurationDefault MonitorsDefault monitors are automatically created from the component definition and can be configured in the transition viewof a component: Navigate to the desired assembly. Press Transition in the left hand navigation. Select the environment by clicking on the name in the list. Select the platform in the list on the right by clicking on the name - e.g. tomcat. Select the component in the list on the right by clicking on the name - e.g. compute. Go to the monitors tab. The monitors are listed on the left and can be _Edit_ed individually. Check out the demo video showing how to navigate to monitors.Custom MonitorsCustom monitors allow you to define additional metric monitoring for any component. They can be configured in the designview of a component: Navigate to the desired assembly. Press Design in the left hand navigation. Select the platform in the list on the right by clicking on the name - e.g. tomcat. Select the component in the list on the right by clicking on the name - e.g. compute. Go to the monitors tab. Press Add to start create a custom monitor Alternatively select an existing custom monitor and Edit it as desired.AttributesThe following Global and Collection related attributes can be configured for existing and new monitors:Name: simple name for the monitor as visible in the list.Description: descriptive text about the behavior of the the monitor.Command: Nagios command that defines the metric gathering.Command Line Options Map: map of key value pairs that are passed to the command line invocation.Command Line: specific command line invocation for the metrics gathering.Each monitor can include one or more Metrics defined by:Name: simple name for the metric.Unit: the unit used for data points, used in charts and notifications.Description: descriptive text about the metric.DS Type: the data source type. GAUGE signals that this metric gather as a measurement each time. 
DERIVE on theother hand signals a rate of change from a prior measurement is tracked.Display: flag to signal if the metric should be displayed.Display Group: string to allow grouping of metrics.Sample Interval (in sec): Number of seconds between each metric measurement event.In addition, aspects for alerting can be configured as documented in the following section.Additional options are available in the Advanced Configuration:Receive email notifications only on state change: enable this flag to reduce notifications to be sent only when themonitor state changes.URL to a page having resolution or escalation details: This allows you to add a URL to an external website or otherresource that provides further information for the user receiving notifications from this monitor. The URL is added toall notifications.Alerting with Thresholds and HeartbeatsHeartbeatsConfigured in the Alerting section the Heartbeat flag and Heartbeat Duration allow a metric to be used as acritical metric signaling the health of the component itself.If the data collection fails for a metric with the heartbeat flag enabled and the heartbeat duration has passed, anunhealthy event is generated. Ideally at least one metric per component is flagged as a heartbeat metric. Heartbeatmetrics are automatically collected every minute from all components.Heartbeat Duration: defines the wait time (in minutes) before marking a component instance as unhealthy due to amissing heartbeat.The unhealthy event caused by missing heartbeat leads to execution of a repair action on the instances marked asunhealthy. The automatic healing of instances using Auto Repair enables therecovery of components instances back to a healthy state.ThresholdA Threshold uses a metric and a set of conditions to change the state of a component. These changes can trigger eventssuch that result in notifications,automatic scaling or automatic repair events.Components include a predefined set of default thresholds that are used implicitly with any environment deployment.Users can add a new threshold definitions that are suitable for their operation or edit existing thresholds.Threshold are visible as part of the monitor configuration.The following attributes characterize a threshold:Name: Name the threshold so that it is easy to understand what happened. For example: HighThreadUse implies threadcount going too high. This name is seen as part of the alert message and should be intuitive enough to understandwhat happened when the threshold was crossed.State: Defines the state of the instance when the threshold is crossed. Depending on the state of instance, certainactions are performed implicitly to recover the component back to good health. The user can select a value to define theexpected state of the threshold.The following states are available:Notify-Only: Use this state when no automated action is expected. When the trigger condition is met, the state of theinstances is flipped to notify and an event is triggered. The event can be seen on the environment operation view.Unhealthy: When a threshold is defined with an unhealthy state, the instances meeting trigger condition require somerepair action to fix their state and the repair action associated with the component is executed. The automatic healingof instances using Auto-Repair helps in recovery of instances back to good state.Over-utilized: Use this state to define a threshold where the load is not sustainable and the component requiresadditional capacity. 
Auto scale) is used to add more capacity until themaximum limit of scaling configuration is reached.Under-utilized: This state signifies that the component instance is not being used to its capacity and can be removed.Auto scale) is used to remove capacity until the minimum limit of scaling configuration isreached.Further threshold configuration attributes are:Bucket: Time interval used for each metric collection.Stat: Stat determines the value selection from the bucket for aggregation. Values are average, min, max, count, etc.Metric: The metric to use for the threshold.Trigger and Reset determine when an event is raised and subsequently removed. They are configured with an expressionusing and Operator and Value to create and expression. The Duration defines the time window during which thecollected metric value is evaluated. Occurrences defines the number of repetitions needed to triggerCool-off: The time after which a repeated threshold crossing raises another event. Before that time repeatedviolations do not raise additional events.An alert is generated for any state trigger. If you are watching the assembly, you can expect anotification about the event. The events can be viewed in the operation view.Usage in OperationThe actual usage of monitors occurs in the operation view for each individual component: Navigate to the desired assembly. Press Operation in the left hand navigation. Select the environment by clicking on the name in the list. Select the platform in the list on the right by clicking on the name - e.g. tomcat. Select the component in the list on the right by clicking on the name - e.g. compute. Go to the monitors tab. The monitors are listed on the left as a list. Click on an individual monitor name to view a chart visualizing the monitor data. Check out the demo video showing how to navigate to monitors.The list of monitors shows the names of the monitors and additional icons that highlight heartbeat monitors and definedthresholds. You can also mark them as a favorite.The header includes a filter for the monitors, select/deselect all buttons, a sort features and well as the Actionsbutton. If you select a few monitors in the list with the checkboxes beside the names, you can use theCompound charts action to merge all metrics from the selected monitors into one chart. The Stack charts actiontriggers all selected charts to be displayed above each other.Threshold and heartbeat configuration for the monitor is displayed below the chart.ChartsChart inspections can be used to visually analyze your component behavior over time. Enjoy our demo video showcasing usage of charts.A number of features are available in the chart display:Time range control: The top left corner contains a control with buttons to select time range for the whole chartdisplaying of one hour, six hours, one day, one week, one month or one year.Time navigation: The top right corner contains a control to navigate the chart time data by the size of the range.<< navigates a full period back, < half a period back, > half a period forward, >> a full period forward. Nowjumps to the current date and time.Read value: Moving the mouse pointer over the chart triggers a marker that displays the metric value at the currentlocation in the chart.Legend: The legend beneath the chart shows the different metric names for the monitor. Clicking on a metric triggersthe chart to display only that metric vs. 
all metrics.Threshhold display: Threshold levels are displayed as horizontal lines in the chart using a dotted line of the samecolor as the metric with the threshold. The legend includes a dot beside the metric name. The color of the dot reflectthe state (blue for notify, red for unhealthy). Hovering over the dot shows the threshold definition.Zoom: You can select a rectangle on the chart to enlarge a specific x/y region of the chart. This can be repeatedmultiple times until you see the region of interest. Double-click causes a zoom back out.Standalong view: The button on the top right corner in the chart title display triggers the current chart to bedisplayed in a new browser window without the rest of the user interface.The data available in the chart depends on a few aspects: Actual metrics taken successfully and component operational times e.g. there wont be any old data for a compute thatwas just started today TTL policies for storing the data. One minute buckets are used for hour and 6 hours charts up to two days into thepast. Then metrics switch to 5 minute buckets.Charts in ActionExamplesOpen Files MonitorThe Open Files Monitor monitors the open files on the process and is includes in a number of components and disabled bydefault. You can simply activate it and enter the process name in the configuration if you want to montior files openede.g. by your application as the artifact component.App Version MonitorThe App Version monitor is a monitor of the tomcat component used to validate that the server is restarted after allartifacts are deployed. By default, the monitor is disabled.You can enable it in transition view of the component. The ValidateAppVersion action can perform the same check as themonitor as an on demand action."
},
{
"title": "DotNet Framework Component",
"url": "/user/design/ms-dotnetframework-component.html",
"content": "DotNet Framework ComponentThe dotnetframework component gives the ability to install different.NET Framework on windowscompute/node. This component uses chocolatey to install .Net frameworks.Chocolatey packages can be used from public chocolatey repository or from the repository hosted within the firewall. Thiswill be configured based on your OneOps instance configuration.Besides the global configuration available for any component such as Name, you can configure thefollowing attributes:Chocolatey Package Source:Package URL: is used to specify the repo used to store artifacts. Add the url of the chocolatey package source. NOTE: thiswill be overridden if mirror cloud service has been defined (top right of screen). In the mirror cloud service the keyused is chocorepo.Framework Version:.NET Framework version: is used to specify the framework version in install. Additional details below. Format: .Net framework version = chocolatey package name Format examples: .Net 3.5 = dotnet3.5 and .Net 4.5.2 = dotnet4.5.2Attachments and Monitors Tabs:In addition to the above configuration for this component, you can also specify Attachments andMonitors for this component."
},
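A hedged sketch of declaring the dotnetframework component in a design YAML file, using the documented version format (.NET 4.5.2 = dotnet4.5.2). The pack reference and the class/attribute names are assumptions for illustration only; verify them against your OneOps instance.

```yaml
# Hedged sketch: pack reference and class/attribute names are assumptions.
platforms:
  web:
    pack: oneops/iis:1                           # hypothetical pack reference
    components:
      dotnetframework/oneops.1.Dotnetframework:  # hypothetical class name
        dotnetframework:
          chocolatey_package_source: https://chocolatey.org/api/v2/  # overridden by a mirror cloud service (key: chocorepo)
          dotnet_version: dotnet4.5.2            # Chocolatey package name for .NET 4.5.2
```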
{
"title": "Microsoft IIS Pack",
"url": "/user/design/ms-iis-pack.html",
"content": "Microsoft IIS PackThe iis pack is available as Internet Information Services(IIS) in the Web Application sectionand provides the user with the ability to use Microsoft IIS as a platform in their assembly.DescriptionThis pack enables flexible configurations for: .Net Frameworks Load Balancing Windows compute sizes defined in the OneOps instance you are using NuGet Packages allowing additional software to be installed on your server Website details in the iis-framework such as: Core settings for application and logging paths as well as port bindings Application Pooling Static and Dynamic Compression Filtering Session States Default settings and inline help (both help pages and attribute pop-up) available and shown when utilizing the pack in your design.Components UtilizedThe pack uses both common components and introduces new ones as listed below.Core Components compute: Used to define the virtual machine and OS. volume: Used to define storage associated with you deployment. lb: Used to configure settings for Load Balancing of incoming requests to the servers.Windows Related Components iis-website: Used to configure website specific details. dotnetframework: Used to specify path of artifacts and framework version to use in the website deployment. nuget-package: Used to specify where website artifacts are located and version to use in the deployment. chocolatey-package: Used to define additional software packages to install on the server."
},
{
"title": "IIS Website Component",
"url": "/user/design/ms-iis-website-component.html",
"content": "IIS Website ComponentThe iis-website component is used to configure attributes for deploying and runningInternet Information Services (IIS) for Windows Server.Besides the global configuration available for any component such as Name, you can configure thefollowing attributes:IIS Web SiteWeb Site Physical Path: is used to specify the physical path on disk this Web Site will point to.Log File Directory: is used to set the central w3c and binary log file directory.Mime Types: is used to add MIME type(s) to the collection of static content types.Binding Type: is used to select HTTP/HTTPS bindings that should be added to the IIS Web Site.Binding Port: is used to set the binding port.Windows authentication: is used to enable windows authentication.Anonymous authentication: is used to enable anonymous authentication.IIS Application Pool.Net CLR version: is used to specify the version of .Net CLR runtime that the application pool will use.Identity type: is used to select the built-in account which the application pool will use.IIS Static CompressionEnable static compression: is used to enable static compression for URLs.Compression level: is used to set the compression level - from 0 (none) to 10 (maximum).Mime types: is used to specify which mime-types will/will not be compressed.CPU usage disable: is used to specify the percentage of CPU utilization (0-100) abovewhich compression is disabled.CPU usage re-enable: is used to specify the percentage of CPU utilization (0-100) belowwhich compression is re-enabled after disable due to excess usage.Minimum file size to compression: is used to specify the disk space limit (in megabytes),that compressed files can occupy.Maximum disk usage: is used to specify the minimum file size (in bytes) for a file to be compressed.Compression file directory: is used to specify the location of the directory to store compressed files.IIS Dynamic CompressionEnable dynamic compression: is used to enable dynamic compression for URLs.Compression level: is used to set the compression level - from 0 (none) to 10 (maximum).Mime types: is used to specify which mime-types will/will not be compressed.CPU usage disable: is used to specify the percentage of CPU utilization (0-100) abovewhich compression is disabled.CPU usage re-enable: is used to specify the percentage of CPU utilization (0-100) belowwhich compression is re-enabled after disable due to excess usage.Dynamic compression before cache: is used to specify whether the currently available response is dynamicallycompressed before it is put into the output cache.Compression file directory: is used to specify the location of the directory to store compressed files.Session StateCookieless: is used to specify how cookies are used for a Web application.Cookie name: is used to specify the name of the cookie that stores the session identifier.Time out: is used to specify the number of minutes a session can be idle before it is abandoned.Request FilteringAllow double escaping: is used to allow escaping in URLs. If set to false, request filtering willdeny the request if characters that have been escaped twice are present in URLs.Allow high bit characters: is used to allow non-ASCII characters in URLs. 
If set to true, requestfiltering will allow non-ASCII characters in URLs.Verbs: is used to specify which HTTP verbs are allowed or denied to limit types of requests sent tothe Web server.Maximum allowed content length: is used to specify the maximum length of content in a request, in bytes.Maximum URL length: is used to specify the maximum length of the query string, in bytes.Maximim query string length: is used to specify the maximum length of the URL, in bytes.File extension allow unlisted: is used to specify whether the Web server should process filesthat have unlisted file name extensions.Attachments and Monitors TabsIn addition to the above configuration for this component, you can also specify Attachments andMonitors for this component."
},
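A hedged sketch of a few iis-website settings expressed in the design YAML format. Only attributes called out in the entry above are shown; the pack reference, class name and attribute keys are assumptions, so check the component metadata before use.

```yaml
# Hedged sketch: class and attribute names are assumptions, not confirmed.
platforms:
  web:
    pack: oneops/iis:1                    # hypothetical pack reference
    components:
      iis-website/oneops.1.Iis-website:   # hypothetical class name
        iis-website:
          physical_path: E:\apps\mysite   # Web Site Physical Path
          binding_type: https             # HTTP/HTTPS binding
          binding_port: '443'
          windows_authentication: 'false'
          anonymous_authentication: 'true'
```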
{
"title": "Microsoft SQL Server Pack",
"url": "/user/design/ms-sqlserver-pack.html",
"content": "Microsoft SQL Server PackThe mssql pack is available as MS SQL Server in the Database Relational SQL section and providesthe user with the ability to use Microsoft SQL Server as a platform in theirassembly.Platform variables data_drive - drive letter for persistent storage, used to hold user data and log files. Default is F. temp_drive - drive letter for ephemeral storage, used to hold tempdb data and log files. Default is T.Secgroup componentBy default this component is configured to allow all incoming traffic to these TCP ports: 22 - SSH 1433 - default port for MS SQL Server 3389 - RDPIf youre planning to use custom ports for your Sql Server instance, please add them here.Compute componentDefault compute size is M-WinOS componentDefault OS Type is Windows 2012 R2.vol-temp componentThis is a volume component used to specify details for the ephemeral storage that comes with the VM.Only Mount Point attribute is applicable to Windows VMs. The rest of the attributes are ignored.storage componentThis component defines the size and type of the persistent storage that will be used to hold user data and log files.Slice Count attribute must be equal to 1.volume componentAnother volume component which depends on the storage component and is used to specify details for the persistent storage.Only Mount Point attribute is applicable to Windows VMs. The rest of the attributes are ignored.dotnetframework componentBy default these .Net frameworks will be installed on the VM: .Net 3.5 .Net 4.6mssql componentThis component configures the following attributes of MS SQL Server installation: MS SQL Server version and edition - currently only 2014 Enterprise is supported sa Password - make sure to specify a strong password, otherwise the installation will fail. TempDB data directory - default is T:\MSSQL (via platform variable temp_drive) TempDB log directory - default is T:\MSSQL (via platform variable temp_drive) User db data directory - default is F:\MSSQL\UserData (via platform variable data_drive) User db data directory - default is F:\MSSQL\UserLog (via platform variable data_drive)Note: if OneOps deployment fails at mssql step most likely the error message will not be descriptive enough.In that case please RDP or SSH to the VM and investigate the content of installation logs.For MS SQL Server 2014 version the log is located at C:\Program Files\Microsoft SQL Server\120\Setup Bootstrap\Log\summary.txtAdd user component to your design to create a local account with specified SSH keys and\or password (for RDP connections).database componentThis component creates a user database, login and a database user with db_owner rights in that database.Please note that the Instance Name attribute is actually for specifying a database name.If creating a Sql Server login (not from Windows domain account), please specify a strong password."
},
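The data_drive and temp_drive platform variables documented above can be overridden per design. A hedged sketch in the design YAML format follows; the variable names come from the entry above, while the pack reference oneops/mssql:1 is an assumption.

```yaml
# Hedged sketch: the pack reference is an assumption; variable names are documented above.
platforms:
  db:
    pack: oneops/mssql:1   # hypothetical pack reference
    variables:
      data_drive: G        # persistent storage drive letter (default F)
      temp_drive: S        # ephemeral storage drive letter (default T)
```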
{
"title": "Naming Conventions",
"url": "/user/design/naming-conventions.html",
"content": "Naming ConventionsFollow these naming conventions for assemblies, platforms and environments.Assembly NameThe name of the assembly should represent the name of your product or the service that you are offering. Keep it short and relevant.Use: wmsDo not use: warehouse management systemPlatform NameThe name of the platform should be the name of the tier that you are adding. For example: if you are adding a web tier, call it tomcat or jboss. For a database tier, call it db.Use: tomcat, jboss, db, cacheDo not use: cache-app, dal-schema-common-app, tomcat-prodEnvironment NameThe environment name should represent the environment where the design is deployed.Use: dev, qa, qa-int, stg, prodDo not use: tomcat-prod, oracle-db WARNING: Do NOT repeat the assembly name in the platform name. Do NOT repeat the assembly name in the environment name. Do NOT use environment names in platform names. Each assembly, and each environment inside the assembly, run in their own separate namespaces. These names do not have to be globally unique. Repeating the names in these three entities results in very long resource names in the cloud environment."
},
{
"title": "Notifications",
"url": "/user/account/notifications.html",
"content": "NotificationsA number of events in an organization in OneOps can trigger notifications. Theseevents include deployments, monitors, scale and repair actions.Notifications can be sent to a number of receiving notification sinksincluding simple URLs, Jabber, Amazon SNS and Slack.ConfigurationTo set up and configure notifications, follow these steps: Access the settings for the desired organization Click Settings under the specific organization in the left hand navigation Alternative click on the settings icon in the header Select the notifications tab. Press the Add button on top of the list of notifications to create a newnotification sink. Or click on the name of a specific notification sink in the list to accessits configuration in the Details section on the right. Pressing the Editbutton allows you to change the configuration. Provide all desired values and press Save.Each notification sink includes a number of generic as well as type-specificconfiguration settings. The type is selected as a first action when creating anew notification sink.The generic settings are: Name: the required name of the notification sink.</dd> Global - Description: optional description for the notification sink</dd> Filtering: fine-grained control over which messages are sent is possiblewith filtering enabled. You can configure filters with criteria such as EventType, Severity Level, Environment Profile Pattern, NS Paths, MonitoringClouds and Message Pattern. Typically filtering should be enabled so thatspecific the notification sync is not flooded by all events for theorganization. Instead it can be filtered to e.g. only receive specific eventsfor a specific assembly with a combination of the available criteria. Transformation: message can be transformed before they are sent Dispatching - Message Dispatching: configure to use synchronous orasynchronous mechanism for dispatching event messages.</dd> You can select multiple notification sinks to similar targets with differentfilters to achieve the desired verbosity and message frequence.Type-specific configuration and usage is explained in the the sink specific sections: Notifications to URL Notifications to Concord Notifications to Amazon SNS Notifications to Jabber Notifications to SlackNotifications to URLNotifications can be configured to use a custom URL as notification sink.In preparation you need to create a web application that receives notificationson a specific URL.Then follow these steps to configure and use a URL notification sink. Create a new notification sink with the typeaccount.notification.url.sink. Configure the Endpoint URL of the server in the Endpoint section. If the URL is protected from anonymous posts, provide Service Username andService Password in the Credentials section. Save the new notification sink. Notifications to ConcordThe workflow orchestration system Concord is an example of a system that can beconfigured as a URL sink to receive notifications fromOneOps.The Endpoint URL needs to be configured to use the oneops event endpoint ofthe Concord API e.g. 
at https://concord.example.com/api/v1/events/oneops.Typically credentials are required and need to be configured with a serviceusername and password.To enable Concord triggers for compute replacements filtering is setup with: Event Type set to Deployment Severity Level set to All NS Path to / to send events for all assemblies in the org Include Ci enabled with the class patterns bom.Compute, bom.oneops.1.Computeand bom.main.2.Compute to capture all computes Include Ci on Replace enabledThis allows a Concord project to configure a trigger that can react to the replace compute events in OneOps by calling a workflow. This can for example beused to run an Ansible playbook against a replaced compute.Notifications to Amazon SNSNotifications can be configured to use theAmazon Simple Notification Service as anotification sink.In preparation you need to create an Amazon SNS account and an access key. Thenfollow these steps to configure and use an SNS notification sink. Create a new notification sink with the typeaccount.notification.sns.sink. Provide your SNS credentials in Credentials section including the AccessKey and the Secret Key. Save the new notification sink. Go to the SNS section in the Amazon AWS console. The first notification event creates an SNS topic for thatenvironment. Subsequent notifications are posted to the same topic. Subscribe to the topic with your email or distribution list. Notifications to JabberNotifications can be configured to use theopen XMPP messaging standard originally introduced byJabber as notification sink.In preparation you need to get access details for the XMPP/Jabber server. Thenfollow these steps to configure and use an SNS notification sink. Create a new notification sink with the typeaccount.notification.jabber.sink. Configure the connection to the XMPP/Jabber server in Settings section. Chat Server - hostname of the chat server. Chat Server Port - TCP port of the chat server. Conferences Identifier - identifier for the chat conference to receivethe notifications. User Account - account to authenticate to the server and use to postthe notifications. User Password - password of the user account. Save the new notification sink.Notifications to SlackNotifications can be configured to use Slack channels as anotification sink.Slack Administrator SetupIn preparation an administrator needs to create acustom bot user for each Slack team thatwants to receive notifications: Create a new custom bot user. If you are currently logged into Slack in yourbrowser, try this link. Select the Slack team that will receive the notifications. Choose a username for your bot - e.g. oneopsbot Click the Add Bot Integration button Update other settings for the bot user as desired. Note down the API Token for the configuration in OneOPs.In addition you need to configure the Slack integration in the OneOpsnotification service called Antenna by setting the environment variables forthe Tomcat server running Antenna:slack.url: The URL to reach the Slack chat service. Defaults tohttps://slack.com. You need to ensure that this host can be reached on thenetwork. In an open deployment on the internet this is already the case. IfOneOps runs in an isolated network you need to open up the network or introducea reverse proxy server that can forward requests between Antenna and Slack. Oneoption for such a reverse proxy server isNGINX.slack.tokens: The Slack bot API tokens need to be provided to Antenna withthis configuration. 
Notifications to Jabber
Notifications can be configured to use the open XMPP messaging standard originally introduced by Jabber as notification sink. In preparation you need to get access details for the XMPP/Jabber server. Then follow these steps to configure and use a Jabber notification sink: Create a new notification sink with the type account.notification.jabber.sink. Configure the connection to the XMPP/Jabber server in the Settings section: Chat Server - hostname of the chat server. Chat Server Port - TCP port of the chat server. Conferences Identifier - identifier for the chat conference to receive the notifications. User Account - account to authenticate to the server and use to post the notifications. User Password - password of the user account. Save the new notification sink.
Notifications to Slack
Notifications can be configured to use Slack channels as a notification sink.
Slack Administrator Setup
In preparation an administrator needs to create a custom bot user for each Slack team that wants to receive notifications: Create a new custom bot user. If you are currently logged into Slack in your browser, try this link. Select the Slack team that will receive the notifications. Choose a username for your bot, e.g. oneopsbot. Click the Add Bot Integration button. Update other settings for the bot user as desired. Note down the API Token for the configuration in OneOps.
In addition you need to configure the Slack integration in the OneOps notification service called Antenna by setting the environment variables for the Tomcat server running Antenna: slack.url: The URL to reach the Slack chat service. Defaults to https://slack.com. You need to ensure that this host can be reached on the network. In an open deployment on the internet this is already the case. If OneOps runs in an isolated network you need to open up the network or introduce a reverse proxy server that can forward requests between Antenna and Slack. One option for such a reverse proxy server is NGINX. slack.tokens: The Slack bot API tokens need to be provided to Antenna with this configuration. The supported syntax is a comma separated list of all your bot user tokens for each team like team1=<token1>,team2=<token2>.
Assuming your OneOps installation is managed and run by OneOps itself, you can configure those environment variables for Antenna with the OneOps user interface: Locate the assembly for OneOps core. Inspect the Design and locate the Antenna platform in the platforms list on the right. Click on the Antenna platform. Click on the Tomcat component in the list of components on the right. Press the Edit button. Locate Environment Variables and add slack.url and slack.tokens as required. Press the Save button. If OneOps is running via a manual install and is not managed by OneOps itself, you have to configure the environment variables in the startup scripts for the Tomcat instance running Antenna.
Slack User Configuration
With the administrator setup completed you can create your Slack notification sinks: Create a new notification sink with the type account.notification.slack.sink. Add the desired Channels to receive notifications in the Slack Config section. Optionally configure Text Formats for inserting additional texts into the message based on detected text. E.g., setting the Key field to critical and Value to :fire: ${text} results in the fire emoji being inserted before any occurrence of critical. If desired, enable Include Notification Fields. Save the new notification sink. Ensure that the Slack bot user has access to the channel. For public channels, this is automatically the case. For private channels the bot user needs to be added to the channel users."
},
{
"title": "NuGet Package Component",
"url": "/user/design/nuget-package-component.html",
"content": "NuGet Package ComponentThe nuget-package component gives the ability to download NuGet packageson the Windows machine. After downloading the NuGet package, you can use Configure, Migrate and Restart attributes for post processing of your NuGetpackage. This component expects an http url of the NuGet package hosted in nexus repo.Besides the global configuration available for any component such as Name, you can configure thefollowing attributes:RepositoryRepository URL: Not used at this time. Full path for artifact is from Identifier below.Repository: is used to specify the specific repo name where the artifacts are located that you want to install.AuthenticationUsername and Password allow you to authenticate into your above defined repository.ArtifactIdentifier: is used to specify the artifact to install including the repo name. The identifier can be a URL, S3 Path, local pathor Maven identifier of the artifact package to download. The artifact should be a .nupkg file.Version: is used to specify the version of the artifact to install. Can be a specific version or latest to pull the mostrecent version.Checksum: is used to specify the SHA256 checksum of the artifact package.Path: is used to specify the repository path prefix.DestinationInstall Directory: is used to specify the directory path where the artifact will be deployed to and versions kept.Variables are typically used here to manage commonly used information in a central place.Deploy as user: is used to specify the system user used to deploy the applicationDeploy as group: is used to specify the system group to run the deployment as.Environment Variables: is used to specify any variables to be present during the deployment.Persistent Directories: is used to list directories to be persisted across code updates (ex. logs, tmp, etc.)Expand: Not used at this time. The NuGet package defined above will be expanded automatically.StagesConfigure: is used to specify any commands to be executed to configure the artifact package.Migrate: is used to specify any commands to be executed during the migration stage.Restart: is used to specify any commands to be executed during a restart.Attachments and Monitors TabsIn addition to the above configuration for this component, you can also specify Attachments andMonitors for this component."
},
{
"title": "Objectstore Component",
"url": "/user/design/objectstore-component.html",
"content": "Objectstore ComponentThe objectstore component is available for all platforms."
},
{
"title": "oneops-admin gem",
"url": "/admin/operate/oneops-admin-gem.html",
"content": "oneops-admin gemOneops-admin is a Ruby thor-based command-line to manage the inductor. Jenkins copies the JAR built from the inductor repo into the gem. The oneops-admin gem installs an inductor command which has:$ inductorCommands:inductor add # Add cloud to the inductorinductor check # Inductor checkinductor check_agent # Inductor check agentinductor create # Creates and configures a new inductorinductor disable PATTERN # Disable inductor clouds matching the PATTERNinductor enable PATTERN # Enable inductor clouds matching the PATTERNinductor force_stop NAME # Inductor force stop (will kill -9)inductor help COMMAND # Describe available commands or one specific commandinductor install_initd # Install /etc/init.d/inductorinductor list PATTERN # List clouds in the inductorinductor restart NAME # Inductor restartinductor restart_agent NAME # Inductor restartinductor restart_logstash_agent NAME # Inductor logstash agent restartinductor start NAME # Inductor startinductor start_agent NAME # Inductor log agent startinductor start_logstash_agent NAME # Inductor logstash agent startinductor status # Inductor statusinductor status_agent # Inductor log flume agent statusinductor status_logstash_agent NAME # Inductor logstash agent statusinductor stop NAME # Inductor stop (will finish processing active threads)inductor stop_agent NAME # Inductor log agent stopinductor stop_logstash_agent NAME # Inductor logstash agent stopinductor tail # Inductor log tail"
},
{
"title": "OneOps API Documentation",
"url": "/developer/integration-development/oneops-api-documentation.html",
"content": "Note: all calls will use the api token - see Get Auth Token: Account Profile GET all organizations Organization GET clouds GET cloud by name POST cloud by name GET Supported locations GET Supported Services GET Cloud Service By Name POST New Service PUT Update to Service GET Compute report Assembly GET List of assemblies for organization GET Assembly by name POST A new assembly PUT an updated assembly DELETE An assembly by name Platform GET List of platforms GET Platform by name POST a platform PUT and updated design component attribute PUT an updated design platform variable DELETE a Platform Environment GET Transition by name POST a new transition PUT Cloud configuration for environment platform Commit & Deploy POST a commit to an environment GET Latest release id POST A new deploy GET deployment status PUT Disable environment GET Pull Latest DELETE Environment POST Discard a release by ID Operations PUT Replace Component Instance GET All available actions GET Instance ids POST Request to execute action GET status GET Computes for a Platform Account ProfileGET Auth Token via UI in your browswer (not via curl or api)https://<your-server>/account/profile#authenticationSample curl using the auth-token:curl -i -u <AUTH-TOKEN>: -H "Content-Type:application/json" -H "Accept:application/json" -X GET -v https://<your-server>/account/organizationsOrganizationGET all organizationshttps://<your-server>/account/organizationsGET cloudshttps://<your-server>/<ORGANIZATION-NAME>/cloudsGET cloud by namehttps://<your-server>/<ORGANIZATION-NAME>/clouds/<CLOUD-NAME>POST cloud by namehttps://<your-server>/<ORGANIZATION-NAME>/cloudsGET Supported locationshttps://<your-server>/<ORGANIZATION-NAME>/clouds/locations.jsonGET Supported Serviceshttps://<your-server>/<ORGANIZATION-NAME>/clouds/<CLOUD-NAME>/services/available.jsonGET Cloud Service By Namehttps://<your-server>/<ORGANIZATION-NAME>/clouds/<CLOUD-NAME>/services/<SERVICE-NAME>POST New ServiceFirst fetch the new service body content using:https://<your-server>/<ORGANIZATION-NAME>/clouds/<CLOUD-NAME>/services/<SERVICE-NAME>/new.jsonThen:POST: https://<your-server>/<ORGANIZATION-NAME>/clouds/<CLOUD-NAME>/services/Body: Take the response from new call and update all necessary fields/attributes to create bodyPUT Update to ServiceFirst fetch the new service body content using:https://<your-server>/<ORGANIZATION-NAME>/clouds/<CLOUD-NAME>/services/<SERVICE-NAME>Then:PUT https://<your-server>/<ORGANIZATION-NAME>/clouds/<CLOUD-NAME>/services/<SERVICE-NAME>Body: Take the response from get call and update all necessary fields/attributes to create bodyGET Compute reporthttps://<your-server>/<ORGANIZATION-NAME>/reports/compute.jsonThe response has two major sections: One describes metrics metadata. One has the data itself in hierarchal form. A third describes scope. Because scope is different for different reports, if you want to run the report by cloud rather than by assembly, you pass extra param in the url. 
Assembly
GET List of assemblies for organization: https://<your-server>/<ORGANIZATION-NAME>/assemblies
GET Assembly by name: https://<your-server>/<ORGANIZATION-NAME>/assemblies/<ASSEMBLY-NAME>
POST A new assembly: https://<your-server>/<ORGANIZATION-NAME>/assemblies
Body: { "cms_ci": { "comments": "<COMMENT>", "ciName": "<ASSEMBLY-NAME>", "ciAttributes": { "description": "<DESCRIPTION>", "owner": "<EMAIL-ADDRESS>" } } }
DELETE An assembly by name: https://<your-server>/<ORGANIZATION-NAME>/assemblies/<ASSEMBLY-NAME>

Platform
GET List of platforms: https://<your-server>/<ORGANIZATION-NAME>/assemblies/<ASSEMBLY-NAME>/design/platforms
GET Platform by name: https://<your-server>/<ORGANIZATION-NAME>/assemblies/<ASSEMBLY-NAME>/design/platforms/<PLATFORM-NAME>
POST a platform: https://<your-server>/<ORGANIZATION-NAME>/assemblies/<ASSEMBLY-NAME>/design/platforms
Body: { "cms_dj_ci": { "comments": "<my comment>", "ciName": "<PLATFORM-NAME>", "ciAttributes": { "source": "<PACKSOURCE>", "description": "<DESCRIPTION>", "major_version": "<MAJOR-VERSION>", "pack": "<PACK>", "version": "<VERSION>" } } }
PUT an updated design component attribute: https://<your-server>/<ORGANIZATION-NAME>/assemblies/<ASSEMBLY-NAME>/design/platforms/<PLATFORM-NAME>/components
Body: { "cms_dj_ci": { "createdBy": "klohia", "execOrder": 0, "ciName": "user-app", "ciId": "12752422", "nsPath": "/testing/testassm/_design/t1", "ciClassName": "catalog.User", "ciAttributes": { "username": "app", "system_account": "true", "description": "App User", "login_shell": "/bin/bash", "home_directory": "/app", "authorized_keys": "<SSH-KEY>", "ulimit": "16384", "sudoer": "true" } } }
Attributes of a component are very specific to each component.
PUT an updated design platform variable: https://<your-server>/<ORGANIZATION-NAME>/assemblies/<ASSEMBLY-NAME>/design/platforms/<PLATFORM-NAME>/variables
Body: { "cms_dj_ci": { "comments": "", "impl": "oo::chef-11.4.0", "createdBy": "klohia", "execOrder": 0, "ciName": "appVersion", "ciId": "12752469", "nsPath": "/testing/testassm/_design/t1", "ciGoid": "12752412-1873-12752469", "ciClassName": "catalog.Localvar", "ciAttributes": { "value": "1.0" } } }
DELETE a Platform: https://<your-server>/<ORGANIZATION-NAME>/assemblies/<ASSEMBLY-NAME>/design/platforms/<PLATFORM-NAME>

Environment
GET Transition by name: https://<your-server>/<ORGANIZATION-NAME>/assemblies/<ASSEMBLY-NAME>/transition/environments/<ENVIRONMENT-NAME>
POST a new transition: https://<your-server>/<ORGANIZATION-NAME>/assemblies/<ASSEMBLY-NAME>/transition/environments/<ENVIRONMENT-NAME>
Body: { "clouds": { "<CLOUD-CID>": "<PRIORITY>", "<CLOUD-CID>": "<PRIORITY>" }, "platform_availability": { "<PLATFORM-DESIGN-ID>": "redundant" }, "cms_ci": { "ciName": "<ENVIRONMENT-NAME>", "nsPath": "<ORGANIZATION-NAME>/<ASSEMBLY-NAME>", "ciAttributes": { "autorepair": "true", "monitoring": "true", "description": "<DESCRIPTION>", "dpmtdelay": "60", "subdomain": "<ENVIRONMENT-NAME>.<ASSEMBLY-NAME>.<ORGANIZATION-NAME>", "codpmt": "false", "debug": "false", "global_dns": "true", "autoscale": "true", "availability": "redundant/single" } } }
PUT Cloud configuration for environment platform: https://<your-server>/<ORGANIZATION-NAME>/assemblies/<ASSEMBLY-NAME>/transition/environments/<ENVIRONMENT-NAME>/platforms/<PLATFORM-NAME>/cloud_configuration
Body: { "cloud_id": "<cloud ci-id>", "attributes": { "adminstatus": "active OR inactive OR offline", "priority": "1 OR 2", "pct_scale": ..., "dpmt_order": ... } }
All attributes are optional (pass in only what needs to be updated). Attributes: adminstatus - administrative status of the cloud; priority - cloud priority (primary => 1 or secondary => 2); pct_scale - scale percentage; dpmt_order - deployment order.
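As an illustration, updating just the scale percentage for one cloud could look as follows in Python with requests (all names and ids are placeholders; only the attributes that need to change are passed):
```python
import requests

url = ("https://oneops.example.com/myorg/assemblies/myassembly"
       "/transition/environments/prod/platforms/web/cloud_configuration")
payload = {"cloud_id": "12345", "attributes": {"pct_scale": "120"}}

resp = requests.put(url, json=payload, auth=("<AUTH-TOKEN>", ""),
                    headers={"Accept": "application/json"})
resp.raise_for_status()
```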
Commit and Deploy
POST a commit to an environment: https://<your-server>/<ORGANIZATION-NAME>/assemblies/<ASSEMBLY-NAME>/transition/environments/<ENVIRONMENT-NAME>/commit
GET Latest release id: https://<your-server>/<ORGANIZATION-NAME>/assemblies/<ASSEMBLY-NAME>/transition/environments/<ENVIRONMENT-NAME>/releases/bom
Previous Path [DEPRECATED]: https://<your-server>/<ORGANIZATION-NAME>/assemblies/<ASSEMBLY-NAME>/transition/environments/<ENVIRONMENT-NAME>/releases/latest
POST A new deploy: https://<your-server>/<ORGANIZATION-NAME>/assemblies/<ASSEMBLY-NAME>/transition/environments/<ENVIRONMENT-NAME>/deployments/
Body: { "cms_deployment": { "releaseId": "<LATEST-RELEASE-BOM>", "nsPath": "/<ORGANIZATION-NAME>/<ASSEMBLY-NAME>/<ENVIRONMENT-NAME>/bom" } }
GET deployment status: https://<your-server>/<ORGANIZATION-NAME>/assemblies/<ASSEMBLY-NAME>/transition/environments/<ENVIRONMENT-NAME>/deployments/<DEPLOYMENT-ID>/status
PUT Disable environment: https://<your-server>/<ORGANIZATION-NAME>/assemblies/<ASSEMBLY-NAME>/transition/environments/<ENVIRONMENT-NAME>/disable
Body: { "platformCiIds": ["<platformCiId>", "<platformCiId>", ...] }
POST Pull Latest: https://<your-server>/<ORGANIZATION-NAME>/assemblies/<ASSEMBLY-NAME>/transition/environments/<ENVIRONMENT-NAME>/pull
DELETE Environment: https://<your-server>/<ORGANIZATION-NAME>/assemblies/<ASSEMBLY-NAME>/transition/environments/<ENVIRONMENT-NAME>
POST Discard a release by ID: https://<your-server>/<ORGANIZATION-NAME>/assemblies/<ASSEMBLY-NAME>/transition/environments/<ENVIRONMENT-NAME>/releases/<RELEASE-ID>/discard

Operations
PUT Replace Component Instance: https://<your-server>/<ORGANIZATION-NAME>/assemblies/<ASSEMBLY-NAME>/operations/environments/<ENV-NAME>/platforms/<PLATFORM-NAME>/components/<COMPONENT-NAME>/instances/<INSTANCE-ID>/state
Body: { "state": "replace" }
GET All available actions: https://<your-server>/<ORGANIZATION-NAME>/assemblies/<ASSEMBLY-NAME>/operations/environments/<ENV-NAME>/platforms/<PLATFORM-NAME>/components/<COMPONENT-NAME>/actions
GET Instance ids: https://<your-server>/<ORGANIZATION-NAME>/assemblies/<ASSEMBLY-NAME>/operations/environments/<ENV-NAME>/platforms/<PLATFORM-NAME>/components/<COMPONENT-NAME>/instances?instances_state=all
POST Request to execute action: https://<your-server>/<ORGANIZATION-NAME>/operations/procedures
Body: { "cms_procedure": { "procedureCiId": "0", "procedureName": "<NAME>", "ciId": "<COMPONENT-ID>", "procedureState": "active", "definition": "{\"flow\":[{\"targetIds\":[\"<INSTANCE-ID>\"],\"relationName\":\"base.RealizedAs\",\"direction\":\"from\",\"actions\":[{\"actionName\":\"<ACTION-NAME>\",\"stepNumber\":1,\"isCritical\":true}]}],\"name\":\"<ACTION-NAME>\"}" } }
For example: { "cms_procedure": { "procedureCiId": "0", "procedureName": "reboot", "ciId": "9277281", "procedureState": "active", "definition": "{\"flow\":[{\"targetIds\":[\"9277720\"],\"relationName\":\"base.RealizedAs\",\"direction\":\"from\",\"actions\":[{\"actionName\":\"reboot\",\"stepNumber\":1,\"isCritical\":true}]}],\"name\":\"reboot\"}" } }
GET status: Use the procedure_id from the previous call: https://<your-server>/<ORGANIZATION-NAME>/assemblies/<ASSEMBLY-NAME>/operations/environments/<ENV-NAME>/platforms/<PLATFORM-NAME>/procedures/<PROCEDURE-ID>
GET Computes for a Platform: https://<your-server>/<ORGANIZATION-NAME>/assemblies/<ASSEMBLY-NAME>/operations/environments/<ENV-NAME>/platforms/<PLATFORM-NAME>/components/compute/instances.json?instances_state=all
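Because the definition attribute is a JSON document embedded as a string, it is easiest to build it programmatically. A sketch in Python with requests, using the reboot example above (server, organization and ids are sample values):
```python
import json
import requests

definition = {
    "flow": [{
        "targetIds": ["9277720"],
        "relationName": "base.RealizedAs",
        "direction": "from",
        "actions": [{"actionName": "reboot", "stepNumber": 1, "isCritical": True}],
    }],
    "name": "reboot",
}
body = {
    "cms_procedure": {
        "procedureCiId": "0",
        "procedureName": "reboot",
        "ciId": "9277281",
        "procedureState": "active",
        # serialize the nested definition to a string, as the API expects
        "definition": json.dumps(definition),
    }
}
resp = requests.post("https://oneops.example.com/myorg/operations/procedures",
                     json=body, auth=("<AUTH-TOKEN>", ""))
resp.raise_for_status()
```"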
},
{
"title": "OneOps Manages OneOps",
"url": "/admin/operate/oneops-manages-oneops.html",
"content": "OneOps Manages OneOpsWe have two separate instances of OneOps named prod and admin.They are in different regions, managing each other.We did this by creating a seed environment from our qa env.oneops-manages-oneops The seed env created our first env: admin. Then we took a snapshot of the database in the seed env and shutdown the env. The admin env then created a prod env and restored the snapshot from the seed. When prod came up it had all the data to manage admin. This enabled prod and admin to manage each other."
},
{
"title": "OneOps Policy Management",
"url": "/user/account/oneops-policy-management.html",
"content": "OneOps Policy ManagementPolicy Management provides in-line technical debt to identify Cloud anti-patterns that are at risk to cause a service disruption.Policies are defined per organization. OneOps admin users have privileges to add, edit, and delete a policy. A policy definition includes: Name: Unique name of the Policy Description: Brief description about the Policy Query: Elastic Search query to identify candidate objects that violate the policy Execution Mode: Active: Prohibits users from saving objects under violation. For example: A policy can be defined to allow only a few compute images like centos-6.6, redhat 6.5. Any attempt to save a compute component with an image value other than these two, fails. Passive: All components and instances that violate such policies are marked with a failed policy status with the reason for the specific policy failures. All components and instances that violate one or more policies are marked with a failed policy status along with the reason of the specific policy failures."
},
{
"title": "Operations",
"url": "/user/operation/operations-reference.html",
"content": "OperationsOperations is where you monitor andcontrol your environments. On the summary tab, you can drill down by using the right navigation bar.From the top level, with the graph tab, you can visualize the entire health of an environment. On the graph, you can drill down to a component instance. Each component instance has configuration, monitors, logs, and actions."
},
{
"title": "Operations Summary",
"url": "/user/operation/operations-summary.html",
"content": "Operations SummarySolutionTo view an operations summary, follow these steps: Go to Assembly. Select the operations tab. Select the environment where you want to perform the action. Select the summary tab. It tells you about the details that the system is performing. Other details include: Platform details (Enable/Disable) Deployment Status and Deploy/Force Health of the system Auto repair status. Notifications "
},
{
"title": "Organization Summary",
"url": "/user/account/organization-summary.html",
"content": "Organization SummarySolution Select the organization. Select the Summary tab. Summary tab displays all the notifications sent out for that particular organization. In the Right panel, enter the number of: Users that belong to the organization Clouds associated with the organization Assemblies associated or created in the organization Below the number of assemblies, list all the assemblies."
},
{
"title": "Operating System Component",
"url": "/user/design/os-component.html",
"content": "Operating System ComponentThe os component is available for all platforms andconfigures the operating system."
},
{
"title": "Override Platform Attributes",
"url": "/developer/content-development/override-platform-attributes.html",
"content": "Override Platform AttributesYou can override the platform attributes like auto-replace or auto-scale at individual circuit (pack) level.Edit your circuit and add below hash to the circuitplatform :attributes => { "replace_after_minutes" => 60, "replace_after_repairs" => 3 }Apart from the above attributes, you can also configure any other platform attribute to have default values.For the full set of attributes, refer to the platform metadata.rb:https://github.com/oneops/oneops-admin/blob/master/lib/base/platform/metadata.rbOneOpss base circuit is going to have the Auto-Replace set to true and have the values like in above example by default.Individual circuit owners need to override these values in their circuits if they want different configuration. If not overriden, the values are inherited from base circuit."
},
{
"title": "Pack Development",
"url": "/developer/content-development/pack-development.html",
"content": "Pack DevelopmentGood Defaults Have reasonable defaults for resources included in pack. For example, What would be default value of compute size for tomcat.Use VariablesSome of the use case for variables would be: If you use the same value in multiple places in the platform and you want them in sync Very frequently used attributes so the user doesnt need to drill-down like version of an artifact attributes that the user must change, like name of the app or something like that The variables are de-referenced during the deployment plan generation on OneOps and by the time the attribute is passed to the cookbook (workorder) its already substituted"
},
{
"title": "Pack policy",
"url": "/developer/content-development/pack-policy.html",
"content": "Pack PolicyPolicies can also be specified as part of the pack definition.This enables policy evaluation on all CIs (components, attachments, platform variables, monitors) under the platform for a given pack. The violated policies can then be fixed to avoid issues further down the application lifecycle.The policy definitions are added to the pack .rb file in a given circuit. Following are few examples of pack based policies.policy "env-profile", :description => 'custom pack policy for env-profile', :query => 'ciClassName:manifest.Environment AND _missing_:ciAttributes.profile' :docUrl => '<document url link for the policy>'` :mode => 'passive'policy "compute-ostype", :description => 'custom pack policy for compute-ostype', :query => 'ciClassName:(catalog.*Compute manifest.*Compute bom.*Compute) AND NOT ciAttributes.ostype:("centos-6.5" OR "centos-6.6" OR "redhat-6.5" OR "redhat-6.6" OR "default-cloud")' :docUrl => '<document url link for the policy>'` :mode => 'active'policy "env-automation", :description => 'custom pack policy for env-automation', :query => 'ciClassName:manifest.Environment AND ciAttributes.profile:(PROD EBF STAGING) AND NOT (ciAttributes.autorepair:true AND ciAttributes.autoreplace:true)' :docUrl => '<document url link for the policy>'` :mode => 'passive'Brief description about each field in the policy definition is as follows Description: Brief description about the Policy. Query: Elastic Search query to identify candidate objects that violate the policy. Doc URL: URL to the document about the policy details. Mode: Can be Active/Passive. Active: Prohibits users from saving objects under violation. For example: A policy can be defined to allow only a few compute images like centos-6.6, redhat 6.5. Any attempt to save a compute component with an image value other than these two, fails. Passive: All components and instances that violate such policies are marked with a failed policy status with the reason for the specific policy failures. "
},
{
"title": "Packs",
"url": "/user/design/packs.html",
"content": "PacksA pack is a logical grouping of components to provide a manageable software. Packs are used todefine the platforms that define your assembly.OneOps supports numerous packs for applications such as Apache Tomcat, Apache HTTP server, mySQL, Cassandra, Nginx andmany others. OneOps maintains management metadata and code that deploys,configures, and manages the software.Pack documentation beyond the components with specific details and use cases is available for the following packs: Apache HTTP Server Pack Apache Tomcat Pack Microsoft Internet Information Services(IIS) Pack Microsoft SQL Server Pack"
},
{
"title": "Platform Links",
"url": "/user/design/platform-links-reference.html",
"content": "Platform LinksA User can set Links To dependencies between Platforms within an Assembly. These dependencies are used to generate a proper deployment sequence for the Platforms. For example, when you link a web Platform to a database Platform, the database deploys first. Then, when the web Platform comes up, the database Platform is ready."
},
{
"title": "Platform Management Pack",
"url": "/developer/content-development/platform-management-pack.html",
"content": "Platform Management PackA platform is added to the system by creating a Platform Management Pack (Pack) file and loading it into theCMS. A Pack is a Ruby DSL file that models a platform. It exists in the packer directory structure.The file contains: Component Resources: Named resources with the type (cookbook attribute) and theComponent Class name Relationships/dependencies with flexing/scaling attributes Metrics/Thresholds (optional)A Pack can extend another Pack, which keeps the model clean and manageable. Packs are versioned to match a set of recipes.For instructions on how to add a new platform, refer to Add a Platform."
},
{
"title": "Platforms",
"url": "/user/design/platforms.html",
"content": "PlatformsEach Platform has a set of Components (optional and required) that are predefined by a Platform template.Each Component has configuration attributes that are specific to the type of Component. Components, like Platforms, have interdependencies that are used during the generation of a deployment plan. On the Platform detail page, there is a diagram and a list of Components. The list is grouped by Component type with an indication of the number of Components within each group.After all changes are committed, you can move on to Transition to realize/promote your design/changes to the new/existing environment."
},
{
"title": "Ports by Platform",
"url": "/user/design/ports-by-platform.html",
"content": "Ports by Platform Pack Name Versions Port Numbers Cassandra** 0.81.2 22,1024-65535 Couchbase 2.5.22.2.0 Secgroup already present22 22 tcp 0.0.0.0/0, 4369 4369 tcp 0.0.0.0/0, 8091 8092 tcp 0.0.0.0/0, 18091 18092 tcp 0.0.0.0/0, 11214 11215 tcp 0.0.0.0/0, 11209 11211 tcp 0.0.0.0/0, 21100 21299 tcp 0.0.0.0/0 Squid 3.1.10 22, 80, 8080 CouchDB 1.4.0 22,5984,6984 Postgressql 9.1 22,5432 MySQL 5.1.7 22 22 tcp 0.0.0.0/03306 3306 tcp 0.0.0.0/0 Activemq 5.5.15.9.15.10.0 22 22 tcp 0.0.0.0/061616 61617 tcp 0.0.0.0/0 Rails 22,80,443 Java ws 22,8080,8443 Ruby 1.8.71.9.32.0.0 22 PHP 22,80,443 Tomcat 6.0 7.0 22, 8080, 8443 Add 8009 Jboss 5.1.2 5.1.sterling 22 22 tcp 0.0.0.0/0, 8080 8080 tcp 0.0.0.0/0, 8443 8443 tcp 0.0.0.0/0, 8009 8009 tcp 0.0.0.0/0 Apache 2.2.21 22,80,443 Elastic Search with LB 22,9200-9400 Custom 22 "
},
{
"title": "Propagation",
"url": "/user/design/propagation.html",
"content": "PropagationComponent propagation section is an advanced configuration option that generally should not be changed. However, in some very rare cases it may be used to fine tune the behavior of how configuration (and therefore deployment) changes to one component will trigger the deployment of its dependent components or its master components (the ones that depend on it). This typically will be done with the purpose of optimizing deployment plan size and reducing the total deployment time. Use extreme caution when editing propagation configuration. When used incorrectly it will result in broken deployments and/or unexpected application behaviour.ExampleLets consider a tomcat platform for redundant environment. This platform has a load balancer - LB - component that depends on a Compute component (its master). Internally these components are tied by a DependsOn relation (LB depends on Compute) with a special propagation attribute (propagate_to) set to both. That means that when there is any deployment change for one of these components (due to configuration changes or just a touch update) the other component will be re-deployed as well regardless of whether it actually had any configuration changes. So if, for example, a user changes the size (size attribute) of Compute, then LB will get re-deployed together with Compute during next deployment even though its configuration has not technically changed. And vice versa: deployment of LB (due to some active changes) will be accompanied by re-deployment of compute regardless of whether its configuration is changed by user.Valid ValuesPossible propagation (propagate_to) values are: both - changes to this component OR to component it depends on (master) will cause deployment of BOTH components; from - changes to master component (the one this component depends on) will also cause deployment of this (from) component regardless whether this component configuration has changed, but not the other way around; to - changes to this component (dependent) will also cause deployment of master (to) component, but not the other way around; none - no propagation of deployment in either direction, changes to either master or dependent components will not cause additional deployment of the other one.APIAPI end-point to list DependsOn relations (including propagate_to attribute) from a given component:GET https://<your-server>/<ORGANIZATION-NAME>/assemblies/ASSEMBLY/transition/ENVIRONMENT/platforms/PLATFORM/components/COMPONENT/depends_on.json"
},
{
"title": "Provide Only Necessary Privileges to Accounts",
"url": "/user/account/provide-only-necessary-privileges-to-accounts.html",
"content": "Provide Only Necessary Privileges to AccountsProvide only the necessary privileges to accounts with these best practices: Limit Admin permissions within an organization. Limit access by phase as listed below: Design: Can add or delete components Transition: Can change variables and deploy code Operations: Can do control operations like stop and restart "
},
{
"title": "Relations",
"url": "/developer/core-development/relations.html",
"content": "RelationsThe following model diagrams describe the relationships for Design, Transition, and Operations in the OneOps UI.Relationships also have attributes, some of which are used to scale.DesignIn the Design aspect, we model the base application: No environmental No operational componentsTransitionIn the Transition aspect, we model two additional objects: IaaS components: Can be load balancers (haproxy) or DNS (djbdns). Can also use provider-based ones like route-53, dyndns, elb, etc. Monitors: Use to customize monitors for each environmentOperationsIn the Operations aspect, we create bom components for the manifest components with relation to the Binding (cloud provider)."
},
{
"title": "Relationships",
"url": "/developer/content-development/relationships.html",
"content": "RelationshipsRelationships have attributes like other objects modeled in OneOps. We are working on adding pages to visualize and explain all of them.There are two primary relationships used in packs: depends_on Sets the order of deployment and dependency tree for escalation managed_via How to know where to connect for management. In most cases, this is a compute, but in some, it is a cluster or ring.Relationships are modeled like components, with the same directory structure. Relationships also have attributes.For more detail regarding relations for Design, Transition and Operations, seeRelations.List of relationships:authenticatesbinds_tocomposed_ofcontrolled_bydepends_ondeployed_toescorted_byforwards_tolinks_tomanaged_viamanagesprovidesrealized_asrealized_inrequiressecured_byserviced_bysupplied_byutilizesvalue_forwatched_by"
},
{
"title": "Remove an Unused Cloud from an Environment",
"url": "/user/transition/remove-unused-cloud-from-environment.html",
"content": "Remove an Unused Cloud from an EnvironmentAmong other reasons, it is necessary to remove a cloud from an environment when there is: Resource under-utilization Instability of one cloud over another No longer support for the cloud by its infrastructure teamSolutionTo remove a cloud that is no longer required from an environment, follow the steps below: Ensure that the cloud does not have any live instances within the environment. To do this: Shut down the cloud from all platforms. Disable some or all of the platforms where the cloud has live instances. Go to the Environment Configuration tab and select Edit. Check not used for the cloud and then click Save. If all live instances are removed, the Save is successful. If not, then a failure message appears and the cloud update is not saved. Upon successful removal, the cloud is deleted from the environment and all of the platforms under it. Removal of a cloud from an environment does not require deployment because the live instances are already deleted."
},
{
"title": "Reports Summary",
"url": "/user/account/reports-summary.html",
"content": "Reports SummaryThe Reports Summary offers an overall report of all environments.To view the Reports Summary, follow these steps: Select the Organization name on the top left. Select the Report tab.Reports can: Show the overall reports of the environment Be generated on assembly or cloud basis. In each, the ge can get reports on the core memory abd instance basis. Be viewed as a graph or in tabular formThere are three types of reports: Core: Overall report of the core used Memory: Overall report of the memory used Instance: Overall report of instances used"
},
{
"title": "Restrict Access with Teams",
"url": "/user/account/restrict-access-with-teams.html",
"content": "Restrict Access with TeamsUse Teams to restrict user access to an organization. Create teams, add users to those teams, and then provide only the necessary privileges to each team.DiscussionThis should be done by the organization admin. After creating a team, it can be given a permission to have access to Design, Transition and Operations phases. Do not make every user an admin. Admin access should be limited to selected users in an organization.See AlsoEnable Access to an Assembly for a User on a Team"
},
{
"title": "Rollback Code",
"url": "/user/transition/rollback-code.html",
"content": "Rollback CodeSolutionWhen a code push is determined to be unsuccessful by a TDO or engineer, a rollback becomes necessary to restore the environment to a previously stable state.Transition > environment > platform > variables > appVersion > Value In the Transition page, select an environment. The Environments page displays. From Environments page, select a platform. The Platform page displays. From the Platform page, select the Variable tab and then click appVersion. The Value displays the current version number. Click Edit and modify the value to a previous version. IMPORTANT: There is no log of the previous appVersion. To determine the appropriate version number, contact the application owner. Click Save. Program owners can name the Value anything they want, but the variable is always identified as appVersion."
},
{
"title": "Running OneOps in Production",
"url": "/admin/operate/running-oneops-in-production.html",
"content": "Running OneOps in ProductionUse OneOps to deploy OneOps in production, so you get all the benefits of OneOps in managing your application refer this article.See also: Follow these general prescribed practicesDeploy in redundant mode.OneOps backend applications are primarily java web apps deployable on separate tomcats or in single tomcat with multiple artifacts. For example you can deploy adapter,transistor,transmitter,controller apps on one tomcat, deploy the other apps on separate tomcat. Choose multiple fault zones.You can deploy like WebApps Platform adapter,transistor,transmitter,controller Tomcat sensor,opamp Tomcat User DB,CMS DDB Postgres elastic search es amq Messaging Platform opsmq OpsMQ daq Cassandra Use Command Line Interface Install OneOps CLI to perform any control actions from command line or search.Configure DB backups You can use gluster fs to set up a store to back up cms db.Use security groups.For each platform use ports which are required for application to work.Use Elastic Search Kibana Logstash Set up an elastic search to ship your web logs or other logs for troubleshooting. More details coming.Avoid manual changes :Anything manual you do to make things work, will get overwritten in next deployment. Be sure to make change to design,transition configuration so that changes are not lost."
},
{
"title": "Search",
"url": "/user/general/search.html",
"content": "# SearchThe search feature allows you to locate entities such as users, assemblies,environments, computes and many others. It is available via the _Search_ icon inthe shape of a magnifying glass in the top right corner or _Search_ item in theleft hand navigation.The _Organization Dashboard_ and the _Enviroment_ overview page both includededicated search tab that automatically narrow the search results to therelevant context.Search provides a number of input fields to control the search and is startedafter pressing the _Run_ button. Results are displayed on the right and arelimited to a specific organization. They can be further refined with the filtercontrol above the list.![Search](/assets/img/ui/search.png)# Search in Action# Search Criteria## QueryThe _Query_ input is used to provide the search criteria. In its simplest formit is a simple string, while you can refine the search using the[Elasticsearch Query DSL](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl.html)## Filters - NamespaceNamespace allows you to define the hierarchical path that restricts the searchresult. By default the path is set to the current organization with`/`. Further restrictions can be achieved by appendingassembly and environment names and others in the syntax`////[manifest/bom]/platform-name/platform-version`. `manifest`narrows the search to the transition phase, while `bom` sets the operationsphase.## Filters - ClassClass enables fine-grained control to search only in specific entities andattributes. Available selections can be determined with the auto-completionfeature of the input. Simply start typing and inspect the list of availablechoices displayed.Examples for available entities are:- `catalog.*` equivalent to the design phase- `manifest.*` equivalent to the transition phase- `bom.*` equivalent to the operations phase- account- cloud- catalog- service# Quick SearchQuick Search provides pre-configured searches that kick off at the press of abutton or prefill the query and filters input fields.## All FQDNsThis quick search allows you to find all fully qualified domain nameconfigurations by simply pressing the _All FQDNs_ button.## Compute Instances by IPThis search allows you to locate a specific compute instance based on the IPnumber:1. Press the _Compute Instances by IP_ button.2. Enter the IP address in the _Query_ field. You can search for a range of addresses using an asterisk e.g. `192.168.1.*`3. Press _Run_.# Examples## Locate Artifacts within an Environment in an Assembly1. Set the _Namespace_ composed like `///` in the _Filters_ section.2. Set the _Class_ to `catalog.Artifact` for the design phase artifacts, or `manifest.Artifact` for the transition phase4. Click _Run_ to obtain the search results.## List Loadbalancer Components in Transition PhaseSet the _Class_ to `manifest.Lb` and set the _Namespace_ to your organizationand press _Run_. E.g. in the `services` organization this results in`https:///services/organization/search.json?ns_path=/services&class_name=manifest.Lb`.## List Loadbalancer Components in Operations PhaseSet the _Class_ to `bom.Lb` and set the _Namespace_ to your organization andpress _Run_. E.g. in the `services` organization this results in`https:///services/organization/search.json?ns_path=/services&class_name=bom.Lb`.## List all Full Qualified Domain Name Components in OperationSet the _Class_ to `bom.Fqdn` and set the _Namespace_ to your organization andpress _Run_. E.g. 
# Advanced Search and Search API
Further documentation covering advanced search and search API usage can be [found in the developer section](/developer/integration-development/advanced-search)."
},
{
"title": "Security Group Component",
"url": "/user/design/secgroup-component.html",
"content": "# Security Group ComponentThe _secgroup_ [component](./components.html) is available for all platforms."
},
{
"title": "Secrets Client Component",
"url": "/user/design/secrets-client-component.html",
"content": "# Secrets Client ComponentThe _secrets client_ [component](./components.html) exposes files containingsecrets such as property files with password, keystore files, ssh keys andothers on the file system of each compute of a platform.The default mount point is `/secrets` and exposes the secret files on a tmpfsfile system. tempfs is a temporary storage facility intended to appear as amounted file system that persists in memory rather than on disk. Access can belimited by configuring _User_ and _Group_ ownership.Currently only Linux-based computes are supported.The secrets are managed via the[OneOps Secrets Proxy](../account/secrets-proxy.html) and stored byKeywhiz. OneOps users can interact with the proxy to manage their secret filesusing the [OneOps Secrets CLI](#oneops-secrets-cli).Secrets are synchronized to the computes every 30 seconds and can be[accessed via normal file system operation in your application](#secret-access).Typical steps necessary to start using the secrets client component are:- identify files that contain secrets- prepare your [OneOps security configuration](#security-config)- provision secrets using the [OneOps Secrets CLI](#secrets-cli)- update your [OneOps assembly](#assembly)- modify the [secret access](#secret-access) to load from the new location## OneOps Security ConfigurationAccess to secrets for an assembly in OneOps is managedvia membership in teams:- Create a team in your organization named `secrets-admin`, or forassembly-specific access named `secrets-admin-`.- Add the _Assembly Permissions_ to to allow modifications in _design_ and_transition_.- Add the _User Members_ or _Group Members_ as desired.- Add the team to the assemblies where you want to use secrets.## OneOps Secrets CLIThe OneOps Secrets CLI is a command line tool that allows a user to manage theirsecret files in the [OneOps Secrets Proxy](../account/secrets-proxy.html).### Downloading and InstallingDownload the latest version of the CLI from the Central Repository at[http://repo1.maven.org/maven2/com/oneops/secrets-cli](http://repo1.maven.org/maven2/com/oneops/secrets-cli)and locate the latest version in the above folder. 
Then download the file named `secrets-cli-*-executable.jar`, rename it to secrets and add it to your _PATH_. For example:
```
mkdir ~/bin
cd ~/bin
curl -o secrets http://repo1.maven.org/maven2/com/oneops/secrets-cli/1.0.3/secrets-cli-1.0.3-executable.jar
chmod a+x secrets
export PATH=~/bin:$PATH
```
Now you can run the application using the command `secrets info` as a first test:
```
$ secrets info
OneOps Secrets CLI: v1.0.3
Built on 2017-10-04 11:12:57 PM UTC
```
Note that if the secrets application does not work on your operating system, you can download the `secrets-cli-*-uber.jar` and use it with
```
java -jar secrets-cli-1.0.3-uber.jar
```
Apart from the different invocation, the command behaves identically.To configure the secrets CLI to access your specific OneOps and OneOps secrets proxy instances securely, you need to configure the URLs, the truststore and the truststore password and expose them as environment variables:
```
export SECRETS_PROXY_BASEURL=<secrets-proxy-url>
export SECRETS_PROXY_TRUSTSTORE=file:<path-to-truststore>
export SECRETS_PROXY_TRUSTSTORE_PASSWD=<truststore-password>
export ONEOPS_BASEURL=<oneops-url>
```
Alternatively your organization can build a binary with the necessary configuration embedded and make it available for download to your users.

### Adding Secrets
With the secrets CLI configured, you can now add a secrets file such as `access.properties` to a specific environment in the chosen assembly.
```
secrets add -u <username> -a <application> -d <description> access.properties
```
The `username` value is your username in OneOps. Execution triggers a prompt for the password.The `application` is the concatenated value from your organization name, the assembly name and the name of the environment separated by underscores: `orgname_assemblyname_envname`. E.g. for the `qa` environment in the `petstore` assembly within the `training` organization the application value to use is `training_petstore_qa`.

### Other Operations
The secrets CLI supports numerous other operations that are listed via a built-in help accessible via an invocation without parameters:
```
$ secrets
usage: secrets <command> [<args>]

The most commonly used secrets commands are:
    add       Add secret for an application.
    clients   Show all clients for the application.
    delete    Delete a secret.
    details   Get a client/secret details of an application.
    get       Retrieve secret from vault.
    help      Display help information
    info      Show OneOps Secrets CLI version info.
    list      List all secrets for the application.
    log       Tail (no-follow) secrets cli log file.
    revert    Revert secret to the given version index.
    update    Update an existing secret.
    versions  Retrieve versions of a secret, sorted from newest to oldest update time.

See 'secrets help <command>' for more information on a specific command.
```
Detailed help for each command is available via the help command, e.g.
```
$ secrets help add
NAME
        secrets add - Add secret for an application.

SYNOPSIS
        secrets add -a <app> -d <description> [-n <name>] [-u <user>] [-v] [--] <secret-file>

OPTIONS
        -a <app>
            OneOps App name (org_assembly_env), which you have secret-admin access...
```

## Update OneOps Assembly
Now that the secrets are available via the proxy and the security configuration is completed, you can edit your assembly to access them:
- add the `secrets client` component to the relevant platform in design
- pull the design changes to the desired environments
- release and deploy the environments to operation

Once the deployment is completed you can verify that everything is working by accessing a compute via SSH and checking the contents of the `/secrets` folder. It contains all the secrets added for the specific environment of the assembly.

## Secret Access
With the secrets client component in place, all your secrets are available via standard file system operations.Typically applications load the secrets during their startup procedure. As a consequence, you need to restart the application after any relevant secret changes. Alternatively, you can implement polling for secrets and automatic reloading.

### Configuration Files
If your application loads configuration files to access secrets, you can simply manage those files with the secrets proxy and then update the reference to load those files.For example, if the default location is configured to use `/opt/myapp/conf/access.properties`, change it to e.g. `/secrets/access.properties`.If this is not possible you might be able to use symbolic links to the files as an alternative.

### Java
Java offers numerous ways to load files and secrets. The following example loads a properties file from `/opt/myapp/conf/access.properties`.
```java
String configPath = "/opt/myapp/conf/access.properties";
Properties props = new Properties();
props.load(new FileInputStream(configPath));
```
To change the loading to use the secrets location, simply change the `configPath` variable value to e.g. `/secrets/access.properties`.

### NodeJS
NodeJS can, for example, load a JSON formatted properties file with the `require` function and you can simply change the path to the file.For example with the `config.json` file of
```
{
  "username": "admin",
  "password": "mNQTic8mUtYLtdm"
}
```
Loading the content can be achieved with
```
var config = require('./config/config.json');
```
and the values are available at `config.username` and `config.password`.Changing this to use the secret storage is as simple as changing the path:
```
var config = require('/secrets/config.json');
```

### Python
Reading a secret file in Python can use the standard `open` function with the path to the `/secrets` mount.
```
open("/secrets/my-mysql-passwd").read()
```
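Because secrets are re-synchronized to the compute periodically, an application can also watch the mounted file and reload on change instead of restarting. A minimal polling sketch, assuming a hypothetical `/secrets/access.properties` file:
```python
import os
import time

SECRET = "/secrets/access.properties"

def load_secret(path):
    with open(path) as f:
        return f.read()

# naive reload loop: re-read the file whenever its mtime changes
last_mtime, value = None, None
while True:
    mtime = os.stat(SECRET).st_mtime
    if mtime != last_mtime:
        value = load_secret(SECRET)
        last_mtime = mtime
        # re-configure the application with the new secret value here
    time.sleep(30)  # secrets sync to the compute roughly every 30 seconds
```"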
},
{
"title": "Secrets Proxy",
"url": "/user/account/secrets-proxy.html",
"content": "# Secrets ProxyThe OneOps Secrets Proxy is a proxy server that sits in front of a[Keywhiz server](https://square.github.io/keywhiz/)used for secrets storage.Secrets are any file resources that contain information that needs to be keptprivate and secure. Examples are- TLS/SSL certificate files/keys- property files and other files containing usernames, password or access tokens- API tokens- Java KeyStore filesand others.## UsageThe secrets proxy understand the concepts and access configurationof OneOps and allows a user to store secrets in Keywhiz and access them intheir OneOps assemblies via the[secrets client component](../design/secrets-client-component.html).The source code and REST API documentation can be found on GitHub at[https://github.com/oneops/secrets-proxy](https://github.com/oneops/secrets-proxy).## InstallationCurrently installation requires you to build the proxy from source and deploy itvia a custom generated OneOps assembly using one customlb platform with thenecessary configuration.In addition a Keywhiz server installation is required for the secretstorage. This installation can be using OneOps via a customlb platform or asimilar approach or use a separate deployment outside OneOps.## Cloud ConfigurationOnce the Secrets Proxy is installed and up and running, the cloud service withthe type `secret` has to be added to each cloud and configured to point at thesecrets proxy.In addition, a cloud service with the type `certificate` has to be configured oneach cloud."
},
{
"title": "Security Groups",
"url": "/user/account/security-groups.html",
"content": "# Security GroupsA security group is a named collection of network access rules that is used to control the types of traffic that have access to your application. The associated rules in each security group control the traffic to platforms in the group.In the old clouds, the security groups were provided by default at the cloud level. This allowed inter-communication between any platforms, which was a security concern. As a result, the new clouds that run Juno do not have any default security groups defined. Application teams need to add the secgroup component available in OneOps to open the relevant ports required for their application. The default list of ports for each platform is added to the circuits and shows up when the sec group component is added to the design in OneOps.## SolutionsThree possible scenarios are described below.### Platform with Security Groups1. Review the ports that are added as part of this component and ensure that all the ports that are needed for your application are added.2. If it is necessary to add or edit the ports, follow the steps in [Add or Delete a Security Group to Open or Close an Additional Port](/user/design/add-or-delete-a-security-group-to-open-or-close-an-additional-port.html). 1. Make sure that port 22 is in the list or deployment will fail at the compute provisioning step. 2. The default ports for the Tomcat, Jboss, nodejs should be open if the configuration has not changed and should have these ports. For a complete list of platforms, see the [List of Ports by Platform](/user/design/ports-by-platform.html).| Platform | Port Rule|----------|-----------|Tomcat |22 22 tcp 0.0.0.0/08080 8080 tcp 0.0.0.0/08443 8443 tcp 0.0.0.0/08009 8009 tcp 0.0.0.0/0|JBOSS |22 22 tcp 0.0.0.0/08080 8080 tcp 0.0.0.0/08443 8443 tcp 0.0.0.0/08009 8009 tcp 0.0.0.0/0|nodejs |22 22 tcp 0.0.0.0/08080 8080 tcp 0.0.0.0/08443 8443 tcp 0.0.0.0/0|gluster |22 22 tcp 0.0.0.0/024007 24100 tcp 0.0.0.0/024007 24100 udp 0.0.0.0/034865 34867 tcp 0.0.0.0/034865 34867 udp 0.0.0.0/0111 111 tcp 0.0.0.0/0111 111 udp 0.0.0.0/049152 49153 tcp 0.0.0.0/049152 49153 udp 0.0.0.0/0### Platform with No Security Groups1. Go to your Design.2. Click the Platform to which you want to add the security group.3. Click **+** on the secgroup component.4. Review the list of default ports available for that platform. (For example: below is a screenshot for the JBoss platform.)5. To add new ports that are not part of the list, follow the instructions at [Add or Delete a Security Group to Open or Close an Additional Port](/user/design/add-or-delete-a-security-group-to-open-or-close-an-additional-port.html).### New PlatformThe secgroup component is added by default as part of every platform.> If an application requires other ports to be opened, it is important to do this so that the application works."
},
{
"title": "Sensors",
"url": "/admin/operate/sensors.html",
"content": "# SensorsSensors wrap an Esper CEP engine with sharding logic to load EQL statements and consume PerfEvents that are produced by the collectors. If threshold/statements are violated, then OpsEvents are produced and opamp consumes to produce ActionOrders or WorkOrders.The sharding logic does a mod of the manifest ID and poolsize global var to load statements from sensor_ksp on DAQ/Cassandra and consume from an opsmq `perf-in-q-`.## re-shard1. Change the `SENSORPOOLSIZE` global var.2. Disable opamp (soon to be: disable sensor heartbeat monitoring).3. Commit and deploy only the sensor.4. While the sensor is bootstrapping (takes ~10 minutes), touch the update daq collector-artifact and daq logstash. (The env var is set in an attachment for logstash.)5. Commit and deploy (takes ~10 minutes).6. Verify that the queues on opsmq are clear and that the number of unhealthy components is normal.7. Enable opamp / sensor heartbeat monitoring."
},
{
"title": "Sensuclient Component",
"url": "/user/design/sensuclient-component.html",
"content": "# Sensuclient ComponentThe _sensuclient_ [component](./components.html) is available for all platforms."
},
{
"title": "Set Up a Custom Action",
"url": "/user/operation/set-up-a-custom-action.html",
"content": "# Set Up a Custom Action## SolutionA Custom Action can be created for any Component. To do this, follow these steps:1. Add an Attachment.2. Specify the content and command line.IMPORTANT: It is necessary to select when to run `on-demand` for this to be exposed as an Action. The name that you give to the Attachment appears as an Action in Operations."
},
{
"title": "Set Up Log Forwarding to ES with Logstash",
"url": "/admin/operate/set-up-log-forwarding-to-es-with-logstash.html",
"content": "# Set Up Log Forwarding to ES with LogstashThe inductor logstash-forwarder agent is installed as part of the inductor setup . For more details on inductor setup refer,[build-install-configure-an-inductor](/admin/operate/build-install-configure-an-inductor.html) document.Retrieve the logstash cert from any of the ES nodes and update it at this path /logstash-forwarder/cert/logstash-forwarder.crtOnce the cert is updated restart the inductor logstash agent via the command *inductor restart_logstash_agent cloud-name*."
},
{
"title": "Set Up Multiple Ports/Protocols in Load Balancer",
"url": "/user/design/set-up-multiple-ports-protocols-load-balancer.html",
"content": "# Set Up Multiple Ports/Protocols in Load BalancerThere is a syntax to declare the virtual port/protocol and the instance port/protocols for the LB component. Each vport/vprotocol and iport/iprotocol combination is encapsulated in a single listener array as shown in the screenshot below. For a single listener, the syntax is `"vprotocol vport iprotocol iport"`. For multiple ports/protocols, it is possible to have multiple entries of listeners to be configured in the LB component.![Multiple ports protocols](/assets/docs/local/images/multiple-ports-protocols.png)There is also a map-based syntax for the ECV declaration. The ECV map entries have the key as the instance port and the URL as the health-check URL pattern, with the HTTP method for the service listening on that port. To ensure that monitors are created for all service groups, it is necessary to add ECV entries for the non-http services by using non-existing URL patterns.[comment]: # (IMAGE-REQUIRED: set-up-multiple-ports.png)"
},
{
"title": "Set Variable Cloud Scaling Percentage",
"url": "/user/transition/set-variable-cloud-scaling-percentage.html",
"content": "# Set Variable Cloud Scaling Percentage## SolutionCloud scaling percentage is used to determine the percentage to add or reduce a compute count for a given cloud.For example: A platform scale configuration has current=10 and has 4 clouds added.* If one of the cloud scale percentages is updated to 120%, then the target cloud will be scaled-up to 12 total computes.* If another cloud scale percentage is updated to 80%, then the target cloud will be scaled-down to 8 total computes.This allows a variable number of computes per cloud for a given platform.* Cloud scale can be edited per platform on the transition platform page.* By default, all cloud scale percentages are set to 100.* The Cloud scale field allows any positive integer value.## Edit Cloud Scale Percentage1. In the transition phase, select the environment and then the platform.2. From the list of clouds for the given platform, click the list icon on top, right side of the cloud and select **Cloud scale percentage.**3. To set the scaling percent of cloud, enter a positive integer value."
},
{
"title": "Share Component",
"url": "/user/design/share-component.html",
"content": "# Share ComponentThe _share_ [component](./components.html) is available for all platforms."
},
{
"title": "Shutdown a Cloud",
"url": "/user/account/shutdown-cloud.html",
"content": "# Shutdown a Cloud## SolutionCAUTION: **Shutdown** removes the deployed instance from a particular cloud. Be careful when using this feature.1. In the Transition phase, go to Environment.2. On the right side, select the Platform where you want to disable the cloud.3. Under Cloud Status, click **Shutdown.**4. Complete the previous steps for all of the Platforms that are to be shut down.5. Go back to **View** environment and **Commit & Deploy** the pending release."
},
{
"title": "SSH Keys Component",
"url": "/user/design/sshkeys-component.html",
"content": "# SSH Keys ComponentThe _sshkeys_ [component](./components.html) is available for all platforms."
},
{
"title": "Storage Component",
"url": "/user/design/storage-component.html",
"content": "# Storage ComponentThe _storage_ [component](./components.html) is of core importance and part of most platforms since itdefines any attached "block" storage accessible to the compute.Once you define your storage configuration using the below information, you would continue the persistent storagesetup using the [volume compnent](./volume-component.html).## ConfigurationBesides the global configuration available for any component such as _Name_, you can configure thefollowing attributes:_Size_: is used to specify the total space allocated represented in GB. Note: specific RAID configurationswill result in smaller usable volume sizes._Slice Count_: is used to specify how many sections of storage to slice the size into. If you are wanting twolinux volumes named "/mystuff" and "/archive" you would specify 2. Please refer to the notes below for RAID levels._Storage Type_: is used to specify the speed of disk. Typically, Standard is for spinning disks and IOPS isfor Premium SSD's. Please refer to the Types offered by your cloud provider and how they are mapped to theseoptions.### Miscellaneous Notes:RAID levels and Slice Count reference:* Raid 0 (Stripe)* Raid 1 (Mirror) 2 Drives* Raid 5 (Drives with Parity) Minimum 3 Drives* Raid 6 (Drives with Double Parity) Minimum 4 Drives* Raid 10 (Mirror+Stripe) or 0+1 (Stripe+Mirror) Minimum 4 Drives* Raid 50 (Parity+Stripe) Minimum 6 Drives* Raid 60 (Double Parity+Stripe) Minimum 8 Drives"
},
{
"title": "Take a Node out of Traffic (ECV Disable)",
"url": "/user/operation/take-node-out-of-traffic-ecv-disable.html",
"content": "# Take a Node out of Traffice (ECV Disable)In general, the LB (load balancer) determines the health of an individual instance (back-end server) by monitoring a health URL that is configured for the app. If the health URL is an http monitor and it returns an HTTP 200 OK response, then the LB considers the instance to be healthy and traffic gets routed to the instance. Otherwise, no traffic goes to this instance. The standard ECV check in Tomcat is "/" which needs to be changed as prescribed in Configure ECV Check URL on OneOpsa. (See the diagram below.)# Solution![ECV check](/assets/docs/local/images/ecv-check.png)## Details1. Add the *On-Demand* attachment ecv-enable to Tomcat or the artifact.2. The on-demand file type is any custom file action that is available as an action on the corresponding operations page of the instance.3. When you add an attachment, select **On Demand** for Run on Event.4. Copy the contents on the Execute Command section as appropriate.5. Similarly add the corresponding **On-Demand** attachment, ecvDisable.6. If it is necessary to disable the ECV, select Tomcat from Operations and select **disableECV.**7. To enable ECV and traffic, enable Tomcat. ![ECV disable toncatup](/assets/docs/local/images/ecv-disable-tomcatup.png)8. To remotely debug, there is a debug action in Operations by default. To start the Tomcat in debug mode, select **debug**. ![ECV disable debug tomcat](/assets/docs/local/images/ecv-disable-debug-tomcat.png)9. The default jpda port is 8000. To open the ports, refer to Add or Delete a Security Group to Open or Close an Additional Port.10. If you are using eclipse, conf. looks like this: ![ECV disable eclipse debug conf](/assets/docs/local/images/ecv-disable-eclipse-debug-conf.png)"
},
{
"title": "Telegraf Component",
"url": "/user/design/telegraf-component.html",
"content": "# Telegraf ComponentThe _telegraf_ [component](./components.html) is available for all platforms."
},
{
"title": "User Testing and Debugging",
"url": "/user/general/testing.html",
"content": "# User Testing and Debugging## Find the URL for Your AppThe default URL for your webapp that is deployed in OneOps is made up of the deployment details.| Part | Source |----------------|--------------------| perf-test | platform name| d1 | environment name| tomtest | assembly name| testing | organization name| OneOps QA Server | zone in Infoblox for the cloudMore details are available in [Computes in Operation](../operation/compute.html)## View the Number of VMs in UseTo see what you are using in OneOps, follow these steps:1. To go to the 'organization summary' page, click your organization name, next to your username at the top right.2. Click the **reports** tab which is part way down the page above notifications. The reports show the compute resources that you are using. You can spin the data by cloud and by assembly name.## Error Message Meanings| No | Error | Desc | Symptom | Fix|----|-------|------|---------|----| 1. | Rsync error | Deployment fails with rsync connection error in deployment log (Assemblies -> Transition -> Env -> View Deployment) | cmd error: rsync: connection unexpectedly closed (0 bytes received so far) [sender] cmd error: rsync error: unexplained error (code 255) at io.c(600) [sender=3.0.6] | Check the compute health status on the Operations page (Operations -> Env -> platform ) and make sure that the compute is reachable. If not ,Reboot the compute from the compute summary page (Operations -> Env -> platform -> Compute -> Summary tab & Actions). | Icon |Select 'Auto Repair' when creating an env, whichenables automatic repair of component instances based on monitors with enabled heartbeats and metrics you define with Unhealthy event triggers| 2. | RequestEntityTooLarge | Deployment fails with RequestEntityTooLarge in deployment log (Assemblies -> Transition -> Env -> View Deployment) | FATAL: Excon::Errors::RequestEntityTooLarge: Expected([200, 202]) Actual(413 Request Entity Too Large | Your organization tenant hasreached the quota limit for compute nodes. Create a JIRA ticket for IAAS team. IconCleanup any unwanted vms/env to make more room for new computes| 3. | FATAL: Invalid imageRef provided | Deployment fails with FATAL: Invalid imageRef provided in deployment log. | FATAL: Invalid imageRef provided | This scenario can occur when: (a) the user selected bad/not-supported OS Type (b) the user selected correct OS Type, but it's missing mapping in the cloud Solution is to choose the OS that is supported on the selected cloud or leave the OS to "Default to cloud"| 4. | ERROR: error in lvcreate | Deployment fails in 'Volume' component with following error message in failed deployment log: ERROR: error in lvcreate | ERROR: error in lvcreate | This scenario can occur when the value to "Size" attribute in 'Volume' component exceeds than the permitted values.The permitted values varies from compute flavor to compute flavor. The permitted or maximum size can be viewed here:Standard VM Size DefinitionRefer to column "Resized (GB) ".| 5. | No space left on deviceENOSPC: No space left on device | Deployment fails in 'compute' component with following error message in failed deployment log: No space left on device ENOSPC: No space left on device | ENOSPC: No space left on device | Potential cause for this scenario is that the affected compute does not have enough space under '/app' due to which deployment is failing. 
SSH to the affected compute and (a) check the memory usage using "df -hP /app" if there is zero space left over or very minimal space left over, then clear the unwanted files or logs to increase the free space.(b) if significant space is available in /app ,then check for the inodes using 'df -hi' command| 6. | ERROR:Net::HTTPServerException: 404 "Not Found" | Deployment fails in 'artifact' component with following error message in failed deployment log: error: Net::HTTPServerException: 404 "Not Found"OrNet::HTTPServerException: 403 "Forbidden | error: Net::HTTPServerException: 404 "Not Found" | The potential cause for this scenario :(a) artifact is absent in nexus(b) repository is mentioned as pangaea_snapshot instead of pangaes_releases or vice-versa.| 7. | FATAL: RuntimeError: ruby_block Install plugin: elasticsearch-kopf had an error: RuntimeError: [!] Failed to install plugin | Deployment fails in 'Elastic-search' component with following error message in failed deployment log: Exception in thread "main" java.lang.UnsupportedClassVersionError: org/elasticsearch/plugins/PluginManager : Unsupported major.minor version 51.0 | Exception in thread "main" java.lang.UnsupportedClassVersionError: org/elasticsearch/plugins/PluginManager : Unsupported major.minor version 51.0 | To remediate this scenario, ensure that Java-7 is installed on the computes, where Elastic-search is being installed. Elastice-Search needs Java-7. How to update Java version: How to update JDK/JRE version on OneOps ?| 8. | message=>String length exceeds maximum [name, 127]", "severity"=>"ERROR", "errorcode"=>1106 | Deployment fails with 'failed without any specified faults' in lb component. | server exists: {"message"=>"String length exceeds maximum [name, 127]", "severity"=>"ERROR", "errorcode"=>1106} | Potential cause for this scenario is that the provided 'environment' name is too long. It should not exceed 127 chars,as there is a limitation at netscalar level for LB creation.LB names should not exceed 127 chars.| 9. | Compute Action 'status' fails. | Compute Action 'status' results in below error: `{"code":404,"errorCode":4003,"message":"Given ci 363128 already has active ops procedure."}` | `{"code":404,"errorCode":4003,"message":"Given ci 363128 already has active ops procedure."}` | In case of failure of any action/procedure with error message indicating that there is an 'active' cid present, then that inprogress action/procedure needs to be completed/canceled. Until then,no other action can be worked upon same component.| 10.| LB is not working, but 'curl' to individual IPs work. | 'curl' to FQDN fails, whereas to Individual IP-Address works. | 'curl' to FQDN fails, whereas to Individual IP-Address works | Potential causes: check whether ECV is correctCheck whether the ECV check has the port - 80 or 8080 and also the listener port should be the same as ECV check 80 or 8080.| 11. | Not all services available for platform: xxxxxxxx, the missing services:[Wmt_oracle] | Deployment fails with error message : >>>>> Not all services available for platform: xxxxxxxxx, the missing services: | >>>>> Not all services available for platform: xxxxxxxxx, the missing services: | Potential Cause: the cloud on which the deployment has failed does not contain the listed service.In this case it is Oracle.# Debugging## Troubleshooting LogsTransition > environment > deployment tab1. From the Dashboard, select an assembly.2. Click Transition. The Environments page displays3. Select an environment.4. Click the deployment tab.5. 
Select a deployment. A list of deployment details displays an update to each component in the Deployment Details panel.6. Select an update.7. Select the log tab. The colored tabs display the conditions of deployment and allow you to sort the log information.>The logs display in chronological order, newest to oldest and are stated in Chef coding language.![Troubleshoot Logs](/assets/docs/local/images/troubleshoot-logs.png)"
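For the disk-space check in row 5 of the table above, a minimal sketch of the commands run on the affected compute:
```
# Check free space under /app
df -hP /app
# If space looks fine, check inode usage instead
df -hi /app
```"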
},
{
"title": "Administrator Testing and Debugging",
"url": "/admin/operate/testing.html",
"content": "# Administrator Testing and DebuggingBuild a *test* environment where you can test any new OneOps code changes and also validate any or new *pack* changesyour organization might do.# Debugging## UI does not come up Most likely the rails server didn't start properly (used in vagrant image and aws image), try to ssh to your vm and do```sudo service display start# check the logstail -f /opt/oneops/log/rails.log```if using apache* Make sure the **apache** is up if running *display* in apache.* Run `nc -v host port` to see if the ports are not blocked. Do this for any of the services.## Deployments failing* Check if all consumers can connect to messaging bus.* All the OneOps webapps (adapter,transistor) expose health check /rest//ecv/status.. so check if all web contexts are up.## Inductor not coming up* Make sure the *auth-key* is same which you used for setting up the cloud.# Deployment failsCheck the github commit log for any *cookbook* fixes which were done. Refresh the cookbooks. ## Update cookbooks1. Update [cookbooks](https://github.com/oneops/circuit-oneops-1/tree/master/components/cookbooks) to latest and greatest.```cd /home/oneops/build/circuit-oneops-1git remote -v # if its like git@oogit:/oneops/circuit-oneops-1 (fetch), replace with httpssudo git remote set-url origin https://github.com/oneops/circuit-oneops-1.git # Get the latestsudo git pull#If there are merge conflicts, resolve them or want to overwrite with the latest#This *replaces* all the cookbooks used by inductor to the latest in sync with githubsudo git reset --hard origin/master## refer ls -la /opt/oneops/inductor#For *shared cookbooks*, we can do the samecd /home/oneops/build/oneops-adminsudo git pull## If conflicts and want to overwritesudo git reset --hard origin/mastersudo cp -r /home/oneops/build/oneops-admin/lib/shared /opt/oneops/inductor```## Inductor does not start throws Bad password```Caused by: java.lang.Throwable: com.oneops.amq.plugins.CmsAuthException: Bad password for user: /public/oneops/clouds:rackspace-dfw# Check inductor propertiescat ///opt/oneops/inductor/clouds-enabled/public.oneops.clouds.aws/conf/inductor.properties|grep auth# Note amq.authkey = awssecretkey# The value of authkey should be same as what was loaded during metadata changerefer https://github.com/oneops/circuit-oneops-1/tree/master/clouds```## Inductor does not start throws SSL connect error```Failed to connect to [ssl://localhost:61617?keepAlive=true] after: 1 attempt(s)Looks like a cert error for java: Cause: The JMS connection has failed: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target# Check inductor client ts file.# As part of the latest Vagrant scripts inductor is created in /opt/oneops/inductor# and proper cert is copied there, so if you bring the fresh VM,# don't do inductor create - just do "inductor add"# But if you do please docp /opt/activemq/conf/client.ts /opt/oneops/inductor/libcd /opt/oneops/inductor/inductor start```## Cookbook does not exist .```#016-01-25 20:53:54,381 INFO ProcessRunner:65 2822:52176 - cmd out: [2016-01-25T20:53:54+00:00] DEBUG: Re-raising #exception: Chef::Exceptions::CookbookNotFound# This is mostly caused by missing symlink for the cookbooks in inductor ;# Caused by manually deleting the inductor homecd /opt/oneops/inductor ; ln -s /home/oneops/build/circuit-oneops-1 .```## Compute Provisioning fails Image does not exist```# The compute 
service metadata has image id which has been deleted.# Try correcting the image id in compute cloud service# Run the deployment again.```## OS step fails```# cmd out: service[named]: unable to locate the init.d script!# This is fixed with latest code```Refresh cookbooks following [this](#update-cookbooks)"
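For the port checks mentioned above, a minimal sketch using `nc`; the host name is illustrative, and 61617 is the ActiveMQ SSL port from the error shown earlier:
```
# Verify that the messaging bus port is reachable from the consumer host
nc -v oneops-host 61617
```"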
},
{
"title": "OneOps Training",
"url": "/overview/training.html",
"content": "The OneOps team provides training material and uses it regularly for internaltraining classes at Walmart. The material includes slides with speaker notesand videos.All slides are [available online](/oneops-training/) and with[the source and usage instructions in GitHub](http://github.com/oneops/oneops-training).The currently available resources are:* [OneOps on the Walmart Technology Community YouTube channel](https://www.youtube.com/playlist?list=PLjDnb0653uBDMBpTBoLVkVtGIDO-P8e3U)* [OneOps User Training: Level 1 - Beginner](#user-1)* [OneOps User Training: Level 2 - Advanced](#user-2)* [OneOps User Training: Level 2 - Advanced Examples](#user-3)Upcoming material includes* OneOps Administrator Training* OneOps Developer Training## OneOps User Training: Level 1 - BeginnerThis OneOps User training aims to cover the following topics:- understanding of cloud application characteristics- role of OneOps as a PaaS and cloud application lifecycle management tool- knowledge of OneOps terminology- ability to create a cloud application in OneOps- and manage it in multiple deployments__Check out the [slides.](../oneops-training/user-1-beginner.html)__All modules are available as recorded videos: Training Overview Motivation Introducing OneOps Getting Started a.k.a Design Moving a.k.a. Transition In Business a.k.a. Operation At Home in OneOps and Conclusion## OneOps User Training: Level 2 - AdvancedThis OneOps User training builds on the basis from the beginner class andcovers the following topics:- assembly design with multiple platforms- more component knowledge including network aspects- in-depth understanding of cloud usage and configuration- scaling operations and release flows- integrating with OneOps__Check out the [slides.](../oneops-training/user-2-advanced.html)__## OneOps User Training: Level 3 - Advanced ExamplesThis OneOps User training is currently in development with modules for specifictopics added on demand:- secrets management__Check out the [slides.](../oneops-training/user-3-advanced-examples.html)__"
},
{
"title": "Transition Best Practices",
"url": "/user/transition/transition-best-practices.html",
"content": "# Transition Best PracticesIn the Transition phase, use the following best practices:* Ensure that the design is deployed in proper clouds* Check the organization report for available capacity* Keep Monitoring turned on* When creating a new environment, DO NOT Use Debug Mode. This is strictly to be used by Ops for debugging purposes.* Configure ECV to check the LB component* If you don't want to accidentally override design values on pull, keep variables/attributes locked in the Transition phase.* Review monitors and their thresholds, to add or edit more alerts to be suitable for your application* Add CEN to the individual monitors in your production environment* Enable NOC alerts for your production environment* Add your own CNAME to give to your customers* Keep watch on your environment and compute usage. If an environment is not in use,disable the environment.* Tomcat Log Files: The location of log and access log should be `/log/apache-tomcat`.>* The computes *can not* be resized once provisioned. So, if you change the size to be different from the design, lock the attribute.* The volume "app" size by default, is 10G (in the case of Tomcat). If needed, adjust and lock the size.* Don't rely on the storage in the volume. It is ephemeral and only remains until the compute is there."
},
{
"title": "Transition",
"url": "/user/transition/transition.html",
"content": "# TransitionAfter you create a new Environment, the Environment detail page (Manifest page) displays. This is where you see the realization of your Design into a given Environment that is based on its SLA requirements.All configuration values for Platforms and Components are pulled from the Assembly Design. You may also see additional required/optional Components in some Platforms based on Env properties.## Pull DesignEvery time a new change is committed to the Assembly Design, you see the notification on the Transition summary/Manifest page. You have the option to promote those changes into this Manifest.## Manifest ConfigurationOn the Manifest page, you can make configuration changes that are specific to your Environment. To mark these as permanent, click the lock icon next to the attribute value. Otherwise these get overwritten the next time you perform a Design pull."
},
{
"title": "Update or Upgrade New OneOps Code",
"url": "/user/operation/update-upgrade-new-oneops-code.html",
"content": "# Update or Upgrade New OneOps Code## SolutionThe two ways to roll out new code are:* **Update:** Rollout the new version of your apps in multiple batches with each batch containing only a certain percentage number of the total number of nodes.* **Upgrade:** Spin up a new parallel hierarchy of nodes and then switch the traffic to this new cluster. Your browser does not implement HTML5 video.## Roll Out a Code UpdateRoll out the new version of your apps in multiple batches with each batch containing only a certain percentage number of the total number of nodes. This is done by following these steps:1. Go to the platform to be updated within an environment in transition view and change the % Deploy value in scaling configuration to 25 for this example. ![Scaling Configuration](/assets/docs/local/images/scaling-configuration.png)2. Change the version number environment variable of the platform to the latest version you want to update to.3. Commit and deploy. You will notice that the deployment plan has only 25% of the nodes in each data center (edc/ndc). Go ahead and complete the deployment.4. At this stage, your 25 nodes in each data center (assuming 100 hundred nodes in each data center) are upgraded with the latest version.5. Change the % Deploy value to 50% and then save, commit and deploy. Now you have the next batch of (25%) of your nodes upgraded to the latest version.6. Repeat this until you have completed 100%.7. During this upgrade process, 75% nodes are always available to serve the traffic.>>* You must have a proper next version (not same snapshot version) of your application(s) for doing this update process.>* Between percentage change from 25% to 100%, there should not be any update in design/transition so that this is the only change propagated to the specified percentage of computes.>* The previous steps can also done by DC.## Roll Out a Code UpgradeCreate a new parallel hierarchy of nodes and then switch the traffic to this new cluster. Here are the steps:1. Go to the design of your assembly.2. Select your Tomcat platform (choose your platform) and click **edit**.3. Increment its version and save. Here you are telling OneOps that you have a new major version of your product.4. Go to your environment in Transition and pull the design. OneOps is going to add a new platform to your environment.5. Select the redundancy (redundant or singleton) of your new platform.6. There will be two platforms in your environment. One with old version and one with new version.7. Click the new platform and change the version variable to the new release maven version of the application.8. Commit and deploy. This creates a brand new cluster of nodes for your application but it does not yet change the dns entries, etc.9. On the environment page, select the drop-down for the new platform and select **activate.** This changes the DNS entries and all your traffic starts to hit the new nodes.10. You can choose to keep the old nodes and platform for a few days in case you want to switch back to it (by activating it) or you can get rid of it by clicking the terminate on the drop-down menu.![Rollout Summary](/assets/docs/local/images/rollout-summary.png)"
},
{
"title": "Upgrade an Application Version in an Environment",
"url": "/user/design/upgrade-application-version-in-environment.html",
"content": "# Upgrade an Application Version in an EnvironmentTo upgrade an application version in an environment, follow these steps:1. Log into the OneOps environment.2. Go to your Assembly.3. Select Design.4. Select the Platform.5. Update the version: 1. Version from variable: 1. Click the **variable** tab. 2. Select the variable. 3. Edit the variable. 4. Save. 5. Commit and deploy. 2. Version hard coded in the component: 1. Select the component from the right navigation. 2. Click **Edit.** 3. Update the value. 4. Save. 5. Commit and deploy.>If the change is for all the environments it is better to do that in the design phase. If the change is environment-specific, update in transition phase and lock it.## See AlsoVariables Override Prevention"
},
{
"title": "Upgrading OneOps",
"url": "/admin/operate/upgrading-one-ops.html",
"content": "# Upgrading OneOpsMost of the times, **upgrading** OneOps is *just* deploying a latest build of source code in production. This typically involves* Building OneOps deployable refer [build](https://github.com/OneOps/build-wf).* Changing **version** of OneOps which will result in release comprising of almost all platforms (where ever variable is referred)* We do **Zero Downtime Deployments** by having 1. Setting up suitable cloud priority on deployments. 2. Disable *ECV* before deployment and then **ON** after the code is deployed.This is simplified by having OneOps (management) managing OneOps."
},
{
"title": "User Component",
"url": "/user/design/user-component.html",
"content": "# User ComponentThe _user_ [component](./components.html) controls operating system user accounts and their creation on the[compute component](./compute-component.html) of the same platform and is typically an optional component. Adding auser for example allows you to connect to the [compute in operation](../operation/compute.html) via ssh.## Attributes_Username__Description__Home Directory__Home Directory Mode__Max Open Files__Login shell__System Account__Enable Sudo Access__Secondary Groups__Authorized Keys_: Add one or multiple ssh keys that are authorized for a remote connection. You can get yourpublic key with e.g. `cat ~/.ssh/id_rsa.pub`._Password (currently Windows only)_"
},
{
"title": "User Interface",
"url": "/user/general/user-interface.html",
"content": "# User InterfaceOneOps provides a very powerful user interface with numerous features to enable the user to perform their tasksefficiently:- [Overview](#overview)- [Navigation Bar](#navigation-bar)- [Phases Wizard](#phases-wizard)- [Lists and Bulk Actions](#lists-and-bulk-actions)- [Keyboard Shortcuts](#keyboard-shortcuts)- [Short URLs](#short-urls)- [Search](./search.html)- [Favorites](./favorites.html)> Explore the various features to save time in your usage of OneOps.## OverviewThe user interface provides simple top bar to access some features on the top:The top bar includes the following control and segments:- [Navigation bar](#navigation-bar) toggle- OneOps logo- Organization dialog with items to search, manage and navigate to organizations- Username with link to profile- [Search](./search.html) button- Favorites dialog with items to manage and navigate to [favorites](./favorites.html)- Feedback link- Sign out buttonClicking on the the navigation bar toogle replaces the top bar with the more powerful navigation bar resulting indisplay similar to the following example.The black horizontal bar displays a breadcrumb navigation to the current entity and starts at the current organization.The right side of the same bar contains link to the three assembly lifecycle phases - design, transition and operations.## Navigation BarThe main navigation bar visible on the left includes the following features (top to bottom, left to right)- OneOps logo- Water drop icon to change color scheme used for the navigation bar- Pin icon to prevent automatic collapse of navigation bar and keep it expanded- X icon to close the navigation bar- Display of current organization and search button to change organization- Catalogs access link- Clouds access link- Assemblies access link- [Search](./search.html) access link- Settings link to access the current organization's profile- Navigation aid for the current assembly- Optional, configurable links to support, file issues, provide feedback, documentation and release notes- Username with link to profile- [Search](./search.html) button- Favorites dialog with items to manage and navigate to [favorites](./favorites.html)- Feedback link- Sign out buttonThe navigation aid for the current assembly includes links to the design, transition and operate phases and theapplicable entities. Some items can be expanded and contracted.The navigation bar adapts to the current user privileges and the current context.## Phases WizardThe phases wizard is displayed above the main content area and contains links to actions related to Account setup andthe phases Design, Transition and Operate. The links are all context sensitive to the current organization andassembly. The current phase is highlighted in green. It can be disabled with the close button on the right.## Lists and Bulk ActionsLists consist of a powerful header and the line items. 
The header on lists numerous features:- _Sort_ button, optionally with selection of field to use for sorting- Filter input with display of the number of records in the list- Select all and unselect all boxes- _New_ button to create new record- _Action_ buttonIt items in the list itself display data and include a check box for bulk operation actions on the left anditem-specific action buttons on the right.Bulk actions can be performed by checking one or multiple of the select check boxes on the left of the records and thenpressing the _Action_ button in the header and selecting the desired action in the drop down.The available actions vary based on the records in the list and include actions such as save, edit, mark as favorite,reboot and many others.## Keyboard ShortcutsOneOps supports a number of keyboard shortcuts that enable an advanced user to navigate to specific components and otherentities within the context of the current organization. These keyboard shortcuts trigger the appearance of an inputcontrol in a pop up dialog. You can type multiple values to narrow down the returned data to the desired results. Up to20 results are displayed. Clicking on a result allows the user to navigate to the entity.`ALT-o`: navigate to a specific _organization_.`ALT-d`: navigate to an entity in the _design_ phase.`ALT-t`: navigate to an entity in the _transition_ phase.`ALT-p`: navigate to an entity in the _operate_ phase.`ALT-g`: _go_ to an entity, adding a stand-alone `d`, `t` or `o` character to the query narrows the results to the design,transition or operation phases.## Short URLsOneOps supports some short URLs. A user can type these URLs faster and navigate to the entity with a known identifier.Deployment:* UI access at `/r/deployment/` or shorter `/r/d/`* JSON at `/l/deployment/` or shorter `/l/d/`Releases:* UI access at `/r/release/` or shorter `/r/r/`* JSON at `/l/release/` or shorter `/l/r/`Procedure:* UI access at `/r/procedure/` or shorter `/r/p/`* JSON at `/l/procedure/` or shorter `/l/p/`Instance:* UI access at `/r/instances/` or shorter `/r/i/`* JSON at `/l/i/`"
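As a sketch of the short-URL pattern, appending a known identifier to the paths above; the host name and deployment id are illustrative:
```
# Open a deployment in the UI
https://oneops.example.com/r/d/12345
# Fetch the same deployment as JSON
https://oneops.example.com/l/d/12345
```"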
},
{
"title": "Variables",
"url": "/user/design/variables.html",
"content": "# VariablesUse Variables to externalize configuration attributes values which may change for an application at cloud , global( shared across platforms), or platform specific. You can create secure variables.The three areas to store variables are:* **Global:** A Global Variable is an Assembly-wide, named value. You can use Global Variables within a Components attribute values in this form: `$ONEOPS{variable-name}`. OneOps then evaluates the actual attribute values during deployment.* **Cloud:** Defined for a particular Cloud* **Local:** Set in a particular Platform. For example you may have a Tomcat Platform and in it, set a variable like version to 2.2.2 for use in the platformVariables are put to use when you have an attribute, as you saw above in the Tomcat example. For another example, in the Download Component, there is an attribute called Source URL. This defines where to go to download the file, for example a JDK to download. It is possible to hard code a value in this circumstance, but variables enable a more flexible approach.Variable reference example syntax:```$OO_CLOUD{cloudvarname1}$OO_GLOBAL{globalvarname2}$OO_LOCAL{localvarname3}```"
},
{
"title": "View a Reports Summary",
"url": "/user/account/view-a-reports-summary.html",
"content": "# View a Reports SummaryThe Reports Summary page displays the overall reports of an environment. Reports can be generated on an assembly or cloud basis. In each report, the ge can get the reports on the core, memory abd instance basis.Reports can be viewed as a graph or tabular form.There are three types of reports:* Core: It gives the overall reports on the basis of core used* Memory: It gives the overall reports on the basis on memory* Instance: It gives the overall reports on the basis on instancesTo view the Reports Summary for an organization, follow these steps:1. Select the Organization name on the top left side.2. Select the **Report** tab."
},
{
"title": "View, Add, or Edit Environment Profiles",
"url": "/user/account/view-add-edit-environment-profiles.html",
"content": "# View, Add, or Edit Environment Profiles## SolutionEnvironment profiles can be viewed by users with any access level within an organization. However they can only be managed (created, edited or deleted) by organization admin users. To view or manage organization environment profiles, go to the organization edit page and then select the **environments** tab.Because environment profiles are abstract environment templates, their creation is very similar to assembly environments. Essentially, they have the same attributes as concrete assembly environments, including the settings for specifying utilized clouds.CAUTION: Be cautious while adding to or editing an environment profile. Even though the access is available to all admin users, only NOC and DevOps teams are supposed to manage these profiles."