Before following these instructions, make sure you have completed the steps in Getting Started. Follow the instructions for the time series database of your choice, then perform the steps listed under Do for All Pipelines.
NOTE: Splunk server install instructions were tested on RHEL 8.3.
NOTE: Currently, we are not using a Splunk container; we are running Splunk from the manual installation detailed below.
- Download a trial of Splunk
- By default it will install to `/opt/splunk`. Run `/opt/splunk/bin/splunk start` (I suggest you do this in tmux or another terminal multiplexer)
- Run `vim /opt/splunk/etc/apps/splunk_httpinput/default/inputs.conf` and make sure your config looks like this:

```
[http]
disabled=0
port=8088
enableSSL=0
dedicatedIoThreads=2
maxThreads = 0
maxSockets = 0
useDeploymentServer=0
# ssl settings are similar to mgmt server
sslVersions=*,-ssl2
allowSslCompression=true
allowSslRenegotiation=true
ackIdleCleanup=true
```
- Run `firewall-cmd --permanent --zone public --add-port={8000/tcp,8088/tcp} && firewall-cmd --reload`
- Make Splunk start on boot with `/opt/splunk/bin/splunk enable boot-start`
- Browse to your Splunk management dashboard at `<IP>:8000`
- Go to Settings -> Indexes
- In the top right of the screen click "New Index"
- Create a name, set Index Data Type to Metrics, and Timestamp Resolution to Seconds
- Browse to your Splunk management dashboard at `<IP>:8000`
- Go to Settings -> Data Inputs
- On the following screen click "Add new" next to HTTP Event Collector
- Select any name you like for the collector and click "Next" at the top of the screen
- Select "Automatic" for Source type and for Index select the metrics index you created previously
- Click Review at the top, make sure everything is correct, and then click "Submit" (again at the top)
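Optionally, you can sanity-check the collector before wiring up the pipeline. This is a minimal sketch, assuming the default HEC port of 8088, SSL disabled as in the `inputs.conf` above, and a placeholder token and metric name; substitute your own values:

```bash
# Send one test metric to the HTTP Event Collector; a successful request
# should return a "Success" response.
curl http://<IP>:8088/services/collector \
  -H "Authorization: Splunk <HEC token value>" \
  -d '{"event": "metric", "source": "hec-test", "fields": {"metric_name": "test.value", "_value": 42}}'
```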
At this point, you have done everything you need on the Splunk side to get everything up and running. Next you need to finish configuring the Docker pipeline. Proceed to Do for All Pipelines.
For Elasticsearch, there are some external settings you must configure first. The instructions below are written for Linux and were tested on Ubuntu 20.04.3 LTS.
- Set `vm.max_map_count` to at least 262144. Check with `grep vm.max_map_count /etc/sysctl.conf`. If you do not see `vm.max_map_count=262144`, edit the file and add that line.
- You can apply the setting to a live system with `sysctl -w vm.max_map_count=262144`
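If you prefer a one-liner, a sketch like the following persists the setting and reloads it in one step (assumes sudo access and the stock `/etc/sysctl.conf` location):

```bash
# Append the setting to sysctl.conf, then re-read the file.
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
```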
- Depending on whether this is a lab or production system, there are several other settings you should configure to tune Elasticsearch's performance according to your system. See: https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
Next you need to finish configuring the docker pipeline. Proceed to Do for All Pipelines
These instructions apply to all pipelines
See Get Docker for installation details. You will need to be able to run Docker commands as a user rather than root. See these instructions. On most Linux distributions this consists of running:

```bash
sudo groupadd docker
sudo usermod -aG docker $USER
```

and then logging out and back in. Run `docker run hello-world` as the user in question to test your privileges.
You will also need to install docker-compose version 2; the code will not work with version 1. Instructions for a standalone installation of docker compose version 2 are here. The following versions of docker-compose have been tested:
- 2.3.3
- 2.6.0
- 2.17.1
- 2.20.2
Note:
- Standalone docker compose installation is required because backward compatibility for the `docker-compose` command is needed to run `compose.sh`.
- Use a docker compose version greater than 2.3.3 for a faster container setup.
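Before proceeding, it may be worth confirming which binary and version are on your PATH:

```bash
# Should report a 2.x version; a v1 report means the legacy Python
# docker-compose is installed, which the code does not support.
docker-compose version
```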
NOTE: These instructions are performed on whatever host you would like to use to connect to all of your iDRACs.
- Run `git clone https://github.com/dell/iDRAC-Telemetry-Reference-Tools`
- (For Splunk) Set the following environment variables as per the "HTTP Event Collector" configuration in Splunk:

```
SPLUNK_HEC_KEY=<Token value>
SPLUNK_HEC_URL=http://<Splunk hostname or ip>:<HTTP Port Number>
SPLUNK_HEC_INDEX=<Index name>
```
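For example, exports along these lines in the shell where you will run `compose.sh` should work (hypothetical values; substitute your own token, host, port, and index name):

```bash
export SPLUNK_HEC_KEY=0aa111bb-2222-3333-cccc-444444dddddd  # hypothetical token
export SPLUNK_HEC_URL=http://splunk.example.com:8088
export SPLUNK_HEC_INDEX=telemetry_metrics
```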
- Next, run `bash compose.sh`. The options you use will depend on what you want to do. There are five different "pumps" for the five different databases: `--influx-pump`, `--prometheus-pump`, `--splunk-pump`, `--elk-pump`, and `--timescale-pump`. These pumps are responsible for feeding the data from the pipeline into the database of your choice. The other option you may want to add is a command to build the time series database of your choosing; if you already have an external instance of the database running, this won't be necessary. These options are: `--influx-test-db`, `--prometheus-test-db`, `--elk-test-db`, and `--timescale-test-db`. We have not currently built out a Splunk option, so the command to build a data pipeline for an external Splunk instance would be `bash compose.sh --splunk-pump start`. Running this command will trigger a build of all the necessary containers in the pipeline as specified in the Docker compose file.
- WARNING: There is a known bug where docker compose throws erroneous errors. These can be safely ignored. See #46.
- Running influx with grafana is a two-step process: generate the influx and grafana tokens, then start `--influx-test-db`:
  a. `./docker-compose-files/compose.sh setup --influx-test-db`
  b. `./docker-compose-files/compose.sh start --influx-test-db`
- On your system, you will need to allow ports 8161 and 8080 through your firewall.
- If you are running Elasticsearch, you will also need to open port 5601 for Kibana if you chose to run compose with the `--elk-test-db` option.
- If you ran compose with the `--influx-test-db`, `--prometheus-test-db`, or `--timescale-test-db` options, you will need to open port 3000 for Grafana.
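As a concrete sketch for a firewalld-based host (like the RHEL system used in the Splunk steps above), something like this opens the common ports; trim the list to match the options you chose:

```bash
# 8161 and 8080 per the step above; add 3000 (Grafana) and/or 5601 (Kibana)
# only if your chosen pipeline needs them.
firewall-cmd --permanent --zone=public --add-port={8161/tcp,8080/tcp,3000/tcp,5601/tcp}
firewall-cmd --reload
```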
- After you run this, the next step is to specify the iDRACs of the machines you are using. There is a web GUI called `configui`, which by default runs on port 8080. Browse to it and click "Add New Service". Alternatively, you can upload a CSV file with three columns and no headers, with each line in the format `host, username, password` for each iDRAC.
- Below is an example CSV file (hypothetical hosts with the iDRAC factory-default credentials, shown for illustration):
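```
192.168.1.120,root,calvin
192.168.1.121,root,calvin
192.168.1.122,root,calvin
```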
- Refresh the page and you should see your host appear in the list.
- At this point the pipeline should be up and running. You can run `docker ps -a` to make sure all the containers in the pipeline are running. To confirm the final stage of the pump is working, run `docker logs <pump_name>` and you should see that the pump is forwarding events.
- For additional troubleshooting steps see DEBUGGING.md
This is not required. It only demonstrates a possible Elasticsearch workflow.
- To configure the data source and visualization dashboards, access the Kibana homepage in the browser (`http://<YOUR_IP>:5601`)
- Select Stack Management from the Management section in the tools menu. Now go to the Data -> Index Management tab and find the index named "poweredge_telemetry_metrics". In the index pattern tab, create an index pattern called `poweredge_telemetry_metrics*`.
- Next, browse to the Discover tab to view the ingested data. You can find fields of interest and view the data in tabular form with your chosen fields.
- Next we will create charts which can then be placed in a dashboard; for example, configure an aggregation metric on CRCErrorCount.
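If you would rather confirm the index from the command line first, a quick check is possible against Elasticsearch's REST API, assuming it is reachable on its default port 9200:

```bash
# List indices matching the telemetry index name; ?v adds column headers.
curl "http://<YOUR_IP>:9200/_cat/indices/poweredge_telemetry_metrics*?v"
```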
- Browse to influx (`http://<YOUR_IP>:8086`) and log in using admin / DOCKER_INFLUXDB_INIT_PASSWORD
- Load data
- View telemetry metrics from the my-org-bucket database
- Browse to Grafana (`http://<YOUR_IP>:3000`)
- Add an InfluxDB datasource: select the URL (`http://influx:8086`) with the header `Authorization: Token DOCKER_INFLUXDB_INIT_ADMIN_TOKEN` and `organization: my-org`. Correct addition of the datasource will show the available buckets.
- Visualize metrics by adding a panel and writing a query for the respective metric in the Query inspector (a quick way is to get the query from the influx UI).
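To verify data outside Grafana, one option is a raw Flux query against the InfluxDB v2 API. This is a sketch assuming the compose defaults above (`my-org`, `my-org-bucket`, and the admin token); the one-hour range and five-row limit are arbitrary:

```bash
# POST a Flux script; InfluxDB returns the matching rows as annotated CSV.
curl -s "http://<YOUR_IP>:8086/api/v2/query?org=my-org" \
  -H "Authorization: Token DOCKER_INFLUXDB_INIT_ADMIN_TOKEN" \
  -H "Content-Type: application/vnd.flux" \
  -d 'from(bucket: "my-org-bucket") |> range(start: -1h) |> limit(n: 5)'
```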
- For Prometheus:
- For TimescaleDB: