Splunk HA Index Clustering

Splunk Cluster Deployment

Deploying Splunk as an HA index cluster can be tricky if you don't understand the context of how Splunk actually designed it.

We need at least one master node, one search head, and one or more indexers (peer nodes). The basic idea is to replicate index data across the indexers, so if one indexer goes down we still have searchable data.

In the virtual lab I used 4 Splunk Enterprise instances, plus a pfSense firewall running HAProxy and a web server (Docker) that generates syslog sent to one indexer.




Webserver –

Pfsense –

Splunk1 (Master) –

Splunk2 (Peer -Indexer) –

Splunk3 (Search Head) –

Splunk4 (Peer -Indexer) –


ESXi – all four Splunk instances, the pfSense HAProxy firewall, and the web server (Docker installed on CentOS) running successfully.


My web server – listening on port 4000

Demo docker hello page

The above web server sits behind the pfSense HAProxy firewall, which forwards logs to a Splunk indexer. We could even have the web server send logs directly to Splunk; I just wanted to simulate an enterprise web environment.

pfSense firewall with HAProxy, configured to send logs to Splunk on port 514 –

Configured Splunk4 ( ) in the firewall as the destination for the HAProxy syslog.


Configure Master node –

Now log in to Splunk1 – and make it the master node. Go to Settings > Distributed Environment > Indexer Clustering


Click "Enable indexer clustering", select "Master Node", and hit Next.


Make sure your replication factor does not exceed the number of peer nodes you are going to configure. In my case I have 2 peer nodes to configure. Enable the master node; it will ask for a restart.
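For reference, the GUI writes the master settings into server.conf. The stanza looks something like this (a sketch with assumed values – the secret "mysecretkey" and the factors of 2 are examples, not this lab's actual config):

```ini
# $SPLUNK_HOME/etc/system/local/server.conf on the master (Splunk1)
[clustering]
mode = master
replication_factor = 2
search_factor = 2
pass4SymmKey = mysecretkey   # example secret, shared by all cluster members
```

The search factor controls how many of the replicated copies are kept in a searchable state; it cannot exceed the replication factor.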


Configure Search head –

A search head that performs only searching, and not any indexing, is referred to as a dedicated search head. Search head clusters are groups of search heads that coordinate their activities. We will configure a search head on this instance.
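On the search head, the equivalent server.conf stanza would look something like this (again a sketch – the master IP is a placeholder and the secret is an assumed example):

```ini
# $SPLUNK_HOME/etc/system/local/server.conf on the search head (Splunk3)
[clustering]
mode = searchhead
master_uri = https://<master-ip>:8089   # Splunk's default management port
pass4SymmKey = mysecretkey              # must match the master's secret
```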



Configure Peer nodes –

In the same way, we need to navigate to Indexer Clustering on each peer node. In my case Splunk2 ( and Splunk4( are the peer nodes.

We need to enter the master URI and port; make sure the master node is listening on that port. We also need to enter the secret, if you specified one during master configuration.
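For reference, the GUI writes settings equivalent to this server.conf stanza on each peer (a sketch; `<master-ip>`, the secret, and replication port 9887 are assumed example values):

```ini
# $SPLUNK_HOME/etc/system/local/server.conf on each peer (Splunk2/Splunk4)
[replication_port://9887]
# port the peers use to replicate index data among themselves

[clustering]
mode = slave
master_uri = https://<master-ip>:8089   # master's management port
pass4SymmKey = mysecretkey              # must match the master's secret
```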



Config Validation –


After successful configuration, the master node, search head, and indexers (peers) will look like below.

Master Node


Search head



Indexer -Peer Nodes




Let's Create an Index –


NOTE – When Splunk is part of an indexer cluster, we should not create any indexes on the peer nodes or the search head. We create indexes only on the master node. There is no way to create a replicated index via the GUI; we have to create it from the CLI by editing the indexes.conf file.

To edit the indexes.conf file on the master, log in to the Splunk instance via SSH and navigate to



*** Set the permissions of indexes.conf to 755 and chown it to splunk:splunk.

Create a file named "indexes.conf" and define the index as below – I used the index name "hapxy", referring to HAProxy.
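A minimal indexes.conf for a replicated index looks something like this (a sketch; on the master this file conventionally lives under $SPLUNK_HOME/etc/master-apps/_cluster/local/ so it is included in the configuration bundle):

```ini
# indexes.conf pushed from the master to all peers
[hapxy]
homePath   = $SPLUNK_DB/hapxy/db
coldPath   = $SPLUNK_DB/hapxy/colddb
thawedPath = $SPLUNK_DB/hapxy/thaweddb
repFactor  = auto   # required: tells the peers to replicate this index
```

The `repFactor = auto` line is the important one – without it the index exists on each peer but its data is not replicated across the cluster.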


After creating and saving the file here, log in to the Splunk GUI on the master node and push the config across the peers.

Log in to and navigate to "Distributed Environment" > Indexer Clustering > Edit > Configuration Bundle Actions


Run "Validate and Check Restart".


To push the changes, click "Push".
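The validate-and-push sequence can also be run from the master's CLI (assuming a default /opt/splunk install path):

```shell
# On the master node
/opt/splunk/bin/splunk validate cluster-bundle --check-restart
/opt/splunk/bin/splunk apply cluster-bundle
/opt/splunk/bin/splunk show cluster-bundle-status   # confirm the push completed on all peers
```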



Now log in to the peer nodes and create a UDP data input for the "hapxy" index. It's not mandatory to create it on both peers; here I have added one input on Splunk4 –

Log in to the Splunk4 GUI and navigate to Settings > Data > Data inputs > UDP > New Local UDP



Set the sourcetype to "haproxy:http" and select "hapxy" as the index.


Now click "Preview and Submit". We have now successfully created the input.
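The same UDP input can also be defined directly in inputs.conf on the peer (a sketch under default paths):

```ini
# $SPLUNK_HOME/etc/system/local/inputs.conf on Splunk4
[udp://514]
index = hapxy
sourcetype = haproxy:http
connection_host = ip   # record the sender's IP address as the host field
```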

Make sure pfSense is sending the logs to this index. Log in to via SSH, run tcpdump, and access our website – tcpdump -vv -i ens192 port 514


Great, we can see the logs landing on our peer node.

Now log in to the peer node GUI; we should see the index event count and be able to search the index – Settings > Indexes


Navigate to Search to search the logs – generally from the Splunk homepage.



So far we are GOOD: we can see logs on our peer nodes. Now log in to the master node; we should see that the "hapxy" index is in sync.
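The same sync check can be done from the master's CLI, which lists each peer, its status, and whether the replication and search factors are met (assuming /opt/splunk):

```shell
# On the master node
/opt/splunk/bin/splunk show cluster-status
/opt/splunk/bin/splunk list cluster-peers   # per-peer detail
```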


It's time to log in and initiate a search from the search head – log in to Splunk3 –


We can see our events when searching from the search head. Great stuff. Now, what happens if one indexer fails? Let's find out: log in to one indexer and take it down. SSH to Splunk4, shut it down, and check the status on the master node.
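To take the indexer down, something like this on the peer is enough (assuming a default /opt/splunk path):

```shell
# On Splunk4 – simulate an indexer failure
/opt/splunk/bin/splunk stop       # abrupt shutdown, as in this test
# /opt/splunk/bin/splunk offline  # graceful alternative for planned maintenance
```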

Log in to the master node and navigate to Indexer Clustering –

Oops, Splunk4 is down. But no worries – log in to the search head and we should still be able to see our data and search it successfully.



YES!!! We are good. We can still see all of our logs even with a peer node offline.


Thank you!!!!