Configure a Kibana node
This guide shows how to connect Kibana to an existing Elasticsearch cluster in a transparent way. The example is based on the coordinating-only node capability of Elasticsearch, which acts as a smart load balancer in front of the data nodes of the existing cluster.
Elasticsearch node types used:
- Master: Performs cluster-wide operations such as creating and deleting indices, allocating shards, and deciding which nodes are active members of the cluster.
- Data: Holds the shards that contain the documents and handles index CRUD operations and aggregations.
- Coordinating-only: Acts as a smart load balancer. Avoid running many of them: each one is part of the cluster state that the master nodes must maintain and publish. As a rule of thumb, do not run more coordinating-only nodes than data nodes.
To add Kibana using this approach, install a local Elasticsearch instance on the Kibana host and configure the nodes with the directives below.
Master-eligible nodes (es-m-1, es-m-2):

cluster.name: my_cluster
node.master: true
node.data: false
node.ingest: false
discovery.seed_hosts: ["es-m-1_IPADDR", "es-m-2_IPADDR", "es-data-1_IPADDR", "es-data-2_IPADDR", "coordinator-1_IPADDR"]
cluster.initial_master_nodes: ["es-m-1", "es-m-2"]
Data nodes (es-data-1, es-data-2):

cluster.name: my_cluster
node.master: false
node.data: true
node.ingest: true
discovery.seed_hosts: ["es-m-1_IPADDR", "es-m-2_IPADDR", "es-data-1_IPADDR", "es-data-2_IPADDR", "coordinator-1_IPADDR"]
Coordinating-only node on the Kibana host (coordinator-1):

cluster.name: my_cluster
node.master: false
node.data: false
node.ingest: false
discovery.seed_hosts: ["es-m-1_IPADDR", "es-m-2_IPADDR", "es-data-1_IPADDR", "es-data-2_IPADDR", "coordinator-1_IPADDR"]
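With the coordinating-only node running locally, Kibana can be pointed at it instead of at a remote data node. A minimal kibana.yml sketch, assuming the local node listens on the default HTTP port:

```yaml
# kibana.yml: send all queries to the local coordinating-only node,
# which fans them out to the data nodes of the cluster.
elasticsearch.hosts: ["http://localhost:9200"]
```

This keeps Kibana's configuration stable even when data nodes are added or removed, since only the local coordinating node needs to know the cluster topology.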
Remember to open the following TCP ports:
- 9300: transport (intra-cluster communication);
- 9200: HTTP (REST calls).
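On distributions that ship firewalld (Oracle Linux, RHEL, CentOS), these ports can be opened with, for example:

```shell
# Open the Elasticsearch transport and HTTP ports (firewalld assumed)
sudo firewall-cmd --permanent --add-port=9300/tcp
sudo firewall-cmd --permanent --add-port=9200/tcp
sudo firewall-cmd --reload
```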
If you put Kibana behind NGINX, also remember to allow the web server to make outbound TCP connections under SELinux. On Oracle Linux, RHEL, and CentOS based distributions, run:

sudo setsebool -P httpd_can_network_connect on
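Once everything is up, the role of each node can be verified through the _cat API; a sketch, assuming the coordinating node answers on localhost:9200:

```shell
# List nodes with their roles: m = master-eligible, d = data, i = ingest;
# a coordinating-only node shows "-" in the node.role column, and the
# elected master is marked "*" in the master column.
curl -s "http://localhost:9200/_cat/nodes?v&h=name,node.role,master"
```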