Overview
This code is structured as a Terraform module. You can consume the module to create a GlusterFS setup; the code is in this GitHub repo. Version 1.0 uses remote provisioners to properly configure GlusterFS after the compute instance installation has succeeded.
Configuration Phase
You can run the following command on Windows, Linux, or macOS to clone the repo.
git clone https://github.com/kernell128/oci_gluster.git
Now enter the oci_gluster directory and run:
terraform init
In Terraform you can define variables in different ways; in this case we will use a “terraform.tfvars” file defining the variables below (a sample tfvars sketch follows the list):
- target_compartment_id – OCID of the compartment where all resources will be created.
- vcn_compartment_id – OCID of the compartment of the VCN that the GlusterFS nodes will be attached to.
- ssh_public_key – SSH public key that will be used to connect to the GlusterFS nodes.
- tenancy_ocid – OCID of the tenancy.
- gluster_node_shape – OCI compute shape for the GlusterFS nodes (see OCI Shapes). E.g.: VM.Standard2.2
- gluster_arbt_shape – OCI compute shape for the GlusterFS arbiter nodes (see OCI Shapes and the GlusterFS docs). E.g.: VM.Standard2.2
- number_of_nodes – Number of nodes for the initial cluster. An integer value.
- gluster_redundancy – Redundancy level; fixed at 1 for this initial sample (check the GlusterFS docs).
- bv_size_in_gbs – Size, in GB, of the block volume that will be the base for each brick.
- private_key_path – Path to the private key on the Terraform host, used to establish the SSH connections.
- fs_volume_label – Label passed to the mkfs command when formatting the block volume.
- fs_dev – Block device as presented to the OS.
- fs_mount_point – Mount point for the GlusterFS brick.
- gs_vol_name – Name of the first volume.
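As a loose illustration, a terraform.tfvars file for these variables could look like the sketch below; every value is a placeholder (the OCIDs, paths, shapes, and device name are examples only) and should be replaced with your own.

```hcl
# terraform.tfvars -- placeholder values, replace with your own
tenancy_ocid          = "ocid1.tenancy.oc1..aaaa..."
target_compartment_id = "ocid1.compartment.oc1..aaaa..."
vcn_compartment_id    = "ocid1.compartment.oc1..bbbb..."
ssh_public_key        = "ssh-rsa AAAA... user@host"
private_key_path      = "~/.ssh/id_rsa"
gluster_node_shape    = "VM.Standard2.2"
gluster_arbt_shape    = "VM.Standard2.2"
number_of_nodes       = 3
gluster_redundancy    = 1
bv_size_in_gbs        = 100
fs_volume_label       = "gluster"
fs_dev                = "/dev/oracleoci/oraclevdb"
fs_mount_point        = "/bricks/brick1"
gs_vol_name           = "glustervol"
```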
After setting the proper input variables, you can consume this as an internal module from your own code to define how the cluster is created.
You can use the main.tf file and data sources to provide the module input values, for example as sketched below.
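The following is a minimal sketch of calling this as a module from your own main.tf; the module source path, the data source shown, and the exact input names are assumptions for illustration and should be checked against the module's variables.tf.

```hcl
# Sketch of calling the module from your own configuration (names are illustrative).
data "oci_identity_availability_domains" "ads" {
  # Example of a data source whose results can feed your configuration.
  compartment_id = var.tenancy_ocid
}

module "gluster" {
  source = "./oci_gluster" # path to the cloned module (assumed)

  tenancy_ocid          = var.tenancy_ocid
  target_compartment_id = var.target_compartment_id
  vcn_compartment_id    = var.vcn_compartment_id
  ssh_public_key        = var.ssh_public_key
  private_key_path      = var.private_key_path
  gluster_node_shape    = var.gluster_node_shape
  gluster_arbt_shape    = var.gluster_arbt_shape
  number_of_nodes       = var.number_of_nodes
  gluster_redundancy    = var.gluster_redundancy
  bv_size_in_gbs        = var.bv_size_in_gbs
  fs_volume_label       = var.fs_volume_label
  fs_dev                = var.fs_dev
  fs_mount_point        = var.fs_mount_point
  gs_vol_name           = var.gs_vol_name
}
```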
Once the module values are in place, terraform plan and terraform apply can be run.
Enjoy!
The remote provisioner will add the block volumes on all cluster members. Initially, this sample creates 3 nodes.
The second block of the remote provisioner creates the cluster by probing all cluster members and adjusting the OS firewall rules.
The third block runs against the first node and creates the first volume using a dispersed layout with redundancy 1. If you need a different volume, adjust this block (line 97) to properly reflect the volume you want to create.
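As a rough sketch of what such a provisioner step looks like, the following remote-exec provisioner creates a dispersed volume; the resource name, connection user, host variable, node hostnames, and brick paths are placeholders for illustration, not the module's actual code.

```hcl
# Sketch only: illustrates the shape of the volume-creation step, not the module's exact code.
resource "null_resource" "create_gluster_volume" {
  connection {
    type        = "ssh"
    host        = var.first_node_ip # hypothetical: IP of the first GlusterFS node
    user        = "opc"
    private_key = file(var.private_key_path)
  }

  provisioner "remote-exec" {
    inline = [
      # Dispersed volume over 3 bricks with redundancy 1; adjust to match the layout you need.
      "sudo gluster volume create ${var.gs_vol_name} disperse 3 redundancy ${var.gluster_redundancy} node1:${var.fs_mount_point}/brick node2:${var.fs_mount_point}/brick node3:${var.fs_mount_point}/brick",
      "sudo gluster volume start ${var.gs_vol_name}"
    ]
  }
}
```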
References
Here are some important references that should be read.
- GlusterFS core utilities
- Setup Volumes
- Manage Volumes
- Manage GlusterFS service
- Client setup
- Performance tuning